status | repo_name | repo_url | issue_id | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime | updated_file | file_content
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,237 |
aireos modules unable to handle binary data from controller
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Cisco WLC release 8.8.120.0 returns non-ASCII values in the output of the `show run-config commands` command, causing tasks and modules that use this command to fail.
The binary data appears in the same spot in the controller's output each time and looks like this at the prompt:
```
wlan flexconnect learn-ipaddr 97 enable
wlan flexconnect learn-ipaddr 101 enable
`Ȓ�R`Ȓ�R`Ȓ�R`Ȓ�R`Ȓ�R`Ȓ�R`Ȓ�R
wlan wgb broadcast-tagging disable 1
wlan wgb broadcast-tagging disable 2 `
```
There are a number of byte sequences that make up the binary data; one sample is: `\xc8\x92\xef\xbf\xbdR\x7f`
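For reference, a minimal stdlib-only sketch of the decode behaviour at play (the `\xc0\xaf` bytes below are a hypothetical stand-in for raw controller noise; note that the captured sample already contains `\xef\xbf\xbd`, i.e. U+FFFD, which suggests a lossy decode happened somewhere upstream):
```python
sample = b'`\xc8\x92\xef\xbf\xbdR\x7f'  # byte sample captured from the WLC

# This particular sample is valid UTF-8 (\xc8\x92 -> 'Ȓ', \xef\xbf\xbd -> U+FFFD),
# so a strict decode succeeds and reproduces the garbage seen at the prompt:
print(sample.decode('utf-8'))

# Genuinely invalid bytes (hypothetical example) raise under a strict decode,
# which is how binary noise in the stream can break response handling:
noise = b'\xc0\xaf'
try:
    noise.decode('utf-8')
except UnicodeDecodeError as exc:
    print(exc)

# A lenient decode substitutes U+FFFD instead of raising:
print(noise.decode('utf-8', errors='replace'))  # '\ufffd\ufffd'
```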
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
aireos_command
aireos_config
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0.dev0
config file = /Users/wsmith/temp/network_backup/ansible.cfg
configured module search path = [u'/Users/wsmith/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/wsmith/temp/ansible/lib/ansible
executable location = /Users/wsmith/temp/ansible/bin/ansible
python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_DEBUG(env: ANSIBLE_DEBUG) = True
DEFAULT_FORKS(/Users/wsmith/temp/network_backup/ansible.cfg) = 20
DEFAULT_GATHERING(/Users/wsmith/temp/network_backup/ansible.cfg) = explicit
DEFAULT_HOST_LIST(/Users/wsmith/temp/network_backup/ansible.cfg) = [u'/Users/wsmith/temp/network_backup/inventory.ini']
DEFAULT_INVENTORY_PLUGIN_PATH(/Users/wsmith/temp/network_backup/ansible.cfg) = [u'/Users/wsmith/temp/network_backup/inventory_plugins']
DEFAULT_KEEP_REMOTE_FILES(env: ANSIBLE_KEEP_REMOTE_FILES) = False
DEFAULT_LOG_PATH(/Users/wsmith/temp/network_backup/ansible.cfg) = /Users/wsmith/temp/network_backup/ansible.log
DEFAULT_ROLES_PATH(/Users/wsmith/temp/network_backup/ansible.cfg) = [u'/Users/wsmith/temp/network_backup/roles']
DEFAULT_VAULT_PASSWORD_FILE(/Users/wsmith/temp/network_backup/ansible.cfg) = /Users/wsmith/temp/network_backup/vault_password.txt
HOST_KEY_CHECKING(/Users/wsmith/temp/network_backup/ansible.cfg) = False
INTERPRETER_PYTHON(/Users/wsmith/temp/network_backup/ansible.cfg) = /Users/wsmith/temp/ansible/venv/bin/python
INVENTORY_ENABLED(/Users/wsmith/temp/network_backup/ansible.cfg) = [u'ini']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Cisco WLC running AireOS with code version 8.8.120.0
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run `ansible-playbook` against a playbook with a task that executes the `show run-config commands` command.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: WLCs
vars:
ansible_connection: local
ansible_network_os: aireos
ansible_user: "user"
ansible_password: "password"
tasks:
- name: Get current running configuration
aireos_command:
commands:
- show run-config commands
provider:
timeout: 90
register: output
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
A successful task execution and the running configuration retrieved from the WLC.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
An `AnsibleConnectionFailure` exception is raised.
<!--- Paste verbatim command output between quotes -->
```paste below
2019-08-07 13:59:25,451 p=wsmith u=16872 | ansible-playbook 2.9.0.dev0
config file = /Users/wsmith/temp/network_backup/ansible.cfg
configured module search path = [u'/Users/wsmith/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/wsmith/temp/ansible/lib/ansible
executable location = /Users/wsmith/temp/ansible/bin/ansible-playbook
python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
2019-08-07 13:59:25,451 p=wsmith u=16872 | Using /Users/wsmith/temp/network_backup/ansible.cfg as config file
2019-08-07 13:59:25,452 p=wsmith u=16872 | setting up inventory plugins
2019-08-07 13:59:25,463 p=wsmith u=16872 | Parsed /Users/wsmith/temp/network_backup/inventory.ini inventory source with ini plugin
2019-08-07 13:59:25,832 p=wsmith u=16872 | Loading callback plugin default of type stdout, v2.0 from /Users/wsmith/temp/ansible/lib/ansible/plugins/callback/default.pyc
2019-08-07 13:59:25,875 p=wsmith u=16872 | PLAYBOOK: pb.yml *****************************************************************************************************************************************************************************
2019-08-07 13:59:25,875 p=wsmith u=16872 | 1 plays in pb.yml
2019-08-07 13:59:25,879 p=wsmith u=16872 | PLAY [WLCs] *****************************************************************************************************************************************************************************
2019-08-07 13:59:25,888 p=wsmith u=16872 | META: ran handlers
2019-08-07 13:59:25,894 p=wsmith u=16872 | TASK [Get current running configuration] ****************************************************************************************************************************************************************
2019-08-07 13:59:25,992 p=wsmith u=16880 | Trying secret FileVaultSecret(filename='/Users/wsmith/temp/network_backup/vault_password.txt') for vault_id=default
2019-08-07 13:59:26,012 p=wsmith u=16880 | Trying secret FileVaultSecret(filename='/Users/wsmith/temp/network_backup/vault_password.txt') for vault_id=default
2019-08-07 13:59:26,033 p=wsmith u=16880 | <1.1.1.1> using connection plugin network_cli (was local)
2019-08-07 13:59:26,034 p=wsmith u=16880 | <1.1.1.1> starting connection from persistent connection plugin
2019-08-07 13:59:26,404 p=wsmith u=16886 | <1.1.1.1> ESTABLISH PARAMIKO SSH CONNECTION FOR USER: username on PORT 22 TO 1.1.1.1
2019-08-07 13:59:28,447 p=wsmith u=16880 | <1.1.1.1> local domain socket does not exist, starting it
2019-08-07 13:59:28,449 p=wsmith u=16880 | <1.1.1.1> control socket path is /Users/wsmith/.ansible/pc/0d509e7b95
2019-08-07 13:59:28,450 p=wsmith u=16880 | <1.1.1.1> <1.1.1.1> ESTABLISH PARAMIKO SSH CONNECTION FOR USER: username on PORT 22 TO 1.1.1.1
2019-08-07 13:59:28,451 p=wsmith u=16880 | <1.1.1.1> connection to remote device started successfully
2019-08-07 13:59:28,452 p=wsmith u=16880 | <1.1.1.1> local domain socket listeners started successfully
2019-08-07 13:59:28,452 p=wsmith u=16880 | <1.1.1.1> loaded cliconf plugin for network_os aireos
2019-08-07 13:59:28,453 p=wsmith u=16880 | network_os is set to aireos
2019-08-07 13:59:28,453 p=wsmith u=16880 | <1.1.1.1> ssh connection done, setting terminal
2019-08-07 13:59:28,454 p=wsmith u=16880 | <1.1.1.1> loaded terminal plugin for network_os aireos
2019-08-07 13:59:28,454 p=wsmith u=16880 | <1.1.1.1> Response received, triggered 'persistent_buffer_read_timeout' timer of 0.1 seconds
2019-08-07 13:59:28,455 p=wsmith u=16880 | <1.1.1.1> firing event: on_open_shell()
2019-08-07 13:59:28,455 p=wsmith u=16880 | <1.1.1.1> Response received, triggered 'persistent_buffer_read_timeout' timer of 0.1 seconds
2019-08-07 13:59:28,456 p=wsmith u=16880 | <1.1.1.1> ssh connection has completed successfully
2019-08-07 13:59:28,456 p=wsmith u=16880 | <1.1.1.1>
2019-08-07 13:59:28,457 p=wsmith u=16880 | <1.1.1.1> local domain socket path is /Users/wsmith/.ansible/pc/0d509e7b95
2019-08-07 13:59:28,457 p=wsmith u=16880 | <1.1.1.1> socket_path: /Users/wsmith/.ansible/pc/0d509e7b95
2019-08-07 13:59:28,460 p=wsmith u=16880 | <1.1.1.1> ESTABLISH LOCAL CONNECTION FOR USER: wsmith
2019-08-07 13:59:28,461 p=wsmith u=16880 | <1.1.1.1> EXEC /bin/sh -c 'echo ~wsmith && sleep 0'
2019-08-07 13:59:28,477 p=wsmith u=16880 | <1.1.1.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/wsmith/.ansible/tmp/ansible-tmp-1565211568.46-135266596630451 `" && echo ansible-tmp-1565211568.46-135266596630451="` echo /Users/wsmith/.ansible/tmp/ansible-tmp-1565211568.46-135266596630451 `" ) && sleep 0'
2019-08-07 13:59:28,670 p=wsmith u=16880 | Using module file /Users/wsmith/temp/ansible/lib/ansible/modules/network/aireos/aireos_command.py
2019-08-07 13:59:28,672 p=wsmith u=16880 | <1.1.1.1> PUT /Users/wsmith/.ansible/tmp/ansible-local-16872CPCpFb/tmpaJTRIR TO /Users/wsmith/.ansible/tmp/ansible-tmp-1565211568.46-135266596630451/AnsiballZ_aireos_command.py
2019-08-07 13:59:28,674 p=wsmith u=16880 | <1.1.1.1> EXEC /bin/sh -c 'chmod u+x /Users/wsmith/.ansible/tmp/ansible-tmp-1565211568.46-135266596630451/ /Users/wsmith/.ansible/tmp/ansible-tmp-1565211568.46-135266596630451/AnsiballZ_aireos_command.py && sleep 0'
2019-08-07 13:59:28,693 p=wsmith u=16880 | <1.1.1.1> EXEC /bin/sh -c '/Users/wsmith/temp/ansible/venv/bin/python /Users/wsmith/.ansible/tmp/ansible-tmp-1565211568.46-135266596630451/AnsiballZ_aireos_command.py && sleep 0'
2019-08-07 14:00:08,812 p=wsmith u=16886 | Traceback (most recent call last):
File "/Users/wsmith/temp/ansible/lib/ansible/utils/jsonrpc.py", line 45, in handle_request
result = rpc_method(*args, **kwargs)
File "/Users/wsmith/temp/ansible/lib/ansible/plugins/connection/network_cli.py", line 287, in exec_command
return self.send(command=cmd)
File "/Users/wsmith/temp/ansible/lib/ansible/plugins/connection/network_cli.py", line 473, in send
response = self.receive(command, prompt, answer, newline, prompt_retry_check, check_all)
File "/Users/wsmith/temp/ansible/lib/ansible/plugins/connection/network_cli.py", line 444, in receive
if self._find_prompt(window):
File "/Users/wsmith/temp/ansible/lib/ansible/plugins/connection/network_cli.py", line 590, in _find_prompt
raise AnsibleConnectionFailure(errored_response)
AnsibleConnectionFailure: {u'answer': None, u'prompt': None, u'command': u'show run-config commands'}
Incorrect usage. Use the '?' or <TAB> key to list commands.
(WLC01) >
2019-08-07 14:00:08,827 p=wsmith u=16880 | <1.1.1.1> EXEC /bin/sh -c 'rm -f -r /Users/wsmith/.ansible/tmp/ansible-tmp-1565211568.46-135266596630451/ > /dev/null 2>&1 && sleep 0'
2019-08-07 14:00:08,854 p=wsmith u=16872 | fatal: [WLC01]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"commands": [
"show run-config commands"
],
"host": null,
"interval": 1,
"match": "all",
"password": null,
"port": null,
"provider": {
"host": null,
"password": null,
"port": null,
"ssh_keyfile": null,
"timeout": 90,
"username": null
},
"retries": 10,
"ssh_keyfile": null,
"timeout": null,
"username": null,
"wait_for": null
}
},
"msg": "{u'answer': None, u'prompt': None, u'command': u'show run-config commands'}\r\n\r\nIncorrect usage. Use the '?' or <TAB> key to list commands.\r\n\r\n(WLC01) >",
"rc": -32603
}
2019-08-07 14:00:08,926 p=wsmith u=16872 | PLAY RECAP **********************************************************************************************************************************************************************************************
2019-08-07 14:00:08,927 p=wsmith u=16872 | WLC01 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
2019-08-07 14:00:09,025 p=wsmith u=16886 | shutdown complete
```
|
https://github.com/ansible/ansible/issues/60237
|
https://github.com/ansible/ansible/pull/60243
|
30cc54da8cc4efd7a554df964b0f7bddfde0c9c4
|
50d1cbd30a8e19bcce312bd429acb4b690255f51
| 2019-08-07T21:07:19Z |
python
| 2019-10-01T20:52:35Z |
lib/ansible/plugins/connection/network_cli.py
|
# (c) 2016 Red Hat Inc.
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
---
author: Ansible Networking Team
connection: network_cli
short_description: Use network_cli to run commands on network appliances
description:
- This connection plugin provides a connection to remote devices over
SSH and implements a CLI shell. This connection plugin is typically used by
network modules for sending and receiving CLI commands to and from network devices.
version_added: "2.3"
options:
host:
description:
- Specifies the remote device FQDN or IP address to establish the SSH
connection to.
default: inventory_hostname
vars:
- name: ansible_host
port:
type: int
description:
- Specifies the port on the remote device that listens for connections
when establishing the SSH connection.
default: 22
ini:
- section: defaults
key: remote_port
env:
- name: ANSIBLE_REMOTE_PORT
vars:
- name: ansible_port
network_os:
description:
- Configures the device platform network operating system. This value is
used to load the correct terminal and cliconf plugins to communicate
with the remote device.
vars:
- name: ansible_network_os
remote_user:
description:
- The username used to authenticate to the remote device when the SSH
connection is first established. If the remote_user is not specified,
the connection will use the username of the logged in user.
- Can be configured from the CLI via the C(--user) or C(-u) options.
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
vars:
- name: ansible_user
password:
description:
- Configures the user password used to authenticate to the remote device
when first establishing the SSH connection.
vars:
- name: ansible_password
- name: ansible_ssh_pass
- name: ansible_ssh_password
private_key_file:
description:
- The private SSH key or certificate file used to authenticate to the
remote device when first establishing the SSH connection.
ini:
- section: defaults
key: private_key_file
env:
- name: ANSIBLE_PRIVATE_KEY_FILE
vars:
- name: ansible_private_key_file
timeout:
type: int
description:
- Sets the connection timeout, in seconds, for communicating with the
remote device. This timeout is used as the default timeout value for
commands issued to the network CLI. If a command
does not return within the timeout, an error is generated.
default: 120
become:
type: boolean
description:
- The become option will instruct the CLI session to attempt privilege
escalation on platforms that support it. Normally this means
transitioning from user mode to C(enable) mode in the CLI session.
If become is set to True and the remote device does not support
privilege escalation or the privilege has already been elevated, then
this option is silently ignored.
- Can be configured from the CLI via the C(--become) or C(-b) options.
default: False
ini:
- section: privilege_escalation
key: become
env:
- name: ANSIBLE_BECOME
vars:
- name: ansible_become
become_method:
description:
- This option allows the become method to be specified for handling
privilege escalation. Typically the become_method value is set to
C(enable), but it can be set to other values.
default: sudo
ini:
- section: privilege_escalation
key: become_method
env:
- name: ANSIBLE_BECOME_METHOD
vars:
- name: ansible_become_method
host_key_auto_add:
type: boolean
description:
- By default, Ansible will prompt the user before adding SSH keys to the
known hosts file. Since persistent connections such as network_cli run
in background processes, the user will never be prompted. By enabling
this option, unknown host keys will automatically be added to the
known hosts file.
- Be sure to fully understand the security implications of enabling this
option on production systems as it could create a security vulnerability.
default: False
ini:
- section: paramiko_connection
key: host_key_auto_add
env:
- name: ANSIBLE_HOST_KEY_AUTO_ADD
persistent_connect_timeout:
type: int
description:
- Configures, in seconds, the amount of time to wait when trying to
initially establish a persistent connection. If this value expires
before the connection to the remote device is completed, the connection
will fail.
default: 30
ini:
- section: persistent_connection
key: connect_timeout
env:
- name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT
vars:
- name: ansible_connect_timeout
persistent_command_timeout:
type: int
description:
- Configures, in seconds, the amount of time to wait for a command to
return from the remote device. If this timer is exceeded before the
command returns, the connection plugin will raise an exception and
close.
default: 30
ini:
- section: persistent_connection
key: command_timeout
env:
- name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT
vars:
- name: ansible_command_timeout
persistent_buffer_read_timeout:
type: float
description:
- Configures, in seconds, the amount of time to wait for data to be read
from the Paramiko channel after the command prompt is matched. This timeout
ensures that the matched command prompt is correct and that there is no more
data left to be received from the remote host.
default: 0.1
ini:
- section: persistent_connection
key: buffer_read_timeout
env:
- name: ANSIBLE_PERSISTENT_BUFFER_READ_TIMEOUT
vars:
- name: ansible_buffer_read_timeout
persistent_log_messages:
type: boolean
description:
- This flag enables logging of the command executed and the response received from
the target device in the Ansible log file. For this option to work, the 'log_path' Ansible
configuration option must be set to a file path with write access.
- Be sure to fully understand the security implications of enabling this
option, as it could create a security vulnerability by logging sensitive information in the log file.
default: False
ini:
- section: persistent_connection
key: log_messages
env:
- name: ANSIBLE_PERSISTENT_LOG_MESSAGES
vars:
- name: ansible_persistent_log_messages
terminal_stdout_re:
type: list
elements: dict
version_added: '2.9'
description:
- A single regex pattern, or a sequence of patterns along with optional flags,
to match the command prompt in the received response chunk. This option
accepts C(pattern) and C(flags) keys. The value of C(pattern) is a Python
regex pattern to match the response, and the value of C(flags) is a value
accepted by the I(flags) argument of the Python I(re.compile) method, to control
the way the regex is matched against the response, for example I('re.I').
vars:
- name: ansible_terminal_stdout_re
terminal_stderr_re:
type: list
elements: dict
version_added: '2.9'
description:
- This option provides the regex pattern and optional flags to match the
error string in the received response chunk. This option
accepts C(pattern) and C(flags) keys. The value of C(pattern) is a Python
regex pattern to match the response, and the value of C(flags) is a value
accepted by the I(flags) argument of the Python I(re.compile) method, to control
the way the regex is matched against the response, for example I('re.I').
vars:
- name: ansible_terminal_stderr_re
terminal_initial_prompt:
type: list
version_added: '2.9'
description:
- A single regex pattern or a sequence of patterns to evaluate the expected
prompt at the time of initial login to the remote host.
vars:
- name: ansible_terminal_initial_prompt
terminal_initial_answer:
type: list
version_added: '2.9'
description:
- The answer to reply with if the C(terminal_initial_prompt) is matched. The value can be a single answer
or a list of answers for multiple terminal_initial_prompt values. In case the login menu has
multiple prompts, the sequence of prompts and expected answers should be in the same order, and the value
of I(terminal_initial_prompt_checkall) should be set to I(True) if all the values in C(terminal_initial_prompt) are
expected to be matched, or set to I(False) if any one login prompt is to be matched.
vars:
- name: ansible_terminal_initial_answer
terminal_initial_prompt_checkall:
type: boolean
version_added: '2.9'
description:
- By default the value is set to I(False); once any one of the prompts mentioned in the C(terminal_initial_prompt)
option is matched, other prompts are not checked. When set to I(True), all the prompts
mentioned in the C(terminal_initial_prompt) option are checked in the given order, and all of them
must be received from the remote host, otherwise the connection times out.
default: False
vars:
- name: ansible_terminal_initial_prompt_checkall
terminal_inital_prompt_newline:
type: boolean
version_added: '2.9'
description:
- When this boolean flag is set to I(True), a newline is sent in response if any of the values
in I(terminal_initial_prompt) is matched.
default: True
vars:
- name: ansible_terminal_initial_prompt_newline
network_cli_retries:
description:
- Number of attempts to connect to the remote host. The delay between retries doubles after
every attempt (a power of 2, in seconds) until either the maximum attempts are exhausted or one of the
C(persistent_command_timeout) or C(persistent_connect_timeout) timers is triggered.
default: 3
version_added: '2.9'
type: integer
env:
- name: ANSIBLE_NETWORK_CLI_RETRIES
ini:
- section: persistent_connection
key: network_cli_retries
vars:
- name: ansible_network_cli_retries
"""
import getpass
import json
import logging
import re
import os
import signal
import socket
import time
import traceback
from io import BytesIO
from ansible.errors import AnsibleConnectionFailure
from ansible.module_utils.six import PY3
from ansible.module_utils.six.moves import cPickle
from ansible.module_utils.network.common.utils import to_list
from ansible.module_utils._text import to_bytes, to_text
from ansible.playbook.play_context import PlayContext
from ansible.plugins.connection import NetworkConnectionBase, ensure_connect
from ansible.plugins.loader import cliconf_loader, terminal_loader, connection_loader
class AnsibleCmdRespRecv(Exception):
pass
class Connection(NetworkConnectionBase):
''' CLI (shell) SSH connections on Paramiko '''
transport = 'network_cli'
has_pipelining = True
def __init__(self, play_context, new_stdin, *args, **kwargs):
super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
self._ssh_shell = None
self._matched_prompt = None
self._matched_cmd_prompt = None
self._matched_pattern = None
self._last_response = None
self._history = list()
self._command_response = None
self._terminal = None
self.cliconf = None
self._paramiko_conn = None
if self._play_context.verbosity > 3:
logging.getLogger('paramiko').setLevel(logging.DEBUG)
if self._network_os:
self._terminal = terminal_loader.get(self._network_os, self)
if not self._terminal:
raise AnsibleConnectionFailure('network os %s is not supported' % self._network_os)
self.cliconf = cliconf_loader.get(self._network_os, self)
if self.cliconf:
self.queue_message('vvvv', 'loaded cliconf plugin for network_os %s' % self._network_os)
self._sub_plugin = {'type': 'cliconf', 'name': self._network_os, 'obj': self.cliconf}
else:
self.queue_message('vvvv', 'unable to load cliconf for network_os %s' % self._network_os)
else:
raise AnsibleConnectionFailure(
'Unable to automatically determine host network os. Please '
'manually configure ansible_network_os value for this host'
)
self.queue_message('log', 'network_os is set to %s' % self._network_os)
@property
def paramiko_conn(self):
if self._paramiko_conn is None:
self._paramiko_conn = connection_loader.get('paramiko', self._play_context, '/dev/null')
self._paramiko_conn.set_options(direct={'look_for_keys': not bool(self._play_context.password and not self._play_context.private_key_file)})
return self._paramiko_conn
def _get_log_channel(self):
name = "p=%s u=%s | " % (os.getpid(), getpass.getuser())
name += "paramiko [%s]" % self._play_context.remote_addr
return name
@ensure_connect
def get_prompt(self):
"""Returns the current prompt from the device"""
return self._matched_prompt
def exec_command(self, cmd, in_data=None, sudoable=True):
# this try..except block is just to handle the transition to supporting
# network_cli as a toplevel connection. Once connection=local is gone,
# this block can be removed as well and all calls passed directly to
# the local connection
if self._ssh_shell:
try:
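# Modules arrive over the local socket as a JSON payload of the form
# {'command': ..., 'prompt': ..., 'answer': ...}; a plain byte-string command
# fails json.loads and is handled in the ValueError branch below.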
cmd = json.loads(to_text(cmd, errors='surrogate_or_strict'))
kwargs = {'command': to_bytes(cmd['command'], errors='surrogate_or_strict')}
for key in ('prompt', 'answer', 'sendonly', 'newline', 'prompt_retry_check'):
if cmd.get(key) is True or cmd.get(key) is False:
kwargs[key] = cmd[key]
elif cmd.get(key) is not None:
kwargs[key] = to_bytes(cmd[key], errors='surrogate_or_strict')
return self.send(**kwargs)
except ValueError:
cmd = to_bytes(cmd, errors='surrogate_or_strict')
return self.send(command=cmd)
else:
return super(Connection, self).exec_command(cmd, in_data, sudoable)
def update_play_context(self, pc_data):
"""Updates the play context information for the connection"""
pc_data = to_bytes(pc_data)
if PY3:
pc_data = cPickle.loads(pc_data, encoding='bytes')
else:
pc_data = cPickle.loads(pc_data)
play_context = PlayContext()
play_context.deserialize(pc_data)
self.queue_message('vvvv', 'updating play_context for connection')
if self._play_context.become ^ play_context.become:
if play_context.become is True:
auth_pass = play_context.become_pass
self._terminal.on_become(passwd=auth_pass)
self.queue_message('vvvv', 'authorizing connection')
else:
self._terminal.on_unbecome()
self.queue_message('vvvv', 'deauthorizing connection')
self._play_context = play_context
if hasattr(self, 'reset_history'):
self.reset_history()
if hasattr(self, 'disable_response_logging'):
self.disable_response_logging()
def _connect(self):
'''
Connects to the remote device and starts the terminal
'''
if not self.connected:
self.paramiko_conn._set_log_channel(self._get_log_channel())
self.paramiko_conn.force_persistence = self.force_persistence
command_timeout = self.get_option('persistent_command_timeout')
max_pause = min([self.get_option('persistent_connect_timeout'), command_timeout])
retries = self.get_option('network_cli_retries')
total_pause = 0
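# Retry the SSH connect with exponential backoff (2, 4, 8, ... seconds) until
# the retries are exhausted or the accumulated pause reaches max_pause (the
# smaller of the connect and command timeouts).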
for attempt in range(retries + 1):
try:
ssh = self.paramiko_conn._connect()
break
except Exception as e:
pause = 2 ** (attempt + 1)
if attempt == retries or total_pause >= max_pause:
raise AnsibleConnectionFailure(to_text(e, errors='surrogate_or_strict'))
else:
msg = (u"network_cli_retry: attempt: %d, caught exception(%s), "
u"pausing for %d seconds" % (attempt + 1, to_text(e, errors='surrogate_or_strict'), pause))
self.queue_message('vv', msg)
time.sleep(pause)
total_pause += pause
continue
self.queue_message('vvvv', 'ssh connection done, setting terminal')
self._connected = True
self._ssh_shell = ssh.ssh.invoke_shell()
self._ssh_shell.settimeout(command_timeout)
self.queue_message('vvvv', 'loaded terminal plugin for network_os %s' % self._network_os)
terminal_initial_prompt = self.get_option('terminal_initial_prompt') or self._terminal.terminal_initial_prompt
terminal_initial_answer = self.get_option('terminal_initial_answer') or self._terminal.terminal_initial_answer
newline = self.get_option('terminal_inital_prompt_newline') or self._terminal.terminal_inital_prompt_newline
check_all = self.get_option('terminal_initial_prompt_checkall') or False
self.receive(prompts=terminal_initial_prompt, answer=terminal_initial_answer, newline=newline, check_all=check_all)
self.queue_message('vvvv', 'firing event: on_open_shell()')
self._terminal.on_open_shell()
if self._play_context.become and self._play_context.become_method == 'enable':
self.queue_message('vvvv', 'firing event: on_become')
auth_pass = self._play_context.become_pass
self._terminal.on_become(passwd=auth_pass)
self.queue_message('vvvv', 'ssh connection has completed successfully')
return self
def close(self):
'''
Close the active connection to the device
'''
# only close the connection if it's connected.
if self._connected:
self.queue_message('debug', "closing ssh connection to device")
if self._ssh_shell:
self.queue_message('debug', "firing event: on_close_shell()")
self._terminal.on_close_shell()
self._ssh_shell.close()
self._ssh_shell = None
self.queue_message('debug', "cli session is now closed")
self.paramiko_conn.close()
self._paramiko_conn = None
self.queue_message('debug', "ssh connection has been closed successfully")
super(Connection, self).close()
def receive(self, command=None, prompts=None, answer=None, newline=True, prompt_retry_check=False, check_all=False):
'''
Handles receiving of output from command
'''
self._matched_prompt = None
self._matched_cmd_prompt = None
recv = BytesIO()
handled = False
command_prompt_matched = False
matched_prompt_window = window_count = 0
# set terminal regex values for command prompt and errors in response
self._terminal_stderr_re = self._get_terminal_std_re('terminal_stderr_re')
self._terminal_stdout_re = self._get_terminal_std_re('terminal_stdout_re')
cache_socket_timeout = self._ssh_shell.gettimeout()
command_timeout = self.get_option('persistent_command_timeout')
self._validate_timeout_value(command_timeout, "persistent_command_timeout")
if cache_socket_timeout != command_timeout:
self._ssh_shell.settimeout(command_timeout)
buffer_read_timeout = self.get_option('persistent_buffer_read_timeout')
self._validate_timeout_value(buffer_read_timeout, "persistent_buffer_read_timeout")
self._log_messages("command: %s" % command)
while True:
if command_prompt_matched:
try:
signal.signal(signal.SIGALRM, self._handle_buffer_read_timeout)
signal.setitimer(signal.ITIMER_REAL, buffer_read_timeout)
data = self._ssh_shell.recv(256)
signal.alarm(0)
self._log_messages("response-%s: %s" % (window_count + 1, data))
# If data is still received on the channel, the prompt string was
# wrongly matched within a response chunk; continue to read the
# remaining response.
command_prompt_matched = False
# restart command_timeout timer
signal.signal(signal.SIGALRM, self._handle_command_timeout)
signal.alarm(command_timeout)
except AnsibleCmdRespRecv:
# reset socket timeout to global timeout
self._ssh_shell.settimeout(cache_socket_timeout)
return self._command_response
else:
data = self._ssh_shell.recv(256)
self._log_messages("response-%s: %s" % (window_count + 1, data))
# when a channel stream is closed, received data will be empty
if not data:
break
recv.write(data)
offset = recv.tell() - 256 if recv.tell() > 256 else 0
recv.seek(offset)
window = self._strip(recv.read())
window_count += 1
if prompts and not handled:
handled = self._handle_prompt(window, prompts, answer, newline, False, check_all)
matched_prompt_window = window_count
elif prompts and handled and prompt_retry_check and matched_prompt_window + 1 == window_count:
# Check again even when handled: if the same prompt repeats in the next window
# (as in the case of a wrong enable password, etc.), the value of answer
# is wrong; report this as an error.
if self._handle_prompt(window, prompts, answer, newline, prompt_retry_check, check_all):
raise AnsibleConnectionFailure("For matched prompt '%s', answer is not valid" % self._matched_cmd_prompt)
if self._find_prompt(window):
self._last_response = recv.getvalue()
resp = self._strip(self._last_response)
self._command_response = self._sanitize(resp, command)
if buffer_read_timeout == 0.0:
# reset socket timeout to global timeout
self._ssh_shell.settimeout(cache_socket_timeout)
return self._command_response
else:
command_prompt_matched = True
@ensure_connect
def send(self, command, prompt=None, answer=None, newline=True, sendonly=False, prompt_retry_check=False, check_all=False):
'''
Sends the command to the device in the opened shell
'''
if check_all:
prompt_len = len(to_list(prompt))
answer_len = len(to_list(answer))
if prompt_len != answer_len:
raise AnsibleConnectionFailure("Number of prompts (%s) is not same as that of answers (%s)" % (prompt_len, answer_len))
try:
cmd = b'%s\r' % command
self._history.append(cmd)
self._ssh_shell.sendall(cmd)
self._log_messages('send command: %s' % cmd)
if sendonly:
return
response = self.receive(command, prompt, answer, newline, prompt_retry_check, check_all)
return to_text(response, errors='surrogate_or_strict')
except (socket.timeout, AttributeError):
self.queue_message('error', traceback.format_exc())
raise AnsibleConnectionFailure("timeout value %s seconds reached while trying to send command: %s"
% (self._ssh_shell.gettimeout(), command.strip()))
def _handle_buffer_read_timeout(self, signum, frame):
self.queue_message('vvvv', "Response received, triggered 'persistent_buffer_read_timeout' timer of %s seconds" %
self.get_option('persistent_buffer_read_timeout'))
raise AnsibleCmdRespRecv()
def _handle_command_timeout(self, signum, frame):
msg = 'command timeout triggered, timeout value is %s secs.\nSee the timeout setting options in the Network Debug and Troubleshooting Guide.'\
% self.get_option('persistent_command_timeout')
self.queue_message('log', msg)
raise AnsibleConnectionFailure(msg)
def _strip(self, data):
'''
Removes ANSI codes from device response
'''
for regex in self._terminal.ansi_re:
data = regex.sub(b'', data)
return data
def _handle_prompt(self, resp, prompts, answer, newline, prompt_retry_check=False, check_all=False):
'''
Matches the command prompt and responds
:arg resp: Byte string containing the raw response from the remote
:arg prompts: Sequence of byte strings that we consider prompts for input
:arg answer: Sequence of Byte string to send back to the remote if we find a prompt.
A carriage return is automatically appended to this string.
:param prompt_retry_check: Bool value for trying to detect more prompts
:param check_all: Bool value to indicate if all the values in prompt sequence should be matched or any one of
given prompt.
:returns: True if a prompt was found in ``resp``. If check_all is True,
returns True only after all the prompts in the prompts list are matched. False otherwise.
'''
single_prompt = False
if not isinstance(prompts, list):
prompts = [prompts]
single_prompt = True
if not isinstance(answer, list):
answer = [answer]
prompts_regex = [re.compile(to_bytes(r), re.I) for r in prompts]
for index, regex in enumerate(prompts_regex):
match = regex.search(resp)
if match:
self._matched_cmd_prompt = match.group()
self._log_messages("matched command prompt: %s" % self._matched_cmd_prompt)
# if prompt_retry_check is enabled to detect whether the same prompt
# is repeated, don't send the answer again.
if not prompt_retry_check:
prompt_answer = answer[index] if len(answer) > index else answer[0]
self._ssh_shell.sendall(b'%s' % prompt_answer)
if newline:
self._ssh_shell.sendall(b'\r')
prompt_answer += b'\r'
self._log_messages("matched command prompt answer: %s" % prompt_answer)
if check_all and prompts and not single_prompt:
prompts.pop(0)
answer.pop(0)
return False
return True
return False
def _sanitize(self, resp, command=None):
'''
Removes elements from the response before returning to the caller
'''
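# Drop the echoed command and any line containing the matched prompt so the
# caller sees only the device output.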
cleaned = []
for line in resp.splitlines():
if command and line.strip() == command.strip():
continue
for prompt in self._matched_prompt.strip().splitlines():
if prompt.strip() in line:
break
else:
cleaned.append(line)
return b'\n'.join(cleaned).strip()
def _find_prompt(self, response):
'''Searches the buffered response for a matching command prompt
'''
errored_response = None
is_error_message = False
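# First pass: look for a terminal_stderr_re match (an error); if one is found,
# also locate the trailing stdout prompt so the full error text can be
# reported. Otherwise, match the normal cli prompt on its own.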
for regex in self._terminal_stderr_re:
if regex.search(response):
is_error_message = True
# Check whether the error response ends with the command prompt; if it
# does not, continue receiving until the prompt is buffered.
for regex in self._terminal_stdout_re:
match = regex.search(response)
if match:
errored_response = response
self._matched_pattern = regex.pattern
self._matched_prompt = match.group()
self._log_messages("matched error regex '%s' from response '%s'" % (self._matched_pattern, errored_response))
break
if not is_error_message:
for regex in self._terminal_stdout_re:
match = regex.search(response)
if match:
self._matched_pattern = regex.pattern
self._matched_prompt = match.group()
self._log_messages("matched cli prompt '%s' with regex '%s' from response '%s'" % (self._matched_prompt, self._matched_pattern, response))
if not errored_response:
return True
if errored_response:
raise AnsibleConnectionFailure(errored_response)
return False
def _validate_timeout_value(self, timeout, timer_name):
if timeout < 0:
raise AnsibleConnectionFailure("'%s' timer value '%s' is invalid, value should be greater than or equal to zero." % (timer_name, timeout))
def transport_test(self, connect_timeout):
"""This method enables wait_for_connection to work.
As it is used by wait_for_connection, it is called by that module's action plugin,
which is on the controller process, which means that nothing done on this instance
should impact the actual persistent connection... this check is for informational
purposes only and should be properly cleaned up.
"""
# Force a fresh connect if for some reason we have connected before.
self.close()
self._connect()
self.close()
def _get_terminal_std_re(self, option):
terminal_std_option = self.get_option(option)
terminal_std_re = []
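# Each configured item is a dict like {'pattern': ..., 'flags': 're.I'}; the
# pattern is compiled into a bytes regex and the 'flags' string is mapped to
# the corresponding attribute of the re module.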
if terminal_std_option:
for item in terminal_std_option:
if "pattern" not in item:
raise AnsibleConnectionFailure("'pattern' is a required key for option '%s',"
" received option value is %s" % (option, item))
pattern = br"%s" % to_bytes(item['pattern'])
flag = item.get('flags', 0)
if flag:
flag = getattr(re, flag.split('.')[1])
terminal_std_re.append(re.compile(pattern, flag))
else:
# To maintain backward compatibility
terminal_std_re = getattr(self._terminal, option)
return terminal_std_re
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,237 |
aireos modules unable to handle binary data from controller
|
(Issue body identical to the first row for issue 60,237 above.)
|
https://github.com/ansible/ansible/issues/60237
|
https://github.com/ansible/ansible/pull/60243
|
30cc54da8cc4efd7a554df964b0f7bddfde0c9c4
|
50d1cbd30a8e19bcce312bd429acb4b690255f51
| 2019-08-07T21:07:19Z |
python
| 2019-10-01T20:52:35Z |
test/units/modules/network/aireos/test_aireos_command.py
|
# (c) 2016 Red Hat Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
from units.compat.mock import patch
from ansible.modules.network.aireos import aireos_command
from units.modules.utils import set_module_args
from .aireos_module import TestCiscoWlcModule, load_fixture
class TestCiscoWlcCommandModule(TestCiscoWlcModule):
module = aireos_command
def setUp(self):
super(TestCiscoWlcCommandModule, self).setUp()
self.mock_run_commands = patch('ansible.modules.network.aireos.aireos_command.run_commands')
self.run_commands = self.mock_run_commands.start()
def tearDown(self):
super(TestCiscoWlcCommandModule, self).tearDown()
self.mock_run_commands.stop()
def load_fixtures(self, commands=None):
def load_from_file(*args, **kwargs):
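# Each mocked command resolves to a fixture named after the command with
# spaces replaced by underscores, e.g. 'show sysinfo' -> 'show_sysinfo'.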
module, commands = args
output = list()
for item in commands:
try:
obj = json.loads(item['command'])
command = obj['command']
except ValueError:
command = item['command']
filename = str(command).replace(' ', '_')
output.append(load_fixture(filename))
return output
self.run_commands.side_effect = load_from_file
def test_aireos_command_simple(self):
set_module_args(dict(commands=['show sysinfo']))
result = self.execute_module()
self.assertEqual(len(result['stdout']), 1)
self.assertTrue(result['stdout'][0].startswith('Manufacturer\'s Name'))
def test_aireos_command_multiple(self):
set_module_args(dict(commands=['show sysinfo', 'show sysinfo']))
result = self.execute_module()
self.assertEqual(len(result['stdout']), 2)
self.assertTrue(result['stdout'][0].startswith('Manufacturer\'s Name'))
def test_aireos_command_wait_for(self):
wait_for = 'result[0] contains "Cisco Systems Inc"'
set_module_args(dict(commands=['show sysinfo'], wait_for=wait_for))
self.execute_module()
def test_aireos_command_wait_for_fails(self):
wait_for = 'result[0] contains "test string"'
set_module_args(dict(commands=['show sysinfo'], wait_for=wait_for))
self.execute_module(failed=True)
self.assertEqual(self.run_commands.call_count, 10)
def test_aireos_command_retries(self):
wait_for = 'result[0] contains "test string"'
set_module_args(dict(commands=['show sysinfo'], wait_for=wait_for, retries=2))
self.execute_module(failed=True)
self.assertEqual(self.run_commands.call_count, 2)
def test_aireos_command_match_any(self):
wait_for = ['result[0] contains "Cisco Systems Inc"',
'result[0] contains "test string"']
set_module_args(dict(commands=['show sysinfo'], wait_for=wait_for, match='any'))
self.execute_module()
def test_aireos_command_match_all(self):
wait_for = ['result[0] contains "Cisco Systems Inc"',
'result[0] contains "Cisco Controller"']
set_module_args(dict(commands=['show sysinfo'], wait_for=wait_for, match='all'))
self.execute_module()
def test_aireos_command_match_all_failure(self):
wait_for = ['result[0] contains "Cisco Systems Inc"',
'result[0] contains "test string"']
commands = ['show sysinfo', 'show sysinfo']
set_module_args(dict(commands=commands, wait_for=wait_for, match='all'))
self.execute_module(failed=True)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,869 |
Missing default collection install path
|
##### SUMMARY
`ansible-galaxy collection install` currently requires the user to specify an install path using the `-p` option. Galaxy currently generates and displays the CLI command for installing a collection. There is no way for Galaxy to determine a value for `-p`, and hard coding a default value into Galaxy makes no sense.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`ansible-galaxy`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/Users/chouseknecht/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/chouseknecht/projects/ansible/lib/ansible
executable location = /Users/chouseknecht/.pyenv/versions/venv27/bin/ansible
python version = 2.7.14 (default, Nov 14 2017, 23:24:24) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.38)]
```
|
https://github.com/ansible/ansible/issues/62869
|
https://github.com/ansible/ansible/pull/62870
|
1fd79240db5dd6bef033982d113edc95dba8d017
|
911aa6aab9aea867bc79e38684ab1b198fa36937
| 2019-09-26T12:11:07Z |
python
| 2019-10-02T14:22:00Z |
changelogs/fragments/62870-collection-install-default-path.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,869 |
Missing default collection install path
|
(Issue body identical to the first row for issue 62,869 above.)
|
https://github.com/ansible/ansible/issues/62869
|
https://github.com/ansible/ansible/pull/62870
|
1fd79240db5dd6bef033982d113edc95dba8d017
|
911aa6aab9aea867bc79e38684ab1b198fa36937
| 2019-09-26T12:11:07Z |
python
| 2019-10-02T14:22:00Z |
docs/docsite/rst/dev_guide/developing_collections.rst
|
.. _developing_collections:
**********************
Developing collections
**********************
Collections are a distribution format for Ansible content. You can use collections to package and distribute playbooks, roles, modules, and plugins.
You can publish and use collections through `Ansible Galaxy <https://galaxy.ansible.com>`_.
.. contents::
:local:
:depth: 2
.. _collection_structure:
Collection structure
====================
Collections follow a simple data structure. None of the directories are required unless you have specific content that belongs in one of them. A collection does require a ``galaxy.yml`` file at the root level of the collection. This file contains all of the metadata that Galaxy
and other tools need in order to package, build and publish the collection::
collection/
├── docs/
├── galaxy.yml
├── plugins/
│ ├── modules/
│ │ └── module1.py
│ ├── inventory/
│ └── .../
├── README.md
├── roles/
│ ├── role1/
│ ├── role2/
│ └── .../
├── playbooks/
│ ├── files/
│ ├── vars/
│ ├── templates/
│ └── tasks/
└── tests/
.. note::
* Ansible only accepts ``.yml`` extensions for galaxy.yml.
* See the `draft collection <https://github.com/bcoca/collection>`_ for an example of a full collection structure.
* Not all directories are currently in use. Those are placeholders for future features.
.. _galaxy_yml:
galaxy.yml
----------
A collection must have a ``galaxy.yml`` file that contains the necessary information to build a collection artifact.
See :ref:`collections_galaxy_meta` for details.
.. _collections_doc_dir:
docs directory
---------------
Keep general documentation for the collection here. Plugins and modules still keep their specific documentation embedded as Python docstrings. Use the ``docs`` folder to describe how to use the roles and plugins the collection provides, role requirements, and so on. Currently we are looking at Markdown as the standard format for documentation files, but this is subject to change.
Use ``ansible-doc`` to view documentation for plugins inside a collection:
.. code-block:: bash
ansible-doc -t lookup my_namespace.my_collection.lookup1
The ``ansible-doc`` command requires the fully qualified collection name (FQCN) to display specific plugin documentation. In this example, ``my_namespace`` is the namespace and ``my_collection`` is the collection name within that namespace.
.. note:: The Ansible collection namespace is defined in the ``galaxy.yml`` file and is not equivalent to the GitHub repository name.
.. _collections_plugin_dir:
plugins directory
------------------
Add a subdirectory for each plugin type here, including ``module_utils``, which is usable not only by modules but by most other plugins via its FQCN. This is a way to distribute modules, lookups, filters, and so on, without having to import a role in every play.
Vars plugins are unsupported in collections. Cache plugins may be used in collections for fact caching, but are not supported for inventory plugins.
module_utils
^^^^^^^^^^^^
When coding with ``module_utils`` in a collection, the Python ``import`` statement needs to take into account the FQCN along with the ``ansible_collections`` convention. The resulting Python import will look like ``from ansible_collections.{namespace}.{collection}.plugins.module_utils.{util} import {something}``
The following example snippets show a Python and PowerShell module using both default Ansible ``module_utils`` and
those provided by a collection. In this example the namespace is ``ansible_example``, the collection is ``community``.
In the Python example the ``module_util`` in question is called ``qradar`` such that the FQCN is
``ansible_example.community.plugins.module_utils.qradar``:
.. code-block:: python
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_text
from ansible.module_utils.six.moves.urllib.parse import urlencode, quote_plus
from ansible.module_utils.six.moves.urllib.error import HTTPError
from ansible_collections.ansible_example.community.plugins.module_utils.qradar import QRadarRequest
argspec = dict(
name=dict(required=True, type='str'),
state=dict(choices=['present', 'absent'], required=True),
)
module = AnsibleModule(
argument_spec=argspec,
supports_check_mode=True
)
qradar_request = QRadarRequest(
module,
headers={"Content-Type": "application/json"},
not_rest_data_keys=['state']
)
Note that importing something from an ``__init__.py`` file requires using the file name:
.. code-block:: python
from ansible_collections.namespace.collection_name.plugins.callback.__init__ import CustomBaseClass
In the PowerShell example the ``module_util`` in question is called ``hyperv`` such that the FQCN is
``ansible_example.community.plugins.module_utils.hyperv``:
.. code-block:: powershell
#!powershell
#AnsibleRequires -CSharpUtil Ansible.Basic
#AnsibleRequires -PowerShell ansible_collections.ansible_example.community.plugins.module_utils.hyperv
$spec = @{
name = @{ required = $true; type = "str" }
state = @{ required = $true; choices = @("present", "absent") }
}
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)
Invoke-HyperVFunction -Name $module.Params.name
$module.ExitJson()
.. _collections_roles_dir:
roles directory
----------------
Collection roles are mostly the same as existing roles, but with a couple of limitations:
- Role names are now limited to lowercase alphanumeric characters plus ``_``, and must start with an alphabetic character (a small sketch of this rule appears at the end of this section).
- Roles in a collection cannot contain plugins any more. Plugins must live in the collection ``plugins`` directory tree. Each plugin is accessible to all roles in the collection.
The directory name of the role is used as the role name. Therefore, the directory name must comply with the
above role name rules.
The collection import into Galaxy will fail if a role name does not comply with these rules.
You can migrate 'traditional roles' into a collection, but they must follow the rules above. You may need to rename roles if they don't conform. You will have to move or link any role-based plugins to the collection-specific directories.
.. note::
For roles imported into Galaxy directly from a GitHub repository, setting the ``role_name`` value in the role's
metadata overrides the role name used by Galaxy. For collections, that value is ignored. When importing a
collection, Galaxy uses the role directory as the name of the role and ignores the ``role_name`` metadata value.
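Below is a rough illustration of the naming rule described above; the regex is an assumption for demonstration purposes, not necessarily the exact check Galaxy performs:
.. code-block:: python
import re
# lowercase alphanumerics plus underscore, starting with a letter
ROLE_NAME_RE = re.compile(r'^[a-z][a-z0-9_]*$')
for candidate in ('my_role', 'MyRole', '1role'):
    print(candidate, bool(ROLE_NAME_RE.match(candidate)))
# prints: my_role True, MyRole False, 1role False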
playbooks directory
--------------------
TBD.
tests directory
----------------
TBD. Expect tests for the collection itself to reside here.
.. _creating_collections:
Creating collections
======================
To create a collection:
#. Initialize a collection with :ref:`ansible-galaxy collection init<creating_collections_skeleton>` to create the skeleton directory structure.
#. Add your content to the collection.
#. Build the collection into a collection artifact with :ref:`ansible-galaxy collection build<building_collections>`.
#. Publish the collection artifact to Galaxy with :ref:`ansible-galaxy collection publish<publishing_collections>`.
A user can then install your collection on their systems.
Currently the ``ansible-galaxy collection`` command implements the following subcommands:
* ``init``: Create a basic collection skeleton based on the default template included with Ansible or your own template.
* ``build``: Create a collection artifact that can be uploaded to Galaxy or your own repository.
* ``publish``: Publish a built collection artifact to Galaxy.
* ``install``: Install one or more collections.
To learn more about the ``ansible-galaxy`` CLI tool, see the :ref:`ansible-galaxy` man page.
.. _creating_collections_skeleton:
Creating a collection skeleton
------------------------------
To start a new collection:
.. code-block:: bash
collection_dir#> ansible-galaxy collection init my_namespace.my_collection
Then you can populate the directories with the content you want inside the collection. See
https://github.com/bcoca/collection to get a better idea of what you can place inside a collection.
.. _building_collections:
Building collections
--------------------
To build a collection, run ``ansible-galaxy collection build`` from inside the root directory of the collection:
.. code-block:: bash
collection_dir#> ansible-galaxy collection build
This creates a tarball of the built collection in the current directory, which can be uploaded to Galaxy::
my_collection/
├── galaxy.yml
├── ...
├── my_namespace-my_collection-1.0.0.tar.gz
└── ...
.. note::
Certain files and folders are excluded when building the collection artifact. This is not currently configurable
and is a work in progress, so the collection artifact may contain files you would not wish to distribute.
This tarball is mainly intended for uploading to Galaxy as a distribution method, but you can also use it directly to install the collection on target systems.
.. _trying_collection_locally:
Trying collection locally
-------------------------
You can try your collection locally by installing it from the tarball.
.. code-block:: bash
ansible-galaxy collection install my_namespace-my_collection-1.0.0.tar.gz -p ./collections/ansible_collections
You should use one of the values configured in :ref:`COLLECTIONS_PATHS` for your path. This is also where Ansible itself will expect to find collections when attempting to use them.
Then try to use the local collection inside a playbook; for more details, see :ref:`Using collections <using_collections>`.
.. _publishing_collections:
Publishing collections
----------------------
You can publish collections to Galaxy using the ``ansible-galaxy collection publish`` command or the Galaxy UI itself.
.. note:: Once you upload a version of a collection, you cannot delete or modify that version. Ensure that everything looks okay before you upload it.
.. _upload_collection_ansible_galaxy:
Upload using ansible-galaxy
^^^^^^^^^^^^^^^^^^^^^^^^^^^
To upload the collection artifact with the ``ansible-galaxy`` command:
.. code-block:: bash
ansible-galaxy collection publish path/to/my_namespace-my_collection-1.0.0.tar.gz --api-key=SECRET
The above command triggers an import process, just as if you uploaded the collection through the Galaxy website.
The command waits until the import process completes before reporting the status back. If you wish to continue
without waiting for the import result, use the ``--no-wait`` argument and manually look at the import progress in your
`My Imports <https://galaxy.ansible.com/my-imports/>`_ page.
The API key is a secret token used by Ansible Galaxy to protect your content. You can find your API key at your
`Galaxy profile preferences <https://galaxy.ansible.com/me/preferences>`_ page.
.. _upload_collection_galaxy:
Upload a collection from the Galaxy website
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To upload your collection artifact directly on Galaxy:
#. Go to the `My Content <https://galaxy.ansible.com/my-content/namespaces>`_ page, and click the **Add Content** button on one of your namespaces.
#. From the **Add Content** dialogue, click **Upload New Collection**, and select the collection archive file from your local filesystem.
When uploading collections it doesn't matter which namespace you select. The collection will be uploaded to the
namespace specified in the collection metadata in the ``galaxy.yml`` file. If you're not an owner of the
namespace, the upload request will fail.
Once Galaxy uploads and accepts a collection, you will be redirected to the **My Imports** page, which displays output from the
import process, including any errors or warnings about the metadata and content contained in the collection.
.. _collection_versions:
Collection versions
-------------------
Once you upload a version of a collection, you cannot delete or modify that version. Ensure that everything looks okay before
uploading. The only way to change a collection is to release a new version. The latest version of a collection (by highest version number)
will be the version displayed everywhere in Galaxy; however, users will still be able to download older versions.
Collection versions use `Semantic Versioning <https://semver.org/>`_ for version numbers. Please read the official documentation for details and examples. In summary (a short illustrative sketch follows the list):
* Increment the major version (the ``x`` in ``x.y.z``) for an incompatible API change.
* Increment the minor version (the ``y`` in ``x.y.z``) for new functionality added in a backwards compatible manner.
* Increment the patch version (the ``z`` in ``x.y.z``) for backwards compatible bug fixes.
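As a minimal sketch of these rules (illustrative only; the ``bump`` helper below is not part of any Galaxy tooling):
.. code-block:: python
def bump(version, part):
    """Return ``version`` with the given part incremented per SemVer."""
    major, minor, patch = (int(p) for p in version.split('.'))
    if part == 'major':
        return '%d.0.0' % (major + 1)
    if part == 'minor':
        return '%d.%d.0' % (major, minor + 1)
    return '%d.%d.%d' % (major, minor, patch + 1)
print(bump('1.4.2', 'minor'))  # 1.5.0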
.. _migrate_to_collection:
Migrating Ansible content to a collection
=========================================
You can experiment with migrating existing modules into a collection using the `content_collector tool <https://github.com/ansible/content_collector>`_. The ``content_collector`` is a playbook that helps you migrate content from an Ansible distribution into a collection.
.. warning::
This tool is in active development and is provided only for experimentation and feedback at this point.
See the `content_collector README <https://github.com/ansible/content_collector>`_ for full details and usage guidelines.
.. seealso::
:ref:`collections`
Learn how to install and use collections.
:ref:`collections_galaxy_meta`
Understand the collections metadata structure.
:ref:`developing_modules_general`
Learn how to write Ansible modules.
`Mailing List <https://groups.google.com/group/ansible-devel>`_
The development mailing list
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,869 |
Missing default collection install path
|
##### SUMMARY
`ansible-galaxy collection install` currently requires the user to specify an install path using the `-p` option. Galaxy currently generates and displays the CLI command for installing a collection. There is no way for Galaxy to determine a value for `-p`, and hard coding a default value into Galaxy makes no sense.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`ansible-galaxy`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/Users/chouseknecht/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/chouseknecht/projects/ansible/lib/ansible
executable location = /Users/chouseknecht/.pyenv/versions/venv27/bin/ansible
python version = 2.7.14 (default, Nov 14 2017, 23:24:24) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.38)]
```
|
https://github.com/ansible/ansible/issues/62869
|
https://github.com/ansible/ansible/pull/62870
|
1fd79240db5dd6bef033982d113edc95dba8d017
|
911aa6aab9aea867bc79e38684ab1b198fa36937
| 2019-09-26T12:11:07Z |
python
| 2019-10-02T14:22:00Z |
docs/docsite/rst/user_guide/collections_using.rst
|
.. _collections:
*****************
Using collections
*****************
Collections are a distribution format for Ansible content that can include playbooks, roles, modules, and plugins.
You can install and use collections through `Ansible Galaxy <https://galaxy.ansible.com>`_.
.. contents::
:local:
:depth: 2
.. _collections_installing:
Installing collections
======================
You can use the ``ansible-galaxy collection install`` command to install a collection on your system. You must specify an installation location using the ``-p`` option.
To install a collection hosted in Galaxy:
.. code-block:: bash
ansible-galaxy collection install my_namespace.my_collection -p /collections
You can also directly use the tarball from your build:
.. code-block:: bash
ansible-galaxy collection install my_namespace-my_collection-1.0.0.tar.gz -p ./collections
.. note::
The install command automatically appends the path ``ansible_collections`` to the one specified with the ``-p`` option unless the
parent directory is already in a folder called ``ansible_collections``.
You should use one of the values configured in :ref:`COLLECTIONS_PATHS` for your path. This is also where Ansible itself will expect to find collections when attempting to use them.
You can also keep a collection adjacent to the current playbook, under a ``collections/ansible_collections/`` directory structure.
::
play.yml
├── collections/
│ └── ansible_collections/
│ └── my_namespace/
│ └── my_collection/<collection structure lives here>
See :ref:`collection_structure` for details on the collection directory structure.
.. _collections_older_version:
Installing an older version of a collection
-------------------------------------------
By default, ``ansible-galaxy`` installs the latest available collection, but you can add a version range
identifier to install a specific version.
To install the 1.0.0 version of the collection:
.. code-block:: bash
ansible-galaxy collection install my_namespace.my_collection:1.0.0
To install the 1.0.0-beta.1 version of the collection:
.. code-block:: bash
ansible-galaxy collection install my_namespace.my_collection:==1.0.0-beta.1
To install the collections that are greater than or equal to 1.0.0 or less than 2.0.0:
.. code-block:: bash
ansible-galaxy collection install my_namespace.my_collection:>=1.0.0,<2.0.0
You can specify multiple range identifiers separated by ``,``. You can use the following range identifiers:
* ``*``: Any version, this is the default when no range is specified.
* ``!=``: Version is not equal to the one specified.
* ``==``: Version must be the one specified.
* ``>=``: Version is greater than or equal to the one specified.
* ``>``: Version is greater than the one specified.
* ``<=``: Version is less than or equal to the one specified.
* ``<``: Version is less than the one specified.
.. note::
The ``ansible-galaxy`` command ignores any pre-release versions unless the ``==`` range identifier is used to
explicitly select that pre-release version.
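Conceptually, a multi-identifier requirement is a comma-separated list of identifiers that must all be satisfied. A simplified sketch of splitting one such spec (not the actual ``ansible-galaxy`` implementation):
.. code-block:: python
spec = '>=1.0.0,<2.0.0'
identifiers = spec.split(',')
print(identifiers)  # ['>=1.0.0', '<2.0.0'] -- every identifier must hold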
.. _collection_requirements_file:
Install multiple collections with a requirements file
-----------------------------------------------------
You can also set up a ``requirements.yml`` file to install multiple collections in one command. This file is a YAML file in the format:
.. code-block:: yaml+jinja
---
collections:
# With just the collection name
- my_namespace.my_collection
# With the collection name, version, and source options
- name: my_namespace.my_other_collection
version: 'version range identifiers (default: ``*``)'
source: 'The Galaxy URL to pull the collection from (default: ``--api-server`` from cmdline)'
The ``version`` key takes the same range identifier format documented above.
Roles can also be specified and placed under the ``roles`` key. The values follow the same format as a requirements
file used in older Ansible releases.
.. note::
While both roles and collections can be specified in one requirements file, they need to be installed separately.
The ``ansible-galaxy role install -r requirements.yml`` command will only install roles, and
``ansible-galaxy collection install -r requirements.yml -p ./`` will only install collections.
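As a minimal sketch, this is how you could read such a file yourself, assuming PyYAML is installed (this mirrors, but is not, Ansible's internal parser):
.. code-block:: python
import yaml
with open('requirements.yml', 'rb') as f:
    requirements = yaml.safe_load(f)
# both keys are optional in the v2 format
roles = requirements.get('roles', [])
collections = requirements.get('collections', [])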
.. _galaxy_server_config:
Galaxy server configuration list
--------------------------------
By default, running ``ansible-galaxy`` uses the :ref:`galaxy_server` config value or the ``--server`` command line
argument when it performs an action against a Galaxy server. The ``ansible-galaxy collection install`` command supports
installing collections from multiple servers as defined in the :ref:`ansible_configuration_settings_locations` file
using the :ref:`galaxy_server_list` configuration option. To define multiple Galaxy servers, create entries like the
following:
.. code-block:: ini
[galaxy]
server_list = my_org_hub, release_galaxy, test_galaxy
[galaxy_server.my_org_hub]
url=https://automation.my_org/
username=my_user
password=my_pass
[galaxy_server.release_galaxy]
url=https://galaxy.ansible.com/
token=my_token
[galaxy_server.test_galaxy]
url=https://galaxy-dev.ansible.com/
token=my_token
.. note::
You can use the ``--server`` command line argument to select an explicit Galaxy server in the ``server_list``; the
value of this argument should match the name of the server. If the value of ``--server`` is not a pre-defined server
in ``ansible.cfg``, then the value specified is used as the URL to access that server, and all pre-defined servers
are ignored. The ``--api-key`` argument is not applied to any of the pre-defined servers; it is only applied
if no server list is defined or a URL was specified by ``--server``.
The :ref:`galaxy_server_list` option is a list of server identifiers in a prioritized order. When searching for a
collection, the install process will search in that order, e.g. ``my_org_hub`` first, then ``release_galaxy``, and
finally ``test_galaxy`` until the collection is found. The actual Galaxy instance is then defined under the section
``[galaxy_server.{{ id }}]`` where ``{{ id }}`` is the server identifier defined in the list. This section can then
define the following keys:
* ``url``: The URL of the Galaxy instance to connect to. This is required.
* ``token``: A token key to use for authentication against the Galaxy instance. This is mutually exclusive with ``username``.
* ``username``: The username to use for basic authentication against the Galaxy instance. This is mutually exclusive with ``token``.
* ``password``: The password to use for basic authentication.
As well as being defined in the ``ansible.cfg`` file, these server options can also be defined as environment variables.
The environment variable is in the form ``ANSIBLE_GALAXY_SERVER_{{ id }}_{{ key }}`` where ``{{ id }}`` is the uppercase
form of the server identifier and ``{{ key }}`` is the key to define. For example, you can define ``token`` for
``release_galaxy`` by setting ``ANSIBLE_GALAXY_SERVER_RELEASE_GALAXY_TOKEN=secret_token``.
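A tiny sketch of how that variable name is derived (the string construction matches the pattern above; the actual lookup happens inside Ansible's config system):
.. code-block:: python
server_id, key = 'release_galaxy', 'token'
print('ANSIBLE_GALAXY_SERVER_%s_%s' % (server_id.upper(), key.upper()))
# ANSIBLE_GALAXY_SERVER_RELEASE_GALAXY_TOKEN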
For operations that use only one Galaxy server (``publish``, ``info``, and ``login``), the first entry in the
``server_list`` is used unless an explicit server was passed in as a command line argument.
.. note::
Once a collection is found, any of its requirements are only searched within the same Galaxy instance as the parent
collection. The install process will not search for a collection requirement in a different Galaxy instance.
.. _using_collections:
Using collections in a Playbook
===============================
Once installed, you can reference collection content by its fully qualified collection name (FQCN):
.. code-block:: yaml
- hosts: all
tasks:
- my_namespace.my_collection.mymodule:
option1: value
This works for roles or any type of plugin distributed within the collection:
.. code-block:: yaml
- hosts: all
tasks:
- import_role:
name: my_namespace.my_collection.role1
- my_namespace.my_collection.mymodule:
option1: value
- debug:
msg: '{{ lookup("my_namespace.my_collection.lookup1", "param1") | my_namespace.my_collection.filter1 }}'
To avoid a lot of typing, you can use the ``collections`` keyword added in Ansible 2.8:
.. code-block:: yaml
- hosts: all
collections:
- my_namespace.my_collection
tasks:
- import_role:
name: role1
- mymodule:
option1: value
- debug:
msg: '{{ lookup("my_namespace.my_collection.lookup1", "param1") | my_namespace.my_collection.filter1 }}'
This keyword creates a 'search path' for non-namespaced plugin references. It does not import roles or anything else.
Notice that you still need the FQCN for plugins other than modules and action plugins.
.. seealso::
:ref:`developing_collections`
Develop or modify a collection.
:ref:`collections_galaxy_meta`
Understand the collections metadata structure.
`Mailing List <https://groups.google.com/group/ansible-devel>`_
The development mailing list
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,869 |
Missing default collection install path
|
##### SUMMARY
`ansible-galaxy collection install` currently requires the user to specify an install path using the `-p` option. Galaxy currently generates and displays the CLI command for installing a collection. There is no way for Galaxy to determine a value for `-p`, and hard coding a default value into Galaxy makes no sense.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`ansible-galaxy`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/Users/chouseknecht/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/chouseknecht/projects/ansible/lib/ansible
executable location = /Users/chouseknecht/.pyenv/versions/venv27/bin/ansible
python version = 2.7.14 (default, Nov 14 2017, 23:24:24) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.38)]
```
|
https://github.com/ansible/ansible/issues/62869
|
https://github.com/ansible/ansible/pull/62870
|
1fd79240db5dd6bef033982d113edc95dba8d017
|
911aa6aab9aea867bc79e38684ab1b198fa36937
| 2019-09-26T12:11:07Z |
python
| 2019-10-02T14:22:00Z |
lib/ansible/cli/galaxy.py
|
# Copyright: (c) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os.path
import re
import shutil
import textwrap
import time
import yaml
from jinja2 import BaseLoader, Environment, FileSystemLoader
from yaml.error import YAMLError
import ansible.constants as C
from ansible import context
from ansible.cli import CLI
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import build_collection, install_collections, publish_collection, \
validate_collection_name
from ansible.galaxy.login import GalaxyLogin
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import GalaxyToken, NoTokenSentinel
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.playbook.role.requirement import RoleRequirement
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
class GalaxyCLI(CLI):
'''command to manage Ansible roles in shared repositories, the default of which is Ansible Galaxy *https://galaxy.ansible.com*.'''
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url")
def __init__(self, args):
# Inject role into sys.argv[1] as a backwards compatibility step
if len(args) > 1 and args[1] not in ['-h', '--help'] and 'role' not in args and 'collection' not in args:
# TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
# Remove this in Ansible 2.13 when we also remove -v as an option on the root parser for ansible-galaxy.
idx = 2 if args[1].startswith('-v') else 1
args.insert(idx, 'role')
self.api_servers = []
self.galaxy = None
super(GalaxyCLI, self).__init__(args)
def init_parser(self):
''' create an options parser for bin/ansible-galaxy '''
super(GalaxyCLI, self).init_parser(
desc="Perform various Role and Collection related operations.",
)
# Common arguments that apply to more than 1 action
common = opt_help.argparse.ArgumentParser(add_help=False)
common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL')
common.add_argument('--api-key', dest='api_key',
help='The Ansible Galaxy API key which can be found at '
'https://galaxy.ansible.com/me/preferences. You can also use ansible-galaxy login to '
'retrieve this key or set the token for the GALAXY_SERVER_LIST entry.')
common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs',
default=C.GALAXY_IGNORE_CERTS, help='Ignore SSL certificate validation errors.')
opt_help.add_verbosity_options(common)
force = opt_help.argparse.ArgumentParser(add_help=False)
force.add_argument('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role or collection')
github = opt_help.argparse.ArgumentParser(add_help=False)
github.add_argument('github_user', help='GitHub username')
github.add_argument('github_repo', help='GitHub repository')
offline = opt_help.argparse.ArgumentParser(add_help=False)
offline.add_argument('--offline', dest='offline', default=False, action='store_true',
help="Don't query the galaxy API when creating roles")
default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '')
roles_path = opt_help.argparse.ArgumentParser(add_help=False)
roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True),
default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction,
help='The path to the directory containing your roles. The default is the first '
'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path)
# Add sub parser for the Galaxy role type (role or collection)
type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type')
type_parser.required = True
# Add sub parser for the Galaxy collection actions
collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.')
collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action')
collection_parser.required = True
self.add_init_options(collection_parser, parents=[common, force])
self.add_build_options(collection_parser, parents=[common, force])
self.add_publish_options(collection_parser, parents=[common])
self.add_install_options(collection_parser, parents=[common, force])
# Add sub parser for the Galaxy role actions
role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.')
role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action')
role_parser.required = True
self.add_init_options(role_parser, parents=[common, force, offline])
self.add_remove_options(role_parser, parents=[common, roles_path])
self.add_delete_options(role_parser, parents=[common, github])
self.add_list_options(role_parser, parents=[common, roles_path])
self.add_search_options(role_parser, parents=[common])
self.add_import_options(role_parser, parents=[common, github])
self.add_setup_options(role_parser, parents=[common, roles_path])
self.add_login_options(role_parser, parents=[common])
self.add_info_options(role_parser, parents=[common, roles_path, offline])
self.add_install_options(role_parser, parents=[common, force, roles_path])
def add_init_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
init_parser = parser.add_parser('init', parents=parents,
help='Initialize new {0} with the base structure of a '
'{0}.'.format(galaxy_type))
init_parser.set_defaults(func=self.execute_init)
init_parser.add_argument('--init-path', dest='init_path', default='./',
help='The path in which the skeleton {0} will be created. The default is the '
'current working directory.'.format(galaxy_type))
init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type),
default=C.GALAXY_ROLE_SKELETON,
help='The path to a {0} skeleton that the new {0} should be based '
'upon.'.format(galaxy_type))
obj_name_kwargs = {}
if galaxy_type == 'collection':
obj_name_kwargs['type'] = validate_collection_name
init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()),
**obj_name_kwargs)
if galaxy_type == 'role':
init_parser.add_argument('--type', dest='role_type', action='store', default='default',
help="Initialize using an alternate role type. Valid types include: 'container', "
"'apb' and 'network'.")
def add_remove_options(self, parser, parents=None):
remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.')
remove_parser.set_defaults(func=self.execute_remove)
remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+')
def add_delete_options(self, parser, parents=None):
delete_parser = parser.add_parser('delete', parents=parents,
help='Removes the role from Galaxy. It does not remove or alter the actual '
'GitHub repository.')
delete_parser.set_defaults(func=self.execute_delete)
def add_list_options(self, parser, parents=None):
list_parser = parser.add_parser('list', parents=parents,
help='Show the name and version of each role installed in the roles_path.')
list_parser.set_defaults(func=self.execute_list)
list_parser.add_argument('role', help='Role', nargs='?', metavar='role')
def add_search_options(self, parser, parents=None):
search_parser = parser.add_parser('search', parents=parents,
help='Search the Galaxy database by tags, platforms, author and multiple '
'keywords.')
search_parser.set_defaults(func=self.execute_search)
search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by')
search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by')
search_parser.add_argument('--author', dest='author', help='GitHub username')
search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*')
def add_import_options(self, parser, parents=None):
import_parser = parser.add_parser('import', parents=parents, help='Import a role')
import_parser.set_defaults(func=self.execute_import)
import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import results.")
import_parser.add_argument('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch '
'(usually master)')
import_parser.add_argument('--role-name', dest='role_name',
help='The name the role should have, if different than the repo name')
import_parser.add_argument('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_'
'user/github_repo.')
def add_setup_options(self, parser, parents=None):
setup_parser = parser.add_parser('setup', parents=parents,
help='Manage the integration between Galaxy and the given source.')
setup_parser.set_defaults(func=self.execute_setup)
setup_parser.add_argument('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see '
'ID values.')
setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False,
help='List all of your integrations.')
setup_parser.add_argument('source', help='Source')
setup_parser.add_argument('github_user', help='GitHub username')
setup_parser.add_argument('github_repo', help='GitHub repository')
setup_parser.add_argument('secret', help='Secret')
def add_login_options(self, parser, parents=None):
login_parser = parser.add_parser('login', parents=parents,
help="Login to api.github.com server in order to use ansible-galaxy role sub "
"command such as 'import', 'delete', 'publish', and 'setup'")
login_parser.set_defaults(func=self.execute_login)
login_parser.add_argument('--github-token', dest='token', default=None,
help='Identify with github token rather than username and password.')
def add_info_options(self, parser, parents=None):
info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.')
info_parser.set_defaults(func=self.execute_info)
info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]')
def add_install_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
args_kwargs = {}
if galaxy_type == 'collection':
args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \
'mutually exclusive with --requirements-file.'
ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \
'collection. This will not ignore dependency conflict errors.'
else:
args_kwargs['help'] = 'Role name, URL or tar file'
ignore_errors_help = 'Ignore errors and continue with the next specified role.'
install_parser = parser.add_parser('install', parents=parents,
help='Install {0}(s) from file(s), URL(s) or Ansible '
'Galaxy'.format(galaxy_type))
install_parser.set_defaults(func=self.execute_install)
install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs)
install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help=ignore_errors_help)
install_exclusive = install_parser.add_mutually_exclusive_group()
install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download {0}s listed as dependencies.".format(galaxy_type))
install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
help="Force overwriting an existing {0} and its "
"dependencies.".format(galaxy_type))
if galaxy_type == 'collection':
install_parser.add_argument('-p', '--collections-path', dest='collections_path', required=True,
help='The path to the directory containing your collections.')
install_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be installed.')
else:
install_parser.add_argument('-r', '--role-file', dest='role_file',
help='A file containing a list of roles to be imported.')
install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
default=False,
help='Use tar instead of the scm archive option when packaging the role.')
def add_build_options(self, parser, parents=None):
build_parser = parser.add_parser('build', parents=parents,
help='Build an Ansible collection artifact that can be published to Ansible '
'Galaxy.')
build_parser.set_defaults(func=self.execute_build)
build_parser.add_argument('args', metavar='collection', nargs='*', default=('.',),
help='Path to the collection(s) directory to build. This should be the directory '
'that contains the galaxy.yml file. The default is the current working '
'directory.')
build_parser.add_argument('--output-path', dest='output_path', default='./',
help='The path in which the collection artifact will be built. The default is the current '
'working directory.')
def add_publish_options(self, parser, parents=None):
publish_parser = parser.add_parser('publish', parents=parents,
help='Publish a collection artifact to Ansible Galaxy.')
publish_parser.set_defaults(func=self.execute_publish)
publish_parser.add_argument('args', metavar='collection_path',
help='The path to the collection tarball to publish.')
publish_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import validation results.")
publish_parser.add_argument('--import-timeout', dest='import_timeout', type=int, default=0,
help="The time to wait for the collection import process to finish.")
def post_process_args(self, options):
options = super(GalaxyCLI, self).post_process_args(options)
display.verbosity = options.verbosity
return options
def run(self):
super(GalaxyCLI, self).run()
self.galaxy = Galaxy()
def server_config_def(section, key, required):
return {
'description': 'The %s of the %s Galaxy server' % (key, section),
'ini': [
{
'section': 'galaxy_server.%s' % section,
'key': key,
}
],
'env': [
{'name': 'ANSIBLE_GALAXY_SERVER_%s_%s' % (section.upper(), key.upper())},
],
'required': required,
}
server_def = [('url', True), ('username', False), ('password', False), ('token', False)]
config_servers = []
for server_key in (C.GALAXY_SERVER_LIST or []):
# Config definitions are looked up dynamically based on the C.GALAXY_SERVER_LIST entry. We look up the
# section [galaxy_server.<server>] for the values url, username, password, and token.
config_dict = dict((k, server_config_def(server_key, k, req)) for k, req in server_def)
defs = AnsibleLoader(yaml.safe_dump(config_dict)).get_single_data()
C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs)
server_options = C.config.get_plugin_options('galaxy_server', server_key)
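# fall back to the sentinel when no token was configured for this server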
token_val = server_options['token'] or NoTokenSentinel
server_options['token'] = GalaxyToken(token=token_val)
config_servers.append(GalaxyAPI(self.galaxy, server_key, **server_options))
cmd_server = context.CLIARGS['api_server']
cmd_token = GalaxyToken(token=context.CLIARGS['api_key'])
if cmd_server:
# Cmd args take precedence over the config entry, but first check if the arg was a name and use that config
# entry, otherwise create a new API entry for the server specified.
config_server = next((s for s in config_servers if s.name == cmd_server), None)
if config_server:
self.api_servers.append(config_server)
else:
self.api_servers.append(GalaxyAPI(self.galaxy, 'cmd_arg', cmd_server, token=cmd_token))
else:
self.api_servers = config_servers
# Default to C.GALAXY_SERVER if no servers were defined
if len(self.api_servers) == 0:
self.api_servers.append(GalaxyAPI(self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token))
context.CLIARGS['func']()
@property
def api(self):
return self.api_servers[0]
def _parse_requirements_file(self, requirements_file, allow_old_format=True):
"""
Parses an Ansible requirements.yml file and returns all the roles and/or collections defined in it. There are 2
requirements file formats:
# v1 (roles only)
- src: The source of the role, required if include is not set. Can be Galaxy role name, URL to a SCM repo or tarball.
name: Downloads the role to the specified name; defaults to the Galaxy name, or the name of the repo if src is a URL.
scm: If src is a URL, specify the SCM. Only git or hg are supported, and it defaults to git.
version: The version of the role to download. Can also be a tag, commit, or branch name, and defaults to master.
include: Path to additional requirements.yml files.
# v2 (roles and collections)
---
roles:
# Same as v1 format just under the roles key
collections:
- namespace.collection
- name: namespace.collection
version: version identifier, multiple identifiers are separated by ','
source: the URL or a predefined source name that relates to C.GALAXY_SERVER_LIST
:param requirements_file: The path to the requirements file.
:param allow_old_format: Will fail if a v1 requirements file is found and this is set to False.
:return: a dict containing roles and collections found in the requirements file.
"""
requirements = {
'roles': [],
'collections': [],
}
b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict')
if not os.path.exists(b_requirements_file):
raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file))
display.vvv("Reading requirement file at '%s'" % requirements_file)
with open(b_requirements_file, 'rb') as req_obj:
try:
file_requirements = yaml.safe_load(req_obj)
except YAMLError as err:
raise AnsibleError(
"Failed to parse the requirements yml at '%s' with the following error:\n%s"
% (to_native(requirements_file), to_native(err)))
if file_requirements is None:
raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file))
def parse_role_req(requirement):
if "include" not in requirement:
role = RoleRequirement.role_yaml_parse(requirement)
display.vvv("found role %s in yaml file" % to_text(role))
if "name" not in role and "src" not in role:
raise AnsibleError("Must specify name or src for role")
return [GalaxyRole(self.galaxy, self.api, **role)]
else:
b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict")
if not os.path.isfile(b_include_path):
raise AnsibleError("Failed to find include requirements file '%s' in '%s'"
% (to_native(b_include_path), to_native(requirements_file)))
with open(b_include_path, 'rb') as f_include:
try:
return [GalaxyRole(self.galaxy, self.api, **r) for r in
(RoleRequirement.role_yaml_parse(i) for i in yaml.safe_load(f_include))]
except Exception as e:
raise AnsibleError("Unable to load data from include requirements file: %s %s"
% (to_native(requirements_file), to_native(e)))
if isinstance(file_requirements, list):
# Older format that contains only roles
if not allow_old_format:
raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains "
"a list of collections to install")
for role_req in file_requirements:
requirements['roles'] += parse_role_req(role_req)
else:
# Newer format with a collections and/or roles key
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
if extra_keys:
raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements "
"file. Found: %s" % (to_native(", ".join(extra_keys))))
for role_req in file_requirements.get('roles', []):
requirements['roles'] += parse_role_req(role_req)
for collection_req in file_requirements.get('collections', []):
if isinstance(collection_req, dict):
req_name = collection_req.get('name', None)
if req_name is None:
raise AnsibleError("Collections requirement entry should contain the key name.")
req_version = collection_req.get('version', '*')
req_source = collection_req.get('source', None)
if req_source:
# Try and match up the requirement source with our list of Galaxy API servers defined in the
# config, otherwise create a server with that URL without any auth.
req_source = next(iter([a for a in self.api_servers if req_source in [a.name, a.api_server]]),
GalaxyAPI(self.galaxy, "explicit_requirement_%s" % req_name, req_source))
requirements['collections'].append((req_name, req_version, req_source))
else:
requirements['collections'].append((collection_req, '*', None))
return requirements
@staticmethod
def exit_without_ignore(rc=1):
"""
Exits with the specified return code unless the
option --ignore-errors was specified
"""
if not context.CLIARGS['ignore_errors']:
raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')
@staticmethod
def _display_role_info(role_info):
text = [u"", u"Role: %s" % to_text(role_info['name'])]
text.append(u"\tdescription: %s" % role_info.get('description', ''))
for k in sorted(role_info.keys()):
if k in GalaxyCLI.SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
text.append(u"\t%s:" % (k))
for key in sorted(role_info[k].keys()):
if key in GalaxyCLI.SKIP_INFO_KEYS:
continue
text.append(u"\t\t%s: %s" % (key, role_info[k][key]))
else:
text.append(u"\t%s: %s" % (k, role_info[k]))
return u'\n'.join(text)
@staticmethod
def _resolve_path(path):
return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))
@staticmethod
def _get_skeleton_galaxy_yml(template_path, inject_data):
with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj:
meta_template = to_text(template_obj.read(), errors='surrogate_or_strict')
galaxy_meta = get_collections_galaxy_meta_info()
required_config = []
optional_config = []
for meta_entry in galaxy_meta:
config_list = required_config if meta_entry.get('required', False) else optional_config
value = inject_data.get(meta_entry['key'], None)
if not value:
meta_type = meta_entry.get('type', 'str')
if meta_type == 'str':
value = ''
elif meta_type == 'list':
value = []
elif meta_type == 'dict':
value = {}
meta_entry['value'] = value
config_list.append(meta_entry)
link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
const_pattern = re.compile(r"C\(([^)]+)\)")
def comment_ify(v):
if isinstance(v, list):
v = ". ".join([l.rstrip('.') for l in v])
v = link_pattern.sub(r"\1 <\2>", v)
v = const_pattern.sub(r"'\1'", v)
return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False)
def to_yaml(v):
return yaml.safe_dump(v, default_flow_style=False).rstrip()
env = Environment(loader=BaseLoader)
env.filters['comment_ify'] = comment_ify
env.filters['to_yaml'] = to_yaml
template = env.from_string(meta_template)
meta_value = template.render({'required_config': required_config, 'optional_config': optional_config})
return meta_value
############################
# execute actions
############################
def execute_role(self):
"""
Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init
as listed below.
"""
# To satisfy doc build
pass
def execute_collection(self):
"""
Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as
listed below.
"""
# To satisfy doc build
pass
def execute_build(self):
"""
Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy.
By default, this command builds from the current working directory. You can optionally pass in the
collection input path (where the ``galaxy.yml`` file is).
"""
force = context.CLIARGS['force']
output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path'])
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
elif os.path.isfile(b_output_path):
raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path))
for collection_path in context.CLIARGS['args']:
collection_path = GalaxyCLI._resolve_path(collection_path)
build_collection(collection_path, output_path, force)
def execute_init(self):
"""
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format.
Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
"""
galaxy_type = context.CLIARGS['type']
init_path = context.CLIARGS['init_path']
force = context.CLIARGS['force']
obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)]
obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)]
inject_data = dict(
description='your {0} description'.format(galaxy_type),
ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'),
)
if galaxy_type == 'role':
inject_data.update(dict(
author='your name',
company='your company (optional)',
license='license (GPL-2.0-or-later, MIT, etc)',
role_name=obj_name,
role_type=context.CLIARGS['role_type'],
issue_tracker_url='http://example.com/issue/tracker',
repository_url='http://example.com/repository',
documentation_url='http://docs.example.com',
homepage_url='http://example.com',
min_ansible_version=ansible_version[:3], # x.y
))
obj_path = os.path.join(init_path, obj_name)
elif galaxy_type == 'collection':
namespace, collection_name = obj_name.split('.', 1)
inject_data.update(dict(
namespace=namespace,
collection_name=collection_name,
version='1.0.0',
readme='README.md',
authors=['your name <[email protected]>'],
license=['GPL-2.0-or-later'],
repository='http://example.com/repository',
documentation='http://docs.example.com',
homepage='http://example.com',
issues='http://example.com/issue/tracker',
))
obj_path = os.path.join(init_path, namespace, collection_name)
b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict')
if os.path.exists(b_obj_path):
if os.path.isfile(obj_path):
raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path))
elif not force:
raise AnsibleError("- the directory %s already exists. "
"You can use --force to re-initialize this directory,\n"
"however it will reset any main.yml files that may have\n"
"been modified there already." % to_native(obj_path))
if obj_skeleton is not None:
own_skeleton = False
skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE
else:
own_skeleton = True
obj_skeleton = self.galaxy.default_role_skeleton_path
skeleton_ignore_expressions = ['^.*/.git_keep$']
obj_skeleton = os.path.expanduser(obj_skeleton)
skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions]
if not os.path.exists(obj_skeleton):
raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format(
to_native(obj_skeleton), galaxy_type)
)
template_env = Environment(loader=FileSystemLoader(obj_skeleton))
# create role directory
if not os.path.exists(b_obj_path):
os.makedirs(b_obj_path)
for root, dirs, files in os.walk(obj_skeleton, topdown=True):
rel_root = os.path.relpath(root, obj_skeleton)
rel_dirs = rel_root.split(os.sep)
rel_root_dir = rel_dirs[0]
if galaxy_type == 'collection':
# A collection can contain templates in playbooks/*/templates and roles/*/templates
in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs
else:
in_templates_dir = rel_root_dir == 'templates'
dirs[:] = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)]
for f in files:
filename, ext = os.path.splitext(f)
if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re):
continue
elif galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2':
# Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
# dynamically which requires special options to be set.
# The templated data's keys must match the key name but the inject data contains collection_name
# instead of name. We just make a copy and change the key back to name for this file.
template_data = inject_data.copy()
template_data['name'] = template_data.pop('collection_name')
meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data)
b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict')
with open(b_dest_file, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict'))
elif ext == ".j2" and not in_templates_dir:
src_template = os.path.join(rel_root, f)
dest_file = os.path.join(obj_path, rel_root, filename)
template_env.get_template(src_template).stream(inject_data).dump(dest_file, encoding='utf-8')
else:
f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton)
shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path))
for d in dirs:
b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict')
if not os.path.exists(b_dir_path):
os.makedirs(b_dir_path)
display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name))
def execute_info(self):
"""
Prints out detailed information about an installed role, as well as info available from the Galaxy API.
"""
roles_path = context.CLIARGS['roles_path']
data = ''
for role in context.CLIARGS['args']:
role_info = {'path': roles_path}
gr = GalaxyRole(self.galaxy, self.api, role)
install_info = gr.install_info
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
remote_data = False
if not context.CLIARGS['offline']:
remote_data = self.api.lookup_role_by_name(role, False)
if remote_data:
role_info.update(remote_data)
if gr.metadata:
role_info.update(gr.metadata)
req = RoleRequirement()
role_spec = req.role_yaml_parse({'role': role})
if role_spec:
role_info.update(role_spec)
data = self._display_role_info(role_info)
# FIXME: This is broken in both 1.9 and 2.0 as
# _display_role_info() always returns something
if not data:
data = u"\n- the role %s was not found" % role
self.pager(data)
def execute_install(self):
"""
Install one or more roles(``ansible-galaxy role install``), or one or more collections(``ansible-galaxy collection install``).
You can pass in a list (roles or collections) or use the file
option listed below (these are mutually exclusive). If you pass in a list, it
can be a name (which will be downloaded via the galaxy API and github), or it can be a local tar archive file.
"""
if context.CLIARGS['type'] == 'collection':
collections = context.CLIARGS['args']
force = context.CLIARGS['force']
output_path = context.CLIARGS['collections_path']
ignore_certs = context.CLIARGS['ignore_certs']
ignore_errors = context.CLIARGS['ignore_errors']
requirements_file = context.CLIARGS['requirements']
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._parse_requirements_file(requirements_file, allow_old_format=False)['collections']
else:
requirements = []
for collection_input in collections:
name, dummy, requirement = collection_input.partition(':')
requirements.append((name, requirement or '*', None))
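# Illustrative note (not in the original source): partition(':') splits a
# versioned spec, so "namespace.collection:1.0.0" becomes the requirement
# tuple ("namespace.collection", "1.0.0", None), while a bare
# "namespace.collection" becomes ("namespace.collection", "*", None).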
output_path = GalaxyCLI._resolve_path(output_path)
collections_path = C.COLLECTIONS_PATHS
if len([p for p in collections_path if p.startswith(output_path)]) == 0:
display.warning("The specified collections path '%s' is not part of the configured Ansible "
"collections paths '%s'. The installed collection won't be picked up in an Ansible "
"run." % (to_text(output_path), to_text(":".join(collections_path))))
if os.path.split(output_path)[1] != 'ansible_collections':
output_path = os.path.join(output_path, 'ansible_collections')
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
install_collections(requirements, output_path, self.api_servers, (not ignore_certs), ignore_errors,
no_deps, force, force_deps)
return 0
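# A hedged usage sketch (not in the original source) for the collection
# branch above:
#   ansible-galaxy collection install namespace.collection:1.0.0 -p ./collections
#   ansible-galaxy collection install -r requirements.yml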
role_file = context.CLIARGS['role_file']
if not context.CLIARGS['args'] and role_file is None:
# the user needs to specify one of either --role-file or specify a single user/role name
raise AnsibleOptionsError("- you must specify a user/role name or a roles file")
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
force = context.CLIARGS['force'] or force_deps
roles_left = []
if role_file:
if not (role_file.endswith('.yaml') or role_file.endswith('.yml')):
raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension")
roles_left = self._parse_requirements_file(role_file)['roles']
else:
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
for rname in context.CLIARGS['args']:
role = RoleRequirement.role_yaml_parse(rname.strip())
roles_left.append(GalaxyRole(self.galaxy, self.api, **role))
for role in roles_left:
# only process roles in roles files when names matches if given
if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']:
display.vvv('Skipping role %s' % role.name)
continue
display.vvv('Processing role %s ' % role.name)
# query the galaxy API for the role data
if role.install_info is not None:
if role.install_info['version'] != role.version or force:
if force:
display.display('- changing role %s from %s to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
role.remove()
else:
display.warning('- %s (%s) is already installed - use --force to change version to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
continue
else:
if not force:
display.display('- %s is already installed, skipping.' % str(role))
continue
try:
installed = role.install()
except AnsibleError as e:
display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e)))
self.exit_without_ignore()
continue
# install dependencies, if we want them
if not no_deps and installed:
if not role.metadata:
display.warning("Meta file %s is empty. Skipping dependencies." % role.path)
else:
role_dependencies = role.metadata.get('dependencies') or []
for dep in role_dependencies:
display.debug('Installing dep %s' % dep)
dep_req = RoleRequirement()
dep_info = dep_req.role_yaml_parse(dep)
dep_role = GalaxyRole(self.galaxy, self.api, **dep_info)
if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None:
# we know we can skip this, as it's not going to
# be found on galaxy.ansible.com
continue
if dep_role.install_info is None:
if dep_role not in roles_left:
display.display('- adding dependency: %s' % to_text(dep_role))
roles_left.append(dep_role)
else:
display.display('- dependency %s already pending installation.' % dep_role.name)
else:
if dep_role.install_info['version'] != dep_role.version:
if force_deps:
display.display('- changing dependent role %s from %s to %s' %
(dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified"))
dep_role.remove()
roles_left.append(dep_role)
else:
display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' %
(to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version']))
else:
if force_deps:
roles_left.append(dep_role)
else:
display.display('- dependency %s is already installed, skipping.' % dep_role.name)
if not installed:
display.warning("- %s was NOT installed successfully." % role.name)
self.exit_without_ignore()
return 0
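# A hedged usage sketch (not in the original source) for the role branch above:
#   ansible-galaxy install username.rolename
#   ansible-galaxy install -r requirements.yml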
def execute_remove(self):
"""
removes the list of roles passed as arguments from the local system.
"""
if not context.CLIARGS['args']:
raise AnsibleOptionsError('- you must specify at least one role to remove.')
for role_name in context.CLIARGS['args']:
role = GalaxyRole(self.galaxy, self.api, role_name)
try:
if role.remove():
display.display('- successfully removed %s' % role_name)
else:
display.display('- %s is not installed, skipping.' % role_name)
except Exception as e:
raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e)))
return 0
def execute_list(self):
"""
lists the roles installed on the local system or matches a single role passed as an argument.
"""
def _display_role(gr):
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
display.display("- %s, %s" % (gr.name, version))
if context.CLIARGS['role']:
# show the requested role, if it exists
name = context.CLIARGS['role']
gr = GalaxyRole(self.galaxy, self.api, name)
if gr.metadata:
display.display('# %s' % os.path.dirname(gr.path))
_display_role(gr)
else:
display.display("- the role %s was not found" % name)
else:
# show all valid roles in the roles_path directory
roles_path = context.CLIARGS['roles_path']
path_found = False
warnings = []
for path in roles_path:
role_path = os.path.expanduser(path)
if not os.path.exists(role_path):
warnings.append("- the configured path %s does not exist." % role_path)
continue
elif not os.path.isdir(role_path):
warnings.append("- the configured path %s, exists, but it is not a directory." % role_path)
continue
display.display('# %s' % role_path)
path_files = os.listdir(role_path)
path_found = True
for path_file in path_files:
gr = GalaxyRole(self.galaxy, self.api, path_file, path=path)
if gr.metadata:
_display_role(gr)
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths was usable. Please specify a valid path with --roles-path")
return 0
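# Hedged usage sketch (not in the original source):
#   ansible-galaxy list                    # every role under the configured roles_path
#   ansible-galaxy list username.rolename  # a single role, if installed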
def execute_publish(self):
"""
Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish.
"""
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args'])
wait = context.CLIARGS['wait']
timeout = context.CLIARGS['import_timeout']
publish_collection(collection_path, self.api, wait, timeout)
def execute_search(self):
''' searches for roles on the Ansible Galaxy server'''
page_size = 1000
search = None
if context.CLIARGS['args']:
search = '+'.join(context.CLIARGS['args'])
if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']:
raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'],
tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size)
if response['count'] == 0:
display.display("No roles match your search.", color=C.COLOR_ERROR)
return True
data = [u'']
if response['count'] > page_size:
data.append(u"Found %d roles matching your search. Showing first %s." % (response['count'], page_size))
else:
data.append(u"Found %d roles matching your search:" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = u" %%-%ds %%s" % name_len
data.append(u'')
data.append(format_str % (u"Name", u"Description"))
data.append(format_str % (u"----", u"-----------"))
for role in response['results']:
data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description']))
data = u'\n'.join(data)
self.pager(data)
return True
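# Illustrative note (not in the original source): with name_len == 20 the
# format string built above becomes u" %-20s %s", producing a left-aligned,
# fixed-width name column followed by the description.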
def execute_login(self):
"""
verify the user's identity via GitHub and retrieve an auth token from Ansible Galaxy.
"""
# Authenticate with github and retrieve a token
if context.CLIARGS['token'] is None:
if C.GALAXY_TOKEN:
github_token = C.GALAXY_TOKEN
else:
login = GalaxyLogin(self.galaxy)
github_token = login.create_github_token()
else:
github_token = context.CLIARGS['token']
galaxy_response = self.api.authenticate(github_token)
if context.CLIARGS['token'] is None and C.GALAXY_TOKEN is None:
# Remove the token we created
login.remove_github_token()
# Store the Galaxy token
token = GalaxyToken()
token.set(galaxy_response['token'])
display.display("Successfully logged into Galaxy as %s" % galaxy_response['username'])
return 0
def execute_import(self):
""" used to import a role into Ansible Galaxy """
colors = {
'INFO': 'normal',
'WARNING': C.COLOR_WARN,
'ERROR': C.COLOR_ERROR,
'SUCCESS': C.COLOR_OK,
'FAILED': C.COLOR_ERROR,
}
github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict')
github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict')
if context.CLIARGS['check_status']:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo,
reference=context.CLIARGS['reference'],
role_name=context.CLIARGS['role_name'])
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user, github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED)
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED)
display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo),
color=C.COLOR_CHANGED)
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not context.CLIARGS['wait']:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo']))
if context.CLIARGS['check_status'] or context.CLIARGS['wait']:
# Get the status of the import
msg_list = []
finished = False
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
def execute_setup(self):
""" Setup an integration from Github or Travis for Ansible Galaxy roles"""
if context.CLIARGS['setup_list']:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK)
display.display("---------- ---------- ----------", color=C.COLOR_OK)
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']), color=C.COLOR_OK)
return 0
if context.CLIARGS['remove_id']:
# Remove a secret
self.api.remove_secret(context.CLIARGS['remove_id'])
display.display("Secret removed. Integrations using this secret will not longer work.", color=C.COLOR_OK)
return 0
source = context.CLIARGS['source']
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
secret = context.CLIARGS['secret']
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
""" Delete a role from Ansible Galaxy. """
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name))
display.display(resp['status'])
return True
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,809 |
dnf (and probably yum) wild-card match remove is not idempotent
|
##### SUMMARY
When using a wild-card in the package name to remove, the first run will remove the packages,
but the second run will fail with the message `Failed to install some of the specified packages`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- dnf
##### ANSIBLE VERSION
```
ansible 2.8.5
config file = /home/rabin/src/ansible/workstation-setup/ansible.cfg
configured module search path = ['/home/rabin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Jul 9 2019, 16:32:37) [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)]
```
##### CONFIGURATION
```
ANSIBLE_SSH_CONTROL_PATH(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = ./.ansible/ssh-%%C
CACHE_PLUGIN(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = ./.ansible/facts_cache
CACHE_PLUGIN_TIMEOUT(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = 86400
DEFAULT_EXECUTABLE(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = /bin/bash
DEFAULT_GATHERING(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = ['/home/rabin/src/ansible/workstation-setup/inventory']
DEFAULT_REMOTE_PORT(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = 22
RETRY_FILES_SAVE_PATH(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = /home/rabin/src/ansible/workstation-setup/.ansible/retry
```
##### OS / ENVIRONMENT
* Fedora 30
##### STEPS TO REPRODUCE
Try to remove a package twice,
or run the playbook twice (the first run will work if there is a match, but the second run will fail)
```yaml
- dnf:
name: lohit-*-fonts
- dnf:
name: lohit-*-fonts
state: absent
- dnf:
name: lohit-*-fonts
state: absent
```
##### EXPECTED RESULTS
Same as when you ask it to remove a specific package
```yaml
- dnf:
name: lohit-tamil-fonts
- dnf:
name: lohit-tamil-fonts
state: absent
- dnf:
name: lohit-tamil-fonts
state: absent
```
##### ACTUAL RESULTS
```
fatal: [localhost]: FAILED! => {"changed": false, "failures": ["lohit-*-fonts - no package matched: lohit-*-fonts"], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
```
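A hedged workaround sketch (not from the original report): tolerate the no-match failure on repeat runs until the module is fixed. The `failed_when` expression below is an assumption based on the failure payload shown above.
```yaml
- dnf:
    name: lohit-*-fonts
    state: absent
  register: result
  failed_when:
    - result is failed
    - "'no package matched' not in (result.failures | default([]) | join(' '))"
```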
|
https://github.com/ansible/ansible/issues/62809
|
https://github.com/ansible/ansible/pull/63034
|
911aa6aab9aea867bc79e38684ab1b198fa36937
|
8bcf11fee9842af35efc66087c1195200cb2990e
| 2019-09-24T20:20:34Z |
python
| 2019-10-02T15:05:12Z |
changelogs/fragments/62809-dnf-wildcard-absent-failure.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,809 |
dnf (and probably yum) wild-card match remove is not idempotent
|
##### SUMMARY
When using a wild-card in the package name to remove, the first run will remove the packages,
but the second run will fail with the message `Failed to install some of the specified packages`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- dnf
##### ANSIBLE VERSION
```
ansible 2.8.5
config file = /home/rabin/src/ansible/workstation-setup/ansible.cfg
configured module search path = ['/home/rabin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Jul 9 2019, 16:32:37) [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)]
```
##### CONFIGURATION
```
ANSIBLE_SSH_CONTROL_PATH(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = ./.ansible/ssh-%%C
CACHE_PLUGIN(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = ./.ansible/facts_cache
CACHE_PLUGIN_TIMEOUT(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = 86400
DEFAULT_EXECUTABLE(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = /bin/bash
DEFAULT_GATHERING(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = ['/home/rabin/src/ansible/workstation-setup/inventory']
DEFAULT_REMOTE_PORT(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = 22
RETRY_FILES_SAVE_PATH(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = /home/rabin/src/ansible/workstation-setup/.ansible/retry
```
##### OS / ENVIRONMENT
* Fedora 30
##### STEPS TO REPRODUCE
Try to remove a package twice,
or run the playbook twice (the first run will work if there is a match, but the second run will fail)
```yaml
- dnf:
name: lohit-*-fonts
- dnf:
name: lohit-*-fonts
state: absent
- dnf:
name: lohit-*-fonts
state: absent
```
##### EXPECTED RESULTS
Same as when you ask it to remove a specific package
```yaml
- dnf:
name: lohit-tamil-fonts
- dnf:
name: lohit-tamil-fonts
state: absent
- dnf:
name: lohit-tamil-fonts
state: absent
```
##### ACTUAL RESULTS
```
fatal: [localhost]: FAILED! => {"changed": false, "failures": ["lohit-*-fonts - no package matched: lohit-*-fonts"], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
```
|
https://github.com/ansible/ansible/issues/62809
|
https://github.com/ansible/ansible/pull/63034
|
911aa6aab9aea867bc79e38684ab1b198fa36937
|
8bcf11fee9842af35efc66087c1195200cb2990e
| 2019-09-24T20:20:34Z |
python
| 2019-10-02T15:05:12Z |
lib/ansible/modules/packaging/os/dnf.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright 2015 Cristian van Ee <cristian at cvee.org>
# Copyright 2015 Igor Gnatenko <[email protected]>
# Copyright 2018 Adam Miller <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = '''
---
module: dnf
version_added: 1.9
short_description: Manages packages with the I(dnf) package manager
description:
- Installs, upgrades, removes, and lists packages and groups with the I(dnf) package manager.
options:
name:
description:
- "A package name or package specifier with version, like C(name-1.0).
When using state=latest, this can be '*' which means run: dnf -y update.
You can also pass a url or a local path to a rpm file.
To operate on several packages this can accept a comma separated string of packages or a list of packages."
required: true
aliases:
- pkg
list:
description:
- Various (non-idempotent) commands for usage with C(/usr/bin/ansible) and I(not) playbooks. See examples.
state:
description:
- Whether to install (C(present), C(latest)), or remove (C(absent)) a package.
- Default is C(None); in effect the default action is C(present) unless the C(autoremove) option is
enabled for this module, in which case C(absent) is inferred.
choices: ['absent', 'present', 'installed', 'removed', 'latest']
enablerepo:
description:
- I(Repoid) of repositories to enable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
disablerepo:
description:
- I(Repoid) of repositories to disable for the install/update operation.
These repos will not persist beyond the transaction.
When specifying multiple repos, separate them with a ",".
conf_file:
description:
- The remote dnf configuration file to use for the transaction.
disable_gpg_check:
description:
- Whether to disable the GPG checking of signatures of packages being
installed. Has an effect only if state is I(present) or I(latest).
type: bool
default: 'no'
installroot:
description:
- Specifies an alternative installroot, relative to which all packages
will be installed.
version_added: "2.3"
default: "/"
releasever:
description:
- Specifies an alternative release from which all packages will be
installed.
version_added: "2.6"
autoremove:
description:
- If C(yes), removes all "leaf" packages from the system that were originally
installed as dependencies of user-installed packages but which are no longer
required by any such package. Should be used alone or when state is I(absent)
type: bool
default: "no"
version_added: "2.4"
exclude:
description:
- Package name(s) to exclude when state=present, or latest. This can be a
list or a comma separated string.
version_added: "2.7"
skip_broken:
description:
- Skip packages with broken dependencies (depsolve) that are causing problems.
type: bool
default: "no"
version_added: "2.7"
update_cache:
description:
- Force dnf to check if cache is out of date and redownload if needed.
Has an effect only if state is I(present) or I(latest).
type: bool
default: "no"
aliases: [ expire-cache ]
version_added: "2.7"
update_only:
description:
- When using latest, only update installed packages. Do not install packages.
- Has an effect only if state is I(latest)
default: "no"
type: bool
version_added: "2.7"
security:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked security related.
type: bool
default: "no"
version_added: "2.7"
bugfix:
description:
- If set to C(yes), and C(state=latest) then only installs updates that have been marked bugfix related.
default: "no"
type: bool
version_added: "2.7"
enable_plugin:
description:
- I(Plugin) name to enable for the install/update operation.
The enabled plugin will not persist beyond the transaction.
version_added: "2.7"
disable_plugin:
description:
- I(Plugin) name to disable for the install/update operation.
The disabled plugins will not persist beyond the transaction.
version_added: "2.7"
disable_excludes:
description:
- Disable the excludes defined in DNF config files.
- If set to C(all), disables all excludes.
- If set to C(main), disable excludes defined in [main] in dnf.conf.
- If set to C(repoid), disable excludes defined for given repo id.
version_added: "2.7"
validate_certs:
description:
- This only applies if using an https url as the source of the rpm, e.g. for localinstall. If set to C(no), the SSL certificates will not be validated.
- This should only be set to C(no) on personally controlled sites using self-signed certificates, as it avoids verifying the source site.
type: bool
default: "yes"
version_added: "2.7"
allow_downgrade:
description:
- Specify if the named package and version is allowed to downgrade
a maybe already installed higher version of that package.
Note that setting allow_downgrade=True can make this module
behave in a non-idempotent way. The task could end up with a set
of packages that does not match the complete list of specified
packages to install (because dependencies between the downgraded
package and others can cause changes to the packages which were
in the earlier transaction).
type: bool
default: "no"
version_added: "2.7"
install_repoquery:
description:
- This is effectively a no-op in DNF as it is not needed with DNF, but is an accepted parameter for feature
parity/compatibility with the I(yum) module.
type: bool
default: "yes"
version_added: "2.7"
download_only:
description:
- Only download the packages, do not install them.
default: "no"
type: bool
version_added: "2.7"
lock_timeout:
description:
- Amount of time to wait for the dnf lockfile to be freed.
required: false
default: 30
type: int
version_added: "2.8"
install_weak_deps:
description:
- Will also install all packages linked by a weak dependency relation.
type: bool
default: "yes"
version_added: "2.8"
download_dir:
description:
- Specifies an alternate directory to store packages.
- Has an effect only if I(download_only) is specified.
type: str
version_added: "2.8"
notes:
- When used with a `loop:` each package will be processed individually; it is much more efficient to pass the list directly to the `name` option.
- Group removal doesn't work if the group was installed with Ansible because
upstream dnf's API doesn't properly mark groups as installed, therefore upon
removal the module is unable to detect that the group is installed
(https://bugzilla.redhat.com/show_bug.cgi?id=1620324)
requirements:
- "python >= 2.6"
- python-dnf
- for the autoremove option you need dnf >= 2.0.1
author:
- Igor Gnatenko (@ignatenkobrain) <[email protected]>
- Cristian van Ee (@DJMuggs) <cristian at cvee.org>
- Berend De Schouwer (@berenddeschouwer)
- Adam Miller (@maxamillion) <[email protected]>
'''
EXAMPLES = '''
- name: install the latest version of Apache
dnf:
name: httpd
state: latest
- name: install the latest version of Apache and MariaDB
dnf:
name:
- httpd
- mariadb-server
state: latest
- name: remove the Apache package
dnf:
name: httpd
state: absent
- name: install the latest version of Apache from the testing repo
dnf:
name: httpd
enablerepo: testing
state: present
- name: upgrade all packages
dnf:
name: "*"
state: latest
- name: install the nginx rpm from a remote repo
dnf:
name: 'http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm'
state: present
- name: install nginx rpm from a local file
dnf:
name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm
state: present
- name: install the 'Development tools' package group
dnf:
name: '@Development tools'
state: present
- name: Autoremove unneeded packages installed as dependencies
dnf:
autoremove: yes
- name: Uninstall httpd but keep its dependencies
dnf:
name: httpd
state: absent
autoremove: no
- name: install a modularity appstream with defined stream and profile
dnf:
name: '@postgresql:9.6/client'
state: present
- name: install a modularity appstream with defined stream
dnf:
name: '@postgresql:9.6'
state: present
- name: install a modularity appstream with defined profile
dnf:
name: '@postgresql/client'
state: present
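# A hedged sketch (not in the original examples): combine a full upgrade with
# the exclude option documented above
- name: upgrade all packages except the kernel
  dnf:
    name: "*"
    state: latest
    exclude: kernel*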
'''
import os
import re
import sys
import tempfile
try:
import dnf
import dnf.cli
import dnf.const
import dnf.exceptions
import dnf.subject
import dnf.util
HAS_DNF = True
except ImportError:
HAS_DNF = False
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.urls import fetch_url
from ansible.module_utils.six import PY2, string_types, text_type
from distutils.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.yumdnf import YumDnf, yumdnf_argument_spec
# 64k. Number of bytes to read at a time when manually downloading pkgs via a url
BUFSIZE = 65536
class DnfModule(YumDnf):
"""
DNF Ansible module back-end implementation
"""
def __init__(self, module):
# This populates instance vars for all argument spec params
super(DnfModule, self).__init__(module)
self._ensure_dnf()
self.lockfile = "/var/cache/dnf/*_lock.pid"
self.pkg_mgr_name = "dnf"
try:
self.with_modules = dnf.base.WITH_MODULES
except AttributeError:
self.with_modules = False
def is_lockfile_pid_valid(self):
# FIXME? it looks like DNF takes care of invalid lock files itself?
# https://github.com/ansible/ansible/issues/57189
return True
def _sanitize_dnf_error_msg(self, spec, error):
"""
For unhandled dnf.exceptions.Error scenarios, there are certain error
messages we want to filter. Do that here.
"""
if to_text("no package matched") in to_text(error):
return "No package {0} available.".format(spec)
return error
def _package_dict(self, package):
"""Return a dictionary of information for the package."""
# NOTE: This no longer contains the 'dnfstate' field because it is
# already known based on the query type.
result = {
'name': package.name,
'arch': package.arch,
'epoch': str(package.epoch),
'release': package.release,
'version': package.version,
'repo': package.repoid}
result['nevra'] = '{epoch}:{name}-{version}-{release}.{arch}'.format(
**result)
if package.installtime == 0:
result['yumstate'] = 'available'
else:
result['yumstate'] = 'installed'
return result
def _packagename_dict(self, packagename):
"""
Return a dictionary of information for a package name string or None
if the package name doesn't contain at least all NVR elements
"""
if packagename[-4:] == '.rpm':
packagename = packagename[:-4]
# This list was auto generated on a Fedora 28 system with the following one-liner
# printf '[ '; for arch in $(ls /usr/lib/rpm/platform); do printf '"%s", ' ${arch%-linux}; done; printf ']\n'
redhat_rpm_arches = [
"aarch64", "alphaev56", "alphaev5", "alphaev67", "alphaev6", "alpha",
"alphapca56", "amd64", "armv3l", "armv4b", "armv4l", "armv5tejl", "armv5tel",
"armv5tl", "armv6hl", "armv6l", "armv7hl", "armv7hnl", "armv7l", "athlon",
"geode", "i386", "i486", "i586", "i686", "ia32e", "ia64", "m68k", "mips64el",
"mips64", "mips64r6el", "mips64r6", "mipsel", "mips", "mipsr6el", "mipsr6",
"noarch", "pentium3", "pentium4", "ppc32dy4", "ppc64iseries", "ppc64le", "ppc64",
"ppc64p7", "ppc64pseries", "ppc8260", "ppc8560", "ppciseries", "ppc", "ppcpseries",
"riscv64", "s390", "s390x", "sh3", "sh4a", "sh4", "sh", "sparc64", "sparc64v",
"sparc", "sparcv8", "sparcv9", "sparcv9v", "x86_64"
]
rpm_arch_re = re.compile(r'(.*)\.(.*)')
rpm_nevr_re = re.compile(r'(\S+)-(?:(\d*):)?(.*)-(~?\w+[\w.+]*)')
try:
arch = None
rpm_arch_match = rpm_arch_re.match(packagename)
if rpm_arch_match:
nevr, arch = rpm_arch_match.groups()
if arch in redhat_rpm_arches:
packagename = nevr
rpm_nevr_match = rpm_nevr_re.match(packagename)
if rpm_nevr_match:
name, epoch, version, release = rpm_nevr_match.groups()
if not version or not version.split('.')[0].isdigit():
return None
else:
return None
except AttributeError as e:
self.module.fail_json(
msg='Error attempting to parse package: %s, %s' % (packagename, to_native(e)),
rc=1,
results=[]
)
if not epoch:
epoch = "0"
if ':' in name:
epoch_name = name.split(":")
epoch = epoch_name[0]
name = ''.join(epoch_name[1:])
result = {
'name': name,
'epoch': epoch,
'release': release,
'version': version,
}
return result
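# Illustrative note (not in the original source): for "foo-1.0-1.fc30.x86_64"
# the arch is stripped first, then the NEVR regex yields
# {'name': 'foo', 'epoch': '0', 'version': '1.0', 'release': '1.fc30'};
# a bare name like "foo" returns None because no version is present.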
# Original implementation from yum.rpmUtils.miscutils (GPLv2+)
# http://yum.baseurl.org/gitweb?p=yum.git;a=blob;f=rpmUtils/miscutils.py
def _compare_evr(self, e1, v1, r1, e2, v2, r2):
# return 1: a is newer than b
# 0: a and b are the same version
# -1: b is newer than a
if e1 is None:
e1 = '0'
else:
e1 = str(e1)
v1 = str(v1)
r1 = str(r1)
if e2 is None:
e2 = '0'
else:
e2 = str(e2)
v2 = str(v2)
r2 = str(r2)
# print '%s, %s, %s vs %s, %s, %s' % (e1, v1, r1, e2, v2, r2)
rc = dnf.rpm.rpm.labelCompare((e1, v1, r1), (e2, v2, r2))
# print '%s, %s, %s vs %s, %s, %s = %s' % (e1, v1, r1, e2, v2, r2, rc)
return rc
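# Illustrative note (not in the original source):
#   self._compare_evr('0', '1.0', '2.fc30', '0', '1.0', '1.fc30') -> 1
# because release 2.fc30 sorts after 1.fc30 when epoch and version are equal.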
def fetch_rpm_from_url(self, spec):
# FIXME: Remove this once this PR is merged:
# https://github.com/ansible/ansible/pull/19172
# download package so that we can query it
package_name, dummy = os.path.splitext(str(spec.rsplit('/', 1)[1]))
package_file = tempfile.NamedTemporaryFile(dir=self.module.tmpdir, prefix=package_name, suffix='.rpm', delete=False)
self.module.add_cleanup_file(package_file.name)
try:
rsp, info = fetch_url(self.module, spec)
if not rsp:
self.module.fail_json(
msg="Failure downloading %s, %s" % (spec, info['msg']),
results=[],
)
data = rsp.read(BUFSIZE)
while data:
package_file.write(data)
data = rsp.read(BUFSIZE)
package_file.close()
except Exception as e:
self.module.fail_json(
msg="Failure downloading %s, %s" % (spec, to_native(e)),
results=[],
)
return package_file.name
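# Design note (not in the original source): reading the response in BUFSIZE
# chunks keeps memory use bounded for large rpms instead of buffering the
# whole download at once.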
def _ensure_dnf(self):
if not HAS_DNF:
if PY2:
package = 'python2-dnf'
else:
package = 'python3-dnf'
if self.module.check_mode:
self.module.fail_json(
msg="`{0}` is not installed, but it is required"
"for the Ansible dnf module.".format(package),
results=[],
)
rc, stdout, stderr = self.module.run_command(['dnf', 'install', '-y', package])
global dnf
try:
import dnf
import dnf.cli
import dnf.const
import dnf.exceptions
import dnf.subject
import dnf.util
except ImportError:
self.module.fail_json(
msg="Could not import the dnf python module using {0} ({1}). "
"Please install `{2}` package or ensure you have specified the "
"correct ansible_python_interpreter.".format(sys.executable, sys.version.replace('\n', ''),
package),
results=[],
cmd='dnf install -y {0}'.format(package),
rc=rc,
stdout=stdout,
stderr=stderr,
)
def _configure_base(self, base, conf_file, disable_gpg_check, installroot='/'):
"""Configure the dnf Base object."""
conf = base.conf
# Change the configuration file path if provided, this must be done before conf.read() is called
if conf_file:
# Fail if we can't read the configuration file.
if not os.access(conf_file, os.R_OK):
self.module.fail_json(
msg="cannot read configuration file", conf_file=conf_file,
results=[],
)
else:
conf.config_file_path = conf_file
# Read the configuration file
conf.read()
# Turn off debug messages in the output
conf.debuglevel = 0
# Set whether to check gpg signatures
conf.gpgcheck = not disable_gpg_check
conf.localpkg_gpgcheck = not disable_gpg_check
# Don't prompt for user confirmations
conf.assumeyes = True
# Set installroot
conf.installroot = installroot
# Load substitutions from the filesystem
conf.substitutions.update_from_etc(installroot)
# Handle the different DNF versions' immutable/mutable datatypes for
# dnf v1/v2/v3
#
# In DNF < 3.0 these are lists, and modifying them works
# In DNF >= 3.0, < 3.6 these are lists, but modifying them doesn't work
# In DNF >= 3.6 they have been turned into tuples, to communicate that modifying them doesn't work
#
# https://www.happyassassin.net/2018/06/27/adams-debugging-adventures-the-immutable-mutable-object/
#
# Set excludes
if self.exclude:
_excludes = list(conf.exclude)
_excludes.extend(self.exclude)
conf.exclude = _excludes
# Set disable_excludes
if self.disable_excludes:
_disable_excludes = list(conf.disable_excludes)
if self.disable_excludes not in _disable_excludes:
_disable_excludes.append(self.disable_excludes)
conf.disable_excludes = _disable_excludes
# Set releasever
if self.releasever is not None:
conf.substitutions['releasever'] = self.releasever
# Set skip_broken (in dnf this is strict=0)
if self.skip_broken:
conf.strict = 0
if self.download_only:
conf.downloadonly = True
if self.download_dir:
conf.destdir = self.download_dir
# Default in dnf upstream is true
conf.clean_requirements_on_remove = self.autoremove
# Default in dnf (and module default) is True
conf.install_weak_deps = self.install_weak_deps
def _specify_repositories(self, base, disablerepo, enablerepo):
"""Enable and disable repositories matching the provided patterns."""
base.read_all_repos()
repos = base.repos
# Disable repositories
for repo_pattern in disablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.disable()
# Enable repositories
for repo_pattern in enablerepo:
if repo_pattern:
for repo in repos.get_matching(repo_pattern):
repo.enable()
def _base(self, conf_file, disable_gpg_check, disablerepo, enablerepo, installroot):
"""Return a fully configured dnf Base object."""
base = dnf.Base()
self._configure_base(base, conf_file, disable_gpg_check, installroot)
try:
base.init_plugins(set(self.disable_plugin), set(self.enable_plugin))
base.pre_configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
self._specify_repositories(base, disablerepo, enablerepo)
try:
base.configure_plugins()
except AttributeError:
pass # older versions of dnf didn't require this and don't have these methods
try:
if self.update_cache:
try:
base.update_cache()
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
base.fill_sack(load_system_repo='auto')
except dnf.exceptions.RepoError as e:
self.module.fail_json(
msg="{0}".format(to_text(e)),
results=[],
rc=1
)
if self.bugfix:
key = {'advisory_type__eq': 'bugfix'}
base._update_security_filters = [base.sack.query().filter(**key)]
if self.security:
key = {'advisory_type__eq': 'security'}
base._update_security_filters = [base.sack.query().filter(**key)]
return base
def list_items(self, command):
"""List package info based on the command."""
# Rename updates to upgrades
if command == 'updates':
command = 'upgrades'
# Return the corresponding packages
if command in ['installed', 'upgrades', 'available']:
results = [
self._package_dict(package)
for package in getattr(self.base.sack.query(), command)()]
# Return the enabled repository ids
elif command in ['repos', 'repositories']:
results = [
{'repoid': repo.id, 'state': 'enabled'}
for repo in self.base.repos.iter_enabled()]
# Return any matching packages
else:
packages = dnf.subject.Subject(command).get_best_query(self.base.sack)
results = [self._package_dict(package) for package in packages]
self.module.exit_json(msg="", results=results)
def _is_installed(self, pkg):
installed = self.base.sack.query().installed()
if installed.filter(name=pkg):
return True
else:
return False
def _is_newer_version_installed(self, pkg_name):
candidate_pkg = self._packagename_dict(pkg_name)
if not candidate_pkg:
# The user didn't provide a versioned rpm, so version checking is
# not required
return False
installed = self.base.sack.query().installed()
installed_pkg = installed.filter(name=candidate_pkg['name']).run()
if installed_pkg:
installed_pkg = installed_pkg[0]
# this looks weird but one is a dict and the other is a dnf.Package
evr_cmp = self._compare_evr(
installed_pkg.epoch, installed_pkg.version, installed_pkg.release,
candidate_pkg['epoch'], candidate_pkg['version'], candidate_pkg['release'],
)
if evr_cmp == 1:
return True
else:
return False
else:
return False
def _mark_package_install(self, pkg_spec, upgrade=False):
"""Mark the package for install."""
is_newer_version_installed = self._is_newer_version_installed(pkg_spec)
is_installed = self._is_installed(pkg_spec)
try:
if self.allow_downgrade:
# dnf only does allow_downgrade, we have to handle this ourselves
# because it allows a possibility for non-idempotent transactions
# on a system's package set (pending the yum repo has many old
# NVRs indexed)
if upgrade:
if is_installed:
self.base.upgrade(pkg_spec)
else:
self.base.install(pkg_spec)
else:
self.base.install(pkg_spec)
elif not self.allow_downgrade and is_newer_version_installed:
return {'failed': False, 'msg': '', 'failure': '', 'rc': 0}
elif not is_newer_version_installed:
if upgrade:
if is_installed:
self.base.upgrade(pkg_spec)
else:
self.base.install(pkg_spec)
else:
self.base.install(pkg_spec)
else:
if upgrade:
if is_installed:
self.base.upgrade(pkg_spec)
else:
self.base.install(pkg_spec)
else:
self.base.install(pkg_spec)
return {'failed': False, 'msg': 'Installed: {0}'.format(pkg_spec), 'failure': '', 'rc': 0}
except dnf.exceptions.MarkingError as e:
return {
'failed': True,
'msg': "No package {0} available.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.DepsolveError as e:
return {
'failed': True,
'msg': "Depsolve Error occured for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
return {'failed': False, 'msg': '', 'failure': ''}
else:
return {
'failed': True,
'msg': "Unknown Error occured for package {0}.".format(pkg_spec),
'failure': " ".join((pkg_spec, to_native(e))),
'rc': 1,
"results": []
}
def _whatprovides(self, filepath):
available = self.base.sack.query().available()
pkg_spec = available.filter(provides=filepath).run()
if pkg_spec:
return pkg_spec[0].name
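# Illustrative note (not in the original source): a file path spec such as
# "/usr/bin/vi" is resolved here to the name of a package providing it
# (e.g. vim-minimal on Fedora), which is then installed by name.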
def _parse_spec_group_file(self):
pkg_specs, grp_specs, module_specs, filenames = [], [], [], []
already_loaded_comps = False # Only load this if necessary, it's slow
for name in self.names:
if '://' in name:
name = self.fetch_rpm_from_url(name)
filenames.append(name)
elif name.endswith(".rpm"):
filenames.append(name)
elif name.startswith("@") or ('/' in name):
# like "dnf install /usr/bin/vi"
if '/' in name:
pkg_spec = self._whatprovides(name)
if pkg_spec:
pkg_specs.append(pkg_spec)
continue
if not already_loaded_comps:
self.base.read_comps()
already_loaded_comps = True
grp_env_mdl_candidate = name[1:].strip()
if self.with_modules:
mdl = self.module_base._get_modules(grp_env_mdl_candidate)
if mdl[0]:
module_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
grp_specs.append(grp_env_mdl_candidate)
else:
pkg_specs.append(name)
return pkg_specs, grp_specs, module_specs, filenames
def _update_only(self, pkgs):
not_installed = []
for pkg in pkgs:
if self._is_installed(pkg):
try:
# upgrade() takes a package spec string; Package objects returned by
# add_remote_rpm(s) must go through package_upgrade() instead
if isinstance(pkg, string_types):
self.base.upgrade(pkg)
else:
self.base.package_upgrade(pkg)
except Exception as e:
self.module.fail_json(
msg="Error occured attempting update_only operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
else:
not_installed.append(pkg)
return not_installed
def _install_remote_rpms(self, filenames):
if int(dnf.__version__.split(".")[0]) >= 2:
pkgs = list(sorted(self.base.add_remote_rpms(list(filenames)), reverse=True))
else:
pkgs = []
try:
for filename in filenames:
pkgs.append(self.base.add_remote_rpm(filename))
except IOError as e:
if to_text("Can not load RPM file") in to_text(e):
self.module.fail_json(
msg="Error occured attempting remote rpm install of package: {0}. {1}".format(filename, to_native(e)),
results=[],
rc=1,
)
if self.update_only:
self._update_only(pkgs)
else:
for pkg in pkgs:
try:
if self._is_newer_version_installed(self._package_dict(pkg)['nevra']):
if self.allow_downgrade:
self.base.package_install(pkg)
else:
self.base.package_install(pkg)
except Exception as e:
self.module.fail_json(
msg="Error occured attempting remote rpm operation: {0}".format(to_native(e)),
results=[],
rc=1,
)
def _is_module_installed(self, module_spec):
if self.with_modules:
module_spec = module_spec.strip()
module_list, nsv = self.module_base._get_modules(module_spec)
if nsv.stream in self.base._moduleContainer.getEnabledStream(nsv.name):
return True
return False # seems like a sane default
def ensure(self):
allow_erasing = False
response = {
'msg': "",
'changed': False,
'results': [],
'rc': 0
}
# Accumulate failures. Package management modules install what they can
# and fail with a message about what they can't.
failure_response = {
'msg': "",
'failures': [],
'results': [],
'rc': 1
}
# Autoremove is called alone
# Jump to remove path where base.autoremove() is run
if not self.names and self.autoremove:
self.names = []
self.state = 'absent'
if self.names == ['*'] and self.state == 'latest':
try:
self.base.upgrade_all()
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured attempting to upgrade all packages"
self.module.fail_json(**failure_response)
else:
pkg_specs, group_specs, module_specs, filenames = self._parse_spec_group_file()
pkg_specs = [p.strip() for p in pkg_specs]
filenames = [f.strip() for f in filenames]
groups = []
environments = []
for group_spec in (g.strip() for g in group_specs):
group = self.base.comps.group_by_pattern(group_spec)
if group:
groups.append(group.id)
else:
environment = self.base.comps.environment_by_pattern(group_spec)
if environment:
environments.append(environment.id)
else:
self.module.fail_json(
msg="No group {0} available.".format(group_spec),
results=[],
)
if self.state in ['installed', 'present']:
# Install files.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Install modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if not self._is_module_installed(module):
response['results'].append("Module {0} installed.".format(module))
self.module_base.install([module])
self.module_base.enable([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
# Install groups.
for group in groups:
try:
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured attempting to install group: {0}".format(group)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
# In dnf 2.0 if all the mandatory packages in a group do
# not install, an error is raised. We want to capture
# this but still install as much as possible.
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured attempting to install environment: {0}".format(environment)
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if module_specs and not self.with_modules:
# This means that the group or env wasn't found in comps
self.module.fail_json(
msg="No group {0} available.".format(module_specs[0]),
results=[],
)
# Install packages.
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
install_result = self._mark_package_install(pkg_spec)
if install_result['failed']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg(pkg_spec, install_result['failure']))
else:
response['results'].append(install_result['msg'])
elif self.state == 'latest':
# "latest" is same as "installed" for filenames.
self._install_remote_rpms(filenames)
for filename in filenames:
response['results'].append("Installed {0}".format(filename))
# Upgrade modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} upgraded.".format(module))
self.module_base.upgrade([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
try:
self.base.group_upgrade(group)
response['results'].append("Group {0} upgraded.".format(group))
except dnf.exceptions.CompsError:
if not self.update_only:
# If not already installed, try to install.
group_pkg_count_installed = self.base.group_install(group, dnf.const.GROUP_PACKAGE_TYPES)
if group_pkg_count_installed == 0:
response['results'].append("Group {0} already installed.".format(group))
else:
response['results'].append("Group {0} installed.".format(group))
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((group, to_native(e))))
for environment in environments:
try:
try:
self.base.environment_upgrade(environment)
except dnf.exceptions.CompsError:
# If not already installed, try to install.
self.base.environment_install(environment, dnf.const.GROUP_PACKAGE_TYPES)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured attempting to install environment: {0}".format(environment)
except dnf.exceptions.Error as e:
failure_response['failures'].append(" ".join((environment, to_native(e))))
if self.update_only:
not_installed = self._update_only(pkg_specs)
for spec in not_installed:
response['results'].append("Packages providing %s not installed due to update_only specified" % spec)
else:
for pkg_spec in pkg_specs:
# best effort (conf.best) causes the latest package to be installed
# even if it was not previously installed
self.base.conf.best = True
install_result = self._mark_package_install(pkg_spec, upgrade=True)
if install_result['failed']:
failure_response['msg'] += install_result['msg']
failure_response['failures'].append(self._sanitize_dnf_error_msg(pkg_spec, install_result['failure']))
else:
response['results'].append(install_result['msg'])
else:
# state == absent
if filenames:
self.module.fail_json(
msg="Cannot remove paths -- please specify package name.",
results=[],
)
# Remove modules
if module_specs and self.with_modules:
for module in module_specs:
try:
if self._is_module_installed(module):
response['results'].append("Module {0} removed.".format(module))
self.module_base.remove([module])
self.module_base.disable([module])
self.module_base.reset([module])
except dnf.exceptions.MarkingErrors as e:
failure_response['failures'].append(' '.join((module, to_native(e))))
for group in groups:
try:
self.base.group_remove(group)
except dnf.exceptions.CompsError:
# Group is already uninstalled.
pass
except AttributeError:
# Group either isn't installed or wasn't marked installed at install time
# because of DNF bug
#
# This is necessary until the upstream dnf API bug is fixed where installing
# a group via the dnf API doesn't actually mark the group as installed
# https://bugzilla.redhat.com/show_bug.cgi?id=1620324
pass
for environment in environments:
try:
self.base.environment_remove(environment)
except dnf.exceptions.CompsError:
# Environment is already uninstalled.
pass
installed = self.base.sack.query().installed()
for pkg_spec in pkg_specs:
# short-circuit installed check for wildcard matching
if '*' in pkg_spec:
try:
self.base.remove(pkg_spec)
except dnf.exceptions.MarkingError as e:
failure_response['failures'].append('{0} - {1}'.format(pkg_spec, to_native(e)))
continue
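# Note (not in the original source): when the wildcard matches nothing,
# base.remove() raises dnf.exceptions.MarkingError and the spec is recorded
# as a failure, which is the non-idempotent removal behavior reported in
# https://github.com/ansible/ansible/issues/62809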
installed_pkg = list(map(str, installed.filter(name=pkg_spec).run()))
if installed_pkg:
candidate_pkg = self._packagename_dict(installed_pkg[0])
installed_pkg = installed.filter(name=candidate_pkg['name']).run()
else:
candidate_pkg = self._packagename_dict(pkg_spec)
installed_pkg = installed.filter(nevra=pkg_spec).run()
if installed_pkg:
installed_pkg = installed_pkg[0]
evr_cmp = self._compare_evr(
installed_pkg.epoch, installed_pkg.version, installed_pkg.release,
candidate_pkg['epoch'], candidate_pkg['version'], candidate_pkg['release'],
)
if evr_cmp == 0:
self.base.remove(pkg_spec)
# Like the dnf CLI we want to allow recursive removal of dependent
# packages
allow_erasing = True
if self.autoremove:
self.base.autoremove()
try:
if not self.base.resolve(allow_erasing=allow_erasing):
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
response['msg'] = "Nothing to do"
self.module.exit_json(**response)
else:
response['changed'] = True
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
if self.module.check_mode:
response['msg'] = "Check mode: No changes made, but would have if not in check mode"
self.module.exit_json(**response)
try:
if self.download_only and self.download_dir and self.base.conf.destdir:
dnf.util.ensure_dir(self.base.conf.destdir)
self.base.repos.all().pkgdir = self.base.conf.destdir
self.base.download_packages(self.base.transaction.install_set)
except dnf.exceptions.DownloadError as e:
self.module.fail_json(
msg="Failed to download packages: {0}".format(to_text(e)),
results=[],
)
if self.download_only:
for package in self.base.transaction.install_set:
response['results'].append("Downloaded: {0}".format(package))
self.module.exit_json(**response)
else:
self.base.do_transaction()
for package in self.base.transaction.install_set:
response['results'].append("Installed: {0}".format(package))
for package in self.base.transaction.remove_set:
response['results'].append("Removed: {0}".format(package))
if failure_response['failures']:
failure_response['msg'] = 'Failed to install some of the specified packages'
self.module.fail_json(**failure_response)
self.module.exit_json(**response)
except dnf.exceptions.DepsolveError as e:
failure_response['msg'] = "Depsolve Error occured: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
except dnf.exceptions.Error as e:
if to_text("already installed") in to_text(e):
response['changed'] = False
response['results'].append("Package already installed: {0}".format(to_native(e)))
self.module.exit_json(**response)
else:
failure_response['msg'] = "Unknown Error occured: {0}".format(to_native(e))
self.module.fail_json(**failure_response)
@staticmethod
def has_dnf():
return HAS_DNF
def run(self):
"""The main function."""
# Check if autoremove is called correctly
if self.autoremove:
if LooseVersion(dnf.__version__) < LooseVersion('2.0.1'):
self.module.fail_json(
msg="Autoremove requires dnf>=2.0.1. Current dnf version is %s" % dnf.__version__,
results=[],
)
# Check if download_dir is called correctly
if self.download_dir:
if LooseVersion(dnf.__version__) < LooseVersion('2.6.2'):
self.module.fail_json(
msg="download_dir requires dnf>=2.6.2. Current dnf version is %s" % dnf.__version__,
results=[],
)
if self.update_cache and not self.names and not self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
self.module.exit_json(
msg="Cache updated",
changed=False,
results=[],
rc=0
)
# Set state as installed by default
# This is not set in AnsibleModule() because the following shouldn't happen
# - dnf: autoremove=yes state=installed
if self.state is None:
self.state = 'installed'
if self.list:
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
self.list_items(self.list)
else:
# Note: base takes a long time to run so we want to check for failure
# before running it.
if not dnf.util.am_i_root():
self.module.fail_json(
msg="This command has to be run under the root user.",
results=[],
)
self.base = self._base(
self.conf_file, self.disable_gpg_check, self.disablerepo,
self.enablerepo, self.installroot
)
if self.with_modules:
self.module_base = dnf.module.module_base.ModuleBase(self.base)
self.ensure()
def main():
# state=installed name=pkgspec
# state=removed name=pkgspec
# state=latest name=pkgspec
#
# informational commands:
# list=installed
# list=updates
# list=available
# list=repos
# list=pkgspec
module = AnsibleModule(
**yumdnf_argument_spec
)
module_implementation = DnfModule(module)
try:
module_implementation.run()
except dnf.exceptions.RepoError as de:
module.fail_json(
msg="Failed to synchronize repodata: {0}".format(to_native(de)),
rc=1,
results=[],
changed=False
)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,809 |
dnf (and probably yum) wild-card match remove is not idempotent
|
##### SUMMARY
When using a wild-card in the package name to remove, the first run will remove the packages,
but the second run will fail with the message `Failed to install some of the specified packages`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- dnf
##### ANSIBLE VERSION
```
ansible 2.8.5
config file = /home/rabin/src/ansible/workstation-setup/ansible.cfg
configured module search path = ['/home/rabin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Jul 9 2019, 16:32:37) [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)]
```
##### CONFIGURATION
```
ANSIBLE_SSH_CONTROL_PATH(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = ./.ansible/ssh-%%C
CACHE_PLUGIN(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = ./.ansible/facts_cache
CACHE_PLUGIN_TIMEOUT(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = 86400
DEFAULT_EXECUTABLE(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = /bin/bash
DEFAULT_GATHERING(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = ['/home/rabin/src/ansible/workstation-setup/inventory']
DEFAULT_REMOTE_PORT(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = 22
RETRY_FILES_SAVE_PATH(/home/rabin/src/ansible/workstation-setup/ansible.cfg) = /home/rabin/src/ansible/workstation-setup/.ansible/retry
```
##### OS / ENVIRONMENT
* Fedora 30
##### STEPS TO REPRODUCE
Try to remove a package twice,
or run the playbook twice (the first run will work if there is a match, but the second run will fail)
```yaml
- dnf:
name: lohit-*-fonts
- dnf:
name: lohit-*-fonts
state: absent
- dnf:
name: lohit-*-fonts
state: absent
```
##### EXPECTED RESULTS
The same behaviour as when you ask it to remove a specific package:
```yaml
- dnf:
name: lohit-tamil-fonts
- dnf:
name: lohit-tamil-fonts
state: absent
- dnf:
name: lohit-tamil-fonts
state: absent
```
##### ACTUAL RESULTS
```
fatal: [localhost]: FAILED! => {"changed": false, "failures": ["lohit-*-fonts - no package matched: lohit-*-fonts"], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
```
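For illustration, a minimal sketch of the behaviour being requested; `fnmatch` below stands in for dnf's wildcard matching, and the helper name is hypothetical:
```python
# Hedged sketch, not the actual fix: for state=absent, filter the spec against
# what is installed first, so an empty match means "nothing to do"
# (changed=False, rc=0) rather than a failure.
import fnmatch


def packages_to_remove(spec, installed_names):
    """Installed packages matching spec; an empty list is a valid no-op."""
    return [name for name in installed_names if fnmatch.fnmatch(name, spec)]


# first run: one package matches and gets removed
assert packages_to_remove('lohit-*-fonts', ['lohit-tamil-fonts', 'bc']) == ['lohit-tamil-fonts']
# second run: nothing installed matches, so nothing to remove and no error
assert packages_to_remove('lohit-*-fonts', ['bc']) == []
```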
|
https://github.com/ansible/ansible/issues/62809
|
https://github.com/ansible/ansible/pull/63034
|
911aa6aab9aea867bc79e38684ab1b198fa36937
|
8bcf11fee9842af35efc66087c1195200cb2990e
| 2019-09-24T20:20:34Z |
python
| 2019-10-02T15:05:12Z |
test/integration/targets/yum/tasks/yum.yml
|
# UNINSTALL
- name: uninstall sos
yum: name=sos state=removed
register: yum_result
- name: check sos with rpm
shell: rpm -q sos
ignore_errors: True
register: rpm_result
- name: verify uninstallation of sos
assert:
that:
- "yum_result is success"
- "rpm_result is failed"
# UNINSTALL AGAIN
- name: uninstall sos again in check mode
yum: name=sos state=removed
check_mode: true
register: yum_result
- name: verify no change on re-uninstall in check mode
assert:
that:
- "not yum_result is changed"
- name: uninstall sos again
yum: name=sos state=removed
register: yum_result
- name: verify no change on re-uninstall
assert:
that:
- "not yum_result is changed"
# INSTALL
- name: install sos in check mode
yum: name=sos state=present
check_mode: true
register: yum_result
- name: verify installation of sos in check mode
assert:
that:
- "yum_result is changed"
- name: install sos
yum: name=sos state=present
register: yum_result
- name: verify installation of sos
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: check sos with rpm
shell: rpm -q sos
# INSTALL AGAIN
- name: install sos again in check mode
yum: name=sos state=present
check_mode: true
register: yum_result
- name: verify no change on second install in check mode
assert:
that:
- "not yum_result is changed"
- name: install sos again
yum: name=sos state=present
register: yum_result
- name: verify no change on second install
assert:
that:
- "not yum_result is changed"
- name: install sos again with empty string enablerepo
yum: name=sos state=present enablerepo=""
register: yum_result
- name: verify no change on third install with empty string enablerepo
assert:
that:
- "yum_result is success"
- "not yum_result is changed"
# This test case is unfortunately distro specific because we have to specify
# repo names which are not the same across Fedora/RHEL/CentOS for base/updates
- name: install sos again with missing repo enablerepo
yum:
name: sos
state: present
enablerepo:
- "thisrepodoesnotexist"
- "base"
- "updates"
disablerepo: "*"
register: yum_result
when: ansible_distribution == 'CentOS'
- name: verify no change on fourth install with missing repo enablerepo (yum)
assert:
that:
- "yum_result is success"
- "yum_result is not changed"
when: ansible_distribution == 'CentOS'
- name: install sos again with only missing repo enablerepo
yum:
name: sos
state: present
enablerepo: "thisrepodoesnotexist"
ignore_errors: true
register: yum_result
- name: verify no change on fifth install with only missing repo enablerepo (yum)
assert:
that:
- "yum_result is not success"
when: ansible_pkg_mgr == 'yum'
- name: verify no change on fifth install with only missing repo enablerepo (dnf)
assert:
that:
- "yum_result is success"
when: ansible_pkg_mgr == 'dnf'
# INSTALL AGAIN WITH LATEST
- name: install sos again with state latest in check mode
yum: name=sos state=latest
check_mode: true
register: yum_result
- name: verify install sos again with state latest in check mode
assert:
that:
- "not yum_result is changed"
- name: install sos again with state latest idempotence
yum: name=sos state=latest
register: yum_result
- name: verify install sos again with state latest idempotence
assert:
that:
- "not yum_result is changed"
# INSTALL WITH LATEST
- name: uninstall sos
yum: name=sos state=removed
register: yum_result
- name: verify uninstall sos
assert:
that:
- "yum_result is successful"
- name: copy yum.conf file in case it is missing
copy:
src: yum.conf
dest: /etc/yum.conf
force: False
register: yum_conf_copy
- block:
- name: install sos with state latest in check mode with config file param
yum: name=sos state=latest conf_file=/etc/yum.conf
check_mode: true
register: yum_result
- name: verify install sos with state latest in check mode with config file param
assert:
that:
- "yum_result is changed"
always:
- name: remove tmp yum.conf file if we created it
file:
path: /etc/yum.conf
state: absent
when: yum_conf_copy is changed
- name: install sos with state latest in check mode
yum: name=sos state=latest
check_mode: true
register: yum_result
- name: verify install sos with state latest in check mode
assert:
that:
- "yum_result is changed"
- name: install sos with state latest
yum: name=sos state=latest
register: yum_result
- name: verify install sos with state latest
assert:
that:
- "yum_result is changed"
- name: install sos with state latest idempotence
yum: name=sos state=latest
register: yum_result
- name: verify install sos with state latest idempotence
assert:
that:
- "not yum_result is changed"
- name: install sos with state latest idempotence with config file param
yum: name=sos state=latest
register: yum_result
- name: verify install sos with state latest idempotence with config file param
assert:
that:
- "not yum_result is changed"
# Multiple packages
- name: uninstall sos and bc
yum: name=sos,bc state=removed
- name: check sos with rpm
shell: rpm -q sos
ignore_errors: True
register: rpm_sos_result
- name: check bc with rpm
shell: rpm -q bc
ignore_errors: True
register: rpm_bc_result
- name: verify packages installed
assert:
that:
- "rpm_sos_result is failed"
- "rpm_bc_result is failed"
- name: install sos and bc as comma separated
yum: name=sos,bc state=present
register: yum_result
- name: verify packages installed
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: check sos with rpm
shell: rpm -q sos
- name: check bc with rpm
shell: rpm -q bc
- name: uninstall sos and bc
yum: name=sos,bc state=removed
register: yum_result
- name: install sos and bc as list
yum:
name:
- sos
- bc
state: present
register: yum_result
- name: verify packages installed
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: check sos with rpm
shell: rpm -q sos
- name: check bc with rpm
shell: rpm -q bc
- name: uninstall sos and bc
yum: name=sos,bc state=removed
register: yum_result
- name: install sos and bc as comma separated with spaces
yum:
name: "sos, bc"
state: present
register: yum_result
- name: verify packages installed
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: check sos with rpm
shell: rpm -q sos
- name: check bc with rpm
shell: rpm -q bc
- name: uninstall sos and bc
yum: name=sos,bc state=removed
- name: install non-existent rpm
yum: name="{{ item }}"
with_items:
- does-not-exist
register: non_existent_rpm
ignore_errors: True
- name: check non-existent rpm install failed
assert:
that:
- non_existent_rpm is failed
# Install in installroot='/'
- name: install sos
yum: name=sos state=present installroot='/'
register: yum_result
- name: verify installation of sos
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: check sos with rpm
shell: rpm -q sos --root=/
- name: uninstall sos
yum:
name: sos
installroot: '/'
state: removed
register: yum_result
- name: Test download_only
yum:
name: sos
state: latest
download_only: true
register: yum_result
- name: verify download of sos (part 1 -- yum "install" succeeded)
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: uninstall sos (noop)
yum:
name: sos
state: removed
register: yum_result
- name: verify download of sos (part 2 -- nothing removed during uninstall)
assert:
that:
- "yum_result is success"
- "not yum_result is changed"
- name: uninstall sos for downloadonly/downloaddir test
yum:
name: sos
state: absent
- name: Test download_only/download_dir
yum:
name: sos
state: latest
download_only: true
download_dir: "/var/tmp/packages"
register: yum_result
- name: verify yum output
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- command: "ls /var/tmp/packages"
register: ls_out
- name: Verify specified download_dir was used
assert:
that:
- "'sos' in ls_out.stdout"
- name: install group
yum:
name: "@Custom Group"
state: present
register: yum_result
- name: verify installation of the group
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: install the group again
yum:
name: "@Custom Group"
state: present
register: yum_result
- name: verify nothing changed
assert:
that:
- "yum_result is success"
- "not yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: install the group again but also with a package that is not yet installed
yum:
name:
- "@Custom Group"
- sos
state: present
register: yum_result
- name: verify sos is installed
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: try to install the group again, with --check to check 'changed'
yum:
name: "@Custom Group"
state: present
check_mode: yes
register: yum_result
- name: verify nothing changed
assert:
that:
- "not yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: try to install non existing group
yum:
name: "@non-existing-group"
state: present
register: yum_result
ignore_errors: True
- name: verify installation of the non existing group failed
assert:
that:
- "yum_result is failed"
- "not yum_result is changed"
- "yum_result is failed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: try to install non existing file
yum:
name: /tmp/non-existing-1.0.0.fc26.noarch.rpm
state: present
register: yum_result
ignore_errors: yes
- name: verify installation failed
assert:
that:
- "yum_result is failed"
- "not yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- name: try to install from non existing url
yum:
name: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/yum/non-existing-1.0.0.fc26.noarch.rpm
state: present
register: yum_result
ignore_errors: yes
- name: verify installation failed
assert:
that:
- "yum_result is failed"
- "not yum_result is changed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- name: use latest to install httpd
yum:
name: httpd
state: latest
register: yum_result
- name: verify httpd was installed
assert:
that:
- "'changed' in yum_result"
- name: uninstall httpd
yum:
name: httpd
state: removed
- name: update httpd only if it exists
yum:
name: httpd
state: latest
update_only: yes
register: yum_result
- name: verify httpd not installed
assert:
that:
- "not yum_result is changed"
- "'Packages providing httpd not installed due to update_only specified' in yum_result.results"
- name: try to install not compatible arch rpm, should fail
yum:
name: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/yum/banner-1.3.4-3.el7.ppc64le.rpm
state: present
register: yum_result
ignore_errors: True
- name: verify that yum failed
assert:
that:
- "not yum_result is changed"
- "yum_result is failed"
# setup for testing installing an RPM from url
- set_fact:
pkg_name: fpaste
- name: cleanup
yum:
name: "{{ pkg_name }}"
state: absent
- set_fact:
pkg_url: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/yum/fpaste-0.3.7.4.1-2.el7.noarch.rpm
when: ansible_python.version.major == 2
- set_fact:
pkg_url: https://s3.amazonaws.com/ansible-ci-files/test/integration/targets/yum/fpaste-0.3.9.2-1.fc28.noarch.rpm
when: ansible_python.version.major == 3
# setup end
- name: download an rpm
get_url:
url: "{{ pkg_url }}"
dest: "/tmp/{{ pkg_name }}.rpm"
- name: install the downloaded rpm
yum:
name: "/tmp/{{ pkg_name }}.rpm"
state: present
register: yum_result
- name: verify installation
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- "yum_result is not failed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: install the downloaded rpm again
yum:
name: "/tmp/{{ pkg_name }}.rpm"
state: present
register: yum_result
- name: verify installation
assert:
that:
- "yum_result is success"
- "not yum_result is changed"
- "yum_result is not failed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: clean up
yum:
name: "{{ pkg_name }}"
state: absent
- name: install from url
yum:
name: "{{ pkg_url }}"
state: present
register: yum_result
- name: verify installation
assert:
that:
- "yum_result is success"
- "yum_result is changed"
- "yum_result is not failed"
- name: verify yum module outputs
assert:
that:
- "'changed' in yum_result"
- "'msg' in yum_result"
- "'results' in yum_result"
- name: Create a temp RPM file which does not contain nevra information
file:
name: "/tmp/non_existent_pkg.rpm"
state: touch
- name: Try installing RPM file which does not contain nevra information
yum:
name: "/tmp/non_existent_pkg.rpm"
state: present
register: no_nevra_info_result
ignore_errors: yes
- name: Verify RPM failed to install
assert:
that:
- "'changed' in no_nevra_info_result"
- "'msg' in no_nevra_info_result"
- name: Delete a temp RPM file
file:
name: "/tmp/non_existent_pkg.rpm"
state: absent
- name: get yum version
yum:
list: yum
register: yum_version
- name: set yum_version of installed version
set_fact:
yum_version: "{%- if item.yumstate == 'installed' -%}{{ item.version }}{%- else -%}{{ yum_version }}{%- endif -%}"
with_items: "{{ yum_version.results }}"
- block:
- name: uninstall bc
yum: name=bc state=removed
- name: check bc with rpm
shell: rpm -q bc
ignore_errors: True
register: rpm_bc_result
- name: verify bc is uninstalled
assert:
that:
- "rpm_bc_result is failed"
- name: exclude bc (yum backend)
lineinfile:
dest: /etc/yum.conf
regexp: (^exclude=)(.)*
line: "exclude=bc*"
state: present
when: ansible_pkg_mgr == 'yum'
- name: exclude bc (dnf backend)
lineinfile:
dest: /etc/dnf/dnf.conf
regexp: (^excludepkgs=)(.)*
line: "excludepkgs=bc*"
state: present
when: ansible_pkg_mgr == 'dnf'
# begin test case where disable_excludes is supported
- name: Try install bc without disable_excludes
yum: name=bc state=latest
register: yum_bc_result
ignore_errors: True
- name: verify bc did not install because it is in exclude list
assert:
that:
- "yum_bc_result is failed"
- name: install bc with disable_excludes
yum: name=bc state=latest disable_excludes=all
register: yum_bc_result_using_excludes
- name: verify bc did install using disable_excludes=all
assert:
that:
- "yum_bc_result_using_excludes is success"
- "yum_bc_result_using_excludes is changed"
- "yum_bc_result_using_excludes is not failed"
- name: remove exclude bc (cleanup yum.conf)
lineinfile:
dest: /etc/yum.conf
regexp: (^exclude=bc*)
line: "exclude="
state: present
when: ansible_pkg_mgr == 'yum'
- name: remove exclude bc (cleanup dnf.conf)
lineinfile:
dest: /etc/dnf/dnf.conf
regexp: (^excludepkgs=bc*)
line: "excludepkgs="
state: present
when: ansible_pkg_mgr == 'dnf'
# Fedora < 26 has a bug in dnf where package excludes in dnf.conf aren't
# actually honored and those releases are EOL'd so we have no expectation they
# will ever be fixed
when: not ((ansible_distribution == "Fedora") and (ansible_distribution_major_version|int < 26))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,672 |
Azure Ansible modules do not support the latest Azure Stack API profiles
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Azure Stack has gone on to support the 2019-03-01-hybrid and 2018-03-01-hybrid profiles, but the Azure modules for Ansible only support the 2017-03-09-profile with a limited set of resource clients.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
API Profiles
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = None
configured module search path = [u'/home/testuser/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.15+ (default, Jul 9 2019, 16:51:35) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Azure Ansible modules need to support 2019-03-01-hybrid and 2018-03-01-hybrid profiles for Azure Stack.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
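For illustration, a minimal self-contained sketch of how a hybrid profile entry would be resolved once added to `AZURE_API_PROFILES` in `azure_rm_common.py`. The profile contents and API versions below are placeholders, not the merged values:
```python
# Hypothetical sketch mirroring the string-or-dict resolution performed by
# get_api_profile() in azure_rm_common.py; versions are placeholders.
AZURE_API_PROFILES = {
    '2019-03-01-hybrid': {
        'ResourceManagementClient': '2018-05-01',  # placeholder version
        'StorageManagementClient': '2017-10-01',   # placeholder version
    },
}


def resolve_api_version(profile_name, client_name):
    profile = AZURE_API_PROFILES.get(profile_name)
    if profile is None:
        raise KeyError('unknown Azure API profile: {0}'.format(profile_name))
    raw = profile.get(client_name)
    # bare strings are wrapped so callers always receive a dict
    return raw if isinstance(raw, dict) else dict(default_api_version=raw)


print(resolve_api_version('2019-03-01-hybrid', 'StorageManagementClient'))
# {'default_api_version': '2017-10-01'}
```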
|
https://github.com/ansible/ansible/issues/62672
|
https://github.com/ansible/ansible/pull/62675
|
69317a9d3e7286ca5419114b52ebe307aafd702b
|
926e423690162b3bdf3ab255b0d50f0a9ae48e1a
| 2019-09-20T18:12:18Z |
python
| 2019-10-02T17:04:28Z |
lib/ansible/module_utils/azure_rm_common.py
|
# Copyright (c) 2016 Matt Davis, <[email protected]>
# Chris Houseknecht, <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
import os
import re
import types
import copy
import inspect
import traceback
import json
from os.path import expanduser
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
try:
from ansible.module_utils.ansible_release import __version__ as ANSIBLE_VERSION
except Exception:
ANSIBLE_VERSION = 'unknown'
from ansible.module_utils.six.moves import configparser
import ansible.module_utils.six.moves.urllib.parse as urlparse
AZURE_COMMON_ARGS = dict(
auth_source=dict(
type='str',
choices=['auto', 'cli', 'env', 'credential_file', 'msi']
),
profile=dict(type='str'),
subscription_id=dict(type='str'),
client_id=dict(type='str', no_log=True),
secret=dict(type='str', no_log=True),
tenant=dict(type='str', no_log=True),
ad_user=dict(type='str', no_log=True),
password=dict(type='str', no_log=True),
cloud_environment=dict(type='str', default='AzureCloud'),
cert_validation_mode=dict(type='str', choices=['validate', 'ignore']),
api_profile=dict(type='str', default='latest'),
adfs_authority_url=dict(type='str', default=None)
)
AZURE_CREDENTIAL_ENV_MAPPING = dict(
profile='AZURE_PROFILE',
subscription_id='AZURE_SUBSCRIPTION_ID',
client_id='AZURE_CLIENT_ID',
secret='AZURE_SECRET',
tenant='AZURE_TENANT',
ad_user='AZURE_AD_USER',
password='AZURE_PASSWORD',
cloud_environment='AZURE_CLOUD_ENVIRONMENT',
cert_validation_mode='AZURE_CERT_VALIDATION_MODE',
adfs_authority_url='AZURE_ADFS_AUTHORITY_URL'
)
# FUTURE: this should come from the SDK or an external location.
# For now, we have to copy from azure-cli
AZURE_API_PROFILES = {
'latest': {
'ContainerInstanceManagementClient': '2018-02-01-preview',
'ComputeManagementClient': dict(
default_api_version='2018-10-01',
resource_skus='2018-10-01',
disks='2018-06-01',
snapshots='2018-10-01',
virtual_machine_run_commands='2018-10-01'
),
'NetworkManagementClient': '2018-08-01',
'ResourceManagementClient': '2017-05-10',
'StorageManagementClient': '2017-10-01',
'WebSiteManagementClient': '2018-02-01',
'PostgreSQLManagementClient': '2017-12-01',
'MySQLManagementClient': '2017-12-01',
'MariaDBManagementClient': '2019-03-01',
'ManagementLockClient': '2016-09-01'
},
'2017-03-09-profile': {
'ComputeManagementClient': '2016-03-30',
'NetworkManagementClient': '2015-06-15',
'ResourceManagementClient': '2016-02-01',
'StorageManagementClient': '2016-01-01'
}
}
AZURE_TAG_ARGS = dict(
tags=dict(type='dict'),
append_tags=dict(type='bool', default=True),
)
AZURE_COMMON_REQUIRED_IF = [
('log_mode', 'file', ['log_path'])
]
ANSIBLE_USER_AGENT = 'Ansible/{0}'.format(ANSIBLE_VERSION)
CLOUDSHELL_USER_AGENT_KEY = 'AZURE_HTTP_USER_AGENT'
VSCODEEXT_USER_AGENT_KEY = 'VSCODEEXT_USER_AGENT'
CIDR_PATTERN = re.compile(r"(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1"
r"[0-9]{2}|2[0-4][0-9]|25[0-5])(/([0-9]|[1-2][0-9]|3[0-2]))")
AZURE_SUCCESS_STATE = "Succeeded"
AZURE_FAILED_STATE = "Failed"
HAS_AZURE = True
HAS_AZURE_EXC = None
HAS_AZURE_CLI_CORE = True
HAS_AZURE_CLI_CORE_EXC = None
HAS_MSRESTAZURE = True
HAS_MSRESTAZURE_EXC = None
try:
import importlib
except ImportError:
# This passes the sanity import test, but does not provide a user friendly error message.
# Doing so would require catching Exception for all imports of Azure dependencies in modules and module_utils.
importlib = None
try:
from packaging.version import Version
HAS_PACKAGING_VERSION = True
HAS_PACKAGING_VERSION_EXC = None
except ImportError:
Version = None
HAS_PACKAGING_VERSION = False
HAS_PACKAGING_VERSION_EXC = traceback.format_exc()
# NB: packaging issue sometimes cause msrestazure not to be installed, check it separately
try:
from msrest.serialization import Serializer
except ImportError:
HAS_MSRESTAZURE_EXC = traceback.format_exc()
HAS_MSRESTAZURE = False
try:
from enum import Enum
from msrestazure.azure_active_directory import AADTokenCredentials
from msrestazure.azure_exceptions import CloudError
from msrestazure.azure_active_directory import MSIAuthentication
from msrestazure.tools import parse_resource_id, resource_id, is_valid_resource_id
from msrestazure import azure_cloud
from azure.common.credentials import ServicePrincipalCredentials, UserPassCredentials
from azure.mgmt.monitor.version import VERSION as monitor_client_version
from azure.mgmt.network.version import VERSION as network_client_version
from azure.mgmt.storage.version import VERSION as storage_client_version
from azure.mgmt.compute.version import VERSION as compute_client_version
from azure.mgmt.resource.version import VERSION as resource_client_version
from azure.mgmt.dns.version import VERSION as dns_client_version
from azure.mgmt.web.version import VERSION as web_client_version
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.resource.resources import ResourceManagementClient
from azure.mgmt.resource.subscriptions import SubscriptionClient
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.dns import DnsManagementClient
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.containerservice import ContainerServiceClient
from azure.mgmt.marketplaceordering import MarketplaceOrderingAgreements
from azure.mgmt.trafficmanager import TrafficManagerManagementClient
from azure.storage.cloudstorageaccount import CloudStorageAccount
from azure.storage.blob import PageBlobService, BlockBlobService
from adal.authentication_context import AuthenticationContext
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.servicebus import ServiceBusManagementClient
import azure.mgmt.servicebus.models as ServicebusModel
from azure.mgmt.rdbms.postgresql import PostgreSQLManagementClient
from azure.mgmt.rdbms.mysql import MySQLManagementClient
from azure.mgmt.rdbms.mariadb import MariaDBManagementClient
from azure.mgmt.containerregistry import ContainerRegistryManagementClient
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.loganalytics import LogAnalyticsManagementClient
import azure.mgmt.loganalytics.models as LogAnalyticsModels
from azure.mgmt.automation import AutomationClient
import azure.mgmt.automation.models as AutomationModel
from azure.mgmt.iothub import IotHubClient
from azure.mgmt.iothub import models as IoTHubModels
from msrest.service_client import ServiceClient
from msrestazure import AzureConfiguration
from msrest.authentication import Authentication
from azure.mgmt.resource.locks import ManagementLockClient
except ImportError as exc:
Authentication = object
HAS_AZURE_EXC = traceback.format_exc()
HAS_AZURE = False
from base64 import b64encode, b64decode
from hashlib import sha256
from hmac import HMAC
from time import time
try:
from urllib import (urlencode, quote_plus)
except ImportError:
from urllib.parse import (urlencode, quote_plus)
try:
from azure.cli.core.util import CLIError
from azure.common.credentials import get_azure_cli_credentials, get_cli_profile
from azure.common.cloud import get_cli_active_cloud
except ImportError:
HAS_AZURE_CLI_CORE = False
HAS_AZURE_CLI_CORE_EXC = None
CLIError = Exception
def azure_id_to_dict(id):
'''Convert an Azure resource id into a dict mapping each path segment to the segment that follows it.'''
pieces = re.sub(r'^\/', '', id).split('/')
result = {}
index = 0
while index < len(pieces) - 1:
result[pieces[index]] = pieces[index + 1]
index += 1
return result
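# Example (comment only): azure_id_to_dict('/subscriptions/xxx/resourceGroups/myrg')
# returns {'subscriptions': 'xxx', 'xxx': 'resourceGroups', 'resourceGroups': 'myrg'};
# the pairing window slides by one segment, so intermediate pairs appear as well.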
def format_resource_id(val, subscription_id, namespace, types, resource_group):
return resource_id(name=val,
resource_group=resource_group,
namespace=namespace,
type=types,
subscription=subscription_id) if not is_valid_resource_id(val) else val
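# Example (comment only): format_resource_id('myvm', '<sub>', 'Microsoft.Compute',
# 'virtualMachines', 'myrg') builds roughly
# '/subscriptions/<sub>/resourceGroups/myrg/providers/Microsoft.Compute/virtualMachines/myvm';
# an input that is already a valid resource id is returned unchanged.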
def normalize_location_name(name):
return name.replace(' ', '').lower()
# FUTURE: either get this from the requirements file (if we can be sure it's always available at runtime)
# or generate the requirements files from this so we only have one source of truth to maintain...
AZURE_PKG_VERSIONS = {
'StorageManagementClient': {
'package_name': 'storage',
'expected_version': '3.1.0'
},
'ComputeManagementClient': {
'package_name': 'compute',
'expected_version': '4.4.0'
},
'ContainerInstanceManagementClient': {
'package_name': 'containerinstance',
'expected_version': '0.4.0'
},
'NetworkManagementClient': {
'package_name': 'network',
'expected_version': '2.3.0'
},
'ResourceManagementClient': {
'package_name': 'resource',
'expected_version': '2.1.0'
},
'DnsManagementClient': {
'package_name': 'dns',
'expected_version': '2.1.0'
},
'WebSiteManagementClient': {
'package_name': 'web',
'expected_version': '0.41.0'
},
'TrafficManagerManagementClient': {
'package_name': 'trafficmanager',
'expected_version': '0.50.0'
},
} if HAS_AZURE else {}
AZURE_MIN_RELEASE = '2.0.0'
class AzureRMModuleBase(object):
def __init__(self, derived_arg_spec, bypass_checks=False, no_log=False,
check_invalid_arguments=None, mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False, supports_check_mode=False,
required_if=None, supports_tags=True, facts_module=False, skip_exec=False):
merged_arg_spec = dict()
merged_arg_spec.update(AZURE_COMMON_ARGS)
if supports_tags:
merged_arg_spec.update(AZURE_TAG_ARGS)
if derived_arg_spec:
merged_arg_spec.update(derived_arg_spec)
merged_required_if = list(AZURE_COMMON_REQUIRED_IF)
if required_if:
merged_required_if += required_if
self.module = AnsibleModule(argument_spec=merged_arg_spec,
bypass_checks=bypass_checks,
no_log=no_log,
check_invalid_arguments=check_invalid_arguments,
mutually_exclusive=mutually_exclusive,
required_together=required_together,
required_one_of=required_one_of,
add_file_common_args=add_file_common_args,
supports_check_mode=supports_check_mode,
required_if=merged_required_if)
if not HAS_PACKAGING_VERSION:
self.fail(msg=missing_required_lib('packaging'),
exception=HAS_PACKAGING_VERSION_EXC)
if not HAS_MSRESTAZURE:
self.fail(msg=missing_required_lib('msrestazure'),
exception=HAS_MSRESTAZURE_EXC)
if not HAS_AZURE:
self.fail(msg=missing_required_lib('ansible[azure] (azure >= {0})'.format(AZURE_MIN_RELEASE)),
exception=HAS_AZURE_EXC)
self._network_client = None
self._storage_client = None
self._resource_client = None
self._compute_client = None
self._dns_client = None
self._web_client = None
self._marketplace_client = None
self._sql_client = None
self._mysql_client = None
self._mariadb_client = None
self._postgresql_client = None
self._containerregistry_client = None
self._containerinstance_client = None
self._containerservice_client = None
self._managedcluster_client = None
self._traffic_manager_management_client = None
self._monitor_client = None
self._resource = None
self._log_analytics_client = None
self._servicebus_client = None
self._automation_client = None
self._IoThub_client = None
self._lock_client = None
self.check_mode = self.module.check_mode
self.api_profile = self.module.params.get('api_profile')
self.facts_module = facts_module
# self.debug = self.module.params.get('debug')
# delegate auth to AzureRMAuth class (shared with all plugin types)
self.azure_auth = AzureRMAuth(fail_impl=self.fail, **self.module.params)
# common parameter validation
if self.module.params.get('tags'):
self.validate_tags(self.module.params['tags'])
if not skip_exec:
res = self.exec_module(**self.module.params)
self.module.exit_json(**res)
def check_client_version(self, client_type):
# Ensure the installed azure-mgmt client package meets the per-client minimum expected version.
package_version = AZURE_PKG_VERSIONS.get(client_type.__name__, None)
if package_version is not None:
client_name = package_version.get('package_name')
try:
client_module = importlib.import_module(client_type.__module__)
client_version = client_module.VERSION
except (RuntimeError, AttributeError):
# can't get at the module version for some reason; skip the check silently...
return
expected_version = package_version.get('expected_version')
if Version(client_version) < Version(expected_version):
self.fail("Installed azure-mgmt-{0} client version is {1}. The minimum supported version is {2}. Try "
"`pip install ansible[azure]`".format(client_name, client_version, expected_version))
if Version(client_version) != Version(expected_version):
self.module.warn("Installed azure-mgmt-{0} client version is {1}. The expected version is {2}. Try "
"`pip install ansible[azure]`".format(client_name, client_version, expected_version))
def exec_module(self, **kwargs):
self.fail("Error: {0} failed to implement exec_module method.".format(self.__class__.__name__))
def fail(self, msg, **kwargs):
'''
Shortcut for calling module.fail()
:param msg: Error message text.
:param kwargs: Any key=value pairs
:return: None
'''
self.module.fail_json(msg=msg, **kwargs)
def deprecate(self, msg, version=None):
self.module.deprecate(msg, version)
def log(self, msg, pretty_print=False):
if pretty_print:
self.module.debug(json.dumps(msg, indent=4, sort_keys=True))
else:
self.module.debug(msg)
def validate_tags(self, tags):
'''
Check if tags dictionary contains string:string pairs.
:param tags: dictionary of string:string pairs
:return: None
'''
if not self.facts_module:
if not isinstance(tags, dict):
self.fail("Tags must be a dictionary of string:string values.")
for key, value in tags.items():
if not isinstance(value, str):
self.fail("Tags values must be strings. Found {0}:{1}".format(str(key), str(value)))
def update_tags(self, tags):
'''
Call from the module to update metadata tags. Returns tuple
with bool indicating if there was a change and dict of new
tags to assign to the object.
:param tags: metadata tags from the object
:return: bool, dict
'''
tags = tags or dict()
new_tags = copy.copy(tags) if isinstance(tags, dict) else dict()
param_tags = self.module.params.get('tags') if isinstance(self.module.params.get('tags'), dict) else dict()
append_tags = self.module.params.get('append_tags') if self.module.params.get('append_tags') is not None else True
changed = False
# check add or update
for key, value in param_tags.items():
if not new_tags.get(key) or new_tags[key] != value:
changed = True
new_tags[key] = value
# check remove
if not append_tags:
for key, value in tags.items():
if not param_tags.get(key):
new_tags.pop(key)
changed = True
return changed, new_tags
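# Example (comment only): with module params tags={'env': 'prod'} and
# append_tags=False, update_tags({'env': 'dev', 'owner': 'ops'}) returns
# (True, {'env': 'prod'}): 'env' is updated and 'owner' is removed.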
def has_tags(self, obj_tags, tag_list):
'''
Used in fact modules to compare object tags to list of parameter tags. Return true if list of parameter tags
exists in object tags.
:param obj_tags: dictionary of tags from an Azure object.
:param tag_list: list of tag keys or tag key:value pairs
:return: bool
'''
if not obj_tags and tag_list:
return False
if not tag_list:
return True
matches = 0
result = False
for tag in tag_list:
tag_key = tag
tag_value = None
if ':' in tag:
tag_key, tag_value = tag.split(':')
if tag_value and obj_tags.get(tag_key) == tag_value:
matches += 1
elif not tag_value and obj_tags.get(tag_key):
matches += 1
if matches == len(tag_list):
result = True
return result
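# Example (comment only): has_tags({'env': 'prod', 'owner': 'ops'}, ['env:prod', 'owner'])
# is True (both entries match), while has_tags({'env': 'prod'}, ['env:dev']) is False.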
def get_resource_group(self, resource_group):
'''
Fetch a resource group.
:param resource_group: name of a resource group
:return: resource group object
'''
try:
return self.rm_client.resource_groups.get(resource_group)
except CloudError as cloud_error:
self.fail("Error retrieving resource group {0} - {1}".format(resource_group, cloud_error.message))
except Exception as exc:
self.fail("Error retrieving resource group {0} - {1}".format(resource_group, str(exc)))
def parse_resource_to_dict(self, resource):
'''
Return a dict of the given resource, which contains name and resource group.
:param resource: can be a resource name, id, or a dict containing name and resource group.
'''
resource_dict = parse_resource_id(resource) if not isinstance(resource, dict) else resource
resource_dict['resource_group'] = resource_dict.get('resource_group', self.resource_group)
resource_dict['subscription_id'] = resource_dict.get('subscription_id', self.subscription_id)
return resource_dict
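# Example (comment only, behaviour assumed from msrestazure): a bare name such as
# 'myvm' comes back as {'name': 'myvm'} plus the module's default resource_group
# and subscription_id, while a full resource id is split up by parse_resource_id().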
def serialize_obj(self, obj, class_name, enum_modules=None):
'''
Return a JSON representation of an Azure object.
:param obj: Azure object
:param class_name: Name of the object's class
:param enum_modules: List of module names to build enum dependencies from.
:return: serialized result
'''
enum_modules = [] if enum_modules is None else enum_modules
dependencies = dict()
if enum_modules:
for module_name in enum_modules:
mod = importlib.import_module(module_name)
for mod_class_name, mod_class_obj in inspect.getmembers(mod, predicate=inspect.isclass):
dependencies[mod_class_name] = mod_class_obj
self.log("dependencies: ")
self.log(str(dependencies))
serializer = Serializer(classes=dependencies)
return serializer.body(obj, class_name, keep_readonly=True)
def get_poller_result(self, poller, wait=5):
'''
Consistent method of waiting on and retrieving results from Azure's long poller
:param poller Azure poller object
:return object resulting from the original request
'''
try:
delay = wait
while not poller.done():
self.log("Waiting for {0} sec".format(delay))
poller.wait(timeout=delay)
return poller.result()
except Exception as exc:
self.log(str(exc))
raise
def check_provisioning_state(self, azure_object, requested_state='present'):
'''
Check an Azure object's provisioning state. If something did not complete the provisioning
process, then we cannot operate on it.
:param azure_object An object such as a subnet, storageaccount, etc. Must have provisioning_state
and name attributes.
:return None
'''
if hasattr(azure_object, 'properties') and hasattr(azure_object.properties, 'provisioning_state') and \
hasattr(azure_object, 'name'):
# resource group object fits this model
if isinstance(azure_object.properties.provisioning_state, Enum):
if azure_object.properties.provisioning_state.value != AZURE_SUCCESS_STATE and \
requested_state != 'absent':
self.fail("Error {0} has a provisioning state of {1}. Expecting state to be {2}.".format(
azure_object.name, azure_object.properties.provisioning_state, AZURE_SUCCESS_STATE))
return
if azure_object.properties.provisioning_state != AZURE_SUCCESS_STATE and \
requested_state != 'absent':
self.fail("Error {0} has a provisioning state of {1}. Expecting state to be {2}.".format(
azure_object.name, azure_object.properties.provisioning_state, AZURE_SUCCESS_STATE))
return
if hasattr(azure_object, 'provisioning_state') or not hasattr(azure_object, 'name'):
if isinstance(azure_object.provisioning_state, Enum):
if azure_object.provisioning_state.value != AZURE_SUCCESS_STATE and requested_state != 'absent':
self.fail("Error {0} has a provisioning state of {1}. Expecting state to be {2}.".format(
azure_object.name, azure_object.provisioning_state, AZURE_SUCCESS_STATE))
return
if azure_object.provisioning_state != AZURE_SUCCESS_STATE and requested_state != 'absent':
self.fail("Error {0} has a provisioning state of {1}. Expecting state to be {2}.".format(
azure_object.name, azure_object.provisioning_state, AZURE_SUCCESS_STATE))
def get_blob_client(self, resource_group_name, storage_account_name, storage_blob_type='block'):
keys = dict()
try:
# Get keys from the storage account
self.log('Getting keys')
account_keys = self.storage_client.storage_accounts.list_keys(resource_group_name, storage_account_name)
except Exception as exc:
self.fail("Error getting keys for account {0} - {1}".format(storage_account_name, str(exc)))
try:
self.log('Create blob service')
if storage_blob_type == 'page':
return PageBlobService(endpoint_suffix=self._cloud_environment.suffixes.storage_endpoint,
account_name=storage_account_name,
account_key=account_keys.keys[0].value)
elif storage_blob_type == 'block':
return BlockBlobService(endpoint_suffix=self._cloud_environment.suffixes.storage_endpoint,
account_name=storage_account_name,
account_key=account_keys.keys[0].value)
else:
raise Exception("Invalid storage blob type defined.")
except Exception as exc:
self.fail("Error creating blob service client for storage account {0} - {1}".format(storage_account_name,
str(exc)))
def create_default_pip(self, resource_group, location, public_ip_name, allocation_method='Dynamic', sku=None):
'''
Create a default public IP address <public_ip_name> to associate with a network interface.
If a PIP address matching <public_ip_name> exists, return it. Otherwise, create one.
:param resource_group: name of an existing resource group
:param location: a valid azure location
:param public_ip_name: base name to assign the public IP address
:param allocation_method: one of 'Static' or 'Dynamic'
:param sku: sku
:return: PIP object
'''
pip = None
self.log("Starting create_default_pip {0}".format(public_ip_name))
self.log("Check to see if public IP {0} exists".format(public_ip_name))
try:
pip = self.network_client.public_ip_addresses.get(resource_group, public_ip_name)
except CloudError:
pass
if pip:
self.log("Public ip {0} found.".format(public_ip_name))
self.check_provisioning_state(pip)
return pip
params = self.network_models.PublicIPAddress(
location=location,
public_ip_allocation_method=allocation_method,
sku=sku
)
self.log('Creating default public IP {0}'.format(public_ip_name))
try:
poller = self.network_client.public_ip_addresses.create_or_update(resource_group, public_ip_name, params)
except Exception as exc:
self.fail("Error creating {0} - {1}".format(public_ip_name, str(exc)))
return self.get_poller_result(poller)
def create_default_securitygroup(self, resource_group, location, security_group_name, os_type, open_ports):
'''
Create a default security group <security_group_name> to associate with a network interface. If a security group matching
<security_group_name> exists, return it. Otherwise, create one.
:param resource_group: Resource group name
:param location: azure location name
:param security_group_name: base name to use for the security group
:param os_type: one of 'Windows' or 'Linux'. Determines any default rules added to the security group.
:param open_ports: list of custom ports to open instead of the OS-type defaults (SSH for Linux, RDP and WinRM for Windows).
:return: security_group object
'''
group = None
self.log("Create security group {0}".format(security_group_name))
self.log("Check to see if security group {0} exists".format(security_group_name))
try:
group = self.network_client.network_security_groups.get(resource_group, security_group_name)
except CloudError:
pass
if group:
self.log("Security group {0} found.".format(security_group_name))
self.check_provisioning_state(group)
return group
parameters = self.network_models.NetworkSecurityGroup()
parameters.location = location
if not open_ports:
# Open default ports based on OS type
if os_type == 'Linux':
# add an inbound SSH rule
parameters.security_rules = [
self.network_models.SecurityRule(protocol='Tcp',
source_address_prefix='*',
destination_address_prefix='*',
access='Allow',
direction='Inbound',
description='Allow SSH Access',
source_port_range='*',
destination_port_range='22',
priority=100,
name='SSH')
]
parameters.location = location
else:
# for windows add inbound RDP and WinRM rules
parameters.security_rules = [
self.network_models.SecurityRule(protocol='Tcp',
source_address_prefix='*',
destination_address_prefix='*',
access='Allow',
direction='Inbound',
description='Allow RDP port 3389',
source_port_range='*',
destination_port_range='3389',
priority=100,
name='RDP01'),
self.network_models.SecurityRule(protocol='Tcp',
source_address_prefix='*',
destination_address_prefix='*',
access='Allow',
direction='Inbound',
description='Allow WinRM HTTPS port 5986',
source_port_range='*',
destination_port_range='5986',
priority=101,
name='WinRM01'),
]
else:
# Open custom ports
parameters.security_rules = []
priority = 100
for port in open_ports:
priority += 1
rule_name = "Rule_{0}".format(priority)
parameters.security_rules.append(
self.network_models.SecurityRule(protocol='Tcp',
source_address_prefix='*',
destination_address_prefix='*',
access='Allow',
direction='Inbound',
source_port_range='*',
destination_port_range=str(port),
priority=priority,
name=rule_name)
)
self.log('Creating default security group {0}'.format(security_group_name))
try:
poller = self.network_client.network_security_groups.create_or_update(resource_group,
security_group_name,
parameters)
except Exception as exc:
self.fail("Error creating default security rule {0} - {1}".format(security_group_name, str(exc)))
return self.get_poller_result(poller)
@staticmethod
def _validation_ignore_callback(session, global_config, local_config, **kwargs):
session.verify = False
def get_api_profile(self, client_type_name, api_profile_name):
profile_all_clients = AZURE_API_PROFILES.get(api_profile_name)
if not profile_all_clients:
raise KeyError("unknown Azure API profile: {0}".format(api_profile_name))
profile_raw = profile_all_clients.get(client_type_name, None)
if not profile_raw:
self.module.warn("Azure API profile {0} does not define an entry for {1}".format(api_profile_name, client_type_name))
if isinstance(profile_raw, dict):
if not profile_raw.get('default_api_version'):
raise KeyError("Azure API profile {0} does not define 'default_api_version'".format(api_profile_name))
return profile_raw
# wrap basic strings in a dict that just defines the default
return dict(default_api_version=profile_raw)
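# Example (comment only): with the 'latest' profile table above,
# get_api_profile('NetworkManagementClient', 'latest') wraps the bare string as
# dict(default_api_version='2018-08-01'), whereas the 'ComputeManagementClient'
# entry is already a dict with per-operation overrides and is returned unchanged.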
def get_mgmt_svc_client(self, client_type, base_url=None, api_version=None):
self.log('Getting management service client {0}'.format(client_type.__name__))
self.check_client_version(client_type)
client_argspec = inspect.getargspec(client_type.__init__)
if not base_url:
# most things are resource_manager, don't make everyone specify
base_url = self.azure_auth._cloud_environment.endpoints.resource_manager
client_kwargs = dict(credentials=self.azure_auth.azure_credentials, subscription_id=self.azure_auth.subscription_id, base_url=base_url)
api_profile_dict = {}
if self.api_profile:
api_profile_dict = self.get_api_profile(client_type.__name__, self.api_profile)
# unversioned clients won't accept profile; only send it if necessary
# clients without a version specified in the profile will use the default
if api_profile_dict and 'profile' in client_argspec.args:
client_kwargs['profile'] = api_profile_dict
# If the client doesn't accept api_version, it's unversioned.
# If it does, favor explicitly-specified api_version, fall back to api_profile
if 'api_version' in client_argspec.args:
profile_default_version = api_profile_dict.get('default_api_version', None)
if api_version or profile_default_version:
client_kwargs['api_version'] = api_version or profile_default_version
if 'profile' in client_kwargs:
# remove profile; only pass API version if specified
client_kwargs.pop('profile')
client = client_type(**client_kwargs)
# FUTURE: remove this once everything exposes models directly (eg, containerinstance)
try:
getattr(client, "models")
except AttributeError:
def _ansible_get_models(self, *arg, **kwarg):
return self._ansible_models
setattr(client, '_ansible_models', importlib.import_module(client_type.__module__).models)
client.models = types.MethodType(_ansible_get_models, client)
client.config = self.add_user_agent(client.config)
if self.azure_auth._cert_validation_mode == 'ignore':
client.config.session_configuration_callback = self._validation_ignore_callback
return client
def add_user_agent(self, config):
# Add user agent for Ansible
config.add_user_agent(ANSIBLE_USER_AGENT)
# Add user agent when running from Cloud Shell
if CLOUDSHELL_USER_AGENT_KEY in os.environ:
config.add_user_agent(os.environ[CLOUDSHELL_USER_AGENT_KEY])
# Add user agent when running from VSCode extension
if VSCODEEXT_USER_AGENT_KEY in os.environ:
config.add_user_agent(os.environ[VSCODEEXT_USER_AGENT_KEY])
return config
def generate_sas_token(self, **kwargs):
# Build a SharedAccessSignature token of the kind used by the Azure Service Bus /
# Event Hubs data plane: HMAC-SHA256 over "<url>\n<expiry>", signed with the
# base64-decoded shared access key.
base_url = kwargs.get('base_url', None)
expiry = kwargs.get('expiry', time() + 3600)  # default: one hour from now
key = kwargs.get('key', None)
policy = kwargs.get('policy', None)
url = quote_plus(base_url)
ttl = int(expiry)
sign_key = '{0}\n{1}'.format(url, ttl)
signature = b64encode(HMAC(b64decode(key), sign_key.encode('utf-8'), sha256).digest())
result = {
'sr': url,
'sig': signature,
'se': str(ttl),
}
if policy:
result['skn'] = policy
return 'SharedAccessSignature ' + urlencode(result)
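# Example (comment only): generate_sas_token(base_url='myns.servicebus.windows.net',
# key='<base64 key>', policy='RootManageSharedAccessKey') yields a string shaped like
# 'SharedAccessSignature sr=...&sig=...&se=...&skn=RootManageSharedAccessKey'.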
def get_data_svc_client(self, **kwargs):
url = kwargs.get('base_url', None)
config = AzureConfiguration(base_url='https://{0}'.format(url))
config.credentials = AzureSASAuthentication(token=self.generate_sas_token(**kwargs))
config = self.add_user_agent(config)
return ServiceClient(creds=config.credentials, config=config)
# passthru methods to AzureAuth instance for backcompat
@property
def credentials(self):
return self.azure_auth.credentials
@property
def _cloud_environment(self):
return self.azure_auth._cloud_environment
@property
def subscription_id(self):
return self.azure_auth.subscription_id
@property
def storage_client(self):
self.log('Getting storage client...')
if not self._storage_client:
self._storage_client = self.get_mgmt_svc_client(StorageManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2018-07-01')
return self._storage_client
@property
def storage_models(self):
return StorageManagementClient.models("2018-07-01")
@property
def network_client(self):
self.log('Getting network client')
if not self._network_client:
self._network_client = self.get_mgmt_svc_client(NetworkManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2018-08-01')
return self._network_client
@property
def network_models(self):
self.log("Getting network models...")
return NetworkManagementClient.models("2018-08-01")
@property
def rm_client(self):
self.log('Getting resource manager client')
if not self._resource_client:
self._resource_client = self.get_mgmt_svc_client(ResourceManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2017-05-10')
return self._resource_client
@property
def rm_models(self):
self.log("Getting resource manager models")
return ResourceManagementClient.models("2017-05-10")
@property
def compute_client(self):
self.log('Getting compute client')
if not self._compute_client:
self._compute_client = self.get_mgmt_svc_client(ComputeManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2018-06-01')
return self._compute_client
@property
def compute_models(self):
self.log("Getting compute models")
return ComputeManagementClient.models("2018-06-01")
@property
def dns_client(self):
self.log('Getting dns client')
if not self._dns_client:
self._dns_client = self.get_mgmt_svc_client(DnsManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2018-05-01')
return self._dns_client
@property
def dns_models(self):
self.log("Getting dns models...")
return DnsManagementClient.models('2018-05-01')
@property
def web_client(self):
self.log('Getting web client')
if not self._web_client:
self._web_client = self.get_mgmt_svc_client(WebSiteManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2018-02-01')
return self._web_client
@property
def containerservice_client(self):
self.log('Getting container service client')
if not self._containerservice_client:
self._containerservice_client = self.get_mgmt_svc_client(ContainerServiceClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2017-07-01')
return self._containerservice_client
@property
def managedcluster_models(self):
self.log("Getting container service models")
return ContainerServiceClient.models('2018-03-31')
@property
def managedcluster_client(self):
self.log('Getting container service client')
if not self._managedcluster_client:
self._managedcluster_client = self.get_mgmt_svc_client(ContainerServiceClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2018-03-31')
return self._managedcluster_client
@property
def sql_client(self):
self.log('Getting SQL client')
if not self._sql_client:
self._sql_client = self.get_mgmt_svc_client(SqlManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._sql_client
@property
def postgresql_client(self):
self.log('Getting PostgreSQL client')
if not self._postgresql_client:
self._postgresql_client = self.get_mgmt_svc_client(PostgreSQLManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._postgresql_client
@property
def mysql_client(self):
self.log('Getting MySQL client')
if not self._mysql_client:
self._mysql_client = self.get_mgmt_svc_client(MySQLManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._mysql_client
@property
def mariadb_client(self):
self.log('Getting MariaDB client')
if not self._mariadb_client:
self._mariadb_client = self.get_mgmt_svc_client(MariaDBManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._mariadb_client
@property
def containerregistry_client(self):
self.log('Getting container registry mgmt client')
if not self._containerregistry_client:
self._containerregistry_client = self.get_mgmt_svc_client(ContainerRegistryManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2017-10-01')
return self._containerregistry_client
@property
def containerinstance_client(self):
self.log('Getting container instance mgmt client')
if not self._containerinstance_client:
self._containerinstance_client = self.get_mgmt_svc_client(ContainerInstanceManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2018-06-01')
return self._containerinstance_client
@property
def marketplace_client(self):
self.log('Getting marketplace agreement client')
if not self._marketplace_client:
self._marketplace_client = self.get_mgmt_svc_client(MarketplaceOrderingAgreements,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._marketplace_client
@property
def traffic_manager_management_client(self):
self.log('Getting traffic manager client')
if not self._traffic_manager_management_client:
self._traffic_manager_management_client = self.get_mgmt_svc_client(TrafficManagerManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._traffic_manager_management_client
@property
def monitor_client(self):
self.log('Getting monitor client')
if not self._monitor_client:
self._monitor_client = self.get_mgmt_svc_client(MonitorManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._monitor_client
@property
def log_analytics_client(self):
self.log('Getting log analytics client')
if not self._log_analytics_client:
self._log_analytics_client = self.get_mgmt_svc_client(LogAnalyticsManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._log_analytics_client
@property
def log_analytics_models(self):
self.log('Getting log analytics models')
return LogAnalyticsModels
@property
def servicebus_client(self):
self.log('Getting servicebus client')
if not self._servicebus_client:
self._servicebus_client = self.get_mgmt_svc_client(ServiceBusManagementClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._servicebus_client
@property
def servicebus_models(self):
return ServicebusModel
@property
def automation_client(self):
self.log('Getting automation client')
if not self._automation_client:
self._automation_client = self.get_mgmt_svc_client(AutomationClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._automation_client
@property
def automation_models(self):
return AutomationModel
@property
def IoThub_client(self):
self.log('Getting iothub client')
if not self._IoThub_client:
self._IoThub_client = self.get_mgmt_svc_client(IotHubClient,
base_url=self._cloud_environment.endpoints.resource_manager)
return self._IoThub_client
@property
def IoThub_models(self):
return IoTHubModels
@property
def lock_client(self):
self.log('Getting lock client')
if not self._lock_client:
self._lock_client = self.get_mgmt_svc_client(ManagementLockClient,
base_url=self._cloud_environment.endpoints.resource_manager,
api_version='2016-09-01')
return self._lock_client
@property
def lock_models(self):
self.log("Getting lock models")
return ManagementLockClient.models('2016-09-01')
class AzureSASAuthentication(Authentication):
"""Simple SAS Authentication.
An implementation of Authentication in
https://github.com/Azure/msrest-for-python/blob/0732bc90bdb290e5f58c675ffdd7dbfa9acefc93/msrest/authentication.py
:param str token: SAS token
"""
def __init__(self, token):
self.token = token
def signed_session(self):
session = super(AzureSASAuthentication, self).signed_session()
session.headers['Authorization'] = self.token
return session
class AzureRMAuthException(Exception):
pass
class AzureRMAuth(object):
def __init__(self, auth_source='auto', profile=None, subscription_id=None, client_id=None, secret=None,
tenant=None, ad_user=None, password=None, cloud_environment='AzureCloud', cert_validation_mode='validate',
api_profile='latest', adfs_authority_url=None, fail_impl=None, **kwargs):
if fail_impl:
self._fail_impl = fail_impl
else:
self._fail_impl = self._default_fail_impl
self._cloud_environment = None
self._adfs_authority_url = None
# authenticate
self.credentials = self._get_credentials(
dict(auth_source=auth_source, profile=profile, subscription_id=subscription_id, client_id=client_id, secret=secret,
tenant=tenant, ad_user=ad_user, password=password, cloud_environment=cloud_environment,
cert_validation_mode=cert_validation_mode, api_profile=api_profile, adfs_authority_url=adfs_authority_url))
if not self.credentials:
if HAS_AZURE_CLI_CORE:
self.fail("Failed to get credentials. Either pass as parameters, set environment variables, "
"define a profile in ~/.azure/credentials, or log in with Azure CLI (`az login`).")
else:
self.fail("Failed to get credentials. Either pass as parameters, set environment variables, "
"define a profile in ~/.azure/credentials, or install Azure CLI and log in (`az login`).")
# cert validation mode precedence: module-arg, credential profile, env, "validate"
self._cert_validation_mode = cert_validation_mode or self.credentials.get('cert_validation_mode') or \
os.environ.get('AZURE_CERT_VALIDATION_MODE') or 'validate'
if self._cert_validation_mode not in ['validate', 'ignore']:
self.fail('invalid cert_validation_mode: {0}'.format(self._cert_validation_mode))
# if cloud_environment specified, look up/build Cloud object
raw_cloud_env = self.credentials.get('cloud_environment')
if self.credentials.get('credentials') is not None and raw_cloud_env is not None:
self._cloud_environment = raw_cloud_env
elif not raw_cloud_env:
self._cloud_environment = azure_cloud.AZURE_PUBLIC_CLOUD # SDK default
else:
# try to look up "well-known" values via the name attribute on azure_cloud members
all_clouds = [x[1] for x in inspect.getmembers(azure_cloud) if isinstance(x[1], azure_cloud.Cloud)]
matched_clouds = [x for x in all_clouds if x.name == raw_cloud_env]
if len(matched_clouds) == 1:
self._cloud_environment = matched_clouds[0]
elif len(matched_clouds) > 1:
self.fail("Azure SDK failure: more than one cloud matched for cloud_environment name '{0}'".format(raw_cloud_env))
else:
if not urlparse.urlparse(raw_cloud_env).scheme:
self.fail("cloud_environment must be an endpoint discovery URL or one of {0}".format([x.name for x in all_clouds]))
try:
self._cloud_environment = azure_cloud.get_cloud_from_metadata_endpoint(raw_cloud_env)
except Exception as e:
self.fail("cloud_environment {0} could not be resolved: {1}".format(raw_cloud_env, e.message), exception=traceback.format_exc())
if self.credentials.get('subscription_id', None) is None and self.credentials.get('credentials') is None:
self.fail("Credentials did not include a subscription_id value.")
self.log("setting subscription_id")
self.subscription_id = self.credentials['subscription_id']
# get authentication authority
# for adfs, user could pass in authority or not.
# for others, use default authority from cloud environment
if self.credentials.get('adfs_authority_url') is None:
self._adfs_authority_url = self._cloud_environment.endpoints.active_directory
else:
self._adfs_authority_url = self.credentials.get('adfs_authority_url')
# get resource from cloud environment
self._resource = self._cloud_environment.endpoints.active_directory_resource_id
if self.credentials.get('credentials') is not None:
# AzureCLI credentials
self.azure_credentials = self.credentials['credentials']
elif self.credentials.get('client_id') is not None and \
self.credentials.get('secret') is not None and \
self.credentials.get('tenant') is not None:
self.azure_credentials = ServicePrincipalCredentials(client_id=self.credentials['client_id'],
secret=self.credentials['secret'],
tenant=self.credentials['tenant'],
cloud_environment=self._cloud_environment,
verify=self._cert_validation_mode == 'validate')
elif self.credentials.get('ad_user') is not None and \
self.credentials.get('password') is not None and \
self.credentials.get('client_id') is not None and \
self.credentials.get('tenant') is not None:
self.azure_credentials = self.acquire_token_with_username_password(
self._adfs_authority_url,
self._resource,
self.credentials['ad_user'],
self.credentials['password'],
self.credentials['client_id'],
self.credentials['tenant'])
elif self.credentials.get('ad_user') is not None and self.credentials.get('password') is not None:
tenant = self.credentials.get('tenant')
if not tenant:
tenant = 'common' # SDK default
self.azure_credentials = UserPassCredentials(self.credentials['ad_user'],
self.credentials['password'],
tenant=tenant,
cloud_environment=self._cloud_environment,
verify=self._cert_validation_mode == 'validate')
else:
self.fail("Failed to authenticate with provided credentials. Some attributes were missing. "
"Credentials must include client_id, secret and tenant or ad_user and password, or "
"ad_user, password, client_id, tenant and adfs_authority_url(optional) for ADFS authentication, or "
"be logged in using AzureCLI.")
def fail(self, msg, exception=None, **kwargs):
self._fail_impl(msg)
def _default_fail_impl(self, msg, exception=None, **kwargs):
raise AzureRMAuthException(msg)
def _get_profile(self, profile="default"):
path = expanduser("~/.azure/credentials")
try:
config = configparser.ConfigParser()
config.read(path)
except Exception as exc:
self.fail("Failed to access {0}. Check that the file exists and you have read "
"access. {1}".format(path, str(exc)))
credentials = dict()
for key in AZURE_CREDENTIAL_ENV_MAPPING:
try:
credentials[key] = config.get(profile, key, raw=True)
except Exception:
pass
if credentials.get('subscription_id'):
return credentials
return None
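# For reference, a ~/.azure/credentials profile read by _get_profile() looks
# like the following (placeholder values; any key listed in
# AZURE_CREDENTIAL_ENV_MAPPING is accepted):
#
#     [default]
#     subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
#     client_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
#     secret=my-service-principal-secret
#     tenant=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx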
def _get_msi_credentials(self, subscription_id_param=None, **kwargs):
client_id = kwargs.get('client_id', None)
credentials = MSIAuthentication(client_id=client_id)
subscription_id = subscription_id_param or os.environ.get(AZURE_CREDENTIAL_ENV_MAPPING['subscription_id'], None)
if not subscription_id:
try:
# use the first subscription of the MSI
subscription_client = SubscriptionClient(credentials)
subscription = next(subscription_client.subscriptions.list())
subscription_id = str(subscription.subscription_id)
except Exception as exc:
self.fail("Failed to get MSI token: {0}. "
"Please check whether your machine enabled MSI or grant access to any subscription.".format(str(exc)))
return {
'credentials': credentials,
'subscription_id': subscription_id
}
def _get_azure_cli_credentials(self):
credentials, subscription_id = get_azure_cli_credentials()
cloud_environment = get_cli_active_cloud()
cli_credentials = {
'credentials': credentials,
'subscription_id': subscription_id,
'cloud_environment': cloud_environment
}
return cli_credentials
def _get_env_credentials(self):
env_credentials = dict()
for attribute, env_variable in AZURE_CREDENTIAL_ENV_MAPPING.items():
env_credentials[attribute] = os.environ.get(env_variable, None)
if env_credentials['profile']:
credentials = self._get_profile(env_credentials['profile'])
return credentials
if env_credentials.get('subscription_id') is not None:
return env_credentials
return None
# TODO: use explicit kwargs instead of intermediate dict
def _get_credentials(self, params):
# Get authentication credentials.
self.log('Getting credentials')
arg_credentials = dict()
for attribute, env_variable in AZURE_CREDENTIAL_ENV_MAPPING.items():
arg_credentials[attribute] = params.get(attribute, None)
auth_source = params.get('auth_source', None)
if not auth_source:
auth_source = os.environ.get('ANSIBLE_AZURE_AUTH_SOURCE', 'auto')
if auth_source == 'msi':
self.log('Retrieving credentials from MSI')
return self._get_msi_credentials(arg_credentials['subscription_id'], client_id=params.get('client_id', None))
if auth_source == 'cli':
if not HAS_AZURE_CLI_CORE:
self.fail(msg=missing_required_lib('azure-cli', reason='for `cli` auth_source'),
exception=HAS_AZURE_CLI_CORE_EXC)
try:
self.log('Retrieving credentials from Azure CLI profile')
cli_credentials = self._get_azure_cli_credentials()
return cli_credentials
except CLIError as err:
self.fail("Azure CLI profile cannot be loaded - {0}".format(err))
if auth_source == 'env':
self.log('Retrieving credentials from environment')
env_credentials = self._get_env_credentials()
return env_credentials
if auth_source == 'credential_file':
self.log("Retrieving credentials from credential file")
profile = params.get('profile') or 'default'
default_credentials = self._get_profile(profile)
return default_credentials
# auto, precedence: module parameters -> environment variables -> default profile in ~/.azure/credentials
# try module params
if arg_credentials['profile'] is not None:
self.log('Retrieving credentials with profile parameter.')
credentials = self._get_profile(arg_credentials['profile'])
return credentials
if arg_credentials['subscription_id']:
self.log('Received credentials from parameters.')
return arg_credentials
# try environment
env_credentials = self._get_env_credentials()
if env_credentials:
self.log('Received credentials from env.')
return env_credentials
# try default profile from ~/.azure/credentials
default_credentials = self._get_profile()
if default_credentials:
self.log('Retrieved default profile credentials from ~/.azure/credentials.')
return default_credentials
try:
if HAS_AZURE_CLI_CORE:
self.log('Retrieving credentials from AzureCLI profile')
cli_credentials = self._get_azure_cli_credentials()
return cli_credentials
except CLIError as ce:
self.log('Error getting AzureCLI profile credentials - {0}'.format(ce))
return None
def acquire_token_with_username_password(self, authority, resource, username, password, client_id, tenant):
authority_uri = authority
if tenant is not None:
authority_uri = authority + '/' + tenant
context = AuthenticationContext(authority_uri)
token_response = context.acquire_token_with_username_password(resource, username, password, client_id)
return AADTokenCredentials(token_response)
def log(self, msg, pretty_print=False):
pass
# Use only during module development
# if self.debug:
# log_file = open('azure_rm.log', 'a')
# if pretty_print:
# log_file.write(json.dumps(msg, indent=4, sort_keys=True))
# else:
# log_file.write(msg + u'\n')
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,892 |
pacman contains deprecated call to be removed in 2.10
|
##### SUMMARY
pacman contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/modules/packaging/os/pacman.py:456:8: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
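For context, the flagged line is a module-side deprecation call of roughly this shape (a sketch mirroring the call in pacman.py shown further below):
```
module.deprecate('Option `recurse` is deprecated and will be removed in '
                 'version 2.10. Please use `extra_args=--recursive` '
                 'instead', '2.10')
```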
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/packaging/os/pacman.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61892
|
https://github.com/ansible/ansible/pull/61961
|
e55f46f3026f4bc17f5c51341bb568b0886e4dfe
|
25ac7042b070b22c5377f7a43399c19060a38966
| 2019-09-05T20:41:14Z |
python
| 2019-10-02T22:02:06Z |
changelogs/fragments/61961-pacman_remove_recurse_option.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,892 |
pacman contains deprecated call to be removed in 2.10
|
##### SUMMARY
pacman contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/modules/packaging/os/pacman.py:456:8: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/packaging/os/pacman.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61892
|
https://github.com/ansible/ansible/pull/61961
|
e55f46f3026f4bc17f5c51341bb568b0886e4dfe
|
25ac7042b070b22c5377f7a43399c19060a38966
| 2019-09-05T20:41:14Z |
python
| 2019-10-02T22:02:06Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
No notable changes
Modules
=======
No notable changes
Modules removed
---------------
The following modules no longer exist:
* letsencrypt: use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
No notable changes
Noteworthy module changes
-------------------------
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
Plugins
=======
No notable changes
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,892 |
pacman contains deprecated call to be removed in 2.10
|
##### SUMMARY
pacman contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/modules/packaging/os/pacman.py:456:8: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/packaging/os/pacman.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61892
|
https://github.com/ansible/ansible/pull/61961
|
e55f46f3026f4bc17f5c51341bb568b0886e4dfe
|
25ac7042b070b22c5377f7a43399c19060a38966
| 2019-09-05T20:41:14Z |
python
| 2019-10-02T22:02:06Z |
lib/ansible/modules/packaging/os/pacman.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Afterburn <https://github.com/afterburn>
# Copyright: (c) 2013, Aaron Bull Schaefer <[email protected]>
# Copyright: (c) 2015, Indrajit Raychaudhuri <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: pacman
short_description: Manage packages with I(pacman)
description:
- Manage packages with the I(pacman) package manager, which is used by Arch Linux and its variants.
version_added: "1.0"
author:
- Indrajit Raychaudhuri (@indrajitr)
- Aaron Bull Schaefer (@elasticdog) <[email protected]>
- Maxime de Roucy (@tchernomax)
options:
name:
description:
- Name or list of names of the package(s) or file(s) to install, upgrade, or remove.
Can't be used in combination with C(upgrade).
aliases: [ package, pkg ]
state:
description:
- Desired state of the package.
default: present
choices: [ absent, latest, present ]
recurse:
description:
- When removing a package, also remove its dependencies, provided
that they are not required by other packages and were not
explicitly installed by a user.
This option has been deprecated since 2.8 and will be removed
in 2.10; use `extra_args=--recursive` instead.
default: no
type: bool
version_added: "1.3"
force:
description:
- When removing packages, force removal without any checks.
Same as `extra_args="--nodeps --nodeps"`.
When used with C(update_cache), force a redownload of the repo databases.
Same as `update_cache_extra_args="--refresh --refresh"`.
default: no
type: bool
version_added: "2.0"
extra_args:
description:
- Additional option to pass to pacman when enforcing C(state).
default:
version_added: "2.8"
update_cache:
description:
- Whether or not to refresh the master package lists.
- This can be run as part of a package installation or as a separate step.
default: no
type: bool
aliases: [ update-cache ]
update_cache_extra_args:
description:
- Additional option to pass to pacman when enforcing C(update_cache).
default:
version_added: "2.8"
upgrade:
description:
- Whether or not to upgrade the whole system.
Can't be used in combination with C(name).
default: no
type: bool
version_added: "2.0"
upgrade_extra_args:
description:
- Additional option to pass to pacman when enforcing C(upgrade).
default:
version_added: "2.8"
notes:
- When used with a `loop:`, each package will be processed individually;
it is much more efficient to pass the list directly to the `name` option.
'''
RETURN = '''
packages:
description: a list of packages that have been changed
returned: when upgrade is set to yes
type: list
sample: [ package, other-package ]
'''
EXAMPLES = '''
- name: Install package foo from repo
pacman:
name: foo
state: present
- name: Install package bar from file
pacman:
name: ~/bar-1.0-1-any.pkg.tar.xz
state: present
- name: Install package foo from repo and bar from file
pacman:
name:
- foo
- ~/bar-1.0-1-any.pkg.tar.xz
state: present
- name: Upgrade package foo
pacman:
name: foo
state: latest
update_cache: yes
- name: Remove packages foo and bar
pacman:
name:
- foo
- bar
state: absent
- name: Recursively remove package baz
pacman:
name: baz
state: absent
extra_args: --recursive
- name: Run the equivalent of "pacman -Sy" as a separate step
pacman:
update_cache: yes
- name: Run the equivalent of "pacman -Su" as a separate step
pacman:
upgrade: yes
- name: Run the equivalent of "pacman -Syu" as a separate step
pacman:
update_cache: yes
upgrade: yes
- name: Run the equivalent of "pacman -Rdd", force remove package baz
pacman:
name: baz
state: absent
force: yes
'''
import re
from ansible.module_utils.basic import AnsibleModule
def get_version(pacman_output):
"""Take pacman -Qi or pacman -Si output and get the Version"""
lines = pacman_output.split('\n')
for line in lines:
if line.startswith('Version '):
return line.split(':')[1].strip()
return None
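# For reference (editor's note), a typical `pacman -Qi`/`-Si` line parsed here is:
#   Version         : 2.7.2-1
# which yields '2.7.2-1'. Note that split(':')[1] would truncate an
# epoch-qualified version such as '1:2.0-1' to '1'.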
def get_name(module, pacman_output):
"""Take pacman -Qi or pacman -Si output and get the package name"""
lines = pacman_output.split('\n')
for line in lines:
if line.startswith('Name '):
return line.split(':')[1].strip()
module.fail_json(msg="get_name: fail to retrieve package name from pacman output")
def query_package(module, pacman_path, name, state="present"):
"""Query the package status in both the local system and the repository. Returns a boolean to indicate if the package is installed, a second
boolean to indicate if the package is up-to-date and a third boolean to indicate whether online information was available.
"""
if state == "present":
lcmd = "%s --query --info %s" % (pacman_path, name)
lrc, lstdout, lstderr = module.run_command(lcmd, check_rc=False)
if lrc != 0:
# package is not installed locally
return False, False, False
else:
# a zero exit code doesn't always mean the package is installed
# for example, if the package name queried is "provided" by another package
installed_name = get_name(module, lstdout)
if installed_name != name:
return False, False, False
# get the version installed locally (if any)
lversion = get_version(lstdout)
rcmd = "%s --sync --info %s" % (pacman_path, name)
rrc, rstdout, rstderr = module.run_command(rcmd, check_rc=False)
# get the version in the repository
rversion = get_version(rstdout)
if rrc == 0:
# Return True to indicate that the package is installed locally, and the result of the version number comparison
# to determine if the package is up-to-date.
return True, (lversion == rversion), False
# package is installed but the remote version cannot be fetched; the last True flags the error
return True, True, True
def update_package_db(module, pacman_path):
if module.params['force']:
module.params["update_cache_extra_args"] += " --refresh --refresh"
cmd = "%s --sync --refresh %s" % (pacman_path, module.params["update_cache_extra_args"])
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc == 0:
return True
else:
module.fail_json(msg="could not update package db")
def upgrade(module, pacman_path):
cmdupgrade = "%s --sync --sysupgrade --quiet --noconfirm %s" % (pacman_path, module.params["upgrade_extra_args"])
cmdneedrefresh = "%s --query --upgrades" % (pacman_path)
rc, stdout, stderr = module.run_command(cmdneedrefresh, check_rc=False)
data = stdout.split('\n')
data.remove('')
packages = []
diff = {
'before': '',
'after': '',
}
if rc == 0:
# Match lines of `pacman -Qu` output of the form:
# (package name) (before version-release) -> (after version-release)
# e.g., "ansible 2.7.1-1 -> 2.7.2-1"
regex = re.compile(r'([\w+\-.@]+) (\S+-\S+) -> (\S+-\S+)')
for p in data:
m = regex.search(p)
packages.append(m.group(1))
if module._diff:
diff['before'] += "%s-%s\n" % (m.group(1), m.group(2))
diff['after'] += "%s-%s\n" % (m.group(1), m.group(3))
if module.check_mode:
module.exit_json(changed=True, msg="%s package(s) would be upgraded" % (len(data)), packages=packages, diff=diff)
rc, stdout, stderr = module.run_command(cmdupgrade, check_rc=False)
if rc == 0:
module.exit_json(changed=True, msg='System upgraded', packages=packages, diff=diff)
else:
module.fail_json(msg="Could not upgrade")
else:
module.exit_json(changed=False, msg='Nothing to upgrade', packages=packages)
def remove_packages(module, pacman_path, packages):
data = []
diff = {
'before': '',
'after': '',
}
if module.params["force"]:
module.params["extra_args"] += " --nodeps --nodeps"
remove_c = 0
# Using a for loop in case of error, we can report the package that failed
for package in packages:
# Query the package first, to see if we even need to remove
installed, updated, unknown = query_package(module, pacman_path, package)
if not installed:
continue
cmd = "%s --remove --noconfirm --noprogressbar %s %s" % (pacman_path, module.params["extra_args"], package)
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc != 0:
module.fail_json(msg="failed to remove %s" % (package))
if module._diff:
d = stdout.split('\n')[2].split(' ')[2:]
for i, pkg in enumerate(d):
d[i] = re.sub('-[0-9].*$', '', d[i].split('/')[-1])
diff['before'] += "%s\n" % pkg
data.append('\n'.join(d))
remove_c += 1
if remove_c > 0:
module.exit_json(changed=True, msg="removed %s package(s)" % remove_c, diff=diff)
module.exit_json(changed=False, msg="package(s) already absent")
def install_packages(module, pacman_path, state, packages, package_files):
install_c = 0
package_err = []
message = ""
data = []
diff = {
'before': '',
'after': '',
}
to_install_repos = []
to_install_files = []
for i, package in enumerate(packages):
# if the package is installed and state == present or state == latest and is up-to-date then skip
installed, updated, latestError = query_package(module, pacman_path, package)
if latestError and state == 'latest':
package_err.append(package)
if installed and (state == 'present' or (state == 'latest' and updated)):
continue
if package_files[i]:
to_install_files.append(package_files[i])
else:
to_install_repos.append(package)
if to_install_repos:
cmd = "%s --sync --noconfirm --noprogressbar --needed %s %s" % (pacman_path, module.params["extra_args"], " ".join(to_install_repos))
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc != 0:
module.fail_json(msg="failed to install %s: %s" % (" ".join(to_install_repos), stderr))
data = stdout.split('\n')[3].split(' ')[2:]
data = [i for i in data if i != '']
for i, pkg in enumerate(data):
data[i] = re.sub('-[0-9].*$', '', data[i].split('/')[-1])
if module._diff:
diff['after'] += "%s\n" % pkg
install_c += len(to_install_repos)
if to_install_files:
cmd = "%s --upgrade --noconfirm --noprogressbar --needed %s %s" % (pacman_path, module.params["extra_args"], " ".join(to_install_files))
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc != 0:
module.fail_json(msg="failed to install %s: %s" % (" ".join(to_install_files), stderr))
data = stdout.split('\n')[3].split(' ')[2:]
data = [i for i in data if i != '']
for i, pkg in enumerate(data):
data[i] = re.sub('-[0-9].*$', '', data[i].split('/')[-1])
if module._diff:
diff['after'] += "%s\n" % pkg
install_c += len(to_install_files)
if state == 'latest' and len(package_err) > 0:
message = "But could not ensure 'latest' state for %s package(s) as remote version could not be fetched." % (package_err)
if install_c > 0:
module.exit_json(changed=True, msg="installed %s package(s). %s" % (install_c, message), diff=diff)
module.exit_json(changed=False, msg="package(s) already installed. %s" % (message), diff=diff)
def check_packages(module, pacman_path, packages, state):
would_be_changed = []
diff = {
'before': '',
'after': '',
'before_header': '',
'after_header': ''
}
for package in packages:
installed, updated, unknown = query_package(module, pacman_path, package)
if ((state in ["present", "latest"] and not installed) or
(state == "absent" and installed) or
(state == "latest" and not updated)):
would_be_changed.append(package)
if would_be_changed:
if state == "absent":
state = "removed"
if module._diff and (state == 'removed'):
diff['before_header'] = 'removed'
diff['before'] = '\n'.join(would_be_changed) + '\n'
elif module._diff and ((state == 'present') or (state == 'latest')):
diff['after_header'] = 'installed'
diff['after'] = '\n'.join(would_be_changed) + '\n'
module.exit_json(changed=True, msg="%s package(s) would be %s" % (
len(would_be_changed), state), diff=diff)
else:
module.exit_json(changed=False, msg="package(s) already %s" % state, diff=diff)
def expand_package_groups(module, pacman_path, pkgs):
expanded = []
for pkg in pkgs:
if pkg: # avoid empty strings
cmd = "%s --sync --groups --quiet %s" % (pacman_path, pkg)
rc, stdout, stderr = module.run_command(cmd, check_rc=False)
if rc == 0:
# A group was found matching the name, so expand it
for name in stdout.split('\n'):
name = name.strip()
if name:
expanded.append(name)
else:
expanded.append(pkg)
return expanded
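# Editor's note: `pacman --sync --groups --quiet <name>` prints one member
# package per line when <name> is a package group, so group arguments expand
# to their members here while plain package names fall through unchanged.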
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(type='list', aliases=['pkg', 'package']),
state=dict(type='str', default='present', choices=['present', 'installed', 'latest', 'absent', 'removed']),
recurse=dict(type='bool', default=False),
force=dict(type='bool', default=False),
extra_args=dict(type='str', default=''),
upgrade=dict(type='bool', default=False),
upgrade_extra_args=dict(type='str', default=''),
update_cache=dict(type='bool', default=False, aliases=['update-cache']),
update_cache_extra_args=dict(type='str', default=''),
),
required_one_of=[['name', 'update_cache', 'upgrade']],
mutually_exclusive=[['name', 'upgrade']],
supports_check_mode=True,
)
pacman_path = module.get_bin_path('pacman', True)
p = module.params
# normalize the state parameter
if p['state'] in ['present', 'installed']:
p['state'] = 'present'
elif p['state'] in ['absent', 'removed']:
p['state'] = 'absent'
if p['recurse']:
module.deprecate('Option `recurse` is deprecated and will be removed in '
'version 2.10. Please use `extra_args=--recursive` '
'instead', '2.10')
if p['state'] == 'absent':
p['extra_args'] += " --recursive"
if p["update_cache"] and not module.check_mode:
update_package_db(module, pacman_path)
if not (p['name'] or p['upgrade']):
module.exit_json(changed=True, msg='Updated the package master lists')
if p['update_cache'] and module.check_mode and not (p['name'] or p['upgrade']):
module.exit_json(changed=True, msg='Would have updated the package cache')
if p['upgrade']:
upgrade(module, pacman_path)
if p['name']:
pkgs = expand_package_groups(module, pacman_path, p['name'])
pkg_files = []
for i, pkg in enumerate(pkgs):
if not pkg: # avoid empty strings
continue
elif re.match(r".*\.pkg\.tar(\.(gz|bz2|xz|lrz|lzo|Z))?$", pkg):
# The package given is a filename, extract the raw pkg name from
# it and store the filename
pkg_files.append(pkg)
pkgs[i] = re.sub(r'-[0-9].*$', '', pkgs[i].split('/')[-1])
else:
pkg_files.append(None)
if module.check_mode:
check_packages(module, pacman_path, pkgs, p['state'])
if p['state'] in ['present', 'latest']:
install_packages(module, pacman_path, p['state'], pkgs, pkg_files)
elif p['state'] == 'absent':
remove_packages(module, pacman_path, pkgs)
else:
module.exit_json(changed=False, msg="No package specified to work on.")
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,104 |
misdocumented - xml element text() value checks do not need escape delimiters for special chars
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
The latest and devel documentation shows an example that adds escape characters around double-quoted text,
i.e. escaping " as \\". This is no longer needed in recent Ansible versions (e.g. 2.8.5).
Current example :
- name: Add several more beers to the 'beers' element and add them before the 'Rochefort 10' element
xml:
path: /foo/bar.xml
xpath: '/business/beers/beer[text()=\\"Rochefort 10\\"]'
insertbefore: yes
add_children:
- beer: Old Rasputin
- beer: Old Motor Oil
- beer: Old Curmudgeon
Should be :
- name: Add several more beers to the 'beers' element and add them before the 'Rochefort 10' element
xml:
path: /foo/bar.xml
xpath: '/business/beers/beer[text()="Rochefort 10"]'
insertbefore: yes
add_children:
- beer: Old Rasputin
- beer: Old Motor Oil
- beer: Old Curmudgeon
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
xml module (label:m:xml)
https://docs.ansible.com/ansible/latest/modules/xml_module.html
https://docs.ansible.com/ansible/devel/modules/xml_module.html
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
[vagrant@azure-controller playbooks]$ ansible --version
ansible 2.8.5
config file = /home/vagrant/.ansible.cfg
configured module search path = [u'/home/vagrant/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
Not applicable
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
Not applicable
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
before:
- name: Add several more beers to the 'beers' element and add them before the 'Rochefort 10' element
xml:
path: /foo/bar.xml
xpath: '/business/beers/beer[text()=\\"Rochefort 10\\"]'
insertbefore: yes
add_children:
- beer: Old Rasputin
- beer: Old Motor Oil
- beer: Old Curmudgeon
after:
- name: Add several more beers to the 'beers' element and add them before the 'Rochefort 10' element
xml:
path: /foo/bar.xml
xpath: '/business/beers/beer[text()="Rochefort 10"]'
insertbefore: yes
add_children:
- beer: Old Rasputin
- beer: Old Motor Oil
- beer: Old Curmudgeon
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/63104
|
https://github.com/ansible/ansible/pull/63128
|
66de3d429f2454f8491f0633bf4298f3703c5db3
|
82a6f9d198bd428fc3ebb14baa19b503fc827fbc
| 2019-10-03T16:06:28Z |
python
| 2019-10-04T12:56:39Z |
lib/ansible/modules/files/xml.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2014, Red Hat, Inc.
# Copyright: (c) 2014, Tim Bielawa <[email protected]>
# Copyright: (c) 2014, Magnus Hedemark <[email protected]>
# Copyright: (c) 2017, Dag Wieers <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: xml
short_description: Manage bits and pieces of XML files or strings
description:
- A CRUD-like interface to managing bits of XML files.
version_added: '2.4'
options:
path:
description:
- Path to the file to operate on.
- This file must exist ahead of time.
- This parameter is required, unless C(xmlstring) is given.
type: path
required: yes
aliases: [ dest, file ]
xmlstring:
description:
- A string containing XML on which to operate.
- This parameter is required, unless C(path) is given.
type: str
required: yes
xpath:
description:
- A valid XPath expression describing the item(s) you want to manipulate.
- Operates on the document root, C(/), by default.
type: str
namespaces:
description:
- The namespace C(prefix:uri) mapping for the XPath expression.
- Needs to be a C(dict), not a C(list) of items.
type: dict
state:
description:
- Set or remove an xpath selection (node(s), attribute(s)).
type: str
choices: [ absent, present ]
default: present
aliases: [ ensure ]
attribute:
description:
- The attribute to select when using parameter C(value).
- This is a string, not prepended with C(@).
type: raw
value:
description:
- Desired state of the selected attribute.
- Either a string, or to unset a value, the Python C(None) keyword (YAML Equivalent, C(null)).
- Elements default to no value (but present).
- Attributes default to an empty string.
type: raw
add_children:
description:
- Add additional child-element(s) to a selected element for a given C(xpath).
- Child elements must be given in a list and each item may be either a string
(eg. C(children=ansible) to add an empty C(<ansible/>) child element),
or a hash where the key is an element name and the value is the element value.
- This parameter requires C(xpath) to be set.
type: list
set_children:
description:
- Set the child-element(s) of a selected element for a given C(xpath).
- Removes any existing children.
- Child elements must be specified as in C(add_children).
- This parameter requires C(xpath) to be set.
type: list
count:
description:
- Search for a given C(xpath) and provide the count of any matches.
- This parameter requires C(xpath) to be set.
type: bool
default: no
print_match:
description:
- Search for a given C(xpath) and print out any matches.
- This parameter requires C(xpath) to be set.
type: bool
default: no
pretty_print:
description:
- Pretty print XML output.
type: bool
default: no
content:
description:
- Search for a given C(xpath) and get content.
- This parameter requires C(xpath) to be set.
type: str
choices: [ attribute, text ]
input_type:
description:
- Type of input for C(add_children) and C(set_children).
type: str
choices: [ xml, yaml ]
default: yaml
backup:
description:
- Create a backup file including the timestamp information so you can get
the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
strip_cdata_tags:
description:
- Remove CDATA tags surrounding text values.
- Note that this might break your XML file if text values contain characters that could be interpreted as XML.
type: bool
default: no
version_added: '2.7'
insertbefore:
description:
- Add additional child-element(s) before the first selected element for a given C(xpath).
- Child elements must be given in a list and each item may be either a string
(eg. C(children=ansible) to add an empty C(<ansible/>) child element),
or a hash where the key is an element name and the value is the element value.
- This parameter requires C(xpath) to be set.
type: bool
default: no
version_added: '2.8'
insertafter:
description:
- Add additional child-element(s) after the last selected element for a given C(xpath).
- Child elements must be given in a list and each item may be either a string
(eg. C(children=ansible) to add an empty C(<ansible/>) child element),
or a hash where the key is an element name and the value is the element value.
- This parameter requires C(xpath) to be set.
type: bool
default: no
version_added: '2.8'
requirements:
- lxml >= 2.3.0
notes:
- Use the C(--check) and C(--diff) options when testing your expressions.
- The diff output is automatically pretty-printed, so may not reflect the actual file content, only the file structure.
- This module does not handle complicated xpath expressions, so limit xpath selectors to simple expressions.
- Beware that in case your XML elements are namespaced, you need to use the C(namespaces) parameter, see the examples.
- A namespace prefix should be used for all children of an element where a namespace is defined, unless another namespace is defined for them.
seealso:
- name: Xml module development community wiki
description: More information related to the development of this xml module.
link: https://github.com/ansible/community/wiki/Module:-xml
- name: Introduction to XPath
description: A brief tutorial on XPath (w3schools.com).
link: https://www.w3schools.com/xml/xpath_intro.asp
- name: XPath Reference document
description: The reference documentation on XSLT/XPath (developer.mozilla.org).
link: https://developer.mozilla.org/en-US/docs/Web/XPath
author:
- Tim Bielawa (@tbielawa)
- Magnus Hedemark (@magnus919)
- Dag Wieers (@dagwieers)
'''
EXAMPLES = r'''
# Consider the following XML file:
#
# <business type="bar">
# <name>Tasty Beverage Co.</name>
# <beers>
# <beer>Rochefort 10</beer>
# <beer>St. Bernardus Abbot 12</beer>
# <beer>Schlitz</beer>
# </beers>
# <rating subjective="true">10</rating>
# <website>
# <mobilefriendly/>
# <address>http://tastybeverageco.com</address>
# </website>
# </business>
- name: Remove the 'subjective' attribute of the 'rating' element
xml:
path: /foo/bar.xml
xpath: /business/rating/@subjective
state: absent
- name: Set the rating to '11'
xml:
path: /foo/bar.xml
xpath: /business/rating
value: 11
# Retrieve and display the number of nodes
- name: Get count of 'beers' nodes
xml:
path: /foo/bar.xml
xpath: /business/beers/beer
count: yes
register: hits
- debug:
var: hits.count
# Example where parent XML nodes are created automatically
- name: Add a 'phonenumber' element to the 'business' element
xml:
path: /foo/bar.xml
xpath: /business/phonenumber
value: 555-555-1234
- name: Add several more beers to the 'beers' element
xml:
path: /foo/bar.xml
xpath: /business/beers
add_children:
- beer: Old Rasputin
- beer: Old Motor Oil
- beer: Old Curmudgeon
- name: Add several more beers to the 'beers' element and add them before the 'Rochefort 10' element
xml:
path: /foo/bar.xml
xpath: '/business/beers/beer[text()=\"Rochefort 10\"]'
insertbefore: yes
add_children:
- beer: Old Rasputin
- beer: Old Motor Oil
- beer: Old Curmudgeon
# NOTE: The 'state' defaults to 'present' and 'value' defaults to 'null' for elements
- name: Add a 'validxhtml' element to the 'website' element
xml:
path: /foo/bar.xml
xpath: /business/website/validxhtml
- name: Add an empty 'validatedon' attribute to the 'validxhtml' element
xml:
path: /foo/bar.xml
xpath: /business/website/validxhtml/@validatedon
- name: Add or modify an attribute, add element if needed
xml:
path: /foo/bar.xml
xpath: /business/website/validxhtml
attribute: validatedon
value: 1976-08-05
# How to read an attribute value and access it in Ansible
- name: Read an element's attribute values
xml:
path: /foo/bar.xml
xpath: /business/website/validxhtml
content: attribute
register: xmlresp
- name: Show an attribute value
debug:
var: xmlresp.matches[0].validxhtml.validatedon
- name: Remove all children from the 'website' element (option 1)
xml:
path: /foo/bar.xml
xpath: /business/website/*
state: absent
- name: Remove all children from the 'website' element (option 2)
xml:
path: /foo/bar.xml
xpath: /business/website
children: []
# In case of namespaces, like in below XML, they have to be explicitly stated.
#
# <foo xmlns="http://x.test" xmlns:attr="http://z.test">
# <bar>
# <baz xmlns="http://y.test" attr:my_namespaced_attribute="true" />
# </bar>
# </foo>
# NOTE: There is the prefix 'x' in front of the 'bar' element, too.
- name: Set namespaced '/x:foo/x:bar/y:baz/@z:my_namespaced_attribute' to 'false'
xml:
path: foo.xml
xpath: /x:foo/x:bar/y:baz
namespaces:
x: http://x.test
y: http://y.test
z: http://z.test
attribute: z:my_namespaced_attribute
value: 'false'
'''
RETURN = r'''
actions:
description: A dictionary with the original xpath, namespaces and state.
type: dict
returned: success
sample: {xpath: xpath, namespaces: [namespace1, namespace2], state: present}
backup_file:
description: The name of the backup file that was created
type: str
returned: when backup=yes
sample: /path/to/file.xml.1942.2017-08-24@14:16:01~
count:
description: The count of xpath matches.
type: int
returned: when parameter 'count' is set
sample: 2
matches:
description: The xpath matches found.
type: list
returned: when parameter 'print_match' is set
msg:
description: A message related to the performed action(s).
type: str
returned: always
xmlstring:
description: An XML string of the resulting output.
type: str
returned: when parameter 'xmlstring' is set
'''
import copy
import json
import os
import re
import traceback
from distutils.version import LooseVersion
from io import BytesIO
LXML_IMP_ERR = None
try:
from lxml import etree, objectify
HAS_LXML = True
except ImportError:
LXML_IMP_ERR = traceback.format_exc()
HAS_LXML = False
from ansible.module_utils.basic import AnsibleModule, json_dict_bytes_to_unicode, missing_required_lib
from ansible.module_utils.six import iteritems, string_types
from ansible.module_utils._text import to_bytes, to_native
from ansible.module_utils.common._collections_compat import MutableMapping
_IDENT = r"[a-zA-Z-][a-zA-Z0-9_\-\.]*"
_NSIDENT = _IDENT + "|" + _IDENT + ":" + _IDENT
# Note: we can't reasonably support the 'if you need to put both ' and " in a string, concatenate
# strings wrapped by the other delimiter' XPath trick, especially as simple XPath.
_XPSTR = "('(?:.*)'|\"(?:.*)\")"
_RE_SPLITSIMPLELAST = re.compile("^(.*)/(" + _NSIDENT + ")$")
_RE_SPLITSIMPLELASTEQVALUE = re.compile("^(.*)/(" + _NSIDENT + ")/text\\(\\)=" + _XPSTR + "$")
_RE_SPLITSIMPLEATTRLAST = re.compile("^(.*)/(@(?:" + _NSIDENT + "))$")
_RE_SPLITSIMPLEATTRLASTEQVALUE = re.compile("^(.*)/(@(?:" + _NSIDENT + "))=" + _XPSTR + "$")
_RE_SPLITSUBLAST = re.compile("^(.*)/(" + _NSIDENT + ")\\[(.*)\\]$")
_RE_SPLITONLYEQVALUE = re.compile("^(.*)/text\\(\\)=" + _XPSTR + "$")
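# NOTE (editor's addition): orig_doc is a module-level snapshot of the freshly
# parsed document, assigned in main() (via `global orig_doc`) before any tree
# mutation, so has_changed() and finish() can diff the current tree against it.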
def has_changed(doc):
orig_obj = etree.tostring(objectify.fromstring(etree.tostring(orig_doc)))
obj = etree.tostring(objectify.fromstring(etree.tostring(doc)))
return (orig_obj != obj)
def do_print_match(module, tree, xpath, namespaces):
match = tree.xpath(xpath, namespaces=namespaces)
match_xpaths = []
for m in match:
match_xpaths.append(tree.getpath(m))
match_str = json.dumps(match_xpaths)
msg = "selector '%s' match: %s" % (xpath, match_str)
finish(module, tree, xpath, namespaces, changed=False, msg=msg)
def count_nodes(module, tree, xpath, namespaces):
""" Return the count of nodes matching the xpath """
hits = tree.xpath("count(/%s)" % xpath, namespaces=namespaces)
msg = "found %d nodes" % hits
finish(module, tree, xpath, namespaces, changed=False, msg=msg, hitcount=int(hits))
def is_node(tree, xpath, namespaces):
""" Test if a given xpath matches anything and if that match is a node.
For now we just assume you're only searching for one specific thing."""
if xpath_matches(tree, xpath, namespaces):
# OK, it found something
match = tree.xpath(xpath, namespaces=namespaces)
if isinstance(match[0], etree._Element):
return True
return False
def is_attribute(tree, xpath, namespaces):
""" Test if a given xpath matches and that match is an attribute
An xpath attribute search will only match one item"""
if xpath_matches(tree, xpath, namespaces):
match = tree.xpath(xpath, namespaces=namespaces)
if isinstance(match[0], etree._ElementStringResult):
return True
elif isinstance(match[0], etree._ElementUnicodeResult):
return True
return False
def xpath_matches(tree, xpath, namespaces):
""" Test if a node exists """
if tree.xpath(xpath, namespaces=namespaces):
return True
return False
def delete_xpath_target(module, tree, xpath, namespaces):
""" Delete an attribute or element from a tree """
try:
for result in tree.xpath(xpath, namespaces=namespaces):
# Get the xpath for this result
if is_attribute(tree, xpath, namespaces):
# Delete an attribute
parent = result.getparent()
# Pop this attribute match out of the parent
# node's 'attrib' dict by using this match's
# 'attrname' attribute for the key
parent.attrib.pop(result.attrname)
elif is_node(tree, xpath, namespaces):
# Delete an element
result.getparent().remove(result)
else:
raise Exception("Impossible error")
except Exception as e:
module.fail_json(msg="Couldn't delete xpath target: %s (%s)" % (xpath, e))
else:
finish(module, tree, xpath, namespaces, changed=True)
def replace_children_of(children, match):
for element in match.getchildren():
match.remove(element)
match.extend(children)
def set_target_children_inner(module, tree, xpath, namespaces, children, in_type):
matches = tree.xpath(xpath, namespaces=namespaces)
# Create a list of our new children
children = children_to_nodes(module, children, in_type)
children_as_string = [etree.tostring(c) for c in children]
changed = False
# xpaths always return matches as a list, so....
for match in matches:
# Check if elements differ
if len(match.getchildren()) == len(children):
for idx, element in enumerate(match.getchildren()):
if etree.tostring(element) != children_as_string[idx]:
replace_children_of(children, match)
changed = True
break
else:
replace_children_of(children, match)
changed = True
return changed
def set_target_children(module, tree, xpath, namespaces, children, in_type):
changed = set_target_children_inner(module, tree, xpath, namespaces, children, in_type)
# Write it out
finish(module, tree, xpath, namespaces, changed=changed)
def add_target_children(module, tree, xpath, namespaces, children, in_type, insertbefore, insertafter):
if is_node(tree, xpath, namespaces):
new_kids = children_to_nodes(module, children, in_type)
if insertbefore or insertafter:
insert_target_children(tree, xpath, namespaces, new_kids, insertbefore, insertafter)
else:
for node in tree.xpath(xpath, namespaces=namespaces):
node.extend(new_kids)
finish(module, tree, xpath, namespaces, changed=True)
else:
finish(module, tree, xpath, namespaces)
def insert_target_children(tree, xpath, namespaces, children, insertbefore, insertafter):
"""
Insert the given children before or after the given xpath. If insertbefore is True, they are inserted before the
first xpath hit; with insertafter, they are inserted after the last xpath hit.
"""
insert_target = tree.xpath(xpath, namespaces=namespaces)
loc_index = 0 if insertbefore else -1
index_in_parent = insert_target[loc_index].getparent().index(insert_target[loc_index])
parent = insert_target[0].getparent()
if insertafter:
index_in_parent += 1
for child in children:
parent.insert(index_in_parent, child)
index_in_parent += 1
def _extract_xpstr(g):
return g[1:-1]
def split_xpath_last(xpath):
"""split an XPath of the form /foo/bar/baz into /foo/bar and baz"""
xpath = xpath.strip()
m = _RE_SPLITSIMPLELAST.match(xpath)
if m:
# requesting an element to exist
return (m.group(1), [(m.group(2), None)])
m = _RE_SPLITSIMPLELASTEQVALUE.match(xpath)
if m:
# requesting an element to exist with an inner text
return (m.group(1), [(m.group(2), _extract_xpstr(m.group(3)))])
m = _RE_SPLITSIMPLEATTRLAST.match(xpath)
if m:
# requesting an attribute to exist
return (m.group(1), [(m.group(2), None)])
m = _RE_SPLITSIMPLEATTRLASTEQVALUE.match(xpath)
if m:
# requesting an attribute to exist with a value
return (m.group(1), [(m.group(2), _extract_xpstr(m.group(3)))])
m = _RE_SPLITSUBLAST.match(xpath)
if m:
content = [x.strip() for x in m.group(3).split(" and ")]
return (m.group(1), [('/' + m.group(2), content)])
m = _RE_SPLITONLYEQVALUE.match(xpath)
if m:
# requesting a change of inner text
return (m.group(1), [("", _extract_xpstr(m.group(2)))])
return (xpath, [])
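# Illustrative decompositions (editor's sketch, using xpaths from EXAMPLES):
#   split_xpath_last('/business/beers/beer')
#     -> ('/business/beers', [('beer', None)])
#   split_xpath_last('/business/rating/@subjective')
#     -> ('/business/rating', [('@subjective', None)])
#   split_xpath_last("/business/beers/beer[text()='Schlitz']")
#     -> ('/business/beers', [('/beer', ["text()='Schlitz'"])])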
def nsnameToClark(name, namespaces):
if ":" in name:
(nsname, rawname) = name.split(":")
# return "{{%s}}%s" % (namespaces[nsname], rawname)
return "{{{0}}}{1}".format(namespaces[nsname], rawname)
# no namespace name here
return name
def check_or_make_target(module, tree, xpath, namespaces):
(inner_xpath, changes) = split_xpath_last(xpath)
if (inner_xpath == xpath) or (changes is None):
module.fail_json(msg="Can't process Xpath %s in order to spawn nodes! tree is %s" %
(xpath, etree.tostring(tree, pretty_print=True)))
return False
changed = False
if not is_node(tree, inner_xpath, namespaces):
changed = check_or_make_target(module, tree, inner_xpath, namespaces)
# we test again after calling check_or_make_target
if is_node(tree, inner_xpath, namespaces) and changes:
for (eoa, eoa_value) in changes:
if eoa and eoa[0] != '@' and eoa[0] != '/':
# implicitly creating an element
new_kids = children_to_nodes(module, [nsnameToClark(eoa, namespaces)], "yaml")
if eoa_value:
for nk in new_kids:
nk.text = eoa_value
for node in tree.xpath(inner_xpath, namespaces=namespaces):
node.extend(new_kids)
changed = True
# module.fail_json(msg="now tree=%s" % etree.tostring(tree, pretty_print=True))
elif eoa and eoa[0] == '/':
element = eoa[1:]
new_kids = children_to_nodes(module, [nsnameToClark(element, namespaces)], "yaml")
for node in tree.xpath(inner_xpath, namespaces=namespaces):
node.extend(new_kids)
for nk in new_kids:
for subexpr in eoa_value:
# module.fail_json(msg="element=%s subexpr=%s node=%s now tree=%s" %
# (element, subexpr, etree.tostring(node, pretty_print=True), etree.tostring(tree, pretty_print=True))
check_or_make_target(module, nk, "./" + subexpr, namespaces)
changed = True
# module.fail_json(msg="now tree=%s" % etree.tostring(tree, pretty_print=True))
elif eoa == "":
for node in tree.xpath(inner_xpath, namespaces=namespaces):
if (node.text != eoa_value):
node.text = eoa_value
changed = True
elif eoa and eoa[0] == '@':
attribute = nsnameToClark(eoa[1:], namespaces)
for element in tree.xpath(inner_xpath, namespaces=namespaces):
changing = (attribute not in element.attrib or element.attrib[attribute] != eoa_value)
if changing:
changed = changed or changing
if eoa_value is None:
value = ""
else:
value = eoa_value
element.attrib[attribute] = value
# module.fail_json(msg="arf %s changing=%s as curval=%s changed tree=%s" %
# (xpath, changing, etree.tostring(tree, changing, element[attribute], pretty_print=True)))
else:
module.fail_json(msg="unknown tree transformation=%s" % etree.tostring(tree, pretty_print=True))
return changed
def ensure_xpath_exists(module, tree, xpath, namespaces):
changed = False
if not is_node(tree, xpath, namespaces):
changed = check_or_make_target(module, tree, xpath, namespaces)
finish(module, tree, xpath, namespaces, changed)
def set_target_inner(module, tree, xpath, namespaces, attribute, value):
changed = False
try:
if not is_node(tree, xpath, namespaces):
changed = check_or_make_target(module, tree, xpath, namespaces)
except Exception as e:
missing_namespace = ""
# NOTE: This checks only the namespaces defined in root element!
# TODO: Implement a more robust check to check for child namespaces' existence
if tree.getroot().nsmap and ":" not in xpath:
missing_namespace = "XML document has namespace(s) defined, but no namespace prefix(es) used in xpath!\n"
module.fail_json(msg="%sXpath %s causes a failure: %s\n -- tree is %s" %
(missing_namespace, xpath, e, etree.tostring(tree, pretty_print=True)), exception=traceback.format_exc())
if not is_node(tree, xpath, namespaces):
module.fail_json(msg="Xpath %s does not reference a node! tree is %s" %
(xpath, etree.tostring(tree, pretty_print=True)))
for element in tree.xpath(xpath, namespaces=namespaces):
if not attribute:
changed = changed or (element.text != value)
if element.text != value:
element.text = value
else:
changed = changed or (element.get(attribute) != value)
if ":" in attribute:
attr_ns, attr_name = attribute.split(":")
# attribute = "{{%s}}%s" % (namespaces[attr_ns], attr_name)
attribute = "{{{0}}}{1}".format(namespaces[attr_ns], attr_name)
if element.get(attribute) != value:
element.set(attribute, value)
return changed
def set_target(module, tree, xpath, namespaces, attribute, value):
changed = set_target_inner(module, tree, xpath, namespaces, attribute, value)
finish(module, tree, xpath, namespaces, changed)
def get_element_text(module, tree, xpath, namespaces):
if not is_node(tree, xpath, namespaces):
module.fail_json(msg="Xpath %s does not reference a node!" % xpath)
elements = []
for element in tree.xpath(xpath, namespaces=namespaces):
elements.append({element.tag: element.text})
finish(module, tree, xpath, namespaces, changed=False, msg=len(elements), hitcount=len(elements), matches=elements)
def get_element_attr(module, tree, xpath, namespaces):
if not is_node(tree, xpath, namespaces):
module.fail_json(msg="Xpath %s does not reference a node!" % xpath)
elements = []
for element in tree.xpath(xpath, namespaces=namespaces):
child = {}
for key in element.keys():
value = element.get(key)
child.update({key: value})
elements.append({element.tag: child})
finish(module, tree, xpath, namespaces, changed=False, msg=len(elements), hitcount=len(elements), matches=elements)
def child_to_element(module, child, in_type):
if in_type == 'xml':
infile = BytesIO(to_bytes(child, errors='surrogate_or_strict'))
try:
parser = etree.XMLParser()
node = etree.parse(infile, parser)
return node.getroot()
except etree.XMLSyntaxError as e:
module.fail_json(msg="Error while parsing child element: %s" % e)
elif in_type == 'yaml':
if isinstance(child, string_types):
return etree.Element(child)
elif isinstance(child, MutableMapping):
if len(child) > 1:
module.fail_json(msg="Can only create children from hashes with one key")
(key, value) = next(iteritems(child))
if isinstance(value, MutableMapping):
children = value.pop('_', None)
node = etree.Element(key, value)
if children is not None:
if not isinstance(children, list):
module.fail_json(msg="Invalid children type: %s, must be list." % type(children))
subnodes = children_to_nodes(module, children)
node.extend(subnodes)
else:
node = etree.Element(key)
node.text = value
return node
else:
module.fail_json(msg="Invalid child type: %s. Children must be either strings or hashes." % type(child))
else:
module.fail_json(msg="Invalid child input type: %s. Type must be either xml or yaml." % in_type)
def children_to_nodes(module=None, children=None, type='yaml'):
"""turn a str/hash/list of str&hash into a list of elements"""
children = [] if children is None else children
return [child_to_element(module, child, type) for child in children]
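# Editor's sketch of the YAML child specifications these helpers accept:
#   'mobilefriendly'                        -> <mobilefriendly/>
#   {'beer': 'Old Rasputin'}                -> <beer>Old Rasputin</beer>
#   {'beer': {'type': 'ale', '_': [...]}}   -> <beer type="ale"> with the '_'
#                                              list becoming nested children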
def make_pretty(module, tree):
xml_string = etree.tostring(tree, xml_declaration=True, encoding='UTF-8', pretty_print=module.params['pretty_print'])
result = dict(
changed=False,
)
if module.params['path']:
xml_file = module.params['path']
with open(xml_file, 'rb') as xml_content:
if xml_string != xml_content.read():
result['changed'] = True
if not module.check_mode:
if module.params['backup']:
result['backup_file'] = module.backup_local(module.params['path'])
tree.write(xml_file, xml_declaration=True, encoding='UTF-8', pretty_print=module.params['pretty_print'])
elif module.params['xmlstring']:
result['xmlstring'] = xml_string
# NOTE: Modifying a string is not considered a change!
if xml_string != module.params['xmlstring']:
result['changed'] = True
module.exit_json(**result)
def finish(module, tree, xpath, namespaces, changed=False, msg='', hitcount=0, matches=tuple()):
result = dict(
actions=dict(
xpath=xpath,
namespaces=namespaces,
state=module.params['state']
),
changed=has_changed(tree),
)
if module.params['count'] or hitcount:
result['count'] = hitcount
if module.params['print_match'] or matches:
result['matches'] = matches
if msg:
result['msg'] = msg
if result['changed']:
if module._diff:
result['diff'] = dict(
before=etree.tostring(orig_doc, xml_declaration=True, encoding='UTF-8', pretty_print=True),
after=etree.tostring(tree, xml_declaration=True, encoding='UTF-8', pretty_print=True),
)
if module.params['path'] and not module.check_mode:
if module.params['backup']:
result['backup_file'] = module.backup_local(module.params['path'])
tree.write(module.params['path'], xml_declaration=True, encoding='UTF-8', pretty_print=module.params['pretty_print'])
if module.params['xmlstring']:
result['xmlstring'] = etree.tostring(tree, xml_declaration=True, encoding='UTF-8', pretty_print=module.params['pretty_print'])
module.exit_json(**result)
def main():
module = AnsibleModule(
argument_spec=dict(
path=dict(type='path', aliases=['dest', 'file']),
xmlstring=dict(type='str'),
xpath=dict(type='str'),
namespaces=dict(type='dict', default={}),
state=dict(type='str', default='present', choices=['absent', 'present'], aliases=['ensure']),
value=dict(type='raw'),
attribute=dict(type='raw'),
add_children=dict(type='list'),
set_children=dict(type='list'),
count=dict(type='bool', default=False),
print_match=dict(type='bool', default=False),
pretty_print=dict(type='bool', default=False),
content=dict(type='str', choices=['attribute', 'text']),
input_type=dict(type='str', default='yaml', choices=['xml', 'yaml']),
backup=dict(type='bool', default=False),
strip_cdata_tags=dict(type='bool', default=False),
insertbefore=dict(type='bool', default=False),
insertafter=dict(type='bool', default=False),
),
supports_check_mode=True,
required_by=dict(
add_children=['xpath'],
# TODO: Reinstate this in Ansible v2.12 when we have deprecated the incorrect use below
# attribute=['value'],
content=['xpath'],
set_children=['xpath'],
value=['xpath'],
),
required_if=[
['count', True, ['xpath']],
['print_match', True, ['xpath']],
['insertbefore', True, ['xpath']],
['insertafter', True, ['xpath']],
],
required_one_of=[
['path', 'xmlstring'],
['add_children', 'content', 'count', 'pretty_print', 'print_match', 'set_children', 'value'],
],
mutually_exclusive=[
['add_children', 'content', 'count', 'print_match', 'set_children', 'value'],
['path', 'xmlstring'],
['insertbefore', 'insertafter'],
],
)
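    # Parameter dependency rules above, in words (illustrative summary):
    # add_children/set_children/content/value each require xpath;
    # count/print_match/insertbefore/insertafter set to true also require
    # xpath; exactly one of path or xmlstring must be given; and the editing
    # options (add_children, content, count, print_match, set_children, value)
    # are mutually exclusive.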
xml_file = module.params['path']
xml_string = module.params['xmlstring']
xpath = module.params['xpath']
namespaces = module.params['namespaces']
state = module.params['state']
value = json_dict_bytes_to_unicode(module.params['value'])
attribute = module.params['attribute']
set_children = json_dict_bytes_to_unicode(module.params['set_children'])
add_children = json_dict_bytes_to_unicode(module.params['add_children'])
pretty_print = module.params['pretty_print']
content = module.params['content']
input_type = module.params['input_type']
print_match = module.params['print_match']
count = module.params['count']
backup = module.params['backup']
strip_cdata_tags = module.params['strip_cdata_tags']
insertbefore = module.params['insertbefore']
insertafter = module.params['insertafter']
# Check if we have lxml 2.3.0 or newer installed
if not HAS_LXML:
module.fail_json(msg=missing_required_lib("lxml"), exception=LXML_IMP_ERR)
elif LooseVersion('.'.join(to_native(f) for f in etree.LXML_VERSION)) < LooseVersion('2.3.0'):
module.fail_json(msg='The xml ansible module requires lxml 2.3.0 or newer installed on the managed machine')
elif LooseVersion('.'.join(to_native(f) for f in etree.LXML_VERSION)) < LooseVersion('3.0.0'):
module.warn('Using lxml version lower than 3.0.0 does not guarantee predictable element attribute order.')
# Report wrongly used attribute parameter when using content=attribute
# TODO: Remove this in Ansible v2.12 (and reinstate strict parameter test above) and remove the integration test example
if content == 'attribute' and attribute is not None:
        module.deprecate("Parameter 'attribute=%s' is ignored when using 'content=attribute'; only 'xpath' is used. Please remove the entry." % attribute, '2.12')
# Check if the file exists
if xml_string:
infile = BytesIO(to_bytes(xml_string, errors='surrogate_or_strict'))
elif os.path.isfile(xml_file):
infile = open(xml_file, 'rb')
else:
module.fail_json(msg="The target XML source '%s' does not exist." % xml_file)
# Parse and evaluate xpath expression
if xpath is not None:
try:
etree.XPath(xpath)
except etree.XPathSyntaxError as e:
module.fail_json(msg="Syntax error in xpath expression: %s (%s)" % (xpath, e))
except etree.XPathEvalError as e:
module.fail_json(msg="Evaluation error in xpath expression: %s (%s)" % (xpath, e))
# Try to parse in the target XML file
try:
parser = etree.XMLParser(remove_blank_text=pretty_print, strip_cdata=strip_cdata_tags)
doc = etree.parse(infile, parser)
except etree.XMLSyntaxError as e:
module.fail_json(msg="Error while parsing document: %s (%s)" % (xml_file or 'xml_string', e))
# Ensure we have the original copy to compare
global orig_doc
orig_doc = copy.deepcopy(doc)
if print_match:
do_print_match(module, doc, xpath, namespaces)
if count:
count_nodes(module, doc, xpath, namespaces)
if content == 'attribute':
get_element_attr(module, doc, xpath, namespaces)
elif content == 'text':
get_element_text(module, doc, xpath, namespaces)
# File exists:
if state == 'absent':
# - absent: delete xpath target
delete_xpath_target(module, doc, xpath, namespaces)
# - present: carry on
# children && value both set?: should have already aborted by now
# add_children && set_children both set?: should have already aborted by now
# set_children set?
if set_children:
set_target_children(module, doc, xpath, namespaces, set_children, input_type)
# add_children set?
if add_children:
add_target_children(module, doc, xpath, namespaces, add_children, input_type, insertbefore, insertafter)
# No?: Carry on
# Is the xpath target an attribute selector?
if value is not None:
set_target(module, doc, xpath, namespaces, attribute, value)
# If an xpath was provided, we need to do something with the data
if xpath is not None:
ensure_xpath_exists(module, doc, xpath, namespaces)
# Otherwise only reformat the xml data?
if pretty_print:
make_pretty(module, doc)
module.fail_json(msg="Don't know what to do")
if __name__ == '__main__':
main()
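As a standalone illustration of the xpath pre-validation performed in main() above, the same check can be reproduced with lxml alone; this is a minimal sketch, and the sample expressions are hypothetical inputs rather than values taken from the module:

```python
# Minimal sketch: pre-validate an XPath expression by compiling it with lxml,
# mirroring the try/except around etree.XPath() in the module's main().
from lxml import etree

def validate_xpath(expression):
    try:
        etree.XPath(expression)  # compiling raises on malformed expressions
        return True, None
    except etree.XPathSyntaxError as e:
        return False, "Syntax error in xpath expression: %s (%s)" % (expression, e)

# Hypothetical inputs for demonstration:
print(validate_xpath('/business/beers/beer'))  # -> (True, None)
print(validate_xpath('/business/[bad'))        # -> (False, 'Syntax error ...')
```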
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,083 |
ios_bgp - Not idempotent when using networks on classful boundaries
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When using a classful network boundary for a network, the router will not show "mask x.x.x.x" in the configuration. The module therefore thinks a configuration change is needed, so the task is not fully idempotent.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ios_bgp
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /Users/joshv/Github/ansible-using_ios/ansible.cfg
configured module search path = ['/etc/ansible/library']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.2 (default, Feb 10 2019, 15:44:18) [Clang 10.0.0 (clang-1000.11.45.5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_GATHERING(/Users/joshv/Github/ansible-using_ios/ansible.cfg) = explicit
DEFAULT_HOST_LIST(/Users/joshv/Github/ansible-using_ios/ansible.cfg) = ['/Users/joshv/Github/ansible-using_ios/hosts']
DEFAULT_MODULE_PATH(/Users/joshv/Github/ansible-using_ios/ansible.cfg) = ['/etc/ansible/library']
HOST_KEY_CHECKING(/Users/joshv/Github/ansible-using_ios/ansible.cfg) = False
INTERPRETER_PYTHON(/Users/joshv/Github/ansible-using_ios/ansible.cfg) = /usr/local/bin/python3
RETRY_FILES_ENABLED(/Users/joshv/Github/ansible-using_ios/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
MacOS -> Cisco IOS
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Advertise network 198.51.100.0/24:
networks:
- prefix: 198.51.100.0
masklen: 24
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "TASK 1: Setup iBGP Peer"
ios_bgp:
config:
bgp_as: 65500
router_id: 10.0.0.3
log_neighbor_changes: true
neighbors:
- neighbor: 198.51.100.2
remote_as: 65500
activate: true
timers:
keepalive: 15
holdtime: 45
min_neighbor_holdtime: 5
description: R4
networks:
- prefix: 198.51.100.0
masklen: 24
- prefix: 203.0.113.0
masklen: 24
address_family:
- afi: ipv4
safi: unicast
neighbors:
- neighbor: 198.51.100.2
activate: yes
next_hop_self: yes
operation: merge
register: ibgp_peer1
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I'd expect that the second run would be idempotent and not change. I'm getting a changed message instead of OK.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The module attempted to push the following commands:
<!--- Paste verbatim command output between quotes -->
```paste below
"commands": [
"router bgp 65500",
"network 198.51.100.0 mask 255.255.255.0",
"address-family ipv4",
"no auto-summary",
"exit-address-family",
"exit"
],
"failed": false
```
I'm thinking of how to take a stab at this in the coming weeks if it isn't already a quick easy fix. Small low priority issue...
|
https://github.com/ansible/ansible/issues/59083
|
https://github.com/ansible/ansible/pull/63055
|
10ed3d0ec8246473e6275aef6532b46c31793042
|
7ac4756bf6a285bfd8585a0e6c6600af8bcb0b1c
| 2019-07-15T01:28:11Z |
python
| 2019-10-04T14:56:08Z |
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/process.py
|
#
# (c) 2019, Ansible by Red Hat, inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#
import re
from ansible.module_utils.six import iteritems
from ansible.module_utils.network.common.utils import to_list
from ansible.module_utils.network.ios.providers.providers import register_provider
from ansible.module_utils.network.ios.providers.providers import CliProvider
from ansible.module_utils.network.ios.providers.cli.config.bgp.neighbors import Neighbors
from ansible.module_utils.network.ios.providers.cli.config.bgp.address_family import AddressFamily
from ansible.module_utils.common.network import to_netmask
REDISTRIBUTE_PROTOCOLS = frozenset(['ospf', 'ospfv3', 'eigrp', 'isis', 'static', 'connected',
'odr', 'lisp', 'mobile', 'rip'])
@register_provider('ios', 'ios_bgp')
class Provider(CliProvider):
def render(self, config=None):
commands = list()
existing_as = None
if config:
            match = re.search(r'router bgp (\d+)', config, re.M)
            if match:
                existing_as = match.group(1)
operation = self.params['operation']
context = None
if self.params['config']:
context = 'router bgp %s' % self.get_value('config.bgp_as')
if operation == 'delete':
if existing_as:
commands.append('no router bgp %s' % existing_as)
elif context:
commands.append('no %s' % context)
else:
self._validate_input(config)
if operation == 'replace':
if existing_as and int(existing_as) != self.get_value('config.bgp_as'):
commands.append('no router bgp %s' % existing_as)
config = None
elif operation == 'override':
if existing_as:
commands.append('no router bgp %s' % existing_as)
config = None
context_commands = list()
for key, value in iteritems(self.get_value('config')):
if value is not None:
meth = getattr(self, '_render_%s' % key, None)
if meth:
resp = meth(config)
if resp:
context_commands.extend(to_list(resp))
if context and context_commands:
commands.append(context)
commands.extend(context_commands)
commands.append('exit')
return commands
def _render_router_id(self, config=None):
cmd = 'bgp router-id %s' % self.get_value('config.router_id')
if not config or cmd not in config:
return cmd
def _render_log_neighbor_changes(self, config=None):
cmd = 'bgp log-neighbor-changes'
log_neighbor_changes = self.get_value('config.log_neighbor_changes')
if log_neighbor_changes is True:
if not config or cmd not in config:
return cmd
elif log_neighbor_changes is False:
if config and cmd in config:
return 'no %s' % cmd
def _render_networks(self, config=None):
commands = list()
safe_list = list()
for entry in self.get_value('config.networks'):
network = entry['prefix']
cmd = 'network %s' % network
if entry['masklen']:
cmd += ' mask %s' % to_netmask(entry['masklen'])
network += ' mask %s' % to_netmask(entry['masklen'])
if entry['route_map']:
cmd += ' route-map %s' % entry['route_map']
network += ' route-map %s' % entry['route_map']
safe_list.append(network)
if not config or cmd not in config:
commands.append(cmd)
if self.params['operation'] == 'replace':
if config:
matches = re.findall(r'network (.*)', config, re.M)
for entry in set(matches).difference(safe_list):
commands.append('no network %s' % entry)
return commands
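    # Note on the reported idempotency gap (see issue #59083): for a prefix on
    # its classful boundary, such as 198.51.100.0/24, IOS stores just
    # 'network 198.51.100.0' in the running config, while the command rendered
    # above is 'network 198.51.100.0 mask 255.255.255.0'. The
    # 'cmd not in config' test then never matches, so the command is re-sent
    # on every run.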
def _render_neighbors(self, config):
""" generate bgp neighbor configuration
"""
return Neighbors(self.params).render(config)
def _render_address_family(self, config):
""" generate address-family configuration
"""
return AddressFamily(self.params).render(config)
def _validate_input(self, config=None):
def device_has_AF(config):
return re.search(r'address-family (?:.*)', config)
address_family = self.get_value('config.address_family')
root_networks = self.get_value('config.networks')
operation = self.params['operation']
if operation == 'replace':
if address_family and root_networks:
for item in address_family:
if item['networks']:
raise ValueError('operation is replace but provided both root level network(s) and network(s) under %s %s address family'
% (item['afi'], item['safi']))
if root_networks and config and device_has_AF(config):
raise ValueError('operation is replace and device has one or more address family activated but root level network(s) provided')
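One way to close this gap, sketched below under the assumption that IOS omits the mask exactly when it equals the classful default, is to treat the maskless 'network' line in the running config as equivalent to the rendered command. The helper names classful_masklen and network_forms are my own and are not part of the module above:

```python
# Sketch (assumption-labelled): accept both command forms when the configured
# mask length equals the prefix's classful default, since IOS then omits
# 'mask x.x.x.x' from the running configuration.

def classful_masklen(prefix):
    """Classful default mask length for an IPv4 prefix (A=/8, B=/16, C=/24)."""
    first_octet = int(prefix.split('.')[0])
    if first_octet < 128:
        return 8
    if first_octet < 192:
        return 16
    return 24

def network_forms(prefix, masklen, netmask):
    """All 'network' command forms that should count as already configured."""
    forms = ['network %s mask %s' % (prefix, netmask)]
    if masklen == classful_masklen(prefix):
        forms.append('network %s' % prefix)  # IOS drops the default mask
    return forms

# 198.51.100.0/24 sits on its classful (class C) boundary:
print(network_forms('198.51.100.0', 24, '255.255.255.0'))
# -> ['network 198.51.100.0 mask 255.255.255.0', 'network 198.51.100.0']
```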
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,083 |
ios_bgp - Not idempotent when using networks on classful boundaries
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When using a classful network boundary for a network, the router will not show "mask x.x.x.x" in the configuration. The module therefore thinks a configuration change is needed, so the task is not fully idempotent.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ios_bgp
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /Users/joshv/Github/ansible-using_ios/ansible.cfg
configured module search path = ['/etc/ansible/library']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.2 (default, Feb 10 2019, 15:44:18) [Clang 10.0.0 (clang-1000.11.45.5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_GATHERING(/Users/joshv/Github/ansible-using_ios/ansible.cfg) = explicit
DEFAULT_HOST_LIST(/Users/joshv/Github/ansible-using_ios/ansible.cfg) = ['/Users/joshv/Github/ansible-using_ios/hosts']
DEFAULT_MODULE_PATH(/Users/joshv/Github/ansible-using_ios/ansible.cfg) = ['/etc/ansible/library']
HOST_KEY_CHECKING(/Users/joshv/Github/ansible-using_ios/ansible.cfg) = False
INTERPRETER_PYTHON(/Users/joshv/Github/ansible-using_ios/ansible.cfg) = /usr/local/bin/python3
RETRY_FILES_ENABLED(/Users/joshv/Github/ansible-using_ios/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
MacOS -> Cisco IOS
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Advertise network 198.51.100.0/24:
networks:
- prefix: 198.51.100.0
masklen: 24
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "TASK 1: Setup iBGP Peer"
ios_bgp:
config:
bgp_as: 65500
router_id: 10.0.0.3
log_neighbor_changes: true
neighbors:
- neighbor: 198.51.100.2
remote_as: 65500
activate: true
timers:
keepalive: 15
holdtime: 45
min_neighbor_holdtime: 5
description: R4
networks:
- prefix: 198.51.100.0
masklen: 24
- prefix: 203.0.113.0
masklen: 24
address_family:
- afi: ipv4
safi: unicast
neighbors:
- neighbor: 198.51.100.2
activate: yes
next_hop_self: yes
operation: merge
register: ibgp_peer1
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I'd expect that the second run would be idempotent and not change. I'm getting a changed message instead of OK.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The module attempted to push the following commands:
<!--- Paste verbatim command output between quotes -->
```paste below
"commands": [
"router bgp 65500",
"network 198.51.100.0 mask 255.255.255.0",
"address-family ipv4",
"no auto-summary",
"exit-address-family",
"exit"
],
"failed": false
```
I'm thinking of how to take a stab at this in the coming weeks if it isn't already a quick easy fix. Small low priority issue...
|
https://github.com/ansible/ansible/issues/59083
|
https://github.com/ansible/ansible/pull/63055
|
10ed3d0ec8246473e6275aef6532b46c31793042
|
7ac4756bf6a285bfd8585a0e6c6600af8bcb0b1c
| 2019-07-15T01:28:11Z |
python
| 2019-10-04T14:56:08Z |
test/integration/targets/ios_bgp/tests/cli/basic.yaml
|
- debug: msg="START ios cli/ios_bgp.yaml on connection={{ ansible_connection }}"
- name: Clear existing BGP config
ios_bgp:
operation: delete
ignore_errors: yes
- block:
- name: Configure BGP with AS 64496 and a router-id
ios_bgp: &config
operation: merge
config:
bgp_as: 64496
router_id: 192.0.2.2
register: result
- assert:
that:
- 'result.changed == true'
- "'router bgp 64496' in result.commands"
- "'bgp router-id 192.0.2.2' in result.commands"
- name: Configure BGP with AS 64496 and a router-id (idempotent)
ios_bgp: *config
register: result
- assert:
that:
- 'result.changed == false'
- name: Configure BGP neighbors
ios_bgp: &nbr
operation: merge
config:
bgp_as: 64496
neighbors:
- neighbor: 192.0.2.10
remote_as: 64496
password: ansible
description: IBGP_NBR_1
ebgp_multihop: 100
timers:
keepalive: 300
holdtime: 360
min_neighbor_holdtime: 360
- neighbor: 192.0.2.15
remote_as: 64496
description: IBGP_NBR_2
ebgp_multihop: 150
register: result
- assert:
that:
- 'result.changed == true'
- "'router bgp 64496' in result.commands"
- "'neighbor 192.0.2.10 remote-as 64496' in result.commands"
- "'neighbor 192.0.2.10 description IBGP_NBR_1' in result.commands"
- "'neighbor 192.0.2.10 ebgp-multihop 100' in result.commands"
- "'neighbor 192.0.2.10 timers 300 360 360' in result.commands"
- "'neighbor 192.0.2.15 remote-as 64496' in result.commands"
- "'neighbor 192.0.2.15 description IBGP_NBR_2' in result.commands"
- "'neighbor 192.0.2.15 ebgp-multihop 150' in result.commands"
- name: Configure BGP neighbors (idempotent)
ios_bgp: *nbr
register: result
- assert:
that:
- 'result.changed == false'
- name: Configure BGP neighbors with operation replace
ios_bgp: &nbr_rplc
operation: replace
config:
bgp_as: 64496
neighbors:
- neighbor: 192.0.2.15
remote_as: 64496
description: IBGP_NBR_2
ebgp_multihop: 150
- neighbor: 203.0.113.10
remote_as: 64511
description: EBGP_NBR_1
local_as: 64497
register: result
- assert:
that:
- 'result.changed == true'
- "'neighbor 203.0.113.10 remote-as 64511' in result.commands"
- "'neighbor 203.0.113.10 description EBGP_NBR_1' in result.commands"
- "'neighbor 203.0.113.10 local-as 64497' in result.commands"
- "'no neighbor 192.0.2.10' in result.commands"
- name: Configure BGP neighbors with operation replace (idempotent)
ios_bgp: *nbr_rplc
register: result
- assert:
that:
- 'result.changed == false'
- name: Configure root-level networks for BGP
ios_bgp: &net
operation: merge
config:
bgp_as: 64496
networks:
- prefix: 203.0.113.0
masklen: 27
route_map: RMAP_1
- prefix: 203.0.113.32
masklen: 27
route_map: RMAP_2
register: result
- assert:
that:
- 'result.changed == True'
- "'router bgp 64496' in result.commands"
- "'network 203.0.113.0 mask 255.255.255.224 route-map RMAP_1' in result.commands"
- "'network 203.0.113.32 mask 255.255.255.224 route-map RMAP_2' in result.commands"
- name: Configure root-level networks for BGP (idempotent)
ios_bgp: *net
register: result
- assert:
that:
- 'result.changed == false'
- name: Configure root-level networks for BGP with operation replace
ios_bgp: &net_rplc
operation: replace
config:
bgp_as: 64496
networks:
- prefix: 203.0.113.0
masklen: 27
route_map: RMAP_1
- prefix: 198.51.100.16
masklen: 28
register: result
- assert:
that:
- 'result.changed == True'
- "'router bgp 64496' in result.commands"
- "'network 198.51.100.16 mask 255.255.255.240' in result.commands"
- "'no network 203.0.113.32 mask 255.255.255.224 route-map RMAP_2' in result.commands"
- name: Configure root-level networks for BGP with operation replace (idempotent)
ios_bgp: *net_rplc
register: result
- assert:
that:
- 'result.changed == false'
- name: Configure BGP neighbors under address family mode
ios_bgp: &af_nbr
operation: merge
config:
bgp_as: 64496
address_family:
- afi: ipv4
safi: unicast
neighbors:
- neighbor: 203.0.113.10
activate: yes
maximum_prefix: 250
advertisement_interval: 120
- neighbor: 192.0.2.15
activate: yes
route_reflector_client: True
register: result
- assert:
that:
- 'result.changed == true'
- "'router bgp 64496' in result.commands"
- "'address-family ipv4' in result.commands"
- "'neighbor 203.0.113.10 activate' in result.commands"
- "'neighbor 203.0.113.10 maximum-prefix 250' in result.commands"
- "'neighbor 203.0.113.10 advertisement-interval 120' in result.commands"
- "'neighbor 192.0.2.15 activate' in result.commands"
- "'neighbor 192.0.2.15 route-reflector-client' in result.commands"
- name: Configure BGP neighbors under address family mode (idempotent)
ios_bgp: *af_nbr
register: result
- assert:
that:
- 'result.changed == false'
- name: Configure networks under address family
ios_bgp: &af_net
operation: merge
config:
bgp_as: 64496
address_family:
- afi: ipv4
safi: multicast
networks:
- prefix: 198.51.100.48
masklen: 28
route_map: RMAP_1
- prefix: 192.0.2.64
masklen: 27
- prefix: 203.0.113.160
masklen: 27
route_map: RMAP_2
- afi: ipv4
safi: unicast
networks:
- prefix: 198.51.100.64
masklen: 28
register: result
- assert:
that:
- 'result.changed == true'
- "'router bgp 64496' in result.commands"
- "'address-family ipv4 multicast' in result.commands"
- "'network 198.51.100.48 mask 255.255.255.240 route-map RMAP_1' in result.commands"
- "'network 192.0.2.64 mask 255.255.255.224' in result.commands"
- "'network 203.0.113.160 mask 255.255.255.224 route-map RMAP_2' in result.commands"
- "'exit-address-family' in result.commands"
- "'address-family ipv4' in result.commands"
- "'network 198.51.100.64 mask 255.255.255.240' in result.commands"
- "'exit-address-family' in result.commands"
- name: Configure networks under address family (idempotent)
ios_bgp: *af_net
register: result
- assert:
that:
- 'result.changed == false'
- name: Configure networks under address family with operation replace
ios_bgp: &af_net_rplc
operation: replace
config:
bgp_as: 64496
address_family:
- afi: ipv4
safi: multicast
networks:
- prefix: 198.51.100.80
masklen: 28
- prefix: 192.0.2.64
masklen: 27
- prefix: 203.0.113.192
masklen: 27
- afi: ipv4
safi: unicast
networks:
- prefix: 198.51.100.64
masklen: 28
register: result
- assert:
that:
- 'result.changed == true'
- '"router bgp 64496" in result.commands'
- '"address-family ipv4 multicast" in result.commands'
- '"network 198.51.100.80 mask 255.255.255.240" in result.commands'
- '"network 203.0.113.192 mask 255.255.255.224" in result.commands'
- '"no network 198.51.100.48 mask 255.255.255.240 route-map RMAP_1" in result.commands'
- '"no network 203.0.113.160 mask 255.255.255.224 route-map RMAP_2" in result.commands'
- '"exit-address-family" in result.commands'
- name: Configure networks under address family with operation replace (idempotent)
ios_bgp: *af_net_rplc
register: result
- assert:
that:
- 'result.changed == false'
- name: Configure redistribute information under address family mode
ios_bgp: &af_rdr
operation: merge
config:
bgp_as: 64496
address_family:
- afi: ipv4
safi: multicast
redistribute:
- protocol: ospf
id: 112
metric: 64
- protocol: eigrp
id: 233
metric: 256
register: result
- assert:
that:
- 'result.changed == true'
- "'router bgp 64496' in result.commands"
- "'address-family ipv4 multicast' in result.commands"
- "'redistribute ospf 112 metric 64' in result.commands"
- "'redistribute eigrp 233 metric 256' in result.commands"
- "'exit-address-family' in result.commands"
- name: Configure redistribute information under address family mode (idempotent)
ios_bgp: *af_rdr
register: result
- assert:
that:
- 'result.changed == false'
- name: Get the IOS version
ios_facts:
gather_subset: all
- name: Configure redistribute information under address family mode with operation replace
ios_bgp: &af_rdr_rplc
operation: replace
config:
bgp_as: 64496
address_family:
- afi: ipv4
safi: multicast
redistribute:
- protocol: ospf
id: 112
metric: 64
register: result
- assert:
that:
- 'result.changed == true'
- "'router bgp 64496' in result.commands"
- "'address-family ipv4 multicast' in result.commands"
- "'no redistribute eigrp 233' in result.commands"
- "'exit-address-family' in result.commands"
- name: Configure redistribute information under address family mode with operation replace (idempotent)
ios_bgp: *af_rdr_rplc
register: result
when: ansible_net_version != "15.6(2)T"
- assert:
that:
- 'result.changed == false'
when: ansible_net_version != "15.6(2)T"
    - name: Override all the existing BGP config
ios_bgp:
operation: override
config:
bgp_as: 64497
router_id: 192.0.2.10
log_neighbor_changes: True
register: result
- assert:
that:
- 'result.changed == true'
- "'no router bgp 64496' in result.commands"
- "'router bgp 64497' in result.commands"
- "'bgp router-id 192.0.2.10' in result.commands"
- "'bgp log-neighbor-changes' in result.commands"
always:
- name: Teardown
ios_bgp: &rm
operation: delete
register: result
- assert:
that:
- 'result.changed == true'
- "'no router bgp 64497' in result.commands"
- name: Teardown again (idempotent)
ios_bgp: *rm
register: result
- assert:
that:
- 'result.changed == false'
- debug: msg="END ios cli/ios_bgp.yaml on connection={{ ansible_connection }}"
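For reference, the masklen-to-netmask pairs asserted in the tasks above can be double-checked with the Python standard library; this is an illustrative snippet, not part of the test suite:

```python
# Confirm the netmask strings asserted in result.commands for each masklen.
import ipaddress

for masklen in (24, 27, 28):
    net = ipaddress.ip_network('0.0.0.0/%d' % masklen)
    print(masklen, net.netmask)
# 24 255.255.255.0
# 27 255.255.255.224
# 28 255.255.255.240
```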
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,953 |
docker_container: image change not recognized if new image is not known to docker daemon
|
##### SUMMARY
If there's currently a container with image `repo/name:tag1` running (or created), and I change the `docker_container` image tag to `repo/name:tag2` (where this image hasn't been pulled yet), `docker_container` will not notice that the image changed and will not recreate the container.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
```paste below
2.9.0
```
|
https://github.com/ansible/ansible/issues/62953
|
https://github.com/ansible/ansible/pull/62971
|
bb9b2e87bc946c7446b2747a0c97427adf94b020
|
41eafc2051d95ac712d1e359920f03056ca523a3
| 2019-09-30T06:10:46Z |
python
| 2019-10-04T19:50:09Z |
changelogs/fragments/62971-docker_container-image-finding.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,953 |
docker_container: image change not recognized if new image is not known to docker daemon
|
##### SUMMARY
If there's currently a container with image `repo/name:tag1` running (or created), and I change the `docker_container` image tag to `repo/name:tag2` (where this image hasn't been pulled yet), `docker_container` will not notice that the image changed and will not recreate the container.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
```paste below
2.9.0
```
|
https://github.com/ansible/ansible/issues/62953
|
https://github.com/ansible/ansible/pull/62971
|
bb9b2e87bc946c7446b2747a0c97427adf94b020
|
41eafc2051d95ac712d1e359920f03056ca523a3
| 2019-09-30T06:10:46Z |
python
| 2019-10-04T19:50:09Z |
lib/ansible/modules/cloud/docker/docker_container.py
|
#!/usr/bin/python
#
# Copyright 2016 Red Hat | Ansible
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: docker_container
short_description: manage docker containers
description:
- Manage the life cycle of docker containers.
- Supports check mode. Run with C(--check) and C(--diff) to view config difference and list of actions to be taken.
version_added: "2.1"
notes:
- For most config changes, the container needs to be recreated, i.e. the existing container has to be destroyed and
a new one created. This can cause unexpected data loss and downtime. You can use the I(comparisons) option to
prevent this.
- If the module needs to recreate the container, it will only use the options provided to the module to create the
new container (except I(image)). Therefore, always specify *all* options relevant to the container.
- When I(restart) is set to C(true), the module will only restart the container if no config changes are detected.
Please note that several options have default values; if the container to be restarted uses different values for
these options, it will be recreated instead. The options with default values which can cause this are I(auto_remove),
I(detach), I(init), I(interactive), I(memory), I(paused), I(privileged), I(read_only) and I(tty).
options:
auto_remove:
description:
      - Enable auto-removal of the container on the daemon side when the container's process exits.
type: bool
default: no
version_added: "2.4"
blkio_weight:
description:
- Block IO (relative weight), between 10 and 1000.
type: int
capabilities:
description:
- List of capabilities to add to the container.
type: list
cap_drop:
description:
- List of capabilities to drop from the container.
type: list
version_added: "2.7"
cleanup:
description:
- Use with I(detach=false) to remove the container after successful execution.
type: bool
default: no
version_added: "2.2"
command:
description:
- Command to execute when the container starts.
A command may be either a string or a list.
- Prior to version 2.4, strings were split on commas.
type: raw
comparisons:
description:
- Allows to specify how properties of existing containers are compared with
module options to decide whether the container should be recreated / updated
or not. Only options which correspond to the state of a container as handled
by the Docker daemon can be specified, as well as C(networks).
- Must be a dictionary specifying for an option one of the keys C(strict), C(ignore)
and C(allow_more_present).
- If C(strict) is specified, values are tested for equality, and changes always
result in updating or restarting. If C(ignore) is specified, changes are ignored.
- C(allow_more_present) is allowed only for lists, sets and dicts. If it is
specified for lists or sets, the container will only be updated or restarted if
the module option contains a value which is not present in the container's
options. If the option is specified for a dict, the container will only be updated
or restarted if the module option contains a key which isn't present in the
container's option, or if the value of a key present differs.
- The wildcard option C(*) can be used to set one of the default values C(strict)
or C(ignore) to I(all) comparisons.
- See the examples for details.
type: dict
version_added: "2.8"
cpu_period:
description:
- Limit CPU CFS (Completely Fair Scheduler) period
type: int
cpu_quota:
description:
- Limit CPU CFS (Completely Fair Scheduler) quota
type: int
cpuset_cpus:
description:
- CPUs in which to allow execution C(1,3) or C(1-3).
type: str
cpuset_mems:
description:
- Memory nodes (MEMs) in which to allow execution C(0-3) or C(0,1)
type: str
cpu_shares:
description:
- CPU shares (relative weight).
type: int
detach:
description:
- Enable detached mode to leave the container running in background.
If disabled, the task will reflect the status of the container run (failed if the command failed).
type: bool
default: yes
devices:
description:
- "List of host device bindings to add to the container. Each binding is a mapping expressed
in the format: <path_on_host>:<path_in_container>:<cgroup_permissions>"
type: list
device_read_bps:
description:
- "List of device path and read rate (bytes per second) from device."
type: list
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit. Format: <number>[<unit>]"
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)"
- "Omitting the unit defaults to bytes."
type: str
required: yes
version_added: "2.8"
device_write_bps:
description:
- "List of device and write rate (bytes per second) to device."
type: list
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
          - "Device write limit. Format: <number>[<unit>]"
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)"
- "Omitting the unit defaults to bytes."
type: str
required: yes
version_added: "2.8"
device_read_iops:
description:
- "List of device and read rate (IO per second) from device."
type: list
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
- "Device read limit."
- "Must be a positive integer."
type: int
required: yes
version_added: "2.8"
device_write_iops:
description:
- "List of device and write rate (IO per second) to device."
type: list
suboptions:
path:
description:
- Device path in the container.
type: str
required: yes
rate:
description:
          - "Device write limit."
- "Must be a positive integer."
type: int
required: yes
version_added: "2.8"
dns_opts:
description:
      - List of DNS options.
type: list
dns_servers:
description:
- List of custom DNS servers.
type: list
dns_search_domains:
description:
- List of custom DNS search domains.
type: list
domainname:
description:
- Container domainname.
type: str
version_added: "2.5"
env:
description:
- Dictionary of key,value pairs.
- Values which might be parsed as numbers, booleans or other types by the YAML parser must be quoted (e.g. C("true")) in order to avoid data loss.
type: dict
env_file:
description:
- Path to a file, present on the target, containing environment variables I(FOO=BAR).
- If variable also present in C(env), then C(env) value will override.
type: path
version_added: "2.2"
entrypoint:
description:
- Command that overwrites the default ENTRYPOINT of the image.
type: list
etc_hosts:
description:
- Dict of host-to-IP mappings, where each host name is a key in the dictionary.
Each host name will be added to the container's /etc/hosts file.
type: dict
exposed_ports:
description:
- List of additional container ports which informs Docker that the container
listens on the specified network ports at runtime.
If the port is already exposed using EXPOSE in a Dockerfile, it does not
need to be exposed again.
type: list
aliases:
- exposed
- expose
force_kill:
description:
- Use the kill command when stopping a running container.
type: bool
default: no
aliases:
- forcekill
groups:
description:
- List of additional group names and/or IDs that the container process will run as.
type: list
healthcheck:
description:
- 'Configure a check that is run to determine whether or not containers for this service are "healthy".
See the docs for the L(HEALTHCHECK Dockerfile instruction,https://docs.docker.com/engine/reference/builder/#healthcheck)
for details on how healthchecks work.'
- 'I(interval), I(timeout) and I(start_period) are specified as durations. They accept duration as a string in a format
that look like: C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)'
type: dict
suboptions:
test:
description:
- Command to run to check health.
- Must be either a string or a list. If it is a list, the first item must be one of C(NONE), C(CMD) or C(CMD-SHELL).
type: raw
interval:
description:
- 'Time between running the check. (default: 30s)'
type: str
timeout:
description:
- 'Maximum time to allow one check to run. (default: 30s)'
type: str
retries:
description:
          - 'Consecutive failures needed to report unhealthy. It accepts an integer value. (default: 3)'
type: int
start_period:
description:
- 'Start period for the container to initialize before starting health-retries countdown. (default: 0s)'
type: str
version_added: "2.8"
hostname:
description:
- Container hostname.
type: str
ignore_image:
description:
- When C(state) is I(present) or I(started) the module compares the configuration of an existing
container to requested configuration. The evaluation includes the image version. If
the image version in the registry does not match the container, the container will be
recreated. Stop this behavior by setting C(ignore_image) to I(True).
- I(Warning:) This option is ignored if C(image) or C(*) is used for the C(comparisons) option.
type: bool
default: no
version_added: "2.2"
image:
description:
- Repository path and tag used to create the container. If an image is not found or pull is true, the image
will be pulled from the registry. If no tag is included, C(latest) will be used.
- Can also be an image ID. If this is the case, the image is assumed to be available locally.
The C(pull) option is ignored for this case.
type: str
init:
description:
- Run an init inside the container that forwards signals and reaps processes.
This option requires Docker API >= 1.25.
type: bool
default: no
version_added: "2.6"
interactive:
description:
- Keep stdin open after a container is launched, even if not attached.
type: bool
default: no
ipc_mode:
description:
- Set the IPC mode for the container. Can be one of 'container:<name|id>' to reuse another
container's IPC namespace or 'host' to use the host's IPC namespace within the container.
type: str
keep_volumes:
description:
- Retain volumes associated with a removed container.
type: bool
default: yes
kill_signal:
description:
- Override default signal used to kill a running container.
type: str
kernel_memory:
description:
- "Kernel memory limit (format: C(<number>[<unit>])). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte). Minimum is C(4M)."
- Omitting the unit defaults to bytes.
type: str
labels:
description:
- Dictionary of key value pairs.
type: dict
links:
description:
- List of name aliases for linked containers in the format C(container_name:alias).
- Setting this will force container to be restarted.
type: list
log_driver:
description:
- Specify the logging driver. Docker uses I(json-file) by default.
- See L(here,https://docs.docker.com/config/containers/logging/configure/) for possible choices.
type: str
log_options:
description:
- Dictionary of options specific to the chosen log_driver. See https://docs.docker.com/engine/admin/logging/overview/
for details.
type: dict
aliases:
- log_opt
mac_address:
description:
- Container MAC address (e.g. 92:d0:c6:0a:29:33)
type: str
memory:
description:
- "Memory limit (format: C(<number>[<unit>])). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
default: '0'
memory_reservation:
description:
- "Memory soft limit (format: C(<number>[<unit>])). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
memory_swap:
description:
- "Total memory limit (memory + swap, format: C(<number>[<unit>])).
Number is a positive integer. Unit can be C(B) (byte), C(K) (kibibyte, 1024B),
C(M) (mebibyte), C(G) (gibibyte), C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes.
type: str
memory_swappiness:
description:
- Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
      - If not set, the value remains the same if the container exists and is inherited from the host machine if it is (re-)created.
type: int
mounts:
version_added: "2.9"
type: list
description:
- 'Specification for mounts to be added to the container. More powerful alternative to I(volumes).'
suboptions:
target:
description:
- Path inside the container.
type: str
required: true
source:
description:
- Mount source (e.g. a volume name or a host path).
type: str
type:
description:
- The mount type.
- Note that C(npipe) is only supported by Docker for Windows.
type: str
choices:
- 'bind'
- 'volume'
- 'tmpfs'
- 'npipe'
default: volume
read_only:
description:
- 'Whether the mount should be read-only.'
type: bool
consistency:
description:
- 'The consistency requirement for the mount.'
type: str
choices:
- 'default'
- 'consistent'
- 'cached'
- 'delegated'
propagation:
description:
- Propagation mode. Only valid for the C(bind) type.
type: str
choices:
- 'private'
- 'rprivate'
- 'shared'
- 'rshared'
- 'slave'
- 'rslave'
no_copy:
description:
- False if the volume should be populated with the data from the target. Only valid for the C(volume) type.
- The default value is C(false).
type: bool
labels:
description:
- User-defined name and labels for the volume. Only valid for the C(volume) type.
type: dict
volume_driver:
description:
- Specify the volume driver. Only valid for the C(volume) type.
- See L(here,https://docs.docker.com/storage/volumes/#use-a-volume-driver) for details.
type: str
volume_options:
description:
- Dictionary of options specific to the chosen volume_driver. See L(here,https://docs.docker.com/storage/volumes/#use-a-volume-driver)
for details.
type: dict
tmpfs_size:
description:
- "The size for the tmpfs mount in bytes. Format: <number>[<unit>]"
- "Number is a positive integer. Unit can be one of C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)"
- "Omitting the unit defaults to bytes."
type: str
tmpfs_mode:
description:
- The permission mode for the tmpfs mount.
type: str
name:
description:
- Assign a name to a new container or match an existing container.
- When identifying an existing container name may be a name or a long or short container ID.
type: str
required: yes
network_mode:
description:
- Connect the container to a network. Choices are "bridge", "host", "none" or "container:<name|id>"
type: str
userns_mode:
description:
- Set the user namespace mode for the container. Currently, the only valid value is C(host).
type: str
version_added: "2.5"
networks:
description:
- List of networks the container belongs to.
- For examples of the data structure and usage see EXAMPLES below.
- To remove a container from one or more networks, use the C(purge_networks) option.
- Note that as opposed to C(docker run ...), M(docker_container) does not remove the default
network if C(networks) is specified. You need to explicitly use C(purge_networks) to enforce
the removal of the default network (and all other networks not explicitly mentioned in C(networks)).
type: list
suboptions:
name:
description:
- The network's name.
type: str
required: yes
ipv4_address:
description:
- The container's IPv4 address in this network.
type: str
ipv6_address:
description:
- The container's IPv6 address in this network.
type: str
links:
description:
- A list of containers to link to.
type: list
aliases:
description:
- List of aliases for this container in this network. These names
can be used in the network to reach this container.
type: list
version_added: "2.2"
networks_cli_compatible:
description:
- "When networks are provided to the module via the I(networks) option, the module
behaves differently than C(docker run --network): C(docker run --network other)
will create a container with network C(other) attached, but the default network
not attached. This module with C(networks: {name: other}) will create a container
with both C(default) and C(other) attached. If I(purge_networks) is set to C(yes),
the C(default) network will be removed afterwards."
- "If I(networks_cli_compatible) is set to C(yes), this module will behave as
C(docker run --network) and will I(not) add the default network if C(networks) is
specified. If C(networks) is not specified, the default network will be attached."
- "Note that docker CLI also sets C(network_mode) to the name of the first network
added if C(--network) is specified. For more compatibility with docker CLI, you
explicitly have to set C(network_mode) to the name of the first network you're
adding."
- Current value is C(no). A new default of C(yes) will be set in Ansible 2.12.
type: bool
version_added: "2.8"
oom_killer:
description:
- Whether or not to disable OOM Killer for the container.
type: bool
oom_score_adj:
description:
- An integer value containing the score given to the container in order to tune
OOM killer preferences.
type: int
version_added: "2.2"
output_logs:
description:
      - If set to true, output of the container command will be printed (only effective
        when log_driver is set to json-file or journald).
type: bool
default: no
version_added: "2.7"
paused:
description:
- Use with the started state to pause running processes inside the container.
type: bool
default: no
pid_mode:
description:
- Set the PID namespace mode for the container.
- Note that Docker SDK for Python < 2.0 only supports 'host'. Newer versions of the
Docker SDK for Python (docker) allow all values supported by the docker daemon.
type: str
pids_limit:
description:
- Set PIDs limit for the container. It accepts an integer value.
- Set -1 for unlimited PIDs.
type: int
version_added: "2.8"
privileged:
description:
- Give extended privileges to the container.
type: bool
default: no
published_ports:
description:
- List of ports to publish from the container to the host.
- "Use docker CLI syntax: C(8000), C(9000:8000), or C(0.0.0.0:9000:8000), where 8000 is a
container port, 9000 is a host port, and 0.0.0.0 is a host interface."
- Port ranges can be used for source and destination ports. If two ranges with
different lengths are specified, the shorter range will be used.
- "Bind addresses must be either IPv4 or IPv6 addresses. Hostnames are I(not) allowed. This
is different from the C(docker) command line utility. Use the L(dig lookup,../lookup/dig.html)
to resolve hostnames."
- A value of C(all) will publish all exposed container ports to random host ports, ignoring
any other mappings.
- If C(networks) parameter is provided, will inspect each network to see if there exists
a bridge network with optional parameter com.docker.network.bridge.host_binding_ipv4.
If such a network is found, then published ports where no host IP address is specified
will be bound to the host IP pointed to by com.docker.network.bridge.host_binding_ipv4.
Note that the first bridge network with a com.docker.network.bridge.host_binding_ipv4
value encountered in the list of C(networks) is the one that will be used.
type: list
aliases:
- ports
pull:
description:
- If true, always pull the latest version of an image. Otherwise, will only pull an image
when missing.
      - I(Note) that images are only pulled when specified by name. If the image is specified
        as an image ID (hash), it cannot be pulled.
type: bool
default: no
purge_networks:
description:
- Remove the container from ALL networks not included in C(networks) parameter.
- Any default networks such as I(bridge), if not found in C(networks), will be removed as well.
type: bool
default: no
version_added: "2.2"
read_only:
description:
- Mount the container's root file system as read-only.
type: bool
default: no
recreate:
description:
- Use with present and started states to force the re-creation of an existing container.
type: bool
default: no
restart:
description:
- Use with started state to force a matching container to be stopped and restarted.
type: bool
default: no
restart_policy:
description:
- Container restart policy. Place quotes around I(no) option.
type: str
choices:
- 'no'
- 'on-failure'
- 'always'
- 'unless-stopped'
restart_retries:
description:
- Use with restart policy to control maximum number of restart attempts.
type: int
runtime:
description:
- Runtime to use for the container.
type: str
version_added: "2.8"
shm_size:
description:
- "Size of C(/dev/shm) (format: C(<number>[<unit>])). Number is positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- Omitting the unit defaults to bytes. If you omit the size entirely, the system uses C(64M).
type: str
security_opts:
description:
- List of security options in the form of C("label:user:User")
type: list
state:
description:
- 'I(absent) - A container matching the specified name will be stopped and removed. Use force_kill to kill the container
rather than stopping it. Use keep_volumes to retain volumes associated with the removed container.'
- 'I(present) - Asserts the existence of a container matching the name and any provided configuration parameters. If no
container matches the name, a container will be created. If a container matches the name but the provided configuration
does not match, the container will be updated, if it can be. If it cannot be updated, it will be removed and re-created
with the requested config. Image version will be taken into account when comparing configuration. To ignore image
version use the ignore_image option. Use the recreate option to force the re-creation of the matching container. Use
force_kill to kill the container rather than stopping it. Use keep_volumes to retain volumes associated with a removed
container.'
- 'I(started) - Asserts there is a running container matching the name and any provided configuration. If no container
matches the name, a container will be created and started. If a container matching the name is found but the
configuration does not match, the container will be updated, if it can be. If it cannot be updated, it will be removed
and a new container will be created with the requested configuration and started. Image version will be taken into
account when comparing configuration. To ignore image version use the ignore_image option. Use recreate to always
re-create a matching container, even if it is running. Use restart to force a matching container to be stopped and
restarted. Use force_kill to kill a container rather than stopping it. Use keep_volumes to retain volumes associated
with a removed container.'
- 'I(stopped) - Asserts that the container is first I(present), and then if the container is running moves it to a stopped
state. Use force_kill to kill a container rather than stopping it.'
type: str
default: started
choices:
- absent
- present
- stopped
- started
stop_signal:
description:
- Override default signal used to stop the container.
type: str
stop_timeout:
description:
- Number of seconds to wait for the container to stop before sending SIGKILL.
When the container is created by this module, its C(StopTimeout) configuration
will be set to this value.
- When the container is stopped, will be used as a timeout for stopping the
container. In case the container has a custom C(StopTimeout) configuration,
the behavior depends on the version of the docker daemon. New versions of
the docker daemon will always use the container's configured C(StopTimeout)
value if it has been configured.
type: int
trust_image_content:
description:
- If C(yes), skip image verification.
type: bool
default: no
tmpfs:
description:
- Mount a tmpfs directory
type: list
version_added: 2.4
tty:
description:
- Allocate a pseudo-TTY.
type: bool
default: no
ulimits:
description:
- "List of ulimit options. A ulimit is specified as C(nofile:262144:262144)"
type: list
sysctls:
description:
- Dictionary of key,value pairs.
type: dict
version_added: 2.4
user:
description:
- Sets the username or UID used and optionally the groupname or GID for the specified command.
- "Can be [ user | user:group | uid | uid:gid | user:gid | uid:group ]"
type: str
uts:
description:
- Set the UTS namespace mode for the container.
type: str
volumes:
description:
- List of volumes to mount within the container.
- "Use docker CLI-style syntax: C(/host:/container[:mode])"
- "Mount modes can be a comma-separated list of various modes such as C(ro), C(rw), C(consistent),
C(delegated), C(cached), C(rprivate), C(private), C(rshared), C(shared), C(rslave), C(slave), and
C(nocopy). Note that the docker daemon might not support all modes and combinations of such modes."
- SELinux hosts can additionally use C(z) or C(Z) to use a shared or
private label for the volume.
- "Note that Ansible 2.7 and earlier only supported one mode, which had to be one of C(ro), C(rw),
C(z), and C(Z)."
type: list
volume_driver:
description:
- The container volume driver.
type: str
volumes_from:
description:
- List of container names or Ids to get volumes from.
type: list
working_dir:
description:
- Path to the working directory.
type: str
version_added: "2.4"
extends_documentation_fragment:
- docker
- docker.docker_py_1_documentation
author:
- "Cove Schneider (@cove)"
- "Joshua Conner (@joshuaconner)"
- "Pavel Antonov (@softzilla)"
- "Thomas Steinbach (@ThomasSteinbach)"
- "Philippe Jandot (@zfil)"
- "Daan Oosterveld (@dusdanig)"
- "Chris Houseknecht (@chouseknecht)"
- "Kassian Sun (@kassiansun)"
- "Felix Fontein (@felixfontein)"
requirements:
- "L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 1.8.0 (use L(docker-py,https://pypi.org/project/docker-py/) for Python 2.6)"
- "Docker API >= 1.20"
'''
EXAMPLES = '''
- name: Create a data container
docker_container:
name: mydata
image: busybox
volumes:
- /data
- name: Re-create a redis container
docker_container:
name: myredis
image: redis
command: redis-server --appendonly yes
state: present
recreate: yes
exposed_ports:
- 6379
volumes_from:
- mydata
- name: Restart a container
docker_container:
name: myapplication
image: someuser/appimage
state: started
restart: yes
links:
- "myredis:aliasedredis"
devices:
- "/dev/sda:/dev/xvda:rwm"
ports:
- "8080:9000"
- "127.0.0.1:8081:9001/udp"
env:
SECRET_KEY: "ssssh"
# Values which might be parsed as numbers, booleans or other types by the YAML parser need to be quoted
BOOLEAN_KEY: "yes"
- name: Container present
docker_container:
name: mycontainer
state: present
image: ubuntu:14.04
command: sleep infinity
- name: Stop a container
docker_container:
name: mycontainer
state: stopped
- name: Start 4 load-balanced containers
docker_container:
name: "container{{ item }}"
recreate: yes
image: someuser/anotherappimage
command: sleep 1d
with_sequence: count=4
- name: Remove container
docker_container:
name: ohno
state: absent
- name: Syslogging output
docker_container:
name: myservice
image: busybox
log_driver: syslog
log_options:
syslog-address: tcp://my-syslog-server:514
syslog-facility: daemon
      # NOTE: in Docker 1.13+ the "syslog-tag" option was renamed to "tag".
      # For older docker installs, use "syslog-tag" instead.
tag: myservice
- name: Create db container and connect to network
docker_container:
name: db_test
image: "postgres:latest"
networks:
- name: "{{ docker_network_name }}"
- name: Start container, connect to network and link
docker_container:
name: sleeper
image: ubuntu:14.04
networks:
- name: TestingNet
ipv4_address: "172.1.1.100"
aliases:
- sleepyzz
links:
- db_test:db
- name: TestingNet2
- name: Start a container with a command
docker_container:
name: sleepy
image: ubuntu:14.04
command: ["sleep", "infinity"]
- name: Add container to networks
docker_container:
name: sleepy
networks:
- name: TestingNet
ipv4_address: 172.1.1.18
links:
- sleeper
- name: TestingNet2
ipv4_address: 172.1.10.20
- name: Update network with aliases
docker_container:
name: sleepy
networks:
- name: TestingNet
aliases:
- sleepyz
- zzzz
- name: Remove container from one network
docker_container:
name: sleepy
networks:
- name: TestingNet2
purge_networks: yes
- name: Remove container from all networks
docker_container:
name: sleepy
purge_networks: yes
- name: Start a container and use an env file
docker_container:
name: agent
image: jenkinsci/ssh-slave
env_file: /var/tmp/jenkins/agent.env
- name: Create a container with limited capabilities
docker_container:
name: sleepy
image: ubuntu:16.04
command: sleep infinity
capabilities:
- sys_time
cap_drop:
- all
- name: Finer container restart/update control
docker_container:
name: test
image: ubuntu:18.04
env:
arg1: "true"
arg2: "whatever"
volumes:
- /tmp:/tmp
comparisons:
image: ignore # don't restart containers with older versions of the image
env: strict # we want precisely this environment
volumes: allow_more_present # if there are more volumes, that's ok, as long as `/tmp:/tmp` is there
- name: Finer container restart/update control II
docker_container:
name: test
image: ubuntu:18.04
env:
arg1: "true"
arg2: "whatever"
comparisons:
'*': ignore # by default, ignore *all* options (including image)
env: strict # except for environment variables; there, we want to be strict
- name: Start container with healthstatus
docker_container:
name: nginx-proxy
image: nginx:1.13
state: started
healthcheck:
# Check if nginx server is healthy by curl'ing the server.
      # If this fails or times out, the healthcheck fails.
test: ["CMD", "curl", "--fail", "http://nginx.host.com"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 30s
- name: Remove healthcheck from container
docker_container:
name: nginx-proxy
image: nginx:1.13
state: started
healthcheck:
# The "NONE" check needs to be specified
test: ["NONE"]
- name: Start container with block device read limit
docker_container:
name: test
image: ubuntu:18.04
state: started
device_read_bps:
# Limit read rate for /dev/sda to 20 mebibytes per second
- path: /dev/sda
rate: 20M
device_read_iops:
# Limit read rate for /dev/sdb to 300 IO per second
- path: /dev/sdb
rate: 300
'''
RETURN = '''
container:
description:
- Facts representing the current state of the container. Matches the docker inspection output.
- Note that facts are part of the registered vars since Ansible 2.8. For compatibility reasons, the facts
are also accessible directly as C(docker_container). Note that the returned fact will be removed in Ansible 2.12.
- Before 2.3 this was C(ansible_docker_container) but was renamed in 2.3 to C(docker_container) due to
conflicts with the connection plugin.
      - Empty if I(state) is C(absent).
      - If I(detach) is C(false), will include C(Output) attribute containing any output from container run.
returned: always
type: dict
sample: '{
"AppArmorProfile": "",
"Args": [],
"Config": {
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": [
"/usr/bin/supervisord"
],
"Domainname": "",
"Entrypoint": null,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"ExposedPorts": {
"443/tcp": {},
"80/tcp": {}
},
"Hostname": "8e47bf643eb9",
"Image": "lnmp_nginx:v1",
"Labels": {},
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": {
"/tmp/lnmp/nginx-sites/logs/": {}
},
...
}'
'''
import os
import re
import shlex
import traceback
from distutils.version import LooseVersion
from ansible.module_utils.common.text.formatters import human_to_bytes
from ansible.module_utils.docker.common import (
AnsibleDockerClient,
DifferenceTracker,
DockerBaseClass,
compare_generic,
is_image_name_id,
sanitize_result,
clean_dict_booleans_for_docker_api,
omit_none_from_dict,
parse_healthcheck,
DOCKER_COMMON_ARGS,
RequestException,
)
from ansible.module_utils.six import string_types
try:
from docker import utils
from ansible.module_utils.docker.common import docker_version
if LooseVersion(docker_version) >= LooseVersion('1.10.0'):
from docker.types import Ulimit, LogConfig
from docker import types as docker_types
else:
from docker.utils.types import Ulimit, LogConfig
from docker.errors import DockerException, APIError, NotFound
except Exception:
# missing Docker SDK for Python handled in ansible.module_utils.docker.common
pass
REQUIRES_CONVERSION_TO_BYTES = [
'kernel_memory',
'memory',
'memory_reservation',
'memory_swap',
'shm_size'
]
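# These options accept human-readable size strings which are converted with
# human_to_bytes, e.g. '1G' becomes 1073741824 and '512M' becomes 536870912.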
def is_volume_permissions(mode):
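    # Validates a comma-separated mode string from a volume spec; for example,
    # is_volume_permissions('ro,z') returns True, is_volume_permissions('foo') returns False.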
for part in mode.split(','):
if part not in ('rw', 'ro', 'z', 'Z', 'consistent', 'delegated', 'cached', 'rprivate', 'private', 'rshared', 'shared', 'rslave', 'slave', 'nocopy'):
return False
return True
def parse_port_range(range_or_port, client):
'''
Parses a string containing either a single port or a range of ports.
    Returns a list of integers, one for each port in the range.
'''
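    # For example, parse_port_range('8000-8002', client) returns [8000, 8001, 8002],
    # while parse_port_range('80', client) returns [80].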
if '-' in range_or_port:
try:
start, end = [int(port) for port in range_or_port.split('-')]
except Exception:
client.fail('Invalid port range: "{0}"'.format(range_or_port))
if end < start:
client.fail('Invalid port range: "{0}"'.format(range_or_port))
return list(range(start, end + 1))
else:
try:
return [int(range_or_port)]
except Exception:
client.fail('Invalid port: "{0}"'.format(range_or_port))
def split_colon_ipv6(text, client):
'''
Split string by ':', while keeping IPv6 addresses in square brackets in one component.
'''
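    # For example, split_colon_ipv6('[2001:db8::1]:8080:80', client) returns
    # ['[2001:db8::1]', '8080', '80'].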
if '[' not in text:
return text.split(':')
start = 0
result = []
while start < len(text):
i = text.find('[', start)
if i < 0:
result.extend(text[start:].split(':'))
break
j = text.find(']', i)
if j < 0:
client.fail('Cannot find closing "]" in input "{0}" for opening "[" at index {1}!'.format(text, i + 1))
result.extend(text[start:i].split(':'))
k = text.find(':', j)
if k < 0:
result[-1] += text[i:]
start = len(text)
else:
result[-1] += text[i:k]
if k == len(text):
result.append('')
break
start = k + 1
return result
class TaskParameters(DockerBaseClass):
'''
Access and parse module parameters
'''
def __init__(self, client):
super(TaskParameters, self).__init__()
self.client = client
self.auto_remove = None
self.blkio_weight = None
self.capabilities = None
self.cap_drop = None
self.cleanup = None
self.command = None
self.cpu_period = None
self.cpu_quota = None
self.cpuset_cpus = None
self.cpuset_mems = None
self.cpu_shares = None
self.detach = None
self.debug = None
self.devices = None
self.device_read_bps = None
self.device_write_bps = None
self.device_read_iops = None
self.device_write_iops = None
self.dns_servers = None
self.dns_opts = None
self.dns_search_domains = None
self.domainname = None
self.env = None
self.env_file = None
self.entrypoint = None
self.etc_hosts = None
self.exposed_ports = None
self.force_kill = None
self.groups = None
self.healthcheck = None
self.hostname = None
self.ignore_image = None
self.image = None
self.init = None
self.interactive = None
self.ipc_mode = None
self.keep_volumes = None
self.kernel_memory = None
self.kill_signal = None
self.labels = None
self.links = None
self.log_driver = None
self.output_logs = None
self.log_options = None
self.mac_address = None
self.memory = None
self.memory_reservation = None
self.memory_swap = None
self.memory_swappiness = None
self.mounts = None
self.name = None
self.network_mode = None
self.userns_mode = None
self.networks = None
self.networks_cli_compatible = None
self.oom_killer = None
self.oom_score_adj = None
self.paused = None
self.pid_mode = None
self.pids_limit = None
self.privileged = None
self.purge_networks = None
self.pull = None
self.read_only = None
self.recreate = None
self.restart = None
self.restart_retries = None
self.restart_policy = None
self.runtime = None
self.shm_size = None
self.security_opts = None
self.state = None
self.stop_signal = None
self.stop_timeout = None
self.tmpfs = None
self.trust_image_content = None
self.tty = None
self.user = None
self.uts = None
self.volumes = None
self.volume_binds = dict()
self.volumes_from = None
self.volume_driver = None
self.working_dir = None
for key, value in client.module.params.items():
setattr(self, key, value)
self.comparisons = client.comparisons
# If state is 'absent', parameters do not have to be parsed or interpreted.
# Only the container's name is needed.
if self.state == 'absent':
return
if self.groups:
# In case integers are passed as groups, we need to convert them to
# strings as docker internally treats them as strings.
self.groups = [str(g) for g in self.groups]
for param_name in REQUIRES_CONVERSION_TO_BYTES:
if client.module.params.get(param_name):
try:
setattr(self, param_name, human_to_bytes(client.module.params.get(param_name)))
except ValueError as exc:
self.fail("Failed to convert %s to bytes: %s" % (param_name, exc))
self.publish_all_ports = False
self.published_ports = self._parse_publish_ports()
if self.published_ports in ('all', 'ALL'):
self.publish_all_ports = True
self.published_ports = None
self.ports = self._parse_exposed_ports(self.published_ports)
self.log("expose ports:")
self.log(self.ports, pretty_print=True)
self.links = self._parse_links(self.links)
if self.volumes:
self.volumes = self._expand_host_paths()
self.tmpfs = self._parse_tmpfs()
self.env = self._get_environment()
self.ulimits = self._parse_ulimits()
self.sysctls = self._parse_sysctls()
self.log_config = self._parse_log_config()
try:
self.healthcheck, self.disable_healthcheck = parse_healthcheck(self.healthcheck)
except ValueError as e:
self.fail(str(e))
self.exp_links = None
self.volume_binds = self._get_volume_binds(self.volumes)
self.pid_mode = self._replace_container_names(self.pid_mode)
self.ipc_mode = self._replace_container_names(self.ipc_mode)
self.network_mode = self._replace_container_names(self.network_mode)
self.log("volumes:")
self.log(self.volumes, pretty_print=True)
self.log("volume binds:")
self.log(self.volume_binds, pretty_print=True)
if self.networks:
for network in self.networks:
network['id'] = self._get_network_id(network['name'])
if not network['id']:
self.fail("Parameter error: network named %s could not be found. Does it exist?" % network['name'])
if network.get('links'):
network['links'] = self._parse_links(network['links'])
if self.mac_address:
# Ensure the MAC address uses colons instead of hyphens for later comparison
self.mac_address = self.mac_address.replace('-', ':')
if self.entrypoint:
# convert from list to str.
self.entrypoint = ' '.join([str(x) for x in self.entrypoint])
if self.command:
# convert from list to str
if isinstance(self.command, list):
self.command = ' '.join([str(x) for x in self.command])
self.mounts_opt, self.expected_mounts = self._process_mounts()
self._check_mount_target_collisions()
for param_name in ["device_read_bps", "device_write_bps"]:
if client.module.params.get(param_name):
self._process_rate_bps(option=param_name)
for param_name in ["device_read_iops", "device_write_iops"]:
if client.module.params.get(param_name):
self._process_rate_iops(option=param_name)
def fail(self, msg):
self.client.fail(msg)
@property
def update_parameters(self):
'''
Returns parameters used to update a container
'''
update_parameters = dict(
blkio_weight='blkio_weight',
cpu_period='cpu_period',
cpu_quota='cpu_quota',
cpu_shares='cpu_shares',
cpuset_cpus='cpuset_cpus',
cpuset_mems='cpuset_mems',
mem_limit='memory',
mem_reservation='memory_reservation',
memswap_limit='memory_swap',
kernel_memory='kernel_memory',
)
result = dict()
for key, value in update_parameters.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
result[key] = getattr(self, value)
return result
@property
def create_parameters(self):
'''
Returns parameters used to create a container
'''
create_params = dict(
command='command',
domainname='domainname',
hostname='hostname',
user='user',
detach='detach',
stdin_open='interactive',
tty='tty',
ports='ports',
environment='env',
name='name',
entrypoint='entrypoint',
mac_address='mac_address',
labels='labels',
stop_signal='stop_signal',
working_dir='working_dir',
stop_timeout='stop_timeout',
healthcheck='healthcheck',
)
if self.client.docker_py_version < LooseVersion('3.0'):
            # cpu_shares and volume_driver moved to create_host_config in Docker SDK for Python > 3.0
create_params['cpu_shares'] = 'cpu_shares'
create_params['volume_driver'] = 'volume_driver'
result = dict(
host_config=self._host_config(),
volumes=self._get_mounts(),
)
for key, value in create_params.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
result[key] = getattr(self, value)
if self.networks_cli_compatible and self.networks:
network = self.networks[0]
params = dict()
for para in ('ipv4_address', 'ipv6_address', 'links', 'aliases'):
if network.get(para):
params[para] = network[para]
network_config = dict()
network_config[network['name']] = self.client.create_endpoint_config(**params)
result['networking_config'] = self.client.create_networking_config(network_config)
return result
def _expand_host_paths(self):
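        # Expands relative and ~ host paths in docker CLI-style volume specs; for
        # example, '~/data:/data' becomes '/home/user/data:/data:rw' (illustrative path).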
new_vols = []
for vol in self.volumes:
if ':' in vol:
if len(vol.split(':')) == 3:
host, container, mode = vol.split(':')
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if re.match(r'[.~]', host):
host = os.path.abspath(os.path.expanduser(host))
new_vols.append("%s:%s:%s" % (host, container, mode))
continue
elif len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]) and re.match(r'[.~]', parts[0]):
host = os.path.abspath(os.path.expanduser(parts[0]))
new_vols.append("%s:%s:rw" % (host, parts[1]))
continue
new_vols.append(vol)
return new_vols
def _get_mounts(self):
'''
Return a list of container mounts.
:return:
'''
result = []
if self.volumes:
for vol in self.volumes:
if ':' in vol:
if len(vol.split(':')) == 3:
dummy, container, dummy = vol.split(':')
result.append(container)
continue
if len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]):
result.append(parts[1])
continue
result.append(vol)
self.log("mounts:")
self.log(result, pretty_print=True)
return result
def _host_config(self):
'''
Returns parameters used to create a HostConfig object
'''
host_config_params = dict(
port_bindings='published_ports',
publish_all_ports='publish_all_ports',
links='links',
privileged='privileged',
dns='dns_servers',
dns_opt='dns_opts',
dns_search='dns_search_domains',
binds='volume_binds',
volumes_from='volumes_from',
network_mode='network_mode',
userns_mode='userns_mode',
cap_add='capabilities',
cap_drop='cap_drop',
extra_hosts='etc_hosts',
read_only='read_only',
ipc_mode='ipc_mode',
security_opt='security_opts',
ulimits='ulimits',
sysctls='sysctls',
log_config='log_config',
mem_limit='memory',
memswap_limit='memory_swap',
mem_swappiness='memory_swappiness',
oom_score_adj='oom_score_adj',
oom_kill_disable='oom_killer',
shm_size='shm_size',
group_add='groups',
devices='devices',
pid_mode='pid_mode',
tmpfs='tmpfs',
init='init',
uts_mode='uts',
runtime='runtime',
auto_remove='auto_remove',
device_read_bps='device_read_bps',
device_write_bps='device_write_bps',
device_read_iops='device_read_iops',
device_write_iops='device_write_iops',
pids_limit='pids_limit',
mounts='mounts',
)
if self.client.docker_py_version >= LooseVersion('1.9') and self.client.docker_api_version >= LooseVersion('1.22'):
# blkio_weight can always be updated, but can only be set on creation
# when Docker SDK for Python and Docker API are new enough
host_config_params['blkio_weight'] = 'blkio_weight'
if self.client.docker_py_version >= LooseVersion('3.0'):
            # cpu_shares and volume_driver moved to create_host_config in Docker SDK for Python > 3.0
host_config_params['cpu_shares'] = 'cpu_shares'
host_config_params['volume_driver'] = 'volume_driver'
params = dict()
for key, value in host_config_params.items():
if getattr(self, value, None) is not None:
if self.client.option_minimal_versions[value]['supported']:
params[key] = getattr(self, value)
if self.restart_policy:
params['restart_policy'] = dict(Name=self.restart_policy,
MaximumRetryCount=self.restart_retries)
if 'mounts' in params:
params['mounts'] = self.mounts_opt
return self.client.create_host_config(**params)
@property
def default_host_ip(self):
ip = '0.0.0.0'
if not self.networks:
return ip
for net in self.networks:
if net.get('name'):
try:
network = self.client.inspect_network(net['name'])
if network.get('Driver') == 'bridge' and \
network.get('Options', {}).get('com.docker.network.bridge.host_binding_ipv4'):
ip = network['Options']['com.docker.network.bridge.host_binding_ipv4']
break
except NotFound as nfe:
self.client.fail(
"Cannot inspect the network '{0}' to determine the default IP: {1}".format(net['name'], nfe),
exception=traceback.format_exc()
)
return ip
def _parse_publish_ports(self):
'''
Parse ports from docker CLI syntax
'''
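        # For example, '127.0.0.1:8081:9001/udp' becomes
        # {'9001/udp': ('127.0.0.1', 8081)}.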
if self.published_ports is None:
return None
if 'all' in self.published_ports:
return 'all'
default_ip = self.default_host_ip
binds = {}
for port in self.published_ports:
parts = split_colon_ipv6(str(port), self.client)
container_port = parts[-1]
protocol = ''
if '/' in container_port:
container_port, protocol = parts[-1].split('/')
container_ports = parse_port_range(container_port, self.client)
p_len = len(parts)
if p_len == 1:
port_binds = len(container_ports) * [(default_ip,)]
elif p_len == 2:
port_binds = [(default_ip, port) for port in parse_port_range(parts[0], self.client)]
elif p_len == 3:
# We only allow IPv4 and IPv6 addresses for the bind address
ipaddr = parts[0]
if not re.match(r'^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$', parts[0]) and not re.match(r'^\[[0-9a-fA-F:]+\]$', ipaddr):
self.fail(('Bind addresses for published ports must be IPv4 or IPv6 addresses, not hostnames. '
'Use the dig lookup to resolve hostnames. (Found hostname: {0})').format(ipaddr))
if re.match(r'^\[[0-9a-fA-F:]+\]$', ipaddr):
ipaddr = ipaddr[1:-1]
if parts[1]:
port_binds = [(ipaddr, port) for port in parse_port_range(parts[1], self.client)]
else:
port_binds = len(container_ports) * [(ipaddr,)]
for bind, container_port in zip(port_binds, container_ports):
idx = '{0}/{1}'.format(container_port, protocol) if protocol else container_port
if idx in binds:
old_bind = binds[idx]
if isinstance(old_bind, list):
old_bind.append(bind)
else:
binds[idx] = [old_bind, bind]
else:
binds[idx] = bind
return binds
def _get_volume_binds(self, volumes):
'''
Extract host bindings, if any, from list of volume mapping strings.
:return: dictionary of bind mappings
'''
result = dict()
if volumes:
for vol in volumes:
host = None
if ':' in vol:
parts = vol.split(':')
if len(parts) == 3:
host, container, mode = parts
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
elif len(parts) == 2:
if not is_volume_permissions(parts[1]):
host, container, mode = (vol.split(':') + ['rw'])
if host is not None:
result[host] = dict(
bind=container,
mode=mode
)
return result
def _parse_exposed_ports(self, published_ports):
'''
Parse exposed ports from docker CLI-style ports syntax.
'''
exposed = []
if self.exposed_ports:
for port in self.exposed_ports:
port = str(port).strip()
protocol = 'tcp'
match = re.search(r'(/.+$)', port)
if match:
protocol = match.group(1).replace('/', '')
port = re.sub(r'/.+$', '', port)
exposed.append((port, protocol))
if published_ports:
# Any published port should also be exposed
for publish_port in published_ports:
match = False
if isinstance(publish_port, string_types) and '/' in publish_port:
port, protocol = publish_port.split('/')
port = int(port)
else:
protocol = 'tcp'
port = int(publish_port)
for exposed_port in exposed:
if exposed_port[1] != protocol:
continue
if isinstance(exposed_port[0], string_types) and '-' in exposed_port[0]:
start_port, end_port = exposed_port[0].split('-')
if int(start_port) <= port <= int(end_port):
match = True
elif exposed_port[0] == port:
match = True
if not match:
exposed.append((port, protocol))
return exposed
@staticmethod
def _parse_links(links):
'''
Turn links into a dictionary
'''
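        # For example, ['db:database', 'cache'] becomes
        # [('db', 'database'), ('cache', 'cache')].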
if links is None:
return None
result = []
for link in links:
parsed_link = link.split(':', 1)
if len(parsed_link) == 2:
result.append((parsed_link[0], parsed_link[1]))
else:
result.append((parsed_link[0], parsed_link[0]))
return result
def _parse_ulimits(self):
'''
Turn ulimits into an array of Ulimit objects
'''
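        # For example, 'nofile:1024:2048' yields Ulimit(name='nofile', soft=1024, hard=2048);
        # 'nofile:1024' uses 1024 for both the soft and the hard limit.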
if self.ulimits is None:
return None
results = []
for limit in self.ulimits:
limits = dict()
pieces = limit.split(':')
if len(pieces) >= 2:
limits['name'] = pieces[0]
limits['soft'] = int(pieces[1])
limits['hard'] = int(pieces[1])
if len(pieces) == 3:
limits['hard'] = int(pieces[2])
try:
results.append(Ulimit(**limits))
except ValueError as exc:
self.fail("Error parsing ulimits value %s - %s" % (limit, exc))
return results
def _parse_sysctls(self):
'''
        Return the sysctls dict; no conversion into objects is required.
'''
return self.sysctls
def _parse_log_config(self):
'''
Create a LogConfig object
'''
if self.log_driver is None:
return None
options = dict(
Type=self.log_driver,
Config=dict()
)
if self.log_options is not None:
options['Config'] = dict()
for k, v in self.log_options.items():
if not isinstance(v, string_types):
self.client.module.warn(
"Non-string value found for log_options option '%s'. The value is automatically converted to '%s'. "
"If this is not correct, or you want to avoid such warnings, please quote the value." % (k, str(v))
)
v = str(v)
self.log_options[k] = v
options['Config'][k] = v
try:
return LogConfig(**options)
except ValueError as exc:
self.fail('Error parsing logging options - %s' % (exc))
def _parse_tmpfs(self):
'''
        Turn the tmpfs list into a dict mapping mount points to option strings
'''
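        # For example, '/run:rw,size=64m' becomes {'/run': 'rw,size=64m'} and
        # '/tmp' becomes {'/tmp': ''}.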
result = dict()
if self.tmpfs is None:
return result
for tmpfs_spec in self.tmpfs:
split_spec = tmpfs_spec.split(":", 1)
if len(split_spec) > 1:
result[split_spec[0]] = split_spec[1]
else:
result[split_spec[0]] = ""
return result
def _get_environment(self):
"""
If environment file is combined with explicit environment variables, the explicit environment variables
take precedence.
"""
final_env = {}
if self.env_file:
parsed_env_file = utils.parse_env_file(self.env_file)
for name, value in parsed_env_file.items():
final_env[name] = str(value)
if self.env:
for name, value in self.env.items():
if not isinstance(value, string_types):
self.fail("Non-string value found for env option. Ambiguous env options must be "
"wrapped in quotes to avoid them being interpreted. Key: %s" % (name, ))
final_env[name] = str(value)
return final_env
def _get_network_id(self, network_name):
network_id = None
try:
for network in self.client.networks(names=[network_name]):
if network['Name'] == network_name:
network_id = network['Id']
break
except Exception as exc:
self.fail("Error getting network id for %s - %s" % (network_name, str(exc)))
return network_id
def _process_mounts(self):
if self.mounts is None:
return None, None
mounts_list = []
mounts_expected = []
for mount in self.mounts:
target = mount['target']
datatype = mount['type']
mount_dict = dict(mount)
# Sanity checks (so we don't wait for docker-py to barf on input)
if mount_dict.get('source') is None and datatype != 'tmpfs':
self.client.fail('source must be specified for mount "{0}" of type "{1}"'.format(target, datatype))
mount_option_types = dict(
volume_driver='volume',
volume_options='volume',
propagation='bind',
no_copy='volume',
labels='volume',
tmpfs_size='tmpfs',
tmpfs_mode='tmpfs',
)
for option, req_datatype in mount_option_types.items():
if mount_dict.get(option) is not None and datatype != req_datatype:
self.client.fail('{0} cannot be specified for mount "{1}" of type "{2}" (needs type "{3}")'.format(option, target, datatype, req_datatype))
# Handle volume_driver and volume_options
volume_driver = mount_dict.pop('volume_driver')
volume_options = mount_dict.pop('volume_options')
if volume_driver:
if volume_options:
volume_options = clean_dict_booleans_for_docker_api(volume_options)
mount_dict['driver_config'] = docker_types.DriverConfig(name=volume_driver, options=volume_options)
if mount_dict['labels']:
mount_dict['labels'] = clean_dict_booleans_for_docker_api(mount_dict['labels'])
if mount_dict.get('tmpfs_size') is not None:
try:
mount_dict['tmpfs_size'] = human_to_bytes(mount_dict['tmpfs_size'])
except ValueError as exc:
self.fail('Failed to convert tmpfs_size of mount "{0}" to bytes: {1}'.format(target, exc))
if mount_dict.get('tmpfs_mode') is not None:
try:
mount_dict['tmpfs_mode'] = int(mount_dict['tmpfs_mode'], 8)
except Exception as dummy:
                    self.client.fail('tmpfs_mode of mount "{0}" is not an octal string!'.format(target))
# Fill expected mount dict
mount_expected = dict(mount)
mount_expected['tmpfs_size'] = mount_dict['tmpfs_size']
mount_expected['tmpfs_mode'] = mount_dict['tmpfs_mode']
# Add result to lists
mounts_list.append(docker_types.Mount(**mount_dict))
mounts_expected.append(omit_none_from_dict(mount_expected))
return mounts_list, mounts_expected
def _process_rate_bps(self, option):
"""
Format device_read_bps and device_write_bps option
"""
devices_list = []
for v in getattr(self, option):
device_dict = dict((x.title(), y) for x, y in v.items())
device_dict['Rate'] = human_to_bytes(device_dict['Rate'])
devices_list.append(device_dict)
setattr(self, option, devices_list)
def _process_rate_iops(self, option):
"""
Format device_read_iops and device_write_iops option
"""
devices_list = []
for v in getattr(self, option):
device_dict = dict((x.title(), y) for x, y in v.items())
devices_list.append(device_dict)
setattr(self, option, devices_list)
def _replace_container_names(self, mode):
"""
Parse IPC and PID modes. If they contain a container name, replace
with the container's ID.
"""
if mode is None or not mode.startswith('container:'):
return mode
container_name = mode[len('container:'):]
# Try to inspect container to see whether this is an ID or a
        # name (and in the latter case, retrieve its ID)
container = self.client.get_container(container_name)
if container is None:
# If we can't find the container, issue a warning and continue with
# what the user specified.
self.client.module.warn('Cannot find a container with name or ID "{0}"'.format(container_name))
return mode
return 'container:{0}'.format(container['Id'])
def _check_mount_target_collisions(self):
last = dict()
def f(t, name):
if t in last:
if name == last[t]:
self.client.fail('The mount point "{0}" appears twice in the {1} option'.format(t, name))
else:
self.client.fail('The mount point "{0}" appears both in the {1} and {2} option'.format(t, name, last[t]))
last[t] = name
if self.expected_mounts:
for t in [m['target'] for m in self.expected_mounts]:
f(t, 'mounts')
if self.volumes:
for v in self.volumes:
vs = v.split(':')
f(vs[0 if len(vs) == 1 else 1], 'volumes')
class Container(DockerBaseClass):
def __init__(self, container, parameters):
super(Container, self).__init__()
self.raw = container
self.Id = None
self.container = container
if container:
self.Id = container['Id']
self.Image = container['Image']
self.log(self.container, pretty_print=True)
self.parameters = parameters
self.parameters.expected_links = None
self.parameters.expected_ports = None
self.parameters.expected_exposed = None
self.parameters.expected_volumes = None
self.parameters.expected_ulimits = None
self.parameters.expected_sysctls = None
self.parameters.expected_etc_hosts = None
self.parameters.expected_env = None
self.parameters_map = dict()
self.parameters_map['expected_links'] = 'links'
self.parameters_map['expected_ports'] = 'expected_ports'
self.parameters_map['expected_exposed'] = 'exposed_ports'
self.parameters_map['expected_volumes'] = 'volumes'
self.parameters_map['expected_ulimits'] = 'ulimits'
self.parameters_map['expected_sysctls'] = 'sysctls'
self.parameters_map['expected_etc_hosts'] = 'etc_hosts'
self.parameters_map['expected_env'] = 'env'
self.parameters_map['expected_entrypoint'] = 'entrypoint'
self.parameters_map['expected_binds'] = 'volumes'
self.parameters_map['expected_cmd'] = 'command'
self.parameters_map['expected_devices'] = 'devices'
self.parameters_map['expected_healthcheck'] = 'healthcheck'
self.parameters_map['expected_mounts'] = 'mounts'
def fail(self, msg):
self.parameters.client.fail(msg)
@property
def exists(self):
        return bool(self.container)
@property
def running(self):
if self.container and self.container.get('State'):
if self.container['State'].get('Running') and not self.container['State'].get('Ghost', False):
return True
return False
@property
def paused(self):
if self.container and self.container.get('State'):
return self.container['State'].get('Paused', False)
return False
def _compare(self, a, b, compare):
'''
Compare values a and b as described in compare.
'''
return compare_generic(a, b, compare['comparison'], compare['type'])
def _decode_mounts(self, mounts):
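        # Converts docker inspect 'Mounts' entries back into module-style dicts so
        # they can be compared against the 'mounts' option, e.g. 'Target' -> 'target'.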
if not mounts:
return mounts
result = []
empty_dict = dict()
for mount in mounts:
res = dict()
res['type'] = mount.get('Type')
res['source'] = mount.get('Source')
res['target'] = mount.get('Target')
res['read_only'] = mount.get('ReadOnly', False) # golang's omitempty for bool returns None for False
res['consistency'] = mount.get('Consistency')
res['propagation'] = mount.get('BindOptions', empty_dict).get('Propagation')
res['no_copy'] = mount.get('VolumeOptions', empty_dict).get('NoCopy', False)
res['labels'] = mount.get('VolumeOptions', empty_dict).get('Labels', empty_dict)
res['volume_driver'] = mount.get('VolumeOptions', empty_dict).get('DriverConfig', empty_dict).get('Name')
res['volume_options'] = mount.get('VolumeOptions', empty_dict).get('DriverConfig', empty_dict).get('Options', empty_dict)
res['tmpfs_size'] = mount.get('TmpfsOptions', empty_dict).get('SizeBytes')
res['tmpfs_mode'] = mount.get('TmpfsOptions', empty_dict).get('Mode')
result.append(res)
return result
def has_different_configuration(self, image):
'''
Diff parameters vs existing container config. Returns tuple: (True | False, List of differences)
'''
self.log('Starting has_different_configuration')
self.parameters.expected_entrypoint = self._get_expected_entrypoint()
self.parameters.expected_links = self._get_expected_links()
self.parameters.expected_ports = self._get_expected_ports()
self.parameters.expected_exposed = self._get_expected_exposed(image)
self.parameters.expected_volumes = self._get_expected_volumes(image)
self.parameters.expected_binds = self._get_expected_binds(image)
self.parameters.expected_ulimits = self._get_expected_ulimits(self.parameters.ulimits)
self.parameters.expected_sysctls = self._get_expected_sysctls(self.parameters.sysctls)
self.parameters.expected_etc_hosts = self._convert_simple_dict_to_list('etc_hosts')
self.parameters.expected_env = self._get_expected_env(image)
self.parameters.expected_cmd = self._get_expected_cmd()
self.parameters.expected_devices = self._get_expected_devices()
self.parameters.expected_healthcheck = self._get_expected_healthcheck()
        if not self.container.get('HostConfig'):
            self.fail("has_different_configuration: Error parsing container properties. HostConfig missing.")
        if not self.container.get('Config'):
            self.fail("has_different_configuration: Error parsing container properties. Config missing.")
        if not self.container.get('NetworkSettings'):
            self.fail("has_different_configuration: Error parsing container properties. NetworkSettings missing.")
host_config = self.container['HostConfig']
log_config = host_config.get('LogConfig', dict())
restart_policy = host_config.get('RestartPolicy', dict())
config = self.container['Config']
network = self.container['NetworkSettings']
# The previous version of the docker module ignored the detach state by
# assuming if the container was running, it must have been detached.
detach = not (config.get('AttachStderr') and config.get('AttachStdout'))
# "ExposedPorts": null returns None type & causes AttributeError - PR #5517
if config.get('ExposedPorts') is not None:
expected_exposed = [self._normalize_port(p) for p in config.get('ExposedPorts', dict()).keys()]
else:
expected_exposed = []
# Map parameters to container inspect results
config_mapping = dict(
expected_cmd=config.get('Cmd'),
domainname=config.get('Domainname'),
hostname=config.get('Hostname'),
user=config.get('User'),
detach=detach,
init=host_config.get('Init'),
interactive=config.get('OpenStdin'),
capabilities=host_config.get('CapAdd'),
cap_drop=host_config.get('CapDrop'),
expected_devices=host_config.get('Devices'),
dns_servers=host_config.get('Dns'),
dns_opts=host_config.get('DnsOptions'),
dns_search_domains=host_config.get('DnsSearch'),
expected_env=(config.get('Env') or []),
expected_entrypoint=config.get('Entrypoint'),
expected_etc_hosts=host_config['ExtraHosts'],
expected_exposed=expected_exposed,
groups=host_config.get('GroupAdd'),
ipc_mode=host_config.get("IpcMode"),
labels=config.get('Labels'),
expected_links=host_config.get('Links'),
mac_address=network.get('MacAddress'),
memory_swappiness=host_config.get('MemorySwappiness'),
network_mode=host_config.get('NetworkMode'),
userns_mode=host_config.get('UsernsMode'),
oom_killer=host_config.get('OomKillDisable'),
oom_score_adj=host_config.get('OomScoreAdj'),
pid_mode=host_config.get('PidMode'),
privileged=host_config.get('Privileged'),
expected_ports=host_config.get('PortBindings'),
read_only=host_config.get('ReadonlyRootfs'),
restart_policy=restart_policy.get('Name'),
runtime=host_config.get('Runtime'),
shm_size=host_config.get('ShmSize'),
security_opts=host_config.get("SecurityOpt"),
stop_signal=config.get("StopSignal"),
tmpfs=host_config.get('Tmpfs'),
tty=config.get('Tty'),
expected_ulimits=host_config.get('Ulimits'),
expected_sysctls=host_config.get('Sysctls'),
uts=host_config.get('UTSMode'),
expected_volumes=config.get('Volumes'),
expected_binds=host_config.get('Binds'),
volume_driver=host_config.get('VolumeDriver'),
volumes_from=host_config.get('VolumesFrom'),
working_dir=config.get('WorkingDir'),
publish_all_ports=host_config.get('PublishAllPorts'),
expected_healthcheck=config.get('Healthcheck'),
disable_healthcheck=(not config.get('Healthcheck') or config.get('Healthcheck').get('Test') == ['NONE']),
device_read_bps=host_config.get('BlkioDeviceReadBps'),
device_write_bps=host_config.get('BlkioDeviceWriteBps'),
device_read_iops=host_config.get('BlkioDeviceReadIOps'),
device_write_iops=host_config.get('BlkioDeviceWriteIOps'),
pids_limit=host_config.get('PidsLimit'),
# According to https://github.com/moby/moby/, support for HostConfig.Mounts
# has been included at least since v17.03.0-ce, which has API version 1.26.
# The previous tag, v1.9.1, has API version 1.21 and does not have
            # HostConfig.Mounts. Whether API version 1.25 supports it is unclear.
expected_mounts=self._decode_mounts(host_config.get('Mounts')),
)
# Options which don't make sense without their accompanying option
if self.parameters.restart_policy:
config_mapping['restart_retries'] = restart_policy.get('MaximumRetryCount')
if self.parameters.log_driver:
config_mapping['log_driver'] = log_config.get('Type')
config_mapping['log_options'] = log_config.get('Config')
if self.parameters.client.option_minimal_versions['auto_remove']['supported']:
# auto_remove is only supported in Docker SDK for Python >= 2.0.0; unfortunately
# it has a default value, that's why we have to jump through the hoops here
config_mapping['auto_remove'] = host_config.get('AutoRemove')
if self.parameters.client.option_minimal_versions['stop_timeout']['supported']:
# stop_timeout is only supported in Docker SDK for Python >= 2.1. Note that
# stop_timeout has a hybrid role, in that it used to be something only used
# for stopping containers, and is now also used as a container property.
# That's why it needs special handling here.
config_mapping['stop_timeout'] = config.get('StopTimeout')
if self.parameters.client.docker_api_version < LooseVersion('1.22'):
# For docker API < 1.22, update_container() is not supported. Thus
# we need to handle all limits which are usually handled by
# update_container() as configuration changes which require a container
# restart.
config_mapping.update(dict(
blkio_weight=host_config.get('BlkioWeight'),
cpu_period=host_config.get('CpuPeriod'),
cpu_quota=host_config.get('CpuQuota'),
cpu_shares=host_config.get('CpuShares'),
cpuset_cpus=host_config.get('CpusetCpus'),
cpuset_mems=host_config.get('CpusetMems'),
kernel_memory=host_config.get("KernelMemory"),
memory=host_config.get('Memory'),
memory_reservation=host_config.get('MemoryReservation'),
memory_swap=host_config.get('MemorySwap'),
))
differences = DifferenceTracker()
for key, value in config_mapping.items():
minimal_version = self.parameters.client.option_minimal_versions.get(key, {})
if not minimal_version.get('supported', True):
continue
compare = self.parameters.client.comparisons[self.parameters_map.get(key, key)]
self.log('check differences %s %s vs %s (%s)' % (key, getattr(self.parameters, key), str(value), compare))
if getattr(self.parameters, key, None) is not None:
match = self._compare(getattr(self.parameters, key), value, compare)
if not match:
# no match. record the differences
p = getattr(self.parameters, key)
c = value
if compare['type'] == 'set':
# Since the order does not matter, sort so that the diff output is better.
if p is not None:
p = sorted(p)
if c is not None:
c = sorted(c)
elif compare['type'] == 'set(dict)':
# Since the order does not matter, sort so that the diff output is better.
if key == 'expected_mounts':
# For selected values, use one entry as key
def sort_key_fn(x):
return x['target']
else:
# We sort the list of dictionaries by using the sorted items of a dict as its key.
def sort_key_fn(x):
return sorted((a, str(b)) for a, b in x.items())
if p is not None:
p = sorted(p, key=sort_key_fn)
if c is not None:
c = sorted(c, key=sort_key_fn)
differences.add(key, parameter=p, active=c)
has_differences = not differences.empty
return has_differences, differences
def has_different_resource_limits(self):
'''
Diff parameters and container resource limits
'''
if not self.container.get('HostConfig'):
self.fail("limits_differ_from_container: Error parsing container properties. HostConfig missing.")
if self.parameters.client.docker_api_version < LooseVersion('1.22'):
# update_container() call not supported
return False, []
host_config = self.container['HostConfig']
config_mapping = dict(
blkio_weight=host_config.get('BlkioWeight'),
cpu_period=host_config.get('CpuPeriod'),
cpu_quota=host_config.get('CpuQuota'),
cpu_shares=host_config.get('CpuShares'),
cpuset_cpus=host_config.get('CpusetCpus'),
cpuset_mems=host_config.get('CpusetMems'),
kernel_memory=host_config.get("KernelMemory"),
memory=host_config.get('Memory'),
memory_reservation=host_config.get('MemoryReservation'),
memory_swap=host_config.get('MemorySwap'),
)
differences = DifferenceTracker()
for key, value in config_mapping.items():
if getattr(self.parameters, key, None):
compare = self.parameters.client.comparisons[self.parameters_map.get(key, key)]
match = self._compare(getattr(self.parameters, key), value, compare)
if not match:
# no match. record the differences
differences.add(key, parameter=getattr(self.parameters, key), active=value)
different = not differences.empty
return different, differences
def has_network_differences(self):
'''
Check if the container is connected to requested networks with expected options: links, aliases, ipv4, ipv6
'''
different = False
differences = []
if not self.parameters.networks:
return different, differences
if not self.container.get('NetworkSettings'):
self.fail("has_missing_networks: Error parsing container properties. NetworkSettings missing.")
connected_networks = self.container['NetworkSettings']['Networks']
for network in self.parameters.networks:
network_info = connected_networks.get(network['name'])
if network_info is None:
different = True
differences.append(dict(
parameter=network,
container=None
))
else:
diff = False
network_info_ipam = network_info.get('IPAMConfig', {})
if network.get('ipv4_address') and network['ipv4_address'] != network_info_ipam.get('IPv4Address'):
diff = True
if network.get('ipv6_address') and network['ipv6_address'] != network_info_ipam.get('IPv6Address'):
diff = True
if network.get('aliases'):
if not compare_generic(network['aliases'], network_info.get('Aliases'), 'allow_more_present', 'set'):
diff = True
if network.get('links'):
expected_links = []
for link, alias in network['links']:
expected_links.append("%s:%s" % (link, alias))
if not compare_generic(expected_links, network_info.get('Links'), 'allow_more_present', 'set'):
diff = True
if diff:
different = True
differences.append(dict(
parameter=network,
container=dict(
name=network['name'],
ipv4_address=network_info_ipam.get('IPv4Address'),
ipv6_address=network_info_ipam.get('IPv6Address'),
aliases=network_info.get('Aliases'),
links=network_info.get('Links')
)
))
return different, differences
def has_extra_networks(self):
'''
Check if the container is connected to non-requested networks
'''
extra_networks = []
extra = False
if not self.container.get('NetworkSettings'):
self.fail("has_extra_networks: Error parsing container properties. NetworkSettings missing.")
connected_networks = self.container['NetworkSettings'].get('Networks')
if connected_networks:
for network, network_config in connected_networks.items():
keep = False
if self.parameters.networks:
for expected_network in self.parameters.networks:
if expected_network['name'] == network:
keep = True
if not keep:
extra = True
extra_networks.append(dict(name=network, id=network_config['NetworkID']))
return extra, extra_networks
def _get_expected_devices(self):
if not self.parameters.devices:
return None
expected_devices = []
for device in self.parameters.devices:
parts = device.split(':')
if len(parts) == 1:
expected_devices.append(
dict(
CgroupPermissions='rwm',
PathInContainer=parts[0],
PathOnHost=parts[0]
))
elif len(parts) == 2:
parts = device.split(':')
expected_devices.append(
dict(
CgroupPermissions='rwm',
PathInContainer=parts[1],
PathOnHost=parts[0]
)
)
else:
expected_devices.append(
dict(
CgroupPermissions=parts[2],
PathInContainer=parts[1],
PathOnHost=parts[0]
))
return expected_devices
def _get_expected_entrypoint(self):
if not self.parameters.entrypoint:
return None
return shlex.split(self.parameters.entrypoint)
def _get_expected_ports(self):
if not self.parameters.published_ports:
return None
expected_bound_ports = {}
for container_port, config in self.parameters.published_ports.items():
if isinstance(container_port, int):
container_port = "%s/tcp" % container_port
if len(config) == 1:
if isinstance(config[0], int):
expected_bound_ports[container_port] = [{'HostIp': "0.0.0.0", 'HostPort': config[0]}]
else:
expected_bound_ports[container_port] = [{'HostIp': config[0], 'HostPort': ""}]
elif isinstance(config[0], tuple):
expected_bound_ports[container_port] = []
for host_ip, host_port in config:
expected_bound_ports[container_port].append({'HostIp': host_ip, 'HostPort': str(host_port)})
else:
expected_bound_ports[container_port] = [{'HostIp': config[0], 'HostPort': str(config[1])}]
return expected_bound_ports
def _get_expected_links(self):
if self.parameters.links is None:
return None
self.log('parameter links:')
self.log(self.parameters.links, pretty_print=True)
exp_links = []
for link, alias in self.parameters.links:
exp_links.append("/%s:%s/%s" % (link, ('/' + self.parameters.name), alias))
return exp_links
def _get_expected_binds(self, image):
self.log('_get_expected_binds')
image_vols = []
if image:
image_vols = self._get_image_binds(image[self.parameters.client.image_inspect_source].get('Volumes'))
param_vols = []
if self.parameters.volumes:
for vol in self.parameters.volumes:
host = None
if ':' in vol:
if len(vol.split(':')) == 3:
host, container, mode = vol.split(':')
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]):
host, container, mode = vol.split(':') + ['rw']
if host:
param_vols.append("%s:%s:%s" % (host, container, mode))
result = list(set(image_vols + param_vols))
self.log("expected_binds:")
self.log(result, pretty_print=True)
return result
def _get_image_binds(self, volumes):
'''
Convert array of binds to array of strings with format host_path:container_path:mode
:param volumes: array of bind dicts
:return: array of strings
'''
results = []
if isinstance(volumes, dict):
results += self._get_bind_from_dict(volumes)
elif isinstance(volumes, list):
for vol in volumes:
results += self._get_bind_from_dict(vol)
return results
@staticmethod
def _get_bind_from_dict(volume_dict):
results = []
if volume_dict:
for host_path, config in volume_dict.items():
if isinstance(config, dict) and config.get('bind'):
container_path = config.get('bind')
mode = config.get('mode', 'rw')
results.append("%s:%s:%s" % (host_path, container_path, mode))
return results
def _get_expected_volumes(self, image):
self.log('_get_expected_volumes')
expected_vols = dict()
if image and image[self.parameters.client.image_inspect_source].get('Volumes'):
expected_vols.update(image[self.parameters.client.image_inspect_source].get('Volumes'))
if self.parameters.volumes:
for vol in self.parameters.volumes:
container = None
if ':' in vol:
if len(vol.split(':')) == 3:
dummy, container, mode = vol.split(':')
if not is_volume_permissions(mode):
self.fail('Found invalid volumes mode: {0}'.format(mode))
if len(vol.split(':')) == 2:
parts = vol.split(':')
if not is_volume_permissions(parts[1]):
dummy, container, mode = vol.split(':') + ['rw']
new_vol = dict()
if container:
new_vol[container] = dict()
else:
new_vol[vol] = dict()
expected_vols.update(new_vol)
if not expected_vols:
expected_vols = None
self.log("expected_volumes:")
self.log(expected_vols, pretty_print=True)
return expected_vols
def _get_expected_env(self, image):
self.log('_get_expected_env')
expected_env = dict()
if image and image[self.parameters.client.image_inspect_source].get('Env'):
for env_var in image[self.parameters.client.image_inspect_source]['Env']:
parts = env_var.split('=', 1)
expected_env[parts[0]] = parts[1]
if self.parameters.env:
expected_env.update(self.parameters.env)
param_env = []
for key, value in expected_env.items():
param_env.append("%s=%s" % (key, value))
return param_env
def _get_expected_exposed(self, image):
self.log('_get_expected_exposed')
image_ports = []
if image:
image_exposed_ports = image[self.parameters.client.image_inspect_source].get('ExposedPorts') or {}
image_ports = [self._normalize_port(p) for p in image_exposed_ports.keys()]
param_ports = []
if self.parameters.ports:
param_ports = [str(p[0]) + '/' + p[1] for p in self.parameters.ports]
result = list(set(image_ports + param_ports))
self.log(result, pretty_print=True)
return result
def _get_expected_ulimits(self, config_ulimits):
self.log('_get_expected_ulimits')
if config_ulimits is None:
return None
results = []
for limit in config_ulimits:
results.append(dict(
Name=limit.name,
Soft=limit.soft,
Hard=limit.hard
))
return results
def _get_expected_sysctls(self, config_sysctls):
self.log('_get_expected_sysctls')
if config_sysctls is None:
return None
result = dict()
for key, value in config_sysctls.items():
result[key] = str(value)
return result
def _get_expected_cmd(self):
self.log('_get_expected_cmd')
if not self.parameters.command:
return None
return shlex.split(self.parameters.command)
def _convert_simple_dict_to_list(self, param_name, join_with=':'):
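        # For example, etc_hosts={'db.example.com': '10.0.0.5'} becomes
        # ['db.example.com:10.0.0.5'].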
if getattr(self.parameters, param_name, None) is None:
return None
results = []
for key, value in getattr(self.parameters, param_name).items():
results.append("%s%s%s" % (key, join_with, value))
return results
def _normalize_port(self, port):
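        # For example, '8080' becomes '8080/tcp', while '53/udp' is returned unchanged.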
if '/' not in port:
return port + '/tcp'
return port
def _get_expected_healthcheck(self):
self.log('_get_expected_healthcheck')
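        # For example, the module option 'start_period' maps to docker's
        # 'StartPeriod' inspect key (title-cased with underscores removed).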
expected_healthcheck = dict()
if self.parameters.healthcheck:
expected_healthcheck.update([(k.title().replace("_", ""), v)
for k, v in self.parameters.healthcheck.items()])
return expected_healthcheck
class ContainerManager(DockerBaseClass):
'''
Perform container management tasks
'''
def __init__(self, client):
super(ContainerManager, self).__init__()
if client.module.params.get('log_options') and not client.module.params.get('log_driver'):
client.module.warn('log_options is ignored when log_driver is not specified')
if client.module.params.get('healthcheck') and not client.module.params.get('healthcheck').get('test'):
client.module.warn('healthcheck is ignored when test is not specified')
if client.module.params.get('restart_retries') is not None and not client.module.params.get('restart_policy'):
client.module.warn('restart_retries is ignored when restart_policy is not specified')
self.client = client
self.parameters = TaskParameters(client)
self.check_mode = self.client.check_mode
self.results = {'changed': False, 'actions': []}
self.diff = {}
self.diff_tracker = DifferenceTracker()
self.facts = {}
state = self.parameters.state
if state in ('stopped', 'started', 'present'):
self.present(state)
elif state == 'absent':
self.absent()
if not self.check_mode and not self.parameters.debug:
self.results.pop('actions')
if self.client.module._diff or self.parameters.debug:
self.diff['before'], self.diff['after'] = self.diff_tracker.get_before_after()
self.results['diff'] = self.diff
if self.facts:
self.results['ansible_facts'] = {'docker_container': self.facts}
self.results['container'] = self.facts
def present(self, state):
container = self._get_container(self.parameters.name)
was_running = container.running
was_paused = container.paused
container_created = False
# If the image parameter was passed then we need to deal with the image
# version comparison. Otherwise we handle this depending on whether
# the container already runs or not; in the former case, in case the
# container needs to be restarted, we use the existing container's
# image ID.
image = self._get_image()
self.log(image, pretty_print=True)
if not container.exists:
# New container
self.log('No container found')
if not self.parameters.image:
self.fail('Cannot create container when image is not specified!')
self.diff_tracker.add('exists', parameter=True, active=False)
new_container = self.container_create(self.parameters.image, self.parameters.create_parameters)
if new_container:
container = new_container
container_created = True
else:
# Existing container
different, differences = container.has_different_configuration(image)
image_different = False
if self.parameters.comparisons['image']['comparison'] == 'strict':
image_different = self._image_is_different(image, container)
if image_different or different or self.parameters.recreate:
self.diff_tracker.merge(differences)
self.diff['differences'] = differences.get_legacy_docker_container_diffs()
if image_different:
self.diff['image_different'] = True
self.log("differences")
self.log(differences.get_legacy_docker_container_diffs(), pretty_print=True)
image_to_use = self.parameters.image
if not image_to_use and container and container.Image:
image_to_use = container.Image
if not image_to_use:
self.fail('Cannot recreate container when image is not specified or cannot be extracted from current container!')
if container.running:
self.container_stop(container.Id)
self.container_remove(container.Id)
new_container = self.container_create(image_to_use, self.parameters.create_parameters)
if new_container:
container = new_container
container_created = True
if container and container.exists:
container = self.update_limits(container)
container = self.update_networks(container, container_created)
if state == 'started' and not container.running:
self.diff_tracker.add('running', parameter=True, active=was_running)
container = self.container_start(container.Id)
elif state == 'started' and self.parameters.restart:
self.diff_tracker.add('running', parameter=True, active=was_running)
self.diff_tracker.add('restarted', parameter=True, active=False)
container = self.container_restart(container.Id)
elif state == 'stopped' and container.running:
self.diff_tracker.add('running', parameter=False, active=was_running)
self.container_stop(container.Id)
container = self._get_container(container.Id)
if state == 'started' and container.paused != self.parameters.paused:
self.diff_tracker.add('paused', parameter=self.parameters.paused, active=was_paused)
if not self.check_mode:
try:
if self.parameters.paused:
self.client.pause(container=container.Id)
else:
self.client.unpause(container=container.Id)
except Exception as exc:
self.fail("Error %s container %s: %s" % (
"pausing" if self.parameters.paused else "unpausing", container.Id, str(exc)
))
container = self._get_container(container.Id)
self.results['changed'] = True
self.results['actions'].append(dict(set_paused=self.parameters.paused))
self.facts = container.raw
def absent(self):
container = self._get_container(self.parameters.name)
if container.exists:
if container.running:
self.diff_tracker.add('running', parameter=False, active=True)
self.container_stop(container.Id)
self.diff_tracker.add('exists', parameter=False, active=True)
self.container_remove(container.Id)
def fail(self, msg, **kwargs):
self.client.fail(msg, **kwargs)
def _output_logs(self, msg):
self.client.module.log(msg=msg)
def _get_container(self, container):
'''
Expects container ID or Name. Returns a container object
'''
return Container(self.client.get_container(container), self.parameters)
def _get_image(self):
if not self.parameters.image:
self.log('No image specified')
return None
if is_image_name_id(self.parameters.image):
image = self.client.find_image_by_id(self.parameters.image)
else:
repository, tag = utils.parse_repository_tag(self.parameters.image)
if not tag:
tag = "latest"
image = self.client.find_image(repository, tag)
if not self.check_mode:
if not image or self.parameters.pull:
self.log("Pull the image.")
                image, already_latest = self.client.pull_image(repository, tag)
                if already_latest:
self.results['changed'] = False
else:
self.results['changed'] = True
self.results['actions'].append(dict(pulled_image="%s:%s" % (repository, tag)))
self.log("image")
self.log(image, pretty_print=True)
return image
def _image_is_different(self, image, container):
if image and image.get('Id'):
if container and container.Image:
if image.get('Id') != container.Image:
self.diff_tracker.add('image', parameter=image.get('Id'), active=container.Image)
return True
return False
def update_limits(self, container):
limits_differ, different_limits = container.has_different_resource_limits()
if limits_differ:
self.log("limit differences:")
self.log(different_limits.get_legacy_docker_container_diffs(), pretty_print=True)
self.diff_tracker.merge(different_limits)
if limits_differ and not self.check_mode:
self.container_update(container.Id, self.parameters.update_parameters)
return self._get_container(container.Id)
return container
def update_networks(self, container, container_created):
updated_container = container
if self.parameters.comparisons['networks']['comparison'] != 'ignore' or container_created:
has_network_differences, network_differences = container.has_network_differences()
if has_network_differences:
if self.diff.get('differences'):
self.diff['differences'].append(dict(network_differences=network_differences))
else:
self.diff['differences'] = [dict(network_differences=network_differences)]
for netdiff in network_differences:
self.diff_tracker.add(
'network.{0}'.format(netdiff['parameter']['name']),
parameter=netdiff['parameter'],
active=netdiff['container']
)
self.results['changed'] = True
updated_container = self._add_networks(container, network_differences)
if (self.parameters.comparisons['networks']['comparison'] == 'strict' and self.parameters.networks is not None) or self.parameters.purge_networks:
has_extra_networks, extra_networks = container.has_extra_networks()
if has_extra_networks:
if self.diff.get('differences'):
self.diff['differences'].append(dict(purge_networks=extra_networks))
else:
self.diff['differences'] = [dict(purge_networks=extra_networks)]
for extra_network in extra_networks:
self.diff_tracker.add(
'network.{0}'.format(extra_network['name']),
active=extra_network
)
self.results['changed'] = True
updated_container = self._purge_networks(container, extra_networks)
return updated_container
def _add_networks(self, container, differences):
for diff in differences:
# remove the container from the network, if connected
if diff.get('container'):
self.results['actions'].append(dict(removed_from_network=diff['parameter']['name']))
if not self.check_mode:
try:
self.client.disconnect_container_from_network(container.Id, diff['parameter']['id'])
except Exception as exc:
self.fail("Error disconnecting container from network %s - %s" % (diff['parameter']['name'],
str(exc)))
# connect to the network
params = dict()
for para in ('ipv4_address', 'ipv6_address', 'links', 'aliases'):
if diff['parameter'].get(para):
params[para] = diff['parameter'][para]
self.results['actions'].append(dict(added_to_network=diff['parameter']['name'], network_parameters=params))
if not self.check_mode:
try:
self.log("Connecting container to network %s" % diff['parameter']['id'])
self.log(params, pretty_print=True)
self.client.connect_container_to_network(container.Id, diff['parameter']['id'], **params)
except Exception as exc:
self.fail("Error connecting container to network %s - %s" % (diff['parameter']['name'], str(exc)))
return self._get_container(container.Id)
def _purge_networks(self, container, networks):
for network in networks:
self.results['actions'].append(dict(removed_from_network=network['name']))
if not self.check_mode:
try:
self.client.disconnect_container_from_network(container.Id, network['name'])
except Exception as exc:
self.fail("Error disconnecting container from network %s - %s" % (network['name'],
str(exc)))
return self._get_container(container.Id)
def container_create(self, image, create_parameters):
self.log("create container")
self.log("image: %s parameters:" % image)
self.log(create_parameters, pretty_print=True)
self.results['actions'].append(dict(created="Created container", create_parameters=create_parameters))
self.results['changed'] = True
new_container = None
if not self.check_mode:
try:
new_container = self.client.create_container(image, **create_parameters)
self.client.report_warnings(new_container)
except Exception as exc:
self.fail("Error creating container: %s" % str(exc))
return self._get_container(new_container['Id'])
return new_container
def container_start(self, container_id):
self.log("start container %s" % (container_id))
self.results['actions'].append(dict(started=container_id))
self.results['changed'] = True
if not self.check_mode:
try:
self.client.start(container=container_id)
except Exception as exc:
self.fail("Error starting container %s: %s" % (container_id, str(exc)))
if not self.parameters.detach:
if self.client.docker_py_version >= LooseVersion('3.0'):
status = self.client.wait(container_id)['StatusCode']
else:
status = self.client.wait(container_id)
if self.parameters.auto_remove:
output = "Cannot retrieve result as auto_remove is enabled"
if self.parameters.output_logs:
self.client.module.warn('Cannot output_logs if auto_remove is enabled!')
else:
config = self.client.inspect_container(container_id)
logging_driver = config['HostConfig']['LogConfig']['Type']
if logging_driver in ('json-file', 'journald'):
output = self.client.logs(container_id, stdout=True, stderr=True, stream=False, timestamps=False)
if self.parameters.output_logs:
self._output_logs(msg=output)
else:
output = "Result logged using `%s` driver" % logging_driver
if status != 0:
self.fail(output, status=status)
if self.parameters.cleanup:
self.container_remove(container_id, force=True)
insp = self._get_container(container_id)
if insp.raw:
insp.raw['Output'] = output
else:
insp.raw = dict(Output=output)
return insp
return self._get_container(container_id)
def container_remove(self, container_id, link=False, force=False):
volume_state = (not self.parameters.keep_volumes)
self.log("remove container container:%s v:%s link:%s force%s" % (container_id, volume_state, link, force))
self.results['actions'].append(dict(removed=container_id, volume_state=volume_state, link=link, force=force))
self.results['changed'] = True
response = None
if not self.check_mode:
count = 0
while True:
try:
response = self.client.remove_container(container_id, v=volume_state, link=link, force=force)
except NotFound as dummy:
pass
except APIError as exc:
if 'Unpause the container before stopping or killing' in exc.explanation:
# New docker daemon versions do not allow containers to be removed
# if they are paused. Make sure we don't end up in an infinite loop.
if count == 3:
self.fail("Error removing container %s (tried to unpause three times): %s" % (container_id, str(exc)))
count += 1
# Unpause
try:
self.client.unpause(container=container_id)
except Exception as exc2:
self.fail("Error unpausing container %s for removal: %s" % (container_id, str(exc2)))
# Now try again
continue
if 'removal of container ' in exc.explanation and ' is already in progress' in exc.explanation:
pass
else:
self.fail("Error removing container %s: %s" % (container_id, str(exc)))
except Exception as exc:
self.fail("Error removing container %s: %s" % (container_id, str(exc)))
# We only loop when explicitly requested by 'continue'
break
return response
def container_update(self, container_id, update_parameters):
if update_parameters:
self.log("update container %s" % (container_id))
self.log(update_parameters, pretty_print=True)
self.results['actions'].append(dict(updated=container_id, update_parameters=update_parameters))
self.results['changed'] = True
if not self.check_mode and callable(getattr(self.client, 'update_container')):
try:
result = self.client.update_container(container_id, **update_parameters)
self.client.report_warnings(result)
except Exception as exc:
self.fail("Error updating container %s: %s" % (container_id, str(exc)))
return self._get_container(container_id)
def container_kill(self, container_id):
self.results['actions'].append(dict(killed=container_id, signal=self.parameters.kill_signal))
self.results['changed'] = True
response = None
if not self.check_mode:
try:
if self.parameters.kill_signal:
response = self.client.kill(container_id, signal=self.parameters.kill_signal)
else:
response = self.client.kill(container_id)
except Exception as exc:
self.fail("Error killing container %s: %s" % (container_id, exc))
return response
def container_restart(self, container_id):
self.results['actions'].append(dict(restarted=container_id, timeout=self.parameters.stop_timeout))
self.results['changed'] = True
if not self.check_mode:
try:
if self.parameters.stop_timeout:
dummy = self.client.restart(container_id, timeout=self.parameters.stop_timeout)
else:
dummy = self.client.restart(container_id)
except Exception as exc:
self.fail("Error restarting container %s: %s" % (container_id, str(exc)))
return self._get_container(container_id)
def container_stop(self, container_id):
if self.parameters.force_kill:
self.container_kill(container_id)
return
self.results['actions'].append(dict(stopped=container_id, timeout=self.parameters.stop_timeout))
self.results['changed'] = True
response = None
if not self.check_mode:
count = 0
while True:
try:
if self.parameters.stop_timeout:
response = self.client.stop(container_id, timeout=self.parameters.stop_timeout)
else:
response = self.client.stop(container_id)
except APIError as exc:
if 'Unpause the container before stopping or killing' in exc.explanation:
# New docker daemon versions do not allow containers to be stopped
# if they are paused. Make sure we don't end up in an infinite loop.
if count == 3:
self.fail("Error stopping container %s (tried to unpause three times): %s" % (container_id, str(exc)))
count += 1
# Unpause
try:
self.client.unpause(container=container_id)
except Exception as exc2:
self.fail("Error unpausing container %s for removal: %s" % (container_id, str(exc2)))
# Now try again
continue
self.fail("Error stopping container %s: %s" % (container_id, str(exc)))
except Exception as exc:
self.fail("Error stopping container %s: %s" % (container_id, str(exc)))
# We only loop when explicitly requested by 'continue'
break
return response
def detect_ipvX_address_usage(client):
'''
Helper function to detect whether any specified network uses ipv4_address or ipv6_address
'''
for network in client.module.params.get("networks") or []:
if network.get('ipv4_address') is not None or network.get('ipv6_address') is not None:
return True
return False
class AnsibleDockerClientContainer(AnsibleDockerClient):
# A list of module options which are not docker container properties
__NON_CONTAINER_PROPERTY_OPTIONS = tuple([
'env_file', 'force_kill', 'keep_volumes', 'ignore_image', 'name', 'pull', 'purge_networks',
'recreate', 'restart', 'state', 'trust_image_content', 'networks', 'cleanup', 'kill_signal',
'output_logs', 'paused'
] + list(DOCKER_COMMON_ARGS.keys()))
def _parse_comparisons(self):
comparisons = {}
comp_aliases = {}
# Put in defaults
explicit_types = dict(
command='list',
devices='set(dict)',
dns_search_domains='list',
dns_servers='list',
env='set',
entrypoint='list',
etc_hosts='set',
mounts='set(dict)',
networks='set(dict)',
ulimits='set(dict)',
device_read_bps='set(dict)',
device_write_bps='set(dict)',
device_read_iops='set(dict)',
device_write_iops='set(dict)',
)
all_options = set() # this is for improving user feedback when a wrong option was specified for comparison
default_values = dict(
stop_timeout='ignore',
)
for option, data in self.module.argument_spec.items():
all_options.add(option)
for alias in data.get('aliases', []):
all_options.add(alias)
# Ignore options which aren't used as container properties
if option in self.__NON_CONTAINER_PROPERTY_OPTIONS and option != 'networks':
continue
# Determine option type
if option in explicit_types:
datatype = explicit_types[option]
elif data['type'] == 'list':
datatype = 'set'
elif data['type'] == 'dict':
datatype = 'dict'
else:
datatype = 'value'
# Determine comparison type
if option in default_values:
comparison = default_values[option]
elif datatype in ('list', 'value'):
comparison = 'strict'
else:
comparison = 'allow_more_present'
comparisons[option] = dict(type=datatype, comparison=comparison, name=option)
# Keep track of aliases
comp_aliases[option] = option
for alias in data.get('aliases', []):
comp_aliases[alias] = option
# Process legacy ignore options
if self.module.params['ignore_image']:
comparisons['image']['comparison'] = 'ignore'
if self.module.params['purge_networks']:
comparisons['networks']['comparison'] = 'strict'
# Process options
if self.module.params.get('comparisons'):
# If '*' appears in comparisons, process it first
if '*' in self.module.params['comparisons']:
value = self.module.params['comparisons']['*']
if value not in ('strict', 'ignore'):
self.fail("The wildcard can only be used with comparison modes 'strict' and 'ignore'!")
for option, v in comparisons.items():
if option == 'networks':
# `networks` is special: only update if
# some value is actually specified
if self.module.params['networks'] is None:
continue
v['comparison'] = value
# Now process all other comparisons.
comp_aliases_used = {}
for key, value in self.module.params['comparisons'].items():
if key == '*':
continue
# Find main key
key_main = comp_aliases.get(key)
if key_main is None:
if key in all_options:
self.fail("The module option '%s' cannot be specified in the comparisons dict, "
"since it does not correspond to the container's state!" % key)
self.fail("Unknown module option '%s' in comparisons dict!" % key)
if key_main in comp_aliases_used:
self.fail("Both '%s' and '%s' (aliases of %s) are specified in comparisons dict!" % (key, comp_aliases_used[key_main], key_main))
comp_aliases_used[key_main] = key
# Check value and update accordingly
if value in ('strict', 'ignore'):
comparisons[key_main]['comparison'] = value
elif value == 'allow_more_present':
if comparisons[key_main]['type'] == 'value':
self.fail("Option '%s' is a value and not a set/list/dict, so its comparison cannot be %s" % (key, value))
comparisons[key_main]['comparison'] = value
else:
self.fail("Unknown comparison mode '%s'!" % value)
# Add implicit options
comparisons['publish_all_ports'] = dict(type='value', comparison='strict', name='published_ports')
comparisons['expected_ports'] = dict(type='dict', comparison=comparisons['published_ports']['comparison'], name='expected_ports')
comparisons['disable_healthcheck'] = dict(type='value',
comparison='ignore' if comparisons['healthcheck']['comparison'] == 'ignore' else 'strict',
name='disable_healthcheck')
# Check legacy values
if self.module.params['ignore_image'] and comparisons['image']['comparison'] != 'ignore':
self.module.warn('The ignore_image option has been overridden by the comparisons option!')
if self.module.params['purge_networks'] and comparisons['networks']['comparison'] != 'strict':
self.module.warn('The purge_networks option has been overridden by the comparisons option!')
self.comparisons = comparisons
def _get_additional_minimal_versions(self):
stop_timeout_supported = self.docker_api_version >= LooseVersion('1.25')
stop_timeout_needed_for_update = self.module.params.get("stop_timeout") is not None and self.module.params.get('state') != 'absent'
if stop_timeout_supported:
stop_timeout_supported = self.docker_py_version >= LooseVersion('2.1')
if stop_timeout_needed_for_update and not stop_timeout_supported:
# We warn (instead of fail) since in older versions, stop_timeout was not used
# to update the container's configuration, but only when stopping a container.
self.module.warn("Docker SDK for Python's version is %s. Minimum version required is 2.1 to update "
"the container's stop_timeout configuration. "
"If you use the 'docker-py' module, you have to switch to the 'docker' Python package." % (docker_version,))
else:
if stop_timeout_needed_for_update and not stop_timeout_supported:
# We warn (instead of fail) since in older versions, stop_timeout was not used
# to update the container's configuration, but only when stopping a container.
self.module.warn("Docker API version is %s. Minimum version required is 1.25 to set or "
"update the container's stop_timeout configuration." % (self.docker_api_version_str,))
self.option_minimal_versions['stop_timeout']['supported'] = stop_timeout_supported
def __init__(self, **kwargs):
option_minimal_versions = dict(
# internal options
log_config=dict(),
publish_all_ports=dict(),
ports=dict(),
volume_binds=dict(),
name=dict(),
# normal options
device_read_bps=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_read_iops=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_write_bps=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
device_write_iops=dict(docker_py_version='1.9.0', docker_api_version='1.22'),
dns_opts=dict(docker_api_version='1.21', docker_py_version='1.10.0'),
ipc_mode=dict(docker_api_version='1.25'),
mac_address=dict(docker_api_version='1.25'),
oom_score_adj=dict(docker_api_version='1.22'),
shm_size=dict(docker_api_version='1.22'),
stop_signal=dict(docker_api_version='1.21'),
tmpfs=dict(docker_api_version='1.22'),
volume_driver=dict(docker_api_version='1.21'),
memory_reservation=dict(docker_api_version='1.21'),
kernel_memory=dict(docker_api_version='1.21'),
auto_remove=dict(docker_py_version='2.1.0', docker_api_version='1.25'),
healthcheck=dict(docker_py_version='2.0.0', docker_api_version='1.24'),
init=dict(docker_py_version='2.2.0', docker_api_version='1.25'),
runtime=dict(docker_py_version='2.4.0', docker_api_version='1.25'),
sysctls=dict(docker_py_version='1.10.0', docker_api_version='1.24'),
userns_mode=dict(docker_py_version='1.10.0', docker_api_version='1.23'),
uts=dict(docker_py_version='3.5.0', docker_api_version='1.25'),
pids_limit=dict(docker_py_version='1.10.0', docker_api_version='1.23'),
mounts=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
# specials
ipvX_address_supported=dict(docker_py_version='1.9.0', docker_api_version='1.22',
detect_usage=detect_ipvX_address_usage,
usage_msg='ipv4_address or ipv6_address in networks'),
stop_timeout=dict(), # see _get_additional_minimal_versions()
)
super(AnsibleDockerClientContainer, self).__init__(
option_minimal_versions=option_minimal_versions,
option_minimal_versions_ignore_params=self.__NON_CONTAINER_PROPERTY_OPTIONS,
**kwargs
)
self.image_inspect_source = 'Config'
if self.docker_api_version < LooseVersion('1.21'):
self.image_inspect_source = 'ContainerConfig'
self._get_additional_minimal_versions()
self._parse_comparisons()
def main():
argument_spec = dict(
auto_remove=dict(type='bool', default=False),
blkio_weight=dict(type='int'),
capabilities=dict(type='list', elements='str'),
cap_drop=dict(type='list', elements='str'),
cleanup=dict(type='bool', default=False),
command=dict(type='raw'),
comparisons=dict(type='dict'),
cpu_period=dict(type='int'),
cpu_quota=dict(type='int'),
cpuset_cpus=dict(type='str'),
cpuset_mems=dict(type='str'),
cpu_shares=dict(type='int'),
detach=dict(type='bool', default=True),
devices=dict(type='list', elements='str'),
device_read_bps=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='str'),
)),
device_write_bps=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='str'),
)),
device_read_iops=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='int'),
)),
device_write_iops=dict(type='list', elements='dict', options=dict(
path=dict(required=True, type='str'),
rate=dict(required=True, type='int'),
)),
dns_servers=dict(type='list', elements='str'),
dns_opts=dict(type='list', elements='str'),
dns_search_domains=dict(type='list', elements='str'),
domainname=dict(type='str'),
entrypoint=dict(type='list', elements='str'),
env=dict(type='dict'),
env_file=dict(type='path'),
etc_hosts=dict(type='dict'),
exposed_ports=dict(type='list', elements='str', aliases=['exposed', 'expose']),
force_kill=dict(type='bool', default=False, aliases=['forcekill']),
groups=dict(type='list', elements='str'),
healthcheck=dict(type='dict', options=dict(
test=dict(type='raw'),
interval=dict(type='str'),
timeout=dict(type='str'),
start_period=dict(type='str'),
retries=dict(type='int'),
)),
hostname=dict(type='str'),
ignore_image=dict(type='bool', default=False),
image=dict(type='str'),
init=dict(type='bool', default=False),
interactive=dict(type='bool', default=False),
ipc_mode=dict(type='str'),
keep_volumes=dict(type='bool', default=True),
kernel_memory=dict(type='str'),
kill_signal=dict(type='str'),
labels=dict(type='dict'),
links=dict(type='list', elements='str'),
log_driver=dict(type='str'),
log_options=dict(type='dict', aliases=['log_opt']),
mac_address=dict(type='str'),
memory=dict(type='str', default='0'),
memory_reservation=dict(type='str'),
memory_swap=dict(type='str'),
memory_swappiness=dict(type='int'),
mounts=dict(type='list', elements='dict', options=dict(
target=dict(type='str', required=True),
source=dict(type='str'),
type=dict(type='str', choices=['bind', 'volume', 'tmpfs', 'npipe'], default='volume'),
read_only=dict(type='bool'),
consistency=dict(type='str', choices=['default', 'consistent', 'cached', 'delegated']),
propagation=dict(type='str', choices=['private', 'rprivate', 'shared', 'rshared', 'slave', 'rslave']),
no_copy=dict(type='bool'),
labels=dict(type='dict'),
volume_driver=dict(type='str'),
volume_options=dict(type='dict'),
tmpfs_size=dict(type='str'),
tmpfs_mode=dict(type='str'),
)),
name=dict(type='str', required=True),
network_mode=dict(type='str'),
networks=dict(type='list', elements='dict', options=dict(
name=dict(type='str', required=True),
ipv4_address=dict(type='str'),
ipv6_address=dict(type='str'),
aliases=dict(type='list', elements='str'),
links=dict(type='list', elements='str'),
)),
networks_cli_compatible=dict(type='bool'),
oom_killer=dict(type='bool'),
oom_score_adj=dict(type='int'),
output_logs=dict(type='bool', default=False),
paused=dict(type='bool', default=False),
pid_mode=dict(type='str'),
pids_limit=dict(type='int'),
privileged=dict(type='bool', default=False),
published_ports=dict(type='list', elements='str', aliases=['ports']),
pull=dict(type='bool', default=False),
purge_networks=dict(type='bool', default=False),
read_only=dict(type='bool', default=False),
recreate=dict(type='bool', default=False),
restart=dict(type='bool', default=False),
restart_policy=dict(type='str', choices=['no', 'on-failure', 'always', 'unless-stopped']),
restart_retries=dict(type='int'),
runtime=dict(type='str'),
security_opts=dict(type='list', elements='str'),
shm_size=dict(type='str'),
state=dict(type='str', default='started', choices=['absent', 'present', 'started', 'stopped']),
stop_signal=dict(type='str'),
stop_timeout=dict(type='int'),
sysctls=dict(type='dict'),
tmpfs=dict(type='list', elements='str'),
trust_image_content=dict(type='bool', default=False),
tty=dict(type='bool', default=False),
ulimits=dict(type='list', elements='str'),
user=dict(type='str'),
userns_mode=dict(type='str'),
uts=dict(type='str'),
volume_driver=dict(type='str'),
volumes=dict(type='list', elements='str'),
volumes_from=dict(type='list', elements='str'),
working_dir=dict(type='str'),
)
required_if = [
('state', 'present', ['image'])
]
client = AnsibleDockerClientContainer(
argument_spec=argument_spec,
required_if=required_if,
supports_check_mode=True,
min_docker_api_version='1.20',
)
if client.module.params['networks_cli_compatible'] is None and client.module.params['networks']:
client.module.deprecate(
'Please note that docker_container handles networks slightly different than docker CLI. '
'If you specify networks, the default network will still be attached as the first network. '
'(You can specify purge_networks to remove all networks not explicitly listed.) '
'This behavior will change in Ansible 2.12. You can change the behavior now by setting '
'the new `networks_cli_compatible` option to `yes`, and remove this warning by setting '
'it to `no`',
version='2.12'
)
try:
cm = ContainerManager(client)
client.module.exit_json(**sanitize_result(cm.results))
except DockerException as e:
client.fail('An unexpected docker error occurred: {0}'.format(e), exception=traceback.format_exc())
except RequestException as e:
client.fail('An unexpected requests error occurred when docker-py tried to talk to the docker daemon: {0}'.format(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,953 |
docker_container: image change not recognized if new image is not known to docker daemon
|
##### SUMMARY
If there's currently a container with image `repo/name:tag1` running (or created), and I change the `docker_container` image tag to `repo/name:tag2` (where this image hasn't been pulled yet), `docker_container` will not notice that the image changed and will not recreate the container.
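A minimal sketch of the failing scenario, assuming a container created from `repo/name:tag1` already exists and `tag2` is available in the registry but has not been pulled (the container name here is hypothetical):
```yaml
- name: Point the container at a tag the local daemon does not know yet
  docker_container:
    name: mycontainer        # hypothetical name; any existing container
    image: repo/name:tag2    # previously repo/name:tag1; tag2 not pulled
    state: started
```
With the behavior described above, this task reports no change and leaves the old container running instead of recreating it from the new image.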
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_container
##### ANSIBLE VERSION
```paste below
2.9.0
```
|
https://github.com/ansible/ansible/issues/62953
|
https://github.com/ansible/ansible/pull/62971
|
bb9b2e87bc946c7446b2747a0c97427adf94b020
|
41eafc2051d95ac712d1e359920f03056ca523a3
| 2019-09-30T06:10:46Z |
python
| 2019-10-04T19:50:09Z |
test/integration/targets/docker_container/tasks/tests/image-ids.yml
|
---
- name: Registering container name
set_fact:
cname: "{{ cname_prefix ~ '-iid' }}"
- name: Registering container name
set_fact:
cnames: "{{ cnames + [cname] }}"
- name: Pull images
docker_image:
name: "{{ image }}"
pull: true
loop:
- "hello-world:latest"
- "alpine:3.8"
loop_control:
loop_var: image
- name: Get image ID of hello-world and alpine images
docker_image_info:
name:
- "hello-world:latest"
- "alpine:3.8"
register: image_info
- assert:
that:
- image_info.images | length == 2
- name: Print image IDs
debug:
msg: "hello-world: {{ image_info.images[0].Id }}; alpine: {{ image_info.images[1].Id }}"
- name: Create container with hello-world image via ID
docker_container:
image: "{{ image_info.images[0].Id }}"
name: "{{ cname }}"
state: present
force_kill: yes
register: create_1
- name: Create container with hello-world image via ID (idempotent)
docker_container:
image: "{{ image_info.images[0].Id }}"
name: "{{ cname }}"
state: present
force_kill: yes
register: create_2
- name: Create container with alpine image via ID
docker_container:
image: "{{ image_info.images[1].Id }}"
name: "{{ cname }}"
state: present
force_kill: yes
register: create_3
- name: Create container with alpine image via ID (idempotent)
docker_container:
image: "{{ image_info.images[1].Id }}"
name: "{{ cname }}"
state: present
force_kill: yes
register: create_4
- name: Cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- create_1 is changed
- create_2 is not changed
- create_3 is changed
- create_4 is not changed
- name: set Digests
set_fact:
digest_hello_world_2016: 0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
digest_hello_world_2019: 2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
- name: Create container with hello-world image via old digest
docker_container:
image: "hello-world@sha256:{{ digest_hello_world_2016 }}"
name: "{{ cname }}"
state: present
force_kill: yes
register: digest_1
- name: Create container with hello-world image via old digest (idempotent)
docker_container:
image: "hello-world@sha256:{{ digest_hello_world_2016 }}"
name: "{{ cname }}"
state: present
force_kill: yes
register: digest_2
- name: Update container with hello-world image via new digest
docker_container:
image: "hello-world@sha256:{{ digest_hello_world_2019 }}"
name: "{{ cname }}"
state: present
force_kill: yes
register: digest_3
- name: Cleanup
docker_container:
name: "{{ cname }}"
state: absent
force_kill: yes
diff: no
- assert:
that:
- digest_1 is changed
- digest_2 is not changed
- digest_3 is changed
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,923 |
lineinfile is not idempotent
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
lineinfile repeatedly inserts the same line
Reversion of https://github.com/ansible/ansible/pull/49409
##### ISSUE TYPE
- Bug report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lineinfile
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.2
config file = /Users/stuart/git/productmadness/ansible/aws/atlassian/ansible.cfg
configured module search path = ['/Users/stuart/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.4 (default, Jul 9 2019, 18:13:23) [Clang 10.0.1 (clang-1001.0.46.4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST(/Users/stuart/git/productmadness/ansible/aws/atlassian/ansible.cfg) = ['/Users/stuart/git/productmadness/ansible/aws/ansible_hosts.py']
HOST_KEY_CHECKING(/Users/stuart/git/productmadness/ansible/aws/atlassian/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
master is OSX 10.14.5
Target is Centos 7.6.1810
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Use lineinfile with insertafter and regexp.
According to the docs, if regexp matches, insertafter is ignored,
c.f. https://docs.ansible.com/ansible/latest/modules/lineinfile_module.html
viz
>The regular expression to look for in every line of the file.
>For state=present, the pattern to replace if found. Only the last line found will be replaced.
>For state=absent, the pattern of the line(s) to remove.
>If the regular expression is not matched, the line will be added to the file in keeping with insertbefore or insertafter settings.
>When modifying a line the regexp should typically match both the initial state of the line as well as its state after replacement by line to ensure idempotence.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Set JVM opts
lineinfile:
path: /opt/atlassian/crucible/bin/start.sh
insertafter: '^#!/bin/sh'
regexp: ^export FISHEYE_OPTS
firstmatch: true
line: export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect the line to be inserted after #!/bin/sh if it does not exist and to be left alone if it does
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
Every time I run the playbook the line is added again after #!/bin/sh.
https://gist.github.com/mutt13y/49c90c10430f4117d6bbbcb3e6e8a70e
```paste below
cat /opt/atlassian/crucible/bin/start.sh
#!/bin/sh
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
case "`uname`" in
Darwin*) if [ -z "$JAVA_HOME" ] ; then
```
|
https://github.com/ansible/ansible/issues/58923
|
https://github.com/ansible/ansible/pull/63194
|
b7a9d99cefe15b248ebc11162529d16babd28d7f
|
3b18337cac332e24bb2bcdca3f4a4451770efc68
| 2019-07-10T13:06:32Z |
python
| 2019-10-08T14:01:36Z |
changelogs/fragments/63194-lineinfile_insertafter_duplicate.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,923 |
lineinfile is not idempotent
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
lineinfile repeatedly inserts the same line
Reversion of https://github.com/ansible/ansible/pull/49409
##### ISSUE TYPE
- Bug report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lineinfile
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.2
config file = /Users/stuart/git/productmadness/ansible/aws/atlassian/ansible.cfg
configured module search path = ['/Users/stuart/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.4 (default, Jul 9 2019, 18:13:23) [Clang 10.0.1 (clang-1001.0.46.4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST(/Users/stuart/git/productmadness/ansible/aws/atlassian/ansible.cfg) = ['/Users/stuart/git/productmadness/ansible/aws/ansible_hosts.py']
HOST_KEY_CHECKING(/Users/stuart/git/productmadness/ansible/aws/atlassian/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
master is OSX 10.14.5
Target is Centos 7.6.1810
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Use lineinfile with insertafter and regexp.
According to the docs, if regexp matches, insertafter is ignored,
c.f. https://docs.ansible.com/ansible/latest/modules/lineinfile_module.html
viz
>The regular expression to look for in every line of the file.
>For state=present, the pattern to replace if found. Only the last line found will be replaced.
>For state=absent, the pattern of the line(s) to remove.
>If the regular expression is not matched, the line will be added to the file in keeping with insertbefore or insertafter settings.
>When modifying a line the regexp should typically match both the initial state of the line as well as its state after replacement by line to ensure idempotence.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Set JVM opts
lineinfile:
path: /opt/atlassian/crucible/bin/start.sh
insertafter: '^#!/bin/sh'
regexp: ^export FISHEYE_OPTS
firstmatch: true
line: export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect the line to be inserted after #!/bin/sh if it does not exist and to be left alone if it does
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
Every time I run the playbook the line is added again after #!/bin/sh.
https://gist.github.com/mutt13y/49c90c10430f4117d6bbbcb3e6e8a70e
```paste below
cat /opt/atlassian/crucible/bin/start.sh
#!/bin/sh
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
case "`uname`" in
Darwin*) if [ -z "$JAVA_HOME" ] ; then
```
|
https://github.com/ansible/ansible/issues/58923
|
https://github.com/ansible/ansible/pull/63194
|
b7a9d99cefe15b248ebc11162529d16babd28d7f
|
3b18337cac332e24bb2bcdca3f4a4451770efc68
| 2019-07-10T13:06:32Z |
python
| 2019-10-08T14:01:36Z |
lib/ansible/modules/files/lineinfile.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Daniel Hokka Zakrisson <[email protected]>
# Copyright: (c) 2014, Ahti Kitsik <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: lineinfile
short_description: Manage lines in text files
description:
- This module ensures a particular line is in a file, or replace an
existing line using a back-referenced regular expression.
- This is primarily useful when you want to change a single line in a file only.
- See the M(replace) module if you want to change multiple, similar lines
or check M(blockinfile) if you want to insert/update/remove a block of lines in a file.
For other cases, see the M(copy) or M(template) modules.
version_added: "0.7"
options:
path:
description:
- The file to modify.
- Before Ansible 2.3 this option was only usable as I(dest), I(destfile) and I(name).
type: path
required: true
aliases: [ dest, destfile, name ]
regexp:
description:
- The regular expression to look for in every line of the file.
- For C(state=present), the pattern to replace if found. Only the last line found will be replaced.
- For C(state=absent), the pattern of the line(s) to remove.
- If the regular expression is not matched, the line will be
added to the file in keeping with C(insertbefore) or C(insertafter)
settings.
- When modifying a line the regexp should typically match both the initial state of
the line as well as its state after replacement by C(line) to ensure idempotence.
- Uses Python regular expressions. See U(http://docs.python.org/2/library/re.html).
type: str
aliases: [ regex ]
version_added: '1.7'
state:
description:
- Whether the line should be there or not.
type: str
choices: [ absent, present ]
default: present
line:
description:
- The line to insert/replace into the file.
- Required for C(state=present).
- If C(backrefs) is set, may contain backreferences that will get
expanded with the C(regexp) capture groups if the regexp matches.
type: str
aliases: [ value ]
backrefs:
description:
- Used with C(state=present).
- If set, C(line) can contain backreferences (both positional and named)
that will get populated if the C(regexp) matches.
- This parameter changes the operation of the module slightly;
C(insertbefore) and C(insertafter) will be ignored, and if the C(regexp)
does not match anywhere in the file, the file will be left unchanged.
- If the C(regexp) does match, the last matching line will be replaced by
the expanded line parameter.
type: bool
default: no
version_added: "1.1"
insertafter:
description:
- Used with C(state=present).
- If specified, the line will be inserted after the last match of specified regular expression.
- If the first match is required, use C(firstmatch=yes).
- A special value is available; C(EOF) for inserting the line at the end of the file.
- If specified regular expression has no matches, EOF will be used instead.
- If C(insertbefore) is set, default value C(EOF) will be ignored.
- If regular expressions are passed to both C(regexp) and C(insertafter), C(insertafter) is only honored if no match for C(regexp) is found.
- May not be used with C(backrefs) or C(insertbefore).
type: str
choices: [ EOF, '*regex*' ]
default: EOF
insertbefore:
description:
- Used with C(state=present).
- If specified, the line will be inserted before the last match of specified regular expression.
- If the first match is required, use C(firstmatch=yes).
- A special value is available; C(BOF) for inserting the line at the beginning of the file.
- If specified regular expression has no matches, the line will be inserted at the end of the file.
- If regular expressions are passed to both C(regexp) and C(insertbefore), C(insertbefore) is only honored if no match for C(regexp) is found.
- May not be used with C(backrefs) or C(insertafter).
type: str
choices: [ BOF, '*regex*' ]
version_added: "1.1"
create:
description:
- Used with C(state=present).
- If specified, the file will be created if it does not already exist.
- By default it will fail if the file is missing.
type: bool
default: no
backup:
description:
- Create a backup file including the timestamp information so you can
get the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
firstmatch:
description:
- Used with C(insertafter) or C(insertbefore).
- If set, C(insertafter) and C(insertbefore) will work with the first line that matches the given regular expression.
type: bool
default: no
version_added: "2.5"
others:
description:
- All arguments accepted by the M(file) module also work here.
type: str
extends_documentation_fragment:
- files
- validate
notes:
- As of Ansible 2.3, the I(dest) option has been changed to I(path) as default, but I(dest) still works as well.
seealso:
- module: blockinfile
- module: copy
- module: file
- module: replace
- module: template
- module: win_lineinfile
author:
- Daniel Hokka Zakrisson (@dhozac)
- Ahti Kitsik (@ahtik)
'''
EXAMPLES = r'''
# NOTE: Before 2.3, option 'dest', 'destfile' or 'name' was used instead of 'path'
- name: Ensure SELinux is set to enforcing mode
lineinfile:
path: /etc/selinux/config
regexp: '^SELINUX='
line: SELINUX=enforcing
- name: Make sure group wheel is not in the sudoers configuration
lineinfile:
path: /etc/sudoers
state: absent
regexp: '^%wheel'
- name: Replace a localhost entry with our own
lineinfile:
path: /etc/hosts
regexp: '^127\.0\.0\.1'
line: 127.0.0.1 localhost
owner: root
group: root
mode: '0644'
- name: Ensure the default Apache port is 8080
lineinfile:
path: /etc/httpd/conf/httpd.conf
regexp: '^Listen '
insertafter: '^#Listen '
line: Listen 8080
- name: Ensure we have our own comment added to /etc/services
lineinfile:
path: /etc/services
regexp: '^# port for http'
insertbefore: '^www.*80/tcp'
line: '# port for http by default'
- name: Add a line to a file if the file does not exist, without passing regexp
lineinfile:
path: /tmp/testfile
line: 192.168.1.99 foo.lab.net foo
create: yes
# NOTE: Yaml requires escaping backslashes in double quotes but not in single quotes
- name: Ensure the JBoss memory settings are exactly as needed
lineinfile:
path: /opt/jboss-as/bin/standalone.conf
regexp: '^(.*)Xms(\\d+)m(.*)$'
line: '\1Xms${xms}m\3'
backrefs: yes
# NOTE: Fully quoted because of the ': ' on the line. See the Gotchas in the YAML docs.
- name: Validate the sudoers file before saving
lineinfile:
path: /etc/sudoers
state: present
regexp: '^%ADMIN ALL='
line: '%ADMIN ALL=(ALL) NOPASSWD: ALL'
validate: /usr/sbin/visudo -cf %s
'''
import os
import re
import tempfile
# import module snippets
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six import b
from ansible.module_utils._text import to_bytes, to_native
def write_changes(module, b_lines, dest):
tmpfd, tmpfile = tempfile.mkstemp()
with os.fdopen(tmpfd, 'wb') as f:
f.writelines(b_lines)
validate = module.params.get('validate', None)
valid = not validate
if validate:
if "%s" not in validate:
module.fail_json(msg="validate must contain %%s: %s" % (validate))
(rc, out, err) = module.run_command(to_bytes(validate % tmpfile, errors='surrogate_or_strict'))
valid = rc == 0
if rc != 0:
module.fail_json(msg='failed to validate: '
'rc:%s error:%s' % (rc, err))
if valid:
module.atomic_move(tmpfile,
to_native(os.path.realpath(to_bytes(dest, errors='surrogate_or_strict')), errors='surrogate_or_strict'),
unsafe_writes=module.params['unsafe_writes'])
def check_file_attrs(module, changed, message, diff):
file_args = module.load_file_common_arguments(module.params)
if module.set_fs_attributes_if_different(file_args, False, diff=diff):
if changed:
message += " and "
changed = True
message += "ownership, perms or SE linux context changed"
return message, changed
def present(module, dest, regexp, line, insertafter, insertbefore, create,
backup, backrefs, firstmatch):
diff = {'before': '',
'after': '',
'before_header': '%s (content)' % dest,
'after_header': '%s (content)' % dest}
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if not os.path.exists(b_dest):
if not create:
module.fail_json(rc=257, msg='Destination %s does not exist !' % dest)
b_destpath = os.path.dirname(b_dest)
if not os.path.exists(b_destpath) and not module.check_mode:
try:
os.makedirs(b_destpath)
except Exception as e:
module.fail_json(msg='Error creating %s Error code: %s Error description: %s' % (b_destpath, e[0], e[1]))
b_lines = []
else:
with open(b_dest, 'rb') as f:
b_lines = f.readlines()
if module._diff:
diff['before'] = to_native(b('').join(b_lines))
if regexp is not None:
bre_m = re.compile(to_bytes(regexp, errors='surrogate_or_strict'))
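# insertafter/insertbefore are treated as regular expressions unless they
# are the positional keywords (EOF/BOF); bre_ins stays None in that case.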
if insertafter not in (None, 'BOF', 'EOF'):
bre_ins = re.compile(to_bytes(insertafter, errors='surrogate_or_strict'))
elif insertbefore not in (None, 'BOF'):
bre_ins = re.compile(to_bytes(insertbefore, errors='surrogate_or_strict'))
else:
bre_ins = None
# index[0] is the line num where regexp has been found
# index[1] is the line num where insertafter/insertbefore has been found
index = [-1, -1]
m = None
b_line = to_bytes(line, errors='surrogate_or_strict')
for lineno, b_cur_line in enumerate(b_lines):
if regexp is not None:
match_found = bre_m.search(b_cur_line)
else:
match_found = b_line == b_cur_line.rstrip(b('\r\n'))
if match_found:
index[0] = lineno
m = match_found
elif bre_ins is not None and bre_ins.search(b_cur_line):
if insertafter:
# + 1 for the next line
index[1] = lineno + 1
if firstmatch:
break
if insertbefore:
# index[1] for the previous line
index[1] = lineno
if firstmatch:
break
msg = ''
changed = False
b_linesep = to_bytes(os.linesep, errors='surrogate_or_strict')
# Exact line or Regexp matched a line in the file
if index[0] != -1:
if backrefs:
b_new_line = m.expand(b_line)
else:
# Don't do backref expansion if not asked.
b_new_line = b_line
if not b_new_line.endswith(b_linesep):
b_new_line += b_linesep
# If no regexp was given and no line match is found anywhere in the file,
# insert the line appropriately if using insertbefore or insertafter
if regexp is None and m is None:
# Insert lines
if insertafter and insertafter != 'EOF':
# Ensure there is a line separator after the found string
# at the end of the file.
if b_lines and not b_lines[-1][-1:] in (b('\n'), b('\r')):
b_lines[-1] = b_lines[-1] + b_linesep
# If the line to insert after is at the end of the file
# use the appropriate index value.
if len(b_lines) == index[1]:
if b_lines[index[1] - 1].rstrip(b('\r\n')) != b_line:
b_lines.append(b_line + b_linesep)
msg = 'line added'
changed = True
elif b_lines[index[1]].rstrip(b('\r\n')) != b_line:
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
elif insertbefore and insertbefore != 'BOF':
# If the line to insert before is at the beginning of the file
# use the appropriate index value.
if index[1] <= 0:
if b_lines[index[1]].rstrip(b('\r\n')) != b_line:
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
elif b_lines[index[1] - 1].rstrip(b('\r\n')) != b_line:
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
elif b_lines[index[0]] != b_new_line:
b_lines[index[0]] = b_new_line
msg = 'line replaced'
changed = True
elif backrefs:
# Do absolutely nothing, since it's not safe generating the line
# without the regexp matching to populate the backrefs.
pass
# Add it to the beginning of the file
elif insertbefore == 'BOF' or insertafter == 'BOF':
b_lines.insert(0, b_line + b_linesep)
msg = 'line added'
changed = True
# Add it to the end of the file if requested or
# if insertafter/insertbefore didn't match anything
# (so default behaviour is to add at the end)
elif insertafter == 'EOF' or index[1] == -1:
# If the file is not empty then ensure there's a newline before the added line
if b_lines and not b_lines[-1][-1:] in (b('\n'), b('\r')):
b_lines.append(b_linesep)
b_lines.append(b_line + b_linesep)
msg = 'line added'
changed = True
# insert matched, but not the regexp
else:
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
if module._diff:
diff['after'] = to_native(b('').join(b_lines))
backupdest = ""
if changed and not module.check_mode:
if backup and os.path.exists(b_dest):
backupdest = module.backup_local(dest)
write_changes(module, b_lines, dest)
if module.check_mode and not os.path.exists(b_dest):
module.exit_json(changed=changed, msg=msg, backup=backupdest, diff=diff)
attr_diff = {}
msg, changed = check_file_attrs(module, changed, msg, attr_diff)
attr_diff['before_header'] = '%s (file attributes)' % dest
attr_diff['after_header'] = '%s (file attributes)' % dest
difflist = [diff, attr_diff]
module.exit_json(changed=changed, msg=msg, backup=backupdest, diff=difflist)
def absent(module, dest, regexp, line, backup):
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if not os.path.exists(b_dest):
module.exit_json(changed=False, msg="file not present")
msg = ''
diff = {'before': '',
'after': '',
'before_header': '%s (content)' % dest,
'after_header': '%s (content)' % dest}
with open(b_dest, 'rb') as f:
b_lines = f.readlines()
if module._diff:
diff['before'] = to_native(b('').join(b_lines))
if regexp is not None:
bre_c = re.compile(to_bytes(regexp, errors='surrogate_or_strict'))
found = []
b_line = to_bytes(line, errors='surrogate_or_strict')
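# matcher() records every matching line in 'found' and returns False for it,
# so the list comprehension below keeps only the lines that do not match.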
def matcher(b_cur_line):
if regexp is not None:
match_found = bre_c.search(b_cur_line)
else:
match_found = b_line == b_cur_line.rstrip(b('\r\n'))
if match_found:
found.append(b_cur_line)
return not match_found
b_lines = [l for l in b_lines if matcher(l)]
changed = len(found) > 0
if module._diff:
diff['after'] = to_native(b('').join(b_lines))
backupdest = ""
if changed and not module.check_mode:
if backup:
backupdest = module.backup_local(dest)
write_changes(module, b_lines, dest)
if changed:
msg = "%s line(s) removed" % len(found)
attr_diff = {}
msg, changed = check_file_attrs(module, changed, msg, attr_diff)
attr_diff['before_header'] = '%s (file attributes)' % dest
attr_diff['after_header'] = '%s (file attributes)' % dest
difflist = [diff, attr_diff]
module.exit_json(changed=changed, found=len(found), msg=msg, backup=backupdest, diff=difflist)
def main():
module = AnsibleModule(
argument_spec=dict(
path=dict(type='path', required=True, aliases=['dest', 'destfile', 'name']),
state=dict(type='str', default='present', choices=['absent', 'present']),
regexp=dict(type='str', aliases=['regex']),
line=dict(type='str', aliases=['value']),
insertafter=dict(type='str'),
insertbefore=dict(type='str'),
backrefs=dict(type='bool', default=False),
create=dict(type='bool', default=False),
backup=dict(type='bool', default=False),
firstmatch=dict(type='bool', default=False),
validate=dict(type='str'),
),
mutually_exclusive=[['insertbefore', 'insertafter']],
add_file_common_args=True,
supports_check_mode=True,
)
params = module.params
create = params['create']
backup = params['backup']
backrefs = params['backrefs']
path = params['path']
firstmatch = params['firstmatch']
regexp = params['regexp']
line = params['line']
if regexp == '':
module.warn(
"The regular expression is an empty string, which will match every line in the file. "
"This may have unintended consequences, such as replacing the last line in the file rather than appending. "
"If this is desired, use '^' to match every line in the file and avoid this warning.")
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.isdir(b_path):
module.fail_json(rc=256, msg='Path %s is a directory !' % path)
if params['state'] == 'present':
if backrefs and regexp is None:
module.fail_json(msg='regexp is required with backrefs=true')
if line is None:
module.fail_json(msg='line is required with state=present')
# Deal with the insertafter default value manually, to avoid errors
# because of the mutually_exclusive mechanism.
ins_bef, ins_aft = params['insertbefore'], params['insertafter']
if ins_bef is None and ins_aft is None:
ins_aft = 'EOF'
present(module, path, regexp, line,
ins_aft, ins_bef, create, backup, backrefs, firstmatch)
else:
if regexp is None and line is None:
module.fail_json(msg='one of line or regexp is required with state=absent')
absent(module, path, regexp, line, backup)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,923 |
lineinfile is not idempotent
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
lineinfile repeatedly inserts the same line
Reversion of https://github.com/ansible/ansible/pull/49409
##### ISSUE TYPE
- Bug report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lineinfile
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.2
config file = /Users/stuart/git/productmadness/ansible/aws/atlassian/ansible.cfg
configured module search path = ['/Users/stuart/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.4 (default, Jul 9 2019, 18:13:23) [Clang 10.0.1 (clang-1001.0.46.4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST(/Users/stuart/git/productmadness/ansible/aws/atlassian/ansible.cfg) = ['/Users/stuart/git/productmadness/ansible/aws/ansible_hosts.py']
HOST_KEY_CHECKING(/Users/stuart/git/productmadness/ansible/aws/atlassian/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
master is OSX 10.14.5
Target is Centos 7.6.1810
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Use lineinfile with insertafter and regexp.
According to the docs, if regexp matches, insertafter is ignored,
c.f. https://docs.ansible.com/ansible/latest/modules/lineinfile_module.html
viz
>The regular expression to look for in every line of the file.
>For state=present, the pattern to replace if found. Only the last line found will be replaced.
>For state=absent, the pattern of the line(s) to remove.
>If the regular expression is not matched, the line will be added to the file in keeping with insertbefore or insertafter settings.
>When modifying a line the regexp should typically match both the initial state of the line as well as its state after replacement by line to ensure idempotence.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Set JVM opts
lineinfile:
path: /opt/atlassian/crucible/bin/start.sh
insertafter: '^#!/bin/sh'
regexp: ^export FISHEYE_OPTS
firstmatch: true
line: export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect the line to be inserted after #!/bin/sh if it does not exist and to be left alone if it does
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
Every time I run the playbook the line is added again after #!/bin/sh.
https://gist.github.com/mutt13y/49c90c10430f4117d6bbbcb3e6e8a70e
```paste below
cat /opt/atlassian/crucible/bin/start.sh
#!/bin/sh
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
case "`uname`" in
Darwin*) if [ -z "$JAVA_HOME" ] ; then
```
|
https://github.com/ansible/ansible/issues/58923
|
https://github.com/ansible/ansible/pull/63194
|
b7a9d99cefe15b248ebc11162529d16babd28d7f
|
3b18337cac332e24bb2bcdca3f4a4451770efc68
| 2019-07-10T13:06:32Z |
python
| 2019-10-08T14:01:36Z |
test/integration/targets/lineinfile/files/firstmatch.txt
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,923 |
lineinfile is not idempotent
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
lineinfile repeatedly inserts the same line
Reversion of https://github.com/ansible/ansible/pull/49409
##### ISSUE TYPE
- Bug report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lineinfile
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.2
config file = /Users/stuart/git/productmadness/ansible/aws/atlassian/ansible.cfg
configured module search path = ['/Users/stuart/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.4 (default, Jul 9 2019, 18:13:23) [Clang 10.0.1 (clang-1001.0.46.4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST(/Users/stuart/git/productmadness/ansible/aws/atlassian/ansible.cfg) = ['/Users/stuart/git/productmadness/ansible/aws/ansible_hosts.py']
HOST_KEY_CHECKING(/Users/stuart/git/productmadness/ansible/aws/atlassian/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
master is OSX 10.14.5
Target is Centos 7.6.1810
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Use lineinfile with insertafter and regexp.
According to the docs, if regexp matches, insertafter is ignored,
c.f. https://docs.ansible.com/ansible/latest/modules/lineinfile_module.html
viz
>The regular expression to look for in every line of the file.
>For state=present, the pattern to replace if found. Only the last line found will be replaced.
>For state=absent, the pattern of the line(s) to remove.
>If the regular expression is not matched, the line will be added to the file in keeping with insertbefore or insertafter settings.
>When modifying a line the regexp should typically match both the initial state of the line as well as its state after replacement by line to ensure idempotence.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Set JVM opts
lineinfile:
path: /opt/atlassian/crucible/bin/start.sh
insertafter: '^#!/bin/sh'
regexp: ^export FISHEYE_OPTS
firstmatch: true
line: export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect the line to be inserted after #!/bin/sh if it does not exist and to be left alone if it does
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
Every time I run the playbook the line is added again after #!/bin/sh
Gist: https://gist.github.com/mutt13y/49c90c10430f4117d6bbbcb3e6e8a70e
```paste below
cat /opt/atlassian/crucible/bin/start.sh
#!/bin/sh
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
case "`uname`" in
Darwin*) if [ -z "$JAVA_HOME" ] ; then
```
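For reference, a minimal re-run check (a sketch reusing the task above; the `rerun` register name is mine) that should report no change once the module is idempotent:
```yaml
- name: Re-apply the same task to verify idempotence
  lineinfile:
    path: /opt/atlassian/crucible/bin/start.sh
    insertafter: '^#!/bin/sh'
    regexp: ^export FISHEYE_OPTS
    firstmatch: true
    line: export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
  register: rerun

- name: Assert nothing changed on the second run
  assert:
    that:
      - rerun is not changed
```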
|
https://github.com/ansible/ansible/issues/58923
|
https://github.com/ansible/ansible/pull/63194
|
b7a9d99cefe15b248ebc11162529d16babd28d7f
|
3b18337cac332e24bb2bcdca3f4a4451770efc68
| 2019-07-10T13:06:32Z |
python
| 2019-10-08T14:01:36Z |
test/integration/targets/lineinfile/files/test_58923.txt
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 58,923 |
lineinfile is not idempotent
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
lineinfile repeatedly inserts the same line
Reversion of https://github.com/ansible/ansible/pull/49409
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lineinfile
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.2
config file = /Users/stuart/git/productmadness/ansible/aws/atlassian/ansible.cfg
configured module search path = ['/Users/stuart/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.8.2/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.4 (default, Jul 9 2019, 18:13:23) [Clang 10.0.1 (clang-1001.0.46.4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST(/Users/stuart/git/productmadness/ansible/aws/atlassian/ansible.cfg) = ['/Users/stuart/git/productmadness/ansible/aws/ansible_hosts.py']
HOST_KEY_CHECKING(/Users/stuart/git/productmadness/ansible/aws/atlassian/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
master is OSX 10.14.5
Target is Centos 7.6.1810
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Use lineinfile with insertafter and regexp.
According to the docs, if regexp matches, insertafter is ignored.
cf. https://docs.ansible.com/ansible/latest/modules/lineinfile_module.html
viz.
>The regular expression to look for in every line of the file.
>For state=present, the pattern to replace if found. Only the last line found will be replaced.
>For state=absent, the pattern of the line(s) to remove.
>If the regular expression is not matched, the line will be added to the file in keeping with insertbefore or insertafter settings.
>When modifying a line the regexp should typically match both the initial state of the line as well as its state after replacement by line to ensure idempotence.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Set JVM opts
lineinfile:
path: /opt/atlassian/crucible/bin/start.sh
insertafter: '^#!/bin/sh'
regexp: ^export FISHEYE_OPTS
firstmatch: true
line: export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect the line to be inserted after #!/bin/sh if it does not exist and to be left alone if it does
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
Every time I run the playbook the line is added again after #!/bin/sh
Gist: https://gist.github.com/mutt13y/49c90c10430f4117d6bbbcb3e6e8a70e
```paste below
cat /opt/atlassian/crucible/bin/start.sh
#!/bin/sh
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
export FISHEYE_OPTS="-Xmx4096m -Xms2048m"
case "`uname`" in
Darwin*) if [ -z "$JAVA_HOME" ] ; then
```
|
https://github.com/ansible/ansible/issues/58923
|
https://github.com/ansible/ansible/pull/63194
|
b7a9d99cefe15b248ebc11162529d16babd28d7f
|
3b18337cac332e24bb2bcdca3f4a4451770efc68
| 2019-07-10T13:06:32Z |
python
| 2019-10-08T14:01:36Z |
test/integration/targets/lineinfile/tasks/main.yml
|
# test code for the lineinfile module
# (c) 2014, James Cammarata <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
- name: deploy the test file for lineinfile
copy:
src: test.txt
dest: "{{ output_dir }}/test.txt"
register: result
- name: assert that the test file was deployed
assert:
that:
- result is changed
- "result.checksum == '5feac65e442c91f557fc90069ce6efc4d346ab51'"
- "result.state == 'file'"
- name: insert a line at the beginning of the file, and back it up
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: present
line: "New line at the beginning"
insertbefore: "BOF"
backup: yes
register: result1
- name: insert a line at the beginning of the file again
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: present
line: "New line at the beginning"
insertbefore: "BOF"
register: result2
- name: assert that the line was inserted at the head of the file
assert:
that:
- result1 is changed
- result2 is not changed
- result1.msg == 'line added'
- result1.backup != ''
- name: stat the backup file
stat:
path: "{{ result1.backup }}"
register: result
- name: assert the backup file matches the previous hash
assert:
that:
- "result.stat.checksum == '5feac65e442c91f557fc90069ce6efc4d346ab51'"
- name: stat the test after the insert at the head
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test hash is what we expect for the file with the insert at the head
assert:
that:
- "result.stat.checksum == '7eade4042b23b800958fe807b5bfc29f8541ec09'"
- name: insert a line at the end of the file
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: present
line: "New line at the end"
insertafter: "EOF"
register: result
- name: assert that the line was inserted at the end of the file
assert:
that:
- result is changed
- "result.msg == 'line added'"
- name: stat the test after the insert at the end
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test checksum matches after the insert at the end
assert:
that:
- "result.stat.checksum == 'fb57af7dc10a1006061b000f1f04c38e4bef50a9'"
- name: insert a line after the first line
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: present
line: "New line after line 1"
insertafter: "^This is line 1$"
register: result
- name: assert that the line was inserted after the first line
assert:
that:
- result is changed
- "result.msg == 'line added'"
- name: stat the test after insert after the first line
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test checksum matches after the insert after the first line
assert:
that:
- "result.stat.checksum == '5348da605b1bc93dbadf3a16474cdf22ef975bec'"
- name: insert a line before the last line
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: present
line: "New line after line 5"
insertbefore: "^This is line 5$"
register: result
- name: assert that the line was inserted before the last line
assert:
that:
- result is changed
- "result.msg == 'line added'"
- name: stat the test after the insert before the last line
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test checksum matches after the insert before the last line
assert:
that:
- "result.stat.checksum == 'e1cae425403507feea4b55bb30a74decfdd4a23e'"
- name: replace a line with backrefs
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: present
line: "This is line 3"
backrefs: yes
regexp: "^(REF) .* \\1$"
register: result
- name: assert that the line with backrefs was changed
assert:
that:
- result is changed
- "result.msg == 'line replaced'"
- name: stat the test after the backref line was replaced
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test checksum matches after backref line was replaced
assert:
that:
- "result.stat.checksum == '2ccdf45d20298f9eaece73b713648e5489a52444'"
- name: remove the middle line
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: absent
regexp: "^This is line 3$"
register: result
- name: assert that the line was removed
assert:
that:
- result is changed
- "result.msg == '1 line(s) removed'"
- name: stat the test after the middle line was removed
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test checksum matches after the middle line was removed
assert:
that:
- "result.stat.checksum == 'a6ba6865547c19d4c203c38a35e728d6d1942c75'"
- name: run a validation script that succeeds
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: absent
regexp: "^This is line 5$"
validate: "true %s"
register: result
- name: assert that the file validated after removing a line
assert:
that:
- result is changed
- "result.msg == '1 line(s) removed'"
- name: stat the test after the validation succeeded
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test checksum matches after the validation succeeded
assert:
that:
- "result.stat.checksum == '76955a4516a00a38aad8427afc9ee3e361024ba5'"
- name: run a validation script that fails
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: absent
regexp: "^This is line 1$"
validate: "/bin/false %s"
register: result
ignore_errors: yes
- name: assert that the validate failed
assert:
that:
- "result.failed == true"
- name: stat the test after the validation failed
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test checksum matches the previous after the validation failed
assert:
that:
- "result.stat.checksum == '76955a4516a00a38aad8427afc9ee3e361024ba5'"
- name: use create=yes
lineinfile:
dest: "{{ output_dir }}/new_test.txt"
create: yes
insertbefore: BOF
state: present
line: "This is a new file"
register: result
- name: assert that the new file was created
assert:
that:
- result is changed
- "result.msg == 'line added'"
- name: validate that the newly created file exists
stat:
path: "{{ output_dir }}/new_test.txt"
register: result
ignore_errors: yes
- name: assert the newly created test checksum matches
assert:
that:
- "result.stat.checksum == '038f10f9e31202451b093163e81e06fbac0c6f3a'"
# Test EOF in cases where file has no newline at EOF
- name: testnoeof deploy the file for lineinfile
copy:
src: testnoeof.txt
dest: "{{ output_dir }}/testnoeof.txt"
register: result
- name: testnoeof insert a line at the end of the file
lineinfile:
dest: "{{ output_dir }}/testnoeof.txt"
state: present
line: "New line at the end"
insertafter: "EOF"
register: result
- name: testnoeof assert that the line was inserted at the end of the file
assert:
that:
- result is changed
- "result.msg == 'line added'"
- name: insert multiple lines at the end of the file
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: present
line: "This is a line\nwith \\n character"
insertafter: "EOF"
register: result
- name: assert that the multiple lines were inserted
assert:
that:
- result is changed
- "result.msg == 'line added'"
- name: testnoeof stat the no newline EOF test after the insert at the end
stat:
path: "{{ output_dir }}/testnoeof.txt"
register: result
- name: testnoeof assert test checksum matches after the insert at the end
assert:
that:
- "result.stat.checksum == 'f9af7008e3cb67575ce653d094c79cabebf6e523'"
# Test EOF with empty file to make sure no unnecessary newline is added
- name: testempty deploy the testempty file for lineinfile
copy:
src: testempty.txt
dest: "{{ output_dir }}/testempty.txt"
register: result
- name: testempty insert a line at the end of the file
lineinfile:
dest: "{{ output_dir }}/testempty.txt"
state: present
line: "New line at the end"
insertafter: "EOF"
register: result
- name: testempty assert that the line was inserted at the end of the file
assert:
that:
- result is changed
- "result.msg == 'line added'"
- name: testempty stat the test after the insert at the end
stat:
path: "{{ output_dir }}/testempty.txt"
register: result
- name: testempty assert test checksum matches after the insert at the end
assert:
that:
- "result.stat.checksum == 'f440dc65ea9cec3fd496c1479ddf937e1b949412'"
- stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test checksum matches after inserting multiple lines
assert:
that:
- "result.stat.checksum == 'bf5b711f8f0509355aaeb9d0d61e3e82337c1365'"
- name: replace a line with backrefs included in the line
lineinfile:
dest: "{{ output_dir }}/test.txt"
state: present
line: "New \\1 created with the backref"
backrefs: yes
regexp: "^This is (line 4)$"
register: result
- name: assert that the line with backrefs was changed
assert:
that:
- result is changed
- "result.msg == 'line replaced'"
- name: stat the test after the backref line was replaced
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: assert test checksum matches after backref line was replaced
assert:
that:
- "result.stat.checksum == '04b7a54d0fb233a4e26c9e625325bb4874841b3c'"
###################################################################
# issue 8535
- name: create a new file for testing quoting issues
file:
dest: "{{ output_dir }}/test_quoting.txt"
state: touch
register: result
- name: assert the new file was created
assert:
that:
- result is changed
- name: use with_items to add code-like strings to the quoting txt file
lineinfile:
dest: "{{ output_dir }}/test_quoting.txt"
line: "{{ item }}"
insertbefore: BOF
with_items:
- "'foo'"
- "dotenv.load();"
- "var dotenv = require('dotenv');"
register: result
- name: assert the quote test file was modified correctly
assert:
that:
- result.results|length == 3
- result.results[0] is changed
- result.results[0].item == "'foo'"
- result.results[1] is changed
- result.results[1].item == "dotenv.load();"
- result.results[2] is changed
- result.results[2].item == "var dotenv = require('dotenv');"
- name: stat the quote test file
stat:
path: "{{ output_dir }}/test_quoting.txt"
register: result
- name: assert test checksum matches after the code-like strings were added
assert:
that:
- "result.stat.checksum == '7dc3cb033c3971e73af0eaed6623d4e71e5743f1'"
- name: insert a line into the quoted file with a single quote
lineinfile:
dest: "{{ output_dir }}/test_quoting.txt"
line: "import g'"
register: result
- name: assert that the quoted file was changed
assert:
that:
- result is changed
- name: stat the quote test file
stat:
path: "{{ output_dir }}/test_quoting.txt"
register: result
- name: assert test checksum matches after the single-quote line was inserted
assert:
that:
- "result.stat.checksum == '73b271c2cc1cef5663713bc0f00444b4bf9f4543'"
- name: insert a line into the quoted file with many double quotation strings
lineinfile:
dest: "{{ output_dir }}/test_quoting.txt"
line: "\"quote\" and \"unquote\""
register: result
- name: assert that the quoted file was changed
assert:
that:
- result is changed
- name: stat the quote test file
stat:
path: "{{ output_dir }}/test_quoting.txt"
register: result
- name: assert test checksum matches after the double-quoted line was inserted
assert:
that:
- "result.stat.checksum == 'b10ab2a3c3b6492680c8d0b1d6f35aa6b8f9e731'"
###################################################################
# Issue 28721
- name: Deploy the testmultiple file
copy:
src: testmultiple.txt
dest: "{{ output_dir }}/testmultiple.txt"
register: result
- name: Assert that the testmultiple file was deployed
assert:
that:
- result is changed
- result.checksum == '3e0090a34fb641f3c01e9011546ff586260ea0ea'
- result.state == 'file'
# Test insertafter
- name: Write the same line to a file inserted after different lines
lineinfile:
path: "{{ output_dir }}/testmultiple.txt"
insertafter: "{{ item.regex }}"
line: "{{ item.replace }}"
register: _multitest_1
with_items: "{{ test_regexp }}"
- name: Assert that the line is added once only
assert:
that:
- _multitest_1.results.0 is changed
- _multitest_1.results.1 is not changed
- _multitest_1.results.2 is not changed
- _multitest_1.results.3 is not changed
- name: Do the same thing again to check for changes
lineinfile:
path: "{{ output_dir }}/testmultiple.txt"
insertafter: "{{ item.regex }}"
line: "{{ item.replace }}"
register: _multitest_2
with_items: "{{ test_regexp }}"
- name: Assert that the line is not added anymore
assert:
that:
- _multitest_2.results.0 is not changed
- _multitest_2.results.1 is not changed
- _multitest_2.results.2 is not changed
- _multitest_2.results.3 is not changed
- name: Stat the insertafter file
stat:
path: "{{ output_dir }}/testmultiple.txt"
register: result
- name: Assert that the insertafter file matches expected checksum
assert:
that:
- result.stat.checksum == 'c6733b6c53ddd0e11e6ba39daa556ef8f4840761'
# Test insertbefore
- name: Deploy the testmultiple file
copy:
src: testmultiple.txt
dest: "{{ output_dir }}/testmultiple.txt"
register: result
- name: Assert that the testmultiple file was deployed
assert:
that:
- result is changed
- result.checksum == '3e0090a34fb641f3c01e9011546ff586260ea0ea'
- result.state == 'file'
- name: Write the same line to a file inserted before different lines
lineinfile:
path: "{{ output_dir }}/testmultiple.txt"
insertbefore: "{{ item.regex }}"
line: "{{ item.replace }}"
register: _multitest_3
with_items: "{{ test_regexp }}"
- name: Assert that the line is added once only
assert:
that:
- _multitest_3.results.0 is changed
- _multitest_3.results.1 is not changed
- _multitest_3.results.2 is not changed
- _multitest_3.results.3 is not changed
- name: Do the same thing again to check for changes
lineinfile:
path: "{{ output_dir }}/testmultiple.txt"
insertbefore: "{{ item.regex }}"
line: "{{ item.replace }}"
register: _multitest_4
with_items: "{{ test_regexp }}"
- name: Assert that the line is not added anymore
assert:
that:
- _multitest_4.results.0 is not changed
- _multitest_4.results.1 is not changed
- _multitest_4.results.2 is not changed
- _multitest_4.results.3 is not changed
- name: Stat the insertbefore file
stat:
path: "{{ output_dir }}/testmultiple.txt"
register: result
- name: Assert that the insertbefore file matches expected checksum
assert:
that:
- result.stat.checksum == '5d298651fbc377b45257da10308a9dc2fe1f8be5'
###################################################################
# Issue 36156
# Test insertbefore and insertafter with regexp
- name: Deploy the test.conf file
copy:
src: test.conf
dest: "{{ output_dir }}/test.conf"
register: result
- name: Assert that the test.conf file was deployed
assert:
that:
- result is changed
- result.checksum == '6037f13e419b132eb3fd20a89e60c6c87a6add38'
- result.state == 'file'
# Test insertafter
- name: Insert lines after with regexp
lineinfile:
path: "{{ output_dir }}/test.conf"
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
insertafter: "{{ item.after }}"
with_items: "{{ test_befaf_regexp }}"
register: _multitest_5
- name: Do the same thing again and check for changes
lineinfile:
path: "{{ output_dir }}/test.conf"
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
insertafter: "{{ item.after }}"
with_items: "{{ test_befaf_regexp }}"
register: _multitest_6
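# with_together zips the _multitest_5 and _multitest_6 result lists pairwise,
# so in each iteration item.0 is the first-run result and item.1 the re-run result.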
- name: Assert that the file was changed the first time but not the second time
assert:
that:
- item.0 is changed
- item.1 is not changed
with_together:
- "{{ _multitest_5.results }}"
- "{{ _multitest_6.results }}"
- name: Stat the file
stat:
path: "{{ output_dir }}/test.conf"
register: result
- name: Assert that the file contents match what is expected
assert:
that:
- result.stat.checksum == '06e2c456e5028dd7bcd0b117b5927a1139458c82'
- name: Do the same thing a third time without regexp and check for changes
lineinfile:
path: "{{ output_dir }}/test.conf"
line: "{{ item.line }}"
insertafter: "{{ item.after }}"
with_items: "{{ test_befaf_regexp }}"
register: _multitest_7
- name: Stat the file
stat:
path: "{{ output_dir }}/test.conf"
register: result
- name: Assert that the file was not changed when no regexp was provided
assert:
that:
- item is not changed
with_items: "{{ _multitest_7.results }}"
- name: Stat the file
stat:
path: "{{ output_dir }}/test.conf"
register: result
- name: Assert that the file contents match what is expected
assert:
that:
- result.stat.checksum == '06e2c456e5028dd7bcd0b117b5927a1139458c82'
# Test insertbefore
- name: Deploy the test.conf file
copy:
src: test.conf
dest: "{{ output_dir }}/test.conf"
register: result
- name: Assert that the test.conf file was deployed
assert:
that:
- result is changed
- result.checksum == '6037f13e419b132eb3fd20a89e60c6c87a6add38'
- result.state == 'file'
- name: Insert lines before with regexp
lineinfile:
path: "{{ output_dir }}/test.conf"
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
insertbefore: "{{ item.before }}"
with_items: "{{ test_befaf_regexp }}"
register: _multitest_8
- name: Do the same thing again and check for changes
lineinfile:
path: "{{ output_dir }}/test.conf"
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
insertbefore: "{{ item.before }}"
with_items: "{{ test_befaf_regexp }}"
register: _multitest_9
- name: Assert that the file was changed the first time but not the second time
assert:
that:
- item.0 is changed
- item.1 is not changed
with_together:
- "{{ _multitest_8.results }}"
- "{{ _multitest_9.results }}"
- name: Stat the file
stat:
path: "{{ output_dir }}/test.conf"
register: result
- name: Assert that the file contents match what is expected
assert:
that:
- result.stat.checksum == 'c3be9438a07c44d4c256cebfcdbca15a15b1db91'
- name: Do the same thing a third time without regexp and check for changes
lineinfile:
path: "{{ output_dir }}/test.conf"
line: "{{ item.line }}"
insertbefore: "{{ item.before }}"
with_items: "{{ test_befaf_regexp }}"
register: _multitest_10
- name: Stat the file
stat:
path: "{{ output_dir }}/test.conf"
register: result
- name: Assert that the file was not changed when no regexp was provided
assert:
that:
- item is not changed
with_items: "{{ _multitest_10.results }}"
- name: Stat the file
stat:
path: "{{ output_dir }}/test.conf"
register: result
- name: Assert that the file contents match what is expected
assert:
that:
- result.stat.checksum == 'c3be9438a07c44d4c256cebfcdbca15a15b1db91'
- name: Copy empty file to test with insertbefore
copy:
src: testempty.txt
dest: "{{ output_dir }}/testempty.txt"
- name: Add a line to empty file with insertbefore
lineinfile:
path: "{{ output_dir }}/testempty.txt"
line: top
insertbefore: '^not in the file$'
register: oneline_insbefore_test1
- name: Add a line to file with only one line using insertbefore
lineinfile:
path: "{{ output_dir }}/testempty.txt"
line: top
insertbefore: '^not in the file$'
register: oneline_insbefore_test2
- name: Stat the file
stat:
path: "{{ output_dir }}/testempty.txt"
register: oneline_insbefore_file
- name: Assert that insertbefore worked properly with a one-line file
assert:
that:
- oneline_insbefore_test1 is changed
- oneline_insbefore_test2 is not changed
- oneline_insbefore_file.stat.checksum == '4dca56d05a21f0d018cd311f43e134e4501cf6d9'
###################################################################
# Issue 29443
# When using an empty regexp, replace the last line (since it matches every line)
# but also provide a warning.
- name: Deploy the test file for lineinfile
copy:
src: test.txt
dest: "{{ output_dir }}/test.txt"
register: result
- name: Assert that the test file was deployed
assert:
that:
- result is changed
- result.checksum == '5feac65e442c91f557fc90069ce6efc4d346ab51'
- result.state == 'file'
- name: Insert a line in the file using an empty string as a regular expression
lineinfile:
path: "{{ output_dir }}/test.txt"
regexp: ''
line: This is line 6
register: insert_empty_regexp
- name: Stat the file
stat:
path: "{{ output_dir }}/test.txt"
register: result
- name: Assert that the file contents match what is expected and a warning was displayed
assert:
that:
- insert_empty_regexp is changed
- warning_message in insert_empty_regexp.warnings
- result.stat.checksum == '23555a98ceaa88756b4c7c7bba49d9f86eed868f'
vars:
warning_message: >-
The regular expression is an empty string, which will match every line in the file.
This may have unintended consequences, such as replacing the last line in the file rather than appending.
If this is desired, use '^' to match every line in the file and avoid this warning.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,225 |
[ovirt_vm] Provide possibility to change Display keyboard-layout
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
I would like to be able to set the keyboard layout for the graphical console through Ansible, as is already possible via the API and UI:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/rest_api_guide/types#types-display
==> keyboard_layout
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ovirt_vm
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
In countries like Germany, I would like to set the keyboard layout to German. This is possible via the API and UI, but not with Ansible.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
ovirt_vm:
graphical_console:
protocol:
- vnc
keyboard_layout: de_DE
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/63225
|
https://github.com/ansible/ansible/pull/63230
|
67d9cc45bde602b43c93ede4fe0bf5bb2b51a184
|
35c3d831fb81fc4f7bddd2d5251d046e8c8a7727
| 2019-10-08T11:00:00Z |
python
| 2019-10-08T15:12:14Z |
lib/ansible/modules/cloud/ovirt/ovirt_vm.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ovirt_vm
short_description: Module to manage Virtual Machines in oVirt/RHV
version_added: "2.2"
author:
- Ondra Machacek (@machacekondra)
description:
- This module manages the whole lifecycle of a Virtual Machine (VM) in oVirt/RHV.
- Since a VM can hold many states in oVirt/RHV, see the notes for how the states of the VM are handled.
options:
name:
description:
- Name of the Virtual Machine to manage.
- If the VM doesn't exist, C(name) is required. Otherwise C(id) or C(name) can be used.
id:
description:
- ID of the Virtual Machine to manage.
state:
description:
- Should the Virtual Machine be running/stopped/present/absent/suspended/next_run/registered/exported/reboot.
When C(state) is I(registered) and the unregistered VM's name
belongs to a VM already registered in the engine in the same DC,
registration of the unregistered VM fails.
- I(present) state will create/update the VM and won't change its state if it already exists.
- I(running) state will create/update VM and start it.
- I(next_run) state updates the VM and if the VM has next run configuration it will be rebooted.
- Please check I(notes) for a more detailed description of states.
- I(exported) state will export the VM to export domain or as OVA.
- I(registered) is supported since 2.4.
- I(reboot) is supported since 2.10; the virtual machine is rebooted only if it's in up state.
choices: [ absent, next_run, present, registered, running, stopped, suspended, exported, reboot ]
default: present
cluster:
description:
- Name of the cluster, where Virtual Machine should be created.
- Required if creating VM.
allow_partial_import:
description:
- Boolean indication whether to allow partial registration of Virtual Machine when C(state) is registered.
type: bool
version_added: "2.4"
vnic_profile_mappings:
description:
- "Mapper which maps an external virtual NIC profile to one that exists in the engine when C(state) is registered.
vnic_profile is described by the following dictionary:"
suboptions:
source_network_name:
description:
- The network name of the source network.
source_profile_name:
description:
- The profile name related to the source network.
target_profile_id:
description:
- The id of the target profile id to be mapped to in the engine.
version_added: "2.5"
cluster_mappings:
description:
- "Mapper which maps cluster name between VM's OVF and the destination cluster this VM should be registered to,
relevant when C(state) is registered.
Cluster mapping is described by the following dictionary:"
suboptions:
source_name:
description:
- The name of the source cluster.
dest_name:
description:
- The name of the destination cluster.
version_added: "2.5"
role_mappings:
description:
- "Mapper which maps role name between VM's OVF and the destination role this VM should be registered to,
relevant when C(state) is registered.
Role mapping is described by the following dictionary:"
suboptions:
source_name:
description:
- The name of the source role.
dest_name:
description:
- The name of the destination role.
version_added: "2.5"
domain_mappings:
description:
- "Mapper which maps aaa domain name between VM's OVF and the destination aaa domain this VM should be registered to,
relevant when C(state) is registered.
The aaa domain mapping is described by the following dictionary:"
suboptions:
source_name:
description:
- The name of the source aaa domain.
dest_name:
description:
- The name of the destination aaa domain.
version_added: "2.5"
affinity_group_mappings:
description:
- "Mapper which maps affinity name between VM's OVF and the destination affinity this VM should be registered to,
relevant when C(state) is registered."
version_added: "2.5"
affinity_label_mappings:
description:
- "Mapper which maps affinity label name between VM's OVF and the destination label this VM should be registered to,
relevant when C(state) is registered."
version_added: "2.5"
lun_mappings:
description:
- "Mapper which maps lun between VM's OVF and the destination lun this VM should contain, relevant when C(state) is registered.
lun_mappings is described by the following dictionary:
- C(logical_unit_id): The logical unit number to identify a logical unit,
- C(logical_unit_port): The port being used to connect with the LUN disk.
- C(logical_unit_portal): The portal being used to connect with the LUN disk.
- C(logical_unit_address): The address of the block storage host.
- C(logical_unit_target): The iSCSI specification located on an iSCSI server
- C(logical_unit_username): Username to be used to connect to the block storage host.
- C(logical_unit_password): Password to be used to connect to the block storage host.
- C(storage_type): The storage type which the LUN reside on (iscsi or fcp)"
version_added: "2.5"
reassign_bad_macs:
description:
- "Boolean indication whether to reassign bad macs when C(state) is registered."
type: bool
version_added: "2.5"
template:
description:
- Name of the template, which should be used to create Virtual Machine.
- Required if creating VM.
- If template is not specified and VM doesn't exist, VM will be created from I(Blank) template.
template_version:
description:
- Version number of the template to be used for VM.
- By default the latest available version of the template is used.
version_added: "2.3"
use_latest_template_version:
description:
- Specify if latest template version should be used, when running a stateless VM.
- If this parameter is set to I(yes) stateless VM is created.
type: bool
version_added: "2.3"
storage_domain:
description:
- Name of the storage domain where all template disks should be created.
- This parameter is considered only when C(template) is provided.
- IMPORTANT - This parameter is not idempotent; if the VM exists and you specify a different storage domain,
the disk won't move.
version_added: "2.4"
disk_format:
description:
- Specify format of the disk.
- If C(cow) format is used, the disk will be created as sparse, so space will be allocated for the volume as needed, also known as I(thin provision).
- If C(raw) format is used, disk storage will be allocated right away, also known as I(preallocated).
- Note that this option isn't idempotent as it's not currently possible to change format of the disk via API.
- This parameter is considered only when C(template) and C(storage domain) is provided.
choices: [ cow, raw ]
default: cow
version_added: "2.4"
memory:
description:
- Amount of memory of the Virtual Machine. Prefix uses IEC 60027-2 standard (for example 1GiB, 1024MiB).
- Default value is set by engine.
memory_guaranteed:
description:
- Amount of minimal guaranteed memory of the Virtual Machine.
Prefix uses IEC 60027-2 standard (for example 1GiB, 1024MiB).
- C(memory_guaranteed) parameter can't be lower than C(memory) parameter.
- Default value is set by engine.
memory_max:
description:
- Upper bound of virtual machine memory up to which memory hot-plug can be performed.
Prefix uses IEC 60027-2 standard (for example 1GiB, 1024MiB).
- Default value is set by engine.
version_added: "2.5"
cpu_shares:
description:
- Set a CPU shares for this Virtual Machine.
- Default value is set by oVirt/RHV engine.
cpu_cores:
description:
- Number of virtual CPU cores of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
cpu_sockets:
description:
- Number of virtual CPU sockets of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
cpu_threads:
description:
- Number of virtual CPU threads of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
version_added: "2.5"
type:
description:
- Type of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
- I(high_performance) is supported since Ansible 2.5 and oVirt/RHV 4.2.
choices: [ desktop, server, high_performance ]
quota_id:
description:
- "Virtual Machine quota ID to be used for disk. By default quota is chosen by oVirt/RHV engine."
version_added: "2.5"
operating_system:
description:
- Operating system of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
- "Possible values: debian_7, freebsd, freebsdx64, other, other_linux,
other_linux_ppc64, other_ppc64, rhel_3, rhel_4, rhel_4x64, rhel_5, rhel_5x64,
rhel_6, rhel_6x64, rhel_6_ppc64, rhel_7x64, rhel_7_ppc64, sles_11, sles_11_ppc64,
ubuntu_12_04, ubuntu_12_10, ubuntu_13_04, ubuntu_13_10, ubuntu_14_04, ubuntu_14_04_ppc64,
windows_10, windows_10x64, windows_2003, windows_2003x64, windows_2008, windows_2008x64,
windows_2008r2x64, windows_2008R2x64, windows_2012x64, windows_2012R2x64, windows_7,
windows_7x64, windows_8, windows_8x64, windows_xp"
boot_devices:
description:
- List of boot devices which should be used to boot. For example C([ cdrom, hd ]).
- Default value is set by oVirt/RHV engine.
choices: [ cdrom, hd, network ]
boot_menu:
description:
- "I(True) enable menu to select boot device, I(False) to disable it. By default is chosen by oVirt/RHV engine."
type: bool
version_added: "2.5"
usb_support:
description:
- "I(True) enable USB support, I(False) to disable it. By default is chosen by oVirt/RHV engine."
type: bool
version_added: "2.5"
serial_console:
description:
- "I(True) enable VirtIO serial console, I(False) to disable it. By default is chosen by oVirt/RHV engine."
type: bool
version_added: "2.5"
sso:
description:
- "I(True) enable Single Sign On by Guest Agent, I(False) to disable it. By default is chosen by oVirt/RHV engine."
type: bool
version_added: "2.5"
host:
description:
- Specify host where Virtual Machine should be running. By default the host is chosen by engine scheduler.
- This parameter is used only when C(state) is I(running) or I(present).
high_availability:
description:
- If I(yes) Virtual Machine will be set as highly available.
- If I(no) Virtual Machine won't be set as highly available.
- If no value is passed, default value is set by oVirt/RHV engine.
type: bool
high_availability_priority:
description:
- Indicates the priority of the virtual machine inside the run and migration queues.
Virtual machines with higher priorities will be started and migrated before virtual machines with lower
priorities. The value is an integer between 0 and 100. The higher the value, the higher the priority.
- If no value is passed, default value is set by oVirt/RHV engine.
version_added: "2.5"
lease:
description:
- Name of the storage domain this virtual machine lease resides on. Pass an empty string to remove the lease.
- NOTE - Supported since oVirt 4.1.
version_added: "2.4"
custom_compatibility_version:
description:
- "Enables a virtual machine to be customized to its own compatibility version. If
'C(custom_compatibility_version)' is set, it overrides the cluster's compatibility version
for this particular virtual machine."
version_added: "2.7"
host_devices:
description:
- Single Root I/O Virtualization - technology that allows a single device to expose multiple endpoints that can be passed to VMs.
- host_devices is a list of dictionaries, each with the name and state of a device.
version_added: "2.7"
delete_protected:
description:
- If I(yes) Virtual Machine will be set as delete protected.
- If I(no) Virtual Machine won't be set as delete protected.
- If no value is passed, default value is set by oVirt/RHV engine.
type: bool
stateless:
description:
- If I(yes) Virtual Machine will be set as stateless.
- If I(no) Virtual Machine will be unset as stateless.
- If no value is passed, default value is set by oVirt/RHV engine.
type: bool
clone:
description:
- If I(yes) then the disks of the created virtual machine will be cloned and independent of the template.
- This parameter is used only when C(state) is I(running) or I(present) and VM didn't exist before.
type: bool
default: 'no'
clone_permissions:
description:
- If I(yes) then the permissions of the template (only the direct ones, not the inherited ones)
will be copied to the created virtual machine.
- This parameter is used only when C(state) is I(running) or I(present) and VM didn't exist before.
type: bool
default: 'no'
cd_iso:
description:
- ISO file from ISO storage domain which should be attached to Virtual Machine.
- If you pass empty string the CD will be ejected from VM.
- If used with C(state) I(running) or I(present) and VM is running the CD will be attached to VM.
- If used with C(state) I(running) or I(present) and VM is down the CD will be attached to VM persistently.
force:
description:
- Please check the I(Synopsis) for a more detailed description of the force parameter; it can behave differently
in different situations.
type: bool
default: 'no'
nics:
description:
- List of NICs, which should be attached to Virtual Machine. NIC is described by following dictionary.
suboptions:
name:
description:
- Name of the NIC.
profile_name:
description:
- Profile name where NIC should be attached.
interface:
description:
- Type of the network interface.
choices: ['virtio', 'e1000', 'rtl8139']
default: 'virtio'
mac_address:
description:
- Custom MAC address of the network interface, by default it's obtained from MAC pool.
- "NOTE - This parameter is used only when C(state) is I(running) or I(present) and is able to only create NICs.
To manage NICs of the VM in more depth please use M(ovirt_nic) module instead."
disks:
description:
- List of disks, which should be attached to Virtual Machine. Disk is described by following dictionary.
suboptions:
name:
description:
- Name of the disk. Either C(name) or C(id) is required.
id:
description:
- ID of the disk. Either C(name) or C(id) is required.
interface:
description:
- Interface of the disk.
choices: ['virtio', 'ide']
default: 'virtio'
bootable:
description:
- I(True) if the disk should be bootable, default is non bootable.
type: bool
activate:
description:
- I(True) if the disk should be activated, default is activated.
- "NOTE - This parameter is used only when C(state) is I(running) or I(present) and is able to only attach disks.
To manage disks of the VM in more depth please use M(ovirt_disk) module instead."
type: bool
sysprep:
description:
- Dictionary with values for Windows Virtual Machine initialization using sysprep.
suboptions:
host_name:
description:
- Hostname to be set to Virtual Machine when deployed.
active_directory_ou:
description:
- Active Directory Organizational Unit, to be used for login of user.
org_name:
description:
- Organization name to be set to Windows Virtual Machine.
domain:
description:
- Domain to be set to Windows Virtual Machine.
timezone:
description:
- Timezone to be set to Windows Virtual Machine.
ui_language:
description:
- UI language of the Windows Virtual Machine.
system_locale:
description:
- System localization of the Windows Virtual Machine.
input_locale:
description:
- Input localization of the Windows Virtual Machine.
windows_license_key:
description:
- License key to be set to Windows Virtual Machine.
user_name:
description:
- Username to be used for set password to Windows Virtual Machine.
root_password:
description:
- Password to be set for username to Windows Virtual Machine.
cloud_init:
description:
- Dictionary with values for Unix-like Virtual Machine initialization using cloud init.
suboptions:
host_name:
description:
- Hostname to be set to Virtual Machine when deployed.
timezone:
description:
- Timezone to be set to Virtual Machine when deployed.
user_name:
description:
- Username to be used to set password to Virtual Machine when deployed.
root_password:
description:
- Password to be set for user specified by C(user_name) parameter.
authorized_ssh_keys:
description:
- Use this SSH keys to login to Virtual Machine.
regenerate_ssh_keys:
description:
- If I(True) SSH keys will be regenerated on Virtual Machine.
type: bool
custom_script:
description:
- Cloud-init script which will be executed on Virtual Machine when deployed.
- This is appended to the end of the cloud-init script generated by any other options.
dns_servers:
description:
- DNS servers to be configured on Virtual Machine.
dns_search:
description:
- DNS search domains to be configured on Virtual Machine.
nic_boot_protocol:
description:
- Set boot protocol of the network interface of Virtual Machine.
choices: ['none', 'dhcp', 'static']
nic_ip_address:
description:
- If boot protocol is static, set this IP address to network interface of Virtual Machine.
nic_netmask:
description:
- If boot protocol is static, set this netmask to network interface of Virtual Machine.
nic_gateway:
description:
- If boot protocol is static, set this gateway to network interface of Virtual Machine.
nic_boot_protocol_v6:
description:
- Set boot protocol of the network interface of Virtual Machine.
choices: ['none', 'dhcp', 'static']
version_added: "2.9"
nic_ip_address_v6:
description:
- If boot protocol is static, set this IP address to network interface of Virtual Machine.
version_added: "2.9"
nic_netmask_v6:
description:
- If boot protocol is static, set this netmask to network interface of Virtual Machine.
version_added: "2.9"
nic_gateway_v6:
description:
- If boot protocol is static, set this gateway to network interface of Virtual Machine.
- For IPv6 addresses the value is an integer in the range of 0-128, which represents the subnet prefix.
version_added: "2.9"
nic_name:
description:
- Set name to network interface of Virtual Machine.
nic_on_boot:
description:
- If I(True) network interface will be set to start on boot.
type: bool
cloud_init_nics:
description:
- List of dictionaries representing network interfaces to be setup by cloud init.
- This option is used when the user needs to set up more network interfaces via cloud init.
- If one network interface is enough, user should use C(cloud_init) I(nic_*) parameters. C(cloud_init) I(nic_*) parameters
are merged with C(cloud_init_nics) parameters.
suboptions:
nic_boot_protocol:
description:
- Set boot protocol of the network interface of Virtual Machine. Can be one of C(none), C(dhcp) or C(static).
nic_ip_address:
description:
- If boot protocol is static, set this IP address to network interface of Virtual Machine.
nic_netmask:
description:
- If boot protocol is static, set this netmask to network interface of Virtual Machine.
nic_gateway:
description:
- If boot protocol is static, set this gateway to network interface of Virtual Machine.
nic_boot_protocol_v6:
description:
- Set boot protocol of the network interface of Virtual Machine. Can be one of C(none), C(dhcp) or C(static).
version_added: "2.9"
nic_ip_address_v6:
description:
- If boot protocol is static, set this IP address to network interface of Virtual Machine.
version_added: "2.9"
nic_netmask_v6:
description:
- If boot protocol is static, set this netmask to network interface of Virtual Machine.
version_added: "2.9"
nic_gateway_v6:
description:
- If boot protocol is static, set this gateway to network interface of Virtual Machine.
- For IPv6 addresses the value is an integer in the range of 0-128, which represents the subnet prefix.
version_added: "2.9"
nic_name:
description:
- Set name to network interface of Virtual Machine.
nic_on_boot:
description:
- If I(True) network interface will be set to start on boot.
type: bool
version_added: "2.3"
cloud_init_persist:
description:
- "If I(yes) the C(cloud_init) or C(sysprep) parameters will be saved for the virtual machine
and the virtual machine won't be started as run-once."
type: bool
version_added: "2.5"
aliases: [ 'sysprep_persist' ]
default: 'no'
kernel_params_persist:
description:
- "If I(true) C(kernel_params), C(initrd_path) and C(kernel_path) will persist in virtual machine configuration,
if I(False) it will be used for run once."
- Usable with oVirt 4.3 and lower; removed in oVirt 4.4.
type: bool
version_added: "2.8"
kernel_path:
description:
- Path to a kernel image used to boot the virtual machine.
- Kernel image must be stored on either the ISO domain or on the host's storage.
- Usable with oVirt 4.3 and lower; removed in oVirt 4.4.
version_added: "2.3"
initrd_path:
description:
- Path to an initial ramdisk to be used with the kernel specified by C(kernel_path) option.
- Ramdisk image must be stored on either the ISO domain or on the host's storage.
- Usable with oVirt 4.3 and lower; removed in oVirt 4.4.
version_added: "2.3"
kernel_params:
description:
- Kernel command line parameters (formatted as string) to be used with the kernel specified by C(kernel_path) option.
- Usable with oVirt 4.3 and lower; removed in oVirt 4.4.
version_added: "2.3"
instance_type:
description:
- Name of virtual machine's hardware configuration.
- By default no instance type is used.
version_added: "2.3"
description:
description:
- Description of the Virtual Machine.
version_added: "2.3"
comment:
description:
- Comment of the Virtual Machine.
version_added: "2.3"
timezone:
description:
- Sets time zone offset of the guest hardware clock.
- For example C(Etc/GMT)
version_added: "2.3"
serial_policy:
description:
- Specify a serial number policy for the Virtual Machine.
- Following options are supported.
- C(vm) - Sets the Virtual Machine's UUID as its serial number.
- C(host) - Sets the host's UUID as the Virtual Machine's serial number.
- C(custom) - Allows you to specify a custom serial number in C(serial_policy_value).
choices: ['vm', 'host', 'custom']
version_added: "2.3"
serial_policy_value:
description:
- Allows you to specify a custom serial number.
- This parameter is used only when C(serial_policy) is I(custom).
version_added: "2.3"
vmware:
description:
- Dictionary of values to be used to connect to VMware and import
a virtual machine to oVirt.
suboptions:
username:
description:
- The username to authenticate against the VMware.
password:
description:
- The password to authenticate against the VMware.
url:
description:
- The URL to be passed to the I(virt-v2v) tool for conversion.
- For example I(vpx://wmware_user@vcenter-host/DataCenter/Cluster/esxi-host?no_verify=1)
drivers_iso:
description:
- The name of the ISO containing drivers that can be used during the I(virt-v2v) conversion process.
sparse:
description:
- Specifies the disk allocation policy of the resulting virtual machine. I(true) for sparse, I(false) for preallocated.
type: bool
default: true
storage_domain:
description:
- Specifies the target storage domain for converted disks. This is required parameter.
version_added: "2.3"
xen:
description:
- Dictionary of values to be used to connect to XEN and import
a virtual machine to oVirt.
suboptions:
url:
description:
- The URL to be passed to the I(virt-v2v) tool for conversion.
- For example I(xen+ssh://[email protected]). This is required parameter.
drivers_iso:
description:
- The name of the ISO containing drivers that can be used during the I(virt-v2v) conversion process.
sparse:
description:
- Specifies the disk allocation policy of the resulting virtual machine. I(true) for sparse, I(false) for preallocated.
type: bool
default: true
storage_domain:
description:
- Specifies the target storage domain for converted disks. This is required parameter.
version_added: "2.3"
kvm:
description:
- Dictionary of values to be used to connect to kvm and import
a virtual machine to oVirt.
suboptions:
name:
description:
- The name of the KVM virtual machine.
username:
description:
- The username to authenticate against the KVM.
password:
description:
- The password to authenticate against the KVM.
url:
description:
- The URL to be passed to the I(virt-v2v) tool for conversion.
- For example I(qemu:///system). This is required parameter.
drivers_iso:
description:
- The name of the ISO containing drivers that can be used during the I(virt-v2v) conversion process.
sparse:
description:
- Specifies the disk allocation policy of the resulting virtual machine. I(true) for sparse, I(false) for preallocated.
type: bool
default: true
storage_domain:
description:
- Specifies the target storage domain for converted disks. This is required parameter.
version_added: "2.3"
cpu_mode:
description:
- "CPU mode of the virtual machine. It can be some of the following: I(host_passthrough), I(host_model) or I(custom)."
- "For I(host_passthrough) CPU type you need to set C(placement_policy) to I(pinned)."
- "If no value is passed, default value is set by oVirt/RHV engine."
version_added: "2.5"
placement_policy:
description:
- "The configuration of the virtual machine's placement policy."
- "If no value is passed, default value is set by oVirt/RHV engine."
- "Placement policy can be one of the following values:"
suboptions:
migratable:
description:
- "Allow manual and automatic migration."
pinned:
description:
- "Do not allow migration."
user_migratable:
description:
- "Allow manual migration only."
version_added: "2.5"
ticket:
description:
- "If I(true), in addition return I(remote_vv_file) inside I(vm) dictionary, which contains compatible
content for remote-viewer application. Works only C(state) is I(running)."
version_added: "2.7"
type: bool
cpu_pinning:
description:
- "CPU Pinning topology to map virtual machine CPU to host CPU."
- "CPU Pinning topology is a list of dictionary which can have following values:"
suboptions:
cpu:
description:
- "Number of the host CPU."
vcpu:
description:
- "Number of the virtual machine CPU."
version_added: "2.5"
soundcard_enabled:
description:
- "If I(true), the sound card is added to the virtual machine."
type: bool
version_added: "2.5"
smartcard_enabled:
description:
- "If I(true), use smart card authentication."
type: bool
version_added: "2.5"
io_threads:
description:
- "Number of IO threads used by virtual machine. I(0) means IO threading disabled."
version_added: "2.5"
ballooning_enabled:
description:
- "If I(true), use memory ballooning."
- "Memory balloon is a guest device, which may be used to re-distribute / reclaim the host memory
based on VM needs in a dynamic way. In this way it's possible to create memory over commitment states."
type: bool
version_added: "2.5"
numa_tune_mode:
description:
- "Set how the memory allocation for NUMA nodes of this VM is applied (relevant if NUMA nodes are set for this VM)."
- "It can be one of the following: I(interleave), I(preferred) or I(strict)."
- "If no value is passed, default value is set by oVirt/RHV engine."
choices: ['interleave', 'preferred', 'strict']
version_added: "2.6"
numa_nodes:
description:
- "List of vNUMA Nodes to set for this VM and pin them to assigned host's physical NUMA node."
- "Each vNUMA node is described by following dictionary:"
suboptions:
index:
description:
- "The index of this NUMA node (mandatory)."
memory:
description:
- "Memory size of the NUMA node in MiB (mandatory)."
cores:
description:
- "list of VM CPU cores indexes to be included in this NUMA node (mandatory)."
numa_node_pins:
description:
- "list of physical NUMA node indexes to pin this virtual NUMA node to."
version_added: "2.6"
rng_device:
description:
- "Random number generator (RNG). You can choose of one the following devices I(urandom), I(random) or I(hwrng)."
- "In order to select I(hwrng), you must have it enabled on cluster first."
- "/dev/urandom is used for cluster version >= 4.1, and /dev/random for cluster version <= 4.0"
version_added: "2.5"
custom_properties:
description:
- "Properties sent to VDSM to configure various hooks."
- "Custom properties is a list of dictionary which can have following values:"
suboptions:
name:
description:
- "Name of the custom property. For example: I(hugepages), I(vhost), I(sap_agent), etc."
regexp:
description:
- "Regular expression to set for custom property."
value:
description:
- "Value to set for custom property."
version_added: "2.5"
watchdog:
description:
- "Assign watchdog device for the virtual machine."
- "Watchdogs is a dictionary which can have following values:"
suboptions:
model:
description:
- "Model of the watchdog device. For example: I(i6300esb), I(diag288) or I(null)."
action:
description:
- "Watchdog action to be performed when watchdog is triggered. For example: I(none), I(reset), I(poweroff), I(pause) or I(dump)."
version_added: "2.5"
graphical_console:
description:
- "Assign graphical console to the virtual machine."
suboptions:
headless_mode:
description:
- If I(true) disable the graphics console for this virtual machine.
type: bool
protocol:
description:
- Graphical protocol, a list of I(spice), I(vnc), or both.
version_added: "2.5"
exclusive:
description:
- "When C(state) is I(exported) this parameter indicates if the existing VM with the
same name should be overwritten."
version_added: "2.8"
type: bool
export_domain:
description:
- "When C(state) is I(exported)this parameter specifies the name of the export storage domain."
version_added: "2.8"
export_ova:
description:
- Dictionary of values to be used to export VM as OVA.
suboptions:
host:
description:
- The name of the destination host where the OVA has to be exported.
directory:
description:
- The name of the directory where the OVA has to be exported.
filename:
description:
- The name of the exported OVA file.
version_added: "2.8"
force_migrate:
description:
- If I(true), the VM will migrate when I(placement_policy=user-migratable) but not when I(placement_policy=pinned).
version_added: "2.8"
type: bool
migrate:
description:
- "If I(true), the VM will migrate to any available host."
version_added: "2.8"
type: bool
next_run:
description:
- "If I(true), the update will not be applied to the VM immediately and will be only applied when virtual machine is restarted."
- NOTE - If there are multiple next run configuration changes on the VM, the first change may get reverted if this option is not passed.
version_added: "2.8"
type: bool
snapshot_name:
description:
- "Snapshot to clone VM from."
- "Snapshot with description specified should exist."
- "You have to specify C(snapshot_vm) parameter with virtual machine name of this snapshot."
version_added: "2.9"
snapshot_vm:
description:
- "Source VM to clone VM from."
- "VM should have snapshot specified by C(snapshot)."
- "If C(snapshot_name) specified C(snapshot_vm) is required."
version_added: "2.9"
custom_emulated_machine:
description:
- "Sets the value of the custom_emulated_machine attribute."
version_added: "2.10"
notes:
- If VM is in I(UNASSIGNED) or I(UNKNOWN) state before any operation, the module will fail.
If VM is in I(IMAGE_LOCKED) state before any operation, we try to wait for the VM to be I(DOWN).
If VM is in I(SAVING_STATE) state before any operation, we try to wait for the VM to be I(SUSPENDED).
If VM is in I(POWERING_DOWN) state before any operation, we try to wait for the VM to be I(UP) or I(DOWN). The VM can
get into I(UP) state from I(POWERING_DOWN) state when there is no ACPI or guest agent running inside the VM, or
if the shutdown operation fails.
When the user specifies the I(UP) C(state), we always wait for the VM to be in I(UP) state in case the VM is I(MIGRATING),
I(REBOOTING), I(POWERING_UP), I(RESTORING_STATE) or I(WAIT_FOR_LAUNCH). In other states we run the start operation on the VM.
When the user specifies the I(stopped) C(state) and passes the C(force) parameter set to I(true), we forcibly stop the VM in
any state. If the user does not pass the C(force) parameter, we always wait for the VM to be in UP state in case the VM is
I(MIGRATING), I(REBOOTING), I(POWERING_UP), I(RESTORING_STATE) or I(WAIT_FOR_LAUNCH). If the VM is in I(PAUSED) or
I(SUSPENDED) state, we start the VM. Then we gracefully shut down the VM.
When the user specifies the I(suspended) C(state), we always wait for the VM to be in UP state in case the VM is I(MIGRATING),
I(REBOOTING), I(POWERING_UP), I(RESTORING_STATE) or I(WAIT_FOR_LAUNCH). If the VM is in I(PAUSED) or I(DOWN) state,
we start the VM. Then we suspend the VM.
When the user specifies the I(absent) C(state), we forcibly stop the VM in any state and remove it.
extends_documentation_fragment: ovirt
'''
EXAMPLES = '''
# Examples don't contain auth parameter for simplicity,
# look at ovirt_auth module to see how to reuse authentication:
- name: Creates a new Virtual Machine from template named 'rhel7_template'
ovirt_vm:
state: present
name: myvm
template: rhel7_template
cluster: mycluster
- name: Register VM
ovirt_vm:
state: registered
storage_domain: mystorage
cluster: mycluster
name: myvm
- name: Register VM using id
ovirt_vm:
state: registered
storage_domain: mystorage
cluster: mycluster
id: 1111-1111-1111-1111
- name: Register VM, allowing partial import
ovirt_vm:
state: registered
storage_domain: mystorage
allow_partial_import: "True"
cluster: mycluster
id: 1111-1111-1111-1111
- name: Register VM with vnic profile mappings and reassign bad macs
ovirt_vm:
state: registered
storage_domain: mystorage
cluster: mycluster
id: 1111-1111-1111-1111
vnic_profile_mappings:
- source_network_name: mynetwork
source_profile_name: mynetwork
target_profile_id: 3333-3333-3333-3333
- source_network_name: mynetwork2
source_profile_name: mynetwork2
target_profile_id: 4444-4444-4444-4444
reassign_bad_macs: "True"
- name: Register VM with mappings
ovirt_vm:
state: registered
storage_domain: mystorage
cluster: mycluster
id: 1111-1111-1111-1111
role_mappings:
- source_name: Role_A
dest_name: Role_B
domain_mappings:
- source_name: Domain_A
dest_name: Domain_B
lun_mappings:
- source_storage_type: iscsi
source_logical_unit_id: 1IET_000d0001
source_logical_unit_port: 3260
source_logical_unit_portal: 1
source_logical_unit_address: 10.34.63.203
source_logical_unit_target: iqn.2016-08-09.brq.str-01:omachace
dest_storage_type: iscsi
dest_logical_unit_id: 1IET_000d0002
dest_logical_unit_port: 3260
dest_logical_unit_portal: 1
dest_logical_unit_address: 10.34.63.204
dest_logical_unit_target: iqn.2016-08-09.brq.str-02:omachace
affinity_group_mappings:
- source_name: Affinity_A
dest_name: Affinity_B
affinity_label_mappings:
- source_name: Label_A
dest_name: Label_B
cluster_mappings:
- source_name: cluster_A
dest_name: cluster_B
- name: Creates a stateless VM which will always use latest template version
ovirt_vm:
name: myvm
template: rhel7
cluster: mycluster
use_latest_template_version: true
# Creates a new server rhel7 Virtual Machine from Blank template
# on brq01 cluster with 2GiB memory and 2 vcpu cores/sockets
# and attach bootable disk with name rhel7_disk and attach virtio NIC
- ovirt_vm:
state: present
cluster: brq01
name: myvm
memory: 2GiB
cpu_cores: 2
cpu_sockets: 2
cpu_shares: 1024
type: server
operating_system: rhel_7x64
disks:
- name: rhel7_disk
bootable: True
nics:
- name: nic1
# Change VM Name
- ovirt_vm:
id: 00000000-0000-0000-0000-000000000000
name: "new_vm_name"
- name: Run VM with cloud init
ovirt_vm:
name: rhel7
template: rhel7
cluster: Default
memory: 1GiB
high_availability: true
high_availability_priority: 50 # Available from Ansible 2.5
cloud_init:
nic_boot_protocol: static
nic_ip_address: 10.34.60.86
nic_netmask: 255.255.252.0
nic_gateway: 10.34.63.254
nic_name: eth1
nic_on_boot: true
host_name: example.com
custom_script: |
write_files:
- content: |
Hello, world!
path: /tmp/greeting.txt
permissions: '0644'
user_name: root
root_password: super_password
- name: Run VM with cloud init, with multiple network interfaces
ovirt_vm:
name: rhel7_4
template: rhel7
cluster: mycluster
cloud_init_nics:
- nic_name: eth0
nic_boot_protocol: dhcp
nic_on_boot: true
- nic_name: eth1
nic_boot_protocol: static
nic_ip_address: 10.34.60.86
nic_netmask: 255.255.252.0
nic_gateway: 10.34.63.254
nic_on_boot: true
# IP version 6 parameters are supported since ansible 2.9
- nic_name: eth2
nic_boot_protocol_v6: static
nic_ip_address_v6: '2620:52:0:2282:b898:1f69:6512:36c5'
nic_gateway_v6: '2620:52:0:2282:b898:1f69:6512:36c9'
nic_netmask_v6: '120'
nic_on_boot: true
- nic_name: eth3
nic_on_boot: true
nic_boot_protocol_v6: dhcp
- name: Run VM with sysprep
ovirt_vm:
name: windows2012R2_AD
template: windows2012R2
cluster: Default
memory: 3GiB
high_availability: true
sysprep:
host_name: windowsad.example.com
user_name: Administrator
root_password: SuperPassword123
- name: Migrate/Run VM to/on host named 'host1'
ovirt_vm:
state: running
name: myvm
host: host1
- name: Migrate VM to any available host
ovirt_vm:
state: running
name: myvm
migrate: true
- name: Change VMs CD
ovirt_vm:
name: myvm
cd_iso: drivers.iso
- name: Eject VMs CD
ovirt_vm:
name: myvm
cd_iso: ''
- name: Boot VM from CD
ovirt_vm:
name: myvm
cd_iso: centos7_x64.iso
boot_devices:
- cdrom
- name: Stop vm
ovirt_vm:
state: stopped
name: myvm
- name: Upgrade memory to already created VM
ovirt_vm:
name: myvm
memory: 4GiB
- name: Hot plug memory to already created and running VM (VM won't be restarted)
ovirt_vm:
name: myvm
memory: 4GiB
# Create/update a VM to run with two vNUMA nodes and pin them to physical NUMA nodes as follows:
# vnuma index 0-> numa index 0, vnuma index 1-> numa index 1
- name: Create a VM to run with two vNUMA nodes
ovirt_vm:
name: myvm
cluster: mycluster
numa_tune_mode: "interleave"
numa_nodes:
- index: 0
cores: [0]
memory: 20
numa_node_pins: [0]
- index: 1
cores: [1]
memory: 30
numa_node_pins: [1]
- name: Update an existing VM to run without previously created vNUMA nodes (i.e. remove all vNUMA nodes+NUMA pinning setting)
ovirt_vm:
name: myvm
cluster: mycluster
state: "present"
numa_tune_mode: "interleave"
numa_nodes:
- index: -1
# When a change to the VM requires a restart, use the next_run state.
# The VM will be updated and rebooted if there are any changes.
# If the present state were used, the VM wouldn't be restarted.
- ovirt_vm:
state: next_run
name: myvm
boot_devices:
- network
- name: Import virtual machine from VMware
ovirt_vm:
state: stopped
cluster: mycluster
name: vmware_win10
timeout: 1800
poll_interval: 30
vmware:
url: vpx://[email protected]/Folder1/Cluster1/2.3.4.5?no_verify=1
name: windows10
storage_domain: mynfs
username: user
password: password
- name: Create vm from template and create all disks on specific storage domain
ovirt_vm:
name: vm_test
cluster: mycluster
template: mytemplate
storage_domain: mynfs
nics:
- name: nic1
- name: Remove VM, if VM is running it will be stopped
ovirt_vm:
state: absent
name: myvm
# Defining a specific quota for a VM:
# Since Ansible 2.5
- ovirt_quotas_facts:
data_center: Default
name: myquota
- ovirt_vm:
name: myvm
sso: False
boot_menu: True
usb_support: True
serial_console: True
quota_id: "{{ ovirt_quotas[0]['id'] }}"
- name: Create a VM that has the console configured for both Spice and VNC
ovirt_vm:
name: myvm
template: mytemplate
cluster: mycluster
graphical_console:
protocol:
- spice
- vnc
# Execute remote viewer to VM
- block:
- name: Create a ticket for console for a running VM
ovirt_vm:
name: myvm
ticket: true
state: running
register: myvm
- name: Save ticket to file
copy:
content: "{{ myvm.vm.remote_vv_file }}"
dest: ~/vvfile.vv
- name: Run remote viewer with file
command: remote-viewer ~/vvfile.vv
# Default value of host_device state is present
- name: Attach host devices to virtual machine
ovirt_vm:
name: myvm
host: myhost
placement_policy: pinned
host_devices:
- name: pci_0000_00_06_0
- name: pci_0000_00_07_0
state: absent
- name: pci_0000_00_08_0
state: present
- name: Export the VM as OVA
ovirt_vm:
name: myvm
state: exported
cluster: mycluster
export_ova:
host: myhost
filename: myvm.ova
directory: /tmp/
- name: Clone VM from snapshot
ovirt_vm:
snapshot_vm: myvm
snapshot_name: myvm_snap
name: myvm_clone
state: present
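# An additional example sketch: 'reboot' is one of this module's supported
# state choices; this assumes the VM 'myvm' is already running.
- name: Reboot VM
  ovirt_vm:
    state: reboot
    name: myvm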
'''
RETURN = '''
id:
description: ID of the VM which is managed
returned: On success if VM is found.
type: str
sample: 7de90f31-222c-436c-a1ca-7e655bd5b60c
vm:
description: "Dictionary of all the VM attributes. VM attributes can be found on your oVirt/RHV instance
at following url: http://ovirt.github.io/ovirt-engine-api-model/master/#types/vm.
Additionally when user sent ticket=true, this module will return also remote_vv_file
parameter in vm dictionary, which contains remote-viewer compatible file to open virtual
machine console. Please note that this file contains sensible information."
returned: On success if VM is found.
type: dict
'''
import traceback
try:
import ovirtsdk4.types as otypes
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ovirt import (
BaseModule,
check_params,
check_sdk,
convert_to_bytes,
create_connection,
equal,
get_dict_of_struct,
get_entity,
get_link_name,
get_id_by_name,
ovirt_full_argument_spec,
search_by_attributes,
search_by_name,
wait,
engine_supported,
)
class VmsModule(BaseModule):
def __init__(self, *args, **kwargs):
super(VmsModule, self).__init__(*args, **kwargs)
self._initialization = None
self._is_new = False
def __get_template_with_version(self):
"""
oVirt/RHV in version 4.1 doesn't support searching by template+version_number,
so we need to list all templates with the given name and then iterate
through their versions until we find the one we are looking for.
"""
template = None
templates_service = self._connection.system_service().templates_service()
if self.param('template'):
clusters_service = self._connection.system_service().clusters_service()
cluster = search_by_name(clusters_service, self.param('cluster'))
data_center = self._connection.follow_link(cluster.data_center)
templates = templates_service.list(
search='name=%s and datacenter=%s' % (self.param('template'), data_center.name)
)
if self.param('template_version'):
templates = [
t for t in templates
if t.version.version_number == self.param('template_version')
]
if not templates:
raise ValueError(
"Template with name '%s' and version '%s' in data center '%s' was not found'" % (
self.param('template'),
self.param('template_version'),
data_center.name
)
)
template = sorted(templates, key=lambda t: t.version.version_number, reverse=True)[0]
elif self._is_new:
# If the template isn't specified and the VM is about to be created, use the default template:
template = templates_service.template_service('00000000-0000-0000-0000-000000000000').get()
return template
def __get_storage_domain_and_all_template_disks(self, template):
if self.param('template') is None:
return None
if self.param('storage_domain') is None:
return None
disks = list()
for att in self._connection.follow_link(template.disk_attachments):
disks.append(
otypes.DiskAttachment(
disk=otypes.Disk(
id=att.disk.id,
format=otypes.DiskFormat(self.param('disk_format')),
storage_domains=[
otypes.StorageDomain(
id=get_id_by_name(
self._connection.system_service().storage_domains_service(),
self.param('storage_domain')
)
)
]
)
)
)
return disks
def __get_snapshot(self):
if self.param('snapshot_vm') is None:
return None
if self.param('snapshot_name') is None:
return None
vms_service = self._connection.system_service().vms_service()
vm_id = get_id_by_name(vms_service, self.param('snapshot_vm'))
vm_service = vms_service.vm_service(vm_id)
snaps_service = vm_service.snapshots_service()
snaps = snaps_service.list()
snap = next(
(s for s in snaps if s.description == self.param('snapshot_name')),
None
)
return snap
def __get_cluster(self):
if self.param('cluster') is not None:
return self.param('cluster')
elif self.param('snapshot_name') is not None and self.param('snapshot_vm') is not None:
vms_service = self._connection.system_service().vms_service()
vm = search_by_name(vms_service, self.param('snapshot_vm'))
return self._connection.system_service().clusters_service().cluster_service(vm.cluster.id).get().name
def build_entity(self):
template = self.__get_template_with_version()
cluster = self.__get_cluster()
snapshot = self.__get_snapshot()
disk_attachments = self.__get_storage_domain_and_all_template_disks(template)
return otypes.Vm(
id=self.param('id'),
name=self.param('name'),
cluster=otypes.Cluster(
name=cluster
) if cluster else None,
disk_attachments=disk_attachments,
template=otypes.Template(
id=template.id,
) if template else None,
use_latest_template_version=self.param('use_latest_template_version'),
stateless=self.param('stateless') or self.param('use_latest_template_version'),
delete_protected=self.param('delete_protected'),
custom_emulated_machine=self.param('custom_emulated_machine'),
bios=(
otypes.Bios(boot_menu=otypes.BootMenu(enabled=self.param('boot_menu')))
) if self.param('boot_menu') is not None else None,
console=(
otypes.Console(enabled=self.param('serial_console'))
) if self.param('serial_console') is not None else None,
usb=(
otypes.Usb(enabled=self.param('usb_support'))
) if self.param('usb_support') is not None else None,
sso=(
otypes.Sso(
methods=[otypes.Method(id=otypes.SsoMethod.GUEST_AGENT)] if self.param('sso') else []
)
) if self.param('sso') is not None else None,
quota=otypes.Quota(id=self._module.params.get('quota_id')) if self.param('quota_id') is not None else None,
high_availability=otypes.HighAvailability(
enabled=self.param('high_availability'),
priority=self.param('high_availability_priority'),
) if self.param('high_availability') is not None or self.param('high_availability_priority') else None,
lease=otypes.StorageDomainLease(
storage_domain=otypes.StorageDomain(
id=get_id_by_name(
service=self._connection.system_service().storage_domains_service(),
name=self.param('lease')
) if self.param('lease') else None
)
) if self.param('lease') is not None else None,
cpu=otypes.Cpu(
topology=otypes.CpuTopology(
cores=self.param('cpu_cores'),
sockets=self.param('cpu_sockets'),
threads=self.param('cpu_threads'),
) if any((
self.param('cpu_cores'),
self.param('cpu_sockets'),
self.param('cpu_threads')
)) else None,
cpu_tune=otypes.CpuTune(
vcpu_pins=[
otypes.VcpuPin(vcpu=int(pin['vcpu']), cpu_set=str(pin['cpu'])) for pin in self.param('cpu_pinning')
],
) if self.param('cpu_pinning') else None,
mode=otypes.CpuMode(self.param('cpu_mode')) if self.param('cpu_mode') else None,
) if any((
self.param('cpu_cores'),
self.param('cpu_sockets'),
self.param('cpu_threads'),
self.param('cpu_mode'),
self.param('cpu_pinning')
)) else None,
cpu_shares=self.param('cpu_shares'),
os=otypes.OperatingSystem(
type=self.param('operating_system'),
boot=otypes.Boot(
devices=[
otypes.BootDevice(dev) for dev in self.param('boot_devices')
],
) if self.param('boot_devices') else None,
cmdline=self.param('kernel_params') if self.param('kernel_params_persist') else None,
initrd=self.param('initrd_path') if self.param('kernel_params_persist') else None,
kernel=self.param('kernel_path') if self.param('kernel_params_persist') else None,
) if (
self.param('operating_system') or self.param('boot_devices') or self.param('kernel_params_persist')
) else None,
type=otypes.VmType(
self.param('type')
) if self.param('type') else None,
memory=convert_to_bytes(
self.param('memory')
) if self.param('memory') else None,
memory_policy=otypes.MemoryPolicy(
guaranteed=convert_to_bytes(self.param('memory_guaranteed')),
ballooning=self.param('ballooning_enabled'),
max=convert_to_bytes(self.param('memory_max')),
) if any((
self.param('memory_guaranteed'),
self.param('ballooning_enabled') is not None,
self.param('memory_max')
)) else None,
instance_type=otypes.InstanceType(
id=get_id_by_name(
self._connection.system_service().instance_types_service(),
self.param('instance_type'),
),
) if self.param('instance_type') else None,
custom_compatibility_version=otypes.Version(
major=self._get_major(self.param('custom_compatibility_version')),
minor=self._get_minor(self.param('custom_compatibility_version')),
) if self.param('custom_compatibility_version') is not None else None,
description=self.param('description'),
comment=self.param('comment'),
time_zone=otypes.TimeZone(
name=self.param('timezone'),
) if self.param('timezone') else None,
serial_number=otypes.SerialNumber(
policy=otypes.SerialNumberPolicy(self.param('serial_policy')),
value=self.param('serial_policy_value'),
) if (
self.param('serial_policy') is not None or
self.param('serial_policy_value') is not None
) else None,
placement_policy=otypes.VmPlacementPolicy(
affinity=otypes.VmAffinity(self.param('placement_policy')),
hosts=[
otypes.Host(name=self.param('host')),
] if self.param('host') else None,
) if self.param('placement_policy') else None,
soundcard_enabled=self.param('soundcard_enabled'),
display=otypes.Display(
smartcard_enabled=self.param('smartcard_enabled')
) if self.param('smartcard_enabled') is not None else None,
io=otypes.Io(
threads=self.param('io_threads'),
) if self.param('io_threads') is not None else None,
numa_tune_mode=otypes.NumaTuneMode(
self.param('numa_tune_mode')
) if self.param('numa_tune_mode') else None,
rng_device=otypes.RngDevice(
source=otypes.RngSource(self.param('rng_device')),
) if self.param('rng_device') else None,
custom_properties=[
otypes.CustomProperty(
name=cp.get('name'),
regexp=cp.get('regexp'),
value=str(cp.get('value')),
) for cp in self.param('custom_properties') if cp
] if self.param('custom_properties') is not None else None,
initialization=self.get_initialization() if self.param('cloud_init_persist') else None,
snapshots=[otypes.Snapshot(id=snapshot.id)] if snapshot is not None else None,
)
def _get_export_domain_service(self):
provider_name = self._module.params['export_domain']
export_sds_service = self._connection.system_service().storage_domains_service()
export_sd_id = get_id_by_name(export_sds_service, provider_name)
return export_sds_service.service(export_sd_id)
def post_export_action(self, entity):
self._service = self._get_export_domain_service().vms_service()
def update_check(self, entity):
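# Compare both the current configuration and, if one exists, the pending
# next-run configuration, so queued (not yet applied) changes are detected: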
res = self._update_check(entity)
if entity.next_run_configuration_exists:
res = res and self._update_check(self._service.service(entity.id).get(next_run=True))
return res
def _update_check(self, entity):
def check_cpu_pinning():
if self.param('cpu_pinning'):
current = []
if entity.cpu.cpu_tune:
current = [(str(pin.cpu_set), int(pin.vcpu)) for pin in entity.cpu.cpu_tune.vcpu_pins]
passed = [(str(pin['cpu']), int(pin['vcpu'])) for pin in self.param('cpu_pinning')]
return sorted(current) == sorted(passed)
return True
def check_custom_properties():
if self.param('custom_properties'):
current = []
if entity.custom_properties:
current = [(cp.name, cp.regexp, str(cp.value)) for cp in entity.custom_properties]
passed = [(cp.get('name'), cp.get('regexp'), str(cp.get('value'))) for cp in self.param('custom_properties') if cp]
return sorted(current) == sorted(passed)
return True
def check_host():
if self.param('host') is not None:
return self.param('host') in [self._connection.follow_link(host).name for host in getattr(entity.placement_policy, 'hosts', None) or []]
return True
def check_custom_compatibility_version():
if self.param('custom_compatibility_version') is not None:
return (self._get_minor(self.param('custom_compatibility_version')) == self._get_minor(entity.custom_compatibility_version) and
self._get_major(self.param('custom_compatibility_version')) == self._get_major(entity.custom_compatibility_version))
return True
cpu_mode = getattr(entity.cpu, 'mode')
vm_display = entity.display
return (
check_cpu_pinning() and
check_custom_properties() and
check_host() and
check_custom_compatibility_version() and
not self.param('cloud_init_persist') and
not self.param('kernel_params_persist') and
equal(self.param('cluster'), get_link_name(self._connection, entity.cluster)) and equal(convert_to_bytes(self.param('memory')), entity.memory) and
equal(convert_to_bytes(self.param('memory_guaranteed')), entity.memory_policy.guaranteed) and
equal(convert_to_bytes(self.param('memory_max')), entity.memory_policy.max) and
equal(self.param('cpu_cores'), entity.cpu.topology.cores) and
equal(self.param('cpu_sockets'), entity.cpu.topology.sockets) and
equal(self.param('cpu_threads'), entity.cpu.topology.threads) and
equal(self.param('cpu_mode'), str(cpu_mode) if cpu_mode else None) and
equal(self.param('type'), str(entity.type)) and
equal(self.param('name'), str(entity.name)) and
equal(self.param('operating_system'), str(entity.os.type)) and
equal(self.param('boot_menu'), entity.bios.boot_menu.enabled) and
equal(self.param('soundcard_enabled'), entity.soundcard_enabled) and
equal(self.param('smartcard_enabled'), getattr(vm_display, 'smartcard_enabled', False)) and
equal(self.param('io_threads'), entity.io.threads) and
equal(self.param('ballooning_enabled'), entity.memory_policy.ballooning) and
equal(self.param('serial_console'), getattr(entity.console, 'enabled', None)) and
equal(self.param('usb_support'), entity.usb.enabled) and
equal(self.param('sso'), True if entity.sso.methods else False) and
equal(self.param('quota_id'), getattr(entity.quota, 'id', None)) and
equal(self.param('high_availability'), entity.high_availability.enabled) and
equal(self.param('high_availability_priority'), entity.high_availability.priority) and
equal(self.param('lease'), get_link_name(self._connection, getattr(entity.lease, 'storage_domain', None))) and
equal(self.param('stateless'), entity.stateless) and
equal(self.param('cpu_shares'), entity.cpu_shares) and
equal(self.param('delete_protected'), entity.delete_protected) and
equal(self.param('custom_emulated_machine'), entity.custom_emulated_machine) and
equal(self.param('use_latest_template_version'), entity.use_latest_template_version) and
equal(self.param('boot_devices'), [str(dev) for dev in getattr(entity.os.boot, 'devices', [])]) and
equal(self.param('instance_type'), get_link_name(self._connection, entity.instance_type), ignore_case=True) and
equal(self.param('description'), entity.description) and
equal(self.param('comment'), entity.comment) and
equal(self.param('timezone'), getattr(entity.time_zone, 'name', None)) and
equal(self.param('serial_policy'), str(getattr(entity.serial_number, 'policy', None))) and
equal(self.param('serial_policy_value'), getattr(entity.serial_number, 'value', None)) and
equal(self.param('placement_policy'), str(entity.placement_policy.affinity) if entity.placement_policy else None) and
equal(self.param('numa_tune_mode'), str(entity.numa_tune_mode)) and
equal(self.param('rng_device'), str(entity.rng_device.source) if entity.rng_device else None)
)
def pre_create(self, entity):
# Mark if entity exists before touching it:
if entity is None:
self._is_new = True
def post_update(self, entity):
self.post_present(entity.id)
def post_present(self, entity_id):
# After creation of the VM, attach disks and NICs:
entity = self._service.service(entity_id).get()
self.__attach_disks(entity)
self.__attach_nics(entity)
self._attach_cd(entity)
self.changed = self.__attach_numa_nodes(entity)
self.changed = self.__attach_watchdog(entity)
self.changed = self.__attach_graphical_console(entity)
self.changed = self.__attach_host_devices(entity)
def pre_remove(self, entity):
# Forcibly stop the VM, if it's not in DOWN state:
if entity.status != otypes.VmStatus.DOWN:
if not self._module.check_mode:
self.changed = self.action(
action='stop',
action_condition=lambda vm: vm.status != otypes.VmStatus.DOWN,
wait_condition=lambda vm: vm.status == otypes.VmStatus.DOWN,
)['changed']
def __suspend_shutdown_common(self, vm_service):
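# If the VM is in a transient state on its way to UP, wait for it to
# reach UP first, so the subsequent shutdown/suspend can proceed cleanly: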
if vm_service.get().status in [
otypes.VmStatus.MIGRATING,
otypes.VmStatus.POWERING_UP,
otypes.VmStatus.REBOOT_IN_PROGRESS,
otypes.VmStatus.WAIT_FOR_LAUNCH,
otypes.VmStatus.UP,
otypes.VmStatus.RESTORING_STATE,
]:
self._wait_for_UP(vm_service)
def _pre_shutdown_action(self, entity):
vm_service = self._service.vm_service(entity.id)
self.__suspend_shutdown_common(vm_service)
if entity.status in [otypes.VmStatus.SUSPENDED, otypes.VmStatus.PAUSED]:
vm_service.start()
self._wait_for_UP(vm_service)
return vm_service.get()
def _pre_suspend_action(self, entity):
vm_service = self._service.vm_service(entity.id)
self.__suspend_shutdown_common(vm_service)
if entity.status in [otypes.VmStatus.PAUSED, otypes.VmStatus.DOWN]:
vm_service.start()
self._wait_for_UP(vm_service)
return vm_service.get()
def _post_start_action(self, entity):
vm_service = self._service.service(entity.id)
self._wait_for_UP(vm_service)
self._attach_cd(vm_service.get())
def _attach_cd(self, entity):
cd_iso = self.param('cd_iso')
if cd_iso is not None:
vm_service = self._service.service(entity.id)
current = vm_service.get().status == otypes.VmStatus.UP and self.param('state') == 'running'
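# 'current' selects whether the CD change applies only to the running VM
# (True) or to its persistent configuration (False):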
cdroms_service = vm_service.cdroms_service()
cdrom_device = cdroms_service.list()[0]
cdrom_service = cdroms_service.cdrom_service(cdrom_device.id)
cdrom = cdrom_service.get(current=current)
if getattr(cdrom.file, 'id', '') != cd_iso:
if not self._module.check_mode:
cdrom_service.update(
cdrom=otypes.Cdrom(
file=otypes.File(id=cd_iso)
),
current=current,
)
self.changed = True
return entity
def _migrate_vm(self, entity):
vm_host = self.param('host')
vm_service = self._service.vm_service(entity.id)
# The VM can be migrated only when it is UP:
if entity.status == otypes.VmStatus.UP:
if vm_host is not None:
hosts_service = self._connection.system_service().hosts_service()
current_vm_host = hosts_service.host_service(entity.host.id).get().name
if vm_host != current_vm_host:
if not self._module.check_mode:
vm_service.migrate(host=otypes.Host(name=vm_host), force=self.param('force_migrate'))
self._wait_for_UP(vm_service)
self.changed = True
elif self.param('migrate'):
if not self._module.check_mode:
vm_service.migrate(force=self.param('force_migrate'))
self._wait_for_UP(vm_service)
self.changed = True
return entity
def _wait_for_UP(self, vm_service):
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.UP,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
def _wait_for_vm_disks(self, vm_service):
disks_service = self._connection.system_service().disks_service()
for da in vm_service.disk_attachments_service().list():
disk_service = disks_service.disk_service(da.disk.id)
wait(
service=disk_service,
condition=lambda disk: disk.status == otypes.DiskStatus.OK if disk.storage_type == otypes.DiskStorageType.IMAGE else True,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
def wait_for_down(self, vm):
"""
This function will first wait for the VM to reach DOWN status.
Then, for stateless VMs, it will find the active snapshot and wait until
its state is OK and the stateless snapshot is removed.
"""
vm_service = self._service.vm_service(vm.id)
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.DOWN,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
if vm.stateless:
snapshots_service = vm_service.snapshots_service()
snapshots = snapshots_service.list()
snap_active = [
snap for snap in snapshots
if snap.snapshot_type == otypes.SnapshotType.ACTIVE
][0]
snap_stateless = [
snap for snap in snapshots
if snap.snapshot_type == otypes.SnapshotType.STATELESS
]
# Stateless snapshot may be already removed:
if snap_stateless:
"""
We need to wait for the active snapshot ID to be removed, as it is the current
stateless snapshot. Then we need to wait for the stateless snapshot ID to
be ready for use, because it will become the active snapshot.
"""
wait(
service=snapshots_service.snapshot_service(snap_active.id),
condition=lambda snap: snap is None,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
wait(
service=snapshots_service.snapshot_service(snap_stateless[0].id),
condition=lambda snap: snap.snapshot_status == otypes.SnapshotStatus.OK,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
return True
def __attach_graphical_console(self, entity):
graphical_console = self.param('graphical_console')
if not graphical_console:
return False
vm_service = self._service.service(entity.id)
gcs_service = vm_service.graphics_consoles_service()
graphical_consoles = gcs_service.list()
# Remove all graphical consoles if there are any:
if bool(graphical_console.get('headless_mode')):
if not self._module.check_mode:
for gc in graphical_consoles:
gcs_service.console_service(gc.id).remove()
return len(graphical_consoles) > 0
# If there are no graphical consoles yet, add the requested ones:
protocol = graphical_console.get('protocol')
if isinstance(protocol, str):
protocol = [protocol]
current_protocols = [str(gc.protocol) for gc in graphical_consoles]
if not current_protocols:
if not self._module.check_mode:
for p in protocol:
gcs_service.add(
otypes.GraphicsConsole(
protocol=otypes.GraphicsType(p),
)
)
return True
# Update consoles:
if sorted(protocol) != sorted(current_protocols):
if not self._module.check_mode:
for gc in graphical_consoles:
gcs_service.console_service(gc.id).remove()
for p in protocol:
gcs_service.add(
otypes.GraphicsConsole(
protocol=otypes.GraphicsType(p),
)
)
return True
def __attach_disks(self, entity):
if not self.param('disks'):
return
vm_service = self._service.service(entity.id)
disks_service = self._connection.system_service().disks_service()
disk_attachments_service = vm_service.disk_attachments_service()
self._wait_for_vm_disks(vm_service)
for disk in self.param('disks'):
# If disk ID is not specified, find disk by name:
disk_id = disk.get('id')
if disk_id is None:
disk_id = getattr(
search_by_name(
service=disks_service,
name=disk.get('name')
),
'id',
None
)
# Attach disk to VM:
disk_attachment = disk_attachments_service.attachment_service(disk_id)
if get_entity(disk_attachment) is None:
if not self._module.check_mode:
disk_attachments_service.add(
otypes.DiskAttachment(
disk=otypes.Disk(
id=disk_id,
),
active=disk.get('activate', True),
interface=otypes.DiskInterface(
disk.get('interface', 'virtio')
),
bootable=disk.get('bootable', False),
)
)
self.changed = True
def __get_vnic_profile_id(self, nic):
"""
Return the VNIC profile ID looked up by its name. Because there can be
multiple VNIC profiles with the same name, the cluster is used as an additional filter.
"""
vnics_service = self._connection.system_service().vnic_profiles_service()
clusters_service = self._connection.system_service().clusters_service()
cluster = search_by_name(clusters_service, self.param('cluster'))
profiles = [
profile for profile in vnics_service.list()
if profile.name == nic.get('profile_name')
]
cluster_networks = [
net.id for net in self._connection.follow_link(cluster.networks)
]
try:
return next(
profile.id for profile in profiles
if profile.network.id in cluster_networks
)
except StopIteration:
raise Exception(
"Profile '%s' was not found in cluster '%s'" % (
nic.get('profile_name'),
self.param('cluster')
)
)
def __attach_numa_nodes(self, entity):
updated = False
numa_nodes_service = self._service.service(entity.id).numa_nodes_service()
if len(self.param('numa_nodes')) > 0:
# Remove all existing virtual numa nodes before adding new ones
existed_numa_nodes = numa_nodes_service.list()
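# Intended to remove nodes from the highest index down: the reverse flag
# below is set when the listed nodes appear in ascending index order.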
existed_numa_nodes.sort(reverse=len(existed_numa_nodes) > 1 and existed_numa_nodes[1].index > existed_numa_nodes[0].index)
for current_numa_node in existed_numa_nodes:
numa_nodes_service.node_service(current_numa_node.id).remove()
updated = True
for numa_node in self.param('numa_nodes'):
if numa_node is None or numa_node.get('index') is None or numa_node.get('cores') is None or numa_node.get('memory') is None:
continue
numa_nodes_service.add(
otypes.VirtualNumaNode(
index=numa_node.get('index'),
memory=numa_node.get('memory'),
cpu=otypes.Cpu(
cores=[
otypes.Core(
index=core
) for core in numa_node.get('cores')
],
),
numa_node_pins=[
otypes.NumaNodePin(
index=pin
) for pin in numa_node.get('numa_node_pins')
] if numa_node.get('numa_node_pins') is not None else None,
)
)
updated = True
return updated
def __attach_watchdog(self, entity):
watchdogs_service = self._service.service(entity.id).watchdogs_service()
watchdog = self.param('watchdog')
if watchdog is not None:
current_watchdog = next(iter(watchdogs_service.list()), None)
if watchdog.get('model') is None and current_watchdog:
watchdogs_service.watchdog_service(current_watchdog.id).remove()
return True
elif watchdog.get('model') is not None and current_watchdog is None:
watchdogs_service.add(
otypes.Watchdog(
model=otypes.WatchdogModel(watchdog.get('model').lower()),
action=otypes.WatchdogAction(watchdog.get('action')),
)
)
return True
elif current_watchdog is not None:
if (
str(current_watchdog.model).lower() != watchdog.get('model').lower() or
str(current_watchdog.action).lower() != watchdog.get('action').lower()
):
watchdogs_service.watchdog_service(current_watchdog.id).update(
otypes.Watchdog(
model=otypes.WatchdogModel(watchdog.get('model')),
action=otypes.WatchdogAction(watchdog.get('action')),
)
)
return True
return False
def __attach_nics(self, entity):
# Attach NICs to VM, if specified:
nics_service = self._service.service(entity.id).nics_service()
for nic in self.param('nics'):
if search_by_name(nics_service, nic.get('name')) is None:
if not self._module.check_mode:
nics_service.add(
otypes.Nic(
name=nic.get('name'),
interface=otypes.NicInterface(
nic.get('interface', 'virtio')
),
vnic_profile=otypes.VnicProfile(
id=self.__get_vnic_profile_id(nic),
) if nic.get('profile_name') else None,
mac=otypes.Mac(
address=nic.get('mac_address')
) if nic.get('mac_address') else None,
)
)
self.changed = True
def get_initialization(self):
if self._initialization is not None:
return self._initialization
sysprep = self.param('sysprep')
cloud_init = self.param('cloud_init')
cloud_init_nics = self.param('cloud_init_nics') or []
if cloud_init is not None:
cloud_init_nics.append(cloud_init)
if cloud_init or cloud_init_nics:
self._initialization = otypes.Initialization(
nic_configurations=[
otypes.NicConfiguration(
boot_protocol=otypes.BootProtocol(
nic.pop('nic_boot_protocol').lower()
) if nic.get('nic_boot_protocol') else None,
ipv6_boot_protocol=otypes.BootProtocol(
nic.pop('nic_boot_protocol_v6').lower()
) if nic.get('nic_boot_protocol_v6') else None,
name=nic.pop('nic_name', None),
on_boot=nic.pop('nic_on_boot', None),
ip=otypes.Ip(
address=nic.pop('nic_ip_address', None),
netmask=nic.pop('nic_netmask', None),
gateway=nic.pop('nic_gateway', None),
version=otypes.IpVersion('v4')
) if (
nic.get('nic_gateway') is not None or
nic.get('nic_netmask') is not None or
nic.get('nic_ip_address') is not None
) else None,
ipv6=otypes.Ip(
address=nic.pop('nic_ip_address_v6', None),
netmask=nic.pop('nic_netmask_v6', None),
gateway=nic.pop('nic_gateway_v6', None),
version=otypes.IpVersion('v6')
) if (
nic.get('nic_gateway_v6') is not None or
nic.get('nic_netmask_v6') is not None or
nic.get('nic_ip_address_v6') is not None
) else None,
)
for nic in cloud_init_nics
if (
nic.get('nic_boot_protocol_v6') is not None or
nic.get('nic_ip_address_v6') is not None or
nic.get('nic_gateway_v6') is not None or
nic.get('nic_netmask_v6') is not None or
nic.get('nic_gateway') is not None or
nic.get('nic_netmask') is not None or
nic.get('nic_ip_address') is not None or
nic.get('nic_boot_protocol') is not None or
nic.get('nic_on_boot') is not None
)
] if cloud_init_nics else None,
**cloud_init
)
elif sysprep:
self._initialization = otypes.Initialization(
**sysprep
)
return self._initialization
def __attach_host_devices(self, entity):
vm_service = self._service.service(entity.id)
host_devices_service = vm_service.host_devices_service()
host_devices = self.param('host_devices')
updated = False
if host_devices:
device_names = [dev.name for dev in host_devices_service.list()]
for device in host_devices:
device_name = device.get('name')
state = device.get('state', 'present')
if state == 'absent' and device_name in device_names:
updated = True
if not self._module.check_mode:
device_id = get_id_by_name(host_devices_service, device.get('name'))
host_devices_service.device_service(device_id).remove()
elif state == 'present' and device_name not in device_names:
updated = True
if not self._module.check_mode:
host_devices_service.add(
otypes.HostDevice(
name=device.get('name'),
)
)
return updated
def _get_role_mappings(module):
roleMappings = list()
for roleMapping in module.params['role_mappings']:
roleMappings.append(
otypes.RegistrationRoleMapping(
from_=otypes.Role(
name=roleMapping['source_name'],
) if roleMapping['source_name'] else None,
to=otypes.Role(
name=roleMapping['dest_name'],
) if roleMapping['dest_name'] else None,
)
)
return roleMappings
def _get_affinity_group_mappings(module):
affinityGroupMappings = list()
for affinityGroupMapping in module.params['affinity_group_mappings']:
affinityGroupMappings.append(
otypes.RegistrationAffinityGroupMapping(
from_=otypes.AffinityGroup(
name=affinityGroupMapping['source_name'],
) if affinityGroupMapping['source_name'] else None,
to=otypes.AffinityGroup(
name=affinityGroupMapping['dest_name'],
) if affinityGroupMapping['dest_name'] else None,
)
)
return affinityGroupMappings
def _get_affinity_label_mappings(module):
affinityLabelMappings = list()
for affinityLabelMapping in module.params['affinity_label_mappings']:
affinityLabelMappings.append(
otypes.RegistrationAffinityLabelMapping(
from_=otypes.AffinityLabel(
name=affinityLabelMapping['source_name'],
) if affinityLabelMapping['source_name'] else None,
to=otypes.AffinityLabel(
name=affinityLabelMapping['dest_name'],
) if affinityLabelMapping['dest_name'] else None,
)
)
return affinityLabelMappings
def _get_domain_mappings(module):
domainMappings = list()
for domainMapping in module.params['domain_mappings']:
domainMappings.append(
otypes.RegistrationDomainMapping(
from_=otypes.Domain(
name=domainMapping['source_name'],
) if domainMapping['source_name'] else None,
to=otypes.Domain(
name=domainMapping['dest_name'],
) if domainMapping['dest_name'] else None,
)
)
return domainMappings
def _get_lun_mappings(module):
lunMappings = list()
for lunMapping in module.params['lun_mappings']:
lunMappings.append(
otypes.RegistrationLunMapping(
from_=otypes.Disk(
lun_storage=otypes.HostStorage(
type=otypes.StorageType(lunMapping['source_storage_type'])
if (lunMapping['source_storage_type'] in
['iscsi', 'fcp']) else None,
logical_units=[
otypes.LogicalUnit(
id=lunMapping['source_logical_unit_id'],
)
],
),
) if lunMapping['source_logical_unit_id'] else None,
to=otypes.Disk(
lun_storage=otypes.HostStorage(
type=otypes.StorageType(lunMapping['dest_storage_type'])
if (lunMapping['dest_storage_type'] in
['iscsi', 'fcp']) else None,
logical_units=[
otypes.LogicalUnit(
id=lunMapping['dest_logical_unit_id'],
port=lunMapping['dest_logical_unit_port'],
portal=lunMapping['dest_logical_unit_portal'],
address=lunMapping['dest_logical_unit_address'],
target=lunMapping['dest_logical_unit_target'],
password=lunMapping['dest_logical_unit_password'],
username=lunMapping['dest_logical_unit_username'],
)
],
),
) if lunMapping['dest_logical_unit_id'] else None,
),
)
return lunMappings
def _get_cluster_mappings(module):
clusterMappings = list()
for clusterMapping in module.params['cluster_mappings']:
clusterMappings.append(
otypes.RegistrationClusterMapping(
from_=otypes.Cluster(
name=clusterMapping['source_name'],
),
to=otypes.Cluster(
name=clusterMapping['dest_name'],
) if clusterMapping['dest_name'] else None,
)
)
return clusterMappings
def _get_vnic_profile_mappings(module):
vnicProfileMappings = list()
for vnicProfileMapping in module.params['vnic_profile_mappings']:
vnicProfileMappings.append(
otypes.VnicProfileMapping(
source_network_name=vnicProfileMapping['source_network_name'],
source_network_profile_name=vnicProfileMapping['source_profile_name'],
target_vnic_profile=otypes.VnicProfile(
id=vnicProfileMapping['target_profile_id'],
) if vnicProfileMapping['target_profile_id'] else None,
)
)
return vnicProfileMappings
def import_vm(module, connection):
vms_service = connection.system_service().vms_service()
if search_by_name(vms_service, module.params['name']) is not None:
return False
events_service = connection.system_service().events_service()
last_event = events_service.list(max=1)[0]
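# Remember the latest event ID, so that only events raised after the
# import starts are considered below: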
external_type = [
tmp for tmp in ['kvm', 'xen', 'vmware']
if module.params[tmp] is not None
][0]
external_vm = module.params[external_type]
imports_service = connection.system_service().external_vm_imports_service()
imported_vm = imports_service.add(
otypes.ExternalVmImport(
vm=otypes.Vm(
name=module.params['name']
),
name=external_vm.get('name'),
username=external_vm.get('username', 'test'),
password=external_vm.get('password', 'test'),
provider=otypes.ExternalVmProviderType(external_type),
url=external_vm.get('url'),
cluster=otypes.Cluster(
name=module.params['cluster'],
) if module.params['cluster'] else None,
storage_domain=otypes.StorageDomain(
name=external_vm.get('storage_domain'),
) if external_vm.get('storage_domain') else None,
sparse=external_vm.get('sparse', True),
host=otypes.Host(
name=module.params['host'],
) if module.params['host'] else None,
)
)
# Wait until an event with code 1152 is raised for our VM:
vms_service = connection.system_service().vms_service()
wait(
service=vms_service.vm_service(imported_vm.vm.id),
condition=lambda vm: len([
event
for event in events_service.list(
from_=int(last_event.id),
search='type=1152 and vm.id=%s' % vm.id,
)
]) > 0 if vm is not None else False,
fail_condition=lambda vm: vm is None,
timeout=module.params['timeout'],
poll_interval=module.params['poll_interval'],
)
return True
def check_deprecated_params(module, connection):
if engine_supported(connection, '4.4') and \
(module.params.get('kernel_params_persist') is not None or
module.params.get('kernel_path') is not None or
module.params.get('initrd_path') is not None or
module.params.get('kernel_params') is not None):
module.warn("Parameters 'kernel_params_persist', 'kernel_path', 'initrd_path', 'kernel_params' are not supported since oVirt 4.4.")
def control_state(vm, vms_service, module):
if vm is None:
return
force = module.params['force']
state = module.params['state']
vm_service = vms_service.vm_service(vm.id)
if vm.status == otypes.VmStatus.IMAGE_LOCKED:
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.DOWN,
)
elif vm.status == otypes.VmStatus.SAVING_STATE:
# Result state is SUSPENDED, we should wait to be suspended:
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.SUSPENDED,
)
elif (
vm.status == otypes.VmStatus.UNASSIGNED or
vm.status == otypes.VmStatus.UNKNOWN
):
# Invalid states:
module.fail_json(msg="Not possible to control VM, if it's in '{0}' status".format(vm.status))
elif vm.status == otypes.VmStatus.POWERING_DOWN:
if (force and state == 'stopped') or state == 'absent':
vm_service.stop()
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.DOWN,
)
else:
# If the VM is powering down, wait for it to be DOWN or UP.
# The VM can end up in UP state when there is no guest agent
# or ACPI on the VM, or the shutdown operation crashed:
wait(
service=vm_service,
condition=lambda vm: vm.status in [otypes.VmStatus.DOWN, otypes.VmStatus.UP],
)
def main():
argument_spec = ovirt_full_argument_spec(
state=dict(type='str', default='present', choices=[
'absent', 'next_run', 'present', 'registered', 'running', 'stopped', 'suspended', 'exported', 'reboot'
]),
name=dict(type='str'),
id=dict(type='str'),
cluster=dict(type='str'),
allow_partial_import=dict(type='bool'),
template=dict(type='str'),
template_version=dict(type='int'),
use_latest_template_version=dict(type='bool'),
storage_domain=dict(type='str'),
disk_format=dict(type='str', default='cow', choices=['cow', 'raw']),
disks=dict(type='list', default=[]),
memory=dict(type='str'),
memory_guaranteed=dict(type='str'),
memory_max=dict(type='str'),
cpu_sockets=dict(type='int'),
cpu_cores=dict(type='int'),
cpu_shares=dict(type='int'),
cpu_threads=dict(type='int'),
type=dict(type='str', choices=['server', 'desktop', 'high_performance']),
operating_system=dict(type='str'),
cd_iso=dict(type='str'),
boot_devices=dict(type='list', choices=['cdrom', 'hd', 'network']),
vnic_profile_mappings=dict(default=[], type='list'),
cluster_mappings=dict(default=[], type='list'),
role_mappings=dict(default=[], type='list'),
affinity_group_mappings=dict(default=[], type='list'),
affinity_label_mappings=dict(default=[], type='list'),
lun_mappings=dict(default=[], type='list'),
domain_mappings=dict(default=[], type='list'),
reassign_bad_macs=dict(default=None, type='bool'),
boot_menu=dict(type='bool'),
serial_console=dict(type='bool'),
usb_support=dict(type='bool'),
sso=dict(type='bool'),
quota_id=dict(type='str'),
high_availability=dict(type='bool'),
high_availability_priority=dict(type='int'),
lease=dict(type='str'),
stateless=dict(type='bool'),
delete_protected=dict(type='bool'),
custom_emulated_machine=dict(type='str'),
force=dict(type='bool', default=False),
nics=dict(type='list', default=[]),
cloud_init=dict(type='dict'),
cloud_init_nics=dict(type='list', default=[]),
cloud_init_persist=dict(type='bool', default=False, aliases=['sysprep_persist']),
kernel_params_persist=dict(type='bool', default=False),
sysprep=dict(type='dict'),
host=dict(type='str'),
clone=dict(type='bool', default=False),
clone_permissions=dict(type='bool', default=False),
kernel_path=dict(type='str'),
initrd_path=dict(type='str'),
kernel_params=dict(type='str'),
instance_type=dict(type='str'),
description=dict(type='str'),
comment=dict(type='str'),
timezone=dict(type='str'),
serial_policy=dict(type='str', choices=['vm', 'host', 'custom']),
serial_policy_value=dict(type='str'),
vmware=dict(type='dict'),
xen=dict(type='dict'),
kvm=dict(type='dict'),
cpu_mode=dict(type='str'),
placement_policy=dict(type='str'),
custom_compatibility_version=dict(type='str'),
ticket=dict(type='bool', default=None),
cpu_pinning=dict(type='list'),
soundcard_enabled=dict(type='bool', default=None),
smartcard_enabled=dict(type='bool', default=None),
io_threads=dict(type='int', default=None),
ballooning_enabled=dict(type='bool', default=None),
rng_device=dict(type='str'),
numa_tune_mode=dict(type='str', choices=['interleave', 'preferred', 'strict']),
numa_nodes=dict(type='list', default=[]),
custom_properties=dict(type='list'),
watchdog=dict(type='dict'),
host_devices=dict(type='list'),
graphical_console=dict(type='dict'),
exclusive=dict(type='bool'),
export_domain=dict(default=None),
export_ova=dict(type='dict'),
force_migrate=dict(type='bool'),
migrate=dict(type='bool', default=None),
next_run=dict(type='bool'),
snapshot_name=dict(type='str'),
snapshot_vm=dict(type='str'),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
required_one_of=[['id', 'name']],
required_if=[
('state', 'registered', ['storage_domain']),
],
required_together=[['snapshot_name', 'snapshot_vm']]
)
check_sdk(module)
check_params(module)
try:
state = module.params['state']
auth = module.params.pop('auth')
connection = create_connection(auth)
check_deprecated_params(module, connection)
vms_service = connection.system_service().vms_service()
vms_module = VmsModule(
connection=connection,
module=module,
service=vms_service,
)
vm = vms_module.search_entity(list_params={'all_content': True})
# Boolean variable to mark if vm existed before module was executed
vm_existed = True if vm else False
control_state(vm, vms_service, module)
if state in ('present', 'running', 'next_run'):
if module.params['xen'] or module.params['kvm'] or module.params['vmware']:
vms_module.changed = import_vm(module, connection)
# In case of wait=false and state=running, wait for the VM to be created.
# In case the VM doesn't exist, wait for the VM to reach DOWN state;
# otherwise don't wait for any state, just update the VM:
ret = vms_module.create(
entity=vm,
result_state=otypes.VmStatus.DOWN if vm is None else None,
update_params={'next_run': module.params['next_run']} if module.params['next_run'] is not None else None,
clone=module.params['clone'],
clone_permissions=module.params['clone_permissions'],
_wait=True if not module.params['wait'] and state == 'running' else module.params['wait'],
)
# If VM is going to be created and check_mode is on, return now:
if module.check_mode and ret.get('id') is None:
module.exit_json(**ret)
vms_module.post_present(ret['id'])
# Run the VM if it was just created, else don't run it:
if state == 'running':
def kernel_persist_check():
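# Note: 'and' binds tighter than 'or' here, so 'not cloud_init_persist'
# only qualifies the kernel_path check.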
return (module.params.get('kernel_params') or
module.params.get('initrd_path') or
module.params.get('kernel_path')
and not module.params.get('cloud_init_persist'))
initialization = vms_module.get_initialization()
ret = vms_module.action(
action='start',
post_action=vms_module._post_start_action,
action_condition=lambda vm: (
vm.status not in [
otypes.VmStatus.MIGRATING,
otypes.VmStatus.POWERING_UP,
otypes.VmStatus.REBOOT_IN_PROGRESS,
otypes.VmStatus.WAIT_FOR_LAUNCH,
otypes.VmStatus.UP,
otypes.VmStatus.RESTORING_STATE,
]
),
wait_condition=lambda vm: vm.status == otypes.VmStatus.UP,
# Start action kwargs:
use_cloud_init=True if not module.params.get('cloud_init_persist') and module.params.get('cloud_init') else None,
use_sysprep=True if not module.params.get('cloud_init_persist') and module.params.get('sysprep') else None,
vm=otypes.Vm(
placement_policy=otypes.VmPlacementPolicy(
hosts=[otypes.Host(name=module.params['host'])]
) if module.params['host'] else None,
initialization=initialization,
os=otypes.OperatingSystem(
cmdline=module.params.get('kernel_params'),
initrd=module.params.get('initrd_path'),
kernel=module.params.get('kernel_path'),
) if (kernel_persist_check()) else None,
) if (
kernel_persist_check() or
module.params.get('host') or
initialization is not None
and not module.params.get('cloud_init_persist')
) else None,
)
if module.params['ticket']:
vm_service = vms_service.vm_service(ret['id'])
graphics_consoles_service = vm_service.graphics_consoles_service()
graphics_console = graphics_consoles_service.list()[0]
console_service = graphics_consoles_service.console_service(graphics_console.id)
ticket = console_service.remote_viewer_connection_file()
if ticket:
ret['vm']['remote_vv_file'] = ticket
if state == 'next_run':
# Apply next run configuration, if needed:
vm = vms_service.vm_service(ret['id']).get()
if vm.next_run_configuration_exists:
ret = vms_module.action(
action='reboot',
entity=vm,
action_condition=lambda vm: vm.status == otypes.VmStatus.UP,
wait_condition=lambda vm: vm.status == otypes.VmStatus.UP,
)
# Allow migrating the VM when state is present.
if vm_existed:
vms_module._migrate_vm(vm)
ret['changed'] = vms_module.changed
elif state == 'stopped':
if module.params['xen'] or module.params['kvm'] or module.params['vmware']:
vms_module.changed = import_vm(module, connection)
ret = vms_module.create(
entity=vm,
result_state=otypes.VmStatus.DOWN if vm is None else None,
clone=module.params['clone'],
clone_permissions=module.params['clone_permissions'],
)
if module.params['force']:
ret = vms_module.action(
action='stop',
action_condition=lambda vm: vm.status != otypes.VmStatus.DOWN,
wait_condition=vms_module.wait_for_down,
)
else:
ret = vms_module.action(
action='shutdown',
pre_action=vms_module._pre_shutdown_action,
action_condition=lambda vm: vm.status != otypes.VmStatus.DOWN,
wait_condition=vms_module.wait_for_down,
)
vms_module.post_present(ret['id'])
elif state == 'suspended':
ret = vms_module.create(
entity=vm,
result_state=otypes.VmStatus.DOWN if vm is None else None,
clone=module.params['clone'],
clone_permissions=module.params['clone_permissions'],
)
vms_module.post_present(ret['id'])
ret = vms_module.action(
action='suspend',
pre_action=vms_module._pre_suspend_action,
action_condition=lambda vm: vm.status != otypes.VmStatus.SUSPENDED,
wait_condition=lambda vm: vm.status == otypes.VmStatus.SUSPENDED,
)
elif state == 'absent':
ret = vms_module.remove()
elif state == 'registered':
storage_domains_service = connection.system_service().storage_domains_service()
# Find the storage domain with unregistered VM:
sd_id = get_id_by_name(storage_domains_service, module.params['storage_domain'])
storage_domain_service = storage_domains_service.storage_domain_service(sd_id)
vms_service = storage_domain_service.vms_service()
# Find the unregistered VM we want to register:
vms = vms_service.list(unregistered=True)
vm = next(
(vm for vm in vms if (vm.id == module.params['id'] or vm.name == module.params['name'])),
None
)
changed = False
if vm is None:
vm = vms_module.search_entity()
if vm is None:
raise ValueError(
"VM '%s(%s)' wasn't found." % (module.params['name'], module.params['id'])
)
else:
# Register the vm into the system:
changed = True
vm_service = vms_service.vm_service(vm.id)
vm_service.register(
allow_partial_import=module.params['allow_partial_import'],
cluster=otypes.Cluster(
name=module.params['cluster']
) if module.params['cluster'] else None,
vnic_profile_mappings=_get_vnic_profile_mappings(module)
if module.params['vnic_profile_mappings'] else None,
reassign_bad_macs=module.params['reassign_bad_macs']
if module.params['reassign_bad_macs'] is not None else None,
registration_configuration=otypes.RegistrationConfiguration(
cluster_mappings=_get_cluster_mappings(module),
role_mappings=_get_role_mappings(module),
domain_mappings=_get_domain_mappings(module),
lun_mappings=_get_lun_mappings(module),
affinity_group_mappings=_get_affinity_group_mappings(module),
affinity_label_mappings=_get_affinity_label_mappings(module),
) if (module.params['cluster_mappings']
or module.params['role_mappings']
or module.params['domain_mappings']
or module.params['lun_mappings']
or module.params['affinity_group_mappings']
or module.params['affinity_label_mappings']) else None
)
if module.params['wait']:
vm = vms_module.wait_for_import()
else:
# Fetch the VM to initialize the return value.
vm = vm_service.get()
ret = {
'changed': changed,
'id': vm.id,
'vm': get_dict_of_struct(vm)
}
elif state == 'exported':
if module.params['export_domain']:
export_service = vms_module._get_export_domain_service()
export_vm = search_by_attributes(export_service.vms_service(), id=vm.id)
ret = vms_module.action(
entity=vm,
action='export',
action_condition=lambda t: export_vm is None or module.params['exclusive'],
wait_condition=lambda t: t is not None,
post_action=vms_module.post_export_action,
storage_domain=otypes.StorageDomain(id=export_service.get().id),
exclusive=module.params['exclusive'],
)
elif module.params['export_ova']:
export_vm = module.params['export_ova']
ret = vms_module.action(
entity=vm,
action='export_to_path_on_host',
host=otypes.Host(name=export_vm.get('host')),
directory=export_vm.get('directory'),
filename=export_vm.get('filename'),
)
elif state == 'reboot':
ret = vms_module.action(
action='reboot',
entity=vm,
action_condition=lambda vm: vm.status == otypes.VmStatus.UP,
wait_condition=lambda vm: vm.status == otypes.VmStatus.UP,
)
module.exit_json(**ret)
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
finally:
connection.close(logout=auth.get('token') is None)
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,983 |
junos_facts in 2.9rc1 throws exception. Works in 2.8
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Ansible throws an exception when calling the junos_facts module
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
junos_facts (built-in network module)
##### ANSIBLE VERSION
```
$ ansible --version
ansible 2.9.0rc1
config file = /Users/matucker/junos_collection_test/ansible.cfg
configured module search path = ['/Users/matucker/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/matucker/junos_collection_test/venv/lib/python3.7/site-packages/ansible
executable location = /Users/matucker/junos_collection_test/venv/bin/ansible
python version = 3.7.4 (default, Jul 9 2019, 18:13:23) [Clang 10.0.1 (clang-1001.0.46.4)]
```
##### CONFIGURATION
```paste below
$ ansible-config dump --only-changed
DEFAULT_HOST_LIST(/Users/matucker/junos_collection_test/ansible.cfg) = ['/Users/matucker/junos_collection_test/hosts']
HOST_KEY_CHECKING(/Users/matucker/junos_collection_test/ansible.cfg) = False
```
##### OS / ENVIRONMENT
```
lab@alpsgagipxfw02-node0> show version
node0:
--------------------------------------------------------------------------
Hostname: lavender
Model: srx5800
Junos: 15.1X49-D180.2
AI-Scripts [6.0R1.1]
JUNOS Software Release [15.1X49-D180.2]
```
##### STEPS TO REPRODUCE
Install Ansible 2.9rc1
Inventory:
```ini
[junos]
lavender.ultralab.juniper.net
[junos:vars]
ansible_connection=netconf
ansible_network_os=junos
ansible_user=labroot
ansible_password=lab123
```
Playbook:
```yaml
---
- name: Playbook Test
hosts: junos
tasks:
- name: built in module test
junos_facts:
```
##### EXPECTED RESULTS
```
PLAY [Playbook Test] **************************************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************************************
ok: [lavender.ultralab.juniper.net]
TASK [built in module test] *******************************************************************************************************************************************
ok: [lavender.ultralab.juniper.net]
PLAY RECAP ************************************************************************************************************************************************************
lavender.ultralab.juniper.net : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
```
PLAY [Playbook Test] **************************************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************************************
[WARNING]: Ignoring timeout(10) for junos_facts
fatal: [lavender.ultralab.juniper.net]: FAILED! => {"ansible_facts": {}, "changed": false, "failed_modules": {"junos_facts": {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "exception": "Traceback (most recent call last):\n File \"/Users/matucker/.ansible/tmp/ansible-local-77560craccig5/ansible-tmp-1569878043.2963982-176353774268055/AnsiballZ_junos_facts.py\", line 102, in <module>\n _ansiballz_main()\n File \"/Users/matucker/.ansible/tmp/ansible-local-77560craccig5/ansible-tmp-1569878043.2963982-176353774268055/AnsiballZ_junos_facts.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/Users/matucker/.ansible/tmp/ansible-local-77560craccig5/ansible-tmp-1569878043.2963982-176353774268055/AnsiballZ_junos_facts.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.network.junos.junos_facts', init_globals=None, run_name='__main__', alter_sys=False)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 180, in run_module\n fname, loader, pkg_name)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 72, in _run_code\n exec code in run_globals\n File \"/var/folders/04/g24gg21176zflhxqzl_82y9c0000gn/T/ansible_junos_facts_payload_LFmG2F/ansible_junos_facts_payload.zip/ansible/modules/network/junos/junos_facts.py\", line 135, in <module>\n File \"/var/folders/04/g24gg21176zflhxqzl_82y9c0000gn/T/ansible_junos_facts_payload_LFmG2F/ansible_junos_facts_payload.zip/ansible/modules/network/junos/junos_facts.py\", line 126, in main\n File \"/var/folders/04/g24gg21176zflhxqzl_82y9c0000gn/T/ansible_junos_facts_payload_LFmG2F/ansible_junos_facts_payload.zip/ansible/module_utils/network/junos/facts/facts.py\", line 78, in get_facts\n File \"/var/folders/04/g24gg21176zflhxqzl_82y9c0000gn/T/ansible_junos_facts_payload_LFmG2F/ansible_junos_facts_payload.zip/ansible/module_utils/network/common/facts/facts.py\", line 124, in get_network_legacy_facts\n File \"/var/folders/04/g24gg21176zflhxqzl_82y9c0000gn/T/ansible_junos_facts_payload_LFmG2F/ansible_junos_facts_payload.zip/ansible/module_utils/network/junos/facts/legacy/base.py\", line 97, in populate\n File \"/var/folders/04/g24gg21176zflhxqzl_82y9c0000gn/T/ansible_junos_facts_payload_LFmG2F/ansible_junos_facts_payload.zip/ansible/module_utils/network/junos/facts/legacy/base.py\", line 54, in get_text\nUnicodeEncodeError: 'ascii' codec can't encode characters in position 450034-450041: ordinal not in range(128)\n", "failed": true, "module_stderr": "Traceback (most recent call last):\n File \"/Users/matucker/.ansible/tmp/ansible-local-77560craccig5/ansible-tmp-1569878043.2963982-176353774268055/AnsiballZ_junos_facts.py\", line 102, in <module>\n _ansiballz_main()\n File \"/Users/matucker/.ansible/tmp/ansible-local-77560craccig5/ansible-tmp-1569878043.2963982-176353774268055/AnsiballZ_junos_facts.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/Users/matucker/.ansible/tmp/ansible-local-77560craccig5/ansible-tmp-1569878043.2963982-176353774268055/AnsiballZ_junos_facts.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.network.junos.junos_facts', init_globals=None, run_name='__main__', alter_sys=False)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 180, in run_module\n fname, loader, pkg_name)\n File 
\"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 72, in _run_code\n exec code in run_globals\n File \"/var/folders/04/g24gg21176zflhxqzl_82y9c0000gn/T/ansible_junos_facts_payload_LFmG2F/ansible_junos_facts_payload.zip/ansible/modules/network/junos/junos_facts.py\", line 135, in <module>\n File \"/var/folders/04/g24gg21176zflhxqzl_82y9c0000gn/T/ansible_junos_facts_payload_LFmG2F/ansible_junos_facts_payload.zip/ansible/modules/network/junos/junos_facts.py\", line 126, in main\n File \"/var/folders/04/g24gg21176zflhxqzl_82y9c0000gn/T/ansible_junos_facts_payload_LFmG2F/ansible_junos_facts_payload.zip/ansible/module_utils/network/junos/facts/facts.py\", line 78, in get_facts\n File \"/var/folders/04/g24gg21176zflhxqzl_82y9c0000gn/T/ansible_junos_facts_payload_LFmG2F/ansible_junos_facts_payload.zip/ansible/module_utils/network/common/facts/facts.py\", line 124, in get_network_legacy_facts\n File \"/var/folders/04/g24gg21176zflhxqzl_82y9c0000gn/T/ansible_junos_facts_payload_LFmG2F/ansible_junos_facts_payload.zip/ansible/module_utils/network/junos/facts/legacy/base.py\", line 97, in populate\n File \"/var/folders/04/g24gg21176zflhxqzl_82y9c0000gn/T/ansible_junos_facts_payload_LFmG2F/ansible_junos_facts_payload.zip/ansible/module_utils/network/junos/facts/legacy/base.py\", line 54, in get_text\nUnicodeEncodeError: 'ascii' codec can't encode characters in position 450034-450041: ordinal not in range(128)\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1, "warnings": ["Platform darwin on host lavender.ultralab.juniper.net is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information."]}}, "msg": "The following modules failed to execute: junos_facts\n"}
PLAY RECAP ************************************************************************************************************************************************************
lavender.ultralab.juniper.net : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
```
Traceback (most recent call last):
File "/Users/matucker/.ansible/tmp/ansible-local-77560craccig5/ansible-tmp-1569878043.2963982-176353774268055/Ansibal
lZ_junos_facts.py", line 102, in <module>
_ansiballz_main()
File "/Users/matucker/.ansible/tmp/ansible-local-77560craccig5/ansible-tmp-1569878043.2963982-176353774268055/Ansibal
lZ_junos_facts.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/Users/matucker/.ansible/tmp/ansible-local-77560craccig5/ansible-tmp-1569878043.2963982-176353774268055/Ansibal
lZ_junos_facts.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.network.junos.junos_facts', init_globals=None, run_name='__main__', alte
r_sys=False)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 180, in run_module
fname, loader, pkg_name)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/var/folders/04/g24gg21176zflhxqzl_82y9c0000gn/T/ansible_junos_facts_payload_LFmG2F/ansible_junos_facts_payload
.zip/ansible/modules/network/junos/junos_facts.py", line 135, in <module>
File "/var/folders/04/g24gg21176zflhxqzl_82y9c0000gn/T/ansible_junos_facts_payload_LFmG2F/ansible_junos_facts_payload
.zip/ansible/modules/network/junos/junos_facts.py", line 126, in main
File "/var/folders/04/g24gg21176zflhxqzl_82y9c0000gn/T/ansible_junos_facts_payload_LFmG2F/ansible_junos_facts_payload
.zip/ansible/module_utils/network/junos/facts/facts.py", line 78, in get_facts
File "/var/folders/04/g24gg21176zflhxqzl_82y9c0000gn/T/ansible_junos_facts_payload_LFmG2F/ansible_junos_facts_payload
.zip/ansible/module_utils/network/common/facts/facts.py", line 124, in get_network_legacy_facts
File "/var/folders/04/g24gg21176zflhxqzl_82y9c0000gn/T/ansible_junos_facts_payload_LFmG2F/ansible_junos_facts_payload
.zip/ansible/module_utils/network/junos/facts/legacy/base.py", line 97, in populate
File "/var/folders/04/g24gg21176zflhxqzl_82y9c0000gn/T/ansible_junos_facts_payload_LFmG2F/ansible_junos_facts_payload
.zip/ansible/module_utils/network/junos/facts/legacy/base.py", line 54, in get_text
UnicodeEncodeError: 'ascii' codec can't encode characters in position 450034-450041: ordinal not in range(128)
```
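The final frame is `get_text` in `module_utils/network/junos/facts/legacy/base.py`, where `str()` on a Python 2 `unicode` value implicitly encodes to ASCII and fails on the non-ASCII bytes in the RPC reply. A minimal sketch of the kind of change that resolves this — assuming `to_text` from `ansible.module_utils._text` with its `surrogate_then_replace` error strategy; the actual patch in the linked PR may differ in detail:
```python
# Hedged sketch: decode explicitly instead of calling str(), so non-ASCII
# characters in the reply no longer raise UnicodeEncodeError on Python 2.
from ansible.module_utils._text import to_text

def get_text(self, ele, tag):
    try:
        # surrogate_then_replace substitutes undecodable bytes rather than raising
        return to_text(ele.find(tag).text, errors='surrogate_then_replace').strip()
    except AttributeError:
        pass
```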
|
https://github.com/ansible/ansible/issues/62983
|
https://github.com/ansible/ansible/pull/63046
|
71bcce5db50ba6829b7759c2686e1090424ac6a4
|
d336a14da6c02d41add2c6210dacca87babd501f
| 2019-09-30T21:18:20Z |
python
| 2019-10-09T13:56:27Z |
lib/ansible/module_utils/network/junos/facts/legacy/base.py
|
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""
The junos interfaces fact class
It is in this file the configuration is collected from the device
for a given resource, parsed, and the facts tree is populated
based on the configuration.
"""
import platform
from ansible.module_utils.network.common.netconf import exec_rpc
from ansible.module_utils.network.junos.junos import get_param, tostring
from ansible.module_utils.network.junos.junos import get_configuration, get_capabilities
from ansible.module_utils._text import to_native
try:
from lxml.etree import Element, SubElement
except ImportError:
from xml.etree.ElementTree import Element, SubElement
try:
from jnpr.junos import Device
from jnpr.junos.exception import ConnectError
HAS_PYEZ = True
except ImportError:
HAS_PYEZ = False
class FactsBase(object):
def __init__(self, module):
self.module = module
self.facts = dict()
self.warnings = []
def populate(self):
raise NotImplementedError
def cli(self, command):
# run the CLI command via Junos's NETCONF <command> rpc; the original code
# called the 'command' argument as a function, shadowing itself and failing
cmd_ele = Element('command')
cmd_ele.text = command
reply = exec_rpc(self.module, tostring(cmd_ele))
output = reply.find('.//output')
# explicit None check: lxml elements with no children evaluate as falsy
if output is None:
self.module.fail_json(msg='failed to retrieve facts for command %s' % command)
return str(output.text).strip()
def rpc(self, rpc):
return exec_rpc(self.module, tostring(Element(rpc)))
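# NOTE: str() below implicitly encodes to ASCII on Python 2 and is the source
# of the UnicodeEncodeError in this report; see the sketch above for a fix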
def get_text(self, ele, tag):
try:
return str(ele.find(tag).text).strip()
except AttributeError:
pass
class Default(FactsBase):
def populate(self):
self.facts.update(self.platform_facts())
reply = self.rpc('get-chassis-inventory')
data = reply.find('.//chassis-inventory/chassis')
self.facts['serialnum'] = self.get_text(data, 'serial-number')
def platform_facts(self):
platform_facts = {}
resp = get_capabilities(self.module)
device_info = resp['device_info']
platform_facts['system'] = device_info['network_os']
for item in ('model', 'image', 'version', 'platform', 'hostname'):
val = device_info.get('network_os_%s' % item)
if val:
platform_facts[item] = val
platform_facts['api'] = resp['network_api']
platform_facts['python_version'] = platform.python_version()
return platform_facts
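# Illustrative shape of the returned dict (editor's sketch, values taken from
# the SRX in the report above; the python_version value is assumed):
# {'system': 'junos', 'model': 'srx5800', 'version': '15.1X49-D180.2',
# 'hostname': 'lavender', 'api': 'netconf', 'python_version': '2.7.16'}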
class Config(FactsBase):
def populate(self):
config_format = self.module.params['config_format']
reply = get_configuration(self.module, format=config_format)
if config_format == 'xml':
config = tostring(reply.find('configuration')).strip()
elif config_format == 'text':
config = self.get_text(reply, 'configuration-text')
elif config_format == 'json':
config = self.module.from_json(reply.text.strip())
elif config_format == 'set':
config = self.get_text(reply, 'configuration-set')
self.facts['config'] = config
class Hardware(FactsBase):
def populate(self):
reply = self.rpc('get-system-memory-information')
data = reply.find('.//system-memory-information/system-memory-summary-information')
self.facts.update({
'memfree_mb': int(self.get_text(data, 'system-memory-free')),
'memtotal_mb': int(self.get_text(data, 'system-memory-total'))
})
reply = self.rpc('get-system-storage')
data = reply.find('.//system-storage-information')
filesystems = list()
for obj in data:
filesystems.append(self.get_text(obj, 'filesystem-name'))
self.facts['filesystems'] = filesystems
reply = self.rpc('get-route-engine-information')
data = reply.find('.//route-engine-information')
routing_engines = dict()
for obj in data:
slot = self.get_text(obj, 'slot')
routing_engines.update({slot: {}})
routing_engines[slot].update({'slot': slot})
for child in obj:
if child.text != "\n":
routing_engines[slot].update({child.tag.replace("-", "_"): child.text})
self.facts['routing_engines'] = routing_engines
if len(data) > 1:
self.facts['has_2RE'] = True
else:
self.facts['has_2RE'] = False
reply = self.rpc('get-chassis-inventory')
data = reply.findall('.//chassis-module')
modules = list()
for obj in data:
mod = dict()
for child in obj:
if child.text != "\n":
mod.update({child.tag.replace("-", "_"): child.text})
modules.append(mod)
self.facts['modules'] = modules
class Interfaces(FactsBase):
def populate(self):
ele = Element('get-interface-information')
SubElement(ele, 'detail')
reply = exec_rpc(self.module, tostring(ele))
interfaces = {}
for item in reply[0]:
name = self.get_text(item, 'name')
obj = {
'oper-status': self.get_text(item, 'oper-status'),
'admin-status': self.get_text(item, 'admin-status'),
'speed': self.get_text(item, 'speed'),
'macaddress': self.get_text(item, 'hardware-physical-address'),
'mtu': self.get_text(item, 'mtu'),
'type': self.get_text(item, 'if-type'),
}
interfaces[name] = obj
self.facts['interfaces'] = interfaces
class OFacts(FactsBase):
def _connect(self, module):
host = get_param(module, 'host')
kwargs = {
'port': get_param(module, 'port') or 830,
'user': get_param(module, 'username')
}
if get_param(module, 'password'):
kwargs['passwd'] = get_param(module, 'password')
if get_param(module, 'ssh_keyfile'):
kwargs['ssh_private_key_file'] = get_param(module, 'ssh_keyfile')
kwargs['gather_facts'] = False
try:
device = Device(host, **kwargs)
device.open()
device.timeout = get_param(module, 'timeout') or 10
except ConnectError as exc:
module.fail_json('unable to connect to %s: %s' % (host, to_native(exc)))
return device
def populate(self):
device = self._connect(self.module)
facts = dict(device.facts)
if '2RE' in facts:
facts['has_2RE'] = facts['2RE']
del facts['2RE']
facts['version_info'] = dict(facts['version_info'])
if 'junos_info' in facts:
for key, value in facts['junos_info'].items():
if 'object' in value:
value['object'] = dict(value['object'])
return facts
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,310 |
ConfigManager envvar decode failure is fatal
|
##### SUMMARY
When ConfigManager is decoding config envvar values, if the value is not decodable based on current LC_ALL/LANG settings, the error is fatal.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ConfigManager
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9.0rc3
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
NA
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
04:42 + LC_ALL=C
04:42 + LANG=C
04:42 + ansible-playbook test_connection.yml -i ../connection_chroot/test_connection.inventory -e target_hosts=chroot -e action_prefix= -e local_tmp=/tmp/ansible-local -e remote_tmp=/tmp/ansible-remote -vvvvvv
04:42 Unhandled error:
04:42 Traceback (most recent call last):
04:42 File "/root/ansible/lib/ansible/config/manager.py", line 550, in update_config_data
04:42 value, origin = self.get_config_value_and_origin(config, configfile)
04:42 File "/root/ansible/lib/ansible/config/manager.py", line 453, in get_config_value_and_origin
04:42 value, origin = self._loop_entries(py3compat.environ, defs[config]['env'])
04:42 File "/root/ansible/lib/ansible/config/manager.py", line 393, in _loop_entries
04:42 temp_value = container.get(name, None)
04:42 File "/usr/lib64/python2.7/_abcoll.py", line 363, in get
04:42 return self[key]
04:42 File "/root/ansible/lib/ansible/utils/py3compat.py", line 50, in __getitem__
04:42 nonstring='passthru', errors='surrogate_or_strict')
04:42 File "/root/ansible/lib/ansible/module_utils/_text.py", line 235, in to_text
04:42 return obj.decode(encoding, errors)
04:42 UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 69: ordinal not in range(128)
04:42
04:42
04:42 Traceback (most recent call last):
04:42 File "/root/ansible/bin/ansible-playbook", line 62, in <module>
04:42 import ansible.constants as C
04:42 File "/root/ansible/lib/ansible/constants.py", line 174, in <module>
04:42 config = ConfigManager()
04:42 File "/root/ansible/lib/ansible/config/manager.py", line 290, in __init__
04:42 self.update_config_data()
04:42 File "/root/ansible/lib/ansible/config/manager.py", line 562, in update_config_data
04:42 raise AnsibleError("Invalid settings supplied for %s: %s\n" % (config, to_native(e)), orig_exc=e)
04:42 ansible.errors.AnsibleError: Invalid settings supplied for PLAYBOOK_DIR: 'ascii' codec can't decode byte 0xc3 in position 69: ordinal not in range(128)
```
|
https://github.com/ansible/ansible/issues/63310
|
https://github.com/ansible/ansible/pull/63311
|
9d014778adcdbbfeca6531c2c4db4121e362c8d2
|
77de663879bcf2f6ab82d5009b8a0b799814750b
| 2019-10-09T23:17:02Z |
python
| 2019-10-10T00:08:29Z |
changelogs/fragments/config_encoding_resilience.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,310 |
ConfigManager envvar decode failure is fatal
|
##### SUMMARY
When ConfigManager is decoding config envvar values, if the value is not decodable based on current LC_ALL/LANG settings, the error is fatal.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ConfigManager
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9.0rc3
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
NA
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
04:42 + LC_ALL=C
04:42 + LANG=C
04:42 + ansible-playbook test_connection.yml -i ../connection_chroot/test_connection.inventory -e target_hosts=chroot -e action_prefix= -e local_tmp=/tmp/ansible-local -e remote_tmp=/tmp/ansible-remote -vvvvvv
04:42 Unhandled error:
04:42 Traceback (most recent call last):
04:42 File "/root/ansible/lib/ansible/config/manager.py", line 550, in update_config_data
04:42 value, origin = self.get_config_value_and_origin(config, configfile)
04:42 File "/root/ansible/lib/ansible/config/manager.py", line 453, in get_config_value_and_origin
04:42 value, origin = self._loop_entries(py3compat.environ, defs[config]['env'])
04:42 File "/root/ansible/lib/ansible/config/manager.py", line 393, in _loop_entries
04:42 temp_value = container.get(name, None)
04:42 File "/usr/lib64/python2.7/_abcoll.py", line 363, in get
04:42 return self[key]
04:42 File "/root/ansible/lib/ansible/utils/py3compat.py", line 50, in __getitem__
04:42 nonstring='passthru', errors='surrogate_or_strict')
04:42 File "/root/ansible/lib/ansible/module_utils/_text.py", line 235, in to_text
04:42 return obj.decode(encoding, errors)
04:42 UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 69: ordinal not in range(128)
04:42
04:42
04:42 Traceback (most recent call last):
04:42 File "/root/ansible/bin/ansible-playbook", line 62, in <module>
04:42 import ansible.constants as C
04:42 File "/root/ansible/lib/ansible/constants.py", line 174, in <module>
04:42 config = ConfigManager()
04:42 File "/root/ansible/lib/ansible/config/manager.py", line 290, in __init__
04:42 self.update_config_data()
04:42 File "/root/ansible/lib/ansible/config/manager.py", line 562, in update_config_data
04:42 raise AnsibleError("Invalid settings supplied for %s: %s\n" % (config, to_native(e)), orig_exc=e)
04:42 ansible.errors.AnsibleError: Invalid settings supplied for PLAYBOOK_DIR: 'ascii' codec can't decode byte 0xc3 in position 69: ordinal not in range(128)
```
|
https://github.com/ansible/ansible/issues/63310
|
https://github.com/ansible/ansible/pull/63311
|
9d014778adcdbbfeca6531c2c4db4121e362c8d2
|
77de663879bcf2f6ab82d5009b8a0b799814750b
| 2019-10-09T23:17:02Z |
python
| 2019-10-10T00:08:29Z |
lib/ansible/config/manager.py
|
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import atexit
import io
import os
import os.path
import sys
import stat
import tempfile
import traceback
from collections import namedtuple
from yaml import load as yaml_load
try:
# use C version if possible for speedup
from yaml import CSafeLoader as SafeLoader
except ImportError:
from yaml import SafeLoader
from ansible.config.data import ConfigData
from ansible.errors import AnsibleOptionsError, AnsibleError
from ansible.module_utils._text import to_text, to_bytes, to_native
from ansible.module_utils.common._collections_compat import Sequence
from ansible.module_utils.six import PY3, string_types
from ansible.module_utils.six.moves import configparser
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.parsing.quoting import unquote
from ansible.utils import py3compat
from ansible.utils.path import cleanup_tmp_file, makedirs_safe, unfrackpath
Plugin = namedtuple('Plugin', 'name type')
Setting = namedtuple('Setting', 'name value origin type')
INTERNAL_DEFS = {'lookup': ('_terms',)}
def _get_entry(plugin_type, plugin_name, config):
''' construct entry for requested config '''
entry = ''
if plugin_type:
entry += 'plugin_type: %s ' % plugin_type
if plugin_name:
entry += 'plugin: %s ' % plugin_name
entry += 'setting: %s ' % config
return entry
# FIXME: see if we can unify in module_utils with similar function used by argspec
def ensure_type(value, value_type, origin=None):
''' return a configuration variable with casting
:arg value: The value to ensure correct typing of
:kwarg value_type: The type of the value. This can be any of the following strings:
:boolean: sets the value to a True or False value
:bool: Same as 'boolean'
:integer: Sets the value to an integer or raises a ValueError
:int: Same as 'integer'
:float: Sets the value to a float or raises a ValueError
:list: Treats the value as a comma separated list. Split the value
and return it as a python list.
:none: Sets the value to None
:path: Expands any environment variables and tildes in the value.
:tmppath: Create a unique temporary directory inside of the directory
specified by value and return its path.
:temppath: Same as 'tmppath'
:tmp: Same as 'tmppath'
:pathlist: Treat the value as a typical PATH string. (On POSIX, this
means colon separated strings.) Split the value and then expand
each part for environment variables and tildes.
:pathspec: Treat the value as a PATH string. Expands any environment variables
and tildes in the value.
:str: Sets the value to string types.
:string: Same as 'str'
'''
errmsg = ''
basedir = None
if origin and os.path.isabs(origin) and os.path.exists(to_bytes(origin)):
basedir = origin
if value_type:
value_type = value_type.lower()
if value is not None:
if value_type in ('boolean', 'bool'):
value = boolean(value, strict=False)
elif value_type in ('integer', 'int'):
value = int(value)
elif value_type == 'float':
value = float(value)
elif value_type == 'list':
if isinstance(value, string_types):
value = [x.strip() for x in value.split(',')]
elif not isinstance(value, Sequence):
errmsg = 'list'
elif value_type == 'none':
if value == "None":
value = None
if value is not None:
errmsg = 'None'
elif value_type == 'path':
if isinstance(value, string_types):
value = resolve_path(value, basedir=basedir)
else:
errmsg = 'path'
elif value_type in ('tmp', 'temppath', 'tmppath'):
if isinstance(value, string_types):
value = resolve_path(value, basedir=basedir)
if not os.path.exists(value):
makedirs_safe(value, 0o700)
prefix = 'ansible-local-%s' % os.getpid()
value = tempfile.mkdtemp(prefix=prefix, dir=value)
atexit.register(cleanup_tmp_file, value, warn=True)
else:
errmsg = 'temppath'
elif value_type == 'pathspec':
if isinstance(value, string_types):
value = value.split(os.pathsep)
if isinstance(value, Sequence):
value = [resolve_path(x, basedir=basedir) for x in value]
else:
errmsg = 'pathspec'
elif value_type == 'pathlist':
if isinstance(value, string_types):
value = value.split(',')
if isinstance(value, Sequence):
value = [resolve_path(x, basedir=basedir) for x in value]
else:
errmsg = 'pathlist'
elif value_type in ('str', 'string'):
if isinstance(value, string_types):
value = unquote(to_text(value, errors='surrogate_or_strict'))
else:
errmsg = 'string'
# defaults to string type
elif isinstance(value, string_types):
value = unquote(value)
if errmsg:
raise ValueError('Invalid type provided for "%s": %s' % (errmsg, to_native(value)))
return to_text(value, errors='surrogate_or_strict', nonstring='passthru')
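# Editor's sketch of typical conversions handled above (illustrative values):
# ensure_type('yes', 'bool') -> True
# ensure_type('a, b', 'list') -> ['a', 'b']
# ensure_type('~/cfg', 'path') -> expanded absolute path
# ensure_type('p1:p2', 'pathspec') -> ['p1', 'p2'] on POSIX (split on os.pathsep)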
# FIXME: see if this can live in utils/path
def resolve_path(path, basedir=None):
''' resolve relative or 'variable' paths '''
if '{{CWD}}' in path: # allow users to force CWD using 'magic' {{CWD}}
path = path.replace('{{CWD}}', os.getcwd())
return unfrackpath(path, follow=False, basedir=basedir)
# FIXME: generic file type?
def get_config_type(cfile):
ftype = None
if cfile is not None:
ext = os.path.splitext(cfile)[-1]
if ext in ('.ini', '.cfg'):
ftype = 'ini'
elif ext in ('.yaml', '.yml'):
ftype = 'yaml'
else:
raise AnsibleOptionsError("Unsupported configuration file extension for %s: %s" % (cfile, to_native(ext)))
return ftype
# FIXME: can move to module_utils for use for ini plugins also?
def get_ini_config_value(p, entry):
''' returns the value of last ini entry found '''
value = None
if p is not None:
try:
value = p.get(entry.get('section', 'defaults'), entry.get('key', ''), raw=True)
except Exception: # FIXME: actually report issues here
pass
return value
def find_ini_config_file(warnings=None):
''' Load INI Config File order(first found is used): ENV, CWD, HOME, /etc/ansible '''
# FIXME: eventually deprecate ini configs
if warnings is None:
# Note: In this case, warnings does nothing
warnings = set()
# A value that can never be a valid path so that we can tell if ANSIBLE_CONFIG was set later
# We can't use None because we could set path to None.
SENTINEL = object
potential_paths = []
# Environment setting
path_from_env = os.getenv("ANSIBLE_CONFIG", SENTINEL)
if path_from_env is not SENTINEL:
path_from_env = unfrackpath(path_from_env, follow=False)
if os.path.isdir(to_bytes(path_from_env)):
path_from_env = os.path.join(path_from_env, "ansible.cfg")
potential_paths.append(path_from_env)
# Current working directory
warn_cmd_public = False
try:
cwd = os.getcwd()
perms = os.stat(cwd)
cwd_cfg = os.path.join(cwd, "ansible.cfg")
if perms.st_mode & stat.S_IWOTH:
# Working directory is world writable so we'll skip it.
# Still have to look for a file here, though, so that we know if we have to warn
if os.path.exists(cwd_cfg):
warn_cmd_public = True
else:
potential_paths.append(cwd_cfg)
except OSError:
# If we can't access cwd, we'll simply skip it as a possible config source
pass
# Per user location
potential_paths.append(unfrackpath("~/.ansible.cfg", follow=False))
# System location
potential_paths.append("/etc/ansible/ansible.cfg")
for path in potential_paths:
b_path = to_bytes(path)
if os.path.exists(b_path) and os.access(b_path, os.R_OK):
break
else:
path = None
# Emit a warning if all the following are true:
# * We did not use a config from ANSIBLE_CONFIG
# * There's an ansible.cfg in the current working directory that we skipped
if path_from_env != path and warn_cmd_public:
warnings.add(u"Ansible is being run in a world writable directory (%s),"
u" ignoring it as an ansible.cfg source."
u" For more information see"
u" https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir"
% to_text(cwd))
return path
class ConfigManager(object):
DEPRECATED = []
WARNINGS = set()
def __init__(self, conf_file=None, defs_file=None):
self._base_defs = {}
self._plugins = {}
self._parsers = {}
self._config_file = conf_file
self.data = ConfigData()
self._base_defs = self._read_config_yaml_file(defs_file or ('%s/base.yml' % os.path.dirname(__file__)))
if self._config_file is None:
# set config using ini
self._config_file = find_ini_config_file(self.WARNINGS)
# consume configuration
if self._config_file:
# initialize parser and read config
self._parse_config_file()
# update constants
self.update_config_data()
try:
self.update_module_defaults_groups()
except Exception as e:
# Since this is a 2.7 preview feature, we want to have it fail as gracefully as possible when there are issues.
sys.stderr.write('Could not load module_defaults_groups: %s: %s\n\n' % (type(e).__name__, e))
self.module_defaults_groups = {}
def _read_config_yaml_file(self, yml_file):
# TODO: handle relative paths as relative to the directory containing the current playbook instead of CWD
# Currently this is only used with absolute paths to the `ansible/config` directory
yml_file = to_bytes(yml_file)
if os.path.exists(yml_file):
with open(yml_file, 'rb') as config_def:
return yaml_load(config_def, Loader=SafeLoader) or {}
raise AnsibleError(
"Missing base YAML definition file (bad install?): %s" % to_native(yml_file))
def _parse_config_file(self, cfile=None):
''' return flat configuration settings from file(s) '''
# TODO: take list of files with merge/nomerge
if cfile is None:
cfile = self._config_file
ftype = get_config_type(cfile)
if cfile is not None:
if ftype == 'ini':
self._parsers[cfile] = configparser.ConfigParser()
with open(to_bytes(cfile), 'rb') as f:
try:
cfg_text = to_text(f.read(), errors='surrogate_or_strict')
except UnicodeError as e:
raise AnsibleOptionsError("Error reading config file(%s) because the config file was not utf8 encoded: %s" % (cfile, to_native(e)))
try:
if PY3:
self._parsers[cfile].read_string(cfg_text)
else:
cfg_file = io.StringIO(cfg_text)
self._parsers[cfile].readfp(cfg_file)
except configparser.Error as e:
raise AnsibleOptionsError("Error reading config file (%s): %s" % (cfile, to_native(e)))
# FIXME: this should eventually handle yaml config files
# elif ftype == 'yaml':
# with open(cfile, 'rb') as config_stream:
# self._parsers[cfile] = yaml.safe_load(config_stream)
else:
raise AnsibleOptionsError("Unsupported configuration file type: %s" % to_native(ftype))
def _find_yaml_config_files(self):
''' Load YAML Config Files in order, check merge flags, keep origin of settings'''
pass
def get_plugin_options(self, plugin_type, name, keys=None, variables=None, direct=None):
options = {}
defs = self.get_configuration_definitions(plugin_type, name)
for option in defs:
options[option] = self.get_config_value(option, plugin_type=plugin_type, plugin_name=name, keys=keys, variables=variables, direct=direct)
return options
def get_plugin_vars(self, plugin_type, name):
pvars = []
for pdef in self.get_configuration_definitions(plugin_type, name).values():
if 'vars' in pdef and pdef['vars']:
for var_entry in pdef['vars']:
pvars.append(var_entry['name'])
return pvars
def get_configuration_definition(self, name, plugin_type=None, plugin_name=None):
ret = {}
if plugin_type is None:
ret = self._base_defs.get(name, None)
elif plugin_name is None:
ret = self._plugins.get(plugin_type, {}).get(name, None)
else:
ret = self._plugins.get(plugin_type, {}).get(plugin_name, {}).get(name, None)
return ret
def get_configuration_definitions(self, plugin_type=None, name=None):
''' just list the possible settings, either base or for specific plugins or plugin '''
ret = {}
if plugin_type is None:
ret = self._base_defs
elif name is None:
ret = self._plugins.get(plugin_type, {})
else:
ret = self._plugins.get(plugin_type, {}).get(name, {})
return ret
def _loop_entries(self, container, entry_list):
''' repeat code for value entry assignment '''
value = None
origin = None
for entry in entry_list:
name = entry.get('name')
temp_value = container.get(name, None)
if temp_value is not None: # only set if env var is defined
value = temp_value
origin = name
# deal with deprecation of setting source, if used
if 'deprecated' in entry:
self.DEPRECATED.append((entry['name'], entry['deprecated']))
return value, origin
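# Note: per the traceback in issue 63310 above, container.get(name) is where a
# non-decodable environment value (e.g. latin-1 bytes under LC_ALL=C) surfaces
# as a UnicodeDecodeError from py3compat.environ on Python 2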
def get_config_value(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None):
''' wrapper '''
try:
value, _drop = self.get_config_value_and_origin(config, cfile=cfile, plugin_type=plugin_type, plugin_name=plugin_name,
keys=keys, variables=variables, direct=direct)
except AnsibleError:
raise
except Exception as e:
raise AnsibleError("Unhandled exception when retrieving %s:\n%s" % (config, to_native(e)), orig_exc=e)
return value
def get_config_value_and_origin(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None):
''' Given a config key figure out the actual value and report on the origin of the settings '''
if cfile is None:
# use default config
cfile = self._config_file
# Note: sources that are lists listed in low to high precedence (last one wins)
value = None
origin = None
defs = self.get_configuration_definitions(plugin_type, plugin_name)
if config in defs:
# direct setting via plugin arguments, can set to None so we bypass rest of processing/defaults
direct_aliases = []
if direct:
direct_aliases = [direct[alias] for alias in defs[config].get('aliases', []) if alias in direct]
if direct and config in direct:
value = direct[config]
origin = 'Direct'
elif direct and direct_aliases:
value = direct_aliases[0]
origin = 'Direct'
else:
# Use 'variable overrides' if present, highest precedence, but only present when querying running play
if variables and defs[config].get('vars'):
value, origin = self._loop_entries(variables, defs[config]['vars'])
origin = 'var: %s' % origin
# use playbook keywords if you have em
if value is None and keys and config in keys:
value, origin = keys[config], 'keyword'
origin = 'keyword: %s' % origin
# env vars are next precedence
if value is None and defs[config].get('env'):
value, origin = self._loop_entries(py3compat.environ, defs[config]['env'])
origin = 'env: %s' % origin
# try config file entries next, if we have one
if self._parsers.get(cfile, None) is None:
self._parse_config_file(cfile)
if value is None and cfile is not None:
ftype = get_config_type(cfile)
if ftype and defs[config].get(ftype):
if ftype == 'ini':
# load from ini config
try: # FIXME: generalize _loop_entries to allow for files also, most of this code is dupe
for ini_entry in defs[config]['ini']:
temp_value = get_ini_config_value(self._parsers[cfile], ini_entry)
if temp_value is not None:
value = temp_value
origin = cfile
if 'deprecated' in ini_entry:
self.DEPRECATED.append(('[%s]%s' % (ini_entry['section'], ini_entry['key']), ini_entry['deprecated']))
except Exception as e:
sys.stderr.write("Error while loading ini config %s: %s" % (cfile, to_native(e)))
elif ftype == 'yaml':
# FIXME: implement, also , break down key from defs (. notation???)
origin = cfile
# set default if we got here w/o a value
if value is None:
if defs[config].get('required', False):
if not plugin_type or config not in INTERNAL_DEFS.get(plugin_type, {}):
raise AnsibleError("No setting was provided for required configuration %s" %
to_native(_get_entry(plugin_type, plugin_name, config)))
else:
value = defs[config].get('default')
origin = 'default'
# skip typing as this is a templated default that will be resolved later in constants, which has needed vars
if plugin_type is None and isinstance(value, string_types) and (value.startswith('{{') and value.endswith('}}')):
return value, origin
# ensure correct type, can raise exceptions on mismatched types
try:
value = ensure_type(value, defs[config].get('type'), origin=origin)
except ValueError as e:
if origin.startswith('env:') and value == '':
# this is empty env var for non string so we can set to default
origin = 'default'
value = ensure_type(defs[config].get('default'), defs[config].get('type'), origin=origin)
else:
raise AnsibleOptionsError('Invalid type for configuration option %s: %s' %
(to_native(_get_entry(plugin_type, plugin_name, config)), to_native(e)))
# deal with deprecation of the setting
if 'deprecated' in defs[config] and origin != 'default':
self.DEPRECATED.append((config, defs[config].get('deprecated')))
else:
raise AnsibleError('Requested entry (%s) was not defined in configuration.' % to_native(_get_entry(plugin_type, plugin_name, config)))
return value, origin
def initialize_plugin_configuration_definitions(self, plugin_type, name, defs):
if plugin_type not in self._plugins:
self._plugins[plugin_type] = {}
self._plugins[plugin_type][name] = defs
def update_module_defaults_groups(self):
defaults_config = self._read_config_yaml_file(
'%s/module_defaults.yml' % os.path.join(os.path.dirname(__file__))
)
if defaults_config.get('version') not in ('1', '1.0', 1, 1.0):
raise AnsibleError('module_defaults.yml has an invalid version "%s" for configuration. Could be a bad install.' % defaults_config.get('version'))
self.module_defaults_groups = defaults_config.get('groupings', {})
def update_config_data(self, defs=None, configfile=None):
''' really: update constants '''
if defs is None:
defs = self._base_defs
if configfile is None:
configfile = self._config_file
if not isinstance(defs, dict):
raise AnsibleOptionsError("Invalid configuration definition type: %s for %s" % (type(defs), defs))
# update the constant for config file
self.data.update_setting(Setting('CONFIG_FILE', configfile, '', 'string'))
origin = None
# env and config defs can have several entries, ordered in list from lowest to highest precedence
for config in defs:
if not isinstance(defs[config], dict):
raise AnsibleOptionsError("Invalid configuration definition '%s': type is %s" % (to_native(config), type(defs[config])))
# get value and origin
try:
value, origin = self.get_config_value_and_origin(config, configfile)
except Exception as e:
# Printing the problem here because, in the current code:
# (1) we can't reach the error handler for AnsibleError before we
# hit a different error due to lack of working config.
# (2) We don't have access to display yet because display depends on config
# being properly loaded.
#
# If we start getting double errors printed from this section of code, then the
# above problem #1 has been fixed. Revamp this to be more like the try: except
# in get_config_value() at that time.
sys.stderr.write("Unhandled error:\n %s\n\n" % traceback.format_exc())
raise AnsibleError("Invalid settings supplied for %s: %s\n" % (config, to_native(e)), orig_exc=e)
# set the constant
self.data.update_setting(Setting(config, value, origin, defs[config].get('type', 'string')))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,310 |
ConfigManager envvar decode failure is fatal
|
##### SUMMARY
When ConfigManager is decoding config envvar values, if the value is not decodable based on current LC_ALL/LANG settings, the error is fatal.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ConfigManager
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9.0rc3
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
NA
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
04:42 + LC_ALL=C
04:42 + LANG=C
04:42 + ansible-playbook test_connection.yml -i ../connection_chroot/test_connection.inventory -e target_hosts=chroot -e action_prefix= -e local_tmp=/tmp/ansible-local -e remote_tmp=/tmp/ansible-remote -vvvvvv
04:42 Unhandled error:
04:42 Traceback (most recent call last):
04:42 File "/root/ansible/lib/ansible/config/manager.py", line 550, in update_config_data
04:42 value, origin = self.get_config_value_and_origin(config, configfile)
04:42 File "/root/ansible/lib/ansible/config/manager.py", line 453, in get_config_value_and_origin
04:42 value, origin = self._loop_entries(py3compat.environ, defs[config]['env'])
04:42 File "/root/ansible/lib/ansible/config/manager.py", line 393, in _loop_entries
04:42 temp_value = container.get(name, None)
04:42 File "/usr/lib64/python2.7/_abcoll.py", line 363, in get
04:42 return self[key]
04:42 File "/root/ansible/lib/ansible/utils/py3compat.py", line 50, in __getitem__
04:42 nonstring='passthru', errors='surrogate_or_strict')
04:42 File "/root/ansible/lib/ansible/module_utils/_text.py", line 235, in to_text
04:42 return obj.decode(encoding, errors)
04:42 UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 69: ordinal not in range(128)
04:42
04:42
04:42 Traceback (most recent call last):
04:42 File "/root/ansible/bin/ansible-playbook", line 62, in <module>
04:42 import ansible.constants as C
04:42 File "/root/ansible/lib/ansible/constants.py", line 174, in <module>
04:42 config = ConfigManager()
04:42 File "/root/ansible/lib/ansible/config/manager.py", line 290, in __init__
04:42 self.update_config_data()
04:42 File "/root/ansible/lib/ansible/config/manager.py", line 562, in update_config_data
04:42 raise AnsibleError("Invalid settings supplied for %s: %s\n" % (config, to_native(e)), orig_exc=e)
04:42 ansible.errors.AnsibleError: Invalid settings supplied for PLAYBOOK_DIR: 'ascii' codec can't decode byte 0xc3 in position 69: ordinal not in range(128)
```
|
https://github.com/ansible/ansible/issues/63310
|
https://github.com/ansible/ansible/pull/63311
|
9d014778adcdbbfeca6531c2c4db4121e362c8d2
|
77de663879bcf2f6ab82d5009b8a0b799814750b
| 2019-10-09T23:17:02Z |
python
| 2019-10-10T00:08:29Z |
lib/ansible/utils/py3compat.py
|
# -*- coding: utf-8 -*-
#
# (c) 2018, Toshio Kuratomi <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#
# Note that the original author of this, Toshio Kuratomi, is trying to submit this to six. If
# successful, the code in six will be available under six's more liberal license:
# https://mail.python.org/pipermail/python-porting/2018-July/000539.html
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import sys
from ansible.module_utils.six import PY3
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.common._collections_compat import MutableMapping
__all__ = ('environ',)
class _TextEnviron(MutableMapping):
"""
Utility class to return text strings from the environment instead of byte strings
Mimics the behaviour of os.environ on Python3
"""
def __init__(self, env=None):
if env is None:
env = os.environ
self._raw_environ = env
self._value_cache = {}
# Since we're trying to mimic Python3's os.environ, use sys.getfilesystemencoding()
# instead of utf-8
self.encoding = sys.getfilesystemencoding()
def __delitem__(self, key):
del self._raw_environ[key]
def __getitem__(self, key):
value = self._raw_environ[key]
if PY3:
return value
# Cache keys off of the undecoded values to handle any environment variables which change
# during a run
if value not in self._value_cache:
self._value_cache[value] = to_text(value, encoding=self.encoding,
nonstring='passthru', errors='surrogate_or_strict')
return self._value_cache[value]
def __setitem__(self, key, value):
self._raw_environ[key] = to_bytes(value, encoding=self.encoding, nonstring='strict',
errors='surrogate_or_strict')
def __iter__(self):
return self._raw_environ.__iter__()
def __len__(self):
return len(self._raw_environ)
environ = _TextEnviron()
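The `errors='surrogate_or_strict'` decode in `__getitem__` above is what turns an undecodable environment value into a fatal error under a C locale on Python 2. A minimal sketch of a more forgiving variant — assuming the `surrogate_then_replace` error strategy that `to_text` already supports; the actual change in the linked PR may differ:
```python
# Hypothetical degrade-gracefully variant of _TextEnviron.__getitem__ (Py2 path):
# undecodable bytes are substituted instead of raising, so a stray latin-1 byte
# in os.environ can no longer abort config loading at import time.
def __getitem__(self, key):
    value = self._raw_environ[key]
    if PY3:
        return value
    if value not in self._value_cache:
        self._value_cache[value] = to_text(value, encoding=self.encoding,
                                           nonstring='passthru',
                                           errors='surrogate_then_replace')
    return self._value_cache[value]
```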
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,302 |
vmware_guest_network: fails with a backtrace if static MAC starts with '00:50:56'
|
##### SUMMARY
<!--- Explain the problem briefly below -->
hi @pgbidkar and @Tomorrow9.
If I use the following task:
```yaml
- name: add new network adapters to virtual machine
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
networks:
- name: "VM Network"
state: new
device_type: e1000e
manual_mac: "00:50:56:58:59:60"
- name: "VM Network"
state: new
device_type: vmxnet3
manual_mac: "00:50:56:58:59:61"
register: add_netadapter
```
I get the following error: [add_nic.txt](https://github.com/ansible/ansible/files/3709181/add_nic.txt)
If I remove the `manual_mac` keys, everything works fine.
I create the Fedora 30 VM with:
```yaml
- name: Create VMs
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ dc1 }}"
validate_certs: no
folder: '/DC0/vm/F0'
name: test_vm1
state: poweredon
guest_id: centos7_64Guest
disk:
- size_gb: 1
type: thin
datastore: '{{ ds2 }}'
hardware:
version: latest
memory_mb: 1024
num_cpus: 1
scsi: paravirtual
cdrom:
type: iso
iso_path: "[{{ ds1 }}] fedora.iso"
networks:
- name: VM Network
```
I've tried several hardware.version/guest_id combinations, without any difference. My lab is set up as described here: https://docs.ansible.com/ansible/devel/dev_guide/platforms/vmware_guidelines.html
My vcenter set-up:
```shell
$ govc ls '/**/**/**'
/DC0/vm/F0/test_vm1
/DC0/network/VM Network
/DC0/host/DC0_C0/Resources
/DC0/host/DC0_C0/esxi1.test
/DC0/host/DC0_C0/esxi2.test
/DC0/datastore/LocalDS_0
/DC0/datastore/LocalDS_1
/DC0/datastore/datastore1 (1)
/DC0/datastore/datastore1
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest_network
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
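Context for the failure: vSphere reserves the `00:50:56` OUI, and manually assigned MACs under it must fall in `00:50:56:00:00:00`–`00:50:56:3F:FF:FF`, so `00:50:56:58:59:60` (fourth octet `0x58 > 0x3F`) is rejected by vCenter. A quick pre-flight check one could run before passing `manual_mac` (editor's sketch, not part of the module):
```python
# True if a MAC in VMware's reserved 00:50:56 OUI is inside the window vSphere
# accepts for manual (static) assignment: fourth octet <= 0x3F.
def valid_vmware_static_mac(mac):
    octets = [int(part, 16) for part in mac.split(':')]
    if octets[:3] != [0x00, 0x50, 0x56]:
        return True  # different OUI; this particular restriction does not apply
    return octets[3] <= 0x3F

assert not valid_vmware_static_mac('00:50:56:58:59:60')  # the MAC from this report
assert valid_vmware_static_mac('00:50:56:3a:12:34')
```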
|
https://github.com/ansible/ansible/issues/63302
|
https://github.com/ansible/ansible/pull/63336
|
7de70bde5220ec983790870ea655859bcef44741
|
add74fd24b4ae530c026c34ef2cce4d69121d537
| 2019-10-09T20:27:51Z |
python
| 2019-10-10T17:26:47Z |
test/integration/targets/vmware_guest_network/tasks/main.yml
|
# Test code for the vmware_guest_network module
# Copyright: (c) 2019, Diane Wang (Tomorrow9) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
- when: vcsim is not defined
block:
- import_role:
name: prepare_vmware_tests
vars:
setup_attach_host: true
setup_datastore: true
setup_virtualmachines: true
- name: gather network adapters' facts of the virtual machine
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: "{{ virtual_machines[0].name }}"
gather_network_info: true
register: netadapter_info
- debug: var=netadapter_info
- name: get number of existing network adapters
set_fact:
netadapter_num: "{{ netadapter_info.network_data | length }}"
- name: add new network adapters to virtual machine
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: "{{ virtual_machines[0].name }}"
networks:
- name: "VM Network"
state: new
device_type: e1000e
manual_mac: "00:50:56:58:59:60"
- name: "VM Network"
state: new
device_type: vmxnet3
manual_mac: "00:50:56:58:59:61"
register: add_netadapter
- debug: var=add_netadapter
- name: assert the new network adapters were added to VM
assert:
that:
- "add_netadapter.changed == true"
- "{{ add_netadapter.network_data | length | int }} == {{ netadapter_num | int + 2 }}"
- name: delete one specified network adapter
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: "{{ virtual_machines[0].name }}"
networks:
- state: absent
mac: "00:50:56:58:59:60"
register: del_netadapter
- debug: var=del_netadapter
- name: assert the network adapter was removed
assert:
that:
- "del_netadapter.changed == true"
- "{{ del_netadapter.network_data | length | int }} == {{ netadapter_num | int + 1 }}"
- name: disconnect one specified network adapter
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: "{{ virtual_machines[0].name }}"
networks:
- state: present
mac: "00:50:56:58:59:61"
connected: false
register: disc_netadapter
- debug: var=disc_netadapter
- name: assert the network adapter was disconnected
assert:
that:
- "disc_netadapter.changed == true"
- "{{ disc_netadapter.network_data[netadapter_num]['connected'] }} == false"
- name: Check if network does not exists
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: "{{ virtual_machines[0].name }}"
networks:
- name: non-existing-nw
manual_mac: "00:50:56:11:22:33"
state: new
register: no_nw_details
ignore_errors: yes
- debug: var=no_nw_details
- name: Check if network does not exists
assert:
that:
- not no_nw_details.changed
- no_nw_details.failed
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,222 |
vmware_guest_custom_attributes does not accept uuid only
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Documentation states that uuid is required if name is not set, but the play fails with an error that a required argument (name) is missing.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest_custom_attributes
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
$ ansible --version
ansible 2.8.5
config file = /Users/oha/.ansible.cfg
configured module search path = ['/Users/oha/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/oha/venv/ansible/lib/python3.7/site-packages/ansible
executable location = /Users/oha/venv/ansible/bin/ansible
python version = 3.7.4 (default, Sep 7 2019, 18:27:02) [Clang 10.0.1 (clang-1001.0.46.4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
HOST_KEY_CHECKING(/Users/oha/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Mac OS 10.14
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
```yaml
---
- hosts: all
connection: local
gather_facts: True
tasks:
- debug:
msg: "{{ config.instanceuuid }}"
- debug:
msg: "{{ config.name }}"
- name: Set virtual machine custom attributes using name
vmware_guest_custom_attributes:
validate_certs: False
state: present
name: "{{ config.name }}"
attributes:
- name: CreatedBy
value: myuser
- name: CreatedOn
value: "{{ ansible_date_time.date }}"
- name: Set virtual machine custom attributes using uuid
vmware_guest_custom_attributes:
validate_certs: False
state: present
use_instance_uuid: True
uuid: "{{ config.instanceuuid }}"
attributes:
- name: CreatedBy
value: myuser
- name: CreatedOn
value: "{{ ansible_date_time.date }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
That all tasks are executed with state OK
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [all] ********************************************************************************************************************************************************************************************************************************************************************
TASK [debug] ******************************************************************************************************************************************************************************************************************************************************************
ok: [virtualmachine09_4227e011-59a6-7730-4191-3192cce23b35] => {
"msg": "5027c056-98e0-7923-d25b-e37b628a8960"
}
TASK [debug] ******************************************************************************************************************************************************************************************************************************************************************
ok: [virtualmachine09_4227e011-59a6-7730-4191-3192cce23b35] => {
"msg": "virtualmachine09"
}
TASK [Set virtual machine custom attributes using name] ***********************************************************************************************************************************************************************************************************************
changed: [virtualmachine09_4227e011-59a6-7730-4191-3192cce23b35]
TASK [Set virtual machine custom attributes using uuid] ***********************************************************************************************************************************************************************************************************************
fatal: [virtualmachine09_4227e011-59a6-7730-4191-3192cce23b35]: FAILED! => {"changed": false, "msg": "missing required arguments: name"}
PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************
virtualmachine09_4227e011-59a6-7730-4191-3192cce23b35 : ok=4 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/63222
|
https://github.com/ansible/ansible/pull/63320
|
b3deab4319fe1e793d36307093e1639a49c28175
|
35cc228b3b4f8526c71a72589c26d4a357cae402
| 2019-10-08T06:37:57Z |
python
| 2019-10-10T21:25:20Z |
changelogs/fragments/vmware_guest_custom_attributes.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,222 |
vmware_guest_custom_attributes does not accept uuid only
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Documentation states that uuid is required if name is not set, but the play fails with an error that a required argument (name) is missing.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest_custom_attributes
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
$ ansible --version
ansible 2.8.5
config file = /Users/oha/.ansible.cfg
configured module search path = ['/Users/oha/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/oha/venv/ansible/lib/python3.7/site-packages/ansible
executable location = /Users/oha/venv/ansible/bin/ansible
python version = 3.7.4 (default, Sep 7 2019, 18:27:02) [Clang 10.0.1 (clang-1001.0.46.4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
HOST_KEY_CHECKING(/Users/oha/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Mac OS 10.14
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
```yaml
---
- hosts: all
connection: local
gather_facts: True
tasks:
- debug:
msg: "{{ config.instanceuuid }}"
- debug:
msg: "{{ config.name }}"
- name: Set virtual machine custom attributes using name
vmware_guest_custom_attributes:
validate_certs: False
state: present
name: "{{ config.name }}"
attributes:
- name: CreatedBy
value: myuser
- name: CreatedOn
value: "{{ ansible_date_time.date }}"
- name: Set virtual machine custom attributes using uuid
vmware_guest_custom_attributes:
validate_certs: False
state: present
use_instance_uuid: True
uuid: "{{ config.instanceuuid }}"
attributes:
- name: CreatedBy
value: myuser
- name: CreatedOn
value: "{{ ansible_date_time.date }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
All tasks should be executed with state OK.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
```paste below
PLAY [all] ********************************************************************************************************************************************************************************************************************************************************************
TASK [debug] ******************************************************************************************************************************************************************************************************************************************************************
ok: [virtualmachine09_4227e011-59a6-7730-4191-3192cce23b35] => {
"msg": "5027c056-98e0-7923-d25b-e37b628a8960"
}
TASK [debug] ******************************************************************************************************************************************************************************************************************************************************************
ok: [virtualmachine09_4227e011-59a6-7730-4191-3192cce23b35] => {
"msg": "virtualmachine09"
}
TASK [Set virtual machine custom attributes using name] ***********************************************************************************************************************************************************************************************************************
changed: [virtualmachine09_4227e011-59a6-7730-4191-3192cce23b35]
TASK [Set virtual machine custom attributes using uuid] ***********************************************************************************************************************************************************************************************************************
fatal: [virtualmachine09_4227e011-59a6-7730-4191-3192cce23b35]: FAILED! => {"changed": false, "msg": "missing required arguments: name"}
PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************
virtualmachine09_4227e011-59a6-7730-4191-3192cce23b35 : ok=4 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/63222
|
https://github.com/ansible/ansible/pull/63320
|
b3deab4319fe1e793d36307093e1639a49c28175
|
35cc228b3b4f8526c71a72589c26d4a357cae402
| 2019-10-08T06:37:57Z |
python
| 2019-10-10T21:25:20Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
No notable changes
Modules
=======
No notable changes
Modules removed
---------------
The following modules no longer exist:
* letsencrypt: use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
No notable changes
Noteworthy module changes
-------------------------
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in :ref:`pacman <pacman_module>` module has been removed, you should use ``extra_args=--recursive`` instead.
Plugins
=======
No notable changes
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,222 |
vmware_guest_custom_attributes does not accept uuid only
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Documentation states that uuid is required if name is not set, but the play fails with an error that a required argument (name) is missing.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest_custom_attributes
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
$ ansible --version
ansible 2.8.5
config file = /Users/oha/.ansible.cfg
configured module search path = ['/Users/oha/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/oha/venv/ansible/lib/python3.7/site-packages/ansible
executable location = /Users/oha/venv/ansible/bin/ansible
python version = 3.7.4 (default, Sep 7 2019, 18:27:02) [Clang 10.0.1 (clang-1001.0.46.4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
HOST_KEY_CHECKING(/Users/oha/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Mac OS 10.14
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
```yaml
---
- hosts: all
connection: local
gather_facts: True
tasks:
- debug:
msg: "{{ config.instanceuuid }}"
- debug:
msg: "{{ config.name }}"
- name: Set virtual machine custom attributes using name
vmware_guest_custom_attributes:
validate_certs: False
state: present
name: "{{ config.name }}"
attributes:
- name: CreatedBy
value: myuser
- name: CreatedOn
value: "{{ ansible_date_time.date }}"
- name: Set virtual machine custom attributes using uuid
vmware_guest_custom_attributes:
validate_certs: False
state: present
use_instance_uuid: True
uuid: "{{ config.instanceuuid }}"
attributes:
- name: CreatedBy
value: myuser
- name: CreatedOn
value: "{{ ansible_date_time.date }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
All tasks should be executed with state OK.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
```paste below
PLAY [all] ********************************************************************************************************************************************************************************************************************************************************************
TASK [debug] ******************************************************************************************************************************************************************************************************************************************************************
ok: [virtualmachine09_4227e011-59a6-7730-4191-3192cce23b35] => {
"msg": "5027c056-98e0-7923-d25b-e37b628a8960"
}
TASK [debug] ******************************************************************************************************************************************************************************************************************************************************************
ok: [virtualmachine09_4227e011-59a6-7730-4191-3192cce23b35] => {
"msg": "virtualmachine09"
}
TASK [Set virtual machine custom attributes using name] ***********************************************************************************************************************************************************************************************************************
changed: [virtualmachine09_4227e011-59a6-7730-4191-3192cce23b35]
TASK [Set virtual machine custom attributes using uuid] ***********************************************************************************************************************************************************************************************************************
fatal: [virtualmachine09_4227e011-59a6-7730-4191-3192cce23b35]: FAILED! => {"changed": false, "msg": "missing required arguments: name"}
PLAY RECAP ********************************************************************************************************************************************************************************************************************************************************************
virtualmachine09_4227e011-59a6-7730-4191-3192cce23b35 : ok=4 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/63222
|
https://github.com/ansible/ansible/pull/63320
|
b3deab4319fe1e793d36307093e1639a49c28175
|
35cc228b3b4f8526c71a72589c26d4a357cae402
| 2019-10-08T06:37:57Z |
python
| 2019-10-10T21:25:20Z |
lib/ansible/modules/cloud/vmware/vmware_guest_custom_attributes.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright, (c) 2018, Ansible Project
# Copyright, (c) 2018, Abhijeet Kasurde <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: vmware_guest_custom_attributes
short_description: Manage custom attributes from VMware for the given virtual machine
description:
- This module can be used to add, remove and update custom attributes for the given virtual machine.
version_added: 2.7
author:
- Jimmy Conner (@cigamit)
- Abhijeet Kasurde (@Akasurde)
notes:
- Tested on vSphere 6.5
requirements:
- "python >= 2.6"
- PyVmomi
options:
name:
description:
- Name of the virtual machine to work with.
- This parameter is required if C(uuid) or C(moid) is not supplied.
type: str
state:
description:
- The action to take.
- If set to C(present), then custom attribute is added or updated.
- If set to C(absent), then custom attribute is removed.
default: 'present'
choices: ['present', 'absent']
type: str
uuid:
description:
- UUID of the virtual machine to manage if known. This is VMware's unique identifier.
- This parameter is required if C(name) or C(moid) is not supplied.
type: str
moid:
description:
- Managed Object ID of the instance to manage if known, this is a unique identifier only within a single vCenter instance.
- This is required if C(name) or C(uuid) is not supplied.
version_added: '2.9'
type: str
use_instance_uuid:
description:
- Whether to use the VMware instance UUID rather than the BIOS UUID.
default: no
type: bool
version_added: '2.8'
folder:
description:
- Absolute path to find an existing guest.
- This parameter is required if C(name) is supplied and multiple virtual machines with the same name are found.
type: str
datacenter:
description:
- Datacenter name where the virtual machine is located.
required: True
type: str
attributes:
description:
- A list of names and values of custom attributes to manage.
- The value of a custom attribute is not required and will be ignored if C(state) is set to C(absent).
default: []
type: list
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = '''
- name: Add virtual machine custom attributes
vmware_guest_custom_attributes:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
uuid: 421e4592-c069-924d-ce20-7e7533fab926
state: present
attributes:
- name: MyAttribute
value: MyValue
delegate_to: localhost
register: attributes
- name: Add multiple virtual machine custom attributes
vmware_guest_custom_attributes:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
uuid: 421e4592-c069-924d-ce20-7e7533fab926
state: present
attributes:
- name: MyAttribute
value: MyValue
- name: MyAttribute2
value: MyValue2
delegate_to: localhost
register: attributes
- name: Remove virtual machine Attribute
vmware_guest_custom_attributes:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
uuid: 421e4592-c069-924d-ce20-7e7533fab926
state: absent
attributes:
- name: MyAttribute
delegate_to: localhost
register: attributes
- name: Remove virtual machine Attribute using Virtual Machine MoID
vmware_guest_custom_attributes:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
moid: vm-42
state: absent
attributes:
- name: MyAttribute
delegate_to: localhost
register: attributes
'''
RETURN = """
custom_attributes:
description: metadata about the virtual machine attributes
returned: always
type: dict
sample: {
"mycustom": "my_custom_value",
"mycustom_2": "my_custom_value_2",
"sample_1": "sample_1_value",
"sample_2": "sample_2_value",
"sample_3": "sample_3_value"
}
"""
try:
from pyVmomi import vim
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec
class VmAttributeManager(PyVmomi):
def __init__(self, module):
super(VmAttributeManager, self).__init__(module)
def set_custom_field(self, vm, user_fields):
result_fields = dict()
change_list = list()
changed = False
for field in user_fields:
field_key = self.check_exists(field['name'])
found = False
field_value = field.get('value', '')
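# pair each defined custom field's name with this VM's currently stored
# value for that field (matching the definition key to the value key)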
for k, v in [(x.name, v.value) for x in self.custom_field_mgr for v in vm.customValue if x.key == v.key]:
if k == field['name']:
found = True
if v != field_value:
if not self.module.check_mode:
self.content.customFieldsManager.SetField(entity=vm, key=field_key.key, value=field_value)
result_fields[k] = field_value
change_list.append(True)
if not found and field_value != "":
if not field_key and not self.module.check_mode:
field_key = self.content.customFieldsManager.AddFieldDefinition(name=field['name'], moType=vim.VirtualMachine)
change_list.append(True)
if not self.module.check_mode:
self.content.customFieldsManager.SetField(entity=vm, key=field_key.key, value=field_value)
result_fields[field['name']] = field_value
if any(change_list):
changed = True
return {'changed': changed, 'failed': False, 'custom_attributes': result_fields}
def check_exists(self, field):
for x in self.custom_field_mgr:
if x.name == field:
return x
return False
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
datacenter=dict(type='str'),
name=dict(type='str'),  # one of name/uuid/moid is enforced by required_one_of below
folder=dict(type='str'),
uuid=dict(type='str'),
moid=dict(type='str'),
use_instance_uuid=dict(type='bool', default=False),
state=dict(type='str', default='present',
choices=['absent', 'present']),
attributes=dict(
type='list',
default=[],
options=dict(
name=dict(type='str', required=True),
value=dict(type='str'),
)
),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
required_one_of=[
['name', 'uuid', 'moid']
],
)
if module.params.get('folder'):
# FindByInventoryPath() does not require an absolute path
# so we should leave the input folder path unmodified
module.params['folder'] = module.params['folder'].rstrip('/')
pyv = VmAttributeManager(module)
results = {'changed': False, 'failed': False, 'instance': dict()}
# Check if the virtual machine exists before continuing
vm = pyv.get_vm()
if vm:
# virtual machine already exists
if module.params['state'] == "present":
results = pyv.set_custom_field(vm, module.params['attributes'])
elif module.params['state'] == "absent":
results = pyv.set_custom_field(vm, module.params['attributes'])
module.exit_json(**results)
else:
# virtual machine does not exist
vm_id = (module.params.get('name') or module.params.get('uuid') or module.params.get('moid'))
module.fail_json(msg="Unable to manage custom attributes for non-existing"
" virtual machine %s" % vm_id)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,242 |
Nosh module return value for `status` should be a sample, not "contains"
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
In the nosh module's return documentation, the description of the `status` field isn't helpful:

That section is built from https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/system/nosh.py#L138-L187. The content there should be a sample of the return values, not a `contains` entry.
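For reference, a hedged sketch of what that section could look like with a trimmed C(sample) block in place of C(contains) (keys borrowed from the current content, values illustrative):

```yaml
status:
  description: a dictionary with the key=value pairs returned by C(system-control show-json), or C(None) if the service is not loaded
  returned: success
  type: dict
  sample:
    DaemontoolsEncoreState: running
    Enabled: true
    MainPID: 661
```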
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
nosh
##### ANSIBLE VERSION
2.10
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/63242
|
https://github.com/ansible/ansible/pull/63303
|
7f4befdea77045fa83b5f2b304bd5e16b219f74c
|
df283788e562470741ebed15dc4374190316c352
| 2019-10-08T16:29:08Z |
python
| 2019-10-11T16:07:53Z |
lib/ansible/modules/system/nosh.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2017, Thomas Caravia <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: nosh
author:
- "Thomas Caravia (@tacatac)"
version_added: "2.5"
short_description: Manage services with nosh
description:
- Control running and enabled state for system-wide or user services.
- BSD and Linux systems are supported.
options:
name:
required: true
description:
- Name of the service to manage.
state:
required: false
choices: [ started, stopped, reset, restarted, reloaded ]
description:
- C(started)/C(stopped) are idempotent actions that will not run
commands unless necessary.
C(restarted) will always bounce the service.
C(reloaded) will send a SIGHUP or start the service.
C(reset) will start or stop the service according to whether it is
enabled or not.
enabled:
required: false
type: bool
description:
- Enable or disable the service, independently of C(*.preset) file
preference or running state. Mutually exclusive with I(preset). Will take
effect prior to I(state=reset).
preset:
required: false
type: bool
description:
- Enable or disable the service according to local preferences in *.preset files.
Mutually exclusive with I(enabled). Only has an effect if set to true. Will take
effect prior to I(state=reset).
user:
required: false
default: 'no'
type: bool
description:
- Run system-control talking to the calling user's service manager, rather than
the system-wide service manager.
requirements:
- A system with an active nosh service manager, see Notes for further information.
notes:
- Information on the nosh utilities suite may be found at U(https://jdebp.eu/Softwares/nosh/).
'''
EXAMPLES = '''
- name: start dnscache if not running
nosh: name=dnscache state=started
- name: stop mpd, if running
nosh: name=mpd state=stopped
- name: restart unbound or start it if not already running
nosh:
name: unbound
state: restarted
- name: reload fail2ban or start it if not already running
nosh:
name: fail2ban
state: reloaded
- name: disable nsd
nosh: name=nsd enabled=no
- name: for package installers, set nginx running state according to local enable settings, preset and reset
nosh: name=nginx preset=True state=reset
- name: reboot the host if nosh is the system manager, would need a "wait_for*" task at least, not recommended as-is
nosh: name=reboot state=started
- name: using conditionals with the module facts
tasks:
- name: obtain information on tinydns service
nosh: name=tinydns
register: result
- name: fail if service not loaded
fail: msg="The {{ result.name }} service is not loaded"
when: not result.status
- name: fail if service is running
fail: msg="The {{ result.name }} service is running"
when: result.status and result.status['DaemontoolsEncoreState'] == "running"
'''
RETURN = '''
name:
description: name used to find the service
returned: success
type: str
sample: "sshd"
service_path:
description: resolved path for the service
returned: success
type: str
sample: "/var/sv/sshd"
enabled:
description: whether the service is enabled at system bootstrap
returned: success
type: bool
sample: True
preset:
description: whether the enabled status reflects the one set in the relevant C(*.preset) file
returned: success
type: bool
sample: False
state:
description: service process run state, C(None) if the service is not loaded and will not be started
returned: if state option is used
type: str
sample: "reloaded"
status:
description: a dictionary with the key=value pairs returned by C(system-control show-json), or C(None) if the service is not loaded
returned: success
type: dict
sample: {
"After": [
"/etc/service-bundles/targets/basic",
"../sshdgenkeys",
"log"
],
"Before": [
"/etc/service-bundles/targets/shutdown"
],
"Conflicts": [],
"DaemontoolsEncoreState": "running",
"DaemontoolsState": "up",
"Enabled": true,
"LogService": "../cyclog@sshd",
"MainPID": 661,
"Paused": false,
"ReadyAfterRun": false,
"RemainAfterExit": false,
"Required-By": [],
"RestartExitStatusCode": 0,
"RestartExitStatusNumber": 0,
"RestartTimestamp": 4611686019935648081,
"RestartUTCTimestamp": 1508260140,
"RunExitStatusCode": 0,
"RunExitStatusNumber": 0,
"RunTimestamp": 4611686019935648081,
"RunUTCTimestamp": 1508260140,
"StartExitStatusCode": 1,
"StartExitStatusNumber": 0,
"StartTimestamp": 4611686019935648081,
"StartUTCTimestamp": 1508260140,
"StopExitStatusCode": 0,
"StopExitStatusNumber": 0,
"StopTimestamp": 4611686019935648081,
"StopUTCTimestamp": 1508260140,
"Stopped-By": [
"/etc/service-bundles/targets/shutdown"
],
"Timestamp": 4611686019935648081,
"UTCTimestamp": 1508260140,
"Want": "nothing",
"Wanted-By": [
"/etc/service-bundles/targets/server",
"/etc/service-bundles/targets/sockets"
],
"Wants": [
"/etc/service-bundles/targets/basic",
"../sshdgenkeys"
]
}
user:
description: whether the user-level service manager is called
returned: success
type: bool
sample: False
'''
import json
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.service import fail_if_missing
from ansible.module_utils._text import to_native
def run_sys_ctl(module, args):
sys_ctl = [module.get_bin_path('system-control', required=True)]
if module.params['user']:
sys_ctl = sys_ctl + ['--user']
return module.run_command(sys_ctl + args)
def get_service_path(module, service):
(rc, out, err) = run_sys_ctl(module, ['find', service])
# fail if service not found
if rc != 0:
fail_if_missing(module, False, service, msg='host')
else:
return to_native(out).strip()
def service_is_enabled(module, service_path):
(rc, out, err) = run_sys_ctl(module, ['is-enabled', service_path])
return rc == 0
def service_is_preset_enabled(module, service_path):
(rc, out, err) = run_sys_ctl(module, ['preset', '--dry-run', service_path])
return to_native(out).strip().startswith("enable")
def service_is_loaded(module, service_path):
(rc, out, err) = run_sys_ctl(module, ['is-loaded', service_path])
return rc == 0
def get_service_status(module, service_path):
(rc, out, err) = run_sys_ctl(module, ['show-json', service_path])
# will fail if the service is not loaded
if err is not None and err:
module.fail_json(msg=err)
else:
json_out = json.loads(to_native(out).strip())
status = json_out[service_path] # descend past service path header
return status
def service_is_running(service_status):
return service_status['DaemontoolsEncoreState'] in set(['starting', 'started', 'running'])
def handle_enabled(module, result, service_path):
"""Enable or disable a service as needed.
- 'preset' will set the enabled state according to available preset file settings.
- 'enabled' will set the enabled state explicitly, independently of preset settings.
These options are set to "mutually exclusive" but the explicit 'enabled' option will
have priority if the check is bypassed.
"""
# computed prior in control flow
preset = result['preset']
enabled = result['enabled']
# preset, effect only if option set to true (no reverse preset)
if module.params['preset']:
action = 'preset'
# run preset if needed
if preset != module.params['preset']:
result['changed'] = True
if not module.check_mode:
(rc, out, err) = run_sys_ctl(module, [action, service_path])
if rc != 0:
module.fail_json(msg="Unable to %s service %s: %s" % (action, service_path, out + err))
result['preset'] = not preset
result['enabled'] = not enabled
# enabled/disabled state
if module.params['enabled'] is not None:
if module.params['enabled']:
action = 'enable'
else:
action = 'disable'
# change enable/disable if needed
if enabled != module.params['enabled']:
result['changed'] = True
if not module.check_mode:
(rc, out, err) = run_sys_ctl(module, [action, service_path])
if rc != 0:
module.fail_json(msg="Unable to %s service %s: %s" % (action, service_path, out + err))
result['enabled'] = not enabled
result['preset'] = not preset
def handle_state(module, result, service_path):
"""Set service running state as needed.
Takes into account the fact that a service may not be loaded (no supervise directory) in
which case it is 'stopped' as far as the service manager is concerned. No status information
can be obtained and the service can only be 'started'.
"""
# default to desired state, no action
result['state'] = module.params['state']
state = module.params['state']
action = None
# computed prior in control flow, possibly modified by handle_enabled()
enabled = result['enabled']
# service not loaded -> not started by manager, no status information
if not service_is_loaded(module, service_path):
if state in ['started', 'restarted', 'reloaded']:
action = 'start'
result['state'] = 'started'
elif state == 'reset':
if enabled:
action = 'start'
result['state'] = 'started'
else:
result['state'] = None
else:
result['state'] = None
# service is loaded
else:
# get status information
result['status'] = get_service_status(module, service_path)
running = service_is_running(result['status'])
if state == 'started':
if not running:
action = 'start'
elif state == 'stopped':
if running:
action = 'stop'
# reset = start/stop according to enabled status
elif state == 'reset':
if enabled is not running:
if running:
action = 'stop'
result['state'] = 'stopped'
else:
action = 'start'
result['state'] = 'started'
# start if not running, 'service' module constraint
elif state == 'restarted':
if not running:
action = 'start'
result['state'] = 'started'
else:
action = 'condrestart'
# start if not running, 'service' module constraint
elif state == 'reloaded':
if not running:
action = 'start'
result['state'] = 'started'
else:
action = 'hangup'
# change state as needed
if action:
result['changed'] = True
if not module.check_mode:
(rc, out, err) = run_sys_ctl(module, [action, service_path])
if rc != 0:
module.fail_json(msg="Unable to %s service %s: %s" % (action, service_path, err))
# ===========================================
# Main control flow
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(required=True),
state=dict(choices=['started', 'stopped', 'reset', 'restarted', 'reloaded'], type='str'),
enabled=dict(type='bool'),
preset=dict(type='bool'),
user=dict(type='bool', default=False),
),
supports_check_mode=True,
mutually_exclusive=[['enabled', 'preset']],
)
service = module.params['name']
rc = 0
out = err = ''
result = {
'name': service,
'changed': False,
'status': None,
}
# check service can be found (or fail) and get path
service_path = get_service_path(module, service)
# get preliminary service facts
result['service_path'] = service_path
result['user'] = module.params['user']
result['enabled'] = service_is_enabled(module, service_path)
result['preset'] = result['enabled'] is service_is_preset_enabled(module, service_path)
# set enabled state, service need not be loaded
if module.params['enabled'] is not None or module.params['preset']:
handle_enabled(module, result, service_path)
# set service running state
if module.params['state'] is not None:
handle_state(module, result, service_path)
# get final service status if possible
if service_is_loaded(module, service_path):
result['status'] = get_service_status(module, service_path)
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,353 |
docker_node_info fails with exception when the node name is ambiguous
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When using docker_node_info with a node name that is ambiguous, the module fails with an exception about an undefined variable.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_node_info
##### ANSIBLE VERSION
```
ansible 2.8.5
config file = /home/vagrant/code/ansible-playbooks/ansible.cfg
configured module search path = [u'/home/vagrant/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
```
##### CONFIGURATION
```
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/vagrant/code/ansible-playbooks/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/vagrant/code/ansible-playbooks/ansible.cfg) = yaml
HOST_KEY_CHECKING(/home/vagrant/code/ansible-playbooks/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Running on Ubuntu, targeting Ubuntu
##### STEPS TO REPRODUCE
Create a minimal Docker Swarm setup with two nodes: one manager and one worker. Add the worker to the swarm, then leave the swarm, then re-join it. This leads to output similar to:
```
azureuser@pgelinas-dev-ln0:~$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
0oqo20dyohnfm0d2itjvnlt0x pgelinas-d-wn0 Down Active 18.09.9
ikqtd942969nc3d9j0k68isfi pgelinas-d-wn0 Ready Active 18.09.9
wf1mmy7zbuhsfu4vstvl06t2y * pgelinas-dev-ln0 Ready Active Leader 19.03.3
```
And `docker node inspect` gives the following output:
```
azureuser@pgelinas-dev-ln0:~$ docker node inspect pgelinas-d-wn0
[]
Status: Error response from daemon: node pgelinas-d-wn0 is ambiguous (2 matches found), Code: 1
```
Sample task definition:
```yaml
- name: Ensure worker has joined the swarm
delegate_to: "pgelinas-dev-ln0"
docker_node_info:
name: "{{ ansible_hostname }}"
register: result
failed_when: result.nodes is not defined
```
##### EXPECTED RESULTS
The module should handle the error gracefully and provide the error message/status in the result
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
TASK [swarm-worker : Ensure worker has joined the swarm] **********************************************************************************************************************************************************
fatal: [pgelinas-dev-ww0 -> pgelinas-dev-lm0.centralus.cloudapp.azure.com]: FAILED! => changed=false
attempts: 1
failed_when_result: true
module_stderr: |-
Shared connection to pgelinas-dev-lm0.centralus.cloudapp.azure.com closed.
module_stdout: |-
Traceback (most recent call last):
File "/home/azureuser/.ansible/tmp/ansible-tmp-1570731617.12-46436432626554/AnsiballZ_docker_node_info.py", line 114, in <module>
_ansiballz_main()
File "/home/azureuser/.ansible/tmp/ansible-tmp-1570731617.12-46436432626554/AnsiballZ_docker_node_info.py", line 106, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/azureuser/.ansible/tmp/ansible-tmp-1570731617.12-46436432626554/AnsiballZ_docker_node_info.py", line 49, in invoke_module
imp.load_module('__main__', mod, module, MOD_DESC)
File "/tmp/ansible_docker_node_info_payload_nJH7PJ/__main__.py", line 158, in <module>
File "/tmp/ansible_docker_node_info_payload_nJH7PJ/__main__.py", line 145, in main
File "/tmp/ansible_docker_node_info_payload_nJH7PJ/__main__.py", line 123, in get_node_facts
File "/tmp/ansible_docker_node_info_payload_nJH7PJ/ansible_docker_node_info_payload.zip/ansible/module_utils/docker/swarm.py", line 170, in get_node_inspect
UnboundLocalError: local variable 'node_info' referenced before assignment
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
```
I've poked a bit at the code on GitHub regarding this; I think the problem is in lib/ansible/module_utils/docker/swarm.py around line 162. Probably an unhandled status_code?
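A reduced sketch of the suspect path (abbreviated from swarm.py as I read it, so treat the details as an assumption):

```python
try:
    node_info = self.inspect_node(node_id=node_id)
except APIError as exc:
    if exc.status_code == 503:
        self.fail("Cannot inspect node: ...")
    if exc.status_code == 404:
        return None  # when skip_missing is set
    # a 400 "node ... is ambiguous" response matches neither branch,
    # so execution falls through with node_info never assigned
json_str = json.dumps(node_info, ensure_ascii=False)  # UnboundLocalError
```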
|
https://github.com/ansible/ansible/issues/63353
|
https://github.com/ansible/ansible/pull/63418
|
7f643690c7535c2623c52809cd27b3231b9f0fa7
|
d753168e9d66cf4ddfaf5273bf7a3b9d065a00c2
| 2019-10-10T19:00:27Z |
python
| 2019-10-13T12:16:02Z |
changelogs/fragments/63418-docker_node_info-errors.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,353 |
docker_node_info fails with exception when the node name is ambiguous
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When using docker_node_info with a node name that is ambiguous, the module fails with an exception about an undefined variable.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_node_info
##### ANSIBLE VERSION
```
ansible 2.8.5
config file = /home/vagrant/code/ansible-playbooks/ansible.cfg
configured module search path = [u'/home/vagrant/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
```
##### CONFIGURATION
```
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/vagrant/code/ansible-playbooks/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/vagrant/code/ansible-playbooks/ansible.cfg) = yaml
HOST_KEY_CHECKING(/home/vagrant/code/ansible-playbooks/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Running on Ubuntu, targeting Ubuntu
##### STEPS TO REPRODUCE
Create a minimal Docker Swarm setup with two nodes: one manager and one worker. Add the worker to the swarm, then leave the swarm, then re-join it. This leads to output similar to:
```
azureuser@pgelinas-dev-ln0:~$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
0oqo20dyohnfm0d2itjvnlt0x pgelinas-d-wn0 Down Active 18.09.9
ikqtd942969nc3d9j0k68isfi pgelinas-d-wn0 Ready Active 18.09.9
wf1mmy7zbuhsfu4vstvl06t2y * pgelinas-dev-ln0 Ready Active Leader 19.03.3
```
And `docker node inspect` gives the following output:
```
azureuser@pgelinas-dev-ln0:~$ docker node inspect pgelinas-d-wn0
[]
Status: Error response from daemon: node pgelinas-d-wn0 is ambiguous (2 matches found), Code: 1
```
Sample task definition:
```yaml
- name: Ensure worker has joined the swarm
delegate_to: "pgelinas-dev-ln0"
docker_node_info:
name: "{{ ansible_hostname }}"
register: result
failed_when: result.nodes is not defined
```
##### EXPECTED RESULTS
The module should handle the error gracefully and provide the error message/status in the result
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
TASK [swarm-worker : Ensure worker has joined the swarm] **********************************************************************************************************************************************************
fatal: [pgelinas-dev-ww0 -> pgelinas-dev-lm0.centralus.cloudapp.azure.com]: FAILED! => changed=false
attempts: 1
failed_when_result: true
module_stderr: |-
Shared connection to pgelinas-dev-lm0.centralus.cloudapp.azure.com closed.
module_stdout: |-
Traceback (most recent call last):
File "/home/azureuser/.ansible/tmp/ansible-tmp-1570731617.12-46436432626554/AnsiballZ_docker_node_info.py", line 114, in <module>
_ansiballz_main()
File "/home/azureuser/.ansible/tmp/ansible-tmp-1570731617.12-46436432626554/AnsiballZ_docker_node_info.py", line 106, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/azureuser/.ansible/tmp/ansible-tmp-1570731617.12-46436432626554/AnsiballZ_docker_node_info.py", line 49, in invoke_module
imp.load_module('__main__', mod, module, MOD_DESC)
File "/tmp/ansible_docker_node_info_payload_nJH7PJ/__main__.py", line 158, in <module>
File "/tmp/ansible_docker_node_info_payload_nJH7PJ/__main__.py", line 145, in main
File "/tmp/ansible_docker_node_info_payload_nJH7PJ/__main__.py", line 123, in get_node_facts
File "/tmp/ansible_docker_node_info_payload_nJH7PJ/ansible_docker_node_info_payload.zip/ansible/module_utils/docker/swarm.py", line 170, in get_node_inspect
UnboundLocalError: local variable 'node_info' referenced before assignment
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
```
I've poked a bit at the code on GitHub regarding this; I think the problem is in lib/ansible/module_utils/docker/swarm.py around line 162. Probably an unhandled status_code?
|
https://github.com/ansible/ansible/issues/63353
|
https://github.com/ansible/ansible/pull/63418
|
7f643690c7535c2623c52809cd27b3231b9f0fa7
|
d753168e9d66cf4ddfaf5273bf7a3b9d065a00c2
| 2019-10-10T19:00:27Z |
python
| 2019-10-13T12:16:02Z |
lib/ansible/module_utils/docker/swarm.py
|
# (c) 2019 Piotr Wojciechowski (@wojciechowskipiotr) <[email protected]>
# (c) Thierry Bouvet (@tbouvet)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
from time import sleep
try:
from docker.errors import APIError, NotFound
except ImportError:
# missing Docker SDK for Python handled in ansible.module_utils.docker.common
pass
from ansible.module_utils._text import to_native
from ansible.module_utils.docker.common import (
AnsibleDockerClient,
LooseVersion,
)
class AnsibleDockerSwarmClient(AnsibleDockerClient):
def __init__(self, **kwargs):
super(AnsibleDockerSwarmClient, self).__init__(**kwargs)
def get_swarm_node_id(self):
"""
Get the 'NodeID' of the Swarm node or 'None' if host is not in Swarm. It returns the NodeID
of the Docker host the module is executed on
:return:
NodeID of host or 'None' if not part of Swarm
"""
try:
info = self.info()
except APIError as exc:
self.fail("Failed to get node information for %s" % to_native(exc))
if info:
json_str = json.dumps(info, ensure_ascii=False)
swarm_info = json.loads(json_str)
if swarm_info['Swarm']['NodeID']:
return swarm_info['Swarm']['NodeID']
return None
def check_if_swarm_node(self, node_id=None):
"""
Checks if the host is part of a Docker Swarm. If 'node_id' is not provided it reads the Docker host
system information, looking for whether a specific key exists in the output. If 'node_id' is provided then it tries to
read node information, assuming it is run on a Swarm manager. The get_node_inspect() method handles the exception if
it is not executed on a Swarm manager
:param node_id: Node identifier
:return:
bool: True if node is part of Swarm, False otherwise
"""
if node_id is None:
try:
info = self.info()
except APIError:
self.fail("Failed to get host information.")
if info:
json_str = json.dumps(info, ensure_ascii=False)
swarm_info = json.loads(json_str)
if swarm_info['Swarm']['NodeID']:
return True
if swarm_info['Swarm']['LocalNodeState'] in ('active', 'pending', 'locked'):
return True
return False
else:
try:
node_info = self.get_node_inspect(node_id=node_id)
except APIError:
return
if node_info['ID'] is not None:
return True
return False
def check_if_swarm_manager(self):
"""
Checks if node role is set as Manager in Swarm. The node is the docker host on which module action
is performed. The inspect_swarm() will fail if node is not a manager
:return: True if node is Swarm Manager, False otherwise
"""
try:
self.inspect_swarm()
return True
except APIError:
return False
def fail_task_if_not_swarm_manager(self):
"""
If host is not a swarm manager then Ansible task on this host should end with 'failed' state
"""
if not self.check_if_swarm_manager():
self.fail("Error running docker swarm module: must run on swarm manager node")
def check_if_swarm_worker(self):
"""
Checks if node role is set as Worker in Swarm. The node is the docker host on which module action
is performed. Will fail if run on host that is not part of Swarm via check_if_swarm_node()
:return: True if node is Swarm Worker, False otherwise
"""
if self.check_if_swarm_node() and not self.check_if_swarm_manager():
return True
return False
def check_if_swarm_node_is_down(self, node_id=None, repeat_check=1):
"""
Checks if node status on Swarm manager is 'down'. If node_id is provided it queries the manager
about the node specified in the parameter, otherwise it queries the manager about itself. If run
on a Swarm worker node or a host that is not part of a Swarm it will fail the playbook
:param repeat_check: number of check attempts with 5 seconds delay between them, by default check only once
:param node_id: node ID or name, if None then method will try to get node_id of host module run on
:return:
True if node is part of swarm but its state is down, False otherwise
"""
if repeat_check < 1:
repeat_check = 1
if node_id is None:
node_id = self.get_swarm_node_id()
for retry in range(0, repeat_check):
if retry > 0:
sleep(5)
node_info = self.get_node_inspect(node_id=node_id)
if node_info['Status']['State'] == 'down':
return True
return False
def get_node_inspect(self, node_id=None, skip_missing=False):
"""
Returns Swarm node info as in 'docker node inspect' command about single node
:param skip_missing: if True then function will return None instead of failing the task
:param node_id: node ID or name, if None then method will try to get node_id of host module run on
:return:
Single node information structure
"""
if node_id is None:
node_id = self.get_swarm_node_id()
if node_id is None:
self.fail("Failed to get node information.")
try:
node_info = self.inspect_node(node_id=node_id)
except APIError as exc:
if exc.status_code == 503:
self.fail("Cannot inspect node: To inspect node execute module on Swarm Manager")
if exc.status_code == 404:
    if skip_missing:
        return None
# any other API error (for example a 400 "node is ambiguous" response) must
# not fall through, or node_info would be referenced before assignment below
self.fail("Error while reading from Swarm manager: %s" % to_native(exc))
except Exception as exc:
self.fail("Error inspecting swarm node: %s" % exc)
json_str = json.dumps(node_info, ensure_ascii=False)
node_info = json.loads(json_str)
if 'ManagerStatus' in node_info:
if node_info['ManagerStatus'].get('Leader'):
# This is a workaround for a bug in Docker where in some cases the Leader IP is 0.0.0.0
# Check moby/moby#35437 for details
count_colons = node_info['ManagerStatus']['Addr'].count(":")
if count_colons == 1:
swarm_leader_ip = node_info['ManagerStatus']['Addr'].split(":", 1)[0] or node_info['Status']['Addr']
else:
swarm_leader_ip = node_info['Status']['Addr']
node_info['Status']['Addr'] = swarm_leader_ip
return node_info
def get_all_nodes_inspect(self):
"""
Returns Swarm node info as in 'docker node inspect' command about all registered nodes
:return:
Structure with information about all nodes
"""
try:
node_info = self.nodes()
except APIError as exc:
if exc.status_code == 503:
self.fail("Cannot inspect node: To inspect node execute module on Swarm Manager")
self.fail("Error while reading from Swarm manager: %s" % to_native(exc))
except Exception as exc:
self.fail("Error inspecting swarm node: %s" % exc)
json_str = json.dumps(node_info, ensure_ascii=False)
node_info = json.loads(json_str)
return node_info
def get_all_nodes_list(self, output='short'):
"""
Returns list of nodes registered in Swarm
:param output: Defines format of returned data
:return:
If 'output' is 'short' then the returned data is a list of node hostnames registered in the Swarm;
if 'output' is 'long' then the returned data is a list of dicts containing the attributes as in
the output of the command 'docker node ls'
"""
nodes_list = []
nodes_inspect = self.get_all_nodes_inspect()
if nodes_inspect is None:
return None
if output == 'short':
for node in nodes_inspect:
nodes_list.append(node['Description']['Hostname'])
elif output == 'long':
for node in nodes_inspect:
node_property = {}
node_property.update({'ID': node['ID']})
node_property.update({'Hostname': node['Description']['Hostname']})
node_property.update({'Status': node['Status']['State']})
node_property.update({'Availability': node['Spec']['Availability']})
if 'ManagerStatus' in node:
if node['ManagerStatus']['Leader'] is True:
node_property.update({'Leader': True})
node_property.update({'ManagerStatus': node['ManagerStatus']['Reachability']})
node_property.update({'EngineVersion': node['Description']['Engine']['EngineVersion']})
nodes_list.append(node_property)
else:
return None
return nodes_list
def get_node_name_by_id(self, nodeid):
return self.get_node_inspect(nodeid)['Description']['Hostname']
def get_unlock_key(self):
if self.docker_py_version < LooseVersion('2.7.0'):
return None
return super(AnsibleDockerSwarmClient, self).get_unlock_key()
def get_service_inspect(self, service_id, skip_missing=False):
"""
Returns Swarm service info as in 'docker service inspect' command about single service
:param service_id: service ID or name
:param skip_missing: if True then function will return None instead of failing the task
:return:
Single service information structure
"""
try:
service_info = self.inspect_service(service_id)
except NotFound as exc:
if skip_missing is False:
self.fail("Error while reading from Swarm manager: %s" % to_native(exc))
else:
return None
except APIError as exc:
if exc.status_code == 503:
self.fail("Cannot inspect service: To inspect service execute module on Swarm Manager")
self.fail("Error inspecting swarm service: %s" % exc)
except Exception as exc:
self.fail("Error inspecting swarm service: %s" % exc)
json_str = json.dumps(service_info, ensure_ascii=False)
service_info = json.loads(json_str)
return service_info
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,127 |
[ovirt] clear up fetch_nested parameter works
|
##### SUMMARY
We should document that by default we go only over first-level attributes, not any deeper level of attributes, and maybe add a parameter which enables going deeper.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ansible/lib/ansible/modules/cloud/ovirt/*_facts.py
##### ADDITIONAL INFORMATION
Discussion was originally raised in https://bugzilla.redhat.com/show_bug.cgi?id=1653743
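For illustration, a minimal sketch of how the two options combine today (module and attribute names are assumptions; only first-level attributes of the nested entities are fetched):

```yaml
- ovirt_vm_info:
    auth: "{{ ovirt_auth }}"
    pattern: name=myvm
    fetch_nested: true
    nested_attributes:
      - name
      - bootable
  register: vm_info
```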
|
https://github.com/ansible/ansible/issues/63127
|
https://github.com/ansible/ansible/pull/63191
|
e9d29b1fe4285d90d7a4506b80260a9e24c3bcea
|
0beab6bf699fc014aeb7bb74af7a692b2e7d462f
| 2019-10-04T10:50:59Z |
python
| 2019-10-14T16:21:17Z |
lib/ansible/plugins/doc_fragments/ovirt_info.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2016, Red Hat, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
class ModuleDocFragment(object):
# info standard oVirt documentation fragment
DOCUMENTATION = r'''
options:
fetch_nested:
description:
- If I(yes) the module will fetch additional data from the API.
- It will fetch IDs of the VM's disks, snapshots, etc. The user can configure which other
attributes of the nested entities to fetch by specifying C(nested_attributes).
type: bool
version_added: "2.3"
nested_attributes:
description:
- Specifies list of the attributes which should be fetched from the API.
- This parameter applies only when C(fetch_nested) is I(true).
type: list
version_added: "2.3"
auth:
description:
- "Dictionary with values needed to create HTTP/HTTPS connection to oVirt:"
- C(username)[I(required)] - The name of the user, something like I(admin@internal).
Default value is set by I(OVIRT_USERNAME) environment variable.
- "C(password)[I(required)] - The password of the user. Default value is set by I(OVIRT_PASSWORD) environment variable."
- "C(url)- A string containing the API URL of the server, usually
something like `I(https://server.example.com/ovirt-engine/api)`. Default value is set by I(OVIRT_URL) environment variable.
Either C(url) or C(hostname) is required."
- "C(hostname) - A string containing the hostname of the server, usually
something like `I(server.example.com)`. Default value is set by I(OVIRT_HOSTNAME) environment variable.
Either C(url) or C(hostname) is required."
- "C(token) - Token to be used instead of login with username/password. Default value is set by I(OVIRT_TOKEN) environment variable."
- "C(insecure) - A boolean flag that indicates if the server TLS
certificate and host name should be checked."
- "C(ca_file) - A PEM file containing the trusted CA certificates. The
certificate presented by the server will be verified using these CA
certificates. If `C(ca_file)` parameter is not set, system wide
CA certificate store is used. Default value is set by I(OVIRT_CAFILE) environment variable."
- "C(kerberos) - A boolean flag indicating if Kerberos authentication
should be used instead of the default basic authentication."
- "C(headers) - Dictionary of HTTP headers to be added to each API call."
type: dict
required: true
requirements:
- python >= 2.7
- ovirt-engine-sdk-python >= 4.3.0
notes:
- "In order to use this module you have to install oVirt Python SDK.
To ensure it's installed with the correct version you can create the following task:
pip: name=ovirt-engine-sdk-python version=4.3.0"
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,510 |
user module produces UnboundLocalError when targeting AIX nodes
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
In Ansible 2.8.2, using the `user` module against an AIX host produces the following error:
~~~
The full traceback is:
Traceback (most recent call last):
File "<stdin>", line 114, in <module>
File "<stdin>", line 106, in _ansiballz_main
File "<stdin>", line 49, in invoke_module
File "/tmp/ansible_user_payload_IVarsv/__main__.py", line 2953, in <module>
File "/tmp/ansible_user_payload_IVarsv/__main__.py", line 2892, in main
File "/tmp/ansible_user_payload_IVarsv/__main__.py", line 1056, in modify_user
File "/tmp/ansible_user_payload_IVarsv/__main__.py", line 2402, in modify_user_usermod
File "/tmp/ansible_user_payload_IVarsv/__main__.py", line 886, in user_info
File "/tmp/ansible_user_payload_IVarsv/__main__.py", line 910, in user_password
File "/tmp/ansible_user_payload_IVarsv/__main__.py", line 2517, in parse_shadow_file
UnboundLocalError: local variable 'b_passwd' referenced before assignment
~~~
This may be a regression, as Ansible 2.7 executes `user` successfully.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`user`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible-playbook 2.8.2
config file = /ansible/tower/projects/_214__mq_client_jar_deployment/ansible.cfg
configured module search path = [u'/var/lib/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.5 (default, May 31 2018, 09:41:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
IBM AIX
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. Run the `user` module against a set of IBM AIX nodes using Ansible 2.7.
2. Upgrade Ansible to 2.8.2.
3. Run the `user` module against a set of IBM AIX nodes using Ansible 2.8.2.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
tasks:
- name: Get sys user status
user:
name: sys
group: sys
check_mode: true
register: user_out
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Ansible should have executed the `user` module per the preceding playbook task.
~~~
TASK [Get sys user status] *****************************************************
ok: [node1.is.local]
ok: [node2.is.local]
ok: [node3.is.local]
ok: [node4.is.local]
~~~
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Instead, the Ansible task fails with the following stack trace:
<!--- Paste verbatim command output between quotes -->
```paste below
The full traceback is:
Traceback (most recent call last):
File "<stdin>", line 114, in <module>
File "<stdin>", line 106, in _ansiballz_main
File "<stdin>", line 49, in invoke_module
File "/tmp/ansible_user_payload_IVarsv/__main__.py", line 2953, in <module>
File "/tmp/ansible_user_payload_IVarsv/__main__.py", line 2892, in main
File "/tmp/ansible_user_payload_IVarsv/__main__.py", line 1056, in modify_user
File "/tmp/ansible_user_payload_IVarsv/__main__.py", line 2402, in modify_user_usermod
File "/tmp/ansible_user_payload_IVarsv/__main__.py", line 886, in user_info
File "/tmp/ansible_user_payload_IVarsv/__main__.py", line 910, in user_password
File "/tmp/ansible_user_payload_IVarsv/__main__.py", line 2517, in parse_shadow_file
UnboundLocalError: local variable 'b_passwd' referenced before assignment
```
The stack trace points to `parse_shadow_file()`, which was introduced in the following commit: https://github.com/ansible/ansible/commit/f27eccabbd00f628f6e4195be7e1e4e5c463cba2#diff-8f60df5685f83b8659a1106d36591e93
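For reference, the failure pattern is easy to reproduce outside the module. The sketch below is illustrative only (the function names are hypothetical, not the module's actual code): a local variable assigned only inside the match branch is never bound when the target user has no line in the shadow file, so the final `return` raises `UnboundLocalError`. Initializing the return values first, as the `parse_shadow_file()` shown later in this document does, avoids the crash.
```python
# Hypothetical sketch of the bug: 'b_passwd' is bound only when a line matches.
def parse_shadow_file_buggy(lines, name):
    for line in lines:
        if line.startswith('%s:' % name):
            b_passwd = line.split(':')[1]
            expires = line.split(':')[7] or -1
    return b_passwd, expires  # UnboundLocalError when no line matched

# Initializing safe defaults makes the no-entry case return ('', '') instead.
def parse_shadow_file_fixed(lines, name):
    passwd = ''
    expires = ''
    for line in lines:
        if line.startswith('%s:' % name):
            passwd = line.split(':')[1]
            expires = line.split(':')[7] or -1
    return passwd, expires

shadow = ['root:!:18000:0:99999:7:::']
print(parse_shadow_file_fixed(shadow, 'sys'))   # ('', '') -- no crash
print(parse_shadow_file_buggy(shadow, 'sys'))   # raises UnboundLocalError
```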
|
https://github.com/ansible/ansible/issues/62510
|
https://github.com/ansible/ansible/pull/62547
|
d8389d9f5513b8924a40767ef7d181cf41bfaea1
|
e9d10f94b7722710c50f9923c7a7eea652a98228
| 2019-09-18T12:50:02Z |
python
| 2019-10-14T19:44:22Z |
changelogs/fragments/user-aix-shadow-unbound-local.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,510 |
user module produces UnboundLocalError when targeting AIX nodes
|
|
https://github.com/ansible/ansible/issues/62510
|
https://github.com/ansible/ansible/pull/62547
|
d8389d9f5513b8924a40767ef7d181cf41bfaea1
|
e9d10f94b7722710c50f9923c7a7eea652a98228
| 2019-09-18T12:50:02Z |
python
| 2019-10-14T19:44:22Z |
lib/ansible/modules/system/user.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Stephen Fromm <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
module: user
version_added: "0.2"
short_description: Manage user accounts
description:
- Manage user accounts and user attributes.
- For Windows targets, use the M(win_user) module instead.
options:
name:
description:
- Name of the user to create, remove or modify.
type: str
required: true
aliases: [ user ]
uid:
description:
- Optionally sets the I(UID) of the user.
type: int
comment:
description:
- Optionally sets the description (aka I(GECOS)) of user account.
type: str
hidden:
description:
- macOS only, optionally hide the user from the login window and system preferences.
- The default will be C(yes) if the I(system) option is used.
type: bool
version_added: "2.6"
non_unique:
description:
- Optionally, when used with the -u option, this option allows changing the user ID to a non-unique value.
type: bool
default: no
version_added: "1.1"
seuser:
description:
- Optionally sets the seuser type (user_u) on SELinux-enabled systems.
type: str
version_added: "2.1"
group:
description:
- Optionally sets the user's primary group (takes a group name).
type: str
groups:
description:
- List of groups user will be added to. When set to an empty string C(''),
the user is removed from all groups except the primary group.
- Before Ansible 2.3, the only input format allowed was a comma separated string.
- Mutually exclusive with C(local)
type: list
append:
description:
- If C(yes), add the user to the groups specified in C(groups).
- If C(no), user will only be added to the groups specified in C(groups),
removing them from all other groups.
- Mutually exclusive with C(local)
type: bool
default: no
shell:
description:
- Optionally set the user's shell.
- On macOS, before Ansible 2.5, the default shell for non-system users was C(/usr/bin/false).
Since Ansible 2.5, the default shell for non-system users on macOS is C(/bin/bash).
- On other operating systems, the default shell is determined by the underlying tool being
used. See Notes for details.
type: str
home:
description:
- Optionally set the user's home directory.
type: path
skeleton:
description:
- Optionally set a home skeleton directory.
- Requires C(create_home) option!
type: str
version_added: "2.0"
password:
description:
- Optionally set the user's password to this crypted value.
- On macOS systems, this value has to be cleartext. Beware of security issues.
- To create a disabled account on Linux systems, set this to C('!') or C('*').
- To create a disabled account on OpenBSD, set this to C('*************').
- See U(https://docs.ansible.com/ansible/faq.html#how-do-i-generate-encrypted-passwords-for-the-user-module)
for details on various ways to generate these password values.
type: str
state:
description:
- Whether the account should exist or not, taking action if the state is different from what is stated.
type: str
choices: [ absent, present ]
default: present
create_home:
description:
- Unless set to C(no), a home directory will be made for the user
when the account is created or if the home directory does not exist.
- Changed from C(createhome) to C(create_home) in Ansible 2.5.
type: bool
default: yes
aliases: [ createhome ]
move_home:
description:
- "If set to C(yes) when used with C(home: ), attempt to move the user's old home
directory to the specified directory if it isn't there already and the old home exists."
type: bool
default: no
system:
description:
- When creating an account C(state=present), setting this to C(yes) makes the user a system account.
- This setting cannot be changed on existing users.
type: bool
default: no
force:
description:
- This only affects C(state=absent), it forces removal of the user and associated directories on supported platforms.
- The behavior is the same as C(userdel --force), check the man page for C(userdel) on your system for details and support.
- When used with C(generate_ssh_key=yes) this forces an existing key to be overwritten.
type: bool
default: no
remove:
description:
- This only affects C(state=absent), it attempts to remove directories associated with the user.
- The behavior is the same as C(userdel --remove), check the man page for details and support.
type: bool
default: no
login_class:
description:
- Optionally sets the user's login class, a feature of most BSD OSs.
type: str
generate_ssh_key:
description:
- Whether to generate a SSH key for the user in question.
- This will B(not) overwrite an existing SSH key unless used with C(force=yes).
type: bool
default: no
version_added: "0.9"
ssh_key_bits:
description:
- Optionally specify number of bits in SSH key to create.
type: int
default: default set by ssh-keygen
version_added: "0.9"
ssh_key_type:
description:
- Optionally specify the type of SSH key to generate.
- Available SSH key types will depend on implementation
present on target host.
type: str
default: rsa
version_added: "0.9"
ssh_key_file:
description:
- Optionally specify the SSH key filename.
- If this is a relative filename then it will be relative to the user's home directory.
- This parameter defaults to I(.ssh/id_rsa).
type: path
version_added: "0.9"
ssh_key_comment:
description:
- Optionally define the comment for the SSH key.
type: str
default: ansible-generated on $HOSTNAME
version_added: "0.9"
ssh_key_passphrase:
description:
- Set a passphrase for the SSH key.
- If no passphrase is provided, the SSH key will default to having no passphrase.
type: str
version_added: "0.9"
update_password:
description:
- C(always) will update passwords if they differ.
- C(on_create) will only set the password for newly created users.
type: str
choices: [ always, on_create ]
default: always
version_added: "1.3"
expires:
description:
- An expiry time for the user in epoch, it will be ignored on platforms that do not support this.
- Currently supported on GNU/Linux, FreeBSD, and DragonFlyBSD.
- Since Ansible 2.6 you can remove the expiry time by specifying a negative value.
Currently supported on GNU/Linux and FreeBSD.
type: float
version_added: "1.9"
password_lock:
description:
- Lock the password (usermod -L, pw lock, usermod -C).
- BUT implementation differs on different platforms, this option does not always mean the user cannot login via other methods.
- This option does not disable the user, only lock the password. Do not change the password in the same task.
- Currently supported on Linux, FreeBSD, DragonFlyBSD, NetBSD, OpenBSD.
type: bool
version_added: "2.6"
local:
description:
- Forces the use of "local" command alternatives on platforms that implement it.
- This is useful in environments that use centralized authentication when you want to manipulate the local users
(i.e. it uses C(luseradd) instead of C(useradd)).
- This will check C(/etc/passwd) for an existing account before invoking commands. If the local account database
exists somewhere other than C(/etc/passwd), this setting will not work properly.
- This requires that the above commands as well as C(/etc/passwd) must exist on the target host, otherwise it will be a fatal error.
- Mutually exclusive with C(groups) and C(append)
type: bool
default: no
version_added: "2.4"
profile:
description:
- Sets the profile of the user.
- Does nothing when used with other platforms.
- Can set multiple profiles using comma separation.
- To delete all the profiles, use C(profile='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
authorization:
description:
- Sets the authorization of the user.
- Does nothing when used with other platforms.
- Can set multiple authorizations using comma separation.
- To delete all authorizations, use C(authorization='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
role:
description:
- Sets the role of the user.
- Does nothing when used with other platforms.
- Can set multiple roles using comma separation.
- To delete all roles, use C(role='').
- Currently supported on Illumos/Solaris.
type: str
version_added: "2.8"
notes:
- There are specific requirements per platform on user management utilities. However
they generally come pre-installed with the system and Ansible will require they
are present at runtime. If they are not, a descriptive error message will be shown.
- On SunOS platforms, the shadow file is backed up automatically since this module edits it directly.
On other platforms, the shadow file is backed up by the underlying tools used by this module.
- On macOS, this module uses C(dscl) to create, modify, and delete accounts. C(dseditgroup) is used to
modify group membership. Accounts are hidden from the login window by modifying
C(/Library/Preferences/com.apple.loginwindow.plist).
- On FreeBSD, this module uses C(pw useradd) and C(chpass) to create, C(pw usermod) and C(chpass) to modify,
C(pw userdel) to remove, C(pw lock) to lock, and C(pw unlock) to unlock accounts.
- On all other platforms, this module uses C(useradd) to create, C(usermod) to modify, and
C(userdel) to remove accounts.
seealso:
- module: authorized_key
- module: group
- module: win_user
author:
- Stephen Fromm (@sfromm)
'''
EXAMPLES = r'''
- name: Add the user 'johnd' with a specific uid and a primary group of 'admin'
user:
name: johnd
comment: John Doe
uid: 1040
group: admin
- name: Add the user 'james' with a bash shell, appending the group 'admins' and 'developers' to the user's groups
user:
name: james
shell: /bin/bash
groups: admins,developers
append: yes
- name: Remove the user 'johnd'
user:
name: johnd
state: absent
remove: yes
- name: Create a 2048-bit SSH key for user jsmith in ~jsmith/.ssh/id_rsa
user:
name: jsmith
generate_ssh_key: yes
ssh_key_bits: 2048
ssh_key_file: .ssh/id_rsa
- name: Added a consultant whose account you want to expire
user:
name: james18
shell: /bin/zsh
groups: developers
expires: 1422403387
- name: Starting at Ansible 2.6, modify user, remove expiry time
user:
name: james18
expires: -1
'''
RETURN = r'''
append:
description: Whether or not to append the user to groups
returned: When state is 'present' and the user exists
type: bool
sample: True
comment:
description: Comment section from passwd file, usually the user name
returned: When user exists
type: str
sample: Agent Smith
create_home:
description: Whether or not to create the home directory
returned: When user does not exist and not check mode
type: bool
sample: True
force:
description: Whether or not a user account was forcibly deleted
returned: When state is 'absent' and user exists
type: bool
sample: False
group:
description: Primary user group ID
returned: When user exists
type: int
sample: 1001
groups:
description: List of groups of which the user is a member
returned: When C(groups) is not empty and C(state) is 'present'
type: str
sample: 'chrony,apache'
home:
description: "Path to user's home directory"
returned: When C(state) is 'present'
type: str
sample: '/home/asmith'
move_home:
description: Whether or not to move an existing home directory
returned: When C(state) is 'present' and user exists
type: bool
sample: False
name:
description: User account name
returned: always
type: str
sample: asmith
password:
description: Masked value of the password
returned: When C(state) is 'present' and C(password) is not empty
type: str
sample: 'NOT_LOGGING_PASSWORD'
remove:
description: Whether or not to remove the user account
returned: When C(state) is 'absent' and user exists
type: bool
sample: True
shell:
description: User login shell
returned: When C(state) is 'present'
type: str
sample: '/bin/bash'
ssh_fingerprint:
description: Fingerprint of generated SSH key
returned: When C(generate_ssh_key) is C(True)
type: str
sample: '2048 SHA256:aYNHYcyVm87Igh0IMEDMbvW0QDlRQfE0aJugp684ko8 ansible-generated on host (RSA)'
ssh_key_file:
description: Path to generated SSH private key file
returned: When C(generate_ssh_key) is C(True)
type: str
sample: /home/asmith/.ssh/id_rsa
ssh_public_key:
description: Generated SSH public key file
returned: When C(generate_ssh_key) is C(True)
type: str
sample: >
'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC95opt4SPEC06tOYsJQJIuN23BbLMGmYo8ysVZQc4h2DZE9ugbjWWGS1/pweUGjVstgzMkBEeBCByaEf/RJKNecKRPeGd2Bw9DCj/bn5Z6rGfNENKBmo
618mUJBvdlEgea96QGjOwSB7/gmonduC7gsWDMNcOdSE3wJMTim4lddiBx4RgC9yXsJ6Tkz9BHD73MXPpT5ETnse+A3fw3IGVSjaueVnlUyUmOBf7fzmZbhlFVXf2Zi2rFTXqvbdGHKkzpw1U8eB8xFPP7y
d5u1u0e6Acju/8aZ/l17IDFiLke5IzlqIMRTEbDwLNeO84YQKWTm9fODHzhYe0yvxqLiK07 ansible-generated on host'
stderr:
description: Standard error from running commands
returned: When stderr is returned by a command that is run
type: str
sample: Group wheels does not exist
stdout:
description: Standard output from running commands
returned: When standard output is returned by the command that is run
type: str
sample:
system:
description: Whether or not the account is a system account
returned: When C(system) is passed to the module and the account does not exist
type: bool
sample: True
uid:
description: User ID of the user account
returned: When C(UID) is passed to the module
type: int
sample: 1044
'''
import errno
import grp
import calendar
import os
import re
import pty
import pwd
import select
import shutil
import socket
import subprocess
import time
from ansible.module_utils import distro
from ansible.module_utils._text import to_native, to_bytes, to_text
from ansible.module_utils.basic import load_platform_subclass, AnsibleModule
try:
import spwd
HAVE_SPWD = True
except ImportError:
HAVE_SPWD = False
_HASH_RE = re.compile(r'[^a-zA-Z0-9./=]')
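# Any character outside the crypt(3) hash alphabet (a-z, A-Z, 0-9, '.', '/', '=')
# in the hash field suggests the password value was not actually encrypted.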
class User(object):
"""
This is a generic User manipulation class that is subclassed
based on platform.
A subclass may wish to override the following action methods:-
- create_user()
- remove_user()
- modify_user()
- ssh_key_gen()
- ssh_key_fingerprint()
- user_exists()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
PASSWORDFILE = '/etc/passwd'
SHADOWFILE = '/etc/shadow'
SHADOWFILE_EXPIRE_INDEX = 7
LOGIN_DEFS = '/etc/login.defs'
DATE_FORMAT = '%Y-%m-%d'
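# Instantiating User actually returns the platform-specific subclass whose
# 'platform' (and optional 'distribution') attributes match the target host.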
def __new__(cls, *args, **kwargs):
return load_platform_subclass(User, args, kwargs)
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.name = module.params['name']
self.uid = module.params['uid']
self.hidden = module.params['hidden']
self.non_unique = module.params['non_unique']
self.seuser = module.params['seuser']
self.group = module.params['group']
self.comment = module.params['comment']
self.shell = module.params['shell']
self.password = module.params['password']
self.force = module.params['force']
self.remove = module.params['remove']
self.create_home = module.params['create_home']
self.move_home = module.params['move_home']
self.skeleton = module.params['skeleton']
self.system = module.params['system']
self.login_class = module.params['login_class']
self.append = module.params['append']
self.sshkeygen = module.params['generate_ssh_key']
self.ssh_bits = module.params['ssh_key_bits']
self.ssh_type = module.params['ssh_key_type']
self.ssh_comment = module.params['ssh_key_comment']
self.ssh_passphrase = module.params['ssh_key_passphrase']
self.update_password = module.params['update_password']
self.home = module.params['home']
self.expires = None
self.password_lock = module.params['password_lock']
self.groups = None
self.local = module.params['local']
self.profile = module.params['profile']
self.authorization = module.params['authorization']
self.role = module.params['role']
if module.params['groups'] is not None:
self.groups = ','.join(module.params['groups'])
if module.params['expires'] is not None:
try:
self.expires = time.gmtime(module.params['expires'])
except Exception as e:
module.fail_json(msg="Invalid value for 'expires' %s: %s" % (self.expires, to_native(e)))
if module.params['ssh_key_file'] is not None:
self.ssh_file = module.params['ssh_key_file']
else:
self.ssh_file = os.path.join('.ssh', 'id_%s' % self.ssh_type)
def check_password_encrypted(self):
# Darwin needs cleartext password, so skip validation
if self.module.params['password'] and self.platform != 'Darwin':
maybe_invalid = False
# Allow setting certain passwords in order to disable the account
if self.module.params['password'] in set(['*', '!', '*************']):
maybe_invalid = False
else:
# : for delimiter, * for disable user, ! for lock user
# these characters are invalid in the password
if any(char in self.module.params['password'] for char in ':*!'):
maybe_invalid = True
if '$' not in self.module.params['password']:
maybe_invalid = True
else:
fields = self.module.params['password'].split("$")
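# crypt(3) values look like $<id>$<salt>$<hash>; after split('$'),
# fields[1] is the scheme id and fields[-1] is the hash itself.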
if len(fields) >= 3:
# contains character outside the crypto constraint
if bool(_HASH_RE.search(fields[-1])):
maybe_invalid = True
# md5
if fields[1] == '1' and len(fields[-1]) != 22:
maybe_invalid = True
# sha256
if fields[1] == '5' and len(fields[-1]) != 43:
maybe_invalid = True
# sha512
if fields[1] == '6' and len(fields[-1]) != 86:
maybe_invalid = True
else:
maybe_invalid = True
if maybe_invalid:
self.module.warn("The input password appears not to have been hashed. "
"The 'password' argument must be encrypted for this module to work properly.")
def execute_command(self, cmd, use_unsafe_shell=False, data=None, obey_checkmode=True):
if self.module.check_mode and obey_checkmode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
else:
# cast all args to strings ansible-modules-core/issues/4397
cmd = [str(x) for x in cmd]
return self.module.run_command(cmd, use_unsafe_shell=use_unsafe_shell, data=data)
def backup_shadow(self):
if not self.module.check_mode and self.SHADOWFILE:
return self.module.backup_local(self.SHADOWFILE)
def remove_user_userdel(self):
if self.local:
command_name = 'luserdel'
else:
command_name = 'userdel'
cmd = [self.module.get_bin_path(command_name, True)]
if self.force:
cmd.append('-f')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self):
if self.local:
command_name = 'luseradd'
else:
command_name = 'useradd'
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.seuser is not None:
cmd.append('-Z')
cmd.append(self.seuser)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
elif self.group_exists(self.name):
# use the -N option (no user group) if a group already
# exists with the same name as the user to prevent
# errors from useradd trying to create a group when
# USERGROUPS_ENAB is set in /etc/login.defs.
if os.path.exists('/etc/redhat-release'):
dist = distro.linux_distribution(full_distribution_name=False)
major_release = int(dist[1].split('.')[0])
if major_release <= 5:
cmd.append('-n')
else:
cmd.append('-N')
elif os.path.exists('/etc/SuSE-release'):
# -N did not exist in useradd before SLE 11 and did not
# automatically create a group
dist = distro.linux_distribution(full_distribution_name=False)
major_release = int(dist[1].split('.')[0])
if major_release >= 12:
cmd.append('-N')
else:
cmd.append('-N')
if self.groups is not None and not self.local and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
# If the specified path to the user home contains parent directories that
# do not exist, first create the home directory since useradd cannot
# create parent directories
parent = os.path.dirname(self.home)
if not os.path.isdir(parent):
self.create_homedir(self.home)
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('')
else:
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
if not self.local:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def _check_usermod_append(self):
# check if this version of usermod can append groups
if self.local:
command_name = 'lusermod'
else:
command_name = 'usermod'
usermod_path = self.module.get_bin_path(command_name, True)
# for some reason, usermod --help cannot be used by non root
# on RH/Fedora, due to lack of execute bit for others
if not os.access(usermod_path, os.X_OK):
return False
cmd = [usermod_path, '--help']
(rc, data1, data2) = self.execute_command(cmd, obey_checkmode=False)
helpout = data1 + data2
# check if --append exists
lines = to_native(helpout).split('\n')
for line in lines:
if line.strip().startswith('-a, --append'):
return True
return False
def modify_user_usermod(self):
if self.local:
command_name = 'lusermod'
else:
command_name = 'usermod'
cmd = [self.module.get_bin_path(command_name, True)]
info = self.user_info()
has_append = self._check_usermod_append()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
# get a list of all groups for the user, including the primary
current_groups = self.user_group_membership(exclude_primary=False)
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
if has_append:
cmd.append('-a')
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod and not self.local:
if self.append and not has_append:
cmd.append('-A')
cmd.append(','.join(group_diff))
else:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.expires is not None:
current_expires = int(self.user_password()[1])
if self.expires < time.gmtime(0):
if current_expires >= 0:
cmd.append('-e')
cmd.append('')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires * 86400)
# Current expires is negative or we compare year, month, and day only
if current_expires < 0 or current_expire_date[:3] != self.expires[:3]:
cmd.append('-e')
cmd.append(time.strftime(self.DATE_FORMAT, self.expires))
# Lock if no password or unlocked, unlock only if locked
if self.password_lock and not info[1].startswith('!'):
cmd.append('-L')
elif self.password_lock is False and info[1].startswith('!'):
# usermod will refuse to unlock a user with no password, module shows 'changed' regardless
cmd.append('-U')
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
def group_exists(self, group):
try:
# Try group as a gid first
grp.getgrgid(int(group))
return True
except (ValueError, KeyError):
try:
grp.getgrnam(group)
return True
except KeyError:
return False
def group_info(self, group):
if not self.group_exists(group):
return False
try:
# Try group as a gid first
return list(grp.getgrgid(int(group)))
except (ValueError, KeyError):
return list(grp.getgrnam(group))
def get_groups_set(self, remove_existing=True):
if self.groups is None:
return None
info = self.user_info()
groups = set(x.strip() for x in self.groups.split(',') if x)
for g in groups.copy():
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
if info and remove_existing and self.group_info(g)[2] == info[3]:
groups.remove(g)
return groups
def user_group_membership(self, exclude_primary=True):
''' Return a list of groups the user belongs to '''
groups = []
info = self.get_pwd_info()
for group in grp.getgrall():
if self.name in group.gr_mem:
# Exclude the user's primary group by default
if not exclude_primary:
groups.append(group[0])
else:
if info[3] != group.gr_gid:
groups.append(group[0])
return groups
def user_exists(self):
# The pwd module does not distinguish between local and directory accounts.
# Its output cannot be used to determine whether or not an account exists locally.
# It returns True if the account exists locally or in the directory, so instead
# look in the local PASSWORD file for an existing account.
if self.local:
if not os.path.exists(self.PASSWORDFILE):
self.module.fail_json(msg="'local: true' specified but unable to find local account file {0} to parse.".format(self.PASSWORDFILE))
exists = False
name_test = '{0}:'.format(self.name)
with open(self.PASSWORDFILE, 'rb') as f:
reversed_lines = f.readlines()[::-1]
for line in reversed_lines:
if line.startswith(to_bytes(name_test)):
exists = True
break
if not exists:
self.module.warn(
"'local: true' specified and user '{name}' was not found in {file}. "
"The local user account may already exist if the local account database exists "
"somewhere other than {file}.".format(file=self.PASSWORDFILE, name=self.name))
return exists
else:
try:
if pwd.getpwnam(self.name):
return True
except KeyError:
return False
def get_pwd_info(self):
if not self.user_exists():
return False
return list(pwd.getpwnam(self.name))
def user_info(self):
if not self.user_exists():
return False
info = self.get_pwd_info()
if len(info[1]) == 1 or len(info[1]) == 0:
info[1] = self.user_password()[0]
return info
def user_password(self):
passwd = ''
expires = ''
if HAVE_SPWD:
try:
passwd = spwd.getspnam(self.name)[1]
expires = spwd.getspnam(self.name)[7]
return passwd, expires
except KeyError:
return passwd, expires
except OSError as e:
# Python 3.6 raises PermissionError instead of KeyError
# Due to absence of PermissionError in python2.7 need to check
# errno
if e.errno in (errno.EACCES, errno.EPERM):
return passwd, expires
raise
if not self.user_exists():
return passwd, expires
elif self.SHADOWFILE:
passwd, expires = self.parse_shadow_file()
return passwd, expires
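# passwd and expires start as safe defaults so that a missing or unreadable
# shadow file, or a user with no shadow entry, returns ('', '') instead of
# raising UnboundLocalError (the failure reported against AIX above).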
def parse_shadow_file(self):
passwd = ''
expires = ''
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
passwd = line.split(':')[1]
expires = line.split(':')[self.SHADOWFILE_EXPIRE_INDEX] or -1
return passwd, expires
def get_ssh_key_path(self):
info = self.user_info()
if os.path.isabs(self.ssh_file):
ssh_key_file = self.ssh_file
else:
if not os.path.exists(info[5]) and not self.module.check_mode:
raise Exception('User %s home directory does not exist' % self.name)
ssh_key_file = os.path.join(info[5], self.ssh_file)
return ssh_key_file
def ssh_key_gen(self):
info = self.user_info()
overwrite = None
try:
ssh_key_file = self.get_ssh_key_path()
except Exception as e:
return (1, '', to_native(e))
ssh_dir = os.path.dirname(ssh_key_file)
if not os.path.exists(ssh_dir):
if self.module.check_mode:
return (0, '', '')
try:
os.mkdir(ssh_dir, int('0700', 8))
os.chown(ssh_dir, info[2], info[3])
except OSError as e:
return (1, '', 'Failed to create %s: %s' % (ssh_dir, to_native(e)))
if os.path.exists(ssh_key_file):
if self.force:
# ssh-keygen doesn't support overwriting the key interactively, so send 'y' to confirm
overwrite = 'y'
else:
return (None, 'Key already exists, use "force: yes" to overwrite', '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-t')
cmd.append(self.ssh_type)
if self.ssh_bits > 0:
cmd.append('-b')
cmd.append(self.ssh_bits)
cmd.append('-C')
cmd.append(self.ssh_comment)
cmd.append('-f')
cmd.append(ssh_key_file)
if self.ssh_passphrase is not None:
if self.module.check_mode:
self.module.debug('In check mode, would have run: "%s"' % cmd)
return (0, '', '')
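# ssh-keygen only prompts for a passphrase on a terminal, so drive it
# through pseudo-terminals and answer the prompts programmatically.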
master_in_fd, slave_in_fd = pty.openpty()
master_out_fd, slave_out_fd = pty.openpty()
master_err_fd, slave_err_fd = pty.openpty()
env = os.environ.copy()
env['LC_ALL'] = 'C'
try:
p = subprocess.Popen([to_bytes(c) for c in cmd],
stdin=slave_in_fd,
stdout=slave_out_fd,
stderr=slave_err_fd,
preexec_fn=os.setsid,
env=env)
out_buffer = b''
err_buffer = b''
while p.poll() is None:
r, w, e = select.select([master_out_fd, master_err_fd], [], [], 1)
first_prompt = b'Enter passphrase (empty for no passphrase):'
second_prompt = b'Enter same passphrase again'
prompt = first_prompt
for fd in r:
if fd == master_out_fd:
chunk = os.read(master_out_fd, 10240)
out_buffer += chunk
if prompt in out_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
else:
chunk = os.read(master_err_fd, 10240)
err_buffer += chunk
if prompt in err_buffer:
os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r')
prompt = second_prompt
if b'Overwrite (y/n)?' in out_buffer or b'Overwrite (y/n)?' in err_buffer:
# The key was created between us checking for existence and now
return (None, 'Key already exists', '')
rc = p.returncode
out = to_native(out_buffer)
err = to_native(err_buffer)
except OSError as e:
return (1, '', to_native(e))
else:
cmd.append('-N')
cmd.append('')
(rc, out, err) = self.execute_command(cmd, data=overwrite)
if rc == 0 and not self.module.check_mode:
# If the keys were successfully created, we should be able
# to tweak ownership.
os.chown(ssh_key_file, info[2], info[3])
os.chown('%s.pub' % ssh_key_file, info[2], info[3])
return (rc, out, err)
def ssh_key_fingerprint(self):
ssh_key_file = self.get_ssh_key_path()
if not os.path.exists(ssh_key_file):
return (1, 'SSH Key file %s does not exist' % ssh_key_file, '')
cmd = [self.module.get_bin_path('ssh-keygen', True)]
cmd.append('-l')
cmd.append('-f')
cmd.append(ssh_key_file)
return self.execute_command(cmd, obey_checkmode=False)
def get_ssh_public_key(self):
ssh_public_key_file = '%s.pub' % self.get_ssh_key_path()
try:
with open(ssh_public_key_file, 'r') as f:
ssh_public_key = f.read().strip()
except IOError:
return None
return ssh_public_key
def create_user(self):
# by default we use the create_user_useradd method
return self.create_user_useradd()
def remove_user(self):
# by default we use the remove_user_userdel method
return self.remove_user_userdel()
def modify_user(self):
# by default we use the modify_user_usermod method
return self.modify_user_usermod()
def create_homedir(self, path):
if not os.path.exists(path):
if self.skeleton is not None:
skeleton = self.skeleton
else:
skeleton = '/etc/skel'
if os.path.exists(skeleton):
try:
shutil.copytree(skeleton, path, symlinks=True)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
else:
try:
os.makedirs(path)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# get umask from /etc/login.defs and set correct home mode
if os.path.exists(self.LOGIN_DEFS):
with open(self.LOGIN_DEFS, 'r') as f:
for line in f:
m = re.match(r'^UMASK\s+(\d+)$', line)
if m:
umask = int(m.group(1), 8)
mode = 0o777 & ~umask
try:
os.chmod(path, mode)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
def chown_homedir(self, uid, gid, path):
try:
os.chown(path, uid, gid)
for root, dirs, files in os.walk(path):
for d in dirs:
os.chown(os.path.join(root, d), uid, gid)
for f in files:
os.chown(os.path.join(root, f), uid, gid)
except OSError as e:
self.module.exit_json(failed=True, msg="%s" % to_native(e))
# ===========================================
class FreeBsdUser(User):
"""
This is a FreeBSD User manipulation class - it uses the pw command
to manipulate the user database, followed by the chpass command
to change the password.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'FreeBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
SHADOWFILE_EXPIRE_INDEX = 6
DATE_FORMAT = '%d-%b-%Y'
def remove_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'userdel',
'-n',
self.name
]
if self.remove:
cmd.append('-r')
return self.execute_command(cmd)
def create_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'useradd',
'-n',
self.name,
]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.expires is not None:
cmd.append('-e')
if self.expires < time.gmtime(0):
cmd.append('0')
else:
cmd.append(str(calendar.timegm(self.expires)))
# system cannot be handled currently - should we error if it's requested?
# create the user
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# we have to set the password in a second command
if self.password is not None:
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
return self.execute_command(cmd)
return (rc, out, err)
def modify_user(self):
cmd = [
self.module.get_bin_path('pw', True),
'usermod',
'-n',
self.name
]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
if (info[5] != self.home and self.move_home) or (not os.path.exists(self.home) and self.create_home):
cmd.append('-m')
if info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'r') as f:
for line in f:
if line.startswith('%s:' % self.name):
user_login_class = line.split(':')[4]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.expires is not None:
current_expires = int(self.user_password()[1])
# If expiration is negative or zero and the current expiration is greater than zero, disable expiration.
# In OpenBSD, setting expiration to zero disables expiration. It does not expire the account.
if self.expires <= time.gmtime(0):
if current_expires > 0:
cmd.append('-e')
cmd.append('0')
else:
# Convert days since Epoch to seconds since Epoch as struct_time
current_expire_date = time.gmtime(current_expires)
# Current expires is negative or we compare year, month, and day only
if current_expires <= 0 or current_expire_date[:3] != self.expires[:3]:
cmd.append('-e')
cmd.append(str(calendar.timegm(self.expires)))
# modify the user if cmd will do anything
if cmd_len != len(cmd):
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
else:
(rc, out, err) = (None, '', '')
# we have to set the password in a second command
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = [
self.module.get_bin_path('chpass', True),
'-p',
self.password,
self.name
]
return self.execute_command(cmd)
# we have to lock/unlock the password in a distinct command
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'lock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd = [
self.module.get_bin_path('pw', True),
'unlock',
self.name
]
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
return self.execute_command(cmd)
return (rc, out, err)
class DragonFlyBsdUser(FreeBsdUser):
"""
This is a DragonFlyBSD User manipulation class - it inherits the
FreeBsdUser class behaviors, such as using the pw command to
manipulate the user database, followed by the chpass command
to change the password.
"""
platform = 'DragonFly'
class OpenBSDUser(User):
"""
This is a OpenBSD User manipulation class.
Main differences are that OpenBSD:-
- has no concept of "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'OpenBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None and self.password != '*':
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups_option = '-S'
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_option = '-G'
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append(groups_option)
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
# find current login class
user_login_class = None
userinfo_cmd = [self.module.get_bin_path('userinfo', True), self.name]
(rc, out, err) = self.execute_command(userinfo_cmd, obey_checkmode=False)
for line in out.splitlines():
tokens = line.split()
if tokens[0] == 'class' and len(tokens) == 2:
user_login_class = tokens[1]
# act only if login_class change
if self.login_class != user_login_class:
cmd.append('-L')
cmd.append(self.login_class)
if self.password_lock and not info[1].startswith('*'):
cmd.append('-Z')
elif self.password_lock is False and info[1].startswith('*'):
cmd.append('-U')
if self.update_password == 'always' and self.password is not None \
and self.password != '*' and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class NetBSDUser(User):
"""
This is a NetBSD User manipulation class.
Main differences are that NetBSD:-
- has no concept of "system" account.
- has no force delete user
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'NetBSD'
distribution = None
SHADOWFILE = '/etc/master.passwd'
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user_userdel(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups = set(current_groups).union(groups)
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
if len(groups) > 16:
self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." % len(groups))
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.login_class is not None:
cmd.append('-L')
cmd.append(self.login_class)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-p')
cmd.append(self.password)
if self.password_lock and not info[1].startswith('*LOCKED*'):
cmd.append('-C yes')
elif self.password_lock is False and info[1].startswith('*LOCKED*'):
cmd.append('-C no')
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class SunOS(User):
"""
This is a SunOS User manipulation class - The main difference between
this class and the generic user class is that Solaris-type distros
don't support the concept of a "system" account and we need to
edit the /etc/shadow file manually to set a password. (Ugh)
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- user_info()
"""
platform = 'SunOS'
distribution = None
SHADOWFILE = '/etc/shadow'
USER_ATTR = '/etc/user_attr'
def get_password_defaults(self):
# Read password aging defaults
try:
minweeks = ''
maxweeks = ''
warnweeks = ''
with open("/etc/default/passwd", 'r') as f:
for line in f:
line = line.strip()
if (line.startswith('#') or line == ''):
continue
m = re.match(r'^([^#]*)#(.*)$', line)
if m: # The line contains a hash / comment
line = m.group(1)
key, value = line.split('=')
if key == "MINWEEKS":
minweeks = value.rstrip('\n')
elif key == "MAXWEEKS":
maxweeks = value.rstrip('\n')
elif key == "WARNWEEKS":
warnweeks = value.rstrip('\n')
except Exception as err:
self.module.fail_json(msg="failed to read /etc/default/passwd: %s" % to_native(err))
return (minweeks, maxweeks, warnweeks)
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user(self):
cmd = [self.module.get_bin_path('useradd', True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.profile is not None:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None:
cmd.append('-R')
cmd.append(self.role)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if not self.module.check_mode:
# we have to set the password by editing the /etc/shadow file
if self.password is not None:
self.backup_shadow()
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
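# shadow field 2 records the last password change in days since the epoch;
# the aging fields below are also in days, hence the weeks-to-days * 7.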
if minweeks:
try:
fields[3] = str(int(minweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if maxweeks:
try:
fields[4] = str(int(maxweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
if warnweeks:
try:
fields[5] = str(int(warnweeks) * 7)
except ValueError:
# mirror solaris, which allows for any value in this field, and ignores anything that is not an int.
pass
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
cmd_len = len(cmd)
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups.update(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.profile is not None and info[7] != self.profile:
cmd.append('-P')
cmd.append(self.profile)
if self.authorization is not None and info[8] != self.authorization:
cmd.append('-A')
cmd.append(self.authorization)
if self.role is not None and info[9] != self.role:
cmd.append('-R')
cmd.append(self.role)
# modify the user if cmd will do anything
if cmd_len != len(cmd):
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
else:
(rc, out, err) = (None, '', '')
# we have to set the password by editing the /etc/shadow file
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
self.backup_shadow()
(rc, out, err) = (0, '', '')
if not self.module.check_mode:
minweeks, maxweeks, warnweeks = self.get_password_defaults()
try:
lines = []
with open(self.SHADOWFILE, 'rb') as f:
for line in f:
line = to_native(line, errors='surrogate_or_strict')
fields = line.strip().split(':')
if not fields[0] == self.name:
lines.append(line)
continue
fields[1] = self.password
fields[2] = str(int(time.time() // 86400))
if minweeks:
fields[3] = str(int(minweeks) * 7)
if maxweeks:
fields[4] = str(int(maxweeks) * 7)
if warnweeks:
fields[5] = str(int(warnweeks) * 7)
line = ':'.join(fields)
lines.append('%s\n' % line)
with open(self.SHADOWFILE, 'w+') as f:
f.writelines(lines)
rc = 0
except Exception as err:
self.module.fail_json(msg="failed to update users password: %s" % to_native(err))
return (rc, out, err)
def user_info(self):
info = super(SunOS, self).user_info()
if info:
info += self._user_attr_info()
return info
def _user_attr_info(self):
info = [''] * 3
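# user_attr(4) entries typically look like:
#   name::::type=normal;profiles=...;auths=...;roles=...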
with open(self.USER_ATTR, 'r') as file_handler:
for line in file_handler:
lines = line.strip().split('::::')
if lines[0] == self.name:
tmp = dict(x.split('=') for x in lines[1].split(';'))
info[0] = tmp.get('profiles', '')
info[1] = tmp.get('auths', '')
info[2] = tmp.get('roles', '')
return info
class DarwinUser(User):
"""
This is a Darwin macOS User manipulation class.
Main differences are that Darwin:-
- Handles accounts in a database managed by dscl(1)
- Has no useradd/groupadd
- Does not create home directories
- User password must be cleartext
- UID must be given
- System users must be under 500
This overrides the following methods from the generic class:-
- user_exists()
- create_user()
- remove_user()
- modify_user()
"""
platform = 'Darwin'
distribution = None
SHADOWFILE = None
dscl_directory = '.'
fields = [
('comment', 'RealName'),
('home', 'NFSHomeDirectory'),
('shell', 'UserShell'),
('uid', 'UniqueID'),
('group', 'PrimaryGroupID'),
('hidden', 'IsHidden'),
]
def __init__(self, module):
super(DarwinUser, self).__init__(module)
# make the user hidden if the option is set, or defer to the system default
if self.hidden is None:
if self.system:
self.hidden = 1
elif self.hidden:
self.hidden = 1
else:
self.hidden = 0
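# when unspecified on a non-system account, hidden stays None and no IsHidden value is written below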
# add hidden to processing if set
if self.hidden is not None:
self.fields.append(('hidden', 'IsHidden'))
def _get_dscl(self):
return [self.module.get_bin_path('dscl', True), self.dscl_directory]
def _list_user_groups(self):
cmd = self._get_dscl()
cmd += ['-search', '/Groups', 'GroupMembership', self.name]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
groups = []
for line in out.splitlines():
if line.startswith(' ') or line.startswith(')'):
continue
groups.append(line.split()[0])
return groups
def _get_user_property(self, property):
'''Return user PROPERTY as given by dscl(1) read, or None if not found.'''
cmd = self._get_dscl()
cmd += ['-read', '/Users/%s' % self.name, property]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
return None
# from dscl(1)
# if property contains embedded spaces, the list will instead be
# displayed one entry per line, starting on the line after the key.
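# e.g. (illustrative):
#   RealName: Alice
# versus, for a value with embedded spaces:
#   RealName:
#    Alice B. Smith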
lines = out.splitlines()
# sys.stderr.write('*** |%s| %s -> %s\n' % (property, out, lines))
if len(lines) == 1:
return lines[0].split(': ')[1]
else:
if len(lines) > 2:
return '\n'.join([lines[1].strip()] + lines[2:])
else:
if len(lines) == 2:
return lines[1].strip()
else:
return None
def _get_next_uid(self, system=None):
'''
Return the next available uid. If system=True, then
the uid should be below 500, if possible.
'''
cmd = self._get_dscl()
cmd += ['-list', '/Users', 'UniqueID']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
if rc != 0:
self.module.fail_json(
msg="Unable to get the next available uid",
rc=rc,
out=out,
err=err
)
max_uid = 0
max_system_uid = 0
for line in out.splitlines():
current_uid = int(line.split(' ')[-1])
if max_uid < current_uid:
max_uid = current_uid
if max_system_uid < current_uid and current_uid < 500:
max_system_uid = current_uid
if system and (0 < max_system_uid < 499):
return max_system_uid + 1
return max_uid + 1
def _change_user_password(self):
'''Change password for SELF.NAME against SELF.PASSWORD.
Please note that password must be cleartext.
'''
# some documentation on how passwords are stored on OSX:
# http://blog.lostpassword.com/2012/07/cracking-mac-os-x-lion-accounts-passwords/
# http://null-byte.wonderhowto.com/how-to/hack-mac-os-x-lion-passwords-0130036/
# http://pastebin.com/RYqxi7Ca
# on OSX 10.8+ hash is SALTED-SHA512-PBKDF2
# https://pythonhosted.org/passlib/lib/passlib.hash.pbkdf2_digest.html
# https://gist.github.com/nueh/8252572
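# an illustrative passlib-style digest looks like: $pbkdf2-sha512$<rounds>$<salt>$<checksum>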
cmd = self._get_dscl()
if self.password:
cmd += ['-passwd', '/Users/%s' % self.name, self.password]
else:
cmd += ['-create', '/Users/%s' % self.name, 'Password', '*']
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Error when changing password', err=err, out=out, rc=rc)
return (rc, out, err)
def _make_group_numerical(self):
'''Convert SELF.GROUP to its numerical value, as a string suitable for dscl.'''
if self.group is None:
self.group = 'nogroup'
try:
self.group = grp.getgrnam(self.group).gr_gid
except KeyError:
self.module.fail_json(msg='Group "%s" not found. Try to create it first using "group" module.' % self.group)
# We need to pass a string to dscl
self.group = str(self.group)
def __modify_group(self, group, action):
'''Add or remove SELF.NAME to or from GROUP depending on ACTION.
ACTION can be 'add' or 'remove' otherwise 'remove' is assumed. '''
if action == 'add':
option = '-a'
else:
option = '-d'
cmd = ['dseditgroup', '-o', 'edit', option, self.name, '-t', 'user', group]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot %s user "%s" to group "%s".'
% (action, self.name, group), err=err, out=out, rc=rc)
return (rc, out, err)
def _modify_group(self):
'''Synchronize SELF.NAME's group membership with SELF.GROUPS, removing and adding groups as needed.'''
rc = 0
out = ''
err = ''
changed = False
current = set(self._list_user_groups())
if self.groups is not None:
target = set(self.groups.split(','))
else:
target = set([])
if self.append is False:
for remove in current - target:
(_rc, _out, _err) = self.__modify_group(remove, 'delete')
rc += _rc
out += _out
err += _err
changed = True
for add in target - current:
(_rc, _out, _err) = self.__modify_group(add, 'add')
rc += _rc
out += _out
err += _err
changed = True
return (rc, out, err, changed)
def _update_system_user(self):
'''Hide or show the user on the login window according to SELF.SYSTEM.
Returns 0 if a change has been made, None otherwise.'''
plist_file = '/Library/Preferences/com.apple.loginwindow.plist'
# http://support.apple.com/kb/HT5017?viewlocale=en_US
cmd = ['defaults', 'read', plist_file, 'HiddenUsersList']
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
# returned value is
# (
# "_userA",
# "_UserB",
# userc
# )
hidden_users = []
for x in out.splitlines()[1:-1]:
try:
x = x.split('"')[1]
except IndexError:
x = x.strip()
hidden_users.append(x)
if self.system:
if self.name not in hidden_users:
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array-add', self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot add user "%s" to hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
else:
if self.name in hidden_users:
del (hidden_users[hidden_users.index(self.name)])
cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array'] + hidden_users
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot remove user "%s" from hidden user list.' % self.name, err=err, out=out, rc=rc)
return 0
def user_exists(self):
'''Check if SELF.NAME is a known user on the system.'''
cmd = self._get_dscl()
cmd += ['-list', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
return rc == 0
def remove_user(self):
'''Delete SELF.NAME. If SELF.FORCE is true, remove its home directory.'''
info = self.user_info()
cmd = self._get_dscl()
cmd += ['-delete', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot delete user "%s".' % self.name, err=err, out=out, rc=rc)
if self.force:
if os.path.exists(info[5]):
shutil.rmtree(info[5])
out += "Removed %s" % info[5]
return (rc, out, err)
def create_user(self, command_name='dscl'):
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name]
(rc, out, err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot create user "%s".' % self.name, err=err, out=out, rc=rc)
self._make_group_numerical()
if self.uid is None:
self.uid = str(self._get_next_uid(self.system))
# Homedir is not created by default
if self.create_home:
if self.home is None:
self.home = '/Users/%s' % self.name
if not self.module.check_mode:
if not os.path.exists(self.home):
os.makedirs(self.home)
self.chown_homedir(int(self.uid), int(self.group), self.home)
# dscl sets shell to /usr/bin/false when UserShell is not specified
# so set the shell to /bin/bash when the user is not a system user
if not self.system and self.shell is None:
self.shell = '/bin/bash'
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(msg='Cannot add property "%s" to user "%s".' % (field[0], self.name), err=err, out=out, rc=rc)
out += _out
err += _err
if rc != 0:
return (rc, _out, _err)
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
self._update_system_user()
# here we don't care about change status since it is a creation,
# thus changed is always true.
if self.groups:
(rc, _out, _err, changed) = self._modify_group()
out += _out
err += _err
return (rc, out, err)
def modify_user(self):
changed = None
out = ''
err = ''
if self.group:
self._make_group_numerical()
for field in self.fields:
if field[0] in self.__dict__ and self.__dict__[field[0]]:
current = self._get_user_property(field[1])
if current is None or current != self.__dict__[field[0]]:
cmd = self._get_dscl()
cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
(rc, _out, _err) = self.execute_command(cmd)
if rc != 0:
self.module.fail_json(
msg='Cannot update property "%s" for user "%s".'
% (field[0], self.name), err=err, out=out, rc=rc)
changed = rc
out += _out
err += _err
if self.update_password == 'always' and self.password is not None:
(rc, _out, _err) = self._change_user_password()
out += _out
err += _err
changed = rc
if self.groups:
(rc, _out, _err, _changed) = self._modify_group()
out += _out
err += _err
if _changed is True:
changed = rc
rc = self._update_system_user()
if rc == 0:
changed = rc
return (changed, out, err)
class AIX(User):
"""
This is an AIX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
- parse_shadow_file()
"""
platform = 'AIX'
distribution = None
SHADOWFILE = '/etc/security/passwd'
def remove_user(self):
cmd = [self.module.get_bin_path('userdel', True)]
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def create_user_useradd(self, command_name='useradd'):
cmd = [self.module.get_bin_path(command_name, True)]
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.create_home:
cmd.append('-m')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
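# AIX chpasswd reads 'name:password' pairs on stdin; -e marks the value as
# already encrypted and -c clears password flags (e.g. ADMCHG) so the user is
# not forced to change the password at next login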
if self.password is not None:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
return (rc, out, err)
def modify_user_usermod(self):
cmd = [self.module.get_bin_path('usermod', True)]
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
if self.move_home:
cmd.append('-m')
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
# skip if no changes to be made
if len(cmd) == 1:
(rc, out, err) = (None, '', '')
else:
cmd.append(self.name)
(rc, out, err) = self.execute_command(cmd)
# set password with chpasswd
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd = []
cmd.append(self.module.get_bin_path('chpasswd', True))
cmd.append('-e')
cmd.append('-c')
(rc2, out2, err2) = self.execute_command(cmd, data="%s:%s" % (self.name, self.password))
else:
(rc2, out2, err2) = (None, '', '')
if rc is not None:
return (rc, out + out2, err + err2)
else:
return (rc2, out + out2, err + err2)
def parse_shadow_file(self):
"""Example AIX shadowfile data:
nobody:
password = *
operator1:
password = {ssha512}06$xxxxxxxxxxxx....
lastupdate = 1549558094
test1:
password = *
lastupdate = 1553695126
"""
b_name = to_bytes(self.name)
if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK):
with open(self.SHADOWFILE, 'rb') as bf:
b_lines = bf.readlines()
b_passwd_line = b''
b_expires_line = b''
for index, b_line in enumerate(b_lines):
# Get password and lastupdate lines which come after the username
if b_line.startswith(b'%s:' % b_name):
b_passwd_line = b_lines[index + 1]
b_expires_line = b_lines[index + 2]
break
# Sanity check the lines because sometimes both are not present
if b' = ' in b_passwd_line:
b_passwd = b_passwd_line.split(b' = ', 1)[-1].strip()
else:
b_passwd = b''
if b' = ' in b_expires_line:
b_expires = b_expires_line.split(b' = ', 1)[-1].strip()
else:
b_expires = b''
passwd = to_native(b_passwd)
expires = to_native(b_expires) or -1
return passwd, expires
class HPUX(User):
"""
This is a HP-UX User manipulation class.
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
- modify_user()
"""
platform = 'HP-UX'
distribution = None
SHADOWFILE = '/etc/shadow'
def create_user(self):
cmd = ['/usr/sam/lbin/useradd.sam']
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-d')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if self.password is not None:
cmd.append('-p')
cmd.append(self.password)
if self.create_home:
cmd.append('-m')
else:
cmd.append('-M')
if self.system:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def remove_user(self):
cmd = ['/usr/sam/lbin/userdel.sam']
if self.force:
cmd.append('-F')
if self.remove:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def modify_user(self):
cmd = ['/usr/sam/lbin/usermod.sam']
info = self.user_info()
if self.uid is not None and info[2] != int(self.uid):
cmd.append('-u')
cmd.append(self.uid)
if self.non_unique:
cmd.append('-o')
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg="Group %s does not exist" % self.group)
ginfo = self.group_info(self.group)
if info[3] != ginfo[2]:
cmd.append('-g')
cmd.append(self.group)
if self.groups is not None:
current_groups = self.user_group_membership()
groups_need_mod = False
groups = []
if self.groups == '':
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.get_groups_set(remove_existing=False)
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
for g in groups:
if g in group_diff:
groups_need_mod = True
break
else:
groups_need_mod = True
if groups_need_mod:
cmd.append('-G')
new_groups = groups
if self.append:
new_groups = groups | set(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
cmd.append('-c')
cmd.append(self.comment)
if self.home is not None and info[5] != self.home:
cmd.append('-d')
cmd.append(self.home)
if self.move_home:
cmd.append('-m')
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.update_password == 'always' and self.password is not None and info[1] != self.password:
cmd.append('-F')
cmd.append('-p')
cmd.append(self.password)
# skip if no changes to be made
if len(cmd) == 1:
return (None, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
class BusyBox(User):
"""
This is the BusyBox class for use on systems that have adduser, deluser,
and delgroup commands. It overrides the following methods:
- create_user()
- remove_user()
- modify_user()
"""
def create_user(self):
cmd = [self.module.get_bin_path('adduser', True)]
cmd.append('-D')
if self.uid is not None:
cmd.append('-u')
cmd.append(self.uid)
if self.group is not None:
if not self.group_exists(self.group):
self.module.fail_json(msg='Group {0} does not exist'.format(self.group))
cmd.append('-G')
cmd.append(self.group)
if self.comment is not None:
cmd.append('-g')
cmd.append(self.comment)
if self.home is not None:
cmd.append('-h')
cmd.append(self.home)
if self.shell is not None:
cmd.append('-s')
cmd.append(self.shell)
if not self.create_home:
cmd.append('-H')
if self.skeleton is not None:
cmd.append('-k')
cmd.append(self.skeleton)
if self.system:
cmd.append('-S')
cmd.append(self.name)
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
if self.password is not None:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Add to additional groups
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
add_cmd_bin = self.module.get_bin_path('adduser', True)
for group in groups:
cmd = [add_cmd_bin, self.name, group]
rc, out, err = self.execute_command(cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
def remove_user(self):
cmd = [
self.module.get_bin_path('deluser', True),
self.name
]
if self.remove:
cmd.append('--remove-home')
return self.execute_command(cmd)
def modify_user(self):
current_groups = self.user_group_membership()
groups = []
rc = None
out = ''
err = ''
info = self.user_info()
add_cmd_bin = self.module.get_bin_path('adduser', True)
remove_cmd_bin = self.module.get_bin_path('delgroup', True)
# Manage group membership
if self.groups is not None and len(self.groups):
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
for g in groups:
if g in group_diff:
add_cmd = [add_cmd_bin, self.name, g]
rc, out, err = self.execute_command(add_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
for g in group_diff:
if g not in groups and not self.append:
remove_cmd = [remove_cmd_bin, self.name, g]
rc, out, err = self.execute_command(remove_cmd)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
# Manage password
if self.password is not None:
if info[1] != self.password:
cmd = [self.module.get_bin_path('chpasswd', True)]
cmd.append('--encrypted')
data = '{name}:{password}'.format(name=self.name, password=self.password)
rc, out, err = self.execute_command(cmd, data=data)
if rc is not None and rc != 0:
self.module.fail_json(name=self.name, msg=err, rc=rc)
return rc, out, err
class Alpine(BusyBox):
"""
This is the Alpine User manipulation class. It inherits the BusyBox class
behaviors such as using adduser and deluser commands.
"""
platform = 'Linux'
distribution = 'Alpine'
def main():
ssh_defaults = dict(
bits=0,
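# a value of 0 defers to ssh-keygen's default size for the chosen key type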
type='rsa',
passphrase=None,
comment='ansible-generated on %s' % socket.gethostname()
)
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'present']),
name=dict(type='str', required=True, aliases=['user']),
uid=dict(type='int'),
non_unique=dict(type='bool', default=False),
group=dict(type='str'),
groups=dict(type='list'),
comment=dict(type='str'),
home=dict(type='path'),
shell=dict(type='str'),
password=dict(type='str', no_log=True),
login_class=dict(type='str'),
# following options are specific to macOS
hidden=dict(type='bool'),
# following options are specific to selinux
seuser=dict(type='str'),
# following options are specific to userdel
force=dict(type='bool', default=False),
remove=dict(type='bool', default=False),
# following options are specific to useradd
create_home=dict(type='bool', default=True, aliases=['createhome']),
skeleton=dict(type='str'),
system=dict(type='bool', default=False),
# following options are specific to usermod
move_home=dict(type='bool', default=False),
append=dict(type='bool', default=False),
# following are specific to ssh key generation
generate_ssh_key=dict(type='bool'),
ssh_key_bits=dict(type='int', default=ssh_defaults['bits']),
ssh_key_type=dict(type='str', default=ssh_defaults['type']),
ssh_key_file=dict(type='path'),
ssh_key_comment=dict(type='str', default=ssh_defaults['comment']),
ssh_key_passphrase=dict(type='str', no_log=True),
update_password=dict(type='str', default='always', choices=['always', 'on_create']),
expires=dict(type='float'),
password_lock=dict(type='bool'),
local=dict(type='bool'),
profile=dict(type='str'),
authorization=dict(type='str'),
role=dict(type='str'),
),
supports_check_mode=True,
mutually_exclusive=[
('local', 'groups'),
('local', 'append')
]
)
user = User(module)
user.check_password_encrypted()
module.debug('User instantiated - platform %s' % user.platform)
if user.distribution:
module.debug('User instantiated - distribution %s' % user.distribution)
rc = None
out = ''
err = ''
result = {}
result['name'] = user.name
result['state'] = user.state
if user.state == 'absent':
if user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = user.remove_user()
if rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
result['force'] = user.force
result['remove'] = user.remove
elif user.state == 'present':
if not user.user_exists():
if module.check_mode:
module.exit_json(changed=True)
# Check to see if the provided home path contains parent directories
# that do not exist.
path_needs_parents = False
if user.home:
parent = os.path.dirname(user.home)
if not os.path.isdir(parent):
path_needs_parents = True
(rc, out, err) = user.create_user()
# If the home path had parent directories that needed to be created,
# make sure file permissions are correct in the created home directory.
if path_needs_parents:
info = user.user_info()
if info is not False:
user.chown_homedir(info[2], info[3], user.home)
if module.check_mode:
result['system'] = user.system
else:
result['system'] = user.system
result['create_home'] = user.create_home
else:
# modify user (note: this function is check mode aware)
(rc, out, err) = user.modify_user()
result['append'] = user.append
result['move_home'] = user.move_home
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if user.password is not None:
result['password'] = 'NOT_LOGGING_PASSWORD'
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
if user.user_exists() and user.state == 'present':
info = user.user_info()
if info is False:
result['msg'] = "failed to look up user name: %s" % user.name
result['failed'] = True
result['uid'] = info[2]
result['group'] = info[3]
result['comment'] = info[4]
result['home'] = info[5]
result['shell'] = info[6]
if user.groups is not None:
result['groups'] = user.groups
# handle missing homedirs
info = user.user_info()
if user.home is None:
user.home = info[5]
if not os.path.exists(user.home) and user.create_home:
if not module.check_mode:
user.create_homedir(user.home)
user.chown_homedir(info[2], info[3], user.home)
result['changed'] = True
# deal with ssh key
if user.sshkeygen:
# generate ssh key (note: this function is check mode aware)
(rc, out, err) = user.ssh_key_gen()
if rc is not None and rc != 0:
module.fail_json(name=user.name, msg=err, rc=rc)
if rc == 0:
result['changed'] = True
(rc, out, err) = user.ssh_key_fingerprint()
if rc == 0:
result['ssh_fingerprint'] = out.strip()
else:
result['ssh_fingerprint'] = err.strip()
result['ssh_key_file'] = user.get_ssh_key_path()
result['ssh_public_key'] = user.get_ssh_public_key()
module.exit_json(**result)
# import module snippets
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,976 |
gitlab_runner module: Misleading deprecation warnings in Ansible 2.8
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The gitlab_runner module throws deprecation warnings for an option I don't use and another option that is not actually deprecated
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
'gitlab_runner' module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/m/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/m/.local/lib/python3.6/site-packages/ansible
executable location = /home/m/.local/bin/ansible
python version = 3.6.8 (default, Jan 14 2019, 11:02:34) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = root
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target: Debian Buster, freshly installed. Kernel 4.19.0-5-amd64, Python versions 2.7.16 and 3.7.3
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a task using the gitlab_runner module. Here is the task I use:
<!--- Paste example playbooks or commands between quotes below -->
```
m@void:~/work/ansible-default-deployment$ cat roles/gitlab-runner/tasks/main.yml
---
- name: Install python-gitlab via pip
pip:
name: python-gitlab
state: present
- name: Register runner
gitlab_runner:
url: <redacted>
api_token: "{{ gitlab_api_token }}"
registration_token: "{{ gitlab_runner_registration_token }}"
description: "Auto-created runner for project {{ project_name }}"
state: present
active: False
tag_list: ['docker', '{{project_name}}']
run_untagged: False
locked: False
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
- I expect no deprecation warning for the `url` parameter because it is described as a required parameter with no possible alternative, and it is not marked as deprecated even in the 'devel' docs
- I expect no deprecation warning for the `login_token` parameter because I don't use it
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
This is the output of running the aforementioned task:
<!--- Paste verbatim command output between quotes -->
```
TASK [gitlab-runner : Register runner] *************************************************************************************************************************************************************************************************************
[DEPRECATION WARNING]: Param 'url' is deprecated. See the module docs for more information. This feature will be removed in version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[DEPRECATION WARNING]: Aliases 'login_token' are deprecated. This feature will be removed in version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
changed: [redacted]
```
|
https://github.com/ansible/ansible/issues/59976
|
https://github.com/ansible/ansible/pull/60425
|
e9d10f94b7722710c50f9923c7a7eea652a98228
|
7bb90999d31951ebd41f612626e9f60e5965a564
| 2019-08-02T11:49:41Z |
python
| 2019-10-14T19:58:21Z |
lib/ansible/modules/source_control/gitlab_deploy_key.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2018, Marcus Watkins <[email protected]>
# Based on code:
# Copyright: (c) 2013, Phillip Gentry <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
module: gitlab_deploy_key
short_description: Manages GitLab project deploy keys.
description:
- Adds, updates and removes project deploy keys
version_added: "2.6"
author:
- Marcus Watkins (@marwatk)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
api_token:
description:
- GitLab token for logging in.
version_added: "2.8"
type: str
aliases:
- private_token
- access_token
project:
description:
- Id or Full path of project in the form of group/name
required: true
type: str
title:
description:
- Deploy key's title
required: true
type: str
key:
description:
- Deploy key
required: true
type: str
can_push:
description:
- Whether this key can push to the project
type: bool
default: no
state:
description:
- When C(present) the deploy key added to the project if it doesn't exist.
- When C(absent) it will be removed from the project if it exists
required: true
default: present
type: str
choices: [ "present", "absent" ]
'''
EXAMPLES = '''
- name: "Adding a project deploy key"
gitlab_deploy_key:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
title: "Jenkins CI"
state: present
key: "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIEAiPWx6WM4lhHNedGfBpPJNPpZ7yKu+dnn1SJejgt4596k6YjzGGphH2TUxwKzxcKDKKezwkpfnxPkSMkuEspGRt/aZZ9w..."
- name: "Update the above deploy key to add push access"
gitlab_deploy_key:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
title: "Jenkins CI"
state: present
can_push: yes
- name: "Remove the previous deploy key from the project"
gitlab_deploy_key:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
state: absent
key: "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIEAiPWx6WM4lhHNedGfBpPJNPpZ7yKu+dnn1SJejgt4596k6YjzGGphH2TUxwKzxcKDKKezwkpfnxPkSMkuEspGRt/aZZ9w..."
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: key is already in use"
deploy_key:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findProject
class GitLabDeployKey(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.deployKeyObject = None
'''
@param project Project object
@param key_title Title of the key
@param key_key String of the key
@param options Deploy key options (e.g. can_push)
'''
def createOrUpdateDeployKey(self, project, key_title, key_key, options):
changed = False
# Because we have already called existsDeployKey in main()
if self.deployKeyObject is None:
deployKey = self.createDeployKey(project, {
'title': key_title,
'key': key_key,
'can_push': options['can_push']})
changed = True
else:
changed, deployKey = self.updateDeployKey(self.deployKeyObject, {
'can_push': options['can_push']})
self.deployKeyObject = deployKey
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the deploy key %s" % key_title)
try:
deployKey.save()
except Exception as e:
self._module.fail_json(msg="Failed to update deploy key: %s " % e)
return True
else:
return False
'''
@param project Project Object
@param arguments Attributes of the deployKey
'''
def createDeployKey(self, project, arguments):
if self._module.check_mode:
return True
try:
deployKey = project.keys.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create deploy key: %s " % to_native(e))
return deployKey
'''
@param deployKey Deploy Key Object
@param arguments Attributes of the deployKey
'''
def updateDeployKey(self, deployKey, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(deployKey, arg_key) != arguments[arg_key]:
setattr(deployKey, arg_key, arguments[arg_key])
changed = True
return (changed, deployKey)
'''
@param project Project object
@param key_title Title of the key
'''
def findDeployKey(self, project, key_title):
deployKeys = project.keys.list()
for deployKey in deployKeys:
if (deployKey.title == key_title):
return deployKey
'''
@param project Project object
@param key_title Title of the key
'''
def existsDeployKey(self, project, key_title):
# When the deploy key exists, the object will be stored in self.deployKeyObject.
deployKey = self.findDeployKey(project, key_title)
if deployKey:
self.deployKeyObject = deployKey
return True
return False
def deleteDeployKey(self):
if self._module.check_mode:
return True
return self.deployKeyObject.delete()
def deprecation_warning(module):
deprecated_aliases = ['private_token', 'access_token']
module.deprecate("Aliases \'{aliases}\' are deprecated".format(aliases='\', \''.join(deprecated_aliases)), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
api_token=dict(type='str', no_log=True, aliases=["private_token", "access_token"]),
state=dict(type='str', default="present", choices=["absent", "present"]),
project=dict(type='str', required=True),
key=dict(type='str', required=True),
can_push=dict(type='bool', default=False),
title=dict(type='str', required=True)
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token']
],
required_together=[
['api_username', 'api_password']
],
required_one_of=[
['api_username', 'api_token']
],
supports_check_mode=True,
)
deprecation_warning(module)
gitlab_url = re.sub('/api.*', '', module.params['api_url'])
validate_certs = module.params['validate_certs']
gitlab_user = module.params['api_username']
gitlab_password = module.params['api_password']
gitlab_token = module.params['api_token']
state = module.params['state']
project_identifier = module.params['project']
key_title = module.params['title']
key_keyfile = module.params['key']
key_can_push = module.params['can_push']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab removed the Session API and private tokens were removed from user API endpoints in version 10.2." % to_native(e))
gitlab_deploy_key = GitLabDeployKey(module, gitlab_instance)
project = findProject(gitlab_instance, project_identifier)
if project is None:
module.fail_json(msg="Failed to create deploy key: project %s doesn't exists" % project_identifier)
deployKey_exists = gitlab_deploy_key.existsDeployKey(project, key_title)
if state == 'absent':
if deployKey_exists:
gitlab_deploy_key.deleteDeployKey()
module.exit_json(changed=True, msg="Successfully deleted deploy key %s" % key_title)
else:
module.exit_json(changed=False, msg="Deploy key deleted or does not exists")
if state == 'present':
if gitlab_deploy_key.createOrUpdateDeployKey(project, key_title, key_keyfile, {'can_push': key_can_push}):
module.exit_json(changed=True, msg="Successfully created or updated the deploy key %s" % key_title,
deploy_key=gitlab_deploy_key.deployKeyObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the deploy key %s" % key_title,
deploy_key=gitlab_deploy_key.deployKeyObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,976 |
gitlab_runner module: Misleading deprecation warnings in Ansible 2.8
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The gitlab_runner module throws deprecation warnings for an option I don't use and another option that is not actually deprecated
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
'gitlab_runner' module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/m/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/m/.local/lib/python3.6/site-packages/ansible
executable location = /home/m/.local/bin/ansible
python version = 3.6.8 (default, Jan 14 2019, 11:02:34) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = root
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target: Debian Buster, freshly installed. Kernel 4.19.0-5-amd64, Python versions 2.7.16 and 3.7.3
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a task using the gitlab_runner module. Here is the task I use:
<!--- Paste example playbooks or commands between quotes below -->
```
m@void:~/work/ansible-default-deployment$ cat roles/gitlab-runner/tasks/main.yml
---
- name: Install python-gitlab via pip
pip:
name: python-gitlab
state: present
- name: Register runner
gitlab_runner:
url: <redacted>
api_token: "{{ gitlab_api_token }}"
registration_token: "{{ gitlab_runner_registration_token }}"
description: "Auto-created runner for project {{ project_name }}"
state: present
active: False
tag_list: ['docker', '{{project_name}}']
run_untagged: False
locked: False
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
- I expect no deprecation warning for the `url` parameter because it is described as a required parameter with no possible alternative, and it is not marked as deprecated even in the 'devel' docs
- I expect no deprecation warning for the `login_token` parameter because I don't use it
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
This is the output of running the aforementioned task:
<!--- Paste verbatim command output between quotes -->
```
TASK [gitlab-runner : Register runner] *************************************************************************************************************************************************************************************************************
[DEPRECATION WARNING]: Param 'url' is deprecated. See the module docs for more information. This feature will be removed in version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[DEPRECATION WARNING]: Aliases 'login_token' are deprecated. This feature will be removed in version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
changed: [redacted]
```
|
https://github.com/ansible/ansible/issues/59976
|
https://github.com/ansible/ansible/pull/60425
|
e9d10f94b7722710c50f9923c7a7eea652a98228
|
7bb90999d31951ebd41f612626e9f60e5965a564
| 2019-08-02T11:49:41Z |
python
| 2019-10-14T19:58:21Z |
lib/ansible/modules/source_control/gitlab_group.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2015, Werner Dijkerman ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_group
short_description: Creates/updates/deletes GitLab Groups
description:
- When the group does not exist in GitLab, it will be created.
- When the group does exist and state=absent, the group will be deleted.
version_added: "2.1"
author:
- Werner Dijkerman (@dj-wasabi)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
server_url:
description:
- The URL of the GitLab server, with protocol (i.e. http or https).
required: true
type: str
login_user:
description:
- GitLab user name.
type: str
login_password:
description:
- GitLab password for login_user
type: str
api_token:
description:
- GitLab token for logging in.
type: str
aliases:
- login_token
name:
description:
- Name of the group you want to create.
required: true
type: str
path:
description:
- The path of the group you want to create; this will be server_url/group_path
- If not supplied, the group_name will be used.
type: str
description:
description:
- A description for the group.
version_added: "2.7"
type: str
state:
description:
- create or delete group.
- Possible values are present and absent.
default: present
type: str
choices: ["present", "absent"]
parent:
description:
- Allows creating subgroups
- Id or Full path of parent group in the form of group/name
version_added: "2.8"
type: str
visibility:
description:
- Default visibility of the group
version_added: "2.8"
choices: ["private", "internal", "public"]
default: private
type: str
'''
EXAMPLES = '''
- name: "Delete GitLab Group"
gitlab_group:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
validate_certs: False
name: my_first_group
state: absent
- name: "Create GitLab Group"
gitlab_group:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: my_first_group
path: my_first_group
state: present
# The group will by created at https://gitlab.dj-wasabi.local/super_parent/parent/my_first_group
- name: "Create GitLab SubGroup"
gitlab_group:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: my_first_group
path: my_first_group
state: present
parent: "super_parent/parent"
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
group:
description: API object
returned: always
type: dict
'''
import os
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findGroup
class GitLabGroup(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.groupObject = None
'''
@param group Group object
'''
def getGroupId(self, group):
if group is not None:
return group.id
return None
'''
@param name Name of the group
@param parent Parent group full path
@param options Group options
'''
def createOrUpdateGroup(self, name, parent, options):
changed = False
# Because we have already called existsGroup in main()
if self.groupObject is None:
parent_id = self.getGroupId(parent)
group = self.createGroup({
'name': name,
'path': options['path'],
'parent_id': parent_id,
'visibility': options['visibility']})
changed = True
else:
changed, group = self.updateGroup(self.groupObject, {
'name': name,
'description': options['description'],
'visibility': options['visibility']})
self.groupObject = group
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the group %s" % name)
try:
group.save()
except Exception as e:
self._module.fail_json(msg="Failed to update group: %s " % e)
return True
else:
return False
'''
@param arguments Attributes of the group
'''
def createGroup(self, arguments):
if self._module.check_mode:
return True
try:
group = self._gitlab.groups.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create group: %s " % to_native(e))
return group
'''
@param group Group Object
@param arguments Attributes of the group
'''
def updateGroup(self, group, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(group, arg_key) != arguments[arg_key]:
setattr(group, arg_key, arguments[arg_key])
changed = True
return (changed, group)
def deleteGroup(self):
group = self.groupObject
if len(group.projects.list()) >= 1:
self._module.fail_json(
msg="There are still projects in this group. These needs to be moved or deleted before this group can be removed.")
else:
if self._module.check_mode:
return True
try:
group.delete()
except Exception as e:
self._module.fail_json(msg="Failed to delete group: %s " % to_native(e))
'''
@param project_identifier Id or full path of the group, including any parent group path (<parent_path>/<group_path>)
'''
def existsGroup(self, project_identifier):
# When group/user exists, object will be stored in self.groupObject.
group = findGroup(self._gitlab, project_identifier)
if group:
self.groupObject = group
return True
return False
def deprecation_warning(module):
deprecated_aliases = ['login_token']
module.deprecate("Aliases \'{aliases}\' are deprecated".format(aliases='\', \''.join(deprecated_aliases)), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
server_url=dict(type='str', required=True, removed_in_version="2.10"),
login_user=dict(type='str', no_log=True, removed_in_version="2.10"),
login_password=dict(type='str', no_log=True, removed_in_version="2.10"),
api_token=dict(type='str', no_log=True, aliases=["login_token"]),
name=dict(type='str', required=True),
path=dict(type='str'),
description=dict(type='str'),
state=dict(type='str', default="present", choices=["absent", "present"]),
parent=dict(type='str'),
visibility=dict(type='str', default="private", choices=["internal", "private", "public"]),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_url', 'server_url'],
['api_username', 'login_user'],
['api_password', 'login_password'],
['api_username', 'api_token'],
['api_password', 'api_token'],
['login_user', 'login_token'],
['login_password', 'login_token']
],
required_together=[
['api_username', 'api_password'],
['login_user', 'login_password'],
],
required_one_of=[
['api_username', 'api_token', 'login_user', 'login_token']
],
supports_check_mode=True,
)
deprecation_warning(module)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
api_url = module.params['api_url']
validate_certs = module.params['validate_certs']
api_user = module.params['api_username']
api_password = module.params['api_password']
gitlab_url = server_url if api_url is None else api_url
gitlab_user = login_user if api_user is None else api_user
gitlab_password = login_password if api_password is None else api_password
gitlab_token = module.params['api_token']
group_name = module.params['name']
group_path = module.params['path']
description = module.params['description']
state = module.params['state']
parent_identifier = module.params['parent']
group_visibility = module.params['visibility']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab removed the Session API and private tokens were removed from user API endpoints in version 10.2" % to_native(e))
# Define default group_path based on group_name
if group_path is None:
group_path = group_name.replace(" ", "_")
gitlab_group = GitLabGroup(module, gitlab_instance)
parent_group = None
if parent_identifier:
parent_group = findGroup(gitlab_instance, parent_identifier)
if not parent_group:
module.fail_json(msg="Failed create GitLab group: Parent group doesn't exists")
group_exists = gitlab_group.existsGroup(parent_group.full_path + '/' + group_path)
else:
group_exists = gitlab_group.existsGroup(group_path)
if state == 'absent':
if group_exists:
gitlab_group.deleteGroup()
module.exit_json(changed=True, msg="Successfully deleted group %s" % group_name)
else:
module.exit_json(changed=False, msg="Group deleted or does not exists")
if state == 'present':
if gitlab_group.createOrUpdateGroup(group_name, parent_group, {
"path": group_path,
"description": description,
"visibility": group_visibility}):
module.exit_json(changed=True, msg="Successfully created or updated the group %s" % group_name, group=gitlab_group.groupObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the group %s" % group_name, group=gitlab_group.groupObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,976 |
gitlab_runner module: Misleading deprecation warnings in Ansible 2.8
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The gitlab_runner module throws deprecation warnings for an option I don't use and another option that is not actually deprecated
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
'gitlab_runner' module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/m/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/m/.local/lib/python3.6/site-packages/ansible
executable location = /home/m/.local/bin/ansible
python version = 3.6.8 (default, Jan 14 2019, 11:02:34) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = root
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target: Debian Buster, freshly installed. Kernel 4.19.0-5-amd64, Python versions 2.7.16 and 3.7.3
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a task using the gitlab_runner module. Here is the task I use:
<!--- Paste example playbooks or commands between quotes below -->
```
m@void:~/work/ansible-default-deployment$ cat roles/gitlab-runner/tasks/main.yml
---
- name: Install python-gitlab via pip
pip:
name: python-gitlab
state: present
- name: Register runner
gitlab_runner:
url: <redacted>
api_token: "{{ gitlab_api_token }}"
registration_token: "{{ gitlab_runner_registration_token }}"
description: "Auto-created runner for project {{ project_name }}"
state: present
active: False
tag_list: ['docker', '{{project_name}}']
run_untagged: False
locked: False
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
- I expect no deprecation warning for the `url` parameter because it is described as a required parameter with no possible alternative, and it is not marked as deprecated even in the 'devel' docs
- I expect no deprecation warning for the `login_token` parameter because I don't use it
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
This is the output of running the aforementioned task:
<!--- Paste verbatim command output between quotes -->
```
TASK [gitlab-runner : Register runner] *************************************************************************************************************************************************************************************************************
[DEPRECATION WARNING]: Param 'url' is deprecated. See the module docs for more information. This feature will be removed in version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[DEPRECATION WARNING]: Aliases 'login_token' are deprecated. This feature will be removed in version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
changed: [redacted]
```
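
For reference, the two warnings come from two separate mechanisms that are visible in the module source further below: a `removed_in_version` key on the `url` entry of the argument spec, and an unconditional `module.deprecate()` call for the alias. Below is a minimal stand-alone sketch (a hypothetical `repro.py`; all names are illustrative and this is not the module's actual code) that reproduces both messages:

```python
#!/usr/bin/python
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict(
            # removed_in_version makes AnsibleModule emit
            # "Param 'url' is deprecated" whenever the parameter is
            # supplied, even though it is required and has no replacement.
            url=dict(type='str', required=True, removed_in_version='2.10'),
        ),
        supports_check_mode=True,
    )
    # An unconditional deprecate() call warns about the alias regardless
    # of whether the caller actually used it.
    module.deprecate("Aliases 'login_token' are deprecated", '2.10')
    module.exit_json(changed=False)


if __name__ == '__main__':
    main()
```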
|
https://github.com/ansible/ansible/issues/59976
|
https://github.com/ansible/ansible/pull/60425
|
e9d10f94b7722710c50f9923c7a7eea652a98228
|
7bb90999d31951ebd41f612626e9f60e5965a564
| 2019-08-02T11:49:41Z |
python
| 2019-10-14T19:58:21Z |
lib/ansible/modules/source_control/gitlab_hook.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2018, Marcus Watkins <[email protected]>
# Based on code:
# Copyright: (c) 2013, Phillip Gentry <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_hook
short_description: Manages GitLab project hooks.
description:
  - Adds, updates and removes project hooks.
version_added: "2.6"
author:
- Marcus Watkins (@marwatk)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
api_token:
description:
- GitLab token for logging in.
version_added: "2.8"
type: str
aliases:
- private_token
- access_token
project:
description:
- Id or Full path of the project in the form of group/name.
required: true
type: str
hook_url:
description:
      - The URL that you want GitLab to post to; this is used as the primary key for updates and deletion.
required: true
type: str
state:
description:
- When C(present) the hook will be updated to match the input or created if it doesn't exist.
- When C(absent) hook will be deleted if it exists.
required: true
default: present
type: str
choices: [ "present", "absent" ]
push_events:
description:
- Trigger hook on push events.
type: bool
default: yes
issues_events:
description:
- Trigger hook on issues events.
type: bool
default: no
merge_requests_events:
description:
- Trigger hook on merge requests events.
type: bool
default: no
tag_push_events:
description:
- Trigger hook on tag push events.
type: bool
default: no
note_events:
description:
- Trigger hook on note events or when someone adds a comment.
type: bool
default: no
job_events:
description:
- Trigger hook on job events.
type: bool
default: no
pipeline_events:
description:
- Trigger hook on pipeline events.
type: bool
default: no
wiki_page_events:
description:
- Trigger hook on wiki events.
type: bool
default: no
hook_validate_certs:
description:
- Whether GitLab will do SSL verification when triggering the hook.
type: bool
default: no
aliases: [ enable_ssl_verification ]
token:
description:
- Secret token to validate hook messages at the receiver.
- If this is present it will always result in a change as it cannot be retrieved from GitLab.
- Will show up in the X-GitLab-Token HTTP request header.
required: false
type: str
'''
EXAMPLES = '''
- name: "Adding a project hook"
gitlab_hook:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
hook_url: "https://my-ci-server.example.com/gitlab-hook"
state: present
push_events: yes
tag_push_events: yes
hook_validate_certs: no
token: "my-super-secret-token-that-my-ci-server-will-check"
- name: "Delete the previous hook"
gitlab_hook:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
hook_url: "https://my-ci-server.example.com/gitlab-hook"
state: absent
- name: "Delete a hook by numeric project id"
gitlab_hook:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: 10
hook_url: "https://my-ci-server.example.com/gitlab-hook"
state: absent
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
hook:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findProject
class GitLabHook(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.hookObject = None
'''
@param project Project Object
@param hook_url Url to call on event
    @param options Options of the hook
'''
def createOrUpdateHook(self, project, hook_url, options):
changed = False
        # Because existsHook() was already called in main(), self.hookObject is set when the hook exists
if self.hookObject is None:
hook = self.createHook(project, {
'url': hook_url,
'push_events': options['push_events'],
'issues_events': options['issues_events'],
'merge_requests_events': options['merge_requests_events'],
'tag_push_events': options['tag_push_events'],
'note_events': options['note_events'],
'job_events': options['job_events'],
'pipeline_events': options['pipeline_events'],
'wiki_page_events': options['wiki_page_events'],
'enable_ssl_verification': options['enable_ssl_verification'],
'token': options['token']})
changed = True
else:
changed, hook = self.updateHook(self.hookObject, {
'push_events': options['push_events'],
'issues_events': options['issues_events'],
'merge_requests_events': options['merge_requests_events'],
'tag_push_events': options['tag_push_events'],
'note_events': options['note_events'],
'job_events': options['job_events'],
'pipeline_events': options['pipeline_events'],
'wiki_page_events': options['wiki_page_events'],
'enable_ssl_verification': options['enable_ssl_verification'],
'token': options['token']})
self.hookObject = hook
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the hook %s" % hook_url)
try:
hook.save()
except Exception as e:
self._module.fail_json(msg="Failed to update hook: %s " % e)
return True
else:
return False
'''
@param project Project Object
    @param arguments Attributes of the hook
'''
def createHook(self, project, arguments):
if self._module.check_mode:
return True
hook = project.hooks.create(arguments)
return hook
'''
@param hook Hook Object
    @param arguments Attributes of the hook
'''
def updateHook(self, hook, arguments):
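        # Generic diff-and-set loop: only attributes whose desired value is
        # set and differs from the current server-side value are written back,
        # and 'changed' records whether anything was modified.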
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(hook, arg_key) != arguments[arg_key]:
setattr(hook, arg_key, arguments[arg_key])
changed = True
return (changed, hook)
'''
@param project Project object
@param hook_url Url to call on event
'''
def findHook(self, project, hook_url):
hooks = project.hooks.list()
for hook in hooks:
if (hook.url == hook_url):
return hook
'''
@param project Project object
@param hook_url Url to call on event
'''
def existsHook(self, project, hook_url):
        # When the hook exists, the object is stored in self.hookObject.
hook = self.findHook(project, hook_url)
if hook:
self.hookObject = hook
return True
return False
def deleteHook(self):
if self._module.check_mode:
return True
return self.hookObject.delete()
def deprecation_warning(module):
deprecated_aliases = ['private_token', 'access_token']
module.deprecate("Aliases \'{aliases}\' are deprecated".format(aliases='\', \''.join(deprecated_aliases)), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
api_token=dict(type='str', no_log=True, aliases=["private_token", "access_token"]),
state=dict(type='str', default="present", choices=["absent", "present"]),
project=dict(type='str', required=True),
hook_url=dict(type='str', required=True),
push_events=dict(type='bool', default=True),
issues_events=dict(type='bool', default=False),
merge_requests_events=dict(type='bool', default=False),
tag_push_events=dict(type='bool', default=False),
note_events=dict(type='bool', default=False),
job_events=dict(type='bool', default=False),
pipeline_events=dict(type='bool', default=False),
wiki_page_events=dict(type='bool', default=False),
hook_validate_certs=dict(type='bool', default=False, aliases=['enable_ssl_verification']),
token=dict(type='str', no_log=True),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token']
],
required_together=[
['api_username', 'api_password']
],
required_one_of=[
['api_username', 'api_token']
],
supports_check_mode=True,
)
deprecation_warning(module)
gitlab_url = re.sub('/api.*', '', module.params['api_url'])
validate_certs = module.params['validate_certs']
gitlab_user = module.params['api_username']
gitlab_password = module.params['api_password']
gitlab_token = module.params['api_token']
state = module.params['state']
project_identifier = module.params['project']
hook_url = module.params['hook_url']
push_events = module.params['push_events']
issues_events = module.params['issues_events']
merge_requests_events = module.params['merge_requests_events']
tag_push_events = module.params['tag_push_events']
note_events = module.params['note_events']
job_events = module.params['job_events']
pipeline_events = module.params['pipeline_events']
wiki_page_events = module.params['wiki_page_events']
enable_ssl_verification = module.params['hook_validate_certs']
hook_token = module.params['token']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e))
gitlab_hook = GitLabHook(module, gitlab_instance)
project = findProject(gitlab_instance, project_identifier)
if project is None:
module.fail_json(msg="Failed to create hook: project %s doesn't exists" % project_identifier)
hook_exists = gitlab_hook.existsHook(project, hook_url)
if state == 'absent':
if hook_exists:
gitlab_hook.deleteHook()
module.exit_json(changed=True, msg="Successfully deleted hook %s" % hook_url)
else:
module.exit_json(changed=False, msg="Hook deleted or does not exists")
if state == 'present':
if gitlab_hook.createOrUpdateHook(project, hook_url, {
"push_events": push_events,
"issues_events": issues_events,
"merge_requests_events": merge_requests_events,
"tag_push_events": tag_push_events,
"note_events": note_events,
"job_events": job_events,
"pipeline_events": pipeline_events,
"wiki_page_events": wiki_page_events,
"enable_ssl_verification": enable_ssl_verification,
"token": hook_token}):
module.exit_json(changed=True, msg="Successfully created or updated the hook %s" % hook_url, hook=gitlab_hook.hookObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the hook %s" % hook_url, hook=gitlab_hook.hookObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,976 |
gitlab_runner module: Misleading deprecation warnings in Ansible 2.8
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The gitlab_runner module throws deprecation warnings for an option I don't use and another option that is not actually deprecated
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
'gitlab_runner' module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/m/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/m/.local/lib/python3.6/site-packages/ansible
executable location = /home/m/.local/bin/ansible
python version = 3.6.8 (default, Jan 14 2019, 11:02:34) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = root
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target: Debian Buster, freshly installed. Kernel 4.19.0-5-amd64, Python versions 2.7.16 and 3.7.3
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a task using the gitlab_runner module. Here is the task I use:
<!--- Paste example playbooks or commands between quotes below -->
```
m@void:~/work/ansible-default-deployment$ cat roles/gitlab-runner/tasks/main.yml
---
- name: Install python-gitlab via pip
pip:
name: python-gitlab
state: present
- name: Register runner
gitlab_runner:
url: <redacted>
api_token: "{{ gitlab_api_token }}"
registration_token: "{{ gitlab_runner_registration_token }}"
description: "Auto-created runner for project {{ project_name }}"
state: present
active: False
tag_list: ['docker', '{{project_name}}']
run_untagged: False
locked: False
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
- I expect no deprecation warning for the `url` parameter because it is described as a required parameter with no possible alternative, and it is not marked as deprecated even in the 'devel' docs
- I expect no deprecation warning for the `login_token` parameter because I don't use it
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
This is the output of running the aforementioned task:
<!--- Paste verbatim command output between quotes -->
```
TASK [gitlab-runner : Register runner] *************************************************************************************************************************************************************************************************************
[DEPRECATION WARNING]: Param 'url' is deprecated. See the module docs for more information. This feature will be removed in version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[DEPRECATION WARNING]: Aliases 'login_token' are deprecated. This feature will be removed in version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
changed: [redacted]
```
|
https://github.com/ansible/ansible/issues/59976
|
https://github.com/ansible/ansible/pull/60425
|
e9d10f94b7722710c50f9923c7a7eea652a98228
|
7bb90999d31951ebd41f612626e9f60e5965a564
| 2019-08-02T11:49:41Z |
python
| 2019-10-14T19:58:21Z |
lib/ansible/modules/source_control/gitlab_project.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2015, Werner Dijkerman ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_project
short_description: Creates/updates/deletes GitLab Projects
description:
- When the project does not exist in GitLab, it will be created.
  - When the project exists and state=absent, the project will be deleted.
- When changes are made to the project, the project will be updated.
version_added: "2.1"
author:
- Werner Dijkerman (@dj-wasabi)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
server_url:
description:
- The URL of the GitLab server, with protocol (i.e. http or https).
required: true
type: str
login_user:
description:
- GitLab user name.
type: str
login_password:
description:
- GitLab password for login_user
type: str
api_token:
description:
- GitLab token for logging in.
type: str
aliases:
- login_token
group:
description:
      - Id or the full path of the group to which this project belongs.
type: str
name:
description:
- The name of the project
required: true
type: str
path:
description:
      - The path of the project you want to create; this will be server_url/<group>/path.
- If not supplied, name will be used.
type: str
description:
description:
      - A description for the project.
type: str
issues_enabled:
description:
- Whether you want to create issues or not.
- Possible values are true and false.
type: bool
default: yes
merge_requests_enabled:
description:
- If merge requests can be made or not.
- Possible values are true and false.
type: bool
default: yes
wiki_enabled:
description:
      - If a wiki for this project should be available or not.
- Possible values are true and false.
type: bool
default: yes
snippets_enabled:
description:
- If creating snippets should be available or not.
- Possible values are true and false.
type: bool
default: yes
visibility:
description:
- Private. Project access must be granted explicitly for each user.
- Internal. The project can be cloned by any logged in user.
- Public. The project can be cloned without any authentication.
default: private
type: str
choices: ["private", "internal", "public"]
aliases:
- visibility_level
import_url:
description:
      - Git repository which will be imported into GitLab.
- GitLab server needs read access to this git repository.
required: false
type: str
state:
description:
- create or delete project.
- Possible values are present and absent.
default: present
type: str
choices: ["present", "absent"]
'''
EXAMPLES = '''
- name: Delete GitLab Project
gitlab_project:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
validate_certs: False
name: my_first_project
state: absent
delegate_to: localhost
- name: Create GitLab Project in group Ansible
gitlab_project:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: my_first_project
group: ansible
issues_enabled: False
wiki_enabled: True
snippets_enabled: True
import_url: http://git.example.com/example/lab.git
state: present
delegate_to: localhost
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
project:
description: API object
returned: always
type: dict
'''
import os
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findGroup, findProject
class GitLabProject(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.projectObject = None
'''
@param project_name Name of the project
@param namespace Namespace Object (User or Group)
@param options Options of the project
'''
def createOrUpdateProject(self, project_name, namespace, options):
changed = False
        # Because existsProject() was already called in main(), self.projectObject is set when the project exists
if self.projectObject is None:
project = self.createProject(namespace, {
'name': project_name,
'path': options['path'],
'description': options['description'],
'issues_enabled': options['issues_enabled'],
'merge_requests_enabled': options['merge_requests_enabled'],
'wiki_enabled': options['wiki_enabled'],
'snippets_enabled': options['snippets_enabled'],
'visibility': options['visibility'],
'import_url': options['import_url']})
changed = True
else:
changed, project = self.updateProject(self.projectObject, {
'name': project_name,
'description': options['description'],
'issues_enabled': options['issues_enabled'],
'merge_requests_enabled': options['merge_requests_enabled'],
'wiki_enabled': options['wiki_enabled'],
'snippets_enabled': options['snippets_enabled'],
'visibility': options['visibility']})
self.projectObject = project
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the project %s" % project_name)
try:
project.save()
except Exception as e:
self._module.fail_json(msg="Failed update project: %s " % e)
return True
else:
return False
'''
@param namespace Namespace Object (User or Group)
    @param arguments Attributes of the project
'''
def createProject(self, namespace, arguments):
if self._module.check_mode:
return True
arguments['namespace_id'] = namespace.id
try:
project = self._gitlab.projects.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create project: %s " % to_native(e))
return project
'''
@param project Project Object
    @param arguments Attributes of the project
'''
def updateProject(self, project, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(project, arg_key) != arguments[arg_key]:
setattr(project, arg_key, arguments[arg_key])
changed = True
return (changed, project)
def deleteProject(self):
if self._module.check_mode:
return True
project = self.projectObject
return project.delete()
'''
@param namespace User/Group object
    @param path Path of the project
'''
def existsProject(self, namespace, path):
# When project exists, object will be stored in self.projectObject.
project = findProject(self._gitlab, namespace.full_path + '/' + path)
if project:
self.projectObject = project
return True
return False
def deprecation_warning(module):
deprecated_aliases = ['login_token']
module.deprecate("Aliases \'{aliases}\' are deprecated".format(aliases='\', \''.join(deprecated_aliases)), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
server_url=dict(type='str', required=True, removed_in_version="2.10"),
login_user=dict(type='str', no_log=True, removed_in_version="2.10"),
login_password=dict(type='str', no_log=True, removed_in_version="2.10"),
api_token=dict(type='str', no_log=True, aliases=["login_token"]),
group=dict(type='str'),
name=dict(type='str', required=True),
path=dict(type='str'),
description=dict(type='str'),
issues_enabled=dict(type='bool', default=True),
merge_requests_enabled=dict(type='bool', default=True),
wiki_enabled=dict(type='bool', default=True),
snippets_enabled=dict(default=True, type='bool'),
visibility=dict(type='str', default="private", choices=["internal", "private", "public"], aliases=["visibility_level"]),
import_url=dict(type='str'),
state=dict(type='str', default="present", choices=["absent", "present"]),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_url', 'server_url'],
['api_username', 'login_user'],
['api_password', 'login_password'],
['api_username', 'api_token'],
['api_password', 'api_token'],
['login_user', 'login_token'],
['login_password', 'login_token']
],
required_together=[
['api_username', 'api_password'],
['login_user', 'login_password'],
],
required_one_of=[
['api_username', 'api_token', 'login_user', 'login_token']
],
supports_check_mode=True,
)
deprecation_warning(module)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
api_url = module.params['api_url']
validate_certs = module.params['validate_certs']
api_user = module.params['api_username']
api_password = module.params['api_password']
gitlab_url = server_url if api_url is None else api_url
gitlab_user = login_user if api_user is None else api_user
gitlab_password = login_password if api_password is None else api_password
gitlab_token = module.params['api_token']
group_identifier = module.params['group']
project_name = module.params['name']
project_path = module.params['path']
project_description = module.params['description']
issues_enabled = module.params['issues_enabled']
merge_requests_enabled = module.params['merge_requests_enabled']
wiki_enabled = module.params['wiki_enabled']
snippets_enabled = module.params['snippets_enabled']
visibility = module.params['visibility']
import_url = module.params['import_url']
state = module.params['state']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e))
# Set project_path to project_name if it is empty.
if project_path is None:
project_path = project_name.replace(" ", "_")
gitlab_project = GitLabProject(module, gitlab_instance)
if group_identifier:
group = findGroup(gitlab_instance, group_identifier)
if group is None:
module.fail_json(msg="Failed to create project: group %s doesn't exists" % group_identifier)
namespace = gitlab_instance.namespaces.get(group.id)
project_exists = gitlab_project.existsProject(namespace, project_path)
else:
user = gitlab_instance.users.list(username=gitlab_instance.user.username)[0]
namespace = gitlab_instance.namespaces.get(user.id)
project_exists = gitlab_project.existsProject(namespace, project_path)
if state == 'absent':
if project_exists:
gitlab_project.deleteProject()
module.exit_json(changed=True, msg="Successfully deleted project %s" % project_name)
else:
module.exit_json(changed=False, msg="Project deleted or does not exists")
if state == 'present':
if gitlab_project.createOrUpdateProject(project_name, namespace, {
"path": project_path,
"description": project_description,
"issues_enabled": issues_enabled,
"merge_requests_enabled": merge_requests_enabled,
"wiki_enabled": wiki_enabled,
"snippets_enabled": snippets_enabled,
"visibility": visibility,
"import_url": import_url}):
module.exit_json(changed=True, msg="Successfully created or updated the project %s" % project_name, project=gitlab_project.projectObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the project %s" % project_name, project=gitlab_project.projectObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,976 |
gitlab_runner module: Misleading deprecation warnings in Ansible 2.8
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The gitlab_runner module throws deprecation warnings for an option I don't use and another option that is not actually deprecated
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
'gitlab_runner' module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/m/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/m/.local/lib/python3.6/site-packages/ansible
executable location = /home/m/.local/bin/ansible
python version = 3.6.8 (default, Jan 14 2019, 11:02:34) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = root
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target: Debian Buster, freshly installed. Kernel 4.19.0-5-amd64, Python versions 2.7.16 and 3.7.3
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a task using the gitlab_runner module. Here is the task I use:
<!--- Paste example playbooks or commands between quotes below -->
```
m@void:~/work/ansible-default-deployment$ cat roles/gitlab-runner/tasks/main.yml
---
- name: Install python-gitlab via pip
pip:
name: python-gitlab
state: present
- name: Register runner
gitlab_runner:
url: <redacted>
api_token: "{{ gitlab_api_token }}"
registration_token: "{{ gitlab_runner_registration_token }}"
description: "Auto-created runner for project {{ project_name }}"
state: present
active: False
tag_list: ['docker', '{{project_name}}']
run_untagged: False
locked: False
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
- I expect no deprecation warning for the `url` parameter because it is described as a required parameter with no possible alternative, and it is not marked as deprecated even in the 'devel' docs
- I expect no deprecation warning for the `login_token` parameter because I don't use it
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
This is the output of running the aforementioned task:
<!--- Paste verbatim command output between quotes -->
```
TASK [gitlab-runner : Register runner] *************************************************************************************************************************************************************************************************************
[DEPRECATION WARNING]: Param 'url' is deprecated. See the module docs for more information. This feature will be removed in version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[DEPRECATION WARNING]: Aliases 'login_token' are deprecated. This feature will be removed in version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
changed: [redacted]
```
|
https://github.com/ansible/ansible/issues/59976
|
https://github.com/ansible/ansible/pull/60425
|
e9d10f94b7722710c50f9923c7a7eea652a98228
|
7bb90999d31951ebd41f612626e9f60e5965a564
| 2019-08-02T11:49:41Z |
python
| 2019-10-14T19:58:21Z |
lib/ansible/modules/source_control/gitlab_runner.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2018, Samy Coenen <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_runner
short_description: Create, modify and delete GitLab Runners.
description:
- Register, update and delete runners with the GitLab API.
- All operations are performed using the GitLab API v4.
- For details, consult the full API documentation at U(https://docs.gitlab.com/ee/api/runners.html).
- A valid private API token is required for all operations. You can create as many tokens as you like using the GitLab web interface at
U(https://$GITLAB_URL/profile/personal_access_tokens).
- A valid registration token is required for registering a new runner.
To create shared runners, you need to ask your administrator to give you this token.
It can be found at U(https://$GITLAB_URL/admin/runners/).
notes:
- To create a new runner at least the C(api_token), C(description) and C(url) options are required.
- Runners need to have unique descriptions.
version_added: 2.8
author:
- Samy Coenen (@SamyCoenen)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab >= 1.5.0
extends_documentation_fragment:
- auth_basic
options:
url:
description:
- The URL of the GitLab server, with protocol (i.e. http or https).
required: true
type: str
api_token:
description:
- Your private token to interact with the GitLab API.
required: True
type: str
aliases:
- private_token
description:
description:
- The unique name of the runner.
required: True
type: str
aliases:
- name
state:
description:
- Make sure that the runner with the same name exists with the same configuration or delete the runner with the same name.
required: False
default: present
choices: ["present", "absent"]
type: str
registration_token:
description:
- The registration token is used to register new runners.
required: True
type: str
active:
description:
      - Define if the runner is immediately active after creation.
required: False
default: yes
type: bool
locked:
description:
- Determines if the runner is locked or not.
required: False
default: False
type: bool
access_level:
description:
- Determines if a runner can pick up jobs from protected branches.
required: False
default: ref_protected
choices: ["ref_protected", "not_protected"]
type: str
maximum_timeout:
description:
- The maximum timeout that a runner has to pick up a specific job.
required: False
default: 3600
type: int
run_untagged:
description:
- Run untagged jobs or not.
required: False
default: yes
type: bool
tag_list:
description: The tags that apply to the runner.
required: False
default: []
type: list
'''
EXAMPLES = '''
- name: "Register runner"
gitlab_runner:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
registration_token: 4gfdsg345
description: Docker Machine t1
state: present
active: True
tag_list: ['docker']
run_untagged: False
locked: False
- name: "Delete runner"
gitlab_runner:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
description: Docker Machine t1
state: absent
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
runner:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
try:
cmp
except NameError:
def cmp(a, b):
return (a > b) - (a < b)
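# Python 3 removed the built-in cmp(); the shim above restores it for the
# sorted tag_list comparison in updateRunner() below. cmp() returns a
# negative, zero or positive number, so a truthy result means the lists differ.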
class GitLabRunner(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.runnerObject = None
def createOrUpdateRunner(self, description, options):
changed = False
        # Because existsRunner() was already called in main(), self.runnerObject is set when the runner exists
if self.runnerObject is None:
runner = self.createRunner({
'description': description,
'active': options['active'],
'token': options['registration_token'],
'locked': options['locked'],
'run_untagged': options['run_untagged'],
'maximum_timeout': options['maximum_timeout'],
'tag_list': options['tag_list']})
changed = True
else:
changed, runner = self.updateRunner(self.runnerObject, {
'active': options['active'],
'locked': options['locked'],
'run_untagged': options['run_untagged'],
'maximum_timeout': options['maximum_timeout'],
'access_level': options['access_level'],
'tag_list': options['tag_list']})
self.runnerObject = runner
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the runner %s" % description)
try:
runner.save()
except Exception as e:
self._module.fail_json(msg="Failed to update runner: %s " % to_native(e))
return True
else:
return False
'''
    @param arguments Attributes of the runner
'''
def createRunner(self, arguments):
if self._module.check_mode:
return True
try:
runner = self._gitlab.runners.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create runner: %s " % to_native(e))
return runner
'''
@param runner Runner object
    @param arguments Attributes of the runner
'''
def updateRunner(self, runner, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if isinstance(arguments[arg_key], list):
list1 = getattr(runner, arg_key)
list1.sort()
list2 = arguments[arg_key]
list2.sort()
if cmp(list1, list2):
setattr(runner, arg_key, arguments[arg_key])
changed = True
else:
if getattr(runner, arg_key) != arguments[arg_key]:
setattr(runner, arg_key, arguments[arg_key])
changed = True
return (changed, runner)
'''
@param description Description of the runner
'''
def findRunner(self, description):
runners = self._gitlab.runners.list(as_list=False)
for runner in runners:
if (runner.description == description):
return self._gitlab.runners.get(runner.id)
'''
@param description Description of the runner
'''
def existsRunner(self, description):
# When runner exists, object will be stored in self.runnerObject.
runner = self.findRunner(description)
if runner:
self.runnerObject = runner
return True
return False
def deleteRunner(self):
if self._module.check_mode:
return True
runner = self.runnerObject
return runner.delete()
def deprecation_warning(module):
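    # Note: 'login_token' is not an alias of this module (api_token only
    # aliases 'private_token'), and deprecate() below is called
    # unconditionally, so this warning is emitted on every run. This is the
    # misleading behaviour described in the issue body above.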
deprecated_aliases = ['login_token']
module.deprecate("Aliases \'{aliases}\' are deprecated".format(aliases='\', \''.join(deprecated_aliases)), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
url=dict(type='str', required=True, removed_in_version="2.10"),
api_token=dict(type='str', no_log=True, aliases=["private_token"]),
description=dict(type='str', required=True, aliases=["name"]),
active=dict(type='bool', default=True),
tag_list=dict(type='list', default=[]),
run_untagged=dict(type='bool', default=True),
locked=dict(type='bool', default=False),
access_level=dict(type='str', default='ref_protected', choices=["not_protected", "ref_protected"]),
maximum_timeout=dict(type='int', default=3600),
registration_token=dict(type='str', required=True),
state=dict(type='str', default="present", choices=["absent", "present"]),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_url', 'url'],
['api_username', 'api_token'],
['api_password', 'api_token'],
],
required_together=[
['api_username', 'api_password'],
['login_user', 'login_password'],
],
required_one_of=[
['api_username', 'api_token']
],
supports_check_mode=True,
)
deprecation_warning(module)
url = re.sub('/api.*', '', module.params['url'])
api_url = module.params['api_url']
validate_certs = module.params['validate_certs']
gitlab_url = url if api_url is None else api_url
gitlab_user = module.params['api_username']
gitlab_password = module.params['api_password']
gitlab_token = module.params['api_token']
state = module.params['state']
runner_description = module.params['description']
runner_active = module.params['active']
tag_list = module.params['tag_list']
run_untagged = module.params['run_untagged']
runner_locked = module.params['locked']
access_level = module.params['access_level']
maximum_timeout = module.params['maximum_timeout']
registration_token = module.params['registration_token']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2" % to_native(e))
gitlab_runner = GitLabRunner(module, gitlab_instance)
runner_exists = gitlab_runner.existsRunner(runner_description)
if state == 'absent':
if runner_exists:
gitlab_runner.deleteRunner()
module.exit_json(changed=True, msg="Successfully deleted runner %s" % runner_description)
else:
module.exit_json(changed=False, msg="Runner deleted or does not exists")
if state == 'present':
if gitlab_runner.createOrUpdateRunner(runner_description, {
"active": runner_active,
"tag_list": tag_list,
"run_untagged": run_untagged,
"locked": runner_locked,
"access_level": access_level,
"maximum_timeout": maximum_timeout,
"registration_token": registration_token}):
module.exit_json(changed=True, runner=gitlab_runner.runnerObject._attrs,
msg="Successfully created or updated the runner %s" % runner_description)
else:
module.exit_json(changed=False, runner=gitlab_runner.runnerObject._attrs,
msg="No need to update the runner %s" % runner_description)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,976 |
gitlab_runner module: Misleading deprecation warnings in Ansible 2.8
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The gitlab_runner module throws deprecation warnings for an option I don't use and another option that is not actually deprecated
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
'gitlab_runner' module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/m/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/m/.local/lib/python3.6/site-packages/ansible
executable location = /home/m/.local/bin/ansible
python version = 3.6.8 (default, Jan 14 2019, 11:02:34) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_REMOTE_USER(/etc/ansible/ansible.cfg) = root
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target: Debian Buster, freshly installed. Kernel 4.19.0-5-amd64, Python versions 2.7.16 and 3.7.3
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a task using the gitlab_runner module. Here is the task I use:
<!--- Paste example playbooks or commands between quotes below -->
```
m@void:~/work/ansible-default-deployment$ cat roles/gitlab-runner/tasks/main.yml
---
- name: Install python-gitlab via pip
pip:
name: python-gitlab
state: present
- name: Register runner
gitlab_runner:
url: <redacted>
api_token: "{{ gitlab_api_token }}"
registration_token: "{{ gitlab_runner_registration_token }}"
description: "Auto-created runner for project {{ project_name }}"
state: present
active: False
tag_list: ['docker', '{{project_name}}']
run_untagged: False
locked: False
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
- I expect no deprecation warning for the `url` parameter because it is described as a required parameter with no possible alternative, and it is not marked as deprecated even in the 'devel' docs
- I expect no deprecation warning for the `login_token` parameter because I don't use it
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
This is the output of running the aforementioned task:
<!--- Paste verbatim command output between quotes -->
```
TASK [gitlab-runner : Register runner] *************************************************************************************************************************************************************************************************************
[DEPRECATION WARNING]: Param 'url' is deprecated. See the module docs for more information. This feature will be removed in version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[DEPRECATION WARNING]: Aliases 'login_token' are deprecated. This feature will be removed in version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
changed: [redacted]
```
|
https://github.com/ansible/ansible/issues/59976
|
https://github.com/ansible/ansible/pull/60425
|
e9d10f94b7722710c50f9923c7a7eea652a98228
|
7bb90999d31951ebd41f612626e9f60e5965a564
| 2019-08-02T11:49:41Z |
python
| 2019-10-14T19:58:21Z |
lib/ansible/modules/source_control/gitlab_user.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2015, Werner Dijkerman ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_user
short_description: Creates/updates/deletes GitLab Users
description:
- When the user does not exist in GitLab, it will be created.
  - When the user exists and state=absent, the user will be deleted.
  - When changes are made to the user, the user will be updated.
version_added: "2.1"
author:
- Werner Dijkerman (@dj-wasabi)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
- administrator rights on the GitLab server
extends_documentation_fragment:
- auth_basic
options:
server_url:
description:
- The URL of the GitLab server, with protocol (i.e. http or https).
required: true
type: str
login_user:
description:
- GitLab user name.
type: str
login_password:
description:
- GitLab password for login_user
type: str
api_token:
description:
- GitLab token for logging in.
type: str
aliases:
- login_token
name:
description:
- Name of the user you want to create
required: true
type: str
username:
description:
- The username of the user.
required: true
type: str
password:
description:
- The password of the user.
      - GitLab server enforces a minimum password length of 8; set this value to 8 or more characters.
required: true
type: str
email:
description:
- The email that belongs to the user.
required: true
type: str
sshkey_name:
description:
- The name of the sshkey
type: str
sshkey_file:
description:
- The ssh key itself.
type: str
group:
description:
      - Id or full path of the parent group in the form of group/name.
      - Add user as a member of this group.
type: str
access_level:
description:
- The access level to the group. One of the following can be used.
- guest
- reporter
- developer
- master (alias for maintainer)
- maintainer
- owner
default: guest
type: str
choices: ["guest", "reporter", "developer", "master", "maintainer", "owner"]
state:
description:
      - create or delete user.
- Possible values are present and absent.
default: present
type: str
choices: ["present", "absent"]
confirm:
description:
- Require confirmation.
type: bool
default: yes
version_added: "2.4"
isadmin:
description:
- Grant admin privileges to the user
type: bool
default: no
version_added: "2.8"
external:
description:
- Define external parameter for this user
type: bool
default: no
version_added: "2.8"
'''
EXAMPLES = '''
- name: "Delete GitLab User"
gitlab_user:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
validate_certs: False
username: myusername
state: absent
delegate_to: localhost
- name: "Create GitLab User"
gitlab_user:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: My Name
username: myusername
password: mysecretpassword
email: [email protected]
sshkey_name: MySSH
sshkey_file: ssh-rsa AAAAB3NzaC1yc...
state: present
group: super_group/mon_group
access_level: owner
delegate_to: localhost
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
user:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findGroup
class GitLabUser(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.userObject = None
self.ACCESS_LEVEL = {
'guest': gitlab.GUEST_ACCESS,
'reporter': gitlab.REPORTER_ACCESS,
'developer': gitlab.DEVELOPER_ACCESS,
'master': gitlab.MAINTAINER_ACCESS,
'maintainer': gitlab.MAINTAINER_ACCESS,
'owner': gitlab.OWNER_ACCESS}
'''
@param username Username of the user
@param options User options
'''
def createOrUpdateUser(self, username, options):
changed = False
        # Because existsUser() was already called in main(), self.userObject is set when the user exists
if self.userObject is None:
user = self.createUser({
'name': options['name'],
'username': username,
'password': options['password'],
'email': options['email'],
'skip_confirmation': not options['confirm'],
'admin': options['isadmin'],
'external': options['external']})
changed = True
else:
changed, user = self.updateUser(self.userObject, {
'name': options['name'],
'email': options['email'],
'is_admin': options['isadmin'],
'external': options['external']})
# Assign ssh keys
if options['sshkey_name'] and options['sshkey_file']:
changed = changed or self.addSshKeyToUser(user, {
'name': options['sshkey_name'],
'file': options['sshkey_file']})
# Assign group
if options['group_path']:
changed = changed or self.assignUserToGroup(user, options['group_path'], options['access_level'])
self.userObject = user
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the user %s" % username)
try:
user.save()
except Exception as e:
self._module.fail_json(msg="Failed to update user: %s " % to_native(e))
return True
else:
return False
'''
    @param user User object
'''
def getUserId(self, user):
if user is not None:
return user.id
return None
'''
@param user User object
@param sshkey_name Name of the ssh key
'''
def sshKeyExists(self, user, sshkey_name):
keyList = map(lambda k: k.title, user.keys.list())
return sshkey_name in keyList
'''
@param user User object
    @param sshkey Dict containing ssh key info {"name": "", "file": ""}
'''
def addSshKeyToUser(self, user, sshkey):
if not self.sshKeyExists(user, sshkey['name']):
if self._module.check_mode:
return True
try:
user.keys.create({
'title': sshkey['name'],
'key': sshkey['file']})
except gitlab.exceptions.GitlabCreateError as e:
self._module.fail_json(msg="Failed to assign sshkey to user: %s" % to_native(e))
return True
return False
'''
@param group Group object
@param user_id Id of the user to find
'''
def findMember(self, group, user_id):
try:
member = group.members.get(user_id)
except gitlab.exceptions.GitlabGetError as e:
return None
return member
'''
@param group Group object
@param user_id Id of the user to check
'''
def memberExists(self, group, user_id):
member = self.findMember(group, user_id)
return member is not None
'''
@param group Group object
@param user_id Id of the user to check
@param access_level GitLab access_level to check
'''
def memberAsGoodAccessLevel(self, group, user_id, access_level):
member = self.findMember(group, user_id)
return member.access_level == access_level
'''
@param user User object
    @param group_identifier Id or complete path of the group, including parent group path. <parent_path>/<group_path>
@param access_level GitLab access_level to assign
'''
def assignUserToGroup(self, user, group_identifier, access_level):
group = findGroup(self._gitlab, group_identifier)
if self._module.check_mode:
return True
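        # Note: in check mode this returns True before the group lookup above
        # has been validated, so a missing group is still reported as a change.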
if group is None:
return False
if self.memberExists(group, self.getUserId(user)):
member = self.findMember(group, self.getUserId(user))
if not self.memberAsGoodAccessLevel(group, member.id, self.ACCESS_LEVEL[access_level]):
member.access_level = self.ACCESS_LEVEL[access_level]
member.save()
return True
else:
try:
group.members.create({
'user_id': self.getUserId(user),
'access_level': self.ACCESS_LEVEL[access_level]})
except gitlab.exceptions.GitlabCreateError as e:
self._module.fail_json(msg="Failed to assign user to group: %s" % to_native(e))
return True
return False
'''
@param user User object
@param arguments User attributes
'''
def updateUser(self, user, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(user, arg_key) != arguments[arg_key]:
setattr(user, arg_key, arguments[arg_key])
changed = True
return (changed, user)
'''
@param arguments User attributes
'''
def createUser(self, arguments):
if self._module.check_mode:
return True
try:
user = self._gitlab.users.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create user: %s " % to_native(e))
return user
'''
@param username Username of the user
'''
def findUser(self, username):
users = self._gitlab.users.list(search=username)
for user in users:
if (user.username == username):
return user
'''
@param username Username of the user
'''
def existsUser(self, username):
# When user exists, object will be stored in self.userObject.
user = self.findUser(username)
if user:
self.userObject = user
return True
return False
def deleteUser(self):
if self._module.check_mode:
return True
user = self.userObject
return user.delete()
def deprecation_warning(module):
deprecated_aliases = ['login_token']
module.deprecate("Aliases \'{aliases}\' are deprecated".format(aliases='\', \''.join(deprecated_aliases)), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
server_url=dict(type='str', required=True, removed_in_version="2.10"),
login_user=dict(type='str', no_log=True, removed_in_version="2.10"),
login_password=dict(type='str', no_log=True, removed_in_version="2.10"),
api_token=dict(type='str', no_log=True, aliases=["login_token"]),
name=dict(type='str', required=True),
state=dict(type='str', default="present", choices=["absent", "present"]),
username=dict(type='str', required=True),
password=dict(type='str', required=True, no_log=True),
email=dict(type='str', required=True),
sshkey_name=dict(type='str'),
sshkey_file=dict(type='str'),
group=dict(type='str'),
access_level=dict(type='str', default="guest", choices=["developer", "guest", "maintainer", "master", "owner", "reporter"]),
confirm=dict(type='bool', default=True),
isadmin=dict(type='bool', default=False),
external=dict(type='bool', default=False),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_url', 'server_url'],
['api_username', 'login_user'],
['api_password', 'login_password'],
['api_username', 'api_token'],
['api_password', 'api_token'],
['login_user', 'login_token'],
['login_password', 'login_token']
],
required_together=[
['api_username', 'api_password'],
['login_user', 'login_password'],
],
required_one_of=[
['api_username', 'api_token', 'login_user', 'login_token']
],
supports_check_mode=True,
)
deprecation_warning(module)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
api_url = module.params['api_url']
validate_certs = module.params['validate_certs']
api_user = module.params['api_username']
api_password = module.params['api_password']
gitlab_url = server_url if api_url is None else api_url
gitlab_user = login_user if api_user is None else api_user
gitlab_password = login_password if api_password is None else api_password
gitlab_token = module.params['api_token']
user_name = module.params['name']
state = module.params['state']
user_username = module.params['username'].lower()
user_password = module.params['password']
user_email = module.params['email']
user_sshkey_name = module.params['sshkey_name']
user_sshkey_file = module.params['sshkey_file']
group_path = module.params['group']
access_level = module.params['access_level']
confirm = module.params['confirm']
user_isadmin = module.params['isadmin']
user_external = module.params['external']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e))
gitlab_user = GitLabUser(module, gitlab_instance)
user_exists = gitlab_user.existsUser(user_username)
if state == 'absent':
if user_exists:
gitlab_user.deleteUser()
module.exit_json(changed=True, msg="Successfully deleted user %s" % user_username)
else:
module.exit_json(changed=False, msg="User deleted or does not exists")
if state == 'present':
if gitlab_user.createOrUpdateUser(user_username, {
"name": user_name,
"password": user_password,
"email": user_email,
"sshkey_name": user_sshkey_name,
"sshkey_file": user_sshkey_file,
"group_path": group_path,
"access_level": access_level,
"confirm": confirm,
"isadmin": user_isadmin,
"external": user_external}):
module.exit_json(changed=True, msg="Successfully created or updated the user %s" % user_username, user=gitlab_user.userObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the user %s" % user_username, user=gitlab_user.userObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,335 |
Update docs to use string values instead of ints
|
##### SUMMARY
The documentation for various modules uses bare int values, but these now print a warning message indicating they should be quoted string values. The warning message was already reported as #58347 and closed as not a bug. Since it appears bare int values are no longer acceptable (at least not without a warning), I think the documentation should be updated to avoid use of such values. Incidentally, the original reporter of #58347 broached that topic, but I think it was lost in the shuffle.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
There are likely others, but the two modules I know of that document behavior that will result in a warning message being presented are:
- sysctl
- ufw
##### ANSIBLE VERSION
```paste below
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/nirvdrum/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
```
##### CONFIGURATION
Default configuration. Nothing changed.
##### OS / ENVIRONMENT
Ubuntu 18.04.
##### ADDITIONAL INFORMATION
The first example from the Ansible sysctl module documentation looks like:
```yaml
# Set vm.swappiness to 5 in /etc/sysctl.conf
- sysctl:
name: vm.swappiness
value: 5
state: present
```
Since this will result in a warning being printed out, that can be confusing for users. It's particularly confusing for anyone that happened to have used the example code with releases of Ansible that previously did not produce a warning. If this is the path going forward, however, I think we can clear things up quite a bit by quoting values in the documentation. E.g.,
```yaml
# Set vm.swappiness to 5 in /etc/sysctl.conf
- sysctl:
name: vm.swappiness
value: '5'
state: present
```
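For context, here is a minimal sketch (assuming PyYAML, which Ansible uses for playbook parsing) of why the bare value and the quoted value load as different Python types, which is what triggers the warning:
```python
import yaml

# A bare scalar is resolved as an int by the YAML loader...
print(type(yaml.safe_load("value: 5")["value"]))    # <class 'int'>
# ...while a quoted scalar stays a str, matching the module's declared type.
print(type(yaml.safe_load("value: '5'")["value"]))  # <class 'str'>
```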
|
https://github.com/ansible/ansible/issues/60335
|
https://github.com/ansible/ansible/pull/63505
|
e0f67b58ce85b3fbb5930e19ad76d27834cf6c34
|
e4eea0510cd89124c3a20f4d9937e7c015a46597
| 2019-08-09T19:48:00Z |
python
| 2019-10-15T15:55:45Z |
lib/ansible/modules/system/ufw.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2014, Ahti Kitsik <[email protected]>
# Copyright: (c) 2014, Jarno Keskikangas <[email protected]>
# Copyright: (c) 2013, Aleksey Ovcharenko <[email protected]>
# Copyright: (c) 2013, James Martin <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: ufw
short_description: Manage firewall with UFW
description:
- Manage firewall with UFW.
version_added: 1.6
author:
- Aleksey Ovcharenko (@ovcharenko)
- Jarno Keskikangas (@pyykkis)
- Ahti Kitsik (@ahtik)
notes:
- See C(man ufw) for more examples.
requirements:
- C(ufw) package
options:
state:
description:
- C(enabled) reloads firewall and enables firewall on boot.
- C(disabled) unloads firewall and disables firewall on boot.
- C(reloaded) reloads firewall.
- C(reset) disables and resets firewall to installation defaults.
type: str
choices: [ disabled, enabled, reloaded, reset ]
default:
description:
- Change the default policy for incoming or outgoing traffic.
type: str
choices: [ allow, deny, reject ]
aliases: [ policy ]
direction:
description:
- Select direction for a rule or default policy command.
type: str
choices: [ in, incoming, out, outgoing, routed ]
logging:
description:
- Toggles logging. Logged packets use the LOG_KERN syslog facility.
type: str
choices: [ 'on', 'off', low, medium, high, full ]
insert:
description:
- Insert the corresponding rule as rule number NUM.
- Note that ufw numbers rules starting with 1.
type: int
insert_relative_to:
description:
- Allows to interpret the index in I(insert) relative to a position.
- C(zero) interprets the rule number as an absolute index (i.e. 1 is
the first rule).
- C(first-ipv4) interprets the rule number relative to the index of the
first IPv4 rule, or relative to the position where the first IPv4 rule
would be if there is currently none.
- C(last-ipv4) interprets the rule number relative to the index of the
last IPv4 rule, or relative to the position where the last IPv4 rule
would be if there is currently none.
- C(first-ipv6) interprets the rule number relative to the index of the
first IPv6 rule, or relative to the position where the first IPv6 rule
would be if there is currently none.
- C(last-ipv6) interprets the rule number relative to the index of the
last IPv6 rule, or relative to the position where the last IPv6 rule
would be if there is currently none.
type: str
choices: [ first-ipv4, first-ipv6, last-ipv4, last-ipv6, zero ]
default: zero
version_added: "2.8"
rule:
description:
- Add firewall rule
type: str
choices: [ allow, deny, limit, reject ]
log:
description:
- Log new connections matched to this rule
type: bool
from_ip:
description:
- Source IP address.
type: str
default: any
aliases: [ from, src ]
from_port:
description:
- Source port.
type: str
to_ip:
description:
- Destination IP address.
type: str
default: any
aliases: [ dest, to]
to_port:
description:
- Destination port.
type: str
aliases: [ port ]
proto:
description:
- TCP/IP protocol.
type: str
choices: [ any, tcp, udp, ipv6, esp, ah, gre, igmp ]
aliases: [ protocol ]
name:
description:
- Use profile located in C(/etc/ufw/applications.d).
type: str
aliases: [ app ]
delete:
description:
- Delete rule.
type: bool
interface:
description:
- Specify interface for rule.
type: str
aliases: [ if ]
route:
description:
- Apply the rule to routed/forwarded packets.
type: bool
comment:
description:
- Add a comment to the rule. Requires UFW version >=0.35.
type: str
version_added: "2.4"
'''
EXAMPLES = r'''
- name: Allow everything and enable UFW
ufw:
state: enabled
policy: allow
- name: Set logging
ufw:
logging: on
# Sometimes it is desirable to let the sender know when traffic is
# being denied, rather than simply ignoring it. In these cases, use
# reject instead of deny. In addition, log rejected connections:
- ufw:
rule: reject
port: auth
log: yes
# ufw supports connection rate limiting, which is useful for protecting
# against brute-force login attacks. ufw will deny connections if an IP
# address has attempted to initiate 6 or more connections in the last
# 30 seconds. See http://www.debian-administration.org/articles/187
# for details. Typical usage is:
- ufw:
rule: limit
port: ssh
proto: tcp
# Allow OpenSSH. (Note that as ufw manages its own state, simply removing
# a rule=allow task can leave those ports exposed. Either use delete=yes
# or a separate state=reset task)
- ufw:
rule: allow
name: OpenSSH
- name: Delete OpenSSH rule
ufw:
rule: allow
name: OpenSSH
delete: yes
- name: Deny all access to port 53
ufw:
rule: deny
port: 53
- name: Allow port range 60000-61000
ufw:
rule: allow
port: 60000:61000
proto: tcp
- name: Allow all access to tcp port 80
ufw:
rule: allow
port: 80
proto: tcp
- name: Allow all access from RFC1918 networks to this host
ufw:
rule: allow
src: '{{ item }}'
loop:
- 10.0.0.0/8
- 172.16.0.0/12
- 192.168.0.0/16
- name: Deny access to udp port 514 from host 1.2.3.4 and include a comment
ufw:
rule: deny
proto: udp
src: 1.2.3.4
port: 514
comment: Block syslog
- name: Allow incoming access to eth0 from 1.2.3.5 port 5469 to 1.2.3.4 port 5469
ufw:
rule: allow
interface: eth0
direction: in
proto: udp
src: 1.2.3.5
from_port: 5469
dest: 1.2.3.4
to_port: 5469
# Note that IPv6 must be enabled in /etc/default/ufw for IPv6 firewalling to work.
- name: Deny all traffic from the IPv6 2001:db8::/32 to tcp port 25 on this host
ufw:
rule: deny
proto: tcp
src: 2001:db8::/32
port: 25
- name: Deny all IPv6 traffic to tcp port 20 on this host
# this should be the first IPv6 rule
ufw:
rule: deny
proto: tcp
port: 20
to_ip: "::"
insert: 0
insert_relative_to: first-ipv6
- name: Deny all IPv4 traffic to tcp port 20 on this host
# This should be the third to last IPv4 rule
# (insert: -1 addresses the second to last IPv4 rule;
# so the new rule will be inserted before the second
# to last IPv4 rule, and will become the third to last
# IPv4 rule.)
ufw:
rule: deny
proto: tcp
port: 20
to_ip: "::"
insert: -1
insert_relative_to: last-ipv4
# Can be used to further restrict a global FORWARD policy set to allow
- name: Deny forwarded/routed traffic from subnet 1.2.3.0/24 to subnet 4.5.6.0/24
ufw:
rule: deny
route: yes
src: 1.2.3.0/24
dest: 4.5.6.0/24
'''
import re
from operator import itemgetter
from ansible.module_utils.basic import AnsibleModule
def compile_ipv4_regexp():
r = r"((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}"
r += r"(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])"
return re.compile(r)
def compile_ipv6_regexp():
"""
validation pattern provided by :
https://stackoverflow.com/questions/53497/regular-expression-that-matches-
valid-ipv6-addresses#answer-17871737
"""
r = r"(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:"
r += r"|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}"
r += r"(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4})"
r += r"{1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]"
r += r"{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]"
r += r"{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4})"
r += r"{0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]"
r += r"|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}"
r += r"[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}"
r += r"[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))"
return re.compile(r)
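# Illustrative usage: compile_ipv4_regexp().match('192.168.1.1') is truthy and
# compile_ipv6_regexp().match('2001:db8::1') is truthy; both return None for
# non-matching input, which is how is_starting_by_ipv4/6 below use them.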
def main():
command_keys = ['state', 'default', 'rule', 'logging']
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', choices=['enabled', 'disabled', 'reloaded', 'reset']),
default=dict(type='str', aliases=['policy'], choices=['allow', 'deny', 'reject']),
logging=dict(type='str', choices=['full', 'high', 'low', 'medium', 'off', 'on']),
direction=dict(type='str', choices=['in', 'incoming', 'out', 'outgoing', 'routed']),
delete=dict(type='bool', default=False),
route=dict(type='bool', default=False),
insert=dict(type='int'),
insert_relative_to=dict(choices=['zero', 'first-ipv4', 'last-ipv4', 'first-ipv6', 'last-ipv6'], default='zero'),
rule=dict(type='str', choices=['allow', 'deny', 'limit', 'reject']),
interface=dict(type='str', aliases=['if']),
log=dict(type='bool', default=False),
from_ip=dict(type='str', default='any', aliases=['from', 'src']),
from_port=dict(type='str'),
to_ip=dict(type='str', default='any', aliases=['dest', 'to']),
to_port=dict(type='str', aliases=['port']),
proto=dict(type='str', aliases=['protocol'], choices=['ah', 'any', 'esp', 'ipv6', 'tcp', 'udp', 'gre', 'igmp']),
name=dict(type='str', aliases=['app']),
comment=dict(type='str'),
),
supports_check_mode=True,
mutually_exclusive=[
['name', 'proto', 'logging'],
],
required_one_of=([command_keys]),
required_by=dict(
interface=('direction', ),
),
)
cmds = []
ipv4_regexp = compile_ipv4_regexp()
ipv6_regexp = compile_ipv6_regexp()
def filter_line_that_not_start_with(pattern, content):
return ''.join([line for line in content.splitlines(True) if line.startswith(pattern)])
def filter_line_that_contains(pattern, content):
return [line for line in content.splitlines(True) if pattern in line]
def filter_line_that_not_contains(pattern, content):
return ''.join([line for line in content.splitlines(True) if pattern not in line])
def filter_line_that_match_func(match_func, content):
return ''.join([line for line in content.splitlines(True) if match_func(line) is not None])
def filter_line_that_contains_ipv4(content):
return filter_line_that_match_func(ipv4_regexp.search, content)
def filter_line_that_contains_ipv6(content):
return filter_line_that_match_func(ipv6_regexp.search, content)
def is_starting_by_ipv4(ip):
return ipv4_regexp.match(ip) is not None
def is_starting_by_ipv6(ip):
return ipv6_regexp.match(ip) is not None
def execute(cmd, ignore_error=False):
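# Each element of `cmd` is a list: either [fragment] (always included, since a
# non-empty list is truthy) or [condition, fragment] (included only when the
# condition is truthy). filter(itemgetter(0), ...) drops the disabled
# fragments and map(itemgetter(-1), ...) extracts the command text.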
cmd = ' '.join(map(itemgetter(-1), filter(itemgetter(0), cmd)))
cmds.append(cmd)
(rc, out, err) = module.run_command(cmd, environ_update={"LANG": "C"})
if rc != 0 and not ignore_error:
module.fail_json(msg=err or out, commands=cmds)
return out
def get_current_rules():
user_rules_files = ["/lib/ufw/user.rules",
"/lib/ufw/user6.rules",
"/etc/ufw/user.rules",
"/etc/ufw/user6.rules",
"/var/lib/ufw/user.rules",
"/var/lib/ufw/user6.rules"]
cmd = [[grep_bin], ["-h"], ["'^### tuple'"]]
cmd.extend([[f] for f in user_rules_files])
return execute(cmd, ignore_error=True)
def ufw_version():
"""
Returns the major and minor version of ufw installed on the system.
"""
out = execute([[ufw_bin], ["--version"]])
lines = [x for x in out.split('\n') if x.strip() != '']
if len(lines) == 0:
module.fail_json(msg="Failed to get ufw version.", rc=0, out=out)
matches = re.search(r'^ufw.+(\d+)\.(\d+)(?:\.(\d+))?.*$', lines[0])
if matches is None:
module.fail_json(msg="Failed to get ufw version.", rc=0, out=out)
# Convert version to numbers
major = int(matches.group(1))
minor = int(matches.group(2))
rev = 0
if matches.group(3) is not None:
rev = int(matches.group(3))
return major, minor, rev
params = module.params
commands = dict((key, params[key]) for key in command_keys if params[key])
# Ensure ufw is available
ufw_bin = module.get_bin_path('ufw', True)
grep_bin = module.get_bin_path('grep', True)
# Save the pre state and rules in order to recognize changes
pre_state = execute([[ufw_bin], ['status verbose']])
pre_rules = get_current_rules()
changed = False
# Execute filter
for (command, value) in commands.items():
cmd = [[ufw_bin], [module.check_mode, '--dry-run']]
if command == 'state':
states = {'enabled': 'enable', 'disabled': 'disable',
'reloaded': 'reload', 'reset': 'reset'}
if value in ['reloaded', 'reset']:
changed = True
if module.check_mode:
# "active" would also match "inactive", hence the space
ufw_enabled = pre_state.find(" active") != -1
if (value == 'disabled' and ufw_enabled) or (value == 'enabled' and not ufw_enabled):
changed = True
else:
execute(cmd + [['-f'], [states[value]]])
elif command == 'logging':
extract = re.search(r'Logging: (on|off)(?: \(([a-z]+)\))?', pre_state)
if extract:
current_level = extract.group(2)
current_on_off_value = extract.group(1)
if value != "off":
if current_on_off_value == "off":
changed = True
elif value != "on" and value != current_level:
changed = True
elif current_on_off_value != "off":
changed = True
else:
changed = True
if not module.check_mode:
execute(cmd + [[command], [value]])
elif command == 'default':
if params['direction'] not in ['outgoing', 'incoming', 'routed', None]:
module.fail_json(msg='For default, direction must be one of "outgoing", "incoming" and "routed", or direction must not be specified.')
if module.check_mode:
regexp = r'Default: (deny|allow|reject) \(incoming\), (deny|allow|reject) \(outgoing\), (deny|allow|reject|disabled) \(routed\)'
extract = re.search(regexp, pre_state)
if extract is not None:
current_default_values = {}
current_default_values["incoming"] = extract.group(1)
current_default_values["outgoing"] = extract.group(2)
current_default_values["routed"] = extract.group(3)
v = current_default_values[params['direction'] or 'incoming']
if v not in (value, 'disabled'):
changed = True
else:
changed = True
else:
execute(cmd + [[command], [value], [params['direction']]])
elif command == 'rule':
if params['direction'] not in ['in', 'out', None]:
module.fail_json(msg='For rules, direction must be one of "in" and "out", or direction must not be specified.')
# Rules are constructed according to the long format
#
# ufw [--dry-run] [route] [delete] [insert NUM] allow|deny|reject|limit [in|out on INTERFACE] [log|log-all] \
# [from ADDRESS [port PORT]] [to ADDRESS [port PORT]] \
# [proto protocol] [app application] [comment COMMENT]
cmd.append([module.boolean(params['route']), 'route'])
cmd.append([module.boolean(params['delete']), 'delete'])
if params['insert'] is not None:
relative_to_cmd = params['insert_relative_to']
if relative_to_cmd == 'zero':
insert_to = params['insert']
else:
(dummy, numbered_state, dummy) = module.run_command([ufw_bin, 'status', 'numbered'])
numbered_line_re = re.compile(R'^\[ *([0-9]+)\] ')
lines = [(numbered_line_re.match(line), '(v6)' in line) for line in numbered_state.splitlines()]
lines = [(int(matcher.group(1)), ipv6) for (matcher, ipv6) in lines if matcher]
last_number = max([no for (no, ipv6) in lines]) if lines else 0
has_ipv4 = any([not ipv6 for (no, ipv6) in lines])
has_ipv6 = any([ipv6 for (no, ipv6) in lines])
if relative_to_cmd == 'first-ipv4':
relative_to = 1
elif relative_to_cmd == 'last-ipv4':
relative_to = max([no for (no, ipv6) in lines if not ipv6]) if has_ipv4 else 1
elif relative_to_cmd == 'first-ipv6':
relative_to = max([no for (no, ipv6) in lines if not ipv6]) + 1 if has_ipv4 else 1
elif relative_to_cmd == 'last-ipv6':
relative_to = last_number if has_ipv6 else last_number + 1
insert_to = params['insert'] + relative_to
if insert_to > last_number:
# ufw does not like it when the insert number is larger than the
# maximal rule number for IPv4/IPv6.
insert_to = None
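# Worked example: with IPv4 rules numbered 1-3 and IPv6 rules 4-5, insert=0
# with insert_relative_to=first-ipv6 yields relative_to=4, so the rule is
# inserted at absolute position 4 (directly before the current first IPv6 rule).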
cmd.append([insert_to is not None, "insert %s" % insert_to])
cmd.append([value])
cmd.append([params['direction'], "%s" % params['direction']])
cmd.append([params['interface'], "on %s" % params['interface']])
cmd.append([module.boolean(params['log']), 'log'])
for (key, template) in [('from_ip', "from %s"), ('from_port', "port %s"),
('to_ip', "to %s"), ('to_port', "port %s"),
('proto', "proto %s"), ('name', "app '%s'")]:
value = params[key]
cmd.append([value, template % (value)])
ufw_major, ufw_minor, dummy = ufw_version()
# comment is supported only in ufw version after 0.35
if (ufw_major == 0 and ufw_minor >= 35) or ufw_major > 0:
cmd.append([params['comment'], "comment '%s'" % params['comment']])
rules_dry = execute(cmd)
if module.check_mode:
nb_skipping_line = len(filter_line_that_contains("Skipping", rules_dry))
if not (nb_skipping_line > 0 and nb_skipping_line == len(rules_dry.splitlines(True))):
rules_dry = filter_line_that_not_start_with("### tuple", rules_dry)
# ufw dry-run doesn't send all rules so have to compare ipv4 or ipv6 rules
if is_starting_by_ipv4(params['from_ip']) or is_starting_by_ipv4(params['to_ip']):
if filter_line_that_contains_ipv4(pre_rules) != filter_line_that_contains_ipv4(rules_dry):
changed = True
elif is_starting_by_ipv6(params['from_ip']) or is_starting_by_ipv6(params['to_ip']):
if filter_line_that_contains_ipv6(pre_rules) != filter_line_that_contains_ipv6(rules_dry):
changed = True
elif pre_rules != rules_dry:
changed = True
# Get the new state
if module.check_mode:
return module.exit_json(changed=changed, commands=cmds)
else:
post_state = execute([[ufw_bin], ['status'], ['verbose']])
if not changed:
post_rules = get_current_rules()
changed = (pre_state != post_state) or (pre_rules != post_rules)
return module.exit_json(changed=changed, commands=cmds, msg=post_state.rstrip())
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,364 |
nsupdate module failed to remove Letsencrypt DNS challenge record
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
We use Let's Encrypt to generate a certificate with a DNS challenge. Ansible works great for satisfying the challenge. However, when we want to remove the record once it is no longer needed, Ansible reports a module failure.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible **nsupdate**
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.5
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
DEFAULT_CALLBACK_WHITELIST(/etc/ansible/ansible.cfg) = [u'timer']
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = [u'/Ansible/inventory']
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible.log
DEFAULT_PRIVATE_KEY_FILE(/etc/ansible/ansible.cfg) = /etc/ansible/master.rsa
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/Ansible/roles']
DEFAULT_SSH_TRANSFER_METHOD(/etc/ansible/ansible.cfg) = scp
DEFAULT_VAULT_PASSWORD_FILE(/etc/ansible/ansible.cfg) = /root/.ssh/vault-pw.conf
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
CentOS Linux release 7.2.1511 (Core)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- set_fact:
zone_records:
- name: '_acme-challenge.awx'
state: absent
type: TXT
- name: remove dns records from vars
nsupdate:
key_name: "{{ vault_nsupdate_key_name }}"
key_secret: "{{ vault_nsupdate_key_secret }}"
server: "{{dns_server}}"
zone: "{{zone}}"
record: "{{item.name}}"
type: "{{item.type}}"
state: "{{item.state}}"
with_items: "{{zone_records}}"
when: server_role == "master" and item.state == "absent"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
_acme-challenge.awx TXT record is removed from DNS server
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
I have to run ansible-playbook with **-vvv** (3 v's), as **-vvvv** (4 v's) adds ssh debug output to the error:
```
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/net_tools/nsupdate.py
Pipelining is enabled.
<dns01.lpr> ESTABLISH SSH CONNECTION FOR USER: None
<dns01.lpr> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o 'IdentityFile="/etc/ansible/master.rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/eb56091f4e dns01.lpr '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<dns01.lpr> (1, '', 'Traceback (most recent call last):\n File "<stdin>", line 114, in <module>\n File "<stdin>", line 106, in _ansiballz_main\n File "<stdin>", line 49, in invoke_module\n File "/tmp/ansible_nsupdate_payload_OWViZX/__main__.py", line 430, in <module>\n File "/tmp/ansible_nsupdate_payload_OWViZX/__main__.py", line 408, in main\n File "/tmp/ansible_nsupdate_payload_OWViZX/__main__.py", line 225, in __init__\nTypeError: argument 2 to map() must support iteration\n')
<dns01.lpr> Failed to connect to the host via ssh: Traceback (most recent call last):
File "<stdin>", line 114, in <module>
File "<stdin>", line 106, in _ansiballz_main
File "<stdin>", line 49, in invoke_module
File "/tmp/ansible_nsupdate_payload_OWViZX/__main__.py", line 430, in <module>
File "/tmp/ansible_nsupdate_payload_OWViZX/__main__.py", line 408, in main
File "/tmp/ansible_nsupdate_payload_OWViZX/__main__.py", line 225, in __init__
TypeError: argument 2 to map() must support iteration
The full traceback is:
Traceback (most recent call last):
File "<stdin>", line 114, in <module>
File "<stdin>", line 106, in _ansiballz_main
File "<stdin>", line 49, in invoke_module
File "/tmp/ansible_nsupdate_payload_OWViZX/__main__.py", line 430, in <module>
File "/tmp/ansible_nsupdate_payload_OWViZX/__main__.py", line 408, in main
File "/tmp/ansible_nsupdate_payload_OWViZX/__main__.py", line 225, in __init__
TypeError: argument 2 to map() must support iteration
failed: [dns01.lpr] (item={u'state': u'absent', u'type': u'TXT', u'name': u'_acme-challenge.awx', u'ttl': u'3600'}) => {
"ansible_loop_var": "item",
"changed": false,
"item": {
"name": "_acme-challenge.awx",
"state": "absent",
"ttl": "3600",
"type": "TXT"
},
"module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 114, in <module>\n File \"<stdin>\", line 106, in _ansiballz_main\n File \"<stdin>\", line 49, in invoke_module\n File \"/tmp/ansible_nsupdate_payload_OWViZX/__main__.py\", line 430, in <module>\n File \"/tmp/ansible_nsupdate_payload_OWViZX/__main__.py\", line 408, in main\n File \"/tmp/ansible_nsupdate_payload_OWViZX/__main__.py\", line 225, in __init__\nTypeError: argument 2 to map() must support iteration\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
```
Here is the result when running ansible-playbook with -vvvv (4v):
```paste below
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/net_tools/nsupdate.py
Pipelining is enabled.
<dns01.lpr> ESTABLISH SSH CONNECTION FOR USER: None
<dns01.lpr> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o 'IdentityFile="/etc/ansible/master.rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/eb56091f4e dns01.lpr '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<dns01.lpr> (1, '', 'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /root/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 26756\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\nTraceback (most recent call last):\n File "<stdin>", line 114, in <module>\n File "<stdin>", line 106, in _ansiballz_main\n File "<stdin>", line 49, in invoke_module\n File "/tmp/ansible_nsupdate_payload_pwj0Q7/__main__.py", line 430, in <module>\n File "/tmp/ansible_nsupdate_payload_pwj0Q7/__main__.py", line 408, in main\n File "/tmp/ansible_nsupdate_payload_pwj0Q7/__main__.py", line 225, in __init__\nTypeError: argument 2 to map() must support iteration\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\n')
<dns01.lpr> Failed to connect to the host via ssh: OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017
debug1: Reading configuration data /root/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: auto-mux: Trying existing master
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 26756
debug3: mux_client_request_session: session request sent
debug1: mux_client_request_session: master session id: 2
Traceback (most recent call last):
File "<stdin>", line 114, in <module>
File "<stdin>", line 106, in _ansiballz_main
File "<stdin>", line 49, in invoke_module
File "/tmp/ansible_nsupdate_payload_pwj0Q7/__main__.py", line 430, in <module>
File "/tmp/ansible_nsupdate_payload_pwj0Q7/__main__.py", line 408, in main
File "/tmp/ansible_nsupdate_payload_pwj0Q7/__main__.py", line 225, in __init__
TypeError: argument 2 to map() must support iteration
debug3: mux_client_read_packet: read header failed: Broken pipe
debug2: Received exit status from master 1
failed: [dns01.lpr] (item={u'state': u'absent', u'type': u'TXT', u'name': u'_acme-challenge.awx', u'ttl': u'3600'}) => {
"ansible_loop_var": "item",
"changed": false,
"item": {
"name": "_acme-challenge.awx",
"state": "absent",
"ttl": "3600",
"type": "TXT"
},
"module_stderr": "OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /root/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 26756\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\nTraceback (most recent call last):\n File \"<stdin>\", line 114, in <module>\n File \"<stdin>\", line 106, in _ansiballz_main\n File \"<stdin>\", line 49, in invoke_module\n File \"/tmp/ansible_nsupdate_payload_pwj0Q7/__main__.py\", line 430, in <module>\n File \"/tmp/ansible_nsupdate_payload_pwj0Q7/__main__.py\", line 408, in main\n File \"/tmp/ansible_nsupdate_payload_pwj0Q7/__main__.py\", line 225, in __init__\nTypeError: argument 2 to map() must support iteration\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 1\r\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
```
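For context, a minimal standalone sketch (not the module's actual code) of the failure: when `state=absent` no `value` is supplied, so `module.params['value']` is `None`, and passing `None` to `map()` raises the `TypeError` shown above. The second line illustrates the kind of guard a fix could apply:
```python
def txt_helper(entry):
    # Quote a TXT value unless it is already quoted (mirrors the module's helper).
    if entry[0] == '"' and entry[-1] == '"':
        return entry
    return '"{text}"'.format(text=entry)

value = None  # state=absent supplies no value for the record

try:
    quoted = list(map(txt_helper, value))
except TypeError as e:
    # Python 2: "argument 2 to map() must support iteration"
    # Python 3: "'NoneType' object is not iterable"
    print(e)

# One possible guard: only quote TXT values when a value list is present.
quoted = list(map(txt_helper, value)) if value is not None else None
```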
|
https://github.com/ansible/ansible/issues/63364
|
https://github.com/ansible/ansible/pull/63408
|
332b22085458d105a52a38494f2dfebd946a0055
|
98b025239a692b9b5d3533e977c0881792900348
| 2019-10-11T01:50:15Z |
python
| 2019-10-16T12:28:41Z |
changelogs/fragments/63408-nsupdate-dont-fix-none-txt-value.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,364 |
nsupdate module failed to remove Letsencrypt DNS challenge record
|
|
https://github.com/ansible/ansible/issues/63364
|
https://github.com/ansible/ansible/pull/63408
|
332b22085458d105a52a38494f2dfebd946a0055
|
98b025239a692b9b5d3533e977c0881792900348
| 2019-10-11T01:50:15Z |
python
| 2019-10-16T12:28:41Z |
lib/ansible/modules/net_tools/nsupdate.py
|
#!/usr/bin/python
# (c) 2016, Marcin Skarbek <[email protected]>
# (c) 2016, Andreas Olsson <[email protected]>
# (c) 2017, Loic Blot <[email protected]>
#
# This module was ported from https://github.com/mskarbek/ansible-nsupdate
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: nsupdate
short_description: Manage DNS records.
description:
- Create, update and remove DNS records using DDNS updates
version_added: "2.3"
requirements:
- dnspython
author: "Loic Blot (@nerzhul)"
options:
state:
description:
- Manage DNS record.
choices: ['present', 'absent']
default: 'present'
server:
description:
- Apply DNS modification on this server.
required: true
port:
description:
- Use this TCP port when connecting to C(server).
default: 53
version_added: 2.5
key_name:
description:
- Use TSIG key name to authenticate against DNS C(server)
key_secret:
description:
- Use TSIG key secret, associated with C(key_name), to authenticate against C(server)
key_algorithm:
description:
- Specify key algorithm used by C(key_secret).
choices: ['HMAC-MD5.SIG-ALG.REG.INT', 'hmac-md5', 'hmac-sha1', 'hmac-sha224', 'hmac-sha256', 'hmac-sha384',
'hmac-sha512']
default: 'hmac-md5'
zone:
description:
- DNS record will be modified on this C(zone).
- When omitted DNS will be queried to attempt finding the correct zone.
- Starting with Ansible 2.7 this parameter is optional.
record:
description:
- Sets the DNS record to modify. When zone is omitted this has to be absolute (ending with a dot).
required: true
type:
description:
- Sets the record type.
default: 'A'
ttl:
description:
- Sets the record TTL.
default: 3600
value:
description:
- Sets the record value.
protocol:
description:
- Sets the transport protocol (TCP or UDP). TCP is the recommended and a more robust option.
default: 'tcp'
choices: ['tcp', 'udp']
version_added: 2.8
'''
EXAMPLES = '''
- name: Add or modify ansible.example.org A to 192.168.1.1"
nsupdate:
key_name: "nsupdate"
key_secret: "+bFQtBCta7j2vWkjPkAFtgA=="
server: "10.1.1.1"
zone: "example.org"
record: "ansible"
value: "192.168.1.1"
- name: Add or modify ansible.example.org A to 192.168.1.1, 192.168.1.2 and 192.168.1.3"
nsupdate:
key_name: "nsupdate"
key_secret: "+bFQtBCta7j2vWkjPkAFtgA=="
server: "10.1.1.1"
zone: "example.org"
record: "ansible"
value: ["192.168.1.1", "192.168.1.2", "192.168.1.3"]
- name: Remove puppet.example.org CNAME
nsupdate:
key_name: "nsupdate"
key_secret: "+bFQtBCta7j2vWkjPkAFtgA=="
server: "10.1.1.1"
zone: "example.org"
record: "puppet"
type: "CNAME"
state: absent
- name: Add 1.1.168.192.in-addr.arpa. PTR for ansible.example.org
nsupdate:
key_name: "nsupdate"
key_secret: "+bFQtBCta7j2vWkjPkAFtgA=="
server: "10.1.1.1"
record: "1.1.168.192.in-addr.arpa."
type: "PTR"
value: "ansible.example.org."
state: present
- name: Remove 1.1.168.192.in-addr.arpa. PTR
nsupdate:
key_name: "nsupdate"
key_secret: "+bFQtBCta7j2vWkjPkAFtgA=="
server: "10.1.1.1"
record: "1.1.168.192.in-addr.arpa."
type: "PTR"
state: absent
'''
RETURN = '''
changed:
description: If module has modified record
returned: success
type: str
record:
description: DNS record
returned: success
type: str
sample: 'ansible'
ttl:
description: DNS record TTL
returned: success
type: int
sample: 86400
type:
description: DNS record type
returned: success
type: str
sample: 'CNAME'
value:
description: DNS record value(s)
returned: success
type: list
sample: '192.168.1.1'
zone:
description: DNS record zone
returned: success
type: str
sample: 'example.org.'
dns_rc:
description: dnspython return code
returned: always
type: int
sample: 4
dns_rc_str:
description: dnspython return code (string representation)
returned: always
type: str
sample: 'REFUSED'
'''
import traceback
from binascii import Error as binascii_error
from socket import error as socket_error
DNSPYTHON_IMP_ERR = None
try:
import dns.update
import dns.query
import dns.tsigkeyring
import dns.message
import dns.resolver
HAVE_DNSPYTHON = True
except ImportError:
DNSPYTHON_IMP_ERR = traceback.format_exc()
HAVE_DNSPYTHON = False
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
class RecordManager(object):
def __init__(self, module):
self.module = module
if module.params['zone'] is None:
if module.params['record'][-1] != '.':
self.module.fail_json(msg='record must be absolute when omitting zone parameter')
self.zone = self.lookup_zone()
else:
self.zone = module.params['zone']
if self.zone[-1] != '.':
self.zone += '.'
if module.params['record'][-1] != '.':
self.fqdn = module.params['record'] + '.' + self.zone
else:
self.fqdn = module.params['record']
if module.params['key_name']:
try:
self.keyring = dns.tsigkeyring.from_text({
module.params['key_name']: module.params['key_secret']
})
except TypeError:
module.fail_json(msg='Missing key_secret')
except binascii_error as e:
module.fail_json(msg='TSIG key error: %s' % to_native(e))
else:
self.keyring = None
if module.params['key_algorithm'] == 'hmac-md5':
self.algorithm = 'HMAC-MD5.SIG-ALG.REG.INT'
else:
self.algorithm = module.params['key_algorithm']
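# Note: module.params['value'] defaults to None, so for a TXT record with
# state=absent (no value supplied) the map() call below raises "TypeError:
# argument 2 to map() must support iteration" on Python 2 -- the failure
# reported in this issue, apparently addressed by PR #63408 (per its
# changelog fragment name) by not quoting a None TXT value.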
if self.module.params['type'].lower() == 'txt':
self.value = list(map(self.txt_helper, self.module.params['value']))
else:
self.value = self.module.params['value']
self.dns_rc = 0
def txt_helper(self, entry):
if entry[0] == '"' and entry[-1] == '"':
return entry
return '"{text}"'.format(text=entry)
def lookup_zone(self):
name = dns.name.from_text(self.module.params['record'])
while True:
query = dns.message.make_query(name, dns.rdatatype.SOA)
try:
if self.module.params['protocol'] == 'tcp':
lookup = dns.query.tcp(query, self.module.params['server'], timeout=10, port=self.module.params['port'])
else:
lookup = dns.query.udp(query, self.module.params['server'], timeout=10, port=self.module.params['port'])
except (socket_error, dns.exception.Timeout) as e:
self.module.fail_json(msg='DNS server error: (%s): %s' % (e.__class__.__name__, to_native(e)))
if lookup.rcode() in [dns.rcode.SERVFAIL, dns.rcode.REFUSED]:
self.module.fail_json(msg='Zone lookup failure: \'%s\' will not respond to queries regarding \'%s\'.' % (
self.module.params['server'], self.module.params['record']))
try:
zone = lookup.authority[0].name
if zone == name:
return zone.to_text()
except IndexError:
pass
try:
name = name.parent()
except dns.name.NoParent:
self.module.fail_json(msg='Zone lookup of \'%s\' failed for unknown reason.' % (self.module.params['record']))
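# The loop above walks the record name up one label at a time (e.g. from
# '_acme-challenge.awx.example.org.' to 'awx.example.org.' to 'example.org.')
# and returns the first name that the server's SOA authority section reports
# as the zone itself.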
def __do_update(self, update):
response = None
try:
if self.module.params['protocol'] == 'tcp':
response = dns.query.tcp(update, self.module.params['server'], timeout=10, port=self.module.params['port'])
else:
response = dns.query.udp(update, self.module.params['server'], timeout=10, port=self.module.params['port'])
except (dns.tsig.PeerBadKey, dns.tsig.PeerBadSignature) as e:
self.module.fail_json(msg='TSIG update error (%s): %s' % (e.__class__.__name__, to_native(e)))
except (socket_error, dns.exception.Timeout) as e:
self.module.fail_json(msg='DNS server error: (%s): %s' % (e.__class__.__name__, to_native(e)))
return response
def create_or_update_record(self):
result = {'changed': False, 'failed': False}
exists = self.record_exists()
if exists in [0, 2]:
if self.module.check_mode:
self.module.exit_json(changed=True)
if exists == 0:
self.dns_rc = self.create_record()
if self.dns_rc != 0:
result['msg'] = "Failed to create DNS record (rc: %d)" % self.dns_rc
elif exists == 2:
self.dns_rc = self.modify_record()
if self.dns_rc != 0:
result['msg'] = "Failed to update DNS record (rc: %d)" % self.dns_rc
if self.dns_rc != 0:
result['failed'] = True
else:
result['changed'] = True
else:
result['changed'] = False
return result
def create_record(self):
update = dns.update.Update(self.zone, keyring=self.keyring, keyalgorithm=self.algorithm)
for entry in self.value:
try:
update.add(self.module.params['record'],
self.module.params['ttl'],
self.module.params['type'],
entry)
except AttributeError:
self.module.fail_json(msg='value needed when state=present')
except dns.exception.SyntaxError:
self.module.fail_json(msg='Invalid/malformed value')
response = self.__do_update(update)
return dns.message.Message.rcode(response)
def modify_record(self):
update = dns.update.Update(self.zone, keyring=self.keyring, keyalgorithm=self.algorithm)
update.delete(self.module.params['record'], self.module.params['type'])
for entry in self.value:
try:
update.add(self.module.params['record'],
self.module.params['ttl'],
self.module.params['type'],
entry)
except AttributeError:
self.module.fail_json(msg='value needed when state=present')
except dns.exception.SyntaxError:
self.module.fail_json(msg='Invalid/malformed value')
response = self.__do_update(update)
return dns.message.Message.rcode(response)
def remove_record(self):
result = {'changed': False, 'failed': False}
if self.record_exists() == 0:
return result
# In check mode, the record exists, so report a simulated change.
if self.module.check_mode:
self.module.exit_json(changed=True)
update = dns.update.Update(self.zone, keyring=self.keyring, keyalgorithm=self.algorithm)
update.delete(self.module.params['record'], self.module.params['type'])
response = self.__do_update(update)
self.dns_rc = dns.message.Message.rcode(response)
if self.dns_rc != 0:
result['failed'] = True
result['msg'] = "Failed to delete record (rc: %d)" % self.dns_rc
else:
result['changed'] = True
return result
def record_exists(self):
update = dns.update.Update(self.zone, keyring=self.keyring, keyalgorithm=self.algorithm)
try:
update.present(self.module.params['record'], self.module.params['type'])
except dns.rdatatype.UnknownRdatatype as e:
self.module.fail_json(msg='Record error: {0}'.format(to_native(e)))
response = self.__do_update(update)
self.dns_rc = dns.message.Message.rcode(response)
if self.dns_rc == 0:
if self.module.params['state'] == 'absent':
return 1
for entry in self.value:
try:
update.present(self.module.params['record'], self.module.params['type'], entry)
except AttributeError:
self.module.fail_json(msg='value needed when state=present')
except dns.exception.SyntaxError:
self.module.fail_json(msg='Invalid/malformed value')
response = self.__do_update(update)
self.dns_rc = dns.message.Message.rcode(response)
if self.dns_rc == 0:
if self.ttl_changed():
return 2
else:
return 1
else:
return 2
else:
return 0
def ttl_changed(self):
query = dns.message.make_query(self.fqdn, self.module.params['type'])
try:
if self.module.params['protocol'] == 'tcp':
lookup = dns.query.tcp(query, self.module.params['server'], timeout=10, port=self.module.params['port'])
else:
lookup = dns.query.udp(query, self.module.params['server'], timeout=10, port=self.module.params['port'])
except (socket_error, dns.exception.Timeout) as e:
self.module.fail_json(msg='DNS server error: (%s): %s' % (e.__class__.__name__, to_native(e)))
current_ttl = lookup.answer[0].ttl
return current_ttl != self.module.params['ttl']
def main():
tsig_algs = ['HMAC-MD5.SIG-ALG.REG.INT', 'hmac-md5', 'hmac-sha1', 'hmac-sha224',
'hmac-sha256', 'hmac-sha384', 'hmac-sha512']
module = AnsibleModule(
argument_spec=dict(
state=dict(required=False, default='present', choices=['present', 'absent'], type='str'),
server=dict(required=True, type='str'),
port=dict(required=False, default=53, type='int'),
key_name=dict(required=False, type='str'),
key_secret=dict(required=False, type='str', no_log=True),
key_algorithm=dict(required=False, default='hmac-md5', choices=tsig_algs, type='str'),
zone=dict(required=False, default=None, type='str'),
record=dict(required=True, type='str'),
type=dict(required=False, default='A', type='str'),
ttl=dict(required=False, default=3600, type='int'),
value=dict(required=False, default=None, type='list'),
protocol=dict(required=False, default='tcp', choices=['tcp', 'udp'], type='str')
),
supports_check_mode=True
)
if not HAVE_DNSPYTHON:
module.fail_json(msg=missing_required_lib('dnspython'), exception=DNSPYTHON_IMP_ERR)
if len(module.params["record"]) == 0:
module.fail_json(msg='record cannot be empty.')
record = RecordManager(module)
result = {}
if module.params["state"] == 'absent':
result = record.remove_record()
elif module.params["state"] == 'present':
result = record.create_or_update_record()
result['dns_rc'] = record.dns_rc
result['dns_rc_str'] = dns.rcode.to_text(record.dns_rc)
if result['failed']:
module.fail_json(**result)
else:
result['record'] = dict(zone=record.zone,
record=module.params['record'],
type=module.params['type'],
ttl=module.params['ttl'],
value=record.value)
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,290 |
eos_logging : Logging configuration fails on not passing 'dest'
|
**SUMMARY**
```
While trying to configure 'logging facility local7' on an Arista EOS switch, the following commands are pushed to the switch:
['logging facility local7', 'logging None']
Only 'logging facility local7' is expected.
Because of this, the playbook fails.
```
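A runnable sketch of the kind of guard that avoids this (the helper name is hypothetical and this is not necessarily the merged fix):
```python
def build_dest_command(dest, level=None):
    """Build 'logging <dest> [<level>]' only when a destination is set."""
    if not dest:
        # only 'facility' was supplied; emit no destination command at all
        return None
    cmd = 'logging {0}'.format(dest)
    if level:
        cmd += ' {0}'.format(level)
    return cmd


assert build_dest_command(None) is None            # no more 'logging None'
assert build_dest_command('console', 'warnings') == 'logging console warnings'
```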
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
eos_logging.py
```
##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/gosriniv/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/gosriniv/ansible/lib/ansible
executable location = /home/gosriniv/ansible/bin/ansible
python version = 3.6.5 (default, Sep 4 2019, 12:23:33) [GCC 9.0.1 20190312 (Red Hat 9.0.1-0.10)]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
```
"Arista vEOS"
```
##### STEPS TO REPRODUCE
```
---
- hosts: eos
gather_facts: false
connection: network_cli
tasks:
- name: Set up logging facility
eos_logging:
facility: local7
become: yes
register: result
- assert:
that:
- 'result.changed == true'
- '"logging facility local7" in result.commands'
```
##### EXPECTED RESULTS
The expectation is that there should not be any failure
##### ACTUAL RESULTS
```
fatal: [10.8.38.31]: FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"data": "logging None\r\n% Invalid input\r\nfoo(config-s-ansibl)#",
"invocation": {
"module_args": {
"aggregate": null,
"auth_pass": null,
"authorize": null,
"dest": null,
"facility": "local7",
"host": null,
"level": null,
"name": null,
"password": null,
"port": null,
"provider": null,
"size": null,
"ssh_keyfile": null,
"state": "present",
"timeout": null,
"transport": null,
"use_ssl": null,
"username": null,
"validate_certs": null
}
},
"msg": "logging None\r\n% Invalid input\r\nfoo(config-s-ansibl)#"
}
```
|
https://github.com/ansible/ansible/issues/63290
|
https://github.com/ansible/ansible/pull/63308
|
88013e7159bdaf0ae1198aee736cfd04689503a2
|
6d7620b646ca4cf53a9febfa3eb4c9576a886acf
| 2019-10-09T17:38:23Z |
python
| 2019-10-16T13:47:46Z |
lib/ansible/modules/network/eos/eos_logging.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Ansible by Red Hat, inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = """
---
module: eos_logging
version_added: "2.4"
author: "Trishna Guha (@trishnaguha)"
short_description: Manage logging on network devices
description:
- This module provides declarative management of logging
on Arista EOS devices.
notes:
- Tested against EOS 4.15
options:
dest:
description:
- Destination of the logs.
choices: ['on', 'host', 'console', 'monitor', 'buffered']
name:
description:
- The hostname or IP address of the destination.
- Required when I(dest=host).
size:
description:
- Size of buffer. The acceptable value is in the range from 10 to
2147483647 bytes.
facility:
description:
- Set logging facility.
level:
description:
- Set logging severity levels.
choices: ['emergencies', 'alerts', 'critical', 'errors',
'warnings', 'notifications', 'informational', 'debugging']
aggregate:
description: List of logging definitions.
state:
description:
- State of the logging configuration.
default: present
choices: ['present', 'absent']
extends_documentation_fragment: eos
"""
EXAMPLES = """
- name: configure host logging
eos_logging:
dest: host
name: 172.16.0.1
state: present
- name: remove host logging configuration
eos_logging:
dest: host
name: 172.16.0.1
state: absent
- name: configure console logging level and facility
eos_logging:
dest: console
facility: local7
level: debugging
state: present
- name: enable logging to all
eos_logging:
dest : on
- name: configure buffer size
eos_logging:
dest: buffered
size: 5000
- name: Configure logging using aggregate
eos_logging:
aggregate:
- { dest: console, level: warnings }
- { dest: buffered, size: 480000 }
state: present
"""
RETURN = """
commands:
description: The list of configuration mode commands to send to the device
returned: always
type: list
sample:
- logging facility local7
- logging host 172.16.0.1
"""
import re
from copy import deepcopy
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.network.common.utils import remove_default_spec
from ansible.module_utils.network.eos.eos import get_config, load_config
from ansible.module_utils.network.eos.eos import eos_argument_spec
DEST_GROUP = ['on', 'host', 'console', 'monitor', 'buffered']
LEVEL_GROUP = ['emergencies', 'alerts', 'critical', 'errors',
'warnings', 'notifications', 'informational',
'debugging']
def validate_size(value, module):
if value:
if not 10 <= value <= 2147483647:
module.fail_json(msg='size must be between 10 and 2147483647')
else:
return value
def map_obj_to_commands(updates, module):
commands = list()
want, have = updates
for w in want:
dest = w['dest']
name = w['name']
size = w['size']
facility = w['facility']
level = w['level']
state = w['state']
del w['state']
if state == 'absent' and w in have:
if dest:
if dest == 'host':
commands.append('no logging host {0}'.format(name))
elif dest in DEST_GROUP:
commands.append('no logging {0}'.format(dest))
else:
module.fail_json(msg='dest must be among console, monitor, buffered, host, on')
if facility:
commands.append('no logging facility {0}'.format(facility))
if state == 'present' and w not in have:
if facility:
present = False
# Iterate over every dictionary in the 'have' list to check if
# similar configuration for facility exists or not
for entry in have:
if not entry['dest'] and entry['facility'] == facility:
present = True
if not present:
commands.append('logging facility {0}'.format(facility))
if dest == 'host':
commands.append('logging host {0}'.format(name))
elif dest == 'on':
commands.append('logging on')
elif dest == 'buffered' and size:
present = False
# Deals with the following two cases:
# Case 1: logging buffered <size> <level>
# logging buffered <same-size>
#
# Case 2: Same buffered logging configuration
# already exists (i.e., both size &
# level are same)
for entry in have:
if entry['dest'] == 'buffered' and entry['size'] == size:
if not level or entry['level'] == level:
present = True
if not present:
if size and level:
commands.append('logging buffered {0} {1}'.format(size, level))
else:
commands.append('logging buffered {0}'.format(size))
else:
dest_cmd = 'logging {0}'.format(dest)
if level:
dest_cmd += ' {0}'.format(level)
commands.append(dest_cmd)
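# NOTE: when only 'facility' is given, 'dest' is None here and this branch
# builds 'logging None' (the failure described in the issue above). A hedged
# guard sketch: use 'elif dest:' instead of the bare 'else:' so no destination
# command is emitted without a destination.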
return commands
def parse_facility(line):
facility = None
match = re.search(r'logging facility (\S+)', line, re.M)
if match:
facility = match.group(1)
return facility
def parse_size(line, dest):
size = None
if dest == 'buffered':
match = re.search(r'logging buffered (\S+)', line, re.M)
if match:
try:
int_size = int(match.group(1))
except ValueError:
int_size = None
if int_size:
if isinstance(int_size, int):
size = str(match.group(1))
else:
size = str(10)
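# (the str(10) fallback above is effectively unreachable: int_size is always
# an int whenever it is truthy)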
return size
def parse_name(line, dest):
name = None
if dest == 'host':
match = re.search(r'logging host (\S+)', line, re.M)
if match:
name = match.group(1)
return name
def parse_level(line, dest):
level = None
if dest != 'host':
# Line for buffer logging entry in running-config is of the form:
# logging buffered <size> <level>
if dest == 'buffered':
match = re.search(r'logging buffered (?:\d+) (\S+)', line, re.M)
else:
match = re.search(r'logging {0} (\S+)'.format(dest), line, re.M)
if match:
if match.group(1) in LEVEL_GROUP:
level = match.group(1)
return level
def map_config_to_obj(module):
obj = []
data = get_config(module, flags=['section logging'])
for line in data.split('\n'):
match = re.search(r'logging (\S+)', line, re.M)
if match:
if match.group(1) in DEST_GROUP:
dest = match.group(1)
else:
dest = None
obj.append({'dest': dest,
'name': parse_name(line, dest),
'size': parse_size(line, dest),
'facility': parse_facility(line),
'level': parse_level(line, dest)})
return obj
def parse_obj(obj, module):
if module.params['size'] is None:
obj.append({
'dest': module.params['dest'],
'name': module.params['name'],
'size': module.params['size'],
'facility': module.params['facility'],
'level': module.params['level'],
'state': module.params['state']
})
else:
obj.append({
'dest': module.params['dest'],
'name': module.params['name'],
'size': str(validate_size(module.params['size'], module)),
'facility': module.params['facility'],
'level': module.params['level'],
'state': module.params['state']
})
return obj
def map_params_to_obj(module, required_if=None):
obj = []
aggregate = module.params.get('aggregate')
if aggregate:
for item in aggregate:
for key in item:
if item.get(key) is None:
item[key] = module.params[key]
module._check_required_if(required_if, item)
d = item.copy()
if d['dest'] != 'host':
d['name'] = None
if d['dest'] == 'buffered':
if 'size' in d:
d['size'] = str(validate_size(d['size'], module))
elif 'size' not in d:
d['size'] = str(10)
else:
pass
if d['dest'] != 'buffered':
d['size'] = None
obj.append(d)
else:
if module.params['dest'] != 'host':
module.params['name'] = None
if module.params['dest'] == 'buffered':
if not module.params['size']:
module.params['size'] = str(10)
else:
module.params['size'] = None
parse_obj(obj, module)
return obj
def main():
""" main entry point for module execution
"""
element_spec = dict(
dest=dict(choices=DEST_GROUP),
name=dict(),
size=dict(type='int'),
facility=dict(),
level=dict(choices=LEVEL_GROUP),
state=dict(default='present', choices=['present', 'absent']),
)
aggregate_spec = deepcopy(element_spec)
# remove default in aggregate spec, to handle common arguments
remove_default_spec(aggregate_spec)
argument_spec = dict(
aggregate=dict(type='list', elements='dict', options=aggregate_spec),
)
argument_spec.update(element_spec)
argument_spec.update(eos_argument_spec)
required_if = [('dest', 'host', ['name'])]
module = AnsibleModule(argument_spec=argument_spec,
required_if=required_if,
supports_check_mode=True)
warnings = list()
result = {'changed': False}
if warnings:
result['warnings'] = warnings
have = map_config_to_obj(module)
want = map_params_to_obj(module, required_if=required_if)
commands = map_obj_to_commands((want, have), module)
result['commands'] = commands
if commands:
commit = not module.check_mode
response = load_config(module, commands, commit=commit)
if response.get('diff') and module._diff:
result['diff'] = {'prepared': response.get('diff')}
result['session_name'] = response.get('session')
result['changed'] = True
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,290 |
eos_logging : Logging configuration fails on not passing 'dest'
|
**SUMMARY**
```
While trying to configure 'logging facility local7' on an Arista EOS switch, the following commands are pushed to the switch:
['logging facility local7', 'logging None']
Only 'logging facility local7' is expected.
Because of this, the playbook fails.
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
eos_logging.py
```
##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/gosriniv/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/gosriniv/ansible/lib/ansible
executable location = /home/gosriniv/ansible/bin/ansible
python version = 3.6.5 (default, Sep 4 2019, 12:23:33) [GCC 9.0.1 20190312 (Red Hat 9.0.1-0.10)]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
```
"Arista vEOS"
```
##### STEPS TO REPRODUCE
```
---
- hosts: eos
gather_facts: false
connection: network_cli
tasks:
- name: Set up logging facility
eos_logging:
facility: local7
become: yes
register: result
- assert:
that:
- 'result.changed == true'
- '"logging facility local7" in result.commands'
```
##### EXPECTED RESULTS
The expectation is that there should not be any failure
##### ACTUAL RESULTS
```
fatal: [10.8.38.31]: FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"data": "logging None\r\n% Invalid input\r\nfoo(config-s-ansibl)#",
"invocation": {
"module_args": {
"aggregate": null,
"auth_pass": null,
"authorize": null,
"dest": null,
"facility": "local7",
"host": null,
"level": null,
"name": null,
"password": null,
"port": null,
"provider": null,
"size": null,
"ssh_keyfile": null,
"state": "present",
"timeout": null,
"transport": null,
"use_ssl": null,
"username": null,
"validate_certs": null
}
},
"msg": "logging None\r\n% Invalid input\r\nfoo(config-s-ansibl)#"
}
```
|
https://github.com/ansible/ansible/issues/63290
|
https://github.com/ansible/ansible/pull/63308
|
88013e7159bdaf0ae1198aee736cfd04689503a2
|
6d7620b646ca4cf53a9febfa3eb4c9576a886acf
| 2019-10-09T17:38:23Z |
python
| 2019-10-16T13:47:46Z |
test/integration/targets/eos_logging/tests/cli/basic.yaml
|
---
- debug: msg="START cli/basic.yaml on connection={{ ansible_connection }}"
- name: Set up host logging
eos_logging:
dest: host
name: 172.16.0.1
state: present
become: yes
register: result
- assert:
that:
- 'result.changed == true'
- '"logging host 172.16.0.1" in result.commands'
- name: Set up host logging again (idempotent)
eos_logging:
dest: host
name: 172.16.0.1
state: present
become: yes
register: result
- assert:
that:
- 'result.changed == false'
- name: Delete/disable host logging
eos_logging:
dest: host
name: 172.16.0.1
state: absent
become: yes
register: result
- assert:
that:
- 'result.changed == true'
- '"no logging host 172.16.0.1" in result.commands'
- name: Delete/disable host logging (idempotent)
eos_logging:
dest: host
name: 172.16.0.1
state: absent
become: yes
register: result
- assert:
that:
- 'result.changed == false'
- name: Console logging with level warnings
eos_logging:
dest: console
level: warnings
state: present
become: yes
register: result
- assert:
that:
- 'result.changed == true'
- '"logging console warnings" in result.commands'
- name: Configure buffer size
eos_logging:
dest: buffered
size: 480000
become: yes
register: result
- assert:
that:
- 'result.changed == true'
- '"logging buffered 480000" in result.commands'
- name: Set up logging destination and facility at the same time
eos_logging:
dest: buffered
size: 4096
facility: local7
level: informational
state: present
become: yes
register: result
- assert:
that:
- 'result.changed == true'
- '"logging buffered 4096 informational" in result.commands'
- '"logging facility local7" in result.commands'
- name: Set up logging destination and facility at the same time again (idempotent)
eos_logging:
dest: buffered
size: 4096
facility: local7
level: informational
state: present
become: yes
register: result
- assert:
that:
- 'result.changed == false'
- name: remove logging as collection tearDown
eos_logging:
aggregate:
- { dest: console, level: warnings, state: absent }
- { dest: buffered, level: informational, size: 4096, state: absent }
- { facility: local7, state: absent }
become: yes
register: result
- assert:
that:
- 'result.changed == true'
- '"no logging console" in result.commands'
- '"no logging buffered" in result.commands'
- '"no logging facility local7" in result.commands'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,315 |
[aws lightsail] Delete operation is not idempotent
|
##### SUMMARY
The delete operation of the `lightsail` module is not idempotent. If the instance is already deleted, then the task generates a traceback.
```
Traceback (most recent call last):
File "ansible/modules/cloud/amazon/lightsail.py", line 458, in main
File "ansible/modules/cloud/amazon/lightsail.py", line 422, in core
File "ansible/module_utils/common/dict_transformations.py", line 42, in camel_dict_to_snake_dict
for k, v in camel_dict.items():
AttributeError: 'NoneType' object has no attribute 'items'
```
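A minimal guard sketch (an assumption about one possible fix, not necessarily the merged change) that keeps the delete path idempotent:
```python
from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict

# delete_instance() returns (False, None) when the instance is already gone,
# so only convert when an instance dict was actually found
instance_dict = None
instance = camel_dict_to_snake_dict(instance_dict) if instance_dict else {}
assert instance == {}  # exit_json(changed=False, instance={}) instead of a traceback
```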
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`lightsail`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
nothing
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Python 3.7.4
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- lightsail:
state: absent
name: my_instance
```
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Task successfully returns with `changed: false`.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Instead this traceback is generated -
<!--- Paste verbatim command output between quotes -->
```paste below
Traceback (most recent call last):
File "ansible/modules/cloud/amazon/lightsail.py", line 458, in main
File "ansible/modules/cloud/amazon/lightsail.py", line 422, in core
File "ansible/module_utils/common/dict_transformations.py", line 42, in camel_dict_to_snake_dict
for k, v in camel_dict.items():
AttributeError: 'NoneType' object has no attribute 'items'
```
|
https://github.com/ansible/ansible/issues/63315
|
https://github.com/ansible/ansible/pull/63317
|
94c23136be35ca5334945a66c1342863dd026fa4
|
275b3972cb147583bdd7cd394df708f9d97d7f84
| 2019-10-10T04:34:28Z |
python
| 2019-10-16T17:01:50Z |
lib/ansible/modules/cloud/amazon/lightsail.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: lightsail
short_description: Create or delete a virtual machine instance in AWS Lightsail
description:
- Creates or deletes instances in AWS Lightsail and optionally waits for the instance to be 'running'.
version_added: "2.4"
author: "Nick Ball (@nickball)"
options:
state:
description:
- Indicate desired state of the target.
default: present
choices: ['present', 'absent', 'running', 'restarted', 'stopped']
name:
description:
- Name of the instance
required: true
zone:
description:
- AWS availability zone in which to launch the instance. Required when state='present'
blueprint_id:
description:
- ID of the instance blueprint image. Required when state='present'
bundle_id:
description:
- Bundle of specification info for the instance. Required when state='present'
user_data:
description:
- Launch script that can configure the instance with additional data
key_pair_name:
description:
- Name of the key pair to use with the instance
wait:
description:
- Wait for the instance to be in state 'running' before returning. If wait is "no" an ip_address may not be returned
type: bool
default: 'yes'
wait_timeout:
description:
- How long before wait gives up, in seconds.
default: 300
requirements:
- "python >= 2.6"
- boto3
extends_documentation_fragment:
- aws
- ec2
'''
EXAMPLES = '''
# Create a new Lightsail instance, register the instance details
- lightsail:
state: present
name: myinstance
region: us-east-1
zone: us-east-1a
blueprint_id: ubuntu_16_04
bundle_id: nano_1_0
key_pair_name: id_rsa
user_data: " echo 'hello world' > /home/ubuntu/test.txt"
wait_timeout: 500
register: my_instance
- debug:
msg: "Name is {{ my_instance.instance.name }}"
- debug:
msg: "IP is {{ my_instance.instance.public_ip_address }}"
# Delete an instance if present
- lightsail:
state: absent
region: us-east-1
name: myinstance
'''
RETURN = '''
changed:
description: if an instance has been modified/created
returned: always
type: bool
sample:
changed: true
instance:
description: instance data
returned: always
type: dict
sample:
arn: "arn:aws:lightsail:us-east-1:448830907657:Instance/1fef0175-d6c8-480e-84fa-214f969cda87"
blueprint_id: "ubuntu_16_04"
blueprint_name: "Ubuntu"
bundle_id: "nano_1_0"
created_at: "2017-03-27T08:38:59.714000-04:00"
hardware:
cpu_count: 1
ram_size_in_gb: 0.5
is_static_ip: false
location:
availability_zone: "us-east-1a"
region_name: "us-east-1"
name: "my_instance"
networking:
monthly_transfer:
gb_per_month_allocated: 1024
ports:
- access_direction: "inbound"
access_from: "Anywhere (0.0.0.0/0)"
access_type: "public"
common_name: ""
from_port: 80
protocol: tcp
to_port: 80
- access_direction: "inbound"
access_from: "Anywhere (0.0.0.0/0)"
access_type: "public"
common_name: ""
from_port: 22
protocol: tcp
to_port: 22
private_ip_address: "172.26.8.14"
public_ip_address: "34.207.152.202"
resource_type: "Instance"
ssh_key_name: "keypair"
state:
code: 16
name: running
support_code: "588307843083/i-0997c97831ee21e33"
username: "ubuntu"
'''
import time
import traceback
try:
import botocore
HAS_BOTOCORE = True
except ImportError:
HAS_BOTOCORE = False
try:
import boto3
except ImportError:
# will be caught by imported HAS_BOTO3
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ec2 import (ec2_argument_spec, get_aws_connection_info, boto3_conn,
HAS_BOTO3, camel_dict_to_snake_dict)
def create_instance(module, client, instance_name):
"""
Create an instance
module: Ansible module object
client: authenticated lightsail connection object
instance_name: name of instance to create
Returns a dictionary of instance information
about the new instance.
"""
changed = False
# Check if instance already exists
inst = None
try:
inst = _find_instance_info(client, instance_name)
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] != 'NotFoundException':
module.fail_json(msg='Error finding instance {0}, error: {1}'.format(instance_name, e))
zone = module.params.get('zone')
blueprint_id = module.params.get('blueprint_id')
bundle_id = module.params.get('bundle_id')
key_pair_name = module.params.get('key_pair_name')
user_data = module.params.get('user_data')
user_data = '' if user_data is None else user_data
resp = None
if inst is None:
try:
resp = client.create_instances(
instanceNames=[
instance_name
],
availabilityZone=zone,
blueprintId=blueprint_id,
bundleId=bundle_id,
userData=user_data,
keyPairName=key_pair_name,
)
resp = resp['operations'][0]
except botocore.exceptions.ClientError as e:
module.fail_json(msg='Unable to create instance {0}, error: {1}'.format(instance_name, e))
changed = True
inst = _find_instance_info(client, instance_name)
return (changed, inst)
def delete_instance(module, client, instance_name):
"""
Terminates an instance
module: Ansible module object
client: authenticated lightsail connection object
instance_name: name of instance to delete
Returns a dictionary of instance information
about the instance deleted (pre-deletion).
If the instance to be deleted does not exist,
"changed" will be set to False.
"""
# It looks like deleting removes the instance immediately, nothing to wait for
wait = module.params.get('wait')
wait_timeout = int(module.params.get('wait_timeout'))
wait_max = time.time() + wait_timeout
changed = False
inst = None
try:
inst = _find_instance_info(client, instance_name)
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] != 'NotFoundException':
module.fail_json(msg='Error finding instance {0}, error: {1}'.format(instance_name, e))
# Wait for instance to exit transition state before deleting
if wait:
while wait_max > time.time() and inst is not None and inst['state']['name'] in ('pending', 'stopping'):
try:
time.sleep(5)
inst = _find_instance_info(client, instance_name)
except botocore.exceptions.ClientError as e:
if e.response['ResponseMetadata']['HTTPStatusCode'] == "403":
module.fail_json(msg="Failed to delete instance {0}. Check that you have permissions to perform the operation.".format(instance_name),
exception=traceback.format_exc())
elif e.response['Error']['Code'] == "RequestExpired":
module.fail_json(msg="RequestExpired: Failed to delete instance {0}.".format(instance_name), exception=traceback.format_exc())
# sleep and retry
time.sleep(10)
# Attempt to delete
if inst is not None:
while not changed and ((wait and wait_max > time.time()) or (not wait)):
try:
client.delete_instance(instanceName=instance_name)
changed = True
except botocore.exceptions.ClientError as e:
module.fail_json(msg='Error deleting instance {0}, error: {1}'.format(instance_name, e))
# Timed out
if wait and not changed and wait_max <= time.time():
module.fail_json(msg="wait for instance delete timeout at %s" % time.asctime())
return (changed, inst)
def restart_instance(module, client, instance_name):
"""
Reboot an existing instance
module: Ansible module object
client: authenticated lightsail connection object
instance_name: name of instance to reboot
Returns a dictionary of instance information
about the restarted instance.
If the instance was not able to reboot,
"changed" will be set to False.
Wait will not apply here as this is an OS-level operation
"""
wait = module.params.get('wait')
wait_timeout = int(module.params.get('wait_timeout'))
wait_max = time.time() + wait_timeout
changed = False
inst = None
try:
inst = _find_instance_info(client, instance_name)
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] != 'NotFoundException':
module.fail_json(msg='Error finding instance {0}, error: {1}'.format(instance_name, e))
# Wait for instance to exit transition state before state change
if wait:
while wait_max > time.time() and inst is not None and inst['state']['name'] in ('pending', 'stopping'):
try:
time.sleep(5)
inst = _find_instance_info(client, instance_name)
except botocore.exceptions.ClientError as e:
if e.response['ResponseMetadata']['HTTPStatusCode'] == "403":
module.fail_json(msg="Failed to restart instance {0}. Check that you have permissions to perform the operation.".format(instance_name),
exception=traceback.format_exc())
elif e.response['Error']['Code'] == "RequestExpired":
module.fail_json(msg="RequestExpired: Failed to restart instance {0}.".format(instance_name), exception=traceback.format_exc())
time.sleep(3)
# send reboot
if inst is not None:
try:
client.reboot_instance(instanceName=instance_name)
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] != 'NotFoundException':
module.fail_json(msg='Unable to reboot instance {0}, error: {1}'.format(instance_name, e))
changed = True
return (changed, inst)
def startstop_instance(module, client, instance_name, state):
"""
Starts or stops an existing instance
module: Ansible module object
client: authenticated lightsail connection object
instance_name: name of instance to start/stop
state: Target state ("running" or "stopped")
Returns a dictionary of instance information
about the instance started/stopped.
If the instance was not able to change state,
"changed" will be set to False.
"""
wait = module.params.get('wait')
wait_timeout = int(module.params.get('wait_timeout'))
wait_max = time.time() + wait_timeout
changed = False
inst = None
try:
inst = _find_instance_info(client, instance_name)
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] != 'NotFoundException':
module.fail_json(msg='Error finding instance {0}, error: {1}'.format(instance_name, e))
# Wait for instance to exit transition state before state change
if wait:
while wait_max > time.time() and inst is not None and inst['state']['name'] in ('pending', 'stopping'):
try:
time.sleep(5)
inst = _find_instance_info(client, instance_name)
except botocore.exceptions.ClientError as e:
if e.response['ResponseMetadata']['HTTPStatusCode'] == "403":
module.fail_json(msg="Failed to start/stop instance {0}. Check that you have permissions to perform the operation".format(instance_name),
exception=traceback.format_exc())
elif e.response['Error']['Code'] == "RequestExpired":
module.fail_json(msg="RequestExpired: Failed to start/stop instance {0}.".format(instance_name), exception=traceback.format_exc())
time.sleep(1)
# Try state change
if inst is not None and inst['state']['name'] != state:
try:
if state == 'running':
client.start_instance(instanceName=instance_name)
else:
client.stop_instance(instanceName=instance_name)
except botocore.exceptions.ClientError as e:
module.fail_json(msg='Unable to change state for instance {0}, error: {1}'.format(instance_name, e))
changed = True
# Grab current instance info
inst = _find_instance_info(client, instance_name)
return (changed, inst)
def core(module):
region, ec2_url, aws_connect_kwargs = get_aws_connection_info(module, boto3=True)
if not region:
module.fail_json(msg='region must be specified')
client = None
try:
client = boto3_conn(module, conn_type='client', resource='lightsail',
region=region, endpoint=ec2_url, **aws_connect_kwargs)
except (botocore.exceptions.ClientError, botocore.exceptions.ValidationError) as e:
module.fail_json(msg='Failed while connecting to the lightsail service: %s' % e, exception=traceback.format_exc())
changed = False
state = module.params['state']
name = module.params['name']
if state == 'absent':
changed, instance_dict = delete_instance(module, client, name)
elif state in ('running', 'stopped'):
changed, instance_dict = startstop_instance(module, client, name, state)
elif state == 'restarted':
changed, instance_dict = restart_instance(module, client, name)
elif state == 'present':
changed, instance_dict = create_instance(module, client, name)
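# NOTE: when state == 'absent' and the instance is already gone, delete_instance()
# returns (False, None), so camel_dict_to_snake_dict(None) raises the AttributeError
# shown in the issue above. A hedged guard sketch:
#     instance_dict = instance_dict if instance_dict else {}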
module.exit_json(changed=changed, instance=camel_dict_to_snake_dict(instance_dict))
def _find_instance_info(client, instance_name):
''' Exceptions raised here are handled where this function is called '''
inst = None
try:
inst = client.get_instance(instanceName=instance_name)
except botocore.exceptions.ClientError as e:
raise
return inst['instance']
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(dict(
name=dict(type='str', required=True),
state=dict(type='str', default='present', choices=['present', 'absent', 'stopped', 'running', 'restarted']),
zone=dict(type='str'),
blueprint_id=dict(type='str'),
bundle_id=dict(type='str'),
key_pair_name=dict(type='str'),
user_data=dict(type='str'),
wait=dict(type='bool', default=True),
wait_timeout=dict(default=300, type='int'),
))
module = AnsibleModule(argument_spec=argument_spec)
if not HAS_BOTO3:
module.fail_json(msg='Python module "boto3" is missing, please install it')
if not HAS_BOTOCORE:
module.fail_json(msg='Python module "botocore" is missing, please install it')
try:
core(module)
except (botocore.exceptions.ClientError, Exception) as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,590 |
setup_mysql8 integration test role fails when run with the test group
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When the `setup_mysql8` role is run as part of the `shippable/posix/group1/` test group, the service fails to start and the test fails.
Example failure output from [this run](https://app.shippable.com/github/ansible/ansible/runs/146941/68/tests):
```
{
"changed": false,
"msg": "Unable to start service mysqld: Job for mysqld.service failed because the control process exited with error code. See \"systemctl status mysqld.service\" and \"journalctl -xe\" for details.\n"
}
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`test/integration/targets/setup_mysql8`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.10
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run `ansible-test integration --docker centos7 shippable/posix/group1/`
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Tests pass
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Tests fail
<!--- Paste verbatim command output between quotes -->
```
{
"changed": false,
"msg": "Unable to start service mysqld: Job for mysqld.service failed because the control process exited with error code. See \"systemctl status mysqld.service\" and \"journalctl -xe\" for details.\n"
}
```
|
https://github.com/ansible/ansible/issues/63590
|
https://github.com/ansible/ansible/pull/63602
|
4cad7e479c922cd973015fd1cf4bd5e5890158de
|
d615d7ea1924eb3e63d4e5d77423d261b3673800
| 2019-10-16T20:28:29Z |
python
| 2019-10-17T06:26:09Z |
test/integration/targets/mysql_variables/aliases
|
destructive
shippable/posix/group1
skip/osx
skip/freebsd
skip/ubuntu
skip/fedora
skip/opensuse
skip/rhel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,784 |
vmware_guest_network.py uses product uuid
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The documentation states that the instance UUID is required for the uuid parameter, but the hardware product UUID is used instead.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest_network
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0rc1.post0
config file = None
configured module search path = ['/home/pernst/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Aug 7 2019, 17:28:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Configure virtual network cards
vmware_guest_network:
hostname: "{{ vsphere_host }}"
username: "{{ vsphere_user }}"
password: "{{ vsphere_pass }}"
validate_certs: "{{ vsphere_validate_certs | default(true) }}"
datacenter: "{{ vsphere_target_dc }}"
uuid: "{{ vm_facts.instance.instance_uuid }}"
register: vm_networks
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
FAILED! => {"changed": false, "msg": "Unable to find the specified virtual machine using <UUID>
```
P.S.: I suggest the same uuid handling as in vmware_guest with an additional use_instance_uuid parameter.
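A minimal sketch of the suggested handling (the `use_instance_uuid` flag is hypothetical here, mirroring the one in vmware_guest):
```python
argument_spec = {}  # stand-in for vmware_argument_spec()
argument_spec.update(
    uuid=dict(type='str'),
    use_instance_uuid=dict(type='bool', default=False),  # opt in to instanceUuid lookup
)
assert argument_spec['use_instance_uuid']['default'] is False
```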
|
https://github.com/ansible/ansible/issues/62784
|
https://github.com/ansible/ansible/pull/62818
|
d615d7ea1924eb3e63d4e5d77423d261b3673800
|
b6ea43efc3a9c426cea6b4c70e2b795b58f7bbbd
| 2019-09-24T12:41:40Z |
python
| 2019-10-17T06:53:13Z |
lib/ansible/modules/cloud/vmware/vmware_guest_network.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# Copyright: (c) 2019, Diane Wang <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: vmware_guest_network
short_description: Manage network adapters of specified virtual machine in given vCenter infrastructure
description:
- This module is used to add, reconfigure, or remove network adapters of a given virtual machine.
- All parameters and VMware object names are case sensitive.
version_added: '2.9'
author:
- Diane Wang (@Tomorrow9) <[email protected]>
notes:
- Tested on vSphere 6.0, 6.5 and 6.7
requirements:
- "python >= 2.6"
- PyVmomi
options:
name:
description:
- Name of the virtual machine.
- This is a required parameter, if parameter C(uuid) or C(moid) is not supplied.
type: str
uuid:
description:
- UUID of the instance to gather info if known, this is VMware's unique identifier.
- This is a required parameter, if parameter C(name) or C(moid) is not supplied.
type: str
moid:
description:
- Managed Object ID of the instance to manage if known, this is a unique identifier only within a single vCenter instance.
- This is required if C(name) or C(uuid) is not supplied.
type: str
folder:
description:
- Destination folder, absolute or relative path to find an existing guest.
- This is a required parameter, only if multiple VMs are found with same name.
- The folder should include the datacenter. ESXi server's datacenter is ha-datacenter.
- 'Examples:'
- ' folder: /ha-datacenter/vm'
- ' folder: ha-datacenter/vm'
- ' folder: /datacenter1/vm'
- ' folder: datacenter1/vm'
- ' folder: /datacenter1/vm/folder1'
- ' folder: datacenter1/vm/folder1'
- ' folder: /folder1/datacenter1/vm'
- ' folder: folder1/datacenter1/vm'
- ' folder: /folder1/datacenter1/vm/folder2'
type: str
cluster:
description:
- The name of cluster where the virtual machine will run.
- This is a required parameter, if C(esxi_hostname) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
type: str
esxi_hostname:
description:
- The ESXi hostname where the virtual machine will run.
- This is a required parameter, if C(cluster) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
type: str
datacenter:
default: ha-datacenter
description:
- The datacenter name to which virtual machine belongs to.
type: str
gather_network_info:
description:
- If set to C(True), return settings of all network adapters; other parameters are ignored.
- If set to C(False), will add, reconfigure or remove network adapters according to the parameters in C(networks).
type: bool
default: False
aliases: [ gather_network_facts ]
networks:
type: list
description:
- A list of network adapters.
- C(mac) or C(label) or C(device_type) is required to reconfigure or remove an existing network adapter.
- 'If there are multiple network adapters with the same C(device_type), you should set C(label) or C(mac) to match
one of them, or changes will be applied to all network adapters with the C(device_type) specified.'
- 'C(mac), C(label), C(device_type) is the order of precedence from greatest to least if all set.'
- 'Valid attributes are:'
- ' - C(mac) (string): MAC address of the existing network adapter to be reconfigured or removed.'
- ' - C(label) (string): Label of the existing network adapter to be reconfigured or removed, e.g., "Network adapter 1".'
- ' - C(device_type) (string): Valid virtual network device types are:
C(e1000), C(e1000e), C(pcnet32), C(vmxnet2), C(vmxnet3) (default), C(sriov).
Used to add new network adapter, reconfigure or remove the existing network adapter with this type.
If C(mac) and C(label) are not specified, or no network adapter is found by C(mac) or C(label), this parameter is used.'
- ' - C(name) (string): Name of the portgroup or distributed virtual portgroup for this interface.
When specifying distributed virtual portgroup make sure given C(esxi_hostname) or C(cluster) is associated with it.'
- ' - C(vlan) (integer): VLAN number for this interface.'
- ' - C(dvswitch_name) (string): Name of the distributed vSwitch.
This value is required if multiple distributed portgroups exists with the same name.'
- ' - C(state) (string): State of the network adapter.'
- ' If set to C(present), then will do reconfiguration for the specified network adapter.'
- ' If set to C(new), then will add the specified network adapter.'
- ' If set to C(absent), then will remove this network adapter.'
- ' - C(manual_mac) (string): Manual specified MAC address of the network adapter when creating, or reconfiguring.
If not specified when creating new network adapter, mac address will be generated automatically.
When reconfiguring the MAC address, the VM should be in the powered off state.'
- ' - C(connected) (bool): Indicates that virtual network adapter connects to the associated virtual machine.'
- ' - C(start_connected) (bool): Indicates that virtual network adapter starts with associated virtual machine powers on.'
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = '''
- name: Change network adapter settings of virtual machine
vmware_guest_network:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter_name }}"
validate_certs: no
name: test-vm
gather_network_info: false
networks:
- name: "VM Network"
state: new
manual_mac: "00:50:56:11:22:33"
- state: present
device_type: e1000e
manual_mac: "00:50:56:44:55:66"
- state: present
label: "Network adapter 3"
connected: false
- state: absent
mac: "00:50:56:44:55:77"
delegate_to: localhost
register: network_info
- name: Change network adapter settings of virtual machine using MoID
vmware_guest_network:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter_name }}"
validate_certs: no
moid: vm-42
gather_network_info: false
networks:
- state: absent
mac: "00:50:56:44:55:77"
delegate_to: localhost
'''
RETURN = """
network_data:
description: metadata about the virtual machine's network adapters after managing them
returned: always
type: dict
sample: {
"0": {
"label": "Network Adapter 1",
"name": "VM Network",
"device_type": "E1000E",
"mac_addr": "00:50:56:89:dc:05",
"unit_number": 7,
"wake_onlan": false,
"allow_guest_ctl": true,
"connected": true,
"start_connected": true,
},
}
"""
try:
from pyVmomi import vim
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.network import is_mac
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec, wait_for_task, get_all_objs, get_parent_datacenter
class PyVmomiHelper(PyVmomi):
def __init__(self, module):
super(PyVmomiHelper, self).__init__(module)
self.change_detected = False
self.config_spec = vim.vm.ConfigSpec()
self.config_spec.deviceChange = []
self.nic_device_type = dict(
pcnet32=vim.vm.device.VirtualPCNet32,
vmxnet2=vim.vm.device.VirtualVmxnet2,
vmxnet3=vim.vm.device.VirtualVmxnet3,
e1000=vim.vm.device.VirtualE1000,
e1000e=vim.vm.device.VirtualE1000e,
sriov=vim.vm.device.VirtualSriovEthernetCard,
)
def get_device_type(self, device_type=None):
""" Get network adapter device type """
if device_type and device_type in list(self.nic_device_type.keys()):
return self.nic_device_type[device_type]()
else:
self.module.fail_json(msg='Invalid network device_type %s' % device_type)
def get_network_device(self, vm=None, mac=None, device_type=None, device_label=None):
"""
Get network adapter
"""
nic_devices = []
nic_device = None
if vm is None:
if device_type:
return nic_devices
else:
return nic_device
for device in vm.config.hardware.device:
if mac:
if isinstance(device, vim.vm.device.VirtualEthernetCard):
if device.macAddress == mac:
nic_device = device
break
elif device_type:
if isinstance(device, self.nic_device_type[device_type]):
nic_devices.append(device)
elif device_label:
if isinstance(device, vim.vm.device.VirtualEthernetCard):
if device.deviceInfo.label == device_label:
nic_device = device
break
if device_type:
return nic_devices
else:
return nic_device
def get_network_device_by_mac(self, vm=None, mac=None):
""" Get network adapter with the specified mac address"""
return self.get_network_device(vm=vm, mac=mac)
def get_network_devices_by_type(self, vm=None, device_type=None):
""" Get network adapter list with the name type """
return self.get_network_device(vm=vm, device_type=device_type)
def get_network_device_by_label(self, vm=None, device_label=None):
""" Get network adapter with the specified label """
return self.get_network_device(vm=vm, device_label=device_label)
def create_network_adapter(self, device_info):
nic = vim.vm.device.VirtualDeviceSpec()
nic.device = self.get_device_type(device_type=device_info.get('device_type', 'vmxnet3'))
nic.device.deviceInfo = vim.Description()
nic.device.deviceInfo.summary = device_info['name']
nic.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
nic.device.backing.deviceName = device_info['name']
nic.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
nic.device.connectable.startConnected = device_info.get('start_connected', True)
nic.device.connectable.allowGuestControl = True
nic.device.connectable.connected = device_info.get('connected', True)
if 'manual_mac' in device_info:
nic.device.addressType = 'manual'
nic.device.macAddress = device_info['manual_mac']
else:
nic.device.addressType = 'generated'
return nic
def get_network_info(self, vm_obj):
network_info = dict()
if vm_obj is None:
return network_info
nic_index = 0
for nic in vm_obj.config.hardware.device:
nic_type = None
if isinstance(nic, vim.vm.device.VirtualPCNet32):
nic_type = 'PCNet32'
elif isinstance(nic, vim.vm.device.VirtualVmxnet2):
nic_type = 'VMXNET2'
elif isinstance(nic, vim.vm.device.VirtualVmxnet3):
nic_type = 'VMXNET3'
elif isinstance(nic, vim.vm.device.VirtualE1000):
nic_type = 'E1000'
elif isinstance(nic, vim.vm.device.VirtualE1000e):
nic_type = 'E1000E'
elif isinstance(nic, vim.vm.device.VirtualSriovEthernetCard):
nic_type = 'SriovEthernetCard'
if nic_type is not None:
network_info[nic_index] = dict(
device_type=nic_type,
label=nic.deviceInfo.label,
name=nic.deviceInfo.summary,
mac_addr=nic.macAddress,
unit_number=nic.unitNumber,
wake_onlan=nic.wakeOnLanEnabled,
allow_guest_ctl=nic.connectable.allowGuestControl,
connected=nic.connectable.connected,
start_connected=nic.connectable.startConnected,
)
nic_index += 1
return network_info
def sanitize_network_params(self):
network_list = []
valid_state = ['new', 'present', 'absent']
if len(self.params['networks']) != 0:
for network in self.params['networks']:
if 'state' not in network or network['state'].lower() not in valid_state:
self.module.fail_json(msg="Network adapter state not specified or invalid: '%s', valid values: "
"%s" % (network.get('state', ''), valid_state))
# add new network adapter but no name specified
if network['state'].lower() == 'new' and 'name' not in network and 'vlan' not in network:
self.module.fail_json(msg="Please specify at least network name or VLAN name for adding new network adapter.")
if network['state'].lower() == 'new' and 'mac' in network:
self.module.fail_json(msg="networks.mac is used for vNIC reconfigure, but networks.state is set to 'new'.")
if network['state'].lower() == 'present' and 'mac' not in network and 'label' not in network and 'device_type' not in network:
self.module.fail_json(msg="Should specify 'mac', 'label' or 'device_type' parameter to reconfigure network adapter")
if 'connected' in network:
if not isinstance(network['connected'], bool):
self.module.fail_json(msg="networks.connected parameter should be boolean.")
if network['state'].lower() == 'new' and not network['connected']:
network['start_connected'] = False
if 'start_connected' in network:
if not isinstance(network['start_connected'], bool):
self.module.fail_json(msg="networks.start_connected parameter should be boolean.")
if network['state'].lower() == 'new' and not network['start_connected']:
network['connected'] = False
# specified network does not exist
if 'name' in network and not self.network_exists_by_name(network['name']):
self.module.fail_json(msg="Network '%(name)s' does not exist." % network)
elif 'vlan' in network:
objects = get_all_objs(self.content, [vim.dvs.DistributedVirtualPortgroup])
dvps = [x for x in objects if to_text(get_parent_datacenter(x).name) == to_text(self.params['datacenter'])]
for dvp in dvps:
if hasattr(dvp.config.defaultPortConfig, 'vlan') and \
isinstance(dvp.config.defaultPortConfig.vlan.vlanId, int) and \
str(dvp.config.defaultPortConfig.vlan.vlanId) == str(network['vlan']):
network['name'] = dvp.config.name
break
if 'dvswitch_name' in network and \
dvp.config.distributedVirtualSwitch.name == network['dvswitch_name'] and \
dvp.config.name == network['vlan']:
network['name'] = dvp.config.name
break
if dvp.config.name == network['vlan']:
network['name'] = dvp.config.name
break
else:
self.module.fail_json(msg="VLAN '%(vlan)s' does not exist." % network)
if 'device_type' in network and network['device_type'] not in list(self.nic_device_type.keys()):
self.module.fail_json(msg="Device type specified '%s' is invalid. "
"Valid types %s " % (network['device_type'], list(self.nic_device_type.keys())))
if ('mac' in network and not is_mac(network['mac'])) or \
('manual_mac' in network and not is_mac(network['manual_mac'])):
self.module.fail_json(msg="Device MAC address '%s' or manual set MAC address %s is invalid. "
"Please provide correct MAC address." % (network['mac'], network['manual_mac']))
network_list.append(network)
return network_list
def get_network_config_spec(self, vm_obj, network_list):
# create network adapter config spec for adding, editing, removing
for network in network_list:
# add new network adapter
if network['state'].lower() == 'new':
nic_spec = self.create_network_adapter(network)
nic_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
self.change_detected = True
self.config_spec.deviceChange.append(nic_spec)
# reconfigure network adapter or remove network adapter
else:
nic_devices = []
if 'mac' in network:
nic = self.get_network_device_by_mac(vm_obj, mac=network['mac'])
if nic is not None:
nic_devices.append(nic)
if 'label' in network and len(nic_devices) == 0:
nic = self.get_network_device_by_label(vm_obj, device_label=network['label'])
if nic is not None:
nic_devices.append(nic)
if 'device_type' in network and len(nic_devices) == 0:
nic_devices = self.get_network_devices_by_type(vm_obj, device_type=network['device_type'])
if len(nic_devices) != 0:
for nic_device in nic_devices:
nic_spec = vim.vm.device.VirtualDeviceSpec()
if network['state'].lower() == 'present':
nic_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
nic_spec.device = nic_device
if 'start_connected' in network and nic_device.connectable.startConnected != network['start_connected']:
nic_device.connectable.startConnected = network['start_connected']
self.change_detected = True
if 'connected' in network and nic_device.connectable.connected != network['connected']:
nic_device.connectable.connected = network['connected']
self.change_detected = True
if 'name' in network and nic_device.deviceInfo.summary != network['name']:
nic_device.deviceInfo.summary = network['name']
self.change_detected = True
if 'manual_mac' in network and nic_device.macAddress != network['manual_mac']:
if vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
self.module.fail_json(msg='Expected power state is poweredOff to reconfigure MAC address')
nic_device.addressType = 'manual'
nic_device.macAddress = network['manual_mac']
self.change_detected = True
if self.change_detected:
self.config_spec.deviceChange.append(nic_spec)
elif network['state'].lower() == 'absent':
nic_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.remove
nic_spec.device = nic_device
self.change_detected = True
self.config_spec.deviceChange.append(nic_spec)
else:
self.module.fail_json(msg='Unable to find the specified network adapter: %s' % network)
def reconfigure_vm_network(self, vm_obj):
network_list = self.sanitize_network_params()
# gather network adapter info only
if (self.params['gather_network_info'] is not None and self.params['gather_network_info']) or len(network_list) == 0:
results = {'changed': False, 'failed': False, 'network_data': self.get_network_info(vm_obj)}
# do reconfigure then gather info
else:
self.get_network_config_spec(vm_obj, network_list)
try:
task = vm_obj.ReconfigVM_Task(spec=self.config_spec)
wait_for_task(task)
except vim.fault.InvalidDeviceSpec as e:
self.module.fail_json(msg="Failed to configure network adapter on given virtual machine due to invalid"
" device spec : %s" % to_native(e.msg),
details="Please check ESXi server logs for more details.")
except vim.fault.RestrictedVersion as e:
self.module.fail_json(msg="Failed to reconfigure virtual machine due to"
" product versioning restrictions: %s" % to_native(e.msg))
if task.info.state == 'error':
results = {'changed': self.change_detected, 'failed': True, 'msg': task.info.error.msg}
else:
network_info = self.get_network_info(vm_obj)
results = {'changed': self.change_detected, 'failed': False, 'network_data': network_info}
return results
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
name=dict(type='str'),
uuid=dict(type='str'),
moid=dict(type='str'),
folder=dict(type='str'),
datacenter=dict(type='str', default='ha-datacenter'),
esxi_hostname=dict(type='str'),
cluster=dict(type='str'),
gather_network_info=dict(type='bool', default=False, aliases=['gather_network_facts']),
networks=dict(type='list', default=[])
)
module = AnsibleModule(
argument_spec=argument_spec,
required_one_of=[
['name', 'uuid', 'moid']
]
)
pyv = PyVmomiHelper(module)
vm = pyv.get_vm()
if not vm:
vm_id = (module.params.get('uuid') or module.params.get('name') or module.params.get('moid'))
module.fail_json(msg='Unable to find the specified virtual machine using %s' % vm_id)
result = pyv.reconfigure_vm_network(vm)
if result['failed']:
module.fail_json(**result)
else:
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,784 |
vmware_guest_network.py uses product uuid
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The documentation states that the instance UUID is required for the `uuid` parameter, but the HW product UUID is used instead.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest_network
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0rc1.post0
config file = None
configured module search path = ['/home/pernst/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Aug 7 2019, 17:28:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Configure virtual network cards
vmware_guest_network:
hostname: "{{ vsphere_host }}"
username: "{{ vsphere_user }}"
password: "{{ vsphere_pass }}"
validate_certs: "{{ vsphere_validate_certs | default(true) }}"
datacenter: "{{ vsphere_target_dc }}"
uuid: "{{ vm_facts.instance.instance_uuid }}"
register: vm_networks
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
FAILED! => {"changed": false, "msg": "Unable to find the specified virtual machine using <UUID>
```
P.S.: I suggest the same uuid handling as in vmware_guest with an additional use_instance_uuid parameter.
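A minimal sketch of what that could look like with pyVmomi (the `use_instance_uuid` flag and helper name here are hypothetical, not part of the current module; `content` is an authenticated ServiceInstance content object):
```python
def find_vm_by_uuid(content, uuid, use_instance_uuid=False):
    # SearchIndex.FindAllByUuid matches the BIOS (product) UUID by default;
    # instanceUuid=True switches the lookup to the vSphere instance UUID.
    vms = content.searchIndex.FindAllByUuid(uuid=uuid, vmSearch=True,
                                            instanceUuid=use_instance_uuid)
    return vms[0] if vms else None
```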
|
https://github.com/ansible/ansible/issues/62784
|
https://github.com/ansible/ansible/pull/62818
|
d615d7ea1924eb3e63d4e5d77423d261b3673800
|
b6ea43efc3a9c426cea6b4c70e2b795b58f7bbbd
| 2019-09-24T12:41:40Z |
python
| 2019-10-17T06:53:13Z |
test/integration/targets/vmware_guest_network/tasks/main.yml
|
# Test code for the vmware_guest_network module
# Copyright: (c) 2019, Diane Wang (Tomorrow9) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
- when: vcsim is not defined
block:
- import_role:
name: prepare_vmware_tests
vars:
setup_attach_host: true
setup_datastore: true
- name: Create VMs
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ dc1 }}"
validate_certs: no
folder: '/DC0/vm/F0'
name: test_vm1
state: poweredon
guest_id: centos7_64Guest
disk:
- size_gb: 1
type: thin
datastore: '{{ ds2 }}'
hardware:
version: latest
memory_mb: 1024
num_cpus: 1
scsi: paravirtual
cdrom:
type: iso
iso_path: "[{{ ds1 }}] fedora.iso"
networks:
- name: VM Network
- vmware_guest_tools_wait:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
name: test_vm1
- name: gather network adapter facts of the virtual machine
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
gather_network_info: true
register: netadapter_info
- debug: var=netadapter_info
- name: get number of existing network adapters
set_fact:
netadapter_num: "{{ netadapter_info.network_data | length }}"
- name: add new network adapters to virtual machine
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
networks:
- name: "VM Network"
state: new
device_type: e1000e
manual_mac: "aa:50:56:58:59:60"
- name: "VM Network"
state: new
device_type: vmxnet3
manual_mac: "aa:50:56:58:59:61"
register: add_netadapter
- debug: var=add_netadapter
- name: assert the new network adapters were added to VM
assert:
that:
- "add_netadapter.changed == true"
- "{{ add_netadapter.network_data | length | int }} == {{ netadapter_num | int + 2 }}"
- name: delete one specified network adapter
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
networks:
- state: absent
mac: "aa:50:56:58:59:60"
register: del_netadapter
- debug: var=del_netadapter
- name: assert the network adapter was removed
assert:
that:
- "del_netadapter.changed == true"
- "{{ del_netadapter.network_data | length | int }} == {{ netadapter_num | int + 1 }}"
- name: disconnect one specified network adapter
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
networks:
- state: present
mac: "aa:50:56:58:59:61"
connected: false
register: disc_netadapter
- debug: var=disc_netadapter
- name: assert the network adapter was disconnected
assert:
that:
- "disc_netadapter.changed == true"
- "{{ disc_netadapter.network_data[netadapter_num]['connected'] }} == false"
- name: Check if network does not exist
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
networks:
- name: non-existing-nw
manual_mac: "aa:50:56:11:22:33"
state: new
register: no_nw_details
ignore_errors: yes
- debug: var=no_nw_details
- name: Check if network does not exist
assert:
that:
- not no_nw_details.changed
- no_nw_details.failed
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,253 |
Allow to select openssl_privatekey format
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Allow specifying a new "format" parameter in "openssl_privatekey". Today the output format is decided by a very simple heuristic, so further commands are needed to obtain a specific format.
I need, for example, to generate an RSA private key, but in PKCS8 format. The current heuristic uses the OpenSSH format for RSA keys and only uses PKCS8 for Ed25519 and similar. This forces me to use a "command" task with "openssl" to convert the generated key, which leads to idempotency problems and unnecessary complexity.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
openssl_privatekey
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
The current heuristic should be kept unless overridden by the user.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
format: pkcs8
```
The allowed values are `pkcs1`, `raw`, and `pkcs8`.
<!--- HINT: You can also paste gist.github.com links for larger files -->
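For reference, a minimal sketch (assuming the `cryptography` backend and an already-generated `private_key` object; the helper name is hypothetical) of how the three values could map onto that library's serialization API. Note that `raw` only applies to key types such as Ed25519/X25519 and requires an unencrypted key:
```python
from cryptography.hazmat.primitives import serialization

def dump_private_key(private_key, fmt):
    # 'pkcs1' corresponds to the traditional OpenSSL PEM encoding,
    # 'pkcs8' to the PKCS#8 PEM encoding, and 'raw' to the raw key bytes.
    encodings = {
        'pkcs1': (serialization.Encoding.PEM, serialization.PrivateFormat.TraditionalOpenSSL),
        'pkcs8': (serialization.Encoding.PEM, serialization.PrivateFormat.PKCS8),
        'raw': (serialization.Encoding.Raw, serialization.PrivateFormat.Raw),
    }
    encoding, key_format = encodings[fmt]
    return private_key.private_bytes(
        encoding=encoding,
        format=key_format,
        encryption_algorithm=serialization.NoEncryption(),
    )
```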
|
https://github.com/ansible/ansible/issues/59253
|
https://github.com/ansible/ansible/pull/60388
|
e3c7e356568c3254a14440d7b832cc2cf32a2b14
|
d00d0c81b379f4232421ec01a104b3ce1c6e2820
| 2019-07-18T15:29:13Z |
python
| 2019-10-17T08:40:13Z |
changelogs/fragments/60388-openssl_privatekey-format.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,253 |
Allow to select openssl_privatekey format
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Allow specifying a new "format" parameter in "openssl_privatekey". Today the output format is decided by a very simple heuristic, so further commands are needed to obtain a specific format.
I need, for example, to generate an RSA private key, but in PKCS8 format. The current heuristic uses the OpenSSH format for RSA keys and only uses PKCS8 for Ed25519 and similar. This forces me to use a "command" task with "openssl" to convert the generated key, which leads to idempotency problems and unnecessary complexity.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
openssl_privatekey
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
The current heuristic should be kept unless overridden by the user.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
format: pkcs8
```
The allowed values are `pkcs1`, `raw`, and `pkcs8`.
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/59253
|
https://github.com/ansible/ansible/pull/60388
|
e3c7e356568c3254a14440d7b832cc2cf32a2b14
|
d00d0c81b379f4232421ec01a104b3ce1c6e2820
| 2019-07-18T15:29:13Z |
python
| 2019-10-17T08:40:13Z |
lib/ansible/module_utils/crypto.py
|
# -*- coding: utf-8 -*-
#
# (c) 2016, Yanis Guenane <[email protected]>
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
# ----------------------------------------------------------------------
# A clearly marked portion of this file is licensed under the BSD license
# Copyright (c) 2015, 2016 Paul Kehrer (@reaperhulk)
# Copyright (c) 2017 Fraser Tweedale (@frasertweedale)
# For more details, search for the function _obj2txt().
# ---------------------------------------------------------------------
# A clearly marked portion of this file is extracted from a project that
# is licensed under the Apache License 2.0
# Copyright (c) the OpenSSL contributors
# For more details, search for the function _OID_MAP.
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import sys
from distutils.version import LooseVersion
try:
import OpenSSL
from OpenSSL import crypto
except ImportError:
# An error will be raised in the calling class to let the end
# user know that OpenSSL couldn't be found.
pass
try:
import cryptography
from cryptography import x509
from cryptography.hazmat.backends import default_backend as cryptography_backend
from cryptography.hazmat.primitives.serialization import load_pem_private_key
from cryptography.hazmat.primitives import hashes
import ipaddress
# Older versions of cryptography (< 2.1) do not have __hash__ functions for
# general name objects (DNSName, IPAddress, ...), while providing overloaded
# equality and string representation operations. This makes it impossible to
# use them in hash-based data structures such as set or dict. Since we are
# actually doing that in openssl_certificate, and potentially in other code,
# we need to monkey-patch __hash__ for these classes to make sure our code
# works fine.
if LooseVersion(cryptography.__version__) < LooseVersion('2.1'):
# A very simple hash function which relies on the representation
# of an object to be implemented. This is the case since at least
# cryptography 1.0, see
# https://github.com/pyca/cryptography/commit/7a9abce4bff36c05d26d8d2680303a6f64a0e84f
def simple_hash(self):
return hash(repr(self))
# The hash functions for the following types were added for cryptography 2.1:
# https://github.com/pyca/cryptography/commit/fbfc36da2a4769045f2373b004ddf0aff906cf38
x509.DNSName.__hash__ = simple_hash
x509.DirectoryName.__hash__ = simple_hash
x509.GeneralName.__hash__ = simple_hash
x509.IPAddress.__hash__ = simple_hash
x509.OtherName.__hash__ = simple_hash
x509.RegisteredID.__hash__ = simple_hash
if LooseVersion(cryptography.__version__) < LooseVersion('1.2'):
# The hash functions for the following types were added for cryptography 1.2:
# https://github.com/pyca/cryptography/commit/b642deed88a8696e5f01ce6855ccf89985fc35d0
# https://github.com/pyca/cryptography/commit/d1b5681f6db2bde7a14625538bd7907b08dfb486
x509.RFC822Name.__hash__ = simple_hash
x509.UniformResourceIdentifier.__hash__ = simple_hash
except ImportError:
# Error handled in the calling module.
pass
import abc
import base64
import binascii
import datetime
import errno
import hashlib
import os
import re
import tempfile
from ansible.module_utils import six
from ansible.module_utils._text import to_bytes, to_text
class OpenSSLObjectError(Exception):
pass
class OpenSSLBadPassphraseError(OpenSSLObjectError):
pass
def get_fingerprint_of_bytes(source):
"""Generate the fingerprint of the given bytes."""
fingerprint = {}
try:
algorithms = hashlib.algorithms
except AttributeError:
try:
algorithms = hashlib.algorithms_guaranteed
except AttributeError:
return None
for algo in algorithms:
f = getattr(hashlib, algo)
h = f(source)
try:
# Certain hash functions have a hexdigest() which expects a length parameter
pubkey_digest = h.hexdigest()
except TypeError:
pubkey_digest = h.hexdigest(32)
fingerprint[algo] = ':'.join(pubkey_digest[i:i + 2] for i in range(0, len(pubkey_digest), 2))
return fingerprint
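# Illustrative usage (added commentary, not part of the original module):
#   get_fingerprint_of_bytes(b'data')
# returns a dict keyed by hashlib algorithm name ('md5', 'sha256', ...), with
# each digest rendered as colon-separated hex byte pairs.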
def get_fingerprint(path, passphrase=None):
"""Generate the fingerprint of the public key. """
privatekey = load_privatekey(path, passphrase, check_passphrase=False)
try:
publickey = crypto.dump_publickey(crypto.FILETYPE_ASN1, privatekey)
except AttributeError:
# If PyOpenSSL < 16.0 crypto.dump_publickey() will fail.
try:
bio = crypto._new_mem_buf()
rc = crypto._lib.i2d_PUBKEY_bio(bio, privatekey._pkey)
if rc != 1:
crypto._raise_current_error()
publickey = crypto._bio_to_string(bio)
except AttributeError:
# By doing this we prevent the code from raising an error
# yet we return no value in the fingerprint hash.
return None
return get_fingerprint_of_bytes(publickey)
def load_privatekey(path, passphrase=None, check_passphrase=True, content=None, backend='pyopenssl'):
"""Load the specified OpenSSL private key.
The content can also be specified via content; in that case,
this function will not load the key from disk.
"""
try:
if content is None:
with open(path, 'rb') as b_priv_key_fh:
priv_key_detail = b_priv_key_fh.read()
else:
priv_key_detail = content
if backend == 'pyopenssl':
# First try: try to load with real passphrase (resp. empty string)
# Will work if this is the correct passphrase, or the key is not
# password-protected.
try:
result = crypto.load_privatekey(crypto.FILETYPE_PEM,
priv_key_detail,
to_bytes(passphrase or ''))
except crypto.Error as e:
if len(e.args) > 0 and len(e.args[0]) > 0:
if e.args[0][0][2] in ('bad decrypt', 'bad password read'):
# This happens in case we have the wrong passphrase.
if passphrase is not None:
raise OpenSSLBadPassphraseError('Wrong passphrase provided for private key!')
else:
raise OpenSSLBadPassphraseError('No passphrase provided, but private key is password-protected!')
raise OpenSSLObjectError('Error while deserializing key: {0}'.format(e))
if check_passphrase:
# Next we want to make sure that the key is actually protected by
# a passphrase (in case we did try the empty string before, make
# sure that the key is not protected by the empty string)
try:
crypto.load_privatekey(crypto.FILETYPE_PEM,
priv_key_detail,
to_bytes('y' if passphrase == 'x' else 'x'))
if passphrase is not None:
# Since we can load the key without an exception, the
# key isn't password-protected
raise OpenSSLBadPassphraseError('Passphrase provided, but private key is not password-protected!')
except crypto.Error as e:
if passphrase is None and len(e.args) > 0 and len(e.args[0]) > 0:
if e.args[0][0][2] in ('bad decrypt', 'bad password read'):
# The key is obviously protected by the empty string.
# Don't do this at home (if it's possible at all)...
raise OpenSSLBadPassphraseError('No passphrase provided, but private key is password-protected!')
elif backend == 'cryptography':
try:
result = load_pem_private_key(priv_key_detail,
None if passphrase is None else to_bytes(passphrase),
cryptography_backend())
except TypeError as dummy:
raise OpenSSLBadPassphraseError('Wrong or empty passphrase provided for private key')
except ValueError as dummy:
raise OpenSSLBadPassphraseError('Wrong passphrase provided for private key')
return result
except (IOError, OSError) as exc:
raise OpenSSLObjectError(exc)
def load_certificate(path, backend='pyopenssl'):
"""Load the specified certificate."""
try:
with open(path, 'rb') as cert_fh:
cert_content = cert_fh.read()
if backend == 'pyopenssl':
return crypto.load_certificate(crypto.FILETYPE_PEM, cert_content)
elif backend == 'cryptography':
return x509.load_pem_x509_certificate(cert_content, cryptography_backend())
except (IOError, OSError) as exc:
raise OpenSSLObjectError(exc)
def load_certificate_request(path, backend='pyopenssl'):
"""Load the specified certificate signing request."""
try:
with open(path, 'rb') as csr_fh:
csr_content = csr_fh.read()
except (IOError, OSError) as exc:
raise OpenSSLObjectError(exc)
if backend == 'pyopenssl':
return crypto.load_certificate_request(crypto.FILETYPE_PEM, csr_content)
elif backend == 'cryptography':
return x509.load_pem_x509_csr(csr_content, cryptography_backend())
def parse_name_field(input_dict):
"""Take a dict with key: value or key: list_of_values mappings and return a list of tuples"""
result = []
for key in input_dict:
if isinstance(input_dict[key], list):
for entry in input_dict[key]:
result.append((key, entry))
else:
result.append((key, input_dict[key]))
return result
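# Illustrative example (added commentary, not part of the original module):
#   parse_name_field({'C': 'US', 'OU': ['x', 'y']})
# returns [('C', 'US'), ('OU', 'x'), ('OU', 'y')]: list values are expanded
# into one tuple per entry, scalar values into a single tuple.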
def convert_relative_to_datetime(relative_time_string):
"""Get a datetime.datetime or None from a string in the time format described in sshd_config(5)"""
parsed_result = re.match(
r"^(?P<prefix>[+-])((?P<weeks>\d+)[wW])?((?P<days>\d+)[dD])?((?P<hours>\d+)[hH])?((?P<minutes>\d+)[mM])?((?P<seconds>\d+)[sS]?)?$",
relative_time_string)
if parsed_result is None or len(relative_time_string) == 1:
# not matched or only a single "+" or "-"
return None
offset = datetime.timedelta(0)
if parsed_result.group("weeks") is not None:
offset += datetime.timedelta(weeks=int(parsed_result.group("weeks")))
if parsed_result.group("days") is not None:
offset += datetime.timedelta(days=int(parsed_result.group("days")))
if parsed_result.group("hours") is not None:
offset += datetime.timedelta(hours=int(parsed_result.group("hours")))
if parsed_result.group("minutes") is not None:
offset += datetime.timedelta(
minutes=int(parsed_result.group("minutes")))
if parsed_result.group("seconds") is not None:
offset += datetime.timedelta(
seconds=int(parsed_result.group("seconds")))
if parsed_result.group("prefix") == "+":
return datetime.datetime.utcnow() + offset
else:
return datetime.datetime.utcnow() - offset
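# Illustrative examples (added commentary, not part of the original module):
# "+1w2d3h" yields datetime.datetime.utcnow() plus
# datetime.timedelta(weeks=1, days=2, hours=3), "-30s" subtracts thirty
# seconds, and a bare "+"/"-" (or any string not matching the pattern)
# returns None.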
def select_message_digest(digest_string):
digest = None
if digest_string == 'sha256':
digest = hashes.SHA256()
elif digest_string == 'sha384':
digest = hashes.SHA384()
elif digest_string == 'sha512':
digest = hashes.SHA512()
elif digest_string == 'sha1':
digest = hashes.SHA1()
elif digest_string == 'md5':
digest = hashes.MD5()
return digest
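# Illustrative usage (added commentary, not part of the original module):
#   select_message_digest('sha384') returns a hashes.SHA384() instance;
# any unrecognized digest name falls through and returns None.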
def write_file(module, content, default_mode=None, path=None):
'''
Writes content into destination file as securely as possible.
Uses file arguments from module.
'''
# Find out parameters for file
file_args = module.load_file_common_arguments(module.params)
if file_args['mode'] is None:
file_args['mode'] = default_mode
# If the path was set to override module path
if path is not None:
file_args['path'] = path
# Create tempfile name
tmp_fd, tmp_name = tempfile.mkstemp(prefix=b'.ansible_tmp')
try:
os.close(tmp_fd)
except Exception as dummy:
pass
module.add_cleanup_file(tmp_name) # if we fail, let Ansible try to remove the file
try:
try:
# Create tempfile
file = os.open(tmp_name, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
os.write(file, content)
os.close(file)
except Exception as e:
try:
os.remove(tmp_name)
except Exception as dummy:
pass
module.fail_json(msg='Error while writing result into temporary file: {0}'.format(e))
# Update destination to wanted permissions
if os.path.exists(file_args['path']):
module.set_fs_attributes_if_different(file_args, False)
# Move tempfile to final destination
module.atomic_move(tmp_name, file_args['path'])
# Try to update permissions again
module.set_fs_attributes_if_different(file_args, False)
except Exception as e:
try:
os.remove(tmp_name)
except Exception as dummy:
pass
module.fail_json(msg='Error while writing result: {0}'.format(e))
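# Note (added commentary, not part of the original module): the sequence above
# (write to a 0600 tempfile, adjust attributes, then module.atomic_move) keeps
# the destination from ever being observed partially written or with looser
# permissions than requested.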
@six.add_metaclass(abc.ABCMeta)
class OpenSSLObject(object):
def __init__(self, path, state, force, check_mode):
self.path = path
self.state = state
self.force = force
self.name = os.path.basename(path)
self.changed = False
self.check_mode = check_mode
def check(self, module, perms_required=True):
"""Ensure the resource is in its desired state."""
def _check_state():
return os.path.exists(self.path)
def _check_perms(module):
file_args = module.load_file_common_arguments(module.params)
return not module.set_fs_attributes_if_different(file_args, False)
if not perms_required:
return _check_state()
return _check_state() and _check_perms(module)
@abc.abstractmethod
def dump(self):
"""Serialize the object into a dictionary."""
pass
@abc.abstractmethod
def generate(self):
"""Generate the resource."""
pass
def remove(self, module):
"""Remove the resource from the filesystem."""
try:
os.remove(self.path)
self.changed = True
except OSError as exc:
if exc.errno != errno.ENOENT:
raise OpenSSLObjectError(exc)
else:
pass
# #####################################################################################
# #####################################################################################
# This has been extracted from the OpenSSL project's objects.txt:
# https://github.com/openssl/openssl/blob/9537fe5757bb07761fa275d779bbd40bcf5530e4/crypto/objects/objects.txt
# Extracted with https://gist.github.com/felixfontein/376748017ad65ead093d56a45a5bf376
#
# In case the following data structure has any copyrightable content, note that it is licensed as follows:
# Copyright (c) the OpenSSL contributors
# Licensed under the Apache License 2.0
# https://github.com/openssl/openssl/blob/master/LICENSE
_OID_MAP = {
'0': ('itu-t', 'ITU-T', 'ccitt'),
'0.3.4401.5': ('ntt-ds', ),
'0.3.4401.5.3.1.9': ('camellia', ),
'0.3.4401.5.3.1.9.1': ('camellia-128-ecb', 'CAMELLIA-128-ECB'),
'0.3.4401.5.3.1.9.3': ('camellia-128-ofb', 'CAMELLIA-128-OFB'),
'0.3.4401.5.3.1.9.4': ('camellia-128-cfb', 'CAMELLIA-128-CFB'),
'0.3.4401.5.3.1.9.6': ('camellia-128-gcm', 'CAMELLIA-128-GCM'),
'0.3.4401.5.3.1.9.7': ('camellia-128-ccm', 'CAMELLIA-128-CCM'),
'0.3.4401.5.3.1.9.9': ('camellia-128-ctr', 'CAMELLIA-128-CTR'),
'0.3.4401.5.3.1.9.10': ('camellia-128-cmac', 'CAMELLIA-128-CMAC'),
'0.3.4401.5.3.1.9.21': ('camellia-192-ecb', 'CAMELLIA-192-ECB'),
'0.3.4401.5.3.1.9.23': ('camellia-192-ofb', 'CAMELLIA-192-OFB'),
'0.3.4401.5.3.1.9.24': ('camellia-192-cfb', 'CAMELLIA-192-CFB'),
'0.3.4401.5.3.1.9.26': ('camellia-192-gcm', 'CAMELLIA-192-GCM'),
'0.3.4401.5.3.1.9.27': ('camellia-192-ccm', 'CAMELLIA-192-CCM'),
'0.3.4401.5.3.1.9.29': ('camellia-192-ctr', 'CAMELLIA-192-CTR'),
'0.3.4401.5.3.1.9.30': ('camellia-192-cmac', 'CAMELLIA-192-CMAC'),
'0.3.4401.5.3.1.9.41': ('camellia-256-ecb', 'CAMELLIA-256-ECB'),
'0.3.4401.5.3.1.9.43': ('camellia-256-ofb', 'CAMELLIA-256-OFB'),
'0.3.4401.5.3.1.9.44': ('camellia-256-cfb', 'CAMELLIA-256-CFB'),
'0.3.4401.5.3.1.9.46': ('camellia-256-gcm', 'CAMELLIA-256-GCM'),
'0.3.4401.5.3.1.9.47': ('camellia-256-ccm', 'CAMELLIA-256-CCM'),
'0.3.4401.5.3.1.9.49': ('camellia-256-ctr', 'CAMELLIA-256-CTR'),
'0.3.4401.5.3.1.9.50': ('camellia-256-cmac', 'CAMELLIA-256-CMAC'),
'0.9': ('data', ),
'0.9.2342': ('pss', ),
'0.9.2342.19200300': ('ucl', ),
'0.9.2342.19200300.100': ('pilot', ),
'0.9.2342.19200300.100.1': ('pilotAttributeType', ),
'0.9.2342.19200300.100.1.1': ('userId', 'UID'),
'0.9.2342.19200300.100.1.2': ('textEncodedORAddress', ),
'0.9.2342.19200300.100.1.3': ('rfc822Mailbox', 'mail'),
'0.9.2342.19200300.100.1.4': ('info', ),
'0.9.2342.19200300.100.1.5': ('favouriteDrink', ),
'0.9.2342.19200300.100.1.6': ('roomNumber', ),
'0.9.2342.19200300.100.1.7': ('photo', ),
'0.9.2342.19200300.100.1.8': ('userClass', ),
'0.9.2342.19200300.100.1.9': ('host', ),
'0.9.2342.19200300.100.1.10': ('manager', ),
'0.9.2342.19200300.100.1.11': ('documentIdentifier', ),
'0.9.2342.19200300.100.1.12': ('documentTitle', ),
'0.9.2342.19200300.100.1.13': ('documentVersion', ),
'0.9.2342.19200300.100.1.14': ('documentAuthor', ),
'0.9.2342.19200300.100.1.15': ('documentLocation', ),
'0.9.2342.19200300.100.1.20': ('homeTelephoneNumber', ),
'0.9.2342.19200300.100.1.21': ('secretary', ),
'0.9.2342.19200300.100.1.22': ('otherMailbox', ),
'0.9.2342.19200300.100.1.23': ('lastModifiedTime', ),
'0.9.2342.19200300.100.1.24': ('lastModifiedBy', ),
'0.9.2342.19200300.100.1.25': ('domainComponent', 'DC'),
'0.9.2342.19200300.100.1.26': ('aRecord', ),
'0.9.2342.19200300.100.1.27': ('pilotAttributeType27', ),
'0.9.2342.19200300.100.1.28': ('mXRecord', ),
'0.9.2342.19200300.100.1.29': ('nSRecord', ),
'0.9.2342.19200300.100.1.30': ('sOARecord', ),
'0.9.2342.19200300.100.1.31': ('cNAMERecord', ),
'0.9.2342.19200300.100.1.37': ('associatedDomain', ),
'0.9.2342.19200300.100.1.38': ('associatedName', ),
'0.9.2342.19200300.100.1.39': ('homePostalAddress', ),
'0.9.2342.19200300.100.1.40': ('personalTitle', ),
'0.9.2342.19200300.100.1.41': ('mobileTelephoneNumber', ),
'0.9.2342.19200300.100.1.42': ('pagerTelephoneNumber', ),
'0.9.2342.19200300.100.1.43': ('friendlyCountryName', ),
'0.9.2342.19200300.100.1.44': ('uniqueIdentifier', 'uid'),
'0.9.2342.19200300.100.1.45': ('organizationalStatus', ),
'0.9.2342.19200300.100.1.46': ('janetMailbox', ),
'0.9.2342.19200300.100.1.47': ('mailPreferenceOption', ),
'0.9.2342.19200300.100.1.48': ('buildingName', ),
'0.9.2342.19200300.100.1.49': ('dSAQuality', ),
'0.9.2342.19200300.100.1.50': ('singleLevelQuality', ),
'0.9.2342.19200300.100.1.51': ('subtreeMinimumQuality', ),
'0.9.2342.19200300.100.1.52': ('subtreeMaximumQuality', ),
'0.9.2342.19200300.100.1.53': ('personalSignature', ),
'0.9.2342.19200300.100.1.54': ('dITRedirect', ),
'0.9.2342.19200300.100.1.55': ('audio', ),
'0.9.2342.19200300.100.1.56': ('documentPublisher', ),
'0.9.2342.19200300.100.3': ('pilotAttributeSyntax', ),
'0.9.2342.19200300.100.3.4': ('iA5StringSyntax', ),
'0.9.2342.19200300.100.3.5': ('caseIgnoreIA5StringSyntax', ),
'0.9.2342.19200300.100.4': ('pilotObjectClass', ),
'0.9.2342.19200300.100.4.3': ('pilotObject', ),
'0.9.2342.19200300.100.4.4': ('pilotPerson', ),
'0.9.2342.19200300.100.4.5': ('account', ),
'0.9.2342.19200300.100.4.6': ('document', ),
'0.9.2342.19200300.100.4.7': ('room', ),
'0.9.2342.19200300.100.4.9': ('documentSeries', ),
'0.9.2342.19200300.100.4.13': ('Domain', 'domain'),
'0.9.2342.19200300.100.4.14': ('rFC822localPart', ),
'0.9.2342.19200300.100.4.15': ('dNSDomain', ),
'0.9.2342.19200300.100.4.17': ('domainRelatedObject', ),
'0.9.2342.19200300.100.4.18': ('friendlyCountry', ),
'0.9.2342.19200300.100.4.19': ('simpleSecurityObject', ),
'0.9.2342.19200300.100.4.20': ('pilotOrganization', ),
'0.9.2342.19200300.100.4.21': ('pilotDSA', ),
'0.9.2342.19200300.100.4.22': ('qualityLabelledData', ),
'0.9.2342.19200300.100.10': ('pilotGroups', ),
'1': ('iso', 'ISO'),
'1.0.9797.3.4': ('gmac', 'GMAC'),
'1.0.10118.3.0.55': ('whirlpool', ),
'1.2': ('ISO Member Body', 'member-body'),
'1.2.156': ('ISO CN Member Body', 'ISO-CN'),
'1.2.156.10197': ('oscca', ),
'1.2.156.10197.1': ('sm-scheme', ),
'1.2.156.10197.1.104.1': ('sm4-ecb', 'SM4-ECB'),
'1.2.156.10197.1.104.2': ('sm4-cbc', 'SM4-CBC'),
'1.2.156.10197.1.104.3': ('sm4-ofb', 'SM4-OFB'),
'1.2.156.10197.1.104.4': ('sm4-cfb', 'SM4-CFB'),
'1.2.156.10197.1.104.5': ('sm4-cfb1', 'SM4-CFB1'),
'1.2.156.10197.1.104.6': ('sm4-cfb8', 'SM4-CFB8'),
'1.2.156.10197.1.104.7': ('sm4-ctr', 'SM4-CTR'),
'1.2.156.10197.1.301': ('sm2', 'SM2'),
'1.2.156.10197.1.401': ('sm3', 'SM3'),
'1.2.156.10197.1.501': ('SM2-with-SM3', 'SM2-SM3'),
'1.2.156.10197.1.504': ('sm3WithRSAEncryption', 'RSA-SM3'),
'1.2.392.200011.61.1.1.1.2': ('camellia-128-cbc', 'CAMELLIA-128-CBC'),
'1.2.392.200011.61.1.1.1.3': ('camellia-192-cbc', 'CAMELLIA-192-CBC'),
'1.2.392.200011.61.1.1.1.4': ('camellia-256-cbc', 'CAMELLIA-256-CBC'),
'1.2.392.200011.61.1.1.3.2': ('id-camellia128-wrap', ),
'1.2.392.200011.61.1.1.3.3': ('id-camellia192-wrap', ),
'1.2.392.200011.61.1.1.3.4': ('id-camellia256-wrap', ),
'1.2.410.200004': ('kisa', 'KISA'),
'1.2.410.200004.1.3': ('seed-ecb', 'SEED-ECB'),
'1.2.410.200004.1.4': ('seed-cbc', 'SEED-CBC'),
'1.2.410.200004.1.5': ('seed-cfb', 'SEED-CFB'),
'1.2.410.200004.1.6': ('seed-ofb', 'SEED-OFB'),
'1.2.410.200046.1.1': ('aria', ),
'1.2.410.200046.1.1.1': ('aria-128-ecb', 'ARIA-128-ECB'),
'1.2.410.200046.1.1.2': ('aria-128-cbc', 'ARIA-128-CBC'),
'1.2.410.200046.1.1.3': ('aria-128-cfb', 'ARIA-128-CFB'),
'1.2.410.200046.1.1.4': ('aria-128-ofb', 'ARIA-128-OFB'),
'1.2.410.200046.1.1.5': ('aria-128-ctr', 'ARIA-128-CTR'),
'1.2.410.200046.1.1.6': ('aria-192-ecb', 'ARIA-192-ECB'),
'1.2.410.200046.1.1.7': ('aria-192-cbc', 'ARIA-192-CBC'),
'1.2.410.200046.1.1.8': ('aria-192-cfb', 'ARIA-192-CFB'),
'1.2.410.200046.1.1.9': ('aria-192-ofb', 'ARIA-192-OFB'),
'1.2.410.200046.1.1.10': ('aria-192-ctr', 'ARIA-192-CTR'),
'1.2.410.200046.1.1.11': ('aria-256-ecb', 'ARIA-256-ECB'),
'1.2.410.200046.1.1.12': ('aria-256-cbc', 'ARIA-256-CBC'),
'1.2.410.200046.1.1.13': ('aria-256-cfb', 'ARIA-256-CFB'),
'1.2.410.200046.1.1.14': ('aria-256-ofb', 'ARIA-256-OFB'),
'1.2.410.200046.1.1.15': ('aria-256-ctr', 'ARIA-256-CTR'),
'1.2.410.200046.1.1.34': ('aria-128-gcm', 'ARIA-128-GCM'),
'1.2.410.200046.1.1.35': ('aria-192-gcm', 'ARIA-192-GCM'),
'1.2.410.200046.1.1.36': ('aria-256-gcm', 'ARIA-256-GCM'),
'1.2.410.200046.1.1.37': ('aria-128-ccm', 'ARIA-128-CCM'),
'1.2.410.200046.1.1.38': ('aria-192-ccm', 'ARIA-192-CCM'),
'1.2.410.200046.1.1.39': ('aria-256-ccm', 'ARIA-256-CCM'),
'1.2.643.2.2': ('cryptopro', ),
'1.2.643.2.2.3': ('GOST R 34.11-94 with GOST R 34.10-2001', 'id-GostR3411-94-with-GostR3410-2001'),
'1.2.643.2.2.4': ('GOST R 34.11-94 with GOST R 34.10-94', 'id-GostR3411-94-with-GostR3410-94'),
'1.2.643.2.2.9': ('GOST R 34.11-94', 'md_gost94'),
'1.2.643.2.2.10': ('HMAC GOST 34.11-94', 'id-HMACGostR3411-94'),
'1.2.643.2.2.14.0': ('id-Gost28147-89-None-KeyMeshing', ),
'1.2.643.2.2.14.1': ('id-Gost28147-89-CryptoPro-KeyMeshing', ),
'1.2.643.2.2.19': ('GOST R 34.10-2001', 'gost2001'),
'1.2.643.2.2.20': ('GOST R 34.10-94', 'gost94'),
'1.2.643.2.2.20.1': ('id-GostR3410-94-a', ),
'1.2.643.2.2.20.2': ('id-GostR3410-94-aBis', ),
'1.2.643.2.2.20.3': ('id-GostR3410-94-b', ),
'1.2.643.2.2.20.4': ('id-GostR3410-94-bBis', ),
'1.2.643.2.2.21': ('GOST 28147-89', 'gost89'),
'1.2.643.2.2.22': ('GOST 28147-89 MAC', 'gost-mac'),
'1.2.643.2.2.23': ('GOST R 34.11-94 PRF', 'prf-gostr3411-94'),
'1.2.643.2.2.30.0': ('id-GostR3411-94-TestParamSet', ),
'1.2.643.2.2.30.1': ('id-GostR3411-94-CryptoProParamSet', ),
'1.2.643.2.2.31.0': ('id-Gost28147-89-TestParamSet', ),
'1.2.643.2.2.31.1': ('id-Gost28147-89-CryptoPro-A-ParamSet', ),
'1.2.643.2.2.31.2': ('id-Gost28147-89-CryptoPro-B-ParamSet', ),
'1.2.643.2.2.31.3': ('id-Gost28147-89-CryptoPro-C-ParamSet', ),
'1.2.643.2.2.31.4': ('id-Gost28147-89-CryptoPro-D-ParamSet', ),
'1.2.643.2.2.31.5': ('id-Gost28147-89-CryptoPro-Oscar-1-1-ParamSet', ),
'1.2.643.2.2.31.6': ('id-Gost28147-89-CryptoPro-Oscar-1-0-ParamSet', ),
'1.2.643.2.2.31.7': ('id-Gost28147-89-CryptoPro-RIC-1-ParamSet', ),
'1.2.643.2.2.32.0': ('id-GostR3410-94-TestParamSet', ),
'1.2.643.2.2.32.2': ('id-GostR3410-94-CryptoPro-A-ParamSet', ),
'1.2.643.2.2.32.3': ('id-GostR3410-94-CryptoPro-B-ParamSet', ),
'1.2.643.2.2.32.4': ('id-GostR3410-94-CryptoPro-C-ParamSet', ),
'1.2.643.2.2.32.5': ('id-GostR3410-94-CryptoPro-D-ParamSet', ),
'1.2.643.2.2.33.1': ('id-GostR3410-94-CryptoPro-XchA-ParamSet', ),
'1.2.643.2.2.33.2': ('id-GostR3410-94-CryptoPro-XchB-ParamSet', ),
'1.2.643.2.2.33.3': ('id-GostR3410-94-CryptoPro-XchC-ParamSet', ),
'1.2.643.2.2.35.0': ('id-GostR3410-2001-TestParamSet', ),
'1.2.643.2.2.35.1': ('id-GostR3410-2001-CryptoPro-A-ParamSet', ),
'1.2.643.2.2.35.2': ('id-GostR3410-2001-CryptoPro-B-ParamSet', ),
'1.2.643.2.2.35.3': ('id-GostR3410-2001-CryptoPro-C-ParamSet', ),
'1.2.643.2.2.36.0': ('id-GostR3410-2001-CryptoPro-XchA-ParamSet', ),
'1.2.643.2.2.36.1': ('id-GostR3410-2001-CryptoPro-XchB-ParamSet', ),
'1.2.643.2.2.98': ('GOST R 34.10-2001 DH', 'id-GostR3410-2001DH'),
'1.2.643.2.2.99': ('GOST R 34.10-94 DH', 'id-GostR3410-94DH'),
'1.2.643.2.9': ('cryptocom', ),
'1.2.643.2.9.1.3.3': ('GOST R 34.11-94 with GOST R 34.10-94 Cryptocom', 'id-GostR3411-94-with-GostR3410-94-cc'),
'1.2.643.2.9.1.3.4': ('GOST R 34.11-94 with GOST R 34.10-2001 Cryptocom', 'id-GostR3411-94-with-GostR3410-2001-cc'),
'1.2.643.2.9.1.5.3': ('GOST 34.10-94 Cryptocom', 'gost94cc'),
'1.2.643.2.9.1.5.4': ('GOST 34.10-2001 Cryptocom', 'gost2001cc'),
'1.2.643.2.9.1.6.1': ('GOST 28147-89 Cryptocom ParamSet', 'id-Gost28147-89-cc'),
'1.2.643.2.9.1.8.1': ('GOST R 3410-2001 Parameter Set Cryptocom', 'id-GostR3410-2001-ParamSet-cc'),
'1.2.643.3.131.1.1': ('INN', 'INN'),
'1.2.643.7.1': ('id-tc26', ),
'1.2.643.7.1.1': ('id-tc26-algorithms', ),
'1.2.643.7.1.1.1': ('id-tc26-sign', ),
'1.2.643.7.1.1.1.1': ('GOST R 34.10-2012 with 256 bit modulus', 'gost2012_256'),
'1.2.643.7.1.1.1.2': ('GOST R 34.10-2012 with 512 bit modulus', 'gost2012_512'),
'1.2.643.7.1.1.2': ('id-tc26-digest', ),
'1.2.643.7.1.1.2.2': ('GOST R 34.11-2012 with 256 bit hash', 'md_gost12_256'),
'1.2.643.7.1.1.2.3': ('GOST R 34.11-2012 with 512 bit hash', 'md_gost12_512'),
'1.2.643.7.1.1.3': ('id-tc26-signwithdigest', ),
'1.2.643.7.1.1.3.2': ('GOST R 34.10-2012 with GOST R 34.11-2012 (256 bit)', 'id-tc26-signwithdigest-gost3410-2012-256'),
'1.2.643.7.1.1.3.3': ('GOST R 34.10-2012 with GOST R 34.11-2012 (512 bit)', 'id-tc26-signwithdigest-gost3410-2012-512'),
'1.2.643.7.1.1.4': ('id-tc26-mac', ),
'1.2.643.7.1.1.4.1': ('HMAC GOST 34.11-2012 256 bit', 'id-tc26-hmac-gost-3411-2012-256'),
'1.2.643.7.1.1.4.2': ('HMAC GOST 34.11-2012 512 bit', 'id-tc26-hmac-gost-3411-2012-512'),
'1.2.643.7.1.1.5': ('id-tc26-cipher', ),
'1.2.643.7.1.1.5.1': ('id-tc26-cipher-gostr3412-2015-magma', ),
'1.2.643.7.1.1.5.1.1': ('id-tc26-cipher-gostr3412-2015-magma-ctracpkm', ),
'1.2.643.7.1.1.5.1.2': ('id-tc26-cipher-gostr3412-2015-magma-ctracpkm-omac', ),
'1.2.643.7.1.1.5.2': ('id-tc26-cipher-gostr3412-2015-kuznyechik', ),
'1.2.643.7.1.1.5.2.1': ('id-tc26-cipher-gostr3412-2015-kuznyechik-ctracpkm', ),
'1.2.643.7.1.1.5.2.2': ('id-tc26-cipher-gostr3412-2015-kuznyechik-ctracpkm-omac', ),
'1.2.643.7.1.1.6': ('id-tc26-agreement', ),
'1.2.643.7.1.1.6.1': ('id-tc26-agreement-gost-3410-2012-256', ),
'1.2.643.7.1.1.6.2': ('id-tc26-agreement-gost-3410-2012-512', ),
'1.2.643.7.1.1.7': ('id-tc26-wrap', ),
'1.2.643.7.1.1.7.1': ('id-tc26-wrap-gostr3412-2015-magma', ),
'1.2.643.7.1.1.7.1.1': ('id-tc26-wrap-gostr3412-2015-magma-kexp15', 'id-tc26-wrap-gostr3412-2015-kuznyechik-kexp15'),
'1.2.643.7.1.1.7.2': ('id-tc26-wrap-gostr3412-2015-kuznyechik', ),
'1.2.643.7.1.2': ('id-tc26-constants', ),
'1.2.643.7.1.2.1': ('id-tc26-sign-constants', ),
'1.2.643.7.1.2.1.1': ('id-tc26-gost-3410-2012-256-constants', ),
'1.2.643.7.1.2.1.1.1': ('GOST R 34.10-2012 (256 bit) ParamSet A', 'id-tc26-gost-3410-2012-256-paramSetA'),
'1.2.643.7.1.2.1.1.2': ('GOST R 34.10-2012 (256 bit) ParamSet B', 'id-tc26-gost-3410-2012-256-paramSetB'),
'1.2.643.7.1.2.1.1.3': ('GOST R 34.10-2012 (256 bit) ParamSet C', 'id-tc26-gost-3410-2012-256-paramSetC'),
'1.2.643.7.1.2.1.1.4': ('GOST R 34.10-2012 (256 bit) ParamSet D', 'id-tc26-gost-3410-2012-256-paramSetD'),
'1.2.643.7.1.2.1.2': ('id-tc26-gost-3410-2012-512-constants', ),
'1.2.643.7.1.2.1.2.0': ('GOST R 34.10-2012 (512 bit) testing parameter set', 'id-tc26-gost-3410-2012-512-paramSetTest'),
'1.2.643.7.1.2.1.2.1': ('GOST R 34.10-2012 (512 bit) ParamSet A', 'id-tc26-gost-3410-2012-512-paramSetA'),
'1.2.643.7.1.2.1.2.2': ('GOST R 34.10-2012 (512 bit) ParamSet B', 'id-tc26-gost-3410-2012-512-paramSetB'),
'1.2.643.7.1.2.1.2.3': ('GOST R 34.10-2012 (512 bit) ParamSet C', 'id-tc26-gost-3410-2012-512-paramSetC'),
'1.2.643.7.1.2.2': ('id-tc26-digest-constants', ),
'1.2.643.7.1.2.5': ('id-tc26-cipher-constants', ),
'1.2.643.7.1.2.5.1': ('id-tc26-gost-28147-constants', ),
'1.2.643.7.1.2.5.1.1': ('GOST 28147-89 TC26 parameter set', 'id-tc26-gost-28147-param-Z'),
'1.2.643.100.1': ('OGRN', 'OGRN'),
'1.2.643.100.3': ('SNILS', 'SNILS'),
'1.2.643.100.111': ('Signing Tool of Subject', 'subjectSignTool'),
'1.2.643.100.112': ('Signing Tool of Issuer', 'issuerSignTool'),
'1.2.804': ('ISO-UA', ),
'1.2.804.2.1.1.1': ('ua-pki', ),
'1.2.804.2.1.1.1.1.1.1': ('DSTU Gost 28147-2009', 'dstu28147'),
'1.2.804.2.1.1.1.1.1.1.2': ('DSTU Gost 28147-2009 OFB mode', 'dstu28147-ofb'),
'1.2.804.2.1.1.1.1.1.1.3': ('DSTU Gost 28147-2009 CFB mode', 'dstu28147-cfb'),
'1.2.804.2.1.1.1.1.1.1.5': ('DSTU Gost 28147-2009 key wrap', 'dstu28147-wrap'),
'1.2.804.2.1.1.1.1.1.2': ('HMAC DSTU Gost 34311-95', 'hmacWithDstu34311'),
'1.2.804.2.1.1.1.1.2.1': ('DSTU Gost 34311-95', 'dstu34311'),
'1.2.804.2.1.1.1.1.3.1.1': ('DSTU 4145-2002 little endian', 'dstu4145le'),
'1.2.804.2.1.1.1.1.3.1.1.1.1': ('DSTU 4145-2002 big endian', 'dstu4145be'),
'1.2.804.2.1.1.1.1.3.1.1.2.0': ('DSTU curve 0', 'uacurve0'),
'1.2.804.2.1.1.1.1.3.1.1.2.1': ('DSTU curve 1', 'uacurve1'),
'1.2.804.2.1.1.1.1.3.1.1.2.2': ('DSTU curve 2', 'uacurve2'),
'1.2.804.2.1.1.1.1.3.1.1.2.3': ('DSTU curve 3', 'uacurve3'),
'1.2.804.2.1.1.1.1.3.1.1.2.4': ('DSTU curve 4', 'uacurve4'),
'1.2.804.2.1.1.1.1.3.1.1.2.5': ('DSTU curve 5', 'uacurve5'),
'1.2.804.2.1.1.1.1.3.1.1.2.6': ('DSTU curve 6', 'uacurve6'),
'1.2.804.2.1.1.1.1.3.1.1.2.7': ('DSTU curve 7', 'uacurve7'),
'1.2.804.2.1.1.1.1.3.1.1.2.8': ('DSTU curve 8', 'uacurve8'),
'1.2.804.2.1.1.1.1.3.1.1.2.9': ('DSTU curve 9', 'uacurve9'),
'1.2.840': ('ISO US Member Body', 'ISO-US'),
'1.2.840.10040': ('X9.57', 'X9-57'),
'1.2.840.10040.2': ('holdInstruction', ),
'1.2.840.10040.2.1': ('Hold Instruction None', 'holdInstructionNone'),
'1.2.840.10040.2.2': ('Hold Instruction Call Issuer', 'holdInstructionCallIssuer'),
'1.2.840.10040.2.3': ('Hold Instruction Reject', 'holdInstructionReject'),
'1.2.840.10040.4': ('X9.57 CM ?', 'X9cm'),
'1.2.840.10040.4.1': ('dsaEncryption', 'DSA'),
'1.2.840.10040.4.3': ('dsaWithSHA1', 'DSA-SHA1'),
'1.2.840.10045': ('ANSI X9.62', 'ansi-X9-62'),
'1.2.840.10045.1': ('id-fieldType', ),
'1.2.840.10045.1.1': ('prime-field', ),
'1.2.840.10045.1.2': ('characteristic-two-field', ),
'1.2.840.10045.1.2.3': ('id-characteristic-two-basis', ),
'1.2.840.10045.1.2.3.1': ('onBasis', ),
'1.2.840.10045.1.2.3.2': ('tpBasis', ),
'1.2.840.10045.1.2.3.3': ('ppBasis', ),
'1.2.840.10045.2': ('id-publicKeyType', ),
'1.2.840.10045.2.1': ('id-ecPublicKey', ),
'1.2.840.10045.3': ('ellipticCurve', ),
'1.2.840.10045.3.0': ('c-TwoCurve', ),
'1.2.840.10045.3.0.1': ('c2pnb163v1', ),
'1.2.840.10045.3.0.2': ('c2pnb163v2', ),
'1.2.840.10045.3.0.3': ('c2pnb163v3', ),
'1.2.840.10045.3.0.4': ('c2pnb176v1', ),
'1.2.840.10045.3.0.5': ('c2tnb191v1', ),
'1.2.840.10045.3.0.6': ('c2tnb191v2', ),
'1.2.840.10045.3.0.7': ('c2tnb191v3', ),
'1.2.840.10045.3.0.8': ('c2onb191v4', ),
'1.2.840.10045.3.0.9': ('c2onb191v5', ),
'1.2.840.10045.3.0.10': ('c2pnb208w1', ),
'1.2.840.10045.3.0.11': ('c2tnb239v1', ),
'1.2.840.10045.3.0.12': ('c2tnb239v2', ),
'1.2.840.10045.3.0.13': ('c2tnb239v3', ),
'1.2.840.10045.3.0.14': ('c2onb239v4', ),
'1.2.840.10045.3.0.15': ('c2onb239v5', ),
'1.2.840.10045.3.0.16': ('c2pnb272w1', ),
'1.2.840.10045.3.0.17': ('c2pnb304w1', ),
'1.2.840.10045.3.0.18': ('c2tnb359v1', ),
'1.2.840.10045.3.0.19': ('c2pnb368w1', ),
'1.2.840.10045.3.0.20': ('c2tnb431r1', ),
'1.2.840.10045.3.1': ('primeCurve', ),
'1.2.840.10045.3.1.1': ('prime192v1', ),
'1.2.840.10045.3.1.2': ('prime192v2', ),
'1.2.840.10045.3.1.3': ('prime192v3', ),
'1.2.840.10045.3.1.4': ('prime239v1', ),
'1.2.840.10045.3.1.5': ('prime239v2', ),
'1.2.840.10045.3.1.6': ('prime239v3', ),
'1.2.840.10045.3.1.7': ('prime256v1', ),
'1.2.840.10045.4': ('id-ecSigType', ),
'1.2.840.10045.4.1': ('ecdsa-with-SHA1', ),
'1.2.840.10045.4.2': ('ecdsa-with-Recommended', ),
'1.2.840.10045.4.3': ('ecdsa-with-Specified', ),
'1.2.840.10045.4.3.1': ('ecdsa-with-SHA224', ),
'1.2.840.10045.4.3.2': ('ecdsa-with-SHA256', ),
'1.2.840.10045.4.3.3': ('ecdsa-with-SHA384', ),
'1.2.840.10045.4.3.4': ('ecdsa-with-SHA512', ),
'1.2.840.10046.2.1': ('X9.42 DH', 'dhpublicnumber'),
'1.2.840.113533.7.66.10': ('cast5-cbc', 'CAST5-CBC'),
'1.2.840.113533.7.66.12': ('pbeWithMD5AndCast5CBC', ),
'1.2.840.113533.7.66.13': ('password based MAC', 'id-PasswordBasedMAC'),
'1.2.840.113533.7.66.30': ('Diffie-Hellman based MAC', 'id-DHBasedMac'),
'1.2.840.113549': ('RSA Data Security, Inc.', 'rsadsi'),
'1.2.840.113549.1': ('RSA Data Security, Inc. PKCS', 'pkcs'),
'1.2.840.113549.1.1': ('pkcs1', ),
'1.2.840.113549.1.1.1': ('rsaEncryption', ),
'1.2.840.113549.1.1.2': ('md2WithRSAEncryption', 'RSA-MD2'),
'1.2.840.113549.1.1.3': ('md4WithRSAEncryption', 'RSA-MD4'),
'1.2.840.113549.1.1.4': ('md5WithRSAEncryption', 'RSA-MD5'),
'1.2.840.113549.1.1.5': ('sha1WithRSAEncryption', 'RSA-SHA1'),
'1.2.840.113549.1.1.6': ('rsaOAEPEncryptionSET', ),
'1.2.840.113549.1.1.7': ('rsaesOaep', 'RSAES-OAEP'),
'1.2.840.113549.1.1.8': ('mgf1', 'MGF1'),
'1.2.840.113549.1.1.9': ('pSpecified', 'PSPECIFIED'),
'1.2.840.113549.1.1.10': ('rsassaPss', 'RSASSA-PSS'),
'1.2.840.113549.1.1.11': ('sha256WithRSAEncryption', 'RSA-SHA256'),
'1.2.840.113549.1.1.12': ('sha384WithRSAEncryption', 'RSA-SHA384'),
'1.2.840.113549.1.1.13': ('sha512WithRSAEncryption', 'RSA-SHA512'),
'1.2.840.113549.1.1.14': ('sha224WithRSAEncryption', 'RSA-SHA224'),
'1.2.840.113549.1.1.15': ('sha512-224WithRSAEncryption', 'RSA-SHA512/224'),
'1.2.840.113549.1.1.16': ('sha512-256WithRSAEncryption', 'RSA-SHA512/256'),
'1.2.840.113549.1.3': ('pkcs3', ),
'1.2.840.113549.1.3.1': ('dhKeyAgreement', ),
'1.2.840.113549.1.5': ('pkcs5', ),
'1.2.840.113549.1.5.1': ('pbeWithMD2AndDES-CBC', 'PBE-MD2-DES'),
'1.2.840.113549.1.5.3': ('pbeWithMD5AndDES-CBC', 'PBE-MD5-DES'),
'1.2.840.113549.1.5.4': ('pbeWithMD2AndRC2-CBC', 'PBE-MD2-RC2-64'),
'1.2.840.113549.1.5.6': ('pbeWithMD5AndRC2-CBC', 'PBE-MD5-RC2-64'),
'1.2.840.113549.1.5.10': ('pbeWithSHA1AndDES-CBC', 'PBE-SHA1-DES'),
'1.2.840.113549.1.5.11': ('pbeWithSHA1AndRC2-CBC', 'PBE-SHA1-RC2-64'),
'1.2.840.113549.1.5.12': ('PBKDF2', ),
'1.2.840.113549.1.5.13': ('PBES2', ),
'1.2.840.113549.1.5.14': ('PBMAC1', ),
'1.2.840.113549.1.7': ('pkcs7', ),
'1.2.840.113549.1.7.1': ('pkcs7-data', ),
'1.2.840.113549.1.7.2': ('pkcs7-signedData', ),
'1.2.840.113549.1.7.3': ('pkcs7-envelopedData', ),
'1.2.840.113549.1.7.4': ('pkcs7-signedAndEnvelopedData', ),
'1.2.840.113549.1.7.5': ('pkcs7-digestData', ),
'1.2.840.113549.1.7.6': ('pkcs7-encryptedData', ),
'1.2.840.113549.1.9': ('pkcs9', ),
'1.2.840.113549.1.9.1': ('emailAddress', ),
'1.2.840.113549.1.9.2': ('unstructuredName', ),
'1.2.840.113549.1.9.3': ('contentType', ),
'1.2.840.113549.1.9.4': ('messageDigest', ),
'1.2.840.113549.1.9.5': ('signingTime', ),
'1.2.840.113549.1.9.6': ('countersignature', ),
'1.2.840.113549.1.9.7': ('challengePassword', ),
'1.2.840.113549.1.9.8': ('unstructuredAddress', ),
'1.2.840.113549.1.9.9': ('extendedCertificateAttributes', ),
'1.2.840.113549.1.9.14': ('Extension Request', 'extReq'),
'1.2.840.113549.1.9.15': ('S/MIME Capabilities', 'SMIME-CAPS'),
'1.2.840.113549.1.9.16': ('S/MIME', 'SMIME'),
'1.2.840.113549.1.9.16.0': ('id-smime-mod', ),
'1.2.840.113549.1.9.16.0.1': ('id-smime-mod-cms', ),
'1.2.840.113549.1.9.16.0.2': ('id-smime-mod-ess', ),
'1.2.840.113549.1.9.16.0.3': ('id-smime-mod-oid', ),
'1.2.840.113549.1.9.16.0.4': ('id-smime-mod-msg-v3', ),
'1.2.840.113549.1.9.16.0.5': ('id-smime-mod-ets-eSignature-88', ),
'1.2.840.113549.1.9.16.0.6': ('id-smime-mod-ets-eSignature-97', ),
'1.2.840.113549.1.9.16.0.7': ('id-smime-mod-ets-eSigPolicy-88', ),
'1.2.840.113549.1.9.16.0.8': ('id-smime-mod-ets-eSigPolicy-97', ),
'1.2.840.113549.1.9.16.1': ('id-smime-ct', ),
'1.2.840.113549.1.9.16.1.1': ('id-smime-ct-receipt', ),
'1.2.840.113549.1.9.16.1.2': ('id-smime-ct-authData', ),
'1.2.840.113549.1.9.16.1.3': ('id-smime-ct-publishCert', ),
'1.2.840.113549.1.9.16.1.4': ('id-smime-ct-TSTInfo', ),
'1.2.840.113549.1.9.16.1.5': ('id-smime-ct-TDTInfo', ),
'1.2.840.113549.1.9.16.1.6': ('id-smime-ct-contentInfo', ),
'1.2.840.113549.1.9.16.1.7': ('id-smime-ct-DVCSRequestData', ),
'1.2.840.113549.1.9.16.1.8': ('id-smime-ct-DVCSResponseData', ),
'1.2.840.113549.1.9.16.1.9': ('id-smime-ct-compressedData', ),
'1.2.840.113549.1.9.16.1.19': ('id-smime-ct-contentCollection', ),
'1.2.840.113549.1.9.16.1.23': ('id-smime-ct-authEnvelopedData', ),
'1.2.840.113549.1.9.16.1.27': ('id-ct-asciiTextWithCRLF', ),
'1.2.840.113549.1.9.16.1.28': ('id-ct-xml', ),
'1.2.840.113549.1.9.16.2': ('id-smime-aa', ),
'1.2.840.113549.1.9.16.2.1': ('id-smime-aa-receiptRequest', ),
'1.2.840.113549.1.9.16.2.2': ('id-smime-aa-securityLabel', ),
'1.2.840.113549.1.9.16.2.3': ('id-smime-aa-mlExpandHistory', ),
'1.2.840.113549.1.9.16.2.4': ('id-smime-aa-contentHint', ),
'1.2.840.113549.1.9.16.2.5': ('id-smime-aa-msgSigDigest', ),
'1.2.840.113549.1.9.16.2.6': ('id-smime-aa-encapContentType', ),
'1.2.840.113549.1.9.16.2.7': ('id-smime-aa-contentIdentifier', ),
'1.2.840.113549.1.9.16.2.8': ('id-smime-aa-macValue', ),
'1.2.840.113549.1.9.16.2.9': ('id-smime-aa-equivalentLabels', ),
'1.2.840.113549.1.9.16.2.10': ('id-smime-aa-contentReference', ),
'1.2.840.113549.1.9.16.2.11': ('id-smime-aa-encrypKeyPref', ),
'1.2.840.113549.1.9.16.2.12': ('id-smime-aa-signingCertificate', ),
'1.2.840.113549.1.9.16.2.13': ('id-smime-aa-smimeEncryptCerts', ),
'1.2.840.113549.1.9.16.2.14': ('id-smime-aa-timeStampToken', ),
'1.2.840.113549.1.9.16.2.15': ('id-smime-aa-ets-sigPolicyId', ),
'1.2.840.113549.1.9.16.2.16': ('id-smime-aa-ets-commitmentType', ),
'1.2.840.113549.1.9.16.2.17': ('id-smime-aa-ets-signerLocation', ),
'1.2.840.113549.1.9.16.2.18': ('id-smime-aa-ets-signerAttr', ),
'1.2.840.113549.1.9.16.2.19': ('id-smime-aa-ets-otherSigCert', ),
'1.2.840.113549.1.9.16.2.20': ('id-smime-aa-ets-contentTimestamp', ),
'1.2.840.113549.1.9.16.2.21': ('id-smime-aa-ets-CertificateRefs', ),
'1.2.840.113549.1.9.16.2.22': ('id-smime-aa-ets-RevocationRefs', ),
'1.2.840.113549.1.9.16.2.23': ('id-smime-aa-ets-certValues', ),
'1.2.840.113549.1.9.16.2.24': ('id-smime-aa-ets-revocationValues', ),
'1.2.840.113549.1.9.16.2.25': ('id-smime-aa-ets-escTimeStamp', ),
'1.2.840.113549.1.9.16.2.26': ('id-smime-aa-ets-certCRLTimestamp', ),
'1.2.840.113549.1.9.16.2.27': ('id-smime-aa-ets-archiveTimeStamp', ),
'1.2.840.113549.1.9.16.2.28': ('id-smime-aa-signatureType', ),
'1.2.840.113549.1.9.16.2.29': ('id-smime-aa-dvcs-dvc', ),
'1.2.840.113549.1.9.16.2.47': ('id-smime-aa-signingCertificateV2', ),
'1.2.840.113549.1.9.16.3': ('id-smime-alg', ),
'1.2.840.113549.1.9.16.3.1': ('id-smime-alg-ESDHwith3DES', ),
'1.2.840.113549.1.9.16.3.2': ('id-smime-alg-ESDHwithRC2', ),
'1.2.840.113549.1.9.16.3.3': ('id-smime-alg-3DESwrap', ),
'1.2.840.113549.1.9.16.3.4': ('id-smime-alg-RC2wrap', ),
'1.2.840.113549.1.9.16.3.5': ('id-smime-alg-ESDH', ),
'1.2.840.113549.1.9.16.3.6': ('id-smime-alg-CMS3DESwrap', ),
'1.2.840.113549.1.9.16.3.7': ('id-smime-alg-CMSRC2wrap', ),
'1.2.840.113549.1.9.16.3.8': ('zlib compression', 'ZLIB'),
'1.2.840.113549.1.9.16.3.9': ('id-alg-PWRI-KEK', ),
'1.2.840.113549.1.9.16.4': ('id-smime-cd', ),
'1.2.840.113549.1.9.16.4.1': ('id-smime-cd-ldap', ),
'1.2.840.113549.1.9.16.5': ('id-smime-spq', ),
'1.2.840.113549.1.9.16.5.1': ('id-smime-spq-ets-sqt-uri', ),
'1.2.840.113549.1.9.16.5.2': ('id-smime-spq-ets-sqt-unotice', ),
'1.2.840.113549.1.9.16.6': ('id-smime-cti', ),
'1.2.840.113549.1.9.16.6.1': ('id-smime-cti-ets-proofOfOrigin', ),
'1.2.840.113549.1.9.16.6.2': ('id-smime-cti-ets-proofOfReceipt', ),
'1.2.840.113549.1.9.16.6.3': ('id-smime-cti-ets-proofOfDelivery', ),
'1.2.840.113549.1.9.16.6.4': ('id-smime-cti-ets-proofOfSender', ),
'1.2.840.113549.1.9.16.6.5': ('id-smime-cti-ets-proofOfApproval', ),
'1.2.840.113549.1.9.16.6.6': ('id-smime-cti-ets-proofOfCreation', ),
'1.2.840.113549.1.9.20': ('friendlyName', ),
'1.2.840.113549.1.9.21': ('localKeyID', ),
'1.2.840.113549.1.9.22': ('certTypes', ),
'1.2.840.113549.1.9.22.1': ('x509Certificate', ),
'1.2.840.113549.1.9.22.2': ('sdsiCertificate', ),
'1.2.840.113549.1.9.23': ('crlTypes', ),
'1.2.840.113549.1.9.23.1': ('x509Crl', ),
'1.2.840.113549.1.12': ('pkcs12', ),
'1.2.840.113549.1.12.1': ('pkcs12-pbeids', ),
'1.2.840.113549.1.12.1.1': ('pbeWithSHA1And128BitRC4', 'PBE-SHA1-RC4-128'),
'1.2.840.113549.1.12.1.2': ('pbeWithSHA1And40BitRC4', 'PBE-SHA1-RC4-40'),
'1.2.840.113549.1.12.1.3': ('pbeWithSHA1And3-KeyTripleDES-CBC', 'PBE-SHA1-3DES'),
'1.2.840.113549.1.12.1.4': ('pbeWithSHA1And2-KeyTripleDES-CBC', 'PBE-SHA1-2DES'),
'1.2.840.113549.1.12.1.5': ('pbeWithSHA1And128BitRC2-CBC', 'PBE-SHA1-RC2-128'),
'1.2.840.113549.1.12.1.6': ('pbeWithSHA1And40BitRC2-CBC', 'PBE-SHA1-RC2-40'),
'1.2.840.113549.1.12.10': ('pkcs12-Version1', ),
'1.2.840.113549.1.12.10.1': ('pkcs12-BagIds', ),
'1.2.840.113549.1.12.10.1.1': ('keyBag', ),
'1.2.840.113549.1.12.10.1.2': ('pkcs8ShroudedKeyBag', ),
'1.2.840.113549.1.12.10.1.3': ('certBag', ),
'1.2.840.113549.1.12.10.1.4': ('crlBag', ),
'1.2.840.113549.1.12.10.1.5': ('secretBag', ),
'1.2.840.113549.1.12.10.1.6': ('safeContentsBag', ),
'1.2.840.113549.2.2': ('md2', 'MD2'),
'1.2.840.113549.2.4': ('md4', 'MD4'),
'1.2.840.113549.2.5': ('md5', 'MD5'),
'1.2.840.113549.2.6': ('hmacWithMD5', ),
'1.2.840.113549.2.7': ('hmacWithSHA1', ),
'1.2.840.113549.2.8': ('hmacWithSHA224', ),
'1.2.840.113549.2.9': ('hmacWithSHA256', ),
'1.2.840.113549.2.10': ('hmacWithSHA384', ),
'1.2.840.113549.2.11': ('hmacWithSHA512', ),
'1.2.840.113549.2.12': ('hmacWithSHA512-224', ),
'1.2.840.113549.2.13': ('hmacWithSHA512-256', ),
'1.2.840.113549.3.2': ('rc2-cbc', 'RC2-CBC'),
'1.2.840.113549.3.4': ('rc4', 'RC4'),
'1.2.840.113549.3.7': ('des-ede3-cbc', 'DES-EDE3-CBC'),
'1.2.840.113549.3.8': ('rc5-cbc', 'RC5-CBC'),
'1.2.840.113549.3.10': ('des-cdmf', 'DES-CDMF'),
'1.3': ('identified-organization', 'org', 'ORG'),
'1.3.6': ('dod', 'DOD'),
'1.3.6.1': ('iana', 'IANA', 'internet'),
'1.3.6.1.1': ('Directory', 'directory'),
'1.3.6.1.2': ('Management', 'mgmt'),
'1.3.6.1.3': ('Experimental', 'experimental'),
'1.3.6.1.4': ('Private', 'private'),
'1.3.6.1.4.1': ('Enterprises', 'enterprises'),
'1.3.6.1.4.1.188.7.1.1.2': ('idea-cbc', 'IDEA-CBC'),
'1.3.6.1.4.1.311.2.1.14': ('Microsoft Extension Request', 'msExtReq'),
'1.3.6.1.4.1.311.2.1.21': ('Microsoft Individual Code Signing', 'msCodeInd'),
'1.3.6.1.4.1.311.2.1.22': ('Microsoft Commercial Code Signing', 'msCodeCom'),
'1.3.6.1.4.1.311.10.3.1': ('Microsoft Trust List Signing', 'msCTLSign'),
'1.3.6.1.4.1.311.10.3.3': ('Microsoft Server Gated Crypto', 'msSGC'),
'1.3.6.1.4.1.311.10.3.4': ('Microsoft Encrypted File System', 'msEFS'),
'1.3.6.1.4.1.311.17.1': ('Microsoft CSP Name', 'CSPName'),
'1.3.6.1.4.1.311.17.2': ('Microsoft Local Key set', 'LocalKeySet'),
'1.3.6.1.4.1.311.20.2.2': ('Microsoft Smartcardlogin', 'msSmartcardLogin'),
'1.3.6.1.4.1.311.20.2.3': ('Microsoft Universal Principal Name', 'msUPN'),
'1.3.6.1.4.1.311.60.2.1.1': ('jurisdictionLocalityName', 'jurisdictionL'),
'1.3.6.1.4.1.311.60.2.1.2': ('jurisdictionStateOrProvinceName', 'jurisdictionST'),
'1.3.6.1.4.1.311.60.2.1.3': ('jurisdictionCountryName', 'jurisdictionC'),
'1.3.6.1.4.1.1466.344': ('dcObject', 'dcobject'),
'1.3.6.1.4.1.1722.12.2.1.16': ('blake2b512', 'BLAKE2b512'),
'1.3.6.1.4.1.1722.12.2.2.8': ('blake2s256', 'BLAKE2s256'),
'1.3.6.1.4.1.3029.1.2': ('bf-cbc', 'BF-CBC'),
'1.3.6.1.4.1.11129.2.4.2': ('CT Precertificate SCTs', 'ct_precert_scts'),
'1.3.6.1.4.1.11129.2.4.3': ('CT Precertificate Poison', 'ct_precert_poison'),
'1.3.6.1.4.1.11129.2.4.4': ('CT Precertificate Signer', 'ct_precert_signer'),
'1.3.6.1.4.1.11129.2.4.5': ('CT Certificate SCTs', 'ct_cert_scts'),
'1.3.6.1.4.1.11591.4.11': ('scrypt', 'id-scrypt'),
'1.3.6.1.5': ('Security', 'security'),
'1.3.6.1.5.2.3': ('id-pkinit', ),
'1.3.6.1.5.2.3.4': ('PKINIT Client Auth', 'pkInitClientAuth'),
'1.3.6.1.5.2.3.5': ('Signing KDC Response', 'pkInitKDC'),
'1.3.6.1.5.5.7': ('PKIX', ),
'1.3.6.1.5.5.7.0': ('id-pkix-mod', ),
'1.3.6.1.5.5.7.0.1': ('id-pkix1-explicit-88', ),
'1.3.6.1.5.5.7.0.2': ('id-pkix1-implicit-88', ),
'1.3.6.1.5.5.7.0.3': ('id-pkix1-explicit-93', ),
'1.3.6.1.5.5.7.0.4': ('id-pkix1-implicit-93', ),
'1.3.6.1.5.5.7.0.5': ('id-mod-crmf', ),
'1.3.6.1.5.5.7.0.6': ('id-mod-cmc', ),
'1.3.6.1.5.5.7.0.7': ('id-mod-kea-profile-88', ),
'1.3.6.1.5.5.7.0.8': ('id-mod-kea-profile-93', ),
'1.3.6.1.5.5.7.0.9': ('id-mod-cmp', ),
'1.3.6.1.5.5.7.0.10': ('id-mod-qualified-cert-88', ),
'1.3.6.1.5.5.7.0.11': ('id-mod-qualified-cert-93', ),
'1.3.6.1.5.5.7.0.12': ('id-mod-attribute-cert', ),
'1.3.6.1.5.5.7.0.13': ('id-mod-timestamp-protocol', ),
'1.3.6.1.5.5.7.0.14': ('id-mod-ocsp', ),
'1.3.6.1.5.5.7.0.15': ('id-mod-dvcs', ),
'1.3.6.1.5.5.7.0.16': ('id-mod-cmp2000', ),
'1.3.6.1.5.5.7.1': ('id-pe', ),
'1.3.6.1.5.5.7.1.1': ('Authority Information Access', 'authorityInfoAccess'),
'1.3.6.1.5.5.7.1.2': ('Biometric Info', 'biometricInfo'),
'1.3.6.1.5.5.7.1.3': ('qcStatements', ),
'1.3.6.1.5.5.7.1.4': ('ac-auditEntity', ),
'1.3.6.1.5.5.7.1.5': ('ac-targeting', ),
'1.3.6.1.5.5.7.1.6': ('aaControls', ),
'1.3.6.1.5.5.7.1.7': ('sbgp-ipAddrBlock', ),
'1.3.6.1.5.5.7.1.8': ('sbgp-autonomousSysNum', ),
'1.3.6.1.5.5.7.1.9': ('sbgp-routerIdentifier', ),
'1.3.6.1.5.5.7.1.10': ('ac-proxying', ),
'1.3.6.1.5.5.7.1.11': ('Subject Information Access', 'subjectInfoAccess'),
'1.3.6.1.5.5.7.1.14': ('Proxy Certificate Information', 'proxyCertInfo'),
'1.3.6.1.5.5.7.1.24': ('TLS Feature', 'tlsfeature'),
'1.3.6.1.5.5.7.2': ('id-qt', ),
'1.3.6.1.5.5.7.2.1': ('Policy Qualifier CPS', 'id-qt-cps'),
'1.3.6.1.5.5.7.2.2': ('Policy Qualifier User Notice', 'id-qt-unotice'),
'1.3.6.1.5.5.7.2.3': ('textNotice', ),
'1.3.6.1.5.5.7.3': ('id-kp', ),
'1.3.6.1.5.5.7.3.1': ('TLS Web Server Authentication', 'serverAuth'),
'1.3.6.1.5.5.7.3.2': ('TLS Web Client Authentication', 'clientAuth'),
'1.3.6.1.5.5.7.3.3': ('Code Signing', 'codeSigning'),
'1.3.6.1.5.5.7.3.4': ('E-mail Protection', 'emailProtection'),
'1.3.6.1.5.5.7.3.5': ('IPSec End System', 'ipsecEndSystem'),
'1.3.6.1.5.5.7.3.6': ('IPSec Tunnel', 'ipsecTunnel'),
'1.3.6.1.5.5.7.3.7': ('IPSec User', 'ipsecUser'),
'1.3.6.1.5.5.7.3.8': ('Time Stamping', 'timeStamping'),
'1.3.6.1.5.5.7.3.9': ('OCSP Signing', 'OCSPSigning'),
'1.3.6.1.5.5.7.3.10': ('dvcs', 'DVCS'),
'1.3.6.1.5.5.7.3.17': ('ipsec Internet Key Exchange', 'ipsecIKE'),
'1.3.6.1.5.5.7.3.18': ('Ctrl/provision WAP Access', 'capwapAC'),
'1.3.6.1.5.5.7.3.19': ('Ctrl/Provision WAP Termination', 'capwapWTP'),
'1.3.6.1.5.5.7.3.21': ('SSH Client', 'secureShellClient'),
'1.3.6.1.5.5.7.3.22': ('SSH Server', 'secureShellServer'),
'1.3.6.1.5.5.7.3.23': ('Send Router', 'sendRouter'),
'1.3.6.1.5.5.7.3.24': ('Send Proxied Router', 'sendProxiedRouter'),
'1.3.6.1.5.5.7.3.25': ('Send Owner', 'sendOwner'),
'1.3.6.1.5.5.7.3.26': ('Send Proxied Owner', 'sendProxiedOwner'),
'1.3.6.1.5.5.7.3.27': ('CMC Certificate Authority', 'cmcCA'),
'1.3.6.1.5.5.7.3.28': ('CMC Registration Authority', 'cmcRA'),
'1.3.6.1.5.5.7.4': ('id-it', ),
'1.3.6.1.5.5.7.4.1': ('id-it-caProtEncCert', ),
'1.3.6.1.5.5.7.4.2': ('id-it-signKeyPairTypes', ),
'1.3.6.1.5.5.7.4.3': ('id-it-encKeyPairTypes', ),
'1.3.6.1.5.5.7.4.4': ('id-it-preferredSymmAlg', ),
'1.3.6.1.5.5.7.4.5': ('id-it-caKeyUpdateInfo', ),
'1.3.6.1.5.5.7.4.6': ('id-it-currentCRL', ),
'1.3.6.1.5.5.7.4.7': ('id-it-unsupportedOIDs', ),
'1.3.6.1.5.5.7.4.8': ('id-it-subscriptionRequest', ),
'1.3.6.1.5.5.7.4.9': ('id-it-subscriptionResponse', ),
'1.3.6.1.5.5.7.4.10': ('id-it-keyPairParamReq', ),
'1.3.6.1.5.5.7.4.11': ('id-it-keyPairParamRep', ),
'1.3.6.1.5.5.7.4.12': ('id-it-revPassphrase', ),
'1.3.6.1.5.5.7.4.13': ('id-it-implicitConfirm', ),
'1.3.6.1.5.5.7.4.14': ('id-it-confirmWaitTime', ),
'1.3.6.1.5.5.7.4.15': ('id-it-origPKIMessage', ),
'1.3.6.1.5.5.7.4.16': ('id-it-suppLangTags', ),
'1.3.6.1.5.5.7.5': ('id-pkip', ),
'1.3.6.1.5.5.7.5.1': ('id-regCtrl', ),
'1.3.6.1.5.5.7.5.1.1': ('id-regCtrl-regToken', ),
'1.3.6.1.5.5.7.5.1.2': ('id-regCtrl-authenticator', ),
'1.3.6.1.5.5.7.5.1.3': ('id-regCtrl-pkiPublicationInfo', ),
'1.3.6.1.5.5.7.5.1.4': ('id-regCtrl-pkiArchiveOptions', ),
'1.3.6.1.5.5.7.5.1.5': ('id-regCtrl-oldCertID', ),
'1.3.6.1.5.5.7.5.1.6': ('id-regCtrl-protocolEncrKey', ),
'1.3.6.1.5.5.7.5.2': ('id-regInfo', ),
'1.3.6.1.5.5.7.5.2.1': ('id-regInfo-utf8Pairs', ),
'1.3.6.1.5.5.7.5.2.2': ('id-regInfo-certReq', ),
'1.3.6.1.5.5.7.6': ('id-alg', ),
'1.3.6.1.5.5.7.6.1': ('id-alg-des40', ),
'1.3.6.1.5.5.7.6.2': ('id-alg-noSignature', ),
'1.3.6.1.5.5.7.6.3': ('id-alg-dh-sig-hmac-sha1', ),
'1.3.6.1.5.5.7.6.4': ('id-alg-dh-pop', ),
'1.3.6.1.5.5.7.7': ('id-cmc', ),
'1.3.6.1.5.5.7.7.1': ('id-cmc-statusInfo', ),
'1.3.6.1.5.5.7.7.2': ('id-cmc-identification', ),
'1.3.6.1.5.5.7.7.3': ('id-cmc-identityProof', ),
'1.3.6.1.5.5.7.7.4': ('id-cmc-dataReturn', ),
'1.3.6.1.5.5.7.7.5': ('id-cmc-transactionId', ),
'1.3.6.1.5.5.7.7.6': ('id-cmc-senderNonce', ),
'1.3.6.1.5.5.7.7.7': ('id-cmc-recipientNonce', ),
'1.3.6.1.5.5.7.7.8': ('id-cmc-addExtensions', ),
'1.3.6.1.5.5.7.7.9': ('id-cmc-encryptedPOP', ),
'1.3.6.1.5.5.7.7.10': ('id-cmc-decryptedPOP', ),
'1.3.6.1.5.5.7.7.11': ('id-cmc-lraPOPWitness', ),
'1.3.6.1.5.5.7.7.15': ('id-cmc-getCert', ),
'1.3.6.1.5.5.7.7.16': ('id-cmc-getCRL', ),
'1.3.6.1.5.5.7.7.17': ('id-cmc-revokeRequest', ),
'1.3.6.1.5.5.7.7.18': ('id-cmc-regInfo', ),
'1.3.6.1.5.5.7.7.19': ('id-cmc-responseInfo', ),
'1.3.6.1.5.5.7.7.21': ('id-cmc-queryPending', ),
'1.3.6.1.5.5.7.7.22': ('id-cmc-popLinkRandom', ),
'1.3.6.1.5.5.7.7.23': ('id-cmc-popLinkWitness', ),
'1.3.6.1.5.5.7.7.24': ('id-cmc-confirmCertAcceptance', ),
'1.3.6.1.5.5.7.8': ('id-on', ),
'1.3.6.1.5.5.7.8.1': ('id-on-personalData', ),
'1.3.6.1.5.5.7.8.3': ('Permanent Identifier', 'id-on-permanentIdentifier'),
'1.3.6.1.5.5.7.9': ('id-pda', ),
'1.3.6.1.5.5.7.9.1': ('id-pda-dateOfBirth', ),
'1.3.6.1.5.5.7.9.2': ('id-pda-placeOfBirth', ),
'1.3.6.1.5.5.7.9.3': ('id-pda-gender', ),
'1.3.6.1.5.5.7.9.4': ('id-pda-countryOfCitizenship', ),
'1.3.6.1.5.5.7.9.5': ('id-pda-countryOfResidence', ),
'1.3.6.1.5.5.7.10': ('id-aca', ),
'1.3.6.1.5.5.7.10.1': ('id-aca-authenticationInfo', ),
'1.3.6.1.5.5.7.10.2': ('id-aca-accessIdentity', ),
'1.3.6.1.5.5.7.10.3': ('id-aca-chargingIdentity', ),
'1.3.6.1.5.5.7.10.4': ('id-aca-group', ),
'1.3.6.1.5.5.7.10.5': ('id-aca-role', ),
'1.3.6.1.5.5.7.10.6': ('id-aca-encAttrs', ),
'1.3.6.1.5.5.7.11': ('id-qcs', ),
'1.3.6.1.5.5.7.11.1': ('id-qcs-pkixQCSyntax-v1', ),
'1.3.6.1.5.5.7.12': ('id-cct', ),
'1.3.6.1.5.5.7.12.1': ('id-cct-crs', ),
'1.3.6.1.5.5.7.12.2': ('id-cct-PKIData', ),
'1.3.6.1.5.5.7.12.3': ('id-cct-PKIResponse', ),
'1.3.6.1.5.5.7.21': ('id-ppl', ),
'1.3.6.1.5.5.7.21.0': ('Any language', 'id-ppl-anyLanguage'),
'1.3.6.1.5.5.7.21.1': ('Inherit all', 'id-ppl-inheritAll'),
'1.3.6.1.5.5.7.21.2': ('Independent', 'id-ppl-independent'),
'1.3.6.1.5.5.7.48': ('id-ad', ),
'1.3.6.1.5.5.7.48.1': ('OCSP', 'OCSP', 'id-pkix-OCSP'),
'1.3.6.1.5.5.7.48.1.1': ('Basic OCSP Response', 'basicOCSPResponse'),
'1.3.6.1.5.5.7.48.1.2': ('OCSP Nonce', 'Nonce'),
'1.3.6.1.5.5.7.48.1.3': ('OCSP CRL ID', 'CrlID'),
'1.3.6.1.5.5.7.48.1.4': ('Acceptable OCSP Responses', 'acceptableResponses'),
'1.3.6.1.5.5.7.48.1.5': ('OCSP No Check', 'noCheck'),
'1.3.6.1.5.5.7.48.1.6': ('OCSP Archive Cutoff', 'archiveCutoff'),
'1.3.6.1.5.5.7.48.1.7': ('OCSP Service Locator', 'serviceLocator'),
'1.3.6.1.5.5.7.48.1.8': ('Extended OCSP Status', 'extendedStatus'),
'1.3.6.1.5.5.7.48.1.9': ('valid', ),
'1.3.6.1.5.5.7.48.1.10': ('path', ),
'1.3.6.1.5.5.7.48.1.11': ('Trust Root', 'trustRoot'),
'1.3.6.1.5.5.7.48.2': ('CA Issuers', 'caIssuers'),
'1.3.6.1.5.5.7.48.3': ('AD Time Stamping', 'ad_timestamping'),
'1.3.6.1.5.5.7.48.4': ('ad dvcs', 'AD_DVCS'),
'1.3.6.1.5.5.7.48.5': ('CA Repository', 'caRepository'),
'1.3.6.1.5.5.8.1.1': ('hmac-md5', 'HMAC-MD5'),
'1.3.6.1.5.5.8.1.2': ('hmac-sha1', 'HMAC-SHA1'),
'1.3.6.1.6': ('SNMPv2', 'snmpv2'),
'1.3.6.1.7': ('Mail', ),
'1.3.6.1.7.1': ('MIME MHS', 'mime-mhs'),
'1.3.6.1.7.1.1': ('mime-mhs-headings', 'mime-mhs-headings'),
'1.3.6.1.7.1.1.1': ('id-hex-partial-message', 'id-hex-partial-message'),
'1.3.6.1.7.1.1.2': ('id-hex-multipart-message', 'id-hex-multipart-message'),
'1.3.6.1.7.1.2': ('mime-mhs-bodies', 'mime-mhs-bodies'),
'1.3.14.3.2': ('algorithm', 'algorithm'),
'1.3.14.3.2.3': ('md5WithRSA', 'RSA-NP-MD5'),
'1.3.14.3.2.6': ('des-ecb', 'DES-ECB'),
'1.3.14.3.2.7': ('des-cbc', 'DES-CBC'),
'1.3.14.3.2.8': ('des-ofb', 'DES-OFB'),
'1.3.14.3.2.9': ('des-cfb', 'DES-CFB'),
'1.3.14.3.2.11': ('rsaSignature', ),
'1.3.14.3.2.12': ('dsaEncryption-old', 'DSA-old'),
'1.3.14.3.2.13': ('dsaWithSHA', 'DSA-SHA'),
'1.3.14.3.2.15': ('shaWithRSAEncryption', 'RSA-SHA'),
'1.3.14.3.2.17': ('des-ede', 'DES-EDE'),
'1.3.14.3.2.18': ('sha', 'SHA'),
'1.3.14.3.2.26': ('sha1', 'SHA1'),
'1.3.14.3.2.27': ('dsaWithSHA1-old', 'DSA-SHA1-old'),
'1.3.14.3.2.29': ('sha1WithRSA', 'RSA-SHA1-2'),
'1.3.36.3.2.1': ('ripemd160', 'RIPEMD160'),
'1.3.36.3.3.1.2': ('ripemd160WithRSA', 'RSA-RIPEMD160'),
'1.3.36.3.3.2.8.1.1.1': ('brainpoolP160r1', ),
'1.3.36.3.3.2.8.1.1.2': ('brainpoolP160t1', ),
'1.3.36.3.3.2.8.1.1.3': ('brainpoolP192r1', ),
'1.3.36.3.3.2.8.1.1.4': ('brainpoolP192t1', ),
'1.3.36.3.3.2.8.1.1.5': ('brainpoolP224r1', ),
'1.3.36.3.3.2.8.1.1.6': ('brainpoolP224t1', ),
'1.3.36.3.3.2.8.1.1.7': ('brainpoolP256r1', ),
'1.3.36.3.3.2.8.1.1.8': ('brainpoolP256t1', ),
'1.3.36.3.3.2.8.1.1.9': ('brainpoolP320r1', ),
'1.3.36.3.3.2.8.1.1.10': ('brainpoolP320t1', ),
'1.3.36.3.3.2.8.1.1.11': ('brainpoolP384r1', ),
'1.3.36.3.3.2.8.1.1.12': ('brainpoolP384t1', ),
'1.3.36.3.3.2.8.1.1.13': ('brainpoolP512r1', ),
'1.3.36.3.3.2.8.1.1.14': ('brainpoolP512t1', ),
'1.3.36.8.3.3': ('Professional Information or basis for Admission', 'x509ExtAdmission'),
'1.3.101.1.4.1': ('Strong Extranet ID', 'SXNetID'),
'1.3.101.110': ('X25519', ),
'1.3.101.111': ('X448', ),
'1.3.101.112': ('ED25519', ),
'1.3.101.113': ('ED448', ),
'1.3.111': ('ieee', ),
'1.3.111.2.1619': ('IEEE Security in Storage Working Group', 'ieee-siswg'),
'1.3.111.2.1619.0.1.1': ('aes-128-xts', 'AES-128-XTS'),
'1.3.111.2.1619.0.1.2': ('aes-256-xts', 'AES-256-XTS'),
'1.3.132': ('certicom-arc', ),
'1.3.132.0': ('secg_ellipticCurve', ),
'1.3.132.0.1': ('sect163k1', ),
'1.3.132.0.2': ('sect163r1', ),
'1.3.132.0.3': ('sect239k1', ),
'1.3.132.0.4': ('sect113r1', ),
'1.3.132.0.5': ('sect113r2', ),
'1.3.132.0.6': ('secp112r1', ),
'1.3.132.0.7': ('secp112r2', ),
'1.3.132.0.8': ('secp160r1', ),
'1.3.132.0.9': ('secp160k1', ),
'1.3.132.0.10': ('secp256k1', ),
'1.3.132.0.15': ('sect163r2', ),
'1.3.132.0.16': ('sect283k1', ),
'1.3.132.0.17': ('sect283r1', ),
'1.3.132.0.22': ('sect131r1', ),
'1.3.132.0.23': ('sect131r2', ),
'1.3.132.0.24': ('sect193r1', ),
'1.3.132.0.25': ('sect193r2', ),
'1.3.132.0.26': ('sect233k1', ),
'1.3.132.0.27': ('sect233r1', ),
'1.3.132.0.28': ('secp128r1', ),
'1.3.132.0.29': ('secp128r2', ),
'1.3.132.0.30': ('secp160r2', ),
'1.3.132.0.31': ('secp192k1', ),
'1.3.132.0.32': ('secp224k1', ),
'1.3.132.0.33': ('secp224r1', ),
'1.3.132.0.34': ('secp384r1', ),
'1.3.132.0.35': ('secp521r1', ),
'1.3.132.0.36': ('sect409k1', ),
'1.3.132.0.37': ('sect409r1', ),
'1.3.132.0.38': ('sect571k1', ),
'1.3.132.0.39': ('sect571r1', ),
'1.3.132.1': ('secg-scheme', ),
'1.3.132.1.11.0': ('dhSinglePass-stdDH-sha224kdf-scheme', ),
'1.3.132.1.11.1': ('dhSinglePass-stdDH-sha256kdf-scheme', ),
'1.3.132.1.11.2': ('dhSinglePass-stdDH-sha384kdf-scheme', ),
'1.3.132.1.11.3': ('dhSinglePass-stdDH-sha512kdf-scheme', ),
'1.3.132.1.14.0': ('dhSinglePass-cofactorDH-sha224kdf-scheme', ),
'1.3.132.1.14.1': ('dhSinglePass-cofactorDH-sha256kdf-scheme', ),
'1.3.132.1.14.2': ('dhSinglePass-cofactorDH-sha384kdf-scheme', ),
'1.3.132.1.14.3': ('dhSinglePass-cofactorDH-sha512kdf-scheme', ),
'1.3.133.16.840.63.0': ('x9-63-scheme', ),
'1.3.133.16.840.63.0.2': ('dhSinglePass-stdDH-sha1kdf-scheme', ),
'1.3.133.16.840.63.0.3': ('dhSinglePass-cofactorDH-sha1kdf-scheme', ),
'2': ('joint-iso-itu-t', 'JOINT-ISO-ITU-T', 'joint-iso-ccitt'),
'2.5': ('directory services (X.500)', 'X500'),
'2.5.1.5': ('Selected Attribute Types', 'selected-attribute-types'),
'2.5.1.5.55': ('clearance', ),
'2.5.4': ('X509', ),
'2.5.4.3': ('commonName', 'CN'),
'2.5.4.4': ('surname', 'SN'),
'2.5.4.5': ('serialNumber', ),
'2.5.4.6': ('countryName', 'C'),
'2.5.4.7': ('localityName', 'L'),
'2.5.4.8': ('stateOrProvinceName', 'ST'),
'2.5.4.9': ('streetAddress', 'street'),
'2.5.4.10': ('organizationName', 'O'),
'2.5.4.11': ('organizationalUnitName', 'OU'),
'2.5.4.12': ('title', 'title'),
'2.5.4.13': ('description', ),
'2.5.4.14': ('searchGuide', ),
'2.5.4.15': ('businessCategory', ),
'2.5.4.16': ('postalAddress', ),
'2.5.4.17': ('postalCode', ),
'2.5.4.18': ('postOfficeBox', ),
'2.5.4.19': ('physicalDeliveryOfficeName', ),
'2.5.4.20': ('telephoneNumber', ),
'2.5.4.21': ('telexNumber', ),
'2.5.4.22': ('teletexTerminalIdentifier', ),
'2.5.4.23': ('facsimileTelephoneNumber', ),
'2.5.4.24': ('x121Address', ),
'2.5.4.25': ('internationaliSDNNumber', ),
'2.5.4.26': ('registeredAddress', ),
'2.5.4.27': ('destinationIndicator', ),
'2.5.4.28': ('preferredDeliveryMethod', ),
'2.5.4.29': ('presentationAddress', ),
'2.5.4.30': ('supportedApplicationContext', ),
'2.5.4.31': ('member', ),
'2.5.4.32': ('owner', ),
'2.5.4.33': ('roleOccupant', ),
'2.5.4.34': ('seeAlso', ),
'2.5.4.35': ('userPassword', ),
'2.5.4.36': ('userCertificate', ),
'2.5.4.37': ('cACertificate', ),
'2.5.4.38': ('authorityRevocationList', ),
'2.5.4.39': ('certificateRevocationList', ),
'2.5.4.40': ('crossCertificatePair', ),
'2.5.4.41': ('name', 'name'),
'2.5.4.42': ('givenName', 'GN'),
'2.5.4.43': ('initials', 'initials'),
'2.5.4.44': ('generationQualifier', ),
'2.5.4.45': ('x500UniqueIdentifier', ),
'2.5.4.46': ('dnQualifier', 'dnQualifier'),
'2.5.4.47': ('enhancedSearchGuide', ),
'2.5.4.48': ('protocolInformation', ),
'2.5.4.49': ('distinguishedName', ),
'2.5.4.50': ('uniqueMember', ),
'2.5.4.51': ('houseIdentifier', ),
'2.5.4.52': ('supportedAlgorithms', ),
'2.5.4.53': ('deltaRevocationList', ),
'2.5.4.54': ('dmdName', ),
'2.5.4.65': ('pseudonym', ),
'2.5.4.72': ('role', 'role'),
'2.5.4.97': ('organizationIdentifier', ),
'2.5.4.98': ('countryCode3c', 'c3'),
'2.5.4.99': ('countryCode3n', 'n3'),
'2.5.4.100': ('dnsName', ),
'2.5.8': ('directory services - algorithms', 'X500algorithms'),
'2.5.8.1.1': ('rsa', 'RSA'),
'2.5.8.3.100': ('mdc2WithRSA', 'RSA-MDC2'),
'2.5.8.3.101': ('mdc2', 'MDC2'),
'2.5.29': ('id-ce', ),
'2.5.29.9': ('X509v3 Subject Directory Attributes', 'subjectDirectoryAttributes'),
'2.5.29.14': ('X509v3 Subject Key Identifier', 'subjectKeyIdentifier'),
'2.5.29.15': ('X509v3 Key Usage', 'keyUsage'),
'2.5.29.16': ('X509v3 Private Key Usage Period', 'privateKeyUsagePeriod'),
'2.5.29.17': ('X509v3 Subject Alternative Name', 'subjectAltName'),
'2.5.29.18': ('X509v3 Issuer Alternative Name', 'issuerAltName'),
'2.5.29.19': ('X509v3 Basic Constraints', 'basicConstraints'),
'2.5.29.20': ('X509v3 CRL Number', 'crlNumber'),
'2.5.29.21': ('X509v3 CRL Reason Code', 'CRLReason'),
'2.5.29.23': ('Hold Instruction Code', 'holdInstructionCode'),
'2.5.29.24': ('Invalidity Date', 'invalidityDate'),
'2.5.29.27': ('X509v3 Delta CRL Indicator', 'deltaCRL'),
'2.5.29.28': ('X509v3 Issuing Distribution Point', 'issuingDistributionPoint'),
'2.5.29.29': ('X509v3 Certificate Issuer', 'certificateIssuer'),
'2.5.29.30': ('X509v3 Name Constraints', 'nameConstraints'),
'2.5.29.31': ('X509v3 CRL Distribution Points', 'crlDistributionPoints'),
'2.5.29.32': ('X509v3 Certificate Policies', 'certificatePolicies'),
'2.5.29.32.0': ('X509v3 Any Policy', 'anyPolicy'),
'2.5.29.33': ('X509v3 Policy Mappings', 'policyMappings'),
'2.5.29.35': ('X509v3 Authority Key Identifier', 'authorityKeyIdentifier'),
'2.5.29.36': ('X509v3 Policy Constraints', 'policyConstraints'),
'2.5.29.37': ('X509v3 Extended Key Usage', 'extendedKeyUsage'),
'2.5.29.37.0': ('Any Extended Key Usage', 'anyExtendedKeyUsage'),
'2.5.29.46': ('X509v3 Freshest CRL', 'freshestCRL'),
'2.5.29.54': ('X509v3 Inhibit Any Policy', 'inhibitAnyPolicy'),
'2.5.29.55': ('X509v3 AC Targeting', 'targetInformation'),
'2.5.29.56': ('X509v3 No Revocation Available', 'noRevAvail'),
'2.16.840.1.101.3': ('csor', ),
'2.16.840.1.101.3.4': ('nistAlgorithms', ),
'2.16.840.1.101.3.4.1': ('aes', ),
'2.16.840.1.101.3.4.1.1': ('aes-128-ecb', 'AES-128-ECB'),
'2.16.840.1.101.3.4.1.2': ('aes-128-cbc', 'AES-128-CBC'),
'2.16.840.1.101.3.4.1.3': ('aes-128-ofb', 'AES-128-OFB'),
'2.16.840.1.101.3.4.1.4': ('aes-128-cfb', 'AES-128-CFB'),
'2.16.840.1.101.3.4.1.5': ('id-aes128-wrap', ),
'2.16.840.1.101.3.4.1.6': ('aes-128-gcm', 'id-aes128-GCM'),
'2.16.840.1.101.3.4.1.7': ('aes-128-ccm', 'id-aes128-CCM'),
'2.16.840.1.101.3.4.1.8': ('id-aes128-wrap-pad', ),
'2.16.840.1.101.3.4.1.21': ('aes-192-ecb', 'AES-192-ECB'),
'2.16.840.1.101.3.4.1.22': ('aes-192-cbc', 'AES-192-CBC'),
'2.16.840.1.101.3.4.1.23': ('aes-192-ofb', 'AES-192-OFB'),
'2.16.840.1.101.3.4.1.24': ('aes-192-cfb', 'AES-192-CFB'),
'2.16.840.1.101.3.4.1.25': ('id-aes192-wrap', ),
'2.16.840.1.101.3.4.1.26': ('aes-192-gcm', 'id-aes192-GCM'),
'2.16.840.1.101.3.4.1.27': ('aes-192-ccm', 'id-aes192-CCM'),
'2.16.840.1.101.3.4.1.28': ('id-aes192-wrap-pad', ),
'2.16.840.1.101.3.4.1.41': ('aes-256-ecb', 'AES-256-ECB'),
'2.16.840.1.101.3.4.1.42': ('aes-256-cbc', 'AES-256-CBC'),
'2.16.840.1.101.3.4.1.43': ('aes-256-ofb', 'AES-256-OFB'),
'2.16.840.1.101.3.4.1.44': ('aes-256-cfb', 'AES-256-CFB'),
'2.16.840.1.101.3.4.1.45': ('id-aes256-wrap', ),
'2.16.840.1.101.3.4.1.46': ('aes-256-gcm', 'id-aes256-GCM'),
'2.16.840.1.101.3.4.1.47': ('aes-256-ccm', 'id-aes256-CCM'),
'2.16.840.1.101.3.4.1.48': ('id-aes256-wrap-pad', ),
'2.16.840.1.101.3.4.2': ('nist_hashalgs', ),
'2.16.840.1.101.3.4.2.1': ('sha256', 'SHA256'),
'2.16.840.1.101.3.4.2.2': ('sha384', 'SHA384'),
'2.16.840.1.101.3.4.2.3': ('sha512', 'SHA512'),
'2.16.840.1.101.3.4.2.4': ('sha224', 'SHA224'),
'2.16.840.1.101.3.4.2.5': ('sha512-224', 'SHA512-224'),
'2.16.840.1.101.3.4.2.6': ('sha512-256', 'SHA512-256'),
'2.16.840.1.101.3.4.2.7': ('sha3-224', 'SHA3-224'),
'2.16.840.1.101.3.4.2.8': ('sha3-256', 'SHA3-256'),
'2.16.840.1.101.3.4.2.9': ('sha3-384', 'SHA3-384'),
'2.16.840.1.101.3.4.2.10': ('sha3-512', 'SHA3-512'),
'2.16.840.1.101.3.4.2.11': ('shake128', 'SHAKE128'),
'2.16.840.1.101.3.4.2.12': ('shake256', 'SHAKE256'),
'2.16.840.1.101.3.4.2.13': ('hmac-sha3-224', 'id-hmacWithSHA3-224'),
'2.16.840.1.101.3.4.2.14': ('hmac-sha3-256', 'id-hmacWithSHA3-256'),
'2.16.840.1.101.3.4.2.15': ('hmac-sha3-384', 'id-hmacWithSHA3-384'),
'2.16.840.1.101.3.4.2.16': ('hmac-sha3-512', 'id-hmacWithSHA3-512'),
'2.16.840.1.101.3.4.3': ('dsa_with_sha2', 'sigAlgs'),
'2.16.840.1.101.3.4.3.1': ('dsa_with_SHA224', ),
'2.16.840.1.101.3.4.3.2': ('dsa_with_SHA256', ),
'2.16.840.1.101.3.4.3.3': ('dsa_with_SHA384', 'id-dsa-with-sha384'),
'2.16.840.1.101.3.4.3.4': ('dsa_with_SHA512', 'id-dsa-with-sha512'),
'2.16.840.1.101.3.4.3.5': ('dsa_with_SHA3-224', 'id-dsa-with-sha3-224'),
'2.16.840.1.101.3.4.3.6': ('dsa_with_SHA3-256', 'id-dsa-with-sha3-256'),
'2.16.840.1.101.3.4.3.7': ('dsa_with_SHA3-384', 'id-dsa-with-sha3-384'),
'2.16.840.1.101.3.4.3.8': ('dsa_with_SHA3-512', 'id-dsa-with-sha3-512'),
'2.16.840.1.101.3.4.3.9': ('ecdsa_with_SHA3-224', 'id-ecdsa-with-sha3-224'),
'2.16.840.1.101.3.4.3.10': ('ecdsa_with_SHA3-256', 'id-ecdsa-with-sha3-256'),
'2.16.840.1.101.3.4.3.11': ('ecdsa_with_SHA3-384', 'id-ecdsa-with-sha3-384'),
'2.16.840.1.101.3.4.3.12': ('ecdsa_with_SHA3-512', 'id-ecdsa-with-sha3-512'),
'2.16.840.1.101.3.4.3.13': ('RSA-SHA3-224', 'id-rsassa-pkcs1-v1_5-with-sha3-224'),
'2.16.840.1.101.3.4.3.14': ('RSA-SHA3-256', 'id-rsassa-pkcs1-v1_5-with-sha3-256'),
'2.16.840.1.101.3.4.3.15': ('RSA-SHA3-384', 'id-rsassa-pkcs1-v1_5-with-sha3-384'),
'2.16.840.1.101.3.4.3.16': ('RSA-SHA3-512', 'id-rsassa-pkcs1-v1_5-with-sha3-512'),
'2.16.840.1.113730': ('Netscape Communications Corp.', 'Netscape'),
'2.16.840.1.113730.1': ('Netscape Certificate Extension', 'nsCertExt'),
'2.16.840.1.113730.1.1': ('Netscape Cert Type', 'nsCertType'),
'2.16.840.1.113730.1.2': ('Netscape Base Url', 'nsBaseUrl'),
'2.16.840.1.113730.1.3': ('Netscape Revocation Url', 'nsRevocationUrl'),
'2.16.840.1.113730.1.4': ('Netscape CA Revocation Url', 'nsCaRevocationUrl'),
'2.16.840.1.113730.1.7': ('Netscape Renewal Url', 'nsRenewalUrl'),
'2.16.840.1.113730.1.8': ('Netscape CA Policy Url', 'nsCaPolicyUrl'),
'2.16.840.1.113730.1.12': ('Netscape SSL Server Name', 'nsSslServerName'),
'2.16.840.1.113730.1.13': ('Netscape Comment', 'nsComment'),
'2.16.840.1.113730.2': ('Netscape Data Type', 'nsDataType'),
'2.16.840.1.113730.2.5': ('Netscape Certificate Sequence', 'nsCertSequence'),
'2.16.840.1.113730.4.1': ('Netscape Server Gated Crypto', 'nsSGC'),
'2.23': ('International Organizations', 'international-organizations'),
'2.23.42': ('Secure Electronic Transactions', 'id-set'),
'2.23.42.0': ('content types', 'set-ctype'),
'2.23.42.0.0': ('setct-PANData', ),
'2.23.42.0.1': ('setct-PANToken', ),
'2.23.42.0.2': ('setct-PANOnly', ),
'2.23.42.0.3': ('setct-OIData', ),
'2.23.42.0.4': ('setct-PI', ),
'2.23.42.0.5': ('setct-PIData', ),
'2.23.42.0.6': ('setct-PIDataUnsigned', ),
'2.23.42.0.7': ('setct-HODInput', ),
'2.23.42.0.8': ('setct-AuthResBaggage', ),
'2.23.42.0.9': ('setct-AuthRevReqBaggage', ),
'2.23.42.0.10': ('setct-AuthRevResBaggage', ),
'2.23.42.0.11': ('setct-CapTokenSeq', ),
'2.23.42.0.12': ('setct-PInitResData', ),
'2.23.42.0.13': ('setct-PI-TBS', ),
'2.23.42.0.14': ('setct-PResData', ),
'2.23.42.0.16': ('setct-AuthReqTBS', ),
'2.23.42.0.17': ('setct-AuthResTBS', ),
'2.23.42.0.18': ('setct-AuthResTBSX', ),
'2.23.42.0.19': ('setct-AuthTokenTBS', ),
'2.23.42.0.20': ('setct-CapTokenData', ),
'2.23.42.0.21': ('setct-CapTokenTBS', ),
'2.23.42.0.22': ('setct-AcqCardCodeMsg', ),
'2.23.42.0.23': ('setct-AuthRevReqTBS', ),
'2.23.42.0.24': ('setct-AuthRevResData', ),
'2.23.42.0.25': ('setct-AuthRevResTBS', ),
'2.23.42.0.26': ('setct-CapReqTBS', ),
'2.23.42.0.27': ('setct-CapReqTBSX', ),
'2.23.42.0.28': ('setct-CapResData', ),
'2.23.42.0.29': ('setct-CapRevReqTBS', ),
'2.23.42.0.30': ('setct-CapRevReqTBSX', ),
'2.23.42.0.31': ('setct-CapRevResData', ),
'2.23.42.0.32': ('setct-CredReqTBS', ),
'2.23.42.0.33': ('setct-CredReqTBSX', ),
'2.23.42.0.34': ('setct-CredResData', ),
'2.23.42.0.35': ('setct-CredRevReqTBS', ),
'2.23.42.0.36': ('setct-CredRevReqTBSX', ),
'2.23.42.0.37': ('setct-CredRevResData', ),
'2.23.42.0.38': ('setct-PCertReqData', ),
'2.23.42.0.39': ('setct-PCertResTBS', ),
'2.23.42.0.40': ('setct-BatchAdminReqData', ),
'2.23.42.0.41': ('setct-BatchAdminResData', ),
'2.23.42.0.42': ('setct-CardCInitResTBS', ),
'2.23.42.0.43': ('setct-MeAqCInitResTBS', ),
'2.23.42.0.44': ('setct-RegFormResTBS', ),
'2.23.42.0.45': ('setct-CertReqData', ),
'2.23.42.0.46': ('setct-CertReqTBS', ),
'2.23.42.0.47': ('setct-CertResData', ),
'2.23.42.0.48': ('setct-CertInqReqTBS', ),
'2.23.42.0.49': ('setct-ErrorTBS', ),
'2.23.42.0.50': ('setct-PIDualSignedTBE', ),
'2.23.42.0.51': ('setct-PIUnsignedTBE', ),
'2.23.42.0.52': ('setct-AuthReqTBE', ),
'2.23.42.0.53': ('setct-AuthResTBE', ),
'2.23.42.0.54': ('setct-AuthResTBEX', ),
'2.23.42.0.55': ('setct-AuthTokenTBE', ),
'2.23.42.0.56': ('setct-CapTokenTBE', ),
'2.23.42.0.57': ('setct-CapTokenTBEX', ),
'2.23.42.0.58': ('setct-AcqCardCodeMsgTBE', ),
'2.23.42.0.59': ('setct-AuthRevReqTBE', ),
'2.23.42.0.60': ('setct-AuthRevResTBE', ),
'2.23.42.0.61': ('setct-AuthRevResTBEB', ),
'2.23.42.0.62': ('setct-CapReqTBE', ),
'2.23.42.0.63': ('setct-CapReqTBEX', ),
'2.23.42.0.64': ('setct-CapResTBE', ),
'2.23.42.0.65': ('setct-CapRevReqTBE', ),
'2.23.42.0.66': ('setct-CapRevReqTBEX', ),
'2.23.42.0.67': ('setct-CapRevResTBE', ),
'2.23.42.0.68': ('setct-CredReqTBE', ),
'2.23.42.0.69': ('setct-CredReqTBEX', ),
'2.23.42.0.70': ('setct-CredResTBE', ),
'2.23.42.0.71': ('setct-CredRevReqTBE', ),
'2.23.42.0.72': ('setct-CredRevReqTBEX', ),
'2.23.42.0.73': ('setct-CredRevResTBE', ),
'2.23.42.0.74': ('setct-BatchAdminReqTBE', ),
'2.23.42.0.75': ('setct-BatchAdminResTBE', ),
'2.23.42.0.76': ('setct-RegFormReqTBE', ),
'2.23.42.0.77': ('setct-CertReqTBE', ),
'2.23.42.0.78': ('setct-CertReqTBEX', ),
'2.23.42.0.79': ('setct-CertResTBE', ),
'2.23.42.0.80': ('setct-CRLNotificationTBS', ),
'2.23.42.0.81': ('setct-CRLNotificationResTBS', ),
'2.23.42.0.82': ('setct-BCIDistributionTBS', ),
'2.23.42.1': ('message extensions', 'set-msgExt'),
'2.23.42.1.1': ('generic cryptogram', 'setext-genCrypt'),
'2.23.42.1.3': ('merchant initiated auth', 'setext-miAuth'),
'2.23.42.1.4': ('setext-pinSecure', ),
'2.23.42.1.5': ('setext-pinAny', ),
'2.23.42.1.7': ('setext-track2', ),
'2.23.42.1.8': ('additional verification', 'setext-cv'),
'2.23.42.3': ('set-attr', ),
'2.23.42.3.0': ('setAttr-Cert', ),
'2.23.42.3.0.0': ('set-rootKeyThumb', ),
'2.23.42.3.0.1': ('set-addPolicy', ),
'2.23.42.3.1': ('payment gateway capabilities', 'setAttr-PGWYcap'),
'2.23.42.3.2': ('setAttr-TokenType', ),
'2.23.42.3.2.1': ('setAttr-Token-EMV', ),
'2.23.42.3.2.2': ('setAttr-Token-B0Prime', ),
'2.23.42.3.3': ('issuer capabilities', 'setAttr-IssCap'),
'2.23.42.3.3.3': ('setAttr-IssCap-CVM', ),
'2.23.42.3.3.3.1': ('generate cryptogram', 'setAttr-GenCryptgrm'),
'2.23.42.3.3.4': ('setAttr-IssCap-T2', ),
'2.23.42.3.3.4.1': ('encrypted track 2', 'setAttr-T2Enc'),
'2.23.42.3.3.4.2': ('cleartext track 2', 'setAttr-T2cleartxt'),
'2.23.42.3.3.5': ('setAttr-IssCap-Sig', ),
'2.23.42.3.3.5.1': ('ICC or token signature', 'setAttr-TokICCsig'),
'2.23.42.3.3.5.2': ('secure device signature', 'setAttr-SecDevSig'),
'2.23.42.5': ('set-policy', ),
'2.23.42.5.0': ('set-policy-root', ),
'2.23.42.7': ('certificate extensions', 'set-certExt'),
'2.23.42.7.0': ('setCext-hashedRoot', ),
'2.23.42.7.1': ('setCext-certType', ),
'2.23.42.7.2': ('setCext-merchData', ),
'2.23.42.7.3': ('setCext-cCertRequired', ),
'2.23.42.7.4': ('setCext-tunneling', ),
'2.23.42.7.5': ('setCext-setExt', ),
'2.23.42.7.6': ('setCext-setQualf', ),
'2.23.42.7.7': ('setCext-PGWYcapabilities', ),
'2.23.42.7.8': ('setCext-TokenIdentifier', ),
'2.23.42.7.9': ('setCext-Track2Data', ),
'2.23.42.7.10': ('setCext-TokenType', ),
'2.23.42.7.11': ('setCext-IssuerCapabilities', ),
'2.23.42.8': ('set-brand', ),
'2.23.42.8.1': ('set-brand-IATA-ATA', ),
'2.23.42.8.4': ('set-brand-Visa', ),
'2.23.42.8.5': ('set-brand-MasterCard', ),
'2.23.42.8.30': ('set-brand-Diners', ),
'2.23.42.8.34': ('set-brand-AmericanExpress', ),
'2.23.42.8.35': ('set-brand-JCB', ),
'2.23.42.8.6011': ('set-brand-Novus', ),
'2.23.43': ('wap', ),
'2.23.43.1': ('wap-wsg', ),
'2.23.43.1.4': ('wap-wsg-idm-ecid', ),
'2.23.43.1.4.1': ('wap-wsg-idm-ecid-wtls1', ),
'2.23.43.1.4.3': ('wap-wsg-idm-ecid-wtls3', ),
'2.23.43.1.4.4': ('wap-wsg-idm-ecid-wtls4', ),
'2.23.43.1.4.5': ('wap-wsg-idm-ecid-wtls5', ),
'2.23.43.1.4.6': ('wap-wsg-idm-ecid-wtls6', ),
'2.23.43.1.4.7': ('wap-wsg-idm-ecid-wtls7', ),
'2.23.43.1.4.8': ('wap-wsg-idm-ecid-wtls8', ),
'2.23.43.1.4.9': ('wap-wsg-idm-ecid-wtls9', ),
'2.23.43.1.4.10': ('wap-wsg-idm-ecid-wtls10', ),
'2.23.43.1.4.11': ('wap-wsg-idm-ecid-wtls11', ),
'2.23.43.1.4.12': ('wap-wsg-idm-ecid-wtls12', ),
}
# #####################################################################################
# #####################################################################################
_OID_LOOKUP = dict()
_NORMALIZE_NAMES = dict()
_NORMALIZE_NAMES_SHORT = dict()
for dotted, names in _OID_MAP.items():
for name in names:
if name in _NORMALIZE_NAMES and _OID_LOOKUP[name] != dotted:
raise AssertionError(
'Name collision during setup: "{0}" for OIDs {1} and {2}'
.format(name, dotted, _OID_LOOKUP[name])
)
_NORMALIZE_NAMES[name] = names[0]
_NORMALIZE_NAMES_SHORT[name] = names[-1]
_OID_LOOKUP[name] = dotted
for alias, original in [('userID', 'userId')]:
if alias in _NORMALIZE_NAMES:
raise AssertionError(
'Name collision during adding aliases: "{0}" (alias for "{1}") is already mapped to OID {2}'
.format(alias, original, _OID_LOOKUP[alias])
)
_NORMALIZE_NAMES[alias] = original
_NORMALIZE_NAMES_SHORT[alias] = _NORMALIZE_NAMES_SHORT[original]
_OID_LOOKUP[alias] = _OID_LOOKUP[original]
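# A quick illustration of the lookup tables built above (values taken from
# _OID_MAP): both 'CN' and 'commonName' resolve to '2.5.4.3' in _OID_LOOKUP,
# while _NORMALIZE_NAMES maps either spelling to 'commonName' (the long form)
# and _NORMALIZE_NAMES_SHORT maps either spelling to 'CN' (the short form).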
def pyopenssl_normalize_name(name, short=False):
nid = OpenSSL._util.lib.OBJ_txt2nid(to_bytes(name))
if nid != 0:
b_name = OpenSSL._util.lib.OBJ_nid2ln(nid)
name = to_text(OpenSSL._util.ffi.string(b_name))
if short:
return _NORMALIZE_NAMES_SHORT.get(name, name)
else:
return _NORMALIZE_NAMES.get(name, name)
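# Illustrative behaviour (a sketch, assuming the linked OpenSSL knows the
# commonName OID):
#   pyopenssl_normalize_name('CN')              # -> 'commonName'
#   pyopenssl_normalize_name('CN', short=True)  # -> 'CN'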
# #####################################################################################
# #####################################################################################
# # This excerpt is dual licensed under the terms of the Apache License, Version
# # 2.0, and the BSD License. See the LICENSE file at
# # https://github.com/pyca/cryptography/blob/master/LICENSE for complete details.
# #
# # Adapted from cryptography's hazmat/backends/openssl/decode_asn1.py
# #
# # Copyright (c) 2015, 2016 Paul Kehrer (@reaperhulk)
# # Copyright (c) 2017 Fraser Tweedale (@frasertweedale)
# #
# # Relevant commits from cryptography project (https://github.com/pyca/cryptography):
# # pyca/cryptography@719d536dd691e84e208534798f2eb4f82aaa2e07
# # pyca/cryptography@5ab6d6a5c05572bd1c75f05baf264a2d0001894a
# # pyca/cryptography@2e776e20eb60378e0af9b7439000d0e80da7c7e3
# # pyca/cryptography@fb309ed24647d1be9e319b61b1f2aa8ebb87b90b
# # pyca/cryptography@2917e460993c475c72d7146c50dc3bbc2414280d
# # pyca/cryptography@3057f91ea9a05fb593825006d87a391286a4d828
# # pyca/cryptography@d607dd7e5bc5c08854ec0c9baff70ba4a35be36f
def _obj2txt(openssl_lib, openssl_ffi, obj):
# Set to 80 on the recommendation of
# https://www.openssl.org/docs/crypto/OBJ_nid2ln.html#return_values
#
# But OIDs longer than this occur in real life (e.g. Active
# Directory makes some very long OIDs). So we need to detect
# and properly handle the case where the default buffer is not
# big enough.
#
buf_len = 80
buf = openssl_ffi.new("char[]", buf_len)
    # 'res' is the number of bytes that *would* be written if the
    # buffer were large enough. If 'res' > buf_len - 1, we need to
    # allocate a big-enough buffer and go again.
res = openssl_lib.OBJ_obj2txt(buf, buf_len, obj, 1)
if res > buf_len - 1: # account for terminating null byte
buf_len = res + 1
buf = openssl_ffi.new("char[]", buf_len)
res = openssl_lib.OBJ_obj2txt(buf, buf_len, obj, 1)
return openssl_ffi.buffer(buf, res)[:].decode()
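# Illustration: for the commonName ASN.1 object, _obj2txt() returns the dotted
# string '2.5.4.3'; passing 1 as the last argument of OBJ_obj2txt always
# requests the numerical form, even for OIDs the linked OpenSSL knows by name.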
# #####################################################################################
# #####################################################################################
def cryptography_get_extensions_from_cert(cert):
# Since cryptography won't give us the DER value for an extension
# (that is only stored for unrecognized extensions), we have to re-do
    # the extension parsing ourselves.
result = dict()
backend = cert._backend
x509_obj = cert._x509
for i in range(backend._lib.X509_get_ext_count(x509_obj)):
ext = backend._lib.X509_get_ext(x509_obj, i)
if ext == backend._ffi.NULL:
continue
crit = backend._lib.X509_EXTENSION_get_critical(ext)
data = backend._lib.X509_EXTENSION_get_data(ext)
backend.openssl_assert(data != backend._ffi.NULL)
der = backend._ffi.buffer(data.data, data.length)[:]
entry = dict(
critical=(crit == 1),
value=base64.b64encode(der),
)
oid = _obj2txt(backend._lib, backend._ffi, backend._lib.X509_EXTENSION_get_object(ext))
result[oid] = entry
return result
def cryptography_get_extensions_from_csr(csr):
# Since cryptography won't give us the DER value for an extension
# (that is only stored for unrecognized extensions), we have to re-do
    # the extension parsing ourselves.
result = dict()
backend = csr._backend
extensions = backend._lib.X509_REQ_get_extensions(csr._x509_req)
extensions = backend._ffi.gc(
extensions,
lambda ext: backend._lib.sk_X509_EXTENSION_pop_free(
ext,
backend._ffi.addressof(backend._lib._original_lib, "X509_EXTENSION_free")
)
)
for i in range(backend._lib.sk_X509_EXTENSION_num(extensions)):
ext = backend._lib.sk_X509_EXTENSION_value(extensions, i)
if ext == backend._ffi.NULL:
continue
crit = backend._lib.X509_EXTENSION_get_critical(ext)
data = backend._lib.X509_EXTENSION_get_data(ext)
backend.openssl_assert(data != backend._ffi.NULL)
der = backend._ffi.buffer(data.data, data.length)[:]
entry = dict(
critical=(crit == 1),
value=base64.b64encode(der),
)
oid = _obj2txt(backend._lib, backend._ffi, backend._lib.X509_EXTENSION_get_object(ext))
result[oid] = entry
return result
def pyopenssl_get_extensions_from_cert(cert):
# While pyOpenSSL allows us to get an extension's DER value, it won't
# give us the dotted string for an OID. So we have to do some magic to
# get hold of it.
result = dict()
ext_count = cert.get_extension_count()
for i in range(0, ext_count):
ext = cert.get_extension(i)
entry = dict(
critical=bool(ext.get_critical()),
value=base64.b64encode(ext.get_data()),
)
oid = _obj2txt(
OpenSSL._util.lib,
OpenSSL._util.ffi,
OpenSSL._util.lib.X509_EXTENSION_get_object(ext._extension)
)
# This could also be done a bit simpler:
#
# oid = _obj2txt(OpenSSL._util.lib, OpenSSL._util.ffi, OpenSSL._util.lib.OBJ_nid2obj(ext._nid))
#
# Unfortunately this gives the wrong result in case the linked OpenSSL
# doesn't know the OID. That's why we have to get the OID dotted string
# similarly to how cryptography does it.
result[oid] = entry
return result
def pyopenssl_get_extensions_from_csr(csr):
# While pyOpenSSL allows us to get an extension's DER value, it won't
# give us the dotted string for an OID. So we have to do some magic to
# get hold of it.
result = dict()
for ext in csr.get_extensions():
entry = dict(
critical=bool(ext.get_critical()),
value=base64.b64encode(ext.get_data()),
)
oid = _obj2txt(
OpenSSL._util.lib,
OpenSSL._util.ffi,
OpenSSL._util.lib.X509_EXTENSION_get_object(ext._extension)
)
# This could also be done a bit simpler:
#
# oid = _obj2txt(OpenSSL._util.lib, OpenSSL._util.ffi, OpenSSL._util.lib.OBJ_nid2obj(ext._nid))
#
# Unfortunately this gives the wrong result in case the linked OpenSSL
# doesn't know the OID. That's why we have to get the OID dotted string
# similarly to how cryptography does it.
result[oid] = entry
return result
def cryptography_name_to_oid(name):
dotted = _OID_LOOKUP.get(name)
if dotted is None:
raise OpenSSLObjectError('Cannot find OID for "{0}"'.format(name))
return x509.oid.ObjectIdentifier(dotted)
def cryptography_oid_to_name(oid, short=False):
dotted_string = oid.dotted_string
names = _OID_MAP.get(dotted_string)
name = names[0] if names else oid._name
if short:
return _NORMALIZE_NAMES_SHORT.get(name, name)
else:
return _NORMALIZE_NAMES.get(name, name)
def cryptography_get_name(name):
'''
Given a name string, returns a cryptography x509.Name object.
Raises an OpenSSLObjectError if the name is unknown or cannot be parsed.
'''
try:
if name.startswith('DNS:'):
return x509.DNSName(to_text(name[4:]))
if name.startswith('IP:'):
return x509.IPAddress(ipaddress.ip_address(to_text(name[3:])))
if name.startswith('email:'):
return x509.RFC822Name(to_text(name[6:]))
if name.startswith('URI:'):
return x509.UniformResourceIdentifier(to_text(name[4:]))
except Exception as e:
raise OpenSSLObjectError('Cannot parse Subject Alternative Name "{0}": {1}'.format(name, e))
if ':' not in name:
raise OpenSSLObjectError('Cannot parse Subject Alternative Name "{0}" (forgot "DNS:" prefix?)'.format(name))
raise OpenSSLObjectError('Cannot parse Subject Alternative Name "{0}" (potentially unsupported by cryptography backend)'.format(name))
def _get_hex(bytesstr):
if bytesstr is None:
return bytesstr
data = binascii.hexlify(bytesstr)
data = to_text(b':'.join(data[i:i + 2] for i in range(0, len(data), 2)))
return data
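# Example: _get_hex(b'\x01\xab') returns u'01:ab' (colon-separated byte pairs).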
def cryptography_decode_name(name):
'''
Given a cryptography x509.Name object, returns a string.
Raises an OpenSSLObjectError if the name is not supported.
'''
if isinstance(name, x509.DNSName):
return 'DNS:{0}'.format(name.value)
if isinstance(name, x509.IPAddress):
return 'IP:{0}'.format(name.value.compressed)
if isinstance(name, x509.RFC822Name):
return 'email:{0}'.format(name.value)
if isinstance(name, x509.UniformResourceIdentifier):
return 'URI:{0}'.format(name.value)
if isinstance(name, x509.DirectoryName):
# FIXME: test
return 'DirName:' + ''.join(['/{0}:{1}'.format(attribute.oid._name, attribute.value) for attribute in name.value])
if isinstance(name, x509.RegisteredID):
# FIXME: test
return 'RegisteredID:{0}'.format(name.value)
if isinstance(name, x509.OtherName):
# FIXME: test
return '{0}:{1}'.format(name.type_id.dotted_string, _get_hex(name.value))
raise OpenSSLObjectError('Cannot decode name "{0}"'.format(name))
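# Example: cryptography_decode_name(x509.DNSName(u'www.ansible.com')) returns
# 'DNS:www.ansible.com', i.e. the inverse of cryptography_get_name() above.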
def _cryptography_get_keyusage(usage):
'''
Given a key usage identifier string, returns the parameter name used by cryptography's x509.KeyUsage().
Raises an OpenSSLObjectError if the identifier is unknown.
'''
if usage in ('Digital Signature', 'digitalSignature'):
return 'digital_signature'
if usage in ('Non Repudiation', 'nonRepudiation'):
return 'content_commitment'
if usage in ('Key Encipherment', 'keyEncipherment'):
return 'key_encipherment'
if usage in ('Data Encipherment', 'dataEncipherment'):
return 'data_encipherment'
if usage in ('Key Agreement', 'keyAgreement'):
return 'key_agreement'
if usage in ('Certificate Sign', 'keyCertSign'):
return 'key_cert_sign'
if usage in ('CRL Sign', 'cRLSign'):
return 'crl_sign'
if usage in ('Encipher Only', 'encipherOnly'):
return 'encipher_only'
if usage in ('Decipher Only', 'decipherOnly'):
return 'decipher_only'
raise OpenSSLObjectError('Unknown key usage "{0}"'.format(usage))
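# Example: _cryptography_get_keyusage('keyCertSign') and
# _cryptography_get_keyusage('Certificate Sign') both return 'key_cert_sign'.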
def cryptography_parse_key_usage_params(usages):
'''
Given a list of key usage identifier strings, returns the parameters for cryptography's x509.KeyUsage().
Raises an OpenSSLObjectError if an identifier is unknown.
'''
params = dict(
digital_signature=False,
content_commitment=False,
key_encipherment=False,
data_encipherment=False,
key_agreement=False,
key_cert_sign=False,
crl_sign=False,
encipher_only=False,
decipher_only=False,
)
for usage in usages:
params[_cryptography_get_keyusage(usage)] = True
return params
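# Example (a sketch): cryptography_parse_key_usage_params(['digitalSignature',
# 'keyEncipherment']) sets digital_signature and key_encipherment to True and
# leaves all other flags False; the result can be passed as x509.KeyUsage(**params).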
def cryptography_get_basic_constraints(constraints):
'''
Given a list of constraints, returns a tuple (ca, path_length).
Raises an OpenSSLObjectError if a constraint is unknown or cannot be parsed.
'''
ca = False
path_length = None
if constraints:
for constraint in constraints:
if constraint.startswith('CA:'):
if constraint == 'CA:TRUE':
ca = True
elif constraint == 'CA:FALSE':
ca = False
else:
raise OpenSSLObjectError('Unknown basic constraint value "{0}" for CA'.format(constraint[3:]))
elif constraint.startswith('pathlen:'):
v = constraint[len('pathlen:'):]
try:
path_length = int(v)
except Exception as e:
raise OpenSSLObjectError('Cannot parse path length constraint "{0}" ({1})'.format(v, e))
else:
raise OpenSSLObjectError('Unknown basic constraint "{0}"'.format(constraint))
return ca, path_length
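# Example: cryptography_get_basic_constraints(['CA:TRUE', 'pathlen:1'])
# returns (True, 1); an empty or None constraints list returns (False, None).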
def binary_exp_mod(f, e, m):
'''Computes f^e mod m in O(log e) multiplications modulo m.'''
# Compute len_e = floor(log_2(e))
len_e = -1
x = e
while x > 0:
x >>= 1
len_e += 1
# Compute f**e mod m
result = 1
for k in range(len_e, -1, -1):
result = (result * result) % m
if ((e >> k) & 1) != 0:
result = (result * f) % m
return result
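# Sanity check (a sketch): binary_exp_mod(5, 117, 19) == pow(5, 117, 19); the
# loop above is plain square-and-multiply over the bits of the exponent.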
def simple_gcd(a, b):
'''Compute GCD of its two inputs.'''
while b != 0:
a, b = b, a % b
return a
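# Example: simple_gcd(12, 18) returns 6 (classic Euclidean algorithm).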
def quick_is_not_prime(n):
'''Does some quick checks to see if we can poke a hole into the primality of n.
A result of `False` does **not** mean that the number is prime; it just means
that we couldn't detect quickly whether it is not prime.
'''
if n <= 2:
return True
# The constant in the next line is the product of all primes < 200
if simple_gcd(n, 7799922041683461553249199106329813876687996789903550945093032474868511536164700810) > 1:
return True
# TODO: maybe do some iterations of Miller-Rabin to increase confidence
# (https://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test)
return False
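# Illustration (a sketch): quick_is_not_prime(15) is True, since 15 shares the
# factors 3 and 5 with the product above; quick_is_not_prime(2**89 - 1) is False,
# as this Mersenne prime has no factor below 200.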
python_version = (sys.version_info[0], sys.version_info[1])
if python_version >= (2, 7) or python_version >= (3, 1):
# Ansible still supports Python 2.6 on remote nodes
def count_bits(no):
no = abs(no)
if no == 0:
return 0
return no.bit_length()
else:
# Slow, but works
def count_bits(no):
no = abs(no)
count = 0
while no > 0:
no >>= 1
count += 1
return count
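# Example: count_bits(5) returns 3 on either code path, since 0b101 needs three bits.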
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,253 |
Allow to select openssl_privatekey format
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Allow specifying a new "format" parameter in "openssl_privatekey". Today the output format is decided by a very simple heuristic, so further commands are needed to obtain the desired format.
I need, for example, to generate an RSA private key but in PKCS8 format. The current heuristic uses OpenSSH format for RSA keys and only uses PKCS8 for Ed25519 and similar. This forces me to use a "command" task with "openssl" to convert the generated key, which leads to idempotency problems and unnecessary complexity.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
openssl_privatekey
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
The current heuristic should be kept unless overwritten by user.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
format: pkcs8
```
The allowed values are `pkcs1`, `raw`, and `pkcs8`.
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/59253
|
https://github.com/ansible/ansible/pull/60388
|
e3c7e356568c3254a14440d7b832cc2cf32a2b14
|
d00d0c81b379f4232421ec01a104b3ce1c6e2820
| 2019-07-18T15:29:13Z |
python
| 2019-10-17T08:40:13Z |
lib/ansible/modules/crypto/openssl_privatekey.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2016, Yanis Guenane <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: openssl_privatekey
version_added: "2.3"
short_description: Generate OpenSSL private keys
description:
- This module allows one to (re)generate OpenSSL private keys.
- One can generate L(RSA,https://en.wikipedia.org/wiki/RSA_(cryptosystem)),
L(DSA,https://en.wikipedia.org/wiki/Digital_Signature_Algorithm),
L(ECC,https://en.wikipedia.org/wiki/Elliptic-curve_cryptography) or
L(EdDSA,https://en.wikipedia.org/wiki/EdDSA) private keys.
- Keys are generated in PEM format.
- "Please note that the module regenerates private keys if they don't match
the module's options. In particular, if you provide another passphrase
(or specify none), change the keysize, etc., the private key will be
regenerated. If you are concerned that this could **overwrite your private key**,
consider using the I(backup) option."
- The module can use the cryptography Python library, or the pyOpenSSL Python
library. By default, it tries to detect which one is available. This can be
overridden with the I(select_crypto_backend) option. Please note that the
    PyOpenSSL backend was deprecated in Ansible 2.9 and will be removed in Ansible 2.13.
requirements:
- Either cryptography >= 1.2.3 (older versions might work as well)
- Or pyOpenSSL
author:
- Yanis Guenane (@Spredzy)
- Felix Fontein (@felixfontein)
options:
state:
description:
- Whether the private key should exist or not, taking action if the state is different from what is stated.
type: str
default: present
choices: [ absent, present ]
size:
description:
- Size (in bits) of the TLS/SSL key to generate.
type: int
default: 4096
type:
description:
- The algorithm used to generate the TLS/SSL private key.
- Note that C(ECC), C(X25519), C(X448), C(Ed25519) and C(Ed448) require the C(cryptography) backend.
C(X25519) needs cryptography 2.5 or newer, while C(X448), C(Ed25519) and C(Ed448) require
cryptography 2.6 or newer. For C(ECC), the minimal cryptography version required depends on the
I(curve) option.
type: str
default: RSA
choices: [ DSA, ECC, Ed25519, Ed448, RSA, X25519, X448 ]
curve:
description:
- Note that not all curves are supported by all versions of C(cryptography).
- For maximal interoperability, C(secp384r1) or C(secp256r1) should be used.
- We use the curve names as defined in the
L(IANA registry for TLS,https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-8).
type: str
choices:
- secp384r1
- secp521r1
- secp224r1
- secp192r1
- secp256r1
- secp256k1
- brainpoolP256r1
- brainpoolP384r1
- brainpoolP512r1
- sect571k1
- sect409k1
- sect283k1
- sect233k1
- sect163k1
- sect571r1
- sect409r1
- sect283r1
- sect233r1
- sect163r2
version_added: "2.8"
force:
description:
- Should the key be regenerated even if it already exists.
type: bool
default: no
path:
description:
- Name of the file in which the generated TLS/SSL private key will be written. It will have 0600 mode.
type: path
required: true
passphrase:
description:
- The passphrase for the private key.
type: str
version_added: "2.4"
cipher:
description:
- The cipher to encrypt the private key. (Valid values can be found by
running `openssl list -cipher-algorithms` or `openssl list-cipher-algorithms`,
depending on your OpenSSL version.)
- When using the C(cryptography) backend, use C(auto).
type: str
version_added: "2.4"
select_crypto_backend:
description:
- Determines which crypto backend to use.
- The default choice is C(auto), which tries to use C(cryptography) if available, and falls back to C(pyopenssl).
- If set to C(pyopenssl), will try to use the L(pyOpenSSL,https://pypi.org/project/pyOpenSSL/) library.
- If set to C(cryptography), will try to use the L(cryptography,https://cryptography.io/) library.
- Please note that the C(pyopenssl) backend has been deprecated in Ansible 2.9, and will be removed in Ansible 2.13.
From that point on, only the C(cryptography) backend will be available.
type: str
default: auto
choices: [ auto, cryptography, pyopenssl ]
version_added: "2.8"
backup:
description:
- Create a backup file including a timestamp so you can get
the original private key back if you overwrote it with a new one by accident.
type: bool
default: no
version_added: "2.8"
extends_documentation_fragment:
- files
seealso:
- module: openssl_certificate
- module: openssl_csr
- module: openssl_dhparam
- module: openssl_pkcs12
- module: openssl_publickey
'''
EXAMPLES = r'''
- name: Generate an OpenSSL private key with the default values (4096 bits, RSA)
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
- name: Generate an OpenSSL private key with the default values (4096 bits, RSA) and a passphrase
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
passphrase: ansible
cipher: aes256
- name: Generate an OpenSSL private key with a different size (2048 bits)
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
size: 2048
- name: Force regenerate an OpenSSL private key if it already exists
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
force: yes
- name: Generate an OpenSSL private key with a different algorithm (DSA)
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
type: DSA
'''
RETURN = r'''
size:
description: Size (in bits) of the TLS/SSL private key.
returned: changed or success
type: int
sample: 4096
type:
description: Algorithm used to generate the TLS/SSL private key.
returned: changed or success
type: str
sample: RSA
curve:
description: Elliptic curve used to generate the TLS/SSL private key.
returned: changed or success, and I(type) is C(ECC)
type: str
sample: secp256r1
filename:
description: Path to the generated TLS/SSL private key file.
returned: changed or success
type: str
sample: /etc/ssl/private/ansible.com.pem
fingerprint:
description:
- The fingerprint of the public key. Fingerprint will be generated for each C(hashlib.algorithms) available.
- The PyOpenSSL backend requires PyOpenSSL >= 16.0 for meaningful output.
returned: changed or success
type: dict
sample:
md5: "84:75:71:72:8d:04:b5:6c:4d:37:6d:66:83:f5:4c:29"
sha1: "51:cc:7c:68:5d:eb:41:43:88:7e:1a:ae:c7:f8:24:72:ee:71:f6:10"
sha224: "b1:19:a6:6c:14:ac:33:1d:ed:18:50:d3:06:5c:b2:32:91:f1:f1:52:8c:cb:d5:75:e9:f5:9b:46"
sha256: "41:ab:c7:cb:d5:5f:30:60:46:99:ac:d4:00:70:cf:a1:76:4f:24:5d:10:24:57:5d:51:6e:09:97:df:2f:de:c7"
sha384: "85:39:50:4e:de:d9:19:33:40:70:ae:10:ab:59:24:19:51:c3:a2:e4:0b:1c:b1:6e:dd:b3:0c:d9:9e:6a:46:af:da:18:f8:ef:ae:2e:c0:9a:75:2c:9b:b3:0f:3a:5f:3d"
sha512: "fd:ed:5e:39:48:5f:9f:fe:7f:25:06:3f:79:08:cd:ee:a5:e7:b3:3d:13:82:87:1f:84:e1:f5:c7:28:77:53:94:86:56:38:69:f0:d9:35:22:01:1e:a6:60:...:0f:9b"
backup_file:
description: Name of backup file created.
returned: changed and if I(backup) is C(yes)
type: str
sample: /path/to/privatekey.pem.2019-03-09@11:22~
'''
import abc
import os
import traceback
from distutils.version import LooseVersion
MINIMAL_PYOPENSSL_VERSION = '0.6'
MINIMAL_CRYPTOGRAPHY_VERSION = '1.2.3'
PYOPENSSL_IMP_ERR = None
try:
import OpenSSL
from OpenSSL import crypto
PYOPENSSL_VERSION = LooseVersion(OpenSSL.__version__)
except ImportError:
PYOPENSSL_IMP_ERR = traceback.format_exc()
PYOPENSSL_FOUND = False
else:
PYOPENSSL_FOUND = True
CRYPTOGRAPHY_IMP_ERR = None
try:
import cryptography
import cryptography.exceptions
import cryptography.hazmat.backends
import cryptography.hazmat.primitives.serialization
import cryptography.hazmat.primitives.asymmetric.rsa
import cryptography.hazmat.primitives.asymmetric.dsa
import cryptography.hazmat.primitives.asymmetric.ec
import cryptography.hazmat.primitives.asymmetric.utils
CRYPTOGRAPHY_VERSION = LooseVersion(cryptography.__version__)
except ImportError:
CRYPTOGRAPHY_IMP_ERR = traceback.format_exc()
CRYPTOGRAPHY_FOUND = False
else:
CRYPTOGRAPHY_FOUND = True
try:
import cryptography.hazmat.primitives.asymmetric.x25519
CRYPTOGRAPHY_HAS_X25519 = True
try:
cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey.private_bytes
CRYPTOGRAPHY_HAS_X25519_FULL = True
except AttributeError:
CRYPTOGRAPHY_HAS_X25519_FULL = False
except ImportError:
CRYPTOGRAPHY_HAS_X25519 = False
CRYPTOGRAPHY_HAS_X25519_FULL = False
try:
import cryptography.hazmat.primitives.asymmetric.x448
CRYPTOGRAPHY_HAS_X448 = True
except ImportError:
CRYPTOGRAPHY_HAS_X448 = False
try:
import cryptography.hazmat.primitives.asymmetric.ed25519
CRYPTOGRAPHY_HAS_ED25519 = True
except ImportError:
CRYPTOGRAPHY_HAS_ED25519 = False
try:
import cryptography.hazmat.primitives.asymmetric.ed448
CRYPTOGRAPHY_HAS_ED448 = True
except ImportError:
CRYPTOGRAPHY_HAS_ED448 = False
from ansible.module_utils import crypto as crypto_utils
from ansible.module_utils._text import to_native, to_bytes
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class PrivateKeyError(crypto_utils.OpenSSLObjectError):
pass
class PrivateKeyBase(crypto_utils.OpenSSLObject):
def __init__(self, module):
super(PrivateKeyBase, self).__init__(
module.params['path'],
module.params['state'],
module.params['force'],
module.check_mode
)
self.size = module.params['size']
self.passphrase = module.params['passphrase']
self.cipher = module.params['cipher']
self.privatekey = None
self.fingerprint = {}
self.backup = module.params['backup']
self.backup_file = None
if module.params['mode'] is None:
module.params['mode'] = '0600'
@abc.abstractmethod
def _generate_private_key_data(self):
pass
@abc.abstractmethod
def _get_fingerprint(self):
pass
def generate(self, module):
"""Generate a keypair."""
if not self.check(module, perms_required=False) or self.force:
if self.backup:
self.backup_file = module.backup_local(self.path)
privatekey_data = self._generate_private_key_data()
crypto_utils.write_file(module, privatekey_data, 0o600)
self.changed = True
self.fingerprint = self._get_fingerprint()
file_args = module.load_file_common_arguments(module.params)
if module.set_fs_attributes_if_different(file_args, False):
self.changed = True
def remove(self, module):
if self.backup:
self.backup_file = module.backup_local(self.path)
super(PrivateKeyBase, self).remove(module)
@abc.abstractmethod
def _check_passphrase(self):
pass
@abc.abstractmethod
def _check_size_and_type(self):
pass
def check(self, module, perms_required=True):
"""Ensure the resource is in its desired state."""
state_and_perms = super(PrivateKeyBase, self).check(module, perms_required)
if not state_and_perms or not self._check_passphrase():
return False
return self._check_size_and_type()
def dump(self):
"""Serialize the object into a dictionary."""
result = {
'size': self.size,
'filename': self.path,
'changed': self.changed,
'fingerprint': self.fingerprint,
}
if self.backup_file:
result['backup_file'] = self.backup_file
return result
# Implementation with using pyOpenSSL
class PrivateKeyPyOpenSSL(PrivateKeyBase):
def __init__(self, module):
super(PrivateKeyPyOpenSSL, self).__init__(module)
if module.params['type'] == 'RSA':
self.type = crypto.TYPE_RSA
elif module.params['type'] == 'DSA':
self.type = crypto.TYPE_DSA
else:
module.fail_json(msg="PyOpenSSL backend only supports RSA and DSA keys.")
def _generate_private_key_data(self):
self.privatekey = crypto.PKey()
try:
self.privatekey.generate_key(self.type, self.size)
except (TypeError, ValueError) as exc:
raise PrivateKeyError(exc)
if self.cipher and self.passphrase:
return crypto.dump_privatekey(crypto.FILETYPE_PEM, self.privatekey,
self.cipher, to_bytes(self.passphrase))
else:
return crypto.dump_privatekey(crypto.FILETYPE_PEM, self.privatekey)
def _get_fingerprint(self):
return crypto_utils.get_fingerprint(self.path, self.passphrase)
def _check_passphrase(self):
try:
crypto_utils.load_privatekey(self.path, self.passphrase)
return True
except Exception as dummy:
return False
def _check_size_and_type(self):
def _check_size(privatekey):
return self.size == privatekey.bits()
def _check_type(privatekey):
return self.type == privatekey.type()
try:
privatekey = crypto_utils.load_privatekey(self.path, self.passphrase)
except crypto_utils.OpenSSLBadPassphraseError as exc:
raise PrivateKeyError(exc)
return _check_size(privatekey) and _check_type(privatekey)
def dump(self):
"""Serialize the object into a dictionary."""
result = super(PrivateKeyPyOpenSSL, self).dump()
if self.type == crypto.TYPE_RSA:
result['type'] = 'RSA'
else:
result['type'] = 'DSA'
return result
# Implementation with using cryptography
class PrivateKeyCryptography(PrivateKeyBase):
def _get_ec_class(self, ectype):
ecclass = cryptography.hazmat.primitives.asymmetric.ec.__dict__.get(ectype)
if ecclass is None:
self.module.fail_json(msg='Your cryptography version does not support {0}'.format(ectype))
return ecclass
def _add_curve(self, name, ectype, deprecated=False):
def create(size):
ecclass = self._get_ec_class(ectype)
return ecclass()
def verify(privatekey):
ecclass = self._get_ec_class(ectype)
return isinstance(privatekey.private_numbers().public_numbers.curve, ecclass)
self.curves[name] = {
'create': create,
'verify': verify,
'deprecated': deprecated,
}
def __init__(self, module):
super(PrivateKeyCryptography, self).__init__(module)
self.curves = dict()
self._add_curve('secp384r1', 'SECP384R1')
self._add_curve('secp521r1', 'SECP521R1')
self._add_curve('secp224r1', 'SECP224R1')
self._add_curve('secp192r1', 'SECP192R1')
self._add_curve('secp256r1', 'SECP256R1')
self._add_curve('secp256k1', 'SECP256K1')
self._add_curve('brainpoolP256r1', 'BrainpoolP256R1', deprecated=True)
self._add_curve('brainpoolP384r1', 'BrainpoolP384R1', deprecated=True)
self._add_curve('brainpoolP512r1', 'BrainpoolP512R1', deprecated=True)
self._add_curve('sect571k1', 'SECT571K1', deprecated=True)
self._add_curve('sect409k1', 'SECT409K1', deprecated=True)
self._add_curve('sect283k1', 'SECT283K1', deprecated=True)
self._add_curve('sect233k1', 'SECT233K1', deprecated=True)
self._add_curve('sect163k1', 'SECT163K1', deprecated=True)
self._add_curve('sect571r1', 'SECT571R1', deprecated=True)
self._add_curve('sect409r1', 'SECT409R1', deprecated=True)
self._add_curve('sect283r1', 'SECT283R1', deprecated=True)
self._add_curve('sect233r1', 'SECT233R1', deprecated=True)
self._add_curve('sect163r2', 'SECT163R2', deprecated=True)
self.module = module
self.cryptography_backend = cryptography.hazmat.backends.default_backend()
self.type = module.params['type']
self.curve = module.params['curve']
if not CRYPTOGRAPHY_HAS_X25519 and self.type == 'X25519':
self.module.fail_json(msg='Your cryptography version does not support X25519')
if not CRYPTOGRAPHY_HAS_X25519_FULL and self.type == 'X25519':
self.module.fail_json(msg='Your cryptography version does not support X25519 serialization')
if not CRYPTOGRAPHY_HAS_X448 and self.type == 'X448':
self.module.fail_json(msg='Your cryptography version does not support X448')
if not CRYPTOGRAPHY_HAS_ED25519 and self.type == 'Ed25519':
self.module.fail_json(msg='Your cryptography version does not support Ed25519')
if not CRYPTOGRAPHY_HAS_ED448 and self.type == 'Ed448':
self.module.fail_json(msg='Your cryptography version does not support Ed448')
def _generate_private_key_data(self):
export_format = cryptography.hazmat.primitives.serialization.PrivateFormat.TraditionalOpenSSL
try:
if self.type == 'RSA':
self.privatekey = cryptography.hazmat.primitives.asymmetric.rsa.generate_private_key(
public_exponent=65537, # OpenSSL always uses this
key_size=self.size,
backend=self.cryptography_backend
)
if self.type == 'DSA':
self.privatekey = cryptography.hazmat.primitives.asymmetric.dsa.generate_private_key(
key_size=self.size,
backend=self.cryptography_backend
)
if CRYPTOGRAPHY_HAS_X25519_FULL and self.type == 'X25519':
self.privatekey = cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey.generate()
export_format = cryptography.hazmat.primitives.serialization.PrivateFormat.PKCS8
if CRYPTOGRAPHY_HAS_X448 and self.type == 'X448':
self.privatekey = cryptography.hazmat.primitives.asymmetric.x448.X448PrivateKey.generate()
export_format = cryptography.hazmat.primitives.serialization.PrivateFormat.PKCS8
if CRYPTOGRAPHY_HAS_ED25519 and self.type == 'Ed25519':
self.privatekey = cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PrivateKey.generate()
export_format = cryptography.hazmat.primitives.serialization.PrivateFormat.PKCS8
if CRYPTOGRAPHY_HAS_ED448 and self.type == 'Ed448':
self.privatekey = cryptography.hazmat.primitives.asymmetric.ed448.Ed448PrivateKey.generate()
export_format = cryptography.hazmat.primitives.serialization.PrivateFormat.PKCS8
if self.type == 'ECC' and self.curve in self.curves:
if self.curves[self.curve]['deprecated']:
self.module.warn('Elliptic curves of type {0} should not be used for new keys!'.format(self.curve))
self.privatekey = cryptography.hazmat.primitives.asymmetric.ec.generate_private_key(
curve=self.curves[self.curve]['create'](self.size),
backend=self.cryptography_backend
)
except cryptography.exceptions.UnsupportedAlgorithm as dummy:
self.module.fail_json(msg='Cryptography backend does not support the algorithm required for {0}'.format(self.type))
# Select key encryption
encryption_algorithm = cryptography.hazmat.primitives.serialization.NoEncryption()
if self.cipher and self.passphrase:
if self.cipher == 'auto':
encryption_algorithm = cryptography.hazmat.primitives.serialization.BestAvailableEncryption(to_bytes(self.passphrase))
else:
self.module.fail_json(msg='Cryptography backend can only use "auto" for cipher option.')
# Serialize key
return self.privatekey.private_bytes(
encoding=cryptography.hazmat.primitives.serialization.Encoding.PEM,
format=export_format,
encryption_algorithm=encryption_algorithm
)
def _load_privatekey(self):
try:
with open(self.path, 'rb') as f:
return cryptography.hazmat.primitives.serialization.load_pem_private_key(
f.read(),
None if self.passphrase is None else to_bytes(self.passphrase),
backend=self.cryptography_backend
)
except Exception as e:
raise PrivateKeyError(e)
def _get_fingerprint(self):
# Get bytes of public key
private_key = self._load_privatekey()
public_key = private_key.public_key()
public_key_bytes = public_key.public_bytes(
cryptography.hazmat.primitives.serialization.Encoding.DER,
cryptography.hazmat.primitives.serialization.PublicFormat.SubjectPublicKeyInfo
)
# Get fingerprints of public_key_bytes
return crypto_utils.get_fingerprint_of_bytes(public_key_bytes)
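    # Editor's sketch (an assumption: the real helper lives in
    # ansible.module_utils.crypto and is not shown here). A minimal stand-in
    # for crypto_utils.get_fingerprint_of_bytes() for one algorithm could be:
    #
    #     import hashlib
    #
    #     def get_fingerprint_of_bytes(data, algorithm='sha256'):
    #         digest = hashlib.new(algorithm, data).hexdigest()
    #         return ':'.join(digest[i:i + 2] for i in range(0, len(digest), 2))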
    def _check_passphrase(self):
        try:
            with open(self.path, 'rb') as f:
                # Loading raises if the passphrase (or the key itself) is wrong.
                cryptography.hazmat.primitives.serialization.load_pem_private_key(
                    f.read(),
                    None if self.passphrase is None else to_bytes(self.passphrase),
                    backend=self.cryptography_backend
                )
            return True
        except Exception as dummy:
            return False
def _check_size_and_type(self):
privatekey = self._load_privatekey()
if isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.rsa.RSAPrivateKey):
return self.type == 'RSA' and self.size == privatekey.key_size
if isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.dsa.DSAPrivateKey):
return self.type == 'DSA' and self.size == privatekey.key_size
if CRYPTOGRAPHY_HAS_X25519 and isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey):
return self.type == 'X25519'
if CRYPTOGRAPHY_HAS_X448 and isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.x448.X448PrivateKey):
return self.type == 'X448'
if CRYPTOGRAPHY_HAS_ED25519 and isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PrivateKey):
return self.type == 'Ed25519'
if CRYPTOGRAPHY_HAS_ED448 and isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.ed448.Ed448PrivateKey):
return self.type == 'Ed448'
if isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePrivateKey):
if self.type != 'ECC':
return False
if self.curve not in self.curves:
return False
return self.curves[self.curve]['verify'](privatekey)
return False
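    # Editor's note: the isinstance() checks above map cryptography's key
    # classes back to the module's `type` option, for example:
    #
    #     from cryptography.hazmat.primitives.asymmetric import dsa, ec, rsa
    #
    #     isinstance(key, rsa.RSAPrivateKey)           # type: RSA (also compare key_size)
    #     isinstance(key, dsa.DSAPrivateKey)           # type: DSA (also compare key_size)
    #     isinstance(key, ec.EllipticCurvePrivateKey)  # type: ECC (also verify the curve)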
def dump(self):
"""Serialize the object into a dictionary."""
result = super(PrivateKeyCryptography, self).dump()
result['type'] = self.type
if self.type == 'ECC':
result['curve'] = self.curve
return result
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['present', 'absent']),
size=dict(type='int', default=4096),
type=dict(type='str', default='RSA', choices=[
'DSA', 'ECC', 'Ed25519', 'Ed448', 'RSA', 'X25519', 'X448'
]),
curve=dict(type='str', choices=[
'secp384r1', 'secp521r1', 'secp224r1', 'secp192r1', 'secp256r1',
'secp256k1', 'brainpoolP256r1', 'brainpoolP384r1', 'brainpoolP512r1',
'sect571k1', 'sect409k1', 'sect283k1', 'sect233k1', 'sect163k1',
'sect571r1', 'sect409r1', 'sect283r1', 'sect233r1', 'sect163r2',
]),
force=dict(type='bool', default=False),
path=dict(type='path', required=True),
passphrase=dict(type='str', no_log=True),
cipher=dict(type='str'),
backup=dict(type='bool', default=False),
select_crypto_backend=dict(type='str', choices=['auto', 'pyopenssl', 'cryptography'], default='auto'),
),
supports_check_mode=True,
add_file_common_args=True,
required_together=[
['cipher', 'passphrase']
],
required_if=[
['type', 'ECC', ['curve']],
],
)
base_dir = os.path.dirname(module.params['path']) or '.'
if not os.path.isdir(base_dir):
module.fail_json(
name=base_dir,
msg='The directory %s does not exist or the file is not a directory' % base_dir
)
backend = module.params['select_crypto_backend']
if backend == 'auto':
        # Detect what is possible
can_use_cryptography = CRYPTOGRAPHY_FOUND and CRYPTOGRAPHY_VERSION >= LooseVersion(MINIMAL_CRYPTOGRAPHY_VERSION)
can_use_pyopenssl = PYOPENSSL_FOUND and PYOPENSSL_VERSION >= LooseVersion(MINIMAL_PYOPENSSL_VERSION)
# Decision
if module.params['cipher'] and module.params['passphrase'] and module.params['cipher'] != 'auto':
# First try pyOpenSSL, then cryptography
if can_use_pyopenssl:
backend = 'pyopenssl'
elif can_use_cryptography:
backend = 'cryptography'
else:
# First try cryptography, then pyOpenSSL
if can_use_cryptography:
backend = 'cryptography'
elif can_use_pyopenssl:
backend = 'pyopenssl'
# Success?
if backend == 'auto':
module.fail_json(msg=("Can't detect any of the required Python libraries "
"cryptography (>= {0}) or PyOpenSSL (>= {1})").format(
MINIMAL_CRYPTOGRAPHY_VERSION,
MINIMAL_PYOPENSSL_VERSION))
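    # Editor's note on the precedence above: with a concrete cipher such as
    # cipher=aes256 (plus a passphrase), pyOpenSSL is preferred when both
    # libraries are installed; otherwise (no cipher, or cipher=auto) the
    # cryptography library is preferred.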
try:
if backend == 'pyopenssl':
if not PYOPENSSL_FOUND:
module.fail_json(msg=missing_required_lib('pyOpenSSL >= {0}'.format(MINIMAL_PYOPENSSL_VERSION)),
exception=PYOPENSSL_IMP_ERR)
module.deprecate('The module is using the PyOpenSSL backend. This backend has been deprecated', version='2.13')
private_key = PrivateKeyPyOpenSSL(module)
elif backend == 'cryptography':
if not CRYPTOGRAPHY_FOUND:
module.fail_json(msg=missing_required_lib('cryptography >= {0}'.format(MINIMAL_CRYPTOGRAPHY_VERSION)),
exception=CRYPTOGRAPHY_IMP_ERR)
private_key = PrivateKeyCryptography(module)
if private_key.state == 'present':
if module.check_mode:
result = private_key.dump()
result['changed'] = module.params['force'] or not private_key.check(module)
module.exit_json(**result)
private_key.generate(module)
else:
if module.check_mode:
result = private_key.dump()
result['changed'] = os.path.exists(module.params['path'])
module.exit_json(**result)
private_key.remove(module)
result = private_key.dump()
module.exit_json(**result)
except crypto_utils.OpenSSLObjectError as exc:
module.fail_json(msg=to_native(exc))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,253 |
Allow to select openssl_privatekey format
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Allow specifying a new "format" parameter in "openssl_privatekey". Today the output format is decided by a very simple heuristic, so further commands are needed to get the desired format.
I need, for example, to generate an RSA private key in PKCS8 format. The current heuristic uses the OpenSSH format for RSA keys and only uses PKCS8 for Ed25519 and similar types. This forces me to run a "command" task with "openssl" to convert the generated key, which leads to idempotency problems and unnecessary complexity.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
openssl_privatekey
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
The current heuristic should be kept unless overridden by the user.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
format: pkcs8
```
The values allowed are `pkcs1`, `raw` and `pkcs8`
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/59253
|
https://github.com/ansible/ansible/pull/60388
|
e3c7e356568c3254a14440d7b832cc2cf32a2b14
|
d00d0c81b379f4232421ec01a104b3ce1c6e2820
| 2019-07-18T15:29:13Z |
python
| 2019-10-17T08:40:13Z |
test/integration/targets/openssl_privatekey/tasks/impl.yml
|
---
- name: Generate privatekey1 - standard
openssl_privatekey:
path: '{{ output_dir }}/privatekey1.pem'
select_crypto_backend: '{{ select_crypto_backend }}'
- name: Generate privatekey2 - size 2048
openssl_privatekey:
path: '{{ output_dir }}/privatekey2.pem'
size: 2048
select_crypto_backend: '{{ select_crypto_backend }}'
- name: Generate privatekey3 - type DSA
openssl_privatekey:
path: '{{ output_dir }}/privatekey3.pem'
type: DSA
size: 3072
select_crypto_backend: '{{ select_crypto_backend }}'
- name: Generate privatekey4 - standard
openssl_privatekey:
path: '{{ output_dir }}/privatekey4.pem'
select_crypto_backend: '{{ select_crypto_backend }}'
- name: Delete privatekey4 - standard
openssl_privatekey:
state: absent
path: '{{ output_dir }}/privatekey4.pem'
select_crypto_backend: '{{ select_crypto_backend }}'
- name: Generate privatekey5 - standard - with passphrase
openssl_privatekey:
path: '{{ output_dir }}/privatekey5.pem'
passphrase: ansible
cipher: "{{ 'aes256' if select_crypto_backend == 'pyopenssl' else 'auto' }}"
select_crypto_backend: '{{ select_crypto_backend }}'
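# Editor's note on the cipher expression above: the cryptography backend only
# accepts cipher=auto (it then picks the best available encryption), while the
# pyOpenSSL backend expects a concrete OpenSSL cipher name such as aes256.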
- name: Generate privatekey5 - standard - idempotence
openssl_privatekey:
path: '{{ output_dir }}/privatekey5.pem'
passphrase: ansible
cipher: "{{ 'aes256' if select_crypto_backend == 'pyopenssl' else 'auto' }}"
select_crypto_backend: '{{ select_crypto_backend }}'
register: privatekey5_idempotence
- name: Generate privatekey6 - standard - with non-ASCII passphrase
openssl_privatekey:
path: '{{ output_dir }}/privatekey6.pem'
passphrase: ànsïblé
cipher: "{{ 'aes256' if select_crypto_backend == 'pyopenssl' else 'auto' }}"
select_crypto_backend: '{{ select_crypto_backend }}'
- set_fact:
ecc_types: []
when: select_crypto_backend == 'pyopenssl'
- set_fact:
ecc_types:
- curve: secp384r1
openssl_name: secp384r1
min_cryptography_version: "0.5"
- curve: secp521r1
openssl_name: secp521r1
min_cryptography_version: "0.5"
- curve: secp224r1
openssl_name: secp224r1
min_cryptography_version: "0.5"
- curve: secp192r1
openssl_name: prime192v1
min_cryptography_version: "0.5"
- curve: secp256r1
openssl_name: secp256r1
min_cryptography_version: "0.5"
- curve: secp256k1
openssl_name: secp256k1
min_cryptography_version: "0.9"
- curve: brainpoolP256r1
openssl_name: brainpoolP256r1
min_cryptography_version: "2.2"
- curve: brainpoolP384r1
openssl_name: brainpoolP384r1
min_cryptography_version: "2.2"
- curve: brainpoolP512r1
openssl_name: brainpoolP512r1
min_cryptography_version: "2.2"
- curve: sect571k1
openssl_name: sect571k1
min_cryptography_version: "0.5"
- curve: sect409k1
openssl_name: sect409k1
min_cryptography_version: "0.5"
- curve: sect283k1
openssl_name: sect283k1
min_cryptography_version: "0.5"
- curve: sect233k1
openssl_name: sect233k1
min_cryptography_version: "0.5"
- curve: sect163k1
openssl_name: sect163k1
min_cryptography_version: "0.5"
- curve: sect571r1
openssl_name: sect571r1
min_cryptography_version: "0.5"
- curve: sect409r1
openssl_name: sect409r1
min_cryptography_version: "0.5"
- curve: sect283r1
openssl_name: sect283r1
min_cryptography_version: "0.5"
- curve: sect233r1
openssl_name: sect233r1
min_cryptography_version: "0.5"
- curve: sect163r2
openssl_name: sect163r2
min_cryptography_version: "0.5"
when: select_crypto_backend == 'cryptography'
- name: Test ECC key generation
openssl_privatekey:
path: '{{ output_dir }}/privatekey-{{ item.curve }}.pem'
type: ECC
curve: "{{ item.curve }}"
select_crypto_backend: '{{ select_crypto_backend }}'
when: |
cryptography_version.stdout is version(item.min_cryptography_version, '>=') and
item.openssl_name in openssl_ecc_list
loop: "{{ ecc_types }}"
loop_control:
label: "{{ item.curve }}"
register: privatekey_ecc_generate
- name: Test ECC key generation (idempotency)
openssl_privatekey:
path: '{{ output_dir }}/privatekey-{{ item.curve }}.pem'
type: ECC
curve: "{{ item.curve }}"
select_crypto_backend: '{{ select_crypto_backend }}'
when: |
cryptography_version.stdout is version(item.min_cryptography_version, '>=') and
item.openssl_name in openssl_ecc_list
loop: "{{ ecc_types }}"
loop_control:
label: "{{ item.curve }}"
register: privatekey_ecc_idempotency
- block:
- name: Test other type generation
openssl_privatekey:
path: '{{ output_dir }}/privatekey-{{ item.type }}.pem'
type: "{{ item.type }}"
select_crypto_backend: '{{ select_crypto_backend }}'
when: cryptography_version.stdout is version(item.min_version, '>=')
loop: "{{ types }}"
loop_control:
label: "{{ item.type }}"
register: privatekey_t1_generate
- name: Test other type generation (idempotency)
openssl_privatekey:
path: '{{ output_dir }}/privatekey-{{ item.type }}.pem'
type: "{{ item.type }}"
select_crypto_backend: '{{ select_crypto_backend }}'
when: cryptography_version.stdout is version(item.min_version, '>=')
loop: "{{ types }}"
loop_control:
label: "{{ item.type }}"
register: privatekey_t1_idempotency
when: select_crypto_backend == 'cryptography'
vars:
types:
- type: X25519
min_version: '2.5'
- type: Ed25519
min_version: '2.6'
- type: Ed448
min_version: '2.6'
- type: X448
min_version: '2.6'
- name: Generate privatekey with passphrase
openssl_privatekey:
path: '{{ output_dir }}/privatekeypw.pem'
passphrase: hunter2
cipher: "{{ 'aes256' if select_crypto_backend == 'pyopenssl' else 'auto' }}"
select_crypto_backend: '{{ select_crypto_backend }}'
backup: yes
register: passphrase_1
- name: Generate privatekey with passphrase (idempotent)
openssl_privatekey:
path: '{{ output_dir }}/privatekeypw.pem'
passphrase: hunter2
cipher: "{{ 'aes256' if select_crypto_backend == 'pyopenssl' else 'auto' }}"
select_crypto_backend: '{{ select_crypto_backend }}'
backup: yes
register: passphrase_2
- name: Regenerate privatekey without passphrase
openssl_privatekey:
path: '{{ output_dir }}/privatekeypw.pem'
select_crypto_backend: '{{ select_crypto_backend }}'
backup: yes
register: passphrase_3
- name: Regenerate privatekey without passphrase (idempotent)
openssl_privatekey:
path: '{{ output_dir }}/privatekeypw.pem'
select_crypto_backend: '{{ select_crypto_backend }}'
backup: yes
register: passphrase_4
- name: Regenerate privatekey with passphrase
openssl_privatekey:
path: '{{ output_dir }}/privatekeypw.pem'
passphrase: hunter2
cipher: "{{ 'aes256' if select_crypto_backend == 'pyopenssl' else 'auto' }}"
select_crypto_backend: '{{ select_crypto_backend }}'
backup: yes
register: passphrase_5
- name: Create broken key
copy:
dest: "{{ output_dir }}/broken"
content: "broken"
- name: Regenerate broken key
openssl_privatekey:
path: '{{ output_dir }}/broken.pem'
select_crypto_backend: '{{ select_crypto_backend }}'
register: output_broken
- name: Remove module
openssl_privatekey:
path: '{{ output_dir }}/privatekeypw.pem'
passphrase: hunter2
cipher: "{{ 'aes256' if select_crypto_backend == 'pyopenssl' else 'auto' }}"
select_crypto_backend: '{{ select_crypto_backend }}'
backup: yes
state: absent
register: remove_1
- name: Remove module (idempotent)
openssl_privatekey:
path: '{{ output_dir }}/privatekeypw.pem'
passphrase: hunter2
cipher: "{{ 'aes256' if select_crypto_backend == 'pyopenssl' else 'auto' }}"
select_crypto_backend: '{{ select_crypto_backend }}'
backup: yes
state: absent
register: remove_2
- name: Generate privatekey_mode (mode 0400)
openssl_privatekey:
path: '{{ output_dir }}/privatekey_mode.pem'
mode: '0400'
select_crypto_backend: '{{ select_crypto_backend }}'
register: privatekey_mode_1
- name: Stat for privatekey_mode
stat:
path: '{{ output_dir }}/privatekey_mode.pem'
register: privatekey_mode_1_stat
- name: Generate privatekey_mode (mode 0400, idempotency)
openssl_privatekey:
path: '{{ output_dir }}/privatekey_mode.pem'
mode: '0400'
select_crypto_backend: '{{ select_crypto_backend }}'
register: privatekey_mode_2
- name: Generate privatekey_mode (mode 0400, force)
openssl_privatekey:
path: '{{ output_dir }}/privatekey_mode.pem'
mode: '0400'
force: yes
select_crypto_backend: '{{ select_crypto_backend }}'
register: privatekey_mode_3
- name: Stat for privatekey_mode
stat:
path: '{{ output_dir }}/privatekey_mode.pem'
register: privatekey_mode_3_stat
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,253 |
Allow to select openssl_privatekey format
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Allow specifying a new "format" parameter in "openssl_privatekey". Today the output format is decided by a very simple heuristic, so further commands are needed to get the desired format.
I need, for example, to generate an RSA private key in PKCS8 format. The current heuristic uses the OpenSSH format for RSA keys and only uses PKCS8 for Ed25519 and similar types. This forces me to run a "command" task with "openssl" to convert the generated key, which leads to idempotency problems and unnecessary complexity.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
openssl_privatekey
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
The current heuristic should be kept unless overridden by the user.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
format: pkcs8
```
The values allowed are `pkcs1`, `raw` and `pkcs8`
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/59253
|
https://github.com/ansible/ansible/pull/60388
|
e3c7e356568c3254a14440d7b832cc2cf32a2b14
|
d00d0c81b379f4232421ec01a104b3ce1c6e2820
| 2019-07-18T15:29:13Z |
python
| 2019-10-17T08:40:13Z |
test/integration/targets/openssl_privatekey/tasks/main.yml
|
---
- name: Find out which elliptic curves are supported by installed OpenSSL
command: openssl ecparam -list_curves
register: openssl_ecc
- name: Compile list of elliptic curves supported by OpenSSL
set_fact:
openssl_ecc_list: |
{{
openssl_ecc.stdout_lines
| map('regex_search', '^ *([a-zA-Z0-9_-]+) *: .*$')
| select()
| map('regex_replace', '^ *([a-zA-Z0-9_-]+) *: .*$', '\1')
| list
}}
when: ansible_distribution != 'CentOS' or ansible_distribution_major_version != '6'
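# Editor's illustration of the pipeline above (hypothetical sample line):
# a stdout line such as "  secp384r1 : NIST/SECG curve over a 384 bit prime field"
# is kept by regex_search, then regex_replace reduces it to the capture group,
# yielding "secp384r1" in openssl_ecc_list.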
# CentOS comes with a very old jinja2 which does not include the map() filter...
- name: Compile list of elliptic curves supported by OpenSSL (CentOS 6)
set_fact:
openssl_ecc_list:
- secp384r1
- secp521r1
- prime256v1
when: ansible_distribution == 'CentOS' and ansible_distribution_major_version == '6'
- name: List of elliptic curves supported by OpenSSL
debug: var=openssl_ecc_list
- name: Run module with backend autodetection
openssl_privatekey:
path: '{{ output_dir }}/privatekey_backend_selection.pem'
- block:
- name: Running tests with pyOpenSSL backend
include_tasks: impl.yml
vars:
select_crypto_backend: pyopenssl
- import_tasks: ../tests/validate.yml
# FIXME: minimal pyOpenSSL version?!
when: pyopenssl_version.stdout is version('0.6', '>=')
- name: Remove output directory
file:
path: "{{ output_dir }}"
state: absent
- name: Re-create output directory
file:
path: "{{ output_dir }}"
state: directory
- block:
- name: Running tests with cryptography backend
include_tasks: impl.yml
vars:
select_crypto_backend: cryptography
- import_tasks: ../tests/validate.yml
when: cryptography_version.stdout is version('0.5', '>=')
- name: Check that fingerprints do not depend on the backend
block:
- name: "Fingerprint comparison: pyOpenSSL"
openssl_privatekey:
path: '{{ output_dir }}/fingerprint-{{ item }}.pem'
type: "{{ item }}"
size: 1024
select_crypto_backend: pyopenssl
loop:
- RSA
- DSA
register: fingerprint_pyopenssl
- name: "Fingerprint comparison: cryptography"
openssl_privatekey:
path: '{{ output_dir }}/fingerprint-{{ item }}.pem'
type: "{{ item }}"
size: 1024
select_crypto_backend: cryptography
loop:
- RSA
- DSA
register: fingerprint_cryptography
- name: Verify that fingerprints match
assert:
that: item.0.fingerprint[item.2] == item.1.fingerprint[item.2]
when: item.0 is not skipped and item.1 is not skipped
loop: |
{{ query('nested',
fingerprint_pyopenssl.results | zip(fingerprint_cryptography.results),
fingerprint_pyopenssl.results[0].fingerprint.keys()
) if fingerprint_pyopenssl.results[0].fingerprint else [] }}
loop_control:
label: "{{ [item.0.item, item.2] }}"
when: pyopenssl_version.stdout is version('0.6', '>=') and cryptography_version.stdout is version('0.5', '>=')
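# Editor's note on the loop above: query('nested', ...) pairs each
# (pyopenssl_result, cryptography_result) tuple produced by zip() with every
# fingerprint algorithm name, so item.0 and item.1 are the two module results
# and item.2 is the algorithm key whose fingerprints are compared.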
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,253 |
Allow to select openssl_privatekey format
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Allow specifying a new "format" parameter in "openssl_privatekey". Today the output format is decided by a very simple heuristic, so further commands are needed to get the desired format.
I need, for example, to generate an RSA private key in PKCS8 format. The current heuristic uses the OpenSSH format for RSA keys and only uses PKCS8 for Ed25519 and similar types. This forces me to run a "command" task with "openssl" to convert the generated key, which leads to idempotency problems and unnecessary complexity.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
openssl_privatekey
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
The current heuristic should be kept unless overridden by the user.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
format: pkcs8
```
The values allowed are `pkcs1`, `raw` and `pkcs8`
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/59253
|
https://github.com/ansible/ansible/pull/60388
|
e3c7e356568c3254a14440d7b832cc2cf32a2b14
|
d00d0c81b379f4232421ec01a104b3ce1c6e2820
| 2019-07-18T15:29:13Z |
python
| 2019-10-17T08:40:13Z |
test/integration/targets/openssl_privatekey/tests/validate.yml
|
---
- name: Validate privatekey1 (test - RSA key with size 4096 bits)
shell: "openssl rsa -noout -text -in {{ output_dir }}/privatekey1.pem | grep Private | sed 's/\\(RSA *\\)*Private-Key: (\\(.*\\) bit.*)/\\2/'"
register: privatekey1
- name: Validate privatekey1 (assert - RSA key with size 4096 bits)
assert:
that:
- privatekey1.stdout == '4096'
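# Editor's illustration of the shell pipeline above (hypothetical output):
# "openssl rsa -noout -text" prints a line such as
# "RSA Private-Key: (4096 bit, 2 primes)" or "Private-Key: (4096 bit)",
# and the sed expression strips everything except the bit size, leaving "4096".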
- name: Validate privatekey2 (test - RSA key with size 2048 bits)
shell: "openssl rsa -noout -text -in {{ output_dir }}/privatekey2.pem | grep Private | sed 's/\\(RSA *\\)*Private-Key: (\\(.*\\) bit.*)/\\2/'"
register: privatekey2
- name: Validate privatekey2 (assert - RSA key with size 2048 bits)
assert:
that:
- privatekey2.stdout == '2048'
- name: Validate privatekey3 (test - DSA key with size 3072 bits)
shell: "openssl dsa -noout -text -in {{ output_dir }}/privatekey3.pem | grep Private | sed 's/\\(RSA *\\)*Private-Key: (\\(.*\\) bit.*)/\\2/'"
register: privatekey3
- name: Validate privatekey3 (assert - DSA key with size 3072 bits)
assert:
that:
- privatekey3.stdout == '3072'
- name: Validate privatekey4 (test - Ensure key has been removed)
stat:
path: '{{ output_dir }}/privatekey4.pem'
register: privatekey4
- name: Validate privatekey4 (assert - Ensure key has been removed)
assert:
that:
- privatekey4.stat.exists == False
- name: Validate privatekey5 (test - Passphrase protected key + idempotence)
shell: "openssl rsa -noout -text -in {{ output_dir }}/privatekey5.pem -passin pass:ansible | grep Private | sed 's/\\(RSA *\\)*Private-Key: (\\(.*\\) bit.*)/\\2/'"
register: privatekey5
  # The version of OS X that runs in the CI (10.11) does not have an up-to-date version of the OpenSSL
  # library, leading this test to fail when run in the CI. However, this test has been run on 10.12 and completed successfully.
when: openssl_version.stdout is version('0.9.8zh', '>=')
- name: Validate privatekey5 (assert - Passphrase protected key + idempotence)
assert:
that:
- privatekey5.stdout == '4096'
when: openssl_version.stdout is version('0.9.8zh', '>=')
- name: Validate privatekey5 idempotence (assert - Passphrase protected key + idempotence)
assert:
that:
- privatekey5_idempotence is not changed
- name: Validate privatekey6 (test - Passphrase protected key with non ascii character)
shell: "openssl rsa -noout -text -in {{ output_dir }}/privatekey6.pem -passin pass:ànsïblé | grep Private | sed 's/\\(RSA *\\)*Private-Key: (\\(.*\\) bit.*)/\\2/'"
register: privatekey6
when: openssl_version.stdout is version('0.9.8zh', '>=')
- name: Validate privatekey6 (assert - Passphrase protected key with non ascii character)
assert:
that:
- privatekey6.stdout == '4096'
when: openssl_version.stdout is version('0.9.8zh', '>=')
- name: Validate ECC generation (dump with OpenSSL)
shell: "openssl ec -in {{ output_dir }}/privatekey-{{ item.item.curve }}.pem -noout -text | grep 'ASN1 OID: ' | sed 's/ASN1 OID: \\([^ ]*\\)/\\1/'"
loop: "{{ privatekey_ecc_generate.results }}"
register: privatekey_ecc_dump
when: openssl_version.stdout is version('0.9.8zh', '>=') and 'skip_reason' not in item
loop_control:
label: "{{ item.item.curve }}"
- name: Validate ECC generation
assert:
that:
- item is changed
loop: "{{ privatekey_ecc_generate.results }}"
when: "'skip_reason' not in item"
loop_control:
label: "{{ item.item.curve }}"
- name: Validate ECC generation (curve type)
assert:
that:
- "'skip_reason' in item or item.item.item.openssl_name == item.stdout"
loop: "{{ privatekey_ecc_dump.results }}"
when: "'skip_reason' not in item"
loop_control:
label: "{{ item.item.item }} - {{ item.stdout if 'stdout' in item else '<unsupported>' }}"
- name: Validate ECC generation idempotency
assert:
that:
- item is not changed
loop: "{{ privatekey_ecc_idempotency.results }}"
when: "'skip_reason' not in item"
loop_control:
label: "{{ item.item.curve }}"
- name: Validate other type generation (just check changed)
assert:
that:
- item is changed
loop: "{{ privatekey_t1_generate.results }}"
when: "'skip_reason' not in item"
loop_control:
label: "{{ item.item.type }}"
- name: Validate other type generation idempotency
assert:
that:
- item is not changed
loop: "{{ privatekey_t1_idempotency.results }}"
when: "'skip_reason' not in item"
loop_control:
label: "{{ item.item.type }}"
- name: Validate passphrase changing
assert:
that:
- passphrase_1 is changed
- passphrase_2 is not changed
- passphrase_3 is changed
- passphrase_4 is not changed
- passphrase_5 is changed
- passphrase_1.backup_file is undefined
- passphrase_2.backup_file is undefined
- passphrase_3.backup_file is string
- passphrase_4.backup_file is undefined
- passphrase_5.backup_file is string
- name: Verify that broken key will be regenerated
assert:
that:
- output_broken is changed
- name: Validate remove
assert:
that:
- remove_1 is changed
- remove_2 is not changed
- remove_1.backup_file is string
- remove_2.backup_file is undefined
- name: Validate mode
assert:
that:
- privatekey_mode_1 is changed
- privatekey_mode_1_stat.stat.mode == '0400'
- privatekey_mode_2 is not changed
- privatekey_mode_3 is changed
- privatekey_mode_3_stat.stat.mode == '0400'
- privatekey_mode_1_stat.stat.mtime != privatekey_mode_3_stat.stat.mtime
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,227 |
Setting openssl_csr version has no effect
|
##### SUMMARY
Setting [openssl_csr](https://docs.ansible.com/ansible/latest/modules/openssl_csr_module.html)'s `version` parameter has no effect in ansible 2.8.4; it always remains the default (1).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
openssl_csr
##### ANSIBLE VERSION
```
ansible 2.8.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/bilge/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
```
##### CONFIGURATION
```yaml
- name: Generate CSR
openssl_csr:
privatekey_path: /tmp/private.pem
path: /tmp/csr
subject_alt_name: DNS:example.com
version: 3
```
##### EXPECTED RESULTS
```
Certificate Request:
Data:
Version: 3 (0x2)
```
##### ACTUAL RESULTS
>$ openssl req -in /tmp/csr -noout -text
```
Certificate Request:
Data:
Version: 1 (0x0)
```
|
https://github.com/ansible/ansible/issues/62227
|
https://github.com/ansible/ansible/pull/63432
|
d00d0c81b379f4232421ec01a104b3ce1c6e2820
|
ba686154b98194de04a0c37970a3b997394ab7be
| 2019-09-12T20:44:37Z |
python
| 2019-10-17T08:42:05Z |
changelogs/fragments/63432-openssl_csr-version.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,227 |
Setting openssl_csr version has no effect
|
##### SUMMARY
Setting [openssl_csr](https://docs.ansible.com/ansible/latest/modules/openssl_csr_module.html)'s `version` parameter has no effect in ansible 2.8.4; it always remains the default (1).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
openssl_csr
##### ANSIBLE VERSION
```
ansible 2.8.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/bilge/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
```
##### CONFIGURATION
```yaml
- name: Generate CSR
openssl_csr:
privatekey_path: /tmp/private.pem
path: /tmp/csr
subject_alt_name: DNS:example.com
version: 3
```
##### EXPECTED RESULTS
```
Certificate Request:
Data:
Version: 3 (0x2)
```
##### ACTUAL RESULTS
>$ openssl req -in /tmp/csr -noout -text
```
Certificate Request:
Data:
Version: 1 (0x0)
```
|
https://github.com/ansible/ansible/issues/62227
|
https://github.com/ansible/ansible/pull/63432
|
d00d0c81b379f4232421ec01a104b3ce1c6e2820
|
ba686154b98194de04a0c37970a3b997394ab7be
| 2019-09-12T20:44:37Z |
python
| 2019-10-17T08:42:05Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
No notable changes
Modules
=======
No notable changes
Modules removed
---------------
The following modules no longer exist:
* letsencrypt: use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
No notable changes
Noteworthy module changes
-------------------------
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in :ref:`pacman <pacman_module>` module has been removed, you should use ``extra_args=--recursive`` instead.
* :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module no longer requires the VM name, which was a required parameter in releases prior to Ansible 2.10.
Plugins
=======
No notable changes
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,227 |
Setting openssl_csr version has no effect
|
##### SUMMARY
Setting [openssl_csr](https://docs.ansible.com/ansible/latest/modules/openssl_csr_module.html)'s `version` parameter has no effect in ansible 2.8.4; it always remains the default (1).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
openssl_csr
##### ANSIBLE VERSION
```
ansible 2.8.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/bilge/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
```
##### CONFIGURATION
```yaml
- name: Generate CSR
openssl_csr:
privatekey_path: /tmp/private.pem
path: /tmp/csr
subject_alt_name: DNS:example.com
version: 3
```
##### EXPECTED RESULTS
```
Certificate Request:
Data:
Version: 3 (0x2)
```
##### ACTUAL RESULTS
>$ openssl req -in /tmp/csr -noout -text
```
Certificate Request:
Data:
Version: 1 (0x0)
```
|
https://github.com/ansible/ansible/issues/62227
|
https://github.com/ansible/ansible/pull/63432
|
d00d0c81b379f4232421ec01a104b3ce1c6e2820
|
ba686154b98194de04a0c37970a3b997394ab7be
| 2019-09-12T20:44:37Z |
python
| 2019-10-17T08:42:05Z |
lib/ansible/modules/crypto/openssl_csr.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Yanis Guenane <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: openssl_csr
version_added: '2.4'
short_description: Generate OpenSSL Certificate Signing Request (CSR)
description:
- This module allows one to (re)generate OpenSSL certificate signing requests.
- It uses the pyOpenSSL python library to interact with openssl. This module supports
the subjectAltName, keyUsage, extendedKeyUsage, basicConstraints and OCSP Must Staple
extensions.
- "Please note that the module regenerates existing CSR if it doesn't match the module's
options, or if it seems to be corrupt. If you are concerned that this could overwrite
your existing CSR, consider using the I(backup) option."
- The module can use the cryptography Python library, or the pyOpenSSL Python
library. By default, it tries to detect which one is available. This can be
overridden with the I(select_crypto_backend) option. Please note that the
PyOpenSSL backend was deprecated in Ansible 2.9 and will be removed in Ansible 2.13."
requirements:
- Either cryptography >= 1.3
- Or pyOpenSSL >= 0.15
author:
- Yanis Guenane (@Spredzy)
options:
state:
description:
- Whether the certificate signing request should exist or not, taking action if the state is different from what is stated.
type: str
default: present
choices: [ absent, present ]
digest:
description:
- The digest used when signing the certificate signing request with the private key.
type: str
default: sha256
privatekey_path:
description:
- The path to the private key to use when signing the certificate signing request.
type: path
required: true
privatekey_passphrase:
description:
- The passphrase for the private key.
- This is required if the private key is password protected.
type: str
version:
description:
- The version of the certificate signing request.
type: int
default: 1
force:
description:
      - Whether the certificate signing request should be forcibly regenerated by this Ansible module.
type: bool
default: no
path:
description:
- The name of the file into which the generated OpenSSL certificate signing request will be written.
type: path
required: true
subject:
description:
- Key/value pairs that will be present in the subject name field of the certificate signing request.
- If you need to specify more than one value with the same key, use a list as value.
type: dict
version_added: '2.5'
country_name:
description:
- The countryName field of the certificate signing request subject.
type: str
aliases: [ C, countryName ]
state_or_province_name:
description:
- The stateOrProvinceName field of the certificate signing request subject.
type: str
aliases: [ ST, stateOrProvinceName ]
locality_name:
description:
- The localityName field of the certificate signing request subject.
type: str
aliases: [ L, localityName ]
organization_name:
description:
- The organizationName field of the certificate signing request subject.
type: str
aliases: [ O, organizationName ]
organizational_unit_name:
description:
- The organizationalUnitName field of the certificate signing request subject.
type: str
aliases: [ OU, organizationalUnitName ]
common_name:
description:
- The commonName field of the certificate signing request subject.
type: str
aliases: [ CN, commonName ]
email_address:
description:
- The emailAddress field of the certificate signing request subject.
type: str
aliases: [ E, emailAddress ]
subject_alt_name:
description:
- SAN extension to attach to the certificate signing request.
- This can either be a 'comma separated string' or a YAML list.
- Values must be prefixed by their options. (i.e., C(email), C(URI), C(DNS), C(RID), C(IP), C(dirName),
C(otherName) and the ones specific to your CA)
      - Note that if no SAN is specified, but a common name is, the common
        name will be added as a SAN, unless C(useCommonNameForSAN) is
        set to I(false).
- More at U(https://tools.ietf.org/html/rfc5280#section-4.2.1.6).
type: list
elements: str
aliases: [ subjectAltName ]
subject_alt_name_critical:
description:
- Should the subjectAltName extension be considered as critical.
type: bool
aliases: [ subjectAltName_critical ]
use_common_name_for_san:
description:
- If set to C(yes), the module will fill the common name in for
C(subject_alt_name) with C(DNS:) prefix if no SAN is specified.
type: bool
default: yes
version_added: '2.8'
aliases: [ useCommonNameForSAN ]
key_usage:
description:
- This defines the purpose (e.g. encipherment, signature, certificate signing)
of the key contained in the certificate.
type: list
elements: str
aliases: [ keyUsage ]
key_usage_critical:
description:
- Should the keyUsage extension be considered as critical.
type: bool
aliases: [ keyUsage_critical ]
extended_key_usage:
description:
- Additional restrictions (e.g. client authentication, server authentication)
on the allowed purposes for which the public key may be used.
type: list
elements: str
aliases: [ extKeyUsage, extendedKeyUsage ]
extended_key_usage_critical:
description:
- Should the extkeyUsage extension be considered as critical.
type: bool
aliases: [ extKeyUsage_critical, extendedKeyUsage_critical ]
basic_constraints:
description:
- Indicates basic constraints, such as if the certificate is a CA.
type: list
elements: str
version_added: '2.5'
aliases: [ basicConstraints ]
basic_constraints_critical:
description:
- Should the basicConstraints extension be considered as critical.
type: bool
version_added: '2.5'
aliases: [ basicConstraints_critical ]
ocsp_must_staple:
description:
- Indicates that the certificate should contain the OCSP Must Staple
extension (U(https://tools.ietf.org/html/rfc7633)).
type: bool
version_added: '2.5'
aliases: [ ocspMustStaple ]
ocsp_must_staple_critical:
description:
- Should the OCSP Must Staple extension be considered as critical
- Note that according to the RFC, this extension should not be marked
as critical, as old clients not knowing about OCSP Must Staple
are required to reject such certificates
(see U(https://tools.ietf.org/html/rfc7633#section-4)).
type: bool
version_added: '2.5'
aliases: [ ocspMustStaple_critical ]
select_crypto_backend:
description:
- Determines which crypto backend to use.
- The default choice is C(auto), which tries to use C(cryptography) if available, and falls back to C(pyopenssl).
- If set to C(pyopenssl), will try to use the L(pyOpenSSL,https://pypi.org/project/pyOpenSSL/) library.
- If set to C(cryptography), will try to use the L(cryptography,https://cryptography.io/) library.
- Please note that the C(pyopenssl) backend has been deprecated in Ansible 2.9, and will be removed in Ansible 2.13.
From that point on, only the C(cryptography) backend will be available.
type: str
default: auto
choices: [ auto, cryptography, pyopenssl ]
version_added: '2.8'
backup:
description:
- Create a backup file including a timestamp so you can get the original
CSR back if you overwrote it with a new one by accident.
type: bool
default: no
version_added: "2.8"
create_subject_key_identifier:
description:
- Create the Subject Key Identifier from the public key.
- "Please note that commercial CAs can ignore the value, respectively use a value of
their own choice instead. Specifying this option is mostly useful for self-signed
certificates or for own CAs."
- Note that this is only supported if the C(cryptography) backend is used!
type: bool
default: no
version_added: "2.9"
subject_key_identifier:
description:
- The subject key identifier as a hex string, where two bytes are separated by colons.
- "Example: C(00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00:11:22:33)"
- "Please note that commercial CAs ignore this value, respectively use a value of their
own choice. Specifying this option is mostly useful for self-signed certificates
or for own CAs."
- Note that this option can only be used if I(create_subject_key_identifier) is C(no).
- Note that this is only supported if the C(cryptography) backend is used!
type: str
version_added: "2.9"
authority_key_identifier:
description:
- The authority key identifier as a hex string, where two bytes are separated by colons.
- "Example: C(00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00:11:22:33)"
- If specified, I(authority_cert_issuer) must also be specified.
- "Please note that commercial CAs ignore this value, respectively use a value of their
own choice. Specifying this option is mostly useful for self-signed certificates
or for own CAs."
- Note that this is only supported if the C(cryptography) backend is used!
- The C(AuthorityKeyIdentifier) will only be added if at least one of I(authority_key_identifier),
I(authority_cert_issuer) and I(authority_cert_serial_number) is specified.
type: str
version_added: "2.9"
authority_cert_issuer:
description:
- Names that will be present in the authority cert issuer field of the certificate signing request.
- Values must be prefixed by their options. (i.e., C(email), C(URI), C(DNS), C(RID), C(IP), C(dirName),
C(otherName) and the ones specific to your CA)
- "Example: C(DNS:ca.example.org)"
- If specified, I(authority_key_identifier) must also be specified.
- "Please note that commercial CAs ignore this value, respectively use a value of their
own choice. Specifying this option is mostly useful for self-signed certificates
or for own CAs."
- Note that this is only supported if the C(cryptography) backend is used!
- The C(AuthorityKeyIdentifier) will only be added if at least one of I(authority_key_identifier),
I(authority_cert_issuer) and I(authority_cert_serial_number) is specified.
type: list
elements: str
version_added: "2.9"
authority_cert_serial_number:
description:
- The authority cert serial number.
- Note that this is only supported if the C(cryptography) backend is used!
- "Please note that commercial CAs ignore this value, respectively use a value of their
own choice. Specifying this option is mostly useful for self-signed certificates
or for own CAs."
- The C(AuthorityKeyIdentifier) will only be added if at least one of I(authority_key_identifier),
I(authority_cert_issuer) and I(authority_cert_serial_number) is specified.
type: int
version_added: "2.9"
extends_documentation_fragment:
- files
notes:
- If the certificate signing request already exists it will be checked whether subjectAltName,
keyUsage, extendedKeyUsage and basicConstraints only contain the requested values, whether
OCSP Must Staple is as requested, and if the request was signed by the given private key.
seealso:
- module: openssl_certificate
- module: openssl_dhparam
- module: openssl_pkcs12
- module: openssl_privatekey
- module: openssl_publickey
'''
EXAMPLES = r'''
- name: Generate an OpenSSL Certificate Signing Request
openssl_csr:
path: /etc/ssl/csr/www.ansible.com.csr
privatekey_path: /etc/ssl/private/ansible.com.pem
common_name: www.ansible.com
- name: Generate an OpenSSL Certificate Signing Request with a passphrase protected private key
openssl_csr:
path: /etc/ssl/csr/www.ansible.com.csr
privatekey_path: /etc/ssl/private/ansible.com.pem
privatekey_passphrase: ansible
common_name: www.ansible.com
- name: Generate an OpenSSL Certificate Signing Request with Subject information
openssl_csr:
path: /etc/ssl/csr/www.ansible.com.csr
privatekey_path: /etc/ssl/private/ansible.com.pem
country_name: FR
organization_name: Ansible
email_address: [email protected]
common_name: www.ansible.com
- name: Generate an OpenSSL Certificate Signing Request with subjectAltName extension
openssl_csr:
path: /etc/ssl/csr/www.ansible.com.csr
privatekey_path: /etc/ssl/private/ansible.com.pem
subject_alt_name: 'DNS:www.ansible.com,DNS:m.ansible.com'
- name: Generate an OpenSSL CSR with subjectAltName extension with dynamic list
openssl_csr:
path: /etc/ssl/csr/www.ansible.com.csr
privatekey_path: /etc/ssl/private/ansible.com.pem
subject_alt_name: "{{ item.value | map('regex_replace', '^', 'DNS:') | list }}"
with_dict:
dns_server:
- www.ansible.com
- m.ansible.com
- name: Force regenerate an OpenSSL Certificate Signing Request
openssl_csr:
path: /etc/ssl/csr/www.ansible.com.csr
privatekey_path: /etc/ssl/private/ansible.com.pem
force: yes
common_name: www.ansible.com
- name: Generate an OpenSSL Certificate Signing Request with special key usages
openssl_csr:
path: /etc/ssl/csr/www.ansible.com.csr
privatekey_path: /etc/ssl/private/ansible.com.pem
common_name: www.ansible.com
key_usage:
- digitalSignature
- keyAgreement
extended_key_usage:
- clientAuth
- name: Generate an OpenSSL Certificate Signing Request with OCSP Must Staple
openssl_csr:
path: /etc/ssl/csr/www.ansible.com.csr
privatekey_path: /etc/ssl/private/ansible.com.pem
common_name: www.ansible.com
ocsp_must_staple: yes
'''
RETURN = r'''
privatekey:
description: Path to the TLS/SSL private key the CSR was generated for
returned: changed or success
type: str
sample: /etc/ssl/private/ansible.com.pem
filename:
description: Path to the generated Certificate Signing Request
returned: changed or success
type: str
sample: /etc/ssl/csr/www.ansible.com.csr
subject:
description: A list of the subject tuples attached to the CSR
returned: changed or success
type: list
elements: list
sample: "[('CN', 'www.ansible.com'), ('O', 'Ansible')]"
subjectAltName:
description: The alternative names this CSR is valid for
returned: changed or success
type: list
elements: str
sample: [ 'DNS:www.ansible.com', 'DNS:m.ansible.com' ]
keyUsage:
description: Purpose for which the public key may be used
returned: changed or success
type: list
elements: str
sample: [ 'digitalSignature', 'keyAgreement' ]
extendedKeyUsage:
description: Additional restriction on the public key purposes
returned: changed or success
type: list
elements: str
sample: [ 'clientAuth' ]
basicConstraints:
description: Indicates if the certificate belongs to a CA
returned: changed or success
type: list
elements: str
sample: ['CA:TRUE', 'pathLenConstraint:0']
ocsp_must_staple:
description: Indicates whether the certificate has the OCSP
Must Staple feature enabled
returned: changed or success
type: bool
sample: false
backup_file:
description: Name of backup file created.
returned: changed and if I(backup) is C(yes)
type: str
sample: /path/to/www.ansible.com.csr.2019-03-09@11:22~
'''
import abc
import binascii
import os
import traceback
from distutils.version import LooseVersion
from ansible.module_utils import crypto as crypto_utils
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native, to_bytes, to_text
from ansible.module_utils.compat import ipaddress as compat_ipaddress
MINIMAL_PYOPENSSL_VERSION = '0.15'
MINIMAL_CRYPTOGRAPHY_VERSION = '1.3'
PYOPENSSL_IMP_ERR = None
try:
import OpenSSL
from OpenSSL import crypto
PYOPENSSL_VERSION = LooseVersion(OpenSSL.__version__)
except ImportError:
PYOPENSSL_IMP_ERR = traceback.format_exc()
PYOPENSSL_FOUND = False
else:
PYOPENSSL_FOUND = True
if OpenSSL.SSL.OPENSSL_VERSION_NUMBER >= 0x10100000:
# OpenSSL 1.1.0 or newer
OPENSSL_MUST_STAPLE_NAME = b"tlsfeature"
OPENSSL_MUST_STAPLE_VALUE = b"status_request"
else:
# OpenSSL 1.0.x or older
OPENSSL_MUST_STAPLE_NAME = b"1.3.6.1.5.5.7.1.24"
OPENSSL_MUST_STAPLE_VALUE = b"DER:30:03:02:01:05"
CRYPTOGRAPHY_IMP_ERR = None
try:
import cryptography
import cryptography.x509
import cryptography.x509.oid
import cryptography.exceptions
import cryptography.hazmat.backends
import cryptography.hazmat.primitives.serialization
import cryptography.hazmat.primitives.hashes
CRYPTOGRAPHY_VERSION = LooseVersion(cryptography.__version__)
except ImportError:
CRYPTOGRAPHY_IMP_ERR = traceback.format_exc()
CRYPTOGRAPHY_FOUND = False
else:
CRYPTOGRAPHY_FOUND = True
CRYPTOGRAPHY_MUST_STAPLE_NAME = cryptography.x509.oid.ObjectIdentifier("1.3.6.1.5.5.7.1.24")
CRYPTOGRAPHY_MUST_STAPLE_VALUE = b"\x30\x03\x02\x01\x05"
class CertificateSigningRequestError(crypto_utils.OpenSSLObjectError):
pass
class CertificateSigningRequestBase(crypto_utils.OpenSSLObject):
def __init__(self, module):
super(CertificateSigningRequestBase, self).__init__(
module.params['path'],
module.params['state'],
module.params['force'],
module.check_mode
)
self.digest = module.params['digest']
self.privatekey_path = module.params['privatekey_path']
self.privatekey_passphrase = module.params['privatekey_passphrase']
self.version = module.params['version']
self.subjectAltName = module.params['subject_alt_name']
self.subjectAltName_critical = module.params['subject_alt_name_critical']
self.keyUsage = module.params['key_usage']
self.keyUsage_critical = module.params['key_usage_critical']
self.extendedKeyUsage = module.params['extended_key_usage']
self.extendedKeyUsage_critical = module.params['extended_key_usage_critical']
self.basicConstraints = module.params['basic_constraints']
self.basicConstraints_critical = module.params['basic_constraints_critical']
self.ocspMustStaple = module.params['ocsp_must_staple']
self.ocspMustStaple_critical = module.params['ocsp_must_staple_critical']
self.create_subject_key_identifier = module.params['create_subject_key_identifier']
self.subject_key_identifier = module.params['subject_key_identifier']
self.authority_key_identifier = module.params['authority_key_identifier']
self.authority_cert_issuer = module.params['authority_cert_issuer']
self.authority_cert_serial_number = module.params['authority_cert_serial_number']
self.request = None
self.privatekey = None
if self.create_subject_key_identifier and self.subject_key_identifier is not None:
module.fail_json(msg='subject_key_identifier cannot be specified if create_subject_key_identifier is true')
self.backup = module.params['backup']
self.backup_file = None
self.subject = [
('C', module.params['country_name']),
('ST', module.params['state_or_province_name']),
('L', module.params['locality_name']),
('O', module.params['organization_name']),
('OU', module.params['organizational_unit_name']),
('CN', module.params['common_name']),
('emailAddress', module.params['email_address']),
]
if module.params['subject']:
self.subject = self.subject + crypto_utils.parse_name_field(module.params['subject'])
self.subject = [(entry[0], entry[1]) for entry in self.subject if entry[1]]
if not self.subjectAltName and module.params['use_common_name_for_san']:
for sub in self.subject:
if sub[0] in ('commonName', 'CN'):
self.subjectAltName = ['DNS:%s' % sub[1]]
break
if self.subject_key_identifier is not None:
try:
self.subject_key_identifier = binascii.unhexlify(self.subject_key_identifier.replace(':', ''))
except Exception as e:
raise CertificateSigningRequestError('Cannot parse subject_key_identifier: {0}'.format(e))
if self.authority_key_identifier is not None:
try:
self.authority_key_identifier = binascii.unhexlify(self.authority_key_identifier.replace(':', ''))
except Exception as e:
raise CertificateSigningRequestError('Cannot parse authority_key_identifier: {0}'.format(e))
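        # Editor's illustration of the parsing above (standard-library behaviour):
        #
        #     import binascii
        #     binascii.unhexlify('00:11:22:33'.replace(':', ''))  # -> the four bytes 00 11 22 33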
@abc.abstractmethod
def _generate_csr(self):
pass
def generate(self, module):
'''Generate the certificate signing request.'''
if not self.check(module, perms_required=False) or self.force:
result = self._generate_csr()
if self.backup:
self.backup_file = module.backup_local(self.path)
crypto_utils.write_file(module, result)
self.changed = True
file_args = module.load_file_common_arguments(module.params)
if module.set_fs_attributes_if_different(file_args, False):
self.changed = True
@abc.abstractmethod
def _load_private_key(self):
pass
@abc.abstractmethod
def _check_csr(self):
pass
def check(self, module, perms_required=True):
"""Ensure the resource is in its desired state."""
state_and_perms = super(CertificateSigningRequestBase, self).check(module, perms_required)
self._load_private_key()
if not state_and_perms:
return False
return self._check_csr()
def remove(self, module):
if self.backup:
self.backup_file = module.backup_local(self.path)
super(CertificateSigningRequestBase, self).remove(module)
def dump(self):
'''Serialize the object into a dictionary.'''
result = {
'privatekey': self.privatekey_path,
'filename': self.path,
'subject': self.subject,
'subjectAltName': self.subjectAltName,
'keyUsage': self.keyUsage,
'extendedKeyUsage': self.extendedKeyUsage,
'basicConstraints': self.basicConstraints,
'ocspMustStaple': self.ocspMustStaple,
'changed': self.changed
}
if self.backup_file:
result['backup_file'] = self.backup_file
return result
class CertificateSigningRequestPyOpenSSL(CertificateSigningRequestBase):
def __init__(self, module):
if module.params['create_subject_key_identifier']:
module.fail_json(msg='You cannot use create_subject_key_identifier with the pyOpenSSL backend!')
for o in ('subject_key_identifier', 'authority_key_identifier', 'authority_cert_issuer', 'authority_cert_serial_number'):
if module.params[o] is not None:
module.fail_json(msg='You cannot use {0} with the pyOpenSSL backend!'.format(o))
super(CertificateSigningRequestPyOpenSSL, self).__init__(module)
def _generate_csr(self):
req = crypto.X509Req()
req.set_version(self.version - 1)
subject = req.get_subject()
for entry in self.subject:
if entry[1] is not None:
# Workaround for https://github.com/pyca/pyopenssl/issues/165
nid = OpenSSL._util.lib.OBJ_txt2nid(to_bytes(entry[0]))
if nid == 0:
raise CertificateSigningRequestError('Unknown subject field identifier "{0}"'.format(entry[0]))
res = OpenSSL._util.lib.X509_NAME_add_entry_by_NID(subject._name, nid, OpenSSL._util.lib.MBSTRING_UTF8, to_bytes(entry[1]), -1, -1, 0)
if res == 0:
raise CertificateSigningRequestError('Invalid value for subject field identifier "{0}": {1}'.format(entry[0], entry[1]))
extensions = []
if self.subjectAltName:
altnames = ', '.join(self.subjectAltName)
try:
extensions.append(crypto.X509Extension(b"subjectAltName", self.subjectAltName_critical, altnames.encode('ascii')))
except OpenSSL.crypto.Error as e:
raise CertificateSigningRequestError(
'Error while parsing Subject Alternative Names {0} (check for missing type prefix, such as "DNS:"!): {1}'.format(
', '.join(["{0}".format(san) for san in self.subjectAltName]), str(e)
)
)
if self.keyUsage:
usages = ', '.join(self.keyUsage)
extensions.append(crypto.X509Extension(b"keyUsage", self.keyUsage_critical, usages.encode('ascii')))
if self.extendedKeyUsage:
usages = ', '.join(self.extendedKeyUsage)
extensions.append(crypto.X509Extension(b"extendedKeyUsage", self.extendedKeyUsage_critical, usages.encode('ascii')))
if self.basicConstraints:
usages = ', '.join(self.basicConstraints)
extensions.append(crypto.X509Extension(b"basicConstraints", self.basicConstraints_critical, usages.encode('ascii')))
if self.ocspMustStaple:
extensions.append(crypto.X509Extension(OPENSSL_MUST_STAPLE_NAME, self.ocspMustStaple_critical, OPENSSL_MUST_STAPLE_VALUE))
if extensions:
req.add_extensions(extensions)
req.set_pubkey(self.privatekey)
req.sign(self.privatekey, self.digest)
self.request = req
return crypto.dump_certificate_request(crypto.FILETYPE_PEM, self.request)
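    # Editor's sketch (assumption: pyOpenSSL is installed) of the same flow in
    # isolation, outside this module:
    #
    #     from OpenSSL import crypto
    #
    #     key = crypto.PKey()
    #     key.generate_key(crypto.TYPE_RSA, 2048)
    #     req = crypto.X509Req()
    #     req.get_subject().CN = 'www.example.com'
    #     req.set_pubkey(key)
    #     req.sign(key, 'sha256')
    #     pem = crypto.dump_certificate_request(crypto.FILETYPE_PEM, req)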
def _load_private_key(self):
try:
self.privatekey = crypto_utils.load_privatekey(self.privatekey_path, self.privatekey_passphrase)
except crypto_utils.OpenSSLBadPassphraseError as exc:
raise CertificateSigningRequestError(exc)
def _normalize_san(self, san):
# Apparently OpenSSL returns 'IP address' not 'IP' as specifier when converting the subjectAltName to string
# although it won't accept this specifier when generating the CSR. (https://github.com/openssl/openssl/issues/4004)
if san.startswith('IP Address:'):
san = 'IP:' + san[len('IP Address:'):]
if san.startswith('IP:'):
ip = compat_ipaddress.ip_address(san[3:])
san = 'IP:{0}'.format(ip.compressed)
return san
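    # Editor's examples for the normalisation above:
    #
    #     'IP Address:10.0.0.1'  -> 'IP:10.0.0.1'
    #     'IP:0:0:0:0:0:0:0:1'   -> 'IP:::1'  (compressed IPv6 form)
    #     'DNS:www.example.com'  -> returned unchanged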
def _check_csr(self):
def _check_subject(csr):
subject = [(OpenSSL._util.lib.OBJ_txt2nid(to_bytes(sub[0])), to_bytes(sub[1])) for sub in self.subject]
current_subject = [(OpenSSL._util.lib.OBJ_txt2nid(to_bytes(sub[0])), to_bytes(sub[1])) for sub in csr.get_subject().get_components()]
if not set(subject) == set(current_subject):
return False
return True
def _check_subjectAltName(extensions):
altnames_ext = next((ext for ext in extensions if ext.get_short_name() == b'subjectAltName'), '')
altnames = [self._normalize_san(altname.strip()) for altname in
to_text(altnames_ext, errors='surrogate_or_strict').split(',') if altname.strip()]
if self.subjectAltName:
if (set(altnames) != set([self._normalize_san(to_text(name)) for name in self.subjectAltName]) or
altnames_ext.get_critical() != self.subjectAltName_critical):
return False
else:
if altnames:
return False
return True
def _check_keyUsage_(extensions, extName, expected, critical):
usages_ext = [ext for ext in extensions if ext.get_short_name() == extName]
if (not usages_ext and expected) or (usages_ext and not expected):
return False
elif not usages_ext and not expected:
return True
else:
current = [OpenSSL._util.lib.OBJ_txt2nid(to_bytes(usage.strip())) for usage in str(usages_ext[0]).split(',')]
expected = [OpenSSL._util.lib.OBJ_txt2nid(to_bytes(usage)) for usage in expected]
return set(current) == set(expected) and usages_ext[0].get_critical() == critical
def _check_keyUsage(extensions):
usages_ext = [ext for ext in extensions if ext.get_short_name() == b'keyUsage']
if (not usages_ext and self.keyUsage) or (usages_ext and not self.keyUsage):
return False
elif not usages_ext and not self.keyUsage:
return True
else:
# OpenSSL._util.lib.OBJ_txt2nid() always returns 0 for all keyUsage values
# (since keyUsage has a fixed bitfield for these values and is not extensible).
# Therefore, we create an extension for the wanted values, and compare the
# data of the extensions (which is the serialized bitfield).
expected_ext = crypto.X509Extension(b"keyUsage", False, ', '.join(self.keyUsage).encode('ascii'))
return usages_ext[0].get_data() == expected_ext.get_data() and usages_ext[0].get_critical() == self.keyUsage_critical
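# Illustrative (hypothetical values): b"digitalSignature, keyEncipherment" and
# b"keyEncipherment, digitalSignature" serialize to the same DER bitfield, so
# comparing get_data() is insensitive to the order of the usage names.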
def _check_extendedKeyUsage(extensions):
return _check_keyUsage_(extensions, b'extendedKeyUsage', self.extendedKeyUsage, self.extendedKeyUsage_critical)
def _check_basicConstraints(extensions):
return _check_keyUsage_(extensions, b'basicConstraints', self.basicConstraints, self.basicConstraints_critical)
def _check_ocspMustStaple(extensions):
oms_ext = [ext for ext in extensions if to_bytes(ext.get_short_name()) == OPENSSL_MUST_STAPLE_NAME and to_bytes(ext) == OPENSSL_MUST_STAPLE_VALUE]
if OpenSSL.SSL.OPENSSL_VERSION_NUMBER < 0x10100000:
# Older versions of libssl don't know about OCSP Must Staple
oms_ext.extend([ext for ext in extensions if ext.get_short_name() == b'UNDEF' and ext.get_data() == b'\x30\x03\x02\x01\x05'])
if self.ocspMustStaple:
return len(oms_ext) > 0 and oms_ext[0].get_critical() == self.ocspMustStaple_critical
else:
return len(oms_ext) == 0
def _check_extensions(csr):
extensions = csr.get_extensions()
return (_check_subjectAltName(extensions) and _check_keyUsage(extensions) and
_check_extendedKeyUsage(extensions) and _check_basicConstraints(extensions) and
_check_ocspMustStaple(extensions))
def _check_signature(csr):
try:
return csr.verify(self.privatekey)
except crypto.Error:
return False
try:
csr = crypto_utils.load_certificate_request(self.path, backend='pyopenssl')
except Exception as dummy:
return False
return _check_subject(csr) and _check_extensions(csr) and _check_signature(csr)
class CertificateSigningRequestCryptography(CertificateSigningRequestBase):
def __init__(self, module):
super(CertificateSigningRequestCryptography, self).__init__(module)
self.cryptography_backend = cryptography.hazmat.backends.default_backend()
def _generate_csr(self):
csr = cryptography.x509.CertificateSigningRequestBuilder()
try:
csr = csr.subject_name(cryptography.x509.Name([
cryptography.x509.NameAttribute(crypto_utils.cryptography_name_to_oid(entry[0]), to_text(entry[1])) for entry in self.subject
]))
except ValueError as e:
raise CertificateSigningRequestError(e)
if self.subjectAltName:
csr = csr.add_extension(cryptography.x509.SubjectAlternativeName([
crypto_utils.cryptography_get_name(name) for name in self.subjectAltName
]), critical=self.subjectAltName_critical)
if self.keyUsage:
params = crypto_utils.cryptography_parse_key_usage_params(self.keyUsage)
csr = csr.add_extension(cryptography.x509.KeyUsage(**params), critical=self.keyUsage_critical)
if self.extendedKeyUsage:
usages = [crypto_utils.cryptography_name_to_oid(usage) for usage in self.extendedKeyUsage]
csr = csr.add_extension(cryptography.x509.ExtendedKeyUsage(usages), critical=self.extendedKeyUsage_critical)
if self.basicConstraints:
ca, path_length = crypto_utils.cryptography_get_basic_constraints(self.basicConstraints)
csr = csr.add_extension(cryptography.x509.BasicConstraints(ca, path_length), critical=self.basicConstraints_critical)
if self.ocspMustStaple:
try:
# This only works with cryptography >= 2.1
csr = csr.add_extension(cryptography.x509.TLSFeature([cryptography.x509.TLSFeatureType.status_request]), critical=self.ocspMustStaple_critical)
except AttributeError as dummy:
csr = csr.add_extension(
cryptography.x509.UnrecognizedExtension(CRYPTOGRAPHY_MUST_STAPLE_NAME, CRYPTOGRAPHY_MUST_STAPLE_VALUE),
critical=self.ocspMustStaple_critical
)
if self.create_subject_key_identifier:
csr = csr.add_extension(
cryptography.x509.SubjectKeyIdentifier.from_public_key(self.privatekey.public_key()),
critical=False
)
elif self.subject_key_identifier is not None:
csr = csr.add_extension(cryptography.x509.SubjectKeyIdentifier(self.subject_key_identifier), critical=False)
if self.authority_key_identifier is not None or self.authority_cert_issuer is not None or self.authority_cert_serial_number is not None:
issuers = None
if self.authority_cert_issuer is not None:
issuers = [crypto_utils.cryptography_get_name(n) for n in self.authority_cert_issuer]
csr = csr.add_extension(
cryptography.x509.AuthorityKeyIdentifier(self.authority_key_identifier, issuers, self.authority_cert_serial_number),
critical=False
)
digest = None
if self.digest == 'sha256':
digest = cryptography.hazmat.primitives.hashes.SHA256()
elif self.digest == 'sha384':
digest = cryptography.hazmat.primitives.hashes.SHA384()
elif self.digest == 'sha512':
digest = cryptography.hazmat.primitives.hashes.SHA512()
elif self.digest == 'sha1':
digest = cryptography.hazmat.primitives.hashes.SHA1()
elif self.digest == 'md5':
digest = cryptography.hazmat.primitives.hashes.MD5()
# FIXME
else:
raise CertificateSigningRequestError('Unsupported digest "{0}"'.format(self.digest))
self.request = csr.sign(self.privatekey, digest, self.cryptography_backend)
return self.request.public_bytes(cryptography.hazmat.primitives.serialization.Encoding.PEM)
def _load_private_key(self):
try:
with open(self.privatekey_path, 'rb') as f:
self.privatekey = cryptography.hazmat.primitives.serialization.load_pem_private_key(
f.read(),
None if self.privatekey_passphrase is None else to_bytes(self.privatekey_passphrase),
backend=self.cryptography_backend
)
except Exception as e:
raise CertificateSigningRequestError(e)
def _check_csr(self):
def _check_subject(csr):
subject = [(crypto_utils.cryptography_name_to_oid(entry[0]), entry[1]) for entry in self.subject]
current_subject = [(sub.oid, sub.value) for sub in csr.subject]
return set(subject) == set(current_subject)
def _find_extension(extensions, exttype):
return next(
(ext for ext in extensions if isinstance(ext.value, exttype)),
None
)
def _check_subjectAltName(extensions):
current_altnames_ext = _find_extension(extensions, cryptography.x509.SubjectAlternativeName)
current_altnames = [str(altname) for altname in current_altnames_ext.value] if current_altnames_ext else []
altnames = [str(crypto_utils.cryptography_get_name(altname)) for altname in self.subjectAltName] if self.subjectAltName else []
if set(altnames) != set(current_altnames):
return False
if altnames:
if current_altnames_ext.critical != self.subjectAltName_critical:
return False
return True
def _check_keyUsage(extensions):
current_keyusage_ext = _find_extension(extensions, cryptography.x509.KeyUsage)
if not self.keyUsage:
return current_keyusage_ext is None
elif current_keyusage_ext is None:
return False
params = crypto_utils.cryptography_parse_key_usage_params(self.keyUsage)
for param in params:
if getattr(current_keyusage_ext.value, '_' + param) != params[param]:
return False
if current_keyusage_ext.critical != self.keyUsage_critical:
return False
return True
def _check_extendedKeyUsage(extensions):
current_usages_ext = _find_extension(extensions, cryptography.x509.ExtendedKeyUsage)
current_usages = [str(usage) for usage in current_usages_ext.value] if current_usages_ext else []
usages = [str(crypto_utils.cryptography_name_to_oid(usage)) for usage in self.extendedKeyUsage] if self.extendedKeyUsage else []
if set(current_usages) != set(usages):
return False
if usages:
if current_usages_ext.critical != self.extendedKeyUsage_critical:
return False
return True
def _check_basicConstraints(extensions):
bc_ext = _find_extension(extensions, cryptography.x509.BasicConstraints)
current_ca = bc_ext.value.ca if bc_ext else False
current_path_length = bc_ext.value.path_length if bc_ext else None
ca, path_length = crypto_utils.cryptography_get_basic_constraints(self.basicConstraints)
# Check CA flag
if ca != current_ca:
return False
# Check path length
if path_length != current_path_length:
return False
# Check criticality
if self.basicConstraints:
if bc_ext.critical != self.basicConstraints_critical:
return False
return True
def _check_ocspMustStaple(extensions):
try:
# This only works with cryptography >= 2.1
tlsfeature_ext = _find_extension(extensions, cryptography.x509.TLSFeature)
has_tlsfeature = True
except AttributeError as dummy:
tlsfeature_ext = next(
(ext for ext in extensions if ext.value.oid == CRYPTOGRAPHY_MUST_STAPLE_NAME),
None
)
has_tlsfeature = False
if self.ocspMustStaple:
if not tlsfeature_ext or tlsfeature_ext.critical != self.ocspMustStaple_critical:
return False
if has_tlsfeature:
return cryptography.x509.TLSFeatureType.status_request in tlsfeature_ext.value
else:
return tlsfeature_ext.value.value == CRYPTOGRAPHY_MUST_STAPLE_VALUE
else:
return tlsfeature_ext is None
def _check_subject_key_identifier(extensions):
ext = _find_extension(extensions, cryptography.x509.SubjectKeyIdentifier)
if self.create_subject_key_identifier or self.subject_key_identifier is not None:
if not ext or ext.critical:
return False
if self.create_subject_key_identifier:
digest = cryptography.x509.SubjectKeyIdentifier.from_public_key(self.privatekey.public_key()).digest
return ext.value.digest == digest
else:
return ext.value.digest == self.subject_key_identifier
else:
return ext is None
def _check_authority_key_identifier(extensions):
ext = _find_extension(extensions, cryptography.x509.AuthorityKeyIdentifier)
if self.authority_key_identifier is not None or self.authority_cert_issuer is not None or self.authority_cert_serial_number is not None:
if not ext or ext.critical:
return False
aci = None
csr_aci = None
if self.authority_cert_issuer is not None:
aci = [str(crypto_utils.cryptography_get_name(n)) for n in self.authority_cert_issuer]
if ext.value.authority_cert_issuer is not None:
csr_aci = [str(n) for n in ext.value.authority_cert_issuer]
return (ext.value.key_identifier == self.authority_key_identifier
and csr_aci == aci
and ext.value.authority_cert_serial_number == self.authority_cert_serial_number)
else:
return ext is None
def _check_extensions(csr):
extensions = csr.extensions
return (_check_subjectAltName(extensions) and _check_keyUsage(extensions) and
_check_extendedKeyUsage(extensions) and _check_basicConstraints(extensions) and
_check_ocspMustStaple(extensions) and _check_subject_key_identifier(extensions) and
_check_authority_key_identifier(extensions))
def _check_signature(csr):
if not csr.is_signature_valid:
return False
# To check whether public key of CSR belongs to private key,
# encode both public keys and compare PEMs.
key_a = csr.public_key().public_bytes(
cryptography.hazmat.primitives.serialization.Encoding.PEM,
cryptography.hazmat.primitives.serialization.PublicFormat.SubjectPublicKeyInfo
)
key_b = self.privatekey.public_key().public_bytes(
cryptography.hazmat.primitives.serialization.Encoding.PEM,
cryptography.hazmat.primitives.serialization.PublicFormat.SubjectPublicKeyInfo
)
return key_a == key_b
try:
csr = crypto_utils.load_certificate_request(self.path, backend='cryptography')
except Exception as dummy:
return False
return _check_subject(csr) and _check_extensions(csr) and _check_signature(csr)
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'present']),
digest=dict(type='str', default='sha256'),
privatekey_path=dict(type='path', required=True),
privatekey_passphrase=dict(type='str', no_log=True),
version=dict(type='int', default=1),
force=dict(type='bool', default=False),
path=dict(type='path', required=True),
subject=dict(type='dict'),
country_name=dict(type='str', aliases=['C', 'countryName']),
state_or_province_name=dict(type='str', aliases=['ST', 'stateOrProvinceName']),
locality_name=dict(type='str', aliases=['L', 'localityName']),
organization_name=dict(type='str', aliases=['O', 'organizationName']),
organizational_unit_name=dict(type='str', aliases=['OU', 'organizationalUnitName']),
common_name=dict(type='str', aliases=['CN', 'commonName']),
email_address=dict(type='str', aliases=['E', 'emailAddress']),
subject_alt_name=dict(type='list', elements='str', aliases=['subjectAltName']),
subject_alt_name_critical=dict(type='bool', default=False, aliases=['subjectAltName_critical']),
use_common_name_for_san=dict(type='bool', default=True, aliases=['useCommonNameForSAN']),
key_usage=dict(type='list', elements='str', aliases=['keyUsage']),
key_usage_critical=dict(type='bool', default=False, aliases=['keyUsage_critical']),
extended_key_usage=dict(type='list', elements='str', aliases=['extKeyUsage', 'extendedKeyUsage']),
extended_key_usage_critical=dict(type='bool', default=False, aliases=['extKeyUsage_critical', 'extendedKeyUsage_critical']),
basic_constraints=dict(type='list', elements='str', aliases=['basicConstraints']),
basic_constraints_critical=dict(type='bool', default=False, aliases=['basicConstraints_critical']),
ocsp_must_staple=dict(type='bool', default=False, aliases=['ocspMustStaple']),
ocsp_must_staple_critical=dict(type='bool', default=False, aliases=['ocspMustStaple_critical']),
backup=dict(type='bool', default=False),
create_subject_key_identifier=dict(type='bool', default=False),
subject_key_identifier=dict(type='str'),
authority_key_identifier=dict(type='str'),
authority_cert_issuer=dict(type='list', elements='str'),
authority_cert_serial_number=dict(type='int'),
select_crypto_backend=dict(type='str', default='auto', choices=['auto', 'cryptography', 'pyopenssl']),
),
required_together=[('authority_cert_issuer', 'authority_cert_serial_number')],
add_file_common_args=True,
supports_check_mode=True,
)
base_dir = os.path.dirname(module.params['path']) or '.'
if not os.path.isdir(base_dir):
module.fail_json(name=base_dir, msg='The directory %s does not exist or the path is not a directory' % base_dir)
backend = module.params['select_crypto_backend']
if backend == 'auto':
# Detect what is possible
can_use_cryptography = CRYPTOGRAPHY_FOUND and CRYPTOGRAPHY_VERSION >= LooseVersion(MINIMAL_CRYPTOGRAPHY_VERSION)
can_use_pyopenssl = PYOPENSSL_FOUND and PYOPENSSL_VERSION >= LooseVersion(MINIMAL_PYOPENSSL_VERSION)
# First try cryptography, then pyOpenSSL
if can_use_cryptography:
backend = 'cryptography'
elif can_use_pyopenssl:
backend = 'pyopenssl'
# Success?
if backend == 'auto':
module.fail_json(msg=("Can't detect any of the required Python libraries "
"cryptography (>= {0}) or PyOpenSSL (>= {1})").format(
MINIMAL_CRYPTOGRAPHY_VERSION,
MINIMAL_PYOPENSSL_VERSION))
try:
if backend == 'pyopenssl':
if not PYOPENSSL_FOUND:
module.fail_json(msg=missing_required_lib('pyOpenSSL >= {0}'.format(MINIMAL_PYOPENSSL_VERSION)),
exception=PYOPENSSL_IMP_ERR)
try:
getattr(crypto.X509Req, 'get_extensions')
except AttributeError:
module.fail_json(msg='You need to have PyOpenSSL>=0.15 to generate CSRs')
module.deprecate('The module is using the PyOpenSSL backend. This backend has been deprecated', version='2.13')
csr = CertificateSigningRequestPyOpenSSL(module)
elif backend == 'cryptography':
if not CRYPTOGRAPHY_FOUND:
module.fail_json(msg=missing_required_lib('cryptography >= {0}'.format(MINIMAL_CRYPTOGRAPHY_VERSION)),
exception=CRYPTOGRAPHY_IMP_ERR)
csr = CertificateSigningRequestCryptography(module)
if module.params['state'] == 'present':
if module.check_mode:
result = csr.dump()
result['changed'] = module.params['force'] or not csr.check(module)
module.exit_json(**result)
csr.generate(module)
else:
if module.check_mode:
result = csr.dump()
result['changed'] = os.path.exists(module.params['path'])
module.exit_json(**result)
csr.remove(module)
result = csr.dump()
module.exit_json(**result)
except crypto_utils.OpenSSLObjectError as exc:
module.fail_json(msg=to_native(exc))
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,432 |
postgresql_privs module should support Altering permissions on database object type TYPES
|
##### SUMMARY
postgresql_privs module should support a type parameter option for TYPES to allow Grant/Revoke of usage permissions.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
postgresql_privs
##### ADDITIONAL INFORMATION
Types are supported for assigning default privs, but we don't have parity for the ability to modify permissions of existing TYPES objects: GRANT USAGE ON TYPE schema.obj TO role;
|
https://github.com/ansible/ansible/issues/62432
|
https://github.com/ansible/ansible/pull/63555
|
ae16a840f64e97f8d082c1f7d51342f0f587e68d
|
7dd46f7b2ded8585ecb4e26f3c448c426d2008d9
| 2019-09-17T15:47:17Z |
python
| 2019-10-17T13:59:06Z |
changelogs/fragments/63555-postgresql_privs_typy_obj_types.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,432 |
postgresql_privs module should support Altering permissions on database object type TYPES
|
##### SUMMARY
postgresql_privs module should support a type parameter option for TYPES to allow Grant/Revoke of usage permissions.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
postgresql_privs
##### ADDITIONAL INFORMATION
Types are supported for assigning default privs, but we don't have parity for the ability to modify permissions of existing TYPES objects: GRANT USAGE ON TYPE schema.obj TO role;
|
https://github.com/ansible/ansible/issues/62432
|
https://github.com/ansible/ansible/pull/63555
|
ae16a840f64e97f8d082c1f7d51342f0f587e68d
|
7dd46f7b2ded8585ecb4e26f3c448c426d2008d9
| 2019-09-17T15:47:17Z |
python
| 2019-10-17T13:59:06Z |
lib/ansible/modules/database/postgresql/postgresql_privs.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: postgresql_privs
version_added: '1.2'
short_description: Grant or revoke privileges on PostgreSQL database objects
description:
- Grant or revoke privileges on PostgreSQL database objects.
- This module is basically a wrapper around most of the functionality of
PostgreSQL's GRANT and REVOKE statements with detection of changes
(GRANT/REVOKE I(privs) ON I(type) I(objs) TO/FROM I(roles)).
options:
database:
description:
- Name of database to connect to.
required: yes
type: str
aliases:
- db
- login_db
state:
description:
- If C(present), the specified privileges are granted, if C(absent) they are revoked.
type: str
default: present
choices: [ absent, present ]
privs:
description:
- Comma separated list of privileges to grant/revoke.
type: str
aliases:
- priv
type:
description:
- Type of database object to set privileges on.
- The C(default_privs) choice is available starting at version 2.7.
- The C(foreign_data_wrapper) and C(foreign_server) object types are available from Ansible version 2.8.
type: str
default: table
choices: [ database, default_privs, foreign_data_wrapper, foreign_server, function,
group, language, table, tablespace, schema, sequence ]
objs:
description:
- Comma separated list of database objects to set privileges on.
- If I(type) is C(table), C(partition table), C(sequence) or C(function),
the special value C(ALL_IN_SCHEMA) can be provided instead to specify all
database objects of type I(type) in the schema specified via I(schema).
(This also works with PostgreSQL < 9.0.) (C(ALL_IN_SCHEMA) is available
for C(function) and C(partition table) from version 2.8)
- If I(type) is C(database), this parameter can be omitted, in which case
privileges are set for the database specified via I(database).
- 'If I(type) is I(function), colons (":") in object names will be
replaced with commas (needed to specify function signatures, see examples)'
type: str
aliases:
- obj
schema:
description:
- Schema that contains the database objects specified via I(objs).
- May only be provided if I(type) is C(table), C(sequence), C(function)
or C(default_privs). Defaults to C(public) in these cases.
type: str
roles:
description:
- Comma separated list of role (user/group) names to set permissions for.
- The special value C(PUBLIC) can be provided instead to set permissions
for the implicitly defined PUBLIC group.
type: str
required: yes
aliases:
- role
fail_on_role:
version_added: '2.8'
description:
- If C(yes), fail when target role (for whom privs need to be granted) does not exist.
Otherwise just warn and continue.
default: yes
type: bool
session_role:
version_added: '2.8'
description:
- Switch to session_role after connecting.
- The specified session_role must be a role that the current login_user is a member of.
- Permissions checking for SQL commands is carried out as though the session_role were the one that had logged in originally.
type: str
target_roles:
description:
- A list of existing role (user/group) names to set as the
default permissions for database objects subsequently created by them.
- Parameter I(target_roles) is only available with C(type=default_privs).
type: str
version_added: '2.8'
grant_option:
description:
- Whether C(role) may grant/revoke the specified privileges/group memberships to others.
- Set to C(no) to revoke GRANT OPTION, leave unspecified to make no changes.
- I(grant_option) only has an effect if I(state) is C(present).
type: bool
aliases:
- admin_option
host:
description:
- Database host address. If unspecified, connect via Unix socket.
type: str
aliases:
- login_host
port:
description:
- Database port to connect to.
type: int
default: 5432
aliases:
- login_port
unix_socket:
description:
- Path to a Unix domain socket for local connections.
type: str
aliases:
- login_unix_socket
login:
description:
- The username to authenticate with.
type: str
default: postgres
aliases:
- login_user
password:
description:
- The password to authenticate with.
type: str
aliases:
- login_password
ssl_mode:
description:
- Determines whether or with what priority a secure SSL TCP/IP connection will be negotiated with the server.
- See https://www.postgresql.org/docs/current/static/libpq-ssl.html for more information on the modes.
- Default of C(prefer) matches libpq default.
type: str
default: prefer
choices: [ allow, disable, prefer, require, verify-ca, verify-full ]
version_added: '2.3'
ca_cert:
description:
- Specifies the name of a file containing SSL certificate authority (CA) certificate(s).
- If the file exists, the server's certificate will be verified to be signed by one of these authorities.
version_added: '2.3'
type: str
aliases:
- ssl_rootcert
notes:
- Parameters that accept comma separated lists (I(privs), I(objs), I(roles))
have singular alias names (I(priv), I(obj), I(role)).
- To revoke only C(GRANT OPTION) for a specific object, set I(state) to
C(present) and I(grant_option) to C(no) (see examples).
- Note that when revoking privileges from a role R, this role may still have
access via privileges granted to any role R is a member of including C(PUBLIC).
- Note that when revoking privileges from a role R, you do so as the user
specified via I(login). If R has been granted the same privileges by
another user also, R can still access database objects via these privileges.
- When revoking privileges, C(RESTRICT) is assumed (see PostgreSQL docs).
seealso:
- module: postgresql_user
- module: postgresql_owner
- module: postgresql_membership
- name: PostgreSQL privileges
description: General information about PostgreSQL privileges.
link: https://www.postgresql.org/docs/current/ddl-priv.html
- name: PostgreSQL GRANT command reference
description: Complete reference of the PostgreSQL GRANT command documentation.
link: https://www.postgresql.org/docs/current/sql-grant.html
- name: PostgreSQL REVOKE command reference
description: Complete reference of the PostgreSQL REVOKE command documentation.
link: https://www.postgresql.org/docs/current/sql-revoke.html
extends_documentation_fragment:
- postgres
author:
- Bernhard Weitzhofer (@b6d)
'''
EXAMPLES = r'''
# On database "library":
# GRANT SELECT, INSERT, UPDATE ON TABLE public.books, public.authors
# TO librarian, reader WITH GRANT OPTION
- name: Grant privs to librarian and reader on database library
postgresql_privs:
database: library
state: present
privs: SELECT,INSERT,UPDATE
type: table
objs: books,authors
schema: public
roles: librarian,reader
grant_option: yes
- name: Same as above leveraging default values
postgresql_privs:
db: library
privs: SELECT,INSERT,UPDATE
objs: books,authors
roles: librarian,reader
grant_option: yes
# REVOKE GRANT OPTION FOR INSERT ON TABLE books FROM reader
# Note that role "reader" will be *granted* INSERT privilege itself if this
# isn't already the case (since state: present).
- name: Revoke privs from reader
postgresql_privs:
db: library
state: present
priv: INSERT
obj: books
role: reader
grant_option: no
# "public" is the default schema. This also works for PostgreSQL 8.x.
- name: REVOKE INSERT, UPDATE ON ALL TABLES IN SCHEMA public FROM reader
postgresql_privs:
db: library
state: absent
privs: INSERT,UPDATE
objs: ALL_IN_SCHEMA
role: reader
- name: GRANT ALL PRIVILEGES ON SCHEMA public, math TO librarian
postgresql_privs:
db: library
privs: ALL
type: schema
objs: public,math
role: librarian
# Note the separation of arguments with colons.
- name: GRANT ALL PRIVILEGES ON FUNCTION math.add(int, int) TO librarian, reader
postgresql_privs:
db: library
privs: ALL
type: function
obj: add(int:int)
schema: math
roles: librarian,reader
# Note that group role memberships apply cluster-wide and therefore are not
# restricted to database "library" here.
- name: GRANT librarian, reader TO alice, bob WITH ADMIN OPTION
postgresql_privs:
db: library
type: group
objs: librarian,reader
roles: alice,bob
admin_option: yes
# Note that here "db: postgres" specifies the database to connect to, not the
# database to grant privileges on (which is specified via the "objs" param)
- name: GRANT ALL PRIVILEGES ON DATABASE library TO librarian
postgresql_privs:
db: postgres
privs: ALL
type: database
obj: library
role: librarian
# If objs is omitted for type "database", it defaults to the database
# to which the connection is established
- name: GRANT ALL PRIVILEGES ON DATABASE library TO librarian
postgresql_privs:
db: library
privs: ALL
type: database
role: librarian
# Available since version 2.7
# Objs must be set to ALL_DEFAULT or to one or more of TABLES/SEQUENCES/TYPES/FUNCTIONS
# ALL_DEFAULT works only with privs=ALL
# For specific objects and privileges, see the examples below
- name: ALTER DEFAULT PRIVILEGES ON DATABASE library TO librarian
postgresql_privs:
db: library
objs: ALL_DEFAULT
privs: ALL
type: default_privs
role: librarian
grant_option: yes
# Available since version 2.7
# Objs must be set to ALL_DEFAULT or to one or more of TABLES/SEQUENCES/TYPES/FUNCTIONS
# ALL_DEFAULT works only with privs=ALL
# For specific objects and privileges, see steps 1 and 2 below
- name: ALTER DEFAULT PRIVILEGES ON DATABASE library TO reader, step 1
postgresql_privs:
db: library
objs: TABLES,SEQUENCES
privs: SELECT
type: default_privs
role: reader
- name: ALTER DEFAULT PRIVILEGES ON DATABASE library TO reader, step 2
postgresql_privs:
db: library
objs: TYPES
privs: USAGE
type: default_privs
role: reader
# Available since version 2.8
- name: GRANT ALL PRIVILEGES ON FOREIGN DATA WRAPPER fdw TO reader
postgresql_privs:
db: test
objs: fdw
privs: ALL
type: foreign_data_wrapper
role: reader
# Available since version 2.8
- name: GRANT ALL PRIVILEGES ON FOREIGN SERVER fdw_server TO reader
postgresql_privs:
db: test
objs: fdw_server
privs: ALL
type: foreign_server
role: reader
# Available since version 2.8
# Grant 'execute' permissions on all functions in schema 'common' to role 'caller'
- name: GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA common TO caller
postgresql_privs:
type: function
state: present
privs: EXECUTE
roles: caller
objs: ALL_IN_SCHEMA
schema: common
# Available since version 2.8
# ALTER DEFAULT PRIVILEGES FOR ROLE librarian IN SCHEMA library GRANT SELECT ON TABLES TO reader
# GRANT SELECT privileges for new TABLES objects created by librarian as
# default to the role reader.
# For specific objects and privileges, see the example below
- name: ALTER privs
postgresql_privs:
db: library
schema: library
objs: TABLES
privs: SELECT
type: default_privs
role: reader
target_roles: librarian
# Available since version 2.8
# ALTER DEFAULT PRIVILEGES FOR ROLE librarian IN SCHEMA library REVOKE SELECT ON TABLES FROM reader
# REVOKE SELECT privileges for new TABLES objects created by librarian as
# default from the role reader.
# For specific objects and privileges, see the example below
- name: ALTER privs
postgresql_privs:
db: library
state: absent
schema: library
objs: TABLES
privs: SELECT
type: default_privs
role: reader
target_roles: librarian
'''
RETURN = r'''
queries:
description: List of executed queries.
returned: always
type: list
sample: ['REVOKE GRANT OPTION FOR INSERT ON TABLE "books" FROM "reader";']
version_added: '2.8'
'''
import traceback
PSYCOPG2_IMP_ERR = None
try:
import psycopg2
import psycopg2.extensions
except ImportError:
PSYCOPG2_IMP_ERR = traceback.format_exc()
psycopg2 = None
# import module snippets
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.database import pg_quote_identifier
from ansible.module_utils.postgres import postgres_common_argument_spec
from ansible.module_utils._text import to_native
VALID_PRIVS = frozenset(('SELECT', 'INSERT', 'UPDATE', 'DELETE', 'TRUNCATE',
'REFERENCES', 'TRIGGER', 'CREATE', 'CONNECT',
'TEMPORARY', 'TEMP', 'EXECUTE', 'USAGE', 'ALL'))
VALID_DEFAULT_OBJS = {'TABLES': ('ALL', 'SELECT', 'INSERT', 'UPDATE', 'DELETE', 'TRUNCATE', 'REFERENCES', 'TRIGGER'),
'SEQUENCES': ('ALL', 'SELECT', 'UPDATE', 'USAGE'),
'FUNCTIONS': ('ALL', 'EXECUTE'),
'TYPES': ('ALL', 'USAGE')}
executed_queries = []
class Error(Exception):
pass
def role_exists(module, cursor, rolname):
"""Check user exists or not"""
query = "SELECT 1 FROM pg_roles WHERE rolname = '%s'" % rolname
try:
cursor.execute(query)
return cursor.rowcount > 0
except Exception as e:
module.fail_json(msg="Cannot execute SQL '%s': %s" % (query, to_native(e)))
return False
# We don't have functools.partial in Python < 2.5
def partial(f, *args, **kwargs):
"""Partial function application"""
def g(*g_args, **g_kwargs):
new_kwargs = kwargs.copy()
new_kwargs.update(g_kwargs)
return f(*(args + g_args), **new_kwargs)
g.f = f
g.args = args
g.kwargs = kwargs
return g
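# Example (hypothetical): partial(get_table_acls, 'public') returns a callable
# g such that g(['books']) is equivalent to get_table_acls('public', ['books']);
# positional arguments are prepended and keyword arguments are merged.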
class Connection(object):
"""Wrapper around a psycopg2 connection with some convenience methods"""
def __init__(self, params, module):
self.database = params.database
self.module = module
# To use defaults values, keyword arguments must be absent, so
# check which values are empty and don't include in the **kw
# dictionary
params_map = {
"host": "host",
"login": "user",
"password": "password",
"port": "port",
"database": "database",
"ssl_mode": "sslmode",
"ca_cert": "sslrootcert"
}
kw = dict((params_map[k], getattr(params, k)) for k in params_map
if getattr(params, k) != '' and getattr(params, k) is not None)
# If a unix_socket is specified, incorporate it here.
is_localhost = "host" not in kw or kw["host"] == "" or kw["host"] == "localhost"
if is_localhost and params.unix_socket != "":
kw["host"] = params.unix_socket
sslrootcert = params.ca_cert
if psycopg2.__version__ < '2.4.3' and sslrootcert is not None:
raise ValueError('psycopg2 must be at least 2.4.3 in order to use the ca_cert parameter')
self.connection = psycopg2.connect(**kw)
self.cursor = self.connection.cursor()
def commit(self):
self.connection.commit()
def rollback(self):
self.connection.rollback()
@property
def encoding(self):
"""Connection encoding in Python-compatible form"""
return psycopg2.extensions.encodings[self.connection.encoding]
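# Illustrative (hypothetical session): a connection whose server-side encoding
# is 'UTF8' maps through psycopg2.extensions.encodings to the Python codec
# name 'utf_8'.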
# Methods for querying database objects
# PostgreSQL < 9.0 doesn't support "ALL TABLES IN SCHEMA schema"-like
# phrases in GRANT or REVOKE statements, therefore alternative methods are
# provided here.
def schema_exists(self, schema):
query = """SELECT count(*)
FROM pg_catalog.pg_namespace WHERE nspname = %s"""
self.cursor.execute(query, (schema,))
return self.cursor.fetchone()[0] > 0
def get_all_tables_in_schema(self, schema):
if not self.schema_exists(schema):
raise Error('Schema "%s" does not exist.' % schema)
query = """SELECT relname
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE nspname = %s AND relkind in ('r', 'v', 'm', 'p')"""
self.cursor.execute(query, (schema,))
return [t[0] for t in self.cursor.fetchall()]
def get_all_sequences_in_schema(self, schema):
if not self.schema_exists(schema):
raise Error('Schema "%s" does not exist.' % schema)
query = """SELECT relname
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE nspname = %s AND relkind = 'S'"""
self.cursor.execute(query, (schema,))
return [t[0] for t in self.cursor.fetchall()]
def get_all_functions_in_schema(self, schema):
if not self.schema_exists(schema):
raise Error('Schema "%s" does not exist.' % schema)
query = """SELECT p.proname, oidvectortypes(p.proargtypes)
FROM pg_catalog.pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE nspname = %s"""
self.cursor.execute(query, (schema,))
return ["%s(%s)" % (t[0], t[1]) for t in self.cursor.fetchall()]
# Methods for getting access control lists and group membership info
# To determine whether anything has changed after granting/revoking
# privileges, we compare the access control lists of the specified database
# objects before and afterwards. Python's list/string comparison should
# suffice for change detection, we should not actually have to parse ACLs.
# The same should apply to group membership information.
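# A minimal sketch of that idea (hypothetical aclitem strings):
#   before = ['{postgres=arwdDxt/postgres}']
#   ... GRANT/REVOKE is executed ...
#   after = ['{postgres=arwdDxt/postgres,reader=r/postgres}']
#   changed = before != after  # True, with no ACL parsing required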
def get_table_acls(self, schema, tables):
query = """SELECT relacl
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE nspname = %s AND relkind in ('r','p','v','m') AND relname = ANY (%s)
ORDER BY relname"""
self.cursor.execute(query, (schema, tables))
return [t[0] for t in self.cursor.fetchall()]
def get_sequence_acls(self, schema, sequences):
query = """SELECT relacl
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE nspname = %s AND relkind = 'S' AND relname = ANY (%s)
ORDER BY relname"""
self.cursor.execute(query, (schema, sequences))
return [t[0] for t in self.cursor.fetchall()]
def get_function_acls(self, schema, function_signatures):
funcnames = [f.split('(', 1)[0] for f in function_signatures]
query = """SELECT proacl
FROM pg_catalog.pg_proc p
JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
WHERE nspname = %s AND proname = ANY (%s)
ORDER BY proname, proargtypes"""
self.cursor.execute(query, (schema, funcnames))
return [t[0] for t in self.cursor.fetchall()]
def get_schema_acls(self, schemas):
query = """SELECT nspacl FROM pg_catalog.pg_namespace
WHERE nspname = ANY (%s) ORDER BY nspname"""
self.cursor.execute(query, (schemas,))
return [t[0] for t in self.cursor.fetchall()]
def get_language_acls(self, languages):
query = """SELECT lanacl FROM pg_catalog.pg_language
WHERE lanname = ANY (%s) ORDER BY lanname"""
self.cursor.execute(query, (languages,))
return [t[0] for t in self.cursor.fetchall()]
def get_tablespace_acls(self, tablespaces):
query = """SELECT spcacl FROM pg_catalog.pg_tablespace
WHERE spcname = ANY (%s) ORDER BY spcname"""
self.cursor.execute(query, (tablespaces,))
return [t[0] for t in self.cursor.fetchall()]
def get_database_acls(self, databases):
query = """SELECT datacl FROM pg_catalog.pg_database
WHERE datname = ANY (%s) ORDER BY datname"""
self.cursor.execute(query, (databases,))
return [t[0] for t in self.cursor.fetchall()]
def get_group_memberships(self, groups):
query = """SELECT roleid, grantor, member, admin_option
FROM pg_catalog.pg_auth_members am
JOIN pg_catalog.pg_roles r ON r.oid = am.roleid
WHERE r.rolname = ANY(%s)
ORDER BY roleid, grantor, member"""
self.cursor.execute(query, (groups,))
return self.cursor.fetchall()
def get_default_privs(self, schema, *args):
query = """SELECT defaclacl
FROM pg_default_acl a
JOIN pg_namespace b ON a.defaclnamespace=b.oid
WHERE b.nspname = %s;"""
self.cursor.execute(query, (schema,))
return [t[0] for t in self.cursor.fetchall()]
def get_foreign_data_wrapper_acls(self, fdws):
query = """SELECT fdwacl FROM pg_catalog.pg_foreign_data_wrapper
WHERE fdwname = ANY (%s) ORDER BY fdwname"""
self.cursor.execute(query, (fdws,))
return [t[0] for t in self.cursor.fetchall()]
def get_foreign_server_acls(self, fs):
query = """SELECT srvacl FROM pg_catalog.pg_foreign_server
WHERE srvname = ANY (%s) ORDER BY srvname"""
self.cursor.execute(query, (fs,))
return [t[0] for t in self.cursor.fetchall()]
# Manipulating privileges
def manipulate_privs(self, obj_type, privs, objs, roles, target_roles,
state, grant_option, schema_qualifier=None, fail_on_role=True):
"""Manipulate database object privileges.
:param obj_type: Type of database object to grant/revoke
privileges for.
:param privs: Either a list of privileges to grant/revoke
or None if type is "group".
:param objs: List of database objects to grant/revoke
privileges for.
:param roles: Either a list of role names or "PUBLIC"
for the implicitly defined "PUBLIC" group
:param target_roles: List of role names to grant/revoke
default privileges as.
:param state: "present" to grant privileges, "absent" to revoke.
:param grant_option: Only for state "present": If True, set
grant/admin option. If False, revoke it.
If None, don't change grant option.
:param schema_qualifier: Some object types ("TABLE", "SEQUENCE",
"FUNCTION") must be qualified by schema.
Ignored for other types.
:param fail_on_role: If True, fail when a role in roles does not
exist; otherwise warn and skip that role.
"""
# get_status: function to get current status
if obj_type == 'table':
get_status = partial(self.get_table_acls, schema_qualifier)
elif obj_type == 'sequence':
get_status = partial(self.get_sequence_acls, schema_qualifier)
elif obj_type == 'function':
get_status = partial(self.get_function_acls, schema_qualifier)
elif obj_type == 'schema':
get_status = self.get_schema_acls
elif obj_type == 'language':
get_status = self.get_language_acls
elif obj_type == 'tablespace':
get_status = self.get_tablespace_acls
elif obj_type == 'database':
get_status = self.get_database_acls
elif obj_type == 'group':
get_status = self.get_group_memberships
elif obj_type == 'default_privs':
get_status = partial(self.get_default_privs, schema_qualifier)
elif obj_type == 'foreign_data_wrapper':
get_status = self.get_foreign_data_wrapper_acls
elif obj_type == 'foreign_server':
get_status = self.get_foreign_server_acls
else:
raise Error('Unsupported database object type "%s".' % obj_type)
# Return False (nothing has changed) if there are no objs to work on.
if not objs:
return False
# obj_ids: quoted db object identifiers (sometimes schema-qualified)
if obj_type == 'function':
obj_ids = []
for obj in objs:
try:
f, args = obj.split('(', 1)
except Exception:
raise Error('Illegal function signature: "%s".' % obj)
obj_ids.append('"%s"."%s"(%s' % (schema_qualifier, f, args))
elif obj_type in ['table', 'sequence']:
obj_ids = ['"%s"."%s"' % (schema_qualifier, o) for o in objs]
else:
obj_ids = ['"%s"' % o for o in objs]
# set_what: SQL-fragment specifying what to set for the target roles:
# Either group membership or privileges on objects of a certain type
if obj_type == 'group':
set_what = ','.join(pg_quote_identifier(i, 'role') for i in obj_ids)
elif obj_type == 'default_privs':
# We don't want privs to be quoted here
set_what = ','.join(privs)
else:
# function types are already quoted above
if obj_type != 'function':
obj_ids = [pg_quote_identifier(i, 'table') for i in obj_ids]
# Note: obj_type has been checked against a set of string literals
# and privs was escaped when it was parsed
# Note: Underscores are replaced with spaces to support multi-word obj_type
set_what = '%s ON %s %s' % (','.join(privs), obj_type.replace('_', ' '),
','.join(obj_ids))
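# e.g. (hypothetical inputs): privs=['ALL'], obj_type='foreign_data_wrapper'
# and obj_ids=['"dummy"'] produce set_what == 'ALL ON foreign data wrapper "dummy"'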
# for_whom: SQL-fragment specifying for whom to set the above
if roles == 'PUBLIC':
for_whom = 'PUBLIC'
else:
for_whom = []
for r in roles:
if not role_exists(self.module, self.cursor, r):
if fail_on_role:
self.module.fail_json(msg="Role '%s' does not exist" % r.strip())
else:
self.module.warn("Role '%s' does not exist, skipping it" % r.strip())
else:
for_whom.append(pg_quote_identifier(r, 'role'))
if not for_whom:
return False
for_whom = ','.join(for_whom)
# as_who:
as_who = None
if target_roles:
as_who = ','.join(pg_quote_identifier(r, 'role') for r in target_roles)
status_before = get_status(objs)
query = QueryBuilder(state) \
.for_objtype(obj_type) \
.with_grant_option(grant_option) \
.for_whom(for_whom) \
.as_who(as_who) \
.for_schema(schema_qualifier) \
.set_what(set_what) \
.for_objs(objs) \
.build()
executed_queries.append(query)
self.cursor.execute(query)
status_after = get_status(objs)
return status_before != status_after
class QueryBuilder(object):
def __init__(self, state):
self._grant_option = None
self._for_whom = None
self._as_who = None
self._set_what = None
self._obj_type = None
self._state = state
self._schema = None
self._objs = None
self.query = []
def for_objs(self, objs):
self._objs = objs
return self
def for_schema(self, schema):
self._schema = schema
return self
def with_grant_option(self, option):
self._grant_option = option
return self
def for_whom(self, who):
self._for_whom = who
return self
def as_who(self, target_roles):
self._as_who = target_roles
return self
def set_what(self, what):
self._set_what = what
return self
def for_objtype(self, objtype):
self._obj_type = objtype
return self
def build(self):
if self._state == 'present':
self.build_present()
else:
self.build_absent()
return '\n'.join(self.query)
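# Illustrative output (hypothetical inputs): with state='present',
# set_what='SELECT ON table "public"."books"', for_whom='"reader"' and
# grant_option=True, build() returns
#   'GRANT SELECT ON table "public"."books" TO "reader" WITH GRANT OPTION;'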
def add_default_revoke(self):
for obj in self._objs:
if self._as_who:
self.query.append(
'ALTER DEFAULT PRIVILEGES FOR ROLE {0} IN SCHEMA {1} REVOKE ALL ON {2} FROM {3};'.format(self._as_who,
self._schema, obj,
self._for_whom))
else:
self.query.append(
'ALTER DEFAULT PRIVILEGES IN SCHEMA {0} REVOKE ALL ON {1} FROM {2};'.format(self._schema, obj,
self._for_whom))
def add_grant_option(self):
if self._grant_option:
if self._obj_type == 'group':
self.query[-1] += ' WITH ADMIN OPTION;'
else:
self.query[-1] += ' WITH GRANT OPTION;'
else:
self.query[-1] += ';'
if self._obj_type == 'group':
self.query.append('REVOKE ADMIN OPTION FOR {0} FROM {1};'.format(self._set_what, self._for_whom))
elif not self._obj_type == 'default_privs':
self.query.append('REVOKE GRANT OPTION FOR {0} FROM {1};'.format(self._set_what, self._for_whom))
def add_default_priv(self):
for obj in self._objs:
if self._as_who:
self.query.append(
'ALTER DEFAULT PRIVILEGES FOR ROLE {0} IN SCHEMA {1} GRANT {2} ON {3} TO {4}'.format(self._as_who,
self._schema,
self._set_what,
obj,
self._for_whom))
else:
self.query.append(
'ALTER DEFAULT PRIVILEGES IN SCHEMA {0} GRANT {1} ON {2} TO {3}'.format(self._schema,
self._set_what,
obj,
self._for_whom))
self.add_grant_option()
if self._as_who:
self.query.append(
'ALTER DEFAULT PRIVILEGES FOR ROLE {0} IN SCHEMA {1} GRANT USAGE ON TYPES TO {2}'.format(self._as_who,
self._schema,
self._for_whom))
else:
self.query.append(
'ALTER DEFAULT PRIVILEGES IN SCHEMA {0} GRANT USAGE ON TYPES TO {1}'.format(self._schema, self._for_whom))
self.add_grant_option()
def build_present(self):
if self._obj_type == 'default_privs':
self.add_default_revoke()
self.add_default_priv()
else:
self.query.append('GRANT {0} TO {1}'.format(self._set_what, self._for_whom))
self.add_grant_option()
def build_absent(self):
if self._obj_type == 'default_privs':
self.query = []
for obj in ['TABLES', 'SEQUENCES', 'TYPES']:
if self._as_who:
self.query.append(
'ALTER DEFAULT PRIVILEGES FOR ROLE {0} IN SCHEMA {1} REVOKE ALL ON {2} FROM {3};'.format(self._as_who,
self._schema, obj,
self._for_whom))
else:
self.query.append(
'ALTER DEFAULT PRIVILEGES IN SCHEMA {0} REVOKE ALL ON {1} FROM {2};'.format(self._schema, obj,
self._for_whom))
else:
self.query.append('REVOKE {0} FROM {1};'.format(self._set_what, self._for_whom))
def main():
argument_spec = postgres_common_argument_spec()
argument_spec.update(
database=dict(required=True, aliases=['db', 'login_db']),
state=dict(default='present', choices=['present', 'absent']),
privs=dict(required=False, aliases=['priv']),
type=dict(default='table',
choices=['table',
'sequence',
'function',
'database',
'schema',
'language',
'tablespace',
'group',
'default_privs',
'foreign_data_wrapper',
'foreign_server']),
objs=dict(required=False, aliases=['obj']),
schema=dict(required=False),
roles=dict(required=True, aliases=['role']),
session_role=dict(required=False),
target_roles=dict(required=False),
grant_option=dict(required=False, type='bool',
aliases=['admin_option']),
host=dict(default='', aliases=['login_host']),
unix_socket=dict(default='', aliases=['login_unix_socket']),
login=dict(default='postgres', aliases=['login_user']),
password=dict(default='', aliases=['login_password'], no_log=True),
fail_on_role=dict(type='bool', default=True),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
fail_on_role = module.params['fail_on_role']
# Create type object as namespace for module params
p = type('Params', (), module.params)
# param "schema": default, allowed depends on param "type"
if p.type in ['table', 'sequence', 'function', 'default_privs']:
p.schema = p.schema or 'public'
elif p.schema:
module.fail_json(msg='Argument "schema" is not allowed '
'for type "%s".' % p.type)
# param "objs": default, required depends on param "type"
if p.type == 'database':
p.objs = p.objs or p.database
elif not p.objs:
module.fail_json(msg='Argument "objs" is required '
'for type "%s".' % p.type)
# param "privs": allowed, required depends on param "type"
if p.type == 'group':
if p.privs:
module.fail_json(msg='Argument "privs" is not allowed '
'for type "group".')
elif not p.privs:
module.fail_json(msg='Argument "privs" is required '
'for type "%s".' % p.type)
# Connect to Database
if not psycopg2:
module.fail_json(msg=missing_required_lib('psycopg2'), exception=PSYCOPG2_IMP_ERR)
try:
conn = Connection(p, module)
except psycopg2.Error as e:
module.fail_json(msg='Could not connect to database: %s' % to_native(e), exception=traceback.format_exc())
except TypeError as e:
if 'sslrootcert' in e.args[0]:
module.fail_json(msg='Postgresql server must be at least version 8.4 to support sslrootcert')
module.fail_json(msg="unable to connect to database: %s" % to_native(e), exception=traceback.format_exc())
except ValueError as e:
# We raise this when the psycopg library is too old
module.fail_json(msg=to_native(e))
if p.session_role:
try:
conn.cursor.execute('SET ROLE %s' % pg_quote_identifier(p.session_role, 'role'))
except Exception as e:
module.fail_json(msg="Could not switch to role %s: %s" % (p.session_role, to_native(e)), exception=traceback.format_exc())
try:
# privs
if p.privs:
privs = frozenset(pr.upper() for pr in p.privs.split(','))
if not privs.issubset(VALID_PRIVS):
module.fail_json(msg='Invalid privileges specified: %s' % privs.difference(VALID_PRIVS))
else:
privs = None
# objs:
if p.type == 'table' and p.objs == 'ALL_IN_SCHEMA':
objs = conn.get_all_tables_in_schema(p.schema)
elif p.type == 'sequence' and p.objs == 'ALL_IN_SCHEMA':
objs = conn.get_all_sequences_in_schema(p.schema)
elif p.type == 'function' and p.objs == 'ALL_IN_SCHEMA':
objs = conn.get_all_functions_in_schema(p.schema)
elif p.type == 'default_privs':
if p.objs == 'ALL_DEFAULT':
objs = frozenset(VALID_DEFAULT_OBJS.keys())
else:
objs = frozenset(obj.upper() for obj in p.objs.split(','))
if not objs.issubset(VALID_DEFAULT_OBJS):
module.fail_json(
msg='Invalid Object set specified: %s' % objs.difference(VALID_DEFAULT_OBJS.keys()))
# Again, do we have valid privs specified for object type:
valid_objects_for_priv = frozenset(obj for obj in objs if privs.issubset(VALID_DEFAULT_OBJS[obj]))
if not valid_objects_for_priv == objs:
module.fail_json(
msg='Invalid priv specified. Valid object for priv: {0}. Objects: {1}'.format(
valid_objects_for_priv, objs))
else:
objs = p.objs.split(',')
# function signatures are encoded using ':' to separate args
if p.type == 'function':
objs = [obj.replace(':', ',') for obj in objs]
# roles
if p.roles == 'PUBLIC':
roles = 'PUBLIC'
else:
roles = p.roles.split(',')
if len(roles) == 1 and not role_exists(module, conn.cursor, roles[0]):
if fail_on_role:
module.fail_json(msg="Role '%s' does not exist" % roles[0].strip())
else:
module.warn("Role '%s' does not exist, nothing to do" % roles[0].strip())
module.exit_json(changed=False)
# check if target_roles is set with type: default_privs
if p.target_roles and not p.type == 'default_privs':
module.warn('"target_roles" will be ignored. '
'Argument "type: default_privs" is required for usage of "target_roles".')
# target roles
if p.target_roles:
target_roles = p.target_roles.split(',')
else:
target_roles = None
changed = conn.manipulate_privs(
obj_type=p.type,
privs=privs,
objs=objs,
roles=roles,
target_roles=target_roles,
state=p.state,
grant_option=p.grant_option,
schema_qualifier=p.schema,
fail_on_role=fail_on_role,
)
except Error as e:
conn.rollback()
module.fail_json(msg=to_native(e), exception=traceback.format_exc())
except psycopg2.Error as e:
conn.rollback()
module.fail_json(msg=to_native(e))
if module.check_mode:
conn.rollback()
else:
conn.commit()
module.exit_json(changed=changed, queries=executed_queries)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,432 |
postgresql_privs module should support Altering permissions on database object type TYPES
|
##### SUMMARY
postgresql_privs module should support a type parameter option for TYPES to allow Grant/Revoke of usage permissions.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
postgresql_privs
##### ADDITIONAL INFORMATION
Types are supported for assigning default privs, but we don't have parity for the ability to modify permissions of existing TYPES objects: GRANT USAGE ON TYPE schema.obj TO role;
|
https://github.com/ansible/ansible/issues/62432
|
https://github.com/ansible/ansible/pull/63555
|
ae16a840f64e97f8d082c1f7d51342f0f587e68d
|
7dd46f7b2ded8585ecb4e26f3c448c426d2008d9
| 2019-09-17T15:47:17Z |
python
| 2019-10-17T13:59:06Z |
test/integration/targets/postgresql_privs/tasks/postgresql_privs_general.yml
|
# Setup
- name: Create DB
become_user: "{{ pg_user }}"
become: yes
postgresql_db:
state: present
name: "{{ db_name }}"
login_user: "{{ pg_user }}"
- name: Create a user to be owner of objects
postgresql_user:
name: "{{ db_user3 }}"
state: present
encrypted: yes
password: password
role_attr_flags: CREATEDB,LOGIN
db: "{{ db_name }}"
login_user: "{{ pg_user }}"
- name: Create a user to be given permissions and other tests
postgresql_user:
name: "{{ db_user2 }}"
state: present
encrypted: yes
password: password
role_attr_flags: LOGIN
db: "{{ db_name }}"
login_user: "{{ pg_user }}"
#############################
# Test of solving bug 27327 #
#############################
# Create the test table and view:
- name: Create table
become: yes
become_user: "{{ pg_user }}"
postgresql_table:
login_user: "{{ pg_user }}"
db: postgres
name: test_table1
columns:
- id int
- name: Create view
become: yes
become_user: "{{ pg_user }}"
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: "CREATE VIEW test_view AS SELECT id FROM test_table1"
# Test check_mode:
- name: Grant SELECT on test_view, check_mode
become: yes
become_user: "{{ pg_user }}"
postgresql_privs:
login_user: "{{ pg_user }}"
db: postgres
state: present
privs: SELECT
type: table
objs: test_view
roles: "{{ db_user2 }}"
check_mode: yes
register: result
- assert:
that:
- result is changed
# Check:
- name: Check that nothing was changed after the prev step
become: yes
become_user: "{{ pg_user }}"
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: "SELECT grantee FROM information_schema.role_table_grants WHERE table_name='test_view' AND grantee = '{{ db_user2 }}'"
register: result
- assert:
that:
- result.rowcount == 0
# Test real mode:
- name: Grant SELECT on test_view
become: yes
become_user: "{{ pg_user }}"
postgresql_privs:
login_user: "{{ pg_user }}"
db: postgres
state: present
privs: SELECT
type: table
objs: test_view
roles: "{{ db_user2 }}"
register: result
- assert:
that:
- result is changed
# Check:
- name: Check that the privilege was granted in the prev step
become: yes
become_user: "{{ pg_user }}"
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: "SELECT grantee FROM information_schema.role_table_grants WHERE table_name='test_view' AND grantee = '{{ db_user2 }}'"
register: result
- assert:
that:
- result.rowcount == 1
# Test idempotence:
- name: Try to grant SELECT again
become: yes
become_user: "{{ pg_user }}"
postgresql_privs:
login_user: "{{ pg_user }}"
db: postgres
state: present
privs: SELECT
type: table
objs: test_view
roles: "{{ db_user2 }}"
register: result
- assert:
that:
- result is not changed
# Cleanup:
- name: Drop test view
become: yes
become_user: "{{ pg_user }}"
postgresql_query:
login_user: "{{ pg_user }}"
db: postgres
query: "DROP VIEW test_view"
- name: Drop test table
become: yes
become_user: "{{ pg_user }}"
postgresql_table:
login_user: "{{ pg_user }}"
db: postgres
name: test_table1
state: absent
######################################################
# Test foreign data wrapper and foreign server privs #
######################################################
# Foreign data wrapper setup
- name: Create foreign data wrapper extension
become: yes
become_user: "{{ pg_user }}"
shell: echo "CREATE EXTENSION postgres_fdw" | psql -d "{{ db_name }}"
- name: Create dummy foreign data wrapper
become: yes
become_user: "{{ pg_user }}"
shell: echo "CREATE FOREIGN DATA WRAPPER dummy" | psql -d "{{ db_name }}"
- name: Create foreign server
become: yes
become_user: "{{ pg_user }}"
shell: echo "CREATE SERVER dummy_server FOREIGN DATA WRAPPER dummy" | psql -d "{{ db_name }}"
# Test
- name: Grant foreign data wrapper privileges
postgresql_privs:
state: present
type: foreign_data_wrapper
roles: "{{ db_user2 }}"
privs: ALL
objs: dummy
db: "{{ db_name }}"
login_user: "{{ pg_user }}"
register: result
ignore_errors: yes
# Checks
- assert:
that:
- result is changed
- name: Get foreign data wrapper privileges
become: yes
become_user: "{{ pg_user }}"
shell: echo "{{ fdw_query }}" | psql -d "{{ db_name }}"
vars:
fdw_query: >
SELECT fdwacl FROM pg_catalog.pg_foreign_data_wrapper
WHERE fdwname = ANY (ARRAY['dummy']) ORDER BY fdwname
register: fdw_result
- assert:
that:
- "fdw_result.stdout_lines[-1] == '(1 row)'"
- "'{{ db_user2 }}' in fdw_result.stdout_lines[-2]"
# Test
- name: Grant foreign data wrapper privileges second time
postgresql_privs:
state: present
type: foreign_data_wrapper
roles: "{{ db_user2 }}"
privs: ALL
objs: dummy
db: "{{ db_name }}"
login_user: "{{ pg_user }}"
register: result
ignore_errors: yes
# Checks
- assert:
that:
- result is not changed
# Test
- name: Revoke foreign data wrapper privileges
postgresql_privs:
state: absent
type: foreign_data_wrapper
roles: "{{ db_user2 }}"
privs: ALL
objs: dummy
db: "{{ db_name }}"
login_user: "{{ pg_user }}"
register: result
ignore_errors: yes
# Checks
- assert:
that:
- result is changed
- name: Get foreign data wrapper privileges
become: yes
become_user: "{{ pg_user }}"
shell: echo "{{ fdw_query }}" | psql -d "{{ db_name }}"
vars:
fdw_query: >
SELECT fdwacl FROM pg_catalog.pg_foreign_data_wrapper
WHERE fdwname = ANY (ARRAY['dummy']) ORDER BY fdwname
register: fdw_result
- assert:
that:
- "fdw_result.stdout_lines[-1] == '(1 row)'"
- "'{{ db_user2 }}' not in fdw_result.stdout_lines[-2]"
# Test
- name: Revoke foreign data wrapper privileges for second time
postgresql_privs:
state: absent
type: foreign_data_wrapper
roles: "{{ db_user2 }}"
privs: ALL
objs: dummy
db: "{{ db_name }}"
login_user: "{{ pg_user }}"
register: result
ignore_errors: yes
# Checks
- assert:
that:
- result is not changed
# Test
- name: Grant foreign server privileges
postgresql_privs:
state: present
type: foreign_server
roles: "{{ db_user2 }}"
privs: ALL
objs: dummy_server
db: "{{ db_name }}"
login_user: "{{ pg_user }}"
register: result
ignore_errors: yes
# Checks
- assert:
that:
- result is changed
- name: Get foreign server privileges
become: yes
become_user: "{{ pg_user }}"
shell: echo "{{ fdw_query }}" | psql -d "{{ db_name }}"
vars:
fdw_query: >
SELECT srvacl FROM pg_catalog.pg_foreign_server
WHERE srvname = ANY (ARRAY['dummy_server']) ORDER BY srvname
register: fs_result
- assert:
that:
- "fs_result.stdout_lines[-1] == '(1 row)'"
- "'{{ db_user2 }}' in fs_result.stdout_lines[-2]"
# Test
- name: Grant foreign server privileges for second time
postgresql_privs:
state: present
type: foreign_server
roles: "{{ db_user2 }}"
privs: ALL
objs: dummy_server
db: "{{ db_name }}"
login_user: "{{ pg_user }}"
register: result
ignore_errors: yes
# Checks
- assert:
that:
- result is not changed
# Test
- name: Revoke foreign server privileges
postgresql_privs:
state: absent
type: foreign_server
roles: "{{ db_user2 }}"
privs: ALL
objs: dummy_server
db: "{{ db_name }}"
login_user: "{{ pg_user }}"
register: result
ignore_errors: yes
# Checks
- assert:
that:
- result is changed
- name: Get foreign server privileges
become: yes
become_user: "{{ pg_user }}"
shell: echo "{{ fdw_query }}" | psql -d "{{ db_name }}"
vars:
fdw_query: >
SELECT srvacl FROM pg_catalog.pg_foreign_server
WHERE srvname = ANY (ARRAY['dummy_server']) ORDER BY srvname
register: fs_result
- assert:
that:
- "fs_result.stdout_lines[-1] == '(1 row)'"
- "'{{ db_user2 }}' not in fs_result.stdout_lines[-2]"
# Test
- name: Revoke foreign server privileges for second time
postgresql_privs:
state: absent
type: foreign_server
roles: "{{ db_user2 }}"
privs: ALL
objs: dummy_server
db: "{{ db_name }}"
login_user: "{{ pg_user }}"
register: result
ignore_errors: yes
# Checks
- assert:
that:
- result is not changed
# Foreign data wrapper cleanup
- name: Drop foreign server
become: yes
become_user: "{{ pg_user }}"
shell: echo "DROP SERVER dummy_server" | psql -d "{{ db_name }}"
- name: Drop dummy foreign data wrapper
become: yes
become_user: "{{ pg_user }}"
shell: echo "DROP FOREIGN DATA WRAPPER dummy" | psql -d "{{ db_name }}"
- name: Drop foreign data wrapper extension
become: yes
become_user: "{{ pg_user }}"
shell: echo "DROP EXTENSION postgres_fdw" | psql -d "{{ db_name }}"
##########################################
# Test ALL_IN_SCHEMA for 'function' type #
##########################################
# Function ALL_IN_SCHEMA Setup
- name: Create function for test
postgresql_query:
query: CREATE FUNCTION public.a() RETURNS integer LANGUAGE SQL AS 'SELECT 2';
db: "{{ db_name }}"
login_user: "{{ db_user3 }}"
login_password: password
# Test
- name: Grant EXECUTE on all functions
postgresql_privs:
type: function
state: present
privs: EXECUTE
roles: "{{ db_user2 }}"
objs: ALL_IN_SCHEMA
schema: public
db: "{{ db_name }}"
login_user: "{{ db_user3 }}"
login_password: password
register: result
ignore_errors: yes
# Checks
- assert:
that: result is changed
- name: Check that all functions have execute privileges
become: yes
become_user: "{{ pg_user }}"
shell: psql {{ db_name }} -c "SELECT proacl FROM pg_proc WHERE proname = 'a'" -t
register: result
- assert:
that: "'{{ db_user2 }}=X/{{ db_user3 }}' in '{{ result.stdout_lines[0] }}'"
# Test
- name: Grant EXECUTE on all functions again
postgresql_privs:
type: function
state: present
privs: EXECUTE
roles: "{{ db_user2 }}"
objs: ALL_IN_SCHEMA
schema: public
db: "{{ db_name }}"
login_user: "{{ db_user3 }}"
login_password: password
register: result
ignore_errors: yes
# Checks
- assert:
that: result is not changed
# Test
- name: Revoke EXECUTE from all functions
postgresql_privs:
type: function
state: absent
privs: EXECUTE
roles: "{{ db_user2 }}"
objs: ALL_IN_SCHEMA
schema: public
db: "{{ db_name }}"
login_user: "{{ db_user3 }}"
login_password: password
register: result
ignore_errors: yes
# Checks
- assert:
that: result is changed
# Test
- name: Revoke EXECUTE from all functions again
postgresql_privs:
type: function
state: absent
privs: EXECUTE
roles: "{{ db_user2 }}"
objs: ALL_IN_SCHEMA
schema: public
db: "{{ db_name }}"
login_user: "{{ db_user3 }}"
login_password: password
register: result
ignore_errors: yes
- assert:
that: result is not changed
# Function ALL_IN_SCHEMA cleanup
- name: Remove function for test
postgresql_query:
query: DROP FUNCTION public.a();
db: "{{ db_name }}"
login_user: "{{ db_user3 }}"
login_password: password
####################################################
# Test ALL_IN_SCHEMA for 'partitioned tables' type #
####################################################
# Table partitioning is a feature introduced in PostgreSQL 10
# (see https://www.postgresql.org/docs/10/ddl-partitioning.html ).
# The tests below check for this version; an illustrative partition
# statement follows the setup task below.
# Table ALL_IN_SCHEMA Setup
- name: Create partitioned table for test purposes
postgresql_query:
query: CREATE TABLE public.testpt (id int not null, logdate date not null) PARTITION BY RANGE (logdate);
db: "{{ db_name }}"
login_user: "{{ db_user3 }}"
login_password: password
when: postgres_version_resp.stdout is version('10', '>=')
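# For illustration only (not executed by these tests), a concrete partition
# of public.testpt could be created in PostgreSQL 10+ like this:
#   CREATE TABLE public.testpt_y2019 PARTITION OF public.testpt
#       FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');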
# Test
- name: Grant SELECT on all tables in check mode
postgresql_privs:
type: table
state: present
privs: SELECT
roles: "{{ db_user2 }}"
objs: ALL_IN_SCHEMA
schema: public
db: "{{ db_name }}"
login_user: "{{ db_user3 }}"
login_password: password
register: result
ignore_errors: yes
when: postgres_version_resp.stdout is version('10', '>=')
check_mode: yes
# Checks
- name: Check that all partitioned tables don't have select privileges after the check mode task
postgresql_query:
query: SELECT grantee, privilege_type FROM information_schema.role_table_grants WHERE table_name='testpt' and privilege_type='SELECT' and grantee = %(grantuser)s
db: "{{ db_name }}"
login_user: '{{ db_user2 }}'
login_password: password
named_args:
grantuser: '{{ db_user2 }}'
become: yes
become_user: "{{ pg_user }}"
register: result
when: postgres_version_resp.stdout is version('10', '>=')
- assert:
that:
- result.rowcount == 0
when: postgres_version_resp.stdout is version('10', '>=')
# Test
- name: Grant SELECT on all tables
postgresql_privs:
type: table
state: present
privs: SELECT
roles: "{{ db_user2 }}"
objs: ALL_IN_SCHEMA
schema: public
db: "{{ db_name }}"
login_user: "{{ db_user3 }}"
login_password: password
register: result
ignore_errors: yes
when: postgres_version_resp.stdout is version('10', '>=')
# Checks
- assert:
that: result is changed
when: postgres_version_resp.stdout is version('10', '>=')
- name: Check that all partitioned tables have select privileges
postgresql_query:
query: SELECT grantee, privilege_type FROM information_schema.role_table_grants WHERE table_name='testpt' and privilege_type='SELECT' and grantee = %(grantuser)s
db: "{{ db_name }}"
login_user: '{{ db_user2 }}'
login_password: password
named_args:
grantuser: '{{ db_user2 }}'
become: yes
become_user: "{{ pg_user }}"
register: result
when: postgres_version_resp.stdout is version('10', '>=')
- assert:
that:
- result.rowcount == 1
when: postgres_version_resp.stdout is version('10', '>=')
# Test
- name: Grant SELECT on all tables again to verify no changes are reported
postgresql_privs:
type: table
state: present
privs: SELECT
roles: "{{ db_user2 }}"
objs: ALL_IN_SCHEMA
schema: public
db: "{{ db_name }}"
login_user: "{{ db_user3 }}"
login_password: password
register: result
ignore_errors: yes
when: postgres_version_resp.stdout is version('10', '>=')
# Checks
- assert:
that: result is not changed
when: postgres_version_resp.stdout is version('10', '>=')
# Test
- name: Revoke SELECT from all tables
postgresql_privs:
type: table
state: absent
privs: SELECT
roles: "{{ db_user2 }}"
objs: ALL_IN_SCHEMA
schema: public
db: "{{ db_name }}"
login_user: "{{ db_user3 }}"
login_password: password
register: result
ignore_errors: yes
when: postgres_version_resp.stdout is version('10', '>=')
# Checks
- assert:
that: result is changed
when: postgres_version_resp.stdout is version('10', '>=')
- name: Check that all partitioned tables don't have select privileges
postgresql_query:
query: SELECT grantee, privilege_type FROM information_schema.role_table_grants WHERE table_name='testpt' and privilege_type='SELECT' and grantee = %(grantuser)s
db: "{{ db_name }}"
login_user: '{{ db_user2 }}'
login_password: password
named_args:
grantuser: '{{ db_user2 }}'
become: yes
become_user: "{{ pg_user }}"
register: result
when: postgres_version_resp.stdout is version('10', '>=')
- assert:
that:
- result.rowcount == 0
when: postgres_version_resp.stdout is version('10', '>=')
# Test
- name: Revoke SELECT from all tables again to verify no changes are reported
postgresql_privs:
type: table
state: absent
privs: SELECT
roles: "{{ db_user2 }}"
objs: ALL_IN_SCHEMA
schema: public
db: "{{ db_name }}"
login_user: "{{ db_user3 }}"
login_password: password
register: result
ignore_errors: yes
when: postgres_version_resp.stdout is version('10', '>=')
- assert:
that: result is not changed
when: postgres_version_resp.stdout is version('10', '>=')
# Table ALL_IN_SCHEMA cleanup
- name: Remove table for test
postgresql_query:
query: DROP TABLE public.testpt;
db: "{{ db_name }}"
login_user: "{{ db_user3 }}"
login_password: password
ignore_errors: yes
when: postgres_version_resp.stdout is version('10', '>=')
# Cleanup
- name: Remove user given permissions
postgresql_user:
name: "{{ db_user2 }}"
state: absent
db: "{{ db_name }}"
login_user: "{{ pg_user }}"
- name: Remove user owner of objects
postgresql_user:
name: "{{ db_user3 }}"
state: absent
db: "{{ db_name }}"
login_user: "{{ pg_user }}"
- name: Destroy DB
become_user: "{{ pg_user }}"
become: yes
postgresql_db:
state: absent
name: "{{ db_name }}"
login_user: "{{ pg_user }}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,447 |
gitlab_user does not set sshkey when user is new or changed
|
##### SUMMARY
The gitlab_user module does not set the sshkey when the user is newly created or has already been changed. This is due to the short-circuiting `changed = changed or ...` construct on line 237 in the devel branch.
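A minimal sketch of the mechanism (plain Python, not the module code): `or` short-circuits, so the right-hand call is skipped whenever the left-hand side is already truthy.
```python
def add_ssh_key():
    print("adding key")
    return True

changed = True
changed = changed or add_ssh_key()   # add_ssh_key() is never called

changed = False
changed = changed or add_ssh_key()   # now the right-hand side runs
```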
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`lib/ansible/modules/source_control/gitlab_user.py`
##### ANSIBLE VERSION
```
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/.virtualenvs/ans3.6/lib/python3.6/site-packages/ansible
executable location = /home/user/.virtualenvs/ans3.6/bin/ansible
python version = 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
Create a new gitlab user and set sshkey:
```
- name: Create gitlab user
gitlab_user:
server_url: "{{ gitlab_url }}"
api_token: "{{ gitlab_token }}"
name: test user
username: test_user
password: "{{ user_password }}"
email: "{{ user_email }}"
sshkey_name: test_user
sshkey_file: "{{ key_content }}"
state: present
# Note that the sshkey for test_user is not actually set now
# Run the same module again to set the sshkey
- name: Actually set sshkey for gitlab user
gitlab_user:
server_url: "{{ gitlab_url }}"
api_token: "{{ gitlab_token }}"
name: test user
username: test_user
password: "{{ user_password }}"
email: "{{ user_email }}"
sshkey_name: test_user
sshkey_file: "{{ key_content }}"
state: present
```
##### EXPECTED RESULTS
Only having to run the module once should set all the fields correctly.
##### ACTUAL RESULTS
Running the module once does not set the sshkey if the user was not yet there or was changed.
|
https://github.com/ansible/ansible/issues/63447
|
https://github.com/ansible/ansible/pull/63621
|
7dd46f7b2ded8585ecb4e26f3c448c426d2008d9
|
b4bb3dee9aebf2491e88f0ee63a5f2e704827c50
| 2019-10-14T08:35:56Z |
python
| 2019-10-17T14:22:15Z |
changelogs/fragments/63621-gitlab_user-fix-sshkey-and-user.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,447 |
gitlab_user does not set sshkey when user is new or changed
|
##### SUMMARY
The gitlab_user module does not set the sshkey when the user is newly created or has already been changed. This is due to the short-circuiting `changed = changed or ...` construct on line 237 in the devel branch.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`lib/ansible/modules/source_control/gitlab_user.py`
##### ANSIBLE VERSION
```
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/.virtualenvs/ans3.6/lib/python3.6/site-packages/ansible
executable location = /home/user/.virtualenvs/ans3.6/bin/ansible
python version = 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
Create a new gitlab user and set sshkey:
```
- name: Create gitlab user
gitlab_user:
server_url: "{{ gitlab_url }}"
api_token: "{{ gitlab_token }}"
name: test user
username: test_user
password: "{{ user_password }}"
email: "{{ user_email }}"
sshkey_name: test_user
sshkey_file: "{{ key_content }}"
state: present
# Note that the sshkey for test_user is not actually set now
# Run the same module again to set the sshkey
- name: Actually set sshkey for gitlab user
gitlab_user:
server_url: "{{ gitlab_url }}"
api_token: "{{ gitlab_token }}"
name: test user
username: test_user
password: "{{ user_password }}"
email: "{{ user_email }}"
sshkey_name: test_user
sshkey_file: "{{ key_content }}"
state: present
```
##### EXPECTED RESULTS
Only having to run the module once should set all the fields correctly.
##### ACTUAL RESULTS
Running the module once does not set the sshkey if the user was not yet there or was changed.
|
https://github.com/ansible/ansible/issues/63447
|
https://github.com/ansible/ansible/pull/63621
|
7dd46f7b2ded8585ecb4e26f3c448c426d2008d9
|
b4bb3dee9aebf2491e88f0ee63a5f2e704827c50
| 2019-10-14T08:35:56Z |
python
| 2019-10-17T14:22:15Z |
lib/ansible/modules/source_control/gitlab_user.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2015, Werner Dijkerman ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_user
short_description: Creates/updates/deletes GitLab Users
description:
- When the user does not exist in GitLab, it will be created.
- When the user does exist and state=absent, the user will be deleted.
- When changes are made to the user, the user will be updated.
version_added: "2.1"
author:
- Werner Dijkerman (@dj-wasabi)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
- administrator rights on the GitLab server
extends_documentation_fragment:
- auth_basic
options:
server_url:
description:
- The URL of the GitLab server, with protocol (i.e. http or https).
type: str
login_user:
description:
- GitLab user name.
type: str
login_password:
description:
- GitLab password for login_user
type: str
api_token:
description:
- GitLab token for logging in.
type: str
aliases:
- login_token
name:
description:
- Name of the user you want to create
required: true
type: str
username:
description:
- The username of the user.
required: true
type: str
password:
description:
- The password of the user.
- GitLab server enforces a minimum password length of 8, so set this value to 8 or more characters.
required: true
type: str
email:
description:
- The email that belongs to the user.
required: true
type: str
sshkey_name:
description:
- The name of the sshkey
type: str
sshkey_file:
description:
- The ssh key itself.
type: str
group:
description:
- Id or Full path of parent group in the form of group/name
- Add user as a member to this group.
type: str
access_level:
description:
- The access level to the group. One of the following can be used.
- guest
- reporter
- developer
- master (alias for maintainer)
- maintainer
- owner
default: guest
type: str
choices: ["guest", "reporter", "developer", "master", "maintainer", "owner"]
state:
description:
- Create or delete user.
- Possible values are present and absent.
default: present
type: str
choices: ["present", "absent"]
confirm:
description:
- Require confirmation.
type: bool
default: yes
version_added: "2.4"
isadmin:
description:
- Grant admin privileges to the user
type: bool
default: no
version_added: "2.8"
external:
description:
- Define external parameter for this user
type: bool
default: no
version_added: "2.8"
'''
EXAMPLES = '''
- name: "Delete GitLab User"
gitlab_user:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
validate_certs: False
username: myusername
state: absent
delegate_to: localhost
- name: "Create GitLab User"
gitlab_user:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: My Name
username: myusername
password: mysecretpassword
email: [email protected]
sshkey_name: MySSH
sshkey_file: ssh-rsa AAAAB3NzaC1yc...
state: present
group: super_group/mon_group
access_level: owner
delegate_to: localhost
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
user:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findGroup
class GitLabUser(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.userObject = None
self.ACCESS_LEVEL = {
'guest': gitlab.GUEST_ACCESS,
'reporter': gitlab.REPORTER_ACCESS,
'developer': gitlab.DEVELOPER_ACCESS,
'master': gitlab.MAINTAINER_ACCESS,
'maintainer': gitlab.MAINTAINER_ACCESS,
'owner': gitlab.OWNER_ACCESS}
'''
@param username Username of the user
@param options User options
'''
def createOrUpdateUser(self, username, options):
changed = False
# Because we have already called existsUser() in main()
if self.userObject is None:
user = self.createUser({
'name': options['name'],
'username': username,
'password': options['password'],
'email': options['email'],
'skip_confirmation': not options['confirm'],
'admin': options['isadmin'],
'external': options['external']})
changed = True
else:
changed, user = self.updateUser(self.userObject, {
'name': options['name'],
'email': options['email'],
'is_admin': options['isadmin'],
'external': options['external']})
# Assign ssh keys
if options['sshkey_name'] and options['sshkey_file']:
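# Note: 'or' short-circuits, so addSshKeyToUser() below is skipped whenever 'changed' is already True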
changed = changed or self.addSshKeyToUser(user, {
'name': options['sshkey_name'],
'file': options['sshkey_file']})
# Assign group
if options['group_path']:
changed = changed or self.assignUserToGroup(user, options['group_path'], options['access_level'])
self.userObject = user
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the user %s" % username)
try:
user.save()
except Exception as e:
self._module.fail_json(msg="Failed to update user: %s " % to_native(e))
return True
else:
return False
'''
@param user User object
'''
def getUserId(self, user):
if user is not None:
return user.id
return None
'''
@param user User object
@param sshkey_name Name of the ssh key
'''
def sshKeyExists(self, user, sshkey_name):
keyList = map(lambda k: k.title, user.keys.list())
return sshkey_name in keyList
'''
@param user User object
@param sshkey Dict containing sshkey infos {"name": "", "file": ""}
'''
def addSshKeyToUser(self, user, sshkey):
if not self.sshKeyExists(user, sshkey['name']):
if self._module.check_mode:
return True
try:
user.keys.create({
'title': sshkey['name'],
'key': sshkey['file']})
except gitlab.exceptions.GitlabCreateError as e:
self._module.fail_json(msg="Failed to assign sshkey to user: %s" % to_native(e))
return True
return False
'''
@param group Group object
@param user_id Id of the user to find
'''
def findMember(self, group, user_id):
try:
member = group.members.get(user_id)
except gitlab.exceptions.GitlabGetError as e:
return None
return member
'''
@param group Group object
@param user_id Id of the user to check
'''
def memberExists(self, group, user_id):
member = self.findMember(group, user_id)
return member is not None
'''
@param group Group object
@param user_id Id of the user to check
@param access_level GitLab access_level to check
'''
def memberAsGoodAccessLevel(self, group, user_id, access_level):
member = self.findMember(group, user_id)
return member.access_level == access_level
'''
@param user User object
@param group_identifier Id or complete path of the Group including parent group path. <parent_path>/<group_path>
@param access_level GitLab access_level to assign
'''
def assignUserToGroup(self, user, group_identifier, access_level):
group = findGroup(self._gitlab, group_identifier)
if self._module.check_mode:
return True
if group is None:
return False
if self.memberExists(group, self.getUserId(user)):
member = self.findMember(group, self.getUserId(user))
if not self.memberAsGoodAccessLevel(group, member.id, self.ACCESS_LEVEL[access_level]):
member.access_level = self.ACCESS_LEVEL[access_level]
member.save()
return True
else:
try:
group.members.create({
'user_id': self.getUserId(user),
'access_level': self.ACCESS_LEVEL[access_level]})
except gitlab.exceptions.GitlabCreateError as e:
self._module.fail_json(msg="Failed to assign user to group: %s" % to_native(e))
return True
return False
'''
@param user User object
@param arguments User attributes
'''
def updateUser(self, user, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(user, arg_key) != arguments[arg_key]:
setattr(user, arg_key, arguments[arg_key])
changed = True
return (changed, user)
'''
@param arguments User attributes
'''
def createUser(self, arguments):
if self._module.check_mode:
return True
try:
user = self._gitlab.users.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create user: %s " % to_native(e))
return user
'''
@param username Username of the user
'''
def findUser(self, username):
users = self._gitlab.users.list(search=username)
for user in users:
if (user.username == username):
return user
'''
@param username Username of the user
'''
def existsUser(self, username):
# When user exists, object will be stored in self.userObject.
user = self.findUser(username)
if user:
self.userObject = user
return True
return False
def deleteUser(self):
if self._module.check_mode:
return True
user = self.userObject
return user.delete()
def deprecation_warning(module):
deprecated_aliases = ['login_token']
for aliase in deprecated_aliases:
if aliase in module.params:
module.deprecate("Alias \'{aliase}\' is deprecated".format(aliase=aliase), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
server_url=dict(type='str', removed_in_version="2.10"),
login_user=dict(type='str', no_log=True, removed_in_version="2.10"),
login_password=dict(type='str', no_log=True, removed_in_version="2.10"),
api_token=dict(type='str', no_log=True, aliases=["login_token"]),
name=dict(type='str', required=True),
state=dict(type='str', default="present", choices=["absent", "present"]),
username=dict(type='str', required=True),
password=dict(type='str', required=True, no_log=True),
email=dict(type='str', required=True),
sshkey_name=dict(type='str'),
sshkey_file=dict(type='str'),
group=dict(type='str'),
access_level=dict(type='str', default="guest", choices=["developer", "guest", "maintainer", "master", "owner", "reporter"]),
confirm=dict(type='bool', default=True),
isadmin=dict(type='bool', default=False),
external=dict(type='bool', default=False),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_url', 'server_url'],
['api_username', 'login_user'],
['api_password', 'login_password'],
['api_username', 'api_token'],
['api_password', 'api_token'],
['login_user', 'login_token'],
['login_password', 'login_token']
],
required_together=[
['api_username', 'api_password'],
['login_user', 'login_password'],
],
required_one_of=[
['api_username', 'api_token', 'login_user', 'login_token'],
['server_url', 'api_url']
],
supports_check_mode=True,
)
deprecation_warning(module)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
api_url = module.params['api_url']
validate_certs = module.params['validate_certs']
api_user = module.params['api_username']
api_password = module.params['api_password']
gitlab_url = server_url if api_url is None else api_url
gitlab_user = login_user if api_user is None else api_user
gitlab_password = login_password if api_password is None else api_password
gitlab_token = module.params['api_token']
user_name = module.params['name']
state = module.params['state']
user_username = module.params['username'].lower()
user_password = module.params['password']
user_email = module.params['email']
user_sshkey_name = module.params['sshkey_name']
user_sshkey_file = module.params['sshkey_file']
group_path = module.params['group']
access_level = module.params['access_level']
confirm = module.params['confirm']
user_isadmin = module.params['isadmin']
user_external = module.params['external']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab removed the Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e))
gitlab_user = GitLabUser(module, gitlab_instance)
user_exists = gitlab_user.existsUser(user_username)
if state == 'absent':
if user_exists:
gitlab_user.deleteUser()
module.exit_json(changed=True, msg="Successfully deleted user %s" % user_username)
else:
module.exit_json(changed=False, msg="User deleted or does not exist")
if state == 'present':
if gitlab_user.createOrUpdateUser(user_username, {
"name": user_name,
"password": user_password,
"email": user_email,
"sshkey_name": user_sshkey_name,
"sshkey_file": user_sshkey_file,
"group_path": group_path,
"access_level": access_level,
"confirm": confirm,
"isadmin": user_isadmin,
"external": user_external}):
module.exit_json(changed=True, msg="Successfully created or updated the user %s" % user_username, user=gitlab_user.userObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the user %s" % user_username, user=gitlab_user.userObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,623 |
IOS Vlans resource module override state not working as expected
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
IOS Vlans resource module override state not working as expected: when a VLAN that doesn't pre-exist is configured via the override operation, the VLAN is not created.
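A minimal reproduction sketch of the loop shape that causes this (illustrative Python, not the module source): when reconciliation iterates only over the VLANs already on the device, a wanted VLAN with no existing counterpart is never handled.
```python
have = [{"vlan_id": 10}]                     # VLANs currently on the device
want = [{"vlan_id": 10}, {"vlan_id": 20}]    # desired state

commands = []
for each in have:                            # iterating over 'have' only
    for every in want:
        if each["vlan_id"] == every["vlan_id"]:
            commands.append("vlan %s" % every["vlan_id"])
            break

print(commands)  # ['vlan 10'] -- VLAN 20 is never created
```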
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ios_vlans
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
ios
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
New VLAN should be configured via the override state
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The new VLAN is missed and not configured by the override state.
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/63623
|
https://github.com/ansible/ansible/pull/63624
|
684e70c8d79422a8264de86d2ce0bdf95d321b71
|
4f2665810fd19ed625de83bb41bd2cb521052834
| 2019-10-17T11:10:16Z |
python
| 2019-10-18T08:26:23Z |
lib/ansible/module_utils/network/ios/config/vlans/vlans.py
|
#
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""
The ios_vlans class
It is in this file where the current configuration (as dict)
is compared to the provided configuration (as dict) and the command set
necessary to bring the current configuration to it's desired end-state is
created
"""
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from ansible.module_utils.network.common.cfg.base import ConfigBase
from ansible.module_utils.network.common.utils import to_list
from ansible.module_utils.network.ios.facts.facts import Facts
from ansible.module_utils.network.ios.utils.utils import dict_to_set
class Vlans(ConfigBase):
"""
The ios_vlans class
"""
gather_subset = [
'!all',
'!min',
]
gather_network_resources = [
'vlans',
]
def __init__(self, module):
super(Vlans, self).__init__(module)
def get_interfaces_facts(self):
""" Get the 'facts' (the current configuration)
:rtype: A dictionary
:returns: The current configuration as a dictionary
"""
facts, _warnings = Facts(self._module).get_facts(self.gather_subset, self.gather_network_resources)
interfaces_facts = facts['ansible_network_resources'].get('vlans')
if not interfaces_facts:
return []
return interfaces_facts
def execute_module(self):
""" Execute the module
:rtype: A dictionary
:returns: The result from module execution
"""
result = {'changed': False}
commands = list()
warnings = list()
existing_interfaces_facts = self.get_interfaces_facts()
commands.extend(self.set_config(existing_interfaces_facts))
if commands:
if not self._module.check_mode:
self._connection.edit_config(commands)
result['changed'] = True
result['commands'] = commands
changed_interfaces_facts = self.get_interfaces_facts()
result['before'] = existing_interfaces_facts
if result['changed']:
result['after'] = changed_interfaces_facts
result['warnings'] = warnings
return result
def set_config(self, existing_interfaces_facts):
""" Collect the configuration from the args passed to the module,
collect the current configuration (as a dict from facts)
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
want = self._module.params['config']
have = existing_interfaces_facts
resp = self.set_state(want, have)
return to_list(resp)
def set_state(self, want, have):
""" Select the appropriate function based on the state provided
:param want: the desired configuration as a dictionary
:param have: the current configuration as a dictionary
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
state = self._module.params['state']
if state in ('overridden', 'merged', 'replaced') and not want:
self._module.fail_json(msg='value of config parameter must not be empty for state {0}'.format(state))
if state == 'overridden':
commands = self._state_overridden(want, have, state)
elif state == 'deleted':
commands = self._state_deleted(want, have, state)
elif state == 'merged':
commands = self._state_merged(want, have)
elif state == 'replaced':
commands = self._state_replaced(want, have)
return commands
def _state_replaced(self, want, have):
""" The command generator when state is replaced
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
commands = []
check = False
for each in want:
for every in have:
if every['vlan_id'] == each['vlan_id']:
check = True
break
else:
continue
if check:
commands.extend(self._set_config(each, every))
else:
commands.extend(self._set_config(each, dict()))
return commands
def _state_overridden(self, want, have, state):
""" The command generator when state is overridden
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
commands = []
for each in have:
for every in want:
if each['vlan_id'] == every['vlan_id']:
break
else:
# We didn't find a matching desired state, which means we can
# pretend we received an empty desired state.
commands.extend(self._clear_config(every, each, state))
continue
commands.extend(self._set_config(every, each))
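# Note: entries present in 'want' but absent from 'have' never reach _set_config in this loop, so brand-new VLANs are not created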
return commands
def _state_merged(self, want, have):
""" The command generator when state is merged
:rtype: A list
:returns: the commands necessary to merge the provided into
the current configuration
"""
commands = []
check = False
for each in want:
for every in have:
if each.get('vlan_id') == every.get('vlan_id'):
check = True
break
else:
continue
if check:
commands.extend(self._set_config(each, every))
else:
commands.extend(self._set_config(each, dict()))
return commands
def _state_deleted(self, want, have, state):
""" The command generator when state is deleted
:rtype: A list
:returns: the commands necessary to remove the current configuration
of the provided objects
"""
commands = []
if want:
check = False
for each in want:
for every in have:
if each.get('vlan_id') == every.get('vlan_id'):
check = True
break
else:
check = False
continue
if check:
commands.extend(self._clear_config(each, every, state))
else:
for each in have:
commands.extend(self._clear_config(dict(), each, state))
return commands
def remove_command_from_config_list(self, vlan, cmd, commands):
if vlan not in commands and cmd != 'vlan':
commands.insert(0, vlan)
elif cmd == 'vlan':
commands.append('no %s' % vlan)
return commands
commands.append('no %s' % cmd)
return commands
def add_command_to_config_list(self, vlan_id, cmd, commands):
if vlan_id not in commands:
commands.insert(0, vlan_id)
if cmd not in commands:
commands.append(cmd)
def _set_config(self, want, have):
# Set the interface config based on the want and have config
commands = []
vlan = 'vlan {0}'.format(want.get('vlan_id'))
# Get the diff b/w want n have
want_dict = dict_to_set(want)
have_dict = dict_to_set(have)
diff = want_dict - have_dict
if diff:
name = dict(diff).get('name')
state = dict(diff).get('state')
shutdown = dict(diff).get('shutdown')
mtu = dict(diff).get('mtu')
remote_span = dict(diff).get('remote_span')
if name:
cmd = 'name {0}'.format(name)
self.add_command_to_config_list(vlan, cmd, commands)
if state:
cmd = 'state {0}'.format(state)
self.add_command_to_config_list(vlan, cmd, commands)
if mtu:
cmd = 'mtu {0}'.format(mtu)
self.add_command_to_config_list(vlan, cmd, commands)
if remote_span:
self.add_command_to_config_list(vlan, 'remote-span', commands)
if shutdown == 'enabled':
self.add_command_to_config_list(vlan, 'shutdown', commands)
elif shutdown == 'disabled':
self.add_command_to_config_list(vlan, 'no shutdown', commands)
return commands
def _clear_config(self, want, have, state):
# Delete the interface config based on the want and have config
commands = []
vlan = 'vlan {0}'.format(have.get('vlan_id'))
if have.get('vlan_id') and 'default' not in have.get('name')\
and (have.get('vlan_id') != want.get('vlan_id') or state == 'deleted'):
self.remove_command_from_config_list(vlan, 'vlan', commands)
elif 'default' not in have.get('name'):
if have.get('mtu') != want.get('mtu'):
self.remove_command_from_config_list(vlan, 'mtu', commands)
if have.get('remote_span') != want.get('remote_span') and want.get('remote_span'):
self.remove_command_from_config_list(vlan, 'remote-span', commands)
if have.get('shutdown') != want.get('shutdown') and want.get('shutdown'):
self.remove_command_from_config_list(vlan, 'shutdown', commands)
if have.get('state') != want.get('state') and want.get('state'):
self.remove_command_from_config_list(vlan, 'state', commands)
return commands
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,623 |
IOS Vlans resource module override state not working as expected
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
IOS Vlans resource module override state not working as expected: when a VLAN that doesn't pre-exist is configured via the override operation, the VLAN is not created.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ios_vlans
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
ios
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
New VLAN should be configured via the override state
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The new VLAN is missed and not configured by the override state.
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/63623
|
https://github.com/ansible/ansible/pull/63624
|
684e70c8d79422a8264de86d2ce0bdf95d321b71
|
4f2665810fd19ed625de83bb41bd2cb521052834
| 2019-10-17T11:10:16Z |
python
| 2019-10-18T08:26:23Z |
test/integration/targets/ios_vlans/tests/cli/_remove_config.yaml
|
---
- name: Remove Config
cli_config:
config: "{{ lines }}"
vars:
lines: |
no vlan 10
no vlan 20
no vlan 30
when: ansible_net_version != "15.6(2)T"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,623 |
IOS Vlans resource module override state not working as expected
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
IOS Vlans resource module override state not working as expected: when a VLAN that doesn't pre-exist is configured via the override operation, the VLAN is not created.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ios_vlans
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
ios
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
New VLAN should be configured via the override state
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The new VLAN is missed and not configured by the override state.
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/63623
|
https://github.com/ansible/ansible/pull/63624
|
684e70c8d79422a8264de86d2ce0bdf95d321b71
|
4f2665810fd19ed625de83bb41bd2cb521052834
| 2019-10-17T11:10:16Z |
python
| 2019-10-18T08:26:23Z |
test/integration/targets/ios_vlans/tests/cli/merged.yaml
|
---
- debug:
msg: "START Merged ios_vlans state for integration tests on connection={{ ansible_connection }}"
- include_tasks: _remove_config.yaml
- block:
- name: Merge provided configuration with device configuration
ios_vlans: &merged
config:
- name: Vlan_10
vlan_id: 10
state: active
shutdown: disabled
remote_span: 10
- name: Vlan_20
vlan_id: 20
mtu: 610
state: active
shutdown: enabled
- name: Vlan_30
vlan_id: 30
state: suspend
shutdown: enabled
state: merged
register: result
- name: Assert that correct set of commands were generated
assert:
that:
- "{{ merged['commands'] | symmetric_difference(result['commands']) | length == 0 }}"
- name: Assert that before dicts are correctly generated
assert:
that:
- "{{ merged['before'] | symmetric_difference(result['before']) | length == 0 }}"
- name: Assert that after dict is correctly generated
assert:
that:
- "{{ merged['after'] | symmetric_difference(result['after']) | length == 0 }}"
- name: Merge provided configuration with device configuration (IDEMPOTENT)
ios_vlans: *merged
register: result
- name: Assert that the previous task was idempotent
assert:
that:
- "result['changed'] == false"
when: ansible_net_version != "15.6(2)T"
always:
- include_tasks: _remove_config.yaml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,623 |
IOS Vlans resource module override state not working as expected
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
IOS Vlans resource module override state not working as expected: when a VLAN that doesn't pre-exist is configured via the override operation, the VLAN is not created.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ios_vlans
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
ios
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
New VLAN should be configured via the override state
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The new VLAN is missed and not configured by the override state.
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/63623
|
https://github.com/ansible/ansible/pull/63624
|
684e70c8d79422a8264de86d2ce0bdf95d321b71
|
4f2665810fd19ed625de83bb41bd2cb521052834
| 2019-10-17T11:10:16Z |
python
| 2019-10-18T08:26:23Z |
test/integration/targets/ios_vlans/tests/cli/overridden.yaml
|
---
- debug:
msg: "START Overridden ios_vlans state for integration tests on connection={{ ansible_connection }}"
- include_tasks: _remove_config.yaml
- include_tasks: _populate_config.yaml
- block:
- name: Override device configuration of all VLANs with provided configuration
ios_vlans: &overridden
config:
- name: VLAN_10
vlan_id: 10
mtu: 1000
state: overridden
register: result
- name: Assert that correct set of commands were generated
assert:
that:
- "{{ overridden['commands'] | symmetric_difference(result['commands']) | length == 0 }}"
- name: Assert that before dicts are correctly generated
assert:
that:
- "{{ overridden['before'] | symmetric_difference(result['before']) | length == 0 }}"
- name: Assert that after dict is correctly generated
assert:
that:
- "{{ overridden['after'] | symmetric_difference(result['after']) | length == 0 }}"
- name: Override device configuration of all VLANs with provided configuration (IDEMPOTENT)
ios_vlans: *overridden
register: result
- name: Assert that task was idempotent
assert:
that:
- "result['changed'] == false"
when: ansible_net_version != "15.6(2)T"
always:
- include_tasks: _remove_config.yaml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,623 |
IOS Vlans resource module override state not working as expected
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
IOS Vlans resource module override state not working as expected: when a VLAN that doesn't pre-exist is configured via the override operation, the VLAN is not created.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ios_vlans
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
ios
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
New VLAN should be configured via the override state
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The new VLAN is missed and not configured by the override state.
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/63623
|
https://github.com/ansible/ansible/pull/63624
|
684e70c8d79422a8264de86d2ce0bdf95d321b71
|
4f2665810fd19ed625de83bb41bd2cb521052834
| 2019-10-17T11:10:16Z |
python
| 2019-10-18T08:26:23Z |
test/integration/targets/ios_vlans/vars/main.yaml
|
---
merged:
before:
- mtu: 1500
name: default
shutdown: disabled
state: active
vlan_id: 1
- mtu: 1500
name: fddi-default
shutdown: enabled
state: active
vlan_id: 1002
- mtu: 1500
name: token-ring-default
shutdown: enabled
state: active
vlan_id: 1003
- mtu: 1500
name: fddinet-default
shutdown: enabled
state: active
vlan_id: 1004
- mtu: 1500
name: trnet-default
shutdown: enabled
state: active
vlan_id: 1005
commands:
- "vlan 10"
- "name Vlan_10"
- "state active"
- "remote-span"
- "no shutdown"
- "vlan 20"
- "name Vlan_20"
- "state active"
- "mtu 610"
- "shutdown"
- "vlan 30"
- "name Vlan_30"
- "state suspend"
- "shutdown"
after:
- mtu: 1500
name: default
shutdown: disabled
state: active
vlan_id: 1
- mtu: 1500
name: Vlan_10
remote_span: true
shutdown: disabled
state: active
vlan_id: 10
- mtu: 610
name: Vlan_20
shutdown: enabled
state: active
vlan_id: 20
- mtu: 1500
name: Vlan_30
shutdown: enabled
state: suspend
vlan_id: 30
- mtu: 1500
name: fddi-default
shutdown: enabled
state: active
vlan_id: 1002
- mtu: 1500
name: token-ring-default
shutdown: enabled
state: active
vlan_id: 1003
- mtu: 1500
name: fddinet-default
shutdown: enabled
state: active
vlan_id: 1004
- mtu: 1500
name: trnet-default
shutdown: enabled
state: active
vlan_id: 1005
replaced:
before:
- mtu: 1500
name: default
shutdown: disabled
state: active
vlan_id: 1
- mtu: 1500
name: VLAN0010
shutdown: disabled
state: active
vlan_id: 10
- mtu: 1500
name: VLAN0020
shutdown: disabled
state: active
vlan_id: 20
- mtu: 1500
name: VLAN0030
shutdown: disabled
state: active
vlan_id: 30
- mtu: 1500
name: fddi-default
shutdown: enabled
state: active
vlan_id: 1002
- mtu: 1500
name: token-ring-default
shutdown: enabled
state: active
vlan_id: 1003
- mtu: 1500
name: fddinet-default
shutdown: enabled
state: active
vlan_id: 1004
- mtu: 1500
name: trnet-default
shutdown: enabled
state: active
vlan_id: 1005
commands:
- "vlan 20"
- "name Test_VLAN20"
- "mtu 700"
- "vlan 30"
- "name Test_VLAN30"
- "mtu 1000"
after:
- mtu: 1500
name: default
shutdown: disabled
state: active
vlan_id: 1
- mtu: 1500
name: VLAN0010
shutdown: disabled
state: active
vlan_id: 10
- mtu: 700
name: Test_VLAN20
shutdown: disabled
state: active
vlan_id: 20
- mtu: 1000
name: Test_VLAN30
shutdown: disabled
state: active
vlan_id: 30
- mtu: 1500
name: fddi-default
shutdown: enabled
state: active
vlan_id: 1002
- mtu: 1500
name: token-ring-default
shutdown: enabled
state: active
vlan_id: 1003
- mtu: 1500
name: fddinet-default
shutdown: enabled
state: active
vlan_id: 1004
- mtu: 1500
name: trnet-default
shutdown: enabled
state: active
vlan_id: 1005
overridden:
before:
- mtu: 1500
name: default
shutdown: disabled
state: active
vlan_id: 1
- mtu: 1500
name: VLAN0010
shutdown: disabled
state: active
vlan_id: 10
- mtu: 1500
name: VLAN0020
shutdown: disabled
state: active
vlan_id: 20
- mtu: 1500
name: VLAN0030
shutdown: disabled
state: active
vlan_id: 30
- mtu: 1500
name: fddi-default
shutdown: enabled
state: active
vlan_id: 1002
- mtu: 1500
name: token-ring-default
shutdown: enabled
state: active
vlan_id: 1003
- mtu: 1500
name: fddinet-default
shutdown: enabled
state: active
vlan_id: 1004
- mtu: 1500
name: trnet-default
shutdown: enabled
state: active
vlan_id: 1005
commands:
- "vlan 10"
- "name VLAN_10"
- "mtu 1000"
- "no vlan 20"
- "no vlan 30"
after:
- mtu: 1500
name: default
shutdown: disabled
state: active
vlan_id: 1
- mtu: 1000
name: VLAN_10
shutdown: disabled
state: active
vlan_id: 10
- mtu: 1500
name: fddi-default
shutdown: enabled
state: active
vlan_id: 1002
- mtu: 1500
name: token-ring-default
shutdown: enabled
state: active
vlan_id: 1003
- mtu: 1500
name: fddinet-default
shutdown: enabled
state: active
vlan_id: 1004
- mtu: 1500
name: trnet-default
shutdown: enabled
state: active
vlan_id: 1005
deleted:
before:
- mtu: 1500
name: default
shutdown: disabled
state: active
vlan_id: 1
- mtu: 1500
name: VLAN0010
shutdown: disabled
state: active
vlan_id: 10
- mtu: 1500
name: VLAN0020
shutdown: disabled
state: active
vlan_id: 20
- mtu: 1500
name: VLAN0030
shutdown: disabled
state: active
vlan_id: 30
- mtu: 1500
name: fddi-default
shutdown: enabled
state: active
vlan_id: 1002
- mtu: 1500
name: token-ring-default
shutdown: enabled
state: active
vlan_id: 1003
- mtu: 1500
name: fddinet-default
shutdown: enabled
state: active
vlan_id: 1004
- mtu: 1500
name: trnet-default
shutdown: enabled
state: active
vlan_id: 1005
commands:
- "no vlan 10"
- "no vlan 20"
- "no vlan 30"
after:
- mtu: 1500
name: default
shutdown: disabled
state: active
vlan_id: 1
- mtu: 1500
name: fddi-default
shutdown: enabled
state: active
vlan_id: 1002
- mtu: 1500
name: token-ring-default
shutdown: enabled
state: active
vlan_id: 1003
- mtu: 1500
name: fddinet-default
shutdown: enabled
state: active
vlan_id: 1004
- mtu: 1500
name: trnet-default
shutdown: enabled
state: active
vlan_id: 1005
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,685 |
Folder support for grafana_dashboard
|
##### SUMMARY
When creating a dashboard, I'd like to assign it to a folder in Grafana.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
`grafana_dashboard`
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Folders in Grafana are used for ACLs, etc. When no folder is referenced, the dashboard ends up in the default folder (and can't be moved as it's managed outside of Grafana).
Here's the API related to creating/updating dashboards:
https://grafana.com/docs/http_api/dashboard/
The relevant bit is `folderId`.
For context, here's the folder API (probably a concern for a different module — `grafana_folder`):
https://grafana.com/docs/http_api/folder/
I'd propose something like this, mapping `folder_id` to `folderId`. The default value is `0`, which places dashboards in the "General" folder (see the payload sketch after the example below).
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
connection: local
tasks:
- name: Import Grafana dashboard
grafana_dashboard:
grafana_url: http://localhost:3000
grafana_api_key: "{{ grafana_api_key }}"
state: present
folder_id: 0
message: Updated by ansible
overwrite: yes
path: /path/to/mydashboard.json
```
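For reference, the import endpoint then receives the folder id alongside the dashboard JSON. A minimal sketch of the resulting payload, with field names per the dashboard API linked above and illustrative values:
```python
import json

# Illustrative payload for the Grafana dashboard import API
# (typically POSTed to <grafana_url>/api/dashboards/db); values are examples.
payload = {
    "dashboard": {"uid": None, "title": "mydashboard", "panels": []},
    "folderId": 0,        # 0 places the dashboard in the "General" folder
    "overwrite": True,
    "message": "Updated by ansible",
}
print(json.dumps(payload, indent=2))
```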
As an additional motivation, I'm a user of a Grafana role. They are planning to deprecate their dashboard management in favour of this module.
Related: https://github.com/cloudalchemy/ansible-grafana/issues/174
|
https://github.com/ansible/ansible/issues/61685
|
https://github.com/ansible/ansible/pull/63342
|
7d9a0baa7f934f2995d9207d84bf1647f81506b1
|
0b9c241822061e77d0df4b5141befaa725d6594a
| 2019-09-02T12:08:39Z |
python
| 2019-10-18T09:35:56Z |
lib/ansible/modules/monitoring/grafana_dashboard.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Thierry Sallé (@seuf)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
ANSIBLE_METADATA = {
'status': ['preview'],
'supported_by': 'community',
'metadata_version': '1.1'
}
DOCUMENTATION = '''
---
module: grafana_dashboard
author:
- Thierry Sallé (@seuf)
version_added: "2.5"
short_description: Manage Grafana dashboards
description:
- Create, update, delete, export Grafana dashboards via API.
options:
url:
description:
- The Grafana URL.
required: true
aliases: [ grafana_url ]
version_added: 2.7
url_username:
description:
- The Grafana API user.
default: admin
aliases: [ grafana_user ]
version_added: 2.7
url_password:
description:
- The Grafana API password.
default: admin
aliases: [ grafana_password ]
version_added: 2.7
grafana_api_key:
description:
- The Grafana API key.
- If set, I(grafana_user) and I(grafana_password) will be ignored.
org_id:
description:
- The Grafana Organisation ID where the dashboard will be imported / exported.
- Not used when I(grafana_api_key) is set, because the grafana_api_key only belongs to one organisation.
default: 1
state:
description:
- State of the dashboard.
required: true
choices: [ absent, export, present ]
default: present
slug:
description:
- Deprecated since Grafana 5. Use grafana dashboard uid instead.
- slug of the dashboard. It's the friendly url name of the dashboard.
- When C(state) is C(present), this parameter can override the slug in the meta section of the json file.
- If you want to import a json dashboard exported directly from the interface (not from the api),
you have to specify the slug parameter because there is no meta section in the exported json.
uid:
version_added: 2.7
description:
- uid of the dashboard to export when C(state) is C(export) or C(absent).
path:
description:
- The path to the json file containing the Grafana dashboard to import or export.
overwrite:
description:
- Override existing dashboard when state is present.
type: bool
default: 'no'
message:
description:
- Set a commit message for the version history.
- Only used when C(state) is C(present).
validate_certs:
description:
- If C(no), SSL certificates will not be validated.
- This should only be used on personally controlled sites using self-signed certificates.
type: bool
default: 'yes'
client_cert:
description:
- PEM formatted certificate chain file to be used for SSL client authentication.
- This file can also include the key, and if the key is included, I(client_key) is not required.
version_added: 2.7
client_key:
description:
- PEM formatted file that contains your private key to be used for SSL client authentication.
- If I(client_cert) contains both the certificate and key, this option is not required.
version_added: 2.7
use_proxy:
description:
- Whether or not to use a proxy.
default: 'yes'
type: bool
version_added: 2.7
'''
EXAMPLES = '''
- hosts: localhost
connection: local
tasks:
- name: Import Grafana dashboard foo
grafana_dashboard:
grafana_url: http://grafana.company.com
grafana_api_key: "{{ grafana_api_key }}"
state: present
message: Updated by ansible
overwrite: yes
path: /path/to/dashboards/foo.json
- name: Export dashboard
grafana_dashboard:
grafana_url: http://grafana.company.com
grafana_user: "admin"
grafana_password: "{{ grafana_password }}"
org_id: 1
state: export
uid: "000000653"
path: "/path/to/dashboards/000000653.json"
'''
RETURN = '''
---
uid:
description: uid or slug of the created / deleted / exported dashboard.
returned: success
type: str
sample: 000000063
'''
import json
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.urls import fetch_url, url_argument_spec
from ansible.module_utils._text import to_native
from ansible.module_utils._text import to_text
__metaclass__ = type
class GrafanaAPIException(Exception):
pass
class GrafanaMalformedJson(Exception):
pass
class GrafanaExportException(Exception):
pass
class GrafanaDeleteException(Exception):
pass
def grafana_switch_organisation(module, grafana_url, org_id, headers):
r, info = fetch_url(module, '%s/api/user/using/%s' % (grafana_url, org_id), headers=headers, method='POST')
if info['status'] != 200:
raise GrafanaAPIException('Unable to switch to organization %s : %s' % (org_id, info))
def grafana_headers(module, data):
headers = {'content-type': 'application/json; charset=utf8'}
if 'grafana_api_key' in data and data['grafana_api_key']:
headers['Authorization'] = "Bearer %s" % data['grafana_api_key']
else:
module.params['force_basic_auth'] = True
grafana_switch_organisation(module, data['grafana_url'], data['org_id'], headers)
return headers
def get_grafana_version(module, grafana_url, headers):
grafana_version = None
r, info = fetch_url(module, '%s/api/frontend/settings' % grafana_url, headers=headers, method='GET')
if info['status'] == 200:
try:
settings = json.loads(to_text(r.read()))
grafana_version = settings['buildInfo']['version'].split('.')[0]
except UnicodeError:
raise GrafanaAPIException('Unable to decode version string to Unicode')
except Exception as e:
raise GrafanaAPIException(e)
else:
raise GrafanaAPIException('Unable to get grafana version : %s' % info)
return int(grafana_version)
def grafana_dashboard_exists(module, grafana_url, uid, headers):
dashboard_exists = False
dashboard = {}
grafana_version = get_grafana_version(module, grafana_url, headers)
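# Grafana >= 5 looks dashboards up by uid; older releases only expose the slug-based /api/dashboards/db endpoint.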
if grafana_version >= 5:
r, info = fetch_url(module, '%s/api/dashboards/uid/%s' % (grafana_url, uid), headers=headers, method='GET')
else:
r, info = fetch_url(module, '%s/api/dashboards/db/%s' % (grafana_url, uid), headers=headers, method='GET')
if info['status'] == 200:
dashboard_exists = True
try:
dashboard = json.loads(r.read())
except Exception as e:
raise GrafanaAPIException(e)
elif info['status'] == 404:
dashboard_exists = False
else:
raise GrafanaAPIException('Unable to get dashboard %s : %s' % (uid, info))
return dashboard_exists, dashboard
def grafana_create_dashboard(module, data):
# define data payload for grafana API
try:
with open(data['path'], 'r') as json_file:
payload = json.load(json_file)
except Exception as e:
raise GrafanaAPIException("Can't load json file %s" % to_native(e))
# Check that the dashboard JSON is nested under the 'dashboard' key
if 'dashboard' not in payload:
payload = {'dashboard': payload}
# define http header
headers = grafana_headers(module, data)
grafana_version = get_grafana_version(module, data['grafana_url'], headers)
if grafana_version < 5:
if data.get('slug'):
uid = data['slug']
elif 'meta' in payload and 'slug' in payload['meta']:
uid = payload['meta']['slug']
else:
raise GrafanaMalformedJson('No slug found in json. Needed with grafana < 5')
else:
if data.get('uid'):
uid = data['uid']
elif 'uid' in payload['dashboard']:
uid = payload['dashboard']['uid']
else:
uid = None
# test if dashboard already exists
dashboard_exists, dashboard = grafana_dashboard_exists(module, data['grafana_url'], uid, headers=headers)
result = {}
if dashboard_exists is True:
if dashboard == payload:
# unchanged
result['uid'] = uid
result['msg'] = "Dashboard %s unchanged." % uid
result['changed'] = False
else:
# update
if 'overwrite' in data and data['overwrite']:
payload['overwrite'] = True
if 'message' in data and data['message']:
payload['message'] = data['message']
r, info = fetch_url(module, '%s/api/dashboards/db' % data['grafana_url'], data=json.dumps(payload), headers=headers, method='POST')
if info['status'] == 200:
if grafana_version >= 5:
try:
dashboard = json.loads(r.read())
uid = dashboard['uid']
except Exception as e:
raise GrafanaAPIException(e)
result['uid'] = uid
result['msg'] = "Dashboard %s updated" % uid
result['changed'] = True
else:
body = json.loads(info['body'])
raise GrafanaAPIException('Unable to update the dashboard %s : %s' % (uid, body['message']))
else:
# create
r, info = fetch_url(module, '%s/api/dashboards/db' % data['grafana_url'], data=json.dumps(payload), headers=headers, method='POST')
if info['status'] == 200:
result['msg'] = "Dashboard %s created" % uid
result['changed'] = True
if grafana_version >= 5:
try:
dashboard = json.loads(r.read())
uid = dashboard['uid']
except Exception as e:
raise GrafanaAPIException(e)
result['uid'] = uid
else:
raise GrafanaAPIException('Unable to create the new dashboard %s : %s - %s.' % (uid, info['status'], info))
return result
def grafana_delete_dashboard(module, data):
# define http headers
headers = grafana_headers(module, data)
grafana_version = get_grafana_version(module, data['grafana_url'], headers)
if grafana_version < 5:
if data.get('slug'):
uid = data['slug']
else:
raise GrafanaMalformedJson('No slug parameter. Needed with grafana < 5')
else:
if data.get('uid'):
uid = data['uid']
else:
raise GrafanaDeleteException('No uid specified %s')
# test if dashboard already exists
dashboard_exists, dashboard = grafana_dashboard_exists(module, data['grafana_url'], uid, headers=headers)
result = {}
if dashboard_exists is True:
# delete
if grafana_version < 5:
r, info = fetch_url(module, '%s/api/dashboards/db/%s' % (data['grafana_url'], uid), headers=headers, method='DELETE')
else:
r, info = fetch_url(module, '%s/api/dashboards/uid/%s' % (data['grafana_url'], uid), headers=headers, method='DELETE')
if info['status'] == 200:
result['msg'] = "Dashboard %s deleted" % uid
result['changed'] = True
result['uid'] = uid
else:
raise GrafanaAPIException('Unable to update the dashboard %s : %s' % (uid, info))
else:
# dashboard does not exist, do nothing
result = {'msg': "Dashboard %s does not exist." % uid,
'changed': False,
'uid': uid}
return result
def grafana_export_dashboard(module, data):
# define http headers
headers = grafana_headers(module, data)
grafana_version = get_grafana_version(module, data['grafana_url'], headers)
if grafana_version < 5:
if data.get('slug'):
uid = data['slug']
else:
raise GrafanaMalformedJson('No slug parameter. Needed with grafana < 5')
else:
if data.get('uid'):
uid = data['uid']
else:
raise GrafanaExportException('No uid specified')
# test if dashboard already exists
dashboard_exists, dashboard = grafana_dashboard_exists(module, data['grafana_url'], uid, headers=headers)
if dashboard_exists is True:
try:
with open(data['path'], 'w') as f:
f.write(json.dumps(dashboard))
except Exception as e:
raise GrafanaExportException("Can't write json file : %s" % to_native(e))
result = {'msg': "Dashboard %s exported to %s" % (uid, data['path']),
'uid': uid,
'changed': True}
else:
result = {'msg': "Dashboard %s does not exist." % uid,
'uid': uid,
'changed': False}
return result
def main():
# use the predefined argument spec for url
argument_spec = url_argument_spec()
# remove unnecessary arguments
del argument_spec['force']
del argument_spec['force_basic_auth']
del argument_spec['http_agent']
argument_spec.update(
state=dict(choices=['present', 'absent', 'export'], default='present'),
url=dict(aliases=['grafana_url'], required=True),
url_username=dict(aliases=['grafana_user'], default='admin'),
url_password=dict(aliases=['grafana_password'], default='admin', no_log=True),
grafana_api_key=dict(type='str', no_log=True),
org_id=dict(default=1, type='int'),
uid=dict(type='str'),
slug=dict(type='str'),
path=dict(type='str'),
overwrite=dict(type='bool', default=False),
message=dict(type='str'),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=False,
required_together=[['url_username', 'url_password', 'org_id']],
mutually_exclusive=[['grafana_user', 'grafana_api_key'], ['uid', 'slug']],
)
try:
if module.params['state'] == 'present':
result = grafana_create_dashboard(module, module.params)
elif module.params['state'] == 'absent':
result = grafana_delete_dashboard(module, module.params)
else:
result = grafana_export_dashboard(module, module.params)
except GrafanaAPIException as e:
module.fail_json(
failed=True,
msg="error : %s" % to_native(e)
)
return
except GrafanaMalformedJson as e:
module.fail_json(
failed=True,
msg="error : json file does not contain a meta section with a slug parameter, or you did not specify the slug parameter"
)
return
except GrafanaDeleteException as e:
module.fail_json(
failed=True,
msg="error : Can't delete dashboard : %s" % to_native(e)
)
return
except GrafanaExportException as e:
module.fail_json(
failed=True,
msg="error : Can't export dashboard : %s" % to_native(e)
)
return
module.exit_json(
failed=False,
**result
)
return
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,862 |
unclear explanation of `firstmatch` option in the `lineinfile` module
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The current documentation has this explanation for the `firstmatch`
parameter of the `lineinfile` action:
```
If set, insertafter and insertbefore find a first line has regular expression matches.
```
I have no idea what `firstmatch` does, based on the above line.
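After reading the module source, I believe the intended behaviour is the following (an illustrative sketch, not the module's actual code): by default the *last* line matching `insertafter`/`insertbefore` is used as the insertion point, and `firstmatch: yes` makes the scan stop at the *first* matching line instead.
```python
import re

def find_insert_point(lines, pattern, firstmatch=False):
    """Return the index of the matching line, or -1 if nothing matches.

    By default the last matching line wins; with firstmatch=True the
    scan stops at the first match.
    """
    regex = re.compile(pattern)
    index = -1
    for lineno, line in enumerate(lines):
        if regex.search(line):
            index = lineno
            if firstmatch:
                break
    return index
```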
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
lineinfile
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.4
config file = None
configured module search path = [u'/home/user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/.virtualenvs/elasticluster/local/lib/python2.7/site-packages/ansible
executable location = /home/user/.virtualenvs/elasticluster/bin/ansible
python version = 2.7.14 (default, Sep 23 2017, 22:06:14) [GCC 7.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/62862
|
https://github.com/ansible/ansible/pull/62896
|
0b97180c74e83d275fdaaf6d831b963facecd45d
|
aeb0dde7ccb0b7ac34623241835cc876067b8c86
| 2019-09-26T10:30:52Z |
python
| 2019-10-18T11:53:50Z |
lib/ansible/modules/files/lineinfile.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Daniel Hokka Zakrisson <[email protected]>
# Copyright: (c) 2014, Ahti Kitsik <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: lineinfile
short_description: Manage lines in text files
description:
- This module ensures a particular line is in a file, or replace an
existing line using a back-referenced regular expression.
- This is primarily useful when you want to change a single line in a file only.
- See the M(replace) module if you want to change multiple, similar lines
or check M(blockinfile) if you want to insert/update/remove a block of lines in a file.
For other cases, see the M(copy) or M(template) modules.
version_added: "0.7"
options:
path:
description:
- The file to modify.
- Before Ansible 2.3 this option was only usable as I(dest), I(destfile) and I(name).
type: path
required: true
aliases: [ dest, destfile, name ]
regexp:
description:
- The regular expression to look for in every line of the file.
- For C(state=present), the pattern to replace if found. Only the last line found will be replaced.
- For C(state=absent), the pattern of the line(s) to remove.
- If the regular expression is not matched, the line will be
added to the file in keeping with C(insertbefore) or C(insertafter)
settings.
- When modifying a line the regexp should typically match both the initial state of
the line as well as its state after replacement by C(line) to ensure idempotence.
- Uses Python regular expressions. See U(http://docs.python.org/2/library/re.html).
type: str
aliases: [ regex ]
version_added: '1.7'
state:
description:
- Whether the line should be there or not.
type: str
choices: [ absent, present ]
default: present
line:
description:
- The line to insert/replace into the file.
- Required for C(state=present).
- If C(backrefs) is set, may contain backreferences that will get
expanded with the C(regexp) capture groups if the regexp matches.
type: str
aliases: [ value ]
backrefs:
description:
- Used with C(state=present).
- If set, C(line) can contain backreferences (both positional and named)
that will get populated if the C(regexp) matches.
- This parameter changes the operation of the module slightly;
C(insertbefore) and C(insertafter) will be ignored, and if the C(regexp)
does not match anywhere in the file, the file will be left unchanged.
- If the C(regexp) does match, the last matching line will be replaced by
the expanded line parameter.
type: bool
default: no
version_added: "1.1"
insertafter:
description:
- Used with C(state=present).
- If specified, the line will be inserted after the last match of specified regular expression.
- If the first match is required, use C(firstmatch=yes).
- A special value is available; C(EOF) for inserting the line at the end of the file.
- If specified regular expression has no matches, EOF will be used instead.
- If C(insertbefore) is set, default value C(EOF) will be ignored.
- If regular expressions are passed to both C(regexp) and C(insertafter), C(insertafter) is only honored if no match for C(regexp) is found.
- May not be used with C(backrefs) or C(insertbefore).
type: str
choices: [ EOF, '*regex*' ]
default: EOF
insertbefore:
description:
- Used with C(state=present).
- If specified, the line will be inserted before the last match of specified regular expression.
- If the first match is required, use C(firstmatch=yes).
- A special value is available; C(BOF) for inserting the line at the beginning of the file.
- If specified regular expression has no matches, the line will be inserted at the end of the file.
- If regular expressions are passed to both C(regexp) and C(insertbefore), C(insertbefore) is only honored if no match for C(regexp) is found.
- May not be used with C(backrefs) or C(insertafter).
type: str
choices: [ BOF, '*regex*' ]
version_added: "1.1"
create:
description:
- Used with C(state=present).
- If specified, the file will be created if it does not already exist.
- By default it will fail if the file is missing.
type: bool
default: no
backup:
description:
- Create a backup file including the timestamp information so you can
get the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
firstmatch:
description:
- Used with C(insertafter) or C(insertbefore).
- If set, C(insertafter) and C(insertbefore) will work with the first line that matches the given regular expression.
type: bool
default: no
version_added: "2.5"
others:
description:
- All arguments accepted by the M(file) module also work here.
type: str
extends_documentation_fragment:
- files
- validate
notes:
- As of Ansible 2.3, the I(dest) option has been changed to I(path) as default, but I(dest) still works as well.
seealso:
- module: blockinfile
- module: copy
- module: file
- module: replace
- module: template
- module: win_lineinfile
author:
- Daniel Hokka Zakrisson (@dhozac)
- Ahti Kitsik (@ahtik)
'''
EXAMPLES = r'''
# NOTE: Before 2.3, option 'dest', 'destfile' or 'name' was used instead of 'path'
- name: Ensure SELinux is set to enforcing mode
lineinfile:
path: /etc/selinux/config
regexp: '^SELINUX='
line: SELINUX=enforcing
- name: Make sure group wheel is not in the sudoers configuration
lineinfile:
path: /etc/sudoers
state: absent
regexp: '^%wheel'
- name: Replace a localhost entry with our own
lineinfile:
path: /etc/hosts
regexp: '^127\.0\.0\.1'
line: 127.0.0.1 localhost
owner: root
group: root
mode: '0644'
- name: Ensure the default Apache port is 8080
lineinfile:
path: /etc/httpd/conf/httpd.conf
regexp: '^Listen '
insertafter: '^#Listen '
line: Listen 8080
- name: Ensure we have our own comment added to /etc/services
lineinfile:
path: /etc/services
regexp: '^# port for http'
insertbefore: '^www.*80/tcp'
line: '# port for http by default'
- name: Add a line to a file if the file does not exist, without passing regexp
lineinfile:
path: /tmp/testfile
line: 192.168.1.99 foo.lab.net foo
create: yes
# NOTE: Yaml requires escaping backslashes in double quotes but not in single quotes
- name: Ensure the JBoss memory settings are exactly as needed
lineinfile:
path: /opt/jboss-as/bin/standalone.conf
regexp: '^(.*)Xms(\d+)m(.*)$'
line: '\1Xms${xms}m\3'
backrefs: yes
# NOTE: Fully quoted because of the ': ' on the line. See the Gotchas in the YAML docs.
- name: Validate the sudoers file before saving
lineinfile:
path: /etc/sudoers
state: present
regexp: '^%ADMIN ALL='
line: '%ADMIN ALL=(ALL) NOPASSWD: ALL'
validate: /usr/sbin/visudo -cf %s
'''
import os
import re
import tempfile
# import module snippets
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_bytes, to_native
def write_changes(module, b_lines, dest):
tmpfd, tmpfile = tempfile.mkstemp()
with os.fdopen(tmpfd, 'wb') as f:
f.writelines(b_lines)
validate = module.params.get('validate', None)
valid = not validate
if validate:
if "%s" not in validate:
module.fail_json(msg="validate must contain %%s: %s" % (validate))
(rc, out, err) = module.run_command(to_bytes(validate % tmpfile, errors='surrogate_or_strict'))
valid = rc == 0
if rc != 0:
module.fail_json(msg='failed to validate: '
'rc:%s error:%s' % (rc, err))
if valid:
module.atomic_move(tmpfile,
to_native(os.path.realpath(to_bytes(dest, errors='surrogate_or_strict')), errors='surrogate_or_strict'),
unsafe_writes=module.params['unsafe_writes'])
def check_file_attrs(module, changed, message, diff):
file_args = module.load_file_common_arguments(module.params)
if module.set_fs_attributes_if_different(file_args, False, diff=diff):
if changed:
message += " and "
changed = True
message += "ownership, perms or SE linux context changed"
return message, changed
def present(module, dest, regexp, line, insertafter, insertbefore, create,
backup, backrefs, firstmatch):
diff = {'before': '',
'after': '',
'before_header': '%s (content)' % dest,
'after_header': '%s (content)' % dest}
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if not os.path.exists(b_dest):
if not create:
module.fail_json(rc=257, msg='Destination %s does not exist !' % dest)
b_destpath = os.path.dirname(b_dest)
if not os.path.exists(b_destpath) and not module.check_mode:
try:
os.makedirs(b_destpath)
except Exception as e:
module.fail_json(msg='Error creating %s Error: %s' % (to_native(b_destpath), to_native(e)))
b_lines = []
else:
with open(b_dest, 'rb') as f:
b_lines = f.readlines()
if module._diff:
diff['before'] = to_native(b''.join(b_lines))
if regexp is not None:
bre_m = re.compile(to_bytes(regexp, errors='surrogate_or_strict'))
if insertafter not in (None, 'BOF', 'EOF'):
bre_ins = re.compile(to_bytes(insertafter, errors='surrogate_or_strict'))
elif insertbefore not in (None, 'BOF'):
bre_ins = re.compile(to_bytes(insertbefore, errors='surrogate_or_strict'))
else:
bre_ins = None
# index[0] is the line num where regexp has been found
# index[1] is the line num where insertafter/insertbefore has been found
index = [-1, -1]
m = None
b_line = to_bytes(line, errors='surrogate_or_strict')
# The module's doc says
# "If regular expressions are passed to both regexp and
# insertafter, insertafter is only honored if no match for regexp is found."
# Therefore:
# 1. regexp was found -> ignore insertafter, replace the found line
# 2. regexp was not found -> insert the line after 'insertafter' or 'insertbefore' line
# Given the above:
# 1. First check that there is no match for regexp:
if regexp is not None:
for lineno, b_cur_line in enumerate(b_lines):
match_found = bre_m.search(b_cur_line)
if match_found:
index[0] = lineno
m = match_found
if firstmatch:
break
# 2. When no match found on the previous step,
# parse for searching insertafter/insertbefore:
if not m:
for lineno, b_cur_line in enumerate(b_lines):
match_found = b_line == b_cur_line.rstrip(b'\r\n')
if match_found:
index[0] = lineno
m = match_found
elif bre_ins is not None and bre_ins.search(b_cur_line):
if insertafter:
# + 1 for the next line
index[1] = lineno + 1
if firstmatch:
break
if insertbefore:
# index[1] for the previous line
index[1] = lineno
if firstmatch:
break
msg = ''
changed = False
b_linesep = to_bytes(os.linesep, errors='surrogate_or_strict')
# Exact line or Regexp matched a line in the file
if index[0] != -1:
if backrefs:
b_new_line = m.expand(b_line)
else:
# Don't do backref expansion if not asked.
b_new_line = b_line
if not b_new_line.endswith(b_linesep):
b_new_line += b_linesep
# If no regexp was given and no line match is found anywhere in the file,
# insert the line appropriately if using insertbefore or insertafter
if regexp is None and m is None:
# Insert lines
if insertafter and insertafter != 'EOF':
# Ensure there is a line separator after the found string
# at the end of the file.
if b_lines and not b_lines[-1][-1:] in (b'\n', b'\r'):
b_lines[-1] = b_lines[-1] + b_linesep
# If the line to insert after is at the end of the file
# use the appropriate index value.
if len(b_lines) == index[1]:
if b_lines[index[1] - 1].rstrip(b'\r\n') != b_line:
b_lines.append(b_line + b_linesep)
msg = 'line added'
changed = True
elif b_lines[index[1]].rstrip(b'\r\n') != b_line:
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
elif insertbefore and insertbefore != 'BOF':
# If the line to insert before is at the beginning of the file
# use the appropriate index value.
if index[1] <= 0:
if b_lines[index[1]].rstrip(b'\r\n') != b_line:
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
elif b_lines[index[1] - 1].rstrip(b'\r\n') != b_line:
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
elif b_lines[index[0]] != b_new_line:
b_lines[index[0]] = b_new_line
msg = 'line replaced'
changed = True
elif backrefs:
# Do absolutely nothing, since it's not safe generating the line
# without the regexp matching to populate the backrefs.
pass
# Add it to the beginning of the file
elif insertbefore == 'BOF' or insertafter == 'BOF':
b_lines.insert(0, b_line + b_linesep)
msg = 'line added'
changed = True
# Add it to the end of the file if requested or
# if insertafter/insertbefore didn't match anything
# (so default behaviour is to add at the end)
elif insertafter == 'EOF' or index[1] == -1:
# If the file is not empty then ensure there's a newline before the added line
if b_lines and not b_lines[-1][-1:] in (b'\n', b'\r'):
b_lines.append(b_linesep)
b_lines.append(b_line + b_linesep)
msg = 'line added'
changed = True
elif insertafter and index[1] != -1:
# Don't insert the line if it already matches at the index
if b_line != b_lines[index[1]].rstrip(b'\n\r'):
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
# insert matched, but not the regexp
else:
b_lines.insert(index[1], b_line + b_linesep)
msg = 'line added'
changed = True
if module._diff:
diff['after'] = to_native(b''.join(b_lines))
backupdest = ""
if changed and not module.check_mode:
if backup and os.path.exists(b_dest):
backupdest = module.backup_local(dest)
write_changes(module, b_lines, dest)
if module.check_mode and not os.path.exists(b_dest):
module.exit_json(changed=changed, msg=msg, backup=backupdest, diff=diff)
attr_diff = {}
msg, changed = check_file_attrs(module, changed, msg, attr_diff)
attr_diff['before_header'] = '%s (file attributes)' % dest
attr_diff['after_header'] = '%s (file attributes)' % dest
difflist = [diff, attr_diff]
module.exit_json(changed=changed, msg=msg, backup=backupdest, diff=difflist)
def absent(module, dest, regexp, line, backup):
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if not os.path.exists(b_dest):
module.exit_json(changed=False, msg="file not present")
msg = ''
diff = {'before': '',
'after': '',
'before_header': '%s (content)' % dest,
'after_header': '%s (content)' % dest}
with open(b_dest, 'rb') as f:
b_lines = f.readlines()
if module._diff:
diff['before'] = to_native(b''.join(b_lines))
if regexp is not None:
bre_c = re.compile(to_bytes(regexp, errors='surrogate_or_strict'))
found = []
b_line = to_bytes(line, errors='surrogate_or_strict')
def matcher(b_cur_line):
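# Keep only lines that do NOT match; matched lines are collected in
# 'found' so the module can report how many were removed.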
if regexp is not None:
match_found = bre_c.search(b_cur_line)
else:
match_found = b_line == b_cur_line.rstrip(b'\r\n')
if match_found:
found.append(b_cur_line)
return not match_found
b_lines = [l for l in b_lines if matcher(l)]
changed = len(found) > 0
if module._diff:
diff['after'] = to_native(b''.join(b_lines))
backupdest = ""
if changed and not module.check_mode:
if backup:
backupdest = module.backup_local(dest)
write_changes(module, b_lines, dest)
if changed:
msg = "%s line(s) removed" % len(found)
attr_diff = {}
msg, changed = check_file_attrs(module, changed, msg, attr_diff)
attr_diff['before_header'] = '%s (file attributes)' % dest
attr_diff['after_header'] = '%s (file attributes)' % dest
difflist = [diff, attr_diff]
module.exit_json(changed=changed, found=len(found), msg=msg, backup=backupdest, diff=difflist)
def main():
module = AnsibleModule(
argument_spec=dict(
path=dict(type='path', required=True, aliases=['dest', 'destfile', 'name']),
state=dict(type='str', default='present', choices=['absent', 'present']),
regexp=dict(type='str', aliases=['regex']),
line=dict(type='str', aliases=['value']),
insertafter=dict(type='str'),
insertbefore=dict(type='str'),
backrefs=dict(type='bool', default=False),
create=dict(type='bool', default=False),
backup=dict(type='bool', default=False),
firstmatch=dict(type='bool', default=False),
validate=dict(type='str'),
),
mutually_exclusive=[['insertbefore', 'insertafter']],
add_file_common_args=True,
supports_check_mode=True,
)
params = module.params
create = params['create']
backup = params['backup']
backrefs = params['backrefs']
path = params['path']
firstmatch = params['firstmatch']
regexp = params['regexp']
line = params['line']
if regexp == '':
module.warn(
"The regular expression is an empty string, which will match every line in the file. "
"This may have unintended consequences, such as replacing the last line in the file rather than appending. "
"If this is desired, use '^' to match every line in the file and avoid this warning.")
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.isdir(b_path):
module.fail_json(rc=256, msg='Path %s is a directory !' % path)
if params['state'] == 'present':
if backrefs and regexp is None:
module.fail_json(msg='regexp is required with backrefs=true')
if line is None:
module.fail_json(msg='line is required with state=present')
# Deal with the insertafter default value manually, to avoid errors
# because of the mutually_exclusive mechanism.
ins_bef, ins_aft = params['insertbefore'], params['insertafter']
if ins_bef is None and ins_aft is None:
ins_aft = 'EOF'
present(module, path, regexp, line,
ins_aft, ins_bef, create, backup, backrefs, firstmatch)
else:
if regexp is None and line is None:
module.fail_json(msg='one of line or regexp is required with state=absent')
absent(module, path, regexp, line, backup)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,406 |
eos_vlans : 'Overridden' state is not adding the new configs
|
##### SUMMARY
In eos_vlans, when state = "overridden", the existing configuration is deleted/defaulted as expected, but the module fails to add the new configuration. The playbook still reports success.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
eos_vlans.py
##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/gosriniv/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/gosriniv/ansible/lib/ansible
executable location = /home/gosriniv/ansible/bin/ansible
python version = 3.6.5 (default, Sep 4 2019, 12:23:33) [GCC 9.0.1 20190312 (Red Hat 9.0.1-0.10)]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
```
Arista vEOS
```
##### STEPS TO REPRODUCE
```
---
- hosts: eos
gather_facts: false
connection: network_cli
tasks:
- name: Merge given VLAN attributes with device configuration
eos_vlans:
config:
- vlan_id: 10
name: "ten"
- vlan_id: 20
name: "newvlan"
state: merged
- name: Override given VLAN attributes with device configuration
eos_vlans:
config:
- vlan_id: 50
name: "fifty"
state: "suspend"
state: overridden
```
##### EXPECTED RESULTS
Deletion of vlan 10 and vlan 20.
Addition of vlan 50
##### ACTUAL RESULTS
```
Refer to 'after'
changed: [10.8.38.31] => {
"after": [
{
"vlan_id": 10
},
{
"vlan_id": 20
}
],
"before": [
{
"name": "ten",
"vlan_id": 10
},
{
"name": "newvlan",
"vlan_id": 20
}
],
"changed": true,
"commands": [
"vlan 10",
"no name",
"vlan 20",
"no name"
],
"invocation": {
"module_args": {
"config": [
{
"name": "fifty",
"state": "suspend",
"vlan_id": 50
}
],
"state": "overridden"
}
}
}
```
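Looking at the module source, the cause appears to be that `_state_overridden` only iterates over the VLANs already on the device (`have`), so a wanted VLAN that does not yet exist never generates any commands. A sketch of the missing pass, reusing the `dict_diff` and `generate_commands` helpers the module already has:
```python
def missing_vlan_commands(want, have):
    """Commands for wanted VLANs that are absent from the device."""
    commands = []
    for key, desired in want.items():
        if key not in have:
            extant = dict(vlan_id=key)
            add_config = dict_diff(extant, desired)
            commands.extend(generate_commands(key, add_config, {}))
    return commands
```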
|
https://github.com/ansible/ansible/issues/63406
|
https://github.com/ansible/ansible/pull/63639
|
4326165be56c1bc990d0e9eaef951227b0e94687
|
741d529409a03c361efc78a418bc56c4dc5a52eb
| 2019-10-11T17:13:52Z |
python
| 2019-10-18T14:36:30Z |
lib/ansible/module_utils/network/eos/config/vlans/vlans.py
|
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""
The eos_vlans class
It is in this file where the current configuration (as dict)
is compared to the provided configuration (as dict) and the command set
necessary to bring the current configuration to it's desired end-state is
created
"""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.module_utils.network.common.cfg.base import ConfigBase
from ansible.module_utils.network.common.utils import to_list, dict_diff, param_list_to_dict
from ansible.module_utils.network.eos.facts.facts import Facts
class Vlans(ConfigBase):
"""
The eos_vlans class
"""
gather_subset = [
'!all',
'!min',
]
gather_network_resources = [
'vlans',
]
def get_vlans_facts(self):
""" Get the 'facts' (the current configuration)
:rtype: A dictionary
:returns: The current configuration as a dictionary
"""
facts, _warnings = Facts(self._module).get_facts(self.gather_subset, self.gather_network_resources)
vlans_facts = facts['ansible_network_resources'].get('vlans')
if not vlans_facts:
return []
return vlans_facts
def execute_module(self):
""" Execute the module
:rtype: A dictionary
:returns: The result from module execution
"""
result = {'changed': False}
warnings = list()
commands = list()
existing_vlans_facts = self.get_vlans_facts()
commands.extend(self.set_config(existing_vlans_facts))
if commands:
if not self._module.check_mode:
self._connection.edit_config(commands)
result['changed'] = True
result['commands'] = commands
changed_vlans_facts = self.get_vlans_facts()
result['before'] = existing_vlans_facts
if result['changed']:
result['after'] = changed_vlans_facts
result['warnings'] = warnings
return result
def set_config(self, existing_vlans_facts):
""" Collect the configuration from the args passed to the module,
collect the current configuration (as a dict from facts)
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
want = self._module.params['config']
have = existing_vlans_facts
resp = self.set_state(want, have)
return to_list(resp)
def set_state(self, want, have):
""" Select the appropriate function based on the state provided
:param want: the desired configuration as a dictionary
:param have: the current configuration as a dictionary
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
state = self._module.params['state']
want = param_list_to_dict(want, "vlan_id", False)
have = param_list_to_dict(have, "vlan_id", False)
if state == 'overridden':
commands = self._state_overridden(want, have)
elif state == 'deleted':
commands = self._state_deleted(want, have)
elif state == 'merged':
commands = self._state_merged(want, have)
elif state == 'replaced':
commands = self._state_replaced(want, have)
return commands
@staticmethod
def _state_replaced(want, have):
""" The command generator when state is replaced
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
commands = []
for key, desired in want.items():
if key in have:
extant = have[key]
else:
extant = dict(vlan_id=key)
add_config = dict_diff(extant, desired)
del_config = dict_diff(desired, extant)
commands.extend(generate_commands(key, add_config, del_config))
return commands
@staticmethod
def _state_overridden(want, have):
""" The command generator when state is overridden
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
commands = []
for key, extant in have.items():
if key in want:
desired = want[key]
else:
desired = dict(vlan_id=key)
add_config = dict_diff(extant, desired)
del_config = dict_diff(desired, extant)
commands.extend(generate_commands(key, add_config, del_config))
return commands
@staticmethod
def _state_merged(want, have):
""" The command generator when state is merged
:rtype: A list
:returns: the commands necessary to merge the provided into
the current configuration
"""
commands = []
for key, desired in want.items():
if key in have:
extant = have[key]
else:
extant = dict(vlan_id=key)
add_config = dict_diff(extant, desired)
commands.extend(generate_commands(key, add_config, {}))
return commands
@staticmethod
def _state_deleted(want, have):
""" The command generator when state is deleted
:rtype: A list
:returns: the commands necessary to remove the current configuration
of the provided objects
"""
commands = []
for key in want.keys():
desired = dict(vlan_id=key)
if key in have:
extant = have[key]
else:
extant = dict(vlan_id=key)
del_config = dict_diff(desired, extant)
commands.extend(generate_commands(key, {}, del_config))
return commands
def generate_commands(vlan_id, to_set, to_remove):
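# Render the CLI for one VLAN: '<key> <value>' lines for settings,
# 'no <key>' lines for removals, prefixed with 'vlan <id>' only when
# there is at least one command to apply.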
commands = []
for key, value in to_set.items():
if value is None:
continue
commands.append("{0} {1}".format(key, value))
for key in to_remove.keys():
commands.append("no {0}".format(key))
if commands:
commands.insert(0, "vlan {0}".format(vlan_id))
return commands
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,406 |
eos_vlans : 'Overridden' state is not adding the new configs
|
##### SUMMARY
In eos_vlans, when state = "overridden", the existing configuration is deleted/defaulted as expected, but the module fails to add the new configuration. The playbook still reports success.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
eos_vlans.py
##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/gosriniv/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/gosriniv/ansible/lib/ansible
executable location = /home/gosriniv/ansible/bin/ansible
python version = 3.6.5 (default, Sep 4 2019, 12:23:33) [GCC 9.0.1 20190312 (Red Hat 9.0.1-0.10)]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
```
Arista vEOS
```
##### STEPS TO REPRODUCE
```
---
- hosts: eos
gather_facts: false
connection: network_cli
tasks:
- name: Merge given VLAN attributes with device configuration
eos_vlans:
config:
- vlan_id: 10
name: "ten"
- vlan_id: 20
name: "newvlan"
state: merged
- name: Override given VLAN attributes with device configuration
eos_vlans:
config:
- vlan_id: 50
name: "fifty"
state: "suspend"
state: overridden
```
##### EXPECTED RESULTS
Deletion of vlan 10 and vlan 20.
Addition of vlan 50
##### ACTUAL RESULTS
```
Refer to 'after'
changed: [10.8.38.31] => {
"after": [
{
"vlan_id": 10
},
{
"vlan_id": 20
}
],
"before": [
{
"name": "ten",
"vlan_id": 10
},
{
"name": "newvlan",
"vlan_id": 20
}
],
"changed": true,
"commands": [
"vlan 10",
"no name",
"vlan 20",
"no name"
],
"invocation": {
"module_args": {
"config": [
{
"name": "fifty",
"state": "suspend",
"vlan_id": 50
}
],
"state": "overridden"
}
}
}
```
|
https://github.com/ansible/ansible/issues/63406
|
https://github.com/ansible/ansible/pull/63639
|
4326165be56c1bc990d0e9eaef951227b0e94687
|
741d529409a03c361efc78a418bc56c4dc5a52eb
| 2019-10-11T17:13:52Z |
python
| 2019-10-18T14:36:30Z |
test/integration/targets/eos_vlans/tests/cli/deleted.yaml
|
---
- include_tasks: reset_config.yml
- set_fact:
config:
- vlan_id: 20
- eos_facts:
gather_network_resources: vlans
become: yes
- name: Returns vlans to default parameters
eos_vlans:
config: "{{ config }}"
state: deleted
register: result
become: yes
- assert:
that:
- "ansible_facts.network_resources.vlans|symmetric_difference(result.before) == []"
- eos_facts:
gather_network_resources: vlans
become: yes
- assert:
that:
- "ansible_facts.network_resources.vlans|symmetric_difference(result.after) == []"
- set_fact:
expected_config:
- vlan_id: 10
name: ten
- vlan_id: 20
- assert:
that:
- "expected_config|symmetric_difference(ansible_facts.network_resources.vlans) == []"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,406 |
eos_vlans : 'Overridden' state is not adding the new configs
|
##### SUMMARY
In eos_vlans, when state = "overridden", the existing configuration is deleted/defaulted as expected, but the module fails to add the new configuration. The playbook still reports success.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
eos_vlans.py
##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/gosriniv/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/gosriniv/ansible/lib/ansible
executable location = /home/gosriniv/ansible/bin/ansible
python version = 3.6.5 (default, Sep 4 2019, 12:23:33) [GCC 9.0.1 20190312 (Red Hat 9.0.1-0.10)]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
```
Arista vEOS
```
##### STEPS TO REPRODUCE
```
---
- hosts: eos
gather_facts: false
connection: network_cli
tasks:
- name: Merge given VLAN attributes with device configuration
eos_vlans:
config:
- vlan_id: 10
name: "ten"
- vlan_id: 20
name: "newvlan"
state: merged
- name: Override given VLAN attributes with device configuration
eos_vlans:
config:
- vlan_id: 50
name: "fifty"
state: "suspend"
state: overridden
```
##### EXPECTED RESULTS
Deletion of vlan 10 and vlan 20.
Addition of vlan 50
##### ACTUAL RESULTS
```
Refer to 'after'
changed: [10.8.38.31] => {
"after": [
{
"vlan_id": 10
},
{
"vlan_id": 20
}
],
"before": [
{
"name": "ten",
"vlan_id": 10
},
{
"name": "newvlan",
"vlan_id": 20
}
],
"changed": true,
"commands": [
"vlan 10",
"no name",
"vlan 20",
"no name"
],
"invocation": {
"module_args": {
"config": [
{
"name": "fifty",
"state": "suspend",
"vlan_id": 50
}
],
"state": "overridden"
}
}
}
```
|
https://github.com/ansible/ansible/issues/63406
|
https://github.com/ansible/ansible/pull/63639
|
4326165be56c1bc990d0e9eaef951227b0e94687
|
741d529409a03c361efc78a418bc56c4dc5a52eb
| 2019-10-11T17:13:52Z |
python
| 2019-10-18T14:36:30Z |
test/integration/targets/eos_vlans/tests/cli/overridden.yaml
|
---
- include_tasks: reset_config.yml
- set_fact:
config:
- vlan_id: 20
state: suspend
other_config:
- vlan_id: 10
- eos_facts:
gather_network_resources: vlans
become: yes
- name: Overrides device configuration of all vlans with provided configuration
eos_vlans:
config: "{{ config }}"
state: overridden
register: result
become: yes
- assert:
that:
- "ansible_facts.network_resources.vlans|symmetric_difference(result.before) == []"
- eos_facts:
gather_network_resources: vlans
become: yes
- assert:
that:
- "ansible_facts.network_resources.vlans|symmetric_difference(result.after) == []"
- set_fact:
expected_config: "{{ config + other_config }}"
- assert:
that:
- "expected_config|symmetric_difference(ansible_facts.network_resources.vlans) == []"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,007 |
ASA: command is considered complete if it is not authorized
|
##### SUMMARY
ASA: a command is treated as having completed successfully even when it is not authorized. We first discovered the problem when it turned out that our TACACS server's whitelist does not include the `no terminal pager` command, which is executed when a connection to the device is established ([lib/ansible/plugins/terminal/asa.py#L49](https://github.com/ansible/ansible/blob/f09bd91ad078a5a484cac77df39af51586745990/lib/ansible/plugins/terminal/asa.py#L49)). The device responded with the line `Command authorization failed`, paging was not disabled, and no error was reported; subsequent commands then behaved incorrectly because the device kept paging output and prompting for the spacebar.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
* lib/ansible/plugins/terminal/asa.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0.dev0
config file = /home/administrator/ansible/playbooks/asa_collect/ansible.cfg
configured module search path = [u'/home/administrator/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/administrator/ansible_dev/ansible/lib/ansible
executable location = /home/administrator/ansible_dev/ansible/bin/ansible
python version = 2.7.13 (default, Sep 26 2018, 18:42:22) [GCC 6.3.0 20170516]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
PARAMIKO_HOST_KEY_AUTO_ADD(/home/administrator/ansible/playbooks/asa_collect/ansible.cfg) = True
PARAMIKO_LOOK_FOR_KEYS(/home/administrator/ansible/playbooks/asa_collect/ansible.cfg) = True
PERSISTENT_COMMAND_TIMEOUT(/home/administrator/ansible/playbooks/asa_collect/ansible.cfg) = 60
RETRY_FILES_ENABLED(/home/administrator/ansible/playbooks/asa_collect/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
* debian 9.5
* ASA 9.6.*
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
* ASA device authorizes commands through TACACS server
* the `no terminal pager` command is not included in the white list of allowed commands on the TACACS server
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: all
gather_facts: no
tasks:
- name: show commands
asa_command:
commands:
- '<not allowed command>'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
An error should be reported when executing a command that is not authorized.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
No errors are output, but subsequent commands with long output may be cut off or not executed at all because the device waits for the spacebar to scroll.
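A plausible fix, sketched here (an assumption about the approach, not necessarily the merged patch), is to teach the terminal plugin to recognise the authorization-failure banner as an error:
```python
import re

# Sketch: extend the ASA terminal plugin's stderr patterns so a TACACS+
# denial raises a connection error instead of being silently ignored.
terminal_stderr_re = [
    re.compile(br"error:", re.I),
    re.compile(br"Removing.* not allowed, it is being used"),
    re.compile(br"Command authorization failed", re.I),
]
```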
|
https://github.com/ansible/ansible/issues/59007
|
https://github.com/ansible/ansible/pull/59008
|
8eb46acbb118494b6855f0117df33ba1ec74f020
|
df0e7478992b568189623c8b52175b884d23c049
| 2019-07-12T04:47:11Z |
python
| 2019-10-19T06:06:16Z |
lib/ansible/plugins/terminal/asa.py
|
#
# (c) 2016 Red Hat Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import re
import json
from ansible.errors import AnsibleConnectionFailure
from ansible.module_utils._text import to_text, to_bytes
from ansible.plugins.terminal import TerminalBase
class TerminalModule(TerminalBase):
terminal_stdout_re = [
re.compile(br"[\r\n]?[\w+\-\.:\/\[\]]+(?:\([^\)]+\)){,3}(?:>|#) ?$"),
re.compile(br"\[\w+\@[\w\-\.]+(?: [^\]])\] ?[>#\$] ?$")
]
terminal_stderr_re = [
re.compile(br"error:", re.I),
re.compile(br"Removing.* not allowed, it is being used")
]
def on_open_shell(self):
if self._get_prompt().strip().endswith(b'#'):
self.disable_pager()
def disable_pager(self):
try:
self._exec_cli_command(u'no terminal pager')
except AnsibleConnectionFailure:
raise AnsibleConnectionFailure('unable to disable terminal pager')
def on_become(self, passwd=None):
if self._get_prompt().strip().endswith(b'#'):
return
cmd = {u'command': u'enable'}
if passwd:
# Note: python-3.5 cannot combine u"" and r"" together. Thus make
# an r string and use to_text to ensure it's text on both py2 and py3.
cmd[u'prompt'] = to_text(r"[\r\n]?[Pp]assword: $", errors='surrogate_or_strict')
cmd[u'answer'] = passwd
try:
self._exec_cli_command(to_bytes(json.dumps(cmd), errors='surrogate_or_strict'))
except AnsibleConnectionFailure:
raise AnsibleConnectionFailure('unable to elevate privilege to enable mode')
self.disable_pager()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,898 |
gitlab_user contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_user contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal:
```
lib/ansible/modules/source_control/gitlab_user.py:414:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
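For context, the flagged call follows the alias-deprecation pattern used across the gitlab modules, along these lines:
```python
def deprecation_warning(module):
    # Pattern taken from the sibling gitlab modules; the sanity test
    # flags the hard-coded '2.10' once that becomes the dev version.
    for alias in ('private_token', 'access_token'):
        if alias in module.params:
            module.deprecate("Alias '{0}' is deprecated".format(alias), "2.10")
```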
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_user.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61898
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:20Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_deploy_key.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2018, Marcus Watkins <[email protected]>
# Based on code:
# Copyright: (c) 2013, Phillip Gentry <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
module: gitlab_deploy_key
short_description: Manages GitLab project deploy keys.
description:
- Adds, updates and removes project deploy keys
version_added: "2.6"
author:
- Marcus Watkins (@marwatk)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
api_token:
description:
- GitLab token for logging in.
version_added: "2.8"
type: str
aliases:
- private_token
- access_token
project:
description:
- Id or Full path of project in the form of group/name
required: true
type: str
title:
description:
- Deploy key's title
required: true
type: str
key:
description:
- Deploy key
required: true
type: str
can_push:
description:
- Whether this key can push to the project
type: bool
default: no
state:
description:
- When C(present) the deploy key added to the project if it doesn't exist.
- When C(absent) it will be removed from the project if it exists
required: true
default: present
type: str
choices: [ "present", "absent" ]
'''
EXAMPLES = '''
- name: "Adding a project deploy key"
gitlab_deploy_key:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
title: "Jenkins CI"
state: present
key: "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIEAiPWx6WM4lhHNedGfBpPJNPpZ7yKu+dnn1SJejgt4596k6YjzGGphH2TUxwKzxcKDKKezwkpfnxPkSMkuEspGRt/aZZ9w..."
- name: "Update the above deploy key to add push access"
gitlab_deploy_key:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
title: "Jenkins CI"
state: present
can_push: yes
- name: "Remove the previous deploy key from the project"
gitlab_deploy_key:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
state: absent
key: "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIEAiPWx6WM4lhHNedGfBpPJNPpZ7yKu+dnn1SJejgt4596k6YjzGGphH2TUxwKzxcKDKKezwkpfnxPkSMkuEspGRt/aZZ9w..."
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: key is already in use"
deploy_key:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findProject
class GitLabDeployKey(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.deployKeyObject = None
'''
@param project Project object
@param key_title Title of the key
@param key_key String of the key
@param options Deploy key options (for example can_push)
'''
def createOrUpdateDeployKey(self, project, key_title, key_key, options):
changed = False
# Because we have already called existsDeployKey in main()
if self.deployKeyObject is None:
deployKey = self.createDeployKey(project, {
'title': key_title,
'key': key_key,
'can_push': options['can_push']})
changed = True
else:
changed, deployKey = self.updateDeployKey(self.deployKeyObject, {
'can_push': options['can_push']})
self.deployKeyObject = deployKey
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the deploy key %s" % key_title)
try:
deployKey.save()
except Exception as e:
self._module.fail_json(msg="Failed to update deploy key: %s " % e)
return True
else:
return False
'''
@param project Project Object
@param arguments Attributes of the deployKey
'''
def createDeployKey(self, project, arguments):
if self._module.check_mode:
return True
try:
deployKey = project.keys.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create deploy key: %s " % to_native(e))
return deployKey
'''
@param deployKey Deploy Key Object
@param arguments Attributes of the deployKey
'''
def updateDeployKey(self, deployKey, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(deployKey, arg_key) != arguments[arg_key]:
setattr(deployKey, arg_key, arguments[arg_key])
changed = True
return (changed, deployKey)
'''
@param project Project object
@param key_title Title of the key
'''
def findDeployKey(self, project, key_title):
deployKeys = project.keys.list()
for deployKey in deployKeys:
if (deployKey.title == key_title):
return deployKey
'''
@param project Project object
@param key_title Title of the key
'''
def existsDeployKey(self, project, key_title):
# When the deploy key exists, the object is stored in self.deployKeyObject.
deployKey = self.findDeployKey(project, key_title)
if deployKey:
self.deployKeyObject = deployKey
return True
return False
def deleteDeployKey(self):
if self._module.check_mode:
return True
return self.deployKeyObject.delete()
def deprecation_warning(module):
deprecated_aliases = ['private_token', 'access_token']
for alias in deprecated_aliases:
if alias in module.params:
module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
api_token=dict(type='str', no_log=True, aliases=["private_token", "access_token"]),
state=dict(type='str', default="present", choices=["absent", "present"]),
project=dict(type='str', required=True),
key=dict(type='str', required=True),
can_push=dict(type='bool', default=False),
title=dict(type='str', required=True)
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token']
],
required_together=[
['api_username', 'api_password']
],
required_one_of=[
['api_username', 'api_token']
],
supports_check_mode=True,
)
deprecation_warning(module)
gitlab_url = re.sub('/api.*', '', module.params['api_url'])
validate_certs = module.params['validate_certs']
gitlab_user = module.params['api_username']
gitlab_password = module.params['api_password']
gitlab_token = module.params['api_token']
state = module.params['state']
project_identifier = module.params['project']
key_title = module.params['title']
key_keyfile = module.params['key']
key_can_push = module.params['can_push']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. "
"GitLab removed the Session API, and private tokens were removed from user API endpoints in version 10.2." % to_native(e))
gitlab_deploy_key = GitLabDeployKey(module, gitlab_instance)
project = findProject(gitlab_instance, project_identifier)
if project is None:
module.fail_json(msg="Failed to create deploy key: project %s doesn't exist" % project_identifier)
deployKey_exists = gitlab_deploy_key.existsDeployKey(project, key_title)
if state == 'absent':
if deployKey_exists:
gitlab_deploy_key.deleteDeployKey()
module.exit_json(changed=True, msg="Successfully deleted deploy key %s" % key_title)
else:
module.exit_json(changed=False, msg="Deploy key deleted or does not exists")
if state == 'present':
if gitlab_deploy_key.createOrUpdateDeployKey(project, key_title, key_keyfile, {'can_push': key_can_push}):
module.exit_json(changed=True, msg="Successfully created or updated the deploy key %s" % key_title,
deploy_key=gitlab_deploy_key.deployKeyObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the deploy key %s" % key_title,
deploy_key=gitlab_deploy_key.deployKeyObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,898 |
gitlab_user contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_user contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_user.py:414:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
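For reference, the flagged pattern is the alias-deprecation helper shared by the gitlab_* modules; a minimal sketch (alias list and version string taken from the affected files):
```
def deprecation_warning(module):
    # Passing version "2.10" to AnsibleModule.deprecate is what the
    # ansible-deprecated-version sanity check reports once devel reaches 2.10.
    for alias in ['login_token']:
        if alias in module.params:
            module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
```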
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_user.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61898
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:20Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_group.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2015, Werner Dijkerman ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_group
short_description: Creates/updates/deletes GitLab Groups
description:
- When the group does not exist in GitLab, it will be created.
- When the group does exist and state=absent, the group will be deleted.
version_added: "2.1"
author:
- Werner Dijkerman (@dj-wasabi)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
server_url:
description:
- The URL of the GitLab server, with protocol (i.e. http or https).
type: str
login_user:
description:
- GitLab user name.
type: str
login_password:
description:
- GitLab password for login_user
type: str
api_token:
description:
- GitLab token for logging in.
type: str
aliases:
- login_token
name:
description:
- Name of the group you want to create.
required: true
type: str
path:
description:
- The path of the group you want to create; this will be server_url/group_path.
- If not supplied, the group_name will be used.
type: str
description:
description:
- A description for the group.
version_added: "2.7"
type: str
state:
description:
- Create or delete the group.
- Possible values are present and absent.
default: present
type: str
choices: ["present", "absent"]
parent:
description:
- Allows creating subgroups.
- Id or Full path of parent group in the form of group/name
version_added: "2.8"
type: str
visibility:
description:
- Default visibility of the group
version_added: "2.8"
choices: ["private", "internal", "public"]
default: private
type: str
'''
EXAMPLES = '''
- name: "Delete GitLab Group"
gitlab_group:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
validate_certs: False
name: my_first_group
state: absent
- name: "Create GitLab Group"
gitlab_group:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: my_first_group
path: my_first_group
state: present
# The group will be created at https://gitlab.dj-wasabi.local/super_parent/parent/my_first_group
- name: "Create GitLab SubGroup"
gitlab_group:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: my_first_group
path: my_first_group
state: present
parent: "super_parent/parent"
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
group:
description: API object
returned: always
type: dict
'''
import os
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findGroup
class GitLabGroup(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.groupObject = None
'''
@param group Group object
'''
def getGroupId(self, group):
if group is not None:
return group.id
return None
'''
@param name Name of the group
@param parent Parent group full path
@param options Group options
'''
def createOrUpdateGroup(self, name, parent, options):
changed = False
# Because we have already called existsGroup() in main()
if self.groupObject is None:
parent_id = self.getGroupId(parent)
group = self.createGroup({
'name': name,
'path': options['path'],
'parent_id': parent_id,
'visibility': options['visibility']})
changed = True
else:
changed, group = self.updateGroup(self.groupObject, {
'name': name,
'description': options['description'],
'visibility': options['visibility']})
self.groupObject = group
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the group %s" % name)
try:
group.save()
except Exception as e:
self._module.fail_json(msg="Failed to update group: %s " % e)
return True
else:
return False
'''
@param arguments Attributes of the group
'''
def createGroup(self, arguments):
if self._module.check_mode:
return True
try:
group = self._gitlab.groups.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create group: %s " % to_native(e))
return group
'''
@param group Group Object
@param arguments Attributes of the group
'''
def updateGroup(self, group, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(group, arg_key) != arguments[arg_key]:
setattr(group, arg_key, arguments[arg_key])
changed = True
return (changed, group)
def deleteGroup(self):
group = self.groupObject
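# Refuse to delete a group that still contains projects; they must be
# moved or deleted first.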
if len(group.projects.list()) >= 1:
self._module.fail_json(
msg="There are still projects in this group. These needs to be moved or deleted before this group can be removed.")
else:
if self._module.check_mode:
return True
try:
group.delete()
except Exception as e:
self._module.fail_json(msg="Failed to delete group: %s " % to_native(e))
'''
@param project_identifier Id or complete path of the group, including any parent group path. <parent_path>/<group_path>
'''
def existsGroup(self, project_identifier):
# When the group exists, the object will be stored in self.groupObject.
group = findGroup(self._gitlab, project_identifier)
if group:
self.groupObject = group
return True
return False
def deprecation_warning(module):
deprecated_aliases = ['login_token']
for alias in deprecated_aliases:
if alias in module.params:
module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
server_url=dict(type='str', removed_in_version="2.10"),
login_user=dict(type='str', no_log=True, removed_in_version="2.10"),
login_password=dict(type='str', no_log=True, removed_in_version="2.10"),
api_token=dict(type='str', no_log=True, aliases=["login_token"]),
name=dict(type='str', required=True),
path=dict(type='str'),
description=dict(type='str'),
state=dict(type='str', default="present", choices=["absent", "present"]),
parent=dict(type='str'),
visibility=dict(type='str', default="private", choices=["internal", "private", "public"]),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_url', 'server_url'],
['api_username', 'login_user'],
['api_password', 'login_password'],
['api_username', 'api_token'],
['api_password', 'api_token'],
['login_user', 'login_token'],
['login_password', 'login_token']
],
required_together=[
['api_username', 'api_password'],
['login_user', 'login_password'],
],
required_one_of=[
['api_username', 'api_token', 'login_user', 'login_token'],
['server_url', 'api_url']
],
supports_check_mode=True,
)
deprecation_warning(module)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
api_url = module.params['api_url']
validate_certs = module.params['validate_certs']
api_user = module.params['api_username']
api_password = module.params['api_password']
gitlab_url = server_url if api_url is None else api_url
gitlab_user = login_user if api_user is None else api_user
gitlab_password = login_password if api_password is None else api_password
gitlab_token = module.params['api_token']
group_name = module.params['name']
group_path = module.params['path']
description = module.params['description']
state = module.params['state']
parent_identifier = module.params['parent']
group_visibility = module.params['visibility']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab removed the Session API because private tokens were removed from user API endpoints in version 10.2" % to_native(e))
# Define default group_path based on group_name
if group_path is None:
group_path = group_name.replace(" ", "_")
gitlab_group = GitLabGroup(module, gitlab_instance)
parent_group = None
if parent_identifier:
parent_group = findGroup(gitlab_instance, parent_identifier)
if not parent_group:
module.fail_json(msg="Failed create GitLab group: Parent group doesn't exists")
group_exists = gitlab_group.existsGroup(parent_group.full_path + '/' + group_path)
else:
group_exists = gitlab_group.existsGroup(group_path)
if state == 'absent':
if group_exists:
gitlab_group.deleteGroup()
module.exit_json(changed=True, msg="Successfully deleted group %s" % group_name)
else:
module.exit_json(changed=False, msg="Group deleted or does not exists")
if state == 'present':
if gitlab_group.createOrUpdateGroup(group_name, parent_group, {
"path": group_path,
"description": description,
"visibility": group_visibility}):
module.exit_json(changed=True, msg="Successfully created or updated the group %s" % group_name, group=gitlab_group.groupObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the group %s" % group_name, group=gitlab_group.groupObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,898 |
gitlab_user contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_user contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_user.py:414:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
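The modules above also declare options with removed_in_version, which AnsibleModule deprecates through the same runtime machinery; a minimal, hypothetical module showing that pattern:
```
from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(
        argument_spec=dict(
            # Declaring removed_in_version schedules the option itself for
            # removal and emits the same deprecation warning.
            server_url=dict(type='str', removed_in_version="2.10"),
        ),
    )
    module.exit_json(changed=False)

if __name__ == '__main__':
    main()
```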
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_user.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61898
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:20Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_hook.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2018, Marcus Watkins <[email protected]>
# Based on code:
# Copyright: (c) 2013, Phillip Gentry <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_hook
short_description: Manages GitLab project hooks.
description:
- Adds, updates and removes project hooks.
version_added: "2.6"
author:
- Marcus Watkins (@marwatk)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
api_token:
description:
- GitLab token for logging in.
version_added: "2.8"
type: str
aliases:
- private_token
- access_token
project:
description:
- Id or Full path of the project in the form of group/name.
required: true
type: str
hook_url:
description:
- The URL that you want GitLab to post to; this is used as the primary key for updates and deletion.
required: true
type: str
state:
description:
- When C(present) the hook will be updated to match the input or created if it doesn't exist.
- When C(absent), the hook will be deleted if it exists.
required: true
default: present
type: str
choices: [ "present", "absent" ]
push_events:
description:
- Trigger hook on push events.
type: bool
default: yes
issues_events:
description:
- Trigger hook on issues events.
type: bool
default: no
merge_requests_events:
description:
- Trigger hook on merge requests events.
type: bool
default: no
tag_push_events:
description:
- Trigger hook on tag push events.
type: bool
default: no
note_events:
description:
- Trigger hook on note events or when someone adds a comment.
type: bool
default: no
job_events:
description:
- Trigger hook on job events.
type: bool
default: no
pipeline_events:
description:
- Trigger hook on pipeline events.
type: bool
default: no
wiki_page_events:
description:
- Trigger hook on wiki events.
type: bool
default: no
hook_validate_certs:
description:
- Whether GitLab will do SSL verification when triggering the hook.
type: bool
default: no
aliases: [ enable_ssl_verification ]
token:
description:
- Secret token to validate hook messages at the receiver.
- If this is present it will always result in a change as it cannot be retrieved from GitLab.
- Will show up in the X-GitLab-Token HTTP request header.
required: false
type: str
'''
EXAMPLES = '''
- name: "Adding a project hook"
gitlab_hook:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
hook_url: "https://my-ci-server.example.com/gitlab-hook"
state: present
push_events: yes
tag_push_events: yes
hook_validate_certs: no
token: "my-super-secret-token-that-my-ci-server-will-check"
- name: "Delete the previous hook"
gitlab_hook:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
hook_url: "https://my-ci-server.example.com/gitlab-hook"
state: absent
- name: "Delete a hook by numeric project id"
gitlab_hook:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: 10
hook_url: "https://my-ci-server.example.com/gitlab-hook"
state: absent
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
hook:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findProject
class GitLabHook(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.hookObject = None
'''
@param project Project Object
@param hook_url Url to call on event
@param options Options of the hook
'''
def createOrUpdateHook(self, project, hook_url, options):
changed = False
# Because we have already called existsHook() in main()
if self.hookObject is None:
hook = self.createHook(project, {
'url': hook_url,
'push_events': options['push_events'],
'issues_events': options['issues_events'],
'merge_requests_events': options['merge_requests_events'],
'tag_push_events': options['tag_push_events'],
'note_events': options['note_events'],
'job_events': options['job_events'],
'pipeline_events': options['pipeline_events'],
'wiki_page_events': options['wiki_page_events'],
'enable_ssl_verification': options['enable_ssl_verification'],
'token': options['token']})
changed = True
else:
changed, hook = self.updateHook(self.hookObject, {
'push_events': options['push_events'],
'issues_events': options['issues_events'],
'merge_requests_events': options['merge_requests_events'],
'tag_push_events': options['tag_push_events'],
'note_events': options['note_events'],
'job_events': options['job_events'],
'pipeline_events': options['pipeline_events'],
'wiki_page_events': options['wiki_page_events'],
'enable_ssl_verification': options['enable_ssl_verification'],
'token': options['token']})
self.hookObject = hook
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the hook %s" % hook_url)
try:
hook.save()
except Exception as e:
self._module.fail_json(msg="Failed to update hook: %s " % e)
return True
else:
return False
'''
@param project Project Object
@param arguments Attributes of the hook
'''
def createHook(self, project, arguments):
if self._module.check_mode:
return True
hook = project.hooks.create(arguments)
return hook
'''
@param hook Hook Object
@param arguments Attributes of the hook
'''
def updateHook(self, hook, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(hook, arg_key) != arguments[arg_key]:
setattr(hook, arg_key, arguments[arg_key])
changed = True
return (changed, hook)
'''
@param project Project object
@param hook_url Url to call on event
'''
def findHook(self, project, hook_url):
hooks = project.hooks.list()
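# The hook URL is the primary key: return the first hook whose url matches.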
for hook in hooks:
if (hook.url == hook_url):
return hook
'''
@param project Project object
@param hook_url Url to call on event
'''
def existsHook(self, project, hook_url):
# When the hook exists, the object will be stored in self.hookObject.
hook = self.findHook(project, hook_url)
if hook:
self.hookObject = hook
return True
return False
def deleteHook(self):
if self._module.check_mode:
return True
return self.hookObject.delete()
def deprecation_warning(module):
deprecated_aliases = ['private_token', 'access_token', 'enable_ssl_verification']
for alias in deprecated_aliases:
if alias in module.params:
module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
api_token=dict(type='str', no_log=True, aliases=["private_token", "access_token"]),
state=dict(type='str', default="present", choices=["absent", "present"]),
project=dict(type='str', required=True),
hook_url=dict(type='str', required=True),
push_events=dict(type='bool', default=True),
issues_events=dict(type='bool', default=False),
merge_requests_events=dict(type='bool', default=False),
tag_push_events=dict(type='bool', default=False),
note_events=dict(type='bool', default=False),
job_events=dict(type='bool', default=False),
pipeline_events=dict(type='bool', default=False),
wiki_page_events=dict(type='bool', default=False),
hook_validate_certs=dict(type='bool', default=False, aliases=['enable_ssl_verification']),
token=dict(type='str', no_log=True),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token']
],
required_together=[
['api_username', 'api_password']
],
required_one_of=[
['api_username', 'api_token']
],
supports_check_mode=True,
)
deprecation_warning(module)
gitlab_url = re.sub('/api.*', '', module.params['api_url'])
validate_certs = module.params['validate_certs']
gitlab_user = module.params['api_username']
gitlab_password = module.params['api_password']
gitlab_token = module.params['api_token']
state = module.params['state']
project_identifier = module.params['project']
hook_url = module.params['hook_url']
push_events = module.params['push_events']
issues_events = module.params['issues_events']
merge_requests_events = module.params['merge_requests_events']
tag_push_events = module.params['tag_push_events']
note_events = module.params['note_events']
job_events = module.params['job_events']
pipeline_events = module.params['pipeline_events']
wiki_page_events = module.params['wiki_page_events']
enable_ssl_verification = module.params['hook_validate_certs']
hook_token = module.params['token']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab removed the Session API because private tokens were removed from user API endpoints in version 10.2." % to_native(e))
gitlab_hook = GitLabHook(module, gitlab_instance)
project = findProject(gitlab_instance, project_identifier)
if project is None:
module.fail_json(msg="Failed to create hook: project %s doesn't exists" % project_identifier)
hook_exists = gitlab_hook.existsHook(project, hook_url)
if state == 'absent':
if hook_exists:
gitlab_hook.deleteHook()
module.exit_json(changed=True, msg="Successfully deleted hook %s" % hook_url)
else:
module.exit_json(changed=False, msg="Hook deleted or does not exists")
if state == 'present':
if gitlab_hook.createOrUpdateHook(project, hook_url, {
"push_events": push_events,
"issues_events": issues_events,
"merge_requests_events": merge_requests_events,
"tag_push_events": tag_push_events,
"note_events": note_events,
"job_events": job_events,
"pipeline_events": pipeline_events,
"wiki_page_events": wiki_page_events,
"enable_ssl_verification": enable_ssl_verification,
"token": hook_token}):
module.exit_json(changed=True, msg="Successfully created or updated the hook %s" % hook_url, hook=gitlab_hook.hookObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the hook %s" % hook_url, hook=gitlab_hook.hookObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,898 |
gitlab_user contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_user contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_user.py:414:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_user.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61898
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:20Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_project.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2015, Werner Dijkerman ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_project
short_description: Creates/updates/deletes GitLab Projects
description:
- When the project does not exist in GitLab, it will be created.
- When the project does exist and state=absent, the project will be deleted.
- When changes are made to the project, the project will be updated.
version_added: "2.1"
author:
- Werner Dijkerman (@dj-wasabi)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
server_url:
description:
- The URL of the GitLab server, with protocol (i.e. http or https).
type: str
login_user:
description:
- GitLab user name.
type: str
login_password:
description:
- GitLab password for login_user
type: str
api_token:
description:
- GitLab token for logging in.
type: str
aliases:
- login_token
group:
description:
- Id or full path of the group to which this project belongs.
type: str
name:
description:
- The name of the project
required: true
type: str
path:
description:
- The path of the project you want to create; this will be server_url/<group>/path.
- If not supplied, name will be used.
type: str
description:
description:
- A description for the project.
type: str
issues_enabled:
description:
- Whether you want to create issues or not.
- Possible values are true and false.
type: bool
default: yes
merge_requests_enabled:
description:
- If merge requests can be made or not.
- Possible values are true and false.
type: bool
default: yes
wiki_enabled:
description:
- If a wiki for this project should be available or not.
- Possible values are true and false.
type: bool
default: yes
snippets_enabled:
description:
- If creating snippets should be available or not.
- Possible values are true and false.
type: bool
default: yes
visibility:
description:
- Private. Project access must be granted explicitly for each user.
- Internal. The project can be cloned by any logged in user.
- Public. The project can be cloned without any authentication.
default: private
type: str
choices: ["private", "internal", "public"]
aliases:
- visibility_level
import_url:
description:
- Git repository which will be imported into GitLab.
- GitLab server needs read access to this git repository.
required: false
type: str
state:
description:
- Create or delete the project.
- Possible values are present and absent.
default: present
type: str
choices: ["present", "absent"]
'''
EXAMPLES = '''
- name: Delete GitLab Project
gitlab_project:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
validate_certs: False
name: my_first_project
state: absent
delegate_to: localhost
- name: Create GitLab Project in group Ansible
gitlab_project:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: my_first_project
group: ansible
issues_enabled: False
wiki_enabled: True
snippets_enabled: True
import_url: http://git.example.com/example/lab.git
state: present
delegate_to: localhost
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
project:
description: API object
returned: always
type: dict
'''
import os
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findGroup, findProject
class GitLabProject(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.projectObject = None
'''
@param project_name Name of the project
@param namespace Namespace Object (User or Group)
@param options Options of the project
'''
def createOrUpdateProject(self, project_name, namespace, options):
changed = False
# Because we have already called existsProject() in main()
if self.projectObject is None:
project = self.createProject(namespace, {
'name': project_name,
'path': options['path'],
'description': options['description'],
'issues_enabled': options['issues_enabled'],
'merge_requests_enabled': options['merge_requests_enabled'],
'wiki_enabled': options['wiki_enabled'],
'snippets_enabled': options['snippets_enabled'],
'visibility': options['visibility'],
'import_url': options['import_url']})
changed = True
else:
changed, project = self.updateProject(self.projectObject, {
'name': project_name,
'description': options['description'],
'issues_enabled': options['issues_enabled'],
'merge_requests_enabled': options['merge_requests_enabled'],
'wiki_enabled': options['wiki_enabled'],
'snippets_enabled': options['snippets_enabled'],
'visibility': options['visibility']})
self.projectObject = project
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the project %s" % project_name)
try:
project.save()
except Exception as e:
self._module.fail_json(msg="Failed update project: %s " % e)
return True
else:
return False
'''
@param namespace Namespace Object (User or Group)
@param arguments Attributes of the project
'''
def createProject(self, namespace, arguments):
if self._module.check_mode:
return True
arguments['namespace_id'] = namespace.id
try:
project = self._gitlab.projects.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create project: %s " % to_native(e))
return project
'''
@param project Project Object
@param arguments Attributes of the project
'''
def updateProject(self, project, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(project, arg_key) != arguments[arg_key]:
setattr(project, arg_key, arguments[arg_key])
changed = True
return (changed, project)
def deleteProject(self):
if self._module.check_mode:
return True
project = self.projectObject
return project.delete()
'''
@param namespace User/Group object
@param path Path of the project
'''
def existsProject(self, namespace, path):
# When project exists, object will be stored in self.projectObject.
project = findProject(self._gitlab, namespace.full_path + '/' + path)
if project:
self.projectObject = project
return True
return False
def deprecation_warning(module):
deprecated_aliases = ['login_token']
for alias in deprecated_aliases:
if alias in module.params:
module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
server_url=dict(type='str', removed_in_version="2.10"),
login_user=dict(type='str', no_log=True, removed_in_version="2.10"),
login_password=dict(type='str', no_log=True, removed_in_version="2.10"),
api_token=dict(type='str', no_log=True, aliases=["login_token"]),
group=dict(type='str'),
name=dict(type='str', required=True),
path=dict(type='str'),
description=dict(type='str'),
issues_enabled=dict(type='bool', default=True),
merge_requests_enabled=dict(type='bool', default=True),
wiki_enabled=dict(type='bool', default=True),
snippets_enabled=dict(default=True, type='bool'),
visibility=dict(type='str', default="private", choices=["internal", "private", "public"], aliases=["visibility_level"]),
import_url=dict(type='str'),
state=dict(type='str', default="present", choices=["absent", "present"]),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_url', 'server_url'],
['api_username', 'login_user'],
['api_password', 'login_password'],
['api_username', 'api_token'],
['api_password', 'api_token'],
['login_user', 'login_token'],
['login_password', 'login_token']
],
required_together=[
['api_username', 'api_password'],
['login_user', 'login_password'],
],
required_one_of=[
['api_username', 'api_token', 'login_user', 'login_token'],
['server_url', 'api_url']
],
supports_check_mode=True,
)
deprecation_warning(module)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
api_url = module.params['api_url']
validate_certs = module.params['validate_certs']
api_user = module.params['api_username']
api_password = module.params['api_password']
gitlab_url = server_url if api_url is None else api_url
gitlab_user = login_user if api_user is None else api_user
gitlab_password = login_password if api_password is None else api_password
gitlab_token = module.params['api_token']
group_identifier = module.params['group']
project_name = module.params['name']
project_path = module.params['path']
project_description = module.params['description']
issues_enabled = module.params['issues_enabled']
merge_requests_enabled = module.params['merge_requests_enabled']
wiki_enabled = module.params['wiki_enabled']
snippets_enabled = module.params['snippets_enabled']
visibility = module.params['visibility']
import_url = module.params['import_url']
state = module.params['state']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab removed the Session API because private tokens were removed from user API endpoints in version 10.2." % to_native(e))
# Set project_path to project_name if it is empty.
if project_path is None:
project_path = project_name.replace(" ", "_")
gitlab_project = GitLabProject(module, gitlab_instance)
if group_identifier:
group = findGroup(gitlab_instance, group_identifier)
if group is None:
module.fail_json(msg="Failed to create project: group %s doesn't exists" % group_identifier)
namespace = gitlab_instance.namespaces.get(group.id)
project_exists = gitlab_project.existsProject(namespace, project_path)
else:
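# No group given: fall back to the personal namespace of the authenticated user.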
user = gitlab_instance.users.list(username=gitlab_instance.user.username)[0]
namespace = gitlab_instance.namespaces.get(user.id)
project_exists = gitlab_project.existsProject(namespace, project_path)
if state == 'absent':
if project_exists:
gitlab_project.deleteProject()
module.exit_json(changed=True, msg="Successfully deleted project %s" % project_name)
else:
module.exit_json(changed=False, msg="Project deleted or does not exists")
if state == 'present':
if gitlab_project.createOrUpdateProject(project_name, namespace, {
"path": project_path,
"description": project_description,
"issues_enabled": issues_enabled,
"merge_requests_enabled": merge_requests_enabled,
"wiki_enabled": wiki_enabled,
"snippets_enabled": snippets_enabled,
"visibility": visibility,
"import_url": import_url}):
module.exit_json(changed=True, msg="Successfully created or updated the project %s" % project_name, project=gitlab_project.projectObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the project %s" % project_name, project=gitlab_project.projectObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,898 |
gitlab_user contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_user contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_user.py:414:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_user.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61898
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:20Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_runner.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2018, Samy Coenen <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_runner
short_description: Create, modify and delete GitLab Runners.
description:
- Register, update and delete runners with the GitLab API.
- All operations are performed using the GitLab API v4.
- For details, consult the full API documentation at U(https://docs.gitlab.com/ee/api/runners.html).
- A valid private API token is required for all operations. You can create as many tokens as you like using the GitLab web interface at
U(https://$GITLAB_URL/profile/personal_access_tokens).
- A valid registration token is required for registering a new runner.
To create shared runners, you need to ask your administrator to give you this token.
It can be found at U(https://$GITLAB_URL/admin/runners/).
notes:
- To create a new runner, at least the C(api_token), C(description) and C(url) options are required.
- Runners need to have unique descriptions.
version_added: 2.8
author:
- Samy Coenen (@SamyCoenen)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab >= 1.5.0
extends_documentation_fragment:
- auth_basic
options:
url:
description:
- The URL of the GitLab server, with protocol (i.e. http or https).
type: str
api_token:
description:
- Your private token to interact with the GitLab API.
required: True
type: str
aliases:
- private_token
description:
description:
- The unique name of the runner.
required: True
type: str
aliases:
- name
state:
description:
- Make sure that a runner with the same name exists with the same configuration, or delete the runner with the same name.
required: False
default: present
choices: ["present", "absent"]
type: str
registration_token:
description:
- The registration token is used to register new runners.
required: True
type: str
active:
description:
- Define if the runner is immediately active after creation.
required: False
default: yes
type: bool
locked:
description:
- Determines if the runner is locked or not.
required: False
default: False
type: bool
access_level:
description:
- Determines if a runner can pick up jobs from protected branches.
required: False
default: ref_protected
choices: ["ref_protected", "not_protected"]
type: str
maximum_timeout:
description:
- The maximum timeout that a runner has to pick up a specific job.
required: False
default: 3600
type: int
run_untagged:
description:
- Run untagged jobs or not.
required: False
default: yes
type: bool
tag_list:
description: The tags that apply to the runner.
required: False
default: []
type: list
'''
EXAMPLES = '''
- name: "Register runner"
gitlab_runner:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
registration_token: 4gfdsg345
description: Docker Machine t1
state: present
active: True
tag_list: ['docker']
run_untagged: False
locked: False
- name: "Delete runner"
gitlab_runner:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
description: Docker Machine t1
state: absent
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
runner:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
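# Python 3 removed the built-in cmp(); fall back to a local definition so
# the order-insensitive list comparison in updateRunner() works on both
# Python 2 and 3.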
try:
cmp
except NameError:
def cmp(a, b):
return (a > b) - (a < b)
class GitLabRunner(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.runnerObject = None
def createOrUpdateRunner(self, description, options):
changed = False
# Because we have already called existsRunner() in main()
if self.runnerObject is None:
runner = self.createRunner({
'description': description,
'active': options['active'],
'token': options['registration_token'],
'locked': options['locked'],
'run_untagged': options['run_untagged'],
'maximum_timeout': options['maximum_timeout'],
'tag_list': options['tag_list']})
changed = True
else:
changed, runner = self.updateRunner(self.runnerObject, {
'active': options['active'],
'locked': options['locked'],
'run_untagged': options['run_untagged'],
'maximum_timeout': options['maximum_timeout'],
'access_level': options['access_level'],
'tag_list': options['tag_list']})
self.runnerObject = runner
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the runner %s" % description)
try:
runner.save()
except Exception as e:
self._module.fail_json(msg="Failed to update runner: %s " % to_native(e))
return True
else:
return False
'''
@param arguments Attributes of the runner
'''
def createRunner(self, arguments):
if self._module.check_mode:
return True
try:
runner = self._gitlab.runners.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create runner: %s " % to_native(e))
return runner
'''
@param runner Runner object
@param arguments Attributes of the runner
'''
def updateRunner(self, runner, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if isinstance(arguments[arg_key], list):
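# Compare list attributes (e.g. tag_list) order-insensitively: sort both
# sides, then diff them with the cmp() shim defined above.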
list1 = getattr(runner, arg_key)
list1.sort()
list2 = arguments[arg_key]
list2.sort()
if cmp(list1, list2):
setattr(runner, arg_key, arguments[arg_key])
changed = True
else:
if getattr(runner, arg_key) != arguments[arg_key]:
setattr(runner, arg_key, arguments[arg_key])
changed = True
return (changed, runner)
'''
@param description Description of the runner
'''
def findRunner(self, description):
runners = self._gitlab.runners.list(as_list=False)
for runner in runners:
if (runner.description == description):
return self._gitlab.runners.get(runner.id)
'''
@param description Description of the runner
'''
def existsRunner(self, description):
# When runner exists, object will be stored in self.runnerObject.
runner = self.findRunner(description)
if runner:
self.runnerObject = runner
return True
return False
def deleteRunner(self):
if self._module.check_mode:
return True
runner = self.runnerObject
return runner.delete()
def deprecation_warning(module):
deprecated_aliases = ['private_token']
for alias in deprecated_aliases:
if alias in module.params:
module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
url=dict(type='str', removed_in_version="2.10"),
api_token=dict(type='str', no_log=True, aliases=["private_token"]),
description=dict(type='str', required=True, aliases=["name"]),
active=dict(type='bool', default=True),
tag_list=dict(type='list', default=[]),
run_untagged=dict(type='bool', default=True),
locked=dict(type='bool', default=False),
access_level=dict(type='str', default='ref_protected', choices=["not_protected", "ref_protected"]),
maximum_timeout=dict(type='int', default=3600),
registration_token=dict(type='str', required=True),
state=dict(type='str', default="present", choices=["absent", "present"]),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_url', 'url'],
['api_username', 'api_token'],
['api_password', 'api_token'],
],
required_together=[
['api_username', 'api_password'],
['login_user', 'login_password'],
],
required_one_of=[
['api_username', 'api_token'],
['api_url', 'url']
],
supports_check_mode=True,
)
deprecation_warning(module)
if module.params['url'] is not None:
url = re.sub('/api.*', '', module.params['url'])
api_url = module.params['api_url']
validate_certs = module.params['validate_certs']
gitlab_url = url if api_url is None else api_url
gitlab_user = module.params['api_username']
gitlab_password = module.params['api_password']
gitlab_token = module.params['api_token']
state = module.params['state']
runner_description = module.params['description']
runner_active = module.params['active']
tag_list = module.params['tag_list']
run_untagged = module.params['run_untagged']
runner_locked = module.params['locked']
access_level = module.params['access_level']
maximum_timeout = module.params['maximum_timeout']
registration_token = module.params['registration_token']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab removed the Session API because private tokens were removed from user API endpoints in version 10.2" % to_native(e))
gitlab_runner = GitLabRunner(module, gitlab_instance)
runner_exists = gitlab_runner.existsRunner(runner_description)
if state == 'absent':
if runner_exists:
gitlab_runner.deleteRunner()
module.exit_json(changed=True, msg="Successfully deleted runner %s" % runner_description)
else:
module.exit_json(changed=False, msg="Runner deleted or does not exists")
if state == 'present':
if gitlab_runner.createOrUpdateRunner(runner_description, {
"active": runner_active,
"tag_list": tag_list,
"run_untagged": run_untagged,
"locked": runner_locked,
"access_level": access_level,
"maximum_timeout": maximum_timeout,
"registration_token": registration_token}):
module.exit_json(changed=True, runner=gitlab_runner.runnerObject._attrs,
msg="Successfully created or updated the runner %s" % runner_description)
else:
module.exit_json(changed=False, runner=gitlab_runner.runnerObject._attrs,
msg="No need to update the runner %s" % runner_description)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,898 |
gitlab_user contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_user contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_user.py:414:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_user.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61898
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:20Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_user.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2015, Werner Dijkerman ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_user
short_description: Creates/updates/deletes GitLab Users
description:
- When the user does not exist in GitLab, it will be created.
- When the user does exist and state=absent, the user will be deleted.
- When changes are made to the user, the user will be updated.
version_added: "2.1"
author:
- Werner Dijkerman (@dj-wasabi)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
- administrator rights on the GitLab server
extends_documentation_fragment:
- auth_basic
options:
server_url:
description:
- The URL of the GitLab server, with protocol (i.e. http or https).
type: str
login_user:
description:
- GitLab user name.
type: str
login_password:
description:
- GitLab password for login_user
type: str
api_token:
description:
- GitLab token for logging in.
type: str
aliases:
- login_token
name:
description:
- Name of the user you want to create
required: true
type: str
username:
description:
- The username of the user.
required: true
type: str
password:
description:
- The password of the user.
- GitLab server enforces a minimum password length of 8; set this value to 8 or more characters.
required: true
type: str
email:
description:
- The email that belongs to the user.
required: true
type: str
sshkey_name:
description:
- The name of the SSH key.
type: str
sshkey_file:
description:
- The ssh key itself.
type: str
group:
description:
- Id or Full path of parent group in the form of group/name
- Add user as a member of this group.
type: str
access_level:
description:
- The access level to the group. One of the following can be used.
- guest
- reporter
- developer
- master (alias for maintainer)
- maintainer
- owner
default: guest
type: str
choices: ["guest", "reporter", "developer", "master", "maintainer", "owner"]
state:
description:
- Create or delete the user.
- Possible values are present and absent.
default: present
type: str
choices: ["present", "absent"]
confirm:
description:
- Require confirmation.
type: bool
default: yes
version_added: "2.4"
isadmin:
description:
- Grant admin privileges to the user
type: bool
default: no
version_added: "2.8"
external:
description:
- Define external parameter for this user
type: bool
default: no
version_added: "2.8"
'''
EXAMPLES = '''
- name: "Delete GitLab User"
gitlab_user:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
validate_certs: False
username: myusername
state: absent
delegate_to: localhost
- name: "Create GitLab User"
gitlab_user:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: My Name
username: myusername
password: mysecretpassword
email: [email protected]
sshkey_name: MySSH
sshkey_file: ssh-rsa AAAAB3NzaC1yc...
state: present
group: super_group/mon_group
access_level: owner
delegate_to: localhost
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
user:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findGroup
class GitLabUser(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.userObject = None
self.ACCESS_LEVEL = {
'guest': gitlab.GUEST_ACCESS,
'reporter': gitlab.REPORTER_ACCESS,
'developer': gitlab.DEVELOPER_ACCESS,
'master': gitlab.MAINTAINER_ACCESS,
'maintainer': gitlab.MAINTAINER_ACCESS,
'owner': gitlab.OWNER_ACCESS}
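# 'master' is kept as a legacy alias of 'maintainer'; both map to the same
# GitLab access level.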
'''
@param username Username of the user
@param options User options
'''
def createOrUpdateUser(self, username, options):
changed = False
# Because we have already called existsUser() in main()
if self.userObject is None:
user = self.createUser({
'name': options['name'],
'username': username,
'password': options['password'],
'email': options['email'],
'skip_confirmation': not options['confirm'],
'admin': options['isadmin'],
'external': options['external']})
changed = True
else:
changed, user = self.updateUser(self.userObject, {
'name': options['name'],
'email': options['email'],
'is_admin': options['isadmin'],
'external': options['external']})
# Assign ssh keys
if options['sshkey_name'] and options['sshkey_file']:
key_changed = self.addSshKeyToUser(user, {
'name': options['sshkey_name'],
'file': options['sshkey_file']})
changed = changed or key_changed
# Assign group
if options['group_path']:
group_changed = self.assignUserToGroup(user, options['group_path'], options['access_level'])
changed = changed or group_changed
self.userObject = user
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the user %s" % username)
try:
user.save()
except Exception as e:
self._module.fail_json(msg="Failed to update user: %s " % to_native(e))
return True
else:
return False
'''
@param user User object
'''
def getUserId(self, user):
if user is not None:
return user.id
return None
'''
@param user User object
@param sshkey_name Name of the ssh key
'''
def sshKeyExists(self, user, sshkey_name):
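# On Python 3, map() returns a lazy iterator; the single membership test
# below consumes it, which is fine here.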
keyList = map(lambda k: k.title, user.keys.list())
return sshkey_name in keyList
'''
@param user User object
@param sshkey Dict containing sshkey infos {"name": "", "file": ""}
'''
def addSshKeyToUser(self, user, sshkey):
if not self.sshKeyExists(user, sshkey['name']):
if self._module.check_mode:
return True
try:
user.keys.create({
'title': sshkey['name'],
'key': sshkey['file']})
except gitlab.exceptions.GitlabCreateError as e:
self._module.fail_json(msg="Failed to assign sshkey to user: %s" % to_native(e))
return True
return False
'''
@param group Group object
@param user_id Id of the user to find
'''
def findMember(self, group, user_id):
try:
member = group.members.get(user_id)
except gitlab.exceptions.GitlabGetError as e:
return None
return member
'''
@param group Group object
@param user_id Id of the user to check
'''
def memberExists(self, group, user_id):
member = self.findMember(group, user_id)
return member is not None
'''
@param group Group object
@param user_id Id of the user to check
@param access_level GitLab access_level to check
'''
    def memberHasGoodAccessLevel(self, group, user_id, access_level):
member = self.findMember(group, user_id)
return member.access_level == access_level
'''
@param user User object
    @param group_identifier Id or full path of the group, including any parent group path. <parent_path>/<group_path>
@param access_level GitLab access_level to assign
'''
def assignUserToGroup(self, user, group_identifier, access_level):
group = findGroup(self._gitlab, group_identifier)
if self._module.check_mode:
return True
if group is None:
return False
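        # An existing member only gets its access level adjusted; otherwise a new membership is created.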
if self.memberExists(group, self.getUserId(user)):
member = self.findMember(group, self.getUserId(user))
            if not self.memberHasGoodAccessLevel(group, member.id, self.ACCESS_LEVEL[access_level]):
member.access_level = self.ACCESS_LEVEL[access_level]
member.save()
return True
else:
try:
group.members.create({
'user_id': self.getUserId(user),
'access_level': self.ACCESS_LEVEL[access_level]})
except gitlab.exceptions.GitlabCreateError as e:
self._module.fail_json(msg="Failed to assign user to group: %s" % to_native(e))
return True
return False
'''
@param user User object
@param arguments User attributes
'''
def updateUser(self, user, arguments):
changed = False
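        # Only attributes that were explicitly provided (non-None) and differ from the current value are updated.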
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(user, arg_key) != arguments[arg_key]:
setattr(user, arg_key, arguments[arg_key])
changed = True
return (changed, user)
'''
@param arguments User attributes
'''
def createUser(self, arguments):
if self._module.check_mode:
return True
try:
user = self._gitlab.users.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create user: %s " % to_native(e))
return user
'''
@param username Username of the user
'''
def findUser(self, username):
users = self._gitlab.users.list(search=username)
for user in users:
if (user.username == username):
return user
'''
@param username Username of the user
'''
def existsUser(self, username):
# When user exists, object will be stored in self.userObject.
user = self.findUser(username)
if user:
self.userObject = user
return True
return False
def deleteUser(self):
if self._module.check_mode:
return True
user = self.userObject
return user.delete()
def deprecation_warning(module):
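    # Warn whenever a legacy alias is still in use; these aliases are scheduled for removal in 2.10.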
deprecated_aliases = ['login_token']
    for alias in deprecated_aliases:
        if alias in module.params:
            module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
server_url=dict(type='str', removed_in_version="2.10"),
login_user=dict(type='str', no_log=True, removed_in_version="2.10"),
login_password=dict(type='str', no_log=True, removed_in_version="2.10"),
api_token=dict(type='str', no_log=True, aliases=["login_token"]),
name=dict(type='str', required=True),
state=dict(type='str', default="present", choices=["absent", "present"]),
username=dict(type='str', required=True),
password=dict(type='str', required=True, no_log=True),
email=dict(type='str', required=True),
sshkey_name=dict(type='str'),
sshkey_file=dict(type='str'),
group=dict(type='str'),
access_level=dict(type='str', default="guest", choices=["developer", "guest", "maintainer", "master", "owner", "reporter"]),
confirm=dict(type='bool', default=True),
isadmin=dict(type='bool', default=False),
external=dict(type='bool', default=False),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_url', 'server_url'],
['api_username', 'login_user'],
['api_password', 'login_password'],
['api_username', 'api_token'],
['api_password', 'api_token'],
['login_user', 'login_token'],
['login_password', 'login_token']
],
required_together=[
['api_username', 'api_password'],
['login_user', 'login_password'],
],
required_one_of=[
['api_username', 'api_token', 'login_user', 'login_token'],
['server_url', 'api_url']
],
supports_check_mode=True,
)
deprecation_warning(module)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
api_url = module.params['api_url']
validate_certs = module.params['validate_certs']
api_user = module.params['api_username']
api_password = module.params['api_password']
gitlab_url = server_url if api_url is None else api_url
gitlab_user = login_user if api_user is None else api_user
gitlab_password = login_password if api_password is None else api_password
gitlab_token = module.params['api_token']
user_name = module.params['name']
state = module.params['state']
user_username = module.params['username'].lower()
user_password = module.params['password']
user_email = module.params['email']
user_sshkey_name = module.params['sshkey_name']
user_sshkey_file = module.params['sshkey_file']
group_path = module.params['group']
access_level = module.params['access_level']
confirm = module.params['confirm']
user_isadmin = module.params['isadmin']
user_external = module.params['external']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
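    # Authenticate eagerly so credential problems surface as a clean module failure.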
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e))
gitlab_user = GitLabUser(module, gitlab_instance)
user_exists = gitlab_user.existsUser(user_username)
if state == 'absent':
if user_exists:
gitlab_user.deleteUser()
module.exit_json(changed=True, msg="Successfully deleted user %s" % user_username)
else:
module.exit_json(changed=False, msg="User deleted or does not exists")
if state == 'present':
if gitlab_user.createOrUpdateUser(user_username, {
"name": user_name,
"password": user_password,
"email": user_email,
"sshkey_name": user_sshkey_name,
"sshkey_file": user_sshkey_file,
"group_path": group_path,
"access_level": access_level,
"confirm": confirm,
"isadmin": user_isadmin,
"external": user_external}):
module.exit_json(changed=True, msg="Successfully created or updated the user %s" % user_username, user=gitlab_user.userObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the user %s" % user_username, user=gitlab_user.userObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,897 |
gitlab_runner contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_runner contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_runner.py:291:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_runner.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61897
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:19Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_deploy_key.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2018, Marcus Watkins <[email protected]>
# Based on code:
# Copyright: (c) 2013, Phillip Gentry <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
module: gitlab_deploy_key
short_description: Manages GitLab project deploy keys.
description:
- Adds, updates and removes project deploy keys
version_added: "2.6"
author:
- Marcus Watkins (@marwatk)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
api_token:
description:
- GitLab token for logging in.
version_added: "2.8"
type: str
aliases:
- private_token
- access_token
project:
description:
      - ID or full path of the project in the form group/name.
required: true
type: str
title:
description:
- Deploy key's title
required: true
type: str
key:
description:
- Deploy key
required: true
type: str
can_push:
description:
- Whether this key can push to the project
type: bool
default: no
state:
description:
      - When C(present) the deploy key is added to the project if it doesn't exist.
      - When C(absent) it will be removed from the project if it exists.
required: true
default: present
type: str
choices: [ "present", "absent" ]
'''
EXAMPLES = '''
- name: "Adding a project deploy key"
gitlab_deploy_key:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
title: "Jenkins CI"
state: present
key: "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIEAiPWx6WM4lhHNedGfBpPJNPpZ7yKu+dnn1SJejgt4596k6YjzGGphH2TUxwKzxcKDKKezwkpfnxPkSMkuEspGRt/aZZ9w..."
- name: "Update the above deploy key to add push access"
gitlab_deploy_key:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
title: "Jenkins CI"
state: present
can_push: yes
- name: "Remove the previous deploy key from the project"
gitlab_deploy_key:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
state: absent
key: "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIEAiPWx6WM4lhHNedGfBpPJNPpZ7yKu+dnn1SJejgt4596k6YjzGGphH2TUxwKzxcKDKKezwkpfnxPkSMkuEspGRt/aZZ9w..."
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: key is already in use"
deploy_key:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findProject
class GitLabDeployKey(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.deployKeyObject = None
'''
@param project Project object
@param key_title Title of the key
@param key_key String of the key
    @param options Deploy key options
'''
def createOrUpdateDeployKey(self, project, key_title, key_key, options):
changed = False
        # existsDeployKey was already called in main(), so self.deployKeyObject is set when the key exists
if self.deployKeyObject is None:
deployKey = self.createDeployKey(project, {
'title': key_title,
'key': key_key,
'can_push': options['can_push']})
changed = True
else:
changed, deployKey = self.updateDeployKey(self.deployKeyObject, {
'can_push': options['can_push']})
self.deployKeyObject = deployKey
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the deploy key %s" % key_title)
try:
deployKey.save()
except Exception as e:
self._module.fail_json(msg="Failed to update deploy key: %s " % e)
return True
else:
return False
'''
@param project Project Object
    @param arguments Attributes of the deploy key
'''
def createDeployKey(self, project, arguments):
if self._module.check_mode:
return True
try:
deployKey = project.keys.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create deploy key: %s " % to_native(e))
return deployKey
'''
@param deployKey Deploy Key Object
    @param arguments Attributes of the deploy key
'''
def updateDeployKey(self, deployKey, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(deployKey, arg_key) != arguments[arg_key]:
setattr(deployKey, arg_key, arguments[arg_key])
changed = True
return (changed, deployKey)
'''
@param project Project object
@param key_title Title of the key
'''
def findDeployKey(self, project, key_title):
deployKeys = project.keys.list()
for deployKey in deployKeys:
if (deployKey.title == key_title):
return deployKey
'''
@param project Project object
@param key_title Title of the key
'''
def existsDeployKey(self, project, key_title):
        # When the deploy key exists, object will be stored in self.deployKeyObject.
deployKey = self.findDeployKey(project, key_title)
if deployKey:
self.deployKeyObject = deployKey
return True
return False
def deleteDeployKey(self):
if self._module.check_mode:
return True
return self.deployKeyObject.delete()
def deprecation_warning(module):
deprecated_aliases = ['private_token', 'access_token']
    for alias in deprecated_aliases:
        if alias in module.params:
            module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
api_token=dict(type='str', no_log=True, aliases=["private_token", "access_token"]),
state=dict(type='str', default="present", choices=["absent", "present"]),
project=dict(type='str', required=True),
key=dict(type='str', required=True),
can_push=dict(type='bool', default=False),
title=dict(type='str', required=True)
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token']
],
required_together=[
['api_username', 'api_password']
],
required_one_of=[
['api_username', 'api_token']
],
supports_check_mode=True,
)
deprecation_warning(module)
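    # python-gitlab expects the bare server URL, so strip any '/api/...' suffix from api_url.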
gitlab_url = re.sub('/api.*', '', module.params['api_url'])
validate_certs = module.params['validate_certs']
gitlab_user = module.params['api_username']
gitlab_password = module.params['api_password']
gitlab_token = module.params['api_token']
state = module.params['state']
project_identifier = module.params['project']
key_title = module.params['title']
key_keyfile = module.params['key']
key_can_push = module.params['can_push']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e))
gitlab_deploy_key = GitLabDeployKey(module, gitlab_instance)
project = findProject(gitlab_instance, project_identifier)
if project is None:
module.fail_json(msg="Failed to create deploy key: project %s doesn't exists" % project_identifier)
deployKey_exists = gitlab_deploy_key.existsDeployKey(project, key_title)
if state == 'absent':
if deployKey_exists:
gitlab_deploy_key.deleteDeployKey()
module.exit_json(changed=True, msg="Successfully deleted deploy key %s" % key_title)
else:
module.exit_json(changed=False, msg="Deploy key deleted or does not exists")
if state == 'present':
if gitlab_deploy_key.createOrUpdateDeployKey(project, key_title, key_keyfile, {'can_push': key_can_push}):
module.exit_json(changed=True, msg="Successfully created or updated the deploy key %s" % key_title,
deploy_key=gitlab_deploy_key.deployKeyObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the deploy key %s" % key_title,
deploy_key=gitlab_deploy_key.deployKeyObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,897 |
gitlab_runner contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_runner contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_runner.py:291:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_runner.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61897
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:19Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_group.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2015, Werner Dijkerman ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_group
short_description: Creates/updates/deletes GitLab Groups
description:
- When the group does not exist in GitLab, it will be created.
- When the group does exist and state=absent, the group will be deleted.
version_added: "2.1"
author:
- Werner Dijkerman (@dj-wasabi)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
server_url:
description:
      - The URL of the GitLab server, with protocol (e.g. http or https).
type: str
login_user:
description:
- GitLab user name.
type: str
login_password:
description:
- GitLab password for login_user
type: str
api_token:
description:
- GitLab token for logging in.
type: str
aliases:
- login_token
name:
description:
- Name of the group you want to create.
required: true
type: str
path:
description:
      - The path of the group you want to create; this will be server_url/group_path.
- If not supplied, the group_name will be used.
type: str
description:
description:
- A description for the group.
version_added: "2.7"
type: str
state:
description:
- create or delete group.
- Possible values are present and absent.
default: present
type: str
choices: ["present", "absent"]
parent:
description:
      - Allows creating subgroups.
      - ID or full path of the parent group in the form group/name.
version_added: "2.8"
type: str
visibility:
description:
- Default visibility of the group
version_added: "2.8"
choices: ["private", "internal", "public"]
default: private
type: str
'''
EXAMPLES = '''
- name: "Delete GitLab Group"
gitlab_group:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
validate_certs: False
name: my_first_group
state: absent
- name: "Create GitLab Group"
gitlab_group:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: my_first_group
path: my_first_group
state: present
# The group will by created at https://gitlab.dj-wasabi.local/super_parent/parent/my_first_group
- name: "Create GitLab SubGroup"
gitlab_group:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: my_first_group
path: my_first_group
state: present
parent: "super_parent/parent"
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
group:
description: API object
returned: always
type: dict
'''
import os
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findGroup
class GitLabGroup(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.groupObject = None
'''
@param group Group object
'''
def getGroupId(self, group):
if group is not None:
return group.id
return None
'''
@param name Name of the group
    @param parent Parent Group object
@param options Group options
'''
def createOrUpdateGroup(self, name, parent, options):
changed = False
        # existsGroup was already called in main(), so self.groupObject is set when the group exists
if self.groupObject is None:
parent_id = self.getGroupId(parent)
group = self.createGroup({
'name': name,
'path': options['path'],
'parent_id': parent_id,
'visibility': options['visibility']})
changed = True
else:
changed, group = self.updateGroup(self.groupObject, {
'name': name,
'description': options['description'],
'visibility': options['visibility']})
self.groupObject = group
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the group %s" % name)
try:
group.save()
except Exception as e:
self._module.fail_json(msg="Failed to update group: %s " % e)
return True
else:
return False
'''
    @param arguments Attributes of the group
'''
def createGroup(self, arguments):
if self._module.check_mode:
return True
try:
group = self._gitlab.groups.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create group: %s " % to_native(e))
return group
'''
@param group Group Object
    @param arguments Attributes of the group
'''
def updateGroup(self, group, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(group, arg_key) != arguments[arg_key]:
setattr(group, arg_key, arguments[arg_key])
changed = True
return (changed, group)
def deleteGroup(self):
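        # A group that still contains projects cannot be removed; fail early with a clear message.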
group = self.groupObject
if len(group.projects.list()) >= 1:
self._module.fail_json(
msg="There are still projects in this group. These needs to be moved or deleted before this group can be removed.")
else:
if self._module.check_mode:
return True
try:
group.delete()
except Exception as e:
self._module.fail_json(msg="Failed to delete group: %s " % to_native(e))
'''
    @param project_identifier Full path of the group, including any parent group path. <parent_path>/<group_path>
'''
def existsGroup(self, project_identifier):
        # When the group exists, object will be stored in self.groupObject.
group = findGroup(self._gitlab, project_identifier)
if group:
self.groupObject = group
return True
return False
def deprecation_warning(module):
deprecated_aliases = ['login_token']
    for alias in deprecated_aliases:
        if alias in module.params:
            module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
server_url=dict(type='str', removed_in_version="2.10"),
login_user=dict(type='str', no_log=True, removed_in_version="2.10"),
login_password=dict(type='str', no_log=True, removed_in_version="2.10"),
api_token=dict(type='str', no_log=True, aliases=["login_token"]),
name=dict(type='str', required=True),
path=dict(type='str'),
description=dict(type='str'),
state=dict(type='str', default="present", choices=["absent", "present"]),
parent=dict(type='str'),
visibility=dict(type='str', default="private", choices=["internal", "private", "public"]),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_url', 'server_url'],
['api_username', 'login_user'],
['api_password', 'login_password'],
['api_username', 'api_token'],
['api_password', 'api_token'],
['login_user', 'login_token'],
['login_password', 'login_token']
],
required_together=[
['api_username', 'api_password'],
['login_user', 'login_password'],
],
required_one_of=[
['api_username', 'api_token', 'login_user', 'login_token'],
['server_url', 'api_url']
],
supports_check_mode=True,
)
deprecation_warning(module)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
api_url = module.params['api_url']
validate_certs = module.params['validate_certs']
api_user = module.params['api_username']
api_password = module.params['api_password']
gitlab_url = server_url if api_url is None else api_url
gitlab_user = login_user if api_user is None else api_user
gitlab_password = login_password if api_password is None else api_password
gitlab_token = module.params['api_token']
group_name = module.params['name']
group_path = module.params['path']
description = module.params['description']
state = module.params['state']
parent_identifier = module.params['parent']
group_visibility = module.params['visibility']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2" % to_native(e))
# Define default group_path based on group_name
if group_path is None:
group_path = group_name.replace(" ", "_")
gitlab_group = GitLabGroup(module, gitlab_instance)
parent_group = None
if parent_identifier:
parent_group = findGroup(gitlab_instance, parent_identifier)
if not parent_group:
module.fail_json(msg="Failed create GitLab group: Parent group doesn't exists")
group_exists = gitlab_group.existsGroup(parent_group.full_path + '/' + group_path)
else:
group_exists = gitlab_group.existsGroup(group_path)
if state == 'absent':
if group_exists:
gitlab_group.deleteGroup()
module.exit_json(changed=True, msg="Successfully deleted group %s" % group_name)
else:
module.exit_json(changed=False, msg="Group deleted or does not exists")
if state == 'present':
if gitlab_group.createOrUpdateGroup(group_name, parent_group, {
"path": group_path,
"description": description,
"visibility": group_visibility}):
module.exit_json(changed=True, msg="Successfully created or updated the group %s" % group_name, group=gitlab_group.groupObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the group %s" % group_name, group=gitlab_group.groupObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,897 |
gitlab_runner contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_runner contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_runner.py:291:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_runner.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61897
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:19Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_hook.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2018, Marcus Watkins <[email protected]>
# Based on code:
# Copyright: (c) 2013, Phillip Gentry <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_hook
short_description: Manages GitLab project hooks.
description:
  - Adds, updates and removes project hooks
version_added: "2.6"
author:
- Marcus Watkins (@marwatk)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
api_token:
description:
- GitLab token for logging in.
version_added: "2.8"
type: str
aliases:
- private_token
- access_token
project:
description:
      - ID or full path of the project in the form group/name.
required: true
type: str
hook_url:
description:
      - The url that you want GitLab to post to; this is used as the primary key for updates and deletion.
required: true
type: str
state:
description:
- When C(present) the hook will be updated to match the input or created if it doesn't exist.
- When C(absent) hook will be deleted if it exists.
required: true
default: present
type: str
choices: [ "present", "absent" ]
push_events:
description:
- Trigger hook on push events.
type: bool
default: yes
issues_events:
description:
- Trigger hook on issues events.
type: bool
default: no
merge_requests_events:
description:
- Trigger hook on merge requests events.
type: bool
default: no
tag_push_events:
description:
- Trigger hook on tag push events.
type: bool
default: no
note_events:
description:
- Trigger hook on note events or when someone adds a comment.
type: bool
default: no
job_events:
description:
- Trigger hook on job events.
type: bool
default: no
pipeline_events:
description:
- Trigger hook on pipeline events.
type: bool
default: no
wiki_page_events:
description:
- Trigger hook on wiki events.
type: bool
default: no
hook_validate_certs:
description:
- Whether GitLab will do SSL verification when triggering the hook.
type: bool
default: no
aliases: [ enable_ssl_verification ]
token:
description:
- Secret token to validate hook messages at the receiver.
- If this is present it will always result in a change as it cannot be retrieved from GitLab.
- Will show up in the X-GitLab-Token HTTP request header.
required: false
type: str
'''
EXAMPLES = '''
- name: "Adding a project hook"
gitlab_hook:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
hook_url: "https://my-ci-server.example.com/gitlab-hook"
state: present
push_events: yes
tag_push_events: yes
hook_validate_certs: no
token: "my-super-secret-token-that-my-ci-server-will-check"
- name: "Delete the previous hook"
gitlab_hook:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
hook_url: "https://my-ci-server.example.com/gitlab-hook"
state: absent
- name: "Delete a hook by numeric project id"
gitlab_hook:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: 10
hook_url: "https://my-ci-server.example.com/gitlab-hook"
state: absent
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
hook:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findProject
class GitLabHook(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.hookObject = None
'''
@param project Project Object
@param hook_url Url to call on event
    @param options Hook options
'''
def createOrUpdateHook(self, project, hook_url, options):
changed = False
        # existsHook was already called in main(), so self.hookObject is set when the hook exists
if self.hookObject is None:
hook = self.createHook(project, {
'url': hook_url,
'push_events': options['push_events'],
'issues_events': options['issues_events'],
'merge_requests_events': options['merge_requests_events'],
'tag_push_events': options['tag_push_events'],
'note_events': options['note_events'],
'job_events': options['job_events'],
'pipeline_events': options['pipeline_events'],
'wiki_page_events': options['wiki_page_events'],
'enable_ssl_verification': options['enable_ssl_verification'],
'token': options['token']})
changed = True
else:
changed, hook = self.updateHook(self.hookObject, {
'push_events': options['push_events'],
'issues_events': options['issues_events'],
'merge_requests_events': options['merge_requests_events'],
'tag_push_events': options['tag_push_events'],
'note_events': options['note_events'],
'job_events': options['job_events'],
'pipeline_events': options['pipeline_events'],
'wiki_page_events': options['wiki_page_events'],
'enable_ssl_verification': options['enable_ssl_verification'],
'token': options['token']})
self.hookObject = hook
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the hook %s" % hook_url)
try:
hook.save()
except Exception as e:
self._module.fail_json(msg="Failed to update hook: %s " % e)
return True
else:
return False
'''
@param project Project Object
    @param arguments Attributes of the hook
'''
def createHook(self, project, arguments):
if self._module.check_mode:
return True
        try:
            hook = project.hooks.create(arguments)
        except (gitlab.exceptions.GitlabCreateError) as e:
            self._module.fail_json(msg="Failed to create hook: %s " % to_native(e))
        return hook
'''
@param hook Hook Object
    @param arguments Attributes of the hook
'''
def updateHook(self, hook, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(hook, arg_key) != arguments[arg_key]:
setattr(hook, arg_key, arguments[arg_key])
changed = True
return (changed, hook)
'''
@param project Project object
@param hook_url Url to call on event
'''
def findHook(self, project, hook_url):
hooks = project.hooks.list()
for hook in hooks:
if (hook.url == hook_url):
return hook
'''
@param project Project object
@param hook_url Url to call on event
'''
def existsHook(self, project, hook_url):
        # When the hook exists, object will be stored in self.hookObject.
hook = self.findHook(project, hook_url)
if hook:
self.hookObject = hook
return True
return False
def deleteHook(self):
if self._module.check_mode:
return True
return self.hookObject.delete()
def deprecation_warning(module):
deprecated_aliases = ['private_token', 'access_token', 'enable_ssl_verification']
    for alias in deprecated_aliases:
        if alias in module.params:
            module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
api_token=dict(type='str', no_log=True, aliases=["private_token", "access_token"]),
state=dict(type='str', default="present", choices=["absent", "present"]),
project=dict(type='str', required=True),
hook_url=dict(type='str', required=True),
push_events=dict(type='bool', default=True),
issues_events=dict(type='bool', default=False),
merge_requests_events=dict(type='bool', default=False),
tag_push_events=dict(type='bool', default=False),
note_events=dict(type='bool', default=False),
job_events=dict(type='bool', default=False),
pipeline_events=dict(type='bool', default=False),
wiki_page_events=dict(type='bool', default=False),
hook_validate_certs=dict(type='bool', default=False, aliases=['enable_ssl_verification']),
token=dict(type='str', no_log=True),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token']
],
required_together=[
['api_username', 'api_password']
],
required_one_of=[
['api_username', 'api_token']
],
supports_check_mode=True,
)
deprecation_warning(module)
gitlab_url = re.sub('/api.*', '', module.params['api_url'])
validate_certs = module.params['validate_certs']
gitlab_user = module.params['api_username']
gitlab_password = module.params['api_password']
gitlab_token = module.params['api_token']
state = module.params['state']
project_identifier = module.params['project']
hook_url = module.params['hook_url']
push_events = module.params['push_events']
issues_events = module.params['issues_events']
merge_requests_events = module.params['merge_requests_events']
tag_push_events = module.params['tag_push_events']
note_events = module.params['note_events']
job_events = module.params['job_events']
pipeline_events = module.params['pipeline_events']
wiki_page_events = module.params['wiki_page_events']
enable_ssl_verification = module.params['hook_validate_certs']
hook_token = module.params['token']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e))
gitlab_hook = GitLabHook(module, gitlab_instance)
project = findProject(gitlab_instance, project_identifier)
if project is None:
module.fail_json(msg="Failed to create hook: project %s doesn't exists" % project_identifier)
hook_exists = gitlab_hook.existsHook(project, hook_url)
if state == 'absent':
if hook_exists:
gitlab_hook.deleteHook()
module.exit_json(changed=True, msg="Successfully deleted hook %s" % hook_url)
else:
module.exit_json(changed=False, msg="Hook deleted or does not exists")
if state == 'present':
if gitlab_hook.createOrUpdateHook(project, hook_url, {
"push_events": push_events,
"issues_events": issues_events,
"merge_requests_events": merge_requests_events,
"tag_push_events": tag_push_events,
"note_events": note_events,
"job_events": job_events,
"pipeline_events": pipeline_events,
"wiki_page_events": wiki_page_events,
"enable_ssl_verification": enable_ssl_verification,
"token": hook_token}):
module.exit_json(changed=True, msg="Successfully created or updated the hook %s" % hook_url, hook=gitlab_hook.hookObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the hook %s" % hook_url, hook=gitlab_hook.hookObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,897 |
gitlab_runner contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_runner contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_runner.py:291:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_runner.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61897
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:19Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_project.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2015, Werner Dijkerman ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_project
short_description: Creates/updates/deletes GitLab Projects
description:
- When the project does not exist in GitLab, it will be created.
  - When the project does exist and state=absent, the project will be deleted.
- When changes are made to the project, the project will be updated.
version_added: "2.1"
author:
- Werner Dijkerman (@dj-wasabi)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
server_url:
description:
      - The URL of the GitLab server, with protocol (e.g. http or https).
type: str
login_user:
description:
- GitLab user name.
type: str
login_password:
description:
- GitLab password for login_user
type: str
api_token:
description:
- GitLab token for logging in.
type: str
aliases:
- login_token
group:
description:
      - ID or full path of the group to which this project belongs.
type: str
name:
description:
- The name of the project
required: true
type: str
path:
description:
      - The path of the project you want to create; this will be server_url/<group>/path.
- If not supplied, name will be used.
type: str
description:
description:
      - A description for the project.
type: str
issues_enabled:
description:
- Whether you want to create issues or not.
- Possible values are true and false.
type: bool
default: yes
merge_requests_enabled:
description:
- If merge requests can be made or not.
- Possible values are true and false.
type: bool
default: yes
wiki_enabled:
description:
      - If a wiki for this project should be available or not.
- Possible values are true and false.
type: bool
default: yes
snippets_enabled:
description:
- If creating snippets should be available or not.
- Possible values are true and false.
type: bool
default: yes
visibility:
description:
- Private. Project access must be granted explicitly for each user.
- Internal. The project can be cloned by any logged in user.
- Public. The project can be cloned without any authentication.
default: private
type: str
choices: ["private", "internal", "public"]
aliases:
- visibility_level
import_url:
description:
      - Git repository which will be imported into GitLab.
- GitLab server needs read access to this git repository.
required: false
type: str
state:
description:
- create or delete project.
- Possible values are present and absent.
default: present
type: str
choices: ["present", "absent"]
'''
EXAMPLES = '''
- name: Delete GitLab Project
gitlab_project:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
validate_certs: False
name: my_first_project
state: absent
delegate_to: localhost
- name: Create GitLab Project in group Ansible
gitlab_project:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: my_first_project
group: ansible
issues_enabled: False
wiki_enabled: True
snippets_enabled: True
import_url: http://git.example.com/example/lab.git
state: present
delegate_to: localhost
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
project:
description: API object
returned: always
type: dict
'''
import os
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findGroup, findProject
class GitLabProject(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.projectObject = None
'''
@param project_name Name of the project
@param namespace Namespace Object (User or Group)
@param options Options of the project
'''
def createOrUpdateProject(self, project_name, namespace, options):
changed = False
        # existsProject was already called in main(), so self.projectObject is set when the project exists
if self.projectObject is None:
project = self.createProject(namespace, {
'name': project_name,
'path': options['path'],
'description': options['description'],
'issues_enabled': options['issues_enabled'],
'merge_requests_enabled': options['merge_requests_enabled'],
'wiki_enabled': options['wiki_enabled'],
'snippets_enabled': options['snippets_enabled'],
'visibility': options['visibility'],
'import_url': options['import_url']})
changed = True
else:
changed, project = self.updateProject(self.projectObject, {
'name': project_name,
'description': options['description'],
'issues_enabled': options['issues_enabled'],
'merge_requests_enabled': options['merge_requests_enabled'],
'wiki_enabled': options['wiki_enabled'],
'snippets_enabled': options['snippets_enabled'],
'visibility': options['visibility']})
self.projectObject = project
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the project %s" % project_name)
try:
project.save()
except Exception as e:
self._module.fail_json(msg="Failed update project: %s " % e)
return True
else:
return False
'''
@param namespace Namespace Object (User or Group)
    @param arguments Attributes of the project
'''
def createProject(self, namespace, arguments):
if self._module.check_mode:
return True
arguments['namespace_id'] = namespace.id
try:
project = self._gitlab.projects.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create project: %s " % to_native(e))
return project
'''
@param project Project Object
    @param arguments Attributes of the project
'''
def updateProject(self, project, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(project, arg_key) != arguments[arg_key]:
setattr(project, arg_key, arguments[arg_key])
changed = True
return (changed, project)
def deleteProject(self):
if self._module.check_mode:
return True
project = self.projectObject
return project.delete()
'''
@param namespace User/Group object
    @param path Path of the project
'''
def existsProject(self, namespace, path):
# When project exists, object will be stored in self.projectObject.
project = findProject(self._gitlab, namespace.full_path + '/' + path)
if project:
self.projectObject = project
return True
return False
def deprecation_warning(module):
deprecated_aliases = ['login_token']
    for alias in deprecated_aliases:
        if alias in module.params:
            module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
server_url=dict(type='str', removed_in_version="2.10"),
login_user=dict(type='str', no_log=True, removed_in_version="2.10"),
login_password=dict(type='str', no_log=True, removed_in_version="2.10"),
api_token=dict(type='str', no_log=True, aliases=["login_token"]),
group=dict(type='str'),
name=dict(type='str', required=True),
path=dict(type='str'),
description=dict(type='str'),
issues_enabled=dict(type='bool', default=True),
merge_requests_enabled=dict(type='bool', default=True),
wiki_enabled=dict(type='bool', default=True),
snippets_enabled=dict(default=True, type='bool'),
visibility=dict(type='str', default="private", choices=["internal", "private", "public"], aliases=["visibility_level"]),
import_url=dict(type='str'),
state=dict(type='str', default="present", choices=["absent", "present"]),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_url', 'server_url'],
['api_username', 'login_user'],
['api_password', 'login_password'],
['api_username', 'api_token'],
['api_password', 'api_token'],
['login_user', 'login_token'],
['login_password', 'login_token']
],
required_together=[
['api_username', 'api_password'],
['login_user', 'login_password'],
],
required_one_of=[
['api_username', 'api_token', 'login_user', 'login_token'],
['server_url', 'api_url']
],
supports_check_mode=True,
)
deprecation_warning(module)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
api_url = module.params['api_url']
validate_certs = module.params['validate_certs']
api_user = module.params['api_username']
api_password = module.params['api_password']
gitlab_url = server_url if api_url is None else api_url
gitlab_user = login_user if api_user is None else api_user
gitlab_password = login_password if api_password is None else api_password
gitlab_token = module.params['api_token']
group_identifier = module.params['group']
project_name = module.params['name']
project_path = module.params['path']
project_description = module.params['description']
issues_enabled = module.params['issues_enabled']
merge_requests_enabled = module.params['merge_requests_enabled']
wiki_enabled = module.params['wiki_enabled']
snippets_enabled = module.params['snippets_enabled']
visibility = module.params['visibility']
import_url = module.params['import_url']
state = module.params['state']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e))
# Set project_path to project_name if it is empty.
if project_path is None:
project_path = project_name.replace(" ", "_")
gitlab_project = GitLabProject(module, gitlab_instance)
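    # Resolve the target namespace: the given group if any, otherwise the authenticated user's own namespace.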
if group_identifier:
group = findGroup(gitlab_instance, group_identifier)
if group is None:
module.fail_json(msg="Failed to create project: group %s doesn't exists" % group_identifier)
namespace = gitlab_instance.namespaces.get(group.id)
project_exists = gitlab_project.existsProject(namespace, project_path)
else:
user = gitlab_instance.users.list(username=gitlab_instance.user.username)[0]
namespace = gitlab_instance.namespaces.get(user.id)
project_exists = gitlab_project.existsProject(namespace, project_path)
if state == 'absent':
if project_exists:
gitlab_project.deleteProject()
module.exit_json(changed=True, msg="Successfully deleted project %s" % project_name)
else:
module.exit_json(changed=False, msg="Project deleted or does not exists")
if state == 'present':
if gitlab_project.createOrUpdateProject(project_name, namespace, {
"path": project_path,
"description": project_description,
"issues_enabled": issues_enabled,
"merge_requests_enabled": merge_requests_enabled,
"wiki_enabled": wiki_enabled,
"snippets_enabled": snippets_enabled,
"visibility": visibility,
"import_url": import_url}):
module.exit_json(changed=True, msg="Successfully created or updated the project %s" % project_name, project=gitlab_project.projectObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the project %s" % project_name, project=gitlab_project.projectObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,897 |
gitlab_runner contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_runner contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_runner.py:291:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_runner.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61897
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:19Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_runner.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2018, Samy Coenen <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_runner
short_description: Create, modify and delete GitLab Runners.
description:
- Register, update and delete runners with the GitLab API.
- All operations are performed using the GitLab API v4.
- For details, consult the full API documentation at U(https://docs.gitlab.com/ee/api/runners.html).
- A valid private API token is required for all operations. You can create as many tokens as you like using the GitLab web interface at
U(https://$GITLAB_URL/profile/personal_access_tokens).
- A valid registration token is required for registering a new runner.
To create shared runners, you need to ask your administrator to give you this token.
It can be found at U(https://$GITLAB_URL/admin/runners/).
notes:
- To create a new runner at least the C(api_token), C(description) and C(url) options are required.
- Runners need to have unique descriptions.
version_added: 2.8
author:
- Samy Coenen (@SamyCoenen)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab >= 1.5.0
extends_documentation_fragment:
- auth_basic
options:
url:
description:
      - The URL of the GitLab server, with protocol (e.g. http or https).
type: str
api_token:
description:
- Your private token to interact with the GitLab API.
required: True
type: str
aliases:
- private_token
description:
description:
- The unique name of the runner.
required: True
type: str
aliases:
- name
state:
description:
- Make sure that the runner with the same name exists with the same configuration or delete the runner with the same name.
required: False
default: present
choices: ["present", "absent"]
type: str
registration_token:
description:
- The registration token is used to register new runners.
required: True
type: str
active:
description:
      - Define if the runner is immediately active after creation.
required: False
default: yes
type: bool
locked:
description:
- Determines if the runner is locked or not.
required: False
default: False
type: bool
access_level:
description:
- Determines if a runner can pick up jobs from protected branches.
required: False
default: ref_protected
choices: ["ref_protected", "not_protected"]
type: str
maximum_timeout:
description:
- The maximum timeout that a runner has to pick up a specific job.
required: False
default: 3600
type: int
run_untagged:
description:
- Run untagged jobs or not.
required: False
default: yes
type: bool
tag_list:
description: The tags that apply to the runner.
required: False
default: []
type: list
'''
EXAMPLES = '''
- name: "Register runner"
gitlab_runner:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
registration_token: 4gfdsg345
description: Docker Machine t1
state: present
active: True
tag_list: ['docker']
run_untagged: False
locked: False
- name: "Delete runner"
gitlab_runner:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
description: Docker Machine t1
state: absent
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
runner:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
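# Python 3 removed the built-in cmp(); the shim below recreates it so the
# order-insensitive list comparison in updateRunner() works on both
# interpreter generations.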
try:
cmp
except NameError:
def cmp(a, b):
return (a > b) - (a < b)
class GitLabRunner(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.runnerObject = None
def createOrUpdateRunner(self, description, options):
changed = False
# Because we have already called existsRunner() in main()
if self.runnerObject is None:
runner = self.createRunner({
'description': description,
'active': options['active'],
'token': options['registration_token'],
'locked': options['locked'],
'run_untagged': options['run_untagged'],
'maximum_timeout': options['maximum_timeout'],
'tag_list': options['tag_list']})
changed = True
else:
changed, runner = self.updateRunner(self.runnerObject, {
'active': options['active'],
'locked': options['locked'],
'run_untagged': options['run_untagged'],
'maximum_timeout': options['maximum_timeout'],
'access_level': options['access_level'],
'tag_list': options['tag_list']})
self.runnerObject = runner
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the runner %s" % description)
try:
runner.save()
except Exception as e:
self._module.fail_json(msg="Failed to update runner: %s " % to_native(e))
return True
else:
return False
'''
@param arguments Attributes of the runner
'''
def createRunner(self, arguments):
if self._module.check_mode:
return True
try:
runner = self._gitlab.runners.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create runner: %s " % to_native(e))
return runner
'''
@param runner Runner object
@param arguments Attributes of the runner
'''
def updateRunner(self, runner, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if isinstance(arguments[arg_key], list):
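# List-valued options (tag_list) are compared order-insensitively by
# sorting both sides first; note that sort() is in-place, so the
# runner's own list is reordered as a side effect.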
list1 = getattr(runner, arg_key)
list1.sort()
list2 = arguments[arg_key]
list2.sort()
if cmp(list1, list2):
setattr(runner, arg_key, arguments[arg_key])
changed = True
else:
if getattr(runner, arg_key) != arguments[arg_key]:
setattr(runner, arg_key, arguments[arg_key])
changed = True
return (changed, runner)
'''
@param description Description of the runner
'''
def findRunner(self, description):
runners = self._gitlab.runners.list(as_list=False)
for runner in runners:
if (runner.description == description):
return self._gitlab.runners.get(runner.id)
'''
@param description Description of the runner
'''
def existsRunner(self, description):
# When runner exists, object will be stored in self.runnerObject.
runner = self.findRunner(description)
if runner:
self.runnerObject = runner
return True
return False
def deleteRunner(self):
if self._module.check_mode:
return True
runner = self.runnerObject
return runner.delete()
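# The deprecate() call below, targeting "2.10", is exactly what the
# ansible-deprecated-version sanity test quoted in this issue reports once
# the 2.10 development cycle begins.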
def deprecation_warning(module):
deprecated_aliases = ['private_token']
for alias in deprecated_aliases:
    if alias in module.params:
        module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
url=dict(type='str', removed_in_version="2.10"),
api_token=dict(type='str', no_log=True, aliases=["private_token"]),
description=dict(type='str', required=True, aliases=["name"]),
active=dict(type='bool', default=True),
tag_list=dict(type='list', default=[]),
run_untagged=dict(type='bool', default=True),
locked=dict(type='bool', default=False),
access_level=dict(type='str', default='ref_protected', choices=["not_protected", "ref_protected"]),
maximum_timeout=dict(type='int', default=3600),
registration_token=dict(type='str', required=True),
state=dict(type='str', default="present", choices=["absent", "present"]),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_url', 'url'],
['api_username', 'api_token'],
['api_password', 'api_token'],
],
required_together=[
['api_username', 'api_password'],
['login_user', 'login_password'],
],
required_one_of=[
['api_username', 'api_token'],
['api_url', 'url']
],
supports_check_mode=True,
)
deprecation_warning(module)
if module.params['url'] is not None:
url = re.sub('/api.*', '', module.params['url'])
api_url = module.params['api_url']
validate_certs = module.params['validate_certs']
gitlab_url = url if api_url is None else api_url
gitlab_user = module.params['api_username']
gitlab_password = module.params['api_password']
gitlab_token = module.params['api_token']
state = module.params['state']
runner_description = module.params['description']
runner_active = module.params['active']
tag_list = module.params['tag_list']
run_untagged = module.params['run_untagged']
runner_locked = module.params['locked']
access_level = module.params['access_level']
maximum_timeout = module.params['maximum_timeout']
registration_token = module.params['registration_token']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2" % to_native(e))
gitlab_runner = GitLabRunner(module, gitlab_instance)
runner_exists = gitlab_runner.existsRunner(runner_description)
if state == 'absent':
if runner_exists:
gitlab_runner.deleteRunner()
module.exit_json(changed=True, msg="Successfully deleted runner %s" % runner_description)
else:
module.exit_json(changed=False, msg="Runner deleted or does not exists")
if state == 'present':
if gitlab_runner.createOrUpdateRunner(runner_description, {
"active": runner_active,
"tag_list": tag_list,
"run_untagged": run_untagged,
"locked": runner_locked,
"access_level": access_level,
"maximum_timeout": maximum_timeout,
"registration_token": registration_token}):
module.exit_json(changed=True, runner=gitlab_runner.runnerObject._attrs,
msg="Successfully created or updated the runner %s" % runner_description)
else:
module.exit_json(changed=False, runner=gitlab_runner.runnerObject._attrs,
msg="No need to update the runner %s" % runner_description)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,897 |
gitlab_runner contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_runner contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_runner.py:291:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_runner.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
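For illustration only (the merged change lives in the linked PR), retiring a deprecated alias usually means deleting the alias and its deprecate() call in the same commit, which is what clears the sanity failure. Parameter names below follow the modules in these records, but the snippet is hypothetical:
```python
# Before: the alias is still accepted and a warning fires on every run.
argument_spec_before = dict(
    api_token=dict(type='str', no_log=True, aliases=['private_token']),
)
# module.deprecate("Alias 'private_token' is deprecated", "2.10")

# After the target release is reached: alias and warning removed together.
argument_spec_after = dict(
    api_token=dict(type='str', no_log=True),
)
```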
|
https://github.com/ansible/ansible/issues/61897
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:19Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_user.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2015, Werner Dijkerman ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_user
short_description: Creates/updates/deletes GitLab Users
description:
- When the user does not exist in GitLab, it will be created.
- When the user does exist and state=absent, the user will be deleted.
- When changes are made to the user, the user will be updated.
version_added: "2.1"
author:
- Werner Dijkerman (@dj-wasabi)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
- administrator rights on the GitLab server
extends_documentation_fragment:
- auth_basic
options:
server_url:
description:
- The URL of the GitLab server, with protocol (i.e. http or https).
type: str
login_user:
description:
- GitLab user name.
type: str
login_password:
description:
- GitLab password for login_user
type: str
api_token:
description:
- GitLab token for logging in.
type: str
aliases:
- login_token
name:
description:
- Name of the user you want to create
required: true
type: str
username:
description:
- The username of the user.
required: true
type: str
password:
description:
- The password of the user.
- GitLab server enforces a minimum password length of 8; set this value to 8 or more characters.
required: true
type: str
email:
description:
- The email that belongs to the user.
required: true
type: str
sshkey_name:
description:
- The name of the sshkey
type: str
sshkey_file:
description:
- The ssh key itself.
type: str
group:
description:
- Id or Full path of parent group in the form of group/name
- Add user as a member to this group.
type: str
access_level:
description:
- The access level to the group. One of the following can be used.
- guest
- reporter
- developer
- master (alias for maintainer)
- maintainer
- owner
default: guest
type: str
choices: ["guest", "reporter", "developer", "master", "maintainer", "owner"]
state:
description:
- create or delete group.
- Possible values are present and absent.
default: present
type: str
choices: ["present", "absent"]
confirm:
description:
- Require confirmation.
type: bool
default: yes
version_added: "2.4"
isadmin:
description:
- Grant admin privileges to the user
type: bool
default: no
version_added: "2.8"
external:
description:
- Define external parameter for this user
type: bool
default: no
version_added: "2.8"
'''
EXAMPLES = '''
- name: "Delete GitLab User"
gitlab_user:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
validate_certs: False
username: myusername
state: absent
delegate_to: localhost
- name: "Create GitLab User"
gitlab_user:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: My Name
username: myusername
password: mysecretpassword
email: [email protected]
sshkey_name: MySSH
sshkey_file: ssh-rsa AAAAB3NzaC1yc...
state: present
group: super_group/mon_group
access_level: owner
delegate_to: localhost
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
user:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findGroup
class GitLabUser(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.userObject = None
self.ACCESS_LEVEL = {
'guest': gitlab.GUEST_ACCESS,
'reporter': gitlab.REPORTER_ACCESS,
'developer': gitlab.DEVELOPER_ACCESS,
'master': gitlab.MAINTAINER_ACCESS,
'maintainer': gitlab.MAINTAINER_ACCESS,
'owner': gitlab.OWNER_ACCESS}
'''
@param username Username of the user
@param options User options
'''
def createOrUpdateUser(self, username, options):
changed = False
# Because we have already called existsUser() in main()
if self.userObject is None:
user = self.createUser({
'name': options['name'],
'username': username,
'password': options['password'],
'email': options['email'],
'skip_confirmation': not options['confirm'],
'admin': options['isadmin'],
'external': options['external']})
changed = True
else:
changed, user = self.updateUser(self.userObject, {
'name': options['name'],
'email': options['email'],
'is_admin': options['isadmin'],
'external': options['external']})
# Assign ssh keys
if options['sshkey_name'] and options['sshkey_file']:
key_changed = self.addSshKeyToUser(user, {
'name': options['sshkey_name'],
'file': options['sshkey_file']})
changed = changed or key_changed
# Assign group
if options['group_path']:
group_changed = self.assignUserToGroup(user, options['group_path'], options['access_level'])
changed = changed or group_changed
self.userObject = user
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the user %s" % username)
try:
user.save()
except Exception as e:
self._module.fail_json(msg="Failed to update user: %s " % to_native(e))
return True
else:
return False
'''
@param group User object
'''
def getUserId(self, user):
if user is not None:
return user.id
return None
'''
@param user User object
@param sshkey_name Name of the ssh key
'''
def sshKeyExists(self, user, sshkey_name):
keyList = map(lambda k: k.title, user.keys.list())
return sshkey_name in keyList
'''
@param user User object
@param sshkey Dict containing sshkey infos {"name": "", "file": ""}
'''
def addSshKeyToUser(self, user, sshkey):
if not self.sshKeyExists(user, sshkey['name']):
if self._module.check_mode:
return True
try:
user.keys.create({
'title': sshkey['name'],
'key': sshkey['file']})
except gitlab.exceptions.GitlabCreateError as e:
self._module.fail_json(msg="Failed to assign sshkey to user: %s" % to_native(e))
return True
return False
'''
@param group Group object
@param user_id Id of the user to find
'''
def findMember(self, group, user_id):
try:
member = group.members.get(user_id)
except gitlab.exceptions.GitlabGetError as e:
return None
return member
'''
@param group Group object
@param user_id Id of the user to check
'''
def memberExists(self, group, user_id):
member = self.findMember(group, user_id)
return member is not None
'''
@param group Group object
@param user_id Id of the user to check
@param access_level GitLab access_level to check
'''
def memberAsGoodAccessLevel(self, group, user_id, access_level):
member = self.findMember(group, user_id)
return member.access_level == access_level
'''
@param user User object
@param group_path Complete path of the Group including parent group path. <parent_path>/<group_path>
@param access_level GitLab access_level to assign
'''
def assignUserToGroup(self, user, group_identifier, access_level):
group = findGroup(self._gitlab, group_identifier)
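# Membership handling is idempotent: if the user is already a member, only
# the access level is corrected; otherwise a new membership is created.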
if self._module.check_mode:
return True
if group is None:
return False
if self.memberExists(group, self.getUserId(user)):
member = self.findMember(group, self.getUserId(user))
if not self.memberAsGoodAccessLevel(group, member.id, self.ACCESS_LEVEL[access_level]):
member.access_level = self.ACCESS_LEVEL[access_level]
member.save()
return True
else:
try:
group.members.create({
'user_id': self.getUserId(user),
'access_level': self.ACCESS_LEVEL[access_level]})
except gitlab.exceptions.GitlabCreateError as e:
self._module.fail_json(msg="Failed to assign user to group: %s" % to_native(e))
return True
return False
'''
@param user User object
@param arguments User attributes
'''
def updateUser(self, user, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(user, arg_key) != arguments[arg_key]:
setattr(user, arg_key, arguments[arg_key])
changed = True
return (changed, user)
'''
@param arguments User attributes
'''
def createUser(self, arguments):
if self._module.check_mode:
return True
try:
user = self._gitlab.users.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create user: %s " % to_native(e))
return user
'''
@param username Username of the user
'''
def findUser(self, username):
users = self._gitlab.users.list(search=username)
for user in users:
if (user.username == username):
return user
'''
@param username Username of the user
'''
def existsUser(self, username):
# When user exists, object will be stored in self.userObject.
user = self.findUser(username)
if user:
self.userObject = user
return True
return False
def deleteUser(self):
if self._module.check_mode:
return True
user = self.userObject
return user.delete()
def deprecation_warning(module):
deprecated_aliases = ['login_token']
for alias in deprecated_aliases:
    if alias in module.params:
        module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
server_url=dict(type='str', removed_in_version="2.10"),
login_user=dict(type='str', no_log=True, removed_in_version="2.10"),
login_password=dict(type='str', no_log=True, removed_in_version="2.10"),
api_token=dict(type='str', no_log=True, aliases=["login_token"]),
name=dict(type='str', required=True),
state=dict(type='str', default="present", choices=["absent", "present"]),
username=dict(type='str', required=True),
password=dict(type='str', required=True, no_log=True),
email=dict(type='str', required=True),
sshkey_name=dict(type='str'),
sshkey_file=dict(type='str'),
group=dict(type='str'),
access_level=dict(type='str', default="guest", choices=["developer", "guest", "maintainer", "master", "owner", "reporter"]),
confirm=dict(type='bool', default=True),
isadmin=dict(type='bool', default=False),
external=dict(type='bool', default=False),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_url', 'server_url'],
['api_username', 'login_user'],
['api_password', 'login_password'],
['api_username', 'api_token'],
['api_password', 'api_token'],
['login_user', 'login_token'],
['login_password', 'login_token']
],
required_together=[
['api_username', 'api_password'],
['login_user', 'login_password'],
],
required_one_of=[
['api_username', 'api_token', 'login_user', 'login_token'],
['server_url', 'api_url']
],
supports_check_mode=True,
)
deprecation_warning(module)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
api_url = module.params['api_url']
validate_certs = module.params['validate_certs']
api_user = module.params['api_username']
api_password = module.params['api_password']
gitlab_url = server_url if api_url is None else api_url
gitlab_user = login_user if api_user is None else api_user
gitlab_password = login_password if api_password is None else api_password
gitlab_token = module.params['api_token']
user_name = module.params['name']
state = module.params['state']
user_username = module.params['username'].lower()
user_password = module.params['password']
user_email = module.params['email']
user_sshkey_name = module.params['sshkey_name']
user_sshkey_file = module.params['sshkey_file']
group_path = module.params['group']
access_level = module.params['access_level']
confirm = module.params['confirm']
user_isadmin = module.params['isadmin']
user_external = module.params['external']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e))
gitlab_user = GitLabUser(module, gitlab_instance)
user_exists = gitlab_user.existsUser(user_username)
if state == 'absent':
if user_exists:
gitlab_user.deleteUser()
module.exit_json(changed=True, msg="Successfully deleted user %s" % user_username)
else:
module.exit_json(changed=False, msg="User deleted or does not exists")
if state == 'present':
if gitlab_user.createOrUpdateUser(user_username, {
"name": user_name,
"password": user_password,
"email": user_email,
"sshkey_name": user_sshkey_name,
"sshkey_file": user_sshkey_file,
"group_path": group_path,
"access_level": access_level,
"confirm": confirm,
"isadmin": user_isadmin,
"external": user_external}):
module.exit_json(changed=True, msg="Successfully created or updated the user %s" % user_username, user=gitlab_user.userObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the user %s" % user_username, user=gitlab_user.userObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,896 |
gitlab_project contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_project contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_project.py:292:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_project.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
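For context, here is a minimal python-gitlab sketch of the connection and lookup pattern that every gitlab_* module in these records shares (it assumes a reachable server and a valid token; the URL and names are placeholders):
```python
import gitlab

# Authenticate once up front so bad credentials fail fast.
gl = gitlab.Gitlab(url='https://gitlab.example.com', private_token='TOKEN',
                   ssl_verify=True, api_version=4)
gl.auth()  # raises GitlabAuthenticationError on failure

# Idempotent lookup by a unique field before deciding create vs. update.
project = next((p for p in gl.projects.list(search='my_project')
                if p.path_with_namespace == 'my_group/my_project'), None)
print('present' if project else 'absent')
```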
|
https://github.com/ansible/ansible/issues/61896
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:18Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_deploy_key.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2018, Marcus Watkins <[email protected]>
# Based on code:
# Copyright: (c) 2013, Phillip Gentry <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
module: gitlab_deploy_key
short_description: Manages GitLab project deploy keys.
description:
- Adds, updates and removes project deploy keys
version_added: "2.6"
author:
- Marcus Watkins (@marwatk)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
api_token:
description:
- GitLab token for logging in.
version_added: "2.8"
type: str
aliases:
- private_token
- access_token
project:
description:
- Id or Full path of project in the form of group/name
required: true
type: str
title:
description:
- Deploy key's title
required: true
type: str
key:
description:
- Deploy key
required: true
type: str
can_push:
description:
- Whether this key can push to the project
type: bool
default: no
state:
description:
- When C(present) the deploy key is added to the project if it doesn't exist.
- When C(absent) it will be removed from the project if it exists
required: true
default: present
type: str
choices: [ "present", "absent" ]
'''
EXAMPLES = '''
- name: "Adding a project deploy key"
gitlab_deploy_key:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
title: "Jenkins CI"
state: present
key: "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIEAiPWx6WM4lhHNedGfBpPJNPpZ7yKu+dnn1SJejgt4596k6YjzGGphH2TUxwKzxcKDKKezwkpfnxPkSMkuEspGRt/aZZ9w..."
- name: "Update the above deploy key to add push access"
gitlab_deploy_key:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
title: "Jenkins CI"
state: present
can_push: yes
- name: "Remove the previous deploy key from the project"
gitlab_deploy_key:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
state: absent
key: "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIEAiPWx6WM4lhHNedGfBpPJNPpZ7yKu+dnn1SJejgt4596k6YjzGGphH2TUxwKzxcKDKKezwkpfnxPkSMkuEspGRt/aZZ9w..."
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: key is already in use"
deploy_key:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findProject
class GitLabDeployKey(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.deployKeyObject = None
'''
@param project Project object
@param key_title Title of the key
@param key_key String of the key
@param key_can_push Option of the deployKey
@param options Deploy key options
'''
def createOrUpdateDeployKey(self, project, key_title, key_key, options):
changed = False
# Because we have already called existsDeployKey() in main()
if self.deployKeyObject is None:
deployKey = self.createDeployKey(project, {
'title': key_title,
'key': key_key,
'can_push': options['can_push']})
changed = True
else:
changed, deployKey = self.updateDeployKey(self.deployKeyObject, {
'can_push': options['can_push']})
self.deployKeyObject = deployKey
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the deploy key %s" % key_title)
try:
deployKey.save()
except Exception as e:
self._module.fail_json(msg="Failed to update deploy key: %s " % e)
return True
else:
return False
'''
@param project Project Object
@param arguments Attributes of the deployKey
'''
def createDeployKey(self, project, arguments):
if self._module.check_mode:
return True
try:
deployKey = project.keys.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create deploy key: %s " % to_native(e))
return deployKey
'''
@param deployKey Deploy Key Object
@param arguments Attributes of the deployKey
'''
def updateDeployKey(self, deployKey, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(deployKey, arg_key) != arguments[arg_key]:
setattr(deployKey, arg_key, arguments[arg_key])
changed = True
return (changed, deployKey)
'''
@param project Project object
@param key_title Title of the key
'''
def findDeployKey(self, project, key_title):
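# The key title acts as the de-facto primary key: the first key whose
# title matches is returned; None is returned implicitly otherwise.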
deployKeys = project.keys.list()
for deployKey in deployKeys:
if (deployKey.title == key_title):
return deployKey
'''
@param project Project object
@param key_title Title of the key
'''
def existsDeployKey(self, project, key_title):
# When the deploy key exists, the object will be stored in self.deployKeyObject.
deployKey = self.findDeployKey(project, key_title)
if deployKey:
self.deployKeyObject = deployKey
return True
return False
def deleteDeployKey(self):
if self._module.check_mode:
return True
return self.deployKeyObject.delete()
def deprecation_warning(module):
deprecated_aliases = ['private_token', 'access_token']
for alias in deprecated_aliases:
    if alias in module.params:
        module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
api_token=dict(type='str', no_log=True, aliases=["private_token", "access_token"]),
state=dict(type='str', default="present", choices=["absent", "present"]),
project=dict(type='str', required=True),
key=dict(type='str', required=True),
can_push=dict(type='bool', default=False),
title=dict(type='str', required=True)
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token']
],
required_together=[
['api_username', 'api_password']
],
required_one_of=[
['api_username', 'api_token']
],
supports_check_mode=True,
)
deprecation_warning(module)
gitlab_url = re.sub('/api.*', '', module.params['api_url'])
validate_certs = module.params['validate_certs']
gitlab_user = module.params['api_username']
gitlab_password = module.params['api_password']
gitlab_token = module.params['api_token']
state = module.params['state']
project_identifier = module.params['project']
key_title = module.params['title']
key_keyfile = module.params['key']
key_can_push = module.params['can_push']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e))
gitlab_deploy_key = GitLabDeployKey(module, gitlab_instance)
project = findProject(gitlab_instance, project_identifier)
if project is None:
module.fail_json(msg="Failed to create deploy key: project %s doesn't exists" % project_identifier)
deployKey_exists = gitlab_deploy_key.existsDeployKey(project, key_title)
if state == 'absent':
if deployKey_exists:
gitlab_deploy_key.deleteDeployKey()
module.exit_json(changed=True, msg="Successfully deleted deploy key %s" % key_title)
else:
module.exit_json(changed=False, msg="Deploy key deleted or does not exists")
if state == 'present':
if gitlab_deploy_key.createOrUpdateDeployKey(project, key_title, key_keyfile, {'can_push': key_can_push}):
module.exit_json(changed=True, msg="Successfully created or updated the deploy key %s" % key_title,
deploy_key=gitlab_deploy_key.deployKeyObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the deploy key %s" % key_title,
deploy_key=gitlab_deploy_key.deployKeyObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,896 |
gitlab_project contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_project contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_project.py:292:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_project.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61896
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:18Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_group.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2015, Werner Dijkerman ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_group
short_description: Creates/updates/deletes GitLab Groups
description:
- When the group does not exist in GitLab, it will be created.
- When the group does exist and state=absent, the group will be deleted.
version_added: "2.1"
author:
- Werner Dijkerman (@dj-wasabi)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
server_url:
description:
- The URL of the GitLab server, with protocol (i.e. http or https).
type: str
login_user:
description:
- GitLab user name.
type: str
login_password:
description:
- GitLab password for login_user
type: str
api_token:
description:
- GitLab token for logging in.
type: str
aliases:
- login_token
name:
description:
- Name of the group you want to create.
required: true
type: str
path:
description:
- The path of the group you want to create, this will be server_url/group_path
- If not supplied, the group_name will be used.
type: str
description:
description:
- A description for the group.
version_added: "2.7"
type: str
state:
description:
- create or delete group.
- Possible values are present and absent.
default: present
type: str
choices: ["present", "absent"]
parent:
description:
- Allows creating subgroups.
- Id or Full path of parent group in the form of group/name
version_added: "2.8"
type: str
visibility:
description:
- Default visibility of the group
version_added: "2.8"
choices: ["private", "internal", "public"]
default: private
type: str
'''
EXAMPLES = '''
- name: "Delete GitLab Group"
gitlab_group:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
validate_certs: False
name: my_first_group
state: absent
- name: "Create GitLab Group"
gitlab_group:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: my_first_group
path: my_first_group
state: present
# The group will be created at https://gitlab.dj-wasabi.local/super_parent/parent/my_first_group
- name: "Create GitLab SubGroup"
gitlab_group:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: my_first_group
path: my_first_group
state: present
parent: "super_parent/parent"
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
group:
description: API object
returned: always
type: dict
'''
import os
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findGroup
class GitLabGroup(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.groupObject = None
'''
@param group Group object
'''
def getGroupId(self, group):
if group is not None:
return group.id
return None
'''
@param name Name of the group
@param parent Parent group full path
@param options Group options
'''
def createOrUpdateGroup(self, name, parent, options):
changed = False
# Because we have already called existsGroup() in main()
if self.groupObject is None:
parent_id = self.getGroupId(parent)
group = self.createGroup({
'name': name,
'path': options['path'],
'parent_id': parent_id,
'visibility': options['visibility']})
changed = True
else:
changed, group = self.updateGroup(self.groupObject, {
'name': name,
'description': options['description'],
'visibility': options['visibility']})
self.groupObject = group
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the group %s" % name)
try:
group.save()
except Exception as e:
self._module.fail_json(msg="Failed to update group: %s " % e)
return True
else:
return False
'''
@param arguments Attributes of the group
'''
def createGroup(self, arguments):
if self._module.check_mode:
return True
try:
group = self._gitlab.groups.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create group: %s " % to_native(e))
return group
'''
@param group Group Object
@param arguments Attributes of the group
'''
def updateGroup(self, group, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(group, arg_key) != arguments[arg_key]:
setattr(group, arg_key, arguments[arg_key])
changed = True
return (changed, group)
def deleteGroup(self):
group = self.groupObject
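# Deletion is refused while the group still holds projects, so a playbook
# cannot remove projects by accident through a group deletion.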
if len(group.projects.list()) >= 1:
self._module.fail_json(
msg="There are still projects in this group. These needs to be moved or deleted before this group can be removed.")
else:
if self._module.check_mode:
return True
try:
group.delete()
except Exception as e:
self._module.fail_json(msg="Failed to delete group: %s " % to_native(e))
'''
@param project_identifier Id or complete path of the group, including any parent path (<parent_path>/<group_path>)
'''
def existsGroup(self, project_identifier):
# When the group exists, the object will be stored in self.groupObject.
group = findGroup(self._gitlab, project_identifier)
if group:
self.groupObject = group
return True
return False
def deprecation_warning(module):
deprecated_aliases = ['login_token']
for alias in deprecated_aliases:
    if alias in module.params:
        module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
server_url=dict(type='str', removed_in_version="2.10"),
login_user=dict(type='str', no_log=True, removed_in_version="2.10"),
login_password=dict(type='str', no_log=True, removed_in_version="2.10"),
api_token=dict(type='str', no_log=True, aliases=["login_token"]),
name=dict(type='str', required=True),
path=dict(type='str'),
description=dict(type='str'),
state=dict(type='str', default="present", choices=["absent", "present"]),
parent=dict(type='str'),
visibility=dict(type='str', default="private", choices=["internal", "private", "public"]),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_url', 'server_url'],
['api_username', 'login_user'],
['api_password', 'login_password'],
['api_username', 'api_token'],
['api_password', 'api_token'],
['login_user', 'login_token'],
['login_password', 'login_token']
],
required_together=[
['api_username', 'api_password'],
['login_user', 'login_password'],
],
required_one_of=[
['api_username', 'api_token', 'login_user', 'login_token'],
['server_url', 'api_url']
],
supports_check_mode=True,
)
deprecation_warning(module)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
api_url = module.params['api_url']
validate_certs = module.params['validate_certs']
api_user = module.params['api_username']
api_password = module.params['api_password']
gitlab_url = server_url if api_url is None else api_url
gitlab_user = login_user if api_user is None else api_user
gitlab_password = login_password if api_password is None else api_password
gitlab_token = module.params['api_token']
group_name = module.params['name']
group_path = module.params['path']
description = module.params['description']
state = module.params['state']
parent_identifier = module.params['parent']
group_visibility = module.params['visibility']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2" % to_native(e))
# Define default group_path based on group_name
if group_path is None:
group_path = group_name.replace(" ", "_")
gitlab_group = GitLabGroup(module, gitlab_instance)
parent_group = None
if parent_identifier:
parent_group = findGroup(gitlab_instance, parent_identifier)
if not parent_group:
module.fail_json(msg="Failed create GitLab group: Parent group doesn't exists")
group_exists = gitlab_group.existsGroup(parent_group.full_path + '/' + group_path)
else:
group_exists = gitlab_group.existsGroup(group_path)
if state == 'absent':
if group_exists:
gitlab_group.deleteGroup()
module.exit_json(changed=True, msg="Successfully deleted group %s" % group_name)
else:
module.exit_json(changed=False, msg="Group deleted or does not exists")
if state == 'present':
if gitlab_group.createOrUpdateGroup(group_name, parent_group, {
"path": group_path,
"description": description,
"visibility": group_visibility}):
module.exit_json(changed=True, msg="Successfully created or updated the group %s" % group_name, group=gitlab_group.groupObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the group %s" % group_name, group=gitlab_group.groupObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,896 |
gitlab_project contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_project contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_project.py:292:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_project.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61896
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:18Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_hook.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2018, Marcus Watkins <[email protected]>
# Based on code:
# Copyright: (c) 2013, Phillip Gentry <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_hook
short_description: Manages GitLab project hooks.
description:
- Adds, updates and removes project hooks
version_added: "2.6"
author:
- Marcus Watkins (@marwatk)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
api_token:
description:
- GitLab token for logging in.
version_added: "2.8"
type: str
aliases:
- private_token
- access_token
project:
description:
- Id or Full path of the project in the form of group/name.
required: true
type: str
hook_url:
description:
- The url that you want GitLab to post to, this is used as the primary key for updates and deletion.
required: true
type: str
state:
description:
- When C(present) the hook will be updated to match the input or created if it doesn't exist.
- When C(absent) hook will be deleted if it exists.
required: true
default: present
type: str
choices: [ "present", "absent" ]
push_events:
description:
- Trigger hook on push events.
type: bool
default: yes
issues_events:
description:
- Trigger hook on issues events.
type: bool
default: no
merge_requests_events:
description:
- Trigger hook on merge requests events.
type: bool
default: no
tag_push_events:
description:
- Trigger hook on tag push events.
type: bool
default: no
note_events:
description:
- Trigger hook on note events or when someone adds a comment.
type: bool
default: no
job_events:
description:
- Trigger hook on job events.
type: bool
default: no
pipeline_events:
description:
- Trigger hook on pipeline events.
type: bool
default: no
wiki_page_events:
description:
- Trigger hook on wiki events.
type: bool
default: no
hook_validate_certs:
description:
- Whether GitLab will do SSL verification when triggering the hook.
type: bool
default: no
aliases: [ enable_ssl_verification ]
token:
description:
- Secret token to validate hook messages at the receiver.
- If this is present it will always result in a change as it cannot be retrieved from GitLab.
- Will show up in the X-GitLab-Token HTTP request header.
required: false
type: str
'''
EXAMPLES = '''
- name: "Adding a project hook"
gitlab_hook:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
hook_url: "https://my-ci-server.example.com/gitlab-hook"
state: present
push_events: yes
tag_push_events: yes
hook_validate_certs: no
token: "my-super-secret-token-that-my-ci-server-will-check"
- name: "Delete the previous hook"
gitlab_hook:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
hook_url: "https://my-ci-server.example.com/gitlab-hook"
state: absent
- name: "Delete a hook by numeric project id"
gitlab_hook:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: 10
hook_url: "https://my-ci-server.example.com/gitlab-hook"
state: absent
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
hook:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findProject
class GitLabHook(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.hookObject = None
'''
@param project Project Object
@param hook_url Url to call on event
@param options Hook options (events, SSL verification, secret token)
'''
def createOrUpdateHook(self, project, hook_url, options):
changed = False
# Because we have already called existsHook() in main()
if self.hookObject is None:
hook = self.createHook(project, {
'url': hook_url,
'push_events': options['push_events'],
'issues_events': options['issues_events'],
'merge_requests_events': options['merge_requests_events'],
'tag_push_events': options['tag_push_events'],
'note_events': options['note_events'],
'job_events': options['job_events'],
'pipeline_events': options['pipeline_events'],
'wiki_page_events': options['wiki_page_events'],
'enable_ssl_verification': options['enable_ssl_verification'],
'token': options['token']})
changed = True
else:
changed, hook = self.updateHook(self.hookObject, {
'push_events': options['push_events'],
'issues_events': options['issues_events'],
'merge_requests_events': options['merge_requests_events'],
'tag_push_events': options['tag_push_events'],
'note_events': options['note_events'],
'job_events': options['job_events'],
'pipeline_events': options['pipeline_events'],
'wiki_page_events': options['wiki_page_events'],
'enable_ssl_verification': options['enable_ssl_verification'],
'token': options['token']})
self.hookObject = hook
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the hook %s" % hook_url)
try:
hook.save()
except Exception as e:
self._module.fail_json(msg="Failed to update hook: %s " % e)
return True
else:
return False
'''
@param project Project Object
@param arguments Attributes of the hook
'''
def createHook(self, project, arguments):
if self._module.check_mode:
return True
hook = project.hooks.create(arguments)
return hook
'''
@param hook Hook Object
@param arguments Attributes of the hook
'''
def updateHook(self, hook, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(hook, arg_key) != arguments[arg_key]:
setattr(hook, arg_key, arguments[arg_key])
changed = True
return (changed, hook)
'''
@param project Project object
@param hook_url Url to call on event
'''
def findHook(self, project, hook_url):
hooks = project.hooks.list()
for hook in hooks:
if (hook.url == hook_url):
return hook
'''
@param project Project object
@param hook_url Url to call on event
'''
def existsHook(self, project, hook_url):
# When the hook exists, the object will be stored in self.hookObject.
hook = self.findHook(project, hook_url)
if hook:
self.hookObject = hook
return True
return False
def deleteHook(self):
if self._module.check_mode:
return True
return self.hookObject.delete()
def deprecation_warning(module):
deprecated_aliases = ['private_token', 'access_token', 'enable_ssl_verification']
for alias in deprecated_aliases:
    if alias in module.params:
        module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
api_token=dict(type='str', no_log=True, aliases=["private_token", "access_token"]),
state=dict(type='str', default="present", choices=["absent", "present"]),
project=dict(type='str', required=True),
hook_url=dict(type='str', required=True),
push_events=dict(type='bool', default=True),
issues_events=dict(type='bool', default=False),
merge_requests_events=dict(type='bool', default=False),
tag_push_events=dict(type='bool', default=False),
note_events=dict(type='bool', default=False),
job_events=dict(type='bool', default=False),
pipeline_events=dict(type='bool', default=False),
wiki_page_events=dict(type='bool', default=False),
hook_validate_certs=dict(type='bool', default=False, aliases=['enable_ssl_verification']),
token=dict(type='str', no_log=True),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token']
],
required_together=[
['api_username', 'api_password']
],
required_one_of=[
['api_username', 'api_token']
],
supports_check_mode=True,
)
deprecation_warning(module)
gitlab_url = re.sub('/api.*', '', module.params['api_url'])
validate_certs = module.params['validate_certs']
gitlab_user = module.params['api_username']
gitlab_password = module.params['api_password']
gitlab_token = module.params['api_token']
state = module.params['state']
project_identifier = module.params['project']
hook_url = module.params['hook_url']
push_events = module.params['push_events']
issues_events = module.params['issues_events']
merge_requests_events = module.params['merge_requests_events']
tag_push_events = module.params['tag_push_events']
note_events = module.params['note_events']
job_events = module.params['job_events']
pipeline_events = module.params['pipeline_events']
wiki_page_events = module.params['wiki_page_events']
enable_ssl_verification = module.params['hook_validate_certs']
hook_token = module.params['token']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e))
gitlab_hook = GitLabHook(module, gitlab_instance)
project = findProject(gitlab_instance, project_identifier)
if project is None:
module.fail_json(msg="Failed to create hook: project %s doesn't exists" % project_identifier)
hook_exists = gitlab_hook.existsHook(project, hook_url)
if state == 'absent':
if hook_exists:
gitlab_hook.deleteHook()
module.exit_json(changed=True, msg="Successfully deleted hook %s" % hook_url)
else:
module.exit_json(changed=False, msg="Hook deleted or does not exists")
if state == 'present':
if gitlab_hook.createOrUpdateHook(project, hook_url, {
"push_events": push_events,
"issues_events": issues_events,
"merge_requests_events": merge_requests_events,
"tag_push_events": tag_push_events,
"note_events": note_events,
"job_events": job_events,
"pipeline_events": pipeline_events,
"wiki_page_events": wiki_page_events,
"enable_ssl_verification": enable_ssl_verification,
"token": hook_token}):
module.exit_json(changed=True, msg="Successfully created or updated the hook %s" % hook_url, hook=gitlab_hook.hookObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the hook %s" % hook_url, hook=gitlab_hook.hookObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,896 |
gitlab_project contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_project contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_project.py:292:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_project.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
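Stepping back, every module quoted above repeats the same update idiom: copy each non-None desired value onto the API object and report whether anything actually changed. A standalone sketch of that pattern (names are illustrative; FakeHook stands in for a python-gitlab object):
```python
def apply_arguments(obj, arguments):
    """Set desired values on obj; return True if anything changed."""
    changed = False
    for key, value in arguments.items():
        if value is not None and getattr(obj, key) != value:
            setattr(obj, key, value)
            changed = True
    return changed

class FakeHook(object):
    push_events = True
    token = None

hook = FakeHook()
print(apply_arguments(hook, {'push_events': False, 'token': None}))  # True
print(apply_arguments(hook, {'push_events': False}))                 # False
```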
|
https://github.com/ansible/ansible/issues/61896
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:18Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_project.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2015, Werner Dijkerman ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_project
short_description: Creates/updates/deletes GitLab Projects
description:
- When the project does not exist in GitLab, it will be created.
  - When the project exists and state=absent, the project will be deleted.
- When changes are made to the project, the project will be updated.
version_added: "2.1"
author:
- Werner Dijkerman (@dj-wasabi)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
server_url:
description:
- The URL of the GitLab server, with protocol (i.e. http or https).
type: str
login_user:
description:
- GitLab user name.
type: str
login_password:
description:
- GitLab password for login_user
type: str
api_token:
description:
- GitLab token for logging in.
type: str
aliases:
- login_token
group:
description:
      - ID or full path of the group to which this project belongs.
type: str
name:
description:
- The name of the project
required: true
type: str
path:
description:
      - The path of the project you want to create; this will be server_url/<group>/path.
- If not supplied, name will be used.
type: str
description:
description:
      - A description for the project.
type: str
issues_enabled:
description:
- Whether you want to create issues or not.
- Possible values are true and false.
type: bool
default: yes
merge_requests_enabled:
description:
- If merge requests can be made or not.
- Possible values are true and false.
type: bool
default: yes
wiki_enabled:
description:
      - If a wiki for this project should be available or not.
- Possible values are true and false.
type: bool
default: yes
snippets_enabled:
description:
- If creating snippets should be available or not.
- Possible values are true and false.
type: bool
default: yes
visibility:
description:
- Private. Project access must be granted explicitly for each user.
- Internal. The project can be cloned by any logged in user.
- Public. The project can be cloned without any authentication.
default: private
type: str
choices: ["private", "internal", "public"]
aliases:
- visibility_level
import_url:
description:
      - Git repository which will be imported into GitLab.
- GitLab server needs read access to this git repository.
required: false
type: str
state:
description:
      - Create or delete the project.
- Possible values are present and absent.
default: present
type: str
choices: ["present", "absent"]
'''
EXAMPLES = '''
- name: Delete GitLab Project
gitlab_project:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
validate_certs: False
name: my_first_project
state: absent
delegate_to: localhost
- name: Create GitLab Project in group Ansible
gitlab_project:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: my_first_project
group: ansible
issues_enabled: False
wiki_enabled: True
snippets_enabled: True
import_url: http://git.example.com/example/lab.git
state: present
delegate_to: localhost
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
project:
description: API object
returned: always
type: dict
'''
import os
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findGroup, findProject
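# findGroup()/findProject() resolve a group or project from an ID or full path, returning None when nothing matches.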
class GitLabProject(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.projectObject = None
'''
@param project_name Name of the project
@param namespace Namespace Object (User or Group)
@param options Options of the project
'''
def createOrUpdateProject(self, project_name, namespace, options):
changed = False
        # Because existsProject() has already been called in main()
if self.projectObject is None:
project = self.createProject(namespace, {
'name': project_name,
'path': options['path'],
'description': options['description'],
'issues_enabled': options['issues_enabled'],
'merge_requests_enabled': options['merge_requests_enabled'],
'wiki_enabled': options['wiki_enabled'],
'snippets_enabled': options['snippets_enabled'],
'visibility': options['visibility'],
'import_url': options['import_url']})
changed = True
else:
changed, project = self.updateProject(self.projectObject, {
'name': project_name,
'description': options['description'],
'issues_enabled': options['issues_enabled'],
'merge_requests_enabled': options['merge_requests_enabled'],
'wiki_enabled': options['wiki_enabled'],
'snippets_enabled': options['snippets_enabled'],
'visibility': options['visibility']})
self.projectObject = project
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the project %s" % project_name)
try:
project.save()
except Exception as e:
self._module.fail_json(msg="Failed update project: %s " % e)
return True
else:
return False
'''
@param namespace Namespace Object (User or Group)
    @param arguments Attributes of the project
'''
def createProject(self, namespace, arguments):
if self._module.check_mode:
return True
arguments['namespace_id'] = namespace.id
try:
project = self._gitlab.projects.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create project: %s " % to_native(e))
return project
'''
@param project Project Object
    @param arguments Attributes of the project
'''
def updateProject(self, project, arguments):
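        # Generic attribute diff: only options explicitly provided (not None) are compared and written back to the project object.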
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(project, arg_key) != arguments[arg_key]:
setattr(project, arg_key, arguments[arg_key])
changed = True
return (changed, project)
def deleteProject(self):
if self._module.check_mode:
return True
project = self.projectObject
return project.delete()
'''
@param namespace User/Group object
    @param path Path of the project
'''
def existsProject(self, namespace, path):
# When project exists, object will be stored in self.projectObject.
project = findProject(self._gitlab, namespace.full_path + '/' + path)
if project:
self.projectObject = project
return True
return False
def deprecation_warning(module):
deprecated_aliases = ['login_token']
    for alias in deprecated_aliases:
        if alias in module.params:
            module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
server_url=dict(type='str', removed_in_version="2.10"),
login_user=dict(type='str', no_log=True, removed_in_version="2.10"),
login_password=dict(type='str', no_log=True, removed_in_version="2.10"),
api_token=dict(type='str', no_log=True, aliases=["login_token"]),
group=dict(type='str'),
name=dict(type='str', required=True),
path=dict(type='str'),
description=dict(type='str'),
issues_enabled=dict(type='bool', default=True),
merge_requests_enabled=dict(type='bool', default=True),
wiki_enabled=dict(type='bool', default=True),
snippets_enabled=dict(default=True, type='bool'),
visibility=dict(type='str', default="private", choices=["internal", "private", "public"], aliases=["visibility_level"]),
import_url=dict(type='str'),
state=dict(type='str', default="present", choices=["absent", "present"]),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_url', 'server_url'],
['api_username', 'login_user'],
['api_password', 'login_password'],
['api_username', 'api_token'],
['api_password', 'api_token'],
['login_user', 'login_token'],
['login_password', 'login_token']
],
required_together=[
['api_username', 'api_password'],
['login_user', 'login_password'],
],
required_one_of=[
['api_username', 'api_token', 'login_user', 'login_token'],
['server_url', 'api_url']
],
supports_check_mode=True,
)
deprecation_warning(module)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
api_url = module.params['api_url']
validate_certs = module.params['validate_certs']
api_user = module.params['api_username']
api_password = module.params['api_password']
gitlab_url = server_url if api_url is None else api_url
gitlab_user = login_user if api_user is None else api_user
gitlab_password = login_password if api_password is None else api_password
gitlab_token = module.params['api_token']
group_identifier = module.params['group']
project_name = module.params['name']
project_path = module.params['path']
project_description = module.params['description']
issues_enabled = module.params['issues_enabled']
merge_requests_enabled = module.params['merge_requests_enabled']
wiki_enabled = module.params['wiki_enabled']
snippets_enabled = module.params['snippets_enabled']
visibility = module.params['visibility']
import_url = module.params['import_url']
state = module.params['state']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e))
# Set project_path to project_name if it is empty.
if project_path is None:
project_path = project_name.replace(" ", "_")
gitlab_project = GitLabProject(module, gitlab_instance)
if group_identifier:
group = findGroup(gitlab_instance, group_identifier)
if group is None:
module.fail_json(msg="Failed to create project: group %s doesn't exists" % group_identifier)
namespace = gitlab_instance.namespaces.get(group.id)
project_exists = gitlab_project.existsProject(namespace, project_path)
else:
user = gitlab_instance.users.list(username=gitlab_instance.user.username)[0]
namespace = gitlab_instance.namespaces.get(user.id)
project_exists = gitlab_project.existsProject(namespace, project_path)
if state == 'absent':
if project_exists:
gitlab_project.deleteProject()
module.exit_json(changed=True, msg="Successfully deleted project %s" % project_name)
else:
module.exit_json(changed=False, msg="Project deleted or does not exists")
if state == 'present':
if gitlab_project.createOrUpdateProject(project_name, namespace, {
"path": project_path,
"description": project_description,
"issues_enabled": issues_enabled,
"merge_requests_enabled": merge_requests_enabled,
"wiki_enabled": wiki_enabled,
"snippets_enabled": snippets_enabled,
"visibility": visibility,
"import_url": import_url}):
module.exit_json(changed=True, msg="Successfully created or updated the project %s" % project_name, project=gitlab_project.projectObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the project %s" % project_name, project=gitlab_project.projectObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,896 |
gitlab_project contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_project contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_project.py:292:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_project.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61896
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:18Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_runner.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2018, Samy Coenen <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_runner
short_description: Create, modify and delete GitLab Runners.
description:
- Register, update and delete runners with the GitLab API.
- All operations are performed using the GitLab API v4.
- For details, consult the full API documentation at U(https://docs.gitlab.com/ee/api/runners.html).
- A valid private API token is required for all operations. You can create as many tokens as you like using the GitLab web interface at
U(https://$GITLAB_URL/profile/personal_access_tokens).
- A valid registration token is required for registering a new runner.
To create shared runners, you need to ask your administrator to give you this token.
It can be found at U(https://$GITLAB_URL/admin/runners/).
notes:
  - To create a new runner, at least the C(api_token), C(description) and C(url) options are required.
- Runners need to have unique descriptions.
version_added: 2.8
author:
- Samy Coenen (@SamyCoenen)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab >= 1.5.0
extends_documentation_fragment:
- auth_basic
options:
url:
description:
- The URL of the GitLab server, with protocol (i.e. http or https).
type: str
api_token:
description:
- Your private token to interact with the GitLab API.
required: True
type: str
aliases:
- private_token
description:
description:
- The unique name of the runner.
required: True
type: str
aliases:
- name
state:
description:
      - Make sure that a runner with the same name exists with the same configuration, or delete a runner with the same name.
required: False
default: present
choices: ["present", "absent"]
type: str
registration_token:
description:
- The registration token is used to register new runners.
required: True
type: str
active:
description:
      - Define if the runner is immediately active after creation.
required: False
default: yes
type: bool
locked:
description:
- Determines if the runner is locked or not.
required: False
default: False
type: bool
access_level:
description:
- Determines if a runner can pick up jobs from protected branches.
required: False
default: ref_protected
choices: ["ref_protected", "not_protected"]
type: str
maximum_timeout:
description:
- The maximum timeout that a runner has to pick up a specific job.
required: False
default: 3600
type: int
run_untagged:
description:
- Run untagged jobs or not.
required: False
default: yes
type: bool
tag_list:
description: The tags that apply to the runner.
required: False
default: []
type: list
'''
EXAMPLES = '''
- name: "Register runner"
gitlab_runner:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
registration_token: 4gfdsg345
description: Docker Machine t1
state: present
active: True
tag_list: ['docker']
run_untagged: False
locked: False
- name: "Delete runner"
gitlab_runner:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
description: Docker Machine t1
state: absent
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
runner:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
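# Python 3 removed the built-in cmp(); the shim below restores it for the order-insensitive list comparison in updateRunner().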
try:
cmp
except NameError:
def cmp(a, b):
return (a > b) - (a < b)
class GitLabRunner(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.runnerObject = None
def createOrUpdateRunner(self, description, options):
changed = False
        # Because existsRunner() has already been called in main()
if self.runnerObject is None:
runner = self.createRunner({
'description': description,
'active': options['active'],
'token': options['registration_token'],
'locked': options['locked'],
'run_untagged': options['run_untagged'],
'maximum_timeout': options['maximum_timeout'],
'tag_list': options['tag_list']})
changed = True
else:
changed, runner = self.updateRunner(self.runnerObject, {
'active': options['active'],
'locked': options['locked'],
'run_untagged': options['run_untagged'],
'maximum_timeout': options['maximum_timeout'],
'access_level': options['access_level'],
'tag_list': options['tag_list']})
self.runnerObject = runner
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the runner %s" % description)
try:
runner.save()
except Exception as e:
self._module.fail_json(msg="Failed to update runner: %s " % to_native(e))
return True
else:
return False
'''
    @param arguments Attributes of the runner
'''
def createRunner(self, arguments):
if self._module.check_mode:
return True
try:
runner = self._gitlab.runners.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create runner: %s " % to_native(e))
return runner
'''
@param runner Runner object
    @param arguments Attributes of the runner
'''
def updateRunner(self, runner, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if isinstance(arguments[arg_key], list):
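                    # Lists (e.g. tag_list) are compared order-insensitively: sort both sides, then cmp() flags any remaining difference.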
list1 = getattr(runner, arg_key)
list1.sort()
list2 = arguments[arg_key]
list2.sort()
if cmp(list1, list2):
setattr(runner, arg_key, arguments[arg_key])
changed = True
else:
if getattr(runner, arg_key) != arguments[arg_key]:
setattr(runner, arg_key, arguments[arg_key])
changed = True
return (changed, runner)
'''
@param description Description of the runner
'''
def findRunner(self, description):
runners = self._gitlab.runners.list(as_list=False)
for runner in runners:
if (runner.description == description):
return self._gitlab.runners.get(runner.id)
'''
@param description Description of the runner
'''
def existsRunner(self, description):
# When runner exists, object will be stored in self.runnerObject.
runner = self.findRunner(description)
if runner:
self.runnerObject = runner
return True
return False
def deleteRunner(self):
if self._module.check_mode:
return True
runner = self.runnerObject
return runner.delete()
def deprecation_warning(module):
deprecated_aliases = ['private_token']
    for alias in deprecated_aliases:
        if alias in module.params:
            module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
url=dict(type='str', removed_in_version="2.10"),
api_token=dict(type='str', no_log=True, aliases=["private_token"]),
description=dict(type='str', required=True, aliases=["name"]),
active=dict(type='bool', default=True),
tag_list=dict(type='list', default=[]),
run_untagged=dict(type='bool', default=True),
locked=dict(type='bool', default=False),
access_level=dict(type='str', default='ref_protected', choices=["not_protected", "ref_protected"]),
maximum_timeout=dict(type='int', default=3600),
registration_token=dict(type='str', required=True),
state=dict(type='str', default="present", choices=["absent", "present"]),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_url', 'url'],
['api_username', 'api_token'],
['api_password', 'api_token'],
],
required_together=[
['api_username', 'api_password'],
['login_user', 'login_password'],
],
required_one_of=[
['api_username', 'api_token'],
['api_url', 'url']
],
supports_check_mode=True,
)
deprecation_warning(module)
if module.params['url'] is not None:
url = re.sub('/api.*', '', module.params['url'])
api_url = module.params['api_url']
validate_certs = module.params['validate_certs']
gitlab_url = url if api_url is None else api_url
gitlab_user = module.params['api_username']
gitlab_password = module.params['api_password']
gitlab_token = module.params['api_token']
state = module.params['state']
runner_description = module.params['description']
runner_active = module.params['active']
tag_list = module.params['tag_list']
run_untagged = module.params['run_untagged']
runner_locked = module.params['locked']
access_level = module.params['access_level']
maximum_timeout = module.params['maximum_timeout']
registration_token = module.params['registration_token']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2" % to_native(e))
gitlab_runner = GitLabRunner(module, gitlab_instance)
runner_exists = gitlab_runner.existsRunner(runner_description)
if state == 'absent':
if runner_exists:
gitlab_runner.deleteRunner()
module.exit_json(changed=True, msg="Successfully deleted runner %s" % runner_description)
else:
module.exit_json(changed=False, msg="Runner deleted or does not exists")
if state == 'present':
if gitlab_runner.createOrUpdateRunner(runner_description, {
"active": runner_active,
"tag_list": tag_list,
"run_untagged": run_untagged,
"locked": runner_locked,
"access_level": access_level,
"maximum_timeout": maximum_timeout,
"registration_token": registration_token}):
module.exit_json(changed=True, runner=gitlab_runner.runnerObject._attrs,
msg="Successfully created or updated the runner %s" % runner_description)
else:
module.exit_json(changed=False, runner=gitlab_runner.runnerObject._attrs,
msg="No need to update the runner %s" % runner_description)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,896 |
gitlab_project contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_project contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_project.py:292:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_project.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61896
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:18Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_user.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2015, Werner Dijkerman ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_user
short_description: Creates/updates/deletes GitLab Users
description:
- When the user does not exist in GitLab, it will be created.
  - When the user exists and state=absent, the user will be deleted.
  - When changes are made to the user, the user will be updated.
version_added: "2.1"
author:
- Werner Dijkerman (@dj-wasabi)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
- administrator rights on the GitLab server
extends_documentation_fragment:
- auth_basic
options:
server_url:
description:
- The URL of the GitLab server, with protocol (i.e. http or https).
type: str
login_user:
description:
- GitLab user name.
type: str
login_password:
description:
- GitLab password for login_user
type: str
api_token:
description:
- GitLab token for logging in.
type: str
aliases:
- login_token
name:
description:
- Name of the user you want to create
required: true
type: str
username:
description:
- The username of the user.
required: true
type: str
password:
description:
- The password of the user.
      - The GitLab server enforces a minimum password length of 8, so set this value to 8 or more characters.
required: true
type: str
email:
description:
- The email that belongs to the user.
required: true
type: str
sshkey_name:
description:
- The name of the sshkey
type: str
sshkey_file:
description:
- The ssh key itself.
type: str
group:
description:
      - ID or full path of the parent group in the form group/name.
      - Adds the user as a member of this group.
type: str
access_level:
description:
- The access level to the group. One of the following can be used.
- guest
- reporter
- developer
- master (alias for maintainer)
- maintainer
- owner
default: guest
type: str
choices: ["guest", "reporter", "developer", "master", "maintainer", "owner"]
state:
description:
      - Create or delete the user.
- Possible values are present and absent.
default: present
type: str
choices: ["present", "absent"]
confirm:
description:
- Require confirmation.
type: bool
default: yes
version_added: "2.4"
isadmin:
description:
- Grant admin privileges to the user
type: bool
default: no
version_added: "2.8"
external:
description:
- Define external parameter for this user
type: bool
default: no
version_added: "2.8"
'''
EXAMPLES = '''
- name: "Delete GitLab User"
gitlab_user:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
validate_certs: False
username: myusername
state: absent
delegate_to: localhost
- name: "Create GitLab User"
gitlab_user:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: My Name
username: myusername
password: mysecretpassword
email: [email protected]
sshkey_name: MySSH
sshkey_file: ssh-rsa AAAAB3NzaC1yc...
state: present
group: super_group/mon_group
access_level: owner
delegate_to: localhost
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
user:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findGroup
class GitLabUser(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.userObject = None
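        # Map the module's access_level choices onto python-gitlab constants; 'master' is kept as a backward-compatible alias for 'maintainer'.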
self.ACCESS_LEVEL = {
'guest': gitlab.GUEST_ACCESS,
'reporter': gitlab.REPORTER_ACCESS,
'developer': gitlab.DEVELOPER_ACCESS,
'master': gitlab.MAINTAINER_ACCESS,
'maintainer': gitlab.MAINTAINER_ACCESS,
'owner': gitlab.OWNER_ACCESS}
'''
@param username Username of the user
@param options User options
'''
def createOrUpdateUser(self, username, options):
changed = False
        # Because existsUser() has already been called in main()
if self.userObject is None:
user = self.createUser({
'name': options['name'],
'username': username,
'password': options['password'],
'email': options['email'],
'skip_confirmation': not options['confirm'],
'admin': options['isadmin'],
'external': options['external']})
changed = True
else:
changed, user = self.updateUser(self.userObject, {
'name': options['name'],
'email': options['email'],
'is_admin': options['isadmin'],
'external': options['external']})
# Assign ssh keys
if options['sshkey_name'] and options['sshkey_file']:
key_changed = self.addSshKeyToUser(user, {
'name': options['sshkey_name'],
'file': options['sshkey_file']})
changed = changed or key_changed
# Assign group
if options['group_path']:
group_changed = self.assignUserToGroup(user, options['group_path'], options['access_level'])
changed = changed or group_changed
self.userObject = user
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the user %s" % username)
try:
user.save()
except Exception as e:
self._module.fail_json(msg="Failed to update user: %s " % to_native(e))
return True
else:
return False
'''
    @param user User object
'''
def getUserId(self, user):
if user is not None:
return user.id
return None
'''
@param user User object
@param sshkey_name Name of the ssh key
'''
def sshKeyExists(self, user, sshkey_name):
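        # Existence is checked against key titles, since this module identifies a user's SSH keys by title.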
keyList = map(lambda k: k.title, user.keys.list())
return sshkey_name in keyList
'''
@param user User object
    @param sshkey Dict containing sshkey info {"name": "", "file": ""}
'''
def addSshKeyToUser(self, user, sshkey):
if not self.sshKeyExists(user, sshkey['name']):
if self._module.check_mode:
return True
try:
user.keys.create({
'title': sshkey['name'],
'key': sshkey['file']})
except gitlab.exceptions.GitlabCreateError as e:
self._module.fail_json(msg="Failed to assign sshkey to user: %s" % to_native(e))
return True
return False
'''
@param group Group object
@param user_id Id of the user to find
'''
def findMember(self, group, user_id):
try:
member = group.members.get(user_id)
except gitlab.exceptions.GitlabGetError as e:
return None
return member
'''
@param group Group object
@param user_id Id of the user to check
'''
def memberExists(self, group, user_id):
member = self.findMember(group, user_id)
return member is not None
'''
@param group Group object
@param user_id Id of the user to check
@param access_level GitLab access_level to check
'''
def memberAsGoodAccessLevel(self, group, user_id, access_level):
member = self.findMember(group, user_id)
return member.access_level == access_level
'''
@param user User object
    @param group_identifier ID or full path of the group, including any parent path (<parent_path>/<group_path>)
@param access_level GitLab access_level to assign
'''
def assignUserToGroup(self, user, group_identifier, access_level):
group = findGroup(self._gitlab, group_identifier)
if self._module.check_mode:
return True
if group is None:
return False
if self.memberExists(group, self.getUserId(user)):
member = self.findMember(group, self.getUserId(user))
if not self.memberAsGoodAccessLevel(group, member.id, self.ACCESS_LEVEL[access_level]):
member.access_level = self.ACCESS_LEVEL[access_level]
member.save()
return True
else:
try:
group.members.create({
'user_id': self.getUserId(user),
'access_level': self.ACCESS_LEVEL[access_level]})
except gitlab.exceptions.GitlabCreateError as e:
self._module.fail_json(msg="Failed to assign user to group: %s" % to_native(e))
return True
return False
'''
@param user User object
@param arguments User attributes
'''
def updateUser(self, user, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(user, arg_key) != arguments[arg_key]:
setattr(user, arg_key, arguments[arg_key])
changed = True
return (changed, user)
'''
@param arguments User attributes
'''
def createUser(self, arguments):
if self._module.check_mode:
return True
try:
user = self._gitlab.users.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create user: %s " % to_native(e))
return user
'''
@param username Username of the user
'''
def findUser(self, username):
users = self._gitlab.users.list(search=username)
for user in users:
if (user.username == username):
return user
'''
@param username Username of the user
'''
def existsUser(self, username):
# When user exists, object will be stored in self.userObject.
user = self.findUser(username)
if user:
self.userObject = user
return True
return False
def deleteUser(self):
if self._module.check_mode:
return True
user = self.userObject
return user.delete()
def deprecation_warning(module):
deprecated_aliases = ['login_token']
    for alias in deprecated_aliases:
        if alias in module.params:
            module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
server_url=dict(type='str', removed_in_version="2.10"),
login_user=dict(type='str', no_log=True, removed_in_version="2.10"),
login_password=dict(type='str', no_log=True, removed_in_version="2.10"),
api_token=dict(type='str', no_log=True, aliases=["login_token"]),
name=dict(type='str', required=True),
state=dict(type='str', default="present", choices=["absent", "present"]),
username=dict(type='str', required=True),
password=dict(type='str', required=True, no_log=True),
email=dict(type='str', required=True),
sshkey_name=dict(type='str'),
sshkey_file=dict(type='str'),
group=dict(type='str'),
access_level=dict(type='str', default="guest", choices=["developer", "guest", "maintainer", "master", "owner", "reporter"]),
confirm=dict(type='bool', default=True),
isadmin=dict(type='bool', default=False),
external=dict(type='bool', default=False),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_url', 'server_url'],
['api_username', 'login_user'],
['api_password', 'login_password'],
['api_username', 'api_token'],
['api_password', 'api_token'],
['login_user', 'login_token'],
['login_password', 'login_token']
],
required_together=[
['api_username', 'api_password'],
['login_user', 'login_password'],
],
required_one_of=[
['api_username', 'api_token', 'login_user', 'login_token'],
['server_url', 'api_url']
],
supports_check_mode=True,
)
deprecation_warning(module)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
api_url = module.params['api_url']
validate_certs = module.params['validate_certs']
api_user = module.params['api_username']
api_password = module.params['api_password']
gitlab_url = server_url if api_url is None else api_url
gitlab_user = login_user if api_user is None else api_user
gitlab_password = login_password if api_password is None else api_password
gitlab_token = module.params['api_token']
user_name = module.params['name']
state = module.params['state']
user_username = module.params['username'].lower()
user_password = module.params['password']
user_email = module.params['email']
user_sshkey_name = module.params['sshkey_name']
user_sshkey_file = module.params['sshkey_file']
group_path = module.params['group']
access_level = module.params['access_level']
confirm = module.params['confirm']
user_isadmin = module.params['isadmin']
user_external = module.params['external']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e))
gitlab_user = GitLabUser(module, gitlab_instance)
user_exists = gitlab_user.existsUser(user_username)
if state == 'absent':
if user_exists:
gitlab_user.deleteUser()
module.exit_json(changed=True, msg="Successfully deleted user %s" % user_username)
else:
module.exit_json(changed=False, msg="User deleted or does not exists")
if state == 'present':
if gitlab_user.createOrUpdateUser(user_username, {
"name": user_name,
"password": user_password,
"email": user_email,
"sshkey_name": user_sshkey_name,
"sshkey_file": user_sshkey_file,
"group_path": group_path,
"access_level": access_level,
"confirm": confirm,
"isadmin": user_isadmin,
"external": user_external}):
module.exit_json(changed=True, msg="Successfully created or updated the user %s" % user_username, user=gitlab_user.userObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the user %s" % user_username, user=gitlab_user.userObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,895 |
gitlab_hook contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_hook contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_hook.py:302:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_hook.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61895
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:17Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_deploy_key.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2018, Marcus Watkins <[email protected]>
# Based on code:
# Copyright: (c) 2013, Phillip Gentry <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
module: gitlab_deploy_key
short_description: Manages GitLab project deploy keys.
description:
- Adds, updates and removes project deploy keys
version_added: "2.6"
author:
- Marcus Watkins (@marwatk)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
api_token:
description:
- GitLab token for logging in.
version_added: "2.8"
type: str
aliases:
- private_token
- access_token
project:
description:
      - ID or full path of the project in the form group/name.
required: true
type: str
title:
description:
- Deploy key's title
required: true
type: str
key:
description:
- Deploy key
required: true
type: str
can_push:
description:
- Whether this key can push to the project
type: bool
default: no
state:
description:
      - When C(present), the deploy key is added to the project if it doesn't exist.
      - When C(absent), it will be removed from the project if it exists.
required: true
default: present
type: str
choices: [ "present", "absent" ]
'''
EXAMPLES = '''
- name: "Adding a project deploy key"
gitlab_deploy_key:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
title: "Jenkins CI"
state: present
key: "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIEAiPWx6WM4lhHNedGfBpPJNPpZ7yKu+dnn1SJejgt4596k6YjzGGphH2TUxwKzxcKDKKezwkpfnxPkSMkuEspGRt/aZZ9w..."
- name: "Update the above deploy key to add push access"
gitlab_deploy_key:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
title: "Jenkins CI"
state: present
can_push: yes
- name: "Remove the previous deploy key from the project"
gitlab_deploy_key:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
state: absent
key: "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIEAiPWx6WM4lhHNedGfBpPJNPpZ7yKu+dnn1SJejgt4596k6YjzGGphH2TUxwKzxcKDKKezwkpfnxPkSMkuEspGRt/aZZ9w..."
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: key is already in use"
deploy_key:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findProject
class GitLabDeployKey(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.deployKeyObject = None
'''
@param project Project object
@param key_title Title of the key
@param key_key String of the key
    @param options Deploy key options (e.g. can_push)
'''
def createOrUpdateDeployKey(self, project, key_title, key_key, options):
changed = False
        # Because existsDeployKey() has already been called in main()
if self.deployKeyObject is None:
deployKey = self.createDeployKey(project, {
'title': key_title,
'key': key_key,
'can_push': options['can_push']})
changed = True
else:
changed, deployKey = self.updateDeployKey(self.deployKeyObject, {
'can_push': options['can_push']})
self.deployKeyObject = deployKey
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the deploy key %s" % key_title)
try:
deployKey.save()
except Exception as e:
self._module.fail_json(msg="Failed to update deploy key: %s " % e)
return True
else:
return False
'''
@param project Project Object
    @param arguments Attributes of the deployKey
'''
def createDeployKey(self, project, arguments):
if self._module.check_mode:
return True
try:
deployKey = project.keys.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create deploy key: %s " % to_native(e))
return deployKey
'''
@param deployKey Deploy Key Object
    @param arguments Attributes of the deployKey
'''
def updateDeployKey(self, deployKey, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(deployKey, arg_key) != arguments[arg_key]:
setattr(deployKey, arg_key, arguments[arg_key])
changed = True
return (changed, deployKey)
'''
@param project Project object
@param key_title Title of the key
'''
def findDeployKey(self, project, key_title):
deployKeys = project.keys.list()
for deployKey in deployKeys:
if (deployKey.title == key_title):
return deployKey
'''
@param project Project object
@param key_title Title of the key
'''
def existsDeployKey(self, project, key_title):
        # When the deploy key exists, the object is stored in self.deployKeyObject.
deployKey = self.findDeployKey(project, key_title)
if deployKey:
self.deployKeyObject = deployKey
return True
return False
def deleteDeployKey(self):
if self._module.check_mode:
return True
return self.deployKeyObject.delete()
def deprecation_warning(module):
deprecated_aliases = ['private_token', 'access_token']
    for alias in deprecated_aliases:
        if alias in module.params:
            module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
api_token=dict(type='str', no_log=True, aliases=["private_token", "access_token"]),
state=dict(type='str', default="present", choices=["absent", "present"]),
project=dict(type='str', required=True),
key=dict(type='str', required=True),
can_push=dict(type='bool', default=False),
title=dict(type='str', required=True)
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token']
],
required_together=[
['api_username', 'api_password']
],
required_one_of=[
['api_username', 'api_token']
],
supports_check_mode=True,
)
deprecation_warning(module)
gitlab_url = re.sub('/api.*', '', module.params['api_url'])
validate_certs = module.params['validate_certs']
gitlab_user = module.params['api_username']
gitlab_password = module.params['api_password']
gitlab_token = module.params['api_token']
state = module.params['state']
project_identifier = module.params['project']
key_title = module.params['title']
key_keyfile = module.params['key']
key_can_push = module.params['can_push']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e))
gitlab_deploy_key = GitLabDeployKey(module, gitlab_instance)
project = findProject(gitlab_instance, project_identifier)
if project is None:
module.fail_json(msg="Failed to create deploy key: project %s doesn't exists" % project_identifier)
deployKey_exists = gitlab_deploy_key.existsDeployKey(project, key_title)
if state == 'absent':
if deployKey_exists:
gitlab_deploy_key.deleteDeployKey()
module.exit_json(changed=True, msg="Successfully deleted deploy key %s" % key_title)
else:
module.exit_json(changed=False, msg="Deploy key deleted or does not exists")
if state == 'present':
if gitlab_deploy_key.createOrUpdateDeployKey(project, key_title, key_keyfile, {'can_push': key_can_push}):
module.exit_json(changed=True, msg="Successfully created or updated the deploy key %s" % key_title,
deploy_key=gitlab_deploy_key.deployKeyObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the deploy key %s" % key_title,
deploy_key=gitlab_deploy_key.deployKeyObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,895 |
gitlab_hook contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_hook contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_hook.py:302:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_hook.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61895
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:17Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_group.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2015, Werner Dijkerman ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_group
short_description: Creates/updates/deletes GitLab Groups
description:
- When the group does not exist in GitLab, it will be created.
  - When the group exists and state=absent, the group will be deleted.
version_added: "2.1"
author:
- Werner Dijkerman (@dj-wasabi)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
server_url:
description:
- The URL of the GitLab server, with protocol (i.e. http or https).
type: str
login_user:
description:
- GitLab user name.
type: str
login_password:
description:
- GitLab password for login_user
type: str
api_token:
description:
- GitLab token for logging in.
type: str
aliases:
- login_token
name:
description:
- Name of the group you want to create.
required: true
type: str
path:
description:
      - The path of the group you want to create; this will be server_url/group_path.
- If not supplied, the group_name will be used.
type: str
description:
description:
- A description for the group.
version_added: "2.7"
type: str
state:
description:
      - Create or delete the group.
- Possible values are present and absent.
default: present
type: str
choices: ["present", "absent"]
parent:
description:
      - Allows creating subgroups.
      - ID or full path of the parent group in the form group/name.
version_added: "2.8"
type: str
visibility:
description:
- Default visibility of the group
version_added: "2.8"
choices: ["private", "internal", "public"]
default: private
type: str
'''
EXAMPLES = '''
- name: "Delete GitLab Group"
gitlab_group:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
validate_certs: False
name: my_first_group
state: absent
- name: "Create GitLab Group"
gitlab_group:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: my_first_group
path: my_first_group
state: present
# The group will be created at https://gitlab.dj-wasabi.local/super_parent/parent/my_first_group
- name: "Create GitLab SubGroup"
gitlab_group:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: my_first_group
path: my_first_group
state: present
parent: "super_parent/parent"
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
group:
description: API object
returned: always
type: dict
'''
import os
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findGroup
class GitLabGroup(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.groupObject = None
'''
@param group Group object
'''
def getGroupId(self, group):
if group is not None:
return group.id
return None
'''
@param name Name of the group
@param parent Parent group full path
@param options Group options
'''
def createOrUpdateGroup(self, name, parent, options):
changed = False
        # Because existsGroup() has already been called in main()
if self.groupObject is None:
parent_id = self.getGroupId(parent)
group = self.createGroup({
'name': name,
'path': options['path'],
'parent_id': parent_id,
'visibility': options['visibility']})
changed = True
else:
changed, group = self.updateGroup(self.groupObject, {
'name': name,
'description': options['description'],
'visibility': options['visibility']})
self.groupObject = group
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the group %s" % name)
try:
group.save()
except Exception as e:
self._module.fail_json(msg="Failed to update group: %s " % e)
return True
else:
return False
'''
    @param arguments Attributes of the group
'''
def createGroup(self, arguments):
if self._module.check_mode:
return True
try:
group = self._gitlab.groups.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create group: %s " % to_native(e))
return group
'''
@param group Group Object
    @param arguments Attributes of the group
'''
def updateGroup(self, group, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(group, arg_key) != arguments[arg_key]:
setattr(group, arg_key, arguments[arg_key])
changed = True
return (changed, group)
def deleteGroup(self):
group = self.groupObject
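        # Safety guard: refuse to remove a group that still contains projects.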
if len(group.projects.list()) >= 1:
self._module.fail_json(
msg="There are still projects in this group. These needs to be moved or deleted before this group can be removed.")
else:
if self._module.check_mode:
return True
try:
group.delete()
except Exception as e:
self._module.fail_json(msg="Failed to delete group: %s " % to_native(e))
'''
    @param project_identifier ID or full path of the group, including any parent path (<parent_path>/<group_path>)
'''
def existsGroup(self, project_identifier):
        # When the group exists, the object is stored in self.groupObject.
group = findGroup(self._gitlab, project_identifier)
if group:
self.groupObject = group
return True
return False
def deprecation_warning(module):
deprecated_aliases = ['login_token']
    for alias in deprecated_aliases:
        if alias in module.params:
            module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
server_url=dict(type='str', removed_in_version="2.10"),
login_user=dict(type='str', no_log=True, removed_in_version="2.10"),
login_password=dict(type='str', no_log=True, removed_in_version="2.10"),
api_token=dict(type='str', no_log=True, aliases=["login_token"]),
name=dict(type='str', required=True),
path=dict(type='str'),
description=dict(type='str'),
state=dict(type='str', default="present", choices=["absent", "present"]),
parent=dict(type='str'),
visibility=dict(type='str', default="private", choices=["internal", "private", "public"]),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_url', 'server_url'],
['api_username', 'login_user'],
['api_password', 'login_password'],
['api_username', 'api_token'],
['api_password', 'api_token'],
['login_user', 'login_token'],
['login_password', 'login_token']
],
required_together=[
['api_username', 'api_password'],
['login_user', 'login_password'],
],
required_one_of=[
['api_username', 'api_token', 'login_user', 'login_token'],
['server_url', 'api_url']
],
supports_check_mode=True,
)
deprecation_warning(module)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
api_url = module.params['api_url']
validate_certs = module.params['validate_certs']
api_user = module.params['api_username']
api_password = module.params['api_password']
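    # The new api_* options take precedence; fall back to the deprecated server_url/login_* options when they are unset.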
gitlab_url = server_url if api_url is None else api_url
gitlab_user = login_user if api_user is None else api_user
gitlab_password = login_password if api_password is None else api_password
gitlab_token = module.params['api_token']
group_name = module.params['name']
group_path = module.params['path']
description = module.params['description']
state = module.params['state']
parent_identifier = module.params['parent']
group_visibility = module.params['visibility']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
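    # Build the client with whichever credentials were supplied (token or email/password) and validate them up front with auth().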
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2" % to_native(e))
# Define default group_path based on group_name
if group_path is None:
group_path = group_name.replace(" ", "_")
gitlab_group = GitLabGroup(module, gitlab_instance)
parent_group = None
if parent_identifier:
parent_group = findGroup(gitlab_instance, parent_identifier)
if not parent_group:
module.fail_json(msg="Failed create GitLab group: Parent group doesn't exists")
group_exists = gitlab_group.existsGroup(parent_group.full_path + '/' + group_path)
else:
group_exists = gitlab_group.existsGroup(group_path)
if state == 'absent':
if group_exists:
gitlab_group.deleteGroup()
module.exit_json(changed=True, msg="Successfully deleted group %s" % group_name)
else:
module.exit_json(changed=False, msg="Group deleted or does not exists")
if state == 'present':
if gitlab_group.createOrUpdateGroup(group_name, parent_group, {
"path": group_path,
"description": description,
"visibility": group_visibility}):
module.exit_json(changed=True, msg="Successfully created or updated the group %s" % group_name, group=gitlab_group.groupObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the group %s" % group_name, group=gitlab_group.groupObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,895 |
gitlab_hook contains deprecated call to be removed in 2.10
|
##### SUMMARY
gitlab_hook contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/modules/source_control/gitlab_hook.py:302:4: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
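For context, here is the flagged pattern and one hedged sketch of how it could be resolved (the actual change in the linked PR may differ):
```
# The call the sanity test flags in gitlab_hook.py is a deprecation whose
# removal deadline ("2.10") has now been reached. One plausible resolution
# (a sketch, not necessarily the actual fix): drop the deprecated aliases
# and the deprecation_warning() helper, leaving a plain argument spec.
argument_spec = dict(
    api_token=dict(type='str', no_log=True),               # "private_token"/"access_token" aliases removed
    hook_validate_certs=dict(type='bool', default=False),  # "enable_ssl_verification" alias removed
)
```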
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/source_control/gitlab_hook.py
```
##### ANSIBLE VERSION
```
2.10
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
N/A
##### EXPECTED RESULTS
N/A
##### ACTUAL RESULTS
N/A
|
https://github.com/ansible/ansible/issues/61895
|
https://github.com/ansible/ansible/pull/61912
|
331d51fb16b05f09d3d24d6d6f8d705fa0db4ba7
|
f92c99b4135327629cc2c92995f00102fcf2f681
| 2019-09-05T20:41:17Z |
python
| 2019-10-20T11:38:08Z |
lib/ansible/modules/source_control/gitlab_hook.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2018, Marcus Watkins <[email protected]>
# Based on code:
# Copyright: (c) 2013, Phillip Gentry <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_hook
short_description: Manages GitLab project hooks
description:
  - Adds, updates and removes project hooks.
version_added: "2.6"
author:
- Marcus Watkins (@marwatk)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
api_token:
description:
- GitLab token for logging in.
version_added: "2.8"
type: str
aliases:
- private_token
- access_token
project:
description:
      - Id or full path of the project in the form of group/name.
required: true
type: str
hook_url:
description:
      - The URL that you want GitLab to post to; this is used as the primary key for updates and deletion.
required: true
type: str
state:
description:
      - When C(present) the hook will be updated to match the input or created if it doesn't exist.
      - When C(absent) the hook will be deleted if it exists.
required: true
default: present
type: str
choices: [ "present", "absent" ]
push_events:
description:
- Trigger hook on push events.
type: bool
default: yes
issues_events:
description:
- Trigger hook on issues events.
type: bool
default: no
merge_requests_events:
description:
- Trigger hook on merge requests events.
type: bool
default: no
tag_push_events:
description:
- Trigger hook on tag push events.
type: bool
default: no
note_events:
description:
- Trigger hook on note events or when someone adds a comment.
type: bool
default: no
job_events:
description:
- Trigger hook on job events.
type: bool
default: no
pipeline_events:
description:
- Trigger hook on pipeline events.
type: bool
default: no
wiki_page_events:
description:
- Trigger hook on wiki events.
type: bool
default: no
hook_validate_certs:
description:
- Whether GitLab will do SSL verification when triggering the hook.
type: bool
default: no
aliases: [ enable_ssl_verification ]
token:
description:
- Secret token to validate hook messages at the receiver.
- If this is present it will always result in a change as it cannot be retrieved from GitLab.
- Will show up in the X-GitLab-Token HTTP request header.
required: false
type: str
'''
EXAMPLES = '''
- name: "Adding a project hook"
gitlab_hook:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
hook_url: "https://my-ci-server.example.com/gitlab-hook"
state: present
push_events: yes
tag_push_events: yes
hook_validate_certs: no
token: "my-super-secret-token-that-my-ci-server-will-check"
- name: "Delete the previous hook"
gitlab_hook:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: "my_group/my_project"
hook_url: "https://my-ci-server.example.com/gitlab-hook"
state: absent
- name: "Delete a hook by numeric project id"
gitlab_hook:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
project: 10
hook_url: "https://my-ci-server.example.com/gitlab-hook"
state: absent
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
hook:
description: API object
returned: always
type: dict
'''
import os
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findProject
class GitLabHook(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.hookObject = None
    '''
    @param project Project Object
    @param hook_url Url to call on event
    @param options Attributes of the hook
    '''
def createOrUpdateHook(self, project, hook_url, options):
changed = False
        # existsHook() has already been called in main(), so self.hookObject is set when the hook exists
if self.hookObject is None:
hook = self.createHook(project, {
'url': hook_url,
'push_events': options['push_events'],
'issues_events': options['issues_events'],
'merge_requests_events': options['merge_requests_events'],
'tag_push_events': options['tag_push_events'],
'note_events': options['note_events'],
'job_events': options['job_events'],
'pipeline_events': options['pipeline_events'],
'wiki_page_events': options['wiki_page_events'],
'enable_ssl_verification': options['enable_ssl_verification'],
'token': options['token']})
changed = True
else:
changed, hook = self.updateHook(self.hookObject, {
'push_events': options['push_events'],
'issues_events': options['issues_events'],
'merge_requests_events': options['merge_requests_events'],
'tag_push_events': options['tag_push_events'],
'note_events': options['note_events'],
'job_events': options['job_events'],
'pipeline_events': options['pipeline_events'],
'wiki_page_events': options['wiki_page_events'],
'enable_ssl_verification': options['enable_ssl_verification'],
'token': options['token']})
self.hookObject = hook
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the hook %s" % hook_url)
try:
hook.save()
except Exception as e:
self._module.fail_json(msg="Failed to update hook: %s " % e)
return True
else:
return False
'''
@param project Project Object
    @param arguments Attributes of the hook
'''
def createHook(self, project, arguments):
if self._module.check_mode:
return True
hook = project.hooks.create(arguments)
return hook
'''
@param hook Hook Object
    @param arguments Attributes of the hook
'''
def updateHook(self, hook, arguments):
changed = False
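        # Per the option docs, the hook token can never be read back from GitLab, so supplying "token" always results in a reported change.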
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(hook, arg_key) != arguments[arg_key]:
setattr(hook, arg_key, arguments[arg_key])
changed = True
return (changed, hook)
'''
@param project Project object
@param hook_url Url to call on event
'''
def findHook(self, project, hook_url):
hooks = project.hooks.list()
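        # The hook URL is treated as the unique key within a project.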
for hook in hooks:
            if hook.url == hook_url:
                return hook
        return None
'''
@param project Project object
@param hook_url Url to call on event
'''
def existsHook(self, project, hook_url):
        # When the hook exists, the object is stored in self.hookObject.
hook = self.findHook(project, hook_url)
if hook:
self.hookObject = hook
return True
return False
def deleteHook(self):
if self._module.check_mode:
return True
return self.hookObject.delete()
def deprecation_warning(module):
    deprecated_aliases = ['private_token', 'access_token', 'enable_ssl_verification']
    for alias in deprecated_aliases:
        if alias in module.params:
            module.deprecate("Alias '{alias}' is deprecated".format(alias=alias), "2.10")
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
api_token=dict(type='str', no_log=True, aliases=["private_token", "access_token"]),
state=dict(type='str', default="present", choices=["absent", "present"]),
project=dict(type='str', required=True),
hook_url=dict(type='str', required=True),
push_events=dict(type='bool', default=True),
issues_events=dict(type='bool', default=False),
merge_requests_events=dict(type='bool', default=False),
tag_push_events=dict(type='bool', default=False),
note_events=dict(type='bool', default=False),
job_events=dict(type='bool', default=False),
pipeline_events=dict(type='bool', default=False),
wiki_page_events=dict(type='bool', default=False),
hook_validate_certs=dict(type='bool', default=False, aliases=['enable_ssl_verification']),
token=dict(type='str', no_log=True),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token']
],
required_together=[
['api_username', 'api_password']
],
required_one_of=[
['api_username', 'api_token']
],
supports_check_mode=True,
)
deprecation_warning(module)
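    # Derive the bare GitLab root URL by stripping any trailing "/api/..." suffix from api_url.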
gitlab_url = re.sub('/api.*', '', module.params['api_url'])
validate_certs = module.params['validate_certs']
gitlab_user = module.params['api_username']
gitlab_password = module.params['api_password']
gitlab_token = module.params['api_token']
state = module.params['state']
project_identifier = module.params['project']
hook_url = module.params['hook_url']
push_events = module.params['push_events']
issues_events = module.params['issues_events']
merge_requests_events = module.params['merge_requests_events']
tag_push_events = module.params['tag_push_events']
note_events = module.params['note_events']
job_events = module.params['job_events']
pipeline_events = module.params['pipeline_events']
wiki_page_events = module.params['wiki_page_events']
enable_ssl_verification = module.params['hook_validate_certs']
hook_token = module.params['token']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e))
gitlab_hook = GitLabHook(module, gitlab_instance)
project = findProject(gitlab_instance, project_identifier)
if project is None:
module.fail_json(msg="Failed to create hook: project %s doesn't exists" % project_identifier)
hook_exists = gitlab_hook.existsHook(project, hook_url)
if state == 'absent':
if hook_exists:
gitlab_hook.deleteHook()
module.exit_json(changed=True, msg="Successfully deleted hook %s" % hook_url)
else:
module.exit_json(changed=False, msg="Hook deleted or does not exists")
if state == 'present':
if gitlab_hook.createOrUpdateHook(project, hook_url, {
"push_events": push_events,
"issues_events": issues_events,
"merge_requests_events": merge_requests_events,
"tag_push_events": tag_push_events,
"note_events": note_events,
"job_events": job_events,
"pipeline_events": pipeline_events,
"wiki_page_events": wiki_page_events,
"enable_ssl_verification": enable_ssl_verification,
"token": hook_token}):
module.exit_json(changed=True, msg="Successfully created or updated the hook %s" % hook_url, hook=gitlab_hook.hookObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the hook %s" % hook_url, hook=gitlab_hook.hookObject._attrs)
if __name__ == '__main__':
main()
|