The dataset has the following columns:

| column | dtype | notes |
|---|---|---|
| status | string | 1 class |
| repo_name | string | 31 classes |
| repo_url | string | 31 classes |
| issue_id | int64 | 1 to 104k |
| title | string | lengths 4 to 369 |
| body | string | lengths 0 to 254k, nullable |
| issue_url | string | lengths 37 to 56 |
| pull_url | string | lengths 37 to 54 |
| before_fix_sha | string | length 40 |
| after_fix_sha | string | length 40 |
| report_datetime | timestamp[us, tz=UTC] | |
| language | string | 5 classes |
| commit_datetime | timestamp[us, tz=UTC] | |
| updated_file | string | lengths 4 to 188 |
| file_content | string | lengths 0 to 5.12M |
---

status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 64050
title: options documentation doesn't include documentation of 'elements' key
body:
##### SUMMARY
https://docs.ansible.com/ansible/devel/dev_guide/developing_modules_documenting.html#documentation-block
There's currently no documentation of the new 'elements' suboption for options.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs/docsite/rst/dev_guide/developing_modules_documenting.rst
##### ANSIBLE VERSION
devel/2.9
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### ADDITIONAL INFORMATION
CCing @felixfontein as requested.
issue_url: https://github.com/ansible/ansible/issues/64050
pull_url: https://github.com/ansible/ansible/pull/64075
before_fix_sha: 0515633189f1ca1fdf49f7d518b95c39d0045aa0
after_fix_sha: 0bf9146b29de6f49a47b35dd2d6273f480fcbbf1
report_datetime: 2019-10-29T08:13:13Z
language: python
commit_datetime: 2019-11-04T20:21:07Z
updated_file: docs/docsite/rst/dev_guide/developing_modules_documenting.rst
file_content:
.. _developing_modules_documenting:
.. _module_documenting:
*******************************
Module format and documentation
*******************************
If you want to contribute your module to Ansible, you must write your module in Python and follow the standard format described below. (Unless you're writing a Windows module, in which case the :ref:`Windows guidelines <developing_modules_general_windows>` apply.) In addition to following this format, you should review our :ref:`submission checklist <developing_modules_checklist>`, :ref:`programming tips <developing_modules_best_practices>`, and :ref:`strategy for maintaining Python 2 and Python 3 compatibility <developing_python_3>`, as well as information about :ref:`testing <developing_testing>` before you open a pull request.
Every Ansible module written in Python must begin with seven standard sections in a particular order, followed by the code. The sections in order are:
.. contents::
:depth: 1
:local:
.. note:: Why don't the imports go first?
Keen Python programmers may notice that, contrary to PEP 8's advice, we don't put ``imports`` at the top of the file. This is because the ``ANSIBLE_METADATA`` through ``RETURN`` sections are not used by the module code itself; they are essentially extra docstrings for the file. The imports are placed after these special variables for the same reason as PEP 8 puts the imports after the introductory comments and docstrings. This keeps the active parts of the code together and the pieces which are purely informational apart. The decision to exclude E402 is based on readability (which is what PEP 8 is about). Documentation strings in a module are much more similar to module-level docstrings than to code, and are never utilized by the module itself. Placing the imports below this documentation and closer to the code consolidates and groups all related code in a congruent manner to improve readability, debugging and understanding.
.. warning:: **Copy old modules with care!**
Some older modules in Ansible Core have ``imports`` at the bottom of the file, ``Copyright`` notices with the full GPL prefix, and/or ``ANSIBLE_METADATA`` fields in the wrong order. These are legacy files that need updating - do not copy them into new modules. Over time we're updating and correcting older modules. Please follow the guidelines on this page!
.. _shebang:
Python shebang & UTF-8 coding
===============================
Every Ansible module must begin with ``#!/usr/bin/python`` - this "shebang" allows ``ansible_python_interpreter`` to work.
This is immediately followed by ``# -*- coding: utf-8 -*-`` to clarify that the file is UTF-8 encoded.
.. _copyright:
Copyright and license
=====================
After the shebang and UTF-8 coding, there should be a `copyright line <https://www.gnu.org/licenses/gpl-howto.en.html>`_ with the original copyright holder and a license declaration. The license declaration should be ONLY one line, not the full GPL prefix:
.. code-block:: python

    #!/usr/bin/python
    # -*- coding: utf-8 -*-

    # Copyright: (c) 2018, Terry Jones <[email protected]>
    # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
Major additions to the module (for instance, rewrites) may add additional copyright lines. Any legal review will include the source control history, so an exhaustive copyright header is not necessary. When adding a second copyright line for a significant feature or rewrite, add the newer line above the older one:
.. code-block:: python

    #!/usr/bin/python
    # -*- coding: utf-8 -*-

    # Copyright: (c) 2017, [New Contributor(s)]
    # Copyright: (c) 2015, [Original Contributor(s)]
    # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
.. _ansible_metadata_block:
ANSIBLE_METADATA block
======================
After the shebang, the UTF-8 coding, the copyright, and the license, your module file should contain an ``ANSIBLE_METADATA`` section. This section provides information about the module for use by other tools. For new modules, the following block can be simply added into your module:
.. code-block:: python

    ANSIBLE_METADATA = {'metadata_version': '1.1',
                        'status': ['preview'],
                        'supported_by': 'community'}
.. warning::
* ``metadata_version`` is the version of the ``ANSIBLE_METADATA`` schema, *not* the version of the module.
* Promoting a module's ``status`` or ``supported_by`` status should only be done by members of the Ansible Core Team.
Ansible metadata fields
-----------------------
:metadata_version: An "X.Y" formatted string. X and Y are integers which
define the metadata format version. Modules shipped with Ansible are
tied to an Ansible release, so we will only ship with a single version
of the metadata. We'll increment Y if we add fields or legal values
to an existing field. We'll increment X if we remove fields or values
or change the type or meaning of a field.
Current metadata_version is "1.1"
:supported_by: Who supports the module.
Default value is ``community``. For information on what the support level values entail, please see
:ref:`Modules Support <modules_support>`. Values are:
* core
* network
* certified
* community
* curated (*deprecated value - modules in this category should be core or
certified instead*)
:status: List of strings describing how stable the module is likely to be. See also :ref:`module_lifecycle`.
The default value is the single-element list ``["preview"]``. The following strings are valid statuses and have the following meanings:
:stableinterface: The module's options (the parameters or arguments it accepts) are stable. Every effort will be made not to remove options or change
their meaning. **Not** a rating of the module's code quality.
:preview: The module is in tech preview. It may be
unstable, the options may change, or it may require libraries or
web services that are themselves subject to incompatible changes.
:deprecated: The module is deprecated and will be removed in a future release.
:removed: The module is not present in the release. A stub is
kept so that documentation can be built. The documentation helps
users port from the removed module to new modules.
.. _documentation_block:
DOCUMENTATION block
===================
After the shebang, the UTF-8 coding, the copyright line, the license, and the ``ANSIBLE_METADATA`` section comes the ``DOCUMENTATION`` block. Ansible's online module documentation is generated from the ``DOCUMENTATION`` blocks in each module's source code. The ``DOCUMENTATION`` block must be valid YAML. You may find it easier to start writing your ``DOCUMENTATION`` string in an :ref:`editor with YAML syntax highlighting <other_tools_and_programs>` before you include it in your Python file. You can start by copying our `example documentation string <https://github.com/ansible/ansible/blob/devel/examples/DOCUMENTATION.yml>`_ into your module file and modifying it. If you run into syntax issues in your YAML, you can validate it on the `YAML Lint <http://www.yamllint.com/>`_ website.
Module documentation should briefly and accurately define what each module and option does, and how it works with others in the underlying system. Documentation should be written for a broad audience, readable by both experts and non-experts.
* Descriptions should always start with a capital letter and end with a full stop. Consistency always helps.
* Verify that the arguments in the documentation and in the module's argument spec dict are identical.
* For password / secret arguments, ``no_log=True`` should be set.
* If an option is only sometimes required, describe the conditions. For example, "Required when I(state=present)."
* If your module allows ``check_mode``, reflect this fact in the documentation.
Each documentation field is described below. Before committing your module documentation, please test it at the command line and as HTML:
* As long as your module file is :ref:`available locally <local_modules>`, you can use ``ansible-doc -t module my_module_name`` to view your module documentation at the command line. Any parsing errors will be obvious - you can view details by adding ``-vvv`` to the command.
* You should also :ref:`test the HTML output <testing_module_documentation>` of your module documentation.
Documentation fields
--------------------
All fields in the ``DOCUMENTATION`` block are lower-case. All fields are required unless specified otherwise:
:module:
* The name of the module.
* Must be the same as the filename, without the ``.py`` extension.
:short_description:
* A short description which is displayed on the :ref:`all_modules` page and ``ansible-doc -l``.
* The ``short_description`` is displayed by ``ansible-doc -l`` without any category grouping,
so it needs enough detail to explain the module's purpose without the context of the directory structure in which it lives.
* Unlike ``description:``, ``short_description`` should not have a trailing period/full stop.
:description:
* A detailed description (generally two or more sentences).
* Must be written in full sentences, i.e. with capital letters and periods/full stops.
* Shouldn't mention the module name.
* Make use of multiple entries rather than using one long paragraph.
* Don't quote complete values unless it is required by YAML.
:version_added:
* The version of Ansible when the module was added.
* This is a string, and not a float, i.e. ``version_added: '2.1'``
:author:
* Name of the module author in the form ``First Last (@GitHubID)``.
* Use a multi-line list if there is more than one author.
* Don't use quotes as it should not be required by YAML.
:deprecated:
* Marks modules that will be removed in future releases. See also :ref:`module_lifecycle`.
:options:
* Options are often called `parameters` or `arguments`. Because the documentation field is called `options`, we will use that term.
* If the module has no options (for example, it's a ``_facts`` module), all you need is one line: ``options: {}``.
* If your module has options (in other words, accepts arguments), each option should be documented thoroughly. For each module option, include:
:option-name:
* Use a declarative name (not CRUD) that focuses on the final state, for example `online:` rather than `is_online:`.
* The name of the option should be consistent with the rest of the module, as well as other modules in the same category.
* When in doubt, look for other modules to find option names that are used for the same purpose; we like to offer consistency to our users.
:description:
* Detailed explanation of what this option does. It should be written in full sentences.
* The first entry is a description of the option itself; subsequent entries detail its use, dependencies, or format of possible values.
* Should not list the possible values (that's what ``choices:`` is for, though it should explain what the values do if they aren't obvious).
* If an option is only sometimes required, describe the conditions. For example, "Required when I(state=present)."
* Mutually exclusive options must be documented as the final sentence on each of the options.
:required:
* Only needed if ``true``.
* If missing, we assume the option is not required.
:default:
* If ``required`` is false/missing, ``default`` may be specified (assumed 'null' if missing).
* Ensure that the default value in the docs matches the default value in the code.
* The default field must not be listed as part of the description, unless it requires additional information or conditions.
* If the option is a boolean value, you can use any of the boolean values recognized by Ansible (such as true/false or yes/no). Choose the one that reads better in the context of the option.
:choices:
* List of option values.
* Should be absent if empty.
:type:
* Specifies the data type the option accepts; it must match the ``argspec``.
* If an argument is ``type='bool'``, this field should be set to ``type: bool`` and no ``choices`` should be specified.
:aliases:
* List of optional name aliases.
* Generally not needed.
:version_added:
* Only needed if this option was extended after initial Ansible release, i.e. this is greater than the top level `version_added` field.
* This is a string, and not a float, i.e. ``version_added: '2.3'``.
:suboptions:
* If this option takes a dict or list of dicts, you can define the structure here.
* See :ref:`azure_rm_securitygroup_module`, :ref:`azure_rm_azurefirewall_module` and :ref:`os_ironic_node_module` for examples.
:requirements:
* List of requirements (if applicable).
* Include minimum versions.
:seealso:
* A list of references to other modules, documentation, or Internet resources.
* A reference can be one of the following formats:
.. code-block:: yaml+jinja

    seealso:
    # Reference by module name
    - module: aci_tenant

    # Reference by module name, including description
    - module: aci_tenant
      description: ACI module to create tenants on a Cisco ACI fabric.

    # Reference by rST documentation anchor
    - ref: aci_guide
      description: Detailed information on how to manage your ACI infrastructure using Ansible.

    # Reference by Internet resource
    - name: APIC Management Information Model reference
      description: Complete reference of the APIC object model.
      link: https://developer.cisco.com/docs/apic-mim-ref/
:notes:
* Details of any important information that doesn't fit in one of the above sections.
* For example, whether ``check_mode`` is or is not supported.
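Putting these fields together, here is a minimal, hedged sketch of a ``DOCUMENTATION`` block (the module and option names are hypothetical; for ``type: list`` options, the ``elements`` key, which this issue reports as undocumented, describes the type of each list item):

.. code-block:: python

    DOCUMENTATION = r'''
    ---
    module: example_widget
    short_description: Manage widgets on a host
    description:
      - Creates, updates, or removes widgets on the target host.
    version_added: '2.9'
    author:
      - First Last (@githubid)
    options:
      name:
        description:
          - Name of the widget to manage.
        type: str
        required: true
      state:
        description:
          - Desired state of the widget.
        type: str
        choices: ['present', 'absent']
        default: present
      tags:
        description:
          - List of tags to assign to the widget.
        type: list
        elements: str
    '''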
Linking within module documentation
-----------------------------------
You can link from your module documentation to other module docs, other resources on docs.ansible.com, and resources elsewhere on the internet. The correct formats for these links are:
* ``L()`` for Links with a heading. For example: ``See L(IOS Platform Options guide,../network/user_guide/platform_ios.html).``
* ``U()`` for URLs. For example: ``See U(https://www.ansible.com/products/tower) for an overview.``
* ``I()`` for option names. For example: ``Required if I(state=present).``
* ``C()`` for files and option values. For example: ``If not set the environment variable C(ACME_PASSWORD) will be used.``
* ``M()`` for module names. For example: ``See also M(win_copy) or M(win_template).``
.. note::
For modules in a collection, you can only use ``L()`` and ``M()`` for content within that collection. Use ``U()`` to refer to content in a different collection.
.. note::
- To refer to a group of modules, use ``C(..)``, e.g. ``Refer to the C(win_*) modules.``
- Because it stands out better, using ``seealso`` is preferred for general references over the use of notes or adding links to the description.
.. _module_docs_fragments:
Documentation fragments
-----------------------
If you're writing multiple related modules, they may share common documentation, such as authentication details, file mode settings, ``notes:`` or ``seealso:`` entries. Rather than duplicate that information in each module's ``DOCUMENTATION`` block, you can save it once as a doc_fragment plugin and use it in each module's documentation. In Ansible, shared documentation fragments are contained in a ``ModuleDocFragment`` class in `lib/ansible/plugins/doc_fragments/ <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/doc_fragments>`_. To include a documentation fragment, add ``extends_documentation_fragment: FRAGMENT_NAME`` in your module's documentation.
Modules should only use items from a doc fragment if the module will implement all of the interface documented there in a manner that behaves the same as the existing modules which import that fragment. The goal is that items imported from the doc fragment will behave identically when used in another module that imports the doc fragment.
By default, only the ``DOCUMENTATION`` property from a doc fragment is inserted into the module documentation. It is possible to define additional properties in the doc fragment in order to import only certain parts of a doc fragment or mix and match as appropriate. If a property is defined in both the doc fragment and the module, the module value overrides the doc fragment.
Here is an example doc fragment named ``example_fragment.py``:
.. code-block:: python

    class ModuleDocFragment(object):
        # Standard documentation
        DOCUMENTATION = r'''
        options:
          # options here
        '''

        # Additional section
        OTHER = r'''
        options:
          # other options here
        '''
To insert the contents of ``OTHER`` in a module:
.. code-block:: yaml+jinja

    extends_documentation_fragment: example_fragment.other

Or use both:

.. code-block:: yaml+jinja

    extends_documentation_fragment:
      - example_fragment
      - example_fragment.other
.. note:: Prior to Ansible 2.8, documentation fragments were kept in ``lib/ansible/utils/module_docs_fragments``.

.. versionadded:: 2.8

Since Ansible 2.8, you can have user-supplied doc_fragments by using a ``doc_fragments`` directory adjacent to play or role, just like any other plugin.
For example, all AWS modules should include:
.. code-block:: yaml+jinja

    extends_documentation_fragment:
      - aws
      - ec2
.. _examples_block:
EXAMPLES block
==============
After the shebang, the UTF-8 coding, the copyright line, the license, the ``ANSIBLE_METADATA`` section, and the ``DOCUMENTATION`` block comes the ``EXAMPLES`` block. Here you show users how your module works with real-world examples in multi-line plain-text YAML format. The best examples are ready for the user to copy and paste into a playbook. Review and update your examples with every change to your module.
Per playbook best practices, each example should include a ``name:`` line::
    EXAMPLES = r'''
    - name: Ensure foo is installed
      modulename:
        name: foo
        state: present
    '''
The ``name:`` line should be capitalized and not include a trailing dot.
If your examples use boolean options, use yes/no values. Since the documentation generates boolean values as yes/no, having the examples use these values as well makes the module documentation more consistent.
If your module returns facts that are often needed, an example of how to use them can be helpful.
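For instance, a hedged sketch (the module and fact names here are hypothetical) of an example that registers the module's result and then uses a returned fact:

.. code-block:: python

    EXAMPLES = r'''
    - name: Gather widget facts
      example_widget_facts:
      register: result

    - name: Use a returned fact
      debug:
        msg: "Found {{ result.ansible_facts.widget_count }} widgets"
    '''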
.. _return_block:
RETURN block
============
After the shebang, the UTF-8 coding, the copyright line, the license, the ``ANSIBLE_METADATA`` section, ``DOCUMENTATION`` and ``EXAMPLES`` blocks comes the ``RETURN`` block. This section documents the information the module returns for use by other modules.
If your module doesn't return anything (apart from the standard returns), this section of your module should read: ``RETURN = r''' # '''``
Otherwise, for each value returned, provide the following fields. All fields are required unless specified otherwise.
:return name:
Name of the returned field.
:description:
Detailed description of what this value represents. Capitalized and with trailing dot.
:returned:
When this value is returned, such as ``always``, or ``on success``.
:type:
Data type.
:sample:
One or more examples.
:version_added:
Only needed if this return was extended after initial Ansible release, i.e. this is greater than the top level `version_added` field.
This is a string, and not a float, i.e. ``version_added: '2.3'``.
:contains:
Optional. To describe nested return values, set ``type: complex`` and repeat the elements above for each sub-field.
Here are two example ``RETURN`` sections, one with three simple fields and one with a complex nested field::
    RETURN = r'''
    dest:
        description: Destination file/path.
        returned: success
        type: str
        sample: /path/to/file.txt
    src:
        description: Source file used for the copy on the target machine.
        returned: changed
        type: str
        sample: /home/httpd/.ansible/tmp/ansible-tmp-1423796390.97-147729857856000/source
    md5sum:
        description: MD5 checksum of the file after running copy.
        returned: when supported
        type: str
        sample: 2a5aeecc61dc98c4d780b14b330e3282
    '''

    RETURN = r'''
    packages:
        description: Information about package requirements
        returned: On success
        type: complex
        contains:
            missing:
                description: Packages that are missing from the system
                returned: success
                type: list
                sample:
                    - libmysqlclient-dev
                    - libxml2-dev
            badversion:
                description: Packages that are installed but at bad versions.
                returned: success
                type: list
                sample:
                    - package: libxml2-dev
                      version: 2.9.4+dfsg1-2
                      constraint: ">= 3.0"
    '''
.. _python_imports:
Python imports
==============
After the shebang, the UTF-8 coding, the copyright line, the license, and the sections for ``ANSIBLE_METADATA``, ``DOCUMENTATION``, ``EXAMPLES``, and ``RETURN``, you can finally add the Python imports. All modules must use Python imports in the form:

.. code-block:: python

    from ansible.module_utils.basic import AnsibleModule

The use of "wildcard" imports such as ``from ansible.module_utils.basic import *`` is no longer allowed.
---

status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 62909
title: ovirt_host_network throws TypeError "'<' not supported between instances of 'dict' and 'dict'"
body:
##### SUMMARY
Calling the ovirt_host_network module against an existing bond interface throws a TypeError exception, regardless of whether the module arguments have changed or not.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ovirt_host_network, in the `__compare_options` method of the `HostNetworksModule` class
##### ANSIBLE VERSION
```
ansible 2.8.4
config file = None
configured module search path = ['/home/deploy/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/deploy/ansible_285_venv/lib/python3.6/site-packages/ansible
executable location = /home/deploy/ansible_285_venv/bin/ansible
```
##### CONFIGURATION
```
N/A (no changes)
```
##### OS / ENVIRONMENT
CentOS7.7 19.08
##### STEPS TO REPRODUCE
Configure a bond interface. The initial run completes and configures the bond as expected; a second run of the same code fails with an exception.
```
- name: "Configure bond interface"
delegate_to: localhost
ovirt_host_network:
auth: "{{ ovirt_auth }}"
name: "{{ ansible_fqdn }}"
networks: "{{ networks_on_bond }}"
bond:
name: "{{ bond_name }}"
mode: 4
interfaces: "{{ bond_interfaces }}"
options:
lacp_rate: "fast"
xmit_hash_policy: "layer3+4"
save: yes
check: yes
timeout: 180
wait: yes
```
##### EXPECTED RESULTS
Ansible should report OK or CHANGED depending on whether the module arguments have changed the configuration or not.
##### ACTUAL RESULTS
```
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: deploy
<localhost> EXEC /bin/sh -c 'echo ~deploy && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/deploy/.ansible/tmp/ansible-tmp-1569588017.0785801-154242329421198 `" && echo ansible-tmp-1569588017.0785801-154242329421198="` echo /home/deploy/.ansible/tmp/ansible-tmp-1569588017.0785801-154242329421198 `" ) && sleep 0'
Using module file /home/deploy/ansible_285_venv/lib/python3.6/site-packages/ansible/modules/cloud/ovirt/ovirt_host_network.py
<localhost> PUT /home/deploy/.ansible/tmp/ansible-local-24097xoko1are/tmpe2zvp1l4 TO /home/deploy/.ansible/tmp/ansible-tmp-1569588017.0785801-154242329421198/AnsiballZ_ovirt_host_network.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/deploy/.ansible/tmp/ansible-tmp-1569588017.0785801-154242329421198/ /home/deploy/.ansible/tmp/ansible-tmp-1569588017.0785801-154242329421198/AnsiballZ_ovirt_host_network.py && sleep 0'
<localhost> EXEC /bin/sh -c '/home/deploy/ansible_285_venv/bin/python3 /home/deploy/.ansible/tmp/ansible-tmp-1569588017.0785801-154242329421198/AnsiballZ_ovirt_host_network.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/deploy/.ansible/tmp/ansible-tmp-1569588017.0785801-154242329421198/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/tmp/ansible_ovirt_host_network_payload_s4fkqwyn/__main__.py", line 395, in main
    (nic is None or host_networks_module.has_update(nics_service.service(nic.id)))
  File "/tmp/ansible_ovirt_host_network_payload_s4fkqwyn/__main__.py", line 288, in has_update
    update = self.__compare_options(get_bond_options(bond.get('mode'), bond.get('options')), getattr(nic.bonding, 'options', []))
  File "/tmp/ansible_ovirt_host_network_payload_s4fkqwyn/__main__.py", line 247, in __compare_options
    return sorted(get_dict_of_struct(opt) for opt in new_options) != sorted(get_dict_of_struct(opt) for opt in old_options)
TypeError: '<' not supported between instances of 'dict' and 'dict'
fatal: [compute01.ovirt.nullpacket.io -> localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "bond": {
                "interfaces": [
                    "enp61s0f0",
                    "enp61s0f1"
                ],
                "mode": 4,
                "name": "bond0",
                "options": {
                    "lacp_rate": "fast",
                    "xmit_hash_policy": "layer3+4"
                }
            },
            "check": true,
            "fetch_nested": false,
            "interface": null,
            "labels": null,
            "name": "compute01.ovirt.nullpacket.io",
            "nested_attributes": [],
            "networks": [
                {
                    "address": "10.10.83.2",
                    "boot_protocol": "static",
                    "gateway": "10.10.83.1",
                    "name": "ovirtmgmt",
                    "netmask": "255.255.255.0"
                },
                {
                    "boot_protocol": "none",
                    "name": "vm_provider"
                },
                {
                    "boot_protocol": "none",
                    "name": "vm_okd"
                }
            ],
            "poll_interval": 3,
            "save": true,
            "state": "present",
            "sync_networks": false,
            "timeout": 180,
            "wait": true
        }
    },
    "msg": "'<' not supported between instances of 'dict' and 'dict'"
}
```
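Under Python 3, dicts no longer support ordering comparisons, so calling `sorted()` over dicts raises exactly this TypeError. A minimal reproduction with one possible sort-key workaround (a sketch only; the actual fix in the linked PR may differ):

```
# Python 3 cannot order dicts, so sorted() over dicts raises TypeError.
old_options = [{'name': 'mode', 'value': '4'}]
new_options = [{'name': 'miimon', 'value': '100'}, {'name': 'mode', 'value': '4'}]

try:
    sorted(opt for opt in new_options) != sorted(opt for opt in old_options)
except TypeError as exc:
    print(exc)  # '<' not supported between instances of 'dict' and 'dict'

# One workaround: sort by an explicit, orderable key instead.
def opt_key(opt):
    return sorted(opt.items())

changed = sorted(new_options, key=opt_key) != sorted(old_options, key=opt_key)
print(changed)  # True -> the bond options differ
```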
issue_url: https://github.com/ansible/ansible/issues/62909
pull_url: https://github.com/ansible/ansible/pull/64437
before_fix_sha: 484943cbd1729fb8e82b9d98619a4246f8811178
after_fix_sha: d9f5be8d0d8a036aaa89056536f58b4a2e3c86b5
report_datetime: 2019-09-27T13:08:36Z
language: python
commit_datetime: 2019-11-05T13:22:17Z
updated_file: lib/ansible/modules/cloud/ovirt/ovirt_host_network.py
file_content:
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2016, 2018 Red Hat, Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ovirt_host_network
short_description: Module to manage host networks in oVirt/RHV
version_added: "2.3"
author: "Ondra Machacek (@machacekondra)"
description:
- "Module to manage host networks in oVirt/RHV."
options:
name:
description:
- "Name of the host to manage networks for."
required: true
aliases:
- 'host'
state:
description:
- "Should the host be present/absent."
choices: ['present', 'absent']
default: present
bond:
description:
- "Dictionary describing network bond:"
- "C(name) - Bond name."
- "C(mode) - Bonding mode."
- "C(options) - Bonding options."
- "C(interfaces) - List of interfaces to create a bond."
interface:
description:
- "Name of the network interface where logical network should be attached."
networks:
description:
- "List of dictionary describing networks to be attached to interface or bond:"
- "C(name) - Name of the logical network to be assigned to bond or interface."
- "C(boot_protocol) - Boot protocol one of the I(none), I(static) or I(dhcp)."
- "C(address) - IP address in case of I(static) boot protocol is used."
- "C(netmask) - Subnet mask in case of I(static) boot protocol is used."
- "C(gateway) - Gateway in case of I(static) boot protocol is used."
- "C(version) - IP version. Either v4 or v6. Default is v4."
labels:
description:
- "List of names of the network label to be assigned to bond or interface."
check:
description:
- "If I(true) verify connectivity between host and engine."
- "Network configuration changes will be rolled back if connectivity between
engine and the host is lost after changing network configuration."
type: bool
save:
description:
- "If I(true) network configuration will be persistent, otherwise it is temporary. Default I(true) since Ansible 2.8."
type: bool
default: True
sync_networks:
description:
- "If I(true) all networks will be synchronized before modification"
type: bool
default: false
version_added: 2.8
extends_documentation_fragment: ovirt
'''
EXAMPLES = '''
# Examples don't contain auth parameter for simplicity,
# look at ovirt_auth module to see how to reuse authentication:
# In all examples the durability of the configuration created is dependent on the 'save' option value:
# Create bond on eth0 and eth1 interface, and put 'myvlan' network on top of it and persist the new configuration:
- name: Bonds
ovirt_host_network:
name: myhost
save: yes
bond:
name: bond0
mode: 2
interfaces:
- eth1
- eth2
networks:
- name: myvlan
boot_protocol: static
address: 1.2.3.4
netmask: 255.255.255.0
gateway: 1.2.3.4
version: v4
# Create bond on eth1 and eth2 interface, specifying both mode and miimon:
- name: Bonds
ovirt_host_network:
name: myhost
bond:
name: bond0
mode: 1
options:
miimon: 200
interfaces:
- eth1
- eth2
# Remove bond0 bond from host interfaces:
- ovirt_host_network:
state: absent
name: myhost
bond:
name: bond0
# Assign myvlan1 and myvlan2 vlans to host eth0 interface:
- ovirt_host_network:
name: myhost
interface: eth0
networks:
- name: myvlan1
- name: myvlan2
# Remove myvlan2 vlan from host eth0 interface:
- ovirt_host_network:
state: absent
name: myhost
interface: eth0
networks:
- name: myvlan2
# Remove all networks/vlans from host eth0 interface:
- ovirt_host_network:
state: absent
name: myhost
interface: eth0
'''
RETURN = '''
id:
description: ID of the host NIC which is managed
returned: On success if host NIC is found.
type: str
sample: 7de90f31-222c-436c-a1ca-7e655bd5b60c
host_nic:
description: "Dictionary of all the host NIC attributes. Host NIC attributes can be found on your oVirt/RHV instance
at following url: http://ovirt.github.io/ovirt-engine-api-model/master/#types/host_nic."
returned: On success if host NIC is found.
type: dict
'''
import traceback
try:
import ovirtsdk4.types as otypes
except ImportError:
pass
from ansible.module_utils import six
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ovirt import (
BaseModule,
check_sdk,
create_connection,
equal,
get_dict_of_struct,
get_entity,
get_link_name,
ovirt_full_argument_spec,
search_by_name,
engine_supported
)
def get_bond_options(mode, usr_opts):
MIIMON_100 = dict(miimon='100')
DEFAULT_MODE_OPTS = {
'1': MIIMON_100,
'2': MIIMON_100,
'3': MIIMON_100,
'4': dict(xmit_hash_policy='2', **MIIMON_100)
}
options = []
if mode is None:
return options
def get_type_name(mode_number):
"""
We need to maintain this type strings, for the __compare_options method,
for easier comparision.
"""
modes = [
'Active-Backup',
'Load balance (balance-xor)',
None,
'Dynamic link aggregation (802.3ad)',
]
if (not 0 < mode_number <= len(modes) - 1):
return None
return modes[mode_number - 1]
try:
mode_number = int(mode)
except ValueError:
raise Exception('Bond mode must be a number.')
options.append(
otypes.Option(
name='mode',
type=get_type_name(mode_number),
value=str(mode_number)
)
)
opts_dict = DEFAULT_MODE_OPTS.get(str(mode), {})
if usr_opts is not None:
opts_dict.update(**usr_opts)
options.extend(
[otypes.Option(name=opt, value=str(value))
for opt, value in six.iteritems(opts_dict)]
)
return options
class HostNetworksModule(BaseModule):
def __compare_options(self, new_options, old_options):
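# NOTE: on Python 3 the sorted() calls below raise TypeError ("'<' not
# supported between instances of 'dict' and 'dict'") because dicts are not
# orderable; this is the failure reported in this issue.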
return sorted(get_dict_of_struct(opt) for opt in new_options) != sorted(get_dict_of_struct(opt) for opt in old_options)
def build_entity(self):
return otypes.Host()
def update_address(self, attachments_service, attachment, network):
# Check if there is any change in address assignments and
# update it if needed:
for ip in attachment.ip_address_assignments:
if str(ip.ip.version) == network.get('version', 'v4'):
changed = False
if not equal(network.get('boot_protocol'), str(ip.assignment_method)):
ip.assignment_method = otypes.BootProtocol(network.get('boot_protocol'))
changed = True
if not equal(network.get('address'), ip.ip.address):
ip.ip.address = network.get('address')
changed = True
if not equal(network.get('gateway'), ip.ip.gateway):
ip.ip.gateway = network.get('gateway')
changed = True
if not equal(network.get('netmask'), ip.ip.netmask):
ip.ip.netmask = network.get('netmask')
changed = True
if changed:
if not self._module.check_mode:
attachments_service.service(attachment.id).update(attachment)
self.changed = True
break
def has_update(self, nic_service):
update = False
bond = self._module.params['bond']
networks = self._module.params['networks']
labels = self._module.params['labels']
nic = get_entity(nic_service)
if nic is None:
return update
# Check if bond configuration should be updated:
if bond:
update = self.__compare_options(get_bond_options(bond.get('mode'), bond.get('options')), getattr(nic.bonding, 'options', []))
update = update or not equal(
sorted(bond.get('interfaces')) if bond.get('interfaces') else None,
sorted(get_link_name(self._connection, s) for s in nic.bonding.slaves)
)
# Check if labels need to be updated on interface/bond:
if labels:
net_labels = nic_service.network_labels_service().list()
# If any labels which the user passed aren't assigned, relabel the interface:
if sorted(labels) != sorted([lbl.id for lbl in net_labels]):
return True
if not networks:
return update
# Check if networks attachments configuration should be updated:
attachments_service = nic_service.network_attachments_service()
network_names = [network.get('name') for network in networks]
attachments = {}
for attachment in attachments_service.list():
name = get_link_name(self._connection, attachment.network)
if name in network_names:
attachments[name] = attachment
for network in networks:
attachment = attachments.get(network.get('name'))
# If the attachment doesn't exist, we need to create it:
if attachment is None:
return True
self.update_address(attachments_service, attachment, network)
return update
def _action_save_configuration(self, entity):
if not self._module.check_mode:
self._service.service(entity.id).commit_net_config()
self.changed = True
def needs_sync(nics_service):
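# Report whether any host NIC has a network attachment that is out of sync
# with the engine's definition; main() uses this to decide whether to call
# host_service.sync_all_networks() when sync_networks is requested.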
nics = nics_service.list()
for nic in nics:
nic_service = nics_service.nic_service(nic.id)
for network_attachment_service in nic_service.network_attachments_service().list():
if not network_attachment_service.in_sync:
return True
return False
def main():
argument_spec = ovirt_full_argument_spec(
state=dict(
choices=['present', 'absent'],
default='present',
),
name=dict(aliases=['host'], required=True),
bond=dict(default=None, type='dict'),
interface=dict(default=None),
networks=dict(default=None, type='list'),
labels=dict(default=None, type='list'),
check=dict(default=None, type='bool'),
save=dict(default=True, type='bool'),
sync_networks=dict(default=False, type='bool'),
)
module = AnsibleModule(argument_spec=argument_spec)
check_sdk(module)
try:
auth = module.params.pop('auth')
connection = create_connection(auth)
hosts_service = connection.system_service().hosts_service()
host_networks_module = HostNetworksModule(
connection=connection,
module=module,
service=hosts_service,
)
host = host_networks_module.search_entity()
if host is None:
raise Exception("Host '%s' was not found." % module.params['name'])
bond = module.params['bond']
interface = module.params['interface']
networks = module.params['networks']
labels = module.params['labels']
nic_name = bond.get('name') if bond else module.params['interface']
host_service = hosts_service.host_service(host.id)
nics_service = host_service.nics_service()
nic = search_by_name(nics_service, nic_name)
if module.params["sync_networks"]:
if needs_sync(nics_service):
if not module.check_mode:
host_service.sync_all_networks()
host_networks_module.changed = True
network_names = [network['name'] for network in networks or []]
state = module.params['state']
if (
state == 'present' and
(nic is None or host_networks_module.has_update(nics_service.service(nic.id)))
):
# Remove networks which are attached to a different interface than the user wants:
attachments_service = host_service.network_attachments_service()
# Append the attachment ID to the network if it needs an update:
for a in attachments_service.list():
current_network_name = get_link_name(connection, a.network)
if current_network_name in network_names:
for n in networks:
if n['name'] == current_network_name:
n['id'] = a.id
# Check if we have to break some bonds:
removed_bonds = []
if nic is not None:
for host_nic in nics_service.list():
if host_nic.bonding and nic.id in [slave.id for slave in host_nic.bonding.slaves]:
removed_bonds.append(otypes.HostNic(id=host_nic.id))
# Assign the networks:
setup_params = dict(
entity=host,
action='setup_networks',
check_connectivity=module.params['check'],
removed_bonds=removed_bonds if removed_bonds else None,
modified_bonds=[
otypes.HostNic(
name=bond.get('name'),
bonding=otypes.Bonding(
options=get_bond_options(bond.get('mode'), bond.get('options')),
slaves=[
otypes.HostNic(name=i) for i in bond.get('interfaces', [])
],
),
),
] if bond else None,
modified_labels=[
otypes.NetworkLabel(
id=str(name),
host_nic=otypes.HostNic(
name=bond.get('name') if bond else interface
),
) for name in labels
] if labels else None,
modified_network_attachments=[
otypes.NetworkAttachment(
id=network.get('id'),
network=otypes.Network(
name=network['name']
) if network['name'] else None,
host_nic=otypes.HostNic(
name=bond.get('name') if bond else interface
),
ip_address_assignments=[
otypes.IpAddressAssignment(
assignment_method=otypes.BootProtocol(
network.get('boot_protocol', 'none')
),
ip=otypes.Ip(
address=network.get('address'),
gateway=network.get('gateway'),
netmask=network.get('netmask'),
version=otypes.IpVersion(
network.get('version')
) if network.get('version') else None,
),
),
],
) for network in networks
] if networks else None,
)
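# Engines >= 4.3 can persist the configuration as part of setup_networks via
# commit_on_success; older engines fall back to a post action that calls
# commit_net_config() (see _action_save_configuration above).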
if engine_supported(connection, '4.3'):
setup_params['commit_on_success'] = module.params['save']
elif module.params['save']:
setup_params['post_action'] = host_networks_module._action_save_configuration
host_networks_module.action(**setup_params)
elif state == 'absent' and nic:
attachments = []
nic_service = nics_service.nic_service(nic.id)
attached_labels = set([str(lbl.id) for lbl in nic_service.network_labels_service().list()])
if networks:
attachments_service = nic_service.network_attachments_service()
attachments = attachments_service.list()
attachments = [
attachment for attachment in attachments
if get_link_name(connection, attachment.network) in network_names
]
# Remove unmanaged networks:
unmanaged_networks_service = host_service.unmanaged_networks_service()
unmanaged_networks = [(u.id, u.name) for u in unmanaged_networks_service.list()]
for net_id, net_name in unmanaged_networks:
if net_name in network_names:
if not module.check_mode:
unmanaged_networks_service.unmanaged_network_service(net_id).remove()
host_networks_module.changed = True
# Need to check if there are any labels to be removed, as the backend fails
# if we try to remove a non-existing label; for bonds and attachments it's OK:
if (labels and set(labels).intersection(attached_labels)) or bond or attachments:
setup_params = dict(
entity=host,
action='setup_networks',
check_connectivity=module.params['check'],
removed_bonds=[
otypes.HostNic(
name=bond.get('name'),
),
] if bond else None,
removed_labels=[
otypes.NetworkLabel(id=str(name)) for name in labels
] if labels else None,
removed_network_attachments=attachments if attachments else None,
)
if engine_supported(connection, '4.3'):
setup_params['commit_on_success'] = module.params['save']
elif module.params['save']:
setup_params['post_action'] = host_networks_module._action_save_configuration
host_networks_module.action(**setup_params)
nic = search_by_name(nics_service, nic_name)
module.exit_json(**{
'changed': host_networks_module.changed,
'id': nic.id if nic else None,
'host_nic': get_dict_of_struct(nic),
})
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
finally:
connection.close(logout=auth.get('token') is None)
if __name__ == "__main__":
main()
---

status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 63954
title: Synchronize module shows SyntaxError
body:
##### SUMMARY
When there is a file whose name matches `*SyntaxError*`, the `synchronize` module outputs this:
```
"msg": "SyntaxError parsing module. Perhaps invoking \"python\" on your local (or delegate_to) machine invokes python3. You can set ansible_python_interpreter for localhost (or the delegate_to machine) to the location of python2 to fix this",
```
It's confusing because it started appearing after an upgrade, even though the return code was `0`. This does not impact me much because I test thoroughly and I trust the return code, which I also checked. But it is definitely confusing, and it was alarming when it started to happen in production :)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`synchronize` module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.10.0.dev0
config file = None
configured module search path = [u'/home/creator/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
```
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.3 LTS"
```
##### STEPS TO REPRODUCE
```
mkdir /tmp/blah
touch /tmp/blah/SyntaxError.py
```
```
---
- hosts: localhost
  tasks:
    - synchronize:
        src: /tmp/blah/
        dest: /tmp/blih/
      delegate_to: localhost
```
##### EXPECTED RESULTS
##### ACTUAL RESULTS
`msg` should instead be the `stdout_lines` joined with `\n`.
```
"msg": "SyntaxError parsing module. Perhaps invoking \"python\" on your local (or delegate_to) machine invokes python3. You can set ansible_python_interpreter for localhost (or the delegate_to machine) to the location of python2 to fix this",
"rc": 0,
"stdout_lines": [
".d..t...... ./",
">f+++++++++ SyntaxError.py"
]
```
I think the issue is here: https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/action/synchronize.py#L420. Not sure how to help fix it.
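For illustration, a hypothetical sketch (not the plugin's actual code) of why matching the literal substring `SyntaxError` in output misfires when a transferred file is named `SyntaxError.py`:

```
# Hypothetical sketch: rsync's itemized output contains the *filename*
# "SyntaxError.py", so a naive substring check mistakes a successful
# transfer for a module parse failure.
stdout = ".d..t...... ./\n>f+++++++++ SyntaxError.py"
if 'SyntaxError' in stdout:
    print('SyntaxError parsing module. Perhaps invoking "python" on your '
          'local (or delegate_to) machine invokes python3 ...')
    # fires even though rc == 0 and the sync succeeded
```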
issue_url: https://github.com/ansible/ansible/issues/63954
pull_url: https://github.com/ansible/ansible/pull/64344
before_fix_sha: d9f5be8d0d8a036aaa89056536f58b4a2e3c86b5
after_fix_sha: a1ab093ddbd32f1002cbf6d6f184c7d0041d890d
report_datetime: 2019-10-25T16:05:27Z
language: python
commit_datetime: 2019-11-05T15:34:18Z
updated_file: changelogs/fragments/63954-synchronize-remove-unused-block.yml
file_content: ""
---

status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 63954
title: Synchronize module shows SyntaxError
body: (identical to the previous record for issue 63954)
issue_url: https://github.com/ansible/ansible/issues/63954
pull_url: https://github.com/ansible/ansible/pull/64344
before_fix_sha: d9f5be8d0d8a036aaa89056536f58b4a2e3c86b5
after_fix_sha: a1ab093ddbd32f1002cbf6d6f184c7d0041d890d
report_datetime: 2019-10-25T16:05:27Z
language: python
commit_datetime: 2019-11-05T15:34:18Z
updated_file: lib/ansible/plugins/action/synchronize.py
file_content:
# -*- coding: utf-8 -*-
# (c) 2012-2013, Timothy Appnel <[email protected]>
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os.path
from ansible import constants as C
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_text
from ansible.module_utils.common._collections_compat import MutableSequence
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.plugins.action import ActionBase
from ansible.plugins.loader import connection_loader
class ActionModule(ActionBase):
def _get_absolute_path(self, path):
original_path = path
if path.startswith('rsync://'):
return path
if self._task._role is not None:
path = self._loader.path_dwim_relative(self._task._role._role_path, 'files', path)
else:
path = self._loader.path_dwim_relative(self._loader.get_basedir(), 'files', path)
if original_path and original_path[-1] == '/' and path[-1] != '/':
# make sure the dwim'd path ends in a trailing "/"
# if the original path did
path += '/'
return path
def _host_is_ipv6_address(self, host):
return ':' in to_text(host, errors='surrogate_or_strict')
def _format_rsync_rsh_target(self, host, path, user):
''' formats rsync rsh target, escaping ipv6 addresses if needed '''
user_prefix = ''
if path.startswith('rsync://'):
return path
# If using docker or buildah, do not add user information
if self._remote_transport not in ['docker', 'buildah'] and user:
user_prefix = '%s@' % (user, )
if self._host_is_ipv6_address(host):
return '[%s%s]:%s' % (user_prefix, host, path)
else:
return '%s%s:%s' % (user_prefix, host, path)
def _process_origin(self, host, path, user):
if host not in C.LOCALHOST:
return self._format_rsync_rsh_target(host, path, user)
if ':' not in path and not path.startswith('/'):
path = self._get_absolute_path(path=path)
return path
def _process_remote(self, task_args, host, path, user, port_matches_localhost_port):
"""
:arg host: hostname for the path
:arg path: file path
:arg user: username for the transfer
:arg port_matches_localhost_port: boolean whether the remote port
matches the port used by localhost's sshd. This is used in
conjunction with seeing whether the host is localhost to know
if we need to have the module substitute the pathname or if it
is a different host (for instance, an ssh tunnelled port or an
alternative ssh port to a vagrant host.)
"""
transport = self._connection.transport
# If we're connecting to a remote host or we're delegating to another
# host or we're connecting to a different ssh instance on the
# localhost then we have to format the path as a remote rsync path
if host not in C.LOCALHOST or transport != "local" or \
(host in C.LOCALHOST and not port_matches_localhost_port):
# If we're delegating to non-localhost but the
# inventory_hostname host is localhost then we need the module to
# fix up the rsync path to use the controller's public DNS/IP
# instead of "localhost"
if port_matches_localhost_port and host in C.LOCALHOST:
task_args['_substitute_controller'] = True
return self._format_rsync_rsh_target(host, path, user)
if ':' not in path and not path.startswith('/'):
path = self._get_absolute_path(path=path)
return path
def _override_module_replaced_vars(self, task_vars):
""" Some vars are substituted into the modules. Have to make sure
that those are correct for localhost when synchronize creates its own
connection to localhost."""
# Clear the current definition of these variables as they came from the
# connection to the remote host
if 'ansible_syslog_facility' in task_vars:
del task_vars['ansible_syslog_facility']
for key in list(task_vars.keys()):
if key.startswith("ansible_") and key.endswith("_interpreter"):
del task_vars[key]
# Add the definitions from localhost
for host in C.LOCALHOST:
if host in task_vars['hostvars']:
localhost = task_vars['hostvars'][host]
break
if 'ansible_syslog_facility' in localhost:
task_vars['ansible_syslog_facility'] = localhost['ansible_syslog_facility']
for key in localhost:
if key.startswith("ansible_") and key.endswith("_interpreter"):
task_vars[key] = localhost[key]
def run(self, tmp=None, task_vars=None):
''' generates params and passes them on to the rsync module '''
# When modifying this function be aware of the tricky convolutions
# your thoughts have to go through:
#
# In normal ansible, we connect from controller to inventory_hostname
# (playbook's hosts: field) or controller to delegate_to host and run
# a module on one of those hosts.
#
# So things that are directly related to the core of ansible are in
# terms of that sort of connection that always originate on the
# controller.
#
# In synchronize we use ansible to connect to either the controller or
# to the delegate_to host and then run rsync which makes its own
# connection from controller to inventory_hostname or delegate_to to
# inventory_hostname.
#
# That means synchronize needs to have some knowledge of the
# controller to inventory_host/delegate host that ansible typically
# establishes and use those to construct a command line for rsync to
# connect from the inventory_host to the controller/delegate. The
# challenge for coders is remembering which leg of the trip is
# associated with the conditions that you're checking at any one time.
if task_vars is None:
task_vars = dict()
# We make a copy of the args here because we may fail and be asked to
# retry. If that happens we don't want to pass the munged args through
# to our next invocation. Munged args are single use only.
_tmp_args = self._task.args.copy()
result = super(ActionModule, self).run(tmp, task_vars)
del tmp # tmp no longer has any effect
# Store remote connection type
self._remote_transport = self._connection.transport
# Handle docker connection options
if self._remote_transport == 'docker':
self._docker_cmd = self._connection.docker_cmd
if self._play_context.docker_extra_args:
self._docker_cmd = "%s %s" % (self._docker_cmd, self._play_context.docker_extra_args)
# self._connection accounts for delegate_to so
# remote_transport is the transport ansible thought it would need
# between the controller and the delegate_to host or the controller
# and the remote_host if delegate_to isn't set.
remote_transport = False
if self._connection.transport != 'local':
remote_transport = True
try:
delegate_to = self._task.delegate_to
except (AttributeError, KeyError):
delegate_to = None
# ssh paramiko docker buildah and local are fully supported transports. Anything
# else only works with delegate_to
if delegate_to is None and self._connection.transport not in \
('ssh', 'paramiko', 'local', 'docker', 'buildah'):
result['failed'] = True
result['msg'] = (
"synchronize uses rsync to function. rsync needs to connect to the remote "
"host via ssh, docker client or a direct filesystem "
"copy. This remote host is being accessed via %s instead "
"so it cannot work." % self._connection.transport)
return result
use_ssh_args = _tmp_args.pop('use_ssh_args', None)
# Parameter name needed by the ansible module
_tmp_args['_local_rsync_path'] = task_vars.get('ansible_rsync_path') or 'rsync'
_tmp_args['_local_rsync_password'] = task_vars.get('ansible_ssh_pass') or task_vars.get('ansible_password')
# rsync thinks that one end of the connection is localhost and the
# other is the host we're running the task for (Note: We use
# ansible's delegate_to mechanism to determine which host rsync is
# running on so localhost could be a non-controller machine if
# delegate_to is used)
src_host = '127.0.0.1'
inventory_hostname = task_vars.get('inventory_hostname')
dest_host_inventory_vars = task_vars['hostvars'].get(inventory_hostname)
try:
dest_host = dest_host_inventory_vars['ansible_host']
except KeyError:
dest_host = dest_host_inventory_vars.get('ansible_ssh_host', inventory_hostname)
dest_host_ids = [hostid for hostid in (dest_host_inventory_vars.get('inventory_hostname'),
dest_host_inventory_vars.get('ansible_host'),
dest_host_inventory_vars.get('ansible_ssh_host'))
if hostid is not None]
localhost_ports = set()
for host in C.LOCALHOST:
localhost_vars = task_vars['hostvars'].get(host, {})
for port_var in C.MAGIC_VARIABLE_MAPPING['port']:
port = localhost_vars.get(port_var, None)
if port:
break
else:
port = C.DEFAULT_REMOTE_PORT
localhost_ports.add(port)
# dest_is_local tells us if the host rsync runs on is the same as the
# host rsync puts the files on. This is about *rsync's connection*,
# not about the ansible connection to run the module.
dest_is_local = False
if delegate_to is None and remote_transport is False:
dest_is_local = True
elif delegate_to is not None and delegate_to in dest_host_ids:
dest_is_local = True
# CHECK FOR NON-DEFAULT SSH PORT
inv_port = task_vars.get('ansible_ssh_port', None) or C.DEFAULT_REMOTE_PORT
if _tmp_args.get('dest_port', None) is None:
if inv_port is not None:
_tmp_args['dest_port'] = inv_port
# Set use_delegate if we are going to run rsync on a delegated host
# instead of localhost
use_delegate = False
if delegate_to is not None and delegate_to in dest_host_ids:
# edge case: explicit delegate and dest_host are the same
# so we run rsync on the remote machine targeting its localhost
# (itself)
dest_host = '127.0.0.1'
use_delegate = True
elif delegate_to is not None and remote_transport:
# If we're delegating to a remote host then we need to use the
# delegate_to settings
use_delegate = True
# Delegate to localhost as the source of the rsync unless we've been
# told (via delegate_to) that a different host is the source of the
# rsync
if not use_delegate and remote_transport:
# Create a connection to localhost to run rsync on
new_stdin = self._connection._new_stdin
# Unlike port, there can be only one shell
localhost_shell = None
for host in C.LOCALHOST:
localhost_vars = task_vars['hostvars'].get(host, {})
for shell_var in C.MAGIC_VARIABLE_MAPPING['shell']:
localhost_shell = localhost_vars.get(shell_var, None)
if localhost_shell:
break
if localhost_shell:
break
else:
localhost_shell = os.path.basename(C.DEFAULT_EXECUTABLE)
self._play_context.shell = localhost_shell
# Unlike port, there can be only one executable
localhost_executable = None
for host in C.LOCALHOST:
localhost_vars = task_vars['hostvars'].get(host, {})
for executable_var in C.MAGIC_VARIABLE_MAPPING['executable']:
localhost_executable = localhost_vars.get(executable_var, None)
if localhost_executable:
break
if localhost_executable:
break
else:
localhost_executable = C.DEFAULT_EXECUTABLE
self._play_context.executable = localhost_executable
new_connection = connection_loader.get('local', self._play_context, new_stdin)
self._connection = new_connection
# Override _remote_is_local as an instance attribute specifically for the synchronize use case
# ensuring we set local tmpdir correctly
self._connection._remote_is_local = True
self._override_module_replaced_vars(task_vars)
# SWITCH SRC AND DEST HOST PER MODE
if _tmp_args.get('mode', 'push') == 'pull':
(dest_host, src_host) = (src_host, dest_host)
# MUNGE SRC AND DEST PER REMOTE_HOST INFO
src = _tmp_args.get('src', None)
dest = _tmp_args.get('dest', None)
if src is None or dest is None:
return dict(failed=True, msg="synchronize requires both src and dest parameters are set")
# Determine if we need a user@
user = None
if not dest_is_local:
# Src and dest rsync "path" handling
if boolean(_tmp_args.get('set_remote_user', 'yes'), strict=False):
if use_delegate:
user = task_vars.get('ansible_delegated_vars', dict()).get('ansible_ssh_user', None)
if not user:
user = task_vars.get('ansible_ssh_user') or self._play_context.remote_user
if not user:
user = C.DEFAULT_REMOTE_USER
else:
user = task_vars.get('ansible_ssh_user') or self._play_context.remote_user
# Private key handling
private_key = self._play_context.private_key_file
if private_key is not None:
_tmp_args['private_key'] = private_key
# use the mode to define src and dest's url
if _tmp_args.get('mode', 'push') == 'pull':
# src is a remote path: <user>@<host>, dest is a local path
src = self._process_remote(_tmp_args, src_host, src, user, inv_port in localhost_ports)
dest = self._process_origin(dest_host, dest, user)
else:
# src is a local path, dest is a remote path: <user>@<host>
src = self._process_origin(src_host, src, user)
dest = self._process_remote(_tmp_args, dest_host, dest, user, inv_port in localhost_ports)
else:
# Still need to munge paths (to account for roles) even if we aren't
# copying files between hosts
if not src.startswith('/'):
src = self._get_absolute_path(path=src)
if not dest.startswith('/'):
dest = self._get_absolute_path(path=dest)
_tmp_args['src'] = src
_tmp_args['dest'] = dest
# Allow custom rsync path argument
rsync_path = _tmp_args.get('rsync_path', None)
# backup original become as we are probably about to unset it
become = self._play_context.become
if not dest_is_local:
# don't escalate for docker. doing --rsync-path with docker exec fails
# and we can switch directly to the user via docker arguments
if self._play_context.become and not rsync_path and self._remote_transport != 'docker':
# If no rsync_path is set, become was originally set, and dest is
# remote then add privilege escalation here.
if self._play_context.become_method == 'sudo':
rsync_path = 'sudo rsync'
# TODO: have to add in the rest of the become methods here
# We cannot use privilege escalation on the machine running the
# module. Instead we run it on the machine rsync is connecting
# to.
self._play_context.become = False
_tmp_args['rsync_path'] = rsync_path
if use_ssh_args:
ssh_args = [
getattr(self._play_context, 'ssh_args', ''),
getattr(self._play_context, 'ssh_common_args', ''),
getattr(self._play_context, 'ssh_extra_args', ''),
]
_tmp_args['ssh_args'] = ' '.join([a for a in ssh_args if a])
# If launching synchronize against docker container
# use rsync_opts to support container to override rsh options
if self._remote_transport in ['docker', 'buildah']:
# Replicate what we do in the module argumentspec handling for lists
if not isinstance(_tmp_args.get('rsync_opts'), MutableSequence):
tmp_rsync_opts = _tmp_args.get('rsync_opts', [])
if isinstance(tmp_rsync_opts, string_types):
tmp_rsync_opts = tmp_rsync_opts.split(',')
elif isinstance(tmp_rsync_opts, (int, float)):
tmp_rsync_opts = [to_text(tmp_rsync_opts)]
_tmp_args['rsync_opts'] = tmp_rsync_opts
if '--blocking-io' not in _tmp_args['rsync_opts']:
_tmp_args['rsync_opts'].append('--blocking-io')
if self._remote_transport in ['docker']:
if become and self._play_context.become_user:
_tmp_args['rsync_opts'].append("--rsh=%s exec -u %s -i" % (self._docker_cmd, self._play_context.become_user))
elif user is not None:
_tmp_args['rsync_opts'].append("--rsh=%s exec -u %s -i" % (self._docker_cmd, user))
else:
_tmp_args['rsync_opts'].append("--rsh=%s exec -i" % self._docker_cmd)
elif self._remote_transport in ['buildah']:
_tmp_args['rsync_opts'].append("--rsh=buildah run --")
# run the module and store the result
result.update(self._execute_module('synchronize', module_args=_tmp_args, task_vars=task_vars))
if 'SyntaxError' in result.get('exception', result.get('msg', '')):
# Emit a warning about using python3 because synchronize is
# somewhat unique in running on localhost
result['exception'] = result['msg']
result['msg'] = ('SyntaxError parsing module. Perhaps invoking "python" on your local (or delegate_to) machine invokes python3. '
'You can set ansible_python_interpreter for localhost (or the delegate_to machine) to the location of python2 to fix this')
return result
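# --- Illustrative sketch (not part of the original plugin) --------------
# A minimal model of the endpoint arrangement run() performs above: in push
# mode the local machine is rsync's source; in pull mode the remote side is.
# The user@host prefixing mirrors what _process_origin/_process_remote do.
# This helper and its names are hypothetical and exist only for illustration.
def _example_endpoint_swap(mode, local_path, remote_path, user, host):
    remote = '%s@%s:%s' % (user, host, remote_path) if user else '%s:%s' % (host, remote_path)
    if mode == 'pull':
        # pull: files flow from the remote host down to the local machine
        return remote, local_path
    # push (the default): files flow from the local machine to the remote host
    return local_path, remote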
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,355 |
Ansible Galaxy installing older version of role if none specified (for geerlingguy.php)
|
##### SUMMARY
Tonight many of my roles started failing CI tests, and after an hour or so I tracked down the problem to an old version of the `geerlingguy.php` role being installed. The current version on Galaxy (see https://galaxy.ansible.com/geerlingguy/php) is 3.7.0.
But Molecule was downloading the release prior to that, 3.6.3.
I also tested manually installing the role (without specifying a version) in two other fresh environments—in _both_ cases, it still downloaded the older, non-current version, 3.6.3.
So... either the Galaxy API and the Galaxy UI are out of sync, or something is wrong with Ansible's `ansible-galaxy` command, and it's causing older-than-latest versions of at least one role to be downloaded...
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
```
ansible 2.9.0
config file = None
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
```
N/A (no changes)
```
##### OS / ENVIRONMENT
- Ubuntu 18.04: fail (3.6.3)
- Ubuntu 16.04: fail (3.6.3)
- Debian 10: fail (3.6.3)
- Debian 9: fail (3.6.3)
- Debian 8: success (3.7.0)
- CentOS 8: success (3.7.0)
- CentOS 7: fail (3.6.3)
Ansible 2.9.0 was used in each environment, installed via Pip. Very weird. I even ran the test in a fresh new Debian 8 and CentOS 8 environment 3 times each, and it was successful every time. And I ran the other tests at least twice each and they failed every time... so it doesn't _seem_ to be a cache-related issue in the API.
##### STEPS TO REPRODUCE
1. `ansible-galaxy install geerlingguy.php`
##### EXPECTED RESULTS
The latest version of the role (3.7.0) should be installed.
##### ACTUAL RESULTS
An older version of the role (3.6.3) was installed.
Note that the proper version _was_ installed on CentOS 8 and Debian 8... but none of the other OSes I tested, all confirmed to be running Ansible 2.9.0, installed via Pip.
|
https://github.com/ansible/ansible/issues/64355
|
https://github.com/ansible/ansible/pull/64373
|
a1ab093ddbd32f1002cbf6d6f184c7d0041d890d
|
7acae62fa849481b2a5e2e2d56961c5e1dcea96c
| 2019-11-03T03:00:15Z |
python
| 2019-11-05T15:34:50Z |
changelogs/fragments/galaxy-role-version.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,355 |
Ansible Galaxy installing older version of role if none specified (for geerlingguy.php)
|
|
https://github.com/ansible/ansible/issues/64355
|
https://github.com/ansible/ansible/pull/64373
|
a1ab093ddbd32f1002cbf6d6f184c7d0041d890d
|
7acae62fa849481b2a5e2e2d56961c5e1dcea96c
| 2019-11-03T03:00:15Z |
python
| 2019-11-05T15:34:50Z |
lib/ansible/galaxy/api.py
|
# (C) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import tarfile
import uuid
import time
from ansible import context
from ansible.errors import AnsibleError
from ansible.module_utils.six import string_types
from ansible.module_utils.six.moves.urllib.error import HTTPError
from ansible.module_utils.six.moves.urllib.parse import quote as urlquote, urlencode, urlparse
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.urls import open_url
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash_s
display = Display()
def g_connect(versions):
"""
Wrapper to lazily initialize connection info to Galaxy and verify the API versions required are available on the
endpoint.
:param versions: A list of API versions that the function supports.
"""
def decorator(method):
def wrapped(self, *args, **kwargs):
if not self._available_api_versions:
display.vvvv("Initial connection to galaxy_server: %s" % self.api_server)
# Determine the type of Galaxy server we are talking to. First try it unauthenticated then with Bearer
# auth for Automation Hub.
n_url = self.api_server
error_context_msg = 'Error when finding available api versions from %s (%s)' % (self.name, n_url)
if self.api_server == 'https://galaxy.ansible.com' or self.api_server == 'https://galaxy.ansible.com/':
n_url = 'https://galaxy.ansible.com/api/'
try:
data = self._call_galaxy(n_url, method='GET', error_context_msg=error_context_msg)
except (AnsibleError, GalaxyError, ValueError, KeyError):
# Either the URL doesn't exist, or some other error occurred. Or the URL exists, but isn't a galaxy API
# root (not JSON, no 'available_versions') so try appending '/api/'
n_url = _urljoin(n_url, '/api/')
# let exceptions here bubble up
data = self._call_galaxy(n_url, method='GET', error_context_msg=error_context_msg)
if 'available_versions' not in data:
raise AnsibleError("Tried to find galaxy API root at %s but no 'available_versions' are available on %s"
% (n_url, self.api_server))
# Update api_server to point to the "real" API root, which in this case
# was the configured url + '/api/' appended.
self.api_server = n_url
# Default to only supporting v1. If only v1 is returned, we also assume that v2 is available, even
# though it isn't returned in the available_versions dict.
available_versions = data.get('available_versions', {u'v1': u'v1/'})
if list(available_versions.keys()) == [u'v1']:
available_versions[u'v2'] = u'v2/'
self._available_api_versions = available_versions
display.vvvv("Found API version '%s' with Galaxy server %s (%s)"
% (', '.join(available_versions.keys()), self.name, self.api_server))
# Verify that the API versions the function works with are available on the server specified.
available_versions = set(self._available_api_versions.keys())
common_versions = set(versions).intersection(available_versions)
if not common_versions:
raise AnsibleError("Galaxy action %s requires API versions '%s' but only '%s' are available on %s %s"
% (method.__name__, ", ".join(versions), ", ".join(available_versions),
self.name, self.api_server))
return method(self, *args, **kwargs)
return wrapped
return decorator
def _urljoin(*args):
return '/'.join(to_native(a, errors='surrogate_or_strict').strip('/') for a in args + ('',) if a)
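# Example (illustrative, not executed): _urljoin strips surrounding slashes
# from each segment, drops empty/None segments, and joins with '/'. Callers
# that need a trailing slash append it themselves (see the v1 methods below).
#
#   _urljoin('https://galaxy.ansible.com/api/', 'v2', 'collections')
#   -> 'https://galaxy.ansible.com/api/v2/collections'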
class GalaxyError(AnsibleError):
""" Error for bad Galaxy server responses. """
def __init__(self, http_error, message):
super(GalaxyError, self).__init__(message)
self.http_code = http_error.code
self.url = http_error.geturl()
try:
http_msg = to_text(http_error.read())
err_info = json.loads(http_msg)
except (AttributeError, ValueError):
err_info = {}
url_split = self.url.split('/')
if 'v2' in url_split:
galaxy_msg = err_info.get('message', 'Unknown error returned by Galaxy server.')
code = err_info.get('code', 'Unknown')
full_error_msg = u"%s (HTTP Code: %d, Message: %s Code: %s)" % (message, self.http_code, galaxy_msg, code)
elif 'v3' in url_split:
errors = err_info.get('errors', [])
if not errors:
errors = [{}] # Defaults are set below, we just need to make sure 1 error is present.
message_lines = []
for error in errors:
error_msg = error.get('detail') or error.get('title') or 'Unknown error returned by Galaxy server.'
error_code = error.get('code') or 'Unknown'
message_line = u"(HTTP Code: %d, Message: %s Code: %s)" % (self.http_code, error_msg, error_code)
message_lines.append(message_line)
full_error_msg = "%s %s" % (message, ', '.join(message_lines))
else:
# v1 and unknown API endpoints
galaxy_msg = err_info.get('default', 'Unknown error returned by Galaxy server.')
full_error_msg = u"%s (HTTP Code: %d, Message: %s)" % (message, self.http_code, galaxy_msg)
self.message = to_native(full_error_msg)
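# Illustrative summary (not part of the class): the message shapes produced
# above for each API family, given an error context message "ctx" and an
# HTTP 500 response:
#
#   v2 body {"message": m, "code": c}              -> 'ctx (HTTP Code: 500, Message: m Code: c)'
#   v3 body {"errors": [{"detail": d, "code": c}]} -> 'ctx (HTTP Code: 500, Message: d Code: c)'
#                                                     (one clause per error, comma-joined)
#   v1/unknown body {"default": m}                 -> 'ctx (HTTP Code: 500, Message: m)'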
class CollectionVersionMetadata:
def __init__(self, namespace, name, version, download_url, artifact_sha256, dependencies):
"""
Contains common information about a collection on a Galaxy server, smoothing over API differences
between server versions and defining standard metadata for a collection.
:param namespace: The namespace name.
:param name: The collection name.
:param version: The version that the metadata refers to.
:param download_url: The URL to download the collection.
:param artifact_sha256: The SHA256 of the collection artifact for later verification.
:param dependencies: A dict of dependencies of the collection.
"""
self.namespace = namespace
self.name = name
self.version = version
self.download_url = download_url
self.artifact_sha256 = artifact_sha256
self.dependencies = dependencies
class GalaxyAPI:
""" This class is meant to be used as a API client for an Ansible Galaxy server """
def __init__(self, galaxy, name, url, username=None, password=None, token=None):
self.galaxy = galaxy
self.name = name
self.username = username
self.password = password
self.token = token
self.api_server = url
self.validate_certs = not context.CLIARGS['ignore_certs']
self._available_api_versions = {}
display.debug('Validate TLS certificates for %s: %s' % (self.api_server, self.validate_certs))
@property
@g_connect(['v1', 'v2', 'v3'])
def available_api_versions(self):
# Calling g_connect will populate self._available_api_versions
return self._available_api_versions
def _call_galaxy(self, url, args=None, headers=None, method=None, auth_required=False, error_context_msg=None):
headers = headers or {}
self._add_auth_token(headers, url, required=auth_required)
try:
display.vvvv("Calling Galaxy at %s" % url)
resp = open_url(to_native(url), data=args, validate_certs=self.validate_certs, headers=headers,
method=method, timeout=20)
except HTTPError as e:
raise GalaxyError(e, error_context_msg)
except Exception as e:
raise AnsibleError("Unknown error when attempting to call Galaxy at '%s': %s" % (url, to_native(e)))
resp_data = to_text(resp.read(), errors='surrogate_or_strict')
try:
data = json.loads(resp_data)
except ValueError:
raise AnsibleError("Failed to parse Galaxy response from '%s' as JSON:\n%s"
% (resp.url, to_native(resp_data)))
return data
def _add_auth_token(self, headers, url, token_type=None, required=False):
# Don't add the auth token if one is already present
if 'Authorization' in headers:
return
if not self.token and required:
raise AnsibleError("No access token or username set. A token can be set with --api-key, with "
"'ansible-galaxy login', or set in ansible.cfg.")
if self.token:
headers.update(self.token.headers())
@g_connect(['v1'])
def authenticate(self, github_token):
"""
Retrieve an authentication token
"""
url = _urljoin(self.api_server, self.available_api_versions['v1'], "tokens") + '/'
args = urlencode({"github_token": github_token})
resp = open_url(url, data=args, validate_certs=self.validate_certs, method="POST")
data = json.loads(to_text(resp.read(), errors='surrogate_or_strict'))
return data
@g_connect(['v1'])
def create_import_task(self, github_user, github_repo, reference=None, role_name=None):
"""
Post an import request
"""
url = _urljoin(self.api_server, self.available_api_versions['v1'], "imports") + '/'
args = {
"github_user": github_user,
"github_repo": github_repo,
"github_reference": reference if reference else ""
}
if role_name:
args['alternate_role_name'] = role_name
elif github_repo.startswith('ansible-role'):
args['alternate_role_name'] = github_repo[len('ansible-role') + 1:]
data = self._call_galaxy(url, args=urlencode(args), method="POST")
if data.get('results', None):
return data['results']
return data
@g_connect(['v1'])
def get_import_task(self, task_id=None, github_user=None, github_repo=None):
"""
Check the status of an import task.
"""
url = _urljoin(self.api_server, self.available_api_versions['v1'], "imports")
if task_id is not None:
url = "%s?id=%d" % (url, task_id)
elif github_user is not None and github_repo is not None:
url = "%s?github_user=%s&github_repo=%s" % (url, github_user, github_repo)
else:
raise AnsibleError("Expected task_id or github_user and github_repo")
data = self._call_galaxy(url)
return data['results']
@g_connect(['v1'])
def lookup_role_by_name(self, role_name, notify=True):
"""
Find a role by name.
"""
role_name = to_text(urlquote(to_bytes(role_name)))
try:
parts = role_name.split(".")
user_name = ".".join(parts[0:-1])
role_name = parts[-1]
if notify:
display.display("- downloading role '%s', owned by %s" % (role_name, user_name))
except Exception:
raise AnsibleError("Invalid role name (%s). Specify role as format: username.rolename" % role_name)
url = _urljoin(self.api_server, self.available_api_versions['v1'], "roles",
"?owner__username=%s&name=%s" % (user_name, role_name))
data = self._call_galaxy(url)
if len(data["results"]) != 0:
return data["results"][0]
return None
@g_connect(['v1'])
def fetch_role_related(self, related, role_id):
"""
Fetch the list of related items for the given role.
The url comes from the 'related' field of the role.
"""
results = []
try:
url = _urljoin(self.api_server, self.available_api_versions['v1'], "roles", role_id, related,
"?page_size=50")
data = self._call_galaxy(url)
results = data['results']
done = (data.get('next_link', None) is None)
while not done:
url = _urljoin(self.api_server, data['next_link'])
data = self._call_galaxy(url)
results += data['results']
done = (data.get('next_link', None) is None)
except Exception as e:
display.vvvv("Unable to retrive role (id=%s) data (%s), but this is not fatal so we continue: %s"
% (role_id, related, to_text(e)))
return results
@g_connect(['v1'])
def get_list(self, what):
"""
Fetch the list of items specified.
"""
try:
url = _urljoin(self.api_server, self.available_api_versions['v1'], what, "?page_size")
data = self._call_galaxy(url)
if "results" in data:
results = data['results']
else:
results = data
done = True
if "next" in data:
done = (data.get('next_link', None) is None)
while not done:
url = _urljoin(self.api_server, data['next_link'])
data = self._call_galaxy(url)
results += data['results']
done = (data.get('next_link', None) is None)
return results
except Exception as error:
raise AnsibleError("Failed to download the %s list: %s" % (what, to_native(error)))
@g_connect(['v1'])
def search_roles(self, search, **kwargs):
search_url = _urljoin(self.api_server, self.available_api_versions['v1'], "search", "roles", "?")
if search:
search_url += '&autocomplete=' + to_text(urlquote(to_bytes(search)))
tags = kwargs.get('tags', None)
platforms = kwargs.get('platforms', None)
page_size = kwargs.get('page_size', None)
author = kwargs.get('author', None)
if tags and isinstance(tags, string_types):
tags = tags.split(',')
search_url += '&tags_autocomplete=' + '+'.join(tags)
if platforms and isinstance(platforms, string_types):
platforms = platforms.split(',')
search_url += '&platforms_autocomplete=' + '+'.join(platforms)
if page_size:
search_url += '&page_size=%s' % page_size
if author:
search_url += '&username_autocomplete=%s' % author
data = self._call_galaxy(search_url)
return data
@g_connect(['v1'])
def add_secret(self, source, github_user, github_repo, secret):
url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets") + '/'
args = urlencode({
"source": source,
"github_user": github_user,
"github_repo": github_repo,
"secret": secret
})
data = self._call_galaxy(url, args=args, method="POST")
return data
@g_connect(['v1'])
def list_secrets(self):
url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets")
data = self._call_galaxy(url, auth_required=True)
return data
@g_connect(['v1'])
def remove_secret(self, secret_id):
url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets", secret_id) + '/'
data = self._call_galaxy(url, auth_required=True, method='DELETE')
return data
@g_connect(['v1'])
def delete_role(self, github_user, github_repo):
url = _urljoin(self.api_server, self.available_api_versions['v1'], "removerole",
"?github_user=%s&github_repo=%s" % (github_user, github_repo))
data = self._call_galaxy(url, auth_required=True, method='DELETE')
return data
# Collection APIs #
@g_connect(['v2', 'v3'])
def publish_collection(self, collection_path):
"""
Publishes a collection to a Galaxy server and returns the import task URI.
:param collection_path: The path to the collection tarball to publish.
:return: The import task URI that contains the import results.
"""
display.display("Publishing collection artifact '%s' to %s %s" % (collection_path, self.name, self.api_server))
b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
if not os.path.exists(b_collection_path):
raise AnsibleError("The collection path specified '%s' does not exist." % to_native(collection_path))
elif not tarfile.is_tarfile(b_collection_path):
raise AnsibleError("The collection path specified '%s' is not a tarball, use 'ansible-galaxy collection "
"build' to create a proper release artifact." % to_native(collection_path))
with open(b_collection_path, 'rb') as collection_tar:
data = collection_tar.read()
boundary = '--------------------------%s' % uuid.uuid4().hex
b_file_name = os.path.basename(b_collection_path)
part_boundary = b"--" + to_bytes(boundary, errors='surrogate_or_strict')
form = [
part_boundary,
b"Content-Disposition: form-data; name=\"sha256\"",
to_bytes(secure_hash_s(data), errors='surrogate_or_strict'),
part_boundary,
b"Content-Disposition: file; name=\"file\"; filename=\"%s\"" % b_file_name,
b"Content-Type: application/octet-stream",
b"",
data,
b"%s--" % part_boundary,
]
data = b"\r\n".join(form)
headers = {
'Content-type': 'multipart/form-data; boundary=%s' % boundary,
'Content-length': len(data),
}
if 'v3' in self.available_api_versions:
n_url = _urljoin(self.api_server, self.available_api_versions['v3'], 'artifacts', 'collections') + '/'
else:
n_url = _urljoin(self.api_server, self.available_api_versions['v2'], 'collections') + '/'
resp = self._call_galaxy(n_url, args=data, headers=headers, method='POST', auth_required=True,
error_context_msg='Error when publishing collection to %s (%s)'
% (self.name, self.api_server))
return resp['task']
@g_connect(['v2', 'v3'])
def wait_import_task(self, task_id, timeout=0):
"""
Waits until the import process on the Galaxy server has completed or the timeout is reached.
:param task_id: The id of the import task to wait for. This can be parsed out of the return
value for GalaxyAPI.publish_collection.
:param timeout: The timeout in seconds, 0 is no timeout.
"""
# TODO: actually verify that v3 returns the same structure as v2, right now this is just an assumption.
state = 'waiting'
data = None
# Construct the appropriate URL per version
if 'v3' in self.available_api_versions:
full_url = _urljoin(self.api_server, self.available_api_versions['v3'],
'imports/collections', task_id, '/')
else:
# TODO: Should we have a trailing slash here? I'm working with what the unittests ask
# for but a trailing slash may be more correct
full_url = _urljoin(self.api_server, self.available_api_versions['v2'],
'collection-imports', task_id)
display.display("Waiting until Galaxy import task %s has completed" % full_url)
start = time.time()
wait = 2
while timeout == 0 or (time.time() - start) < timeout:
data = self._call_galaxy(full_url, method='GET', auth_required=True,
error_context_msg='Error when getting import task results at %s' % full_url)
state = data.get('state', 'waiting')
if data.get('finished_at', None):
break
display.vvv('Galaxy import process has a status of %s, wait %d seconds before trying again'
% (state, wait))
time.sleep(wait)
# poor man's exponential backoff algo so we don't flood the Galaxy API, cap at 30 seconds.
wait = min(30, wait * 1.5)
if state == 'waiting':
raise AnsibleError("Timeout while waiting for the Galaxy import process to finish, check progress at '%s'"
% to_native(full_url))
for message in data.get('messages', []):
level = message['level']
if level == 'error':
display.error("Galaxy import error message: %s" % message['message'])
elif level == 'warning':
display.warning("Galaxy import warning message: %s" % message['message'])
else:
display.vvv("Galaxy import message: %s - %s" % (level, message['message']))
if state == 'failed':
code = to_native(data['error'].get('code', 'UNKNOWN'))
description = to_native(
data['error'].get('description', "Unknown error, see %s for more details" % full_url))
raise AnsibleError("Galaxy import process failed: %s (Code: %s)" % (description, code))
@g_connect(['v2', 'v3'])
def get_collection_version_metadata(self, namespace, name, version):
"""
Gets the collection information from the Galaxy server about a specific Collection version.
:param namespace: The collection namespace.
:param name: The collection name.
:param version: The version of the collection to get the information for.
:return: CollectionVersionMetadata about the collection at the version requested.
"""
api_path = self.available_api_versions.get('v3', self.available_api_versions.get('v2'))
url_paths = [self.api_server, api_path, 'collections', namespace, name, 'versions', version]
n_collection_url = _urljoin(*url_paths)
error_context_msg = 'Error when getting collection version metadata for %s.%s:%s from %s (%s)' \
% (namespace, name, version, self.name, self.api_server)
data = self._call_galaxy(n_collection_url, error_context_msg=error_context_msg)
return CollectionVersionMetadata(data['namespace']['name'], data['collection']['name'], data['version'],
data['download_url'], data['artifact']['sha256'],
data['metadata']['dependencies'])
@g_connect(['v2', 'v3'])
def get_collection_versions(self, namespace, name):
"""
Gets a list of available versions for a collection on a Galaxy server.
:param namespace: The collection namespace.
:param name: The collection name.
:return: A list of versions that are available.
"""
if 'v3' in self.available_api_versions:
api_path = self.available_api_versions['v3']
results_key = 'data'
pagination_path = ['links', 'next']
else:
api_path = self.available_api_versions['v2']
results_key = 'results'
pagination_path = ['next']
n_url = _urljoin(self.api_server, api_path, 'collections', namespace, name, 'versions')
error_context_msg = 'Error when getting available collection versions for %s.%s from %s (%s)' \
% (namespace, name, self.name, self.api_server)
data = self._call_galaxy(n_url, error_context_msg=error_context_msg)
versions = []
while True:
versions += [v['version'] for v in data[results_key]]
next_link = data
for path in pagination_path:
next_link = next_link.get(path, {})
if not next_link:
break
data = self._call_galaxy(to_native(next_link, errors='surrogate_or_strict'),
error_context_msg=error_context_msg)
return versions
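# Illustrative helper (not part of this module): pick the newest entry from
# get_collection_versions() output. A plain max() compares strings, which
# would order '3.10.0' below '3.9.0'; LooseVersion compares release segments
# numerically instead. Hypothetical, for illustration only.
def _example_latest_version(versions):
    from distutils.version import LooseVersion
    return max(versions, key=LooseVersion)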
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,355 |
Ansible Galaxy installing older version of role if none specified (for geerlingguy.php)
|
|
https://github.com/ansible/ansible/issues/64355
|
https://github.com/ansible/ansible/pull/64373
|
a1ab093ddbd32f1002cbf6d6f184c7d0041d890d
|
7acae62fa849481b2a5e2e2d56961c5e1dcea96c
| 2019-11-03T03:00:15Z |
python
| 2019-11-05T15:34:50Z |
test/units/galaxy/test_api.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import re
import pytest
import tarfile
import tempfile
import time
from io import BytesIO, StringIO
from units.compat.mock import MagicMock
from ansible import context
from ansible.errors import AnsibleError
from ansible.galaxy import api as galaxy_api
from ansible.galaxy.api import CollectionVersionMetadata, GalaxyAPI, GalaxyError
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.six.moves.urllib import error as urllib_error
from ansible.utils import context_objects as co
from ansible.utils.display import Display
@pytest.fixture(autouse='function')
def reset_cli_args():
co.GlobalCLIArgs._Singleton__instance = None
# Required to initialise the GalaxyAPI object
context.CLIARGS._store = {'ignore_certs': False}
yield
co.GlobalCLIArgs._Singleton__instance = None
@pytest.fixture()
def collection_artifact(tmp_path_factory):
''' Creates a collection artifact tarball that is ready to be published '''
output_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Output'))
tar_path = os.path.join(output_dir, 'namespace-collection-v1.0.0.tar.gz')
with tarfile.open(tar_path, 'w:gz') as tfile:
b_io = BytesIO(b"\x00\x01\x02\x03")
tar_info = tarfile.TarInfo('test')
tar_info.size = 4
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
yield tar_path
def get_test_galaxy_api(url, version, token_ins=None, token_value=None):
token_value = token_value or "my token"
token_ins = token_ins or GalaxyToken(token_value)
api = GalaxyAPI(None, "test", url)
# Warning: this doesn't test g_connect() because _available_api_versions is set here. That means
# that URLs for v2 servers have to append '/api/' themselves in the input data.
api._available_api_versions = {version: '%s' % version}
api.token = token_ins
return api
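# Example (illustrative): seeding _available_api_versions this way bypasses
# g_connect()'s initial discovery request, so tests control the advertised
# API versions directly:
#
#   api = get_test_galaxy_api('https://galaxy.ansible.com/api/', 'v2')
#   api.available_api_versions  # -> {'v2': 'v2'}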
def test_api_no_auth():
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/")
actual = {}
api._add_auth_token(actual, "")
assert actual == {}
def test_api_no_auth_but_required():
expected = "No access token or username set. A token can be set with --api-key, with 'ansible-galaxy login', " \
"or set in ansible.cfg."
with pytest.raises(AnsibleError, match=expected):
GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/")._add_auth_token({}, "", required=True)
def test_api_token_auth():
token = GalaxyToken(token=u"my_token")
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
actual = {}
api._add_auth_token(actual, "", required=True)
assert actual == {'Authorization': 'Token my_token'}
def test_api_token_auth_with_token_type(monkeypatch):
token = KeycloakToken(auth_url='https://api.test/')
mock_token_get = MagicMock()
mock_token_get.return_value = 'my_token'
monkeypatch.setattr(token, 'get', mock_token_get)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
actual = {}
api._add_auth_token(actual, "", token_type="Bearer", required=True)
assert actual == {'Authorization': 'Bearer my_token'}
def test_api_token_auth_with_v3_url(monkeypatch):
token = KeycloakToken(auth_url='https://api.test/')
mock_token_get = MagicMock()
mock_token_get.return_value = 'my_token'
monkeypatch.setattr(token, 'get', mock_token_get)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
actual = {}
api._add_auth_token(actual, "https://galaxy.ansible.com/api/v3/resource/name", required=True)
assert actual == {'Authorization': 'Bearer my_token'}
def test_api_token_auth_with_v2_url():
token = GalaxyToken(token=u"my_token")
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
actual = {}
# Add v3 to random part of URL but response should only see the v2 as the full URI path segment.
api._add_auth_token(actual, "https://galaxy.ansible.com/api/v2/resourcev3/name", required=True)
assert actual == {'Authorization': 'Token my_token'}
def test_api_basic_auth_password():
token = BasicAuthToken(username=u"user", password=u"pass")
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
actual = {}
api._add_auth_token(actual, "", required=True)
assert actual == {'Authorization': 'Basic dXNlcjpwYXNz'}
def test_api_basic_auth_no_password():
token = BasicAuthToken(username=u"user")
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
actual = {}
api._add_auth_token(actual, "", required=True)
assert actual == {'Authorization': 'Basic dXNlcjo='}
def test_api_dont_override_auth_header():
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/")
actual = {'Authorization': 'Custom token'}
api._add_auth_token(actual, "", required=True)
assert actual == {'Authorization': 'Custom token'}
def test_initialise_galaxy(monkeypatch):
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(u'{"available_versions":{"v1":"v1/"}}'),
StringIO(u'{"token":"my token"}'),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/")
actual = api.authenticate("github_token")
assert len(api.available_api_versions) == 2
assert api.available_api_versions['v1'] == u'v1/'
assert api.available_api_versions['v2'] == u'v2/'
assert actual == {u'token': u'my token'}
assert mock_open.call_count == 2
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/'
assert mock_open.mock_calls[1][1][0] == 'https://galaxy.ansible.com/api/v1/tokens/'
assert mock_open.mock_calls[1][2]['data'] == 'github_token=github_token'
def test_initialise_galaxy_with_auth(monkeypatch):
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(u'{"available_versions":{"v1":"v1/"}}'),
StringIO(u'{"token":"my token"}'),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=GalaxyToken(token='my_token'))
actual = api.authenticate("github_token")
assert len(api.available_api_versions) == 2
assert api.available_api_versions['v1'] == u'v1/'
assert api.available_api_versions['v2'] == u'v2/'
assert actual == {u'token': u'my token'}
assert mock_open.call_count == 2
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/'
assert mock_open.mock_calls[1][1][0] == 'https://galaxy.ansible.com/api/v1/tokens/'
assert mock_open.mock_calls[1][2]['data'] == 'github_token=github_token'
def test_initialise_automation_hub(monkeypatch):
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(u'{"available_versions":{"v2": "v2/", "v3":"v3/"}}'),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
token = KeycloakToken(auth_url='https://api.test/')
mock_token_get = MagicMock()
mock_token_get.return_value = 'my_token'
monkeypatch.setattr(token, 'get', mock_token_get)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token)
assert len(api.available_api_versions) == 2
assert api.available_api_versions['v2'] == u'v2/'
assert api.available_api_versions['v3'] == u'v3/'
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/'
assert mock_open.mock_calls[0][2]['headers'] == {'Authorization': 'Bearer my_token'}
def test_initialise_unknown(monkeypatch):
mock_open = MagicMock()
mock_open.side_effect = [
urllib_error.HTTPError('https://galaxy.ansible.com/api/', 500, 'msg', {}, StringIO(u'{"msg":"raw error"}')),
urllib_error.HTTPError('https://galaxy.ansible.com/api/api/', 500, 'msg', {}, StringIO(u'{"msg":"raw error"}')),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=GalaxyToken(token='my_token'))
expected = "Error when finding available api versions from test (%s) (HTTP Code: 500, Message: Unknown " \
"error returned by Galaxy server.)" % api.api_server
with pytest.raises(AnsibleError, match=re.escape(expected)):
api.authenticate("github_token")
def test_get_available_api_versions(monkeypatch):
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(u'{"available_versions":{"v1":"v1/","v2":"v2/"}}'),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/")
actual = api.available_api_versions
assert len(actual) == 2
assert actual['v1'] == u'v1/'
assert actual['v2'] == u'v2/'
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/'
def test_publish_collection_missing_file():
fake_path = u'/fake/ÅÑŚÌβŁÈ/path'
expected = to_native("The collection path specified '%s' does not exist." % fake_path)
api = get_test_galaxy_api("https://galaxy.ansible.com/api/", "v2")
with pytest.raises(AnsibleError, match=expected):
api.publish_collection(fake_path)
def test_publish_collection_not_a_tarball():
expected = "The collection path specified '{0}' is not a tarball, use 'ansible-galaxy collection build' to " \
"create a proper release artifact."
api = get_test_galaxy_api("https://galaxy.ansible.com/api/", "v2")
with tempfile.NamedTemporaryFile(prefix=u'ÅÑŚÌβŁÈ') as temp_file:
temp_file.write(b"\x00")
temp_file.flush()
with pytest.raises(AnsibleError, match=expected.format(to_native(temp_file.name))):
api.publish_collection(temp_file.name)
def test_publish_collection_unsupported_version():
expected = "Galaxy action publish_collection requires API versions 'v2, v3' but only 'v1' are available on test " \
"https://galaxy.ansible.com/api/"
api = get_test_galaxy_api("https://galaxy.ansible.com/api/", "v1")
with pytest.raises(AnsibleError, match=expected):
api.publish_collection("path")
@pytest.mark.parametrize('api_version, collection_url', [
('v2', 'collections'),
('v3', 'artifacts/collections'),
])
def test_publish_collection(api_version, collection_url, collection_artifact, monkeypatch):
api = get_test_galaxy_api("https://galaxy.ansible.com/api/", api_version)
mock_call = MagicMock()
mock_call.return_value = {'task': 'http://task.url/'}
monkeypatch.setattr(api, '_call_galaxy', mock_call)
actual = api.publish_collection(collection_artifact)
assert actual == 'http://task.url/'
assert mock_call.call_count == 1
assert mock_call.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/%s/%s/' % (api_version, collection_url)
assert mock_call.mock_calls[0][2]['headers']['Content-length'] == len(mock_call.mock_calls[0][2]['args'])
assert mock_call.mock_calls[0][2]['headers']['Content-type'].startswith(
'multipart/form-data; boundary=--------------------------')
assert mock_call.mock_calls[0][2]['args'].startswith(b'--------------------------')
assert mock_call.mock_calls[0][2]['method'] == 'POST'
assert mock_call.mock_calls[0][2]['auth_required'] is True
@pytest.mark.parametrize('api_version, collection_url, response, expected', [
('v2', 'collections', {},
'Error when publishing collection to test (%s) (HTTP Code: 500, Message: Unknown error returned by Galaxy '
'server. Code: Unknown)'),
('v2', 'collections', {
'message': u'Galaxy error messäge',
'code': 'GWE002',
}, u'Error when publishing collection to test (%s) (HTTP Code: 500, Message: Galaxy error messäge Code: GWE002)'),
('v3', 'artifact/collections', {},
'Error when publishing collection to test (%s) (HTTP Code: 500, Message: Unknown error returned by Galaxy '
'server. Code: Unknown)'),
('v3', 'artifact/collections', {
'errors': [
{
'code': 'conflict.collection_exists',
'detail': 'Collection "mynamespace-mycollection-4.1.1" already exists.',
'title': 'Conflict.',
'status': '400',
},
{
'code': 'quantum_improbability',
'title': u'Rändom(?) quantum improbability.',
'source': {'parameter': 'the_arrow_of_time'},
'meta': {'remediation': 'Try again before'},
},
],
}, u'Error when publishing collection to test (%s) (HTTP Code: 500, Message: Collection '
u'"mynamespace-mycollection-4.1.1" already exists. Code: conflict.collection_exists), (HTTP Code: 500, '
u'Message: Rändom(?) quantum improbability. Code: quantum_improbability)')
])
def test_publish_failure(api_version, collection_url, response, expected, collection_artifact, monkeypatch):
api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version)
expected_url = '%s/api/%s/%s' % (api.api_server, api_version, collection_url)
mock_open = MagicMock()
mock_open.side_effect = urllib_error.HTTPError(expected_url, 500, 'msg', {},
StringIO(to_text(json.dumps(response))))
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
with pytest.raises(GalaxyError, match=re.escape(to_native(expected % api.api_server))):
api.publish_collection(collection_artifact)
@pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [
('https://galaxy.server.com/api', 'v2', 'Token', GalaxyToken('my token'),
'1234',
'https://galaxy.server.com/api/v2/collection-imports/1234'),
('https://galaxy.server.com/api/automation-hub/', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'),
'1234',
'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'),
])
def test_wait_import_task(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch):
api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.return_value = StringIO(u'{"state":"success","finished_at":"time"}')
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
api.wait_import_task(import_uri)
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == full_import_uri
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri
@pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [
('https://galaxy.server.com/api/', 'v2', 'Token', GalaxyToken('my token'),
'1234',
'https://galaxy.server.com/api/v2/collection-imports/1234'),
('https://galaxy.server.com/api/automation-hub', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'),
'1234',
'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'),
])
def test_wait_import_task_multiple_requests(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch):
api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(u'{"state":"test"}'),
StringIO(u'{"state":"success","finished_at":"time"}'),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
mock_vvv = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_vvv)
monkeypatch.setattr(time, 'sleep', MagicMock())
api.wait_import_task(import_uri)
assert mock_open.call_count == 2
assert mock_open.mock_calls[0][1][0] == full_import_uri
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_open.mock_calls[1][1][0] == full_import_uri
assert mock_open.mock_calls[1][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri
assert mock_vvv.call_count == 1
assert mock_vvv.mock_calls[0][1][0] == \
'Galaxy import process has a status of test, wait 2 seconds before trying again'
@pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri,', [
('https://galaxy.server.com/api/', 'v2', 'Token', GalaxyToken('my token'),
'1234',
'https://galaxy.server.com/api/v2/collection-imports/1234'),
('https://galaxy.server.com/api/automation-hub/', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'),
'1234',
'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'),
])
def test_wait_import_task_with_failure(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch):
api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(to_text(json.dumps({
'finished_at': 'some_time',
'state': 'failed',
'error': {
'code': 'GW001',
'description': u'Becäuse I said so!',
},
'messages': [
{
'level': 'error',
'message': u'Somé error',
},
{
'level': 'warning',
'message': u'Some wärning',
},
{
'level': 'info',
'message': u'Somé info',
},
],
}))),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
mock_vvv = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_vvv)
mock_warn = MagicMock()
monkeypatch.setattr(Display, 'warning', mock_warn)
mock_err = MagicMock()
monkeypatch.setattr(Display, 'error', mock_err)
expected = to_native(u'Galaxy import process failed: Becäuse I said so! (Code: GW001)')
with pytest.raises(AnsibleError, match=re.escape(expected)):
api.wait_import_task(import_uri)
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == full_import_uri
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri
assert mock_vvv.call_count == 1
assert mock_vvv.mock_calls[0][1][0] == u'Galaxy import message: info - Somé info'
assert mock_warn.call_count == 1
assert mock_warn.mock_calls[0][1][0] == u'Galaxy import warning message: Some wärning'
assert mock_err.call_count == 1
assert mock_err.mock_calls[0][1][0] == u'Galaxy import error message: Somé error'
@pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [
('https://galaxy.server.com/api/', 'v2', 'Token', GalaxyToken('my_token'),
'1234',
'https://galaxy.server.com/api/v2/collection-imports/1234'),
('https://galaxy.server.com/api/automation-hub/', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'),
'1234',
'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'),
])
def test_wait_import_task_with_failure_no_error(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch):
api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(to_text(json.dumps({
'finished_at': 'some_time',
'state': 'failed',
'error': {},
'messages': [
{
'level': 'error',
'message': u'Somé error',
},
{
'level': 'warning',
'message': u'Some wärning',
},
{
'level': 'info',
'message': u'Somé info',
},
],
}))),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
mock_vvv = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_vvv)
mock_warn = MagicMock()
monkeypatch.setattr(Display, 'warning', mock_warn)
mock_err = MagicMock()
monkeypatch.setattr(Display, 'error', mock_err)
expected = 'Galaxy import process failed: Unknown error, see %s for more details \\(Code: UNKNOWN\\)' % full_import_uri
with pytest.raises(AnsibleError, match=expected):
api.wait_import_task(import_uri)
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == full_import_uri
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri
assert mock_vvv.call_count == 1
assert mock_vvv.mock_calls[0][1][0] == u'Galaxy import message: info - Somé info'
assert mock_warn.call_count == 1
assert mock_warn.mock_calls[0][1][0] == u'Galaxy import warning message: Some wärning'
assert mock_err.call_count == 1
assert mock_err.mock_calls[0][1][0] == u'Galaxy import error message: Somé error'
@pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [
('https://galaxy.server.com/api', 'v2', 'Token', GalaxyToken('my token'),
'1234',
'https://galaxy.server.com/api/v2/collection-imports/1234'),
('https://galaxy.server.com/api/automation-hub', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'),
'1234',
'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'),
])
def test_wait_import_task_timeout(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch):
api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
def return_response(*args, **kwargs):
return StringIO(u'{"state":"waiting"}')
mock_open = MagicMock()
mock_open.side_effect = return_response
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
mock_vvv = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_vvv)
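# patch out time.sleep so the retry/backoff loop inside wait_import_task completes instantly during the test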
monkeypatch.setattr(time, 'sleep', MagicMock())
expected = "Timeout while waiting for the Galaxy import process to finish, check progress at '%s'" % full_import_uri
with pytest.raises(AnsibleError, match=expected):
api.wait_import_task(import_uri, 1)
assert mock_open.call_count > 1
assert mock_open.mock_calls[0][1][0] == full_import_uri
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_open.mock_calls[1][1][0] == full_import_uri
assert mock_open.mock_calls[1][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri
# expected_wait_msg = 'Galaxy import process has a status of waiting, wait {0} seconds before trying again'
assert mock_vvv.call_count > 9 # 1st is opening Galaxy token file.
# FIXME:
# assert mock_vvv.mock_calls[1][1][0] == expected_wait_msg.format(2)
# assert mock_vvv.mock_calls[2][1][0] == expected_wait_msg.format(3)
# assert mock_vvv.mock_calls[3][1][0] == expected_wait_msg.format(4)
# assert mock_vvv.mock_calls[4][1][0] == expected_wait_msg.format(6)
# assert mock_vvv.mock_calls[5][1][0] == expected_wait_msg.format(10)
# assert mock_vvv.mock_calls[6][1][0] == expected_wait_msg.format(15)
# assert mock_vvv.mock_calls[7][1][0] == expected_wait_msg.format(22)
# assert mock_vvv.mock_calls[8][1][0] == expected_wait_msg.format(30)
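# (The commented-out expectations above are consistent with a multiplicative backoff of roughly 1.5x per poll, capped at 30 seconds.)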
@pytest.mark.parametrize('api_version, token_type, version, token_ins', [
('v2', None, 'v2.1.13', None),
('v3', 'Bearer', 'v1.0.0', KeycloakToken(auth_url='https://api.test/api/automation-hub/')),
])
def test_get_collection_version_metadata_no_version(api_version, token_type, version, token_ins, monkeypatch):
api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(to_text(json.dumps({
'download_url': 'https://downloadme.com',
'artifact': {
'sha256': 'ac47b6fac117d7c171812750dacda655b04533cf56b31080b82d1c0db3c9d80f',
},
'namespace': {
'name': 'namespace',
},
'collection': {
'name': 'collection',
},
'version': version,
'metadata': {
'dependencies': {},
}
}))),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
actual = api.get_collection_version_metadata('namespace', 'collection', version)
assert isinstance(actual, CollectionVersionMetadata)
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.download_url == u'https://downloadme.com'
assert actual.artifact_sha256 == u'ac47b6fac117d7c171812750dacda655b04533cf56b31080b82d1c0db3c9d80f'
assert actual.version == version
assert actual.dependencies == {}
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == '%s%s/collections/namespace/collection/versions/%s' \
% (api.api_server, api_version, version)
# v2 calls don't need auth, so no Authorization header or token_type
if token_type:
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
@pytest.mark.parametrize('api_version, token_type, token_ins, response', [
('v2', None, None, {
'count': 2,
'next': None,
'previous': None,
'results': [
{
'version': '1.0.0',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.0',
},
{
'version': '1.0.1',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.1',
},
],
}),
# TODO: Verify this once Automation Hub is actually out
('v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), {
'count': 2,
'next': None,
'previous': None,
'data': [
{
'version': '1.0.0',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.0',
},
{
'version': '1.0.1',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.1',
},
],
}),
])
def test_get_collection_versions(api_version, token_type, token_ins, response, monkeypatch):
api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
mock_open.side_effect = [
StringIO(to_text(json.dumps(response))),
]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
actual = api.get_collection_versions('namespace', 'collection')
assert actual == [u'1.0.0', u'1.0.1']
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \
'versions' % api_version
if token_ins:
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
@pytest.mark.parametrize('api_version, token_type, token_ins, responses', [
('v2', None, None, [
{
'count': 6,
'next': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/?page=2',
'previous': None,
'results': [
{
'version': '1.0.0',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.0',
},
{
'version': '1.0.1',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.1',
},
],
},
{
'count': 6,
'next': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/?page=3',
'previous': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions',
'results': [
{
'version': '1.0.2',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.2',
},
{
'version': '1.0.3',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.3',
},
],
},
{
'count': 6,
'next': None,
'previous': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/?page=2',
'results': [
{
'version': '1.0.4',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.4',
},
{
'version': '1.0.5',
'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.5',
},
],
},
]),
('v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), [
{
'count': 6,
'links': {
'next': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/?page=2',
'previous': None,
},
'data': [
{
'version': '1.0.0',
'href': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/1.0.0',
},
{
'version': '1.0.1',
'href': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/1.0.1',
},
],
},
{
'count': 6,
'links': {
'next': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/?page=3',
'previous': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions',
},
'data': [
{
'version': '1.0.2',
'href': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/1.0.2',
},
{
'version': '1.0.3',
'href': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/1.0.3',
},
],
},
{
'count': 6,
'links': {
'next': None,
'previous': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/?page=2',
},
'data': [
{
'version': '1.0.4',
'href': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/1.0.4',
},
{
'version': '1.0.5',
'href': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/1.0.5',
},
],
},
]),
])
def test_get_collection_versions_pagination(api_version, token_type, token_ins, responses, monkeypatch):
api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version, token_ins=token_ins)
if token_ins:
mock_token_get = MagicMock()
mock_token_get.return_value = 'my token'
monkeypatch.setattr(token_ins, 'get', mock_token_get)
mock_open = MagicMock()
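# each call to the patched open_url consumes the next response in order, emulating the server's pagination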
mock_open.side_effect = [StringIO(to_text(json.dumps(r))) for r in responses]
monkeypatch.setattr(galaxy_api, 'open_url', mock_open)
actual = api.get_collection_versions('namespace', 'collection')
assert actual == [u'1.0.0', u'1.0.1', u'1.0.2', u'1.0.3', u'1.0.4', u'1.0.5']
assert mock_open.call_count == 3
assert mock_open.mock_calls[0][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \
'versions' % api_version
assert mock_open.mock_calls[1][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \
'versions/?page=2' % api_version
assert mock_open.mock_calls[2][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \
'versions/?page=3' % api_version
if token_type:
assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_open.mock_calls[1][2]['headers']['Authorization'] == '%s my token' % token_type
assert mock_open.mock_calls[2][2]['headers']['Authorization'] == '%s my token' % token_type
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,466 |
ansible-test import sanity test fails to report some collection import errors
|
##### SUMMARY
When using `ansible-test sanity --test import` on a collection, it is possible for an import error in a module_util to go unreported. Additionally, this can lead to modules that use that module_util reporting import failures which do not indicate the actual source of the problem.
##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
ansible-test
##### ANSIBLE VERSION
devel
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
1. Check out https://github.com/aristanetworks/ansible-cvp/releases/tag/v1.0.0
2. Copy the `arista` directory into a collection root.
3. Run the import sanity test: `ansible-test sanity --test import --docker -v --python 3.7`
##### EXPECTED RESULTS
```
ERROR: Found 9 import issue(s) on python 3.7 which need to be resolved:
ERROR: plugins/module_utils/cv_api2018.py:39:0: traceback: ModuleNotFoundError: No module named 'cv_client_errors'
ERROR: plugins/module_utils/cv_api2019.py:39:0: traceback: ModuleNotFoundError: No module named 'cv_client'
ERROR: plugins/module_utils/cv_client.py:99:0: traceback: ModuleNotFoundError: No module named 'requests'
ERROR: plugins/modules/cv_configlet.py:95:0: traceback: ModuleNotFoundError: No module named 'requests' (at plugins/module_utils/cv_client.py:99:0)
ERROR: plugins/modules/cv_container.py:111:0: traceback: ModuleNotFoundError: No module named 'requests' (at plugins/module_utils/cv_client.py:99:0)
ERROR: plugins/modules/cv_device.py:130:25: traceback: SyntaxError: invalid syntax
ERROR: plugins/modules/cv_facts.py:62:0: traceback: ModuleNotFoundError: No module named 'requests' (at plugins/module_utils/cv_client.py:99:0)
ERROR: plugins/modules/cv_image.py:93:25: traceback: SyntaxError: invalid syntax
ERROR: plugins/modules/cv_task.py:86:0: traceback: ModuleNotFoundError: No module named 'requests' (at plugins/module_utils/cv_client.py:99:0)
```
##### ACTUAL RESULTS
```
ERROR: Found 6 import issue(s) on python 3.7 which need to be resolved:
ERROR: plugins/modules/cv_configlet.py:95:0: traceback: ImportError: cannot import name 'CvpClient' from 'ansible_collections.arista.cvp.plugins.module_utils.cv_client' (/root/ansible/ansible_collections/arista/cvp/plugins/module_utils/cv_client.py)
ERROR: plugins/modules/cv_container.py:111:0: traceback: ImportError: cannot import name 'CvpClient' from 'ansible_collections.arista.cvp.plugins.module_utils.cv_client' (/root/ansible/ansible_collections/arista/cvp/plugins/module_utils/cv_client.py)
ERROR: plugins/modules/cv_device.py:130:25: traceback: SyntaxError: invalid syntax
ERROR: plugins/modules/cv_facts.py:62:0: traceback: ImportError: cannot import name 'CvpClient' from 'ansible_collections.arista.cvp.plugins.module_utils.cv_client' (/root/ansible/ansible_collections/arista/cvp/plugins/module_utils/cv_client.py)
ERROR: plugins/modules/cv_image.py:93:25: traceback: SyntaxError: invalid syntax
ERROR: plugins/modules/cv_task.py:86:0: traceback: ImportError: cannot import name 'CvpClient' from 'ansible_collections.arista.cvp.plugins.module_utils.cv_client' (/root/ansible/ansible_collections/arista/cvp/plugins/module_utils/cv_client.py)
```
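The "(at file:line:col)" suffix in the expected output comes from scanning the traceback for the deepest frame inside the source tree, so the report can point at the module_util where the import actually failed. A simplified sketch of that attribution idea (illustrative only — the real logic lives in `test_python_module` in `importer.py` below; `attribute_error` is a made-up helper name):
```python
import os
import sys
import traceback

def attribute_error(base_dir, path):
    """Build the ' (at file:line)' suffix for an exception caught while importing `path`.

    Simplified sketch; must be called from inside an ``except`` block.
    """
    exc_tb = sys.exc_info()[2]
    base_path = base_dir + os.path.sep
    source = None
    # extract_tb() lists frames outermost-first; reversed() walks from the
    # innermost frame outward, so the first in-tree frame found is the
    # deepest one -- e.g. the module_util where the import blew up.
    for frame in reversed(traceback.extract_tb(exc_tb)):
        if frame[0].startswith(base_path):
            source = (os.path.relpath(frame[0], base_path), frame[1] or 0)
            break
    if source and source[0] != path:
        return ' (at %s:%d)' % source
    return ''
```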
|
https://github.com/ansible/ansible/issues/64466
|
https://github.com/ansible/ansible/pull/64467
|
78be0dcbc8d92b09b54f88e7f83dcea361f78c3c
|
adcf9458f1732b02bd709d60fa294e66b0607b75
| 2019-11-05T22:39:03Z |
python
| 2019-11-06T00:06:57Z |
test/lib/ansible_test/_data/sanity/import/importer.py
|
#!/usr/bin/env python
"""Import the given python module(s) and report error(s) encountered."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
def main():
"""
Main program function used to isolate globals from imported code.
Changes to globals in imported modules on Python 2.x will overwrite our own globals.
"""
import contextlib
import os
import re
import runpy
import sys
import traceback
import types
import warnings
ansible_path = os.environ['PYTHONPATH']
temp_path = os.environ['SANITY_TEMP_PATH'] + os.path.sep
collection_full_name = os.environ.get('SANITY_COLLECTION_FULL_NAME')
try:
# noinspection PyCompatibility
from importlib import import_module
except ImportError:
def import_module(name):
__import__(name)
return sys.modules[name]
try:
# noinspection PyCompatibility
from StringIO import StringIO
except ImportError:
from io import StringIO
# pre-load an empty ansible package to prevent unwanted code in __init__.py from loading
# without this the ansible.release import there would pull in many Python modules which Ansible modules should not have access to
ansible_module = types.ModuleType('ansible')
ansible_module.__file__ = os.path.join(os.environ['PYTHONPATH'], 'ansible', '__init__.py')
ansible_module.__path__ = [os.path.dirname(ansible_module.__file__)]
ansible_module.__package__ = 'ansible'
sys.modules['ansible'] = ansible_module
if collection_full_name:
# allow importing code from collections when testing a collection
from ansible.utils.collection_loader import AnsibleCollectionLoader
from ansible.module_utils._text import to_bytes
def get_source(self, fullname):
mod = sys.modules.get(fullname)
if not mod:
mod = self.load_module(fullname)
with open(to_bytes(mod.__file__), 'rb') as mod_file:
source = mod_file.read()
return source
def get_code(self, fullname):
return compile(source=self.get_source(fullname), filename=self.get_filename(fullname), mode='exec', flags=0, dont_inherit=True)
def is_package(self, fullname):
return self.get_filename(fullname).endswith('__init__.py')
def get_filename(self, fullname):
mod = sys.modules.get(fullname) or self.load_module(fullname)
return mod.__file__
# monkeypatch collection loader to work with runpy
# remove this (and the associated code above) once implemented natively in the collection loader
AnsibleCollectionLoader.get_source = get_source
AnsibleCollectionLoader.get_code = get_code
AnsibleCollectionLoader.is_package = is_package
AnsibleCollectionLoader.get_filename = get_filename
collection_loader = AnsibleCollectionLoader()
# noinspection PyCallingNonCallable
sys.meta_path.insert(0, collection_loader)
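# inserted at index 0 so the collection loader takes precedence over the standard finders on sys.meta_path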
else:
# do not support collection loading when not testing a collection
collection_loader = None
class ImporterAnsibleModuleException(Exception):
"""Exception thrown during initialization of ImporterAnsibleModule."""
class ImporterAnsibleModule:
"""Replacement for AnsibleModule to support import testing."""
def __init__(self, *args, **kwargs):
raise ImporterAnsibleModuleException()
class ImportBlacklist:
"""Blacklist inappropriate imports."""
def __init__(self, path, name):
self.path = path
self.name = name
self.loaded_modules = set()
def find_module(self, fullname, path=None):
"""Return self if the given fullname is blacklisted, otherwise return None.
:param fullname: str
:param path: str
:return: ImportBlacklist | None
"""
if fullname in self.loaded_modules:
return None # ignore modules that are already being loaded
if is_name_in_namepace(fullname, ['ansible']):
if fullname in ('ansible.module_utils.basic', 'ansible.module_utils.common.removed'):
return self # intercept loading so we can modify the result
if is_name_in_namepace(fullname, ['ansible.module_utils', self.name]):
return None # module_utils and module under test are always allowed
if os.path.exists(convert_ansible_name_to_absolute_path(fullname)):
return self # blacklist ansible files that exist
return None # ansible file does not exist, do not blacklist
if is_name_in_namepace(fullname, ['ansible_collections']):
if not collection_loader:
return self # blacklist collections when we are not testing a collection
if is_name_in_namepace(fullname, ['ansible_collections...plugins.module_utils', self.name]):
return None # module_utils and module under test are always allowed
if collection_loader.find_module(fullname, path):
return self # blacklist collection files that exist
return None # collection file does not exist, do not blacklist
# not a namespace we care about
return None
def load_module(self, fullname):
"""Raise an ImportError.
:type fullname: str
"""
if fullname == 'ansible.module_utils.basic':
module = self.__load_module(fullname)
# stop Ansible module execution during AnsibleModule instantiation
module.AnsibleModule = ImporterAnsibleModule
# no-op for _load_params since it may be called before instantiating AnsibleModule
module._load_params = lambda *args, **kwargs: {} # pylint: disable=protected-access
return module
if fullname == 'ansible.module_utils.common.removed':
module = self.__load_module(fullname)
# no-op for removed_module since it is called in place of AnsibleModule instantiation
module.removed_module = lambda *args, **kwargs: None
return module
raise ImportError('import of "%s" is not allowed in this context' % fullname)
def __load_module(self, fullname):
"""Load the requested module while avoiding infinite recursion.
:type fullname: str
:rtype: module
"""
self.loaded_modules.add(fullname)
return import_module(fullname)
def run():
"""Main program function."""
base_dir = os.getcwd()
messages = set()
for path in sys.argv[1:] or sys.stdin.read().splitlines():
name = convert_relative_path_to_name(path)
test_python_module(path, name, base_dir, messages)
if messages:
exit(10)
def test_python_module(path, name, base_dir, messages):
"""Test the given python module by importing it.
:type path: str
:type name: str
:type base_dir: str
:type messages: set[str]
"""
if name in sys.modules:
return # cannot be tested because it has already been loaded
is_ansible_module = (path.startswith('lib/ansible/modules/') or path.startswith('plugins/modules/')) and os.path.basename(path) != '__init__.py'
run_main = is_ansible_module
if path == 'lib/ansible/modules/utilities/logic/async_wrapper.py':
# async_wrapper is a non-standard Ansible module (does not use AnsibleModule) so we cannot test the main function
run_main = False
capture_normal = Capture()
capture_main = Capture()
try:
with monitor_sys_modules(path, messages):
with blacklist_imports(path, name, messages):
with capture_output(capture_normal):
import_module(name)
if run_main:
with monitor_sys_modules(path, messages):
with blacklist_imports(path, name, messages):
with capture_output(capture_main):
runpy.run_module(name, run_name='__main__')
except ImporterAnsibleModuleException:
# module instantiated AnsibleModule without raising an exception
pass
except BaseException as ex: # pylint: disable=locally-disabled, broad-except
# intentionally catch all exceptions, including calls to sys.exit
exc_type, _exc, exc_tb = sys.exc_info()
message = str(ex)
results = list(reversed(traceback.extract_tb(exc_tb)))
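# extract_tb() lists frames outermost-first, so after reversed() iteration starts at the innermost frame; 'source' below keeps the first (deepest) in-tree frame encountered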
line = 0
offset = 0
full_path = os.path.join(base_dir, path)
base_path = base_dir + os.path.sep
source = None
# avoid line wraps in messages
message = re.sub(r'\n *', ': ', message)
for result in results:
if result[0] == full_path:
# save the line number for the file under test
line = result[1] or 0
if not source and result[0].startswith(base_path) and not result[0].startswith(temp_path):
# save the first path and line number in the traceback which is in our source tree
source = (os.path.relpath(result[0], base_path), result[1] or 0, 0)
if isinstance(ex, SyntaxError):
# SyntaxError has better information than the traceback
if ex.filename == full_path: # pylint: disable=locally-disabled, no-member
# syntax error was reported in the file under test
line = ex.lineno or 0 # pylint: disable=locally-disabled, no-member
offset = ex.offset or 0 # pylint: disable=locally-disabled, no-member
elif ex.filename.startswith(base_path) and not ex.filename.startswith(temp_path): # pylint: disable=locally-disabled, no-member
# syntax error was reported in our source tree
source = (os.path.relpath(ex.filename, base_path), ex.lineno or 0, ex.offset or 0) # pylint: disable=locally-disabled, no-member
# remove the filename and line number from the message
# either it was extracted above, or it's not really useful information
message = re.sub(r' \(.*?, line [0-9]+\)$', '', message)
if source and source[0] != path:
message += ' (at %s:%d:%d)' % (source[0], source[1], source[2])
report_message(path, line, offset, 'traceback', '%s: %s' % (exc_type.__name__, message), messages)
finally:
capture_report(path, capture_normal, messages)
capture_report(path, capture_main, messages)
def is_name_in_namepace(name, namespaces):
"""Returns True if the given name is one of the given namespaces, otherwise returns False."""
name_parts = name.split('.')
for namespace in namespaces:
namespace_parts = namespace.split('.')
length = min(len(name_parts), len(namespace_parts))
truncated_name = name_parts[0:length]
truncated_namespace = namespace_parts[0:length]
# empty parts in the namespace are treated as wildcards
# to simplify the comparison, use those empty parts to indicate the positions in the name to be empty as well
for idx, part in enumerate(truncated_namespace):
if not part:
truncated_name[idx] = part
# example: name=ansible, allowed_name=ansible.module_utils
# example: name=ansible.module_utils.system.ping, allowed_name=ansible.module_utils
if truncated_name == truncated_namespace:
return True
return False
def check_sys_modules(path, before, messages):
"""Check for unwanted changes to sys.modules.
:type path: str
:type before: dict[str, module]
:type messages: set[str]
"""
after = sys.modules
removed = set(before.keys()) - set(after.keys())
changed = set(key for key, value in before.items() if key in after and value != after[key])
# additions are checked by our custom PEP 302 loader, so we don't need to check them again here
for module in sorted(removed):
report_message(path, 0, 0, 'unload', 'unloading of "%s" in sys.modules is not supported' % module, messages)
for module in sorted(changed):
report_message(path, 0, 0, 'reload', 'reloading of "%s" in sys.modules is not supported' % module, messages)
def convert_ansible_name_to_absolute_path(name):
"""Calculate the module path from the given name.
:type name: str
:rtype: str
"""
return os.path.join(ansible_path, name.replace('.', os.path.sep))
def convert_relative_path_to_name(path):
"""Calculate the module name from the given path.
:type path: str
:rtype: str
"""
if path.endswith('/__init__.py'):
clean_path = os.path.dirname(path)
else:
clean_path = path
clean_path = os.path.splitext(clean_path)[0]
name = clean_path.replace(os.path.sep, '.')
if collection_loader:
# when testing collections the relative paths (and names) being tested are within the collection under test
name = 'ansible_collections.%s.%s' % (collection_full_name, name)
else:
# when testing ansible all files being imported reside under the lib directory
name = name[len('lib/'):]
return name
class Capture:
"""Captured output and/or exception."""
def __init__(self):
self.stdout = StringIO()
self.stderr = StringIO()
def capture_report(path, capture, messages):
"""Report on captured output.
:type path: str
:type capture: Capture
:type messages: set[str]
"""
if capture.stdout.getvalue():
first = capture.stdout.getvalue().strip().splitlines()[0].strip()
report_message(path, 0, 0, 'stdout', first, messages)
if capture.stderr.getvalue():
first = capture.stderr.getvalue().strip().splitlines()[0].strip()
report_message(path, 0, 0, 'stderr', first, messages)
def report_message(path, line, column, code, message, messages):
"""Report message if not already reported.
:type path: str
:type line: int
:type column: int
:type code: str
:type message: str
:type messages: set[str]
"""
message = '%s:%d:%d: %s: %s' % (path, line, column, code, message)
if message not in messages:
messages.add(message)
print(message)
@contextlib.contextmanager
def blacklist_imports(path, name, messages):
"""Blacklist imports.
:type path: str
:type name: str
:type messages: set[str]
"""
blacklist = ImportBlacklist(path, name)
sys.meta_path.insert(0, blacklist)
try:
yield
finally:
if sys.meta_path[0] != blacklist:
report_message(path, 0, 0, 'metapath', 'changes to sys.meta_path[0] are not permitted', messages)
while blacklist in sys.meta_path:
sys.meta_path.remove(blacklist)
@contextlib.contextmanager
def monitor_sys_modules(path, messages):
"""Monitor sys.modules for unwanted changes, reverting any additions made to our own namespaces."""
snapshot = sys.modules.copy()
try:
yield
finally:
check_sys_modules(path, snapshot, messages)
for key in set(sys.modules.keys()) - set(snapshot.keys()):
if is_name_in_namepace(key, ('ansible', 'ansible_collections')):
del sys.modules[key] # only unload our own code since we know it's native Python
@contextlib.contextmanager
def capture_output(capture):
"""Capture sys.stdout and sys.stderr.
:type capture: Capture
"""
old_stdout = sys.stdout
old_stderr = sys.stderr
sys.stdout = capture.stdout
sys.stderr = capture.stderr
# clear all warnings registries to make all warnings available
for module in sys.modules.values():
try:
module.__warningregistry__.clear()
except AttributeError:
pass
with warnings.catch_warnings():
warnings.simplefilter('error')
try:
yield
finally:
sys.stdout = old_stdout
sys.stderr = old_stderr
run()
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,382 |
docker_login writes invalid config.json
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
docker_login creates invalid JSON output when the config file did not exist before.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
docker_login
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.6
config file = /c/Users/<user>/Documents/Git/deployment/ansible/ansible.cfg
configured module search path = ['/home/<user>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Oct 7 2019, 12:59:55) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/c/Users/<user>/Documents/Git/deployment/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/c/Users/<user>/Documents/Git/deployment/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=30m
DEFAULT_HASH_BEHAVIOUR(/c/Users/<user>/Documents/Git/deployment/ansible/ansible.cfg) = merge
DEFAULT_LOG_PATH(/c/Users/<user>/Documents/Git/deployment/ansible/ansible.cfg) = /tmp/ansible.log
DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /home/<user>/.ansible_vault_pass.txt
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 18.04 in WSL
Target OS: coreos with pypy and docker==4.1.0
```
Python 3.6.9 (5da45ced70e515f94686be0df47c59abd1348ebc, Oct 17 2019, 22:59:56)
[PyPy 7.2.0 with GCC 8.2.0]
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
The config.json must not exist. Then execute the docker_login module.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Login into private registry
docker_login:
config_path: /home/<os-user>/.docker/config.json
registry_url: "{{ docker.registry.url }}"
email: "{{ docker.registry.email }}"
username: "{{ docker.registry.username }}"
password: "{{ docker.registry.password }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
A valid config.json is created
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The following invalid json is created:
I replaced the actual values with placeholders; the values themselves are correct.
```json
{
"auths": {}
} "<registry-url>": {
"auth": "<auth>",
"email": "<email>"
}
}
}
```
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [docker : Login into private registry] ********************************************************************************************************************************************************************task path: /c/Users/<user>/Documents/Git/deployment/ansible/roles/docker/tasks/docker_private_registry.yml:10
Using module file /usr/local/lib/python3.6/dist-packages/ansible/modules/cloud/docker/docker_login.py
Pipelining is enabled.
<ip> ESTABLISH SSH CONNECTION FOR USER: <os-user>
<ip> SSH: EXEC ssh -vvv -o ControlMaster=auto -o ControlPersist=30m -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="<os-user>"' -o ConnectTimeout=10 -o ControlPath=/home/<user>/.ansible/cp/8c7df41a0d 178.22.69.100 '/bin/sh -c '"'"'/opt/bin/python && sleep 0'"'"''
<ip> (0, b'\n{"changed": true, "login_result": {"IdentityToken": "", "Status": "Login Succeeded"}, "invocation": {"module_args": {"config_path": "/home/<os-user>/.docker/config.json", "registry_url": "<registry>", "email": "<email>", "username": "<user>", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "debug": true, "docker_host": "unix://var/run/docker.sock", "tls_hostname": "localhost", "api_version": "auto", "timeout": 60, "tls": false, "validate_certs": false, "reauthorize": false, "state": "present", "ca_cert": null, "client_cert": null, "client_key": null, "ssl_version": null}}}\n', b'OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017\r\ndebug1: Reading configuration data /home/<user>/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 10163\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n')
changed: [sandbox-oh-01] => {
"changed": true,
"invocation": {
"module_args": {
"api_version": "auto",
"ca_cert": null,
"client_cert": null,
"client_key": null,
"config_path": "/home/core/.docker/config.json",
"debug": true,
"docker_host": "unix://var/run/docker.sock",
"email": "<email>",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"reauthorize": false,
"registry_url": "<registry>",
"ssl_version": null,
"state": "present",
"timeout": 60,
"tls": false,
"tls_hostname": "localhost",
"username": "<user>",
"validate_certs": false
}
},
"login_result": {
"IdentityToken": "",
"Status": "Login Succeeded"
}
}
META: ran handlers
```
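Given that the target interpreter is PyPy (which has no reference-counting GC), one plausible mechanism for the mangled file is that `write_config()` passes an anonymous file object to `json.dump()` and never closes it, so two buffered writers can flush out of order at shutdown and the short `{"auths": {}}` stub overwrites the start of the full config. A minimal sketch of a deterministic write (an illustration of the idea, not necessarily the fix in the linked PR):
```python
import json

def write_config(path, config):
    # Serialize first, then write inside a context manager so the handle
    # is flushed and closed deterministically -- no reliance on garbage
    # collection to close the file, which PyPy may delay.
    content = json.dumps(config, indent=5, sort_keys=True)
    with open(path, 'w') as fh:
        fh.write(content)
```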
|
https://github.com/ansible/ansible/issues/64382
|
https://github.com/ansible/ansible/pull/64392
|
9a8d73456c1fbd1fa6699a2665209a0d59425111
|
52c4c1b00dd1aac5c6154b8b734aea47984056b2
| 2019-11-04T11:57:51Z |
python
| 2019-11-06T08:40:30Z |
lib/ansible/modules/cloud/docker/docker_login.py
|
#!/usr/bin/python
#
# (c) 2016 Olaf Kilian <[email protected]>
# Chris Houseknecht, <[email protected]>
# James Tanner, <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: docker_login
short_description: Log into a Docker registry.
version_added: "2.0"
description:
- Provides functionality similar to the "docker login" command.
- Authenticate with a docker registry and add the credentials to your local Docker config file. Adding the
credentials to the config files allows future connections to the registry using tools such as Ansible's Docker
modules, the Docker CLI and Docker SDK for Python without needing to provide credentials.
- Running in check mode will perform the authentication without updating the config file.
options:
registry_url:
required: False
description:
- The registry URL.
type: str
default: "https://index.docker.io/v1/"
aliases:
- registry
- url
username:
description:
- The username for the registry account
type: str
required: yes
password:
description:
- The plaintext password for the registry account
type: str
required: yes
email:
required: False
description:
- "The email address for the registry account."
type: str
reauthorize:
description:
- Refresh existing authentication found in the configuration file.
type: bool
default: no
aliases:
- reauth
config_path:
description:
- Custom path to the Docker CLI configuration file.
type: path
default: ~/.docker/config.json
aliases:
- dockercfg_path
state:
version_added: '2.3'
description:
- This controls the current state of the user. C(present) will log a user in, C(absent) will log them out.
- To logout you only need the registry server, which defaults to DockerHub.
- Before 2.1 you could ONLY log in.
- Docker does not support 'logout' with a custom config file.
type: str
default: 'present'
choices: ['present', 'absent']
extends_documentation_fragment:
- docker
- docker.docker_py_1_documentation
requirements:
- "L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 1.8.0 (use L(docker-py,https://pypi.org/project/docker-py/) for Python 2.6)"
- "Docker API >= 1.20"
- "Only to be able to logout, that is for I(state) = C(absent): the C(docker) command line utility"
author:
- Olaf Kilian (@olsaki) <[email protected]>
- Chris Houseknecht (@chouseknecht)
'''
EXAMPLES = '''
- name: Log into DockerHub
docker_login:
username: docker
password: rekcod
- name: Log into private registry and force re-authorization
docker_login:
registry: your.private.registry.io
username: yourself
password: secrets3
reauthorize: yes
- name: Log into DockerHub using a custom config file
docker_login:
username: docker
password: rekcod
config_path: /tmp/.mydockercfg
- name: Log out of DockerHub
docker_login:
state: absent
'''
RETURN = '''
login_results:
description: Results from the login.
returned: when state='present'
type: dict
sample: {
"email": "[email protected]",
"serveraddress": "localhost:5000",
"username": "testuser"
}
'''
import base64
import json
import os
import re
import traceback
try:
from docker.errors import DockerException
except ImportError:
# missing Docker SDK for Python handled in ansible.module_utils.docker.common
pass
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.docker.common import (
AnsibleDockerClient,
DEFAULT_DOCKER_REGISTRY,
DockerBaseClass,
EMAIL_REGEX,
RequestException,
)
class LoginManager(DockerBaseClass):
def __init__(self, client, results):
super(LoginManager, self).__init__()
self.client = client
self.results = results
parameters = self.client.module.params
self.check_mode = self.client.check_mode
self.registry_url = parameters.get('registry_url')
self.username = parameters.get('username')
self.password = parameters.get('password')
self.email = parameters.get('email')
self.reauthorize = parameters.get('reauthorize')
self.config_path = parameters.get('config_path')
if parameters['state'] == 'present':
self.login()
else:
self.logout()
def fail(self, msg):
self.client.fail(msg)
def login(self):
'''
Log into the registry with provided username/password. On success update the config
file with the new authorization.
:return: None
'''
if self.email and not re.match(EMAIL_REGEX, self.email):
self.fail("Parameter error: the email address appears to be incorrect. Expecting it to match "
"/%s/" % (EMAIL_REGEX))
self.results['actions'].append("Logged into %s" % (self.registry_url))
self.log("Log into %s with username %s" % (self.registry_url, self.username))
try:
response = self.client.login(
self.username,
password=self.password,
email=self.email,
registry=self.registry_url,
reauth=self.reauthorize,
dockercfg_path=self.config_path
)
except Exception as exc:
self.fail("Logging into %s for user %s failed - %s" % (self.registry_url, self.username, str(exc)))
# If the user is already logged in, the response contains the password for that user.
# It returns the correct password if the user is logged in and a wrong password is given.
if 'password' in response:
del response['password']
self.results['login_result'] = response
if not self.check_mode:
self.update_config_file()
def logout(self):
'''
Log out of the registry. On success update the config file.
TODO: port to API once docker.py supports this.
:return: None
'''
cmd = [self.client.module.get_bin_path('docker', True), "logout", self.registry_url]
# TODO: docker does not support config file in logout, restore this when they do
# if self.config_path and self.config_file_exists(self.config_path):
# cmd.extend(["--config", self.config_path])
(rc, out, err) = self.client.module.run_command(cmd)
if rc != 0:
self.fail("Could not log out: %s" % err)
if 'Not logged in to ' in out:
self.results['changed'] = False
elif 'Removing login credentials for ' in out:
self.results['changed'] = True
else:
self.client.module.warn('Unable to determine whether logout was successful.')
# Adding output to actions, so that user can inspect what was actually returned
self.results['actions'].append(to_text(out))
def config_file_exists(self, path):
if os.path.exists(path):
self.log("Configuration file %s exists" % (path))
return True
self.log("Configuration file %s not found." % (path))
return False
def create_config_file(self, path):
'''
Create a config file with a JSON blob containing an auths key.
:return: None
'''
self.log("Creating docker config file %s" % (path))
config_path_dir = os.path.dirname(path)
if not os.path.exists(config_path_dir):
try:
os.makedirs(config_path_dir)
except Exception as exc:
self.fail("Error: failed to create %s - %s" % (config_path_dir, str(exc)))
self.write_config(path, dict(auths=dict()))
def write_config(self, path, config):
try:
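# NOTE: the file object passed to json.dump() below is never explicitly closed; on interpreters without
# reference-counting GC (such as the PyPy target in this report) the buffered write may be flushed late --
# the suspected cause of the corrupted config.json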
json.dump(config, open(path, "w"), indent=5, sort_keys=True)
except Exception as exc:
self.fail("Error: failed to write config to %s - %s" % (path, str(exc)))
def update_config_file(self):
'''
If the authorization not stored in the config file or reauthorize is True,
update the config file with the new authorization.
:return: None
'''
path = self.config_path
if not self.config_file_exists(path):
self.create_config_file(path)
try:
# read the existing config
config = json.load(open(path, "r"))
except ValueError:
self.log("Error reading config from %s" % (path))
config = dict()
if not config.get('auths'):
self.log("Adding auths dict to config.")
config['auths'] = dict()
if not config['auths'].get(self.registry_url):
self.log("Adding registry_url %s to auths." % (self.registry_url))
config['auths'][self.registry_url] = dict()
b64auth = base64.b64encode(
to_bytes(self.username) + b':' + to_bytes(self.password)
)
auth = to_text(b64auth)
encoded_credentials = dict(
auth=auth,
email=self.email
)
if config['auths'][self.registry_url] != encoded_credentials or self.reauthorize:
# Update the config file with the new authorization
config['auths'][self.registry_url] = encoded_credentials
self.log("Updating config file %s with new authorization for %s" % (path, self.registry_url))
self.results['actions'].append("Updated config file %s with new authorization for %s" % (
path, self.registry_url))
self.results['changed'] = True
self.write_config(path, config)
def main():
argument_spec = dict(
registry_url=dict(type='str', default=DEFAULT_DOCKER_REGISTRY, aliases=['registry', 'url']),
username=dict(type='str'),
password=dict(type='str', no_log=True),
email=dict(type='str'),
reauthorize=dict(type='bool', default=False, aliases=['reauth']),
state=dict(type='str', default='present', choices=['present', 'absent']),
config_path=dict(type='path', default='~/.docker/config.json', aliases=['dockercfg_path']),
)
required_if = [
('state', 'present', ['username', 'password']),
]
client = AnsibleDockerClient(
argument_spec=argument_spec,
supports_check_mode=True,
required_if=required_if,
min_docker_api_version='1.20',
)
try:
results = dict(
changed=False,
actions=[],
login_result={}
)
LoginManager(client, results)
if 'actions' in results:
del results['actions']
client.module.exit_json(**results)
except DockerException as e:
client.fail('An unexpected docker error occurred: {0}'.format(e), exception=traceback.format_exc())
except RequestException as e:
client.fail('An unexpected requests error occurred when docker-py tried to talk to the docker daemon: {0}'.format(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,155 |
postgresql_query does not apply an SQL file containing a COPY command
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
postgresql_query does not apply an SQL file containing a COPY command.
For example:
```sql
--
-- PostgreSQL database dump
--
-- Dumped from database version 11.1
-- Dumped by pg_dump version 11.1
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;
--
-- Data for Name: sites; Type: TABLE DATA; Schema: public; Owner: billing
--
--insert into public.sites (site,description) values ('vs-c06','Инфраструктура');
COPY public.sites (site, description, link, departmentid, purposeid, os_tenant, vmware_resourcepool) FROM stdin;
vs-c06 для инфраструктуры \N 5 vs-c06_5d515af791aca \N
\.
--
-- PostgreSQL database dump complete
--
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
postgresql_query
##### ANSIBLE VERSION
```paste below
ansible 2.8.5
config file = /home/davydov/playbook/ansible.cfg
configured module search path = [u'/home/davydov/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
```paste below
DEFAULT_PRIVATE_KEY_FILE(/home/ivanov/playbook/ansible.cfg) = /home/davydov/.ssh/ivanov
DEFAULT_REMOTE_USER(/home/ivanov/playbook/ansible.cfg) = ivanov
HOST_KEY_CHECKING(/home/ivanov/playbook/ansible.cfg) = False
```
##### OS / ENVIRONMENT
CentOS Linux release 7.7.1908 (Core)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- name: sql
hosts: db
tasks:
- name: apply script
become: true
become_user: postgres
postgresql_query:
path_to_script: /tmp/test.sql
db: "{{ db_name }}"
login_user: "{{ db_user }}"
login_password: "{{ db_password }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
The full traceback is:
WARNING: The below traceback may *not* be related to the actual failure.
File "/tmp/ansible_postgresql_query_payload_Y2IpGu/__main__.py", line 211, in main
cursor.execute(query, arguments)
File "/usr/lib64/python2.7/site-packages/psycopg2/extras.py", line 120, in execute
return super(DictCursor, self).execute(query, vars)
fatal: [dbserver]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"ca_cert": null,
"db": "new_database_test",
"login_host": "",
"login_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"login_unix_socket": "",
"login_user": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"named_args": null,
"path_to_script": "/tmp/test.sql",
"port": 5432,
"positional_args": null,
"query": null,
"session_role": null,
"ssl_mode": "prefer"
}
},
"msg": "Cannot execute SQL '--\n-- PostgreSQL database dump\n--\n\n-- Dumped from database version 11.1\n-- Dumped by pg_dump version 11.1\n\nSET statement_timeout = 0;\nSET lock_timeout = 0;\nSET idle_in_transaction_session_timeout = 0;\nSET client_encoding = 'UTF8';\nSET standard_conforming_strings = on;\nSELECT pg_catalog.set_config('search_path', '', false);\nSET check_function_bodies = false;\nSET client_min_messages = warning;\nSET row_security = off;\n\n--\n-- Data for Name: sites; Type: TABLE DATA; Schema: public; Owner: ********\n--\n\n--insert into public.sites (site,description) values ('vs-c06','Инфраструктура');\nCOPY public.sites (site, description, link, departmentid, purposeid, os_tenant, vmware_resourcepool) FROM stdin;\nvs-c07\tдля инфраструктуры\t\t\\N\t5\tvs-c06_5d515af791aca\t\\N\n\\.\n\n\n--\n-- PostgreSQL database dump complete\n--\n\n\n' None: syntax error at or near \"vs\"\nLINE 24: vs-c07 для инфраструктуры \\N 5 vs-c06_5d515af791aca \\N\n ^\n"
```
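The failure above is inherent to running the whole dump through `cursor.execute()`: psycopg2 cannot feed the inline rows that follow `COPY ... FROM stdin;`, so the data lines are parsed as SQL. As a hedged illustration (not the merged fix), psycopg2's documented `copy_expert()` can stream such data; the two-column table below is hypothetical and the connection details are assumptions:

```python
import io

import psycopg2

# Connection details are assumed for illustration only.
conn = psycopg2.connect(dbname='new_database_test')
cur = conn.cursor()

# Real dumps put the tab-separated rows between 'FROM stdin;' and the
# terminating '\.' line; here we stream one hypothetical row.
rows = io.StringIO(u'vs-c06\tinfrastructure\n')
cur.copy_expert('COPY public.sites (site, description) FROM STDIN', rows)

conn.commit()
conn.close()
```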
|
https://github.com/ansible/ansible/issues/64155
|
https://github.com/ansible/ansible/pull/64432
|
cd8ce16d4830782063692d897e57bd0af33ab5db
|
eb58f437fb4753e13a445ba9ec7ce020aa5a5e66
| 2019-10-31T15:50:55Z |
python
| 2019-11-06T14:28:16Z |
lib/ansible/modules/database/postgresql/postgresql_query.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Felix Archambault
# Copyright: (c) 2019, Andrew Klychkov (@Andersson007) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'supported_by': 'community',
'status': ['preview']
}
DOCUMENTATION = r'''
---
module: postgresql_query
short_description: Run PostgreSQL queries
description:
- Runs arbitrary PostgreSQL queries.
- Can run queries from SQL script files.
version_added: '2.8'
options:
query:
description:
- SQL query to run. Variables can be escaped with psycopg2 syntax U(http://initd.org/psycopg/docs/usage.html).
type: str
positional_args:
description:
- List of values to be passed as positional arguments to the query.
When the value is a list, it will be converted to PostgreSQL array.
- Mutually exclusive with I(named_args).
type: list
named_args:
description:
- Dictionary of key-value arguments to pass to the query.
When the value is a list, it will be converted to PostgreSQL array.
- Mutually exclusive with I(positional_args).
type: dict
path_to_script:
description:
- Path to SQL script on the remote host.
- Returns result of the last query in the script.
- Mutually exclusive with I(query).
type: path
session_role:
description:
- Switch to session_role after connecting. The specified session_role must
be a role that the current login_user is a member of.
- Permissions checking for SQL commands is carried out as though
the session_role were the one that had logged in originally.
type: str
db:
description:
- Name of database to connect to and run queries against.
type: str
aliases:
- login_db
autocommit:
description:
- Execute in autocommit mode when the query can't be run inside a transaction block
(e.g., VACUUM).
- Mutually exclusive with I(check_mode).
type: bool
default: no
version_added: '2.9'
author:
- Felix Archambault (@archf)
- Andrew Klychkov (@Andersson007)
- Will Rouesnel (@wrouesnel)
extends_documentation_fragment: postgres
'''
EXAMPLES = r'''
- name: Simple select query to acme db
postgresql_query:
db: acme
query: SELECT version()
- name: Select query to db acme with positional arguments and non-default credentials
postgresql_query:
db: acme
login_user: django
login_password: mysecretpass
query: SELECT * FROM acme WHERE id = %s AND story = %s
positional_args:
- 1
- test
- name: Select query to test_db with named_args
postgresql_query:
db: test_db
query: SELECT * FROM test WHERE id = %(id_val)s AND story = %(story_val)s
named_args:
id_val: 1
story_val: test
- name: Insert query to test_table in db test_db
postgresql_query:
db: test_db
query: INSERT INTO test_table (id, story) VALUES (2, 'my_long_story')
- name: Run queries from SQL script
postgresql_query:
db: test_db
path_to_script: /var/lib/pgsql/test.sql
positional_args:
- 1
- name: Example of using autocommit parameter
postgresql_query:
db: test_db
query: VACUUM
autocommit: yes
- name: >
Insert data to the column of array type using positional_args.
Note that we use quotes here, the same as for passing JSON, etc.
postgresql_query:
query: INSERT INTO test_table (array_column) VALUES (%s)
positional_args:
- '{1,2,3}'
# Pass list and string vars as positional_args
- name: Set vars
set_fact:
my_list:
- 1
- 2
- 3
my_arr: '{1, 2, 3}'
- name: Select from test table by passing positional_args as arrays
postgresql_query:
query: SELECT * FROM test_array_table WHERE arr_col1 = %s AND arr_col2 = %s
positional_args:
- '{{ my_list }}'
- '{{ my_arr|string }}'
'''
RETURN = r'''
query:
    description: Query that the module tried to execute.
returned: always
type: str
sample: 'SELECT * FROM bar'
statusmessage:
description: Attribute containing the message returned by the command.
returned: always
type: str
sample: 'INSERT 0 1'
query_result:
description:
- List of dictionaries in column:value form representing returned rows.
returned: changed
type: list
sample: [{"Column": "Value1"},{"Column": "Value2"}]
rowcount:
description: Number of affected rows.
returned: changed
type: int
sample: 5
'''
try:
from psycopg2 import ProgrammingError as Psycopg2ProgrammingError
from psycopg2.extras import DictCursor
except ImportError:
    # ProgrammingError is needed for checking 'no results to fetch' in main();
    # psycopg2 availability itself is checked by connect_to_db() in
    # ansible.module_utils.postgres
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.postgres import (
connect_to_db,
get_conn_params,
postgres_common_argument_spec,
)
from ansible.module_utils._text import to_native
from ansible.module_utils.six import iteritems
# ===========================================
# Module execution.
#
def list_to_pg_array(elem):
"""Convert the passed list to PostgreSQL array
represented as a string.
Args:
elem (list): List that needs to be converted.
Returns:
elem (str): String representation of PostgreSQL array.
"""
elem = str(elem).strip('[]')
elem = '{' + elem + '}'
return elem
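# Illustrative behaviour (comments added for clarity, not in the original module):
#   list_to_pg_array([1, 2, 3])  -> '{1, 2, 3}'
#   list_to_pg_array(['a', 'b']) -> "{'a', 'b'}"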
def convert_elements_to_pg_arrays(obj):
"""Convert list elements of the passed object
to PostgreSQL arrays represented as strings.
Args:
obj (dict or list): Object whose elements need to be converted.
Returns:
obj (dict or list): Object with converted elements.
"""
if isinstance(obj, dict):
for (key, elem) in iteritems(obj):
if isinstance(elem, list):
obj[key] = list_to_pg_array(elem)
elif isinstance(obj, list):
for i, elem in enumerate(obj):
if isinstance(elem, list):
obj[i] = list_to_pg_array(elem)
return obj
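# Illustrative behaviour (comments added for clarity, not in the original module):
#   convert_elements_to_pg_arrays({'ids': [1, 2]}) -> {'ids': '{1, 2}'}
#   convert_elements_to_pg_arrays([[1, 2], 'x'])   -> ['{1, 2}', 'x']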
def main():
argument_spec = postgres_common_argument_spec()
argument_spec.update(
query=dict(type='str'),
db=dict(type='str', aliases=['login_db']),
positional_args=dict(type='list'),
named_args=dict(type='dict'),
session_role=dict(type='str'),
path_to_script=dict(type='path'),
autocommit=dict(type='bool', default=False),
)
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=(('positional_args', 'named_args'),),
supports_check_mode=True,
)
query = module.params["query"]
positional_args = module.params["positional_args"]
named_args = module.params["named_args"]
path_to_script = module.params["path_to_script"]
autocommit = module.params["autocommit"]
if autocommit and module.check_mode:
module.fail_json(msg="Using autocommit is mutually exclusive with check_mode")
if positional_args and named_args:
module.fail_json(msg="positional_args and named_args params are mutually exclusive")
if path_to_script and query:
module.fail_json(msg="path_to_script is mutually exclusive with query")
if positional_args:
positional_args = convert_elements_to_pg_arrays(positional_args)
elif named_args:
named_args = convert_elements_to_pg_arrays(named_args)
    if path_to_script:
        try:
            with open(path_to_script, 'r') as script_file:
                query = script_file.read()
        except Exception as e:
            module.fail_json(msg="Cannot read file '%s' : %s" % (path_to_script, to_native(e)))
conn_params = get_conn_params(module, module.params)
db_connection = connect_to_db(module, conn_params, autocommit=autocommit)
cursor = db_connection.cursor(cursor_factory=DictCursor)
# Prepare args:
if module.params.get("positional_args"):
arguments = module.params["positional_args"]
elif module.params.get("named_args"):
arguments = module.params["named_args"]
else:
arguments = None
# Set defaults:
changed = False
# Execute query:
try:
cursor.execute(query, arguments)
except Exception as e:
cursor.close()
db_connection.close()
module.fail_json(msg="Cannot execute SQL '%s' %s: %s" % (query, arguments, to_native(e)))
statusmessage = cursor.statusmessage
rowcount = cursor.rowcount
    try:
        query_result = [dict(row) for row in cursor.fetchall()]
    except Psycopg2ProgrammingError as e:
        if to_native(e) == 'no results to fetch':
            query_result = {}
        else:
            module.fail_json(msg="Cannot fetch rows from cursor: %s" % to_native(e))
    except Exception as e:
        module.fail_json(msg="Cannot fetch rows from cursor: %s" % to_native(e))
if 'SELECT' not in statusmessage:
if 'UPDATE' in statusmessage or 'INSERT' in statusmessage or 'DELETE' in statusmessage:
s = statusmessage.split()
if len(s) == 3:
if statusmessage.split()[2] != '0':
changed = True
elif len(s) == 2:
if statusmessage.split()[1] != '0':
changed = True
else:
changed = True
else:
changed = True
if module.check_mode:
db_connection.rollback()
else:
if not autocommit:
db_connection.commit()
kw = dict(
changed=changed,
query=cursor.query,
statusmessage=statusmessage,
query_result=query_result,
rowcount=rowcount if rowcount >= 0 else 0,
)
cursor.close()
db_connection.close()
module.exit_json(**kw)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,102 |
[Docs] Update documentation and docsite for Ansible 2.9
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
Update the documentation and publish the 2.9 version of Ansible documentation. Tasks include:
- [x] Change versions for the version switcher in conf.py in devel, stable-2.9, stable-2.8 and stable-2.7
- [x] Review changelogs in stable-2.9
- [x] Update 2.9 porting guide to point to stable-2.9
- [x] Update the Release Status grid on the Release and Maintenance page in devel and stable-2.9
- [x] Backport release Status grid to 2.8 and 2.7.
- [x] Update the version suggested for RHEL users in the installation docs, see https://github.com/ansible/ansible/pull/48173
- [x] Backport the updated instructions for backporting to `latest`, see https://github.com/ansible/ansible/pull/56578
- [x] Update the “too old” version number that suppresses the publication of “version_added” metadata older than a certain Ansible version, see https://github.com/ansible/ansible/pull/50097
- [x] Update the versions listed on the docs landing page - see https://github.com/ansible/docsite/pull/8
- [x] Update EOL banner on stable-2.6
- [x] Make sure all server-side redirects are updated as necessary - see https://github.com/ansible/docsite/pull/11
- [x] Add skeleton roadmap for 2.10
- [x] Add skeleton porting guide for 2.10
- [x] post release - update intersphinx links. see #66646 (should be done for ansible-base and Ansible releases)
- [x] post release (day or so later) - update sitemap for google. See https://github.com/ansible/docsite/pull/17
- [x] post release reindex swiftype search
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/64102
|
https://github.com/ansible/ansible/pull/64532
|
26e0e4be016ea341f3cdfdcdd07a17d433ed44cb
|
ac9c75d467af366d75ad8bfdeb5f16bdc0a90766
| 2019-10-30T13:48:25Z |
python
| 2019-11-06T19:33:17Z |
docs/docsite/rst/installation_guide/intro_installation.rst
|
.. _installation_guide:
.. _intro_installation_guide:
Installation Guide
==================
.. contents:: Topics
Welcome to the Ansible Installation Guide!
.. _what_will_be_installed:
Basics / What Will Be Installed
```````````````````````````````
Ansible by default manages machines over the SSH protocol.
Once Ansible is installed, it will not add a database, and there will be no daemons to start or keep running. You only need to install it on one machine (which could easily be a laptop) and it can manage an entire fleet of remote machines from that central point. When Ansible manages remote machines, it does not leave software installed or running on them, so there's no real question about how to upgrade Ansible when moving to a new version.
.. _what_version:
What Version To Pick?
`````````````````````
Because it runs so easily from source and does not require any installation of software on remote
machines, many users will actually track the development version.
Ansible's release cycles are usually about four months long. Due to this short release cycle,
minor bugs will generally be fixed in the next release versus maintaining backports on the stable branch.
Major bugs will still have maintenance releases when needed, though these are infrequent.
If you wish to run the latest released version of Ansible and you are running Red Hat Enterprise Linux (TM), CentOS, Fedora, Debian, or Ubuntu, we recommend using the OS package manager.
For other installation options, we recommend installing via ``pip``, which is the Python package manager.
If you wish to track the development release to use and test the latest features, we will share
information about running from source. It's not necessary to install the program to run from source.
.. _control_node_requirements:
Control Node Requirements
````````````````````````````
Currently Ansible can be run from any machine with Python 2 (version 2.7) or Python 3 (versions 3.5 and higher) installed. Windows isn't supported for the control node.
This includes Red Hat, Debian, CentOS, macOS, any of the BSDs, and so on.
When choosing a control node, bear in mind that any management system benefits from being run near the machines being managed. If you are running Ansible in a cloud, consider running it from a machine inside that cloud. In most cases this will work better than on the open Internet.
.. note::
macOS by default is configured for a small number of file handles, so if you want to use 15 or more forks you'll need to raise the ulimit with ``sudo launchctl limit maxfiles unlimited``. This command can also fix any "Too many open files" error.
.. warning::
Please note that some modules and plugins have additional requirements. For modules these need to be satisfied on the 'target' machine and should be listed in the module specific docs.
.. _managed_node_requirements:
Managed Node Requirements
`````````````````````````
On the managed nodes, you need a way to communicate, which is normally ssh. By
default this uses sftp. If that's not available, you can switch to scp in
:file:`ansible.cfg`. You also need Python 2 (version 2.6 or later) or Python 3 (version 3.5 or
later).
.. note::
* If you have SELinux enabled on remote nodes, you will also want to install
libselinux-python on them before using any copy/file/template related functions in Ansible. You
can use the :ref:`yum module<yum_module>` or :ref:`dnf module<dnf_module>` in Ansible to install this package on remote systems
that do not have it.
* By default, Ansible uses the python interpreter located at :file:`/usr/bin/python` to run its
modules. However, some Linux distributions may only have a Python 3 interpreter installed to
:file:`/usr/bin/python3` by default. On those systems, you may see an error like::
"module_stdout": "/bin/sh: /usr/bin/python: No such file or directory\r\n"
you can either set the :ref:`ansible_python_interpreter<ansible_python_interpreter>` inventory variable (see
:ref:`inventory`) to point at your interpreter or you can install a Python 2 interpreter for
modules to use. You will still need to set :ref:`ansible_python_interpreter<ansible_python_interpreter>` if the Python
2 interpreter is not installed to :command:`/usr/bin/python`.
* Ansible's :ref:`raw module<raw_module>`, and the :ref:`script module<script_module>`, do not depend
on a client side install of Python to run. Technically, you can use Ansible to install a compatible
version of Python using the :ref:`raw module<raw_module>`, which then allows you to use everything else.
For example, if you need to bootstrap Python 2 onto a RHEL-based system, you can install it
via
.. code-block:: shell
$ ansible myhost --become -m raw -a "yum install -y python2"
.. _installing_the_control_node:
Installing the Control Node
``````````````````````````````
.. _from_yum:
Latest Release via DNF or Yum
+++++++++++++++++++++++++++++
On Fedora:
.. code-block:: bash
$ sudo dnf install ansible
On RHEL and CentOS:
.. code-block:: bash
$ sudo yum install ansible
RPMs for RHEL 7 and RHEL 8 are available from the `Ansible Engine repository <https://access.redhat.com/articles/3174981>`_.
To enable the Ansible Engine repository for RHEL 8, run the following command:
.. code-block:: bash
$ sudo subscription-manager repos --enable ansible-2.8-for-rhel-8-x86_64-rpms
To enable the Ansible Engine repository for RHEL 7, run the following command:
.. code-block:: bash
$ sudo subscription-manager repos --enable rhel-7-server-ansible-2.8-rpms
RPMs for currently supported versions of RHEL, CentOS, and Fedora are available from `EPEL <https://fedoraproject.org/wiki/EPEL>`_ as well as `releases.ansible.com <https://releases.ansible.com/ansible/rpm>`_.
Ansible version 2.4 and later can manage earlier operating systems that contain Python 2.6 or higher.
You can also build an RPM yourself. From the root of a checkout or tarball, use the ``make rpm`` command to build an RPM you can distribute and install.
.. code-block:: bash
$ git clone https://github.com/ansible/ansible.git
$ cd ./ansible
$ make rpm
$ sudo rpm -Uvh ./rpm-build/ansible-*.noarch.rpm
.. _from_apt:
Latest Releases via Apt (Ubuntu)
++++++++++++++++++++++++++++++++
Ubuntu builds are available `in a PPA here <https://launchpad.net/~ansible/+archive/ubuntu/ansible>`_.
To configure the PPA on your machine and install ansible run these commands:
.. code-block:: bash
$ sudo apt update
$ sudo apt install software-properties-common
$ sudo apt-add-repository --yes --update ppa:ansible/ansible
$ sudo apt install ansible
.. note:: On older Ubuntu distributions, "software-properties-common" is called "python-software-properties". You may want to use ``apt-get`` instead of ``apt`` in older versions. Also, be aware that only newer distributions (i.e. 18.04, 18.10, etc.) have a ``-u`` or ``--update`` flag, so adjust your script accordingly.
Debian/Ubuntu packages can also be built from the source checkout, run:
.. code-block:: bash
$ make deb
You may also wish to run from source to get the latest, which is covered below.
Latest Releases via Apt (Debian)
++++++++++++++++++++++++++++++++
Debian users may leverage the same source as the Ubuntu PPA.
Add the following line to /etc/apt/sources.list:
.. code-block:: bash
deb http://ppa.launchpad.net/ansible/ansible/ubuntu trusty main
Then run these commands:
.. code-block:: bash
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367
$ sudo apt update
$ sudo apt install ansible
.. note:: This method has been verified with the Trusty sources in Debian Jessie and Stretch but may not be supported in earlier versions. You may want to use ``apt-get`` instead of ``apt`` in older versions.
Latest Releases via Portage (Gentoo)
++++++++++++++++++++++++++++++++++++
.. code-block:: bash
$ emerge -av app-admin/ansible
To install the newest version, you may need to unmask the ansible package prior to emerging:
.. code-block:: bash
$ echo 'app-admin/ansible' >> /etc/portage/package.accept_keywords
Latest Releases via pkg (FreeBSD)
+++++++++++++++++++++++++++++++++
Though Ansible works with both Python 2 and 3 versions, FreeBSD has different packages for each Python version.
So to install you can use:
.. code-block:: bash
$ sudo pkg install py27-ansible
or:
.. code-block:: bash
$ sudo pkg install py36-ansible
You may also wish to install from ports, run:
.. code-block:: bash
$ sudo make -C /usr/ports/sysutils/ansible install
You can also choose a specific version, for example ``ansible25``.
Older versions of FreeBSD worked with something like this (substitute for your choice of package manager):
.. code-block:: bash
$ sudo pkg install ansible
.. _on_macos:
Latest Releases on macOS
++++++++++++++++++++++++++
The preferred way to install Ansible on a Mac is via ``pip``.
The instructions can be found in `Latest Releases via Pip`_ section. If you are running macOS version 10.12 or older, then you should upgrade to the latest ``pip`` to connect to the Python Package Index securely.
.. _from_pkgutil:
Latest Releases via OpenCSW (Solaris)
+++++++++++++++++++++++++++++++++++++
Ansible is available for Solaris as `SysV package from OpenCSW <https://www.opencsw.org/packages/ansible/>`_.
.. code-block:: bash
# pkgadd -d http://get.opencsw.org/now
# /opt/csw/bin/pkgutil -i ansible
.. _from_pacman:
Latest Releases via Pacman (Arch Linux)
+++++++++++++++++++++++++++++++++++++++
Ansible is available in the Community repository::
$ pacman -S ansible
The AUR has a PKGBUILD for pulling directly from GitHub called `ansible-git <https://aur.archlinux.org/packages/ansible-git>`_.
Also see the `Ansible <https://wiki.archlinux.org/index.php/Ansible>`_ page on the ArchWiki.
.. _from_sbopkg:
Latest Releases via sbopkg (Slackware Linux)
++++++++++++++++++++++++++++++++++++++++++++
The Ansible build script is available in the `SlackBuilds.org <https://slackbuilds.org/apps/ansible/>`_ repository.
It can be built and installed using `sbopkg <https://sbopkg.org/>`_.
Create a queue with Ansible and all of its dependencies::
# sqg -p ansible
Build and install packages from the created queuefile (answer Q when sbopkg asks whether to use the queue or the package)::
# sbopkg -k -i ansible
.. _from swupd:
Latest Release via swupd (Clear Linux)
+++++++++++++++++++++++++++++++++++++++
Ansible and its dependencies are available as part of the sysadmin host management bundle::
$ sudo swupd bundle-add sysadmin-hostmgmt
Update of the software will be managed by the swupd tool::
$ sudo swupd update
.. _from_pip:
Latest Releases via Pip
+++++++++++++++++++++++
Ansible can be installed via ``pip``, the Python package manager. If ``pip`` isn't already available on your system, run the following commands to install it::
$ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
$ python get-pip.py --user
Then install Ansible [1]_::
$ pip install --user ansible
Or if you are looking for the latest development version::
$ pip install --user git+https://github.com/ansible/ansible.git@devel
If you are installing on macOS Mavericks (10.9), you may encounter some noise from your compiler. A workaround is to do the following::
$ CFLAGS=-Qunused-arguments CPPFLAGS=-Qunused-arguments pip install --user ansible
In order to use the ``paramiko`` connection plugin or modules that require ``paramiko``, install the required module [2]_::
$ pip install --user paramiko
Ansible can also be installed inside a new or existing ``virtualenv``::
$ python -m virtualenv ansible # Create a virtualenv if one does not already exist
$ source ansible/bin/activate # Activate the virtual environment
$ pip install ansible
If you wish to install Ansible globally, run the following commands::
$ sudo python get-pip.py
$ sudo pip install ansible
.. note::
Running ``pip`` with ``sudo`` will make global changes to the system. Since ``pip`` does not coordinate with system package managers, it could make changes to your system that leaves it in an inconsistent or non-functioning state. This is particularly true for macOS. Installing with ``--user`` is recommended unless you understand fully the implications of modifying global files on the system.
.. note::
Older versions of ``pip`` default to http://pypi.python.org/simple, which no longer works.
Please make sure you have the latest version of ``pip`` before installing Ansible.
If you have an older version of ``pip`` installed, you can upgrade by following `pip's upgrade instructions <https://pip.pypa.io/en/stable/installing/#upgrading-pip>`_ .
.. _tagged_releases:
Tarballs of Tagged Releases
+++++++++++++++++++++++++++
Packaging Ansible or wanting to build a local package yourself, but don't want to do a git checkout? Tarballs of releases are available on the `Ansible downloads <https://releases.ansible.com/ansible>`_ page.
These releases are also tagged in the `git repository <https://github.com/ansible/ansible/releases>`_ with the release version.
.. _from_source:
Running From Source
+++++++++++++++++++
Ansible is easy to run from source. You do not need ``root`` permissions
to use it and there is no software to actually install. No daemons
or database setup are required. Because of this, many users in our community use the
development version of Ansible all of the time so they can take advantage of new features
when they are implemented and easily contribute to the project. Because there is
nothing to install, following the development version is significantly easier than most
open source projects.
.. note::
If you want to use Ansible Tower as the Control Node, do not use a source installation of Ansible. Please use an OS package manager (like ``apt`` or ``yum``) or ``pip`` to install a stable version.
To install from source, clone the Ansible git repository:
.. code-block:: bash
$ git clone https://github.com/ansible/ansible.git
$ cd ./ansible
Once ``git`` has cloned the Ansible repository, set up the Ansible environment:
Using Bash:
.. code-block:: bash
$ source ./hacking/env-setup
Using Fish::
$ source ./hacking/env-setup.fish
If you want to suppress spurious warnings/errors, use::
$ source ./hacking/env-setup -q
If you don't have ``pip`` installed in your version of Python, install it::
$ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
$ python get-pip.py --user
Ansible also uses the following Python modules that need to be installed [1]_:
.. code-block:: bash
$ pip install --user -r ./requirements.txt
To update an Ansible checkout, use pull-with-rebase so any local changes are replayed, and refresh the submodules:
.. code-block:: bash
$ git pull --rebase
$ git submodule update --init --recursive
Once you have run the env-setup script, you'll be running from the checkout and the default inventory file
will be ``/etc/ansible/hosts``. You can optionally specify an inventory file (see :ref:`inventory`)
other than ``/etc/ansible/hosts``:
.. code-block:: bash
$ echo "127.0.0.1" > ~/ansible_hosts
$ export ANSIBLE_INVENTORY=~/ansible_hosts
You can read more about the inventory file in later parts of the manual.
Now let's test things with a ping command:
.. code-block:: bash
$ ansible all -m ping --ask-pass
You can also use ``sudo make install``.
.. _shell_completion:
Shell Completion
````````````````
As of Ansible 2.9, shell completion of the Ansible command line utilities is available and provided through an optional dependency
called ``argcomplete``. ``argcomplete`` supports bash, and has limited support for zsh and tcsh.
``python-argcomplete`` can be installed from EPEL on Red Hat Enterprise Linux based distributions, and is available in the standard OS repositories for many other distributions.
For more information about installation and configuration, see the `argcomplete documentation <https://argcomplete.readthedocs.io/en/latest/>`_.
Installing
++++++++++
via yum/dnf
-----------
On Fedora:
.. code-block:: bash
$ sudo dnf install python-argcomplete
On RHEL and CentOS:
.. code-block:: bash
$ sudo yum install epel-release
$ sudo yum install python-argcomplete
via apt
-------
.. code-block:: bash
$ sudo apt install python-argcomplete
via pip
-------
.. code-block:: bash
$ pip install argcomplete
Configuring
+++++++++++
There are two ways to configure argcomplete to allow shell completion of the Ansible command line utilities: globally, or per command.
Globally
--------
Global completion requires bash 4.2.
.. code-block:: bash
$ sudo activate-global-python-argcomplete
This will write a bash completion file to a global location. Use ``--dest`` to change the location.
Per Command
-----------
If you do not have bash 4.2, you must register each script independently:
.. code-block:: bash
$ eval $(register-python-argcomplete ansible)
$ eval $(register-python-argcomplete ansible-config)
$ eval $(register-python-argcomplete ansible-console)
$ eval $(register-python-argcomplete ansible-doc)
$ eval $(register-python-argcomplete ansible-galaxy)
$ eval $(register-python-argcomplete ansible-inventory)
$ eval $(register-python-argcomplete ansible-playbook)
$ eval $(register-python-argcomplete ansible-pull)
$ eval $(register-python-argcomplete ansible-vault)
It is advisable to place the above commands into your shell's profile file, such as ``~/.profile`` or ``~/.bash_profile``.
Zsh or tcsh
-----------
See the `argcomplete documentation <https://argcomplete.readthedocs.io/en/latest/>`_.
.. _getting_ansible:
Ansible on GitHub
`````````````````
You may also wish to follow the `GitHub project <https://github.com/ansible/ansible>`_ if
you have a GitHub account. This is also where we keep the issue tracker for sharing
bugs and feature ideas.
.. seealso::
:ref:`intro_adhoc`
Examples of basic commands
:ref:`working_with_playbooks`
Learning Ansible's configuration management language
:ref:`installation_faqs`
Ansible Installation related to FAQs
`Mailing List <https://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
.. [1] If you have issues with the "pycrypto" package install on macOS, then you may need to try ``CC=clang sudo -E pip install pycrypto``.
.. [2] ``paramiko`` was included in Ansible's ``requirements.txt`` prior to 2.8.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,374 |
VMware: vmware_guest 'version' parameter handling error
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
vmware_guest mishandles the type of the 'version' parameter.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = None
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /root/ansible/lib/ansible
executable location = /root/ansible/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Using vmware_guest to deploy a new VM with the 'version' parameter set to the integer 14
returns the error 'int' object has no attribute 'lower'.
This is caused by line 1285:
"if temp_version.lower() == 'latest':"
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/64374
|
https://github.com/ansible/ansible/pull/64376
|
b475e0408c820e8b28cfc9aaf508e15761af0617
|
9a54070fa23d2ac87566749b32a17150726c53d8
| 2019-11-04T06:27:15Z |
python
| 2019-11-07T07:13:30Z |
lib/ansible/modules/cloud/vmware/vmware_guest.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# This module is also sponsored by E.T.A.I. (www.etai.fr)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: vmware_guest
short_description: Manages virtual machines in vCenter
description: >
This module can be used to create new virtual machines from templates or other virtual machines,
manage power state of virtual machine such as power on, power off, suspend, shutdown, reboot, restart etc.,
modify various virtual machine components like network, disk, customization etc.,
rename a virtual machine and remove a virtual machine with associated components.
version_added: '2.2'
author:
- Loic Blot (@nerzhul) <[email protected]>
- Philippe Dellaert (@pdellaert) <[email protected]>
- Abhijeet Kasurde (@Akasurde) <[email protected]>
requirements:
- python >= 2.6
- PyVmomi
notes:
- Please make sure that the user used for vmware_guest has the correct level of privileges.
- For example, following is the list of minimum privileges required by users to create virtual machines.
- " DataStore > Allocate Space"
- " Virtual Machine > Configuration > Add New Disk"
- " Virtual Machine > Configuration > Add or Remove Device"
- " Virtual Machine > Inventory > Create New"
- " Network > Assign Network"
- " Resource > Assign Virtual Machine to Resource Pool"
- "Module may require additional privileges as well, which may be required for gathering facts - e.g. ESXi configurations."
- Tested on vSphere 5.5, 6.0, 6.5 and 6.7
- Use SCSI disks instead of IDE when you want to expand online disks by specifying a SCSI controller
- "For additional information please visit Ansible VMware community wiki - U(https://github.com/ansible/community/wiki/VMware)."
options:
state:
description:
- Specify the state the virtual machine should be in.
- 'If C(state) is set to C(present) and virtual machine exists, ensure the virtual machine
configurations conforms to task arguments.'
- 'If C(state) is set to C(absent) and virtual machine exists, then the specified virtual machine
is removed with its associated components.'
- 'If C(state) is set to one of the following C(poweredon), C(poweredoff), C(present), C(restarted), C(suspended)
      and virtual machine does not exist, then virtual machine is deployed with given parameters.'
- 'If C(state) is set to C(poweredon) and virtual machine exists with powerstate other than powered on,
then the specified virtual machine is powered on.'
- 'If C(state) is set to C(poweredoff) and virtual machine exists with powerstate other than powered off,
then the specified virtual machine is powered off.'
- 'If C(state) is set to C(restarted) and virtual machine exists, then the virtual machine is restarted.'
- 'If C(state) is set to C(suspended) and virtual machine exists, then the virtual machine is set to suspended mode.'
- 'If C(state) is set to C(shutdownguest) and virtual machine exists, then the virtual machine is shutdown.'
- 'If C(state) is set to C(rebootguest) and virtual machine exists, then the virtual machine is rebooted.'
default: present
choices: [ present, absent, poweredon, poweredoff, restarted, suspended, shutdownguest, rebootguest ]
name:
description:
- Name of the virtual machine to work with.
- Virtual machine names in vCenter are not necessarily unique, which may be problematic, see C(name_match).
- 'If multiple virtual machines with same name exists, then C(folder) is required parameter to
identify uniqueness of the virtual machine.'
- This parameter is required, if C(state) is set to C(poweredon), C(poweredoff), C(present), C(restarted), C(suspended)
      and virtual machine does not exist.
- This parameter is case sensitive.
required: yes
name_match:
description:
- If multiple virtual machines matching the name, use the first or last found.
default: 'first'
choices: [ first, last ]
uuid:
description:
- UUID of the virtual machine to manage if known, this is VMware's unique identifier.
- This is required if C(name) is not supplied.
    - If virtual machine does not exist, then this parameter is ignored.
- Please note that a supplied UUID will be ignored on virtual machine creation, as VMware creates the UUID internally.
use_instance_uuid:
description:
- Whether to use the VMware instance UUID rather than the BIOS UUID.
default: no
type: bool
version_added: '2.8'
template:
description:
- Template or existing virtual machine used to create new virtual machine.
- If this value is not set, virtual machine is created without using a template.
- If the virtual machine already exists, this parameter will be ignored.
- This parameter is case sensitive.
- You can also specify template or VM UUID for identifying source. version_added 2.8. Use C(hw_product_uuid) from M(vmware_guest_facts) as UUID value.
- From version 2.8 onwards, absolute path to virtual machine or template can be used.
aliases: [ 'template_src' ]
is_template:
description:
- Flag the instance as a template.
- This will mark the given virtual machine as template.
default: 'no'
type: bool
version_added: '2.3'
folder:
description:
- Destination folder, absolute path to find an existing guest or create the new guest.
- The folder should include the datacenter. ESX's datacenter is ha-datacenter.
- This parameter is case sensitive.
- This parameter is required, while deploying new virtual machine. version_added 2.5.
    - 'If multiple machines are found with the same name, this parameter is used to identify
uniqueness of the virtual machine. version_added 2.5'
- 'Examples:'
- ' folder: /ha-datacenter/vm'
- ' folder: ha-datacenter/vm'
- ' folder: /datacenter1/vm'
- ' folder: datacenter1/vm'
- ' folder: /datacenter1/vm/folder1'
- ' folder: datacenter1/vm/folder1'
- ' folder: /folder1/datacenter1/vm'
- ' folder: folder1/datacenter1/vm'
- ' folder: /folder1/datacenter1/vm/folder2'
hardware:
description:
- Manage virtual machine's hardware attributes.
- All parameters case sensitive.
- 'Valid attributes are:'
- ' - C(hotadd_cpu) (boolean): Allow virtual CPUs to be added while the virtual machine is running.'
- ' - C(hotremove_cpu) (boolean): Allow virtual CPUs to be removed while the virtual machine is running.
version_added: 2.5'
- ' - C(hotadd_memory) (boolean): Allow memory to be added while the virtual machine is running.'
- ' - C(memory_mb) (integer): Amount of memory in MB.'
- ' - C(nested_virt) (bool): Enable nested virtualization. version_added: 2.5'
- ' - C(num_cpus) (integer): Number of CPUs.'
- ' - C(num_cpu_cores_per_socket) (integer): Number of Cores Per Socket.'
- " C(num_cpus) must be a multiple of C(num_cpu_cores_per_socket).
For example to create a VM with 2 sockets of 4 cores, specify C(num_cpus): 8 and C(num_cpu_cores_per_socket): 4"
- ' - C(scsi) (string): Valid values are C(buslogic), C(lsilogic), C(lsilogicsas) and C(paravirtual) (default).'
- " - C(memory_reservation_lock) (boolean): If set true, memory resource reservation for the virtual machine
will always be equal to the virtual machine's memory size. version_added: 2.5"
- ' - C(max_connections) (integer): Maximum number of active remote display connections for the virtual machines.
version_added: 2.5.'
- ' - C(mem_limit) (integer): The memory utilization of a virtual machine will not exceed this limit. Unit is MB.
version_added: 2.5'
- ' - C(mem_reservation) (integer): The amount of memory resource that is guaranteed available to the virtual
machine. Unit is MB. C(memory_reservation) is alias to this. version_added: 2.5'
- ' - C(cpu_limit) (integer): The CPU utilization of a virtual machine will not exceed this limit. Unit is MHz.
version_added: 2.5'
- ' - C(cpu_reservation) (integer): The amount of CPU resource that is guaranteed available to the virtual machine.
Unit is MHz. version_added: 2.5'
- ' - C(version) (integer): The Virtual machine hardware versions. Default is 10 (ESXi 5.5 and onwards).
If value specified as C(latest), version is set to the most current virtual hardware supported on the host.
C(latest) is added in version 2.10.
Please check VMware documentation for correct virtual machine hardware version.
Incorrect hardware version may lead to failure in deployment. If hardware version is already equal to the given
version then no action is taken. version_added: 2.6'
- ' - C(boot_firmware) (string): Choose which firmware should be used to boot the virtual machine.
Allowed values are "bios" and "efi". version_added: 2.7'
- ' - C(virt_based_security) (bool): Enable Virtualization Based Security feature for Windows 10.
(Support from Virtual machine hardware version 14, Guest OS Windows 10 64 bit, Windows Server 2016)'
guest_id:
description:
- Set the guest ID.
- This parameter is case sensitive.
- 'Examples:'
- " virtual machine with RHEL7 64 bit, will be 'rhel7_64Guest'"
- " virtual machine with CentOS 64 bit, will be 'centos64Guest'"
- " virtual machine with Ubuntu 64 bit, will be 'ubuntu64Guest'"
- This field is required when creating a virtual machine, not required when creating from the template.
- >
Valid values are referenced here:
U(https://code.vmware.com/apis/358/doc/vim.vm.GuestOsDescriptor.GuestOsIdentifier.html)
version_added: '2.3'
disk:
description:
- A list of disks to add.
- This parameter is case sensitive.
- Shrinking disks is not supported.
- Removing existing disks of the virtual machine is not supported.
- 'Valid attributes are:'
- ' - C(size_[tb,gb,mb,kb]) (integer): Disk storage size in specified unit.'
- ' - C(type) (string): Valid values are:'
- ' - C(thin) thin disk'
- ' - C(eagerzeroedthick) eagerzeroedthick disk, added in version 2.5'
- ' Default: C(None) thick disk, no eagerzero.'
- ' - C(datastore) (string): The name of datastore which will be used for the disk. If C(autoselect_datastore) is set to True,
then will select the less used datastore whose name contains this "disk.datastore" string.'
- ' - C(filename) (string): Existing disk image to be used. Filename must already exist on the datastore.'
- ' Specify filename string in C([datastore_name] path/to/file.vmdk) format. Added in version 2.8.'
- ' - C(autoselect_datastore) (bool): select the less used datastore. "disk.datastore" and "disk.autoselect_datastore"
will not be used if C(datastore) is specified outside this C(disk) configuration.'
- ' - C(disk_mode) (string): Type of disk mode. Added in version 2.6'
- ' - Available options are :'
- ' - C(persistent): Changes are immediately and permanently written to the virtual disk. This is default.'
- ' - C(independent_persistent): Same as persistent, but not affected by snapshots.'
- ' - C(independent_nonpersistent): Changes to virtual disk are made to a redo log and discarded at power off, but not affected by snapshots.'
cdrom:
description:
- A CD-ROM configuration for the virtual machine.
- Or a list of CD-ROMs configuration for the virtual machine. Added in version 2.9.
- 'Parameters C(controller_type), C(controller_number), C(unit_number), C(state) are added for a list of CD-ROMs
configuration support.'
- 'Valid attributes are:'
- ' - C(type) (string): The type of CD-ROM, valid options are C(none), C(client) or C(iso). With C(none) the CD-ROM
will be disconnected but present.'
- ' - C(iso_path) (string): The datastore path to the ISO file to use, in the form of C([datastore1] path/to/file.iso).
Required if type is set C(iso).'
- ' - C(controller_type) (string): Default value is C(ide). Only C(ide) controller type for CD-ROM is supported for
now, will add SATA controller type in the future.'
- ' - C(controller_number) (int): For C(ide) controller, valid value is 0 or 1.'
- ' - C(unit_number) (int): For CD-ROM device attach to C(ide) controller, valid value is 0 or 1.
C(controller_number) and C(unit_number) are mandatory attributes.'
- ' - C(state) (string): Valid value is C(present) or C(absent). Default is C(present). If set to C(absent), then
the specified CD-ROM will be removed. For C(ide) controller, hot-add or hot-remove CD-ROM is not supported.'
version_added: '2.5'
resource_pool:
description:
- Use the given resource pool for virtual machine operation.
- This parameter is case sensitive.
- Resource pool should be child of the selected host parent.
version_added: '2.3'
wait_for_ip_address:
description:
- Wait until vCenter detects an IP address for the virtual machine.
- This requires vmware-tools (vmtoolsd) to properly work after creation.
- "vmware-tools needs to be installed on the given virtual machine in order to work with this parameter."
default: 'no'
type: bool
wait_for_ip_address_timeout:
description:
- Define a timeout (in seconds) for the wait_for_ip_address parameter.
default: '300'
type: int
version_added: '2.10'
wait_for_customization:
description:
- Wait until vCenter detects all guest customizations as successfully completed.
- When enabled, the VM will automatically be powered on.
default: 'no'
type: bool
version_added: '2.8'
state_change_timeout:
description:
- If the C(state) is set to C(shutdownguest), by default the module will return immediately after sending the shutdown signal.
- If this argument is set to a positive integer, the module will instead wait for the virtual machine to reach the poweredoff state.
- The value sets a timeout in seconds for the module to wait for the state change.
default: 0
version_added: '2.6'
snapshot_src:
description:
- Name of the existing snapshot to use to create a clone of a virtual machine.
- This parameter is case sensitive.
- While creating linked clone using C(linked_clone) parameter, this parameter is required.
version_added: '2.4'
linked_clone:
description:
- Whether to create a linked clone from the snapshot specified.
- If specified, then C(snapshot_src) is required parameter.
default: 'no'
type: bool
version_added: '2.4'
force:
description:
- Ignore warnings and complete the actions.
- This parameter is useful while removing virtual machine which is powered on state.
- 'This module reflects the VMware vCenter API and UI workflow, as such, in some cases the `force` flag will
be mandatory to perform the action to ensure you are certain the action has to be taken, no matter what the consequence.
This is specifically the case for removing a powered on the virtual machine when C(state) is set to C(absent).'
default: 'no'
type: bool
delete_from_inventory:
description:
- Whether to delete Virtual machine from inventory or delete from disk.
default: False
type: bool
version_added: '2.10'
datacenter:
description:
- Destination datacenter for the deploy operation.
- This parameter is case sensitive.
default: ha-datacenter
cluster:
description:
- The cluster name where the virtual machine will run.
- This is a required parameter, if C(esxi_hostname) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
- This parameter is case sensitive.
version_added: '2.3'
esxi_hostname:
description:
- The ESXi hostname where the virtual machine will run.
- This is a required parameter, if C(cluster) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
- This parameter is case sensitive.
annotation:
description:
- A note or annotation to include in the virtual machine.
version_added: '2.3'
customvalues:
description:
- Define a list of custom values to set on virtual machine.
- A custom value object takes two fields C(key) and C(value).
- Incorrect key and values will be ignored.
version_added: '2.3'
networks:
description:
- A list of networks (in the order of the NICs).
- Removing NICs is not allowed, while reconfiguring the virtual machine.
- All parameters and VMware object names are case sensitive.
- 'One of the below parameters is required per entry:'
- ' - C(name) (string): Name of the portgroup or distributed virtual portgroup for this interface.
When specifying distributed virtual portgroup make sure given C(esxi_hostname) or C(cluster) is associated with it.'
- ' - C(vlan) (integer): VLAN number for this interface.'
- 'Optional parameters per entry (used for virtual hardware):'
- ' - C(device_type) (string): Virtual network device (one of C(e1000), C(e1000e), C(pcnet32), C(vmxnet2), C(vmxnet3) (default), C(sriov)).'
- ' - C(mac) (string): Customize MAC address.'
- ' - C(dvswitch_name) (string): Name of the distributed vSwitch.
This value is required if multiple distributed portgroups exists with the same name. version_added 2.7'
- ' - C(start_connected) (bool): Indicates that virtual network adapter starts with associated virtual machine powers on. version_added: 2.5'
- 'Optional parameters per entry (used for OS customization):'
- ' - C(type) (string): Type of IP assignment (either C(dhcp) or C(static)). C(dhcp) is default.'
- ' - C(ip) (string): Static IP address (implies C(type: static)).'
- ' - C(netmask) (string): Static netmask required for C(ip).'
- ' - C(gateway) (string): Static gateway.'
- ' - C(dns_servers) (string): DNS servers for this network interface (Windows).'
- ' - C(domain) (string): Domain name for this network interface (Windows).'
- ' - C(wake_on_lan) (bool): Indicates if wake-on-LAN is enabled on this virtual network adapter. version_added: 2.5'
- ' - C(allow_guest_control) (bool): Enables guest control over whether the connectable device is connected. version_added: 2.5'
version_added: '2.3'
customization:
description:
- Parameters for OS customization when cloning from the template or the virtual machine, or apply to the existing virtual machine directly.
- Not all operating systems are supported for customization with respective vCenter version,
please check VMware documentation for respective OS customization.
- For supported customization operating system matrix, (see U(http://partnerweb.vmware.com/programs/guestOS/guest-os-customization-matrix.pdf))
- All parameters and VMware object names are case sensitive.
- Linux based OSes requires Perl package to be installed for OS customizations.
- 'Common parameters (Linux/Windows):'
- ' - C(existing_vm) (bool): If set to C(True), do OS customization on the specified virtual machine directly.
If set to C(False) or not specified, do OS customization when cloning from the template or the virtual machine. version_added: 2.8'
- ' - C(dns_servers) (list): List of DNS servers to configure.'
- ' - C(dns_suffix) (list): List of domain suffixes, also known as DNS search path (default: C(domain) parameter).'
- ' - C(domain) (string): DNS domain name to use.'
- ' - C(hostname) (string): Computer hostname (default: shorted C(name) parameter). Allowed characters are alphanumeric (uppercase and lowercase)
and minus, rest of the characters are dropped as per RFC 952.'
- 'Parameters related to Linux customization:'
- ' - C(timezone) (string): Timezone (See List of supported time zones for different vSphere versions in Linux/Unix
systems (2145518) U(https://kb.vmware.com/s/article/2145518)). version_added: 2.9'
- ' - C(hwclockUTC) (bool): Specifies whether the hardware clock is in UTC or local time.
True when the hardware clock is in UTC, False when the hardware clock is in local time. version_added: 2.9'
- 'Parameters related to Windows customization:'
- ' - C(autologon) (bool): Auto logon after virtual machine customization (default: False).'
- ' - C(autologoncount) (int): Number of autologon after reboot (default: 1).'
- ' - C(domainadmin) (string): User used to join in AD domain (mandatory with C(joindomain)).'
- ' - C(domainadminpassword) (string): Password used to join in AD domain (mandatory with C(joindomain)).'
- ' - C(fullname) (string): Server owner name (default: Administrator).'
- ' - C(joindomain) (string): AD domain to join (Not compatible with C(joinworkgroup)).'
- ' - C(joinworkgroup) (string): Workgroup to join (Not compatible with C(joindomain), default: WORKGROUP).'
- ' - C(orgname) (string): Organisation name (default: ACME).'
- ' - C(password) (string): Local administrator password.'
- ' - C(productid) (string): Product ID.'
- ' - C(runonce) (list): List of commands to run at first user logon.'
- ' - C(timezone) (int): Timezone (See U(https://msdn.microsoft.com/en-us/library/ms912391.aspx)).'
version_added: '2.3'
vapp_properties:
description:
- A list of vApp properties
- 'For full list of attributes and types refer to: U(https://github.com/vmware/pyvmomi/blob/master/docs/vim/vApp/PropertyInfo.rst)'
- 'Basic attributes are:'
- ' - C(id) (string): Property id - required.'
- ' - C(value) (string): Property value.'
- ' - C(type) (string): Value type, string type by default.'
- ' - C(operation): C(remove): This attribute is required only when removing properties.'
version_added: '2.6'
customization_spec:
description:
- Unique name identifying the requested customization specification.
- This parameter is case sensitive.
- If set, then overrides C(customization) parameter values.
version_added: '2.6'
datastore:
description:
- Specify datastore or datastore cluster to provision virtual machine.
- 'This parameter takes precedence over "disk.datastore" parameter.'
- 'This parameter can be used to override datastore or datastore cluster setting of the virtual machine when deployed
from the template.'
- Please see example for more usage.
version_added: '2.7'
convert:
description:
- Specify convert disk type while cloning template or virtual machine.
choices: [ thin, thick, eagerzeroedthick ]
version_added: '2.8'
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- name: Create a virtual machine on given ESXi hostname
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
folder: /DC1/vm/
name: test_vm_0001
state: poweredon
guest_id: centos64Guest
# This is hostname of particular ESXi server on which user wants VM to be deployed
esxi_hostname: "{{ esxi_hostname }}"
disk:
- size_gb: 10
type: thin
datastore: datastore1
hardware:
memory_mb: 512
num_cpus: 4
scsi: paravirtual
networks:
- name: VM Network
mac: aa:bb:dd:aa:00:14
ip: 10.10.10.100
netmask: 255.255.255.0
device_type: vmxnet3
wait_for_ip_address: yes
wait_for_ip_address_timeout: 600
delegate_to: localhost
register: deploy_vm
- name: Create a virtual machine from a template
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
folder: /testvms
name: testvm_2
state: poweredon
template: template_el7
disk:
- size_gb: 10
type: thin
datastore: g73_datastore
hardware:
memory_mb: 512
num_cpus: 6
num_cpu_cores_per_socket: 3
scsi: paravirtual
memory_reservation_lock: True
mem_limit: 8096
mem_reservation: 4096
cpu_limit: 8096
cpu_reservation: 4096
max_connections: 5
hotadd_cpu: True
hotremove_cpu: True
hotadd_memory: False
version: 12 # Hardware version of virtual machine
boot_firmware: "efi"
cdrom:
type: iso
iso_path: "[datastore1] livecd.iso"
networks:
- name: VM Network
mac: aa:bb:dd:aa:00:14
wait_for_ip_address: yes
delegate_to: localhost
register: deploy
- name: Clone a virtual machine from Windows template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: datacenter1
cluster: cluster
name: testvm-2
template: template_windows
networks:
- name: VM Network
ip: 192.168.1.100
netmask: 255.255.255.0
gateway: 192.168.1.1
mac: aa:bb:dd:aa:00:14
domain: my_domain
dns_servers:
- 192.168.1.1
- 192.168.1.2
- vlan: 1234
type: dhcp
customization:
autologon: yes
dns_servers:
- 192.168.1.1
- 192.168.1.2
domain: my_domain
password: new_vm_password
runonce:
- powershell.exe -ExecutionPolicy Unrestricted -File C:\Windows\Temp\ConfigureRemotingForAnsible.ps1 -ForceNewSSLCert -EnableCredSSP
delegate_to: localhost
- name: Clone a virtual machine from Linux template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ datacenter }}"
state: present
folder: /DC1/vm
template: "{{ template }}"
name: "{{ vm_name }}"
cluster: DC1_C1
networks:
- name: VM Network
ip: 192.168.10.11
netmask: 255.255.255.0
wait_for_ip_address: True
customization:
domain: "{{ guest_domain }}"
dns_servers:
- 8.9.9.9
- 7.8.8.9
dns_suffix:
- example.com
- example2.com
delegate_to: localhost
- name: Rename a virtual machine (requires the virtual machine's uuid)
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_uuid }}"
name: new_name
state: present
delegate_to: localhost
- name: Remove a virtual machine by uuid
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_uuid }}"
state: absent
delegate_to: localhost
- name: Remove a virtual machine from inventory
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
name: vm_name
delete_from_inventory: True
state: absent
delegate_to: localhost
- name: Manipulate vApp properties
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
name: vm_name
state: present
vapp_properties:
- id: remoteIP
category: Backup
label: Backup server IP
type: str
value: 10.10.10.1
- id: old_property
operation: remove
delegate_to: localhost
- name: Set powerstate of a virtual machine to poweroff by using UUID
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_uuid }}"
state: poweredoff
delegate_to: localhost
- name: Deploy a virtual machine in a datastore different from the datastore of the template
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: "{{ vm_name }}"
state: present
template: "{{ template_name }}"
# Here datastore can be different which holds template
datastore: "{{ virtual_machine_datastore }}"
hardware:
memory_mb: 512
num_cpus: 2
scsi: paravirtual
delegate_to: localhost
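# The two tasks below are illustrative sketches for the 'convert' and list-form
# 'cdrom' parameters; the datastore, ISO path and template names are placeholders.
- name: Clone a virtual machine from a template and thin provision its disks
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: "{{ datacenter }}"
    state: present
    folder: /DC1/vm
    template: "{{ template }}"
    name: "{{ vm_name }}"
    cluster: DC1_C1
    convert: thin
  delegate_to: localhost
- name: Attach an ISO-backed CD-ROM using the list form of the cdrom parameter
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    name: vm_name
    state: present
    cdrom:
      - controller_type: ide
        controller_number: 0
        unit_number: 0
        state: present
        type: iso
        iso_path: "[datastore1] livecd.iso"
  delegate_to: localhost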
'''
RETURN = r'''
instance:
description: metadata about the new virtual machine
returned: always
type: dict
sample: None
'''
import re
import time
import string
HAS_PYVMOMI = False
try:
from pyVmomi import vim, vmodl, VmomiSupport
HAS_PYVMOMI = True
except ImportError:
pass
from random import randint
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.network import is_mac
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.vmware import (find_obj, gather_vm_facts, get_all_objs,
compile_folder_path_for_object, serialize_spec,
vmware_argument_spec, set_vm_power_state, PyVmomi,
find_dvs_by_name, find_dvspg_by_name, wait_for_vm_ip,
wait_for_task, TaskError)
def list_or_dict(value):
if isinstance(value, list) or isinstance(value, dict):
return value
else:
raise ValueError("'%s' is not valid, valid type is 'list' or 'dict'." % value)
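# For illustration (hypothetical calls, not part of the module's flow):
#   list_or_dict([{'type': 'iso'}])  # -> returns the list unchanged
#   list_or_dict({'type': 'iso'})    # -> returns the dict unchanged
#   list_or_dict('iso')              # -> raises ValueError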
class PyVmomiDeviceHelper(object):
""" This class is a helper to create easily VMware Objects for PyVmomiHelper """
def __init__(self, module):
self.module = module
self.next_disk_unit_number = 0
self.scsi_device_type = {
'lsilogic': vim.vm.device.VirtualLsiLogicController,
'paravirtual': vim.vm.device.ParaVirtualSCSIController,
'buslogic': vim.vm.device.VirtualBusLogicController,
'lsilogicsas': vim.vm.device.VirtualLsiLogicSASController,
}
def create_scsi_controller(self, scsi_type):
scsi_ctl = vim.vm.device.VirtualDeviceSpec()
scsi_ctl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
scsi_device = self.scsi_device_type.get(scsi_type, vim.vm.device.ParaVirtualSCSIController)
scsi_ctl.device = scsi_device()
scsi_ctl.device.busNumber = 0
# While creating a new SCSI controller, temporary key value
# should be unique negative integers
scsi_ctl.device.key = -randint(1000, 9999)
scsi_ctl.device.hotAddRemove = True
scsi_ctl.device.sharedBus = 'noSharing'
scsi_ctl.device.scsiCtlrUnitNumber = 7
return scsi_ctl
def is_scsi_controller(self, device):
return isinstance(device, tuple(self.scsi_device_type.values()))
@staticmethod
def create_ide_controller(bus_number=0):
ide_ctl = vim.vm.device.VirtualDeviceSpec()
ide_ctl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
ide_ctl.device = vim.vm.device.VirtualIDEController()
ide_ctl.device.deviceInfo = vim.Description()
# While creating a new IDE controller, temporary key value
# should be unique negative integers
ide_ctl.device.key = -randint(200, 299)
ide_ctl.device.busNumber = bus_number
return ide_ctl
@staticmethod
def create_cdrom(ide_device, cdrom_type, iso_path=None, unit_number=0):
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
cdrom_spec.device = vim.vm.device.VirtualCdrom()
cdrom_spec.device.controllerKey = ide_device.key
cdrom_spec.device.key = -randint(3000, 3999)
cdrom_spec.device.unitNumber = unit_number
cdrom_spec.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
cdrom_spec.device.connectable.allowGuestControl = True
cdrom_spec.device.connectable.startConnected = (cdrom_type != "none")
if cdrom_type in ["none", "client"]:
cdrom_spec.device.backing = vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo()
elif cdrom_type == "iso":
cdrom_spec.device.backing = vim.vm.device.VirtualCdrom.IsoBackingInfo(fileName=iso_path)
return cdrom_spec
@staticmethod
def is_equal_cdrom(vm_obj, cdrom_device, cdrom_type, iso_path):
if cdrom_type == "none":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo) and
cdrom_device.connectable.allowGuestControl and
not cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or not cdrom_device.connectable.connected))
elif cdrom_type == "client":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo) and
cdrom_device.connectable.allowGuestControl and
cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or cdrom_device.connectable.connected))
elif cdrom_type == "iso":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.IsoBackingInfo) and
cdrom_device.backing.fileName == iso_path and
cdrom_device.connectable.allowGuestControl and
cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or cdrom_device.connectable.connected))
@staticmethod
def update_cdrom_config(vm_obj, cdrom_spec, cdrom_device, iso_path=None):
# Updating an existing CD-ROM
if cdrom_spec["type"] in ["client", "none"]:
cdrom_device.backing = vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo()
elif cdrom_spec["type"] == "iso" and iso_path is not None:
cdrom_device.backing = vim.vm.device.VirtualCdrom.IsoBackingInfo(fileName=iso_path)
cdrom_device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
cdrom_device.connectable.allowGuestControl = True
cdrom_device.connectable.startConnected = (cdrom_spec["type"] != "none")
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
cdrom_device.connectable.connected = (cdrom_spec["type"] != "none")
def remove_cdrom(self, cdrom_device):
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.remove
cdrom_spec.device = cdrom_device
return cdrom_spec
def create_scsi_disk(self, scsi_ctl, disk_index=None):
diskspec = vim.vm.device.VirtualDeviceSpec()
diskspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
diskspec.device = vim.vm.device.VirtualDisk()
diskspec.device.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
diskspec.device.controllerKey = scsi_ctl.device.key
if self.next_disk_unit_number == 7:
raise AssertionError()
if disk_index == 7:
raise AssertionError()
"""
Configure disk unit number.
"""
if disk_index is not None:
diskspec.device.unitNumber = disk_index
self.next_disk_unit_number = disk_index + 1
else:
diskspec.device.unitNumber = self.next_disk_unit_number
self.next_disk_unit_number += 1
            # unit number 7 is reserved for the SCSI controller, increase the next index
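            # e.g. sequentially added disks receive unit numbers 0-6, 8, 9, ...,
            # skipping 7, which always belongs to the controller itself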
if self.next_disk_unit_number == 7:
self.next_disk_unit_number += 1
return diskspec
def get_device(self, device_type, name):
nic_dict = dict(pcnet32=vim.vm.device.VirtualPCNet32(),
vmxnet2=vim.vm.device.VirtualVmxnet2(),
vmxnet3=vim.vm.device.VirtualVmxnet3(),
e1000=vim.vm.device.VirtualE1000(),
e1000e=vim.vm.device.VirtualE1000e(),
sriov=vim.vm.device.VirtualSriovEthernetCard(),
)
if device_type in nic_dict:
return nic_dict[device_type]
else:
self.module.fail_json(msg='Invalid device_type "%s"'
' for network "%s"' % (device_type, name))
def create_nic(self, device_type, device_label, device_infos):
nic = vim.vm.device.VirtualDeviceSpec()
nic.device = self.get_device(device_type, device_infos['name'])
nic.device.wakeOnLanEnabled = bool(device_infos.get('wake_on_lan', True))
nic.device.deviceInfo = vim.Description()
nic.device.deviceInfo.label = device_label
nic.device.deviceInfo.summary = device_infos['name']
nic.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
nic.device.connectable.startConnected = bool(device_infos.get('start_connected', True))
nic.device.connectable.allowGuestControl = bool(device_infos.get('allow_guest_control', True))
nic.device.connectable.connected = True
if 'mac' in device_infos and is_mac(device_infos['mac']):
nic.device.addressType = 'manual'
nic.device.macAddress = device_infos['mac']
else:
nic.device.addressType = 'generated'
return nic
def integer_value(self, input_value, name):
"""
Function to return int value for given input, else return error
Args:
input_value: Input value to retrieve int value from
name: Name of the Input value (used to build error message)
        Returns: (int) if an integer value can be obtained, otherwise fails with an error message.
"""
if isinstance(input_value, int):
return input_value
elif isinstance(input_value, str) and input_value.isdigit():
return int(input_value)
else:
self.module.fail_json(msg='"%s" attribute should be an'
' integer value.' % name)
class PyVmomiCache(object):
""" This class caches references to objects which are requested multiples times but not modified """
def __init__(self, content, dc_name=None):
self.content = content
self.dc_name = dc_name
self.networks = {}
self.clusters = {}
self.esx_hosts = {}
self.parent_datacenters = {}
def find_obj(self, content, types, name, confine_to_datacenter=True):
""" Wrapper around find_obj to set datacenter context """
result = find_obj(content, types, name)
if result and confine_to_datacenter:
if to_text(self.get_parent_datacenter(result).name) != to_text(self.dc_name):
result = None
objects = self.get_all_objs(content, types, confine_to_datacenter=True)
for obj in objects:
if name is None or to_text(obj.name) == to_text(name):
return obj
return result
def get_all_objs(self, content, types, confine_to_datacenter=True):
""" Wrapper around get_all_objs to set datacenter context """
objects = get_all_objs(content, types)
if confine_to_datacenter:
if hasattr(objects, 'items'):
# resource pools come back as a dictionary
# make a copy
for k, v in tuple(objects.items()):
parent_dc = self.get_parent_datacenter(k)
if parent_dc.name != self.dc_name:
del objects[k]
else:
# everything else should be a list
objects = [x for x in objects if self.get_parent_datacenter(x).name == self.dc_name]
return objects
def get_network(self, network):
if network not in self.networks:
self.networks[network] = self.find_obj(self.content, [vim.Network], network)
return self.networks[network]
def get_cluster(self, cluster):
if cluster not in self.clusters:
self.clusters[cluster] = self.find_obj(self.content, [vim.ClusterComputeResource], cluster)
return self.clusters[cluster]
def get_esx_host(self, host):
if host not in self.esx_hosts:
self.esx_hosts[host] = self.find_obj(self.content, [vim.HostSystem], host)
return self.esx_hosts[host]
def get_parent_datacenter(self, obj):
""" Walk the parent tree to find the objects datacenter """
if isinstance(obj, vim.Datacenter):
return obj
if obj in self.parent_datacenters:
return self.parent_datacenters[obj]
datacenter = None
while True:
if not hasattr(obj, 'parent'):
break
obj = obj.parent
if isinstance(obj, vim.Datacenter):
datacenter = obj
break
self.parent_datacenters[obj] = datacenter
return datacenter
class PyVmomiHelper(PyVmomi):
def __init__(self, module):
super(PyVmomiHelper, self).__init__(module)
self.device_helper = PyVmomiDeviceHelper(self.module)
self.configspec = None
self.relospec = None
self.change_detected = False # a change was detected and needs to be applied through reconfiguration
self.change_applied = False # a change was applied meaning at least one task succeeded
self.customspec = None
self.cache = PyVmomiCache(self.content, dc_name=self.params['datacenter'])
def gather_facts(self, vm):
return gather_vm_facts(self.content, vm)
def remove_vm(self, vm, delete_from_inventory=False):
# https://www.vmware.com/support/developer/converter-sdk/conv60_apireference/vim.ManagedEntity.html#destroy
if vm.summary.runtime.powerState.lower() == 'poweredon':
self.module.fail_json(msg="Virtual machine %s found in 'powered on' state, "
"please use 'force' parameter to remove or poweroff VM "
"and try removing VM again." % vm.name)
# Delete VM from Inventory
if delete_from_inventory:
try:
vm.UnregisterVM()
except (vim.fault.TaskInProgress,
vmodl.RuntimeFault) as e:
return {'changed': self.change_applied, 'failed': True, 'msg': e.msg, 'op': 'UnregisterVM'}
self.change_applied = True
return {'changed': self.change_applied, 'failed': False}
# Delete VM from Disk
task = vm.Destroy()
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'destroy'}
else:
return {'changed': self.change_applied, 'failed': False}
def configure_guestid(self, vm_obj, vm_creation=False):
# guest_id is not required when using templates
if self.params['template']:
return
# guest_id is only mandatory on VM creation
if vm_creation and self.params['guest_id'] is None:
self.module.fail_json(msg="guest_id attribute is mandatory for VM creation")
if self.params['guest_id'] and \
(vm_obj is None or self.params['guest_id'].lower() != vm_obj.summary.config.guestId.lower()):
self.change_detected = True
self.configspec.guestId = self.params['guest_id']
def configure_resource_alloc_info(self, vm_obj):
"""
Function to configure resource allocation information about virtual machine
:param vm_obj: VM object in case of reconfigure, None in case of deploy
:return: None
"""
rai_change_detected = False
memory_allocation = vim.ResourceAllocationInfo()
cpu_allocation = vim.ResourceAllocationInfo()
if 'hardware' in self.params:
if 'mem_limit' in self.params['hardware']:
mem_limit = None
try:
mem_limit = int(self.params['hardware'].get('mem_limit'))
except ValueError:
self.module.fail_json(msg="hardware.mem_limit attribute should be an integer value.")
memory_allocation.limit = mem_limit
if vm_obj is None or memory_allocation.limit != vm_obj.config.memoryAllocation.limit:
rai_change_detected = True
if 'mem_reservation' in self.params['hardware'] or 'memory_reservation' in self.params['hardware']:
mem_reservation = self.params['hardware'].get('mem_reservation')
if mem_reservation is None:
mem_reservation = self.params['hardware'].get('memory_reservation')
try:
mem_reservation = int(mem_reservation)
except ValueError:
self.module.fail_json(msg="hardware.mem_reservation or hardware.memory_reservation should be an integer value.")
memory_allocation.reservation = mem_reservation
if vm_obj is None or \
memory_allocation.reservation != vm_obj.config.memoryAllocation.reservation:
rai_change_detected = True
if 'cpu_limit' in self.params['hardware']:
cpu_limit = None
try:
cpu_limit = int(self.params['hardware'].get('cpu_limit'))
except ValueError:
self.module.fail_json(msg="hardware.cpu_limit attribute should be an integer value.")
cpu_allocation.limit = cpu_limit
if vm_obj is None or cpu_allocation.limit != vm_obj.config.cpuAllocation.limit:
rai_change_detected = True
if 'cpu_reservation' in self.params['hardware']:
cpu_reservation = None
try:
cpu_reservation = int(self.params['hardware'].get('cpu_reservation'))
except ValueError:
self.module.fail_json(msg="hardware.cpu_reservation should be an integer value.")
cpu_allocation.reservation = cpu_reservation
if vm_obj is None or \
cpu_allocation.reservation != vm_obj.config.cpuAllocation.reservation:
rai_change_detected = True
if rai_change_detected:
self.configspec.memoryAllocation = memory_allocation
self.configspec.cpuAllocation = cpu_allocation
self.change_detected = True
def configure_cpu_and_memory(self, vm_obj, vm_creation=False):
# set cpu/memory/etc
if 'hardware' in self.params:
if 'num_cpus' in self.params['hardware']:
try:
num_cpus = int(self.params['hardware']['num_cpus'])
except ValueError:
self.module.fail_json(msg="hardware.num_cpus attribute should be an integer value.")
# check VM power state and cpu hot-add/hot-remove state before re-config VM
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
if not vm_obj.config.cpuHotRemoveEnabled and num_cpus < vm_obj.config.hardware.numCPU:
self.module.fail_json(msg="Configured cpu number is less than the cpu number of the VM, "
"cpuHotRemove is not enabled")
if not vm_obj.config.cpuHotAddEnabled and num_cpus > vm_obj.config.hardware.numCPU:
self.module.fail_json(msg="Configured cpu number is more than the cpu number of the VM, "
"cpuHotAdd is not enabled")
if 'num_cpu_cores_per_socket' in self.params['hardware']:
try:
num_cpu_cores_per_socket = int(self.params['hardware']['num_cpu_cores_per_socket'])
except ValueError:
self.module.fail_json(msg="hardware.num_cpu_cores_per_socket attribute "
"should be an integer value.")
if num_cpus % num_cpu_cores_per_socket != 0:
self.module.fail_json(msg="hardware.num_cpus attribute should be a multiple "
"of hardware.num_cpu_cores_per_socket")
self.configspec.numCoresPerSocket = num_cpu_cores_per_socket
if vm_obj is None or self.configspec.numCoresPerSocket != vm_obj.config.hardware.numCoresPerSocket:
self.change_detected = True
self.configspec.numCPUs = num_cpus
if vm_obj is None or self.configspec.numCPUs != vm_obj.config.hardware.numCPU:
self.change_detected = True
# num_cpu is mandatory for VM creation
elif vm_creation and not self.params['template']:
self.module.fail_json(msg="hardware.num_cpus attribute is mandatory for VM creation")
if 'memory_mb' in self.params['hardware']:
try:
memory_mb = int(self.params['hardware']['memory_mb'])
except ValueError:
self.module.fail_json(msg="Failed to parse hardware.memory_mb value."
" Please refer the documentation and provide"
" correct value.")
# check VM power state and memory hotadd state before re-config VM
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
if vm_obj.config.memoryHotAddEnabled and memory_mb < vm_obj.config.hardware.memoryMB:
self.module.fail_json(msg="Configured memory is less than memory size of the VM, "
"operation is not supported")
elif not vm_obj.config.memoryHotAddEnabled and memory_mb != vm_obj.config.hardware.memoryMB:
self.module.fail_json(msg="memoryHotAdd is not enabled")
self.configspec.memoryMB = memory_mb
if vm_obj is None or self.configspec.memoryMB != vm_obj.config.hardware.memoryMB:
self.change_detected = True
# memory_mb is mandatory for VM creation
elif vm_creation and not self.params['template']:
self.module.fail_json(msg="hardware.memory_mb attribute is mandatory for VM creation")
if 'hotadd_memory' in self.params['hardware']:
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
vm_obj.config.memoryHotAddEnabled != bool(self.params['hardware']['hotadd_memory']):
self.module.fail_json(msg="Configure hotadd memory operation is not supported when VM is power on")
self.configspec.memoryHotAddEnabled = bool(self.params['hardware']['hotadd_memory'])
if vm_obj is None or self.configspec.memoryHotAddEnabled != vm_obj.config.memoryHotAddEnabled:
self.change_detected = True
if 'hotadd_cpu' in self.params['hardware']:
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
vm_obj.config.cpuHotAddEnabled != bool(self.params['hardware']['hotadd_cpu']):
self.module.fail_json(msg="Configure hotadd cpu operation is not supported when VM is power on")
self.configspec.cpuHotAddEnabled = bool(self.params['hardware']['hotadd_cpu'])
if vm_obj is None or self.configspec.cpuHotAddEnabled != vm_obj.config.cpuHotAddEnabled:
self.change_detected = True
if 'hotremove_cpu' in self.params['hardware']:
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
vm_obj.config.cpuHotRemoveEnabled != bool(self.params['hardware']['hotremove_cpu']):
self.module.fail_json(msg="Configure hotremove cpu operation is not supported when VM is power on")
self.configspec.cpuHotRemoveEnabled = bool(self.params['hardware']['hotremove_cpu'])
if vm_obj is None or self.configspec.cpuHotRemoveEnabled != vm_obj.config.cpuHotRemoveEnabled:
self.change_detected = True
if 'memory_reservation_lock' in self.params['hardware']:
self.configspec.memoryReservationLockedToMax = bool(self.params['hardware']['memory_reservation_lock'])
if vm_obj is None or self.configspec.memoryReservationLockedToMax != vm_obj.config.memoryReservationLockedToMax:
self.change_detected = True
if 'boot_firmware' in self.params['hardware']:
# boot firmware re-config can cause boot issue
if vm_obj is not None:
return
boot_firmware = self.params['hardware']['boot_firmware'].lower()
if boot_firmware not in ('bios', 'efi'):
self.module.fail_json(msg="hardware.boot_firmware value is invalid [%s]."
" Need one of ['bios', 'efi']." % boot_firmware)
self.configspec.firmware = boot_firmware
self.change_detected = True
def sanitize_cdrom_params(self):
# cdroms {'ide': [{num: 0, cdrom: []}, {}], 'sata': [{num: 0, cdrom: []}, {}, ...]}
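        # For example (an illustrative sketch), two CD-ROMs on the first IDE
        # controller would be collected as:
        #   {'ide': [{'num': 0, 'cdrom': [spec_for_unit_0, spec_for_unit_1]}], 'sata': []}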
cdroms = {'ide': [], 'sata': []}
expected_cdrom_spec = self.params.get('cdrom')
if expected_cdrom_spec:
for cdrom_spec in expected_cdrom_spec:
cdrom_spec['controller_type'] = cdrom_spec.get('controller_type', 'ide').lower()
if cdrom_spec['controller_type'] not in ['ide', 'sata']:
self.module.fail_json(msg="Invalid cdrom.controller_type: %s, valid value is 'ide' or 'sata'."
% cdrom_spec['controller_type'])
cdrom_spec['state'] = cdrom_spec.get('state', 'present').lower()
if cdrom_spec['state'] not in ['present', 'absent']:
self.module.fail_json(msg="Invalid cdrom.state: %s, valid value is 'present', 'absent'."
% cdrom_spec['state'])
if cdrom_spec['state'] == 'present':
if 'type' in cdrom_spec and cdrom_spec.get('type') not in ['none', 'client', 'iso']:
self.module.fail_json(msg="Invalid cdrom.type: %s, valid value is 'none', 'client' or 'iso'."
% cdrom_spec.get('type'))
if cdrom_spec.get('type') == 'iso' and not cdrom_spec.get('iso_path'):
self.module.fail_json(msg="cdrom.iso_path is mandatory when cdrom.type is set to iso.")
if cdrom_spec['controller_type'] == 'ide' and \
(cdrom_spec.get('controller_number') not in [0, 1] or cdrom_spec.get('unit_number') not in [0, 1]):
self.module.fail_json(msg="Invalid cdrom.controller_number: %s or cdrom.unit_number: %s, valid"
" values are 0 or 1 for IDE controller." % (cdrom_spec.get('controller_number'), cdrom_spec.get('unit_number')))
if cdrom_spec['controller_type'] == 'sata' and \
(cdrom_spec.get('controller_number') not in range(0, 4) or cdrom_spec.get('unit_number') not in range(0, 30)):
self.module.fail_json(msg="Invalid cdrom.controller_number: %s or cdrom.unit_number: %s,"
" valid controller_number value is 0-3, valid unit_number is 0-29"
" for SATA controller." % (cdrom_spec.get('controller_number'), cdrom_spec.get('unit_number')))
ctl_exist = False
for exist_spec in cdroms.get(cdrom_spec['controller_type']):
if exist_spec['num'] == cdrom_spec['controller_number']:
ctl_exist = True
exist_spec['cdrom'].append(cdrom_spec)
break
if not ctl_exist:
cdroms.get(cdrom_spec['controller_type']).append({'num': cdrom_spec['controller_number'], 'cdrom': [cdrom_spec]})
return cdroms
def configure_cdrom(self, vm_obj):
# Configure the VM CD-ROM
if self.params.get('cdrom'):
if vm_obj and vm_obj.config.template:
# Changing CD-ROM settings on a template is not supported
return
if isinstance(self.params.get('cdrom'), dict):
self.configure_cdrom_dict(vm_obj)
elif isinstance(self.params.get('cdrom'), list):
self.configure_cdrom_list(vm_obj)
def configure_cdrom_dict(self, vm_obj):
if self.params["cdrom"].get('type') not in ['none', 'client', 'iso']:
self.module.fail_json(msg="cdrom.type is mandatory. Options are 'none', 'client', and 'iso'.")
if self.params["cdrom"]['type'] == 'iso' and not self.params["cdrom"].get('iso_path'):
self.module.fail_json(msg="cdrom.iso_path is mandatory when cdrom.type is set to iso.")
cdrom_spec = None
cdrom_devices = self.get_vm_cdrom_devices(vm=vm_obj)
iso_path = self.params["cdrom"].get("iso_path")
if len(cdrom_devices) == 0:
# Creating new CD-ROM
ide_devices = self.get_vm_ide_devices(vm=vm_obj)
if len(ide_devices) == 0:
# Creating new IDE device
ide_ctl = self.device_helper.create_ide_controller()
ide_device = ide_ctl.device
self.change_detected = True
self.configspec.deviceChange.append(ide_ctl)
else:
ide_device = ide_devices[0]
if len(ide_device.device) > 3:
self.module.fail_json(msg="hardware.cdrom specified for a VM or template which already has 4"
" IDE devices of which none are a cdrom")
cdrom_spec = self.device_helper.create_cdrom(ide_device=ide_device, cdrom_type=self.params["cdrom"]["type"],
iso_path=iso_path)
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
cdrom_spec.device.connectable.connected = (self.params["cdrom"]["type"] != "none")
elif not self.device_helper.is_equal_cdrom(vm_obj=vm_obj, cdrom_device=cdrom_devices[0],
cdrom_type=self.params["cdrom"]["type"], iso_path=iso_path):
self.device_helper.update_cdrom_config(vm_obj, self.params["cdrom"], cdrom_devices[0], iso_path=iso_path)
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
cdrom_spec.device = cdrom_devices[0]
if cdrom_spec:
self.change_detected = True
self.configspec.deviceChange.append(cdrom_spec)
def configure_cdrom_list(self, vm_obj):
configured_cdroms = self.sanitize_cdrom_params()
cdrom_devices = self.get_vm_cdrom_devices(vm=vm_obj)
# configure IDE CD-ROMs
if configured_cdroms['ide']:
ide_devices = self.get_vm_ide_devices(vm=vm_obj)
for expected_cdrom_spec in configured_cdroms['ide']:
ide_device = None
for device in ide_devices:
if device.busNumber == expected_cdrom_spec['num']:
ide_device = device
break
                # if no matching IDE controller was found, or no IDE controller exists yet
if not ide_device:
ide_ctl = self.device_helper.create_ide_controller(bus_number=expected_cdrom_spec['num'])
ide_device = ide_ctl.device
self.change_detected = True
self.configspec.deviceChange.append(ide_ctl)
for cdrom in expected_cdrom_spec['cdrom']:
cdrom_device = None
iso_path = cdrom.get('iso_path')
unit_number = cdrom.get('unit_number')
for target_cdrom in cdrom_devices:
if target_cdrom.controllerKey == ide_device.key and target_cdrom.unitNumber == unit_number:
cdrom_device = target_cdrom
break
# create new CD-ROM
if not cdrom_device and cdrom.get('state') != 'absent':
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
                            self.module.fail_json(msg='A CD-ROM attached to an IDE controller does not support hot-add.')
if len(ide_device.device) == 2:
self.module.fail_json(msg='Maximum number of CD-ROMs attached to IDE controller is 2.')
cdrom_spec = self.device_helper.create_cdrom(ide_device=ide_device, cdrom_type=cdrom['type'],
iso_path=iso_path, unit_number=unit_number)
self.change_detected = True
self.configspec.deviceChange.append(cdrom_spec)
# re-configure CD-ROM
elif cdrom_device and cdrom.get('state') != 'absent' and \
not self.device_helper.is_equal_cdrom(vm_obj=vm_obj, cdrom_device=cdrom_device,
cdrom_type=cdrom['type'], iso_path=iso_path):
self.device_helper.update_cdrom_config(vm_obj, cdrom, cdrom_device, iso_path=iso_path)
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
cdrom_spec.device = cdrom_device
self.change_detected = True
self.configspec.deviceChange.append(cdrom_spec)
# delete CD-ROM
elif cdrom_device and cdrom.get('state') == 'absent':
if vm_obj and vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
                            self.module.fail_json(msg='A CD-ROM attached to an IDE controller does not support hot-remove.')
cdrom_spec = self.device_helper.remove_cdrom(cdrom_device)
self.change_detected = True
self.configspec.deviceChange.append(cdrom_spec)
        # configuring SATA CD-ROMs is not supported yet
if configured_cdroms['sata']:
pass
def configure_hardware_params(self, vm_obj):
"""
Function to configure hardware related configuration of virtual machine
Args:
vm_obj: virtual machine object
"""
if 'hardware' in self.params:
if 'max_connections' in self.params['hardware']:
# maxMksConnections == max_connections
self.configspec.maxMksConnections = int(self.params['hardware']['max_connections'])
if vm_obj is None or self.configspec.maxMksConnections != vm_obj.config.maxMksConnections:
self.change_detected = True
if 'nested_virt' in self.params['hardware']:
self.configspec.nestedHVEnabled = bool(self.params['hardware']['nested_virt'])
if vm_obj is None or self.configspec.nestedHVEnabled != bool(vm_obj.config.nestedHVEnabled):
self.change_detected = True
if 'version' in self.params['hardware']:
hw_version_check_failed = False
temp_version = self.params['hardware'].get('version', 10)
if temp_version.lower() == 'latest':
# Check is to make sure vm_obj is not of type template
if vm_obj and not vm_obj.config.template:
try:
task = vm_obj.UpgradeVM_Task()
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'upgrade'}
except vim.fault.AlreadyUpgraded:
# Don't fail if VM is already upgraded.
pass
else:
try:
temp_version = int(temp_version)
except ValueError:
hw_version_check_failed = True
if temp_version not in range(3, 16):
hw_version_check_failed = True
if hw_version_check_failed:
self.module.fail_json(msg="Failed to set hardware.version '%s' value as valid"
" values range from 3 (ESX 2.x) to 14 (ESXi 6.5 and greater)." % temp_version)
# Hardware version is denoted as "vmx-10"
version = "vmx-%02d" % temp_version
self.configspec.version = version
if vm_obj is None or self.configspec.version != vm_obj.config.version:
self.change_detected = True
# Check is to make sure vm_obj is not of type template
if vm_obj and not vm_obj.config.template:
# VM exists and we need to update the hardware version
current_version = vm_obj.config.version
# current_version = "vmx-10"
version_digit = int(current_version.split("-", 1)[-1])
if temp_version < version_digit:
self.module.fail_json(msg="Current hardware version '%d' which is greater than the specified"
" version '%d'. Downgrading hardware version is"
" not supported. Please specify version greater"
" than the current version." % (version_digit,
temp_version))
new_version = "vmx-%02d" % temp_version
try:
task = vm_obj.UpgradeVM_Task(new_version)
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'upgrade'}
except vim.fault.AlreadyUpgraded:
# Don't fail if VM is already upgraded.
pass
if 'virt_based_security' in self.params['hardware']:
host_version = self.select_host().summary.config.product.version
if int(host_version.split('.')[0]) < 6 or (int(host_version.split('.')[0]) == 6 and int(host_version.split('.')[1]) < 7):
self.module.fail_json(msg="ESXi version %s not support VBS." % host_version)
guest_ids = ['windows9_64Guest', 'windows9Server64Guest']
if vm_obj is None:
guestid = self.configspec.guestId
else:
guestid = vm_obj.summary.config.guestId
if guestid not in guest_ids:
self.module.fail_json(msg="Guest '%s' not support VBS." % guestid)
if (vm_obj is None and int(self.configspec.version.split('-')[1]) >= 14) or \
(vm_obj and int(vm_obj.config.version.split('-')[1]) >= 14 and (vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOff)):
self.configspec.flags = vim.vm.FlagInfo()
self.configspec.flags.vbsEnabled = bool(self.params['hardware']['virt_based_security'])
if bool(self.params['hardware']['virt_based_security']):
self.configspec.flags.vvtdEnabled = True
self.configspec.nestedHVEnabled = True
if (vm_obj is None and self.configspec.firmware == 'efi') or \
(vm_obj and vm_obj.config.firmware == 'efi'):
self.configspec.bootOptions = vim.vm.BootOptions()
self.configspec.bootOptions.efiSecureBootEnabled = True
else:
self.module.fail_json(msg="Not support VBS when firmware is BIOS.")
if vm_obj is None or self.configspec.flags.vbsEnabled != vm_obj.config.flags.vbsEnabled:
self.change_detected = True
def get_device_by_type(self, vm=None, type=None):
device_list = []
if vm is None or type is None:
return device_list
for device in vm.config.hardware.device:
if isinstance(device, type):
device_list.append(device)
return device_list
def get_vm_cdrom_devices(self, vm=None):
return self.get_device_by_type(vm=vm, type=vim.vm.device.VirtualCdrom)
def get_vm_ide_devices(self, vm=None):
return self.get_device_by_type(vm=vm, type=vim.vm.device.VirtualIDEController)
def get_vm_network_interfaces(self, vm=None):
device_list = []
if vm is None:
return device_list
nw_device_types = (vim.vm.device.VirtualPCNet32, vim.vm.device.VirtualVmxnet2,
vim.vm.device.VirtualVmxnet3, vim.vm.device.VirtualE1000,
vim.vm.device.VirtualE1000e, vim.vm.device.VirtualSriovEthernetCard)
for device in vm.config.hardware.device:
if isinstance(device, nw_device_types):
device_list.append(device)
return device_list
def sanitize_network_params(self):
"""
        Sanitize user-provided network params
Returns: A sanitized list of network params, else fails
"""
network_devices = list()
# Clean up user data here
for network in self.params['networks']:
if 'name' not in network and 'vlan' not in network:
self.module.fail_json(msg="Please specify at least a network name or"
" a VLAN name under VM network list.")
if 'name' in network and self.cache.get_network(network['name']) is None:
self.module.fail_json(msg="Network '%(name)s' does not exist." % network)
elif 'vlan' in network:
dvps = self.cache.get_all_objs(self.content, [vim.dvs.DistributedVirtualPortgroup])
for dvp in dvps:
if hasattr(dvp.config.defaultPortConfig, 'vlan') and \
isinstance(dvp.config.defaultPortConfig.vlan.vlanId, int) and \
str(dvp.config.defaultPortConfig.vlan.vlanId) == str(network['vlan']):
network['name'] = dvp.config.name
break
if 'dvswitch_name' in network and \
dvp.config.distributedVirtualSwitch.name == network['dvswitch_name'] and \
dvp.config.name == network['vlan']:
network['name'] = dvp.config.name
break
if dvp.config.name == network['vlan']:
network['name'] = dvp.config.name
break
else:
self.module.fail_json(msg="VLAN '%(vlan)s' does not exist." % network)
if 'type' in network:
if network['type'] not in ['dhcp', 'static']:
self.module.fail_json(msg="Network type '%(type)s' is not a valid parameter."
" Valid parameters are ['dhcp', 'static']." % network)
if network['type'] != 'static' and ('ip' in network or 'netmask' in network):
self.module.fail_json(msg='Static IP information provided for network "%(name)s",'
' but "type" is set to "%(type)s".' % network)
else:
                # Type is an optional parameter; if the user provided an IP or netmask,
                # assume the network type is 'static'
if 'ip' in network or 'netmask' in network:
network['type'] = 'static'
else:
# User wants network type as 'dhcp'
network['type'] = 'dhcp'
if network.get('type') == 'static':
if 'ip' in network and 'netmask' not in network:
self.module.fail_json(msg="'netmask' is required if 'ip' is"
" specified under VM network list.")
if 'ip' not in network and 'netmask' in network:
self.module.fail_json(msg="'ip' is required if 'netmask' is"
" specified under VM network list.")
validate_device_types = ['pcnet32', 'vmxnet2', 'vmxnet3', 'e1000', 'e1000e', 'sriov']
if 'device_type' in network and network['device_type'] not in validate_device_types:
self.module.fail_json(msg="Device type specified '%s' is not valid."
" Please specify correct device"
" type from ['%s']." % (network['device_type'],
"', '".join(validate_device_types)))
if 'mac' in network and not is_mac(network['mac']):
self.module.fail_json(msg="Device MAC address '%s' is invalid."
" Please provide correct MAC address." % network['mac'])
network_devices.append(network)
return network_devices
def configure_network(self, vm_obj):
# Ignore empty networks, this permits to keep networks when deploying a template/cloning a VM
if len(self.params['networks']) == 0:
return
network_devices = self.sanitize_network_params()
# List current device for Clone or Idempotency
current_net_devices = self.get_vm_network_interfaces(vm=vm_obj)
if len(network_devices) < len(current_net_devices):
self.module.fail_json(msg="Given network device list is lesser than current VM device list (%d < %d). "
"Removing interfaces is not allowed"
% (len(network_devices), len(current_net_devices)))
for key in range(0, len(network_devices)):
nic_change_detected = False
network_name = network_devices[key]['name']
if key < len(current_net_devices) and (vm_obj or self.params['template']):
                # We are editing existing network devices; this happens when we
                # are cloning from a VM or a template
nic = vim.vm.device.VirtualDeviceSpec()
nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
nic.device = current_net_devices[key]
if ('wake_on_lan' in network_devices[key] and
nic.device.wakeOnLanEnabled != network_devices[key].get('wake_on_lan')):
nic.device.wakeOnLanEnabled = network_devices[key].get('wake_on_lan')
nic_change_detected = True
if ('start_connected' in network_devices[key] and
nic.device.connectable.startConnected != network_devices[key].get('start_connected')):
nic.device.connectable.startConnected = network_devices[key].get('start_connected')
nic_change_detected = True
if ('allow_guest_control' in network_devices[key] and
nic.device.connectable.allowGuestControl != network_devices[key].get('allow_guest_control')):
nic.device.connectable.allowGuestControl = network_devices[key].get('allow_guest_control')
nic_change_detected = True
if nic.device.deviceInfo.summary != network_name:
nic.device.deviceInfo.summary = network_name
nic_change_detected = True
if 'device_type' in network_devices[key]:
device = self.device_helper.get_device(network_devices[key]['device_type'], network_name)
device_class = type(device)
if not isinstance(nic.device, device_class):
self.module.fail_json(msg="Changing the device type is not possible when interface is already present. "
"The failing device type is %s" % network_devices[key]['device_type'])
# Changing mac address has no effect when editing interface
if 'mac' in network_devices[key] and nic.device.macAddress != current_net_devices[key].macAddress:
self.module.fail_json(msg="Changing MAC address has not effect when interface is already present. "
"The failing new MAC address is %s" % nic.device.macAddress)
else:
# Default device type is vmxnet3, VMware best practice
device_type = network_devices[key].get('device_type', 'vmxnet3')
nic = self.device_helper.create_nic(device_type,
'Network Adapter %s' % (key + 1),
network_devices[key])
nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
nic_change_detected = True
if hasattr(self.cache.get_network(network_name), 'portKeys'):
# VDS switch
pg_obj = None
if 'dvswitch_name' in network_devices[key]:
dvs_name = network_devices[key]['dvswitch_name']
dvs_obj = find_dvs_by_name(self.content, dvs_name)
if dvs_obj is None:
self.module.fail_json(msg="Unable to find distributed virtual switch %s" % dvs_name)
pg_obj = find_dvspg_by_name(dvs_obj, network_name)
if pg_obj is None:
self.module.fail_json(msg="Unable to find distributed port group %s" % network_name)
else:
pg_obj = self.cache.find_obj(self.content, [vim.dvs.DistributedVirtualPortgroup], network_name)
# TODO: (akasurde) There is no way to find association between resource pool and distributed virtual portgroup
# For now, check if we are able to find distributed virtual switch
if not pg_obj.config.distributedVirtualSwitch:
self.module.fail_json(msg="Failed to find distributed virtual switch which is associated with"
" distributed virtual portgroup '%s'. Make sure hostsystem is associated with"
" the given distributed virtual portgroup. Also, check if user has correct"
" permission to access distributed virtual switch in the given portgroup." % pg_obj.name)
if (nic.device.backing and
(not hasattr(nic.device.backing, 'port') or
(nic.device.backing.port.portgroupKey != pg_obj.key or
nic.device.backing.port.switchUuid != pg_obj.config.distributedVirtualSwitch.uuid))):
nic_change_detected = True
dvs_port_connection = vim.dvs.PortConnection()
dvs_port_connection.portgroupKey = pg_obj.key
                # If the user specifies a distributed port group that is not associated with the host system
                # on which the virtual machine is going to be deployed, then we get an error. We can infer that
                # there is no association between the given distributed port group and the host system.
host_system = self.params.get('esxi_hostname')
if host_system and host_system not in [host.config.host.name for host in pg_obj.config.distributedVirtualSwitch.config.host]:
self.module.fail_json(msg="It seems that host system '%s' is not associated with distributed"
" virtual portgroup '%s'. Please make sure host system is associated"
" with given distributed virtual portgroup" % (host_system, pg_obj.name))
dvs_port_connection.switchUuid = pg_obj.config.distributedVirtualSwitch.uuid
nic.device.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo()
nic.device.backing.port = dvs_port_connection
elif isinstance(self.cache.get_network(network_name), vim.OpaqueNetwork):
# NSX-T Logical Switch
nic.device.backing = vim.vm.device.VirtualEthernetCard.OpaqueNetworkBackingInfo()
network_id = self.cache.get_network(network_name).summary.opaqueNetworkId
nic.device.backing.opaqueNetworkType = 'nsx.LogicalSwitch'
nic.device.backing.opaqueNetworkId = network_id
nic.device.deviceInfo.summary = 'nsx.LogicalSwitch: %s' % network_id
nic_change_detected = True
else:
# vSwitch
if not isinstance(nic.device.backing, vim.vm.device.VirtualEthernetCard.NetworkBackingInfo):
nic.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
nic_change_detected = True
net_obj = self.cache.get_network(network_name)
if nic.device.backing.network != net_obj:
nic.device.backing.network = net_obj
nic_change_detected = True
if nic.device.backing.deviceName != network_name:
nic.device.backing.deviceName = network_name
nic_change_detected = True
if nic_change_detected:
# Change to fix the issue found while configuring opaque network
# VMs cloned from a template with opaque network will get disconnected
# Replacing deprecated config parameter with relocation Spec
if isinstance(self.cache.get_network(network_name), vim.OpaqueNetwork):
self.relospec.deviceChange.append(nic)
else:
self.configspec.deviceChange.append(nic)
self.change_detected = True
def configure_vapp_properties(self, vm_obj):
if len(self.params['vapp_properties']) == 0:
return
for x in self.params['vapp_properties']:
if not x.get('id'):
self.module.fail_json(msg="id is required to set vApp property")
new_vmconfig_spec = vim.vApp.VmConfigSpec()
if vm_obj:
# VM exists
# This is primarily for vcsim/integration tests, unset vAppConfig was not seen on my deployments
orig_spec = vm_obj.config.vAppConfig if vm_obj.config.vAppConfig else new_vmconfig_spec
vapp_properties_current = dict((x.id, x) for x in orig_spec.property)
vapp_properties_to_change = dict((x['id'], x) for x in self.params['vapp_properties'])
# each property must have a unique key
# init key counter with max value + 1
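            # e.g. if the existing property keys are [0, 1, 5], new properties
            # are appended starting at key 6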
all_keys = [x.key for x in orig_spec.property]
new_property_index = max(all_keys) + 1 if all_keys else 0
for property_id, property_spec in vapp_properties_to_change.items():
is_property_changed = False
new_vapp_property_spec = vim.vApp.PropertySpec()
if property_id in vapp_properties_current:
if property_spec.get('operation') == 'remove':
new_vapp_property_spec.operation = 'remove'
new_vapp_property_spec.removeKey = vapp_properties_current[property_id].key
is_property_changed = True
else:
# this is 'edit' branch
new_vapp_property_spec.operation = 'edit'
new_vapp_property_spec.info = vapp_properties_current[property_id]
try:
for property_name, property_value in property_spec.items():
if property_name == 'operation':
# operation is not an info object property
# if set to anything other than 'remove' we don't fail
continue
# Updating attributes only if needed
if getattr(new_vapp_property_spec.info, property_name) != property_value:
setattr(new_vapp_property_spec.info, property_name, property_value)
is_property_changed = True
except Exception as e:
msg = "Failed to set vApp property field='%s' and value='%s'. Error: %s" % (property_name, property_value, to_text(e))
self.module.fail_json(msg=msg)
else:
if property_spec.get('operation') == 'remove':
# attempt to delete non-existent property
continue
# this is add new property branch
new_vapp_property_spec.operation = 'add'
property_info = vim.vApp.PropertyInfo()
property_info.classId = property_spec.get('classId')
property_info.instanceId = property_spec.get('instanceId')
property_info.id = property_spec.get('id')
property_info.category = property_spec.get('category')
property_info.label = property_spec.get('label')
property_info.type = property_spec.get('type', 'string')
property_info.userConfigurable = property_spec.get('userConfigurable', True)
property_info.defaultValue = property_spec.get('defaultValue')
property_info.value = property_spec.get('value', '')
property_info.description = property_spec.get('description')
new_vapp_property_spec.info = property_info
new_vapp_property_spec.info.key = new_property_index
new_property_index += 1
is_property_changed = True
if is_property_changed:
new_vmconfig_spec.property.append(new_vapp_property_spec)
else:
# New VM
all_keys = [x.key for x in new_vmconfig_spec.property]
new_property_index = max(all_keys) + 1 if all_keys else 0
vapp_properties_to_change = dict((x['id'], x) for x in self.params['vapp_properties'])
is_property_changed = False
for property_id, property_spec in vapp_properties_to_change.items():
new_vapp_property_spec = vim.vApp.PropertySpec()
# this is add new property branch
new_vapp_property_spec.operation = 'add'
property_info = vim.vApp.PropertyInfo()
property_info.classId = property_spec.get('classId')
property_info.instanceId = property_spec.get('instanceId')
property_info.id = property_spec.get('id')
property_info.category = property_spec.get('category')
property_info.label = property_spec.get('label')
property_info.type = property_spec.get('type', 'string')
property_info.userConfigurable = property_spec.get('userConfigurable', True)
property_info.defaultValue = property_spec.get('defaultValue')
property_info.value = property_spec.get('value', '')
property_info.description = property_spec.get('description')
new_vapp_property_spec.info = property_info
new_vapp_property_spec.info.key = new_property_index
new_property_index += 1
is_property_changed = True
if is_property_changed:
new_vmconfig_spec.property.append(new_vapp_property_spec)
if new_vmconfig_spec.property:
self.configspec.vAppConfig = new_vmconfig_spec
self.change_detected = True
def customize_customvalues(self, vm_obj):
if len(self.params['customvalues']) == 0:
return
facts = self.gather_facts(vm_obj)
for kv in self.params['customvalues']:
if 'key' not in kv or 'value' not in kv:
self.module.exit_json(msg="customvalues items required both 'key' and 'value' fields.")
key_id = None
for field in self.content.customFieldsManager.field:
if field.name == kv['key']:
key_id = field.key
break
if not key_id:
self.module.fail_json(msg="Unable to find custom value key %s" % kv['key'])
            # If the provided key/value pair differs from the one fetched from facts, change it
if kv['key'] not in facts['customvalues'] or facts['customvalues'][kv['key']] != kv['value']:
self.content.customFieldsManager.SetField(entity=vm_obj, key=key_id, value=kv['value'])
self.change_detected = True
def customize_vm(self, vm_obj):
# User specified customization specification
custom_spec_name = self.params.get('customization_spec')
if custom_spec_name:
cc_mgr = self.content.customizationSpecManager
if cc_mgr.DoesCustomizationSpecExist(name=custom_spec_name):
temp_spec = cc_mgr.GetCustomizationSpec(name=custom_spec_name)
self.customspec = temp_spec.spec
return
else:
self.module.fail_json(msg="Unable to find customization specification"
" '%s' in given configuration." % custom_spec_name)
# Network settings
adaptermaps = []
for network in self.params['networks']:
guest_map = vim.vm.customization.AdapterMapping()
guest_map.adapter = vim.vm.customization.IPSettings()
if 'ip' in network and 'netmask' in network:
guest_map.adapter.ip = vim.vm.customization.FixedIp()
guest_map.adapter.ip.ipAddress = str(network['ip'])
guest_map.adapter.subnetMask = str(network['netmask'])
elif 'type' in network and network['type'] == 'dhcp':
guest_map.adapter.ip = vim.vm.customization.DhcpIpGenerator()
if 'gateway' in network:
guest_map.adapter.gateway = network['gateway']
# On Windows, DNS domain and DNS servers can be set by network interface
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.IPSettings.html
if 'domain' in network:
guest_map.adapter.dnsDomain = network['domain']
elif 'domain' in self.params['customization']:
guest_map.adapter.dnsDomain = self.params['customization']['domain']
if 'dns_servers' in network:
guest_map.adapter.dnsServerList = network['dns_servers']
elif 'dns_servers' in self.params['customization']:
guest_map.adapter.dnsServerList = self.params['customization']['dns_servers']
adaptermaps.append(guest_map)
# Global DNS settings
globalip = vim.vm.customization.GlobalIPSettings()
if 'dns_servers' in self.params['customization']:
globalip.dnsServerList = self.params['customization']['dns_servers']
        # TODO: Maybe list the different domains from the interfaces here by default?
if 'dns_suffix' in self.params['customization']:
dns_suffix = self.params['customization']['dns_suffix']
if isinstance(dns_suffix, list):
globalip.dnsSuffixList = " ".join(dns_suffix)
else:
globalip.dnsSuffixList = dns_suffix
elif 'domain' in self.params['customization']:
globalip.dnsSuffixList = self.params['customization']['domain']
if self.params['guest_id']:
guest_id = self.params['guest_id']
else:
guest_id = vm_obj.summary.config.guestId
# For windows guest OS, use SysPrep
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.Sysprep.html#field_detail
if 'win' in guest_id:
ident = vim.vm.customization.Sysprep()
ident.userData = vim.vm.customization.UserData()
# Setting hostName, orgName and fullName is mandatory, so we set some default when missing
ident.userData.computerName = vim.vm.customization.FixedName()
# computer name will be truncated to 15 characters if using VM name
default_name = self.params['name'].replace(' ', '')
punctuation = string.punctuation.replace('-', '')
default_name = ''.join([c for c in default_name if c not in punctuation])
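            # e.g. a VM named 'Test VM #1' becomes 'TestVM1' here, then is
            # truncated to 15 characters below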
ident.userData.computerName.name = str(self.params['customization'].get('hostname', default_name[0:15]))
ident.userData.fullName = str(self.params['customization'].get('fullname', 'Administrator'))
ident.userData.orgName = str(self.params['customization'].get('orgname', 'ACME'))
if 'productid' in self.params['customization']:
ident.userData.productId = str(self.params['customization']['productid'])
ident.guiUnattended = vim.vm.customization.GuiUnattended()
if 'autologon' in self.params['customization']:
ident.guiUnattended.autoLogon = self.params['customization']['autologon']
ident.guiUnattended.autoLogonCount = self.params['customization'].get('autologoncount', 1)
if 'timezone' in self.params['customization']:
# Check if timezone value is a int before proceeding.
ident.guiUnattended.timeZone = self.device_helper.integer_value(
self.params['customization']['timezone'],
'customization.timezone')
ident.identification = vim.vm.customization.Identification()
if self.params['customization'].get('password', '') != '':
ident.guiUnattended.password = vim.vm.customization.Password()
ident.guiUnattended.password.value = str(self.params['customization']['password'])
ident.guiUnattended.password.plainText = True
if 'joindomain' in self.params['customization']:
if 'domainadmin' not in self.params['customization'] or 'domainadminpassword' not in self.params['customization']:
self.module.fail_json(msg="'domainadmin' and 'domainadminpassword' entries are mandatory in 'customization' section to use "
"joindomain feature")
ident.identification.domainAdmin = str(self.params['customization']['domainadmin'])
ident.identification.joinDomain = str(self.params['customization']['joindomain'])
ident.identification.domainAdminPassword = vim.vm.customization.Password()
ident.identification.domainAdminPassword.value = str(self.params['customization']['domainadminpassword'])
ident.identification.domainAdminPassword.plainText = True
elif 'joinworkgroup' in self.params['customization']:
ident.identification.joinWorkgroup = str(self.params['customization']['joinworkgroup'])
if 'runonce' in self.params['customization']:
ident.guiRunOnce = vim.vm.customization.GuiRunOnce()
ident.guiRunOnce.commandList = self.params['customization']['runonce']
else:
# FIXME: We have no clue whether this non-Windows OS is actually Linux, hence it might fail!
# For Linux guest OS, use LinuxPrep
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.LinuxPrep.html
ident = vim.vm.customization.LinuxPrep()
            # TODO: Maybe add the domain from the interface if missing?
if 'domain' in self.params['customization']:
ident.domain = str(self.params['customization']['domain'])
ident.hostName = vim.vm.customization.FixedName()
hostname = str(self.params['customization'].get('hostname', self.params['name'].split('.')[0]))
# Remove all characters except alphanumeric and minus which is allowed by RFC 952
valid_hostname = re.sub(r"[^a-zA-Z0-9\-]", "", hostname)
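            # e.g. 'db_server1' becomes 'dbserver1', while 'web-01' is kept as-is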
ident.hostName.name = valid_hostname
# List of supported time zones for different vSphere versions in Linux/Unix systems
# https://kb.vmware.com/s/article/2145518
if 'timezone' in self.params['customization']:
ident.timeZone = str(self.params['customization']['timezone'])
if 'hwclockUTC' in self.params['customization']:
ident.hwClockUTC = self.params['customization']['hwclockUTC']
self.customspec = vim.vm.customization.Specification()
self.customspec.nicSettingMap = adaptermaps
self.customspec.globalIPSettings = globalip
self.customspec.identity = ident
def get_vm_scsi_controller(self, vm_obj):
# If vm_obj doesn't exist there is no SCSI controller to find
if vm_obj is None:
return None
for device in vm_obj.config.hardware.device:
if self.device_helper.is_scsi_controller(device):
scsi_ctl = vim.vm.device.VirtualDeviceSpec()
scsi_ctl.device = device
return scsi_ctl
return None
def get_configured_disk_size(self, expected_disk_spec):
# what size is it?
if [x for x in expected_disk_spec.keys() if x.startswith('size_') or x == 'size']:
# size, size_tb, size_gb, size_mb, size_kb
if 'size' in expected_disk_spec:
size_regex = re.compile(r'(\d+(?:\.\d+)?)([tgmkTGMK][bB])')
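                # e.g. '40gb' -> groups ('40', 'gb'); '1.5tb' -> groups ('1.5', 'tb')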
disk_size_m = size_regex.match(expected_disk_spec['size'])
try:
if disk_size_m:
expected = disk_size_m.group(1)
unit = disk_size_m.group(2)
else:
raise ValueError
if re.match(r'\d+\.\d+', expected):
# We found float value in string, let's typecast it
expected = float(expected)
else:
# We found int value in string, let's typecast it
expected = int(expected)
if not expected or not unit:
raise ValueError
except (TypeError, ValueError, NameError):
# Common failure
self.module.fail_json(msg="Failed to parse disk size please review value"
" provided using documentation.")
else:
param = [x for x in expected_disk_spec.keys() if x.startswith('size_')][0]
unit = param.split('_')[-1].lower()
expected = [x[1] for x in expected_disk_spec.items() if x[0].startswith('size_')][0]
expected = int(expected)
disk_units = dict(tb=3, gb=2, mb=1, kb=0)
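            # e.g. size_gb: 40 -> 40 * 1024 ** 2 = 41943040, i.e. the size expressed in KB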
if unit in disk_units:
unit = unit.lower()
return expected * (1024 ** disk_units[unit])
else:
self.module.fail_json(msg="%s is not a supported unit for disk size."
" Supported units are ['%s']." % (unit,
"', '".join(disk_units.keys())))
        # a disk entry was provided but no size attribute was found, fail
        self.module.fail_json(
            msg="No size, size_kb, size_mb, size_gb or size_tb attribute found in the disk configuration")
def find_vmdk(self, vmdk_path):
"""
Takes a vsphere datastore path in the format
[datastore_name] path/to/file.vmdk
Returns vsphere file object or raises RuntimeError
"""
datastore_name, vmdk_fullpath, vmdk_filename, vmdk_folder = self.vmdk_disk_path_split(vmdk_path)
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
if datastore is None:
self.module.fail_json(msg="Failed to find the datastore %s" % datastore_name)
return self.find_vmdk_file(datastore, vmdk_fullpath, vmdk_filename, vmdk_folder)
def add_existing_vmdk(self, vm_obj, expected_disk_spec, diskspec, scsi_ctl):
"""
Adds vmdk file described by expected_disk_spec['filename'], retrieves the file
information and adds the correct spec to self.configspec.deviceChange.
"""
filename = expected_disk_spec['filename']
# if this is a new disk, or the disk file names are different
if (vm_obj and diskspec.device.backing.fileName != filename) or vm_obj is None:
vmdk_file = self.find_vmdk(expected_disk_spec['filename'])
diskspec.device.backing.fileName = expected_disk_spec['filename']
diskspec.device.capacityInKB = VmomiSupport.vmodlTypes['long'](vmdk_file.fileSize / 1024)
diskspec.device.key = -1
self.change_detected = True
self.configspec.deviceChange.append(diskspec)
def configure_disks(self, vm_obj):
# Ignore an empty disk list; this keeps existing disks when deploying from a template or cloning a VM
if len(self.params['disk']) == 0:
return
scsi_ctl = self.get_vm_scsi_controller(vm_obj)
# Create scsi controller only if we are deploying a new VM, not a template or reconfiguring
if vm_obj is None or scsi_ctl is None:
scsi_ctl = self.device_helper.create_scsi_controller(self.get_scsi_type())
self.change_detected = True
self.configspec.deviceChange.append(scsi_ctl)
disks = [x for x in vm_obj.config.hardware.device if isinstance(x, vim.vm.device.VirtualDisk)] \
if vm_obj is not None else None
if disks is not None and self.params.get('disk') and len(self.params.get('disk')) < len(disks):
self.module.fail_json(msg="Provided disks configuration has less disks than "
"the target object (%d vs %d)" % (len(self.params.get('disk')), len(disks)))
disk_index = 0
for expected_disk_spec in self.params.get('disk'):
disk_modified = False
# If we are manipulating an existing object which has disks and disk_index is within range
if vm_obj is not None and disks is not None and disk_index < len(disks):
diskspec = vim.vm.device.VirtualDeviceSpec()
# set the operation to edit so that it knows to keep other settings
diskspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
diskspec.device = disks[disk_index]
else:
diskspec = self.device_helper.create_scsi_disk(scsi_ctl, disk_index)
disk_modified = True
# increment index for next disk search
disk_index += 1
# index 7 is reserved for the SCSI controller
if disk_index == 7:
disk_index += 1
if 'disk_mode' in expected_disk_spec:
disk_mode = expected_disk_spec.get('disk_mode', 'persistent').lower()
valid_disk_mode = ['persistent', 'independent_persistent', 'independent_nonpersistent']
if disk_mode not in valid_disk_mode:
self.module.fail_json(msg="disk_mode specified is not valid."
" Should be one of ['%s']" % "', '".join(valid_disk_mode))
if (vm_obj and diskspec.device.backing.diskMode != disk_mode) or (vm_obj is None):
diskspec.device.backing.diskMode = disk_mode
disk_modified = True
else:
diskspec.device.backing.diskMode = "persistent"
# is it thin?
if 'type' in expected_disk_spec:
disk_type = expected_disk_spec.get('type', '').lower()
if disk_type == 'thin':
diskspec.device.backing.thinProvisioned = True
elif disk_type == 'eagerzeroedthick':
diskspec.device.backing.eagerlyScrub = True
if 'filename' in expected_disk_spec and expected_disk_spec['filename'] is not None:
self.add_existing_vmdk(vm_obj, expected_disk_spec, diskspec, scsi_ctl)
continue
elif vm_obj is None or self.params['template']:
# We are creating new VM or from Template
# Only create virtual device if not backed by vmdk in original template
if diskspec.device.backing.fileName == '':
diskspec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
# which datastore?
if expected_disk_spec.get('datastore'):
# TODO: This is already handled by the relocation spec,
# but it needs to eventually be handled for all the
# other disks defined
pass
kb = self.get_configured_disk_size(expected_disk_spec)
# VMware doesn't allow reducing disk sizes
if kb < diskspec.device.capacityInKB:
self.module.fail_json(
msg="Given disk size is smaller than found (%d < %d). Reducing disks is not allowed." %
(kb, diskspec.device.capacityInKB))
if kb != diskspec.device.capacityInKB or disk_modified:
diskspec.device.capacityInKB = kb
self.configspec.deviceChange.append(diskspec)
self.change_detected = True
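# Editorial sketch (assumption, not from the original source): a typical 'disk'
# parameter consumed by configure_disks() could look like
# [{'size_gb': 40, 'type': 'thin', 'datastore': 'ds1', 'disk_mode': 'persistent'}]
# where each key shown is one of the keys the loop above reads.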
def select_host(self):
hostsystem = self.cache.get_esx_host(self.params['esxi_hostname'])
if not hostsystem:
self.module.fail_json(msg='Failed to find ESX host "%(esxi_hostname)s"' % self.params)
if hostsystem.runtime.connectionState != 'connected' or hostsystem.runtime.inMaintenanceMode:
self.module.fail_json(msg='ESXi "%(esxi_hostname)s" is in an invalid state or in maintenance mode.' % self.params)
return hostsystem
def autoselect_datastore(self):
datastore = None
datastores = self.cache.get_all_objs(self.content, [vim.Datastore])
if datastores is None or len(datastores) == 0:
self.module.fail_json(msg="Unable to find a datastore list when autoselecting")
datastore_freespace = 0
for ds in datastores:
if not self.is_datastore_valid(datastore_obj=ds):
continue
if ds.summary.freeSpace > datastore_freespace:
datastore = ds
datastore_freespace = ds.summary.freeSpace
return datastore
def get_recommended_datastore(self, datastore_cluster_obj=None):
"""
Function to return Storage DRS recommended datastore from datastore cluster
Args:
datastore_cluster_obj: datastore cluster managed object
Returns: Name of recommended datastore from the given datastore cluster
"""
if datastore_cluster_obj is None:
return None
# Check if Datastore Cluster provided by user is SDRS ready
sdrs_status = datastore_cluster_obj.podStorageDrsEntry.storageDrsConfig.podConfig.enabled
if sdrs_status:
# We can get a storage recommendation only if SDRS is enabled on the given datastore cluster
pod_sel_spec = vim.storageDrs.PodSelectionSpec()
pod_sel_spec.storagePod = datastore_cluster_obj
storage_spec = vim.storageDrs.StoragePlacementSpec()
storage_spec.podSelectionSpec = pod_sel_spec
storage_spec.type = 'create'
try:
rec = self.content.storageResourceManager.RecommendDatastores(storageSpec=storage_spec)
rec_action = rec.recommendations[0].action[0]
return rec_action.destination.name
except Exception:
# There is some error so we fall back to general workflow
pass
datastore = None
datastore_freespace = 0
for ds in datastore_cluster_obj.childEntity:
if isinstance(ds, vim.Datastore) and ds.summary.freeSpace > datastore_freespace:
# If datastore field is provided, filter destination datastores
if not self.is_datastore_valid(datastore_obj=ds):
continue
datastore = ds
datastore_freespace = ds.summary.freeSpace
if datastore:
return datastore.name
return None
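# Editorial sketch: when SDRS is disabled (or RecommendDatastores fails), the
# fallback above returns the name of the member datastore with the most free
# space, e.g. 'ds2' when ds1 has 10 GB free and ds2 has 50 GB free.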
def select_datastore(self, vm_obj=None):
datastore = None
datastore_name = None
if len(self.params['disk']) != 0:
# TODO: really use the datastore for newly created disks
if 'autoselect_datastore' in self.params['disk'][0] and self.params['disk'][0]['autoselect_datastore']:
datastores = self.cache.get_all_objs(self.content, [vim.Datastore])
datastores = [x for x in datastores if self.cache.get_parent_datacenter(x).name == self.params['datacenter']]
if datastores is None or len(datastores) == 0:
self.module.fail_json(msg="Unable to find a datastore list when autoselecting")
datastore_freespace = 0
for ds in datastores:
if not self.is_datastore_valid(datastore_obj=ds):
continue
if (ds.summary.freeSpace > datastore_freespace) or (ds.summary.freeSpace == datastore_freespace and not datastore):
# If datastore field is provided, filter destination datastores
if 'datastore' in self.params['disk'][0] and \
isinstance(self.params['disk'][0]['datastore'], str) and \
ds.name.find(self.params['disk'][0]['datastore']) < 0:
continue
datastore = ds
datastore_name = datastore.name
datastore_freespace = ds.summary.freeSpace
elif 'datastore' in self.params['disk'][0]:
datastore_name = self.params['disk'][0]['datastore']
# Check if user has provided datastore cluster first
datastore_cluster = self.cache.find_obj(self.content, [vim.StoragePod], datastore_name)
if datastore_cluster:
# If the user specified a datastore cluster, get the recommended datastore
datastore_name = self.get_recommended_datastore(datastore_cluster_obj=datastore_cluster)
# Check if get_recommended_datastore or user specified datastore exists or not
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
else:
self.module.fail_json(msg="Either datastore or autoselect_datastore should be provided to select datastore")
if not datastore and self.params['template']:
# use the template's existing DS
disks = [x for x in vm_obj.config.hardware.device if isinstance(x, vim.vm.device.VirtualDisk)]
if disks:
datastore = disks[0].backing.datastore
datastore_name = datastore.name
# validation
if datastore:
dc = self.cache.get_parent_datacenter(datastore)
if dc.name != self.params['datacenter']:
datastore = self.autoselect_datastore()
datastore_name = datastore.name
if not datastore:
if len(self.params['disk']) != 0 or self.params['template'] is None:
self.module.fail_json(msg="Unable to find the datastore with given parameters."
" This could mean, %s is a non-existent virtual machine and module tried to"
" deploy it as new virtual machine with no disk. Please specify disks parameter"
" or specify template to clone from." % self.params['name'])
self.module.fail_json(msg="Failed to find a matching datastore")
return datastore, datastore_name
def obj_has_parent(self, obj, parent):
if obj is None and parent is None:
raise AssertionError()
current_parent = obj
while True:
if current_parent.name == parent.name:
return True
# Check if we have reached the root folder
moid = current_parent._moId
if moid in ['group-d1', 'ha-folder-root']:
return False
current_parent = current_parent.parent
if current_parent is None:
return False
def get_scsi_type(self):
disk_controller_type = "paravirtual"
# read the SCSI controller type from the hardware parameters
if 'hardware' in self.params:
if 'scsi' in self.params['hardware']:
if self.params['hardware']['scsi'] in ['buslogic', 'paravirtual', 'lsilogic', 'lsilogicsas']:
disk_controller_type = self.params['hardware']['scsi']
else:
self.module.fail_json(msg="hardware.scsi attribute should be 'paravirtual' or 'lsilogic'")
return disk_controller_type
def find_folder(self, searchpath):
""" Walk inventory objects one position of the searchpath at a time """
# split the searchpath so we can iterate through it
paths = [x.replace('/', '') for x in searchpath.split('/')]
paths_total = len(paths) - 1
position = 0
# recursive walk while looking for next element in searchpath
root = self.content.rootFolder
while root and position <= paths_total:
change = False
if hasattr(root, 'childEntity'):
for child in root.childEntity:
if child.name == paths[position]:
root = child
position += 1
change = True
break
elif isinstance(root, vim.Datacenter):
if hasattr(root, 'vmFolder'):
if root.vmFolder.name == paths[position]:
root = root.vmFolder
position += 1
change = True
else:
root = None
if not change:
root = None
return root
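# Editorial sketch: for a searchpath of 'DC1/vm/Prod', the split above yields
# ['DC1', 'vm', 'Prod'] and each loop iteration descends one matching child
# (or a datacenter's vmFolder), returning None as soon as no element matches.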
def get_resource_pool(self, cluster=None, host=None, resource_pool=None):
""" Get a resource pool, filter on cluster, esxi_hostname or resource_pool if given """
cluster_name = cluster or self.params.get('cluster', None)
host_name = host or self.params.get('esxi_hostname', None)
resource_pool_name = resource_pool or self.params.get('resource_pool', None)
# get the datacenter object
datacenter = find_obj(self.content, [vim.Datacenter], self.params['datacenter'])
if not datacenter:
self.module.fail_json(msg='Unable to find datacenter "%s"' % self.params['datacenter'])
# if cluster is given, get the cluster object
if cluster_name:
cluster = find_obj(self.content, [vim.ComputeResource], cluster_name, folder=datacenter)
if not cluster:
self.module.fail_json(msg='Unable to find cluster "%s"' % cluster_name)
# if host is given, get the cluster object using the host
elif host_name:
host = find_obj(self.content, [vim.HostSystem], host_name, folder=datacenter)
if not host:
self.module.fail_json(msg='Unable to find host "%s"' % host_name)
cluster = host.parent
else:
cluster = None
# get resource pools limiting search to cluster or datacenter
resource_pool = find_obj(self.content, [vim.ResourcePool], resource_pool_name, folder=cluster or datacenter)
if not resource_pool:
if resource_pool_name:
self.module.fail_json(msg='Unable to find resource_pool "%s"' % resource_pool_name)
else:
self.module.fail_json(msg='Unable to find resource pool, need esxi_hostname, resource_pool, or cluster')
return resource_pool
def deploy_vm(self):
# https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/clone_vm.py
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.vm.CloneSpec.html
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.vm.ConfigSpec.html
# https://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/vim.vm.RelocateSpec.html
# FIXME:
# - static IPs
self.folder = self.params.get('folder', None)
if self.folder is None:
self.module.fail_json(msg="Folder is required parameter while deploying new virtual machine")
# Prepend / if it was missing from the folder path, also strip trailing slashes
if not self.folder.startswith('/'):
self.folder = '/%(folder)s' % self.params
self.folder = self.folder.rstrip('/')
datacenter = self.cache.find_obj(self.content, [vim.Datacenter], self.params['datacenter'])
if datacenter is None:
self.module.fail_json(msg='No datacenter named %(datacenter)s was found' % self.params)
dcpath = compile_folder_path_for_object(datacenter)
# Nested folder does not have trailing /
if not dcpath.endswith('/'):
dcpath += '/'
# Check for full path first in case it was already supplied
if (self.folder.startswith(dcpath + self.params['datacenter'] + '/vm') or
self.folder.startswith(dcpath + '/' + self.params['datacenter'] + '/vm')):
fullpath = self.folder
elif self.folder.startswith('/vm/') or self.folder == '/vm':
fullpath = "%s%s%s" % (dcpath, self.params['datacenter'], self.folder)
elif self.folder.startswith('/'):
fullpath = "%s%s/vm%s" % (dcpath, self.params['datacenter'], self.folder)
else:
fullpath = "%s%s/vm/%s" % (dcpath, self.params['datacenter'], self.folder)
f_obj = self.content.searchIndex.FindByInventoryPath(fullpath)
# abort if no strategy was successful
if f_obj is None:
# Add some debugging values on failure.
details = {
'datacenter': datacenter.name,
'datacenter_path': dcpath,
'folder': self.folder,
'full_search_path': fullpath,
}
self.module.fail_json(msg='No folder %s matched in the search path : %s' % (self.folder, fullpath),
details=details)
destfolder = f_obj
if self.params['template']:
vm_obj = self.get_vm_or_template(template_name=self.params['template'])
if vm_obj is None:
self.module.fail_json(msg="Could not find a template named %(template)s" % self.params)
else:
vm_obj = None
# always get a resource_pool
resource_pool = self.get_resource_pool()
# set the destination datastore for VM & disks
if self.params['datastore']:
# Give precedence to datastore value provided by user
# User may want to deploy VM to specific datastore.
datastore_name = self.params['datastore']
# Check if user has provided datastore cluster first
datastore_cluster = self.cache.find_obj(self.content, [vim.StoragePod], datastore_name)
if datastore_cluster:
# If the user specified a datastore cluster, get the recommended datastore
datastore_name = self.get_recommended_datastore(datastore_cluster_obj=datastore_cluster)
# Check if get_recommended_datastore or user specified datastore exists or not
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
else:
(datastore, datastore_name) = self.select_datastore(vm_obj)
self.configspec = vim.vm.ConfigSpec()
self.configspec.deviceChange = []
# create the relocation spec
self.relospec = vim.vm.RelocateSpec()
self.relospec.deviceChange = []
self.configure_guestid(vm_obj=vm_obj, vm_creation=True)
self.configure_cpu_and_memory(vm_obj=vm_obj, vm_creation=True)
self.configure_hardware_params(vm_obj=vm_obj)
self.configure_resource_alloc_info(vm_obj=vm_obj)
self.configure_vapp_properties(vm_obj=vm_obj)
self.configure_disks(vm_obj=vm_obj)
self.configure_network(vm_obj=vm_obj)
self.configure_cdrom(vm_obj=vm_obj)
# Find if we need network customizations (find keys in the dictionary that require customizations)
network_changes = False
for nw in self.params['networks']:
for key in nw:
# We don't need customizations for these keys
if key not in ('device_type', 'mac', 'name', 'vlan', 'type', 'start_connected'):
network_changes = True
break
if len(self.params['customization']) > 0 or network_changes or self.params.get('customization_spec') is not None:
self.customize_vm(vm_obj=vm_obj)
clonespec = None
clone_method = None
try:
if self.params['template']:
# Only select specific host when ESXi hostname is provided
if self.params['esxi_hostname']:
self.relospec.host = self.select_host()
self.relospec.datastore = datastore
# Convert disks present in the template if 'convert' is set
if self.params['convert']:
for device in vm_obj.config.hardware.device:
if isinstance(device, vim.vm.device.VirtualDisk):
disk_locator = vim.vm.RelocateSpec.DiskLocator()
disk_locator.diskBackingInfo = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
if self.params['convert'] in ['thin']:
disk_locator.diskBackingInfo.thinProvisioned = True
if self.params['convert'] in ['eagerzeroedthick']:
disk_locator.diskBackingInfo.eagerlyScrub = True
if self.params['convert'] in ['thick']:
disk_locator.diskBackingInfo.diskMode = "persistent"
disk_locator.diskId = device.key
disk_locator.datastore = datastore
self.relospec.disk.append(disk_locator)
# https://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/vim.vm.RelocateSpec.html
# > pool: For a clone operation from a template to a virtual machine, this argument is required.
self.relospec.pool = resource_pool
linked_clone = self.params.get('linked_clone')
snapshot_src = self.params.get('snapshot_src', None)
if linked_clone:
if snapshot_src is not None:
self.relospec.diskMoveType = vim.vm.RelocateSpec.DiskMoveOptions.createNewChildDiskBacking
else:
self.module.fail_json(msg="Parameter 'linked_src' and 'snapshot_src' are"
" required together for linked clone operation.")
clonespec = vim.vm.CloneSpec(template=self.params['is_template'], location=self.relospec)
if self.customspec:
clonespec.customization = self.customspec
if snapshot_src is not None:
if vm_obj.snapshot is None:
self.module.fail_json(msg="No snapshots present for virtual machine or template [%(template)s]" % self.params)
snapshot = self.get_snapshots_by_name_recursively(snapshots=vm_obj.snapshot.rootSnapshotList,
snapname=snapshot_src)
if len(snapshot) != 1:
self.module.fail_json(msg='virtual machine "%(template)s" does not contain'
' snapshot named "%(snapshot_src)s"' % self.params)
clonespec.snapshot = snapshot[0].snapshot
clonespec.config = self.configspec
clone_method = 'Clone'
try:
task = vm_obj.Clone(folder=destfolder, name=self.params['name'], spec=clonespec)
except vim.fault.NoPermission as e:
self.module.fail_json(msg="Failed to clone virtual machine %s to folder %s "
"due to permission issue: %s" % (self.params['name'],
destfolder,
to_native(e.msg)))
self.change_detected = True
else:
# ConfigSpec requires a name for VM creation
self.configspec.name = self.params['name']
self.configspec.files = vim.vm.FileInfo(logDirectory=None,
snapshotDirectory=None,
suspendDirectory=None,
vmPathName="[" + datastore_name + "]")
clone_method = 'CreateVM_Task'
try:
task = destfolder.CreateVM_Task(config=self.configspec, pool=resource_pool)
except vmodl.fault.InvalidRequest as e:
self.module.fail_json(msg="Failed to create virtual machine due to invalid configuration "
"parameter %s" % to_native(e.msg))
except vim.fault.RestrictedVersion as e:
self.module.fail_json(msg="Failed to create virtual machine due to "
"product versioning restrictions: %s" % to_native(e.msg))
self.change_detected = True
self.wait_for_task(task)
except TypeError as e:
self.module.fail_json(msg="TypeError was returned, please ensure to give correct inputs. %s" % to_text(e))
if task.info.state == 'error':
# https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2021361
# https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2173
# provide these to the user for debugging
clonespec_json = serialize_spec(clonespec)
configspec_json = serialize_spec(self.configspec)
kwargs = {
'changed': self.change_applied,
'failed': True,
'msg': task.info.error.msg,
'clonespec': clonespec_json,
'configspec': configspec_json,
'clone_method': clone_method
}
return kwargs
else:
# set annotation
vm = task.info.result
if self.params['annotation']:
annotation_spec = vim.vm.ConfigSpec()
annotation_spec.annotation = str(self.params['annotation'])
task = vm.ReconfigVM_Task(annotation_spec)
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'annotation'}
if self.params['customvalues']:
self.customize_customvalues(vm_obj=vm)
if self.params['wait_for_ip_address'] or self.params['wait_for_customization'] or self.params['state'] in ['poweredon', 'restarted']:
set_vm_power_state(self.content, vm, 'poweredon', force=False)
if self.params['wait_for_ip_address']:
wait_for_vm_ip(self.content, vm, self.params['wait_for_ip_address_timeout'])
if self.params['wait_for_customization']:
is_customization_ok = self.wait_for_customization(vm)
if not is_customization_ok:
vm_facts = self.gather_facts(vm)
return {'changed': self.change_applied, 'failed': True, 'instance': vm_facts, 'op': 'customization'}
vm_facts = self.gather_facts(vm)
return {'changed': self.change_applied, 'failed': False, 'instance': vm_facts}
def get_snapshots_by_name_recursively(self, snapshots, snapname):
snap_obj = []
for snapshot in snapshots:
if snapshot.name == snapname:
snap_obj.append(snapshot)
else:
snap_obj = snap_obj + self.get_snapshots_by_name_recursively(snapshot.childSnapshotList, snapname)
return snap_obj
def reconfigure_vm(self):
self.configspec = vim.vm.ConfigSpec()
self.configspec.deviceChange = []
# create the relocation spec
self.relospec = vim.vm.RelocateSpec()
self.relospec.deviceChange = []
self.configure_guestid(vm_obj=self.current_vm_obj)
self.configure_cpu_and_memory(vm_obj=self.current_vm_obj)
self.configure_hardware_params(vm_obj=self.current_vm_obj)
self.configure_disks(vm_obj=self.current_vm_obj)
self.configure_network(vm_obj=self.current_vm_obj)
self.configure_cdrom(vm_obj=self.current_vm_obj)
self.customize_customvalues(vm_obj=self.current_vm_obj)
self.configure_resource_alloc_info(vm_obj=self.current_vm_obj)
self.configure_vapp_properties(vm_obj=self.current_vm_obj)
if self.params['annotation'] and self.current_vm_obj.config.annotation != self.params['annotation']:
self.configspec.annotation = str(self.params['annotation'])
self.change_detected = True
if self.params['resource_pool']:
self.relospec.pool = self.get_resource_pool()
if self.relospec.pool != self.current_vm_obj.resourcePool:
task = self.current_vm_obj.RelocateVM_Task(spec=self.relospec)
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'relocate'}
# Only send VMware task if we see a modification
if self.change_detected:
task = None
try:
task = self.current_vm_obj.ReconfigVM_Task(spec=self.configspec)
except vim.fault.RestrictedVersion as e:
self.module.fail_json(msg="Failed to reconfigure virtual machine due to"
" product versioning restrictions: %s" % to_native(e.msg))
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'reconfig'}
# Rename VM
if self.params['uuid'] and self.params['name'] and self.params['name'] != self.current_vm_obj.config.name:
task = self.current_vm_obj.Rename_Task(self.params['name'])
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'rename'}
# Mark VM as Template
if self.params['is_template'] and not self.current_vm_obj.config.template:
try:
self.current_vm_obj.MarkAsTemplate()
self.change_applied = True
except vmodl.fault.NotSupported as e:
self.module.fail_json(msg="Failed to mark virtual machine [%s] "
"as template: %s" % (self.params['name'], e.msg))
# Mark Template as VM
elif not self.params['is_template'] and self.current_vm_obj.config.template:
resource_pool = self.get_resource_pool()
kwargs = dict(pool=resource_pool)
if self.params.get('esxi_hostname', None):
host_system_obj = self.select_host()
kwargs.update(host=host_system_obj)
try:
self.current_vm_obj.MarkAsVirtualMachine(**kwargs)
self.change_applied = True
except vim.fault.InvalidState as invalid_state:
self.module.fail_json(msg="Virtual machine is not marked"
" as template : %s" % to_native(invalid_state.msg))
except vim.fault.InvalidDatastore as invalid_ds:
self.module.fail_json(msg="Converting template to virtual machine"
" operation cannot be performed on the"
" target datastores: %s" % to_native(invalid_ds.msg))
except vim.fault.CannotAccessVmComponent as cannot_access:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" as operation unable access virtual machine"
" component: %s" % to_native(cannot_access.msg))
except vmodl.fault.InvalidArgument as invalid_argument:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" due to : %s" % to_native(invalid_argument.msg))
except Exception as generic_exc:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" due to generic error : %s" % to_native(generic_exc))
# Automatically update VMware UUID when converting template to VM.
# This avoids an interactive prompt during VM startup.
uuid_action = [x for x in self.current_vm_obj.config.extraConfig if x.key == "uuid.action"]
if not uuid_action:
uuid_action_opt = vim.option.OptionValue()
uuid_action_opt.key = "uuid.action"
uuid_action_opt.value = "create"
self.configspec.extraConfig.append(uuid_action_opt)
self.change_detected = True
# add customize existing VM after VM re-configure
if 'existing_vm' in self.params['customization'] and self.params['customization']['existing_vm']:
if self.current_vm_obj.config.template:
self.module.fail_json(msg="VM is template, not support guest OS customization.")
if self.current_vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
self.module.fail_json(msg="VM is not in poweroff state, can not do guest OS customization.")
cus_result = self.customize_exist_vm()
if cus_result['failed']:
return cus_result
vm_facts = self.gather_facts(self.current_vm_obj)
return {'changed': self.change_applied, 'failed': False, 'instance': vm_facts}
def customize_exist_vm(self):
task = None
# Find if we need network customizations (find keys in the dictionary that require customizations)
network_changes = False
for nw in self.params['networks']:
for key in nw:
# We don't need customizations for these keys
if key not in ('device_type', 'mac', 'name', 'vlan', 'type', 'start_connected'):
network_changes = True
break
if len(self.params['customization']) > 1 or network_changes or self.params.get('customization_spec'):
self.customize_vm(vm_obj=self.current_vm_obj)
try:
task = self.current_vm_obj.CustomizeVM_Task(self.customspec)
except vim.fault.CustomizationFault as e:
self.module.fail_json(msg="Failed to customization virtual machine due to CustomizationFault: %s" % to_native(e.msg))
except vim.fault.RuntimeFault as e:
self.module.fail_json(msg="failed to customization virtual machine due to RuntimeFault: %s" % to_native(e.msg))
except Exception as e:
self.module.fail_json(msg="failed to customization virtual machine due to fault: %s" % to_native(e.msg))
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'customize_exist'}
if self.params['wait_for_customization']:
set_vm_power_state(self.content, self.current_vm_obj, 'poweredon', force=False)
is_customization_ok = self.wait_for_customization(self.current_vm_obj)
if not is_customization_ok:
return {'changed': self.change_applied, 'failed': True, 'op': 'wait_for_customize_exist'}
return {'changed': self.change_applied, 'failed': False}
def wait_for_task(self, task, poll_interval=1):
"""
Wait for a VMware task to complete. Terminal states are 'error' and 'success'.
Inputs:
- task: the task to wait for
- poll_interval: polling interval to check the task, in seconds
Modifies:
- self.change_applied
"""
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.Task.html
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.TaskInfo.html
# https://github.com/virtdevninja/pyvmomi-community-samples/blob/master/samples/tools/tasks.py
while task.info.state not in ['error', 'success']:
time.sleep(poll_interval)
self.change_applied = self.change_applied or task.info.state == 'success'
def get_vm_events(self, vm, eventTypeIdList):
byEntity = vim.event.EventFilterSpec.ByEntity(entity=vm, recursion="self")
filterSpec = vim.event.EventFilterSpec(entity=byEntity, eventTypeId=eventTypeIdList)
eventManager = self.content.eventManager
return eventManager.QueryEvent(filterSpec)
def wait_for_customization(self, vm, poll=10000, sleep=10):
thispoll = 0
while thispoll <= poll:
eventStarted = self.get_vm_events(vm, ['CustomizationStartedEvent'])
if len(eventStarted):
thispoll = 0
while thispoll <= poll:
eventsFinishedResult = self.get_vm_events(vm, ['CustomizationSucceeded', 'CustomizationFailed'])
if len(eventsFinishedResult):
if not isinstance(eventsFinishedResult[0], vim.event.CustomizationSucceeded):
self.module.fail_json(msg='Customization failed with error {0}:\n{1}'.format(
eventsFinishedResult[0]._wsdlName, eventsFinishedResult[0].fullFormattedMessage))
return False
break
else:
time.sleep(sleep)
thispoll += 1
return True
else:
time.sleep(sleep)
thispoll += 1
self.module.fail_json(msg='Waiting for customizations timed out.')
return False
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
state=dict(type='str', default='present',
choices=['absent', 'poweredoff', 'poweredon', 'present', 'rebootguest', 'restarted', 'shutdownguest', 'suspended']),
template=dict(type='str', aliases=['template_src']),
is_template=dict(type='bool', default=False),
annotation=dict(type='str', aliases=['notes']),
customvalues=dict(type='list', default=[]),
name=dict(type='str'),
name_match=dict(type='str', choices=['first', 'last'], default='first'),
uuid=dict(type='str'),
use_instance_uuid=dict(type='bool', default=False),
folder=dict(type='str'),
guest_id=dict(type='str'),
disk=dict(type='list', default=[]),
cdrom=dict(type=list_or_dict, default=[]),
hardware=dict(type='dict', default={}),
force=dict(type='bool', default=False),
datacenter=dict(type='str', default='ha-datacenter'),
esxi_hostname=dict(type='str'),
cluster=dict(type='str'),
wait_for_ip_address=dict(type='bool', default=False),
wait_for_ip_address_timeout=dict(type='int', default=300),
state_change_timeout=dict(type='int', default=0),
snapshot_src=dict(type='str'),
linked_clone=dict(type='bool', default=False),
networks=dict(type='list', default=[]),
resource_pool=dict(type='str'),
customization=dict(type='dict', default={}, no_log=True),
customization_spec=dict(type='str', default=None),
wait_for_customization=dict(type='bool', default=False),
vapp_properties=dict(type='list', default=[]),
datastore=dict(type='str'),
convert=dict(type='str', choices=['thin', 'thick', 'eagerzeroedthick']),
delete_from_inventory=dict(type='bool', default=False),
)
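# Editorial sketch (assumption): vmware_argument_spec() also contributes the
# connection options (hostname, username, password, validate_certs), so a
# minimal task exercising this spec could look like the following YAML,
# quoted here as a comment with hypothetical values:
#   - vmware_guest:
#       hostname: vcenter.example.com
#       username: [email protected]
#       password: secret
#       name: test_vm1
#       state: poweredon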
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True,
mutually_exclusive=[
['cluster', 'esxi_hostname'],
],
required_one_of=[
['name', 'uuid'],
],
)
result = {'failed': False, 'changed': False}
pyv = PyVmomiHelper(module)
# Check if the VM exists before continuing
vm = pyv.get_vm()
# VM already exists
if vm:
if module.params['state'] == 'absent':
# destroy it
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
current_powerstate=vm.summary.runtime.powerState.lower(),
desired_operation='remove_vm',
)
module.exit_json(**result)
if module.params['force']:
# has to be poweredoff first
set_vm_power_state(pyv.content, vm, 'poweredoff', module.params['force'])
result = pyv.remove_vm(vm, module.params['delete_from_inventory'])
elif module.params['state'] == 'present':
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
desired_operation='reconfigure_vm',
)
module.exit_json(**result)
result = pyv.reconfigure_vm()
elif module.params['state'] in ['poweredon', 'poweredoff', 'restarted', 'suspended', 'shutdownguest', 'rebootguest']:
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
current_powerstate=vm.summary.runtime.powerState.lower(),
desired_operation='set_vm_power_state',
)
module.exit_json(**result)
# set powerstate
tmp_result = set_vm_power_state(pyv.content, vm, module.params['state'], module.params['force'], module.params['state_change_timeout'])
if tmp_result['changed']:
result["changed"] = True
if module.params['state'] in ['poweredon', 'restarted', 'rebootguest'] and module.params['wait_for_ip_address']:
wait_result = wait_for_vm_ip(pyv.content, vm, module.params['wait_for_ip_address_timeout'])
if not wait_result:
module.fail_json(msg='Waiting for IP address timed out')
tmp_result['instance'] = wait_result
if not tmp_result["failed"]:
result["failed"] = False
result['instance'] = tmp_result['instance']
if tmp_result["failed"]:
result["failed"] = True
result["msg"] = tmp_result["msg"]
else:
# This should not happen
raise AssertionError()
# VM doesn't exist
else:
if module.params['state'] in ['poweredon', 'poweredoff', 'present', 'restarted', 'suspended']:
if module.check_mode:
result.update(
changed=True,
desired_operation='deploy_vm',
)
module.exit_json(**result)
result = pyv.deploy_vm()
if result['failed']:
module.fail_json(msg='Failed to create a virtual machine: %s' % result['msg'])
if result['failed']:
module.fail_json(**result)
else:
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,374 |
VMware: vmware_guest 'version' parameter handling error
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
vmware_guest mishandles the type of the 'version' parameter.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = None
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /root/ansible/lib/ansible
executable location = /root/ansible/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Using vmware_guest to deploy a new VM with the 'version' parameter set to the integer 14
returns the error 'int' object has no attribute 'lower'.
This is caused by line 1285:
"if temp_version.lower() == 'latest':"
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/64374
|
https://github.com/ansible/ansible/pull/64376
|
b475e0408c820e8b28cfc9aaf508e15761af0617
|
9a54070fa23d2ac87566749b32a17150726c53d8
| 2019-11-04T06:27:15Z |
python
| 2019-11-07T07:13:30Z |
test/integration/targets/vmware_guest/tasks/reconfig_vm_to_latest_version.yml
|
# Test code for the vmware_guest module.
# Copyright: (c) 2019, Pavan Bidkar <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Skipping idempotency test until issue is fixed in reconfigure_vm() become_method
- name: Upgrade VM to latest version
vmware_guest:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
guest_id: centos7_64Guest
datacenter: "{{ dc1 }}"
folder: "{{ f0 }}"
datastore: '{{ ds2 }}'
hardware:
num_cpus: 4
memory_mb: 1028
version: latest
state: present
register: upgrade_vm
- name: assert that changes were made
assert:
that:
- upgrade_vm is changed
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,030 |
The XML got with zabbix_template using python3 is a byte string.
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The XML obtained by executing the following playbook was a byte string.
##### playbook
```yaml
---
- name: Get template from zabbix server
hosts: localhost
gather_facts: no
tasks:
- zabbix_template:
server_url: "{{ zabbix_server }}"
login_user: "{{ zabbix_user }}"
login_password: "{{ zabbix_password }}"
template_name: Template App Zabbix Proxy
dump_format: xml
state: dump
register: template_xml
- debug: var=template_xml
```
##### run playbook
```
$ ansible-playbook main.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Get template from zabbix server] *****************************************************************************************************************************************
TASK [zabbix_template] *********************************************************************************************************************************************************
ok: [localhost]
TASK [debug] *******************************************************************************************************************************************************************
ok: [localhost] => {
"template_xml": {
"changed": false,
"failed": false,
"template_xml": "b'<zabbix_export><version>3.0</version><date>....</graphs></zabbix_export>'"
```
In RETURN, the return value is a string.
https://github.com/ansible/ansible/blob/358574d57f2b411e820cbf4d00a8249ac8291cb9/lib/ansible/modules/monitoring/zabbix/zabbix_template.py#L259
So, I think the return value should be a string.
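For illustration (editorial sketch, not part of the report), the Python 3 behavior behind this can be reproduced with the standard library alone:

```python
import xml.etree.ElementTree as ET

dump = ET.tostring(ET.fromstring('<zabbix_export/>'), encoding='utf-8')
print(type(dump))            # <class 'bytes'> on Python 3
print(str(dump))             # "b'...'" - the quoted byte literal shown above
print(dump.decode('utf-8'))  # a real str, matching what RETURN documents
```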
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
zabbix_template
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ansible --version
ansible 2.8.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/dev/venv/lib64/python3.6/site-packages/ansible
executable location = /root/dev/venv/bin/ansible
python version = 3.6.8 (default, Aug 7 2019, 17:28:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
```
$ pip3 show zabbix-api
Name: zabbix-api
Version: 0.5.4
Summary: Zabbix API
Home-page: https://github.com/gescheit/scripts
Author: Aleksandr Balezin
Author-email: [email protected]
License: GNU LGPL 2.1
Location: /root/dev/venv/lib/python3.6/site-packages
```
```
$ python --version
Python 3.6.8
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
default
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
$ cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
```
|
https://github.com/ansible/ansible/issues/64030
|
https://github.com/ansible/ansible/pull/64032
|
98b6b98287436535f0dfc99ffe3333e5a196b9a3
|
4078dcbb773a4ea8a1f83d832aa25352480ab2dc
| 2019-10-28T15:38:50Z |
python
| 2019-11-07T13:21:14Z |
changelogs/fragments/64032-zabbix_template_fix_return_XML_as_a_string_even_python3.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,030 |
The XML got with zabbix_template using python3 is a byte string.
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The XML obtained by executing the following playbook was a byte string.
##### playbook
```yaml
---
- name: Get template from zabbix server
hosts: localhost
gather_facts: no
tasks:
- zabbix_template:
server_url: "{{ zabbix_server }}"
login_user: "{{ zabbix_user }}"
login_password: "{{ zabbix_password }}"
template_name: Template App Zabbix Proxy
dump_format: xml
state: dump
register: template_xml
- debug: var=template_xml
```
##### run playbook
```
$ ansible-playbook main.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Get template from zabbix server] *****************************************************************************************************************************************
TASK [zabbix_template] *********************************************************************************************************************************************************
ok: [localhost]
TASK [debug] *******************************************************************************************************************************************************************
ok: [localhost] => {
"template_xml": {
"changed": false,
"failed": false,
"template_xml": "b'<zabbix_export><version>3.0</version><date>....</graphs></zabbix_export>'"
```
In RETURN, the return value is a string.
https://github.com/ansible/ansible/blob/358574d57f2b411e820cbf4d00a8249ac8291cb9/lib/ansible/modules/monitoring/zabbix/zabbix_template.py#L259
So, I think the return value should be a string.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
zabbix_template
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
$ ansible --version
ansible 2.8.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/dev/venv/lib64/python3.6/site-packages/ansible
executable location = /root/dev/venv/bin/ansible
python version = 3.6.8 (default, Aug 7 2019, 17:28:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
```
$ pip3 show zabbix-api
Name: zabbix-api
Version: 0.5.4
Summary: Zabbix API
Home-page: https://github.com/gescheit/scripts
Author: Aleksandr Balezin
Author-email: [email protected]
License: GNU LGPL 2.1
Location: /root/dev/venv/lib/python3.6/site-packages
```
```
$ python --version
Python 3.6.8
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
default
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
$ cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
```
|
https://github.com/ansible/ansible/issues/64030
|
https://github.com/ansible/ansible/pull/64032
|
98b6b98287436535f0dfc99ffe3333e5a196b9a3
|
4078dcbb773a4ea8a1f83d832aa25352480ab2dc
| 2019-10-28T15:38:50Z |
python
| 2019-11-07T13:21:14Z |
lib/ansible/modules/monitoring/zabbix/zabbix_template.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2017, sookido
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: zabbix_template
short_description: Create/update/delete/dump Zabbix template
description:
- This module allows you to create, modify, delete and dump Zabbix templates.
- Multiple templates can be created or modified at once if passing JSON or XML to module.
version_added: "2.5"
author:
- "sookido (@sookido)"
- "Logan Vig (@logan2211)"
- "Dusan Matejka (@D3DeFi)"
requirements:
- "python >= 2.6"
- "zabbix-api >= 0.5.3"
options:
template_name:
description:
- Name of Zabbix template.
- Required when I(template_json) or I(template_xml) are not used.
- Mutually exclusive with I(template_json) and I(template_xml).
required: false
template_json:
description:
- JSON dump of templates to import.
- Multiple templates can be imported this way.
- Mutually exclusive with I(template_name) and I(template_xml).
required: false
type: json
template_xml:
description:
- XML dump of templates to import.
- Multiple templates can be imported this way.
- You are advised to pass XML structure matching the structure used by your version of Zabbix server.
- Custom XML structure can be imported as long as it is valid, but may not yield consistent idempotent
results on subsequent runs.
- Mutually exclusive with I(template_name) and I(template_json).
required: false
version_added: '2.9'
template_groups:
description:
- List of host groups to add template to when template is created.
- Replaces the current host groups the template belongs to if the template is already present.
- Required when creating a new template with C(state=present) and I(template_name) is used.
Not required when updating an existing template.
required: false
type: list
link_templates:
description:
- List of template names to be linked to the template.
- Templates that are not specified and are linked to the existing template will be only unlinked and not
cleared from the template.
required: false
type: list
clear_templates:
description:
- List of template names to be unlinked and cleared from the template.
- This option is ignored if template is being created for the first time.
required: false
type: list
macros:
description:
- List of user macros to create for the template.
- Macros that are not specified and are present on the existing template will be replaced.
- See examples on how to pass macros.
required: false
type: list
suboptions:
name:
description:
- Name of the macro.
- Must be specified in {$NAME} format.
value:
description:
- Value of the macro.
dump_format:
description:
- Format to use when dumping template with C(state=dump).
required: false
choices: [json, xml]
default: "json"
version_added: '2.9'
state:
description:
- Required state of the template.
- On C(state=present) template will be created/imported or updated depending if it is already present.
- On C(state=dump) template content will get dumped into required format specified in I(dump_format).
- On C(state=absent) template will be deleted.
required: false
choices: [present, absent, dump]
default: "present"
extends_documentation_fragment:
- zabbix
'''
EXAMPLES = '''
---
- name: Create a new Zabbix template linked to groups, macros and templates
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_name: ExampleHost
template_groups:
- Role
- Role2
link_templates:
- Example template1
- Example template2
macros:
- macro: '{$EXAMPLE_MACRO1}'
value: 30000
- macro: '{$EXAMPLE_MACRO2}'
value: 3
- macro: '{$EXAMPLE_MACRO3}'
value: 'Example'
state: present
- name: Unlink and clear templates from the existing Zabbix template
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_name: ExampleHost
clear_templates:
- Example template3
- Example template4
state: present
- name: Import Zabbix templates from JSON
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_json: "{{ lookup('file', 'zabbix_apache2.json') }}"
state: present
- name: Import Zabbix templates from XML
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_xml: "{{ lookup('file', 'zabbix_apache2.json') }}"
state: present
- name: Import Zabbix template from Ansible dict variable
zabbix_template:
login_user: username
login_password: password
server_url: http://127.0.0.1
template_json:
zabbix_export:
version: '3.2'
templates:
- name: Template for Testing
description: 'Testing template import'
template: Test Template
groups:
- name: Templates
applications:
- name: Test Application
state: present
- name: Configure macros on the existing Zabbix template
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_name: Template
macros:
- macro: '{$TEST_MACRO}'
value: 'Example'
state: present
- name: Delete Zabbix template
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_name: Template
state: absent
- name: Dump Zabbix template as JSON
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_name: Template
state: dump
register: template_dump
- name: Dump Zabbix template as XML
local_action:
module: zabbix_template
server_url: http://127.0.0.1
login_user: username
login_password: password
template_name: Template
dump_format: xml
state: dump
register: template_dump
'''
RETURN = '''
---
template_json:
description: The JSON dump of the template
returned: when state is dump
type: str
sample: {
"zabbix_export":{
"date":"2017-11-29T16:37:24Z",
"templates":[{
"templates":[],
"description":"",
"httptests":[],
"screens":[],
"applications":[],
"discovery_rules":[],
"groups":[{"name":"Templates"}],
"name":"Test Template",
"items":[],
"macros":[],
"template":"test"
}],
"version":"3.2",
"groups":[{
"name":"Templates"
}]
}
}
template_xml:
description: dump of the template in XML representation
returned: when state is dump and dump_format is xml
type: str
sample: |-
<?xml version="1.0" ?>
<zabbix_export>
<version>4.2</version>
<date>2019-07-12T13:37:26Z</date>
<groups>
<group>
<name>Templates</name>
</group>
</groups>
<templates>
<template>
<template>test</template>
<name>Test Template</name>
<description/>
<groups>
<group>
<name>Templates</name>
</group>
</groups>
<applications/>
<items/>
<discovery_rules/>
<httptests/>
<macros/>
<templates/>
<screens/>
<tags/>
</template>
</templates>
</zabbix_export>
'''
import atexit
import json
import traceback
import xml.etree.ElementTree as ET
from distutils.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
try:
from zabbix_api import ZabbixAPI, ZabbixAPIException
HAS_ZABBIX_API = True
except ImportError:
ZBX_IMP_ERR = traceback.format_exc()
HAS_ZABBIX_API = False
class Template(object):
def __init__(self, module, zbx):
self._module = module
self._zapi = zbx
# check if host group exists
def check_host_group_exist(self, group_names):
for group_name in group_names:
result = self._zapi.hostgroup.get({'filter': {'name': group_name}})
if not result:
self._module.fail_json(msg="Hostgroup not found: %s" %
group_name)
return True
# get group ids by group names
def get_group_ids_by_group_names(self, group_names):
group_ids = []
if group_names is None or len(group_names) == 0:
return group_ids
if self.check_host_group_exist(group_names):
group_list = self._zapi.hostgroup.get(
{'output': 'extend',
'filter': {'name': group_names}})
for group in group_list:
group_id = group['groupid']
group_ids.append({'groupid': group_id})
return group_ids
def get_template_ids(self, template_list):
template_ids = []
if template_list is None or len(template_list) == 0:
return template_ids
for template in template_list:
template_list = self._zapi.template.get(
{'output': 'extend',
'filter': {'host': template}})
if len(template_list) < 1:
continue
else:
template_id = template_list[0]['templateid']
template_ids.append(template_id)
return template_ids
def add_template(self, template_name, group_ids, link_template_ids, macros):
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.template.create({'host': template_name, 'groups': group_ids, 'templates': link_template_ids,
'macros': macros})
def check_template_changed(self, template_ids, template_groups, link_templates, clear_templates,
template_macros, template_content, template_type):
"""Compares template parameters to already existing values if any are found.
template_json - JSON structures are compared as deep sorted dictionaries,
template_xml - XML structures are compared as strings, but filtered and formatted first,
If none of the above is used, all the other arguments are compared to their existing counterparts
retrieved from Zabbix API."""
changed = False
# Compare filtered and formatted XMLs strings for any changes. It is expected that provided
# XML has same structure as Zabbix uses (e.g. it was optimally exported via Zabbix GUI or API)
if template_content is not None and template_type == 'xml':
existing_template = self.dump_template(template_ids, template_type='xml')
if self.filter_xml_template(template_content) != self.filter_xml_template(existing_template):
changed = True
return changed
existing_template = self.dump_template(template_ids, template_type='json')
# Compare JSON objects as deep sorted python dictionaries
if template_content is not None and template_type == 'json':
parsed_template_json = self.load_json_template(template_content)
if self.diff_template(parsed_template_json, existing_template):
changed = True
return changed
# If neither template_json or template_xml were used, user provided all parameters via module options
if template_groups is not None:
existing_groups = [g['name'] for g in existing_template['zabbix_export']['groups']]
if set(template_groups) != set(existing_groups):
changed = True
# Check if any new templates would be linked or any existing would be unlinked
exist_child_templates = [t['name'] for t in existing_template['zabbix_export']['templates'][0]['templates']]
if link_templates is not None:
if set(link_templates) != set(exist_child_templates):
changed = True
# Mark that there will be changes when at least one existing template is unlinked
if clear_templates is not None:
for t in clear_templates:
if t in exist_child_templates:
changed = True
break
if template_macros is not None:
existing_macros = existing_template['zabbix_export']['templates'][0]['macros']
if template_macros != existing_macros:
changed = True
return changed
def update_template(self, template_ids, group_ids, link_template_ids, clear_template_ids, template_macros):
template_changes = {}
if group_ids is not None:
template_changes.update({'groups': group_ids})
if link_template_ids is not None:
template_changes.update({'templates': link_template_ids})
if clear_template_ids is not None:
template_changes.update({'templates_clear': clear_template_ids})
if template_macros is not None:
template_changes.update({'macros': template_macros})
if template_changes:
# If we got here we know that only one template was provided via template_name
template_changes.update({'templateid': template_ids[0]})
self._zapi.template.update(template_changes)
def delete_template(self, templateids):
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.template.delete(templateids)
def ordered_json(self, obj):
# Deep sort json dicts for comparison
if isinstance(obj, dict):
return sorted((k, self.ordered_json(v)) for k, v in obj.items())
if isinstance(obj, list):
return sorted(self.ordered_json(x) for x in obj)
else:
return obj
def dump_template(self, template_ids, template_type='json'):
if self._module.check_mode:
self._module.exit_json(changed=True)
try:
dump = self._zapi.configuration.export({'format': template_type, 'options': {'templates': template_ids}})
if template_type == 'xml':
return str(ET.tostring(ET.fromstring(dump.encode('utf-8')), encoding='utf-8'))
else:
return self.load_json_template(dump)
except ZabbixAPIException as e:
self._module.fail_json(msg='Unable to export template: %s' % e)
def diff_template(self, template_json_a, template_json_b):
# Compare 2 zabbix templates and return True if they differ.
template_json_a = self.filter_template(template_json_a)
template_json_b = self.filter_template(template_json_b)
if self.ordered_json(template_json_a) == self.ordered_json(template_json_b):
return False
return True
def filter_template(self, template_json):
# Filter the template json to contain only the keys we will update
keep_keys = set(['graphs', 'templates', 'triggers', 'value_maps'])
unwanted_keys = set(template_json['zabbix_export']) - keep_keys
for unwanted_key in unwanted_keys:
del template_json['zabbix_export'][unwanted_key]
# Versions older than 2.4 do not support description field within template
desc_not_supported = False
if LooseVersion(self._zapi.api_version()).version[:2] < LooseVersion('2.4').version:
desc_not_supported = True
# Filter empty attributes from template object to allow accurate comparison
for template in template_json['zabbix_export']['templates']:
for key in list(template.keys()):
if not template[key] or (key == 'description' and desc_not_supported):
template.pop(key)
return template_json
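# Illustrative effect (hypothetical input): only the keys listed in keep_keys survive,
# so {'zabbix_export': {'version': '4.0', 'date': '2019-10-27T14:49:57Z', 'templates': [...]}}
# is reduced to {'zabbix_export': {'templates': [...]}} before comparison.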
def filter_xml_template(self, template_xml):
"""Filters out keys from XML template that may wary between exports (e.g date or version) and
keys that are not imported via this module.
It is advised that provided XML template exactly matches XML structure used by Zabbix"""
# Strip last new line and convert string to ElementTree
parsed_xml_root = self.load_xml_template(template_xml.strip())
keep_keys = ['graphs', 'templates', 'triggers', 'value_maps']
# Remove unwanted XML nodes
for node in list(parsed_xml_root):
if node.tag not in keep_keys:
parsed_xml_root.remove(node)
# Filter empty attributes from template objects to allow accurate comparison
for template in list(parsed_xml_root.find('templates')):
for element in list(template):
if element.text is None and len(list(element)) == 0:
template.remove(element)
# Filter new lines and indentation
# ET.tostring() returns bytes on Python 3; decode before splitting into lines
xml_root_text = list(line.strip() for line in ET.tostring(parsed_xml_root).decode('utf-8').split('\n'))
return ''.join(xml_root_text)
def load_json_template(self, template_json):
try:
return json.loads(template_json)
except ValueError as e:
self._module.fail_json(msg='Invalid JSON provided', details=to_native(e), exception=traceback.format_exc())
def load_xml_template(self, template_xml):
try:
return ET.fromstring(template_xml)
except ET.ParseError as e:
self._module.fail_json(msg='Invalid XML provided', details=to_native(e), exception=traceback.format_exc())
def import_template(self, template_content, template_type='json'):
# rules schema latest version
update_rules = {
'applications': {
'createMissing': True,
'deleteMissing': True
},
'discoveryRules': {
'createMissing': True,
'updateExisting': True,
'deleteMissing': True
},
'graphs': {
'createMissing': True,
'updateExisting': True,
'deleteMissing': True
},
'httptests': {
'createMissing': True,
'updateExisting': True,
'deleteMissing': True
},
'items': {
'createMissing': True,
'updateExisting': True,
'deleteMissing': True
},
'templates': {
'createMissing': True,
'updateExisting': True
},
'templateLinkage': {
'createMissing': True
},
'templateScreens': {
'createMissing': True,
'updateExisting': True,
'deleteMissing': True
},
'triggers': {
'createMissing': True,
'updateExisting': True,
'deleteMissing': True
},
'valueMaps': {
'createMissing': True,
'updateExisting': True
}
}
try:
# Check the API version for backward compatibility
api_version = self._zapi.api_version()
# updateExisting for applications was removed from the Zabbix API after 3.2
if LooseVersion(api_version).version[:2] <= LooseVersion('3.2').version:
update_rules['applications']['updateExisting'] = True
import_data = {'format': template_type, 'source': template_content, 'rules': update_rules}
self._zapi.configuration.import_(import_data)
except ZabbixAPIException as e:
self._module.fail_json(msg='Unable to import template', details=to_native(e),
exception=traceback.format_exc())
def main():
module = AnsibleModule(
argument_spec=dict(
server_url=dict(type='str', required=True, aliases=['url']),
login_user=dict(type='str', required=True),
login_password=dict(type='str', required=True, no_log=True),
http_login_user=dict(type='str', required=False, default=None),
http_login_password=dict(type='str', required=False, default=None, no_log=True),
validate_certs=dict(type='bool', required=False, default=True),
template_name=dict(type='str', required=False),
template_json=dict(type='json', required=False),
template_xml=dict(type='str', required=False),
template_groups=dict(type='list', required=False),
link_templates=dict(type='list', required=False),
clear_templates=dict(type='list', required=False),
macros=dict(type='list', required=False),
dump_format=dict(type='str', required=False, default='json', choices=['json', 'xml']),
state=dict(default="present", choices=['present', 'absent', 'dump']),
timeout=dict(type='int', default=10)
),
required_one_of=[
['template_name', 'template_json', 'template_xml']
],
mutually_exclusive=[
['template_name', 'template_json', 'template_xml']
],
required_if=[
['state', 'absent', ['template_name']],
['state', 'dump', ['template_name']]
],
supports_check_mode=True
)
if not HAS_ZABBIX_API:
module.fail_json(msg=missing_required_lib('zabbix-api', url='https://pypi.org/project/zabbix-api/'), exception=ZBX_IMP_ERR)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
http_login_user = module.params['http_login_user']
http_login_password = module.params['http_login_password']
validate_certs = module.params['validate_certs']
template_name = module.params['template_name']
template_json = module.params['template_json']
template_xml = module.params['template_xml']
template_groups = module.params['template_groups']
link_templates = module.params['link_templates']
clear_templates = module.params['clear_templates']
template_macros = module.params['macros']
dump_format = module.params['dump_format']
state = module.params['state']
timeout = module.params['timeout']
zbx = None
try:
zbx = ZabbixAPI(server_url, timeout=timeout, user=http_login_user, passwd=http_login_password,
validate_certs=validate_certs)
zbx.login(login_user, login_password)
atexit.register(zbx.logout)
except ZabbixAPIException as e:
module.fail_json(msg="Failed to connect to Zabbix server: %s" % e)
template = Template(module, zbx)
# Identify template names for IDs retrieval
# Template names are expected to reside in ['zabbix_export']['templates'][*]['template'] for both data types
template_content, template_type = None, None
if template_json is not None:
template_type = 'json'
template_content = template_json
json_parsed = template.load_json_template(template_content)
template_names = list(t['template'] for t in json_parsed['zabbix_export']['templates'])
elif template_xml is not None:
template_type = 'xml'
template_content = template_xml
xml_parsed = template.load_xml_template(template_content)
template_names = list(t.find('template').text for t in list(xml_parsed.find('templates')))
else:
template_names = [template_name]
template_ids = template.get_template_ids(template_names)
if state == "absent":
if not template_ids:
module.exit_json(changed=False, msg="Template not found. No changed: %s" % template_name)
template.delete_template(template_ids)
module.exit_json(changed=True, result="Successfully deleted template %s" % template_name)
elif state == "dump":
if not template_ids:
module.fail_json(msg='Template not found: %s' % template_name)
if dump_format == 'json':
module.exit_json(changed=False, template_json=template.dump_template(template_ids, template_type='json'))
elif dump_format == 'xml':
module.exit_json(changed=False, template_xml=template.dump_template(template_ids, template_type='xml'))
elif state == "present":
# Load all subelements for template that were provided by user
group_ids = None
if template_groups is not None:
group_ids = template.get_group_ids_by_group_names(template_groups)
link_template_ids = None
if link_templates is not None:
link_template_ids = template.get_template_ids(link_templates)
clear_template_ids = None
if clear_templates is not None:
clear_template_ids = template.get_template_ids(clear_templates)
if template_macros is not None:
# Zabbix configuration.export does not differentiate python types (numbers are returned as strings)
for macroitem in template_macros:
for key in macroitem:
macroitem[key] = str(macroitem[key])
if not template_ids:
# Assume new templates are being added when no IDs were found
if template_content is not None:
template.import_template(template_content, template_type)
module.exit_json(changed=True, result="Template import successful")
else:
if group_ids is None:
module.fail_json(msg='template_groups are required when creating a new Zabbix template')
template.add_template(template_name, group_ids, link_template_ids, template_macros)
module.exit_json(changed=True, result="Successfully added template: %s" % template_name)
else:
changed = template.check_template_changed(template_ids, template_groups, link_templates, clear_templates,
template_macros, template_content, template_type)
if module.check_mode:
module.exit_json(changed=changed)
if changed:
if template_type is not None:
template.import_template(template_content, template_type)
else:
template.update_template(template_ids, group_ids, link_template_ids, clear_template_ids,
template_macros)
module.exit_json(changed=changed, result="Template successfully updated")
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,501 |
acme_certificate fails on python 2.7
|
##### SUMMARY
acme_certificate fails on python 2.7
```
File "/tmp/ansible_acme_certificate_payload_E_vyop/ansible_acme_certificate_payload.zip/ansible/modules/crypto/acme/acme_certificate.py", line 762, in f
AttributeError: 'list' object has no attribute 'clear'
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
acme_certificate
##### ANSIBLE VERSION
```
ansible 2.9.0
```
##### OS / ENVIRONMENT
Distributor ID: Ubuntu
Description: Ubuntu 18.04.3 LTS
Release: 18.04
Codename: bionic
Python 2.7.15+
##### STEPS TO REPRODUCE
Issue a certificate with acme_certificate.
```yaml
- name: "Request certificate for {{ CERT.CN }}"
acme_certificate:
account_key: /etc/ssl/private/account.key
csr: "/etc/ssl/csr/{{ CERT.CN }}.csr"
dest: "/etc/ssl/certs/{{ CERT.CN }}.crt"
acme_directory: "https://acme-v01.api.letsencrypt.org/directory"
agreement: "https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf"
data: "{{ letsencrypt_challenge }}"
challenge: "dns-01"
force: "{{ csr.changed }}"
remaining_days: "10"
```
##### EXPECTED RESULTS
Successfully issued certificate.
##### ACTUAL RESULTS
```paste below
The full traceback is:
Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-tmp-1573040538.74-14433469763146/AnsiballZ_acme_certificate.py", line 102, in <module>
_ansiballz_main()
File "/root/.ansible/tmp/ansible-tmp-1573040538.74-14433469763146/AnsiballZ_acme_certificate.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/root/.ansible/tmp/ansible-tmp-1573040538.74-14433469763146/AnsiballZ_acme_certificate.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.crypto.acme.acme_certificate', init_globals=None, run_name='__main__', alter_sys=False)
File "/usr/lib/python2.7/runpy.py", line 192, in run_module
fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/tmp/ansible_acme_certificate_payload_uUsYVr/ansible_acme_certificate_payload.zip/ansible/modules/crypto/acme/acme_certificate.py", line 1090, in <module>
File "/tmp/ansible_acme_certificate_payload_uUsYVr/ansible_acme_certificate_payload.zip/ansible/modules/crypto/acme/acme_certificate.py", line 1061, in main
File "/tmp/ansible_acme_certificate_payload_uUsYVr/ansible_acme_certificate_payload.zip/ansible/modules/crypto/acme/acme_certificate.py", line 921, in get_certificate
File "/tmp/ansible_acme_certificate_payload_uUsYVr/ansible_acme_certificate_payload.zip/ansible/modules/crypto/acme/acme_certificate.py", line 766, in _new_cert_v1
File "/tmp/ansible_acme_certificate_payload_uUsYVr/ansible_acme_certificate_payload.zip/ansible/module_utils/acme.py", line 943, in process_links
File "/tmp/ansible_acme_certificate_payload_uUsYVr/ansible_acme_certificate_payload.zip/ansible/modules/crypto/acme/acme_certificate.py", line 762, in f
AttributeError: 'list' object has no attribute 'clear'
```
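For reference, a minimal sketch of the underlying incompatibility (`list.clear()` was only added in Python 3.3; `del lst[:]` is the portable way to empty a list in place):
```python
chain = ['intermediate.pem']

# chain.clear() # AttributeError on Python 2.7: 'list' object has no attribute 'clear'

# Equivalent that works on both Python 2 and Python 3:
del chain[:]
assert chain == []
```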
|
https://github.com/ansible/ansible/issues/64501
|
https://github.com/ansible/ansible/pull/64504
|
75646037dc3b927a33912fd968a1864920115c6e
|
27d3dd58a4572b2d5b3a6d97dbf4d262560bf0f4
| 2019-11-06T12:03:30Z |
python
| 2019-11-07T21:30:03Z |
lib/ansible/modules/crypto/acme/acme_certificate.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2016 Michael Gruener <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: acme_certificate
author: "Michael Gruener (@mgruener)"
version_added: "2.2"
short_description: Create SSL/TLS certificates with the ACME protocol
description:
- "Create and renew SSL/TLS certificates with a CA supporting the
L(ACME protocol,https://tools.ietf.org/html/rfc8555),
such as L(Let's Encrypt,https://letsencrypt.org/) or
L(Buypass,https://www.buypass.com/). The current implementation
supports the C(http-01), C(dns-01) and C(tls-alpn-01) challenges."
- "To use this module, it has to be executed twice. Either as two
different tasks in the same run or during two runs. Note that the output
of the first run needs to be recorded and passed to the second run as the
module argument C(data)."
- "Between these two tasks you have to fulfill the required steps for the
chosen challenge by whatever means necessary. For C(http-01) that means
creating the necessary challenge file on the destination webserver. For
C(dns-01) the necessary dns record has to be created. For C(tls-alpn-01)
the necessary certificate has to be created and served.
It is I(not) the responsibility of this module to perform these steps."
- "For details on how to fulfill these challenges, you might have to read through
L(the main ACME specification,https://tools.ietf.org/html/rfc8555#section-8)
and the L(TLS-ALPN-01 specification,https://tools.ietf.org/html/draft-ietf-acme-tls-alpn-05#section-3).
Also, consider the examples provided for this module."
- "The module includes experimental support for IP identifiers according to
the L(current ACME IP draft,https://tools.ietf.org/html/draft-ietf-acme-ip-05)."
notes:
- "At least one of C(dest) and C(fullchain_dest) must be specified."
- "This module includes basic account management functionality.
If you want to have more control over your ACME account, use the M(acme_account)
module and disable account management for this module using the C(modify_account)
option."
- "This module was called C(letsencrypt) before Ansible 2.6. The usage
did not change."
seealso:
- name: The Let's Encrypt documentation
description: Documentation for the Let's Encrypt Certification Authority.
Provides useful information for example on rate limits.
link: https://letsencrypt.org/docs/
- name: Buypass Go SSL
description: Documentation for the Buypass Certification Authority.
Provides useful information for example on rate limits.
link: https://www.buypass.com/ssl/products/acme
- name: Automatic Certificate Management Environment (ACME)
description: The specification of the ACME protocol (RFC 8555).
link: https://tools.ietf.org/html/rfc8555
- name: ACME TLS ALPN Challenge Extension
description: The current draft specification of the C(tls-alpn-01) challenge.
link: https://tools.ietf.org/html/draft-ietf-acme-tls-alpn-05
- module: acme_challenge_cert_helper
description: Helps preparing C(tls-alpn-01) challenges.
- module: openssl_privatekey
description: Can be used to create private keys (both for certificates and accounts).
- module: openssl_csr
description: Can be used to create a Certificate Signing Request (CSR).
- module: certificate_complete_chain
description: Allows to find the root certificate for the returned fullchain.
- module: acme_certificate_revoke
description: Allows to revoke certificates.
- module: acme_account
description: Allows to create, modify or delete an ACME account.
- module: acme_inspect
description: Allows to debug problems.
extends_documentation_fragment:
- acme
options:
account_email:
description:
- "The email address associated with this account."
- "It will be used for certificate expiration warnings."
- "Note that when C(modify_account) is not set to C(no) and you also
used the M(acme_account) module to specify more than one contact
for your account, this module will update your account and restrict
it to the (at most one) contact email address specified here."
type: str
agreement:
description:
- "URI to a terms of service document you agree to when using the
ACME v1 service at C(acme_directory)."
- Default is the latest agreement gathered from the C(acme_directory) URL.
- This option will only be used when C(acme_version) is 1.
type: str
terms_agreed:
description:
- "Boolean indicating whether you agree to the terms of service document."
- "ACME servers can require this to be true."
- This option will only be used when C(acme_version) is not 1.
type: bool
default: no
version_added: "2.5"
modify_account:
description:
- "Boolean indicating whether the module should create the account if
necessary, and update its contact data."
- "Set to C(no) if you want to use the M(acme_account) module to manage
your account instead, and to avoid accidental creation of a new account
using an old key if you changed the account key with M(acme_account)."
- "If set to C(no), C(terms_agreed) and C(account_email) are ignored."
type: bool
default: yes
version_added: "2.6"
challenge:
description: The challenge to be performed.
type: str
default: 'http-01'
choices: [ 'http-01', 'dns-01', 'tls-alpn-01' ]
csr:
description:
- "File containing the CSR for the new certificate."
- "Can be created with C(openssl req ...)."
- "The CSR may contain multiple Subject Alternate Names, but each one
will lead to an individual challenge that must be fulfilled for the
CSR to be signed."
- "I(Note): the private key used to create the CSR I(must not) be the
account key. This is a bad idea from a security point of view, and
the CA should not accept the CSR. The ACME server should return an
error in this case."
type: path
required: true
aliases: ['src']
data:
description:
- "The data to validate ongoing challenges. This must be specified for
the second run of the module only."
- "The value that must be used here will be provided by a previous use
of this module. See the examples for more details."
- "Note that for ACME v2, only the C(order_uri) entry of C(data) will
be used. For ACME v1, C(data) must be non-empty to indicate the
second stage is active; all needed data will be taken from the
CSR."
- "I(Note): the C(data) option was marked as C(no_log) up to
Ansible 2.5. From Ansible 2.6 on, it is no longer marked this way
as it causes error messages to be come unusable, and C(data) does
not contain any information which can be used without having
access to the account key or which are not public anyway."
type: dict
dest:
description:
- "The destination file for the certificate."
- "Required if C(fullchain_dest) is not specified."
type: path
aliases: ['cert']
fullchain_dest:
description:
- "The destination file for the full chain (i.e. certificate followed
by chain of intermediate certificates)."
- "Required if C(dest) is not specified."
type: path
version_added: 2.5
aliases: ['fullchain']
chain_dest:
description:
- If specified, the intermediate certificate will be written to this file.
type: path
version_added: 2.5
aliases: ['chain']
remaining_days:
description:
- "The number of days the certificate must have left being valid.
If C(cert_days < remaining_days), then it will be renewed.
If the certificate is not renewed, module return values will not
include C(challenge_data)."
- "To make sure that the certificate is renewed in any case, you can
use the C(force) option."
type: int
default: 10
deactivate_authzs:
description:
- "Deactivate authentication objects (authz) after issuing a certificate,
or when issuing the certificate failed."
- "Authentication objects are bound to an account key and remain valid
for a certain amount of time, and can be used to issue certificates
without having to re-authenticate the domain. This can be a security
concern."
type: bool
default: no
version_added: 2.6
force:
description:
- Enforces the execution of the challenge and validation, even if an
existing certificate is still valid for more than C(remaining_days).
- This is especially helpful when having an updated CSR e.g. with
additional domains for which a new certificate is desired.
type: bool
default: no
version_added: 2.6
retrieve_all_alternates:
description:
- "When set to C(yes), will retrieve all alternate trust chains offered by the ACME CA.
These will not be written to disk, but will be returned together with the main
chain as C(all_chains). See the documentation for the C(all_chains) return
value for details."
type: bool
default: no
version_added: "2.9"
select_chain:
description:
- "Allows to specify criteria by which an (alternate) trust chain can be selected."
- "The list of criteria will be processed one by one until a chain is found
matching a criterium. If such a chain is found, it will be used by the
module instead of the default chain."
- "If a criterium matches multiple chains, the first one matching will be
returned. The order is determined by the ordering of the C(Link) headers
returned by the ACME server and might not be deterministic."
- "Every criterium can consist of multiple different conditions, like I(issuer)
and I(subject). For the criterium to match a chain, all conditions must apply
to the same certificate in the chain."
- "This option can only be used with the C(cryptography) backend."
type: list
version_added: "2.10"
suboptions:
test_certificates:
description:
- "Determines which certificates in the chain will be tested."
- "I(all) tests all certificates in the chain (excluding the leaf, which is
identical in all chains)."
- "I(last) only tests the last certificate in the chain, i.e. the one furthest
away from the leaf. Its issuer is the root certificate of this chain."
type: str
default: all
choices: [last, all]
issuer:
description:
- "Allows to specify parts of the issuer of a certificate in the chain must
have to be selected."
- "If I(issuer) is empty, any certificate will match."
- 'An example value would be C({"commonName": "My Preferred CA Root"}).'
type: dict
subject:
description:
- "Allows to specify parts of the subject of a certificate in the chain must
have to be selected."
- "If I(subject) is empty, any certificate will match."
- 'An example value would be C({"CN": "My Preferred CA Intermediate"})'
type: dict
subject_key_identifier:
description:
- "Checks for the SubjectKeyIdentifier extension. This is an identifier based
on the private key of the intermediate certificate."
- "The identifier must be of the form
C(A8:4A:6A:63:04:7D:DD:BA:E6:D1:39:B7:A6:45:65:EF:F3:A8:EC:A1)."
type: str
authority_key_identifier:
description:
- "Checks for the AuthorityKeyIdentifier extension. This is an identifier based
on the private key of the issuer of the intermediate certificate."
- "The identifier must be of the form
C(C4:A7:B1:A4:7B:2C:71:FA:DB:E1:4B:90:75:FF:C4:15:60:85:89:10)."
type: str
'''
EXAMPLES = r'''
### Example with HTTP challenge ###
- name: Create a challenge for sample.com using an account key from a variable.
acme_certificate:
account_key_content: "{{ account_private_key }}"
csr: /etc/pki/cert/csr/sample.com.csr
dest: /etc/httpd/ssl/sample.com.crt
register: sample_com_challenge
# Alternative first step:
- name: Create a challenge for sample.com using an account key from hashi vault.
acme_certificate:
account_key_content: "{{ lookup('hashi_vault', 'secret=secret/account_private_key:value') }}"
csr: /etc/pki/cert/csr/sample.com.csr
fullchain_dest: /etc/httpd/ssl/sample.com-fullchain.crt
register: sample_com_challenge
# Alternative first step:
- name: Create a challenge for sample.com using an account key file.
acme_certificate:
account_key_src: /etc/pki/cert/private/account.key
csr: /etc/pki/cert/csr/sample.com.csr
dest: /etc/httpd/ssl/sample.com.crt
fullchain_dest: /etc/httpd/ssl/sample.com-fullchain.crt
register: sample_com_challenge
# perform the necessary steps to fulfill the challenge
# for example:
#
# - copy:
# dest: /var/www/html/{{ sample_com_challenge['challenge_data']['sample.com']['http-01']['resource'] }}
# content: "{{ sample_com_challenge['challenge_data']['sample.com']['http-01']['resource_value'] }}"
# when: sample_com_challenge is changed
- name: Let the challenge be validated and retrieve the cert and intermediate certificate
acme_certificate:
account_key_src: /etc/pki/cert/private/account.key
csr: /etc/pki/cert/csr/sample.com.csr
dest: /etc/httpd/ssl/sample.com.crt
fullchain_dest: /etc/httpd/ssl/sample.com-fullchain.crt
chain_dest: /etc/httpd/ssl/sample.com-intermediate.crt
data: "{{ sample_com_challenge }}"
### Example with DNS challenge against production ACME server ###
- name: Create a challenge for sample.com using an account key file.
acme_certificate:
account_key_src: /etc/pki/cert/private/account.key
account_email: [email protected]
src: /etc/pki/cert/csr/sample.com.csr
cert: /etc/httpd/ssl/sample.com.crt
challenge: dns-01
acme_directory: https://acme-v01.api.letsencrypt.org/directory
# Renew if the certificate is at least 30 days old
remaining_days: 60
register: sample_com_challenge
# perform the necessary steps to fulfill the challenge
# for example:
#
# - route53:
# zone: sample.com
# record: "{{ sample_com_challenge.challenge_data['sample.com']['dns-01'].record }}"
# type: TXT
# ttl: 60
# state: present
# wait: yes
# # Note: route53 requires TXT entries to be enclosed in quotes
# value: "{{ sample_com_challenge.challenge_data['sample.com']['dns-01'].resource_value | regex_replace('^(.*)$', '\"\\1\"') }}"
# when: sample_com_challenge is changed
#
# Alternative way:
#
# - route53:
# zone: sample.com
# record: "{{ item.key }}"
# type: TXT
# ttl: 60
# state: present
# wait: yes
# # Note: item.value is a list of TXT entries, and route53
# # requires every entry to be enclosed in quotes
# value: "{{ item.value | map('regex_replace', '^(.*)$', '\"\\1\"' ) | list }}"
# loop: "{{ sample_com_challenge.challenge_data_dns | dictsort }}"
# when: sample_com_challenge is changed
- name: Let the challenge be validated and retrieve the cert and intermediate certificate
acme_certificate:
account_key_src: /etc/pki/cert/private/account.key
account_email: [email protected]
src: /etc/pki/cert/csr/sample.com.csr
cert: /etc/httpd/ssl/sample.com.crt
fullchain: /etc/httpd/ssl/sample.com-fullchain.crt
chain: /etc/httpd/ssl/sample.com-intermediate.crt
challenge: dns-01
acme_directory: https://acme-v01.api.letsencrypt.org/directory
remaining_days: 60
data: "{{ sample_com_challenge }}"
when: sample_com_challenge is changed
# Alternative second step:
- name: Let the challenge be validated and retrieve the cert and intermediate certificate
acme_certificate:
account_key_src: /etc/pki/cert/private/account.key
account_email: [email protected]
src: /etc/pki/cert/csr/sample.com.csr
cert: /etc/httpd/ssl/sample.com.crt
fullchain: /etc/httpd/ssl/sample.com-fullchain.crt
chain: /etc/httpd/ssl/sample.com-intermediate.crt
challenge: tls-alpn-01
remaining_days: 60
data: "{{ sample_com_challenge }}"
# We use Let's Encrypt's ACME v2 endpoint
acme_directory: https://acme-v02.api.letsencrypt.org/directory
acme_version: 2
# The following makes sure that if a chain with /CN=DST Root CA X3 in its issuer is provided
# as an alternative, it will be selected. These are the roots cross-signed by IdenTrust.
# As long as Let's Encrypt provides alternate chains with the cross-signed root(s) when
# switching to their own ISRG Root X1 root, this will use the chain ending with a cross-signed
# root. This chain is more compatible with older TLS clients.
select_chain:
- test_certificates: last
issuer:
CN: DST Root CA X3
O: Digital Signature Trust Co.
when: sample_com_challenge is changed
'''
RETURN = '''
cert_days:
description: The number of days the certificate remains valid.
returned: success
type: int
challenge_data:
description:
- Per identifier / challenge type challenge data.
- Since Ansible 2.8.5, only challenges which are not yet valid are returned.
returned: changed
type: dict
contains:
resource:
description: The challenge resource that must be created for validation.
returned: changed
type: str
sample: .well-known/acme-challenge/evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA
resource_original:
description:
- The original challenge resource including type identifier for C(tls-alpn-01)
challenges.
returned: changed and challenge is C(tls-alpn-01)
type: str
sample: DNS:example.com
version_added: "2.8"
resource_value:
description:
- The value the resource has to produce for the validation.
- For C(http-01) and C(dns-01) challenges, the value can be used as-is.
- "For C(tls-alpn-01) challenges, note that this return value contains a
Base64 encoded version of the correct binary blob which has to be put
into the acmeValidation x509 extension; see
U(https://tools.ietf.org/html/draft-ietf-acme-tls-alpn-05#section-3)
for details. To do this, you might need the C(b64decode) Jinja filter
to extract the binary blob from this return value."
returned: changed
type: str
sample: IlirfxKKXA...17Dt3juxGJ-PCt92wr-oA
record:
description: The full DNS record's name for the challenge.
returned: changed and challenge is C(dns-01)
type: str
sample: _acme-challenge.example.com
version_added: "2.5"
challenge_data_dns:
description:
- List of TXT values per DNS record, in case challenge is C(dns-01).
- Since Ansible 2.8.5, only challenges which are not yet valid are returned.
returned: changed
type: dict
version_added: "2.5"
authorizations:
description:
- ACME authorization data.
- Maps an identifier to ACME authorization objects. See U(https://tools.ietf.org/html/rfc8555#section-7.1.4).
returned: changed
type: dict
sample: '{"example.com":{...}}'
order_uri:
description: ACME order URI.
returned: changed
type: str
version_added: "2.5"
finalization_uri:
description: ACME finalization URI.
returned: changed
type: str
version_added: "2.5"
account_uri:
description: ACME account URI.
returned: changed
type: str
version_added: "2.5"
all_chains:
description:
- When I(retrieve_all_alternates) is set to C(yes), the module will query the ACME server
for alternate chains. This return value will contain a list of all chains returned,
the first entry being the main chain returned by the server.
- See L(Section 7.4.2 of RFC8555,https://tools.ietf.org/html/rfc8555#section-7.4.2) for details.
returned: when certificate was retrieved and I(retrieve_all_alternates) is set to C(yes)
type: list
elements: dict
contains:
cert:
description:
- The leaf certificate itself, in PEM format.
type: str
returned: always
chain:
description:
- The certificate chain, excluding the root, as concatenated PEM certificates.
type: str
returned: always
full_chain:
description:
- The certificate chain, excluding the root, but including the leaf certificate,
as concatenated PEM certificates.
type: str
returned: always
'''
from ansible.module_utils.acme import (
ModuleFailException,
write_file,
nopad_b64,
pem_to_der,
ACMEAccount,
HAS_CURRENT_CRYPTOGRAPHY,
cryptography_get_csr_identifiers,
openssl_get_csr_identifiers,
cryptography_get_cert_days,
handle_standard_module_arguments,
process_links,
get_default_argspec,
)
import base64
import binascii
import hashlib
import os
import re
import textwrap
import time
import traceback
from datetime import datetime
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_bytes, to_native
from ansible.module_utils.compat import ipaddress as compat_ipaddress
from ansible.module_utils import crypto as crypto_utils
try:
import cryptography
import cryptography.hazmat.backends
import cryptography.x509
except ImportError:
CRYPTOGRAPHY_IMP_ERR = traceback.format_exc()
CRYPTOGRAPHY_FOUND = False
else:
CRYPTOGRAPHY_FOUND = True
def get_cert_days(module, cert_file):
'''
Return the days the certificate in cert_file remains valid and -1
if the file was not found. If cert_file contains more than one
certificate, only the first one will be considered.
'''
if HAS_CURRENT_CRYPTOGRAPHY:
return cryptography_get_cert_days(module, cert_file)
if not os.path.exists(cert_file):
return -1
openssl_bin = module.get_bin_path('openssl', True)
openssl_cert_cmd = [openssl_bin, "x509", "-in", cert_file, "-noout", "-text"]
dummy, out, dummy = module.run_command(openssl_cert_cmd, check_rc=True, encoding=None)
try:
not_after_str = re.search(r"\s+Not After\s*:\s+(.*)", out.decode('utf8')).group(1)
not_after = datetime.fromtimestamp(time.mktime(time.strptime(not_after_str, '%b %d %H:%M:%S %Y %Z')))
except AttributeError:
raise ModuleFailException("No 'Not after' date found in {0}".format(cert_file))
except ValueError:
raise ModuleFailException("Failed to parse 'Not after' date of {0}".format(cert_file))
now = datetime.utcnow()
return (not_after - now).days
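# Illustrative behaviour (hypothetical path): for a certificate that expires in 57 days,
# get_cert_days(module, '/etc/httpd/ssl/sample.com.crt') returns 57; for a missing file it returns -1.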
class ACMEClient(object):
'''
ACME client class. Uses an ACME account object and a CSR to
start and validate ACME challenges and download the respective
certificates.
'''
def __init__(self, module):
self.module = module
self.version = module.params['acme_version']
self.challenge = module.params['challenge']
self.csr = module.params['csr']
self.dest = module.params.get('dest')
self.fullchain_dest = module.params.get('fullchain_dest')
self.chain_dest = module.params.get('chain_dest')
self.account = ACMEAccount(module)
self.directory = self.account.directory
self.data = module.params['data']
self.authorizations = None
self.cert_days = -1
self.order_uri = self.data.get('order_uri') if self.data else None
self.finalize_uri = None
# Make sure account exists
modify_account = module.params['modify_account']
if modify_account or self.version > 1:
contact = []
if module.params['account_email']:
contact.append('mailto:' + module.params['account_email'])
created, account_data = self.account.setup_account(
contact,
agreement=module.params.get('agreement'),
terms_agreed=module.params.get('terms_agreed'),
allow_creation=modify_account,
)
if account_data is None:
raise ModuleFailException(msg='Account does not exist or is deactivated.')
updated = False
if not created and account_data and modify_account:
updated, account_data = self.account.update_account(account_data, contact)
self.changed = created or updated
else:
# This happens if modify_account is False and the ACME v1
# protocol is used. In this case, we do not call setup_account()
# to avoid accidental creation of an account. This is OK
# since for ACME v1, the account URI is not needed to send a
# signed ACME request.
pass
if not os.path.exists(self.csr):
raise ModuleFailException("CSR %s not found" % (self.csr))
self._openssl_bin = module.get_bin_path('openssl', True)
# Extract list of identifiers from CSR
self.identifiers = self._get_csr_identifiers()
def _get_csr_identifiers(self):
'''
Parse the CSR and return the list of requested identifiers
'''
if HAS_CURRENT_CRYPTOGRAPHY:
return cryptography_get_csr_identifiers(self.module, self.csr)
else:
return openssl_get_csr_identifiers(self._openssl_bin, self.module, self.csr)
def _add_or_update_auth(self, identifier_type, identifier, auth):
'''
Add or update the given authorization in the global authorizations list.
Return True if the auth was updated/added and False if no change was
necessary.
'''
if self.authorizations.get(identifier_type + ':' + identifier) == auth:
return False
self.authorizations[identifier_type + ':' + identifier] = auth
return True
def _new_authz_v1(self, identifier_type, identifier):
'''
Create a new authorization for the given identifier.
Return the authorization object of the new authorization
https://tools.ietf.org/html/draft-ietf-acme-acme-02#section-6.4
'''
if self.account.uri is None:
return
new_authz = {
"resource": "new-authz",
"identifier": {"type": identifier_type, "value": identifier},
}
result, info = self.account.send_signed_request(self.directory['new-authz'], new_authz)
if info['status'] not in [200, 201]:
raise ModuleFailException("Error requesting challenges: CODE: {0} RESULT: {1}".format(info['status'], result))
else:
result['uri'] = info['location']
return result
def _get_challenge_data(self, auth, identifier_type, identifier):
'''
Returns a dict with the data for all proposed (and supported) challenges
of the given authorization.
'''
data = {}
# no need to choose a specific challenge here as this module
# is not responsible for fulfilling the challenges. Calculate
# and return the required information for each challenge.
for challenge in auth['challenges']:
challenge_type = challenge['type']
token = re.sub(r"[^A-Za-z0-9_\-]", "_", challenge['token'])
keyauthorization = self.account.get_keyauthorization(token)
if challenge_type == 'http-01':
# https://tools.ietf.org/html/rfc8555#section-8.3
resource = '.well-known/acme-challenge/' + token
data[challenge_type] = {'resource': resource, 'resource_value': keyauthorization}
elif challenge_type == 'dns-01':
if identifier_type != 'dns':
continue
# https://tools.ietf.org/html/rfc8555#section-8.4
resource = '_acme-challenge'
value = nopad_b64(hashlib.sha256(to_bytes(keyauthorization)).digest())
record = (resource + identifier[1:]) if identifier.startswith('*.') else (resource + '.' + identifier)
data[challenge_type] = {'resource': resource, 'resource_value': value, 'record': record}
elif challenge_type == 'tls-alpn-01':
# https://tools.ietf.org/html/draft-ietf-acme-tls-alpn-05#section-3
if identifier_type == 'ip':
# IPv4/IPv6 address: use reverse mapping (RFC1034, RFC3596)
resource = compat_ipaddress.ip_address(identifier).reverse_pointer
if not resource.endswith('.'):
resource += '.'
else:
resource = identifier
value = base64.b64encode(hashlib.sha256(to_bytes(keyauthorization)).digest())
data[challenge_type] = {'resource': resource, 'resource_original': identifier_type + ':' + identifier, 'resource_value': value}
else:
continue
return data
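# Illustrative result for a 'dns' identifier 'example.com' (hypothetical token/key authorization):
#   data['http-01'] == {'resource': '.well-known/acme-challenge/<token>',
#                       'resource_value': '<keyauthorization>'}
#   data['dns-01'] == {'resource': '_acme-challenge',
#                      'resource_value': nopad_b64(sha256(keyauthorization).digest()),
#                      'record': '_acme-challenge.example.com'}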
def _fail_challenge(self, identifier_type, identifier, auth, error):
'''
Aborts with a specific error for a challenge.
'''
error_details = ''
# multiple challenges could have failed at this point, gather error
# details for all of them before failing
for challenge in auth['challenges']:
if challenge['status'] == 'invalid':
error_details += ' CHALLENGE: {0}'.format(challenge['type'])
if 'error' in challenge:
error_details += ' DETAILS: {0};'.format(challenge['error']['detail'])
else:
error_details += ';'
raise ModuleFailException("{0}: {1}".format(error.format(identifier_type + ':' + identifier), error_details))
def _validate_challenges(self, identifier_type, identifier, auth):
'''
Validate the authorization provided in the auth dict. Returns True
when the validation was successful and False when it was not.
'''
for challenge in auth['challenges']:
if self.challenge != challenge['type']:
continue
uri = challenge['uri'] if self.version == 1 else challenge['url']
challenge_response = {}
if self.version == 1:
token = re.sub(r"[^A-Za-z0-9_\-]", "_", challenge['token'])
keyauthorization = self.account.get_keyauthorization(token)
challenge_response["resource"] = "challenge"
challenge_response["keyAuthorization"] = keyauthorization
challenge_response["type"] = self.challenge
result, info = self.account.send_signed_request(uri, challenge_response)
if info['status'] not in [200, 202]:
raise ModuleFailException("Error validating challenge: CODE: {0} RESULT: {1}".format(info['status'], result))
status = ''
while status not in ['valid', 'invalid', 'revoked']:
result, dummy = self.account.get_request(auth['uri'])
result['uri'] = auth['uri']
if self._add_or_update_auth(identifier_type, identifier, result):
self.changed = True
# https://tools.ietf.org/html/draft-ietf-acme-acme-02#section-6.1.2
# "status (required, string): ...
# If this field is missing, then the default value is "pending"."
if self.version == 1 and 'status' not in result:
status = 'pending'
else:
status = result['status']
time.sleep(2)
if status == 'invalid':
self._fail_challenge(identifier_type, identifier, result, 'Authorization for {0} returned invalid')
return status == 'valid'
def _finalize_cert(self):
'''
Create a new certificate based on the csr.
Return the certificate object as dict
https://tools.ietf.org/html/rfc8555#section-7.4
'''
csr = pem_to_der(self.csr)
new_cert = {
"csr": nopad_b64(csr),
}
result, info = self.account.send_signed_request(self.finalize_uri, new_cert)
if info['status'] not in [200]:
raise ModuleFailException("Error new cert: CODE: {0} RESULT: {1}".format(info['status'], result))
status = result['status']
while status not in ['valid', 'invalid']:
time.sleep(2)
result, dummy = self.account.get_request(self.order_uri)
status = result['status']
if status != 'valid':
raise ModuleFailException("Error new cert: CODE: {0} STATUS: {1} RESULT: {2}".format(info['status'], status, result))
return result['certificate']
def _der_to_pem(self, der_cert):
'''
Convert the DER format certificate in der_cert to a PEM format
certificate and return it.
'''
return """-----BEGIN CERTIFICATE-----\n{0}\n-----END CERTIFICATE-----\n""".format(
"\n".join(textwrap.wrap(base64.b64encode(der_cert).decode('utf8'), 64)))
def _download_cert(self, url):
'''
Download and parse the certificate chain.
https://tools.ietf.org/html/rfc8555#section-7.4.2
'''
content, info = self.account.get_request(url, parse_json_result=False, headers={'Accept': 'application/pem-certificate-chain'})
if not content or not info['content-type'].startswith('application/pem-certificate-chain'):
raise ModuleFailException("Cannot download certificate chain from {0}: {1} (headers: {2})".format(url, content, info))
cert = None
chain = []
# Parse data
lines = content.decode('utf-8').splitlines(True)
current = []
for line in lines:
if line.strip():
current.append(line)
if line.startswith('-----END CERTIFICATE-----'):
if cert is None:
cert = ''.join(current)
else:
chain.append(''.join(current))
current = []
alternates = []
def f(link, relation):
if relation == 'up':
# Process link-up headers if there was no chain in reply
if not chain:
chain_result, chain_info = self.account.get_request(link, parse_json_result=False)
if chain_info['status'] in [200, 201]:
chain.append(self._der_to_pem(chain_result))
elif relation == 'alternate':
alternates.append(link)
process_links(info, f)
if cert is None or current:
raise ModuleFailException("Failed to parse certificate chain download from {0}: {1} (headers: {2})".format(url, content, info))
return {'cert': cert, 'chain': chain, 'alternates': alternates}
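# Illustrative return value (hypothetical PEM data and URL):
#   {'cert': '-----BEGIN CERTIFICATE-----\n<leaf>...',
#    'chain': ['-----BEGIN CERTIFICATE-----\n<intermediate>...'],
#    'alternates': ['https://acme.example.com/acme/cert/1234/1']}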
def _new_cert_v1(self):
'''
Create a new certificate based on the CSR (ACME v1 protocol).
Return the certificate object as dict
https://tools.ietf.org/html/draft-ietf-acme-acme-02#section-6.5
'''
csr = pem_to_der(self.csr)
new_cert = {
"resource": "new-cert",
"csr": nopad_b64(csr),
}
result, info = self.account.send_signed_request(self.directory['new-cert'], new_cert)
chain = []
def f(link, relation):
if relation == 'up':
chain_result, chain_info = self.account.get_request(link, parse_json_result=False)
if chain_info['status'] in [200, 201]:
# list.clear() does not exist on Python 2.x; del chain[:] empties the list in place on both versions
del chain[:]
chain.append(self._der_to_pem(chain_result))
process_links(info, f)
if info['status'] not in [200, 201]:
raise ModuleFailException("Error new cert: CODE: {0} RESULT: {1}".format(info['status'], result))
else:
return {'cert': self._der_to_pem(result), 'uri': info['location'], 'chain': chain}
def _new_order_v2(self):
'''
Start a new certificate order (ACME v2 protocol).
https://tools.ietf.org/html/rfc8555#section-7.4
'''
identifiers = []
for identifier_type, identifier in self.identifiers:
identifiers.append({
'type': identifier_type,
'value': identifier,
})
new_order = {
"identifiers": identifiers
}
result, info = self.account.send_signed_request(self.directory['newOrder'], new_order)
if info['status'] not in [201]:
raise ModuleFailException("Error new order: CODE: {0} RESULT: {1}".format(info['status'], result))
for auth_uri in result['authorizations']:
auth_data, dummy = self.account.get_request(auth_uri)
auth_data['uri'] = auth_uri
identifier_type = auth_data['identifier']['type']
identifier = auth_data['identifier']['value']
if auth_data.get('wildcard', False):
identifier = '*.{0}'.format(identifier)
self.authorizations[identifier_type + ':' + identifier] = auth_data
self.order_uri = info['location']
self.finalize_uri = result['finalize']
def is_first_step(self):
'''
Return True if this is the first execution of this module, i.e. if a
sufficient data object from a first run has not been provided.
'''
if self.data is None:
return True
if self.version == 1:
# As soon as self.data is a non-empty object, we are in the second stage.
return not self.data
else:
# We are in the second stage if data.order_uri is given (which has been
# stored in self.order_uri by the constructor).
return self.order_uri is None
def start_challenges(self):
'''
Create new authorizations for all identifiers of the CSR,
respectively start a new order for ACME v2.
'''
self.authorizations = {}
if self.version == 1:
for identifier_type, identifier in self.identifiers:
if identifier_type != 'dns':
raise ModuleFailException('ACME v1 only supports DNS identifiers!')
for identifier_type, identifier in self.identifiers:
new_auth = self._new_authz_v1(identifier_type, identifier)
self._add_or_update_auth(identifier_type, identifier, new_auth)
else:
self._new_order_v2()
self.changed = True
def get_challenges_data(self):
'''
Get challenge details for the chosen challenge type.
Return a tuple of generic challenge details, and specialized DNS challenge details.
'''
# Get general challenge data
data = {}
for type_identifier, auth in self.authorizations.items():
identifier_type, identifier = type_identifier.split(':', 1)
auth = self.authorizations[type_identifier]
# Skip valid authorizations: their challenges are already valid
# and do not need to be returned
if auth['status'] == 'valid':
continue
# We drop the type from the key to preserve backwards compatibility
data[identifier] = self._get_challenge_data(auth, identifier_type, identifier)
# Get DNS challenge data
data_dns = {}
if self.challenge == 'dns-01':
for identifier, challenges in data.items():
if self.challenge in challenges:
values = data_dns.get(challenges[self.challenge]['record'])
if values is None:
values = []
data_dns[challenges[self.challenge]['record']] = values
values.append(challenges[self.challenge]['resource_value'])
return data, data_dns
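# Illustrative return shapes (hypothetical values), matching the challenge_data and
# challenge_data_dns return value documentation above:
#   data == {'example.com': {'dns-01': {'resource': '_acme-challenge',
#                                       'resource_value': '<b64 digest>',
#                                       'record': '_acme-challenge.example.com'}}}
#   data_dns == {'_acme-challenge.example.com': ['<b64 digest>']}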
def finish_challenges(self):
'''
Verify challenges for all identifiers of the CSR.
'''
self.authorizations = {}
# Step 1: obtain challenge information
if self.version == 1:
# For ACME v1, we attempt to create new authzs. Existing ones
# will be returned instead.
for identifier_type, identifier in self.identifiers:
new_auth = self._new_authz_v1(identifier_type, identifier)
self._add_or_update_auth(identifier_type, identifier, new_auth)
else:
# For ACME v2, we obtain the order object by fetching the
# order URI, and extract the information from there.
result, info = self.account.get_request(self.order_uri)
if not result:
raise ModuleFailException("Cannot download order from {0}: {1} (headers: {2})".format(self.order_uri, result, info))
if info['status'] not in [200]:
raise ModuleFailException("Error on downloading order: CODE: {0} RESULT: {1}".format(info['status'], result))
for auth_uri in result['authorizations']:
auth_data, dummy = self.account.get_request(auth_uri)
auth_data['uri'] = auth_uri
identifier_type = auth_data['identifier']['type']
identifier = auth_data['identifier']['value']
if auth_data.get('wildcard', False):
identifier = '*.{0}'.format(identifier)
self.authorizations[identifier_type + ':' + identifier] = auth_data
self.finalize_uri = result['finalize']
# Step 2: validate challenges
for type_identifier, auth in self.authorizations.items():
if auth['status'] == 'pending':
identifier_type, identifier = type_identifier.split(':', 1)
self._validate_challenges(identifier_type, identifier, auth)
def _chain_matches(self, chain, criterium):
'''
Check whether an alternate chain matches the specified criterium.
'''
if criterium['test_certificates'] == 'last':
chain = chain[-1:]
for cert in chain:
try:
x509 = cryptography.x509.load_pem_x509_certificate(to_bytes(cert), cryptography.hazmat.backends.default_backend())
matches = True
if criterium['subject']:
for k, v in crypto_utils.parse_name_field(criterium['subject']):
oid = crypto_utils.cryptography_name_to_oid(k)
value = to_native(v)
found = False
for attribute in x509.subject:
if attribute.oid == oid and value == to_native(attribute.value):
found = True
break
if not found:
matches = False
break
if criterium['issuer']:
for k, v in crypto_utils.parse_name_field(criterium['issuer']):
oid = crypto_utils.cryptography_name_to_oid(k)
value = to_native(v)
found = False
for attribute in x509.issuer:
if attribute.oid == oid and value == to_native(attribute.value):
found = True
break
if not found:
matches = False
break
if criterium['subject_key_identifier']:
try:
ext = x509.extensions.get_extension_for_class(cryptography.x509.SubjectKeyIdentifier)
if criterium['subject_key_identifier'] != ext.value.digest:
matches = False
except cryptography.x509.ExtensionNotFound:
matches = False
if criterium['authority_key_identifier']:
try:
ext = x509.extensions.get_extension_for_class(cryptography.x509.AuthorityKeyIdentifier)
if criterium['authority_key_identifier'] != ext.value.key_identifier:
matches = False
except cryptography.x509.ExtensionNotFound:
matches = False
if matches:
return True
except Exception as e:
self.module.warn('Error while loading certificate {0}: {1}'.format(cert, e))
return False
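# Illustrative criterion dict (hypothetical values, mirroring the select_chain
# example in EXAMPLES above); unset conditions are skipped:
#   {'test_certificates': 'last',
#    'issuer': {'CN': 'DST Root CA X3', 'O': 'Digital Signature Trust Co.'},
#    'subject': None, 'subject_key_identifier': None, 'authority_key_identifier': None}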
def get_certificate(self):
'''
Request a new certificate and write it to the destination file.
First verifies whether all authorizations are valid; if not, aborts
with an error.
'''
for identifier_type, identifier in self.identifiers:
auth = self.authorizations.get(identifier_type + ':' + identifier)
if auth is None:
raise ModuleFailException('Found no authorization information for "{0}"!'.format(identifier_type + ':' + identifier))
if 'status' not in auth:
self._fail_challenge(identifier_type, identifier, auth, 'Authorization for {0} returned no status')
if auth['status'] != 'valid':
self._fail_challenge(identifier_type, identifier, auth, 'Authorization for {0} returned status ' + str(auth['status']))
if self.version == 1:
cert = self._new_cert_v1()
else:
cert_uri = self._finalize_cert()
cert = self._download_cert(cert_uri)
if self.module.params['retrieve_all_alternates'] or self.module.params['select_chain']:
# Retrieve alternate chains
alternate_chains = []
for alternate in cert['alternates']:
try:
alt_cert = self._download_cert(alternate)
except ModuleFailException as e:
self.module.warn('Error while downloading alternative certificate {0}: {1}'.format(alternate, e))
continue
alternate_chains.append(alt_cert)
# Prepare return value for all alternate chains
if self.module.params['retrieve_all_alternates']:
self.all_chains = []
def _append_all_chains(cert_data):
self.all_chains.append(dict(
cert=cert_data['cert'].encode('utf8'),
chain=("\n".join(cert_data.get('chain', []))).encode('utf8'),
full_chain=(cert_data['cert'] + "\n".join(cert_data.get('chain', []))).encode('utf8'),
))
_append_all_chains(cert)
for alt_chain in alternate_chains:
_append_all_chains(alt_chain)
# Try to select alternate chain depending on criteria
if self.module.params['select_chain']:
matching_chain = None
all_chains = [cert] + alternate_chains
for criterium_idx, criterium in enumerate(self.module.params['select_chain']):
for v in ('subject_key_identifier', 'authority_key_identifier'):
if criterium[v]:
try:
criterium[v] = binascii.unhexlify(criterium[v].replace(':', ''))
except Exception:
self.module.warn('Criterion {0} in select_chain has an invalid {1} value. '
'Ignoring criterion.'.format(criterium_idx, v))
continue
for alt_chain in all_chains:
if self._chain_matches(alt_chain.get('chain', []), criterium):
self.module.debug('Found matching chain for criterium {0}'.format(criterium_idx))
matching_chain = alt_chain
break
if matching_chain:
break
if matching_chain:
cert.update(matching_chain)
else:
self.module.debug('Found no matching alternative chain')
if cert['cert'] is not None:
pem_cert = cert['cert']
chain = [link for link in cert.get('chain', [])]
if self.dest and write_file(self.module, self.dest, pem_cert.encode('utf8')):
self.cert_days = get_cert_days(self.module, self.dest)
self.changed = True
if self.fullchain_dest and write_file(self.module, self.fullchain_dest, (pem_cert + "\n".join(chain)).encode('utf8')):
self.cert_days = get_cert_days(self.module, self.fullchain_dest)
self.changed = True
if self.chain_dest and write_file(self.module, self.chain_dest, ("\n".join(chain)).encode('utf8')):
self.changed = True
def deactivate_authzs(self):
'''
Deactivates all valid authz's. Does not raise exceptions.
https://community.letsencrypt.org/t/authorization-deactivation/19860/2
https://tools.ietf.org/html/rfc8555#section-7.5.2
'''
authz_deactivate = {
'status': 'deactivated'
}
if self.version == 1:
authz_deactivate['resource'] = 'authz'
if self.authorizations:
for identifier_type, identifier in self.identifiers:
auth = self.authorizations.get(identifier_type + ':' + identifier)
if auth is None or auth.get('status') != 'valid':
continue
try:
result, info = self.account.send_signed_request(auth['uri'], authz_deactivate)
if 200 <= info['status'] < 300 and result.get('status') == 'deactivated':
auth['status'] = 'deactivated'
except Exception as dummy:
# Ignore errors on deactivating authzs
pass
if auth.get('status') != 'deactivated':
self.module.warn('Could not deactivate authz object {0}.'.format(auth['uri']))
def main():
argument_spec = get_default_argspec()
argument_spec.update(dict(
modify_account=dict(type='bool', default=True),
account_email=dict(type='str'),
agreement=dict(type='str'),
terms_agreed=dict(type='bool', default=False),
challenge=dict(type='str', default='http-01', choices=['http-01', 'dns-01', 'tls-alpn-01']),
csr=dict(type='path', required=True, aliases=['src']),
data=dict(type='dict'),
dest=dict(type='path', aliases=['cert']),
fullchain_dest=dict(type='path', aliases=['fullchain']),
chain_dest=dict(type='path', aliases=['chain']),
remaining_days=dict(type='int', default=10),
deactivate_authzs=dict(type='bool', default=False),
force=dict(type='bool', default=False),
retrieve_all_alternates=dict(type='bool', default=False),
select_chain=dict(type='list', elements='dict', options=dict(
test_certificates=dict(type='str', default='all', choices=['last', 'all']),
issuer=dict(type='dict'),
subject=dict(type='dict'),
subject_key_identifier=dict(type='str'),
authority_key_identifier=dict(type='str'),
)),
))
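# A minimal, hypothetical playbook value for select_chain, preferring a chain
# whose issuer has a particular CN (the CN shown is only an example):
#
#   select_chain:
#     - test_certificates: last
#       issuer:
#         CN: Example Root CA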
module = AnsibleModule(
argument_spec=argument_spec,
required_one_of=(
['account_key_src', 'account_key_content'],
['dest', 'fullchain_dest'],
),
mutually_exclusive=(
['account_key_src', 'account_key_content'],
),
supports_check_mode=True,
)
backend = handle_standard_module_arguments(module)
if module.params['select_chain']:
if backend != 'cryptography':
module.fail_json(msg="The 'select_chain' can only be used with the 'cryptography' backend.")
elif not CRYPTOGRAPHY_FOUND:
module.fail_json(msg=missing_required_lib('cryptography'))
try:
if module.params.get('dest'):
cert_days = get_cert_days(module, module.params['dest'])
else:
cert_days = get_cert_days(module, module.params['fullchain_dest'])
if module.params['force'] or cert_days < module.params['remaining_days']:
# If checkmode is active, base the changed state solely on the status
# of the certificate file as all other actions (accessing an account, checking
# the authorization status...) would lead to potential changes of the current
# state
if module.check_mode:
module.exit_json(changed=True, authorizations={}, challenge_data={}, cert_days=cert_days)
else:
client = ACMEClient(module)
client.cert_days = cert_days
other = dict()
if client.is_first_step():
# First run: start challenges / start new order
client.start_challenges()
else:
# Second run: finish challenges, and get certificate
try:
client.finish_challenges()
client.get_certificate()
if module.params['retrieve_all_alternates']:
other['all_chains'] = client.all_chains
finally:
if module.params['deactivate_authzs']:
client.deactivate_authzs()
data, data_dns = client.get_challenges_data()
auths = dict()
for k, v in client.authorizations.items():
# Remove "type:" from key
auths[k.split(':', 1)[1]] = v
module.exit_json(
changed=client.changed,
authorizations=auths,
finalize_uri=client.finalize_uri,
order_uri=client.order_uri,
account_uri=client.account.uri,
challenge_data=data,
challenge_data_dns=data_dns,
cert_days=client.cert_days,
**other
)
else:
module.exit_json(changed=False, cert_days=cert_days)
except ModuleFailException as e:
e.do_fail(module)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,455 |
zabbix_action: do not require esc_period for state: absent
|
##### SUMMARY
`esc_period` is mandatory even with `state: absent` - should be optional in that case.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
zabbix_action
##### ANSIBLE VERSION
latest
##### CONFIGURATION
Not relevant.
##### OS / ENVIRONMENT
Not relevant.
##### STEPS TO REPRODUCE
```yaml
- name: "Report problems to Zabbix administrators"
event_source: 'trigger'
state: absent
```
##### EXPECTED RESULTS
Action, if exists, is removed.
##### ACTUAL RESULTS
```paste below
"msg": "missing required arguments: esc_period"
```
|
https://github.com/ansible/ansible/issues/63455
|
https://github.com/ansible/ansible/pull/63969
|
21c8dae83b832a8abde59e7ba94c74d6c7f8a128
|
0cb19e655c7a6fdf9acbde7d1e8f712dc0f7509d
| 2019-10-14T11:46:25Z |
python
| 2019-11-08T11:15:13Z |
changelogs/fragments/63969-zabbix_action_argsfix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,455 |
zabbix_action: do not require esc_period for state: absent
|
##### SUMMARY
`esc_period` is mandatory even with `state: absent` - should be optional in that case.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
zabbix_action
##### ANSIBLE VERSION
latest
##### CONFIGURATION
Not relevant.
##### OS / ENVIRONMENT
Not relevant.
##### STEPS TO REPRODUCE
```yaml
- name: "Report problems to Zabbix administrators"
event_source: 'trigger'
state: absent
```
##### EXPECTED RESULTS
Action, if exists, is removed.
##### ACTUAL RESULTS
```paste below
"msg": "missing required arguments: esc_period"
```
|
https://github.com/ansible/ansible/issues/63455
|
https://github.com/ansible/ansible/pull/63969
|
21c8dae83b832a8abde59e7ba94c74d6c7f8a128
|
0cb19e655c7a6fdf9acbde7d1e8f712dc0f7509d
| 2019-10-14T11:46:25Z |
python
| 2019-11-08T11:15:13Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
No notable changes
Modules
=======
Modules removed
---------------
The following modules no longer exist:
* letsencrypt: use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
The following functionality will be removed in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`openssl_csr <openssl_csr_module>` module's option ``version`` no longer supports values other than ``1`` (the current only standardized CSR version).
* :ref:`docker_container <docker_container_module>`: the ``trust_image_content`` option will be removed. It has always been ignored by the module.
* :ref:`iam_managed_policy <iam_managed_policy_module>`: the ``fail_on_delete`` option will be removed. It has always been ignored by the module.
* :ref:`s3_lifecycle <s3_lifecycle_module>`: the ``requester_pays`` option will be removed. It has always been ignored by the module.
* :ref:`s3_sync <s3_sync_module>`: the ``retries`` option will be removed. It has always been ignored by the module.
* The return values ``err`` and ``out`` of :ref:`docker_stack <docker_stack_module>` have been deprecated. Use ``stdout`` and ``stderr`` from now on instead.
* :ref:`cloudformation <cloudformation_module>`: the ``template_format`` option will be removed. It has been ignored by the module since Ansible 2.3.
* :ref:`data_pipeline <data_pipeline_module>`: the ``version`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_eip <ec2_eip_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.3.
* :ref:`ec2_key <ec2_key_module>`: the ``wait`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_key <ec2_key_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_lc <ec2_lc_module>`: the ``associate_public_ip_address`` option will be removed. It has always been ignored by the module.
The following functionality will change in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`docker_container <docker_container_module>` module has a new option, ``container_default_behavior``, whose default value will change from ``compatibility`` to ``no_defaults``. Set to an explicit value to avoid deprecation warnings.
Noteworthy module changes
-------------------------
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in :ref:`pacman <pacman_module>` module has been removed, you should use ``extra_args=--recursive`` instead.
* :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module does not require VM name which was a required parameter for releases prior to Ansible 2.10.
Plugins
=======
No notable changes
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,455 |
zabbix_action: do not require esc_period for state: absent
|
##### SUMMARY
`esc_period` is mandatory even with `state: absent` - should be optional in that case.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
zabbix_action
##### ANSIBLE VERSION
latest
##### CONFIGURATION
Not relevant.
##### OS / ENVIRONMENT
Not relevant.
##### STEPS TO REPRODUCE
```yaml
- name: "Report problems to Zabbix administrators"
event_source: 'trigger'
state: absent
```
##### EXPECTED RESULTS
Action, if exists, is removed.
##### ACTUAL RESULTS
```paste below
"msg": "missing required arguments: esc_period"
```
|
https://github.com/ansible/ansible/issues/63455
|
https://github.com/ansible/ansible/pull/63969
|
21c8dae83b832a8abde59e7ba94c74d6c7f8a128
|
0cb19e655c7a6fdf9acbde7d1e8f712dc0f7509d
| 2019-10-14T11:46:25Z |
python
| 2019-11-08T11:15:13Z |
lib/ansible/modules/monitoring/zabbix/zabbix_action.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: zabbix_action
short_description: Create/Delete/Update Zabbix actions
version_added: "2.8"
description:
- This module allows you to create, modify and delete Zabbix actions.
author:
- Ruben Tsirunyan (@rubentsirunyan)
- Ruben Harutyunov (@K-DOT)
requirements:
- zabbix-api
options:
name:
description:
- Name of the action
required: true
event_source:
description:
- Type of events that the action will handle.
required: true
choices: ['trigger', 'discovery', 'auto_registration', 'internal']
state:
description:
- State of the action.
- On C(present), it will create an action if it does not exist or update the action if the associated data is different.
- On C(absent), it will remove the action if it exists.
choices: ['present', 'absent']
default: 'present'
status:
description:
- Status of the action.
choices: ['enabled', 'disabled']
default: 'enabled'
pause_in_maintenance:
description:
- Whether to pause escalation during maintenance periods or not.
- Can be used when I(event_source=trigger).
type: 'bool'
default: true
esc_period:
description:
- Default operation step duration. Must be greater than 60 seconds. Accepts seconds, time unit with suffix and user macro.
required: true
conditions:
type: list
description:
- List of dictionaries of conditions to evaluate.
- For more information about suboptions of this option please
check out Zabbix API documentation U(https://www.zabbix.com/documentation/3.4/manual/api/reference/action/object#action_filter_condition)
suboptions:
type:
description: Type (label) of the condition.
choices:
# trigger
- host_group
- host
- trigger
- trigger_name
- trigger_severity
- time_period
- host_template
- application
- maintenance_status
- event_tag
- event_tag_value
# discovery
- host_IP
- discovered_service_type
- discovered_service_port
- discovery_status
- uptime_or_downtime_duration
- received_value
- discovery_rule
- discovery_check
- proxy
- discovery_object
# auto_registration
- proxy
- host_name
- host_metadata
# internal
- host_group
- host
- host_template
- application
- event_type
value:
description:
- Value to compare with.
- When I(type) is set to C(discovery_status), the choices
are C(up), C(down), C(discovered), C(lost).
- When I(type) is set to C(discovery_object), the choices
are C(host), C(service).
- When I(type) is set to C(event_type), the choices
are C(item in not supported state), C(item in normal state),
C(LLD rule in not supported state),
C(LLD rule in normal state), C(trigger in unknown state), C(trigger in normal state).
- When I(type) is set to C(trigger_severity), the choices
are (case-insensitive) C(not classified), C(information), C(warning), C(average), C(high), C(disaster)
irrespective of user-visible names being changed in Zabbix. Defaults to C(not classified) if omitted.
- Besides the above options, this is usually either the name
of the object or a string to compare with.
operator:
description:
- Condition operator.
- When I(type) is set to C(time_period), the choices are C(in), C(not in).
- C(matches), C(does not match), C(Yes) and C(No) condition operators work only with >= Zabbix 4.0
choices:
- '='
- '<>'
- 'like'
- 'not like'
- 'in'
- '>='
- '<='
- 'not in'
- 'matches'
- 'does not match'
- 'Yes'
- 'No'
formulaid:
description:
- Arbitrary unique ID that is used to reference the condition from a custom expression.
- Can only contain upper-case letters.
- Required for custom expression filters.
eval_type:
description:
- Filter condition evaluation method.
- Defaults to C(andor) if fewer than two conditions are specified or if
I(formula) is not specified.
- Defaults to C(custom_expression) when formula is specified.
choices:
- 'andor'
- 'and'
- 'or'
- 'custom_expression'
formula:
description:
- User-defined expression to be used for evaluating conditions of filters with a custom expression.
- The expression must contain IDs that reference specific filter conditions by their I(formulaid).
- The IDs used in the expression must exactly match the ones
defined in the filter conditions. No condition can remain unused or omitted.
- Required for custom expression filters.
- Use sequential IDs that start at "A". If non-sequential IDs are used, Zabbix re-indexes them.
This makes each module run notice the difference in IDs and update the action.
default_message:
description:
- Problem message default text.
default_subject:
description:
- Problem message default subject.
recovery_default_message:
description:
- Recovery message text.
- Works only with >= Zabbix 3.2
recovery_default_subject:
description:
- Recovery message subject.
- Works only with >= Zabbix 3.2
acknowledge_default_message:
description:
- Update operation (known as "Acknowledge operation" before Zabbix 4.0) message text.
- Works only with >= Zabbix 3.4
acknowledge_default_subject:
description:
- Update operation (known as "Acknowledge operation" before Zabbix 4.0) message subject.
- Works only with >= Zabbix 3.4
operations:
type: list
description:
- List of action operations
suboptions:
type:
description:
- Type of operation.
choices:
- send_message
- remote_command
- add_host
- remove_host
- add_to_host_group
- remove_from_host_group
- link_to_template
- unlink_from_template
- enable_host
- disable_host
- set_host_inventory_mode
esc_period:
description:
- Duration of an escalation step in seconds.
- Must be greater than 60 seconds.
- Accepts seconds, time unit with suffix and user macro.
- If set to 0 or 0s, the default action escalation period will be used.
default: 0s
esc_step_from:
description:
- Step to start escalation from.
default: 1
esc_step_to:
description:
- Step to end escalation at.
default: 1
send_to_groups:
type: list
description:
- User groups to send messages to.
send_to_users:
type: list
description:
- Users (usernames or aliases) to send messages to.
message:
description:
- Operation message text.
- Falls back to the text of I(default_message) if neither this option nor I(subject) is specified.
subject:
description:
- Operation message subject.
- Falls back to the text of I(default_subject) if neither this option nor I(message) is specified.
media_type:
description:
- Media type that will be used to send the message.
- Set to C(all) for all media types
default: 'all'
operation_condition:
type: 'str'
description:
- The action operation condition object defines a condition that must be met to perform the current operation.
choices:
- acknowledged
- not_acknowledged
host_groups:
type: list
description:
- List of host groups host should be added to.
- Required when I(type=add_to_host_group) or I(type=remove_from_host_group).
templates:
type: list
description:
- List of templates host should be linked to.
- Required when I(type=link_to_template) or I(type=unlink_from_template).
inventory:
description:
- Host inventory mode.
- Required when I(type=set_host_inventory_mode).
command_type:
description:
- Type of operation command.
- Required when I(type=remote_command).
choices:
- custom_script
- ipmi
- ssh
- telnet
- global_script
command:
description:
- Command to run.
- Required when I(type=remote_command) and I(command_type!=global_script).
execute_on:
description:
- Target on which the custom script operation command will be executed.
- Required when I(type=remote_command) and I(command_type=custom_script).
choices:
- agent
- server
- proxy
run_on_groups:
description:
- Host groups to run remote commands on.
- Required when I(type=remote_command) if I(run_on_hosts) is not set.
run_on_hosts:
description:
- Hosts to run remote commands on.
- Required when I(type=remote_command) if I(run_on_groups) is not set.
- If set to 0 the command will be run on the current host.
ssh_auth_type:
description:
- Authentication method used for SSH commands.
- Required when I(type=remote_command) and I(command_type=ssh).
choices:
- password
- public_key
ssh_privatekey_file:
description:
- Name of the private key file used for SSH commands with public key authentication.
- Required when I(type=remote_command) and I(command_type=ssh).
ssh_publickey_file:
description:
- Name of the public key file used for SSH commands with public key authentication.
- Required when I(type=remote_command) and I(command_type=ssh).
username:
description:
- User name used for authentication.
- Required when I(type=remote_command) and I(command_type in [ssh, telnet]).
password:
description:
- Password used for authentication.
- Required when I(type=remote_command) and I(command_type in [ssh, telnet]).
port:
description:
- Port number used for authentication.
- Required when I(type=remote_command) and I(command_type in [ssh, telnet]).
script_name:
description:
- The name of script used for global script commands.
- Required when I(type=remote_command) and I(command_type=global_script).
recovery_operations:
type: list
description:
- List of recovery operations.
- C(Suboptions) are the same as for I(operations).
- Works only with >= Zabbix 3.2
acknowledge_operations:
type: list
description:
- List of acknowledge operations.
- C(Suboptions) are the same as for I(operations).
- Works only with >= Zabbix 3.4
notes:
- Only Zabbix >= 3.0 is supported.
extends_documentation_fragment:
- zabbix
'''
EXAMPLES = '''
# Trigger action with only one condition
- name: Deploy trigger action
zabbix_action:
server_url: "http://zabbix.example.com/zabbix/"
login_user: Admin
login_password: secret
name: "Send alerts to Admin"
event_source: 'trigger'
state: present
status: enabled
esc_period: 60
conditions:
- type: 'trigger_severity'
operator: '>='
value: 'Information'
operations:
- type: send_message
subject: "Something bad is happening"
message: "Come on, guys do something"
media_type: 'Email'
send_to_users:
- 'Admin'
# Trigger action with multiple conditions and operations
- name: Deploy trigger action
zabbix_action:
server_url: "http://zabbix.example.com/zabbix/"
login_user: Admin
login_password: secret
name: "Send alerts to Admin"
event_source: 'trigger'
state: present
status: enabled
esc_period: 60
conditions:
- type: 'trigger_name'
operator: 'like'
value: 'Zabbix agent is unreachable'
formulaid: A
- type: 'trigger_severity'
operator: '>='
value: 'disaster'
formulaid: B
formula: A or B
operations:
- type: send_message
media_type: 'Email'
send_to_users:
- 'Admin'
- type: remote_command
command: 'systemctl restart zabbix-agent'
command_type: custom_script
execute_on: server
run_on_hosts:
- 0
# Trigger action with recovery and acknowledge operations
- name: Deploy trigger action
zabbix_action:
server_url: "http://zabbix.example.com/zabbix/"
login_user: Admin
login_password: secret
name: "Send alerts to Admin"
event_source: 'trigger'
state: present
status: enabled
esc_period: 60
conditions:
- type: 'trigger_severity'
operator: '>='
value: 'Information'
operations:
- type: send_message
subject: "Something bad is happening"
message: "Come on, guys do something"
media_type: 'Email'
send_to_users:
- 'Admin'
recovery_operations:
- type: send_message
subject: "Host is down"
message: "Come on, guys do something"
media_type: 'Email'
send_to_users:
- 'Admin'
acknowledge_operations:
- type: send_message
media_type: 'Email'
send_to_users:
- 'Admin'
'''
RETURN = '''
msg:
description: The result of the operation
returned: success
type: str
sample: 'Action Deleted: Register webservers, ID: 0001'
'''
import atexit
import traceback
try:
from zabbix_api import ZabbixAPI
HAS_ZABBIX_API = True
except ImportError:
ZBX_IMP_ERR = traceback.format_exc()
HAS_ZABBIX_API = False
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class Zapi(object):
"""
A simple wrapper over the Zabbix API
"""
def __init__(self, module, zbx):
self._module = module
self._zapi = zbx
def check_if_action_exists(self, name):
"""Check if action exists.
Args:
name: Name of the action.
Returns:
list: matching actions with operations and filter expanded; an empty list if the action does not exist.
"""
try:
_action = self._zapi.action.get({
"selectOperations": "extend",
"selectRecoveryOperations": "extend",
"selectAcknowledgeOperations": "extend",
"selectFilter": "extend",
'selectInventory': 'extend',
'filter': {'name': [name]}
})
if len(_action) > 0:
_action[0]['recovery_operations'] = _action[0].pop('recoveryOperations', [])
_action[0]['acknowledge_operations'] = _action[0].pop('acknowledgeOperations', [])
return _action
except Exception as e:
self._module.fail_json(msg="Failed to check if action '%s' exists: %s" % (name, e))
def get_action_by_name(self, name):
"""Get action by name
Args:
name: Name of the action.
Returns:
dict: Zabbix action
"""
try:
action_list = self._zapi.action.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'name': [name]}
})
if len(action_list) < 1:
self._module.fail_json(msg="Action not found: " % name)
else:
return action_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get ID of '%s': %s" % (name, e))
def get_host_by_host_name(self, host_name):
"""Get host by host name
Args:
host_name: host name.
Returns:
host matching host name
"""
try:
host_list = self._zapi.host.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'host': [host_name]}
})
if len(host_list) < 1:
self._module.fail_json(msg="Host not found: %s" % host_name)
else:
return host_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get host '%s': %s" % (host_name, e))
def get_hostgroup_by_hostgroup_name(self, hostgroup_name):
"""Get host group by host group name
Args:
hostgroup_name: host group name.
Returns:
host group matching host group name
"""
try:
hostgroup_list = self._zapi.hostgroup.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'name': [hostgroup_name]}
})
if len(hostgroup_list) < 1:
self._module.fail_json(msg="Host group not found: %s" % hostgroup_name)
else:
return hostgroup_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get host group '%s': %s" % (hostgroup_name, e))
def get_template_by_template_name(self, template_name):
"""Get template by template name
Args:
template_name: template name.
Returns:
template matching template name
"""
try:
template_list = self._zapi.template.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'host': [template_name]}
})
if len(template_list) < 1:
self._module.fail_json(msg="Template not found: %s" % template_name)
else:
return template_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get template '%s': %s" % (template_name, e))
def get_trigger_by_trigger_name(self, trigger_name):
"""Get trigger by trigger name
Args:
trigger_name: trigger name.
Returns:
trigger matching trigger name
"""
try:
trigger_list = self._zapi.trigger.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'description': [trigger_name]}
})
if len(trigger_list) < 1:
self._module.fail_json(msg="Trigger not found: %s" % trigger_name)
else:
return trigger_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get trigger '%s': %s" % (trigger_name, e))
def get_discovery_rule_by_discovery_rule_name(self, discovery_rule_name):
"""Get discovery rule by discovery rule name
Args:
discovery_rule_name: discovery rule name.
Returns:
discovery rule matching discovery rule name
"""
try:
discovery_rule_list = self._zapi.drule.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'name': [discovery_rule_name]}
})
if len(discovery_rule_list) < 1:
self._module.fail_json(msg="Discovery rule not found: %s" % discovery_rule_name)
else:
return discovery_rule_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get discovery rule '%s': %s" % (discovery_rule_name, e))
def get_discovery_check_by_discovery_check_name(self, discovery_check_name):
"""Get discovery check by discovery check name
Args:
discovery_check_name: discovery check name.
Returns:
discovery check matching discovery check name
"""
try:
discovery_check_list = self._zapi.dcheck.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'name': [discovery_check_name]}
})
if len(discovery_check_list) < 1:
self._module.fail_json(msg="Discovery check not found: %s" % discovery_check_name)
else:
return discovery_check_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get discovery check '%s': %s" % (discovery_check_name, e))
def get_proxy_by_proxy_name(self, proxy_name):
"""Get proxy by proxy name
Args:
proxy_name: proxy name.
Returns:
proxy matching proxy name
"""
try:
proxy_list = self._zapi.proxy.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'host': [proxy_name]}
})
if len(proxy_list) < 1:
self._module.fail_json(msg="Proxy not found: %s" % proxy_name)
else:
return proxy_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get proxy '%s': %s" % (proxy_name, e))
def get_mediatype_by_mediatype_name(self, mediatype_name):
"""Get mediatype by mediatype name
Args:
mediatype_name: mediatype name
Returns:
mediatype matching mediatype name
"""
try:
if str(mediatype_name).lower() == 'all':
return '0'
mediatype_list = self._zapi.mediatype.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'description': [mediatype_name]}
})
if len(mediatype_list) < 1:
self._module.fail_json(msg="Media type not found: %s" % mediatype_name)
else:
return mediatype_list[0]['mediatypeid']
except Exception as e:
self._module.fail_json(msg="Failed to get mediatype '%s': %s" % (mediatype_name, e))
def get_user_by_user_name(self, user_name):
"""Get user by user name
Args:
user_name: user name
Returns:
user matching user name
"""
try:
user_list = self._zapi.user.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'alias': [user_name]}
})
if len(user_list) < 1:
self._module.fail_json(msg="User not found: %s" % user_name)
else:
return user_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get user '%s': %s" % (user_name, e))
def get_usergroup_by_usergroup_name(self, usergroup_name):
"""Get usergroup by usergroup name
Args:
usergroup_name: usergroup name
Returns:
usergroup matching usergroup name
"""
try:
usergroup_list = self._zapi.usergroup.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'name': [usergroup_name]}
})
if len(usergroup_list) < 1:
self._module.fail_json(msg="User group not found: %s" % usergroup_name)
else:
return usergroup_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get user group '%s': %s" % (usergroup_name, e))
# get script by script name
def get_script_by_script_name(self, script_name):
"""Get script by script name
Args:
script_name: script name
Returns:
script matching script name
"""
try:
if script_name is None:
return {}
script_list = self._zapi.script.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'name': [script_name]}
})
if len(script_list) < 1:
self._module.fail_json(msg="Script not found: %s" % script_name)
else:
return script_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get script '%s': %s" % (script_name, e))
class Action(object):
"""
Restructures the user defined action data to fit the Zabbix API requirements
"""
def __init__(self, module, zbx, zapi_wrapper):
self._module = module
self._zapi = zbx
self._zapi_wrapper = zapi_wrapper
def _construct_parameters(self, **kwargs):
"""Construct parameters.
Args:
**kwargs: Arbitrary keyword parameters.
Returns:
dict: dictionary of specified parameters
"""
_params = {
'name': kwargs['name'],
'eventsource': to_numeric_value([
'trigger',
'discovery',
'auto_registration',
'internal'], kwargs['event_source']),
'esc_period': kwargs.get('esc_period'),
'filter': kwargs['conditions'],
'def_longdata': kwargs['default_message'],
'def_shortdata': kwargs['default_subject'],
'r_longdata': kwargs['recovery_default_message'],
'r_shortdata': kwargs['recovery_default_subject'],
'ack_longdata': kwargs['acknowledge_default_message'],
'ack_shortdata': kwargs['acknowledge_default_subject'],
'operations': kwargs['operations'],
'recovery_operations': kwargs.get('recovery_operations'),
'acknowledge_operations': kwargs.get('acknowledge_operations'),
'status': to_numeric_value([
'enabled',
'disabled'], kwargs['status'])
}
if kwargs['event_source'] == 'trigger':
if float(self._zapi.api_version().rsplit('.', 1)[0]) >= 4.0:
_params['pause_suppressed'] = '1' if kwargs['pause_in_maintenance'] else '0'
else:
_params['maintenance_mode'] = '1' if kwargs['pause_in_maintenance'] else '0'
return _params
def check_difference(self, **kwargs):
"""Check difference between action and user specified parameters.
Args:
**kwargs: Arbitrary keyword parameters.
Returns:
dict: dictionary of differences
"""
existing_action = convert_unicode_to_str(self._zapi_wrapper.check_if_action_exists(kwargs['name'])[0])
parameters = convert_unicode_to_str(self._construct_parameters(**kwargs))
change_parameters = {}
_diff = cleanup_data(compare_dictionaries(parameters, existing_action, change_parameters))
return _diff
def update_action(self, **kwargs):
"""Update action.
Args:
**kwargs: Arbitrary keyword parameters.
Returns:
action: updated action
"""
try:
if self._module.check_mode:
self._module.exit_json(msg="Action would be updated if check mode was not specified: %s" % kwargs, changed=True)
kwargs['actionid'] = kwargs.pop('action_id')
return self._zapi.action.update(kwargs)
except Exception as e:
self._module.fail_json(msg="Failed to update action '%s': %s" % (kwargs['actionid'], e))
def add_action(self, **kwargs):
"""Add action.
Args:
**kwargs: Arbitrary keyword parameters.
Returns:
action: added action
"""
try:
if self._module.check_mode:
self._module.exit_json(msg="Action would be added if check mode was not specified", changed=True)
parameters = self._construct_parameters(**kwargs)
action_list = self._zapi.action.create(parameters)
return action_list['actionids'][0]
except Exception as e:
self._module.fail_json(msg="Failed to create action '%s': %s" % (kwargs['name'], e))
def delete_action(self, action_id):
"""Delete action.
Args:
action_id: Action id
Returns:
action: deleted action
"""
try:
if self._module.check_mode:
self._module.exit_json(msg="Action would be deleted if check mode was not specified", changed=True)
return self._zapi.action.delete([action_id])
except Exception as e:
self._module.fail_json(msg="Failed to delete action '%s': %s" % (action_id, e))
class Operations(object):
"""
Restructures the user defined operation data to fit the Zabbix API requirements
"""
def __init__(self, module, zbx, zapi_wrapper):
self._module = module
# self._zapi = zbx
self._zapi_wrapper = zapi_wrapper
def _construct_operationtype(self, operation):
"""Construct operation type.
Args:
operation: operation to construct
Returns:
str: constructed operation
"""
try:
return to_numeric_value([
"send_message",
"remote_command",
"add_host",
"remove_host",
"add_to_host_group",
"remove_from_host_group",
"link_to_template",
"unlink_from_template",
"enable_host",
"disable_host",
"set_host_inventory_mode"], operation['type']
)
except Exception as e:
self._module.fail_json(msg="Unsupported value '%s' for operation type." % operation['type'])
def _construct_opmessage(self, operation):
"""Construct operation message.
Args:
operation: operation to construct the message
Returns:
dict: constructed operation message
"""
try:
return {
'default_msg': '0' if operation.get('message') is not None or operation.get('subject') is not None else '1',
'mediatypeid': self._zapi_wrapper.get_mediatype_by_mediatype_name(
operation.get('media_type')
) if operation.get('media_type') is not None else '0',
'message': operation.get('message'),
'subject': operation.get('subject'),
}
except Exception as e:
self._module.fail_json(msg="Failed to construct operation message. The error was: %s" % e)
def _construct_opmessage_usr(self, operation):
"""Construct operation message user.
Args:
operation: operation to construct the message user
Returns:
list: constructed operation message user or None if operation not found
"""
if operation.get('send_to_users') is None:
return None
return [{
'userid': self._zapi_wrapper.get_user_by_user_name(_user)['userid']
} for _user in operation.get('send_to_users')]
def _construct_opmessage_grp(self, operation):
"""Construct operation message group.
Args:
operation: operation to construct the message group
Returns:
list: constructed operation message group or None if operation not found
"""
if operation.get('send_to_groups') is None:
return None
return [{
'usrgrpid': self._zapi_wrapper.get_usergroup_by_usergroup_name(_group)['usrgrpid']
} for _group in operation.get('send_to_groups')]
def _construct_opcommand(self, operation):
"""Construct operation command.
Args:
operation: operation to construct command
Returns:
list: constructed operation command
"""
try:
return {
'type': to_numeric_value([
'custom_script',
'ipmi',
'ssh',
'telnet',
'global_script'], operation.get('command_type', 'custom_script')),
'command': operation.get('command'),
'execute_on': to_numeric_value([
'agent',
'server',
'proxy'], operation.get('execute_on', 'server')),
'scriptid': self._zapi_wrapper.get_script_by_script_name(
operation.get('script_name')
).get('scriptid'),
'authtype': to_numeric_value([
'password',
'private_key'
], operation.get('ssh_auth_type', 'password')),
'privatekey': operation.get('ssh_privatekey_file'),
'publickey': operation.get('ssh_publickey_file'),
'username': operation.get('username'),
'password': operation.get('password'),
'port': operation.get('port')
}
except Exception as e:
self._module.fail_json(msg="Failed to construct operation command. The error was: %s" % e)
def _construct_opcommand_hst(self, operation):
"""Construct operation command host.
Args:
operation: operation to construct command host
Returns:
list: constructed operation command host
"""
if operation.get('run_on_hosts') is None:
return None
return [{
'hostid': self._zapi_wrapper.get_host_by_host_name(_host)['hostid']
} if str(_host) != '0' else {'hostid': '0'} for _host in operation.get('run_on_hosts')]
def _construct_opcommand_grp(self, operation):
"""Construct operation command group.
Args:
operation: operation to construct command group
Returns:
list: constructed operation command group
"""
if operation.get('run_on_groups') is None:
return None
return [{
'groupid': self._zapi_wrapper.get_hostgroup_by_hostgroup_name(_group)['groupid']
} for _group in operation.get('run_on_groups')]
def _construct_opgroup(self, operation):
"""Construct operation group.
Args:
operation: operation to construct group
Returns:
list: constructed operation group
"""
return [{
'groupid': self._zapi_wrapper.get_hostgroup_by_hostgroup_name(_group)['groupid']
} for _group in operation.get('host_groups', [])]
def _construct_optemplate(self, operation):
"""Construct operation template.
Args:
operation: operation to construct template
Returns:
list: constructed operation template
"""
return [{
'templateid': self._zapi_wrapper.get_template_by_template_name(_template)['templateid']
} for _template in operation.get('templates', [])]
def _construct_opinventory(self, operation):
"""Construct operation inventory.
Args:
operation: operation to construct inventory
Returns:
dict: constructed operation inventory
"""
return {'inventory_mode': operation.get('inventory')}
def _construct_opconditions(self, operation):
"""Construct operation conditions.
Args:
operation: operation to construct the conditions
Returns:
list: constructed operation conditions
"""
_opcond = operation.get('operation_condition')
if _opcond is not None:
if _opcond == 'acknowledged':
_value = '1'
elif _opcond == 'not_acknowledged':
_value = '0'
return [{
'conditiontype': '14',
'operator': '0',
'value': _value
}]
return []
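# conditiontype '14' is the Zabbix "event acknowledged" operation condition;
# operator '0' means '=' and the value '1'/'0' encodes acknowledged/not acknowledged.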
def construct_the_data(self, operations):
"""Construct the operation data using helper methods.
Args:
operations: list of operations to construct
Returns:
list: constructed operation data
"""
constructed_data = []
for op in operations:
operation_type = self._construct_operationtype(op)
constructed_operation = {
'operationtype': operation_type,
'esc_period': op.get('esc_period'),
'esc_step_from': op.get('esc_step_from'),
'esc_step_to': op.get('esc_step_to')
}
# Send Message type
if constructed_operation['operationtype'] == '0':
constructed_operation['opmessage'] = self._construct_opmessage(op)
constructed_operation['opmessage_usr'] = self._construct_opmessage_usr(op)
constructed_operation['opmessage_grp'] = self._construct_opmessage_grp(op)
constructed_operation['opconditions'] = self._construct_opconditions(op)
# Send Command type
if constructed_operation['operationtype'] == '1':
constructed_operation['opcommand'] = self._construct_opcommand(op)
constructed_operation['opcommand_hst'] = self._construct_opcommand_hst(op)
constructed_operation['opcommand_grp'] = self._construct_opcommand_grp(op)
constructed_operation['opconditions'] = self._construct_opconditions(op)
# Add to/Remove from host group
if constructed_operation['operationtype'] in ('4', '5'):
constructed_operation['opgroup'] = self._construct_opgroup(op)
# Link/Unlink template
if constructed_operation['operationtype'] in ('6', '7'):
constructed_operation['optemplate'] = self._construct_optemplate(op)
# Set inventory mode
if constructed_operation['operationtype'] == '10':
constructed_operation['opinventory'] = self._construct_opinventory(op)
constructed_data.append(constructed_operation)
return cleanup_data(constructed_data)
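# Illustrative (hypothetical) mapping: an operation such as
#   {'type': 'send_message', 'send_to_users': ['Admin']}
# becomes
#   {'operationtype': '0', 'opmessage': {...}, 'opmessage_usr': [{'userid': ...}], 'opconditions': []}
# once cleanup_data() has stripped the None-valued keys.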
class RecoveryOperations(Operations):
"""
Restructures the user defined recovery operations data to fit the Zabbix API requirements
"""
def _construct_operationtype(self, operation):
"""Construct operation type.
Args:
operation: operation to construct type
Returns:
str: constructed operation type
"""
try:
return to_numeric_value([
"send_message",
"remote_command",
None,
None,
None,
None,
None,
None,
None,
None,
None,
"notify_all_involved"], operation['type']
)
except Exception as e:
self._module.fail_json(msg="Unsupported value '%s' for recovery operation type." % operation['type'])
def construct_the_data(self, operations):
"""Construct the recovery operations data using helper methods.
Args:
operations: list of operations to construct
Returns:
list: constructed recovery operations data
"""
constructed_data = []
for op in operations:
operation_type = self._construct_operationtype(op)
constructed_operation = {
'operationtype': operation_type,
}
# Send message / notify all involved
if constructed_operation['operationtype'] in ('0', '11'):
constructed_operation['opmessage'] = self._construct_opmessage(op)
constructed_operation['opmessage_usr'] = self._construct_opmessage_usr(op)
constructed_operation['opmessage_grp'] = self._construct_opmessage_grp(op)
# Send Command type
if constructed_operation['operationtype'] == '1':
constructed_operation['opcommand'] = self._construct_opcommand(op)
constructed_operation['opcommand_hst'] = self._construct_opcommand_hst(op)
constructed_operation['opcommand_grp'] = self._construct_opcommand_grp(op)
constructed_data.append(constructed_operation)
return cleanup_data(constructed_data)
class AcknowledgeOperations(Operations):
"""
Restructures the user defined acknowledge operations data to fit the Zabbix API requirements
"""
def _construct_operationtype(self, operation):
"""Construct operation type.
Args:
operation: operation to construct type
Returns:
str: constructed operation type
"""
try:
return to_numeric_value([
"send_message",
"remote_command",
None,
None,
None,
None,
None,
None,
None,
None,
None,
None,
"notify_all_involved"], operation['type']
)
except Exception as e:
self._module.fail_json(msg="Unsupported value '%s' for acknowledge operation type." % operation['type'])
def construct_the_data(self, operations):
"""Construct the acknowledge operations data using helper methods.
Args:
operations: list of operations to construct
Returns:
list: constructed acknowledge operations data
"""
constructed_data = []
for op in operations:
operation_type = self._construct_operationtype(op)
constructed_operation = {
'operationtype': operation_type,
}
# Send message / notify all involved
if constructed_operation['operationtype'] in ('0', '11'):
constructed_operation['opmessage'] = self._construct_opmessage(op)
constructed_operation['opmessage_usr'] = self._construct_opmessage_usr(op)
constructed_operation['opmessage_grp'] = self._construct_opmessage_grp(op)
# Send Command type
if constructed_operation['operationtype'] == '1':
constructed_operation['opcommand'] = self._construct_opcommand(op)
constructed_operation['opcommand_hst'] = self._construct_opcommand_hst(op)
constructed_operation['opcommand_grp'] = self._construct_opcommand_grp(op)
constructed_data.append(constructed_operation)
return cleanup_data(constructed_data)
class Filter(object):
"""
Restructures the user defined filter conditions to fit the Zabbix API requirements
"""
def __init__(self, module, zbx, zapi_wrapper):
self._module = module
self._zapi = zbx
self._zapi_wrapper = zapi_wrapper
def _construct_evaltype(self, _eval_type, _formula, _conditions):
"""Construct the eval type
Args:
_formula: zabbix condition evaluation formula
_conditions: list of conditions to check
Returns:
dict: constructed acknowledge operations data
"""
if len(_conditions) <= 1:
return {
'evaltype': '0',
'formula': None
}
if _eval_type == 'andor':
return {
'evaltype': '0',
'formula': None
}
if _eval_type == 'and':
return {
'evaltype': '1',
'formula': None
}
if _eval_type == 'or':
return {
'evaltype': '2',
'formula': None
}
if _eval_type == 'custom_expression':
if _formula is not None:
return {
'evaltype': '3',
'formula': _formula
}
else:
self._module.fail_json(msg="'formula' is required when 'eval_type' is set to 'custom_expression'")
if _formula is not None:
return {
'evaltype': '3',
'formula': _formula
}
return {
'evaltype': '0',
'formula': None
}
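# Numeric evaltype values expected by the Zabbix API: 0 = and/or, 1 = and,
# 2 = or, 3 = custom expression (which requires a formula).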
def _construct_conditiontype(self, _condition):
"""Construct the condition type
Args:
_condition: condition to check
Returns:
str: constructed condition type data
"""
try:
return to_numeric_value([
"host_group",
"host",
"trigger",
"trigger_name",
"trigger_severity",
"trigger_value",
"time_period",
"host_ip",
"discovered_service_type",
"discovered_service_port",
"discovery_status",
"uptime_or_downtime_duration",
"received_value",
"host_template",
None,
"application",
"maintenance_status",
None,
"discovery_rule",
"discovery_check",
"proxy",
"discovery_object",
"host_name",
"event_type",
"host_metadata",
"event_tag",
"event_tag_value"], _condition['type']
)
except Exception as e:
self._module.fail_json(msg="Unsupported value '%s' for condition type." % _condition['type'])
def _construct_operator(self, _condition):
"""Construct operator
Args:
_condition: condition to construct
Returns:
str: constructed operator
"""
try:
return to_numeric_value([
"=",
"<>",
"like",
"not like",
"in",
">=",
"<=",
"not in",
"matches",
"does not match",
"Yes",
"No"], _condition['operator']
)
except Exception as e:
self._module.fail_json(msg="Unsupported value '%s' for operator." % _condition['operator'])
def _construct_value(self, conditiontype, value):
"""Construct operator
Args:
conditiontype: type of condition to construct
value: value to construct
Returns:
str: constructed value
"""
try:
# Host group
if conditiontype == '0':
return self._zapi_wrapper.get_hostgroup_by_hostgroup_name(value)['groupid']
# Host
if conditiontype == '1':
return self._zapi_wrapper.get_host_by_host_name(value)['hostid']
# Trigger
if conditiontype == '2':
return self._zapi_wrapper.get_trigger_by_trigger_name(value)['triggerid']
# Trigger name: return as is
# Trigger severity
if conditiontype == '4':
return to_numeric_value([
"not classified",
"information",
"warning",
"average",
"high",
"disaster"], value or "not classified"
)
# Trigger value
if conditiontype == '5':
return to_numeric_value([
"ok",
"problem"], value or "ok"
)
# Time period: return as is
# Host IP: return as is
# Discovered service type
if conditiontype == '8':
return to_numeric_value([
"SSH",
"LDAP",
"SMTP",
"FTP",
"HTTP",
"POP",
"NNTP",
"IMAP",
"TCP",
"Zabbix agent",
"SNMPv1 agent",
"SNMPv2 agent",
"ICMP ping",
"SNMPv3 agent",
"HTTPS",
"Telnet"], value
)
# Discovered service port: return as is
# Discovery status
if conditiontype == '10':
return to_numeric_value([
"up",
"down",
"discovered",
"lost"], value
)
if conditiontype == '13':
return self._zapi_wrapper.get_template_by_template_name(value)['templateid']
if conditiontype == '18':
return self._zapi_wrapper.get_discovery_rule_by_discovery_rule_name(value)['druleid']
if conditiontype == '19':
return self._zapi_wrapper.get_discovery_check_by_discovery_check_name(value)['dcheckid']
if conditiontype == '20':
return self._zapi_wrapper.get_proxy_by_proxy_name(value)['proxyid']
if conditiontype == '21':
return to_numeric_value([
"pchldrfor0",
"host",
"service"], value
)
if conditiontype == '23':
return to_numeric_value([
"item in not supported state",
"item in normal state",
"LLD rule in not supported state",
"LLD rule in normal state",
"trigger in unknown state",
"trigger in normal state"], value
)
return value
except Exception as e:
self._module.fail_json(
msg="""Unsupported value '%s' for specified condition type.
Check out Zabbix API documentation for supported values for
condition type '%s' at
https://www.zabbix.com/documentation/3.4/manual/api/reference/action/object#action_filter_condition""" % (value, conditiontype)
)
def construct_the_data(self, _eval_type, _formula, _conditions):
"""Construct the user defined filter conditions to fit the Zabbix API
requirements operations data using helper methods.
Args:
_eval_type: user-specified evaluation method
_formula: zabbix condition evaluation formula
_conditions: conditions to construct
Returns:
dict: user defined filter conditions
"""
if _conditions is None:
return None
constructed_data = {}
constructed_data['conditions'] = []
for cond in _conditions:
condition_type = self._construct_conditiontype(cond)
constructed_data['conditions'].append({
"conditiontype": condition_type,
"value": self._construct_value(condition_type, cond.get("value")),
"value2": cond.get("value2"),
"formulaid": cond.get("formulaid"),
"operator": self._construct_operator(cond)
})
_constructed_evaltype = self._construct_evaltype(
_eval_type,
_formula,
constructed_data['conditions']
)
constructed_data['evaltype'] = _constructed_evaltype['evaltype']
constructed_data['formula'] = _constructed_evaltype['formula']
return cleanup_data(constructed_data)
def convert_unicode_to_str(data):
"""Converts unicode objects to strings in dictionary
args:
data: unicode object
Returns:
dict: strings in dictionary
"""
if isinstance(data, dict):
return dict(map(convert_unicode_to_str, data.items()))
elif isinstance(data, (list, tuple, set)):
return type(data)(map(convert_unicode_to_str, data))
elif data is None:
return data
else:
return str(data)
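# Illustrative usage:
#   convert_unicode_to_str({u'name': [u'a', None]})  ->  {'name': ['a', None]}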
def to_numeric_value(strs, value):
"""Converts string values to integers
Args:
value: string value
Returns:
int: converted integer
"""
strs = [s.lower() if isinstance(s, str) else s for s in strs]
value = value.lower()
tmp_dict = dict(zip(strs, list(range(len(strs)))))
return str(tmp_dict[value])
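# Illustrative usage: to_numeric_value(['enabled', 'disabled'], 'Disabled') -> '1'
# (matching is case-insensitive; the result is the list index as a string).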
def compare_lists(l1, l2, diff_dict):
"""
Compares l1 and l2 lists and adds the items that are different
to the diff_dict dictionary.
Used in recursion with compare_dictionaries() function.
Args:
l1: first list to compare
l2: second list to compare
diff_dict: list used to accumulate the differences
Returns:
list: items that are different
"""
if len(l1) != len(l2):
diff_dict.append(l1)
return diff_dict
for i, item in enumerate(l1):
if isinstance(item, dict):
diff_dict.insert(i, {})
diff_dict[i] = compare_dictionaries(item, l2[i], diff_dict[i])
else:
if item != l2[i]:
diff_dict.append(item)
while {} in diff_dict:
diff_dict.remove({})
return diff_dict
def compare_dictionaries(d1, d2, diff_dict):
"""
Compares d1 and d2 dictionaries and adds the items that are different
to the diff_dict dictionary.
Used in recursion with compare_lists() function.
Args:
d1: first dictionary to compare
d2: second dictionary to compare
diff_dict: dictionary to store the difference
Returns:
dict: items that are different
"""
for k, v in d1.items():
if k not in d2:
diff_dict[k] = v
continue
if isinstance(v, dict):
diff_dict[k] = {}
compare_dictionaries(v, d2[k], diff_dict[k])
if diff_dict[k] == {}:
del diff_dict[k]
else:
diff_dict[k] = v
elif isinstance(v, list):
diff_dict[k] = []
compare_lists(v, d2[k], diff_dict[k])
if diff_dict[k] == []:
del diff_dict[k]
else:
diff_dict[k] = v
else:
if v != d2[k]:
diff_dict[k] = v
return diff_dict
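# A minimal sketch of the one-way diff semantics (values illustrative):
#   compare_dictionaries({'a': 1, 'b': 2}, {'a': 1, 'b': 3}, {})  ->  {'b': 2}
# Keys missing from d2, or whose values differ from d2, are copied from d1 into diff_dict.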
def cleanup_data(obj):
"""Removes the None values from the object and returns the object
Args:
obj: object to cleanup
Returns:
object: cleaned object
"""
if isinstance(obj, (list, tuple, set)):
return type(obj)(cleanup_data(x) for x in obj if x is not None)
elif isinstance(obj, dict):
return type(obj)((cleanup_data(k), cleanup_data(v))
for k, v in obj.items() if k is not None and v is not None)
else:
return obj
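# Illustrative usage:
#   cleanup_data({'esc_period': None, 'name': 'x', 'ops': [None, 1]})  ->  {'name': 'x', 'ops': [1]}
# None keys/values and None list items are dropped recursively.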
def main():
"""Main ansible module function
"""
module = AnsibleModule(
argument_spec=dict(
server_url=dict(type='str', required=True, aliases=['url']),
login_user=dict(type='str', required=True),
login_password=dict(type='str', required=True, no_log=True),
http_login_user=dict(type='str', required=False, default=None),
http_login_password=dict(type='str', required=False, default=None, no_log=True),
validate_certs=dict(type='bool', required=False, default=True),
esc_period=dict(type='int', required=True),
timeout=dict(type='int', default=10),
name=dict(type='str', required=True),
event_source=dict(type='str', required=True, choices=['trigger', 'discovery', 'auto_registration', 'internal']),
state=dict(type='str', required=False, default='present', choices=['present', 'absent']),
status=dict(type='str', required=False, default='enabled', choices=['enabled', 'disabled']),
pause_in_maintenance=dict(type='bool', required=False, default=True),
default_message=dict(type='str', required=False, default=''),
default_subject=dict(type='str', required=False, default=''),
recovery_default_message=dict(type='str', required=False, default=''),
recovery_default_subject=dict(type='str', required=False, default=''),
acknowledge_default_message=dict(type='str', required=False, default=''),
acknowledge_default_subject=dict(type='str', required=False, default=''),
conditions=dict(
type='list',
required=False,
default=[],
elements='dict',
options=dict(
formulaid=dict(type='str', required=False),
operator=dict(type='str', required=True),
type=dict(type='str', required=True),
value=dict(type='str', required=True),
value2=dict(type='str', required=False)
)
),
formula=dict(type='str', required=False, default=None),
eval_type=dict(type='str', required=False, default=None, choices=['andor', 'and', 'or', 'custom_expression']),
operations=dict(
type='list',
required=False,
default=[],
elements='dict',
options=dict(
type=dict(
type='str',
required=True,
choices=[
'send_message',
'remote_command',
'add_host',
'remove_host',
'add_to_host_group',
'remove_from_host_group',
'link_to_template',
'unlink_from_template',
'enable_host',
'disable_host',
'set_host_inventory_mode',
]
),
esc_period=dict(type='int', required=False),
esc_step_from=dict(type='int', required=False, default=1),
esc_step_to=dict(type='int', required=False, default=1),
operation_condition=dict(
type='str',
required=False,
default=None,
choices=['acknowledged', 'not_acknowledged']
),
# when type is remote_command
command_type=dict(
type='str',
required=False,
choices=[
'custom_script',
'ipmi',
'ssh',
'telnet',
'global_script'
]
),
command=dict(type='str', required=False),
execute_on=dict(
type='str',
required=False,
choices=['agent', 'server', 'proxy']
),
password=dict(type='str', required=False),
port=dict(type='int', required=False),
run_on_groups=dict(type='list', required=False),
run_on_hosts=dict(type='list', required=False),
script_name=dict(type='str', required=False),
ssh_auth_type=dict(
type='str',
required=False,
default='password',
choices=['password', 'public_key']
),
ssh_privatekey_file=dict(type='str', required=False),
ssh_publickey_file=dict(type='str', required=False),
username=dict(type='str', required=False),
# when type is send_message
media_type=dict(type='str', required=False),
subject=dict(type='str', required=False),
message=dict(type='str', required=False),
send_to_groups=dict(type='list', required=False),
send_to_users=dict(type='list', required=False),
# when type is add_to_host_group or remove_from_host_group
host_groups=dict(type='list', required=False),
# when type is set_host_inventory_mode
inventory=dict(type='str', required=False),
# when type is link_to_template or unlink_from_template
templates=dict(type='list', required=False)
),
required_if=[
['type', 'remote_command', ['command_type']],
['type', 'remote_command', ['run_on_groups', 'run_on_hosts'], True],
['command_type', 'custom_script', [
'command',
'execute_on'
]],
['command_type', 'ipmi', ['command']],
['command_type', 'ssh', [
'command',
'password',
'username',
'port',
'ssh_auth_type',
'ssh_privatekey_file',
'ssh_publickey_file'
]],
['command_type', 'telnet', [
'command',
'password',
'username',
'port'
]],
['command_type', 'global_script', ['script_name']],
['type', 'add_to_host_group', ['host_groups']],
['type', 'remove_from_host_group', ['host_groups']],
['type', 'link_to_template', ['templates']],
['type', 'unlink_from_template', ['templates']],
['type', 'set_host_inventory_mode', ['inventory']],
['type', 'send_message', ['send_to_users', 'send_to_groups'], True]
]
),
recovery_operations=dict(
type='list',
required=False,
default=[],
elements='dict',
options=dict(
type=dict(
type='str',
required=True,
choices=[
'send_message',
'remote_command',
'notify_all_involved'
]
),
# when type is remote_command
command_type=dict(
type='str',
required=False,
choices=[
'custom_script',
'ipmi',
'ssh',
'telnet',
'global_script'
]
),
command=dict(type='str', required=False),
execute_on=dict(
type='str',
required=False,
choices=['agent', 'server', 'proxy']
),
password=dict(type='str', required=False),
port=dict(type='int', required=False),
run_on_groups=dict(type='list', required=False),
run_on_hosts=dict(type='list', required=False),
script_name=dict(type='str', required=False),
ssh_auth_type=dict(
type='str',
required=False,
default='password',
choices=['password', 'public_key']
),
ssh_privatekey_file=dict(type='str', required=False),
ssh_publickey_file=dict(type='str', required=False),
username=dict(type='str', required=False),
# when type is send_message
media_type=dict(type='str', required=False),
subject=dict(type='str', required=False),
message=dict(type='str', required=False),
send_to_groups=dict(type='list', required=False),
send_to_users=dict(type='list', required=False),
),
required_if=[
['type', 'remote_command', ['command_type']],
['type', 'remote_command', [
'run_on_groups',
'run_on_hosts'
], True],
['command_type', 'custom_script', [
'command',
'execute_on'
]],
['command_type', 'ipmi', ['command']],
['command_type', 'ssh', [
'command',
'password',
'username',
'port',
'ssh_auth_type',
'ssh_privatekey_file',
'ssh_publickey_file'
]],
['command_type', 'telnet', [
'command',
'password',
'username',
'port'
]],
['command_type', 'global_script', ['script_name']],
['type', 'send_message', ['send_to_users', 'send_to_groups'], True]
]
),
acknowledge_operations=dict(
type='list',
required=False,
default=[],
elements='dict',
options=dict(
type=dict(
type='str',
required=True,
choices=[
'send_message',
'remote_command',
'notify_all_involved'
]
),
# when type is remote_command
command_type=dict(
type='str',
required=False,
choices=[
'custom_script',
'ipmi',
'ssh',
'telnet',
'global_script'
]
),
command=dict(type='str', required=False),
execute_on=dict(
type='str',
required=False,
choices=['agent', 'server', 'proxy']
),
password=dict(type='str', required=False),
port=dict(type='int', required=False),
run_on_groups=dict(type='list', required=False),
run_on_hosts=dict(type='list', required=False),
script_name=dict(type='str', required=False),
ssh_auth_type=dict(
type='str',
required=False,
default='password',
choices=['password', 'public_key']
),
ssh_privatekey_file=dict(type='str', required=False),
ssh_publickey_file=dict(type='str', required=False),
username=dict(type='str', required=False),
# when type is send_message
media_type=dict(type='str', required=False),
subject=dict(type='str', required=False),
message=dict(type='str', required=False),
send_to_groups=dict(type='list', required=False),
send_to_users=dict(type='list', required=False),
),
required_if=[
['type', 'remote_command', ['command_type']],
['type', 'remote_command', [
'run_on_groups',
'run_on_hosts'
], True],
['command_type', 'custom_script', [
'command',
'execute_on'
]],
['command_type', 'ipmi', ['command']],
['command_type', 'ssh', [
'command',
'password',
'username',
'port',
'ssh_auth_type',
'ssh_privatekey_file',
'ssh_publickey_file'
]],
['command_type', 'telnet', [
'command',
'password',
'username',
'port'
]],
['command_type', 'global_script', ['script_name']],
['type', 'send_message', ['send_to_users', 'send_to_groups'], True]
]
)
),
supports_check_mode=True
)
if not HAS_ZABBIX_API:
module.fail_json(msg=missing_required_lib('zabbix-api', url='https://pypi.org/project/zabbix-api/'), exception=ZBX_IMP_ERR)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
http_login_user = module.params['http_login_user']
http_login_password = module.params['http_login_password']
validate_certs = module.params['validate_certs']
timeout = module.params['timeout']
name = module.params['name']
esc_period = module.params['esc_period']
event_source = module.params['event_source']
state = module.params['state']
status = module.params['status']
pause_in_maintenance = module.params['pause_in_maintenance']
default_message = module.params['default_message']
default_subject = module.params['default_subject']
recovery_default_message = module.params['recovery_default_message']
recovery_default_subject = module.params['recovery_default_subject']
acknowledge_default_message = module.params['acknowledge_default_message']
acknowledge_default_subject = module.params['acknowledge_default_subject']
conditions = module.params['conditions']
formula = module.params['formula']
eval_type = module.params['eval_type']
operations = module.params['operations']
recovery_operations = module.params['recovery_operations']
acknowledge_operations = module.params['acknowledge_operations']
try:
zbx = ZabbixAPI(server_url, timeout=timeout, user=http_login_user,
passwd=http_login_password, validate_certs=validate_certs)
zbx.login(login_user, login_password)
atexit.register(zbx.logout)
except Exception as e:
module.fail_json(msg="Failed to connect to Zabbix server: %s" % e)
zapi_wrapper = Zapi(module, zbx)
action = Action(module, zbx, zapi_wrapper)
action_exists = zapi_wrapper.check_if_action_exists(name)
ops = Operations(module, zbx, zapi_wrapper)
recovery_ops = RecoveryOperations(module, zbx, zapi_wrapper)
acknowledge_ops = AcknowledgeOperations(module, zbx, zapi_wrapper)
fltr = Filter(module, zbx, zapi_wrapper)
if action_exists:
action_id = zapi_wrapper.get_action_by_name(name)['actionid']
if state == "absent":
result = action.delete_action(action_id)
module.exit_json(changed=True, msg="Action Deleted: %s, ID: %s" % (name, result))
else:
difference = action.check_difference(
action_id=action_id,
name=name,
event_source=event_source,
esc_period=esc_period,
status=status,
pause_in_maintenance=pause_in_maintenance,
default_message=default_message,
default_subject=default_subject,
recovery_default_message=recovery_default_message,
recovery_default_subject=recovery_default_subject,
acknowledge_default_message=acknowledge_default_message,
acknowledge_default_subject=acknowledge_default_subject,
operations=ops.construct_the_data(operations),
recovery_operations=recovery_ops.construct_the_data(recovery_operations),
acknowledge_operations=acknowledge_ops.construct_the_data(acknowledge_operations),
conditions=fltr.construct_the_data(eval_type, formula, conditions)
)
if difference == {}:
module.exit_json(changed=False, msg="Action is up to date: %s" % (name))
else:
result = action.update_action(
action_id=action_id,
**difference
)
module.exit_json(changed=True, msg="Action Updated: %s, ID: %s" % (name, result))
else:
if state == "absent":
module.exit_json(changed=False)
else:
action_id = action.add_action(
name=name,
event_source=event_source,
esc_period=esc_period,
status=status,
pause_in_maintenance=pause_in_maintenance,
default_message=default_message,
default_subject=default_subject,
recovery_default_message=recovery_default_message,
recovery_default_subject=recovery_default_subject,
acknowledge_default_message=acknowledge_default_message,
acknowledge_default_subject=acknowledge_default_subject,
operations=ops.construct_the_data(operations),
recovery_operations=recovery_ops.construct_the_data(recovery_operations),
acknowledge_operations=acknowledge_ops.construct_the_data(acknowledge_operations),
conditions=fltr.construct_the_data(eval_type, formula, conditions)
)
module.exit_json(changed=True, msg="Action created: %s, ID: %s" % (name, action_id))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,641 |
event_source required for "absent" zabbix actions
|
##### SUMMARY
When specifying only the action name and state "absent", the module fails with "missing required arguments: event_source".
Zabbix action names are unique across types (event sources), thus the event source parameter in this case is not mandated by the Zabbix API.
It would be great to either lift this limitation, or add an example in the documentation explaining why it's there.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
zabbix_action
##### ANSIBLE VERSION
latest
##### CONFIGURATION
Not relevant.
##### OS / ENVIRONMENT
Not relevant.
##### STEPS TO REPRODUCE
```yaml
- name: "Report problems to Zabbix administrators"
state: absent
```
##### EXPECTED RESULTS
Either success, or documented special limitation.
##### ACTUAL RESULTS
```
failed: [host] (item={u'state': u'absent', u'name': u'Report problems to Zabbix administrators'}) => {"ansible_loop_var": "item", "changed": false, "item": {"name": "Report problems to Zabbix administrators", "state": "absent"}, "msg": "missing required arguments: event_source"}
```
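##### POSSIBLE APPROACH
A minimal sketch of one way to lift the limitation, assuming the fix takes the `required_if` route (this is hypothetical and not necessarily the merged diff in the linked PR #63969): declare `event_source` as optional in the argument spec and require it only for `state: present`. Note `esc_period` is also `required=True` in the current spec and would need the same treatment.
```python
# Hypothetical sketch, not the actual patch: event_source becomes conditional,
# so deleting an action needs only its (unique) name.
module = AnsibleModule(
    argument_spec=dict(
        name=dict(type='str', required=True),
        state=dict(type='str', default='present', choices=['present', 'absent']),
        event_source=dict(type='str', required=False,
                          choices=['trigger', 'discovery', 'auto_registration', 'internal']),
        # ... remaining zabbix_action options unchanged ...
    ),
    required_if=[
        ['state', 'present', ['event_source']],
    ],
    supports_check_mode=True,
)
```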
|
https://github.com/ansible/ansible/issues/62641
|
https://github.com/ansible/ansible/pull/63969
|
21c8dae83b832a8abde59e7ba94c74d6c7f8a128
|
0cb19e655c7a6fdf9acbde7d1e8f712dc0f7509d
| 2019-09-20T08:17:43Z |
python
| 2019-11-08T11:15:13Z |
changelogs/fragments/63969-zabbix_action_argsfix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,641 |
event_source required for "absent" zabbix actions
|
##### SUMMARY
When specifying only the action name and state "absent", the module fails with "missing required arguments: event_source".
Zabbix action names are unique across types (event sources), thus the event source parameter in this case is not mandated by the Zabbix API.
It would be great to either lift this limitation, or add an example in the documentation explaining why it's there.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
zabbix_action
##### ANSIBLE VERSION
latest
##### CONFIGURATION
Not relevant.
##### OS / ENVIRONMENT
Not relevant.
##### STEPS TO REPRODUCE
```yaml
- name: "Report problems to Zabbix administrators"
state: absent
```
##### EXPECTED RESULTS
Either success, or documented special limitation.
##### ACTUAL RESULTS
```
failed: [host] (item={u'state': u'absent', u'name': u'Report problems to Zabbix administrators'}) => {"ansible_loop_var": "item", "changed": false, "item": {"name": "Report problems to Zabbix administrators", "state": "absent"}, "msg": "missing required arguments: event_source"}
```
|
https://github.com/ansible/ansible/issues/62641
|
https://github.com/ansible/ansible/pull/63969
|
21c8dae83b832a8abde59e7ba94c74d6c7f8a128
|
0cb19e655c7a6fdf9acbde7d1e8f712dc0f7509d
| 2019-09-20T08:17:43Z |
python
| 2019-11-08T11:15:13Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
No notable changes
Modules
=======
Modules removed
---------------
The following modules no longer exist:
* letsencrypt: use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
The following functionality will be removed in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`openssl_csr <openssl_csr_module>` module's option ``version`` no longer supports values other than ``1`` (the current only standardized CSR version).
* :ref:`docker_container <docker_container_module>`: the ``trust_image_content`` option will be removed. It has always been ignored by the module.
* :ref:`iam_managed_policy <iam_managed_policy_module>`: the ``fail_on_delete`` option will be removed. It has always been ignored by the module.
* :ref:`s3_lifecycle <s3_lifecycle_module>`: the ``requester_pays`` option will be removed. It has always been ignored by the module.
* :ref:`s3_sync <s3_sync_module>`: the ``retries`` option will be removed. It has always been ignored by the module.
* The return values ``err`` and ``out`` of :ref:`docker_stack <docker_stack_module>` have been deprecated. Use ``stdout`` and ``stderr`` from now on instead.
* :ref:`cloudformation <cloudformation_module>`: the ``template_format`` option will be removed. It has been ignored by the module since Ansible 2.3.
* :ref:`data_pipeline <data_pipeline_module>`: the ``version`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_eip <ec2_eip_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.3.
* :ref:`ec2_key <ec2_key_module>`: the ``wait`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_key <ec2_key_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_lc <ec2_lc_module>`: the ``associate_public_ip_address`` option will be removed. It has always been ignored by the module.
The following functionality will change in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`docker_container <docker_container_module>` module has a new option, ``container_default_behavior``, whose default value will change from ``compatibility`` to ``no_defaults``. Set to an explicit value to avoid deprecation warnings.
Noteworthy module changes
-------------------------
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in :ref:`pacman <pacman_module>` module has been removed, you should use ``extra_args=--recursive`` instead.
* :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module no longer requires the VM name, which was a required parameter in releases prior to Ansible 2.10.
Plugins
=======
No notable changes
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,641 |
event_source required for "absent" zabbix actions
|
##### SUMMARY
When specifying only the action name and state "absent", the module fails with "missing required arguments: event_source".
Zabbix action names are unique across types (event sources), thus the event source parameter in this case is not mandated by the Zabbix API.
It would be great to either lift this limitation, or add an example in the documentation explaining why it's there.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
zabbix_action
##### ANSIBLE VERSION
latest
##### CONFIGURATION
Not relevant.
##### OS / ENVIRONMENT
Not relevant.
##### STEPS TO REPRODUCE
```yaml
- name: "Report problems to Zabbix administrators"
state: absent
```
##### EXPECTED RESULTS
Either success, or documented special limitation.
##### ACTUAL RESULTS
```
failed: [host] (item={u'state': u'absent', u'name': u'Report problems to Zabbix administrators'}) => {"ansible_loop_var": "item", "changed": false, "item": {"name": "Report problems to Zabbix administrators", "state": "absent"}, "msg": "missing required arguments: event_source"}
```
|
https://github.com/ansible/ansible/issues/62641
|
https://github.com/ansible/ansible/pull/63969
|
21c8dae83b832a8abde59e7ba94c74d6c7f8a128
|
0cb19e655c7a6fdf9acbde7d1e8f712dc0f7509d
| 2019-09-20T08:17:43Z |
python
| 2019-11-08T11:15:13Z |
lib/ansible/modules/monitoring/zabbix/zabbix_action.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: zabbix_action
short_description: Create/Delete/Update Zabbix actions
version_added: "2.8"
description:
- This module allows you to create, modify and delete Zabbix actions.
author:
- Ruben Tsirunyan (@rubentsirunyan)
- Ruben Harutyunov (@K-DOT)
requirements:
- zabbix-api
options:
name:
description:
- Name of the action
required: true
event_source:
description:
- Type of events that the action will handle.
required: true
choices: ['trigger', 'discovery', 'auto_registration', 'internal']
state:
description:
- State of the action.
- On C(present), it will create an action if it does not exist or update the action if the associated data is different.
- On C(absent), it will remove the action if it exists.
choices: ['present', 'absent']
default: 'present'
status:
description:
- Status of the action.
choices: ['enabled', 'disabled']
default: 'enabled'
pause_in_maintenance:
description:
- Whether to pause escalation during maintenance periods or not.
- Can be used when I(event_source=trigger).
type: 'bool'
default: true
esc_period:
description:
- Default operation step duration. Must be greater than 60 seconds. Accepts seconds, time unit with suffix and user macro.
required: true
conditions:
type: list
description:
- List of dictionaries of conditions to evaluate.
- For more information about suboptions of this option please
check out Zabbix API documentation U(https://www.zabbix.com/documentation/3.4/manual/api/reference/action/object#action_filter_condition)
suboptions:
type:
description: Type (label) of the condition.
choices:
# trigger
- host_group
- host
- trigger
- trigger_name
- trigger_severity
- time_period
- host_template
- application
- maintenance_status
- event_tag
- event_tag_value
# discovery
- host_IP
- discovered_service_type
- discovered_service_port
- discovery_status
- uptime_or_downtime_duration
- received_value
- discovery_rule
- discovery_check
- proxy
- discovery_object
# auto_registration
- proxy
- host_name
- host_metadata
# internal
- host_group
- host
- host_template
- application
- event_type
value:
description:
- Value to compare with.
- When I(type) is set to C(discovery_status), the choices
are C(up), C(down), C(discovered), C(lost).
- When I(type) is set to C(discovery_object), the choices
are C(host), C(service).
- When I(type) is set to C(event_type), the choices
are C(item in not supported state), C(item in normal state),
C(LLD rule in not supported state),
C(LLD rule in normal state), C(trigger in unknown state), C(trigger in normal state).
- When I(type) is set to C(trigger_severity), the choices
are (case-insensitive) C(not classified), C(information), C(warning), C(average), C(high), C(disaster)
irrespective of user-visible names being changed in Zabbix. Defaults to C(not classified) if omitted.
- Besides the above options, this is usually either the name
of the object or a string to compare with.
operator:
description:
- Condition operator.
- When I(type) is set to C(time_period), the choices are C(in), C(not in).
- C(matches), C(does not match), C(Yes) and C(No) condition operators work only with >= Zabbix 4.0
choices:
- '='
- '<>'
- 'like'
- 'not like'
- 'in'
- '>='
- '<='
- 'not in'
- 'matches'
- 'does not match'
- 'Yes'
- 'No'
formulaid:
description:
- Arbitrary unique ID that is used to reference the condition from a custom expression.
- Can only contain upper-case letters.
- Required for custom expression filters.
eval_type:
description:
- Filter condition evaluation method.
- Defaults to C(andor) if there are fewer than 2 conditions or if
I(formula) is not specified.
- Defaults to C(custom_expression) when formula is specified.
choices:
- 'andor'
- 'and'
- 'or'
- 'custom_expression'
formula:
description:
- User-defined expression to be used for evaluating conditions of filters with a custom expression.
- The expression must contain IDs that reference specific filter conditions by its formulaid.
- The IDs used in the expression must exactly match the ones
defined in the filter conditions. No condition can remain unused or omitted.
- Required for custom expression filters.
- Use sequential IDs that start at "A". If non-sequential IDs are used, Zabbix re-indexes them.
This makes each module run notice the difference in IDs and update the action.
default_message:
description:
- Problem message default text.
default_subject:
description:
- Problem message default subject.
recovery_default_message:
description:
- Recovery message text.
- Works only with >= Zabbix 3.2
recovery_default_subject:
description:
- Recovery message subject.
- Works only with >= Zabbix 3.2
acknowledge_default_message:
description:
- Update operation (known as "Acknowledge operation" before Zabbix 4.0) message text.
- Works only with >= Zabbix 3.4
acknowledge_default_subject:
description:
- Update operation (known as "Acknowledge operation" before Zabbix 4.0) message subject.
- Works only with >= Zabbix 3.4
operations:
type: list
description:
- List of action operations
suboptions:
type:
description:
- Type of operation.
choices:
- send_message
- remote_command
- add_host
- remove_host
- add_to_host_group
- remove_from_host_group
- link_to_template
- unlink_from_template
- enable_host
- disable_host
- set_host_inventory_mode
esc_period:
description:
- Duration of an escalation step in seconds.
- Must be greater than 60 seconds.
- Accepts seconds, time unit with suffix and user macro.
- If set to 0 or 0s, the default action escalation period will be used.
default: 0s
esc_step_from:
description:
- Step to start escalation from.
default: 1
esc_step_to:
description:
- Step to end escalation at.
default: 1
send_to_groups:
type: list
description:
- User groups to send messages to.
send_to_users:
type: list
description:
- Users (usernames or aliases) to send messages to.
message:
description:
- Operation message text.
- Will fall back to the text of I(default_message) if neither this nor I(subject) is specified.
subject:
description:
- Operation message subject.
- Will fall back to the text of I(default_subject) if neither this nor I(message) is specified.
media_type:
description:
- Media type that will be used to send the message.
- Set to C(all) for all media types
default: 'all'
operation_condition:
type: 'str'
description:
- The action operation condition object defines a condition that must be met to perform the current operation.
choices:
- acknowledged
- not_acknowledged
host_groups:
type: list
description:
- List of host groups host should be added to.
- Required when I(type=add_to_host_group) or I(type=remove_from_host_group).
templates:
type: list
description:
- List of templates host should be linked to.
- Required when I(type=link_to_template) or I(type=unlink_from_template).
inventory:
description:
- Host inventory mode.
- Required when I(type=set_host_inventory_mode).
command_type:
description:
- Type of operation command.
- Required when I(type=remote_command).
choices:
- custom_script
- ipmi
- ssh
- telnet
- global_script
command:
description:
- Command to run.
- Required when I(type=remote_command) and I(command_type!=global_script).
execute_on:
description:
- Target on which the custom script operation command will be executed.
- Required when I(type=remote_command) and I(command_type=custom_script).
choices:
- agent
- server
- proxy
run_on_groups:
description:
- Host groups to run remote commands on.
- Required when I(type=remote_command) if I(run_on_hosts) is not set.
run_on_hosts:
description:
- Hosts to run remote commands on.
- Required when I(type=remote_command) if I(run_on_groups) is not set.
- If set to 0 the command will be run on the current host.
ssh_auth_type:
description:
- Authentication method used for SSH commands.
- Required when I(type=remote_command) and I(command_type=ssh).
choices:
- password
- public_key
ssh_privatekey_file:
description:
- Name of the private key file used for SSH commands with public key authentication.
- Required when I(type=remote_command) and I(command_type=ssh).
ssh_publickey_file:
description:
- Name of the public key file used for SSH commands with public key authentication.
- Required when I(type=remote_command) and I(command_type=ssh).
username:
description:
- User name used for authentication.
- Required when I(type=remote_command) and I(command_type in [ssh, telnet]).
password:
description:
- Password used for authentication.
- Required when I(type=remote_command) and I(command_type in [ssh, telnet]).
port:
description:
- Port number used for authentication.
- Required when I(type=remote_command) and I(command_type in [ssh, telnet]).
script_name:
description:
- The name of script used for global script commands.
- Required when I(type=remote_command) and I(command_type=global_script).
recovery_operations:
type: list
description:
- List of recovery operations.
- C(Suboptions) are the same as for I(operations).
- Works only with >= Zabbix 3.2
acknowledge_operations:
type: list
description:
- List of acknowledge operations.
- C(Suboptions) are the same as for I(operations).
- Works only with >= Zabbix 3.4
notes:
- Only Zabbix >= 3.0 is supported.
extends_documentation_fragment:
- zabbix
'''
EXAMPLES = '''
# Trigger action with only one condition
- name: Deploy trigger action
zabbix_action:
server_url: "http://zabbix.example.com/zabbix/"
login_user: Admin
login_password: secret
name: "Send alerts to Admin"
event_source: 'trigger'
state: present
status: enabled
esc_period: 60
conditions:
- type: 'trigger_severity'
operator: '>='
value: 'Information'
operations:
- type: send_message
subject: "Something bad is happening"
message: "Come on, guys do something"
media_type: 'Email'
send_to_users:
- 'Admin'
# Trigger action with multiple conditions and operations
- name: Deploy trigger action
zabbix_action:
server_url: "http://zabbix.example.com/zabbix/"
login_user: Admin
login_password: secret
name: "Send alerts to Admin"
event_source: 'trigger'
state: present
status: enabled
esc_period: 60
conditions:
- type: 'trigger_name'
operator: 'like'
value: 'Zabbix agent is unreachable'
formulaid: A
- type: 'trigger_severity'
operator: '>='
value: 'disaster'
formulaid: B
formula: A or B
operations:
- type: send_message
media_type: 'Email'
send_to_users:
- 'Admin'
- type: remote_command
command: 'systemctl restart zabbix-agent'
command_type: custom_script
execute_on: server
run_on_hosts:
- 0
# Trigger action with recovery and acknowledge operations
- name: Deploy trigger action
zabbix_action:
server_url: "http://zabbix.example.com/zabbix/"
login_user: Admin
login_password: secret
name: "Send alerts to Admin"
event_source: 'trigger'
state: present
status: enabled
esc_period: 60
conditions:
- type: 'trigger_severity'
operator: '>='
value: 'Information'
operations:
- type: send_message
subject: "Something bad is happening"
message: "Come on, guys do something"
media_type: 'Email'
send_to_users:
- 'Admin'
recovery_operations:
- type: send_message
subject: "Host is down"
message: "Come on, guys do something"
media_type: 'Email'
send_to_users:
- 'Admin'
acknowledge_operations:
- type: send_message
media_type: 'Email'
send_to_users:
- 'Admin'
'''
RETURN = '''
msg:
description: The result of the operation
returned: success
type: str
sample: 'Action Deleted: Register webservers, ID: 0001'
'''
import atexit
import traceback
try:
from zabbix_api import ZabbixAPI
HAS_ZABBIX_API = True
except ImportError:
ZBX_IMP_ERR = traceback.format_exc()
HAS_ZABBIX_API = False
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class Zapi(object):
"""
A simple wrapper over the Zabbix API
"""
def __init__(self, module, zbx):
self._module = module
self._zapi = zbx
def check_if_action_exists(self, name):
"""Check if action exists.
Args:
name: Name of the action.
Returns:
list: matching actions (empty list if the action does not exist).
"""
try:
_action = self._zapi.action.get({
"selectOperations": "extend",
"selectRecoveryOperations": "extend",
"selectAcknowledgeOperations": "extend",
"selectFilter": "extend",
'selectInventory': 'extend',
'filter': {'name': [name]}
})
if len(_action) > 0:
_action[0]['recovery_operations'] = _action[0].pop('recoveryOperations', [])
_action[0]['acknowledge_operations'] = _action[0].pop('acknowledgeOperations', [])
return _action
except Exception as e:
self._module.fail_json(msg="Failed to check if action '%s' exists: %s" % (name, e))
def get_action_by_name(self, name):
"""Get action by name
Args:
name: Name of the action.
Returns:
dict: Zabbix action
"""
try:
action_list = self._zapi.action.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'name': [name]}
})
if len(action_list) < 1:
self._module.fail_json(msg="Action not found: " % name)
else:
return action_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get ID of '%s': %s" % (name, e))
def get_host_by_host_name(self, host_name):
"""Get host by host name
Args:
host_name: host name.
Returns:
host matching host name
"""
try:
host_list = self._zapi.host.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'host': [host_name]}
})
if len(host_list) < 1:
self._module.fail_json(msg="Host not found: %s" % host_name)
else:
return host_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get host '%s': %s" % (host_name, e))
def get_hostgroup_by_hostgroup_name(self, hostgroup_name):
"""Get host group by host group name
Args:
hostgroup_name: host group name.
Returns:
host group matching host group name
"""
try:
hostgroup_list = self._zapi.hostgroup.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'name': [hostgroup_name]}
})
if len(hostgroup_list) < 1:
self._module.fail_json(msg="Host group not found: %s" % hostgroup_name)
else:
return hostgroup_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get host group '%s': %s" % (hostgroup_name, e))
def get_template_by_template_name(self, template_name):
"""Get template by template name
Args:
template_name: template name.
Returns:
template matching template name
"""
try:
template_list = self._zapi.template.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'host': [template_name]}
})
if len(template_list) < 1:
self._module.fail_json(msg="Template not found: %s" % template_name)
else:
return template_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get template '%s': %s" % (template_name, e))
def get_trigger_by_trigger_name(self, trigger_name):
"""Get trigger by trigger name
Args:
trigger_name: trigger name.
Returns:
trigger matching trigger name
"""
try:
trigger_list = self._zapi.trigger.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'description': [trigger_name]}
})
if len(trigger_list) < 1:
self._module.fail_json(msg="Trigger not found: %s" % trigger_name)
else:
return trigger_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get trigger '%s': %s" % (trigger_name, e))
def get_discovery_rule_by_discovery_rule_name(self, discovery_rule_name):
"""Get discovery rule by discovery rule name
Args:
discovery_rule_name: discovery rule name.
Returns:
discovery rule matching discovery rule name
"""
try:
discovery_rule_list = self._zapi.drule.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'name': [discovery_rule_name]}
})
if len(discovery_rule_list) < 1:
self._module.fail_json(msg="Discovery rule not found: %s" % discovery_rule_name)
else:
return discovery_rule_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get discovery rule '%s': %s" % (discovery_rule_name, e))
def get_discovery_check_by_discovery_check_name(self, discovery_check_name):
"""Get discovery check by discovery check name
Args:
discovery_check_name: discovery check name.
Returns:
discovery check matching discovery check name
"""
try:
discovery_check_list = self._zapi.dcheck.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'name': [discovery_check_name]}
})
if len(discovery_check_list) < 1:
self._module.fail_json(msg="Discovery check not found: %s" % discovery_check_name)
else:
return discovery_check_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get discovery check '%s': %s" % (discovery_check_name, e))
def get_proxy_by_proxy_name(self, proxy_name):
"""Get proxy by proxy name
Args:
proxy_name: proxy name.
Returns:
proxy matching proxy name
"""
try:
proxy_list = self._zapi.proxy.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'host': [proxy_name]}
})
if len(proxy_list) < 1:
self._module.fail_json(msg="Proxy not found: %s" % proxy_name)
else:
return proxy_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get proxy '%s': %s" % (proxy_name, e))
def get_mediatype_by_mediatype_name(self, mediatype_name):
"""Get mediatype by mediatype name
Args:
mediatype_name: mediatype name
Returns:
mediatype matching mediatype name
"""
try:
if str(mediatype_name).lower() == 'all':
return '0'
mediatype_list = self._zapi.mediatype.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'description': [mediatype_name]}
})
if len(mediatype_list) < 1:
self._module.fail_json(msg="Media type not found: %s" % mediatype_name)
else:
return mediatype_list[0]['mediatypeid']
except Exception as e:
self._module.fail_json(msg="Failed to get mediatype '%s': %s" % (mediatype_name, e))
def get_user_by_user_name(self, user_name):
"""Get user by user name
Args:
user_name: user name
Returns:
user matching user name
"""
try:
user_list = self._zapi.user.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'alias': [user_name]}
})
if len(user_list) < 1:
self._module.fail_json(msg="User not found: %s" % user_name)
else:
return user_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get user '%s': %s" % (user_name, e))
def get_usergroup_by_usergroup_name(self, usergroup_name):
"""Get usergroup by usergroup name
Args:
usergroup_name: usergroup name
Returns:
usergroup matching usergroup name
"""
try:
usergroup_list = self._zapi.usergroup.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'name': [usergroup_name]}
})
if len(usergroup_list) < 1:
self._module.fail_json(msg="User group not found: %s" % usergroup_name)
else:
return usergroup_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get user group '%s': %s" % (usergroup_name, e))
# get script by script name
def get_script_by_script_name(self, script_name):
"""Get script by script name
Args:
script_name: script name
Returns:
script matching script name
"""
try:
if script_name is None:
return {}
script_list = self._zapi.script.get({
'output': 'extend',
'selectInventory': 'extend',
'filter': {'name': [script_name]}
})
if len(script_list) < 1:
self._module.fail_json(msg="Script not found: %s" % script_name)
else:
return script_list[0]
except Exception as e:
self._module.fail_json(msg="Failed to get script '%s': %s" % (script_name, e))
class Action(object):
"""
Restructures the user defined action data to fit the Zabbix API requirements
"""
def __init__(self, module, zbx, zapi_wrapper):
self._module = module
self._zapi = zbx
self._zapi_wrapper = zapi_wrapper
def _construct_parameters(self, **kwargs):
"""Construct parameters.
Args:
**kwargs: Arbitrary keyword parameters.
Returns:
dict: dictionary of specified parameters
"""
_params = {
'name': kwargs['name'],
'eventsource': to_numeric_value([
'trigger',
'discovery',
'auto_registration',
'internal'], kwargs['event_source']),
'esc_period': kwargs.get('esc_period'),
'filter': kwargs['conditions'],
'def_longdata': kwargs['default_message'],
'def_shortdata': kwargs['default_subject'],
'r_longdata': kwargs['recovery_default_message'],
'r_shortdata': kwargs['recovery_default_subject'],
'ack_longdata': kwargs['acknowledge_default_message'],
'ack_shortdata': kwargs['acknowledge_default_subject'],
'operations': kwargs['operations'],
'recovery_operations': kwargs.get('recovery_operations'),
'acknowledge_operations': kwargs.get('acknowledge_operations'),
'status': to_numeric_value([
'enabled',
'disabled'], kwargs['status'])
}
if kwargs['event_source'] == 'trigger':
if float(self._zapi.api_version().rsplit('.', 1)[0]) >= 4.0:
_params['pause_suppressed'] = '1' if kwargs['pause_in_maintenance'] else '0'
else:
_params['maintenance_mode'] = '1' if kwargs['pause_in_maintenance'] else '0'
return _params
def check_difference(self, **kwargs):
"""Check difference between action and user specified parameters.
Args:
**kwargs: Arbitrary keyword parameters.
Returns:
dict: dictionary of differences
"""
existing_action = convert_unicode_to_str(self._zapi_wrapper.check_if_action_exists(kwargs['name'])[0])
parameters = convert_unicode_to_str(self._construct_parameters(**kwargs))
change_parameters = {}
_diff = cleanup_data(compare_dictionaries(parameters, existing_action, change_parameters))
return _diff
def update_action(self, **kwargs):
"""Update action.
Args:
**kwargs: Arbitrary keyword parameters.
Returns:
action: updated action
"""
try:
if self._module.check_mode:
self._module.exit_json(msg="Action would be updated if check mode was not specified: %s" % kwargs, changed=True)
kwargs['actionid'] = kwargs.pop('action_id')
return self._zapi.action.update(kwargs)
except Exception as e:
self._module.fail_json(msg="Failed to update action '%s': %s" % (kwargs['actionid'], e))
def add_action(self, **kwargs):
"""Add action.
Args:
**kwargs: Arbitrary keyword parameters.
Returns:
action: added action
"""
try:
if self._module.check_mode:
self._module.exit_json(msg="Action would be added if check mode was not specified", changed=True)
parameters = self._construct_parameters(**kwargs)
action_list = self._zapi.action.create(parameters)
return action_list['actionids'][0]
except Exception as e:
self._module.fail_json(msg="Failed to create action '%s': %s" % (kwargs['name'], e))
def delete_action(self, action_id):
"""Delete action.
Args:
action_id: Action id
Returns:
action: deleted action
"""
try:
if self._module.check_mode:
self._module.exit_json(msg="Action would be deleted if check mode was not specified", changed=True)
return self._zapi.action.delete([action_id])
except Exception as e:
self._module.fail_json(msg="Failed to delete action '%s': %s" % (action_id, e))
class Operations(object):
"""
Restructures the user defined operation data to fit the Zabbix API requirements
"""
def __init__(self, module, zbx, zapi_wrapper):
self._module = module
# self._zapi = zbx
self._zapi_wrapper = zapi_wrapper
def _construct_operationtype(self, operation):
"""Construct operation type.
Args:
operation: operation to construct
Returns:
str: constructed operation
"""
try:
return to_numeric_value([
"send_message",
"remote_command",
"add_host",
"remove_host",
"add_to_host_group",
"remove_from_host_group",
"link_to_template",
"unlink_from_template",
"enable_host",
"disable_host",
"set_host_inventory_mode"], operation['type']
)
except Exception as e:
self._module.fail_json(msg="Unsupported value '%s' for operation type." % operation['type'])
def _construct_opmessage(self, operation):
"""Construct operation message.
Args:
operation: operation to construct the message
Returns:
dict: constructed operation message
"""
try:
return {
'default_msg': '0' if operation.get('message') is not None or operation.get('subject') is not None else '1',
'mediatypeid': self._zapi_wrapper.get_mediatype_by_mediatype_name(
operation.get('media_type')
) if operation.get('media_type') is not None else '0',
'message': operation.get('message'),
'subject': operation.get('subject'),
}
except Exception as e:
self._module.fail_json(msg="Failed to construct operation message. The error was: %s" % e)
def _construct_opmessage_usr(self, operation):
"""Construct operation message user.
Args:
operation: operation to construct the message user
Returns:
list: constructed operation message user or None if operation not found
"""
if operation.get('send_to_users') is None:
return None
return [{
'userid': self._zapi_wrapper.get_user_by_user_name(_user)['userid']
} for _user in operation.get('send_to_users')]
def _construct_opmessage_grp(self, operation):
"""Construct operation message group.
Args:
operation: operation to construct the message group
Returns:
list: constructed operation message group or None if operation not found
"""
if operation.get('send_to_groups') is None:
return None
return [{
'usrgrpid': self._zapi_wrapper.get_usergroup_by_usergroup_name(_group)['usrgrpid']
} for _group in operation.get('send_to_groups')]
def _construct_opcommand(self, operation):
"""Construct operation command.
Args:
operation: operation to construct command
Returns:
list: constructed operation command
"""
try:
return {
'type': to_numeric_value([
'custom_script',
'ipmi',
'ssh',
'telnet',
'global_script'], operation.get('command_type', 'custom_script')),
'command': operation.get('command'),
'execute_on': to_numeric_value([
'agent',
'server',
'proxy'], operation.get('execute_on', 'server')),
'scriptid': self._zapi_wrapper.get_script_by_script_name(
operation.get('script_name')
).get('scriptid'),
'authtype': to_numeric_value([
'password',
'public_key'
], operation.get('ssh_auth_type', 'password')),
'privatekey': operation.get('ssh_privatekey_file'),
'publickey': operation.get('ssh_publickey_file'),
'username': operation.get('username'),
'password': operation.get('password'),
'port': operation.get('port')
}
except Exception as e:
self._module.fail_json(msg="Failed to construct operation command. The error was: %s" % e)
def _construct_opcommand_hst(self, operation):
"""Construct operation command host.
Args:
operation: operation to construct command host
Returns:
list: constructed operation command host
"""
if operation.get('run_on_hosts') is None:
return None
return [{
'hostid': self._zapi_wrapper.get_host_by_host_name(_host)['hostid']
} if str(_host) != '0' else {'hostid': '0'} for _host in operation.get('run_on_hosts')]
def _construct_opcommand_grp(self, operation):
"""Construct operation command group.
Args:
operation: operation to construct command group
Returns:
list: constructed operation command group
"""
if operation.get('run_on_groups') is None:
return None
return [{
'groupid': self._zapi_wrapper.get_hostgroup_by_hostgroup_name(_group)['groupid']
} for _group in operation.get('run_on_groups')]
def _construct_opgroup(self, operation):
"""Construct operation group.
Args:
operation: operation to construct group
Returns:
list: constructed operation group
"""
return [{
'groupid': self._zapi_wrapper.get_hostgroup_by_hostgroup_name(_group)['groupid']
} for _group in operation.get('host_groups', [])]
def _construct_optemplate(self, operation):
"""Construct operation template.
Args:
operation: operation to construct template
Returns:
list: constructed operation template
"""
return [{
'templateid': self._zapi_wrapper.get_template_by_template_name(_template)['templateid']
} for _template in operation.get('templates', [])]
def _construct_opinventory(self, operation):
"""Construct operation inventory.
Args:
operation: operation to construct inventory
Returns:
dict: constructed operation inventory
"""
return {'inventory_mode': operation.get('inventory')}
def _construct_opconditions(self, operation):
"""Construct operation conditions.
Args:
operation: operation to construct the conditions
Returns:
list: constructed operation conditions
"""
_opcond = operation.get('operation_condition')
if _opcond is not None:
if _opcond == 'acknowledged':
_value = '1'
elif _opcond == 'not_acknowledged':
_value = '0'
return [{
'conditiontype': '14',
'operator': '0',
'value': _value
}]
return []
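# Illustration (hypothetical input, not executed by the module itself):
# {'operation_condition': 'acknowledged'} is rendered as
# [{'conditiontype': '14', 'operator': '0', 'value': '1'}] -- conditiontype 14
# is the Zabbix "event acknowledged" operation condition.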
def construct_the_data(self, operations):
"""Construct the operation data using helper methods.
Args:
operations: list of operations to construct
Returns:
list: constructed operation data
"""
constructed_data = []
for op in operations:
operation_type = self._construct_operationtype(op)
constructed_operation = {
'operationtype': operation_type,
'esc_period': op.get('esc_period'),
'esc_step_from': op.get('esc_step_from'),
'esc_step_to': op.get('esc_step_to')
}
# Send Message type
if constructed_operation['operationtype'] == '0':
constructed_operation['opmessage'] = self._construct_opmessage(op)
constructed_operation['opmessage_usr'] = self._construct_opmessage_usr(op)
constructed_operation['opmessage_grp'] = self._construct_opmessage_grp(op)
constructed_operation['opconditions'] = self._construct_opconditions(op)
# Send Command type
if constructed_operation['operationtype'] == '1':
constructed_operation['opcommand'] = self._construct_opcommand(op)
constructed_operation['opcommand_hst'] = self._construct_opcommand_hst(op)
constructed_operation['opcommand_grp'] = self._construct_opcommand_grp(op)
constructed_operation['opconditions'] = self._construct_opconditions(op)
# Add to/Remove from host group
if constructed_operation['operationtype'] in ('4', '5'):
constructed_operation['opgroup'] = self._construct_opgroup(op)
# Link/Unlink template
if constructed_operation['operationtype'] in ('6', '7'):
constructed_operation['optemplate'] = self._construct_optemplate(op)
# Set inventory mode
if constructed_operation['operationtype'] == '10':
constructed_operation['opinventory'] = self._construct_opinventory(op)
constructed_data.append(constructed_operation)
return cleanup_data(constructed_data)
class RecoveryOperations(Operations):
"""
Restructures the user defined recovery operations data to fit the Zabbix API requirements
"""
def _construct_operationtype(self, operation):
"""Construct operation type.
Args:
operation: operation to construct type
Returns:
str: constructed operation type
"""
try:
return to_numeric_value([
"send_message",
"remote_command",
None,
None,
None,
None,
None,
None,
None,
None,
None,
"notify_all_involved"], operation['type']
)
except Exception as e:
self._module.fail_json(msg="Unsupported value '%s' for recovery operation type." % operation['type'])
def construct_the_data(self, operations):
"""Construct the recovery operations data using helper methods.
Args:
operations: list of operations to construct
Returns:
list: constructed recovery operations data
"""
constructed_data = []
for op in operations:
operation_type = self._construct_operationtype(op)
constructed_operation = {
'operationtype': operation_type,
}
# Send Message type
if constructed_operation['operationtype'] in ('0', '11'):
constructed_operation['opmessage'] = self._construct_opmessage(op)
constructed_operation['opmessage_usr'] = self._construct_opmessage_usr(op)
constructed_operation['opmessage_grp'] = self._construct_opmessage_grp(op)
# Send Command type
if constructed_operation['operationtype'] == '1':
constructed_operation['opcommand'] = self._construct_opcommand(op)
constructed_operation['opcommand_hst'] = self._construct_opcommand_hst(op)
constructed_operation['opcommand_grp'] = self._construct_opcommand_grp(op)
constructed_data.append(constructed_operation)
return cleanup_data(constructed_data)
class AcknowledgeOperations(Operations):
"""
Restructures the user defined acknowledge operations data to fit the Zabbix API requirements
"""
def _construct_operationtype(self, operation):
"""Construct operation type.
Args:
operation: operation to construct type
Returns:
str: constructed operation type
"""
try:
return to_numeric_value([
"send_message",
"remote_command",
None,
None,
None,
None,
None,
None,
None,
None,
None,
None,
"notify_all_involved"], operation['type']
)
except Exception as e:
self._module.fail_json(msg="Unsupported value '%s' for acknowledge operation type." % operation['type'])
def construct_the_data(self, operations):
"""Construct the acknowledge operations data using helper methods.
Args:
operations: list of operations to construct
Returns:
list: constructed acknowledge operations data
"""
constructed_data = []
for op in operations:
operation_type = self._construct_operationtype(op)
constructed_operation = {
'operationtype': operation_type,
}
# Send Message type
if constructed_operation['operationtype'] in ('0', '11'):
constructed_operation['opmessage'] = self._construct_opmessage(op)
constructed_operation['opmessage_usr'] = self._construct_opmessage_usr(op)
constructed_operation['opmessage_grp'] = self._construct_opmessage_grp(op)
# Send Command type
if constructed_operation['operationtype'] == '1':
constructed_operation['opcommand'] = self._construct_opcommand(op)
constructed_operation['opcommand_hst'] = self._construct_opcommand_hst(op)
constructed_operation['opcommand_grp'] = self._construct_opcommand_grp(op)
constructed_data.append(constructed_operation)
return cleanup_data(constructed_data)
class Filter(object):
"""
Restructures the user defined filter conditions to fit the Zabbix API requirements
"""
def __init__(self, module, zbx, zapi_wrapper):
self._module = module
self._zapi = zbx
self._zapi_wrapper = zapi_wrapper
def _construct_evaltype(self, _eval_type, _formula, _conditions):
"""Construct the eval type
Args:
_eval_type: filter condition evaluation method
_formula: zabbix condition evaluation formula
_conditions: list of conditions to check
Returns:
dict: constructed evaltype and formula
"""
if len(_conditions) <= 1:
return {
'evaltype': '0',
'formula': None
}
if _eval_type == 'andor':
return {
'evaltype': '0',
'formula': None
}
if _eval_type == 'and':
return {
'evaltype': '1',
'formula': None
}
if _eval_type == 'or':
return {
'evaltype': '2',
'formula': None
}
if _eval_type == 'custom_expression':
if _formula is not None:
return {
'evaltype': '3',
'formula': _formula
}
else:
self._module.fail_json(msg="'formula' is required when 'eval_type' is set to 'custom_expression'")
if _formula is not None:
return {
'evaltype': '3',
'formula': _formula
}
return {
'evaltype': '0',
'formula': None
}
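# Illustration (hypothetical inputs): eval_type='custom_expression' with
# formula='A or B' yields {'evaltype': '3', 'formula': 'A or B'}; with a single
# condition the method always falls back to {'evaltype': '0', 'formula': None}.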
def _construct_conditiontype(self, _condition):
"""Construct the condition type
Args:
_condition: condition to check
Returns:
str: constructed condition type data
"""
try:
return to_numeric_value([
"host_group",
"host",
"trigger",
"trigger_name",
"trigger_severity",
"trigger_value",
"time_period",
"host_ip",
"discovered_service_type",
"discovered_service_port",
"discovery_status",
"uptime_or_downtime_duration",
"received_value",
"host_template",
None,
"application",
"maintenance_status",
None,
"discovery_rule",
"discovery_check",
"proxy",
"discovery_object",
"host_name",
"event_type",
"host_metadata",
"event_tag",
"event_tag_value"], _condition['type']
)
except Exception as e:
self._module.fail_json(msg="Unsupported value '%s' for condition type." % _condition['type'])
def _construct_operator(self, _condition):
"""Construct operator
Args:
_condition: condition to construct
Returns:
str: constructed operator
"""
try:
return to_numeric_value([
"=",
"<>",
"like",
"not like",
"in",
">=",
"<=",
"not in",
"matches",
"does not match",
"Yes",
"No"], _condition['operator']
)
except Exception as e:
self._module.fail_json(msg="Unsupported value '%s' for operator." % _condition['operator'])
def _construct_value(self, conditiontype, value):
"""Construct operator
Args:
conditiontype: type of condition to construct
value: value to construct
Returns:
str: constructed value
"""
try:
# Host group
if conditiontype == '0':
return self._zapi_wrapper.get_hostgroup_by_hostgroup_name(value)['groupid']
# Host
if conditiontype == '1':
return self._zapi_wrapper.get_host_by_host_name(value)['hostid']
# Trigger
if conditiontype == '2':
return self._zapi_wrapper.get_trigger_by_trigger_name(value)['triggerid']
# Trigger name: return as is
# Trigger severity
if conditiontype == '4':
return to_numeric_value([
"not classified",
"information",
"warning",
"average",
"high",
"disaster"], value or "not classified"
)
# Trigger value
if conditiontype == '5':
return to_numeric_value([
"ok",
"problem"], value or "ok"
)
# Time period: return as is
# Host IP: return as is
# Discovered service type
if conditiontype == '8':
return to_numeric_value([
"SSH",
"LDAP",
"SMTP",
"FTP",
"HTTP",
"POP",
"NNTP",
"IMAP",
"TCP",
"Zabbix agent",
"SNMPv1 agent",
"SNMPv2 agent",
"ICMP ping",
"SNMPv3 agent",
"HTTPS",
"Telnet"], value
)
# Discovered service port: return as is
# Discovery status
if conditiontype == '10':
return to_numeric_value([
"up",
"down",
"discovered",
"lost"], value
)
if conditiontype == '13':
return self._zapi_wrapper.get_template_by_template_name(value)['templateid']
if conditiontype == '18':
return self._zapi_wrapper.get_discovery_rule_by_discovery_rule_name(value)['druleid']
if conditiontype == '19':
return self._zapi_wrapper.get_discovery_check_by_discovery_check_name(value)['dcheckid']
if conditiontype == '20':
return self._zapi_wrapper.get_proxy_by_proxy_name(value)['proxyid']
if conditiontype == '21':
return to_numeric_value([
"pchldrfor0",
"host",
"service"], value
)
if conditiontype == '23':
return to_numeric_value([
"item in not supported state",
"item in normal state",
"LLD rule in not supported state",
"LLD rule in normal state",
"trigger in unknown state",
"trigger in normal state"], value
)
return value
except Exception as e:
self._module.fail_json(
msg="""Unsupported value '%s' for specified condition type.
Check out Zabbix API documentation for supported values for
condition type '%s' at
https://www.zabbix.com/documentation/3.4/manual/api/reference/action/object#action_filter_condition""" % (value, conditiontype)
)
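# Illustration (hypothetical call): _construct_value('4', 'High') returns '4',
# since trigger severities map positionally: not classified=0, information=1,
# warning=2, average=3, high=4, disaster=5.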
def construct_the_data(self, _eval_type, _formula, _conditions):
"""Construct the user defined filter conditions to fit the Zabbix API
requirements operations data using helper methods.
Args:
_eval_type: filter condition evaluation method
_formula: zabbix condition evaluation formula
_conditions: conditions to construct
Returns:
dict: user defined filter conditions
"""
if _conditions is None:
return None
constructed_data = {}
constructed_data['conditions'] = []
for cond in _conditions:
condition_type = self._construct_conditiontype(cond)
constructed_data['conditions'].append({
"conditiontype": condition_type,
"value": self._construct_value(condition_type, cond.get("value")),
"value2": cond.get("value2"),
"formulaid": cond.get("formulaid"),
"operator": self._construct_operator(cond)
})
_constructed_evaltype = self._construct_evaltype(
_eval_type,
_formula,
constructed_data['conditions']
)
constructed_data['evaltype'] = _constructed_evaltype['evaltype']
constructed_data['formula'] = _constructed_evaltype['formula']
return cleanup_data(constructed_data)
def convert_unicode_to_str(data):
"""Converts unicode objects to strings in dictionary
args:
data: unicode object
Returns:
dict: strings in dictionary
"""
if isinstance(data, dict):
return dict(map(convert_unicode_to_str, data.items()))
elif isinstance(data, (list, tuple, set)):
return type(data)(map(convert_unicode_to_str, data))
elif data is None:
return data
else:
return str(data)
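# Illustration (hypothetical call): on Python 2,
# convert_unicode_to_str({u'name': u'Send alerts'}) returns {'name': 'Send alerts'};
# every nested key and value is passed through str().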
def to_numeric_value(strs, value):
"""Converts string values to integers
Args:
value: string value
Returns:
int: converted integer
"""
strs = [s.lower() if isinstance(s, str) else s for s in strs]
value = value.lower()
tmp_dict = dict(zip(strs, list(range(len(strs)))))
return str(tmp_dict[value])
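# Illustration (hypothetical call):
# to_numeric_value(['trigger', 'discovery', 'auto_registration', 'internal'], 'Discovery')
# returns '1' -- lookup is case-insensitive and the index comes back as a string.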
def compare_lists(l1, l2, diff_dict):
"""
Compares l1 and l2 lists and adds the items that are different
to the diff_dict dictionary.
Used in recursion with compare_dictionaries() function.
Args:
l1: first list to compare
l2: second list to compare
diff_dict: dictionary to store the difference
Returns:
list: items that are different
"""
if len(l1) != len(l2):
diff_dict.append(l1)
return diff_dict
for i, item in enumerate(l1):
if isinstance(item, dict):
diff_dict.insert(i, {})
diff_dict[i] = compare_dictionaries(item, l2[i], diff_dict[i])
else:
if item != l2[i]:
diff_dict.append(item)
while {} in diff_dict:
diff_dict.remove({})
return diff_dict
def compare_dictionaries(d1, d2, diff_dict):
"""
Compares d1 and d2 dictionaries and adds the items that are different
to the diff_dict dictionary.
Used in recursion with compare_lists() function.
Args:
d1: first dictionary to compare
d2: second dictionary to compare
diff_dict: dictionary to store the difference
Returns:
dict: items that are different
"""
for k, v in d1.items():
if k not in d2:
diff_dict[k] = v
continue
if isinstance(v, dict):
diff_dict[k] = {}
compare_dictionaries(v, d2[k], diff_dict[k])
if diff_dict[k] == {}:
del diff_dict[k]
else:
diff_dict[k] = v
elif isinstance(v, list):
diff_dict[k] = []
compare_lists(v, d2[k], diff_dict[k])
if diff_dict[k] == []:
del diff_dict[k]
else:
diff_dict[k] = v
else:
if v != d2[k]:
diff_dict[k] = v
return diff_dict
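# Illustrative example, not part of the original module: only differing keys
# are reported, e.g.
# compare_dictionaries({'a': 1, 'b': 2}, {'a': 1, 'b': 3}, {}) returns {'b': 2}.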
def cleanup_data(obj):
"""Removes the None values from the object and returns the object
Args:
obj: object to cleanup
Returns:
object: cleaned object
"""
if isinstance(obj, (list, tuple, set)):
return type(obj)(cleanup_data(x) for x in obj if x is not None)
elif isinstance(obj, dict):
return type(obj)((cleanup_data(k), cleanup_data(v))
for k, v in obj.items() if k is not None and v is not None)
else:
return obj
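# Illustrative example, not part of the original module: None entries are
# stripped recursively, e.g. cleanup_data({'a': 1, 'b': None}) returns {'a': 1}
# and cleanup_data([1, None, 2]) returns [1, 2].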
def main():
"""Main ansible module function
"""
module = AnsibleModule(
argument_spec=dict(
server_url=dict(type='str', required=True, aliases=['url']),
login_user=dict(type='str', required=True),
login_password=dict(type='str', required=True, no_log=True),
http_login_user=dict(type='str', required=False, default=None),
http_login_password=dict(type='str', required=False, default=None, no_log=True),
validate_certs=dict(type='bool', required=False, default=True),
esc_period=dict(type='int', required=True),
timeout=dict(type='int', default=10),
name=dict(type='str', required=True),
event_source=dict(type='str', required=True, choices=['trigger', 'discovery', 'auto_registration', 'internal']),
state=dict(type='str', required=False, default='present', choices=['present', 'absent']),
status=dict(type='str', required=False, default='enabled', choices=['enabled', 'disabled']),
pause_in_maintenance=dict(type='bool', required=False, default=True),
default_message=dict(type='str', required=False, default=''),
default_subject=dict(type='str', required=False, default=''),
recovery_default_message=dict(type='str', required=False, default=''),
recovery_default_subject=dict(type='str', required=False, default=''),
acknowledge_default_message=dict(type='str', required=False, default=''),
acknowledge_default_subject=dict(type='str', required=False, default=''),
conditions=dict(
type='list',
required=False,
default=[],
elements='dict',
options=dict(
formulaid=dict(type='str', required=False),
operator=dict(type='str', required=True),
type=dict(type='str', required=True),
value=dict(type='str', required=True),
value2=dict(type='str', required=False)
)
),
formula=dict(type='str', required=False, default=None),
eval_type=dict(type='str', required=False, default=None, choices=['andor', 'and', 'or', 'custom_expression']),
operations=dict(
type='list',
required=False,
default=[],
elements='dict',
options=dict(
type=dict(
type='str',
required=True,
choices=[
'send_message',
'remote_command',
'add_host',
'remove_host',
'add_to_host_group',
'remove_from_host_group',
'link_to_template',
'unlink_from_template',
'enable_host',
'disable_host',
'set_host_inventory_mode',
]
),
esc_period=dict(type='int', required=False),
esc_step_from=dict(type='int', required=False, default=1),
esc_step_to=dict(type='int', required=False, default=1),
operation_condition=dict(
type='str',
required=False,
default=None,
choices=['acknowledged', 'not_acknowledged']
),
# when type is remote_command
command_type=dict(
type='str',
required=False,
choices=[
'custom_script',
'ipmi',
'ssh',
'telnet',
'global_script'
]
),
command=dict(type='str', required=False),
execute_on=dict(
type='str',
required=False,
choices=['agent', 'server', 'proxy']
),
password=dict(type='str', required=False),
port=dict(type='int', required=False),
run_on_groups=dict(type='list', required=False),
run_on_hosts=dict(type='list', required=False),
script_name=dict(type='str', required=False),
ssh_auth_type=dict(
type='str',
required=False,
default='password',
choices=['password', 'public_key']
),
ssh_privatekey_file=dict(type='str', required=False),
ssh_publickey_file=dict(type='str', required=False),
username=dict(type='str', required=False),
# when type is send_message
media_type=dict(type='str', required=False),
subject=dict(type='str', required=False),
message=dict(type='str', required=False),
send_to_groups=dict(type='list', required=False),
send_to_users=dict(type='list', required=False),
# when type is add_to_host_group or remove_from_host_group
host_groups=dict(type='list', required=False),
# when type is set_host_inventory_mode
inventory=dict(type='str', required=False),
# when type is link_to_template or unlink_from_template
templates=dict(type='list', required=False)
),
required_if=[
['type', 'remote_command', ['command_type']],
['type', 'remote_command', ['run_on_groups', 'run_on_hosts'], True],
['command_type', 'custom_script', [
'command',
'execute_on'
]],
['command_type', 'ipmi', ['command']],
['command_type', 'ssh', [
'command',
'password',
'username',
'port',
'ssh_auth_type',
'ssh_privatekey_file',
'ssh_publickey_file'
]],
['command_type', 'telnet', [
'command',
'password',
'username',
'port'
]],
['command_type', 'global_script', ['script_name']],
['type', 'add_to_host_group', ['host_groups']],
['type', 'remove_from_host_group', ['host_groups']],
['type', 'link_to_template', ['templates']],
['type', 'unlink_from_template', ['templates']],
['type', 'set_host_inventory_mode', ['inventory']],
['type', 'send_message', ['send_to_users', 'send_to_groups'], True]
]
),
recovery_operations=dict(
type='list',
required=False,
default=[],
elements='dict',
options=dict(
type=dict(
type='str',
required=True,
choices=[
'send_message',
'remote_command',
'notify_all_involved'
]
),
# when type is remote_command
command_type=dict(
type='str',
required=False,
choices=[
'custom_script',
'ipmi',
'ssh',
'telnet',
'global_script'
]
),
command=dict(type='str', required=False),
execute_on=dict(
type='str',
required=False,
choices=['agent', 'server', 'proxy']
),
password=dict(type='str', required=False),
port=dict(type='int', required=False),
run_on_groups=dict(type='list', required=False),
run_on_hosts=dict(type='list', required=False),
script_name=dict(type='str', required=False),
ssh_auth_type=dict(
type='str',
required=False,
default='password',
choices=['password', 'public_key']
),
ssh_privatekey_file=dict(type='str', required=False),
ssh_publickey_file=dict(type='str', required=False),
username=dict(type='str', required=False),
# when type is send_message
media_type=dict(type='str', required=False),
subject=dict(type='str', required=False),
message=dict(type='str', required=False),
send_to_groups=dict(type='list', required=False),
send_to_users=dict(type='list', required=False),
),
required_if=[
['type', 'remote_command', ['command_type']],
['type', 'remote_command', [
'run_on_groups',
'run_on_hosts'
], True],
['command_type', 'custom_script', [
'command',
'execute_on'
]],
['command_type', 'ipmi', ['command']],
['command_type', 'ssh', [
'command',
'password',
'username',
'port',
'ssh_auth_type',
'ssh_privatekey_file',
'ssh_publickey_file'
]],
['command_type', 'telnet', [
'command',
'password',
'username',
'port'
]],
['command_type', 'global_script', ['script_name']],
['type', 'send_message', ['send_to_users', 'send_to_groups'], True]
]
),
acknowledge_operations=dict(
type='list',
required=False,
default=[],
elements='dict',
options=dict(
type=dict(
type='str',
required=True,
choices=[
'send_message',
'remote_command',
'notify_all_involved'
]
),
# when type is remote_command
command_type=dict(
type='str',
required=False,
choices=[
'custom_script',
'ipmi',
'ssh',
'telnet',
'global_script'
]
),
command=dict(type='str', required=False),
execute_on=dict(
type='str',
required=False,
choices=['agent', 'server', 'proxy']
),
password=dict(type='str', required=False),
port=dict(type='int', required=False),
run_on_groups=dict(type='list', required=False),
run_on_hosts=dict(type='list', required=False),
script_name=dict(type='str', required=False),
ssh_auth_type=dict(
type='str',
required=False,
default='password',
choices=['password', 'public_key']
),
ssh_privatekey_file=dict(type='str', required=False),
ssh_publickey_file=dict(type='str', required=False),
username=dict(type='str', required=False),
# when type is send_message
media_type=dict(type='str', required=False),
subject=dict(type='str', required=False),
message=dict(type='str', required=False),
send_to_groups=dict(type='list', required=False),
send_to_users=dict(type='list', required=False),
),
required_if=[
['type', 'remote_command', ['command_type']],
['type', 'remote_command', [
'run_on_groups',
'run_on_hosts'
], True],
['command_type', 'custom_script', [
'command',
'execute_on'
]],
['command_type', 'ipmi', ['command']],
['command_type', 'ssh', [
'command',
'password',
'username',
'port',
'ssh_auth_type',
'ssh_privatekey_file',
'ssh_publickey_file'
]],
['command_type', 'telnet', [
'command',
'password',
'username',
'port'
]],
['command_type', 'global_script', ['script_name']],
['type', 'send_message', ['send_to_users', 'send_to_groups'], True]
]
)
),
supports_check_mode=True
)
if not HAS_ZABBIX_API:
module.fail_json(msg=missing_required_lib('zabbix-api', url='https://pypi.org/project/zabbix-api/'), exception=ZBX_IMP_ERR)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
http_login_user = module.params['http_login_user']
http_login_password = module.params['http_login_password']
validate_certs = module.params['validate_certs']
timeout = module.params['timeout']
name = module.params['name']
esc_period = module.params['esc_period']
event_source = module.params['event_source']
state = module.params['state']
status = module.params['status']
pause_in_maintenance = module.params['pause_in_maintenance']
default_message = module.params['default_message']
default_subject = module.params['default_subject']
recovery_default_message = module.params['recovery_default_message']
recovery_default_subject = module.params['recovery_default_subject']
acknowledge_default_message = module.params['acknowledge_default_message']
acknowledge_default_subject = module.params['acknowledge_default_subject']
conditions = module.params['conditions']
formula = module.params['formula']
eval_type = module.params['eval_type']
operations = module.params['operations']
recovery_operations = module.params['recovery_operations']
acknowledge_operations = module.params['acknowledge_operations']
try:
zbx = ZabbixAPI(server_url, timeout=timeout, user=http_login_user,
passwd=http_login_password, validate_certs=validate_certs)
zbx.login(login_user, login_password)
atexit.register(zbx.logout)
except Exception as e:
module.fail_json(msg="Failed to connect to Zabbix server: %s" % e)
zapi_wrapper = Zapi(module, zbx)
action = Action(module, zbx, zapi_wrapper)
action_exists = zapi_wrapper.check_if_action_exists(name)
ops = Operations(module, zbx, zapi_wrapper)
recovery_ops = RecoveryOperations(module, zbx, zapi_wrapper)
acknowledge_ops = AcknowledgeOperations(module, zbx, zapi_wrapper)
fltr = Filter(module, zbx, zapi_wrapper)
if action_exists:
action_id = zapi_wrapper.get_action_by_name(name)['actionid']
if state == "absent":
result = action.delete_action(action_id)
module.exit_json(changed=True, msg="Action Deleted: %s, ID: %s" % (name, result))
else:
difference = action.check_difference(
action_id=action_id,
name=name,
event_source=event_source,
esc_period=esc_period,
status=status,
pause_in_maintenance=pause_in_maintenance,
default_message=default_message,
default_subject=default_subject,
recovery_default_message=recovery_default_message,
recovery_default_subject=recovery_default_subject,
acknowledge_default_message=acknowledge_default_message,
acknowledge_default_subject=acknowledge_default_subject,
operations=ops.construct_the_data(operations),
recovery_operations=recovery_ops.construct_the_data(recovery_operations),
acknowledge_operations=acknowledge_ops.construct_the_data(acknowledge_operations),
conditions=fltr.construct_the_data(eval_type, formula, conditions)
)
if difference == {}:
module.exit_json(changed=False, msg="Action is up to date: %s" % (name))
else:
result = action.update_action(
action_id=action_id,
**difference
)
module.exit_json(changed=True, msg="Action Updated: %s, ID: %s" % (name, result))
else:
if state == "absent":
module.exit_json(changed=False)
else:
action_id = action.add_action(
name=name,
event_source=event_source,
esc_period=esc_period,
status=status,
pause_in_maintenance=pause_in_maintenance,
default_message=default_message,
default_subject=default_subject,
recovery_default_message=recovery_default_message,
recovery_default_subject=recovery_default_subject,
acknowledge_default_message=acknowledge_default_message,
acknowledge_default_subject=acknowledge_default_subject,
operations=ops.construct_the_data(operations),
recovery_operations=recovery_ops.construct_the_data(recovery_operations),
acknowledge_operations=acknowledge_ops.construct_the_data(acknowledge_operations),
conditions=fltr.construct_the_data(eval_type, formula, conditions)
)
module.exit_json(changed=True, msg="Action created: %s, ID: %s" % (name, action_id))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,534 |
route53_info uses type string for <=2.9 and type int for 2.10 and later
|
##### SUMMARY
The task-level parameter `max_items` for the route53_info module uses a string for Ansible 2.9 and older, but for the development branch (2.10) and newer it uses an integer (type int).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
route53_info
##### ANSIBLE VERSION
```paste below
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/jenkins/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /root/.venv/lib/python2.7/site-packages/ansible
executable location = /root/.venv/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
N/A, happening on multiple systems
##### OS / ENVIRONMENT
N/A, happening on multiple systems
##### STEPS TO REPRODUCE
For the parameter `max_items` just send a string on the development branch, here is what I do in workshops today.
```yaml
- name: GRAB ROUTE53 INFORMATION
route53_info:
type: A
query: record_sets
hosted_zone_id: "{{AWSINFO.zone_id}}"
start_record_name: "student1.{{ec2_name_prefix|lower}}.{{workshop_dns_zone}}"
max_items: "{{student_total|string}}"
register: record_sets
```
##### EXPECTED RESULTS
Unsure; I can switch, but this will break many users' Ansible Playbooks. We should accept a string and convert it to int.
##### ACTUAL RESULTS
```paste below
fatal: [localhost]: FAILED! => changed=false
msg: |-
Parameter validation failed:
Invalid type for parameter MaxItems, value: 2, type: <type 'int'>, valid types: <type 'basestring'>
```
for internal red hat folks the QE tests are happening here: http://jenkins.ansible.eng.rdu2.redhat.com/blue/organizations/jenkins/tower-qe-compatibility-pipepline/detail/tower-qe-compatibility-pipepline/567/pipeline
|
https://github.com/ansible/ansible/issues/64534
|
https://github.com/ansible/ansible/pull/64617
|
84bffff96a5e3b81a8caaaaca1da5d589cb80e82
|
4e7779030d86b8a36d0944eef1a00fe29e1f2064
| 2019-11-06T21:15:37Z |
python
| 2019-11-08T20:06:30Z |
lib/ansible/modules/cloud/amazon/route53_info.py
|
#!/usr/bin/python
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
module: route53_info
short_description: Retrieves route53 details using AWS methods
description:
- Gets various details related to Route53 zone, record set or health check details.
- This module was called C(route53_facts) before Ansible 2.9. The usage did not change.
version_added: "2.0"
options:
query:
description:
- Specifies the query action to take.
required: True
choices: [
'change',
'checker_ip_range',
'health_check',
'hosted_zone',
'record_sets',
'reusable_delegation_set',
]
type: str
change_id:
description:
- The ID of the change batch request.
- The value that you specify here is the value that
ChangeResourceRecordSets returned in the Id element
when you submitted the request.
- Required if I(query=change).
required: false
type: str
hosted_zone_id:
description:
- The Hosted Zone ID of the DNS zone.
- Required if I(query) is set to I(hosted_zone) and I(hosted_zone_method) is set to I(details).
- Required if I(query) is set to I(record_sets).
required: false
type: str
max_items:
description:
- Maximum number of items to return for various get/list requests.
required: false
type: int
next_marker:
description:
- "Some requests such as list_command: hosted_zones will return a maximum
number of entries - e.g. 100 or the number specified by I(max_items).
If the number of entries exceeds this maximum, another request can be sent
using the NextMarker entry from the first response to get the next page
of results."
required: false
type: int
delegation_set_id:
description:
- The DNS Zone delegation set ID.
required: false
type: str
start_record_name:
description:
- "The first name in the lexicographic ordering of domain names that you want
the list_command: record_sets to start listing from."
required: false
type: str
type:
description:
- The type of DNS record.
required: false
choices: [ 'A', 'CNAME', 'MX', 'AAAA', 'TXT', 'PTR', 'SRV', 'SPF', 'CAA', 'NS' ]
type: str
dns_name:
description:
- The first name in the lexicographic ordering of domain names that you want
the list_command to start listing from.
required: false
type: str
resource_id:
description:
- The ID/s of the specified resource/s.
- Required if I(query=health_check) and I(health_check_method=tags).
- Required if I(query=hosted_zone) and I(hosted_zone_method=tags).
required: false
aliases: ['resource_ids']
type: list
elements: str
health_check_id:
description:
- The ID of the health check.
- Required if C(query) is set to C(health_check) and
C(health_check_method) is set to C(details) or C(status) or C(failure_reason).
required: false
type: str
hosted_zone_method:
description:
- "This is used in conjunction with query: hosted_zone.
It allows for listing details, counts or tags of various
hosted zone details."
required: false
choices: [
'details',
'list',
'list_by_name',
'count',
'tags',
]
default: 'list'
type: str
health_check_method:
description:
- "This is used in conjunction with query: health_check.
It allows for listing details, counts or tags of various
health check details."
required: false
choices: [
'list',
'details',
'status',
'failure_reason',
'count',
'tags',
]
default: 'list'
type: str
author: Karen Cheng (@Etherdaemon)
extends_documentation_fragment:
- aws
- ec2
'''
EXAMPLES = '''
# Simple example of listing all hosted zones
- name: List all hosted zones
route53_info:
query: hosted_zone
register: hosted_zones
# Getting a count of hosted zones
- name: Return a count of all hosted zones
route53_info:
query: hosted_zone
hosted_zone_method: count
register: hosted_zone_count
- name: List the first 20 resource record sets in a given hosted zone
route53_info:
profile: account_name
query: record_sets
hosted_zone_id: ZZZ1111112222
max_items: 20
register: record_sets
- name: List first 20 health checks
route53_info:
query: health_check
health_check_method: list
max_items: 20
register: health_checks
- name: Get health check last failure_reason
route53_info:
query: health_check
health_check_method: failure_reason
health_check_id: 00000000-1111-2222-3333-12345678abcd
register: health_check_failure_reason
- name: Retrieve reusable delegation set details
route53_info:
query: reusable_delegation_set
delegation_set_id: delegation id
register: delegation_sets
- name: setup of example for using next_marker
route53_info:
query: hosted_zone
max_items: 1
register: first_info
- name: example for using next_marker
route53_info:
query: hosted_zone
next_marker: "{{ first_info.NextMarker }}"
max_items: 1
when: "{{ 'NextMarker' in first_info }}"
- name: retrieve host entries starting with host1.workshop.test.io
block:
- name: grab zone id
route53_zone:
zone: "test.io"
register: AWSINFO
- name: grab Route53 record information
route53_info:
type: A
query: record_sets
hosted_zone_id: "{{ AWSINFO.zone_id }}"
start_record_name: "host1.workshop.test.io"
register: RECORDS
'''
try:
import boto
import botocore
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
try:
import boto3
HAS_BOTO3 = True
except ImportError:
HAS_BOTO3 = False
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ec2 import boto3_conn, ec2_argument_spec, get_aws_connection_info
from ansible.module_utils._text import to_native
def get_hosted_zone(client, module):
params = dict()
if module.params.get('hosted_zone_id'):
params['Id'] = module.params.get('hosted_zone_id')
else:
module.fail_json(msg="Hosted Zone Id is required")
return client.get_hosted_zone(**params)
def reusable_delegation_set_details(client, module):
params = dict()
if not module.params.get('delegation_set_id'):
if module.params.get('max_items'):
params['MaxItems'] = module.params.get('max_items')
if module.params.get('next_marker'):
params['Marker'] = module.params.get('next_marker')
results = client.list_reusable_delegation_sets(**params)
else:
params['DelegationSetId'] = module.params.get('delegation_set_id')
results = client.get_reusable_delegation_set(**params)
return results
def list_hosted_zones(client, module):
params = dict()
if module.params.get('max_items'):
params['MaxItems'] = module.params.get('max_items')
if module.params.get('next_marker'):
params['Marker'] = module.params.get('next_marker')
if module.params.get('delegation_set_id'):
params['DelegationSetId'] = module.params.get('delegation_set_id')
return client.list_hosted_zones(**params)
def list_hosted_zones_by_name(client, module):
params = dict()
if module.params.get('hosted_zone_id'):
params['HostedZoneId'] = module.params.get('hosted_zone_id')
if module.params.get('dns_name'):
params['DNSName'] = module.params.get('dns_name')
if module.params.get('max_items'):
params['MaxItems'] = module.params.get('max_items')
return client.list_hosted_zones_by_name(**params)
def change_details(client, module):
params = dict()
if module.params.get('change_id'):
params['Id'] = module.params.get('change_id')
else:
module.fail_json(msg="change_id is required")
results = client.get_change(**params)
return results
def checker_ip_range_details(client, module):
return client.get_checker_ip_ranges()
def get_count(client, module):
if module.params.get('query') == 'health_check':
results = client.get_health_check_count()
else:
results = client.get_hosted_zone_count()
return results
def get_health_check(client, module):
params = dict()
if not module.params.get('health_check_id'):
module.fail_json(msg="health_check_id is required")
else:
params['HealthCheckId'] = module.params.get('health_check_id')
if module.params.get('health_check_method') == 'details':
results = client.get_health_check(**params)
elif module.params.get('health_check_method') == 'failure_reason':
results = client.get_health_check_last_failure_reason(**params)
elif module.params.get('health_check_method') == 'status':
results = client.get_health_check_status(**params)
return results
def get_resource_tags(client, module):
params = dict()
if module.params.get('resource_id'):
params['ResourceIds'] = module.params.get('resource_id')
else:
module.fail_json(msg="resource_id or resource_ids is required")
if module.params.get('query') == 'health_check':
params['ResourceType'] = 'healthcheck'
else:
params['ResourceType'] = 'hostedzone'
return client.list_tags_for_resources(**params)
def list_health_checks(client, module):
params = dict()
if module.params.get('max_items'):
params['MaxItems'] = module.params.get('max_items')
if module.params.get('next_marker'):
params['Marker'] = module.params.get('next_marker')
return client.list_health_checks(**params)
def record_sets_details(client, module):
params = dict()
if module.params.get('hosted_zone_id'):
params['HostedZoneId'] = module.params.get('hosted_zone_id')
else:
module.fail_json(msg="Hosted Zone Id is required")
if module.params.get('max_items'):
params['MaxItems'] = module.params.get('max_items')
if module.params.get('start_record_name'):
params['StartRecordName'] = module.params.get('start_record_name')
if module.params.get('type') and not module.params.get('start_record_name'):
module.fail_json(msg="start_record_name must be specified if type is set")
elif module.params.get('type'):
params['StartRecordType'] = module.params.get('type')
return client.list_resource_record_sets(**params)
def health_check_details(client, module):
health_check_invocations = {
'list': list_health_checks,
'details': get_health_check,
'status': get_health_check,
'failure_reason': get_health_check,
'count': get_count,
'tags': get_resource_tags,
}
results = health_check_invocations[module.params.get('health_check_method')](client, module)
return results
def hosted_zone_details(client, module):
hosted_zone_invocations = {
'details': get_hosted_zone,
'list': list_hosted_zones,
'list_by_name': list_hosted_zones_by_name,
'count': get_count,
'tags': get_resource_tags,
}
results = hosted_zone_invocations[module.params.get('hosted_zone_method')](client, module)
return results
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(dict(
query=dict(choices=[
'change',
'checker_ip_range',
'health_check',
'hosted_zone',
'record_sets',
'reusable_delegation_set',
], required=True),
change_id=dict(),
hosted_zone_id=dict(),
max_items=dict(type='int'),
next_marker=dict(type='int'),
delegation_set_id=dict(),
start_record_name=dict(),
type=dict(choices=[
'A', 'CNAME', 'MX', 'AAAA', 'TXT', 'PTR', 'SRV', 'SPF', 'CAA', 'NS'
]),
dns_name=dict(),
resource_id=dict(type='list', aliases=['resource_ids']),
health_check_id=dict(),
hosted_zone_method=dict(choices=[
'details',
'list',
'list_by_name',
'count',
'tags'
], default='list'),
health_check_method=dict(choices=[
'list',
'details',
'status',
'failure_reason',
'count',
'tags',
], default='list'),
)
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
mutually_exclusive=[
['hosted_zone_method', 'health_check_method'],
],
)
if module._name == 'route53_facts':
module.deprecate("The 'route53_facts' module has been renamed to 'route53_info'", version='2.13')
# Validate Requirements
if not (HAS_BOTO or HAS_BOTO3):
module.fail_json(msg='json and boto/boto3 is required.')
region, ec2_url, aws_connect_kwargs = get_aws_connection_info(module, boto3=True)
route53 = boto3_conn(module, conn_type='client', resource='route53', region=region, endpoint=ec2_url, **aws_connect_kwargs)
invocations = {
'change': change_details,
'checker_ip_range': checker_ip_range_details,
'health_check': health_check_details,
'hosted_zone': hosted_zone_details,
'record_sets': record_sets_details,
'reusable_delegation_set': reusable_delegation_set_details,
}
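# Illustrative note, not part of the original module: the dispatch table above
# maps the 'query' parameter to a handler, so query=hosted_zone results in
# hosted_zone_details(route53, module) being called below.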
results = dict(changed=False)
try:
results = invocations[module.params.get('query')](route53, module)
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json(msg=to_native(e))
module.exit_json(**results)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,534 |
route53_info uses type string for <=2.9 and type int for 2.10 and later
|
##### SUMMARY
The task-level parameter `max_items` for the route53_info module uses a string for Ansible 2.9 and older, but for the development branch (2.10) and newer it uses an integer (type int).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
route53_info
##### ANSIBLE VERSION
```paste below
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/jenkins/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /root/.venv/lib/python2.7/site-packages/ansible
executable location = /root/.venv/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
N/A, happening on multiple systems
##### OS / ENVIRONMENT
N/A, happening on multiple systems
##### STEPS TO REPRODUCE
For the parameter `max_items` just send a string on the development branch, here is what I do in workshops today.
```yaml
- name: GRAB ROUTE53 INFORMATION
route53_info:
type: A
query: record_sets
hosted_zone_id: "{{AWSINFO.zone_id}}"
start_record_name: "student1.{{ec2_name_prefix|lower}}.{{workshop_dns_zone}}"
max_items: "{{student_total|string}}"
register: record_sets
```
##### EXPECTED RESULTS
Unsure; I can switch, but this will break many users' Ansible Playbooks. We should accept a string and convert it to int.
##### ACTUAL RESULTS
```paste below
fatal: [localhost]: FAILED! => changed=false
msg: |-
Parameter validation failed:
Invalid type for parameter MaxItems, value: 2, type: <type 'int'>, valid types: <type 'basestring'>
```
for internal red hat folks the QE tests are happening here: http://jenkins.ansible.eng.rdu2.redhat.com/blue/organizations/jenkins/tower-qe-compatibility-pipepline/detail/tower-qe-compatibility-pipepline/567/pipeline
|
https://github.com/ansible/ansible/issues/64534
|
https://github.com/ansible/ansible/pull/64617
|
84bffff96a5e3b81a8caaaaca1da5d589cb80e82
|
4e7779030d86b8a36d0944eef1a00fe29e1f2064
| 2019-11-06T21:15:37Z |
python
| 2019-11-08T20:06:30Z |
test/integration/targets/route53/aliases
|
cloud/aws
shippable/aws/group2
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,534 |
route53_info uses type string for <=2.9 and type int for 2.10 and later
|
##### SUMMARY
The task-level parameter `max_items` for the route53_info module uses a string for Ansible 2.9 and older, but for the development branch (2.10) and newer it uses an integer (type int).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
route53_info
##### ANSIBLE VERSION
```paste below
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/jenkins/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /root/.venv/lib/python2.7/site-packages/ansible
executable location = /root/.venv/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
N/A, happening on multiple systems
##### OS / ENVIRONMENT
N/A, happening on multiple systems
##### STEPS TO REPRODUCE
For the parameter `max_items` just send a string on the development branch, here is what I do in workshops today.
```yaml
- name: GRAB ROUTE53 INFORMATION
route53_info:
type: A
query: record_sets
hosted_zone_id: "{{AWSINFO.zone_id}}"
start_record_name: "student1.{{ec2_name_prefix|lower}}.{{workshop_dns_zone}}"
max_items: "{{student_total|string}}"
register: record_sets
```
##### EXPECTED RESULTS
Unsure; I can switch, but this will break many users' Ansible Playbooks. We should accept a string and convert it to int.
##### ACTUAL RESULTS
```paste below
fatal: [localhost]: FAILED! => changed=false
msg: |-
Parameter validation failed:
Invalid type for parameter MaxItems, value: 2, type: <type 'int'>, valid types: <type 'basestring'>
```
for internal red hat folks the QE tests are happening here: http://jenkins.ansible.eng.rdu2.redhat.com/blue/organizations/jenkins/tower-qe-compatibility-pipepline/detail/tower-qe-compatibility-pipepline/567/pipeline
|
https://github.com/ansible/ansible/issues/64534
|
https://github.com/ansible/ansible/pull/64617
|
84bffff96a5e3b81a8caaaaca1da5d589cb80e82
|
4e7779030d86b8a36d0944eef1a00fe29e1f2064
| 2019-11-06T21:15:37Z |
python
| 2019-11-08T20:06:30Z |
test/integration/targets/route53/tasks/main.yml
|
---
# tasks file for Route53 integration tests
- set_fact:
zone_one: '{{ resource_prefix | replace("-", "") }}.one.fakeansible.com.'
zone_two: '{{ resource_prefix | replace("-", "") }}.two.fakeansible.com.'
- debug: msg='Set zones {{ zone_one }} and {{ zone_two }}'
- name: Test basics (new zone, A and AAAA records)
module_defaults:
group/aws:
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
security_token: "{{ security_token }}"
region: "{{ aws_region }}"
route53:
region: null
block:
- route53_zone:
zone: '{{ zone_one }}'
comment: Created in Ansible test {{ resource_prefix }}
register: z1
- debug: msg='TODO write tests'
- debug: var=z1
- name: Create A record using zone fqdn
route53:
state: present
zone: '{{ zone_one }}'
record: 'qdn_test.{{ zone_one }}'
type: A
value: 1.2.3.4
register: qdn
- assert:
that:
- qdn is not failed
- qdn is changed
- name: Create same A record using zone non-qualified domain
route53:
state: present
zone: '{{ zone_one[:-1] }}'
record: 'qdn_test.{{ zone_one[:-1] }}'
type: A
value: 1.2.3.4
register: non_qdn
- assert:
that:
- non_qdn is not failed
- non_qdn is not changed
- name: Create A record using zone ID
route53:
state: present
hosted_zone_id: '{{ z1.zone_id }}'
record: 'zid_test.{{ zone_one }}'
type: A
value: 1.2.3.4
register: zid
- assert:
that:
- zid is not failed
- zid is changed
- name: Create a multi-value A record with values in different order
route53:
state: present
zone: '{{ zone_one }}'
record: 'order_test.{{ zone_one }}'
type: A
value:
- 4.5.6.7
- 1.2.3.4
register: mv_a_record
- assert:
that:
- mv_a_record is not failed
- mv_a_record is changed
- name: Create same multi-value A record with values in different order
route53:
state: present
zone: '{{ zone_one }}'
record: 'order_test.{{ zone_one }}'
type: A
value:
- 4.5.6.7
- 1.2.3.4
register: mv_a_record
- assert:
that:
- mv_a_record is not failed
- mv_a_record is not changed
- name: Remove a member from multi-value A record with values in different order
route53:
state: present
zone: '{{ zone_one }}'
record: 'order_test.{{ zone_one }}'
type: A
value:
- 4.5.6.7
register: del_a_record
ignore_errors: true
- name: This should fail, because `overwrite` is false
assert:
that:
- del_a_record is failed
- name: Remove a member from multi-value A record with values in different order
route53:
state: present
zone: '{{ zone_one }}'
record: 'order_test.{{ zone_one }}'
overwrite: true
type: A
value:
- 4.5.6.7
register: del_a_record
ignore_errors: true
- name: This should succeed, because `overwrite` is true
assert:
that:
- del_a_record is not failed
- del_a_record is changed
- name: Create a LetsEncrypt CAA record
route53:
state: present
zone: '{{ zone_one }}'
record: '{{ zone_one }}'
type: CAA
value:
- 0 issue "letsencrypt.org;"
- 0 issuewild "letsencrypt.org;"
overwrite: true
register: caa
- assert:
that:
- caa is not failed
- caa is changed
- name: Re-create the same LetsEncrypt CAA record
route53:
state: present
zone: '{{ zone_one }}'
record: '{{ zone_one }}'
type: CAA
value:
- 0 issue "letsencrypt.org;"
- 0 issuewild "letsencrypt.org;"
overwrite: true
register: caa
- assert:
that:
- caa is not failed
- caa is not changed
- name: Re-create the same LetsEncrypt CAA record in opposite-order
route53:
state: present
zone: '{{ zone_one }}'
record: '{{ zone_one }}'
type: CAA
value:
- 0 issuewild "letsencrypt.org;"
- 0 issue "letsencrypt.org;"
overwrite: true
register: caa
- name: This should not be changed, as CAA records are not order sensitive
assert:
that:
- caa is not failed
- caa is not changed
always:
- route53_info:
query: record_sets
hosted_zone_id: '{{ z1.zone_id }}'
register: z1_records
- debug: var=z1_records
- name: Loop over A/AAAA/CNAME records and delete them
route53:
state: absent
zone: '{{ zone_one }}'
record: '{{ item.Name }}'
type: '{{ item.Type }}'
value: '{{ item.ResourceRecords | map(attribute="Value") | join(",") }}'
loop: '{{ z1_records.ResourceRecordSets | selectattr("Type", "in", ["A", "AAAA", "CNAME", "CAA"]) | list }}'
- name: Delete test zone one '{{ zone_one }}'
route53_zone:
state: absent
zone: '{{ zone_one }}'
register: delete_one
ignore_errors: yes
retries: 10
until: delete_one is not failed
- name: Delete test zone two '{{ zone_two }}'
route53_zone:
state: absent
zone: '{{ zone_two }}'
register: delete_two
ignore_errors: yes
retries: 10
until: delete_two is not failed
when: false
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,016 |
Update to Debian 10 broke WinRM Kerberos connection
|
##### SUMMARY
After updating from Debian 9 to Debian 10, my previously working setup for connecting to Windows machines using WinRM and Kerberos broke. The issue boiled down to upgrading pykerberos as suggested [here](https://groups.google.com/forum/?hl=nl&fromgroups#!topic/ansible-project/KfCkdsI0g2g) and upgrading to the package versions pywinrm 0.3.0, requests-kerberos 0.12.0, pykerberos 1.2.1, requests-ntlm 1.1.0 and pyOpenSSL 19.0.0. Some of this was suggested [here](https://access.redhat.com/solutions/3486461).
This now works again. I can confirm that the same setup also works on CentOS 7.7.
##### ISSUE TYPE
- Should mention the updates (the pip commands) in the documentation, since this really wasn't that obvious from an administrator's perspective.
##### COMPONENT NAME
ansible/docs/docsite/rst/user_guide/windows_winrm.rst
##### ANSIBLE VERSION
```
ansible 2.7.7
config file = /home/<user>/ansible/ansible.cfg
configured module search path = ['/home/<user>/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0]
```
##### ADDITIONAL INFORMATION
```
<HOST> | UNREACHABLE! => {
"changed": false,
"msg": "kerberos: Bad HTTP response returned from server. Code 500",
"unreachable": true
}
```
I tended to get an error like this:
```
/usr/lib/python3/dist-packages/winrm/transport.py:308: UserWarning: Function <function HTTPKerberosAuth.__init__ at <SOME HEX ADDRESS>> does not contain optional arg send_cbt, check installed version with pip list
% (str(function), name))
```
##### OS / ENVIRONMENT
Debian 10
|
https://github.com/ansible/ansible/issues/63016
|
https://github.com/ansible/ansible/pull/64478
|
6f197880ce83d43158dfb1bba1eb82f4b4e9689e
|
74b0760cb4df7437879b6d7228d2875cee558cb7
| 2019-10-01T16:03:46Z |
python
| 2019-11-08T21:11:18Z |
docs/docsite/rst/user_guide/windows_winrm.rst
|
.. _windows_winrm:
Windows Remote Management
=========================
Unlike Linux/Unix hosts, which use SSH by default, Windows hosts are
configured with WinRM. This topic covers how to configure and use WinRM with Ansible.
.. contents:: Topics
:local:
What is WinRM?
``````````````
WinRM is a management protocol used by Windows to remotely communicate with
another server. It is a SOAP-based protocol that communicates over HTTP/HTTPS, and is
included in all recent Windows operating systems. Since Windows
Server 2012, WinRM has been enabled by default, but in most cases extra
configuration is required to use WinRM with Ansible.
Ansible uses the `pywinrm <https://github.com/diyan/pywinrm>`_ package to
communicate with Windows servers over WinRM. It is not installed by default
with the Ansible package, but can be installed by running the following:
.. code-block:: shell
pip install "pywinrm>=0.3.0"
.. Note:: on distributions with multiple python versions, use pip2 or pip2.x,
where x matches the python minor version Ansible is running under.
Authentication Options
``````````````````````
When connecting to a Windows host, there are several different options that can be used
when authenticating with an account. The authentication type may be set on inventory
hosts or groups with the ``ansible_winrm_transport`` variable.
The following matrix is a high level overview of the options:
+-------------+----------------+---------------------------+-----------------------+-----------------+
| Option | Local Accounts | Active Directory Accounts | Credential Delegation | HTTP Encryption |
+=============+================+===========================+=======================+=================+
| Basic | Yes | No | No | No |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| Certificate | Yes | No | No | No |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| Kerberos | No | Yes | Yes | Yes |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| NTLM | Yes | Yes | No | Yes |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| CredSSP | Yes | Yes | Yes | Yes |
+-------------+----------------+---------------------------+-----------------------+-----------------+
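For example, to pin a group of hosts to NTLM, the transport can be set next to
the other connection variables. This is a minimal sketch (the group file name
is only an example):
.. code-block:: yaml+jinja
# group_vars/windows.yml
ansible_connection: winrm
ansible_winrm_transport: ntlm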
Basic
-----
Basic authentication is one of the simplest authentication options to use, but is
also the most insecure. This is because the username and password are simply
base64 encoded, and if a secure channel is not in use (e.g., HTTPS) then it can be
decoded by anyone. Basic authentication can only be used for local accounts (not domain accounts).
The following example shows host vars configured for basic authentication:
.. code-block:: yaml+jinja
ansible_user: LocalUsername
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: basic
Basic authentication is not enabled by default on a Windows host but can be
enabled by running the following in PowerShell::
Set-Item -Path WSMan:\localhost\Service\Auth\Basic -Value $true
Certificate
-----------
Certificate authentication uses certificates as keys similar to SSH key
pairs, but the file format and key generation process is different.
The following example shows host vars configured for certificate authentication:
.. code-block:: yaml+jinja
ansible_connection: winrm
ansible_winrm_cert_pem: /path/to/certificate/public/key.pem
ansible_winrm_cert_key_pem: /path/to/certificate/private/key.pem
ansible_winrm_transport: certificate
Certificate authentication is not enabled by default on a Windows host but can
be enabled by running the following in PowerShell::
Set-Item -Path WSMan:\localhost\Service\Auth\Certificate -Value $true
.. Note:: Encrypted private keys cannot be used as the urllib3 library that
is used by Ansible for WinRM does not support this functionality.
Generate a Certificate
++++++++++++++++++++++
A certificate must be generated before it can be mapped to a local user.
This can be done using one of the following methods:
* OpenSSL
* PowerShell, using the ``New-SelfSignedCertificate`` cmdlet
* Active Directory Certificate Services
Active Directory Certificate Services is beyond the scope of this documentation but may be
the best option to use when running in a domain environment. For more information,
see the `Active Directory Certificate Services documentation <https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc732625(v=ws.11)>`_.
.. Note:: Using the PowerShell cmdlet ``New-SelfSignedCertificate`` to generate
a certificate for authentication only works when being generated from a
Windows 10 or Windows Server 2012 R2 host or later. OpenSSL is still required to
extract the private key from the PFX certificate to a PEM file for Ansible
to use.
To generate a certificate with ``OpenSSL``:
.. code-block:: shell
# Set the name of the local user that will have the key mapped to
USERNAME="username"
cat > openssl.conf << EOL
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req_client]
extendedKeyUsage = clientAuth
subjectAltName = otherName:1.3.6.1.4.1.311.20.2.3;UTF8:$USERNAME@localhost
EOL
export OPENSSL_CONF=openssl.conf
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -out cert.pem -outform PEM -keyout cert_key.pem -subj "/CN=$USERNAME" -extensions v3_req_client
rm openssl.conf
To generate a certificate with ``New-SelfSignedCertificate``:
.. code-block:: powershell
# Set the name of the local user that will have the key mapped
$username = "username"
$output_path = "C:\temp"
# Instead of generating a file, the cert will be added to the personal
# LocalComputer folder in the certificate store
$cert = New-SelfSignedCertificate -Type Custom `
-Subject "CN=$username" `
-TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2","2.5.29.17={text}upn=$username@localhost") `
-KeyUsage DigitalSignature,KeyEncipherment `
-KeyAlgorithm RSA `
-KeyLength 2048
# Export the public key
$pem_output = @()
$pem_output += "-----BEGIN CERTIFICATE-----"
$pem_output += [System.Convert]::ToBase64String($cert.RawData) -replace ".{64}", "$&`n"
$pem_output += "-----END CERTIFICATE-----"
[System.IO.File]::WriteAllLines("$output_path\cert.pem", $pem_output)
# Export the private key in a PFX file
[System.IO.File]::WriteAllBytes("$output_path\cert.pfx", $cert.Export("Pfx"))
.. Note:: To convert the PFX file to a private key that pywinrm can use, run
the following command with OpenSSL
``openssl pkcs12 -in cert.pfx -nocerts -nodes -out cert_key.pem -passin pass: -passout pass:``
Import a Certificate to the Certificate Store
+++++++++++++++++++++++++++++++++++++++++++++
Once a certificate has been generated, the issuing certificate needs to be
imported into the ``Trusted Root Certificate Authorities`` of the
``LocalMachine`` store, and the client certificate public key must be present
in the ``Trusted People`` folder of the ``LocalMachine`` store. For this example,
both the issuing certificate and public key are the same.
The following example shows how to import the issuing certificate:
.. code-block:: powershell
$cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2
$cert.Import("cert.pem")
$store_name = [System.Security.Cryptography.X509Certificates.StoreName]::Root
$store_location = [System.Security.Cryptography.X509Certificates.StoreLocation]::LocalMachine
$store = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $store_name, $store_location
$store.Open("MaxAllowed")
$store.Add($cert)
$store.Close()
.. Note:: If using ADCS to generate the certificate, then the issuing
certificate will already be imported and this step can be skipped.
The code to import the client certificate public key is:
.. code-block:: powershell
$cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2
$cert.Import("cert.pem")
$store_name = [System.Security.Cryptography.X509Certificates.StoreName]::TrustedPeople
$store_location = [System.Security.Cryptography.X509Certificates.StoreLocation]::LocalMachine
$store = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $store_name, $store_location
$store.Open("MaxAllowed")
$store.Add($cert)
$store.Close()
Mapping a Certificate to an Account
+++++++++++++++++++++++++++++++++++
Once the certificate has been imported, map it to the local user account::
$username = "username"
$password = ConvertTo-SecureString -String "password" -AsPlainText -Force
$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
# This is the issuer thumbprint which in the case of a self generated cert
# is the public key thumbprint, additional logic may be required for other
# scenarios
$thumbprint = (Get-ChildItem -Path cert:\LocalMachine\root | Where-Object { $_.Subject -eq "CN=$username" }).Thumbprint
New-Item -Path WSMan:\localhost\ClientCertificate `
-Subject "$username@localhost" `
-URI * `
-Issuer $thumbprint `
-Credential $credential `
-Force
Once this is complete, the hostvar ``ansible_winrm_cert_pem`` should be set to
the path of the public key and the ``ansible_winrm_cert_key_pem`` variable should be set to
the path of the private key.
NTLM
----
NTLM is an older authentication mechanism used by Microsoft that can support
both local and domain accounts. NTLM is enabled by default on the WinRM
service, so no setup is required before using it.
NTLM is the easiest authentication protocol to use and is more secure than
``Basic`` authentication. If running in a domain environment, ``Kerberos`` should be used
instead of NTLM.
Kerberos has several advantages over using NTLM:
* NTLM is an older protocol and does not support newer encryption
protocols.
* NTLM is slower to authenticate because it requires more round trips to the host in
the authentication stage.
* Unlike Kerberos, NTLM does not allow credential delegation.
This example shows host variables configured to use NTLM authentication:
.. code-block:: yaml+jinja
ansible_user: LocalUsername
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: ntlm
Kerberos
--------
Kerberos is the recommended authentication option to use when running in a
domain environment. Kerberos supports features like credential delegation and
message encryption over HTTP and is one of the more secure options that
is available through WinRM.
Kerberos requires some additional setup work on the Ansible host before it can be
used properly.
The following example shows host vars configured for Kerberos authentication:
.. code-block:: yaml+jinja
ansible_user: [email protected]
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: kerberos
As of Ansible version 2.3, the Kerberos ticket will be created based on
``ansible_user`` and ``ansible_password``. If running on an older version of
Ansible or when ``ansible_winrm_kinit_mode`` is ``manual``, a Kerberos
ticket must already be obtained. See below for more details.
There are some extra host variables that can be set::
ansible_winrm_kinit_mode: managed/manual (manual means Ansible will not obtain a ticket)
ansible_winrm_kinit_cmd: the kinit binary to use to obtain a Kerberos ticket (default to kinit)
ansible_winrm_service: overrides the SPN prefix that is used, the default is ``HTTP`` and should rarely ever need changing
ansible_winrm_kerberos_delegation: allows the credentials to traverse multiple hops
ansible_winrm_kerberos_hostname_override: the hostname to be used for the kerberos exchange
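As a minimal sketch (values here are placeholders, not recommendations), the
variables above can be combined in host or group variables:
.. code-block:: yaml+jinja
ansible_winrm_transport: kerberos
ansible_winrm_kinit_mode: managed
ansible_winrm_kerberos_delegation: true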
Installing the Kerberos Library
+++++++++++++++++++++++++++++++
There are some system dependencies that must be installed prior to using Kerberos. The script below lists the dependencies based on the distro:
.. code-block:: shell
# Via Yum (RHEL/Centos/Fedora)
yum -y install python-devel krb5-devel krb5-libs krb5-workstation
# Via Apt (Ubuntu)
sudo apt-get install python-dev libkrb5-dev krb5-user
# Via Portage (Gentoo)
emerge -av app-crypt/mit-krb5
emerge -av dev-python/setuptools
# Via Pkg (FreeBSD)
sudo pkg install security/krb5
# Via OpenCSW (Solaris)
pkgadd -d http://get.opencsw.org/now
/opt/csw/bin/pkgutil -U
/opt/csw/bin/pkgutil -y -i libkrb5_3
# Via Pacman (Arch Linux)
pacman -S krb5
Once the dependencies have been installed, the ``python-kerberos`` wrapper can
be installed using ``pip``:
.. code-block:: shell
pip install pywinrm[kerberos]
Configuring Host Kerberos
+++++++++++++++++++++++++
Once the dependencies have been installed, Kerberos needs to be configured so
that it can communicate with a domain. This configuration is done through the
``/etc/krb5.conf`` file, which is installed with the packages in the script above.
To configure Kerberos, in the section that starts with:
.. code-block:: ini
[realms]
Add the full domain name and the fully qualified domain names of the primary
and secondary Active Directory domain controllers. It should look something
like this:
.. code-block:: ini
[realms]
MY.DOMAIN.COM = {
kdc = domain-controller1.my.domain.com
kdc = domain-controller2.my.domain.com
}
In the section that starts with:
.. code-block:: ini
[domain_realm]
Add a line like the following for each domain that Ansible needs access for:
.. code-block:: ini
[domain_realm]
.my.domain.com = MY.DOMAIN.COM
You can configure other settings in this file such as the default domain. See
`krb5.conf <https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html>`_
for more details.
Automatic Kerberos Ticket Management
++++++++++++++++++++++++++++++++++++
Ansible version 2.3 and later defaults to automatically managing Kerberos tickets
when both ``ansible_user`` and ``ansible_password`` are specified for a host. In
this process, a new ticket is created in a temporary credential cache for each
host. This is done before each task executes to minimize the chance of ticket
expiration. The temporary credential caches are deleted after each task
completes and will not interfere with the default credential cache.
To disable automatic ticket management, set ``ansible_winrm_kinit_mode=manual``
via the inventory.
Automatic ticket management requires a standard ``kinit`` binary on the control
host system path. To specify a different location or binary name, set the
``ansible_winrm_kinit_cmd`` hostvar to the fully qualified path to a MIT krbv5
``kinit``-compatible binary.
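For instance, a host that should use a ``kinit`` binary from a non-standard
location might be configured as follows (the path is purely illustrative):
.. code-block:: yaml+jinja
ansible_winrm_kinit_mode: managed
ansible_winrm_kinit_cmd: /opt/krb5/bin/kinit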
Manual Kerberos Ticket Management
+++++++++++++++++++++++++++++++++
To manually manage Kerberos tickets, the ``kinit`` binary is used. To
obtain a new ticket the following command is used:
.. code-block:: shell
kinit [email protected]
.. Note:: The domain must match the configured Kerberos realm exactly, and must be in upper case.
To see what tickets (if any) have been acquired, use the following command:
.. code-block:: shell
klist
To destroy all the tickets that have been acquired, use the following command:
.. code-block:: shell
kdestroy
Troubleshooting Kerberos
++++++++++++++++++++++++
Kerberos is reliant on a properly-configured environment to
work. To troubleshoot Kerberos issues, ensure that:
* The hostname set for the Windows host is the FQDN and not an IP address.
* The forward and reverse DNS lookups are working properly in the domain. To
test this, ping the windows host by name and then use the ip address returned
with ``nslookup``. The same name should be returned when using ``nslookup``
on the IP address.
* The Ansible host's clock is synchronized with the domain controller. Kerberos
is time sensitive, and a little clock drift can cause the ticket generation
process to fail.
* Ensure that the fully qualified domain name for the domain is configured in
the ``krb5.conf`` file. To check this, run::
kinit -C [email protected]
klist
If the domain name returned by ``klist`` is different from the one requested,
an alias is being used. The ``krb5.conf`` file needs to be updated so that
the fully qualified domain name is used and not an alias.
* If the default kerberos tooling has been replaced or modified (some IdM solutions may do this), this may cause issues when installing or upgrading the Python Kerberos library. As of the time of this writing, this library is called ``pykerberos`` and is known to work with both MIT and Heimdal Kerberos libraries. To resolve ``pykerberos`` installation issues, ensure the system dependencies for Kerberos have been met (see: `Installing the Kerberos Library`_), remove any custom Kerberos tooling paths from the PATH environment variable, and retry the installation of Python Kerberos library package.
CredSSP
-------
CredSSP authentication is a newer authentication protocol that allows
credential delegation. This is achieved by encrypting the username and password
after authentication has succeeded and sending that to the server using the
CredSSP protocol.
Because the username and password are sent to the server to be used for double
hop authentication, ensure that the hosts that the Windows host communicates with are
not compromised and are trusted.
CredSSP can be used for both local and domain accounts and also supports
message encryption over HTTP.
To use CredSSP authentication, the host vars are configured like so:
.. code-block:: yaml+jinja
ansible_user: Username
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: credssp
There are some extra host variables that can be set as shown below::
ansible_winrm_credssp_disable_tlsv1_2: when true, will not use TLS 1.2 in the CredSSP auth process
CredSSP authentication is not enabled by default on a Windows host, but can
be enabled by running the following in PowerShell:
.. code-block:: powershell
Enable-WSManCredSSP -Role Server -Force
Installing CredSSP Library
++++++++++++++++++++++++++
The ``requests-credssp`` wrapper can be installed using ``pip``:
.. code-block:: bash
pip install pywinrm[credssp]
CredSSP and TLS 1.2
+++++++++++++++++++
By default the ``requests-credssp`` library is configured to authenticate over
the TLS 1.2 protocol. TLS 1.2 is installed and enabled by default for Windows Server 2012
and Windows 8 and more recent releases.
There are two ways that older hosts can be used with CredSSP:
* Install and enable a hotfix to enable TLS 1.2 support (recommended
for Server 2008 R2 and Windows 7).
* Set ``ansible_winrm_credssp_disable_tlsv1_2=True`` in the inventory to run
over TLS 1.0. This is the only option when connecting to Windows Server 2008, which
has no way of supporting TLS 1.2.
See :ref:`winrm_tls12` for more information on how to enable TLS 1.2 on the
Windows host.
Set CredSSP Certificate
+++++++++++++++++++++++
CredSSP works by encrypting the credentials through the TLS protocol and uses a self-signed certificate by default. The ``CertificateThumbprint`` option under the WinRM service configuration can be used to specify the thumbprint of
another certificate.
.. Note:: This certificate configuration is independent of the WinRM listener
certificate. With CredSSP, message transport still occurs over the WinRM listener,
but the TLS-encrypted messages inside the channel use the service-level certificate.
To explicitly set the certificate to use for CredSSP::
# Note the value $certificate_thumbprint will be different in each
# situation, this needs to be set based on the cert that is used.
$certificate_thumbprint = "7C8DCBD5427AFEE6560F4AF524E325915F51172C"
# Set the thumbprint value
Set-Item -Path WSMan:\localhost\Service\CertificateThumbprint -Value $certificate_thumbprint
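To find the thumbprint of an existing certificate, the local machine's
personal certificate store can be listed first. This is a minimal sketch,
assuming the certificate is stored in ``LocalMachine\My``::

    Get-ChildItem -Path Cert:\LocalMachine\My | Select-Object Thumbprint, Subject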
Non-Administrator Accounts
``````````````````````````
WinRM is configured by default to only allow connections from accounts in the local
``Administrators`` group. This can be changed by running:
.. code-block:: powershell
winrm configSDDL default
This will display an ACL editor, where new users or groups may be added. To run commands
over WinRM, users and groups must have at least the ``Read`` and ``Execute`` permissions
enabled.
While non-administrative accounts can be used with WinRM, most typical server administration
tasks require some level of administrative access, so the utility is usually limited.
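As a hedged sketch of one alternative (assuming Windows Server 2012 or later,
where the built-in ``Remote Management Users`` group is part of the default
WinRM security descriptor), a hypothetical local account named ``ansible-user``
can be granted access without editing the ACL directly:

.. code-block:: powershell

    net localgroup "Remote Management Users" ansible-user /add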
WinRM Encryption
````````````````
By default WinRM will fail to work when running over an unencrypted channel.
The WinRM protocol considers the channel to be encrypted if using TLS over HTTP
(HTTPS) or using message level encryption. Using WinRM with TLS is the
recommended option as it works with all authentication options, but requires
a certificate to be created and used on the WinRM listener.
The ``ConfigureRemotingForAnsible.ps1`` script creates a self-signed certificate and
creates the listener with that certificate. If in a domain environment, ADCS
can also create a certificate for the host that is issued by the domain itself.
If using HTTPS is not an option, then HTTP can be used when the authentication
option is ``NTLM``, ``Kerberos`` or ``CredSSP``. These protocols will encrypt
the WinRM payload with their own encryption method before sending it to the
server. The message-level encryption is not used when running over HTTPS because the
encryption uses the more secure TLS protocol instead. If both transport and
message encryption is required, set ``ansible_winrm_message_encryption=always``
in the host vars.
A last resort is to disable the encryption requirement on the Windows host. This
should only be used for development and debugging purposes, as anything sent
from Ansible can be viewed or manipulated, and the remote session can be
completely taken over, by anyone on the same network. To disable the encryption
requirement::
Set-Item -Path WSMan:\localhost\Service\AllowUnencrypted -Value $true
.. Note:: Do not disable the encryption check unless it is
absolutely required. Doing so could allow sensitive information like
credentials and files to be intercepted by others on the network.
Inventory Options
`````````````````
Ansible's Windows support relies on a few standard variables to indicate the
username, password, and connection type of the remote hosts. These variables
are most easily set up in the inventory, but can be set on the ``host_vars``/
``group_vars`` level.
When setting up the inventory, the following variables are required:
.. code-block:: yaml+jinja
# It is suggested that these be encrypted with ansible-vault:
# ansible-vault edit group_vars/windows.yml
ansible_connection: winrm
# May also be passed on the command-line via --user
ansible_user: Administrator
# May also be supplied at runtime with --ask-pass
ansible_password: SecretPasswordGoesHere
Using the variables above, Ansible will connect to the Windows host with Basic
authentication through HTTPS. If ``ansible_user`` has a UPN value like
``[email protected]`` then the authentication option will automatically attempt
to use Kerberos unless ``ansible_winrm_transport`` has been set to something other than
``kerberos``.
The following custom inventory variables are also supported
for additional configuration of WinRM connections:
* ``ansible_port``: The port WinRM will run over. HTTPS uses ``5986``, which is
  the default, while HTTP uses ``5985``
* ``ansible_winrm_scheme``: Specify the connection scheme (``http`` or
``https``) to use for the WinRM connection. Ansible uses ``https`` by default
unless ``ansible_port`` is ``5985``
* ``ansible_winrm_path``: Specify an alternate path to the WinRM endpoint,
Ansible uses ``/wsman`` by default
* ``ansible_winrm_realm``: Specify the realm to use for Kerberos
authentication. If ``ansible_user`` contains ``@``, Ansible will use the part
of the username after ``@`` by default
* ``ansible_winrm_transport``: Specify one or more authentication transport
options as a comma-separated list. By default, Ansible will use ``kerberos,
basic`` if the ``kerberos`` module is installed and a realm is defined,
otherwise it will be ``plaintext``
* ``ansible_winrm_server_cert_validation``: Specify the server certificate
validation mode (``ignore`` or ``validate``). Ansible defaults to
``validate`` on Python 2.7.9 and higher, which will result in certificate
validation errors against the Windows self-signed certificates. Unless
verifiable certificates have been configured on the WinRM listeners, this
should be set to ``ignore``
* ``ansible_winrm_operation_timeout_sec``: Increase the default timeout for
WinRM operations, Ansible uses ``20`` by default
* ``ansible_winrm_read_timeout_sec``: Increase the WinRM read timeout, Ansible
uses ``30`` by default. Useful if there are intermittent network issues and
read timeout errors keep occurring
* ``ansible_winrm_message_encryption``: Specify the message encryption
operation (``auto``, ``always``, ``never``) to use, Ansible uses ``auto`` by
default. ``auto`` means message encryption is only used when
``ansible_winrm_scheme`` is ``http`` and ``ansible_winrm_transport`` supports
message encryption. ``always`` means message encryption will always be used
and ``never`` means message encryption will never be used
* ``ansible_winrm_ca_trust_path``: Used to specify a different cacert container
than the one used in the ``certifi`` module. See the HTTPS Certificate
Validation section for more details.
* ``ansible_winrm_send_cbt``: When using ``ntlm`` or ``kerberos`` over HTTPS,
the authentication library will try to send channel binding tokens to
  mitigate man-in-the-middle attacks. This flag controls whether these
bindings will be sent or not (default: ``yes``).
* ``ansible_winrm_*``: Any additional keyword arguments supported by
``winrm.Protocol`` may be provided in place of ``*``
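All of these options can be combined in an inventory or ``host_vars`` file.
The values below are placeholders, shown only as a minimal sketch:

.. code-block:: yaml+jinja

    ansible_connection: winrm
    ansible_user: [email protected]
    ansible_password: SecretPasswordGoesHere
    ansible_port: 5986
    ansible_winrm_server_cert_validation: ignore
    ansible_winrm_read_timeout_sec: 60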
In addition, there are also specific variables that need to be set
for each authentication option. See the section on authentication above for more information.
.. Note:: Ansible 2.0 has deprecated the "ssh" from ``ansible_ssh_user``,
``ansible_ssh_pass``, ``ansible_ssh_host``, and ``ansible_ssh_port`` to
become ``ansible_user``, ``ansible_password``, ``ansible_host``, and
``ansible_port``. If using a version of Ansible prior to 2.0, the older
style (``ansible_ssh_*``) should be used instead. The shorter variables
are ignored, without warning, in older versions of Ansible.
.. Note:: ``ansible_winrm_message_encryption`` is different from transport
encryption done over TLS. The WinRM payload is still encrypted with TLS
when run over HTTPS, even if ``ansible_winrm_message_encryption=never``.
IPv6 Addresses
``````````````
IPv6 addresses can be used instead of IPv4 addresses or hostnames. This option
is normally set in an inventory. Ansible will attempt to parse the address
using the `ipaddress <https://docs.python.org/3/library/ipaddress.html>`_
package and pass it to pywinrm correctly.
When defining a host using an IPv6 address, just add the IPv6 address as you
would an IPv4 address or hostname:
.. code-block:: ini
[windows-server]
2001:db8::1
[windows-server:vars]
ansible_user=username
ansible_password=password
ansible_connection=winrm
.. Note:: The ipaddress library is only included by default in Python 3.x. To
use IPv6 addresses in Python 2.7, make sure to run ``pip install ipaddress`` which installs
a backported package.
HTTPS Certificate Validation
````````````````````````````
As part of the TLS protocol, the certificate is validated to ensure the host
matches the subject and the client trusts the issuer of the server certificate.
When using a self-signed certificate or setting
``ansible_winrm_server_cert_validation: ignore`` these security mechanisms are
bypassed. While self-signed certificates will always need the ``ignore`` flag,
certificates that have been issued from a certificate authority can still be
validated.
One of the more common ways of setting up an HTTPS listener in a domain
environment is to use Active Directory Certificate Service (AD CS). AD CS is
used to generate signed certificates from a Certificate Signing Request (CSR).
If the WinRM HTTPS listener is using a certificate that has been signed by
another authority, like AD CS, then Ansible can be set up to trust that
issuer as part of the TLS handshake.
To get Ansible to trust a Certificate Authority (CA) like AD CS, the issuer
certificate of the CA can be exported as a PEM encoded certificate. This
certificate can then be copied locally to the Ansible controller and used as a
source of certificate validation, otherwise known as a CA chain.
The CA chain can contain a single issuer certificate or multiple issuer
certificates, with each entry on a new line. To use the custom CA chain as part of
the validation process, set ``ansible_winrm_ca_trust_path`` to the path of the
file. If this variable is not set, the default CA chain is used instead which
is located in the install path of the Python package
`certifi <https://github.com/certifi/python-certifi>`_.
.. Note:: Each HTTP call is done by the Python requests library which does not
  use the system's built-in certificate store as a trust authority.
Certificate validation will fail if the server's certificate issuer is
only added to the system's truststore.
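For example, a minimal sketch (the path below is a hypothetical location for
the exported issuer certificate):

.. code-block:: ini

    [windows-server:vars]
    ansible_winrm_ca_trust_path=/etc/pki/tls/certs/adcs-issuing-ca.pem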
.. _winrm_tls12:
TLS 1.2 Support
```````````````
As WinRM runs over the HTTP protocol, using HTTPS means that the TLS protocol
is used to encrypt the WinRM messages. TLS will automatically attempt to
negotiate the best protocol and cipher suite that is available to both the
client and the server. If a match cannot be found then Ansible will error out
with a message similar to::
HTTPSConnectionPool(host='server', port=5986): Max retries exceeded with url: /wsman (Caused by SSLError(SSLError(1, '[SSL: UNSUPPORTED_PROTOCOL] unsupported protocol (_ssl.c:1056)')))
This commonly occurs when the Windows host has not been configured to support
TLS v1.2, but it could also mean the Ansible controller has an older OpenSSL
version installed.
Windows 8 and Windows Server 2012 come with TLS v1.2 installed and enabled by
default but older hosts, like Server 2008 R2 and Windows 7, have to be enabled
manually.
.. Note:: There is a bug with the TLS 1.2 patch for Server 2008 which will stop
Ansible from connecting to the Windows host. This means that Server 2008
cannot be configured to use TLS 1.2. Server 2008 R2 and Windows 7 are not
affected by this issue and can use TLS 1.2.
To verify what protocol the Windows host supports, you can run the following
command on the Ansible controller::
openssl s_client -connect <hostname>:5986
The output will contain information about the TLS session and the ``Protocol``
line will display the version that was negotiated::
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1
Cipher : ECDHE-RSA-AES256-SHA
Session-ID: 962A00001C95D2A601BE1CCFA7831B85A7EEE897AECDBF3D9ECD4A3BE4F6AC9B
Session-ID-ctx:
Master-Key: ....
Start Time: 1552976474
Timeout : 7200 (sec)
Verify return code: 21 (unable to verify the first certificate)
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID: AE16000050DA9FD44D03BB8839B64449805D9E43DBD670346D3D9E05D1AEEA84
Session-ID-ctx:
Master-Key: ....
Start Time: 1552976538
Timeout : 7200 (sec)
Verify return code: 21 (unable to verify the first certificate)
If the host is returning ``TLSv1`` then it should be configured so that
TLS v1.2 is enabled. You can do this by running the following PowerShell
script:
.. code-block:: powershell
Function Enable-TLS12 {
param(
[ValidateSet("Server", "Client")]
[String]$Component = "Server"
)
$protocols_path = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'
New-Item -Path "$protocols_path\TLS 1.2\$Component" -Force
New-ItemProperty -Path "$protocols_path\TLS 1.2\$Component" -Name Enabled -Value 1 -Type DWORD -Force
New-ItemProperty -Path "$protocols_path\TLS 1.2\$Component" -Name DisabledByDefault -Value 0 -Type DWORD -Force
}
Enable-TLS12 -Component Server
# Not required but highly recommended to enable the Client side TLS 1.2 components
Enable-TLS12 -Component Client
Restart-Computer
The below Ansible tasks can also be used to enable TLS v1.2:
.. code-block:: yaml+jinja
- name: enable TLSv1.2 support
win_regedit:
path: HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\{{ item.type }}
name: '{{ item.property }}'
data: '{{ item.value }}'
type: dword
state: present
register: enable_tls12
loop:
- type: Server
property: Enabled
value: 1
- type: Server
property: DisabledByDefault
value: 0
- type: Client
property: Enabled
value: 1
- type: Client
property: DisabledByDefault
value: 0
- name: reboot if TLS config was applied
win_reboot:
when: enable_tls12 is changed
There are other ways to configure the TLS protocols as well as the cipher
suites that are offered by the Windows host. One tool that can give you a GUI
to manage these settings is `IIS Crypto <https://www.nartac.com/Products/IISCrypto/>`_
from Nartac Software.
Limitations
```````````
Due to the design of the WinRM protocol, there are a few limitations
when using WinRM that can cause issues when creating playbooks for Ansible.
These include:
* Credentials are not delegated for most authentication types, which causes
authentication errors when accessing network resources or installing certain
programs.
* Many calls to the Windows Update API are blocked when running over WinRM.
* Some programs fail to install with WinRM due to no credential delegation or
  because they access forbidden Windows APIs like WUA over WinRM.
* Commands under WinRM are done under a non-interactive session, which can prevent
certain commands or executables from running.
* You cannot run a process that interacts with ``DPAPI``, which is used by some
installers (like Microsoft SQL Server).
Some of these limitations can be mitigated by doing one of the following:
* Set ``ansible_winrm_transport`` to ``credssp`` or ``kerberos`` (with
``ansible_winrm_kerberos_delegation=true``) to bypass the double hop issue
and access network resources
* Use ``become`` to bypass all WinRM restrictions and run a command as it would
locally. Unlike using an authentication transport like ``credssp``, this will
also remove the non-interactive restriction and API restrictions like WUA and
DPAPI
* Use a scheduled task to run a command which can be created with the
``win_scheduled_task`` module. Like ``become``, this bypasses all WinRM
restrictions but can only run a command and not modules.
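As an illustrative sketch of the ``become`` approach (the module choice and
variables below are placeholders), a task that would otherwise hit the
non-interactive or delegation restrictions can be run as the connection user:

.. code-block:: yaml+jinja

    - name: run a task without the WinRM restrictions
      win_whoami:
      become: yes
      become_method: runas
      vars:
        ansible_become_user: '{{ ansible_user }}'
        ansible_become_pass: '{{ ansible_password }}'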
.. seealso::
:ref:`playbooks_intro`
An introduction to playbooks
:ref:`playbooks_best_practices`
Best practices advice
:ref:`List of Windows Modules <windows_modules>`
Windows specific module list, all implemented in PowerShell
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,728 |
tmpfs src is a required string
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
In the docker_swarm_service module in Ansible, if the mount type is specified as tmpfs, it is required to set src. But for docker service create, src is not supported for tmpfs.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
docker_swarm_service
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.4
config file = /home/vani/Projects/voody/ansible/ansible.cfg
configured module search path = ['/home/vani/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Aug 20 2019, 17:12:48) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(/home/user/Projects/ansible/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/user/Projects/ansible/ansible.cfg) = yaml
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/62728
|
https://github.com/ansible/ansible/pull/64637
|
dd5415017e554188e207e4b213c778333e913d55
|
574bd32db230b518c883a2eac45af76f3385db56
| 2019-09-23T06:54:03Z |
python
| 2019-11-09T20:01:56Z |
changelogs/fragments/64637-docker_swarm_service-tmpfs-source.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,728 |
tmpfs src is a required string
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
In the docker_swarm_service module in Ansible, if the mount type is specified as tmpfs, it is required to set src. But for docker service create, src is not supported for tmpfs.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
docker_swarm_service
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.4
config file = /home/vani/Projects/voody/ansible/ansible.cfg
configured module search path = ['/home/vani/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Aug 20 2019, 17:12:48) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(/home/user/Projects/ansible/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/user/Projects/ansible/ansible.cfg) = yaml
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/62728
|
https://github.com/ansible/ansible/pull/64637
|
dd5415017e554188e207e4b213c778333e913d55
|
574bd32db230b518c883a2eac45af76f3385db56
| 2019-09-23T06:54:03Z |
python
| 2019-11-09T20:01:56Z |
lib/ansible/modules/cloud/docker/docker_swarm_service.py
|
#!/usr/bin/python
#
# (c) 2017, Dario Zanzico ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'metadata_version': '1.1'}
DOCUMENTATION = '''
---
module: docker_swarm_service
author:
- "Dario Zanzico (@dariko)"
- "Jason Witkowski (@jwitko)"
- "Hannes Ljungberg (@hannseman)"
short_description: docker swarm service
description:
- Manages docker services via a swarm manager node.
version_added: "2.7"
options:
args:
description:
- List arguments to be passed to the container.
- Corresponds to the C(ARG) parameter of C(docker service create).
type: list
elements: str
command:
description:
- Command to execute when the container starts.
- A command may be either a string or a list or a list of strings.
- Corresponds to the C(COMMAND) parameter of C(docker service create).
type: raw
version_added: 2.8
configs:
description:
- List of dictionaries describing the service configs.
- Corresponds to the C(--config) option of C(docker service create).
- Requires API version >= 1.30.
type: list
elements: dict
suboptions:
config_id:
description:
- Config's ID.
type: str
config_name:
description:
- Config's name as defined at its creation.
type: str
required: yes
filename:
description:
          - Name of the file containing the config. Defaults to the I(config_name) if not specified.
        type: str
uid:
description:
- UID of the config file's owner.
type: str
gid:
description:
- GID of the config file's group.
type: str
mode:
description:
- File access mode inside the container. Must be an octal number (like C(0644) or C(0444)).
type: int
constraints:
description:
- List of the service constraints.
- Corresponds to the C(--constraint) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(placement.constraints) instead.
type: list
elements: str
container_labels:
description:
- Dictionary of key value pairs.
- Corresponds to the C(--container-label) option of C(docker service create).
type: dict
dns:
description:
- List of custom DNS servers.
- Corresponds to the C(--dns) option of C(docker service create).
- Requires API version >= 1.25.
type: list
elements: str
dns_search:
description:
- List of custom DNS search domains.
- Corresponds to the C(--dns-search) option of C(docker service create).
- Requires API version >= 1.25.
type: list
elements: str
dns_options:
description:
- List of custom DNS options.
- Corresponds to the C(--dns-option) option of C(docker service create).
- Requires API version >= 1.25.
type: list
elements: str
endpoint_mode:
description:
- Service endpoint mode.
- Corresponds to the C(--endpoint-mode) option of C(docker service create).
- Requires API version >= 1.25.
type: str
choices:
- vip
- dnsrr
env:
description:
- List or dictionary of the service environment variables.
- If passed a list each items need to be in the format of C(KEY=VALUE).
- If passed a dictionary values which might be parsed as numbers,
booleans or other types by the YAML parser must be quoted (e.g. C("true"))
in order to avoid data loss.
- Corresponds to the C(--env) option of C(docker service create).
type: raw
env_files:
description:
- List of paths to files, present on the target, containing environment variables C(FOO=BAR).
- The order of the list is significant in determining the value assigned to a
variable that shows up more than once.
- If variable also present in I(env), then I(env) value will override.
type: list
elements: path
version_added: "2.8"
force_update:
description:
- Force update even if no changes require it.
- Corresponds to the C(--force) option of C(docker service update).
- Requires API version >= 1.25.
type: bool
default: no
groups:
description:
- List of additional group names and/or IDs that the container process will run as.
- Corresponds to the C(--group) option of C(docker service update).
- Requires API version >= 1.25.
type: list
elements: str
version_added: "2.8"
healthcheck:
description:
- Configure a check that is run to determine whether or not containers for this service are "healthy".
See the docs for the L(HEALTHCHECK Dockerfile instruction,https://docs.docker.com/engine/reference/builder/#healthcheck)
for details on how healthchecks work.
- "I(interval), I(timeout) and I(start_period) are specified as durations. They accept duration as a string in a format
that look like: C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Requires API version >= 1.25.
type: dict
suboptions:
test:
description:
- Command to run to check health.
- Must be either a string or a list. If it is a list, the first item must be one of C(NONE), C(CMD) or C(CMD-SHELL).
type: raw
interval:
description:
- Time between running the check.
type: str
timeout:
description:
- Maximum time to allow one check to run.
type: str
retries:
description:
          - Consecutive failures needed to report unhealthy. Accepts an integer value.
type: int
start_period:
description:
- Start period for the container to initialize before starting health-retries countdown.
type: str
version_added: "2.8"
hostname:
description:
- Container hostname.
- Corresponds to the C(--hostname) option of C(docker service create).
- Requires API version >= 1.25.
type: str
hosts:
description:
- Dict of host-to-IP mappings, where each host name is a key in the dictionary.
Each host name will be added to the container's /etc/hosts file.
- Corresponds to the C(--host) option of C(docker service create).
- Requires API version >= 1.25.
type: dict
version_added: "2.8"
image:
description:
- Service image path and tag.
- Corresponds to the C(IMAGE) parameter of C(docker service create).
type: str
required: yes
labels:
description:
- Dictionary of key value pairs.
- Corresponds to the C(--label) option of C(docker service create).
type: dict
limits:
description:
- Configures service resource limits.
suboptions:
cpus:
description:
- Service CPU limit. C(0) equals no limit.
- Corresponds to the C(--limit-cpu) option of C(docker service create).
type: float
memory:
description:
- "Service memory reservation in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- C(0) equals no reservation.
- Omitting the unit defaults to bytes.
- Corresponds to the C(--reserve-memory) option of C(docker service create).
type: str
type: dict
version_added: "2.8"
limit_cpu:
description:
- Service CPU limit. C(0) equals no limit.
- Corresponds to the C(--limit-cpu) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(limits.cpus) instead.
type: float
limit_memory:
description:
- "Service memory limit in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- C(0) equals no limit.
- Omitting the unit defaults to bytes.
- Corresponds to the C(--limit-memory) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(limits.memory) instead.
type: str
logging:
description:
- "Logging configuration for the service."
suboptions:
driver:
description:
- Configure the logging driver for a service.
- Corresponds to the C(--log-driver) option of C(docker service create).
type: str
options:
description:
- Options for service logging driver.
- Corresponds to the C(--log-opt) option of C(docker service create).
type: dict
type: dict
version_added: "2.8"
log_driver:
description:
- Configure the logging driver for a service.
- Corresponds to the C(--log-driver) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(logging.driver) instead.
type: str
log_driver_options:
description:
- Options for service logging driver.
- Corresponds to the C(--log-opt) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(logging.options) instead.
type: dict
mode:
description:
- Service replication mode.
- Service will be removed and recreated when changed.
- Corresponds to the C(--mode) option of C(docker service create).
type: str
default: replicated
choices:
- replicated
- global
mounts:
description:
- List of dictionaries describing the service mounts.
- Corresponds to the C(--mount) option of C(docker service create).
type: list
elements: dict
suboptions:
source:
description:
- Mount source (e.g. a volume name or a host path).
type: str
required: yes
target:
description:
- Container path.
type: str
required: yes
type:
description:
- The mount type.
- Note that C(npipe) is only supported by Docker for Windows. Also note that C(npipe) was added in Ansible 2.9.
type: str
default: bind
choices:
- bind
- volume
- tmpfs
- npipe
readonly:
description:
- Whether the mount should be read-only.
type: bool
labels:
description:
- Volume labels to apply.
type: dict
version_added: "2.8"
propagation:
description:
- The propagation mode to use.
          - Can only be used when I(type) is C(bind).
type: str
choices:
- shared
- slave
- private
- rshared
- rslave
- rprivate
version_added: "2.8"
no_copy:
description:
- Disable copying of data from a container when a volume is created.
          - Can only be used when I(type) is C(volume).
type: bool
version_added: "2.8"
driver_config:
description:
- Volume driver configuration.
          - Can only be used when I(type) is C(volume).
suboptions:
name:
description:
- Name of the volume-driver plugin to use for the volume.
type: str
options:
description:
- Options as key-value pairs to pass to the driver for this volume.
type: dict
type: dict
version_added: "2.8"
tmpfs_size:
description:
- "Size of the tmpfs mount in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
          - Can only be used when I(type) is C(tmpfs).
type: str
version_added: "2.8"
tmpfs_mode:
description:
- File mode of the tmpfs in octal.
          - Can only be used when I(type) is C(tmpfs).
type: int
version_added: "2.8"
name:
description:
- Service name.
- Corresponds to the C(--name) option of C(docker service create).
type: str
required: yes
networks:
description:
- List of the service networks names or dictionaries.
- When passed dictionaries valid sub-options are I(name), which is required, and
I(aliases) and I(options).
- Prior to API version 1.29, updating and removing networks is not supported.
If changes are made the service will then be removed and recreated.
- Corresponds to the C(--network) option of C(docker service create).
type: list
elements: raw
placement:
description:
- Configures service placement preferences and constraints.
suboptions:
constraints:
description:
- List of the service constraints.
- Corresponds to the C(--constraint) option of C(docker service create).
type: list
elements: str
preferences:
description:
- List of the placement preferences as key value pairs.
- Corresponds to the C(--placement-pref) option of C(docker service create).
- Requires API version >= 1.27.
type: list
elements: dict
type: dict
version_added: "2.8"
publish:
description:
- List of dictionaries describing the service published ports.
- Corresponds to the C(--publish) option of C(docker service create).
- Requires API version >= 1.25.
type: list
elements: dict
suboptions:
published_port:
description:
- The port to make externally available.
type: int
required: yes
target_port:
description:
- The port inside the container to expose.
type: int
required: yes
protocol:
description:
- What protocol to use.
type: str
default: tcp
choices:
- tcp
- udp
mode:
description:
- What publish mode to use.
- Requires API version >= 1.32.
type: str
choices:
- ingress
- host
read_only:
description:
- Mount the containers root filesystem as read only.
- Corresponds to the C(--read-only) option of C(docker service create).
type: bool
version_added: "2.8"
replicas:
description:
- Number of containers instantiated in the service. Valid only if I(mode) is C(replicated).
- If set to C(-1), and service is not present, service replicas will be set to C(1).
- If set to C(-1), and service is present, service replicas will be unchanged.
- Corresponds to the C(--replicas) option of C(docker service create).
type: int
default: -1
reservations:
description:
- Configures service resource reservations.
suboptions:
cpus:
description:
- Service CPU reservation. C(0) equals no reservation.
- Corresponds to the C(--reserve-cpu) option of C(docker service create).
type: float
memory:
description:
- "Service memory reservation in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- C(0) equals no reservation.
- Omitting the unit defaults to bytes.
- Corresponds to the C(--reserve-memory) option of C(docker service create).
type: str
type: dict
version_added: "2.8"
reserve_cpu:
description:
- Service CPU reservation. C(0) equals no reservation.
- Corresponds to the C(--reserve-cpu) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(reservations.cpus) instead.
type: float
reserve_memory:
description:
- "Service memory reservation in format C(<number>[<unit>]). Number is a positive integer.
Unit can be C(B) (byte), C(K) (kibibyte, 1024B), C(M) (mebibyte), C(G) (gibibyte),
C(T) (tebibyte), or C(P) (pebibyte)."
- C(0) equals no reservation.
- Omitting the unit defaults to bytes.
- Corresponds to the C(--reserve-memory) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(reservations.memory) instead.
type: str
resolve_image:
description:
- If the current image digest should be resolved from registry and updated if changed.
- Requires API version >= 1.30.
type: bool
default: no
version_added: 2.8
restart_config:
description:
- Configures if and how to restart containers when they exit.
suboptions:
condition:
description:
- Restart condition of the service.
- Corresponds to the C(--restart-condition) option of C(docker service create).
type: str
choices:
- none
- on-failure
- any
delay:
description:
- Delay between restarts.
- "Accepts a a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--restart-delay) option of C(docker service create).
type: str
max_attempts:
description:
- Maximum number of service restarts.
          - Corresponds to the C(--restart-max-attempts) option of C(docker service create).
type: int
window:
description:
- Restart policy evaluation window.
- "Accepts a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--restart-window) option of C(docker service create).
type: str
type: dict
version_added: "2.8"
restart_policy:
description:
- Restart condition of the service.
- Corresponds to the C(--restart-condition) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(restart_config.condition) instead.
type: str
choices:
- none
- on-failure
- any
restart_policy_attempts:
description:
- Maximum number of service restarts.
      - Corresponds to the C(--restart-max-attempts) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(restart_config.max_attempts) instead.
type: int
restart_policy_delay:
description:
- Delay between restarts.
- "Accepts a duration as an integer in nanoseconds or as a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--restart-delay) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(restart_config.delay) instead.
type: raw
restart_policy_window:
description:
- Restart policy evaluation window.
- "Accepts a duration as an integer in nanoseconds or as a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--restart-window) option of C(docker service create).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(restart_config.window) instead.
type: raw
rollback_config:
description:
- Configures how the service should be rolled back in case of a failing update.
suboptions:
parallelism:
description:
- The number of containers to rollback at a time. If set to 0, all containers rollback simultaneously.
- Corresponds to the C(--rollback-parallelism) option of C(docker service create).
- Requires API version >= 1.28.
type: int
delay:
description:
- Delay between task rollbacks.
- "Accepts a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--rollback-delay) option of C(docker service create).
- Requires API version >= 1.28.
type: str
failure_action:
description:
- Action to take in case of rollback failure.
- Corresponds to the C(--rollback-failure-action) option of C(docker service create).
- Requires API version >= 1.28.
type: str
choices:
- continue
- pause
monitor:
description:
- Duration after each task rollback to monitor for failure.
- "Accepts a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--rollback-monitor) option of C(docker service create).
- Requires API version >= 1.28.
type: str
max_failure_ratio:
description:
- Fraction of tasks that may fail during a rollback.
- Corresponds to the C(--rollback-max-failure-ratio) option of C(docker service create).
- Requires API version >= 1.28.
type: float
order:
description:
- Specifies the order of operations during rollbacks.
- Corresponds to the C(--rollback-order) option of C(docker service create).
- Requires API version >= 1.29.
type: str
type: dict
version_added: "2.8"
secrets:
description:
- List of dictionaries describing the service secrets.
- Corresponds to the C(--secret) option of C(docker service create).
- Requires API version >= 1.25.
type: list
elements: dict
suboptions:
secret_id:
description:
- Secret's ID.
type: str
secret_name:
description:
- Secret's name as defined at its creation.
type: str
required: yes
filename:
description:
- Name of the file containing the secret. Defaults to the I(secret_name) if not specified.
- Corresponds to the C(target) key of C(docker service create --secret).
type: str
uid:
description:
- UID of the secret file's owner.
type: str
gid:
description:
- GID of the secret file's group.
type: str
mode:
description:
- File access mode inside the container. Must be an octal number (like C(0644) or C(0444)).
type: int
state:
description:
- C(absent) - A service matching the specified name will be removed and have its tasks stopped.
- C(present) - Asserts the existence of a service matching the name and provided configuration parameters.
Unspecified configuration parameters will be set to docker defaults.
type: str
default: present
choices:
- present
- absent
stop_grace_period:
description:
- Time to wait before force killing a container.
- "Accepts a duration as a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--stop-grace-period) option of C(docker service create).
type: str
version_added: "2.8"
stop_signal:
description:
- Override default signal used to stop the container.
- Corresponds to the C(--stop-signal) option of C(docker service create).
type: str
version_added: "2.8"
tty:
description:
- Allocate a pseudo-TTY.
- Corresponds to the C(--tty) option of C(docker service create).
- Requires API version >= 1.25.
type: bool
update_config:
description:
- Configures how the service should be updated. Useful for configuring rolling updates.
suboptions:
parallelism:
description:
- Rolling update parallelism.
- Corresponds to the C(--update-parallelism) option of C(docker service create).
type: int
delay:
description:
- Rolling update delay.
- "Accepts a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--update-delay) option of C(docker service create).
type: str
failure_action:
description:
- Action to take in case of container failure.
- Corresponds to the C(--update-failure-action) option of C(docker service create).
- Usage of I(rollback) requires API version >= 1.29.
type: str
choices:
- continue
- pause
- rollback
monitor:
description:
- Time to monitor updated tasks for failures.
- "Accepts a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--update-monitor) option of C(docker service create).
- Requires API version >= 1.25.
type: str
max_failure_ratio:
description:
- Fraction of tasks that may fail during an update before the failure action is invoked.
- Corresponds to the C(--update-max-failure-ratio) option of C(docker service create).
- Requires API version >= 1.25.
type: float
order:
description:
- Specifies the order of operations when rolling out an updated task.
- Corresponds to the C(--update-order) option of C(docker service create).
- Requires API version >= 1.29.
type: str
type: dict
version_added: "2.8"
update_delay:
description:
- Rolling update delay.
- "Accepts a duration as an integer in nanoseconds or as a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--update-delay) option of C(docker service create).
- Before Ansible 2.8, the default value for this option was C(10).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(update_config.delay) instead.
type: raw
update_parallelism:
description:
- Rolling update parallelism.
- Corresponds to the C(--update-parallelism) option of C(docker service create).
- Before Ansible 2.8, the default value for this option was C(1).
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(update_config.parallelism) instead.
type: int
update_failure_action:
description:
- Action to take in case of container failure.
- Corresponds to the C(--update-failure-action) option of C(docker service create).
- Usage of I(rollback) requires API version >= 1.29.
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(update_config.failure_action) instead.
type: str
choices:
- continue
- pause
- rollback
update_monitor:
description:
- Time to monitor updated tasks for failures.
- "Accepts a duration as an integer in nanoseconds or as a string in a format that look like:
C(5h34m56s), C(1m30s) etc. The supported units are C(us), C(ms), C(s), C(m) and C(h)."
- Corresponds to the C(--update-monitor) option of C(docker service create).
- Requires API version >= 1.25.
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(update_config.monitor) instead.
type: raw
update_max_failure_ratio:
description:
- Fraction of tasks that may fail during an update before the failure action is invoked.
- Corresponds to the C(--update-max-failure-ratio) option of C(docker service create).
- Requires API version >= 1.25.
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(update_config.max_failure_ratio) instead.
type: float
update_order:
description:
- Specifies the order of operations when rolling out an updated task.
- Corresponds to the C(--update-order) option of C(docker service create).
- Requires API version >= 1.29.
- Deprecated in 2.8, will be removed in 2.12. Use parameter C(update_config.order) instead.
type: str
choices:
- stop-first
- start-first
user:
description:
- Sets the username or UID used for the specified command.
- Before Ansible 2.8, the default value for this option was C(root).
- The default has been removed so that the user defined in the image is used if no user is specified here.
- Corresponds to the C(--user) option of C(docker service create).
type: str
working_dir:
description:
- Path to the working directory.
- Corresponds to the C(--workdir) option of C(docker service create).
type: str
version_added: "2.8"
extends_documentation_fragment:
- docker
- docker.docker_py_2_documentation
requirements:
- "L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 2.0.2"
- "Docker API >= 1.24"
notes:
- "Images will only resolve to the latest digest when using Docker API >= 1.30 and Docker SDK for Python >= 3.2.0.
When using older versions use C(force_update: true) to trigger the swarm to resolve a new image."
'''
RETURN = '''
swarm_service:
returned: always
type: dict
description:
- Dictionary of variables representing the current state of the service.
Matches the module parameters format.
- Note that facts are not part of registered vars but accessible directly.
- Note that before Ansible 2.7.9, the return variable was documented as C(ansible_swarm_service),
while the module actually returned a variable called C(ansible_docker_service). The variable
was renamed to C(swarm_service) in both code and documentation for Ansible 2.7.9 and Ansible 2.8.0.
In Ansible 2.7.x, the old name C(ansible_docker_service) can still be used.
sample: '{
"args": [
"3600"
],
"command": [
"sleep"
],
"configs": null,
"constraints": [
"node.role == manager",
"engine.labels.operatingsystem == ubuntu 14.04"
],
"container_labels": null,
"dns": null,
"dns_options": null,
"dns_search": null,
"endpoint_mode": null,
"env": [
"ENVVAR1=envvar1",
"ENVVAR2=envvar2"
],
"force_update": null,
"groups": null,
"healthcheck": {
"interval": 90000000000,
"retries": 3,
"start_period": 30000000000,
"test": [
"CMD",
"curl",
"--fail",
"http://nginx.host.com"
],
"timeout": 10000000000
},
"healthcheck_disabled": false,
"hostname": null,
"hosts": null,
"image": "alpine:latest@sha256:b3dbf31b77fd99d9c08f780ce6f5282aba076d70a513a8be859d8d3a4d0c92b8",
"labels": {
"com.example.department": "Finance",
"com.example.description": "Accounting webapp"
},
"limit_cpu": 0.5,
"limit_memory": 52428800,
"log_driver": "fluentd",
"log_driver_options": {
"fluentd-address": "127.0.0.1:24224",
"fluentd-async-connect": "true",
"tag": "myservice"
},
"mode": "replicated",
"mounts": [
{
"readonly": false,
"source": "/tmp/",
"target": "/remote_tmp/",
"type": "bind",
"labels": null,
"propagation": null,
"no_copy": null,
"driver_config": null,
"tmpfs_size": null,
"tmpfs_mode": null
}
],
"networks": null,
"placement_preferences": [
{
"spread": "node.labels.mylabel"
}
],
"publish": null,
"read_only": null,
"replicas": 1,
"reserve_cpu": 0.25,
"reserve_memory": 20971520,
"restart_policy": "on-failure",
"restart_policy_attempts": 3,
"restart_policy_delay": 5000000000,
"restart_policy_window": 120000000000,
"secrets": null,
"stop_grace_period": null,
"stop_signal": null,
"tty": null,
"update_delay": 10000000000,
"update_failure_action": null,
"update_max_failure_ratio": null,
"update_monitor": null,
"update_order": "stop-first",
"update_parallelism": 2,
"user": null,
"working_dir": null
}'
changes:
returned: always
description:
- List of changed service attributes if a service has been altered, [] otherwise.
type: list
elements: str
sample: ['container_labels', 'replicas']
rebuilt:
returned: always
description:
    - True if the service has been recreated (removed and created).
type: bool
sample: True
'''
EXAMPLES = '''
- name: Set command and arguments
docker_swarm_service:
name: myservice
image: alpine
command: sleep
args:
- "3600"
- name: Set a bind mount
docker_swarm_service:
name: myservice
image: alpine
mounts:
- source: /tmp/
target: /remote_tmp/
type: bind
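# A hedged illustration, not part of the original examples: with the tmpfs
# source fix (https://github.com/ansible/ansible/pull/64637), a tmpfs mount
# is declared without a source, as tmpfs mounts take no source path.
- name: Set a tmpfs mount
  docker_swarm_service:
    name: myservice
    image: alpine
    mounts:
      - target: /cache/
        type: tmpfs
        tmpfs_size: 50M
        tmpfs_mode: 0600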
- name: Set service labels
docker_swarm_service:
name: myservice
image: alpine
labels:
com.example.description: "Accounting webapp"
com.example.department: "Finance"
- name: Set environment variables
docker_swarm_service:
name: myservice
image: alpine
env:
ENVVAR1: envvar1
ENVVAR2: envvar2
env_files:
- envs/common.env
- envs/apps/web.env
- name: Set fluentd logging
docker_swarm_service:
name: myservice
image: alpine
logging:
driver: fluentd
options:
fluentd-address: "127.0.0.1:24224"
fluentd-async-connect: "true"
tag: myservice
- name: Set restart policies
docker_swarm_service:
name: myservice
image: alpine
restart_config:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
- name: Set update config
docker_swarm_service:
name: myservice
image: alpine
update_config:
parallelism: 2
delay: 10s
order: stop-first
- name: Set rollback config
docker_swarm_service:
name: myservice
image: alpine
update_config:
failure_action: rollback
rollback_config:
parallelism: 2
delay: 10s
order: stop-first
- name: Set placement preferences
docker_swarm_service:
name: myservice
image: alpine:edge
placement:
preferences:
- spread: node.labels.mylabel
constraints:
- node.role == manager
- engine.labels.operatingsystem == ubuntu 14.04
- name: Set configs
docker_swarm_service:
name: myservice
image: alpine:edge
configs:
- config_name: myconfig_name
filename: "/tmp/config.txt"
- name: Set networks
docker_swarm_service:
name: myservice
image: alpine:edge
networks:
- mynetwork
- name: Set networks as a dictionary
docker_swarm_service:
name: myservice
image: alpine:edge
networks:
- name: "mynetwork"
aliases:
- "mynetwork_alias"
options:
foo: bar
- name: Set secrets
docker_swarm_service:
name: myservice
image: alpine:edge
secrets:
- secret_name: mysecret_name
filename: "/run/secrets/secret.txt"
- name: Start service with healthcheck
docker_swarm_service:
name: myservice
image: nginx:1.13
healthcheck:
# Check if nginx server is healthy by curl'ing the server.
# If this fails or timeouts, the healthcheck fails.
test: ["CMD", "curl", "--fail", "http://nginx.host.com"]
interval: 1m30s
timeout: 10s
retries: 3
start_period: 30s
- name: Configure service resources
docker_swarm_service:
name: myservice
image: alpine:edge
reservations:
cpus: 0.25
memory: 20M
limits:
cpus: 0.50
memory: 50M
- name: Remove service
docker_swarm_service:
name: myservice
state: absent
'''
import shlex
import time
import operator
import traceback
from distutils.version import LooseVersion
from ansible.module_utils.docker.common import (
AnsibleDockerClient,
DifferenceTracker,
DockerBaseClass,
convert_duration_to_nanosecond,
parse_healthcheck,
clean_dict_booleans_for_docker_api,
RequestException,
)
from ansible.module_utils.basic import human_to_bytes
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_text
try:
from docker import types
from docker.utils import (
parse_repository_tag,
parse_env_file,
format_environment,
)
from docker.errors import (
APIError,
DockerException,
NotFound,
)
except ImportError:
# missing Docker SDK for Python handled in ansible.module_utils.docker.common
pass
def get_docker_environment(env, env_files):
"""
Will return a list of "KEY=VALUE" items. Supplied env variable can
be either a list or a dictionary.
If environment files are combined with explicit environment variables,
the explicit environment variables take precedence.
"""
env_dict = {}
if env_files:
for env_file in env_files:
parsed_env_file = parse_env_file(env_file)
for name, value in parsed_env_file.items():
env_dict[name] = str(value)
if env is not None and isinstance(env, string_types):
env = env.split(',')
if env is not None and isinstance(env, dict):
for name, value in env.items():
if not isinstance(value, string_types):
raise ValueError(
'Non-string value found for env option. '
'Ambiguous env options must be wrapped in quotes to avoid YAML parsing. Key: %s' % name
)
env_dict[name] = str(value)
elif env is not None and isinstance(env, list):
for item in env:
try:
name, value = item.split('=', 1)
except ValueError:
raise ValueError('Invalid environment variable found in list, needs to be in format KEY=VALUE.')
env_dict[name] = value
elif env is not None:
raise ValueError(
'Invalid type for env %s (%s). Only list or dict allowed.' % (env, type(env))
)
env_list = format_environment(env_dict)
if not env_list:
if env is not None or env_files is not None:
return []
else:
return None
return sorted(env_list)
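# Illustrative usage (a sketch, not called by the module itself):
#   get_docker_environment({'FOO': 'bar'}, env_files=None)  ->  ['FOO=bar']
#   get_docker_environment(None, env_files=None)            ->  None
# Explicit env values take precedence over values parsed from env files.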
def get_docker_networks(networks, network_ids):
"""
Validate a list of network names or a list of network dictionaries.
Network names will be resolved to ids by using the network_ids mapping.
"""
if networks is None:
return None
parsed_networks = []
for network in networks:
if isinstance(network, string_types):
parsed_network = {'name': network}
elif isinstance(network, dict):
if 'name' not in network:
raise TypeError(
'"name" is required when networks are passed as dictionaries.'
)
name = network.pop('name')
parsed_network = {'name': name}
aliases = network.pop('aliases', None)
if aliases is not None:
if not isinstance(aliases, list):
raise TypeError('"aliases" network option is only allowed as a list')
if not all(
isinstance(alias, string_types) for alias in aliases
):
raise TypeError('Only strings are allowed as network aliases.')
parsed_network['aliases'] = aliases
options = network.pop('options', None)
if options is not None:
if not isinstance(options, dict):
raise TypeError('Only dict is allowed as network options.')
parsed_network['options'] = clean_dict_booleans_for_docker_api(options)
# Check if any invalid keys left
if network:
invalid_keys = ', '.join(network.keys())
raise TypeError(
'%s are not valid keys for the networks option' % invalid_keys
)
else:
raise TypeError(
'Only a list of strings or dictionaries are allowed to be passed as networks.'
)
network_name = parsed_network.pop('name')
try:
parsed_network['id'] = network_ids[network_name]
except KeyError as e:
raise ValueError('Could not find a network named: %s.' % e)
parsed_networks.append(parsed_network)
return parsed_networks or []
def get_nanoseconds_from_raw_option(name, value):
if value is None:
return None
elif isinstance(value, int):
return value
elif isinstance(value, string_types):
try:
return int(value)
except ValueError:
return convert_duration_to_nanosecond(value)
else:
raise ValueError(
'Invalid type for %s %s (%s). Only string or int allowed.'
% (name, value, type(value))
)
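# Illustrative usage (a sketch, not called by the module itself):
#   get_nanoseconds_from_raw_option('update_delay', '1m30s')     ->  90000000000
#   get_nanoseconds_from_raw_option('update_delay', 5000000000)  ->  5000000000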
def get_value(key, values, default=None):
value = values.get(key)
return value if value is not None else default
def has_dict_changed(new_dict, old_dict):
"""
Check if new_dict has differences compared to old_dict while
ignoring keys in old_dict which are None in new_dict.
"""
if new_dict is None:
return False
if not new_dict and old_dict:
return True
if not old_dict and new_dict:
return True
defined_options = dict(
(option, value) for option, value in new_dict.items()
if value is not None
)
for option, value in defined_options.items():
old_value = old_dict.get(option)
if not value and not old_value:
continue
if value != old_value:
return True
return False
def has_list_changed(new_list, old_list):
"""
    Check whether two lists have differences.
"""
if new_list is None:
return False
old_list = old_list or []
if len(new_list) != len(old_list):
return True
for new_item, old_item in zip(new_list, old_list):
is_same_type = type(new_item) == type(old_item)
if not is_same_type:
return True
if isinstance(new_item, dict):
if has_dict_changed(new_item, old_item):
return True
elif new_item != old_item:
return True
return False
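# Illustrative behaviour (a sketch, not called by the module itself):
#   has_list_changed(None, ['x'])                                ->  False (option not specified)
#   has_list_changed(['x'], ['x', 'y'])                          ->  True (length differs)
#   has_list_changed([{'a': 1, 'b': None}], [{'a': 1, 'b': 2}])  ->  False (None values are ignored)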
class DockerService(DockerBaseClass):
def __init__(self, docker_api_version, docker_py_version):
super(DockerService, self).__init__()
self.image = ""
self.command = None
self.args = None
self.endpoint_mode = None
self.dns = None
self.healthcheck = None
self.healthcheck_disabled = None
self.hostname = None
self.hosts = None
self.tty = None
self.dns_search = None
self.dns_options = None
self.env = None
self.force_update = None
self.groups = None
self.log_driver = None
self.log_driver_options = None
self.labels = None
self.container_labels = None
self.limit_cpu = None
self.limit_memory = None
self.reserve_cpu = None
self.reserve_memory = None
self.mode = "replicated"
self.user = None
self.mounts = None
self.configs = None
self.secrets = None
self.constraints = None
self.networks = None
self.stop_grace_period = None
self.stop_signal = None
self.publish = None
self.placement_preferences = None
self.replicas = -1
self.service_id = False
self.service_version = False
self.read_only = None
self.restart_policy = None
self.restart_policy_attempts = None
self.restart_policy_delay = None
self.restart_policy_window = None
self.rollback_config = None
self.update_delay = None
self.update_parallelism = None
self.update_failure_action = None
self.update_monitor = None
self.update_max_failure_ratio = None
self.update_order = None
self.working_dir = None
self.docker_api_version = docker_api_version
self.docker_py_version = docker_py_version
def get_facts(self):
return {
'image': self.image,
'mounts': self.mounts,
'configs': self.configs,
'networks': self.networks,
'command': self.command,
'args': self.args,
'tty': self.tty,
'dns': self.dns,
'dns_search': self.dns_search,
'dns_options': self.dns_options,
'healthcheck': self.healthcheck,
'healthcheck_disabled': self.healthcheck_disabled,
'hostname': self.hostname,
'hosts': self.hosts,
'env': self.env,
'force_update': self.force_update,
'groups': self.groups,
'log_driver': self.log_driver,
'log_driver_options': self.log_driver_options,
'publish': self.publish,
'constraints': self.constraints,
'placement_preferences': self.placement_preferences,
'labels': self.labels,
'container_labels': self.container_labels,
'mode': self.mode,
'replicas': self.replicas,
'endpoint_mode': self.endpoint_mode,
'restart_policy': self.restart_policy,
'secrets': self.secrets,
'stop_grace_period': self.stop_grace_period,
'stop_signal': self.stop_signal,
'limit_cpu': self.limit_cpu,
'limit_memory': self.limit_memory,
'read_only': self.read_only,
'reserve_cpu': self.reserve_cpu,
'reserve_memory': self.reserve_memory,
'restart_policy_delay': self.restart_policy_delay,
'restart_policy_attempts': self.restart_policy_attempts,
'restart_policy_window': self.restart_policy_window,
'rollback_config': self.rollback_config,
'update_delay': self.update_delay,
'update_parallelism': self.update_parallelism,
'update_failure_action': self.update_failure_action,
'update_monitor': self.update_monitor,
'update_max_failure_ratio': self.update_max_failure_ratio,
'update_order': self.update_order,
'user': self.user,
'working_dir': self.working_dir,
}
@property
def can_update_networks(self):
# Before Docker API 1.29 adding/removing networks was not supported
return (
self.docker_api_version >= LooseVersion('1.29') and
self.docker_py_version >= LooseVersion('2.7')
)
@property
def can_use_task_template_networks(self):
        # Starting with Docker API 1.25, attaching networks to the TaskTemplate is preferred over attaching them to the Spec
return (
self.docker_api_version >= LooseVersion('1.25') and
self.docker_py_version >= LooseVersion('2.7')
)
@staticmethod
def get_restart_config_from_ansible_params(params):
restart_config = params['restart_config'] or {}
condition = get_value(
'condition',
restart_config,
default=params['restart_policy']
)
delay = get_value(
'delay',
restart_config,
default=params['restart_policy_delay']
)
delay = get_nanoseconds_from_raw_option(
'restart_policy_delay',
delay
)
max_attempts = get_value(
'max_attempts',
restart_config,
default=params['restart_policy_attempts']
)
window = get_value(
'window',
restart_config,
default=params['restart_policy_window']
)
window = get_nanoseconds_from_raw_option(
'restart_policy_window',
window
)
return {
'restart_policy': condition,
'restart_policy_delay': delay,
'restart_policy_attempts': max_attempts,
'restart_policy_window': window
}
@staticmethod
def get_update_config_from_ansible_params(params):
update_config = params['update_config'] or {}
parallelism = get_value(
'parallelism',
update_config,
default=params['update_parallelism']
)
delay = get_value(
'delay',
update_config,
default=params['update_delay']
)
delay = get_nanoseconds_from_raw_option(
'update_delay',
delay
)
failure_action = get_value(
'failure_action',
update_config,
default=params['update_failure_action']
)
monitor = get_value(
'monitor',
update_config,
default=params['update_monitor']
)
monitor = get_nanoseconds_from_raw_option(
'update_monitor',
monitor
)
max_failure_ratio = get_value(
'max_failure_ratio',
update_config,
default=params['update_max_failure_ratio']
)
order = get_value(
'order',
update_config,
default=params['update_order']
)
return {
'update_parallelism': parallelism,
'update_delay': delay,
'update_failure_action': failure_action,
'update_monitor': monitor,
'update_max_failure_ratio': max_failure_ratio,
'update_order': order
}
@staticmethod
def get_rollback_config_from_ansible_params(params):
if params['rollback_config'] is None:
return None
rollback_config = params['rollback_config'] or {}
delay = get_nanoseconds_from_raw_option(
'rollback_config.delay',
rollback_config.get('delay')
)
monitor = get_nanoseconds_from_raw_option(
'rollback_config.monitor',
rollback_config.get('monitor')
)
return {
'parallelism': rollback_config.get('parallelism'),
'delay': delay,
'failure_action': rollback_config.get('failure_action'),
'monitor': monitor,
'max_failure_ratio': rollback_config.get('max_failure_ratio'),
'order': rollback_config.get('order'),
}
@staticmethod
def get_logging_from_ansible_params(params):
logging_config = params['logging'] or {}
driver = get_value(
'driver',
logging_config,
default=params['log_driver']
)
options = get_value(
'options',
logging_config,
default=params['log_driver_options']
)
return {
'log_driver': driver,
'log_driver_options': options,
}
@staticmethod
def get_limits_from_ansible_params(params):
limits = params['limits'] or {}
cpus = get_value(
'cpus',
limits,
default=params['limit_cpu']
)
memory = get_value(
'memory',
limits,
default=params['limit_memory']
)
if memory is not None:
try:
memory = human_to_bytes(memory)
except ValueError as exc:
raise Exception('Failed to convert limit_memory to bytes: %s' % exc)
return {
'limit_cpu': cpus,
'limit_memory': memory,
}
@staticmethod
def get_reservations_from_ansible_params(params):
reservations = params['reservations'] or {}
cpus = get_value(
'cpus',
reservations,
default=params['reserve_cpu']
)
memory = get_value(
'memory',
reservations,
default=params['reserve_memory']
)
if memory is not None:
try:
memory = human_to_bytes(memory)
except ValueError as exc:
raise Exception('Failed to convert reserve_memory to bytes: %s' % exc)
return {
'reserve_cpu': cpus,
'reserve_memory': memory,
}
@staticmethod
def get_placement_from_ansible_params(params):
placement = params['placement'] or {}
constraints = get_value(
'constraints',
placement,
default=params['constraints']
)
preferences = placement.get('preferences')
return {
'constraints': constraints,
'placement_preferences': preferences,
}
@classmethod
def from_ansible_params(
cls,
ap,
old_service,
image_digest,
secret_ids,
config_ids,
network_ids,
docker_api_version,
docker_py_version,
):
s = DockerService(docker_api_version, docker_py_version)
s.image = image_digest
s.args = ap['args']
s.endpoint_mode = ap['endpoint_mode']
s.dns = ap['dns']
s.dns_search = ap['dns_search']
s.dns_options = ap['dns_options']
s.healthcheck, s.healthcheck_disabled = parse_healthcheck(ap['healthcheck'])
s.hostname = ap['hostname']
s.hosts = ap['hosts']
s.tty = ap['tty']
s.labels = ap['labels']
s.container_labels = ap['container_labels']
s.mode = ap['mode']
s.stop_signal = ap['stop_signal']
s.user = ap['user']
s.working_dir = ap['working_dir']
s.read_only = ap['read_only']
s.networks = get_docker_networks(ap['networks'], network_ids)
s.command = ap['command']
if isinstance(s.command, string_types):
s.command = shlex.split(s.command)
elif isinstance(s.command, list):
invalid_items = [
(index, item)
for index, item in enumerate(s.command)
if not isinstance(item, string_types)
]
if invalid_items:
errors = ', '.join(
[
'%s (%s) at index %s' % (item, type(item), index)
for index, item in invalid_items
]
)
raise Exception(
'All items in a command list need to be strings. '
'Check quoting. Invalid items: %s.'
% errors
)
elif s.command is not None:
raise ValueError(
'Invalid type for command %s (%s). '
'Only string or list allowed. Check quoting.'
% (s.command, type(s.command))
)
s.env = get_docker_environment(ap['env'], ap['env_files'])
s.rollback_config = cls.get_rollback_config_from_ansible_params(ap)
update_config = cls.get_update_config_from_ansible_params(ap)
for key, value in update_config.items():
setattr(s, key, value)
restart_config = cls.get_restart_config_from_ansible_params(ap)
for key, value in restart_config.items():
setattr(s, key, value)
logging_config = cls.get_logging_from_ansible_params(ap)
for key, value in logging_config.items():
setattr(s, key, value)
limits = cls.get_limits_from_ansible_params(ap)
for key, value in limits.items():
setattr(s, key, value)
reservations = cls.get_reservations_from_ansible_params(ap)
for key, value in reservations.items():
setattr(s, key, value)
placement = cls.get_placement_from_ansible_params(ap)
for key, value in placement.items():
setattr(s, key, value)
if ap['stop_grace_period'] is not None:
s.stop_grace_period = convert_duration_to_nanosecond(ap['stop_grace_period'])
if ap['force_update']:
s.force_update = int(str(time.time()).replace('.', ''))
if ap['groups'] is not None:
# In case integers are passed as groups, we need to convert them to
# strings as docker internally treats them as strings.
s.groups = [str(g) for g in ap['groups']]
if ap['replicas'] == -1:
if old_service:
s.replicas = old_service.replicas
else:
s.replicas = 1
else:
s.replicas = ap['replicas']
if ap['publish'] is not None:
s.publish = []
for param_p in ap['publish']:
service_p = {}
service_p['protocol'] = param_p['protocol']
service_p['mode'] = param_p['mode']
service_p['published_port'] = param_p['published_port']
service_p['target_port'] = param_p['target_port']
s.publish.append(service_p)
if ap['mounts'] is not None:
s.mounts = []
for param_m in ap['mounts']:
service_m = {}
service_m['readonly'] = param_m['readonly']
service_m['type'] = param_m['type']
service_m['source'] = param_m['source']
service_m['target'] = param_m['target']
service_m['labels'] = param_m['labels']
service_m['no_copy'] = param_m['no_copy']
service_m['propagation'] = param_m['propagation']
service_m['driver_config'] = param_m['driver_config']
service_m['tmpfs_mode'] = param_m['tmpfs_mode']
tmpfs_size = param_m['tmpfs_size']
if tmpfs_size is not None:
try:
tmpfs_size = human_to_bytes(tmpfs_size)
except ValueError as exc:
raise ValueError(
'Failed to convert tmpfs_size to bytes: %s' % exc
)
service_m['tmpfs_size'] = tmpfs_size
s.mounts.append(service_m)
if ap['configs'] is not None:
s.configs = []
for param_m in ap['configs']:
service_c = {}
config_name = param_m['config_name']
service_c['config_id'] = param_m['config_id'] or config_ids[config_name]
service_c['config_name'] = config_name
service_c['filename'] = param_m['filename'] or config_name
service_c['uid'] = param_m['uid']
service_c['gid'] = param_m['gid']
service_c['mode'] = param_m['mode']
s.configs.append(service_c)
if ap['secrets'] is not None:
s.secrets = []
for param_m in ap['secrets']:
service_s = {}
secret_name = param_m['secret_name']
service_s['secret_id'] = param_m['secret_id'] or secret_ids[secret_name]
service_s['secret_name'] = secret_name
service_s['filename'] = param_m['filename'] or secret_name
service_s['uid'] = param_m['uid']
service_s['gid'] = param_m['gid']
service_s['mode'] = param_m['mode']
s.secrets.append(service_s)
return s
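    # Compare the desired service definition (self) with the existing one
    # (os) and return (changed, differences, needs_rebuild, force_update).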
def compare(self, os):
differences = DifferenceTracker()
needs_rebuild = False
force_update = False
if self.endpoint_mode is not None and self.endpoint_mode != os.endpoint_mode:
differences.add('endpoint_mode', parameter=self.endpoint_mode, active=os.endpoint_mode)
if self.env is not None and self.env != (os.env or []):
differences.add('env', parameter=self.env, active=os.env)
if self.log_driver is not None and self.log_driver != os.log_driver:
differences.add('log_driver', parameter=self.log_driver, active=os.log_driver)
if self.log_driver_options is not None and self.log_driver_options != (os.log_driver_options or {}):
differences.add('log_opt', parameter=self.log_driver_options, active=os.log_driver_options)
if self.mode != os.mode:
needs_rebuild = True
differences.add('mode', parameter=self.mode, active=os.mode)
if has_list_changed(self.mounts, os.mounts):
differences.add('mounts', parameter=self.mounts, active=os.mounts)
if has_list_changed(self.configs, os.configs):
differences.add('configs', parameter=self.configs, active=os.configs)
if has_list_changed(self.secrets, os.secrets):
differences.add('secrets', parameter=self.secrets, active=os.secrets)
if has_list_changed(self.networks, os.networks):
differences.add('networks', parameter=self.networks, active=os.networks)
needs_rebuild = not self.can_update_networks
if self.replicas != os.replicas:
differences.add('replicas', parameter=self.replicas, active=os.replicas)
if self.command is not None and self.command != (os.command or []):
differences.add('command', parameter=self.command, active=os.command)
if self.args is not None and self.args != (os.args or []):
differences.add('args', parameter=self.args, active=os.args)
if self.constraints is not None and self.constraints != (os.constraints or []):
differences.add('constraints', parameter=self.constraints, active=os.constraints)
if self.placement_preferences is not None and self.placement_preferences != (os.placement_preferences or []):
differences.add('placement_preferences', parameter=self.placement_preferences, active=os.placement_preferences)
if self.groups is not None and self.groups != (os.groups or []):
differences.add('groups', parameter=self.groups, active=os.groups)
if self.labels is not None and self.labels != (os.labels or {}):
differences.add('labels', parameter=self.labels, active=os.labels)
if self.limit_cpu is not None and self.limit_cpu != os.limit_cpu:
differences.add('limit_cpu', parameter=self.limit_cpu, active=os.limit_cpu)
if self.limit_memory is not None and self.limit_memory != os.limit_memory:
differences.add('limit_memory', parameter=self.limit_memory, active=os.limit_memory)
if self.reserve_cpu is not None and self.reserve_cpu != os.reserve_cpu:
differences.add('reserve_cpu', parameter=self.reserve_cpu, active=os.reserve_cpu)
if self.reserve_memory is not None and self.reserve_memory != os.reserve_memory:
differences.add('reserve_memory', parameter=self.reserve_memory, active=os.reserve_memory)
if self.container_labels is not None and self.container_labels != (os.container_labels or {}):
differences.add('container_labels', parameter=self.container_labels, active=os.container_labels)
if self.stop_signal is not None and self.stop_signal != os.stop_signal:
differences.add('stop_signal', parameter=self.stop_signal, active=os.stop_signal)
if self.stop_grace_period is not None and self.stop_grace_period != os.stop_grace_period:
differences.add('stop_grace_period', parameter=self.stop_grace_period, active=os.stop_grace_period)
if self.has_publish_changed(os.publish):
differences.add('publish', parameter=self.publish, active=os.publish)
if self.read_only is not None and self.read_only != os.read_only:
differences.add('read_only', parameter=self.read_only, active=os.read_only)
if self.restart_policy is not None and self.restart_policy != os.restart_policy:
differences.add('restart_policy', parameter=self.restart_policy, active=os.restart_policy)
if self.restart_policy_attempts is not None and self.restart_policy_attempts != os.restart_policy_attempts:
differences.add('restart_policy_attempts', parameter=self.restart_policy_attempts, active=os.restart_policy_attempts)
if self.restart_policy_delay is not None and self.restart_policy_delay != os.restart_policy_delay:
differences.add('restart_policy_delay', parameter=self.restart_policy_delay, active=os.restart_policy_delay)
if self.restart_policy_window is not None and self.restart_policy_window != os.restart_policy_window:
differences.add('restart_policy_window', parameter=self.restart_policy_window, active=os.restart_policy_window)
if has_dict_changed(self.rollback_config, os.rollback_config):
differences.add('rollback_config', parameter=self.rollback_config, active=os.rollback_config)
if self.update_delay is not None and self.update_delay != os.update_delay:
differences.add('update_delay', parameter=self.update_delay, active=os.update_delay)
if self.update_parallelism is not None and self.update_parallelism != os.update_parallelism:
differences.add('update_parallelism', parameter=self.update_parallelism, active=os.update_parallelism)
if self.update_failure_action is not None and self.update_failure_action != os.update_failure_action:
differences.add('update_failure_action', parameter=self.update_failure_action, active=os.update_failure_action)
if self.update_monitor is not None and self.update_monitor != os.update_monitor:
differences.add('update_monitor', parameter=self.update_monitor, active=os.update_monitor)
if self.update_max_failure_ratio is not None and self.update_max_failure_ratio != os.update_max_failure_ratio:
differences.add('update_max_failure_ratio', parameter=self.update_max_failure_ratio, active=os.update_max_failure_ratio)
if self.update_order is not None and self.update_order != os.update_order:
differences.add('update_order', parameter=self.update_order, active=os.update_order)
has_image_changed, change = self.has_image_changed(os.image)
if has_image_changed:
differences.add('image', parameter=self.image, active=change)
if self.user and self.user != os.user:
differences.add('user', parameter=self.user, active=os.user)
if self.dns is not None and self.dns != (os.dns or []):
differences.add('dns', parameter=self.dns, active=os.dns)
if self.dns_search is not None and self.dns_search != (os.dns_search or []):
differences.add('dns_search', parameter=self.dns_search, active=os.dns_search)
if self.dns_options is not None and self.dns_options != (os.dns_options or []):
differences.add('dns_options', parameter=self.dns_options, active=os.dns_options)
if self.has_healthcheck_changed(os):
differences.add('healthcheck', parameter=self.healthcheck, active=os.healthcheck)
if self.hostname is not None and self.hostname != os.hostname:
differences.add('hostname', parameter=self.hostname, active=os.hostname)
if self.hosts is not None and self.hosts != (os.hosts or {}):
differences.add('hosts', parameter=self.hosts, active=os.hosts)
if self.tty is not None and self.tty != os.tty:
differences.add('tty', parameter=self.tty, active=os.tty)
if self.working_dir is not None and self.working_dir != os.working_dir:
differences.add('working_dir', parameter=self.working_dir, active=os.working_dir)
if self.force_update:
force_update = True
return not differences.empty or force_update, differences, needs_rebuild, force_update
    def has_healthcheck_changed(self, old_service):
        if self.healthcheck_disabled is False and self.healthcheck is None:
            return False
        if self.healthcheck_disabled and old_service.healthcheck is None:
            return False
        return self.healthcheck != old_service.healthcheck
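    # Publish lists are compared independently of order; the 'mode' key is
    # ignored for entries where it was not explicitly set.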
def has_publish_changed(self, old_publish):
if self.publish is None:
return False
old_publish = old_publish or []
if len(self.publish) != len(old_publish):
return True
publish_sorter = operator.itemgetter('published_port', 'target_port', 'protocol')
publish = sorted(self.publish, key=publish_sorter)
old_publish = sorted(old_publish, key=publish_sorter)
for publish_item, old_publish_item in zip(publish, old_publish):
ignored_keys = set()
if not publish_item.get('mode'):
ignored_keys.add('mode')
# Create copies of publish_item dicts where keys specified in ignored_keys are left out
filtered_old_publish_item = dict(
(k, v) for k, v in old_publish_item.items() if k not in ignored_keys
)
filtered_publish_item = dict(
(k, v) for k, v in publish_item.items() if k not in ignored_keys
)
if filtered_publish_item != filtered_old_publish_item:
return True
return False
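    # When the desired image is not pinned to a digest, compare only
    # repository:tag and ignore the digest of the currently deployed image.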
def has_image_changed(self, old_image):
if '@' not in self.image:
old_image = old_image.split('@')[0]
return self.image != old_image, old_image
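    # The build_* methods below translate the validated parameters into the
    # docker-py types consumed by create_service() and update_service().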
def build_container_spec(self):
mounts = None
if self.mounts is not None:
mounts = []
for mount_config in self.mounts:
mount_options = {
'target': 'target',
'source': 'source',
'type': 'type',
'readonly': 'read_only',
'propagation': 'propagation',
'labels': 'labels',
'no_copy': 'no_copy',
'driver_config': 'driver_config',
'tmpfs_size': 'tmpfs_size',
'tmpfs_mode': 'tmpfs_mode'
}
mount_args = {}
for option, mount_arg in mount_options.items():
value = mount_config.get(option)
if value is not None:
mount_args[mount_arg] = value
mounts.append(types.Mount(**mount_args))
configs = None
if self.configs is not None:
configs = []
for config_config in self.configs:
config_args = {
'config_id': config_config['config_id'],
'config_name': config_config['config_name']
}
filename = config_config.get('filename')
if filename:
config_args['filename'] = filename
uid = config_config.get('uid')
if uid:
config_args['uid'] = uid
gid = config_config.get('gid')
if gid:
config_args['gid'] = gid
mode = config_config.get('mode')
if mode:
config_args['mode'] = mode
configs.append(types.ConfigReference(**config_args))
secrets = None
if self.secrets is not None:
secrets = []
for secret_config in self.secrets:
secret_args = {
'secret_id': secret_config['secret_id'],
'secret_name': secret_config['secret_name']
}
filename = secret_config.get('filename')
if filename:
secret_args['filename'] = filename
uid = secret_config.get('uid')
if uid:
secret_args['uid'] = uid
gid = secret_config.get('gid')
if gid:
secret_args['gid'] = gid
mode = secret_config.get('mode')
if mode:
secret_args['mode'] = mode
secrets.append(types.SecretReference(**secret_args))
dns_config_args = {}
if self.dns is not None:
dns_config_args['nameservers'] = self.dns
if self.dns_search is not None:
dns_config_args['search'] = self.dns_search
if self.dns_options is not None:
dns_config_args['options'] = self.dns_options
dns_config = types.DNSConfig(**dns_config_args) if dns_config_args else None
container_spec_args = {}
if self.command is not None:
container_spec_args['command'] = self.command
if self.args is not None:
container_spec_args['args'] = self.args
if self.env is not None:
container_spec_args['env'] = self.env
if self.user is not None:
container_spec_args['user'] = self.user
if self.container_labels is not None:
container_spec_args['labels'] = self.container_labels
if self.healthcheck is not None:
container_spec_args['healthcheck'] = types.Healthcheck(**self.healthcheck)
if self.hostname is not None:
container_spec_args['hostname'] = self.hostname
if self.hosts is not None:
container_spec_args['hosts'] = self.hosts
if self.read_only is not None:
container_spec_args['read_only'] = self.read_only
if self.stop_grace_period is not None:
container_spec_args['stop_grace_period'] = self.stop_grace_period
if self.stop_signal is not None:
container_spec_args['stop_signal'] = self.stop_signal
if self.tty is not None:
container_spec_args['tty'] = self.tty
if self.groups is not None:
container_spec_args['groups'] = self.groups
if self.working_dir is not None:
container_spec_args['workdir'] = self.working_dir
if secrets is not None:
container_spec_args['secrets'] = secrets
if mounts is not None:
container_spec_args['mounts'] = mounts
if dns_config is not None:
container_spec_args['dns_config'] = dns_config
if configs is not None:
container_spec_args['configs'] = configs
return types.ContainerSpec(self.image, **container_spec_args)
def build_placement(self):
placement_args = {}
if self.constraints is not None:
placement_args['constraints'] = self.constraints
if self.placement_preferences is not None:
placement_args['preferences'] = [
{key.title(): {'SpreadDescriptor': value}}
for preference in self.placement_preferences
for key, value in preference.items()
]
return types.Placement(**placement_args) if placement_args else None
def build_update_config(self):
update_config_args = {}
if self.update_parallelism is not None:
update_config_args['parallelism'] = self.update_parallelism
if self.update_delay is not None:
update_config_args['delay'] = self.update_delay
if self.update_failure_action is not None:
update_config_args['failure_action'] = self.update_failure_action
if self.update_monitor is not None:
update_config_args['monitor'] = self.update_monitor
if self.update_max_failure_ratio is not None:
update_config_args['max_failure_ratio'] = self.update_max_failure_ratio
if self.update_order is not None:
update_config_args['order'] = self.update_order
return types.UpdateConfig(**update_config_args) if update_config_args else None
def build_log_driver(self):
log_driver_args = {}
if self.log_driver is not None:
log_driver_args['name'] = self.log_driver
if self.log_driver_options is not None:
log_driver_args['options'] = self.log_driver_options
return types.DriverConfig(**log_driver_args) if log_driver_args else None
def build_restart_policy(self):
restart_policy_args = {}
if self.restart_policy is not None:
restart_policy_args['condition'] = self.restart_policy
if self.restart_policy_delay is not None:
restart_policy_args['delay'] = self.restart_policy_delay
if self.restart_policy_attempts is not None:
restart_policy_args['max_attempts'] = self.restart_policy_attempts
if self.restart_policy_window is not None:
restart_policy_args['window'] = self.restart_policy_window
return types.RestartPolicy(**restart_policy_args) if restart_policy_args else None
def build_rollback_config(self):
if self.rollback_config is None:
return None
rollback_config_options = [
'parallelism',
'delay',
'failure_action',
'monitor',
'max_failure_ratio',
'order',
]
rollback_config_args = {}
for option in rollback_config_options:
value = self.rollback_config.get(option)
if value is not None:
rollback_config_args[option] = value
return types.RollbackConfig(**rollback_config_args) if rollback_config_args else None
def build_resources(self):
resources_args = {}
if self.limit_cpu is not None:
resources_args['cpu_limit'] = int(self.limit_cpu * 1000000000.0)
if self.limit_memory is not None:
resources_args['mem_limit'] = self.limit_memory
if self.reserve_cpu is not None:
resources_args['cpu_reservation'] = int(self.reserve_cpu * 1000000000.0)
if self.reserve_memory is not None:
resources_args['mem_reservation'] = self.reserve_memory
return types.Resources(**resources_args) if resources_args else None
def build_task_template(self, container_spec, placement=None):
log_driver = self.build_log_driver()
restart_policy = self.build_restart_policy()
resources = self.build_resources()
task_template_args = {}
if placement is not None:
task_template_args['placement'] = placement
if log_driver is not None:
task_template_args['log_driver'] = log_driver
if restart_policy is not None:
task_template_args['restart_policy'] = restart_policy
if resources is not None:
task_template_args['resources'] = resources
if self.force_update:
task_template_args['force_update'] = self.force_update
if self.can_use_task_template_networks:
networks = self.build_networks()
if networks:
task_template_args['networks'] = networks
return types.TaskTemplate(container_spec=container_spec, **task_template_args)
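    # A global service runs one task per node and must not specify replicas.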
def build_service_mode(self):
if self.mode == 'global':
self.replicas = None
return types.ServiceMode(self.mode, replicas=self.replicas)
def build_networks(self):
networks = None
if self.networks is not None:
networks = []
for network in self.networks:
docker_network = {'Target': network['id']}
if 'aliases' in network:
docker_network['Aliases'] = network['aliases']
if 'options' in network:
docker_network['DriverOpts'] = network['options']
networks.append(docker_network)
return networks
def build_endpoint_spec(self):
endpoint_spec_args = {}
if self.publish is not None:
ports = []
for port in self.publish:
port_spec = {
'Protocol': port['protocol'],
'PublishedPort': port['published_port'],
'TargetPort': port['target_port']
}
if port.get('mode'):
port_spec['PublishMode'] = port['mode']
ports.append(port_spec)
endpoint_spec_args['ports'] = ports
if self.endpoint_mode is not None:
endpoint_spec_args['mode'] = self.endpoint_mode
return types.EndpointSpec(**endpoint_spec_args) if endpoint_spec_args else None
def build_docker_service(self):
container_spec = self.build_container_spec()
placement = self.build_placement()
task_template = self.build_task_template(container_spec, placement)
update_config = self.build_update_config()
rollback_config = self.build_rollback_config()
service_mode = self.build_service_mode()
endpoint_spec = self.build_endpoint_spec()
service = {'task_template': task_template, 'mode': service_mode}
if update_config:
service['update_config'] = update_config
if rollback_config:
service['rollback_config'] = rollback_config
if endpoint_spec:
service['endpoint_spec'] = endpoint_spec
if self.labels:
service['labels'] = self.labels
if not self.can_use_task_template_networks:
networks = self.build_networks()
if networks:
service['networks'] = networks
return service
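# Thin layer over the Docker API client: it reads the current service state,
# lets DockerService compute differences and applies the resulting
# create/update/remove operations, resolving secret/config/network names on
# the way.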
class DockerServiceManager(object):
def __init__(self, client):
self.client = client
self.retries = 2
self.diff_tracker = None
def get_service(self, name):
try:
raw_data = self.client.inspect_service(name)
except NotFound:
return None
ds = DockerService(self.client.docker_api_version, self.client.docker_py_version)
task_template_data = raw_data['Spec']['TaskTemplate']
ds.image = task_template_data['ContainerSpec']['Image']
ds.user = task_template_data['ContainerSpec'].get('User')
ds.env = task_template_data['ContainerSpec'].get('Env')
ds.command = task_template_data['ContainerSpec'].get('Command')
ds.args = task_template_data['ContainerSpec'].get('Args')
ds.groups = task_template_data['ContainerSpec'].get('Groups')
ds.stop_grace_period = task_template_data['ContainerSpec'].get('StopGracePeriod')
ds.stop_signal = task_template_data['ContainerSpec'].get('StopSignal')
ds.working_dir = task_template_data['ContainerSpec'].get('Dir')
ds.read_only = task_template_data['ContainerSpec'].get('ReadOnly')
healthcheck_data = task_template_data['ContainerSpec'].get('Healthcheck')
if healthcheck_data:
options = ['test', 'interval', 'timeout', 'start_period', 'retries']
healthcheck = dict(
(key.lower(), value) for key, value in healthcheck_data.items()
if value is not None and key.lower() in options
)
ds.healthcheck = healthcheck
update_config_data = raw_data['Spec'].get('UpdateConfig')
if update_config_data:
ds.update_delay = update_config_data.get('Delay')
ds.update_parallelism = update_config_data.get('Parallelism')
ds.update_failure_action = update_config_data.get('FailureAction')
ds.update_monitor = update_config_data.get('Monitor')
ds.update_max_failure_ratio = update_config_data.get('MaxFailureRatio')
ds.update_order = update_config_data.get('Order')
rollback_config_data = raw_data['Spec'].get('RollbackConfig')
if rollback_config_data:
ds.rollback_config = {
'parallelism': rollback_config_data.get('Parallelism'),
'delay': rollback_config_data.get('Delay'),
'failure_action': rollback_config_data.get('FailureAction'),
'monitor': rollback_config_data.get('Monitor'),
'max_failure_ratio': rollback_config_data.get('MaxFailureRatio'),
'order': rollback_config_data.get('Order'),
}
dns_config = task_template_data['ContainerSpec'].get('DNSConfig')
if dns_config:
ds.dns = dns_config.get('Nameservers')
ds.dns_search = dns_config.get('Search')
ds.dns_options = dns_config.get('Options')
ds.hostname = task_template_data['ContainerSpec'].get('Hostname')
hosts = task_template_data['ContainerSpec'].get('Hosts')
if hosts:
hosts = [
list(reversed(host.split(":", 1)))
if ":" in host
else host.split(" ", 1)
for host in hosts
]
ds.hosts = dict((hostname, ip) for ip, hostname in hosts)
ds.tty = task_template_data['ContainerSpec'].get('TTY')
placement = task_template_data.get('Placement')
if placement:
ds.constraints = placement.get('Constraints')
placement_preferences = []
for preference in placement.get('Preferences', []):
placement_preferences.append(
dict(
(key.lower(), value['SpreadDescriptor'])
for key, value in preference.items()
)
)
ds.placement_preferences = placement_preferences or None
restart_policy_data = task_template_data.get('RestartPolicy')
if restart_policy_data:
ds.restart_policy = restart_policy_data.get('Condition')
ds.restart_policy_delay = restart_policy_data.get('Delay')
ds.restart_policy_attempts = restart_policy_data.get('MaxAttempts')
ds.restart_policy_window = restart_policy_data.get('Window')
raw_data_endpoint_spec = raw_data['Spec'].get('EndpointSpec')
if raw_data_endpoint_spec:
ds.endpoint_mode = raw_data_endpoint_spec.get('Mode')
raw_data_ports = raw_data_endpoint_spec.get('Ports')
if raw_data_ports:
ds.publish = []
for port in raw_data_ports:
ds.publish.append({
'protocol': port['Protocol'],
'mode': port.get('PublishMode', None),
'published_port': int(port['PublishedPort']),
'target_port': int(port['TargetPort'])
})
raw_data_limits = task_template_data.get('Resources', {}).get('Limits')
if raw_data_limits:
raw_cpu_limits = raw_data_limits.get('NanoCPUs')
if raw_cpu_limits:
ds.limit_cpu = float(raw_cpu_limits) / 1000000000
raw_memory_limits = raw_data_limits.get('MemoryBytes')
if raw_memory_limits:
ds.limit_memory = int(raw_memory_limits)
raw_data_reservations = task_template_data.get('Resources', {}).get('Reservations')
if raw_data_reservations:
raw_cpu_reservations = raw_data_reservations.get('NanoCPUs')
if raw_cpu_reservations:
ds.reserve_cpu = float(raw_cpu_reservations) / 1000000000
raw_memory_reservations = raw_data_reservations.get('MemoryBytes')
if raw_memory_reservations:
ds.reserve_memory = int(raw_memory_reservations)
ds.labels = raw_data['Spec'].get('Labels')
ds.log_driver = task_template_data.get('LogDriver', {}).get('Name')
ds.log_driver_options = task_template_data.get('LogDriver', {}).get('Options')
ds.container_labels = task_template_data['ContainerSpec'].get('Labels')
mode = raw_data['Spec']['Mode']
if 'Replicated' in mode.keys():
ds.mode = to_text('replicated', encoding='utf-8')
ds.replicas = mode['Replicated']['Replicas']
elif 'Global' in mode.keys():
ds.mode = 'global'
else:
raise Exception('Unknown service mode: %s' % mode)
raw_data_mounts = task_template_data['ContainerSpec'].get('Mounts')
if raw_data_mounts:
ds.mounts = []
for mount_data in raw_data_mounts:
bind_options = mount_data.get('BindOptions', {})
volume_options = mount_data.get('VolumeOptions', {})
tmpfs_options = mount_data.get('TmpfsOptions', {})
driver_config = volume_options.get('DriverConfig', {})
driver_config = dict(
(key.lower(), value) for key, value in driver_config.items()
) or None
ds.mounts.append({
'source': mount_data.get('Source', ''),
'type': mount_data['Type'],
'target': mount_data['Target'],
'readonly': mount_data.get('ReadOnly'),
'propagation': bind_options.get('Propagation'),
'no_copy': volume_options.get('NoCopy'),
'labels': volume_options.get('Labels'),
'driver_config': driver_config,
'tmpfs_mode': tmpfs_options.get('Mode'),
'tmpfs_size': tmpfs_options.get('SizeBytes'),
})
raw_data_configs = task_template_data['ContainerSpec'].get('Configs')
if raw_data_configs:
ds.configs = []
for config_data in raw_data_configs:
ds.configs.append({
'config_id': config_data['ConfigID'],
'config_name': config_data['ConfigName'],
'filename': config_data['File'].get('Name'),
'uid': config_data['File'].get('UID'),
'gid': config_data['File'].get('GID'),
'mode': config_data['File'].get('Mode')
})
raw_data_secrets = task_template_data['ContainerSpec'].get('Secrets')
if raw_data_secrets:
ds.secrets = []
for secret_data in raw_data_secrets:
ds.secrets.append({
'secret_id': secret_data['SecretID'],
'secret_name': secret_data['SecretName'],
'filename': secret_data['File'].get('Name'),
'uid': secret_data['File'].get('UID'),
'gid': secret_data['File'].get('GID'),
'mode': secret_data['File'].get('Mode')
})
raw_networks_data = task_template_data.get('Networks', raw_data['Spec'].get('Networks'))
if raw_networks_data:
ds.networks = []
for network_data in raw_networks_data:
network = {'id': network_data['Target']}
if 'Aliases' in network_data:
network['aliases'] = network_data['Aliases']
if 'DriverOpts' in network_data:
network['options'] = network_data['DriverOpts']
ds.networks.append(network)
ds.service_version = raw_data['Version']['Index']
ds.service_id = raw_data['ID']
return ds
def update_service(self, name, old_service, new_service):
service_data = new_service.build_docker_service()
result = self.client.update_service(
old_service.service_id,
old_service.service_version,
name=name,
**service_data
)
        # Prior to Docker SDK for Python 4.0.0, update_service() did not
        # return warnings, so on older versions there is nothing to report
        # (see https://github.com/docker/docker-py/pull/2272).
self.client.report_warnings(result, ['Warning'])
def create_service(self, name, service):
service_data = service.build_docker_service()
result = self.client.create_service(name=name, **service_data)
self.client.report_warnings(result, ['Warning'])
def remove_service(self, name):
self.client.remove_service(name)
def get_image_digest(self, name, resolve=False):
if (
not name
or not resolve
):
return name
repo, tag = parse_repository_tag(name)
if not tag:
tag = 'latest'
name = repo + ':' + tag
distribution_data = self.client.inspect_distribution(name)
digest = distribution_data['Descriptor']['digest']
return '%s@%s' % (name, digest)
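    # Map all known network names to their IDs so user-supplied names can be
    # resolved.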
def get_networks_names_ids(self):
return dict(
(network['Name'], network['Id']) for network in self.client.networks()
)
def get_missing_secret_ids(self):
"""
Resolve missing secret ids by looking them up by name
"""
secret_names = [
secret['secret_name']
for secret in self.client.module.params.get('secrets') or []
if secret['secret_id'] is None
]
if not secret_names:
return {}
secrets = self.client.secrets(filters={'name': secret_names})
secrets = dict(
(secret['Spec']['Name'], secret['ID'])
for secret in secrets
if secret['Spec']['Name'] in secret_names
)
for secret_name in secret_names:
if secret_name not in secrets:
self.client.fail(
'Could not find a secret named "%s"' % secret_name
)
return secrets
def get_missing_config_ids(self):
"""
Resolve missing config ids by looking them up by name
"""
config_names = [
config['config_name']
for config in self.client.module.params.get('configs') or []
if config['config_id'] is None
]
if not config_names:
return {}
configs = self.client.configs(filters={'name': config_names})
configs = dict(
(config['Spec']['Name'], config['ID'])
for config in configs
if config['Spec']['Name'] in config_names
)
for config_name in config_names:
if config_name not in configs:
self.client.fail(
'Could not find a config named "%s"' % config_name
)
return configs
def run(self):
self.diff_tracker = DifferenceTracker()
module = self.client.module
image = module.params['image']
try:
image_digest = self.get_image_digest(
name=image,
resolve=module.params['resolve_image']
)
except DockerException as e:
self.client.fail(
'Error looking for an image named %s: %s'
% (image, e)
)
try:
current_service = self.get_service(module.params['name'])
except Exception as e:
self.client.fail(
'Error looking for service named %s: %s'
% (module.params['name'], e)
)
try:
secret_ids = self.get_missing_secret_ids()
config_ids = self.get_missing_config_ids()
network_ids = self.get_networks_names_ids()
new_service = DockerService.from_ansible_params(
module.params,
current_service,
image_digest,
secret_ids,
config_ids,
network_ids,
self.client.docker_api_version,
self.client.docker_py_version
)
except Exception as e:
return self.client.fail(
'Error parsing module parameters: %s' % e
)
changed = False
msg = 'noop'
rebuilt = False
differences = DifferenceTracker()
facts = {}
if current_service:
if module.params['state'] == 'absent':
if not module.check_mode:
self.remove_service(module.params['name'])
msg = 'Service removed'
changed = True
else:
changed, differences, need_rebuild, force_update = new_service.compare(
current_service
)
if changed:
self.diff_tracker.merge(differences)
if need_rebuild:
if not module.check_mode:
self.remove_service(module.params['name'])
self.create_service(
module.params['name'],
new_service
)
msg = 'Service rebuilt'
rebuilt = True
else:
if not module.check_mode:
self.update_service(
module.params['name'],
current_service,
new_service
)
msg = 'Service updated'
rebuilt = False
else:
if force_update:
if not module.check_mode:
self.update_service(
module.params['name'],
current_service,
new_service
)
msg = 'Service forcefully updated'
rebuilt = False
changed = True
else:
msg = 'Service unchanged'
facts = new_service.get_facts()
else:
if module.params['state'] == 'absent':
msg = 'Service absent'
else:
if not module.check_mode:
self.create_service(module.params['name'], new_service)
msg = 'Service created'
changed = True
facts = new_service.get_facts()
return msg, changed, rebuilt, differences.get_legacy_docker_diffs(), facts
def run_safe(self):
while True:
try:
return self.run()
except APIError as e:
# Sometimes Version.Index will have changed between an inspect and
# update. If this is encountered we'll retry the update.
if self.retries > 0 and 'update out of sequence' in str(e.explanation):
self.retries -= 1
time.sleep(1)
else:
raise
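# Usage-detection helpers for option_minimal_versions below: they let the
# client fail early when a suboption is used that needs a newer Docker API
# or SDK version than the one available.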
def _detect_publish_mode_usage(client):
for publish_def in client.module.params['publish'] or []:
if publish_def.get('mode'):
return True
return False
def _detect_healthcheck_start_period(client):
if client.module.params['healthcheck']:
return client.module.params['healthcheck']['start_period'] is not None
return False
def _detect_mount_tmpfs_usage(client):
for mount in client.module.params['mounts'] or []:
if mount.get('type') == 'tmpfs':
return True
if mount.get('tmpfs_size') is not None:
return True
if mount.get('tmpfs_mode') is not None:
return True
return False
def _detect_update_config_failure_action_rollback(client):
rollback_config_failure_action = (
(client.module.params['update_config'] or {}).get('failure_action')
)
update_failure_action = client.module.params['update_failure_action']
failure_action = rollback_config_failure_action or update_failure_action
return failure_action == 'rollback'
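# Note: several top-level options (log_driver, limit_cpu, constraints,
# restart_policy*, update_*, ...) are deprecated in favour of the grouped
# dict options (logging, limits, placement, restart_config, update_config)
# and are flagged with removed_in_version='2.12' below.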
def main():
argument_spec = dict(
name=dict(type='str', required=True),
image=dict(type='str'),
state=dict(type='str', default='present', choices=['present', 'absent']),
mounts=dict(type='list', elements='dict', options=dict(
source=dict(type='str', required=True),
target=dict(type='str', required=True),
type=dict(
type='str',
default='bind',
choices=['bind', 'volume', 'tmpfs', 'npipe'],
),
readonly=dict(type='bool'),
labels=dict(type='dict'),
propagation=dict(
type='str',
choices=[
'shared',
'slave',
'private',
'rshared',
'rslave',
'rprivate'
]
),
no_copy=dict(type='bool'),
driver_config=dict(type='dict', options=dict(
name=dict(type='str'),
options=dict(type='dict')
)),
tmpfs_size=dict(type='str'),
tmpfs_mode=dict(type='int')
)),
configs=dict(type='list', elements='dict', options=dict(
config_id=dict(type='str'),
config_name=dict(type='str', required=True),
filename=dict(type='str'),
uid=dict(type='str'),
gid=dict(type='str'),
mode=dict(type='int'),
)),
secrets=dict(type='list', elements='dict', options=dict(
secret_id=dict(type='str'),
secret_name=dict(type='str', required=True),
filename=dict(type='str'),
uid=dict(type='str'),
gid=dict(type='str'),
mode=dict(type='int'),
)),
networks=dict(type='list', elements='raw'),
command=dict(type='raw'),
args=dict(type='list', elements='str'),
env=dict(type='raw'),
env_files=dict(type='list', elements='path'),
force_update=dict(type='bool', default=False),
groups=dict(type='list', elements='str'),
logging=dict(type='dict', options=dict(
driver=dict(type='str'),
options=dict(type='dict'),
)),
log_driver=dict(type='str', removed_in_version='2.12'),
log_driver_options=dict(type='dict', removed_in_version='2.12'),
publish=dict(type='list', elements='dict', options=dict(
published_port=dict(type='int', required=True),
target_port=dict(type='int', required=True),
protocol=dict(type='str', default='tcp', choices=['tcp', 'udp']),
mode=dict(type='str', choices=['ingress', 'host']),
)),
placement=dict(type='dict', options=dict(
constraints=dict(type='list', elements='str'),
preferences=dict(type='list', elements='dict'),
)),
constraints=dict(type='list', elements='str', removed_in_version='2.12'),
tty=dict(type='bool'),
dns=dict(type='list', elements='str'),
dns_search=dict(type='list', elements='str'),
dns_options=dict(type='list', elements='str'),
healthcheck=dict(type='dict', options=dict(
test=dict(type='raw'),
interval=dict(type='str'),
timeout=dict(type='str'),
start_period=dict(type='str'),
retries=dict(type='int'),
)),
hostname=dict(type='str'),
hosts=dict(type='dict'),
labels=dict(type='dict'),
container_labels=dict(type='dict'),
mode=dict(
type='str',
default='replicated',
choices=['replicated', 'global']
),
replicas=dict(type='int', default=-1),
endpoint_mode=dict(type='str', choices=['vip', 'dnsrr']),
stop_grace_period=dict(type='str'),
stop_signal=dict(type='str'),
limits=dict(type='dict', options=dict(
cpus=dict(type='float'),
memory=dict(type='str'),
)),
limit_cpu=dict(type='float', removed_in_version='2.12'),
limit_memory=dict(type='str', removed_in_version='2.12'),
read_only=dict(type='bool'),
reservations=dict(type='dict', options=dict(
cpus=dict(type='float'),
memory=dict(type='str'),
)),
reserve_cpu=dict(type='float', removed_in_version='2.12'),
reserve_memory=dict(type='str', removed_in_version='2.12'),
resolve_image=dict(type='bool', default=False),
restart_config=dict(type='dict', options=dict(
condition=dict(type='str', choices=['none', 'on-failure', 'any']),
delay=dict(type='str'),
max_attempts=dict(type='int'),
window=dict(type='str'),
)),
restart_policy=dict(
type='str',
choices=['none', 'on-failure', 'any'],
removed_in_version='2.12'
),
restart_policy_delay=dict(type='raw', removed_in_version='2.12'),
restart_policy_attempts=dict(type='int', removed_in_version='2.12'),
restart_policy_window=dict(type='raw', removed_in_version='2.12'),
rollback_config=dict(type='dict', options=dict(
parallelism=dict(type='int'),
delay=dict(type='str'),
failure_action=dict(
type='str',
choices=['continue', 'pause']
),
monitor=dict(type='str'),
max_failure_ratio=dict(type='float'),
order=dict(type='str'),
)),
update_config=dict(type='dict', options=dict(
parallelism=dict(type='int'),
delay=dict(type='str'),
failure_action=dict(
type='str',
choices=['continue', 'pause', 'rollback']
),
monitor=dict(type='str'),
max_failure_ratio=dict(type='float'),
order=dict(type='str'),
)),
update_delay=dict(type='raw', removed_in_version='2.12'),
update_parallelism=dict(type='int', removed_in_version='2.12'),
update_failure_action=dict(
type='str',
choices=['continue', 'pause', 'rollback'],
removed_in_version='2.12'
),
update_monitor=dict(type='raw', removed_in_version='2.12'),
update_max_failure_ratio=dict(type='float', removed_in_version='2.12'),
update_order=dict(
type='str',
choices=['stop-first', 'start-first'],
removed_in_version='2.12'
),
user=dict(type='str'),
working_dir=dict(type='str'),
)
option_minimal_versions = dict(
constraints=dict(docker_py_version='2.4.0'),
dns=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
dns_options=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
dns_search=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
endpoint_mode=dict(docker_py_version='3.0.0', docker_api_version='1.25'),
force_update=dict(docker_py_version='2.1.0', docker_api_version='1.25'),
healthcheck=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
hostname=dict(docker_py_version='2.2.0', docker_api_version='1.25'),
hosts=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
groups=dict(docker_py_version='2.6.0', docker_api_version='1.25'),
tty=dict(docker_py_version='2.4.0', docker_api_version='1.25'),
secrets=dict(docker_py_version='2.4.0', docker_api_version='1.25'),
configs=dict(docker_py_version='2.6.0', docker_api_version='1.30'),
update_max_failure_ratio=dict(docker_py_version='2.1.0', docker_api_version='1.25'),
update_monitor=dict(docker_py_version='2.1.0', docker_api_version='1.25'),
update_order=dict(docker_py_version='2.7.0', docker_api_version='1.29'),
stop_signal=dict(docker_py_version='2.6.0', docker_api_version='1.28'),
publish=dict(docker_py_version='3.0.0', docker_api_version='1.25'),
read_only=dict(docker_py_version='2.6.0', docker_api_version='1.28'),
resolve_image=dict(docker_api_version='1.30', docker_py_version='3.2.0'),
rollback_config=dict(docker_py_version='3.5.0', docker_api_version='1.28'),
# specials
publish_mode=dict(
docker_py_version='3.0.0',
docker_api_version='1.25',
detect_usage=_detect_publish_mode_usage,
usage_msg='set publish.mode'
),
healthcheck_start_period=dict(
docker_py_version='2.4.0',
docker_api_version='1.25',
detect_usage=_detect_healthcheck_start_period,
usage_msg='set healthcheck.start_period'
),
update_config_max_failure_ratio=dict(
docker_py_version='2.1.0',
docker_api_version='1.25',
detect_usage=lambda c: (c.module.params['update_config'] or {}).get(
'max_failure_ratio'
) is not None,
usage_msg='set update_config.max_failure_ratio'
),
update_config_failure_action=dict(
docker_py_version='3.5.0',
docker_api_version='1.28',
detect_usage=_detect_update_config_failure_action_rollback,
usage_msg='set update_config.failure_action.rollback'
),
update_config_monitor=dict(
docker_py_version='2.1.0',
docker_api_version='1.25',
detect_usage=lambda c: (c.module.params['update_config'] or {}).get(
'monitor'
) is not None,
usage_msg='set update_config.monitor'
),
update_config_order=dict(
docker_py_version='2.7.0',
docker_api_version='1.29',
detect_usage=lambda c: (c.module.params['update_config'] or {}).get(
'order'
) is not None,
usage_msg='set update_config.order'
),
placement_config_preferences=dict(
docker_py_version='2.4.0',
docker_api_version='1.27',
detect_usage=lambda c: (c.module.params['placement'] or {}).get(
'preferences'
) is not None,
usage_msg='set placement.preferences'
),
placement_config_constraints=dict(
docker_py_version='2.4.0',
detect_usage=lambda c: (c.module.params['placement'] or {}).get(
'constraints'
) is not None,
usage_msg='set placement.constraints'
),
mounts_tmpfs=dict(
docker_py_version='2.6.0',
detect_usage=_detect_mount_tmpfs_usage,
usage_msg='set mounts.tmpfs'
),
rollback_config_order=dict(
docker_api_version='1.29',
detect_usage=lambda c: (c.module.params['rollback_config'] or {}).get(
'order'
) is not None,
usage_msg='set rollback_config.order'
),
)
required_if = [
('state', 'present', ['image'])
]
client = AnsibleDockerClient(
argument_spec=argument_spec,
required_if=required_if,
supports_check_mode=True,
min_docker_version='2.0.2',
min_docker_api_version='1.24',
option_minimal_versions=option_minimal_versions,
)
try:
dsm = DockerServiceManager(client)
msg, changed, rebuilt, changes, facts = dsm.run_safe()
results = dict(
msg=msg,
changed=changed,
rebuilt=rebuilt,
changes=changes,
swarm_service=facts,
)
if client.module._diff:
before, after = dsm.diff_tracker.get_before_after()
results['diff'] = dict(before=before, after=after)
client.module.exit_json(**results)
except DockerException as e:
client.fail('An unexpected docker error occurred: {0}'.format(e), exception=traceback.format_exc())
except RequestException as e:
client.fail('An unexpected requests error occurred when docker-py tried to talk to the docker daemon: {0}'.format(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62728 |
tmpfs src is a required string
|
##### SUMMARY
In the docker_swarm_service module, when a mount of type `tmpfs` is specified, the module requires `source` to be set. However, `docker service create` does not support a source for tmpfs mounts.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
docker_swarm_service
##### ANSIBLE VERSION
```paste below
ansible 2.8.4
config file = /home/vani/Projects/voody/ansible/ansible.cfg
configured module search path = ['/home/vani/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Aug 20 2019, 17:12:48) [GCC 8.3.0]
```
##### CONFIGURATION
```paste below
ANSIBLE_NOCOWS(/home/user/Projects/ansible/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/home/user/Projects/ansible/ansible.cfg) = yaml
```
##### STEPS TO REPRODUCE
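A minimal task that triggers the problem might look like the sketch below (the service name and sizes are made up for illustration):
```yaml
- name: Service with a tmpfs mount
  docker_swarm_service:
    name: tmpfs-example        # hypothetical service name
    image: alpine:3.8
    mounts:
      - target: /cache         # a tmpfs mount only needs a target
        type: tmpfs
        tmpfs_size: 64MB
# Fails: the module marks 'source' as required, while
# `docker service create --mount type=tmpfs,...` accepts no source.
```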
|
https://github.com/ansible/ansible/issues/62728
|
https://github.com/ansible/ansible/pull/64637
|
dd5415017e554188e207e4b213c778333e913d55
|
574bd32db230b518c883a2eac45af76f3385db56
| 2019-09-23T06:54:03Z |
python
| 2019-11-09T20:01:56Z |
test/integration/targets/docker_swarm_service/tasks/tests/mounts.yml
|
- name: Registering service name
set_fact:
service_name: "{{ name_prefix ~ '-mounts' }}"
volume_name_1: "{{ name_prefix ~ '-volume-1' }}"
volume_name_2: "{{ name_prefix ~ '-volume-2' }}"
- name: Registering service name
set_fact:
service_names: "{{ service_names + [service_name] }}"
volume_names: "{{ volume_names + [volume_name_1, volume_name_2] }}"
- docker_volume:
name: "{{ volume_name }}"
state: present
loop:
- "{{ volume_name_1 }}"
- "{{ volume_name_2 }}"
loop_control:
loop_var: volume_name
####################################################################
## mounts ##########################################################
####################################################################
- name: mounts
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
type: "volume"
register: mounts_1
- name: mounts (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
type: "volume"
register: mounts_2
- name: mounts (add)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
type: "volume"
- source: "/tmp/"
target: "/tmp/{{ volume_name_2 }}"
type: "bind"
register: mounts_3
- name: mounts (empty)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts: []
register: mounts_4
- name: mounts (empty idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts: []
register: mounts_5
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- mounts_1 is changed
- mounts_2 is not changed
- mounts_3 is changed
- mounts_4 is changed
- mounts_5 is not changed
####################################################################
## mounts.readonly #################################################
####################################################################
- name: mounts.readonly
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
readonly: true
register: mounts_readonly_1
- name: mounts.readonly (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
readonly: true
register: mounts_readonly_2
- name: mounts.readonly (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
readonly: false
register: mounts_readonly_3
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- mounts_readonly_1 is changed
- mounts_readonly_2 is not changed
- mounts_readonly_3 is changed
####################################################################
## mounts.propagation ##############################################
####################################################################
- name: mounts.propagation
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "/tmp"
target: "/tmp/{{ volume_name_1 }}"
type: "bind"
propagation: "slave"
register: mounts_propagation_1
- name: mounts.propagation (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "/tmp"
target: "/tmp/{{ volume_name_1 }}"
type: "bind"
propagation: "slave"
register: mounts_propagation_2
- name: mounts.propagation (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "/tmp"
target: "/tmp/{{ volume_name_1 }}"
type: "bind"
propagation: "rprivate"
register: mounts_propagation_3
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- mounts_propagation_1 is changed
- mounts_propagation_2 is not changed
- mounts_propagation_3 is changed
####################################################################
## mounts.labels ##################################################
####################################################################
- name: mounts.labels
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
type: "volume"
labels:
mylabel: hello-world
my-other-label: hello-mars
register: mounts_labels_1
- name: mounts.labels (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
type: "volume"
labels:
mylabel: hello-world
my-other-label: hello-mars
register: mounts_labels_2
- name: mounts.labels (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
type: "volume"
labels:
mylabel: hello-world
register: mounts_labels_3
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- mounts_labels_1 is changed
- mounts_labels_2 is not changed
- mounts_labels_3 is changed
####################################################################
## mounts.no_copy ##################################################
####################################################################
- name: mounts.no_copy
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
type: "volume"
no_copy: true
register: mounts_no_copy_1
- name: mounts.no_copy (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
type: "volume"
no_copy: true
register: mounts_no_copy_2
- name: mounts.no_copy (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
type: "volume"
no_copy: false
register: mounts_no_copy_3
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- mounts_no_copy_1 is changed
- mounts_no_copy_2 is not changed
- mounts_no_copy_3 is changed
####################################################################
## mounts.driver_config ############################################
####################################################################
- name: mounts.driver_config
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
type: "volume"
driver_config:
name: "nfs"
options:
addr: "127.0.0.1"
register: mounts_driver_config_1
- name: mounts.driver_config (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
type: "volume"
driver_config:
name: "nfs"
options:
addr: "127.0.0.1"
register: mounts_driver_config_2
- name: mounts.driver_config (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
type: "volume"
driver_config:
name: "local"
register: mounts_driver_config_3
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- mounts_driver_config_1 is changed
- mounts_driver_config_2 is not changed
- mounts_driver_config_3 is changed
####################################################################
## mounts.tmpfs_size ###############################################
####################################################################
- name: mounts.tmpfs_size
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
type: "tmpfs"
tmpfs_size: "50M"
register: mounts_tmpfs_size_1
ignore_errors: yes
- name: mounts.tmpfs_size (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
type: "tmpfs"
tmpfs_size: "50M"
register: mounts_tmpfs_size_2
ignore_errors: yes
- name: mounts.tmpfs_size (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
type: "tmpfs"
tmpfs_size: "25M"
register: mounts_tmpfs_size_3
ignore_errors: yes
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- mounts_tmpfs_size_1 is changed
- mounts_tmpfs_size_2 is not changed
- mounts_tmpfs_size_3 is changed
when: docker_py_version is version('2.6.0', '>=')
- assert:
that:
- mounts_tmpfs_size_1 is failed
- "'Minimum version required' in mounts_tmpfs_size_1.msg"
when: docker_py_version is version('2.6.0', '<')
####################################################################
## mounts.tmpfs_mode ###############################################
####################################################################
- name: mounts.tmpfs_mode
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
type: "tmpfs"
tmpfs_mode: 0444
register: mounts_tmpfs_mode_1
ignore_errors: yes
- name: mounts.tmpfs_mode (idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
type: "tmpfs"
tmpfs_mode: 0444
register: mounts_tmpfs_mode_2
ignore_errors: yes
- name: mounts.tmpfs_mode (change)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: "{{ volume_name_1 }}"
target: "/tmp/{{ volume_name_1 }}"
type: "tmpfs"
tmpfs_mode: 0777
register: mounts_tmpfs_mode_3
ignore_errors: yes
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- mounts_tmpfs_mode_1 is changed
- mounts_tmpfs_mode_2 is not changed
- mounts_tmpfs_mode_3 is changed
when: docker_py_version is version('2.6.0', '>=')
- assert:
that:
- mounts_tmpfs_mode_1 is failed
- "'Minimum version required' in mounts_tmpfs_mode_1.msg"
when: docker_py_version is version('2.6.0', '<')
####################################################################
## mounts.source ###################################################
####################################################################
- name: mounts.source (empty for tmpfs)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: ""
target: "/tmp/{{ volume_name_1 }}"
type: "tmpfs"
register: mounts_tmpfs_source_1
ignore_errors: yes
- name: mounts.source (empty for tmpfs idempotency)
docker_swarm_service:
name: "{{ service_name }}"
image: alpine:3.8
resolve_image: no
command: '/bin/sh -v -c "sleep 10m"'
mounts:
- source: ""
target: "/tmp/{{ volume_name_1 }}"
type: "tmpfs"
register: mounts_tmpfs_source_2
ignore_errors: yes
- name: cleanup
docker_swarm_service:
name: "{{ service_name }}"
state: absent
diff: no
- assert:
that:
- mounts_tmpfs_source_1 is changed
- mounts_tmpfs_source_2 is not changed
when: docker_py_version is version('2.6.0', '>=')
- assert:
that:
- mounts_tmpfs_source_1 is failed
- "'Minimum version required' in mounts_tmpfs_source_1.msg"
when: docker_py_version is version('2.6.0', '<')
####################################################################
####################################################################
####################################################################
- name: Delete volumes
docker_volume:
name: "{{ volume_name }}"
state: absent
loop:
- "{{ volume_name_1 }}"
- "{{ volume_name_2 }}"
loop_control:
loop_var: volume_name
ignore_errors: yes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,401 |
win_partition refers to non-existent variable $partition
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
win_partition refers to the variable $partition, which has not been defined or set in the context in which it is used
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_partition
##### ANSIBLE VERSION
2.8.5
##### STEPS TO REPRODUCE
Code for win_partition.ps1 contains the following lines, from line 222
```ps1
if ($null -ne $gpt_type -and $gpt_styles.$gpt_type -ne $partition.GptType) {
$module.FailJson("gpt_type is not a valid parameter for existing partitions")
}
if ($null -ne $mbr_type -and $mbr_styles.$mbr_type -ne $partition.MbrType) {
$module.FailJson("mbr_type is not a valid parameter for existing partitions")
}
```
However, the ```$partition``` variable is not defined so will always resolve to ```$null``` - did you mean ```$ansible_partition```?
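A minimal sketch of the likely fix, assuming ```$ansible_partition``` holds the existing partition object as it does elsewhere in the module:
```ps1
if ($null -ne $gpt_type -and $gpt_styles.$gpt_type -ne $ansible_partition.GptType) {
    $module.FailJson("gpt_type is not a valid parameter for existing partitions")
}
if ($null -ne $mbr_type -and $mbr_styles.$mbr_type -ne $ansible_partition.MbrType) {
    $module.FailJson("mbr_type is not a valid parameter for existing partitions")
}
```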
|
https://github.com/ansible/ansible/issues/62401
|
https://github.com/ansible/ansible/pull/63968
|
8c4f59ebd959778178956022fbd015c5c6291c99
|
8b13836b1f318ee00c6482e71bfbdb7a49bd75f3
| 2019-09-17T10:14:05Z |
python
| 2019-11-12T04:44:18Z |
changelogs/fragments/win_partition-var.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,401 |
win_partition refers to non-existent variable $partition
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
win_partition refers to the variable $partition, which has not been defined or set in the context in which it is used
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_partition
##### ANSIBLE VERSION
2.8.5
##### STEPS TO REPRODUCE
Code for win_partition.ps1 contains the following lines, from line 222
```ps1
if ($null -ne $gpt_type -and $gpt_styles.$gpt_type -ne $partition.GptType) {
$module.FailJson("gpt_type is not a valid parameter for existing partitions")
}
if ($null -ne $mbr_type -and $mbr_styles.$mbr_type -ne $partition.MbrType) {
$module.FailJson("mbr_type is not a valid parameter for existing partitions")
}
```
However, the ```$partition``` variable is not defined so will always resolve to ```$null``` - did you mean ```$ansible_partition```?
|
https://github.com/ansible/ansible/issues/62401
|
https://github.com/ansible/ansible/pull/63968
|
8c4f59ebd959778178956022fbd015c5c6291c99
|
8b13836b1f318ee00c6482e71bfbdb7a49bd75f3
| 2019-09-17T10:14:05Z |
python
| 2019-11-12T04:44:18Z |
lib/ansible/modules/windows/win_partition.ps1
|
#!powershell
# Copyright: (c) 2018, Varun Chopra (@chopraaa) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#AnsibleRequires -CSharpUtil Ansible.Basic
#AnsibleRequires -OSVersion 6.2
Set-StrictMode -Version 2
$ErrorActionPreference = "Stop"
$spec = @{
options = @{
state = @{ type = "str"; choices = "absent", "present"; default = "present" }
drive_letter = @{ type = "str" }
disk_number = @{ type = "int" }
partition_number = @{ type = "int" }
partition_size = @{ type = "str" }
read_only = @{ type = "bool" }
active = @{ type = "bool" }
hidden = @{ type = "bool" }
offline = @{ type = "bool" }
mbr_type = @{ type = "str"; choices = "fat12", "fat16", "extended", "huge", "ifs", "fat32" }
gpt_type = @{ type = "str"; choices = "system_partition", "microsoft_reserved", "basic_data", "microsoft_recovery" }
}
supports_check_mode = $true
}
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)
$state = $module.Params.state
$drive_letter = $module.Params.drive_letter
$disk_number = $module.Params.disk_number
$partition_number = $module.Params.partition_number
$partition_size = $module.Params.partition_size
$read_only = $module.Params.read_only
$active = $module.Params.active
$hidden = $module.Params.hidden
$offline = $module.Params.offline
$mbr_type = $module.Params.mbr_type
$gpt_type = $module.Params.gpt_type
$size_is_maximum = $false
$ansible_partition = $false
$ansible_partition_size = $null
$partition_style = $null
$gpt_styles = @{
system_partition = "c12a7328-f81f-11d2-ba4b-00a0c93ec93b"
microsoft_reserved = "e3c9e316-0b5c-4db8-817d-f92df00215ae"
basic_data = "ebd0a0a2-b9e5-4433-87c0-68b6b72699c7"
microsoft_recovery = "de94bba4-06d1-4d40-a16a-bfd50179d6ac"
}
$mbr_styles = @{
fat12 = 1
fat16 = 4
extended = 5
huge = 6
ifs = 7
fat32 = 12
}
function Convert-SizeToBytes {
param(
$Size,
$Units
)
switch ($Units) {
"B" { return $Size }
"KB" { return 1000 * $Size }
"KiB" { return 1024 * $Size }
"MB" { return [Math]::Pow(1000, 2) * $Size }
"MiB" { return [Math]::Pow(1024, 2) * $Size }
"GB" { return [Math]::Pow(1000, 3) * $Size }
"GiB" { return [Math]::Pow(1024, 3) * $Size }
"TB" { return [Math]::Pow(1000, 4) * $Size }
"TiB" { return [Math]::Pow(1024, 4) * $Size }
}
}
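# Example (illustrative): Convert-SizeToBytes -Size 100 -Units GiB returns 107374182400 (100 * 1024^3).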
if ($null -ne $partition_size) {
if ($partition_size -eq -1) {
$size_is_maximum = $true
}
elseif ($partition_size -match '^(?<Size>[0-9]+)[ ]*(?<Units>b|kb|kib|mb|mib|gb|gib|tb|tib)$') {
$ansible_partition_size = Convert-SizeToBytes -Size $Matches.Size -Units $Matches.Units
}
else {
$module.FailJson("Invalid partition size. B, KB, KiB, MB, MiB, GB, GiB, TB, TiB are valid partition size units")
}
}
# If partition_exists, we can change or delete it; otherwise we only need the disk to create a new partition
if ($null -ne $disk_number -and $null -ne $partition_number) {
$ansible_partition = Get-Partition -DiskNumber $disk_number -PartitionNumber $partition_number -ErrorAction SilentlyContinue
}
# Check if drive_letter is either auto-assigned or a character from A-Z
elseif ($drive_letter -and -not ($disk_number -and $partition_number)) {
if ($drive_letter -eq "auto" -or $drive_letter -match "^[a-zA-Z]$") {
$ansible_partition = Get-Partition -DriveLetter $drive_letter -ErrorAction SilentlyContinue
}
else {
$module.FailJson("Incorrect usage of drive_letter: specify a drive letter from A-Z or use 'auto' to automatically assign a drive letter")
}
}
elseif ($disk_number) {
try {
Get-Disk -Number $disk_number | Out-Null
} catch {
$module.FailJson("Specified disk does not exist")
}
}
else {
$module.FailJson("You must provide disk_number, partition_number")
}
# Partition can't have two partition styles
if ($null -ne $gpt_type -and $null -ne $mbr_type) {
$module.FailJson("Cannot specify both GPT and MBR partition styles. Check which partition style is supported by the disk")
}
function New-AnsiblePartition {
param(
$DiskNumber,
$Letter,
$SizeMax,
$Size,
$MbrType,
$GptType,
$Style
)
$parameters = @{
DiskNumber = $DiskNumber
}
if ($null -ne $Letter) {
switch ($Letter) {
"auto" {
$parameters.Add("AssignDriveLetter", $True)
}
default {
$parameters.Add("DriveLetter", $Letter)
}
}
}
if ($null -ne $Size) {
$parameters.Add("Size", $Size)
}
if ($null -ne $MbrType) {
$parameters.Add("MbrType", $Style)
}
if ($null -ne $GptType) {
$parameters.Add("GptType", $Style)
}
try {
$new_partition = New-Partition @parameters
} catch {
$module.FailJson("Unable to create a new partition: $($_.Exception.Message)", $_)
}
return $new_partition
}
function Set-AnsiblePartitionState {
param(
$hidden,
$read_only,
$active,
$partition
)
$parameters = @{
DiskNumber = $partition.DiskNumber
PartitionNumber = $partition.PartitionNumber
}
if ($hidden -NotIn ($null, $partition.IsHidden)) {
$parameters.Add("IsHidden", $hidden)
}
if ($read_only -NotIn ($null, $partition.IsReadOnly)) {
$parameters.Add("IsReadOnly", $read_only)
}
if ($active -NotIn ($null, $partition.IsActive)) {
$parameters.Add("IsActive", $active)
}
try {
Set-Partition @parameters
} catch {
$module.FailJson("Error changing state of partition: $($_.Exception.Message)", $_)
}
}
if ($ansible_partition) {
if ($state -eq "absent") {
try {
Remove-Partition -DiskNumber $ansible_partition.DiskNumber -PartitionNumber $ansible_partition.PartitionNumber -Confirm:$false -WhatIf:$module.CheckMode
} catch {
$module.FailJson("There was an error removing the partition: $($_.Exception.Message)", $_)
}
$module.Result.changed = $true
}
else {
if ($null -ne $gpt_type -and $gpt_styles.$gpt_type -ne $ansible_partition.GptType) {
$module.FailJson("gpt_type is not a valid parameter for existing partitions")
}
if ($null -ne $mbr_type -and $mbr_styles.$mbr_type -ne $ansible_partition.MbrType) {
$module.FailJson("mbr_type is not a valid parameter for existing partitions")
}
if ($partition_size) {
try {
$max_supported_size = (Get-PartitionSupportedSize -DiskNumber $ansible_partition.DiskNumber -PartitionNumber $ansible_partition.PartitionNumber).SizeMax
} catch {
$module.FailJson("Unable to get maximum supported partition size: $($_.Exception.Message)", $_)
}
if ($size_is_maximum) {
$ansible_partition_size = $max_supported_size
}
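# Resize only when the requested and current sizes differ by more than ~1 MiB
# (1049000 bytes), so rounding differences do not trigger a needless resize.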
if ($ansible_partition_size -ne $ansible_partition.Size -and ($ansible_partition_size - $ansible_partition.Size -gt 1049000 -or $ansible_partition.Size - $ansible_partition_size -gt 1049000)) {
if ($ansible_partition.IsReadOnly) {
$module.FailJson("Unable to resize partition: Partition is read only")
} else {
try {
Resize-Partition -DiskNumber $ansible_partition.DiskNumber -PartitionNumber $ansible_partition.PartitionNumber -Size $ansible_partition_size -WhatIf:$module.CheckMode
} catch {
$module.FailJson("Unable to change partition size: $($_.Exception.Message)", $_)
}
$module.Result.changed = $true
}
} elseif ($ansible_partition_size -gt $max_supported_size) {
$module.FailJson("Specified partition size exceeds size supported by the partition")
}
}
if ($drive_letter -NotIn ("auto", $null, $ansible_partition.DriveLetter)) {
if (-not $module.CheckMode) {
try {
Set-Partition -DiskNumber $ansible_partition.DiskNumber -PartitionNumber $ansible_partition.PartitionNumber -NewDriveLetter $drive_letter
} catch {
$module.FailJson("Unable to change drive letter: $($_.Exception.Message)", $_)
}
}
$module.Result.changed = $true
}
}
}
else {
if ($state -eq "present") {
if ($null -eq $disk_number) {
$module.FailJson("Missing required parameter: disk_number")
}
if ($null -eq $ansible_partition_size -and -not $size_is_maximum){
$module.FailJson("Missing required parameter: partition_size")
}
if (-not $size_is_maximum) {
try {
$max_supported_size = (Get-Disk -Number $disk_number).LargestFreeExtent
} catch {
$module.FailJson("Unable to get maximum size supported by disk: $($_.Exception.Message)", $_)
}
if ($ansible_partition_size -gt $max_supported_size) {
$module.FailJson("Partition size is not supported by disk. Use partition_size: -1 to get maximum size")
}
} else {
$ansible_partition_size = (Get-Disk -Number $disk_number).LargestFreeExtent
}
$supp_part_type = (Get-Disk -Number $disk_number).PartitionStyle
if ($null -ne $mbr_type) {
if ($supp_part_type -eq "MBR" -and $mbr_styles.ContainsKey($mbr_type)) {
$partition_style = $mbr_styles.$mbr_type
} else {
$module.FailJson("Incorrect partition style specified")
}
}
if ($null -ne $gpt_type) {
if ($supp_part_type -eq "GPT" -and $gpt_styles.ContainsKey($gpt_type)) {
$partition_style = $gpt_styles.$gpt_type
} else {
$module.FailJson("Incorrect partition style specified")
}
}
if (-not $module.CheckMode) {
$ansible_partition = New-AnsiblePartition -DiskNumber $disk_number -Letter $drive_letter -Size $ansible_partition_size -MbrType $mbr_type -GptType $gpt_type -Style $partition_style
}
$module.Result.changed = $true
}
}
if ($state -eq "present" -and $ansible_partition) {
if ($offline -NotIn ($null, $ansible_partition.IsOffline)) {
if (-not $module.CheckMode) {
try {
Set-Partition -DiskNumber $ansible_partition.DiskNumber -PartitionNumber $ansible_partition.PartitionNumber -IsOffline $offline
} catch {
$module.FailJson("Error setting partition offline: $($_.Exception.Message)", $_)
}
}
$module.Result.changed = $true
}
if ($hidden -NotIn ($null, $ansible_partition.IsHidden) -or $read_only -NotIn ($null, $ansible_partition.IsReadOnly) -or $active -NotIn ($null, $ansible_partition.IsActive)) {
if (-not $module.CheckMode) {
Set-AnsiblePartitionState -hidden $hidden -read_only $read_only -active $active -partition $ansible_partition
}
$module.Result.changed = $true
}
}
$module.ExitJson()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,542 |
azure_rm_snapshot: Doesn't support create snapshot from a managed disk
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The snapshot module currently only supports a storage blob as a creation source; it should
also support a managed disk as a creation source.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lib/ansible/modules/cloud/azure/azure_rm_snapshot.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0
config file = None
configured module search path = [u'/home/haiyuan/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/local/lib/python2.7/site-packages/ansible
executable location = /opt/ansible/bin/ansible
python version = 2.7.12 (default, Oct 8 2019, 14:14:10) [GCC 5.4.0 20160609]
```
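A hedged sketch of what the requested feature could look like as a task; the `create_option: Copy` and `source_id` option names are illustrative guesses based on the underlying Azure CreationData API (`createOption: Copy` with `sourceResourceId`), not the module's current options:
```yaml
- name: Create a snapshot from an existing managed disk (illustrative)
  azure_rm_snapshot:
    resource_group: myResourceGroup
    name: mySnapshot
    location: eastus
    creation_data:
      create_option: Copy
      source_id: "/subscriptions/xxxx/resourceGroups/myResourceGroup/providers/Microsoft.Compute/disks/myDisk"
```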
|
https://github.com/ansible/ansible/issues/64542
|
https://github.com/ansible/ansible/pull/64547
|
ca42cb286807d75c10a1699bb100492db0d0ebd5
|
c11d73575b9be98caa77eb5cdc9ea4dd2442a6d1
| 2019-11-07T07:12:28Z |
python
| 2019-11-12T05:05:15Z |
lib/ansible/module_utils/azure_rm_common_ext.py
|
# Copyright (c) 2019 Zim Kalinowski, (@zikalino)
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from ansible.module_utils.azure_rm_common import AzureRMModuleBase
import re
from ansible.module_utils.common.dict_transformations import _camel_to_snake, _snake_to_camel
from ansible.module_utils.six import string_types
class AzureRMModuleBaseExt(AzureRMModuleBase):
def inflate_parameters(self, spec, body, level):
if isinstance(body, list):
for item in body:
self.inflate_parameters(spec, item, level)
return
for name in spec.keys():
# first check if option was passed
param = body.get(name)
if not param:
continue
# check if pattern needs to be used
pattern = spec[name].get('pattern', None)
if pattern:
if pattern == 'camelize':
param = _snake_to_camel(param, True)
else:
param = self.normalize_resource_id(param, pattern)
body[name] = param
disposition = spec[name].get('disposition', '*')
if level == 0 and not disposition.startswith('/'):
continue
if disposition == '/':
disposition = '/*'
parts = disposition.split('/')
if parts[0] == '':
# should fail if level is > 0?
parts.pop(0)
target_dict = body
elem = body.pop(name)
while len(parts) > 1:
target_dict = target_dict.setdefault(parts.pop(0), {})
targetName = parts[0] if parts[0] != '*' else name
target_dict[targetName] = elem
if spec[name].get('options'):
self.inflate_parameters(spec[name].get('options'), target_dict[targetName], level + 1)
def normalize_resource_id(self, value, pattern):
'''
Return a proper resource id string.
:param value: a resource name, a full resource id, or a dict containing parts from the pattern.
:param pattern: pattern of the resource id, as in Azure Swagger.
'''
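# Illustrative example: with the pattern
# '/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.Compute/disks/{name}',
# a bare value of 'myDisk' is expanded into a full resource id using the
# module's subscription_id and resource_group defaults.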
value_dict = {}
if isinstance(value, string_types):
value_parts = value.split('/')
if len(value_parts) == 1:
value_dict['name'] = value
else:
pattern_parts = pattern.split('/')
if len(value_parts) != len(pattern_parts):
return None
for i in range(len(value_parts)):
if pattern_parts[i].startswith('{'):
value_dict[pattern_parts[i][1:-1]] = value_parts[i]
elif value_parts[i].lower() != pattern_parts[i].lower():
return None
elif isinstance(value, dict):
value_dict = value
else:
return None
if not value_dict.get('subscription_id'):
value_dict['subscription_id'] = self.subscription_id
if not value_dict.get('resource_group'):
value_dict['resource_group'] = self.resource_group
# check if any extra values passed
for k in value_dict:
if ('{' + k + '}') not in pattern:
return None
# format url
return pattern.format(**value_dict)
def idempotency_check(self, old_params, new_params):
'''
Return True if something changed. Function will use fields from module_arg_spec to perform dependency checks.
:param old_params: old parameters dictionary, body from Get request.
:param new_params: new parameters dictionary, unpacked module parameters.
'''
modifiers = {}
result = {}
self.create_compare_modifiers(self.module.argument_spec, '', modifiers)
self.results['modifiers'] = modifiers
return self.default_compare(modifiers, new_params, old_params, '', self.results)
def create_compare_modifiers(self, arg_spec, path, result):
for k in arg_spec.keys():
o = arg_spec[k]
updatable = o.get('updatable', True)
comparison = o.get('comparison', 'default')
disposition = o.get('disposition', '*')
if disposition == '/':
disposition = '/*'
p = (path +
('/' if len(path) > 0 else '') +
disposition.replace('*', k) +
('/*' if o['type'] == 'list' else ''))
if comparison != 'default' or not updatable:
result[p] = {'updatable': updatable, 'comparison': comparison}
if o.get('options'):
self.create_compare_modifiers(o.get('options'), p, result)
def default_compare(self, modifiers, new, old, path, result):
'''
Default dictionary comparison.
This function will work well with most of the Azure resources.
It correctly handles "location" comparison.
Value handling:
- if "new" value is None, it will be taken from "old" dictionary if "incremental_update"
is enabled.
List handling:
- if list contains "name" field it will be sorted by "name" before comparison is done.
- if module has "incremental_update" set, items missing in the new list will be copied
from the old list
Warnings:
If a field is marked as non-updatable, an appropriate warning will be printed out and
the "new" structure will be updated to the old value.
:modifiers: Optional dictionary of modifiers, where key is the path and value is dict of modifiers
:param new: New version
:param old: Old version
Returns True if no difference between structures has been detected.
Returns False if difference was detected.
'''
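# Illustrative example: a modifiers entry of {'/sku/name': {'updatable': False, 'comparison': 'default'}}
# makes a difference in 'sku.name' produce a warning instead of being reported as a change.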
if new is None:
return True
elif isinstance(new, dict):
comparison_result = True
if not isinstance(old, dict):
result['compare'].append('changed [' + path + '] old dict is null')
comparison_result = False
else:
for k in set(new.keys()) | set(old.keys()):
new_item = new.get(k, None)
old_item = old.get(k, None)
if new_item is None:
if isinstance(old_item, dict):
new[k] = old_item
result['compare'].append('new item was empty, using old [' + path + '][ ' + k + ' ]')
elif not self.default_compare(modifiers, new_item, old_item, path + '/' + k, result):
comparison_result = False
return comparison_result
elif isinstance(new, list):
comparison_result = True
if not isinstance(old, list) or len(new) != len(old):
result['compare'].append('changed [' + path + '] length is different or old value is null')
comparison_result = False
else:
if isinstance(old[0], dict):
key = None
if 'id' in old[0] and 'id' in new[0]:
key = 'id'
elif 'name' in old[0] and 'name' in new[0]:
key = 'name'
else:
key = next(iter(old[0]))
new = sorted(new, key=lambda x: x.get(key, None))
old = sorted(old, key=lambda x: x.get(key, None))
else:
new = sorted(new)
old = sorted(old)
for i in range(len(new)):
if not self.default_compare(modifiers, new[i], old[i], path + '/*', result):
comparison_result = False
return comparison_result
else:
updatable = modifiers.get(path, {}).get('updatable', True)
comparison = modifiers.get(path, {}).get('comparison', 'default')
if comparison == 'ignore':
return True
elif comparison == 'default' or comparison == 'sensitive':
if isinstance(old, string_types) and isinstance(new, string_types):
new = new.lower()
old = old.lower()
elif comparison == 'location':
if isinstance(old, string_types) and isinstance(new, string_types):
new = new.replace(' ', '').lower()
old = old.replace(' ', '').lower()
if str(new) != str(old):
result['compare'].append('changed [' + path + '] ' + str(new) + ' != ' + str(old) + ' - ' + str(comparison))
if updatable:
return False
else:
self.module.warn("property '" + path + "' cannot be updated (" + str(old) + "->" + str(new) + ")")
return True
else:
return True
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,542 |
azure_rm_snapshot: Doesn't support create snapshot from a managed disk
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The snapshot module currently only supports a storage blob as a creation source; it should
also support a managed disk as a creation source.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lib/ansible/modules/cloud/azure/azure_rm_snapshot.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0
config file = None
configured module search path = [u'/home/haiyuan/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/local/lib/python2.7/site-packages/ansible
executable location = /opt/ansible/bin/ansible
python version = 2.7.12 (default, Oct 8 2019, 14:14:10) [GCC 5.4.0 20160609]
```
|
https://github.com/ansible/ansible/issues/64542
|
https://github.com/ansible/ansible/pull/64547
|
ca42cb286807d75c10a1699bb100492db0d0ebd5
|
c11d73575b9be98caa77eb5cdc9ea4dd2442a6d1
| 2019-11-07T07:12:28Z |
python
| 2019-11-12T05:05:15Z |
lib/ansible/modules/cloud/azure/azure_rm_snapshot.py
|
#!/usr/bin/python
#
# Copyright (c) 2019 Zim Kalinowski, (@zikalino)
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: azure_rm_snapshot
version_added: '2.9'
short_description: Manage Azure Snapshot instance
description:
- Create, update and delete instance of Azure Snapshot.
options:
resource_group:
description:
- The name of the resource group.
required: true
type: str
name:
description:
- Resource name.
type: str
location:
description:
- Resource location.
type: str
sku:
description:
- The snapshots SKU.
type: dict
suboptions:
name:
description:
- The sku name.
type: str
choices:
- Standard_LRS
- Premium_LRS
- Standard_ZRS
tier:
description:
- The sku tier.
type: str
os_type:
description:
- The Operating System type.
type: str
choices:
- Linux
- Windows
creation_data:
description:
- Disk source information.
- CreationData information cannot be changed after the disk has been created.
type: dict
suboptions:
create_option:
description:
- This enumerates the possible sources of a disk's creation.
type: str
default: Import
choices:
- Import
source_uri:
description:
- If I(createOption=Import), this is the URI of a blob to be imported into a managed disk.
type: str
state:
description:
- Assert the state of the Snapshot.
- Use C(present) to create or update an Snapshot and C(absent) to delete it.
default: present
type: str
choices:
- absent
- present
extends_documentation_fragment:
- azure
- azure_tags
author:
- Zim Kalinowski (@zikalino)
'''
EXAMPLES = '''
- name: Create a snapshot by importing an unmanaged blob from the same subscription.
azure_rm_snapshot:
resource_group: myResourceGroup
name: mySnapshot
location: eastus
creation_data:
create_option: Import
source_uri: 'https://mystorageaccount.blob.core.windows.net/osimages/osimage.vhd'
'''
RETURN = '''
id:
description:
- Resource ID.
returned: always
type: str
sample: /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Compute/snapshots/mySnapshot
'''
import time
import json
import re
from ansible.module_utils.azure_rm_common_ext import AzureRMModuleBaseExt
from ansible.module_utils.azure_rm_common_rest import GenericRestClient
from copy import deepcopy
try:
from msrestazure.azure_exceptions import CloudError
except ImportError:
# this is handled in azure_rm_common
pass
class Actions:
NoAction, Create, Update, Delete = range(4)
class AzureRMSnapshots(AzureRMModuleBaseExt):
def __init__(self):
self.module_arg_spec = dict(
resource_group=dict(
type='str',
updatable=False,
disposition='resourceGroupName',
required=True
),
name=dict(
type='str',
updatable=False,
disposition='snapshotName',
required=True
),
location=dict(
type='str',
updatable=False,
disposition='/'
),
sku=dict(
type='dict',
disposition='/',
options=dict(
name=dict(
type='str',
choices=['Standard_LRS',
'Premium_LRS',
'Standard_ZRS']
),
tier=dict(
type='str'
)
)
),
os_type=dict(
type='str',
disposition='/properties/osType',
choices=['Windows',
'Linux']
),
creation_data=dict(
type='dict',
disposition='/properties/creationData',
options=dict(
create_option=dict(
type='str',
disposition='createOption',
choices=['Import'],
default='Import'
),
source_uri=dict(
type='str',
disposition='sourceUri'
)
)
),
state=dict(
type='str',
default='present',
choices=['present', 'absent']
)
)
self.resource_group = None
self.name = None
self.id = None
self.name = None
self.type = None
self.managed_by = None
self.results = dict(changed=False)
self.mgmt_client = None
self.state = None
self.url = None
self.status_code = [200, 201, 202]
self.to_do = Actions.NoAction
self.body = {}
self.query_parameters = {}
self.query_parameters['api-version'] = '2018-09-30'
self.header_parameters = {}
self.header_parameters['Content-Type'] = 'application/json; charset=utf-8'
super(AzureRMSnapshots, self).__init__(derived_arg_spec=self.module_arg_spec,
supports_check_mode=True,
supports_tags=True)
def exec_module(self, **kwargs):
for key in list(self.module_arg_spec.keys()):
if hasattr(self, key):
setattr(self, key, kwargs[key])
elif kwargs[key] is not None:
self.body[key] = kwargs[key]
self.inflate_parameters(self.module_arg_spec, self.body, 0)
old_response = None
response = None
self.mgmt_client = self.get_mgmt_svc_client(GenericRestClient,
base_url=self._cloud_environment.endpoints.resource_manager)
resource_group = self.get_resource_group(self.resource_group)
if 'location' not in self.body:
self.body['location'] = resource_group.location
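# Build the ARM REST URL for the snapshot by substituting the subscription id,
# resource group and snapshot name into the path template below.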
self.url = ('/subscriptions' +
'/{{ subscription_id }}' +
'/resourceGroups' +
'/{{ resource_group }}' +
'/providers' +
'/Microsoft.Compute' +
'/snapshots' +
'/{{ snapshot_name }}')
self.url = self.url.replace('{{ subscription_id }}', self.subscription_id)
self.url = self.url.replace('{{ resource_group }}', self.resource_group)
self.url = self.url.replace('{{ snapshot_name }}', self.name)
old_response = self.get_resource()
if not old_response:
self.log("Snapshot instance doesn't exist")
if self.state == 'absent':
self.log("Old instance didn't exist")
else:
self.to_do = Actions.Create
else:
self.log('Snapshot instance already exists')
if self.state == 'absent':
self.to_do = Actions.Delete
else:
modifiers = {}
self.create_compare_modifiers(self.module_arg_spec, '', modifiers)
self.results['modifiers'] = modifiers
self.results['compare'] = []
if not self.default_compare(modifiers, self.body, old_response, '', self.results):
self.to_do = Actions.Update
if (self.to_do == Actions.Create) or (self.to_do == Actions.Update):
self.log('Need to Create / Update the Snapshot instance')
if self.check_mode:
self.results['changed'] = True
return self.results
response = self.create_update_resource()
self.results['changed'] = True
self.log('Creation / Update done')
elif self.to_do == Actions.Delete:
self.log('Snapshot instance deleted')
self.results['changed'] = True
if self.check_mode:
return self.results
self.delete_resource()
# make sure instance is actually deleted, for some Azure resources, instance is hanging around
# for some time after deletion -- this should be really fixed in Azure
while self.get_resource():
time.sleep(20)
else:
self.log('Snapshot instance unchanged')
self.results['changed'] = False
response = old_response
if response:
self.results["id"] = response["id"]
return self.results
def create_update_resource(self):
# self.log('Creating / Updating the Snapshot instance {0}'.format(self.))
try:
response = self.mgmt_client.query(url=self.url,
method='PUT',
query_parameters=self.query_parameters,
header_parameters=self.header_parameters,
body=self.body,
expected_status_codes=self.status_code,
polling_timeout=600,
polling_interval=30)
except CloudError as exc:
self.log('Error attempting to create the Snapshot instance.')
self.fail('Error creating the Snapshot instance: {0}'.format(str(exc)))
try:
response = json.loads(response.text)
except Exception:
response = {'text': response.text}
return response
def delete_resource(self):
# self.log('Deleting the Snapshot instance {0}'.format(self.))
try:
response = self.mgmt_client.query(url=self.url,
method='DELETE',
query_parameters=self.query_parameters,
header_parameters=self.header_parameters,
body=None,
expected_status_codes=self.status_code,
polling_timeout=600,
polling_interval=30)
except CloudError as e:
self.log('Error attempting to delete the Snapshot instance.')
self.fail('Error deleting the Snapshot instance: {0}'.format(str(e)))
return True
def get_resource(self):
# self.log('Checking if the Snapshot instance {0} is present'.format(self.))
found = False
try:
response = self.mgmt_client.query(url=self.url,
method='GET',
query_parameters=self.query_parameters,
header_parameters=self.header_parameters,
body=None,
expected_status_codes=self.status_code,
polling_timeout=600,
polling_interval=30)
found = True
self.log("Response : {0}".format(response))
# self.log("Snapshot instance : {0} found".format(response.name))
except CloudError as e:
self.log('Did not find the Snapshot instance.')
if found is True:
return response
return False
def main():
AzureRMSnapshots()
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,542 |
azure_rm_snapshot: Doesn't support create snapshot from a managed disk
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The snapshot module currently only supports a storage blob as a creation source; it should
also support a managed disk as a creation source.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
lib/ansible/modules/cloud/azure/azure_rm_snapshot.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0
config file = None
configured module search path = [u'/home/haiyuan/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /opt/ansible/local/lib/python2.7/site-packages/ansible
executable location = /opt/ansible/bin/ansible
python version = 2.7.12 (default, Oct 8 2019, 14:14:10) [GCC 5.4.0 20160609]
```
|
https://github.com/ansible/ansible/issues/64542
|
https://github.com/ansible/ansible/pull/64547
|
ca42cb286807d75c10a1699bb100492db0d0ebd5
|
c11d73575b9be98caa77eb5cdc9ea4dd2442a6d1
| 2019-11-07T07:12:28Z |
python
| 2019-11-12T05:05:15Z |
test/integration/targets/azure_rm_gallery/tasks/main.yml
|
- name: Prepare random number
set_fact:
rpfx: "{{ resource_group | hash('md5') | truncate(7, True, '') }}{{ 1000 | random }}"
run_once: yes
- name: Create virtual network
azure_rm_virtualnetwork:
resource_group: "{{ resource_group }}"
name: testVnet
address_prefixes: "10.0.0.0/16"
- name: Add subnet
azure_rm_subnet:
resource_group: "{{ resource_group }}"
name: testSubnet
address_prefix: "10.0.1.0/24"
virtual_network: testVnet
- name: Create public IP address
azure_rm_publicipaddress:
resource_group: "{{ resource_group }}"
allocation_method: Static
name: testPublicIP
- name: Create virtual network inteface cards for VM A and B
azure_rm_networkinterface:
resource_group: "{{ resource_group }}"
name: "vmforimage{{ rpfx }}nic"
virtual_network: testVnet
subnet: testSubnet
- name: Create VM
azure_rm_virtualmachine:
resource_group: "{{ resource_group }}"
name: "vmforimage{{ rpfx }}"
location: eastus
admin_username: testuser
admin_password: "Password1234!"
vm_size: Standard_B1ms
network_interfaces: "vmforimage{{ rpfx }}nic"
image:
offer: UbuntuServer
publisher: Canonical
sku: 16.04-LTS
version: latest
- name: Get VM facts
azure_rm_virtualmachine_facts:
resource_group: "{{ resource_group }}"
name: "vmforimage{{ rpfx }}"
register: output
- name: Create a snapshot by importing an unmanaged blob from the same subscription.
azure_rm_snapshot:
resource_group: "{{ resource_group }}"
name: mySnapshot
location: eastus
creation_data:
create_option: Import
source_uri: 'https://{{ output.vms[0].storage_account_name }}.blob.core.windows.net/{{ output.vms[0].storage_container_name }}/{{ output.vms[0].storage_blob_name }}'
register: output
- assert:
that:
- output.changed
- name: Generalize VM
azure_rm_virtualmachine:
resource_group: "{{ resource_group }}"
name: "vmforimage{{ rpfx }}"
generalized: yes
- name: Create custom image
azure_rm_image:
resource_group: "{{ resource_group }}"
name: testimagea
source: "vmforimage{{ rpfx }}"
- name: Create or update a simple gallery.
azure_rm_gallery:
resource_group: "{{ resource_group }}"
name: myGallery{{ rpfx }}
location: eastus
description: This is the gallery description.
register: output
- assert:
that:
- output.changed
- name: Create or update a simple gallery - idempotent
azure_rm_gallery:
resource_group: "{{ resource_group }}"
name: myGallery{{ rpfx }}
location: eastus
description: This is the gallery description.
register: output
- assert:
that:
- not output.changed
- name: Create or update a simple gallery - change description
azure_rm_gallery:
resource_group: "{{ resource_group }}"
name: myGallery{{ rpfx }}
location: eastus
description: This is the gallery description - xxx.
register: output
- assert:
that:
- output.changed
- name: Get a gallery info.
azure_rm_gallery_info:
resource_group: "{{ resource_group }}"
name: myGallery{{ rpfx }}
register: output
- assert:
that:
- not output.changed
- output.galleries['id'] != None
- output.galleries['name'] != None
- output.galleries['location'] != None
- output.galleries['description'] != None
- output.galleries['provisioning_state'] != None
- name: Create or update gallery image
azure_rm_galleryimage:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
name: myImage
location: eastus
os_type: linux
os_state: generalized
identifier:
publisher: myPublisherName
offer: myOfferName
sku: mySkuName
description: Image Description
register: output
- assert:
that:
- output.changed
- name: Create or update gallery image - idempotent
azure_rm_galleryimage:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
name: myImage
location: eastus
os_type: linux
os_state: generalized
identifier:
publisher: myPublisherName
offer: myOfferName
sku: mySkuName
description: Image Description
register: output
- assert:
that:
- not output.changed
- name: Create or update gallery image - change description
azure_rm_galleryimage:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
name: myImage
location: eastus
os_type: linux
os_state: generalized
identifier:
publisher: myPublisherName
offer: myOfferName
sku: mySkuName
description: Image Description XXXs
register: output
- assert:
that:
- output.changed
- name: Get a gallery image info.
azure_rm_galleryimage_info:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
name: myImage
register: output
- assert:
that:
- not output.changed
- output.images['id'] != None
- output.images['name'] != None
- output.images['location'] != None
- output.images['os_state'] != None
- output.images['os_type'] != None
- output.images['identifier'] != None
- name: Create or update a simple gallery Image Version.
azure_rm_galleryimageversion:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
gallery_image_name: myImage
name: 10.1.3
location: eastus
publishing_profile:
end_of_life_date: "2020-10-01t00:00:00+00:00"
exclude_from_latest: yes
replica_count: 3
storage_account_type: Standard_LRS
target_regions:
- name: eastus
regional_replica_count: 1
- name: westus
regional_replica_count: 2
storage_account_type: Standard_ZRS
managed_image:
name: testimagea
resource_group: "{{ resource_group }}"
register: output
- assert:
that:
- output.changed
- name: Create or update a simple gallery Image Version - idempotent
azure_rm_galleryimageversion:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
gallery_image_name: myImage
name: 10.1.3
location: eastus
publishing_profile:
end_of_life_date: "2020-10-01t00:00:00+00:00"
exclude_from_latest: yes
replica_count: 3
storage_account_type: Standard_LRS
target_regions:
- name: eastus
regional_replica_count: 1
- name: westus
regional_replica_count: 2
storage_account_type: Standard_ZRS
managed_image:
name: testimagea
resource_group: "{{ resource_group }}"
register: output
- assert:
that:
- not output.changed
- name: Create or update a simple gallery Image Version - change end of life
azure_rm_galleryimageversion:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
gallery_image_name: myImage
name: 10.1.3
location: eastus
publishing_profile:
end_of_life_date: "2021-10-01t00:00:00+00:00"
exclude_from_latest: yes
replica_count: 3
storage_account_type: Standard_LRS
target_regions:
- name: eastus
regional_replica_count: 1
- name: westus
regional_replica_count: 2
storage_account_type: Standard_ZRS
managed_image:
name: testimagea
resource_group: "{{ resource_group }}"
register: output
- assert:
that:
- output.changed
- name: Get a simple gallery Image Version info.
azure_rm_galleryimageversion_info:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
gallery_image_name: myImage
name: 10.1.3
register: output
- assert:
that:
- not output.changed
- output.versions['id'] != None
- output.versions['name'] != None
- output.versions['location'] != None
- output.versions['publishing_profile'] != None
- output.versions['provisioning_state'] != None
- name: Delete gallery image Version.
azure_rm_galleryimageversion:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
gallery_image_name: myImage
name: 10.1.3
state: absent
register: output
- assert:
that:
- output.changed
- name: Delete gallery image
azure_rm_galleryimage:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
name: myImage
state: absent
register: output
- assert:
that:
- output.changed
- name: Delete gallery
azure_rm_gallery:
resource_group: "{{ resource_group }}"
name: myGallery{{ rpfx }}
state: absent
register: output
- assert:
that:
- output.changed
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,267 |
win_pester - Add the possibility to generate a test result xml file #4
|
##### SUMMARY
To be able to show the test results in CI, it would be great if we could output the test results as an XML file:
https://github.com/pester/Pester/wiki/Showing-Test-Results-in-CI-%28TeamCity%2C-AppVeyor%2C-Azure-DevOps%29
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
`win_pester`
##### ADDITIONAL INFORMATION
The usage of this module would match the following playbook :
```yaml
- name: Get facts
setup:
- name: Copy test file(s)
win_copy:
src: "{{ item }}"
dest: "{{ remote_test_folder[0] }}"
with_items: "{{local_test_files}}"
- name: Add Pester module
action:
module_name: "{{ 'win_psmodule' if ansible_powershell_version >= 5 else 'win_chocolatey' }}"
name: Pester
state: present
- name: Run the pester test present in a folder and check the Pester module version.
win_pester:
path: C:\Pester\
version: 4.1.0
output_file: C:\Pester\TestsResults.xml
output_format: JUnitXML
- name: Store file into /tmp
fetch:
src: C:\Pester\TestsResults.xml
dest: /tmp/TestsResults.xml
```
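A minimal sketch of how the module could support this, assuming Pester 4.x where `Invoke-Pester` accepts `-OutputFile` and `-OutputFormat` (for example `NUnitXml`); the `output_file`/`output_format` option names are taken from the playbook above, and `$Parameters` is the hashtable the module already splats to `Invoke-Pester`:
```ps1
$output_file = Get-AnsibleParam -obj $params -name "output_file" -type "str"
$output_format = Get-AnsibleParam -obj $params -name "output_format" -type "str" -default "NUnitXml"
if ($output_file) {
    # Forward the result file options straight through to Invoke-Pester.
    $Parameters.OutputFile = $output_file
    $Parameters.OutputFormat = $output_format
}
```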
|
https://github.com/ansible/ansible/issues/63267
|
https://github.com/ansible/ansible/pull/63583
|
c0331053dbe7d1ae52627c32c9e1bf25b6357402
|
4bf79de8a65b5bb8bddeff3d328adbd49a8dd19b
| 2019-10-09T08:03:51Z |
python
| 2019-11-12T06:13:52Z |
lib/ansible/modules/windows/win_pester.ps1
|
#!powershell
# Copyright: (c) 2017, Erwan Quelin (@equelin) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#Requires -Module Ansible.ModuleUtils.Legacy
$ErrorActionPreference = 'Stop'
$params = Parse-Args -arguments $args -supports_check_mode $true
$check_mode = Get-AnsibleParam -obj $params -name "_ansible_check_mode" -type "bool" -default $false
$diff_mode = Get-AnsibleParam -obj $params -name "_ansible_diff" -type "bool" -default $false
# Modules parameters
$path = Get-AnsibleParam -obj $params -name "path" -type "str" -failifempty $true
$tags = Get-AnsibleParam -obj $params -name "tags" -type "list"
$test_parameters = Get-AnsibleParam -obj $params -name "test_parameters" -type "dict"
$minimum_version = Get-AnsibleParam -obj $params -name "minimum_version" -type "str" -failifempty $false
$result = @{
changed = $false
}
if ($diff_mode) {
$result.diff = @{}
}
# CODE
# Test if parameter $version is valid
Try {
$minimum_version = [version]$minimum_version
}
Catch {
Fail-Json -obj $result -message "Value '$minimum_version' for parameter 'minimum_version' is not a valid version format"
}
# Import Pester module if available
$Module = 'Pester'
If (-not (Get-Module -Name $Module -ErrorAction SilentlyContinue)) {
If (Get-Module -Name $Module -ListAvailable -ErrorAction SilentlyContinue) {
Import-Module $Module
} else {
Fail-Json -obj $result -message "Cannot find module: $Module. Check if pester is installed, and if it is not, install using win_psmodule or win_chocolatey."
}
}
# Add actual pester's module version in the ansible's result variable
$Pester_version = (Get-Module -Name $Module).Version.ToString()
$result.pester_version = $Pester_version
# Test if the Pester module is available with a version greater or equal than the one specified in the $version parameter
If ((-not (Get-Module -Name $Module -ErrorAction SilentlyContinue | Where-Object {$_.Version -ge $minimum_version})) -and ($minimum_version)) {
Fail-Json -obj $result -message "$Module version is not greater or equal to $minimum_version"
}
# Testing if test file or directory exist
If (-not (Test-Path -LiteralPath $path)) {
Fail-Json -obj $result -message "Cannot find file or directory: '$path' as it does not exist"
}
#Prepare Invoke-Pester parameters depending of the Pester's version.
#Invoke-Pester output deactivation behave differently depending on the Pester's version
If ($result.pester_version -ge "4.0.0") {
$Parameters = @{
"show" = "none"
"PassThru" = $True
}
} else {
$Parameters = @{
"quiet" = $True
"PassThru" = $True
}
}
if ($tags.count) {
$Parameters.Tag = $tags
}
# Run Pester tests
If (Test-Path -LiteralPath $path -PathType Leaf) {
$test_parameters_check_mode_msg = ''
if ($test_parameters.keys.count) {
$Parameters.Script = @{Path = $Path ; Parameters = $test_parameters }
$test_parameters_check_mode_msg = " with $($test_parameters.keys -join ',') parameters"
}
else {
$Parameters.Script = $Path
}
if ($check_mode) {
$result.output = "Run pester test in the file: $path$test_parameters_check_mode_msg"
} else {
try {
$result.output = Invoke-Pester @Parameters
} catch {
Fail-Json -obj $result -message $_.Exception
}
}
} else {
# Run Pester tests against all the .ps1 file in the local folder
$files = Get-ChildItem -Path $path | Where-Object {$_.extension -eq ".ps1"}
if ($check_mode) {
$result.output = "Run pester test(s) who are in the folder: $path"
} else {
try {
$result.output = Invoke-Pester $files.FullName @Parameters
} catch {
Fail-Json -obj $result -message $_.Exception
}
}
}
$result.changed = $true
Exit-Json -obj $result
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,267 |
win_pester - Add the possibility to generate a test result xml file #4
|
##### SUMMARY
To be able to show the test results in CI, it would be great if we could output the test results as an XML file:
https://github.com/pester/Pester/wiki/Showing-Test-Results-in-CI-%28TeamCity%2C-AppVeyor%2C-Azure-DevOps%29
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
`win_pester`
##### ADDITIONAL INFORMATION
The usage of this module would match the following playbook :
```yaml
- name: Get facts
setup:
- name: Copy test file(s)
win_copy:
src: "{{ item }}"
dest: "{{ remote_test_folder[0] }}"
with_items: "{{local_test_files}}"
- name: Add Pester module
action:
module_name: "{{ 'win_psmodule' if ansible_powershell_version >= 5 else 'win_chocolatey' }}"
name: Pester
state: present
- name: Run the pester test present in a folder and check the Pester module version.
win_pester:
path: C:\Pester\
version: 4.1.0
output_file: C:\Pester\TestsResults.xml
output_format: JUnitXML
- name: Store file into /tmp
fetch:
src: C:\Pester\TestsResults.xml
dest: /tmp/TestsResults.xml
```
|
https://github.com/ansible/ansible/issues/63267
|
https://github.com/ansible/ansible/pull/63583
|
c0331053dbe7d1ae52627c32c9e1bf25b6357402
|
4bf79de8a65b5bb8bddeff3d328adbd49a8dd19b
| 2019-10-09T08:03:51Z |
python
| 2019-11-12T06:13:52Z |
lib/ansible/modules/windows/win_pester.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: win_pester
short_description: Run Pester tests on Windows hosts
version_added: "2.6"
description:
- Run Pester tests on Windows hosts.
- Test files have to be available on the remote host.
requirements:
- Pester
options:
path:
description:
- Path to a pester test file or a folder where tests can be found.
- If the path is a folder, the module will consider all .ps1 files as Pester tests.
type: str
required: true
tags:
description:
- Runs only tests in Describe blocks with specified Tags values.
- Accepts multiple comma separated tags.
type: list
version_added: '2.9'
test_parameters:
description:
- Allows specifying parameters to pass to the test script.
type: dict
version_added: '2.9'
version:
description:
- Minimum version of the Pester module that has to be available on the remote host.
author:
- Erwan Quelin (@equelin)
'''
EXAMPLES = r'''
- name: Get facts
setup:
- name: Add Pester module
action:
module_name: "{{ 'win_psmodule' if ansible_powershell_version >= 5 else 'win_chocolatey' }}"
name: Pester
state: present
- name: Run the pester test provided in the path parameter.
win_pester:
path: C:\Pester
- name: Run the pester tests only for the tags specified.
win_pester:
path: C:\Pester\TestScript.tests
tags: CI,UnitTests
# Run a Pester test file and
# ensure that the available Pester module version is greater than or equal to the version parameter.
- name: Run a pester test file and check the Pester module version.
win_pester:
path: C:\Pester\test01.test.ps1
version: 4.1.0
- name: Run a pester test file with given script parameters.
win_pester:
path: C:\Pester\test04.test.ps1
test_parameters:
Process: lsass
Service: bits
'''
RETURN = r'''
pester_version:
description: Version of the pester module found on the remote host.
returned: always
type: str
sample: 4.3.1
output:
description: Results of the Pester tests.
returned: success
type: list
sample: false
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,267 |
win_pester - Add the possibility to generate a test result xml file #4
|
##### SUMMARY
To be able to show the test results in CI, it would be great if we could output the test results as an XML file:
https://github.com/pester/Pester/wiki/Showing-Test-Results-in-CI-%28TeamCity%2C-AppVeyor%2C-Azure-DevOps%29
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
`win_pester`
##### ADDITIONAL INFORMATION
The usage of this module would match the following playbook:
```yaml
- name: Get facts
setup:
- name : Copy test file(s)
win_copy:
src: "{{ item }}"
dest: "{{ remote_test_folder[0] }}"
with_items: "{{local_test_files}}"
- name: Add Pester module
action:
module_name: "{{ 'win_psmodule' if ansible_powershell_version >= 5 else 'win_chocolatey' }}"
name: Pester
state: present
- name: Run the pester test present in a folder and check the Pester module version.
win_pester:
path: C:\Pester\
version: 4.1.0
output_file: C:\Pester\TestsResults.xml
output_format: JUnitXML
- name: Store file into /tmp
fetch:
src: C:\Pester\TestsResults.xml
dest: /tmp/TestsResults.xml
```
|
https://github.com/ansible/ansible/issues/63267
|
https://github.com/ansible/ansible/pull/63583
|
c0331053dbe7d1ae52627c32c9e1bf25b6357402
|
4bf79de8a65b5bb8bddeff3d328adbd49a8dd19b
| 2019-10-09T08:03:51Z |
python
| 2019-11-12T06:13:52Z |
test/integration/targets/win_pester/defaults/main.yml
|
---
test_win_pester_path: C:\ansible\win_pester
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,267 |
win_pester - Add the possibility to generate a test result xml file #4
|
##### SUMMARY
To be able to show the test results in CI, it would be great if we could output the test results as an XML file:
https://github.com/pester/Pester/wiki/Showing-Test-Results-in-CI-%28TeamCity%2C-AppVeyor%2C-Azure-DevOps%29
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
`win_pester`
##### ADDITIONAL INFORMATION
The usage of this module would match the following playbook:
```yaml
- name: Get facts
setup:
- name : Copy test file(s)
win_copy:
src: "{{ item }}"
dest: "{{ remote_test_folder[0] }}"
with_items: "{{local_test_files}}"
- name: Add Pester module
action:
module_name: "{{ 'win_psmodule' if ansible_powershell_version >= 5 else 'win_chocolatey' }}"
name: Pester
state: present
- name: Run the pester test present in a folder and check the Pester module version.
win_pester:
path: C:\Pester\
version: 4.1.0
output_file: C:\Pester\TestsResults.xml
output_format: JUnitXML
- name: Store file into /tmp
fetch:
src: C:\Pester\TestsResults.xml
dest: /tmp/TestsResults.xml
```
|
https://github.com/ansible/ansible/issues/63267
|
https://github.com/ansible/ansible/pull/63583
|
c0331053dbe7d1ae52627c32c9e1bf25b6357402
|
4bf79de8a65b5bb8bddeff3d328adbd49a8dd19b
| 2019-10-09T08:03:51Z |
python
| 2019-11-12T06:13:52Z |
test/integration/targets/win_pester/tasks/test.yml
|
---
- name: Run Pester test(s) specifying a fake test file
win_pester:
path: '{{test_win_pester_path}}\fakefile.ps1'
register: fake_file
failed_when: '"Cannot find file or directory: ''" + test_win_pester_path + "\\fakefile.ps1'' as it does not exist" not in fake_file.msg'
- name: Run Pester test(s) specifying a fake folder
win_pester:
path: '{{test_win_pester_path }}\fakedir'
register: fake_folder
failed_when: '"Cannot find file or directory: ''" + test_win_pester_path + "\\fakedir'' as it does not exist" not in fake_folder.msg'
- name: Run Pester test(s) specifying a test file and a higher pester version
win_pester:
path: '{{test_win_pester_path}}\test01.test.ps1'
minimum_version: '6.0.0'
register: invalid_version
failed_when: '"Pester version is not greater or equal to 6.0.0" not in invalid_version.msg'
- name: Run Pester test(s) specifying a test file
win_pester:
path: '{{test_win_pester_path}}\test01.test.ps1'
register: file_result
- name: assert Run Pester test(s) specifying a test file
assert:
that:
- file_result.changed
- not file_result.failed
- file_result.output.TotalCount == 1
- name: Run Pester test(s) specifying a test file and with a minimum mandatory Pester version
win_pester:
path: '{{test_win_pester_path}}\test01.test.ps1'
minimum_version: 3.0.0
register: file_result_with_version
- name: assert Run Pester test(s) specifying a test file and a minimum mandatory Pester version
assert:
that:
- file_result_with_version.changed
- not file_result_with_version.failed
- file_result_with_version.output.TotalCount == 1
- name: Run Pester test(s) located in a folder. Folder path ends with '\'
win_pester:
path: '{{test_win_pester_path}}\'
register: dir_with_ending_slash
- name: assert Run Pester test(s) located in a folder. Folder path ends with '\'
assert:
that:
- dir_with_ending_slash.changed
- not dir_with_ending_slash.failed
- dir_with_ending_slash.output.TotalCount == 6
- name: Run Pester test(s) located in a folder. Folder path does not end with '\'
win_pester:
path: '{{test_win_pester_path}}'
register: dir_without_ending_slash
- name: assert Run Pester test(s) located in a folder. Folder path does not end with '\'
assert:
that:
- dir_without_ending_slash.changed
- not dir_without_ending_slash.failed
- dir_without_ending_slash.output.TotalCount == 6
- name: Run Pester test(s) located in a folder and with a minimum mandatory Pester version
win_pester:
path: '{{test_win_pester_path}}'
minimum_version: 3.0.0
register: dir_with_version
- name: assert Run Pester test(s) located in a folder and with a minimum mandatory Pester version
assert:
that:
- dir_with_version.changed
- not dir_with_version.failed
- dir_with_version.output.TotalCount == 6
- name: Run Pester test(s) specifying a test file without specifying tag
win_pester:
path: '{{test_win_pester_path}}\test03.test.ps1'
register: test_no_tag
- name: assert Run Pester test(s) specifying a test file and all tests executed
assert:
that:
- test_no_tag.changed
- test_no_tag.output.TotalCount == 2
- name: Run Pester test(s) specifying a test file with tag
win_pester:
path: '{{test_win_pester_path}}\test03.test.ps1'
tags: tag1
register: test_with_tag
- name: assert Run Pester test(s) specifying a test file and only the test with the specified tag executed
assert:
that:
- test_with_tag.changed
- test_with_tag.output.TotalCount == 1
- name: Run Pester test(s) specifying a test file with parameters
win_pester:
path: '{{test_win_pester_path}}\test04.test.ps1'
test_parameters:
Process: lsass
Service: bits
register: test_with_parameter
- name: assert Run Pester test(s) specifying a test file with parameters
assert:
that:
- test_with_parameter.changed
- test_with_parameter.output.PassedCount == 2
- test_with_parameter.output.TotalCount == 2
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,587 |
ansible-doc failing on specific line in documentation
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
ansible-doc is failing on the line below
https://github.com/dynatrace-innovationlab/ansible_dynatrace_problem_comment/blob/master/library/dynatrace_comment.py#L47
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible-doc
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
ansible-doc parses the line into something like the following:
u'Source where the comment originates from (default': u'Ansible)'
The above is not a valid Python string, and hence an exception is thrown
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No exception should be thrown, instead ansible-doc should fail the test
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
ansible-doc -t module dynatrace_innovationlab.dynatrace_collection.dynatrace_comment dynatrace_innovationlab.dynatrace_collection.dynatrace_deployment -vvv
<!--- Paste verbatim command output between quotes -->
```paste below
Traceback (most recent call last):
File "/home/abehl/work/src/anshul_ansible/ansible/bin/ansible-doc", line 111, in <module>
exit_code = cli.run()
File "/home/abehl/work/src/anshul_ansible/ansible/lib/ansible/cli/doc.py", line 198, in run
textret = DocCLI.format_plugin_doc(plugin, loader, plugin_type, search_paths)
File "/home/abehl/work/src/anshul_ansible/ansible/lib/ansible/cli/doc.py", line 309, in format_plugin_doc
text += DocCLI.get_man_text(doc)
File "/home/abehl/work/src/anshul_ansible/ansible/lib/ansible/cli/doc.py", line 598, in get_man_text
DocCLI.add_fields(text, doc.pop('options'), limit, opt_indent)
File "/home/abehl/work/src/anshul_ansible/ansible/lib/ansible/cli/doc.py", line 476, in add_fields
text.append(textwrap.fill(DocCLI.tty_ify(entry), limit, initial_indent=opt_indent, subsequent_indent=opt_indent))
File "/home/abehl/work/src/anshul_ansible/ansible/lib/ansible/cli/__init__.py", line 426, in tty_ify
t = cls._ITALIC.sub("`" + r"\1" + "'", text) # I(word) => `word'
TypeError: expected string or buffer
```
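The root cause is reproducible outside ansible-doc: the module's DOCUMENTATION block is YAML, and an unquoted `key: value` inside a description line is parsed as a mapping rather than a string. A minimal sketch:
```python
import yaml

line = "Source where the comment originates from (default: Ansible)"
print(yaml.safe_load(line))
# -> {'Source where the comment originates from (default': 'Ansible)'}
# DocCLI.tty_ify() then runs its regexes over this dict instead of a string,
# which raises "TypeError: expected string or buffer" as shown above.
```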
|
https://github.com/ansible/ansible/issues/60587
|
https://github.com/ansible/ansible/pull/60933
|
4bf79de8a65b5bb8bddeff3d328adbd49a8dd19b
|
575116a584b5bb2fcfa3270611677f37d18295a8
| 2019-08-14T16:52:38Z |
python
| 2019-11-12T11:18:46Z |
changelogs/fragments/60587-doc_parsing.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,587 |
ansible-doc failing on specific line in documentation
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
ansible-doc is failing on the line below
https://github.com/dynatrace-innovationlab/ansible_dynatrace_problem_comment/blob/master/library/dynatrace_comment.py#L47
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible-doc
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
ansible-doc parses the line into something like the following:
u'Source where the comment originates from (default': u'Ansible)'
The above is not a valid Python string, and hence an exception is thrown
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
No exception should be thrown, instead ansible-doc should fail the test
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
ansible-doc -t module dynatrace_innovationlab.dynatrace_collection.dynatrace_comment dynatrace_innovationlab.dynatrace_collection.dynatrace_deployment -vvv
<!--- Paste verbatim command output between quotes -->
```paste below
Traceback (most recent call last):
File "/home/abehl/work/src/anshul_ansible/ansible/bin/ansible-doc", line 111, in <module>
exit_code = cli.run()
File "/home/abehl/work/src/anshul_ansible/ansible/lib/ansible/cli/doc.py", line 198, in run
textret = DocCLI.format_plugin_doc(plugin, loader, plugin_type, search_paths)
File "/home/abehl/work/src/anshul_ansible/ansible/lib/ansible/cli/doc.py", line 309, in format_plugin_doc
text += DocCLI.get_man_text(doc)
File "/home/abehl/work/src/anshul_ansible/ansible/lib/ansible/cli/doc.py", line 598, in get_man_text
DocCLI.add_fields(text, doc.pop('options'), limit, opt_indent)
File "/home/abehl/work/src/anshul_ansible/ansible/lib/ansible/cli/doc.py", line 476, in add_fields
text.append(textwrap.fill(DocCLI.tty_ify(entry), limit, initial_indent=opt_indent, subsequent_indent=opt_indent))
File "/home/abehl/work/src/anshul_ansible/ansible/lib/ansible/cli/__init__.py", line 426, in tty_ify
t = cls._ITALIC.sub("`" + r"\1" + "'", text) # I(word) => `word'
TypeError: expected string or buffer
```
|
https://github.com/ansible/ansible/issues/60587
|
https://github.com/ansible/ansible/pull/60933
|
4bf79de8a65b5bb8bddeff3d328adbd49a8dd19b
|
575116a584b5bb2fcfa3270611677f37d18295a8
| 2019-08-14T16:52:38Z |
python
| 2019-11-12T11:18:46Z |
lib/ansible/cli/doc.py
|
# Copyright: (c) 2014, James Tanner <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import datetime
import json
import os
import textwrap
import traceback
import yaml
import ansible.plugins.loader as plugin_loader
from ansible import constants as C
from ansible import context
from ansible.cli import CLI
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.module_utils._text import to_native
from ansible.module_utils.common._collections_compat import Container, Sequence
from ansible.module_utils.six import string_types
from ansible.parsing.metadata import extract_metadata
from ansible.parsing.plugin_docs import read_docstub
from ansible.parsing.yaml.dumper import AnsibleDumper
from ansible.plugins.loader import action_loader, fragment_loader
from ansible.utils.collection_loader import set_collection_playbook_paths
from ansible.utils.display import Display
from ansible.utils.plugin_docs import BLACKLIST, get_docstring, get_versioned_doclink
display = Display()
def jdump(text):
display.display(json.dumps(text, sort_keys=True, indent=4))
class RemovedPlugin(Exception):
pass
class PluginNotFound(Exception):
pass
class DocCLI(CLI):
''' displays information on modules installed in Ansible libraries.
It displays a terse listing of plugins and their short descriptions,
provides a printout of their DOCUMENTATION strings,
and it can create a short "snippet" which can be pasted into a playbook. '''
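# Typical invocations (flags are defined in init_parser below):
#   ansible-doc -l                 list available plugins of the given type
#   ansible-doc -t lookup file     show documentation for the 'file' lookup plugin
#   ansible-doc -s copy            print a playbook snippet for the copy module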
# default ignore list for detailed views
IGNORE = ('module', 'docuri', 'version_added', 'short_description', 'now_date', 'plainexamples', 'returndocs')
def __init__(self, args):
super(DocCLI, self).__init__(args)
self.plugin_list = set()
def init_parser(self):
super(DocCLI, self).init_parser(
desc="plugin documentation tool",
epilog="See man pages for Ansible CLI options or website for tutorials https://docs.ansible.com"
)
opt_help.add_module_options(self.parser)
opt_help.add_basedir_options(self.parser)
self.parser.add_argument('args', nargs='*', help='Plugin', metavar='plugin')
self.parser.add_argument("-t", "--type", action="store", default='module', dest='type',
help='Choose which plugin type (defaults to "module"). '
'Available plugin types are : {0}'.format(C.DOCUMENTABLE_PLUGINS),
choices=C.DOCUMENTABLE_PLUGINS)
self.parser.add_argument("-j", "--json", action="store_true", default=False, dest='json_format',
help='Change output into json format.')
exclusive = self.parser.add_mutually_exclusive_group()
exclusive.add_argument("-F", "--list_files", action="store_true", default=False, dest="list_files",
help='Show plugin names and their source files without summaries (implies --list)')
exclusive.add_argument("-l", "--list", action="store_true", default=False, dest='list_dir',
help='List available plugins')
exclusive.add_argument("-s", "--snippet", action="store_true", default=False, dest='show_snippet',
help='Show playbook snippet for specified plugin(s)')
exclusive.add_argument("--metadata-dump", action="store_true", default=False, dest='dump',
help='**For internal testing only** Dump json metadata for all plugins.')
def post_process_args(self, options):
options = super(DocCLI, self).post_process_args(options)
display.verbosity = options.verbosity
return options
def run(self):
super(DocCLI, self).run()
plugin_type = context.CLIARGS['type']
do_json = context.CLIARGS['json_format']
if plugin_type in C.DOCUMENTABLE_PLUGINS:
loader = getattr(plugin_loader, '%s_loader' % plugin_type)
else:
raise AnsibleOptionsError("Unknown or undocumentable plugin type: %s" % plugin_type)
# add to plugin paths from command line
basedir = context.CLIARGS['basedir']
if basedir:
set_collection_playbook_paths(basedir)
loader.add_directory(basedir, with_subdir=True)
if context.CLIARGS['module_path']:
for path in context.CLIARGS['module_path']:
if path:
loader.add_directory(path)
# save only top level paths for errors
search_paths = DocCLI.print_paths(loader)
loader._paths = None # reset so we can use subdirs below
# list plugins names and filepath for type
if context.CLIARGS['list_files']:
paths = loader._get_paths()
for path in paths:
self.plugin_list.update(DocCLI.find_plugins(path, plugin_type))
plugins = self._get_plugin_list_filenames(loader)
if do_json:
jdump(plugins)
else:
# format for user
displace = max(len(x) for x in self.plugin_list)
linelimit = display.columns - displace - 5
text = []
for plugin in plugins.keys():
filename = plugins[plugin]
text.append("%-*s %-*.*s" % (displace, plugin, linelimit, len(filename), filename))
DocCLI.pager("\n".join(text))
# list file plugins for type (does not read docs, very fast)
elif context.CLIARGS['list_dir']:
paths = loader._get_paths()
for path in paths:
self.plugin_list.update(DocCLI.find_plugins(path, plugin_type))
descs = self._get_plugin_list_descriptions(loader)
if do_json:
jdump(descs)
else:
displace = max(len(x) for x in self.plugin_list)
linelimit = display.columns - displace - 5
text = []
deprecated = []
for plugin in descs.keys():
desc = DocCLI.tty_ify(descs[plugin])
if len(desc) > linelimit:
desc = desc[:linelimit] + '...'
if plugin.startswith('_'): # Handle deprecated
deprecated.append("%-*s %-*.*s" % (displace, plugin[1:], linelimit, len(desc), desc))
else:
text.append("%-*s %-*.*s" % (displace, plugin, linelimit, len(desc), desc))
if len(deprecated) > 0:
text.append("\nDEPRECATED:")
text.extend(deprecated)
DocCLI.pager("\n".join(text))
# dump plugin desc/metadata as JSON
elif context.CLIARGS['dump']:
plugin_data = {}
plugin_names = DocCLI.get_all_plugins_of_type(plugin_type)
for plugin_name in plugin_names:
plugin_info = DocCLI.get_plugin_metadata(plugin_type, plugin_name)
if plugin_info is not None:
plugin_data[plugin_name] = plugin_info
jdump(plugin_data)
else:
# display specific plugin docs
if len(context.CLIARGS['args']) == 0:
raise AnsibleOptionsError("Incorrect options passed")
# get the docs for plugins in the command line list
plugin_docs = {}
for plugin in context.CLIARGS['args']:
try:
doc, plainexamples, returndocs, metadata = DocCLI._get_plugin_doc(plugin, loader, search_paths)
except PluginNotFound:
display.warning("%s %s not found in:\n%s\n" % (plugin_type, plugin, search_paths))
continue
except RemovedPlugin:
display.warning("%s %s has been removed\n" % (plugin_type, plugin))
continue
except Exception as e:
display.vvv(traceback.format_exc())
raise AnsibleError("%s %s missing documentation (or could not parse"
" documentation): %s\n" %
(plugin_type, plugin, to_native(e)))
if not doc:
# The doc section existed but was empty
continue
plugin_docs[plugin] = {'doc': doc, 'examples': plainexamples,
'return': returndocs, 'metadata': metadata}
if do_json:
# Some changes to how json docs are formatted
for plugin, doc_data in plugin_docs.items():
try:
doc_data['return'] = yaml.load(doc_data['return'])
except Exception:
pass
jdump(plugin_docs)
else:
# Some changes to how plain text docs are formatted
text = []
for plugin, doc_data in plugin_docs.items():
textret = DocCLI.format_plugin_doc(plugin, plugin_type,
doc_data['doc'], doc_data['examples'],
doc_data['return'], doc_data['metadata'])
if textret:
text.append(textret)
if text:
DocCLI.pager(''.join(text))
return 0
@staticmethod
def get_all_plugins_of_type(plugin_type):
loader = getattr(plugin_loader, '%s_loader' % plugin_type)
plugin_list = set()
paths = loader._get_paths()
for path in paths:
plugins_to_add = DocCLI.find_plugins(path, plugin_type)
plugin_list.update(plugins_to_add)
return sorted(set(plugin_list))
@staticmethod
def get_plugin_metadata(plugin_type, plugin_name):
# if the plugin lives in a non-python file (eg, win_X.ps1), require the corresponding python file for docs
loader = getattr(plugin_loader, '%s_loader' % plugin_type)
filename = loader.find_plugin(plugin_name, mod_type='.py', ignore_deprecated=True, check_aliases=True)
if filename is None:
raise AnsibleError("unable to load {0} plugin named {1} ".format(plugin_type, plugin_name))
try:
doc, __, __, metadata = get_docstring(filename, fragment_loader, verbose=(context.CLIARGS['verbosity'] > 0))
except Exception:
display.vvv(traceback.format_exc())
raise AnsibleError(
"%s %s at %s has a documentation error formatting or is missing documentation." %
(plugin_type, plugin_name, filename))
if doc is None:
if 'removed' not in metadata.get('status', []):
raise AnsibleError(
"%s %s at %s has a documentation error formatting or is missing documentation." %
(plugin_type, plugin_name, filename))
# Removed plugins don't have any documentation
return None
return dict(
name=plugin_name,
namespace=DocCLI.namespace_from_plugin_filepath(filename, plugin_name, loader.package_path),
description=doc.get('short_description', "UNKNOWN"),
version_added=doc.get('version_added', "UNKNOWN")
)
@staticmethod
def namespace_from_plugin_filepath(filepath, plugin_name, basedir):
if not basedir.endswith('/'):
basedir += '/'
rel_path = filepath.replace(basedir, '')
extension_free = os.path.splitext(rel_path)[0]
namespace_only = extension_free.rsplit(plugin_name, 1)[0].strip('/_')
clean_ns = namespace_only.replace('/', '.')
if clean_ns == '':
clean_ns = None
return clean_ns
@staticmethod
def _get_plugin_doc(plugin, loader, search_paths):
# if the plugin lives in a non-python file (eg, win_X.ps1), require the corresponding python file for docs
filename = loader.find_plugin(plugin, mod_type='.py', ignore_deprecated=True, check_aliases=True)
if filename is None:
raise PluginNotFound('%s was not found in %s' % (plugin, search_paths))
doc, plainexamples, returndocs, metadata = get_docstring(filename, fragment_loader, verbose=(context.CLIARGS['verbosity'] > 0))
# If the plugin existed but did not have a DOCUMENTATION element and was not removed, it's
# an error
if doc is None:
# doc may be None when the module has been removed. Calling code may choose to
# handle that but we can't.
if 'status' in metadata and isinstance(metadata['status'], Container):
if 'removed' in metadata['status']:
raise RemovedPlugin('%s has been removed' % plugin)
# Backwards compat: no documentation but valid metadata (or no metadata, which results in using the default metadata).
# Probably should make this an error in 2.10
return {}, {}, {}, metadata
else:
# If metadata is invalid, warn but don't error
display.warning(u'%s has an invalid ANSIBLE_METADATA field' % plugin)
raise ValueError('%s did not contain a DOCUMENTATION attribute' % plugin)
doc['filename'] = filename
return doc, plainexamples, returndocs, metadata
@staticmethod
def format_plugin_doc(plugin, plugin_type, doc, plainexamples, returndocs, metadata):
# assign from other sections
doc['plainexamples'] = plainexamples
doc['returndocs'] = returndocs
doc['metadata'] = metadata
# generate extra data
if plugin_type == 'module':
# is there corresponding action plugin?
if plugin in action_loader:
doc['action'] = True
else:
doc['action'] = False
doc['now_date'] = datetime.date.today().strftime('%Y-%m-%d')
if 'docuri' in doc:
doc['docuri'] = doc[plugin_type].replace('_', '-')
if context.CLIARGS['show_snippet'] and plugin_type == 'module':
text = DocCLI.get_snippet_text(doc)
else:
text = DocCLI.get_man_text(doc)
return text
@staticmethod
def find_plugins(path, ptype):
display.vvvv("Searching %s for plugins" % path)
plugin_list = set()
if not os.path.exists(path):
display.vvvv("%s does not exist" % path)
return plugin_list
if not os.path.isdir(path):
display.vvvv("%s is not a directory" % path)
return plugin_list
bkey = ptype.upper()
for plugin in os.listdir(path):
display.vvvv("Found %s" % plugin)
full_path = '/'.join([path, plugin])
if plugin.startswith('.'):
continue
elif os.path.isdir(full_path):
continue
elif any(plugin.endswith(x) for x in C.BLACKLIST_EXTS):
continue
elif plugin.startswith('__'):
continue
elif plugin in C.IGNORE_FILES:
continue
elif plugin.startswith('_'):
if os.path.islink(full_path): # avoids aliases
continue
plugin = os.path.splitext(plugin)[0] # removes the extension
plugin = plugin.lstrip('_') # remove underscore from deprecated plugins
if plugin not in BLACKLIST.get(bkey, ()):
plugin_list.add(plugin)
display.vvvv("Added %s" % plugin)
return plugin_list
def _get_plugin_list_descriptions(self, loader):
descs = {}
plugins = self._get_plugin_list_filenames(loader)
for plugin in plugins.keys():
filename = plugins[plugin]
doc = None
try:
doc = read_docstub(filename)
except Exception:
display.warning("%s has a documentation formatting error" % plugin)
continue
if not doc or not isinstance(doc, dict):
with open(filename) as f:
metadata = extract_metadata(module_data=f.read())
if metadata[0]:
if 'removed' not in metadata[0].get('status', []):
display.warning("%s parsing did not produce documentation." % plugin)
else:
continue
desc = 'UNDOCUMENTED'
else:
desc = doc.get('short_description', 'INVALID SHORT DESCRIPTION').strip()
descs[plugin] = desc
return descs
def _get_plugin_list_filenames(self, loader):
pfiles = {}
for plugin in sorted(self.plugin_list):
try:
# if the module lives in a non-python file (eg, win_X.ps1), require the corresponding python file for docs
filename = loader.find_plugin(plugin, mod_type='.py', ignore_deprecated=True, check_aliases=True)
if filename is None:
continue
if filename.endswith(".ps1"):
continue
if os.path.isdir(filename):
continue
pfiles[plugin] = filename
except Exception as e:
raise AnsibleError("Failed reading docs at %s: %s" % (plugin, to_native(e)), orig_exc=e)
return pfiles
@staticmethod
def print_paths(finder):
''' Returns a string suitable for printing of the search path '''
# Uses a list to get the order right
ret = []
for i in finder._get_paths(subdirs=False):
if i not in ret:
ret.append(i)
return os.pathsep.join(ret)
@staticmethod
def get_snippet_text(doc):
text = []
desc = DocCLI.tty_ify(doc['short_description'])
text.append("- name: %s" % (desc))
text.append(" %s:" % (doc['module']))
pad = 31
subdent = " " * pad
limit = display.columns - pad
for o in sorted(doc['options'].keys()):
opt = doc['options'][o]
if isinstance(opt['description'], string_types):
desc = DocCLI.tty_ify(opt['description'])
else:
desc = DocCLI.tty_ify(" ".join(opt['description']))
required = opt.get('required', False)
if not isinstance(required, bool):
raise("Incorrect value for 'Required', a boolean is needed.: %s" % required)
if required:
desc = "(required) %s" % desc
o = '%s:' % o
text.append(" %-20s # %s" % (o, textwrap.fill(desc, limit, subsequent_indent=subdent)))
text.append('')
return "\n".join(text)
@staticmethod
def _dump_yaml(struct, indent):
return DocCLI.tty_ify('\n'.join([indent + line for line in
yaml.dump(struct, default_flow_style=False,
Dumper=AnsibleDumper).split('\n')]))
@staticmethod
def add_fields(text, fields, limit, opt_indent):
for o in sorted(fields):
opt = fields[o]
required = opt.pop('required', False)
if not isinstance(required, bool):
raise AnsibleError("Incorrect value for 'Required', a boolean is needed.: %s" % required)
if required:
opt_leadin = "="
else:
opt_leadin = "-"
text.append("%s %s" % (opt_leadin, o))
if isinstance(opt['description'], list):
for entry in opt['description']:
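# NOTE: each entry is expected to be a string; an unquoted 'key: value' in the
# DOCUMENTATION YAML arrives here as a dict and makes tty_ify() raise a
# TypeError (see issue #60587 above).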
text.append(textwrap.fill(DocCLI.tty_ify(entry), limit, initial_indent=opt_indent, subsequent_indent=opt_indent))
else:
text.append(textwrap.fill(DocCLI.tty_ify(opt['description']), limit, initial_indent=opt_indent, subsequent_indent=opt_indent))
del opt['description']
aliases = ''
if 'aliases' in opt:
if len(opt['aliases']) > 0:
aliases = "(Aliases: " + ", ".join(str(i) for i in opt['aliases']) + ")"
del opt['aliases']
choices = ''
if 'choices' in opt:
if len(opt['choices']) > 0:
choices = "(Choices: " + ", ".join(str(i) for i in opt['choices']) + ")"
del opt['choices']
default = ''
if 'default' in opt or not required:
default = "[Default: %s" % str(opt.pop('default', '(null)')) + "]"
text.append(textwrap.fill(DocCLI.tty_ify(aliases + choices + default), limit,
initial_indent=opt_indent, subsequent_indent=opt_indent))
if 'options' in opt:
text.append("%soptions:\n" % opt_indent)
DocCLI.add_fields(text, opt.pop('options'), limit, opt_indent + opt_indent)
if 'spec' in opt:
text.append("%sspec:\n" % opt_indent)
DocCLI.add_fields(text, opt.pop('spec'), limit, opt_indent + opt_indent)
conf = {}
for config in ('env', 'ini', 'yaml', 'vars', 'keywords'):
if config in opt and opt[config]:
conf[config] = opt.pop(config)
for ignore in DocCLI.IGNORE:
for item in conf[config]:
if ignore in item:
del item[ignore]
if conf:
text.append(DocCLI._dump_yaml({'set_via': conf}, opt_indent))
for k in sorted(opt):
if k.startswith('_'):
continue
if isinstance(opt[k], string_types):
text.append('%s%s: %s' % (opt_indent, k,
textwrap.fill(DocCLI.tty_ify(opt[k]),
limit - (len(k) + 2),
subsequent_indent=opt_indent)))
elif isinstance(opt[k], (Sequence)) and all(isinstance(x, string_types) for x in opt[k]):
text.append(DocCLI.tty_ify('%s%s: %s' % (opt_indent, k, ', '.join(opt[k]))))
else:
text.append(DocCLI._dump_yaml({k: opt[k]}, opt_indent))
text.append('')
@staticmethod
def get_support_block(doc):
# Note: 'curated' is deprecated and not used in any of the modules we ship
support_level_msg = {'core': 'The Ansible Core Team',
'network': 'The Ansible Network Team',
'certified': 'an Ansible Partner',
'community': 'The Ansible Community',
'curated': 'A Third Party',
}
return [" * This module is maintained by %s" % support_level_msg[doc['metadata']['supported_by']]]
@staticmethod
def get_metadata_block(doc):
text = []
text.append("METADATA:")
text.append('\tSUPPORT LEVEL: %s' % doc['metadata']['supported_by'])
for k in (m for m in doc['metadata'] if m != 'supported_by'):
if isinstance(k, list):
text.append("\t%s: %s" % (k.capitalize(), ", ".join(doc['metadata'][k])))
else:
text.append("\t%s: %s" % (k.capitalize(), doc['metadata'][k]))
return text
@staticmethod
def get_man_text(doc):
DocCLI.IGNORE = DocCLI.IGNORE + (context.CLIARGS['type'],)
opt_indent = " "
text = []
pad = display.columns * 0.20
limit = max(display.columns - int(pad), 70)
text.append("> %s (%s)\n" % (doc.get(context.CLIARGS['type'], doc.get('plugin_type')).upper(), doc.pop('filename')))
if isinstance(doc['description'], list):
desc = " ".join(doc.pop('description'))
else:
desc = doc.pop('description')
text.append("%s\n" % textwrap.fill(DocCLI.tty_ify(desc), limit, initial_indent=opt_indent,
subsequent_indent=opt_indent))
if 'deprecated' in doc and doc['deprecated'] is not None and len(doc['deprecated']) > 0:
text.append("DEPRECATED: \n")
if isinstance(doc['deprecated'], dict):
if 'version' in doc['deprecated'] and 'removed_in' not in doc['deprecated']:
doc['deprecated']['removed_in'] = doc['deprecated']['version']
text.append("\tReason: %(why)s\n\tWill be removed in: Ansible %(removed_in)s\n\tAlternatives: %(alternative)s" % doc.pop('deprecated'))
else:
text.append("%s" % doc.pop('deprecated'))
text.append("\n")
try:
support_block = DocCLI.get_support_block(doc)
if support_block:
text.extend(support_block)
except Exception:
pass # FIXME: not supported by plugins
if doc.pop('action', False):
text.append(" * note: %s\n" % "This module has a corresponding action plugin.")
if 'options' in doc and doc['options']:
text.append("OPTIONS (= is mandatory):\n")
DocCLI.add_fields(text, doc.pop('options'), limit, opt_indent)
text.append('')
if 'notes' in doc and doc['notes'] and len(doc['notes']) > 0:
text.append("NOTES:")
for note in doc['notes']:
text.append(textwrap.fill(DocCLI.tty_ify(note), limit - 6,
initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
text.append('')
text.append('')
del doc['notes']
if 'seealso' in doc and doc['seealso']:
text.append("SEE ALSO:")
for item in doc['seealso']:
if 'module' in item:
text.append(textwrap.fill(DocCLI.tty_ify('Module %s' % item['module']),
limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
description = item.get('description', 'The official documentation on the %s module.' % item['module'])
text.append(textwrap.fill(DocCLI.tty_ify(description), limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
text.append(textwrap.fill(DocCLI.tty_ify(get_versioned_doclink('modules/%s_module.html' % item['module'])),
limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent))
elif 'name' in item and 'link' in item and 'description' in item:
text.append(textwrap.fill(DocCLI.tty_ify(item['name']),
limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
text.append(textwrap.fill(DocCLI.tty_ify(item['description']),
limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
text.append(textwrap.fill(DocCLI.tty_ify(item['link']),
limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
elif 'ref' in item and 'description' in item:
text.append(textwrap.fill(DocCLI.tty_ify('Ansible documentation [%s]' % item['ref']),
limit - 6, initial_indent=opt_indent[:-2] + "* ", subsequent_indent=opt_indent))
text.append(textwrap.fill(DocCLI.tty_ify(item['description']),
limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
text.append(textwrap.fill(DocCLI.tty_ify(get_versioned_doclink('/#stq=%s&stp=1' % item['ref'])),
limit - 6, initial_indent=opt_indent + ' ', subsequent_indent=opt_indent + ' '))
text.append('')
text.append('')
del doc['seealso']
if 'requirements' in doc and doc['requirements'] is not None and len(doc['requirements']) > 0:
req = ", ".join(doc.pop('requirements'))
text.append("REQUIREMENTS:%s\n" % textwrap.fill(DocCLI.tty_ify(req), limit - 16, initial_indent=" ", subsequent_indent=opt_indent))
# Generic handler
for k in sorted(doc):
if k in DocCLI.IGNORE or not doc[k]:
continue
if isinstance(doc[k], string_types):
text.append('%s: %s' % (k.upper(), textwrap.fill(DocCLI.tty_ify(doc[k]), limit - (len(k) + 2), subsequent_indent=opt_indent)))
elif isinstance(doc[k], (list, tuple)):
text.append('%s: %s' % (k.upper(), ', '.join(doc[k])))
else:
text.append(DocCLI._dump_yaml({k.upper(): doc[k]}, opt_indent))
del doc[k]
text.append('')
if 'plainexamples' in doc and doc['plainexamples'] is not None:
text.append("EXAMPLES:")
text.append('')
if isinstance(doc['plainexamples'], string_types):
text.append(doc.pop('plainexamples').strip())
else:
text.append(yaml.dump(doc.pop('plainexamples'), indent=2, default_flow_style=False))
text.append('')
text.append('')
if 'returndocs' in doc and doc['returndocs'] is not None:
text.append("RETURN VALUES:")
if isinstance(doc['returndocs'], string_types):
text.append(doc.pop('returndocs'))
else:
text.append(yaml.dump(doc.pop('returndocs'), indent=2, default_flow_style=False))
text.append('')
try:
metadata_block = DocCLI.get_metadata_block(doc)
if metadata_block:
text.extend(metadata_block)
text.append('')
except Exception:
pass # metadata is optional
return "\n".join(text)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,399 |
Slash in network name causes vmware_guest module to return Network ... does not exist
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
We are using the vmware_guest module to deploy a VM template on a vCenter cluster. When we specify a virtual network name that contains a slash (e.g. 0123-network-name-10.0.0.0/22), Ansible returns "Network '0123-network-name-10.0.0.0/22' does not exist." This problem does not occur with network names that do not contain slashes.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Oct 7 2019, 17:58:22) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ansible running within AWX 9.0.0
Target vCenter environment - vCenter 6.7.0 build 14070654
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a playbook with a single task in it to deploy and customize a VMware Template using the vmware-guest module. Specify a virtual network name with a "/" in it.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Clone a virtual machine from Linux template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ datacenter }}"
state: present
folder: "{{ vm_folder }}"
template: "{{ template }}"
name: "{{ vm_name }}"
cluster: "{{ vm_cluster }}"
networks:
- name: "{{ vm_network }}"
dvswitch_name: "dvSwitch"
hardware:
num_cpus: "{{ vm_hw_cpus }}"
memory_mb: "{{ vm_hw_mem }}"
wait_for_ip_address: True
customization_spec: "{{ vm_custspec }}"
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Expect vmware_guest to launch or update the VM specified by vm_name variable with the parameters specified.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
PLAY [EDF VMware Deployment playbook] ******************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [deployvm : Clone a virtual machine from Linux template and customize] ****
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "Network '0123-network-name-10.0.0.0/22' does not exist."}
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
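A plausible explanation is vSphere's escaping of special characters in inventory object names: the API reports `%`, `/` and `\` in entity names as `%25`, `%2f` and `%5c`, so a literal comparison against the user-supplied name never matches. A minimal sketch of that translation (the helper name is illustrative, not the actual fix):
```python
def vsphere_escape(name):
    # vSphere escapes exactly these three characters in entity names;
    # '%' must be replaced first to avoid double-escaping.
    return name.replace('%', '%25').replace('/', '%2f').replace('\\', '%5c')

print(vsphere_escape('0123-network-name-10.0.0.0/22'))
# -> 0123-network-name-10.0.0.0%2f22
```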
|
https://github.com/ansible/ansible/issues/64399
|
https://github.com/ansible/ansible/pull/64494
|
575116a584b5bb2fcfa3270611677f37d18295a8
|
47f9873eabab41f4c054d393ea7440bd85d7f95c
| 2019-11-04T16:57:05Z |
python
| 2019-11-12T11:43:57Z |
changelogs/fragments/64399_vmware_guest.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,399 |
Slash in network name causes vmware_guest module to return Network ... does not exist
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
We are using the vmware_guest module to deploy a VM template on a vCenter cluster. When we specify a virtual network name that contains a slash (e.g. 0123-network-name-10.0.0.0/22), Ansible returns "Network '0123-network-name-10.0.0.0/22' does not exist." This problem does not occur with network names that do not contain slashes.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Oct 7 2019, 17:58:22) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ansible running within AWX 9.0.0
Target vCenter environment - vCenter 6.7.0 build 14070654
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a playbook with a single task in it to deploy and customize a VMware Template using the vmware-guest module. Specify a virtual network name with a "/" in it.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Clone a virtual machine from Linux template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ datacenter }}"
state: present
folder: "{{ vm_folder }}"
template: "{{ template }}"
name: "{{ vm_name }}"
cluster: "{{ vm_cluster }}"
networks:
- name: "{{ vm_network }}"
dvswitch_name: "dvSwitch"
hardware:
num_cpus: "{{ vm_hw_cpus }}"
memory_mb: "{{ vm_hw_mem }}"
wait_for_ip_address: True
customization_spec: "{{ vm_custspec }}"
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Expect vmware_guest to launch or update the VM specified by vm_name variable with the parameters specified.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
PLAY [EDF VMware Deployment playbook] ******************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [deployvm : Clone a virtual machine from Linux template and customize] ****
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "Network '0123-network-name-10.0.0.0/22' does not exist."}
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/64399
|
https://github.com/ansible/ansible/pull/64494
|
575116a584b5bb2fcfa3270611677f37d18295a8
|
47f9873eabab41f4c054d393ea7440bd85d7f95c
| 2019-11-04T16:57:05Z |
python
| 2019-11-12T11:43:57Z |
lib/ansible/module_utils/vmware.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Joseph Callen <jcallen () csc.com>
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, James E. King III (@jeking3) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import atexit
import ansible.module_utils.common._collections_compat as collections_compat
import json
import os
import re
import ssl
import time
import traceback
from random import randint
from distutils.version import StrictVersion
REQUESTS_IMP_ERR = None
try:
# requests is required for exception handling of the ConnectionError
import requests
HAS_REQUESTS = True
except ImportError:
REQUESTS_IMP_ERR = traceback.format_exc()
HAS_REQUESTS = False
PYVMOMI_IMP_ERR = None
try:
from pyVim import connect
from pyVmomi import vim, vmodl, VmomiSupport
HAS_PYVMOMI = True
HAS_PYVMOMIJSON = hasattr(VmomiSupport, 'VmomiJSONEncoder')
except ImportError:
PYVMOMI_IMP_ERR = traceback.format_exc()
HAS_PYVMOMI = False
HAS_PYVMOMIJSON = False
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.six import integer_types, iteritems, string_types, raise_from
from ansible.module_utils.six.moves.urllib.parse import urlparse
from ansible.module_utils.basic import env_fallback, missing_required_lib
from ansible.module_utils.urls import generic_urlparse
class TaskError(Exception):
def __init__(self, *args, **kwargs):
super(TaskError, self).__init__(*args, **kwargs)
def wait_for_task(task, max_backoff=64, timeout=3600):
"""Wait for given task using exponential back-off algorithm.
Args:
task: VMware task object
max_backoff: Maximum amount of sleep time in seconds
timeout: Timeout for the given task in seconds
Returns: Tuple with True and result for successful task
Raises: TaskError on failure
"""
failure_counter = 0
start_time = time.time()
while True:
if time.time() - start_time >= timeout:
raise TaskError("Timeout")
if task.info.state == vim.TaskInfo.State.success:
return True, task.info.result
if task.info.state == vim.TaskInfo.State.error:
error_msg = task.info.error
host_thumbprint = None
try:
error_msg = error_msg.msg
if hasattr(task.info.error, 'thumbprint'):
host_thumbprint = task.info.error.thumbprint
except AttributeError:
pass
finally:
raise_from(TaskError(error_msg, host_thumbprint), task.info.error)
if task.info.state in [vim.TaskInfo.State.running, vim.TaskInfo.State.queued]:
sleep_time = min(2 ** failure_counter + randint(1, 1000) / 1000, max_backoff)
time.sleep(sleep_time)
failure_counter += 1
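# Illustrative timing: with the defaults the poll interval grows 1s, 2s, 4s, ...
# (plus up to 1s of random jitter), is capped at max_backoff=64 seconds, and the
# loop raises TaskError("Timeout") once the 3600-second timeout elapses.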
def wait_for_vm_ip(content, vm, timeout=300):
facts = dict()
interval = 15
while timeout > 0:
_facts = gather_vm_facts(content, vm)
if _facts['ipv4'] or _facts['ipv6']:
facts = _facts
break
time.sleep(interval)
timeout -= interval
return facts
def find_obj(content, vimtype, name, first=True, folder=None):
container = content.viewManager.CreateContainerView(folder or content.rootFolder, recursive=True, type=vimtype)
# Get all objects matching type (and name if given)
obj_list = [obj for obj in container.view if not name or to_text(obj.name) == to_text(name)]
container.Destroy()
# Return first match or None
if first:
if obj_list:
return obj_list[0]
return None
# Return all matching objects or empty list
return obj_list
def find_dvspg_by_name(dv_switch, portgroup_name):
portgroups = dv_switch.portgroup
for pg in portgroups:
if pg.name == portgroup_name:
return pg
return None
def find_object_by_name(content, name, obj_type, folder=None, recurse=True):
if not isinstance(obj_type, list):
obj_type = [obj_type]
objects = get_all_objs(content, obj_type, folder=folder, recurse=recurse)
for obj in objects:
if obj.name == name:
return obj
return None
def find_cluster_by_name(content, cluster_name, datacenter=None):
if datacenter:
folder = datacenter.hostFolder
else:
folder = content.rootFolder
return find_object_by_name(content, cluster_name, [vim.ClusterComputeResource], folder=folder)
def find_datacenter_by_name(content, datacenter_name):
return find_object_by_name(content, datacenter_name, [vim.Datacenter])
def get_parent_datacenter(obj):
""" Walk the parent tree to find the objects datacenter """
if isinstance(obj, vim.Datacenter):
return obj
datacenter = None
while True:
if not hasattr(obj, 'parent'):
break
obj = obj.parent
if isinstance(obj, vim.Datacenter):
datacenter = obj
break
return datacenter
def find_datastore_by_name(content, datastore_name):
return find_object_by_name(content, datastore_name, [vim.Datastore])
def find_dvs_by_name(content, switch_name, folder=None):
return find_object_by_name(content, switch_name, [vim.DistributedVirtualSwitch], folder=folder)
def find_hostsystem_by_name(content, hostname):
return find_object_by_name(content, hostname, [vim.HostSystem])
def find_resource_pool_by_name(content, resource_pool_name):
return find_object_by_name(content, resource_pool_name, [vim.ResourcePool])
def find_network_by_name(content, network_name):
return find_object_by_name(content, network_name, [vim.Network])
def find_vm_by_id(content, vm_id, vm_id_type="vm_name", datacenter=None,
cluster=None, folder=None, match_first=False):
""" UUID is unique to a VM, every other id returns the first match. """
si = content.searchIndex
vm = None
if vm_id_type == 'dns_name':
vm = si.FindByDnsName(datacenter=datacenter, dnsName=vm_id, vmSearch=True)
elif vm_id_type == 'uuid':
# Search By BIOS UUID rather than instance UUID
vm = si.FindByUuid(datacenter=datacenter, instanceUuid=False, uuid=vm_id, vmSearch=True)
elif vm_id_type == 'instance_uuid':
vm = si.FindByUuid(datacenter=datacenter, instanceUuid=True, uuid=vm_id, vmSearch=True)
elif vm_id_type == 'ip':
vm = si.FindByIp(datacenter=datacenter, ip=vm_id, vmSearch=True)
elif vm_id_type == 'vm_name':
folder = None
if cluster:
folder = cluster
elif datacenter:
folder = datacenter.hostFolder
vm = find_vm_by_name(content, vm_id, folder)
elif vm_id_type == 'inventory_path':
searchpath = folder
# get all objects for this path
f_obj = si.FindByInventoryPath(searchpath)
if f_obj:
if isinstance(f_obj, vim.Datacenter):
f_obj = f_obj.vmFolder
for c_obj in f_obj.childEntity:
if not isinstance(c_obj, vim.VirtualMachine):
continue
if c_obj.name == vm_id:
vm = c_obj
if match_first:
break
return vm
def find_vm_by_name(content, vm_name, folder=None, recurse=True):
return find_object_by_name(content, vm_name, [vim.VirtualMachine], folder=folder, recurse=recurse)
def find_host_portgroup_by_name(host, portgroup_name):
for portgroup in host.config.network.portgroup:
if portgroup.spec.name == portgroup_name:
return portgroup
return None
def compile_folder_path_for_object(vobj):
""" make a /vm/foo/bar/baz like folder path for an object """
paths = []
if isinstance(vobj, vim.Folder):
paths.append(vobj.name)
thisobj = vobj
while hasattr(thisobj, 'parent'):
thisobj = thisobj.parent
try:
moid = thisobj._moId
except AttributeError:
moid = None
if moid in ['group-d1', 'ha-folder-root']:
break
if isinstance(thisobj, vim.Folder):
paths.append(thisobj.name)
paths.reverse()
return '/' + '/'.join(paths)
def _get_vm_prop(vm, attributes):
"""Safely get a property or return None"""
result = vm
for attribute in attributes:
try:
result = getattr(result, attribute)
except (AttributeError, IndexError):
return None
return result
def gather_vm_facts(content, vm):
""" Gather facts from vim.VirtualMachine object. """
facts = {
'module_hw': True,
'hw_name': vm.config.name,
'hw_power_status': vm.summary.runtime.powerState,
'hw_guest_full_name': vm.summary.guest.guestFullName,
'hw_guest_id': vm.summary.guest.guestId,
'hw_product_uuid': vm.config.uuid,
'hw_processor_count': vm.config.hardware.numCPU,
'hw_cores_per_socket': vm.config.hardware.numCoresPerSocket,
'hw_memtotal_mb': vm.config.hardware.memoryMB,
'hw_interfaces': [],
'hw_datastores': [],
'hw_files': [],
'hw_esxi_host': None,
'hw_guest_ha_state': None,
'hw_is_template': vm.config.template,
'hw_folder': None,
'hw_version': vm.config.version,
'instance_uuid': vm.config.instanceUuid,
'guest_tools_status': _get_vm_prop(vm, ('guest', 'toolsRunningStatus')),
'guest_tools_version': _get_vm_prop(vm, ('guest', 'toolsVersion')),
'guest_question': vm.summary.runtime.question,
'guest_consolidation_needed': vm.summary.runtime.consolidationNeeded,
'ipv4': None,
'ipv6': None,
'annotation': vm.config.annotation,
'customvalues': {},
'snapshots': [],
'current_snapshot': None,
'vnc': {},
'moid': vm._moId,
'vimref': "vim.VirtualMachine:%s" % vm._moId,
}
# facts that may or may not exist
if vm.summary.runtime.host:
try:
host = vm.summary.runtime.host
facts['hw_esxi_host'] = host.summary.config.name
facts['hw_cluster'] = host.parent.name if host.parent and isinstance(host.parent, vim.ClusterComputeResource) else None
except vim.fault.NoPermission:
# User does not have read permission for the host system,
# proceed without this value. This value does not contribute or hamper
# provisioning or power management operations.
pass
if vm.summary.runtime.dasVmProtection:
facts['hw_guest_ha_state'] = vm.summary.runtime.dasVmProtection.dasProtected
datastores = vm.datastore
for ds in datastores:
facts['hw_datastores'].append(ds.info.name)
try:
files = vm.config.files
layout = vm.layout
if files:
facts['hw_files'] = [files.vmPathName]
for item in layout.snapshot:
for snap in item.snapshotFile:
if 'vmsn' in snap:
facts['hw_files'].append(snap)
for item in layout.configFile:
facts['hw_files'].append(os.path.join(os.path.dirname(files.vmPathName), item))
for item in vm.layout.logFile:
facts['hw_files'].append(os.path.join(files.logDirectory, item))
for item in vm.layout.disk:
for disk in item.diskFile:
facts['hw_files'].append(disk)
except Exception:
pass
facts['hw_folder'] = PyVmomi.get_vm_path(content, vm)
cfm = content.customFieldsManager
# Resolve custom values
for value_obj in vm.summary.customValue:
kn = value_obj.key
if cfm is not None and cfm.field:
for f in cfm.field:
if f.key == value_obj.key:
kn = f.name
# Exit the loop immediately, we found it
break
facts['customvalues'][kn] = value_obj.value
net_dict = {}
vmnet = _get_vm_prop(vm, ('guest', 'net'))
if vmnet:
for device in vmnet:
if device.deviceConfigId > 0:
net_dict[device.macAddress] = list(device.ipAddress)
if vm.guest.ipAddress:
if ':' in vm.guest.ipAddress:
facts['ipv6'] = vm.guest.ipAddress
else:
facts['ipv4'] = vm.guest.ipAddress
ethernet_idx = 0
for entry in vm.config.hardware.device:
if not hasattr(entry, 'macAddress'):
continue
if entry.macAddress:
mac_addr = entry.macAddress
mac_addr_dash = mac_addr.replace(':', '-')
else:
mac_addr = mac_addr_dash = None
if (hasattr(entry, 'backing') and hasattr(entry.backing, 'port') and
hasattr(entry.backing.port, 'portKey') and hasattr(entry.backing.port, 'portgroupKey')):
port_group_key = entry.backing.port.portgroupKey
port_key = entry.backing.port.portKey
else:
port_group_key = None
port_key = None
factname = 'hw_eth' + str(ethernet_idx)
facts[factname] = {
'addresstype': entry.addressType,
'label': entry.deviceInfo.label,
'macaddress': mac_addr,
'ipaddresses': net_dict.get(entry.macAddress, None),
'macaddress_dash': mac_addr_dash,
'summary': entry.deviceInfo.summary,
'portgroup_portkey': port_key,
'portgroup_key': port_group_key,
}
facts['hw_interfaces'].append('eth' + str(ethernet_idx))
ethernet_idx += 1
snapshot_facts = list_snapshots(vm)
if 'snapshots' in snapshot_facts:
facts['snapshots'] = snapshot_facts['snapshots']
facts['current_snapshot'] = snapshot_facts['current_snapshot']
facts['vnc'] = get_vnc_extraconfig(vm)
return facts
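# Illustrative usage (sketch): callers pass the connected service content and a
# vim.VirtualMachine object and get a plain dict back, for example:
#   facts = gather_vm_facts(content, vm)
#   facts['hw_name'], facts['ipv4'], facts['hw_power_status']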
def deserialize_snapshot_obj(obj):
return {'id': obj.id,
'name': obj.name,
'description': obj.description,
'creation_time': obj.createTime,
'state': obj.state}
def list_snapshots_recursively(snapshots):
snapshot_data = []
for snapshot in snapshots:
snapshot_data.append(deserialize_snapshot_obj(snapshot))
snapshot_data = snapshot_data + list_snapshots_recursively(snapshot.childSnapshotList)
return snapshot_data
def get_current_snap_obj(snapshots, snapob):
snap_obj = []
for snapshot in snapshots:
if snapshot.snapshot == snapob:
snap_obj.append(snapshot)
snap_obj = snap_obj + get_current_snap_obj(snapshot.childSnapshotList, snapob)
return snap_obj
def list_snapshots(vm):
result = {}
snapshot = _get_vm_prop(vm, ('snapshot',))
if not snapshot:
return result
if vm.snapshot is None:
return result
result['snapshots'] = list_snapshots_recursively(vm.snapshot.rootSnapshotList)
current_snapref = vm.snapshot.currentSnapshot
current_snap_obj = get_current_snap_obj(vm.snapshot.rootSnapshotList, current_snapref)
if current_snap_obj:
result['current_snapshot'] = deserialize_snapshot_obj(current_snap_obj[0])
else:
result['current_snapshot'] = dict()
return result
def get_vnc_extraconfig(vm):
result = {}
for opts in vm.config.extraConfig:
for optkeyname in ['enabled', 'ip', 'port', 'password']:
if opts.key.lower() == "remotedisplay.vnc." + optkeyname:
result[optkeyname] = opts.value
return result
def vmware_argument_spec():
return dict(
hostname=dict(type='str',
required=False,
fallback=(env_fallback, ['VMWARE_HOST']),
),
username=dict(type='str',
aliases=['user', 'admin'],
required=False,
fallback=(env_fallback, ['VMWARE_USER'])),
password=dict(type='str',
aliases=['pass', 'pwd'],
required=False,
no_log=True,
fallback=(env_fallback, ['VMWARE_PASSWORD'])),
port=dict(type='int',
default=443,
fallback=(env_fallback, ['VMWARE_PORT'])),
validate_certs=dict(type='bool',
required=False,
default=True,
fallback=(env_fallback, ['VMWARE_VALIDATE_CERTS'])
),
proxy_host=dict(type='str',
required=False,
default=None,
fallback=(env_fallback, ['VMWARE_PROXY_HOST'])),
proxy_port=dict(type='int',
required=False,
default=None,
fallback=(env_fallback, ['VMWARE_PROXY_PORT'])),
)
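# Illustrative usage (sketch): modules extend this base spec with their own
# options before constructing AnsibleModule; 'name' below is a hypothetical
# module-specific parameter, not part of the base spec:
#   argument_spec = vmware_argument_spec()
#   argument_spec.update(name=dict(type='str', required=True))
#   module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=True)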
def connect_to_api(module, disconnect_atexit=True, return_si=False):
hostname = module.params['hostname']
username = module.params['username']
password = module.params['password']
port = module.params.get('port', 443)
validate_certs = module.params['validate_certs']
if not hostname:
module.fail_json(msg="Hostname parameter is missing."
" Please specify this parameter in task or"
" export environment variable like 'export VMWARE_HOST=ESXI_HOSTNAME'")
if not username:
module.fail_json(msg="Username parameter is missing."
" Please specify this parameter in task or"
" export environment variable like 'export VMWARE_USER=ESXI_USERNAME'")
if not password:
module.fail_json(msg="Password parameter is missing."
" Please specify this parameter in task or"
" export environment variable like 'export VMWARE_PASSWORD=ESXI_PASSWORD'")
if validate_certs and not hasattr(ssl, 'SSLContext'):
module.fail_json(msg='pyVim does not support changing verification mode with python < 2.7.9. Either update '
'python or use validate_certs=false.')
elif validate_certs:
ssl_context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ssl_context.verify_mode = ssl.CERT_REQUIRED
ssl_context.check_hostname = True
ssl_context.load_default_certs()
elif hasattr(ssl, 'SSLContext'):
ssl_context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ssl_context.verify_mode = ssl.CERT_NONE
ssl_context.check_hostname = False
    else:  # Python < 2.7.9 or RHEL/CentOS < 7.4
ssl_context = None
service_instance = None
proxy_host = module.params.get('proxy_host')
proxy_port = module.params.get('proxy_port')
connect_args = dict(
host=hostname,
port=port,
)
if ssl_context:
connect_args.update(sslContext=ssl_context)
msg_suffix = ''
try:
if proxy_host:
msg_suffix = " [proxy: %s:%d]" % (proxy_host, proxy_port)
connect_args.update(httpProxyHost=proxy_host, httpProxyPort=proxy_port)
smart_stub = connect.SmartStubAdapter(**connect_args)
session_stub = connect.VimSessionOrientedStub(smart_stub, connect.VimSessionOrientedStub.makeUserLoginMethod(username, password))
service_instance = vim.ServiceInstance('ServiceInstance', session_stub)
else:
connect_args.update(user=username, pwd=password)
service_instance = connect.SmartConnect(**connect_args)
except vim.fault.InvalidLogin as invalid_login:
msg = "Unable to log on to vCenter or ESXi API at %s:%s " % (hostname, port)
module.fail_json(msg="%s as %s: %s" % (msg, username, invalid_login.msg) + msg_suffix)
except vim.fault.NoPermission as no_permission:
module.fail_json(msg="User %s does not have required permission"
" to log on to vCenter or ESXi API at %s:%s : %s" % (username, hostname, port, no_permission.msg))
except (requests.ConnectionError, ssl.SSLError) as generic_req_exc:
module.fail_json(msg="Unable to connect to vCenter or ESXi API at %s on TCP/%s: %s" % (hostname, port, generic_req_exc))
except vmodl.fault.InvalidRequest as invalid_request:
# Request is malformed
msg = "Failed to get a response from server %s:%s " % (hostname, port)
module.fail_json(msg="%s as request is malformed: %s" % (msg, invalid_request.msg) + msg_suffix)
except Exception as generic_exc:
msg = "Unknown error while connecting to vCenter or ESXi API at %s:%s" % (hostname, port) + msg_suffix
module.fail_json(msg="%s : %s" % (msg, generic_exc))
if service_instance is None:
msg = "Unknown error while connecting to vCenter or ESXi API at %s:%s" % (hostname, port)
module.fail_json(msg=msg + msg_suffix)
# Disabling atexit should be used in special cases only.
# Such as IP change of the ESXi host which removes the connection anyway.
# Also removal significantly speeds up the return of the module
if disconnect_atexit:
atexit.register(connect.Disconnect, service_instance)
if return_si:
return service_instance, service_instance.RetrieveContent()
return service_instance.RetrieveContent()
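# Illustrative usage (sketch): most callers only need the ServiceContent; pass
# return_si=True when the ServiceInstance itself is required as well:
#   content = connect_to_api(module)
#   si, content = connect_to_api(module, return_si=True)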
def get_all_objs(content, vimtype, folder=None, recurse=True):
if not folder:
folder = content.rootFolder
obj = {}
container = content.viewManager.CreateContainerView(folder, vimtype, recurse)
for managed_object_ref in container.view:
obj.update({managed_object_ref: managed_object_ref.name})
return obj
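# Illustrative usage (sketch): the returned dict maps managed object references
# to their names, so callers can filter by either side:
#   datastores = get_all_objs(content, [vim.Datastore])
#   datastore_names = list(datastores.values())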
def run_command_in_guest(content, vm, username, password, program_path, program_args, program_cwd, program_env):
result = {'failed': False}
tools_status = vm.guest.toolsStatus
if (tools_status == 'toolsNotInstalled' or
tools_status == 'toolsNotRunning'):
result['failed'] = True
result['msg'] = "VMwareTools is not installed or is not running in the guest"
return result
# https://github.com/vmware/pyvmomi/blob/master/docs/vim/vm/guest/NamePasswordAuthentication.rst
creds = vim.vm.guest.NamePasswordAuthentication(
username=username, password=password
)
try:
# https://github.com/vmware/pyvmomi/blob/master/docs/vim/vm/guest/ProcessManager.rst
pm = content.guestOperationsManager.processManager
# https://www.vmware.com/support/developer/converter-sdk/conv51_apireference/vim.vm.guest.ProcessManager.ProgramSpec.html
ps = vim.vm.guest.ProcessManager.ProgramSpec(
# programPath=program,
# arguments=args
programPath=program_path,
arguments=program_args,
workingDirectory=program_cwd,
)
res = pm.StartProgramInGuest(vm, creds, ps)
result['pid'] = res
pdata = pm.ListProcessesInGuest(vm, creds, [res])
# wait for pid to finish
while not pdata[0].endTime:
time.sleep(1)
pdata = pm.ListProcessesInGuest(vm, creds, [res])
result['owner'] = pdata[0].owner
result['startTime'] = pdata[0].startTime.isoformat()
result['endTime'] = pdata[0].endTime.isoformat()
result['exitCode'] = pdata[0].exitCode
if result['exitCode'] != 0:
result['failed'] = True
result['msg'] = "program exited non-zero"
else:
result['msg'] = "program completed successfully"
except Exception as e:
result['msg'] = str(e)
result['failed'] = True
return result
def serialize_spec(clonespec):
"""Serialize a clonespec or a relocation spec"""
data = {}
attrs = dir(clonespec)
attrs = [x for x in attrs if not x.startswith('_')]
for x in attrs:
xo = getattr(clonespec, x)
if callable(xo):
continue
xt = type(xo)
if xo is None:
data[x] = None
elif isinstance(xo, vim.vm.ConfigSpec):
data[x] = serialize_spec(xo)
elif isinstance(xo, vim.vm.RelocateSpec):
data[x] = serialize_spec(xo)
elif isinstance(xo, vim.vm.device.VirtualDisk):
data[x] = serialize_spec(xo)
elif isinstance(xo, vim.vm.device.VirtualDeviceSpec.FileOperation):
data[x] = to_text(xo)
elif isinstance(xo, vim.Description):
data[x] = {
'dynamicProperty': serialize_spec(xo.dynamicProperty),
'dynamicType': serialize_spec(xo.dynamicType),
'label': serialize_spec(xo.label),
'summary': serialize_spec(xo.summary),
}
elif hasattr(xo, 'name'):
data[x] = to_text(xo) + ':' + to_text(xo.name)
elif isinstance(xo, vim.vm.ProfileSpec):
pass
elif issubclass(xt, list):
data[x] = []
for xe in xo:
data[x].append(serialize_spec(xe))
        elif issubclass(xt, bool):
            # bool is a subclass of int, so it must be matched before the
            # numeric branch below or booleans would be coerced to int
            data[x] = xo
        elif issubclass(xt, string_types + integer_types + (float,)):
            if issubclass(xt, integer_types):
                data[x] = int(xo)
            else:
                data[x] = to_text(xo)
elif issubclass(xt, dict):
data[to_text(x)] = {}
for k, v in xo.items():
k = to_text(k)
data[x][k] = serialize_spec(v)
else:
data[x] = str(xt)
return data
def find_host_by_cluster_datacenter(module, content, datacenter_name, cluster_name, host_name):
dc = find_datacenter_by_name(content, datacenter_name)
if dc is None:
module.fail_json(msg="Unable to find datacenter with name %s" % datacenter_name)
cluster = find_cluster_by_name(content, cluster_name, datacenter=dc)
if cluster is None:
module.fail_json(msg="Unable to find cluster with name %s" % cluster_name)
for host in cluster.host:
if host.name == host_name:
return host, cluster
return None, cluster
def set_vm_power_state(content, vm, state, force, timeout=0):
"""
    Set the power state of a VM based on the current and requested states.
    If force is set, the change is attempted even from suspended or other
    transitional power states.
"""
facts = gather_vm_facts(content, vm)
expected_state = state.replace('_', '').replace('-', '').lower()
current_state = facts['hw_power_status'].lower()
result = dict(
changed=False,
failed=False,
)
# Need Force
if not force and current_state not in ['poweredon', 'poweredoff']:
result['failed'] = True
result['msg'] = "Virtual Machine is in %s power state. Force is required!" % current_state
return result
# State is not already true
if current_state != expected_state:
task = None
try:
if expected_state == 'poweredoff':
task = vm.PowerOff()
elif expected_state == 'poweredon':
task = vm.PowerOn()
elif expected_state == 'restarted':
if current_state in ('poweredon', 'poweringon', 'resetting', 'poweredoff'):
task = vm.Reset()
else:
result['failed'] = True
result['msg'] = "Cannot restart virtual machine in the current state %s" % current_state
elif expected_state == 'suspended':
if current_state in ('poweredon', 'poweringon'):
task = vm.Suspend()
else:
result['failed'] = True
result['msg'] = 'Cannot suspend virtual machine in the current state %s' % current_state
elif expected_state in ['shutdownguest', 'rebootguest']:
if current_state == 'poweredon':
if vm.guest.toolsRunningStatus == 'guestToolsRunning':
if expected_state == 'shutdownguest':
task = vm.ShutdownGuest()
if timeout > 0:
result.update(wait_for_poweroff(vm, timeout))
else:
task = vm.RebootGuest()
# Set result['changed'] immediately because
# shutdown and reboot return None.
result['changed'] = True
else:
result['failed'] = True
result['msg'] = "VMware tools should be installed for guest shutdown/reboot"
else:
result['failed'] = True
result['msg'] = "Virtual machine %s must be in poweredon state for guest shutdown/reboot" % vm.name
else:
result['failed'] = True
result['msg'] = "Unsupported expected state provided: %s" % expected_state
except Exception as e:
result['failed'] = True
result['msg'] = to_text(e)
if task:
wait_for_task(task)
if task.info.state == 'error':
result['failed'] = True
result['msg'] = task.info.error.msg
else:
result['changed'] = True
# need to get new metadata if changed
result['instance'] = gather_vm_facts(content, vm)
return result
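# Illustrative usage (sketch): state spellings are normalised by stripping '-'
# and '_' and lowercasing, so 'powered-on' and 'poweredon' are equivalent:
#   result = set_vm_power_state(content, vm, state='powered-on', force=False)
#   if result['failed']:
#       module.fail_json(**result)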
def wait_for_poweroff(vm, timeout=300):
result = dict()
interval = 15
while timeout > 0:
if vm.runtime.powerState.lower() == 'poweredoff':
break
time.sleep(interval)
timeout -= interval
else:
result['failed'] = True
result['msg'] = 'Timeout while waiting for VM power off.'
return result
def is_integer(value, type_of='int'):
try:
VmomiSupport.vmodlTypes[type_of](value)
return True
except (TypeError, ValueError):
return False
def is_boolean(value):
if str(value).lower() in ['true', 'on', 'yes', 'false', 'off', 'no']:
return True
return False
def is_truthy(value):
if str(value).lower() in ['true', 'on', 'yes']:
return True
return False
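# Illustrative behaviour (sketch) of the small validators above:
#   is_integer('42')   -> True  (parses as a vmodl 'int')
#   is_boolean('off')  -> True  ('off' is an accepted boolean spelling)
#   is_truthy('yes')   -> True; is_truthy('no') -> False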
class PyVmomi(object):
def __init__(self, module):
"""
Constructor
"""
if not HAS_REQUESTS:
module.fail_json(msg=missing_required_lib('requests'),
exception=REQUESTS_IMP_ERR)
if not HAS_PYVMOMI:
module.fail_json(msg=missing_required_lib('PyVmomi'),
exception=PYVMOMI_IMP_ERR)
self.module = module
self.params = module.params
self.current_vm_obj = None
self.si, self.content = connect_to_api(self.module, return_si=True)
self.custom_field_mgr = []
if self.content.customFieldsManager: # not an ESXi
self.custom_field_mgr = self.content.customFieldsManager.field
def is_vcenter(self):
"""
Check if given hostname is vCenter or ESXi host
Returns: True if given connection is with vCenter server
False if given connection is with ESXi server
"""
api_type = None
try:
api_type = self.content.about.apiType
except (vmodl.RuntimeFault, vim.fault.VimFault) as exc:
self.module.fail_json(msg="Failed to get status of vCenter server : %s" % exc.msg)
if api_type == 'VirtualCenter':
return True
elif api_type == 'HostAgent':
return False
def get_managed_objects_properties(self, vim_type, properties=None):
"""
Look up a Managed Object Reference in vCenter / ESXi Environment
:param vim_type: Type of vim object e.g, for datacenter - vim.Datacenter
:param properties: List of properties related to vim object e.g. Name
:return: local content object
"""
# Get Root Folder
root_folder = self.content.rootFolder
if properties is None:
properties = ['name']
# Create Container View with default root folder
mor = self.content.viewManager.CreateContainerView(root_folder, [vim_type], True)
# Create Traversal spec
traversal_spec = vmodl.query.PropertyCollector.TraversalSpec(
name="traversal_spec",
path='view',
skip=False,
type=vim.view.ContainerView
)
# Create Property Spec
property_spec = vmodl.query.PropertyCollector.PropertySpec(
            type=vim_type,  # Type of object to be retrieved
all=False,
pathSet=properties
)
# Create Object Spec
object_spec = vmodl.query.PropertyCollector.ObjectSpec(
obj=mor,
skip=True,
selectSet=[traversal_spec]
)
# Create Filter Spec
filter_spec = vmodl.query.PropertyCollector.FilterSpec(
objectSet=[object_spec],
propSet=[property_spec],
reportMissingObjectsInResults=False
)
return self.content.propertyCollector.RetrieveContents([filter_spec])
# Virtual Machine related functions
def get_vm(self):
"""
Find unique virtual machine either by UUID, MoID or Name.
Returns: virtual machine object if found, else None.
"""
vm_obj = None
user_desired_path = None
use_instance_uuid = self.params.get('use_instance_uuid') or False
if 'uuid' in self.params and self.params['uuid']:
if not use_instance_uuid:
vm_obj = find_vm_by_id(self.content, vm_id=self.params['uuid'], vm_id_type="uuid")
elif use_instance_uuid:
vm_obj = find_vm_by_id(self.content,
vm_id=self.params['uuid'],
vm_id_type="instance_uuid")
elif 'name' in self.params and self.params['name']:
objects = self.get_managed_objects_properties(vim_type=vim.VirtualMachine, properties=['name'])
vms = []
for temp_vm_object in objects:
if len(temp_vm_object.propSet) != 1:
continue
for temp_vm_object_property in temp_vm_object.propSet:
if temp_vm_object_property.val == self.params['name']:
vms.append(temp_vm_object.obj)
break
            # get_managed_objects_properties may return multiple virtual machines;
            # the following code tries to find the user-desired one depending upon the folder specified.
if len(vms) > 1:
# We have found multiple virtual machines, decide depending upon folder value
if self.params['folder'] is None:
self.module.fail_json(msg="Multiple virtual machines with same name [%s] found, "
"Folder value is a required parameter to find uniqueness "
"of the virtual machine" % self.params['name'],
details="Please see documentation of the vmware_guest module "
"for folder parameter.")
# Get folder path where virtual machine is located
# User provided folder where user thinks virtual machine is present
user_folder = self.params['folder']
# User defined datacenter
user_defined_dc = self.params['datacenter']
# User defined datacenter's object
datacenter_obj = find_datacenter_by_name(self.content, self.params['datacenter'])
# Get Path for Datacenter
dcpath = compile_folder_path_for_object(vobj=datacenter_obj)
# Nested folder does not return trailing /
if not dcpath.endswith('/'):
dcpath += '/'
if user_folder in [None, '', '/']:
# User provided blank value or
# User provided only root value, we fail
self.module.fail_json(msg="vmware_guest found multiple virtual machines with same "
"name [%s], please specify folder path other than blank "
"or '/'" % self.params['name'])
elif user_folder.startswith('/vm/'):
# User provided nested folder under VMware default vm folder i.e. folder = /vm/india/finance
user_desired_path = "%s%s%s" % (dcpath, user_defined_dc, user_folder)
else:
# User defined datacenter is not nested i.e. dcpath = '/' , or
# User defined datacenter is nested i.e. dcpath = '/F0/DC0' or
# User provided folder starts with / and datacenter i.e. folder = /ha-datacenter/ or
# User defined folder starts with datacenter without '/' i.e.
# folder = DC0/vm/india/finance or
# folder = DC0/vm
user_desired_path = user_folder
for vm in vms:
# Check if user has provided same path as virtual machine
actual_vm_folder_path = self.get_vm_path(content=self.content, vm_name=vm)
if not actual_vm_folder_path.startswith("%s%s" % (dcpath, user_defined_dc)):
continue
if user_desired_path in actual_vm_folder_path:
vm_obj = vm
break
elif vms:
# Unique virtual machine found.
vm_obj = vms[0]
elif 'moid' in self.params and self.params['moid']:
vm_obj = VmomiSupport.templateOf('VirtualMachine')(self.params['moid'], self.si._stub)
if vm_obj:
self.current_vm_obj = vm_obj
return vm_obj
def gather_facts(self, vm):
"""
Gather facts of virtual machine.
Args:
vm: Name of virtual machine.
Returns: Facts dictionary of the given virtual machine.
"""
return gather_vm_facts(self.content, vm)
@staticmethod
def get_vm_path(content, vm_name):
"""
Find the path of virtual machine.
Args:
content: VMware content object
vm_name: virtual machine managed object
        Returns: Folder path of the virtual machine if it exists, else None
"""
folder_name = None
folder = vm_name.parent
if folder:
folder_name = folder.name
fp = folder.parent
# climb back up the tree to find our path, stop before the root folder
while fp is not None and fp.name is not None and fp != content.rootFolder:
folder_name = fp.name + '/' + folder_name
try:
fp = fp.parent
except Exception:
break
folder_name = '/' + folder_name
return folder_name
def get_vm_or_template(self, template_name=None):
"""
        Find the virtual machine or virtual machine template by name,
        used for cloning purposes.
Args:
template_name: Name of virtual machine or virtual machine template
Returns: virtual machine or virtual machine template object
"""
template_obj = None
if not template_name:
return template_obj
if "/" in template_name:
vm_obj_path = os.path.dirname(template_name)
vm_obj_name = os.path.basename(template_name)
template_obj = find_vm_by_id(self.content, vm_obj_name, vm_id_type="inventory_path", folder=vm_obj_path)
if template_obj:
return template_obj
else:
template_obj = find_vm_by_id(self.content, vm_id=template_name, vm_id_type="uuid")
if template_obj:
return template_obj
objects = self.get_managed_objects_properties(vim_type=vim.VirtualMachine, properties=['name'])
templates = []
for temp_vm_object in objects:
if len(temp_vm_object.propSet) != 1:
continue
for temp_vm_object_property in temp_vm_object.propSet:
if temp_vm_object_property.val == template_name:
templates.append(temp_vm_object.obj)
break
if len(templates) > 1:
# We have found multiple virtual machine templates
self.module.fail_json(msg="Multiple virtual machines or templates with same name [%s] found." % template_name)
elif templates:
template_obj = templates[0]
return template_obj
# Cluster related functions
def find_cluster_by_name(self, cluster_name, datacenter_name=None):
"""
Find Cluster by name in given datacenter
Args:
            cluster_name: Name of the cluster to find
            datacenter_name: (optional) Name of datacenter
        Returns: Cluster managed object if found, else None
"""
return find_cluster_by_name(self.content, cluster_name, datacenter=datacenter_name)
def get_all_hosts_by_cluster(self, cluster_name):
"""
Get all hosts from cluster by cluster name
Args:
cluster_name: Name of cluster
Returns: List of hosts
"""
cluster_obj = self.find_cluster_by_name(cluster_name=cluster_name)
if cluster_obj:
return [host for host in cluster_obj.host]
else:
return []
# Hosts related functions
def find_hostsystem_by_name(self, host_name):
"""
Find Host by name
Args:
host_name: Name of ESXi host
        Returns: Host system managed object if found, else None
"""
return find_hostsystem_by_name(self.content, hostname=host_name)
def get_all_host_objs(self, cluster_name=None, esxi_host_name=None):
"""
Get all host system managed object
Args:
cluster_name: Name of Cluster
esxi_host_name: Name of ESXi server
Returns: A list of all host system managed objects, else empty list
"""
host_obj_list = []
if not self.is_vcenter():
hosts = get_all_objs(self.content, [vim.HostSystem]).keys()
if hosts:
host_obj_list.append(list(hosts)[0])
else:
if cluster_name:
cluster_obj = self.find_cluster_by_name(cluster_name=cluster_name)
if cluster_obj:
host_obj_list = [host for host in cluster_obj.host]
else:
self.module.fail_json(changed=False, msg="Cluster '%s' not found" % cluster_name)
elif esxi_host_name:
if isinstance(esxi_host_name, str):
esxi_host_name = [esxi_host_name]
for host in esxi_host_name:
esxi_host_obj = self.find_hostsystem_by_name(host_name=host)
if esxi_host_obj:
host_obj_list.append(esxi_host_obj)
else:
self.module.fail_json(changed=False, msg="ESXi '%s' not found" % host)
return host_obj_list
def host_version_at_least(self, version=None, vm_obj=None, host_name=None):
"""
Check that the ESXi Host is at least a specific version number
Args:
            vm_obj: virtual machine object (one of vm_obj or host_name is required)
host_name (string): ESXi host name
version (tuple): a version tuple, for example (6, 7, 0)
Returns: bool
"""
if vm_obj:
host_system = vm_obj.summary.runtime.host
elif host_name:
host_system = self.find_hostsystem_by_name(host_name=host_name)
else:
            self.module.fail_json(msg='One of vm_obj or host_name must be supplied.')
if host_system and version:
host_version = host_system.summary.config.product.version
return StrictVersion(host_version) >= StrictVersion('.'.join(map(str, version)))
else:
            self.module.fail_json(msg='Unable to get the ESXi host from vm: %s, or hostname %s, '
                                      'or the passed ESXi version: %s is None.' % (vm_obj, host_name, version))
# Network related functions
@staticmethod
def find_host_portgroup_by_name(host, portgroup_name):
"""
Find Portgroup on given host
Args:
host: Host config object
portgroup_name: Name of portgroup
        Returns: Portgroup managed object if found, else False
"""
for portgroup in host.config.network.portgroup:
if portgroup.spec.name == portgroup_name:
return portgroup
return False
def get_all_port_groups_by_host(self, host_system):
"""
Get all Port Group by host
Args:
host_system: Name of Host System
Returns: List of Port Group Spec
"""
pgs_list = []
for pg in host_system.config.network.portgroup:
pgs_list.append(pg)
return pgs_list
def find_network_by_name(self, network_name=None):
"""
Get network specified by name
Args:
network_name: Name of network
Returns: List of network managed objects
"""
networks = []
if not network_name:
return networks
objects = self.get_managed_objects_properties(vim_type=vim.Network, properties=['name'])
for temp_vm_object in objects:
if len(temp_vm_object.propSet) != 1:
continue
for temp_vm_object_property in temp_vm_object.propSet:
if temp_vm_object_property.val == network_name:
networks.append(temp_vm_object.obj)
break
return networks
def network_exists_by_name(self, network_name=None):
"""
Check if network with a specified name exists or not
Args:
network_name: Name of network
Returns: True if network exists else False
"""
ret = False
if not network_name:
return ret
ret = True if self.find_network_by_name(network_name=network_name) else False
return ret
# Datacenter
def find_datacenter_by_name(self, datacenter_name):
"""
Get datacenter managed object by name
Args:
datacenter_name: Name of datacenter
Returns: datacenter managed object if found else None
"""
return find_datacenter_by_name(self.content, datacenter_name=datacenter_name)
def is_datastore_valid(self, datastore_obj=None):
"""
Check if datastore selected is valid or not
Args:
datastore_obj: datastore managed object
Returns: True if datastore is valid, False if not
"""
if not datastore_obj \
or datastore_obj.summary.maintenanceMode != 'normal' \
or not datastore_obj.summary.accessible:
return False
return True
def find_datastore_by_name(self, datastore_name):
"""
Get datastore managed object by name
Args:
datastore_name: Name of datastore
Returns: datastore managed object if found else None
"""
return find_datastore_by_name(self.content, datastore_name=datastore_name)
# Datastore cluster
def find_datastore_cluster_by_name(self, datastore_cluster_name):
"""
Get datastore cluster managed object by name
Args:
datastore_cluster_name: Name of datastore cluster
Returns: Datastore cluster managed object if found else None
"""
data_store_clusters = get_all_objs(self.content, [vim.StoragePod])
for dsc in data_store_clusters:
if dsc.name == datastore_cluster_name:
return dsc
return None
# Resource pool
def find_resource_pool_by_name(self, resource_pool_name, folder=None):
"""
Get resource pool managed object by name
Args:
resource_pool_name: Name of resource pool
Returns: Resource pool managed object if found else None
"""
if not folder:
folder = self.content.rootFolder
resource_pools = get_all_objs(self.content, [vim.ResourcePool], folder=folder)
for rp in resource_pools:
if rp.name == resource_pool_name:
return rp
return None
def find_resource_pool_by_cluster(self, resource_pool_name='Resources', cluster=None):
"""
Get resource pool managed object by cluster object
Args:
resource_pool_name: Name of resource pool
cluster: Managed object of cluster
Returns: Resource pool managed object if found else None
"""
desired_rp = None
if not cluster:
return desired_rp
if resource_pool_name != 'Resources':
# Resource pool name is different than default 'Resources'
resource_pools = cluster.resourcePool.resourcePool
if resource_pools:
for rp in resource_pools:
if rp.name == resource_pool_name:
desired_rp = rp
break
else:
desired_rp = cluster.resourcePool
return desired_rp
# VMDK stuff
def vmdk_disk_path_split(self, vmdk_path):
"""
Takes a string in the format
[datastore_name] path/to/vm_name.vmdk
Returns a tuple with multiple strings:
1. datastore_name: The name of the datastore (without brackets)
2. vmdk_fullpath: The "path/to/vm_name.vmdk" portion
3. vmdk_filename: The "vm_name.vmdk" portion of the string (os.path.basename equivalent)
4. vmdk_folder: The "path/to/" portion of the string (os.path.dirname equivalent)
"""
try:
datastore_name = re.match(r'^\[(.*?)\]', vmdk_path, re.DOTALL).groups()[0]
vmdk_fullpath = re.match(r'\[.*?\] (.*)$', vmdk_path).groups()[0]
vmdk_filename = os.path.basename(vmdk_fullpath)
vmdk_folder = os.path.dirname(vmdk_fullpath)
return datastore_name, vmdk_fullpath, vmdk_filename, vmdk_folder
except (IndexError, AttributeError) as e:
self.module.fail_json(msg="Bad path '%s' for filename disk vmdk image: %s" % (vmdk_path, to_native(e)))
def find_vmdk_file(self, datastore_obj, vmdk_fullpath, vmdk_filename, vmdk_folder):
"""
Return vSphere file object or fail_json
Args:
datastore_obj: Managed object of datastore
vmdk_fullpath: Path of VMDK file e.g., path/to/vm/vmdk_filename.vmdk
vmdk_filename: Name of vmdk e.g., VM0001_1.vmdk
vmdk_folder: Base dir of VMDK e.g, path/to/vm
"""
browser = datastore_obj.browser
datastore_name = datastore_obj.name
datastore_name_sq = "[" + datastore_name + "]"
if browser is None:
self.module.fail_json(msg="Unable to access browser for datastore %s" % datastore_name)
detail_query = vim.host.DatastoreBrowser.FileInfo.Details(
fileOwner=True,
fileSize=True,
fileType=True,
modification=True
)
search_spec = vim.host.DatastoreBrowser.SearchSpec(
details=detail_query,
matchPattern=[vmdk_filename],
searchCaseInsensitive=True,
)
search_res = browser.SearchSubFolders(
datastorePath=datastore_name_sq,
searchSpec=search_spec
)
changed = False
vmdk_path = datastore_name_sq + " " + vmdk_fullpath
try:
changed, result = wait_for_task(search_res)
except TaskError as task_e:
self.module.fail_json(msg=to_native(task_e))
if not changed:
self.module.fail_json(msg="No valid disk vmdk image found for path %s" % vmdk_path)
target_folder_paths = [
datastore_name_sq + " " + vmdk_folder + '/',
datastore_name_sq + " " + vmdk_folder,
]
for file_result in search_res.info.result:
for f in getattr(file_result, 'file'):
if f.path == vmdk_filename and file_result.folderPath in target_folder_paths:
return f
self.module.fail_json(msg="No vmdk file found for path specified [%s]" % vmdk_path)
#
# Conversion to JSON
#
def _deepmerge(self, d, u):
"""
Deep merges u into d.
Credit:
https://bit.ly/2EDOs1B (stackoverflow question 3232943)
License:
cc-by-sa 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
Changes:
using collections_compat for compatibility
Args:
- d (dict): dict to merge into
- u (dict): dict to merge into d
Returns:
dict, with u merged into d
"""
for k, v in iteritems(u):
if isinstance(v, collections_compat.Mapping):
d[k] = self._deepmerge(d.get(k, {}), v)
else:
d[k] = v
return d
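    # Illustrative behaviour (sketch): nested mappings are merged key by key
    # rather than replaced wholesale:
    #   self._deepmerge({'a': {'b': 1}}, {'a': {'c': 2}})
    #   -> {'a': {'b': 1, 'c': 2}}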
def _extract(self, data, remainder):
"""
This is used to break down dotted properties for extraction.
Args:
- data (dict): result of _jsonify on a property
- remainder: the remainder of the dotted property to select
Return:
dict
"""
result = dict()
if '.' not in remainder:
result[remainder] = data[remainder]
return result
key, remainder = remainder.split('.', 1)
result[key] = self._extract(data[key], remainder)
return result
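    # Illustrative behaviour (sketch): given the data under the first key of a
    # dotted property, only the requested leaf is kept:
    #   self._extract({'hardware': {'memoryMB': 1024, 'numCPU': 2}}, 'hardware.memoryMB')
    #   -> {'hardware': {'memoryMB': 1024}}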
def _jsonify(self, obj):
"""
Convert an object from pyVmomi into JSON.
Args:
- obj (object): vim object
Return:
dict
"""
return json.loads(json.dumps(obj, cls=VmomiSupport.VmomiJSONEncoder,
sort_keys=True, strip_dynamic=True))
def to_json(self, obj, properties=None):
"""
Convert a vSphere (pyVmomi) Object into JSON. This is a deep
transformation. The list of properties is optional - if not
provided then all properties are deeply converted. The resulting
JSON is sorted to improve human readability.
Requires upstream support from pyVmomi > 6.7.1
(https://github.com/vmware/pyvmomi/pull/732)
Args:
- obj (object): vim object
- properties (list, optional): list of properties following
the property collector specification, for example:
["config.hardware.memoryMB", "name", "overallStatus"]
default is a complete object dump, which can be large
Return:
dict
"""
if not HAS_PYVMOMIJSON:
self.module.fail_json(msg='The installed version of pyvmomi lacks JSON output support; need pyvmomi>6.7.1')
result = dict()
if properties:
for prop in properties:
try:
if '.' in prop:
key, remainder = prop.split('.', 1)
tmp = dict()
tmp[key] = self._extract(self._jsonify(getattr(obj, key)), remainder)
self._deepmerge(result, tmp)
else:
result[prop] = self._jsonify(getattr(obj, prop))
# To match gather_vm_facts output
prop_name = prop
if prop.lower() == '_moid':
prop_name = 'moid'
elif prop.lower() == '_vimref':
prop_name = 'vimref'
result[prop_name] = result[prop]
except (AttributeError, KeyError):
self.module.fail_json(msg="Property '{0}' not found.".format(prop))
else:
result = self._jsonify(obj)
return result
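    # Illustrative usage (sketch): property names follow the property collector
    # dotted notation and are merged back into one nested dict:
    #   self.to_json(vm_obj, properties=['name', 'config.hardware.memoryMB'])
    #   -> {'name': ..., 'config': {'hardware': {'memoryMB': ...}}}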
def get_folder_path(self, cur):
full_path = '/' + cur.name
while hasattr(cur, 'parent') and cur.parent:
if cur.parent == self.content.rootFolder:
break
cur = cur.parent
full_path = '/' + cur.name + full_path
return full_path
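    # Illustrative result (sketch): for a folder nested as DC0/vm/prod under the
    # root folder, self.get_folder_path(folder_obj) returns '/DC0/vm/prod'.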
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,399 |
Slash in network name causes vmware_guest module to return Network ... does not exist
|
##### SUMMARY
We are using the vmware_guest module to deploy a VM template on a vCenter cluster. When we specify a virtual network name that contains a slash (e.g. 0123-network-name-10.0.0.0/22), Ansible returns "Network '0123-network-name-10.0.0.0/22' does not exist." This problem does not occur with network names that do not contain slashes.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Oct 7 2019, 17:58:22) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
Ansible running within AWX 9.0.0
Target vCenter environment - vCenter 6.7.0 build 14070654
##### STEPS TO REPRODUCE
Create a playbook with a single task in it to deploy and customize a VMware template using the vmware_guest module. Specify a virtual network name with a "/" in it.
```yaml
- name: Clone a virtual machine from Linux template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ datacenter }}"
state: present
folder: "{{ vm_folder }}"
template: "{{ template }}"
name: "{{ vm_name }}"
cluster: "{{ vm_cluster }}"
networks:
- name: "{{ vm_network }}"
dvswitch_name: "dvSwitch"
hardware:
num_cpus: "{{ vm_hw_cpus }}"
memory_mb: "{{ vm_hw_mem }}"
wait_for_ip_address: True
customization_spec: "{{ vm_custspec }}"
delegate_to: localhost
```
##### EXPECTED RESULTS
Expect vmware_guest to launch or update the VM specified by the vm_name variable with the parameters specified.
##### ACTUAL RESULTS
```paste below
/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
PLAY [EDF VMware Deployment playbook] ******************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [deployvm : Clone a virtual machine from Linux template and customize] ****
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "Network '0123-network-name-10.0.0.0/22' does not exist."}
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/64399
|
https://github.com/ansible/ansible/pull/64494
|
575116a584b5bb2fcfa3270611677f37d18295a8
|
47f9873eabab41f4c054d393ea7440bd85d7f95c
| 2019-11-04T16:57:05Z |
python
| 2019-11-12T11:43:57Z |
lib/ansible/modules/cloud/vmware/vmware_guest.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# This module is also sponsored by E.T.A.I. (www.etai.fr)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: vmware_guest
short_description: Manages virtual machines in vCenter
description: >
This module can be used to create new virtual machines from templates or other virtual machines,
   manage the power state of a virtual machine (power on, power off, suspend, shutdown, reboot, restart),
   modify virtual machine components such as network, disk and customization settings,
   rename a virtual machine and remove a virtual machine with its associated components.
version_added: '2.2'
author:
- Loic Blot (@nerzhul) <[email protected]>
- Philippe Dellaert (@pdellaert) <[email protected]>
- Abhijeet Kasurde (@Akasurde) <[email protected]>
requirements:
- python >= 2.6
- PyVmomi
notes:
- Please make sure that the user used for vmware_guest has the correct level of privileges.
    - For example, the following is the list of minimum privileges required by users to create virtual machines.
- " DataStore > Allocate Space"
- " Virtual Machine > Configuration > Add New Disk"
- " Virtual Machine > Configuration > Add or Remove Device"
- " Virtual Machine > Inventory > Create New"
- " Network > Assign Network"
- " Resource > Assign Virtual Machine to Resource Pool"
- "Module may require additional privileges as well, which may be required for gathering facts - e.g. ESXi configurations."
- Tested on vSphere 5.5, 6.0, 6.5 and 6.7
- Use SCSI disks instead of IDE when you want to expand online disks by specifying a SCSI controller
- "For additional information please visit Ansible VMware community wiki - U(https://github.com/ansible/community/wiki/VMware)."
options:
state:
description:
- Specify the state the virtual machine should be in.
- 'If C(state) is set to C(present) and virtual machine exists, ensure the virtual machine
configurations conforms to task arguments.'
- 'If C(state) is set to C(absent) and virtual machine exists, then the specified virtual machine
is removed with its associated components.'
    - 'If C(state) is set to one of the following C(poweredon), C(poweredoff), C(present), C(restarted), C(suspended)
      and the virtual machine does not exist, then the virtual machine is deployed with the given parameters.'
- 'If C(state) is set to C(poweredon) and virtual machine exists with powerstate other than powered on,
then the specified virtual machine is powered on.'
- 'If C(state) is set to C(poweredoff) and virtual machine exists with powerstate other than powered off,
then the specified virtual machine is powered off.'
- 'If C(state) is set to C(restarted) and virtual machine exists, then the virtual machine is restarted.'
- 'If C(state) is set to C(suspended) and virtual machine exists, then the virtual machine is set to suspended mode.'
- 'If C(state) is set to C(shutdownguest) and virtual machine exists, then the virtual machine is shutdown.'
- 'If C(state) is set to C(rebootguest) and virtual machine exists, then the virtual machine is rebooted.'
default: present
choices: [ present, absent, poweredon, poweredoff, restarted, suspended, shutdownguest, rebootguest ]
name:
description:
- Name of the virtual machine to work with.
- Virtual machine names in vCenter are not necessarily unique, which may be problematic, see C(name_match).
    - 'If multiple virtual machines with the same name exist, then C(folder) is a required parameter to
      uniquely identify the virtual machine.'
    - This parameter is required if C(state) is set to C(poweredon), C(poweredoff), C(present), C(restarted), C(suspended)
      and the virtual machine does not exist.
- This parameter is case sensitive.
required: yes
name_match:
description:
    - If multiple virtual machines match the name, use the first or last found.
default: 'first'
choices: [ first, last ]
uuid:
description:
- UUID of the virtual machine to manage if known, this is VMware's unique identifier.
- This is required if C(name) is not supplied.
    - If the virtual machine does not exist, then this parameter is ignored.
- Please note that a supplied UUID will be ignored on virtual machine creation, as VMware creates the UUID internally.
use_instance_uuid:
description:
- Whether to use the VMware instance UUID rather than the BIOS UUID.
default: no
type: bool
version_added: '2.8'
template:
description:
- Template or existing virtual machine used to create new virtual machine.
- If this value is not set, virtual machine is created without using a template.
- If the virtual machine already exists, this parameter will be ignored.
- This parameter is case sensitive.
- You can also specify template or VM UUID for identifying source. version_added 2.8. Use C(hw_product_uuid) from M(vmware_guest_facts) as UUID value.
- From version 2.8 onwards, absolute path to virtual machine or template can be used.
aliases: [ 'template_src' ]
is_template:
description:
- Flag the instance as a template.
- This will mark the given virtual machine as template.
default: 'no'
type: bool
version_added: '2.3'
folder:
description:
- Destination folder, absolute path to find an existing guest or create the new guest.
- The folder should include the datacenter. ESX's datacenter is ha-datacenter.
- This parameter is case sensitive.
- This parameter is required, while deploying new virtual machine. version_added 2.5.
    - 'If multiple machines are found with the same name, this parameter is used to uniquely
      identify the virtual machine. version_added 2.5'
- 'Examples:'
- ' folder: /ha-datacenter/vm'
- ' folder: ha-datacenter/vm'
- ' folder: /datacenter1/vm'
- ' folder: datacenter1/vm'
- ' folder: /datacenter1/vm/folder1'
- ' folder: datacenter1/vm/folder1'
- ' folder: /folder1/datacenter1/vm'
- ' folder: folder1/datacenter1/vm'
- ' folder: /folder1/datacenter1/vm/folder2'
hardware:
description:
- Manage virtual machine's hardware attributes.
- All parameters case sensitive.
- 'Valid attributes are:'
- ' - C(hotadd_cpu) (boolean): Allow virtual CPUs to be added while the virtual machine is running.'
- ' - C(hotremove_cpu) (boolean): Allow virtual CPUs to be removed while the virtual machine is running.
version_added: 2.5'
- ' - C(hotadd_memory) (boolean): Allow memory to be added while the virtual machine is running.'
- ' - C(memory_mb) (integer): Amount of memory in MB.'
- ' - C(nested_virt) (bool): Enable nested virtualization. version_added: 2.5'
- ' - C(num_cpus) (integer): Number of CPUs.'
- ' - C(num_cpu_cores_per_socket) (integer): Number of Cores Per Socket.'
- " C(num_cpus) must be a multiple of C(num_cpu_cores_per_socket).
For example to create a VM with 2 sockets of 4 cores, specify C(num_cpus): 8 and C(num_cpu_cores_per_socket): 4"
- ' - C(scsi) (string): Valid values are C(buslogic), C(lsilogic), C(lsilogicsas) and C(paravirtual) (default).'
- " - C(memory_reservation_lock) (boolean): If set true, memory resource reservation for the virtual machine
will always be equal to the virtual machine's memory size. version_added: 2.5"
- ' - C(max_connections) (integer): Maximum number of active remote display connections for the virtual machines.
version_added: 2.5.'
- ' - C(mem_limit) (integer): The memory utilization of a virtual machine will not exceed this limit. Unit is MB.
version_added: 2.5'
- ' - C(mem_reservation) (integer): The amount of memory resource that is guaranteed available to the virtual
machine. Unit is MB. C(memory_reservation) is alias to this. version_added: 2.5'
- ' - C(cpu_limit) (integer): The CPU utilization of a virtual machine will not exceed this limit. Unit is MHz.
version_added: 2.5'
- ' - C(cpu_reservation) (integer): The amount of CPU resource that is guaranteed available to the virtual machine.
Unit is MHz. version_added: 2.5'
- ' - C(version) (integer): The Virtual machine hardware versions. Default is 10 (ESXi 5.5 and onwards).
If value specified as C(latest), version is set to the most current virtual hardware supported on the host.
C(latest) is added in version 2.10.
Please check VMware documentation for correct virtual machine hardware version.
Incorrect hardware version may lead to failure in deployment. If hardware version is already equal to the given
version then no action is taken. version_added: 2.6'
- ' - C(boot_firmware) (string): Choose which firmware should be used to boot the virtual machine.
Allowed values are "bios" and "efi". version_added: 2.7'
- ' - C(virt_based_security) (bool): Enable Virtualization Based Security feature for Windows 10.
(Support from Virtual machine hardware version 14, Guest OS Windows 10 64 bit, Windows Server 2016)'
guest_id:
description:
- Set the guest ID.
- This parameter is case sensitive.
- 'Examples:'
- " virtual machine with RHEL7 64 bit, will be 'rhel7_64Guest'"
- " virtual machine with CentOS 64 bit, will be 'centos64Guest'"
- " virtual machine with Ubuntu 64 bit, will be 'ubuntu64Guest'"
- This field is required when creating a virtual machine, not required when creating from the template.
- >
Valid values are referenced here:
U(https://code.vmware.com/apis/358/doc/vim.vm.GuestOsDescriptor.GuestOsIdentifier.html)
version_added: '2.3'
disk:
description:
- A list of disks to add.
- This parameter is case sensitive.
- Shrinking disks is not supported.
- Removing existing disks of the virtual machine is not supported.
- 'Valid attributes are:'
- ' - C(size_[tb,gb,mb,kb]) (integer): Disk storage size in specified unit.'
- ' - C(type) (string): Valid values are:'
- ' - C(thin) thin disk'
- ' - C(eagerzeroedthick) eagerzeroedthick disk, added in version 2.5'
- ' Default: C(None) thick disk, no eagerzero.'
    - ' - C(datastore) (string): The name of datastore which will be used for the disk. If C(autoselect_datastore) is set to True,
        then the least-used datastore whose name contains this "disk.datastore" string will be selected.'
- ' - C(filename) (string): Existing disk image to be used. Filename must already exist on the datastore.'
- ' Specify filename string in C([datastore_name] path/to/file.vmdk) format. Added in version 2.8.'
    - ' - C(autoselect_datastore) (bool): Select the least-used datastore. "disk.datastore" and "disk.autoselect_datastore"
        will not be used if C(datastore) is specified outside this C(disk) configuration.'
- ' - C(disk_mode) (string): Type of disk mode. Added in version 2.6'
- ' - Available options are :'
- ' - C(persistent): Changes are immediately and permanently written to the virtual disk. This is default.'
- ' - C(independent_persistent): Same as persistent, but not affected by snapshots.'
- ' - C(independent_nonpersistent): Changes to virtual disk are made to a redo log and discarded at power off, but not affected by snapshots.'
cdrom:
description:
- A CD-ROM configuration for the virtual machine.
    - Or a list of CD-ROM configurations for the virtual machine. Added in version 2.9.
- 'Parameters C(controller_type), C(controller_number), C(unit_number), C(state) are added for a list of CD-ROMs
configuration support.'
- 'Valid attributes are:'
- ' - C(type) (string): The type of CD-ROM, valid options are C(none), C(client) or C(iso). With C(none) the CD-ROM
will be disconnected but present.'
- ' - C(iso_path) (string): The datastore path to the ISO file to use, in the form of C([datastore1] path/to/file.iso).
Required if type is set C(iso).'
- ' - C(controller_type) (string): Default value is C(ide). Only C(ide) controller type for CD-ROM is supported for
now, will add SATA controller type in the future.'
- ' - C(controller_number) (int): For C(ide) controller, valid value is 0 or 1.'
- ' - C(unit_number) (int): For CD-ROM device attach to C(ide) controller, valid value is 0 or 1.
C(controller_number) and C(unit_number) are mandatory attributes.'
- ' - C(state) (string): Valid value is C(present) or C(absent). Default is C(present). If set to C(absent), then
the specified CD-ROM will be removed. For C(ide) controller, hot-add or hot-remove CD-ROM is not supported.'
version_added: '2.5'
resource_pool:
description:
- Use the given resource pool for virtual machine operation.
- This parameter is case sensitive.
- Resource pool should be child of the selected host parent.
version_added: '2.3'
wait_for_ip_address:
description:
- Wait until vCenter detects an IP address for the virtual machine.
- This requires vmware-tools (vmtoolsd) to properly work after creation.
- "vmware-tools needs to be installed on the given virtual machine in order to work with this parameter."
default: 'no'
type: bool
wait_for_ip_address_timeout:
description:
- Define a timeout (in seconds) for the wait_for_ip_address parameter.
default: '300'
type: int
version_added: '2.10'
wait_for_customization:
description:
- Wait until vCenter detects all guest customizations as successfully completed.
- When enabled, the VM will automatically be powered on.
default: 'no'
type: bool
version_added: '2.8'
state_change_timeout:
description:
- If the C(state) is set to C(shutdownguest), by default the module will return immediately after sending the shutdown signal.
- If this argument is set to a positive integer, the module will instead wait for the virtual machine to reach the poweredoff state.
- The value sets a timeout in seconds for the module to wait for the state change.
default: 0
version_added: '2.6'
snapshot_src:
description:
- Name of the existing snapshot to use to create a clone of a virtual machine.
- This parameter is case sensitive.
- While creating linked clone using C(linked_clone) parameter, this parameter is required.
version_added: '2.4'
linked_clone:
description:
- Whether to create a linked clone from the snapshot specified.
    - If specified, then C(snapshot_src) is a required parameter.
default: 'no'
type: bool
version_added: '2.4'
force:
description:
- Ignore warnings and complete the actions.
    - This parameter is useful while removing a virtual machine which is in powered on state.
    - 'This module reflects the VMware vCenter API and UI workflow, as such, in some cases the `force` flag will
       be mandatory to perform the action to ensure you are certain the action has to be taken, no matter what the consequence.
       This is specifically the case for removing a powered-on virtual machine when C(state) is set to C(absent).'
default: 'no'
type: bool
delete_from_inventory:
description:
    - Whether to remove the virtual machine from the inventory only, or delete it from disk.
default: False
type: bool
version_added: '2.10'
datacenter:
description:
- Destination datacenter for the deploy operation.
- This parameter is case sensitive.
default: ha-datacenter
cluster:
description:
- The cluster name where the virtual machine will run.
- This is a required parameter, if C(esxi_hostname) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
- This parameter is case sensitive.
version_added: '2.3'
esxi_hostname:
description:
- The ESXi hostname where the virtual machine will run.
- This is a required parameter, if C(cluster) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
- This parameter is case sensitive.
annotation:
description:
- A note or annotation to include in the virtual machine.
version_added: '2.3'
customvalues:
description:
- Define a list of custom values to set on virtual machine.
- A custom value object takes two fields C(key) and C(value).
- Incorrect key and values will be ignored.
version_added: '2.3'
networks:
description:
- A list of networks (in the order of the NICs).
    - Removing NICs is not allowed while reconfiguring the virtual machine.
- All parameters and VMware object names are case sensitive.
- 'One of the below parameters is required per entry:'
- ' - C(name) (string): Name of the portgroup or distributed virtual portgroup for this interface.
When specifying distributed virtual portgroup make sure given C(esxi_hostname) or C(cluster) is associated with it.'
- ' - C(vlan) (integer): VLAN number for this interface.'
- 'Optional parameters per entry (used for virtual hardware):'
- ' - C(device_type) (string): Virtual network device (one of C(e1000), C(e1000e), C(pcnet32), C(vmxnet2), C(vmxnet3) (default), C(sriov)).'
- ' - C(mac) (string): Customize MAC address.'
- ' - C(dvswitch_name) (string): Name of the distributed vSwitch.
This value is required if multiple distributed portgroups exists with the same name. version_added 2.7'
- ' - C(start_connected) (bool): Indicates that virtual network adapter starts with associated virtual machine powers on. version_added: 2.5'
- 'Optional parameters per entry (used for OS customization):'
- ' - C(type) (string): Type of IP assignment (either C(dhcp) or C(static)). C(dhcp) is default.'
- ' - C(ip) (string): Static IP address (implies C(type: static)).'
- ' - C(netmask) (string): Static netmask required for C(ip).'
- ' - C(gateway) (string): Static gateway.'
- ' - C(dns_servers) (string): DNS servers for this network interface (Windows).'
- ' - C(domain) (string): Domain name for this network interface (Windows).'
- ' - C(wake_on_lan) (bool): Indicates if wake-on-LAN is enabled on this virtual network adapter. version_added: 2.5'
- ' - C(allow_guest_control) (bool): Enables guest control over whether the connectable device is connected. version_added: 2.5'
version_added: '2.3'
customization:
description:
- Parameters for OS customization when cloning from the template or the virtual machine, or apply to the existing virtual machine directly.
- Not all operating systems are supported for customization with respective vCenter version,
please check VMware documentation for respective OS customization.
- For supported customization operating system matrix, (see U(http://partnerweb.vmware.com/programs/guestOS/guest-os-customization-matrix.pdf))
- All parameters and VMware object names are case sensitive.
- Linux based OSes requires Perl package to be installed for OS customizations.
- 'Common parameters (Linux/Windows):'
- ' - C(existing_vm) (bool): If set to C(True), do OS customization on the specified virtual machine directly.
If set to C(False) or not specified, do OS customization when cloning from the template or the virtual machine. version_added: 2.8'
- ' - C(dns_servers) (list): List of DNS servers to configure.'
- ' - C(dns_suffix) (list): List of domain suffixes, also known as DNS search path (default: C(domain) parameter).'
- ' - C(domain) (string): DNS domain name to use.'
    - ' - C(hostname) (string): Computer hostname (default: shortened C(name) parameter). Allowed characters are alphanumeric (uppercase and lowercase)
        and minus; the rest of the characters are dropped as per RFC 952.'
- 'Parameters related to Linux customization:'
- ' - C(timezone) (string): Timezone (See List of supported time zones for different vSphere versions in Linux/Unix
systems (2145518) U(https://kb.vmware.com/s/article/2145518)). version_added: 2.9'
- ' - C(hwclockUTC) (bool): Specifies whether the hardware clock is in UTC or local time.
True when the hardware clock is in UTC, False when the hardware clock is in local time. version_added: 2.9'
- 'Parameters related to Windows customization:'
- ' - C(autologon) (bool): Auto logon after virtual machine customization (default: False).'
    - ' - C(autologoncount) (int): Number of times to automatically log on after reboot (default: 1).'
    - ' - C(domainadmin) (string): User used to join the AD domain (mandatory with C(joindomain)).'
    - ' - C(domainadminpassword) (string): Password used to join the AD domain (mandatory with C(joindomain)).'
- ' - C(fullname) (string): Server owner name (default: Administrator).'
- ' - C(joindomain) (string): AD domain to join (Not compatible with C(joinworkgroup)).'
- ' - C(joinworkgroup) (string): Workgroup to join (Not compatible with C(joindomain), default: WORKGROUP).'
- ' - C(orgname) (string): Organisation name (default: ACME).'
- ' - C(password) (string): Local administrator password.'
- ' - C(productid) (string): Product ID.'
- ' - C(runonce) (list): List of commands to run at first user logon.'
- ' - C(timezone) (int): Timezone (See U(https://msdn.microsoft.com/en-us/library/ms912391.aspx)).'
version_added: '2.3'
vapp_properties:
description:
- A list of vApp properties
- 'For full list of attributes and types refer to: U(https://github.com/vmware/pyvmomi/blob/master/docs/vim/vApp/PropertyInfo.rst)'
- 'Basic attributes are:'
- ' - C(id) (string): Property id - required.'
- ' - C(value) (string): Property value.'
- ' - C(type) (string): Value type, string type by default.'
- ' - C(operation): C(remove): This attribute is required only when removing properties.'
version_added: '2.6'
customization_spec:
description:
    - Unique name identifying the requested customization specification.
    - This parameter is case sensitive.
    - If set, this overrides the C(customization) parameter values.
    - Please see example for more usage.
version_added: '2.6'
datastore:
description:
    - Specify the datastore or datastore cluster to provision the virtual machine.
    - This parameter takes precedence over the C(disk.datastore) parameter.
- 'This parameter can be used to override datastore or datastore cluster setting of the virtual machine when deployed
from the template.'
- Please see example for more usage.
version_added: '2.7'
convert:
description:
    - Specify the disk type to convert to while cloning a template or virtual machine.
    - Please see example for more usage.
choices: [ thin, thick, eagerzeroedthick ]
version_added: '2.8'
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- name: Create a virtual machine on given ESXi hostname
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
folder: /DC1/vm/
name: test_vm_0001
state: poweredon
guest_id: centos64Guest
# This is hostname of particular ESXi server on which user wants VM to be deployed
esxi_hostname: "{{ esxi_hostname }}"
disk:
- size_gb: 10
type: thin
datastore: datastore1
hardware:
memory_mb: 512
num_cpus: 4
scsi: paravirtual
networks:
- name: VM Network
mac: aa:bb:dd:aa:00:14
ip: 10.10.10.100
netmask: 255.255.255.0
device_type: vmxnet3
wait_for_ip_address: yes
wait_for_ip_address_timeout: 600
delegate_to: localhost
register: deploy_vm
- name: Create a virtual machine from a template
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
folder: /testvms
name: testvm_2
state: poweredon
template: template_el7
disk:
- size_gb: 10
type: thin
datastore: g73_datastore
hardware:
memory_mb: 512
num_cpus: 6
num_cpu_cores_per_socket: 3
scsi: paravirtual
memory_reservation_lock: True
mem_limit: 8096
mem_reservation: 4096
cpu_limit: 8096
cpu_reservation: 4096
max_connections: 5
hotadd_cpu: True
hotremove_cpu: True
hotadd_memory: False
version: 12 # Hardware version of virtual machine
boot_firmware: "efi"
cdrom:
type: iso
iso_path: "[datastore1] livecd.iso"
networks:
- name: VM Network
mac: aa:bb:dd:aa:00:14
wait_for_ip_address: yes
delegate_to: localhost
register: deploy
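# The cdrom parameter also accepts a list form with per-device controller
# settings. The task below is an illustrative sketch; the VM name, datastore
# and ISO path are placeholders. Note that removing a CD-ROM requires the
# virtual machine to be powered off.
- name: Manage IDE CD-ROM devices using the list form of cdrom
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    name: testvm_2
    cdrom:
    - controller_type: ide
      controller_number: 0
      unit_number: 0
      type: iso
      iso_path: "[datastore1] livecd.iso"
      state: present
    - controller_type: ide
      controller_number: 0
      unit_number: 1
      state: absent
  delegate_to: localhost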
- name: Clone a virtual machine from Windows template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: datacenter1
cluster: cluster
name: testvm-2
template: template_windows
networks:
- name: VM Network
ip: 192.168.1.100
netmask: 255.255.255.0
gateway: 192.168.1.1
mac: aa:bb:dd:aa:00:14
domain: my_domain
dns_servers:
- 192.168.1.1
- 192.168.1.2
- vlan: 1234
type: dhcp
customization:
autologon: yes
dns_servers:
- 192.168.1.1
- 192.168.1.2
domain: my_domain
password: new_vm_password
runonce:
- powershell.exe -ExecutionPolicy Unrestricted -File C:\Windows\Temp\ConfigureRemotingForAnsible.ps1 -ForceNewSSLCert -EnableCredSSP
delegate_to: localhost
- name: Clone a virtual machine from Linux template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ datacenter }}"
state: present
folder: /DC1/vm
template: "{{ template }}"
name: "{{ vm_name }}"
cluster: DC1_C1
networks:
- name: VM Network
ip: 192.168.10.11
netmask: 255.255.255.0
wait_for_ip_address: True
customization:
domain: "{{ guest_domain }}"
dns_servers:
- 8.9.9.9
- 7.8.8.9
dns_suffix:
- example.com
- example2.com
delegate_to: localhost
- name: Rename a virtual machine (requires the virtual machine's uuid)
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_uuid }}"
name: new_name
state: present
delegate_to: localhost
- name: Remove a virtual machine by uuid
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_uuid }}"
state: absent
delegate_to: localhost
- name: Remove a virtual machine from inventory
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
name: vm_name
delete_from_inventory: True
state: absent
delegate_to: localhost
- name: Manipulate vApp properties
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
name: vm_name
state: present
vapp_properties:
- id: remoteIP
category: Backup
label: Backup server IP
type: str
value: 10.10.10.1
- id: old_property
operation: remove
delegate_to: localhost
- name: Set powerstate of a virtual machine to poweroff by using UUID
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_uuid }}"
state: poweredoff
delegate_to: localhost
- name: Deploy a virtual machine in a datastore different from the datastore of the template
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: "{{ vm_name }}"
state: present
template: "{{ template_name }}"
    # Here datastore can be different from the datastore that holds the template
datastore: "{{ virtual_machine_datastore }}"
hardware:
memory_mb: 512
num_cpus: 2
scsi: paravirtual
delegate_to: localhost
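# The tasks below are illustrative sketches; the customization specification
# name, virtual machine names and template name are placeholders.
- name: Clone a virtual machine using an existing customization specification
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    name: testvm-3
    template: template_el7
    # Values from the named specification override the customization parameter
    customization_spec: my_customization_spec
    state: present
  delegate_to: localhost
- name: Clone a virtual machine and convert its disks to thin provisioning
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    name: testvm-4
    template: template_el7
    convert: thin
    state: present
  delegate_to: localhost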
'''
RETURN = r'''
instance:
description: metadata about the new virtual machine
returned: always
type: dict
sample: None
'''
import re
import time
import string
HAS_PYVMOMI = False
try:
from pyVmomi import vim, vmodl, VmomiSupport
HAS_PYVMOMI = True
except ImportError:
pass
from random import randint
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.network import is_mac
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.vmware import (find_obj, gather_vm_facts, get_all_objs,
compile_folder_path_for_object, serialize_spec,
vmware_argument_spec, set_vm_power_state, PyVmomi,
find_dvs_by_name, find_dvspg_by_name, wait_for_vm_ip,
wait_for_task, TaskError)
def list_or_dict(value):
    if isinstance(value, (list, dict)):
        return value
    raise ValueError("'%s' is not valid, valid type is 'list' or 'dict'." % value)
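# Note (assumption): list_or_dict is presumably wired in as a 'type' callable in
# the module's argument_spec so that AnsibleModule accepts the cdrom parameter as
# either a dict or a list, matching the two code paths in configure_cdrom() below.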
class PyVmomiDeviceHelper(object):
""" This class is a helper to create easily VMware Objects for PyVmomiHelper """
def __init__(self, module):
self.module = module
self.next_disk_unit_number = 0
self.scsi_device_type = {
'lsilogic': vim.vm.device.VirtualLsiLogicController,
'paravirtual': vim.vm.device.ParaVirtualSCSIController,
'buslogic': vim.vm.device.VirtualBusLogicController,
'lsilogicsas': vim.vm.device.VirtualLsiLogicSASController,
}
def create_scsi_controller(self, scsi_type):
scsi_ctl = vim.vm.device.VirtualDeviceSpec()
scsi_ctl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
scsi_device = self.scsi_device_type.get(scsi_type, vim.vm.device.ParaVirtualSCSIController)
scsi_ctl.device = scsi_device()
scsi_ctl.device.busNumber = 0
# While creating a new SCSI controller, temporary key value
# should be unique negative integers
scsi_ctl.device.key = -randint(1000, 9999)
scsi_ctl.device.hotAddRemove = True
scsi_ctl.device.sharedBus = 'noSharing'
scsi_ctl.device.scsiCtlrUnitNumber = 7
return scsi_ctl
def is_scsi_controller(self, device):
return isinstance(device, tuple(self.scsi_device_type.values()))
@staticmethod
def create_ide_controller(bus_number=0):
ide_ctl = vim.vm.device.VirtualDeviceSpec()
ide_ctl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
ide_ctl.device = vim.vm.device.VirtualIDEController()
ide_ctl.device.deviceInfo = vim.Description()
# While creating a new IDE controller, temporary key value
# should be unique negative integers
ide_ctl.device.key = -randint(200, 299)
ide_ctl.device.busNumber = bus_number
return ide_ctl
@staticmethod
def create_cdrom(ide_device, cdrom_type, iso_path=None, unit_number=0):
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
cdrom_spec.device = vim.vm.device.VirtualCdrom()
cdrom_spec.device.controllerKey = ide_device.key
cdrom_spec.device.key = -randint(3000, 3999)
cdrom_spec.device.unitNumber = unit_number
cdrom_spec.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
cdrom_spec.device.connectable.allowGuestControl = True
cdrom_spec.device.connectable.startConnected = (cdrom_type != "none")
if cdrom_type in ["none", "client"]:
cdrom_spec.device.backing = vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo()
elif cdrom_type == "iso":
cdrom_spec.device.backing = vim.vm.device.VirtualCdrom.IsoBackingInfo(fileName=iso_path)
return cdrom_spec
@staticmethod
def is_equal_cdrom(vm_obj, cdrom_device, cdrom_type, iso_path):
if cdrom_type == "none":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo) and
cdrom_device.connectable.allowGuestControl and
not cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or not cdrom_device.connectable.connected))
elif cdrom_type == "client":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo) and
cdrom_device.connectable.allowGuestControl and
cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or cdrom_device.connectable.connected))
elif cdrom_type == "iso":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.IsoBackingInfo) and
cdrom_device.backing.fileName == iso_path and
cdrom_device.connectable.allowGuestControl and
cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or cdrom_device.connectable.connected))
@staticmethod
def update_cdrom_config(vm_obj, cdrom_spec, cdrom_device, iso_path=None):
# Updating an existing CD-ROM
if cdrom_spec["type"] in ["client", "none"]:
cdrom_device.backing = vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo()
elif cdrom_spec["type"] == "iso" and iso_path is not None:
cdrom_device.backing = vim.vm.device.VirtualCdrom.IsoBackingInfo(fileName=iso_path)
cdrom_device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
cdrom_device.connectable.allowGuestControl = True
cdrom_device.connectable.startConnected = (cdrom_spec["type"] != "none")
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
cdrom_device.connectable.connected = (cdrom_spec["type"] != "none")
def remove_cdrom(self, cdrom_device):
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.remove
cdrom_spec.device = cdrom_device
return cdrom_spec
def create_scsi_disk(self, scsi_ctl, disk_index=None):
diskspec = vim.vm.device.VirtualDeviceSpec()
diskspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
diskspec.device = vim.vm.device.VirtualDisk()
diskspec.device.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
diskspec.device.controllerKey = scsi_ctl.device.key
        if self.next_disk_unit_number == 7:
            raise AssertionError("Unit number 7 is reserved for the SCSI controller")
        if disk_index == 7:
            raise AssertionError("Disk index 7 is reserved for the SCSI controller")
        # Configure the disk unit number
if disk_index is not None:
diskspec.device.unitNumber = disk_index
self.next_disk_unit_number = disk_index + 1
else:
diskspec.device.unitNumber = self.next_disk_unit_number
self.next_disk_unit_number += 1
        # unit number 7 is reserved for the SCSI controller, skip to the next index
if self.next_disk_unit_number == 7:
self.next_disk_unit_number += 1
return diskspec
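    # Illustrative note (not executed): disks created in sequence receive unit
    # numbers 0, 1, ..., 6, 8, 9, ... because unit number 7 belongs to the SCSI
    # controller itself and is skipped above.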
def get_device(self, device_type, name):
nic_dict = dict(pcnet32=vim.vm.device.VirtualPCNet32(),
vmxnet2=vim.vm.device.VirtualVmxnet2(),
vmxnet3=vim.vm.device.VirtualVmxnet3(),
e1000=vim.vm.device.VirtualE1000(),
e1000e=vim.vm.device.VirtualE1000e(),
sriov=vim.vm.device.VirtualSriovEthernetCard(),
)
if device_type in nic_dict:
return nic_dict[device_type]
else:
self.module.fail_json(msg='Invalid device_type "%s"'
' for network "%s"' % (device_type, name))
def create_nic(self, device_type, device_label, device_infos):
nic = vim.vm.device.VirtualDeviceSpec()
nic.device = self.get_device(device_type, device_infos['name'])
nic.device.wakeOnLanEnabled = bool(device_infos.get('wake_on_lan', True))
nic.device.deviceInfo = vim.Description()
nic.device.deviceInfo.label = device_label
nic.device.deviceInfo.summary = device_infos['name']
nic.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
nic.device.connectable.startConnected = bool(device_infos.get('start_connected', True))
nic.device.connectable.allowGuestControl = bool(device_infos.get('allow_guest_control', True))
nic.device.connectable.connected = True
if 'mac' in device_infos and is_mac(device_infos['mac']):
nic.device.addressType = 'manual'
nic.device.macAddress = device_infos['mac']
else:
nic.device.addressType = 'generated'
return nic
def integer_value(self, input_value, name):
"""
        Return the int value for the given input, else fail the module with an error
        Args:
            input_value: Input value to retrieve the int value from
            name: Name of the input value (used to build the error message)
        Returns: (int) if an integer value can be obtained, otherwise fails the module with an error message.
"""
if isinstance(input_value, int):
return input_value
elif isinstance(input_value, str) and input_value.isdigit():
return int(input_value)
else:
self.module.fail_json(msg='"%s" attribute should be an'
' integer value.' % name)
class PyVmomiCache(object):
""" This class caches references to objects which are requested multiples times but not modified """
def __init__(self, content, dc_name=None):
self.content = content
self.dc_name = dc_name
self.networks = {}
self.clusters = {}
self.esx_hosts = {}
self.parent_datacenters = {}
def find_obj(self, content, types, name, confine_to_datacenter=True):
""" Wrapper around find_obj to set datacenter context """
result = find_obj(content, types, name)
if result and confine_to_datacenter:
if to_text(self.get_parent_datacenter(result).name) != to_text(self.dc_name):
result = None
objects = self.get_all_objs(content, types, confine_to_datacenter=True)
for obj in objects:
if name is None or to_text(obj.name) == to_text(name):
return obj
return result
def get_all_objs(self, content, types, confine_to_datacenter=True):
""" Wrapper around get_all_objs to set datacenter context """
objects = get_all_objs(content, types)
if confine_to_datacenter:
if hasattr(objects, 'items'):
# resource pools come back as a dictionary
# make a copy
for k, v in tuple(objects.items()):
parent_dc = self.get_parent_datacenter(k)
if parent_dc.name != self.dc_name:
del objects[k]
else:
# everything else should be a list
objects = [x for x in objects if self.get_parent_datacenter(x).name == self.dc_name]
return objects
def get_network(self, network):
if network not in self.networks:
self.networks[network] = self.find_obj(self.content, [vim.Network], network)
return self.networks[network]
def get_cluster(self, cluster):
if cluster not in self.clusters:
self.clusters[cluster] = self.find_obj(self.content, [vim.ClusterComputeResource], cluster)
return self.clusters[cluster]
def get_esx_host(self, host):
if host not in self.esx_hosts:
self.esx_hosts[host] = self.find_obj(self.content, [vim.HostSystem], host)
return self.esx_hosts[host]
    def get_parent_datacenter(self, obj):
        """ Walk the parent tree to find the object's datacenter """
        if isinstance(obj, vim.Datacenter):
            return obj
        if obj in self.parent_datacenters:
            return self.parent_datacenters[obj]
        datacenter = None
        current = obj
        while True:
            if not hasattr(current, 'parent'):
                break
            current = current.parent
            if isinstance(current, vim.Datacenter):
                datacenter = current
                break
        # Cache under the original object; the loop variable has been walked up the tree
        self.parent_datacenters[obj] = datacenter
        return datacenter
class PyVmomiHelper(PyVmomi):
def __init__(self, module):
super(PyVmomiHelper, self).__init__(module)
self.device_helper = PyVmomiDeviceHelper(self.module)
self.configspec = None
self.relospec = None
self.change_detected = False # a change was detected and needs to be applied through reconfiguration
self.change_applied = False # a change was applied meaning at least one task succeeded
self.customspec = None
self.cache = PyVmomiCache(self.content, dc_name=self.params['datacenter'])
def gather_facts(self, vm):
return gather_vm_facts(self.content, vm)
def remove_vm(self, vm, delete_from_inventory=False):
# https://www.vmware.com/support/developer/converter-sdk/conv60_apireference/vim.ManagedEntity.html#destroy
if vm.summary.runtime.powerState.lower() == 'poweredon':
self.module.fail_json(msg="Virtual machine %s found in 'powered on' state, "
"please use 'force' parameter to remove or poweroff VM "
"and try removing VM again." % vm.name)
# Delete VM from Inventory
if delete_from_inventory:
try:
vm.UnregisterVM()
except (vim.fault.TaskInProgress,
vmodl.RuntimeFault) as e:
return {'changed': self.change_applied, 'failed': True, 'msg': e.msg, 'op': 'UnregisterVM'}
self.change_applied = True
return {'changed': self.change_applied, 'failed': False}
# Delete VM from Disk
task = vm.Destroy()
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'destroy'}
else:
return {'changed': self.change_applied, 'failed': False}
def configure_guestid(self, vm_obj, vm_creation=False):
# guest_id is not required when using templates
if self.params['template']:
return
# guest_id is only mandatory on VM creation
if vm_creation and self.params['guest_id'] is None:
self.module.fail_json(msg="guest_id attribute is mandatory for VM creation")
if self.params['guest_id'] and \
(vm_obj is None or self.params['guest_id'].lower() != vm_obj.summary.config.guestId.lower()):
self.change_detected = True
self.configspec.guestId = self.params['guest_id']
def configure_resource_alloc_info(self, vm_obj):
"""
Function to configure resource allocation information about virtual machine
:param vm_obj: VM object in case of reconfigure, None in case of deploy
:return: None
"""
rai_change_detected = False
memory_allocation = vim.ResourceAllocationInfo()
cpu_allocation = vim.ResourceAllocationInfo()
if 'hardware' in self.params:
if 'mem_limit' in self.params['hardware']:
mem_limit = None
try:
mem_limit = int(self.params['hardware'].get('mem_limit'))
except ValueError:
self.module.fail_json(msg="hardware.mem_limit attribute should be an integer value.")
memory_allocation.limit = mem_limit
if vm_obj is None or memory_allocation.limit != vm_obj.config.memoryAllocation.limit:
rai_change_detected = True
if 'mem_reservation' in self.params['hardware'] or 'memory_reservation' in self.params['hardware']:
mem_reservation = self.params['hardware'].get('mem_reservation')
if mem_reservation is None:
mem_reservation = self.params['hardware'].get('memory_reservation')
try:
mem_reservation = int(mem_reservation)
except ValueError:
self.module.fail_json(msg="hardware.mem_reservation or hardware.memory_reservation should be an integer value.")
memory_allocation.reservation = mem_reservation
if vm_obj is None or \
memory_allocation.reservation != vm_obj.config.memoryAllocation.reservation:
rai_change_detected = True
if 'cpu_limit' in self.params['hardware']:
cpu_limit = None
try:
cpu_limit = int(self.params['hardware'].get('cpu_limit'))
except ValueError:
self.module.fail_json(msg="hardware.cpu_limit attribute should be an integer value.")
cpu_allocation.limit = cpu_limit
if vm_obj is None or cpu_allocation.limit != vm_obj.config.cpuAllocation.limit:
rai_change_detected = True
if 'cpu_reservation' in self.params['hardware']:
cpu_reservation = None
try:
cpu_reservation = int(self.params['hardware'].get('cpu_reservation'))
except ValueError:
self.module.fail_json(msg="hardware.cpu_reservation should be an integer value.")
cpu_allocation.reservation = cpu_reservation
if vm_obj is None or \
cpu_allocation.reservation != vm_obj.config.cpuAllocation.reservation:
rai_change_detected = True
if rai_change_detected:
self.configspec.memoryAllocation = memory_allocation
self.configspec.cpuAllocation = cpu_allocation
self.change_detected = True
def configure_cpu_and_memory(self, vm_obj, vm_creation=False):
# set cpu/memory/etc
if 'hardware' in self.params:
if 'num_cpus' in self.params['hardware']:
try:
num_cpus = int(self.params['hardware']['num_cpus'])
except ValueError:
self.module.fail_json(msg="hardware.num_cpus attribute should be an integer value.")
                # check VM power state and CPU hot-add/hot-remove state before reconfiguring the VM
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
if not vm_obj.config.cpuHotRemoveEnabled and num_cpus < vm_obj.config.hardware.numCPU:
self.module.fail_json(msg="Configured cpu number is less than the cpu number of the VM, "
"cpuHotRemove is not enabled")
if not vm_obj.config.cpuHotAddEnabled and num_cpus > vm_obj.config.hardware.numCPU:
self.module.fail_json(msg="Configured cpu number is more than the cpu number of the VM, "
"cpuHotAdd is not enabled")
if 'num_cpu_cores_per_socket' in self.params['hardware']:
try:
num_cpu_cores_per_socket = int(self.params['hardware']['num_cpu_cores_per_socket'])
except ValueError:
self.module.fail_json(msg="hardware.num_cpu_cores_per_socket attribute "
"should be an integer value.")
if num_cpus % num_cpu_cores_per_socket != 0:
self.module.fail_json(msg="hardware.num_cpus attribute should be a multiple "
"of hardware.num_cpu_cores_per_socket")
self.configspec.numCoresPerSocket = num_cpu_cores_per_socket
if vm_obj is None or self.configspec.numCoresPerSocket != vm_obj.config.hardware.numCoresPerSocket:
self.change_detected = True
self.configspec.numCPUs = num_cpus
if vm_obj is None or self.configspec.numCPUs != vm_obj.config.hardware.numCPU:
self.change_detected = True
# num_cpu is mandatory for VM creation
elif vm_creation and not self.params['template']:
self.module.fail_json(msg="hardware.num_cpus attribute is mandatory for VM creation")
if 'memory_mb' in self.params['hardware']:
try:
memory_mb = int(self.params['hardware']['memory_mb'])
except ValueError:
self.module.fail_json(msg="Failed to parse hardware.memory_mb value."
" Please refer the documentation and provide"
" correct value.")
                # check VM power state and memory hot-add state before reconfiguring the VM
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
if vm_obj.config.memoryHotAddEnabled and memory_mb < vm_obj.config.hardware.memoryMB:
self.module.fail_json(msg="Configured memory is less than memory size of the VM, "
"operation is not supported")
elif not vm_obj.config.memoryHotAddEnabled and memory_mb != vm_obj.config.hardware.memoryMB:
self.module.fail_json(msg="memoryHotAdd is not enabled")
self.configspec.memoryMB = memory_mb
if vm_obj is None or self.configspec.memoryMB != vm_obj.config.hardware.memoryMB:
self.change_detected = True
# memory_mb is mandatory for VM creation
elif vm_creation and not self.params['template']:
self.module.fail_json(msg="hardware.memory_mb attribute is mandatory for VM creation")
if 'hotadd_memory' in self.params['hardware']:
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
vm_obj.config.memoryHotAddEnabled != bool(self.params['hardware']['hotadd_memory']):
self.module.fail_json(msg="Configure hotadd memory operation is not supported when VM is power on")
self.configspec.memoryHotAddEnabled = bool(self.params['hardware']['hotadd_memory'])
if vm_obj is None or self.configspec.memoryHotAddEnabled != vm_obj.config.memoryHotAddEnabled:
self.change_detected = True
if 'hotadd_cpu' in self.params['hardware']:
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
vm_obj.config.cpuHotAddEnabled != bool(self.params['hardware']['hotadd_cpu']):
self.module.fail_json(msg="Configure hotadd cpu operation is not supported when VM is power on")
self.configspec.cpuHotAddEnabled = bool(self.params['hardware']['hotadd_cpu'])
if vm_obj is None or self.configspec.cpuHotAddEnabled != vm_obj.config.cpuHotAddEnabled:
self.change_detected = True
if 'hotremove_cpu' in self.params['hardware']:
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
vm_obj.config.cpuHotRemoveEnabled != bool(self.params['hardware']['hotremove_cpu']):
self.module.fail_json(msg="Configure hotremove cpu operation is not supported when VM is power on")
self.configspec.cpuHotRemoveEnabled = bool(self.params['hardware']['hotremove_cpu'])
if vm_obj is None or self.configspec.cpuHotRemoveEnabled != vm_obj.config.cpuHotRemoveEnabled:
self.change_detected = True
if 'memory_reservation_lock' in self.params['hardware']:
self.configspec.memoryReservationLockedToMax = bool(self.params['hardware']['memory_reservation_lock'])
if vm_obj is None or self.configspec.memoryReservationLockedToMax != vm_obj.config.memoryReservationLockedToMax:
self.change_detected = True
if 'boot_firmware' in self.params['hardware']:
                # reconfiguring boot firmware can cause boot issues
if vm_obj is not None:
return
boot_firmware = self.params['hardware']['boot_firmware'].lower()
if boot_firmware not in ('bios', 'efi'):
self.module.fail_json(msg="hardware.boot_firmware value is invalid [%s]."
" Need one of ['bios', 'efi']." % boot_firmware)
self.configspec.firmware = boot_firmware
self.change_detected = True
def sanitize_cdrom_params(self):
# cdroms {'ide': [{num: 0, cdrom: []}, {}], 'sata': [{num: 0, cdrom: []}, {}, ...]}
cdroms = {'ide': [], 'sata': []}
expected_cdrom_spec = self.params.get('cdrom')
if expected_cdrom_spec:
for cdrom_spec in expected_cdrom_spec:
cdrom_spec['controller_type'] = cdrom_spec.get('controller_type', 'ide').lower()
if cdrom_spec['controller_type'] not in ['ide', 'sata']:
self.module.fail_json(msg="Invalid cdrom.controller_type: %s, valid value is 'ide' or 'sata'."
% cdrom_spec['controller_type'])
cdrom_spec['state'] = cdrom_spec.get('state', 'present').lower()
if cdrom_spec['state'] not in ['present', 'absent']:
self.module.fail_json(msg="Invalid cdrom.state: %s, valid value is 'present', 'absent'."
% cdrom_spec['state'])
if cdrom_spec['state'] == 'present':
if 'type' in cdrom_spec and cdrom_spec.get('type') not in ['none', 'client', 'iso']:
self.module.fail_json(msg="Invalid cdrom.type: %s, valid value is 'none', 'client' or 'iso'."
% cdrom_spec.get('type'))
if cdrom_spec.get('type') == 'iso' and not cdrom_spec.get('iso_path'):
self.module.fail_json(msg="cdrom.iso_path is mandatory when cdrom.type is set to iso.")
if cdrom_spec['controller_type'] == 'ide' and \
(cdrom_spec.get('controller_number') not in [0, 1] or cdrom_spec.get('unit_number') not in [0, 1]):
self.module.fail_json(msg="Invalid cdrom.controller_number: %s or cdrom.unit_number: %s, valid"
" values are 0 or 1 for IDE controller." % (cdrom_spec.get('controller_number'), cdrom_spec.get('unit_number')))
if cdrom_spec['controller_type'] == 'sata' and \
(cdrom_spec.get('controller_number') not in range(0, 4) or cdrom_spec.get('unit_number') not in range(0, 30)):
self.module.fail_json(msg="Invalid cdrom.controller_number: %s or cdrom.unit_number: %s,"
" valid controller_number value is 0-3, valid unit_number is 0-29"
" for SATA controller." % (cdrom_spec.get('controller_number'), cdrom_spec.get('unit_number')))
ctl_exist = False
for exist_spec in cdroms.get(cdrom_spec['controller_type']):
if exist_spec['num'] == cdrom_spec['controller_number']:
ctl_exist = True
exist_spec['cdrom'].append(cdrom_spec)
break
if not ctl_exist:
cdroms.get(cdrom_spec['controller_type']).append({'num': cdrom_spec['controller_number'], 'cdrom': [cdrom_spec]})
return cdroms
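    # Illustrative note (not executed): two ISO-backed CD-ROMs on IDE controller 0
    # sanitize to {'ide': [{'num': 0, 'cdrom': [spec0, spec1]}], 'sata': []}.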
def configure_cdrom(self, vm_obj):
# Configure the VM CD-ROM
if self.params.get('cdrom'):
if vm_obj and vm_obj.config.template:
# Changing CD-ROM settings on a template is not supported
return
if isinstance(self.params.get('cdrom'), dict):
self.configure_cdrom_dict(vm_obj)
elif isinstance(self.params.get('cdrom'), list):
self.configure_cdrom_list(vm_obj)
def configure_cdrom_dict(self, vm_obj):
if self.params["cdrom"].get('type') not in ['none', 'client', 'iso']:
self.module.fail_json(msg="cdrom.type is mandatory. Options are 'none', 'client', and 'iso'.")
if self.params["cdrom"]['type'] == 'iso' and not self.params["cdrom"].get('iso_path'):
self.module.fail_json(msg="cdrom.iso_path is mandatory when cdrom.type is set to iso.")
cdrom_spec = None
cdrom_devices = self.get_vm_cdrom_devices(vm=vm_obj)
iso_path = self.params["cdrom"].get("iso_path")
if len(cdrom_devices) == 0:
# Creating new CD-ROM
ide_devices = self.get_vm_ide_devices(vm=vm_obj)
if len(ide_devices) == 0:
# Creating new IDE device
ide_ctl = self.device_helper.create_ide_controller()
ide_device = ide_ctl.device
self.change_detected = True
self.configspec.deviceChange.append(ide_ctl)
else:
ide_device = ide_devices[0]
if len(ide_device.device) > 3:
self.module.fail_json(msg="hardware.cdrom specified for a VM or template which already has 4"
" IDE devices of which none are a cdrom")
cdrom_spec = self.device_helper.create_cdrom(ide_device=ide_device, cdrom_type=self.params["cdrom"]["type"],
iso_path=iso_path)
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
cdrom_spec.device.connectable.connected = (self.params["cdrom"]["type"] != "none")
elif not self.device_helper.is_equal_cdrom(vm_obj=vm_obj, cdrom_device=cdrom_devices[0],
cdrom_type=self.params["cdrom"]["type"], iso_path=iso_path):
self.device_helper.update_cdrom_config(vm_obj, self.params["cdrom"], cdrom_devices[0], iso_path=iso_path)
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
cdrom_spec.device = cdrom_devices[0]
if cdrom_spec:
self.change_detected = True
self.configspec.deviceChange.append(cdrom_spec)
def configure_cdrom_list(self, vm_obj):
configured_cdroms = self.sanitize_cdrom_params()
cdrom_devices = self.get_vm_cdrom_devices(vm=vm_obj)
# configure IDE CD-ROMs
if configured_cdroms['ide']:
ide_devices = self.get_vm_ide_devices(vm=vm_obj)
for expected_cdrom_spec in configured_cdroms['ide']:
ide_device = None
for device in ide_devices:
if device.busNumber == expected_cdrom_spec['num']:
ide_device = device
break
                # if no matching IDE controller is found, or no IDE controller exists, create one
if not ide_device:
ide_ctl = self.device_helper.create_ide_controller(bus_number=expected_cdrom_spec['num'])
ide_device = ide_ctl.device
self.change_detected = True
self.configspec.deviceChange.append(ide_ctl)
for cdrom in expected_cdrom_spec['cdrom']:
cdrom_device = None
iso_path = cdrom.get('iso_path')
unit_number = cdrom.get('unit_number')
for target_cdrom in cdrom_devices:
if target_cdrom.controllerKey == ide_device.key and target_cdrom.unitNumber == unit_number:
cdrom_device = target_cdrom
break
# create new CD-ROM
if not cdrom_device and cdrom.get('state') != 'absent':
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
                            self.module.fail_json(msg='A CD-ROM attached to an IDE controller does not support hot-add.')
if len(ide_device.device) == 2:
self.module.fail_json(msg='Maximum number of CD-ROMs attached to IDE controller is 2.')
cdrom_spec = self.device_helper.create_cdrom(ide_device=ide_device, cdrom_type=cdrom['type'],
iso_path=iso_path, unit_number=unit_number)
self.change_detected = True
self.configspec.deviceChange.append(cdrom_spec)
# re-configure CD-ROM
elif cdrom_device and cdrom.get('state') != 'absent' and \
not self.device_helper.is_equal_cdrom(vm_obj=vm_obj, cdrom_device=cdrom_device,
cdrom_type=cdrom['type'], iso_path=iso_path):
self.device_helper.update_cdrom_config(vm_obj, cdrom, cdrom_device, iso_path=iso_path)
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
cdrom_spec.device = cdrom_device
self.change_detected = True
self.configspec.deviceChange.append(cdrom_spec)
# delete CD-ROM
elif cdrom_device and cdrom.get('state') == 'absent':
if vm_obj and vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
                        self.module.fail_json(msg='A CD-ROM attached to an IDE controller does not support hot-remove.')
cdrom_spec = self.device_helper.remove_cdrom(cdrom_device)
self.change_detected = True
self.configspec.deviceChange.append(cdrom_spec)
        # configuring SATA CD-ROMs is not supported yet
if configured_cdroms['sata']:
pass
def configure_hardware_params(self, vm_obj):
"""
        Configure hardware-related settings of the virtual machine
Args:
vm_obj: virtual machine object
"""
if 'hardware' in self.params:
if 'max_connections' in self.params['hardware']:
# maxMksConnections == max_connections
self.configspec.maxMksConnections = int(self.params['hardware']['max_connections'])
if vm_obj is None or self.configspec.maxMksConnections != vm_obj.config.maxMksConnections:
self.change_detected = True
if 'nested_virt' in self.params['hardware']:
self.configspec.nestedHVEnabled = bool(self.params['hardware']['nested_virt'])
if vm_obj is None or self.configspec.nestedHVEnabled != bool(vm_obj.config.nestedHVEnabled):
self.change_detected = True
if 'version' in self.params['hardware']:
hw_version_check_failed = False
temp_version = self.params['hardware'].get('version', 10)
if isinstance(temp_version, str) and temp_version.lower() == 'latest':
# Check is to make sure vm_obj is not of type template
if vm_obj and not vm_obj.config.template:
try:
task = vm_obj.UpgradeVM_Task()
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'upgrade'}
except vim.fault.AlreadyUpgraded:
# Don't fail if VM is already upgraded.
pass
else:
try:
temp_version = int(temp_version)
except ValueError:
hw_version_check_failed = True
if temp_version not in range(3, 16):
hw_version_check_failed = True
if hw_version_check_failed:
self.module.fail_json(msg="Failed to set hardware.version '%s' value as valid"
" values range from 3 (ESX 2.x) to 14 (ESXi 6.5 and greater)." % temp_version)
# Hardware version is denoted as "vmx-10"
version = "vmx-%02d" % temp_version
self.configspec.version = version
if vm_obj is None or self.configspec.version != vm_obj.config.version:
self.change_detected = True
# Check is to make sure vm_obj is not of type template
if vm_obj and not vm_obj.config.template:
# VM exists and we need to update the hardware version
current_version = vm_obj.config.version
# current_version = "vmx-10"
version_digit = int(current_version.split("-", 1)[-1])
if temp_version < version_digit:
self.module.fail_json(msg="Current hardware version '%d' which is greater than the specified"
" version '%d'. Downgrading hardware version is"
" not supported. Please specify version greater"
" than the current version." % (version_digit,
temp_version))
new_version = "vmx-%02d" % temp_version
try:
task = vm_obj.UpgradeVM_Task(new_version)
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'upgrade'}
except vim.fault.AlreadyUpgraded:
# Don't fail if VM is already upgraded.
pass
if 'virt_based_security' in self.params['hardware']:
host_version = self.select_host().summary.config.product.version
if int(host_version.split('.')[0]) < 6 or (int(host_version.split('.')[0]) == 6 and int(host_version.split('.')[1]) < 7):
self.module.fail_json(msg="ESXi version %s not support VBS." % host_version)
guest_ids = ['windows9_64Guest', 'windows9Server64Guest']
if vm_obj is None:
guestid = self.configspec.guestId
else:
guestid = vm_obj.summary.config.guestId
if guestid not in guest_ids:
self.module.fail_json(msg="Guest '%s' not support VBS." % guestid)
if (vm_obj is None and int(self.configspec.version.split('-')[1]) >= 14) or \
(vm_obj and int(vm_obj.config.version.split('-')[1]) >= 14 and (vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOff)):
self.configspec.flags = vim.vm.FlagInfo()
self.configspec.flags.vbsEnabled = bool(self.params['hardware']['virt_based_security'])
if bool(self.params['hardware']['virt_based_security']):
self.configspec.flags.vvtdEnabled = True
self.configspec.nestedHVEnabled = True
if (vm_obj is None and self.configspec.firmware == 'efi') or \
(vm_obj and vm_obj.config.firmware == 'efi'):
self.configspec.bootOptions = vim.vm.BootOptions()
self.configspec.bootOptions.efiSecureBootEnabled = True
else:
self.module.fail_json(msg="Not support VBS when firmware is BIOS.")
if vm_obj is None or self.configspec.flags.vbsEnabled != vm_obj.config.flags.vbsEnabled:
self.change_detected = True
def get_device_by_type(self, vm=None, type=None):
device_list = []
if vm is None or type is None:
return device_list
for device in vm.config.hardware.device:
if isinstance(device, type):
device_list.append(device)
return device_list
def get_vm_cdrom_devices(self, vm=None):
return self.get_device_by_type(vm=vm, type=vim.vm.device.VirtualCdrom)
def get_vm_ide_devices(self, vm=None):
return self.get_device_by_type(vm=vm, type=vim.vm.device.VirtualIDEController)
def get_vm_network_interfaces(self, vm=None):
device_list = []
if vm is None:
return device_list
nw_device_types = (vim.vm.device.VirtualPCNet32, vim.vm.device.VirtualVmxnet2,
vim.vm.device.VirtualVmxnet3, vim.vm.device.VirtualE1000,
vim.vm.device.VirtualE1000e, vim.vm.device.VirtualSriovEthernetCard)
for device in vm.config.hardware.device:
if isinstance(device, nw_device_types):
device_list.append(device)
return device_list
def sanitize_network_params(self):
"""
        Sanitize user-provided network params
Returns: A sanitized list of network params, else fails
"""
network_devices = list()
# Clean up user data here
for network in self.params['networks']:
if 'name' not in network and 'vlan' not in network:
self.module.fail_json(msg="Please specify at least a network name or"
" a VLAN name under VM network list.")
if 'name' in network and self.cache.get_network(network['name']) is None:
self.module.fail_json(msg="Network '%(name)s' does not exist." % network)
elif 'vlan' in network:
dvps = self.cache.get_all_objs(self.content, [vim.dvs.DistributedVirtualPortgroup])
for dvp in dvps:
if hasattr(dvp.config.defaultPortConfig, 'vlan') and \
isinstance(dvp.config.defaultPortConfig.vlan.vlanId, int) and \
str(dvp.config.defaultPortConfig.vlan.vlanId) == str(network['vlan']):
network['name'] = dvp.config.name
break
if 'dvswitch_name' in network and \
dvp.config.distributedVirtualSwitch.name == network['dvswitch_name'] and \
dvp.config.name == network['vlan']:
network['name'] = dvp.config.name
break
if dvp.config.name == network['vlan']:
network['name'] = dvp.config.name
break
else:
self.module.fail_json(msg="VLAN '%(vlan)s' does not exist." % network)
if 'type' in network:
if network['type'] not in ['dhcp', 'static']:
self.module.fail_json(msg="Network type '%(type)s' is not a valid parameter."
" Valid parameters are ['dhcp', 'static']." % network)
if network['type'] != 'static' and ('ip' in network or 'netmask' in network):
self.module.fail_json(msg='Static IP information provided for network "%(name)s",'
' but "type" is set to "%(type)s".' % network)
else:
# Type is optional parameter, if user provided IP or Subnet assume
# network type as 'static'
if 'ip' in network or 'netmask' in network:
network['type'] = 'static'
else:
# User wants network type as 'dhcp'
network['type'] = 'dhcp'
if network.get('type') == 'static':
if 'ip' in network and 'netmask' not in network:
self.module.fail_json(msg="'netmask' is required if 'ip' is"
" specified under VM network list.")
if 'ip' not in network and 'netmask' in network:
self.module.fail_json(msg="'ip' is required if 'netmask' is"
" specified under VM network list.")
validate_device_types = ['pcnet32', 'vmxnet2', 'vmxnet3', 'e1000', 'e1000e', 'sriov']
if 'device_type' in network and network['device_type'] not in validate_device_types:
self.module.fail_json(msg="Device type specified '%s' is not valid."
" Please specify correct device"
" type from ['%s']." % (network['device_type'],
"', '".join(validate_device_types)))
if 'mac' in network and not is_mac(network['mac']):
self.module.fail_json(msg="Device MAC address '%s' is invalid."
" Please provide correct MAC address." % network['mac'])
network_devices.append(network)
return network_devices
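    # Illustrative note (not executed): an entry such as
    #   {'name': 'VM Network', 'ip': '10.10.10.100', 'netmask': '255.255.255.0'}
    # passes sanitize_network_params() with 'type' inferred as 'static', while an
    # entry with neither 'ip' nor 'netmask' has 'type' inferred as 'dhcp'.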
def configure_network(self, vm_obj):
        # Ignore empty networks; this permits keeping existing networks when deploying a template/cloning a VM
if len(self.params['networks']) == 0:
return
network_devices = self.sanitize_network_params()
# List current device for Clone or Idempotency
current_net_devices = self.get_vm_network_interfaces(vm=vm_obj)
if len(network_devices) < len(current_net_devices):
self.module.fail_json(msg="Given network device list is lesser than current VM device list (%d < %d). "
"Removing interfaces is not allowed"
% (len(network_devices), len(current_net_devices)))
for key in range(0, len(network_devices)):
nic_change_detected = False
network_name = network_devices[key]['name']
if key < len(current_net_devices) and (vm_obj or self.params['template']):
                # We are editing existing network devices; this happens when
                # cloning from a VM or template
nic = vim.vm.device.VirtualDeviceSpec()
nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
nic.device = current_net_devices[key]
if ('wake_on_lan' in network_devices[key] and
nic.device.wakeOnLanEnabled != network_devices[key].get('wake_on_lan')):
nic.device.wakeOnLanEnabled = network_devices[key].get('wake_on_lan')
nic_change_detected = True
if ('start_connected' in network_devices[key] and
nic.device.connectable.startConnected != network_devices[key].get('start_connected')):
nic.device.connectable.startConnected = network_devices[key].get('start_connected')
nic_change_detected = True
if ('allow_guest_control' in network_devices[key] and
nic.device.connectable.allowGuestControl != network_devices[key].get('allow_guest_control')):
nic.device.connectable.allowGuestControl = network_devices[key].get('allow_guest_control')
nic_change_detected = True
if nic.device.deviceInfo.summary != network_name:
nic.device.deviceInfo.summary = network_name
nic_change_detected = True
if 'device_type' in network_devices[key]:
device = self.device_helper.get_device(network_devices[key]['device_type'], network_name)
device_class = type(device)
if not isinstance(nic.device, device_class):
self.module.fail_json(msg="Changing the device type is not possible when interface is already present. "
"The failing device type is %s" % network_devices[key]['device_type'])
# Changing mac address has no effect when editing interface
if 'mac' in network_devices[key] and nic.device.macAddress != current_net_devices[key].macAddress:
self.module.fail_json(msg="Changing MAC address has not effect when interface is already present. "
"The failing new MAC address is %s" % nic.device.macAddress)
else:
# Default device type is vmxnet3, VMware best practice
device_type = network_devices[key].get('device_type', 'vmxnet3')
nic = self.device_helper.create_nic(device_type,
'Network Adapter %s' % (key + 1),
network_devices[key])
nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
nic_change_detected = True
if hasattr(self.cache.get_network(network_name), 'portKeys'):
# VDS switch
pg_obj = None
if 'dvswitch_name' in network_devices[key]:
dvs_name = network_devices[key]['dvswitch_name']
dvs_obj = find_dvs_by_name(self.content, dvs_name)
if dvs_obj is None:
self.module.fail_json(msg="Unable to find distributed virtual switch %s" % dvs_name)
pg_obj = find_dvspg_by_name(dvs_obj, network_name)
if pg_obj is None:
self.module.fail_json(msg="Unable to find distributed port group %s" % network_name)
else:
pg_obj = self.cache.find_obj(self.content, [vim.dvs.DistributedVirtualPortgroup], network_name)
# TODO: (akasurde) There is no way to find association between resource pool and distributed virtual portgroup
# For now, check if we are able to find distributed virtual switch
if not pg_obj.config.distributedVirtualSwitch:
self.module.fail_json(msg="Failed to find distributed virtual switch which is associated with"
" distributed virtual portgroup '%s'. Make sure hostsystem is associated with"
" the given distributed virtual portgroup. Also, check if user has correct"
" permission to access distributed virtual switch in the given portgroup." % pg_obj.name)
if (nic.device.backing and
(not hasattr(nic.device.backing, 'port') or
(nic.device.backing.port.portgroupKey != pg_obj.key or
nic.device.backing.port.switchUuid != pg_obj.config.distributedVirtualSwitch.uuid))):
nic_change_detected = True
dvs_port_connection = vim.dvs.PortConnection()
dvs_port_connection.portgroupKey = pg_obj.key
                    # If the user specifies a distributed port group without associating it to the hostsystem on which
                    # the virtual machine is going to be deployed, then we get an error. We can infer that there is no
                    # association between the given distributed port group and the host system.
host_system = self.params.get('esxi_hostname')
if host_system and host_system not in [host.config.host.name for host in pg_obj.config.distributedVirtualSwitch.config.host]:
self.module.fail_json(msg="It seems that host system '%s' is not associated with distributed"
" virtual portgroup '%s'. Please make sure host system is associated"
" with given distributed virtual portgroup" % (host_system, pg_obj.name))
dvs_port_connection.switchUuid = pg_obj.config.distributedVirtualSwitch.uuid
nic.device.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo()
nic.device.backing.port = dvs_port_connection
elif isinstance(self.cache.get_network(network_name), vim.OpaqueNetwork):
# NSX-T Logical Switch
nic.device.backing = vim.vm.device.VirtualEthernetCard.OpaqueNetworkBackingInfo()
network_id = self.cache.get_network(network_name).summary.opaqueNetworkId
nic.device.backing.opaqueNetworkType = 'nsx.LogicalSwitch'
nic.device.backing.opaqueNetworkId = network_id
nic.device.deviceInfo.summary = 'nsx.LogicalSwitch: %s' % network_id
nic_change_detected = True
else:
# vSwitch
if not isinstance(nic.device.backing, vim.vm.device.VirtualEthernetCard.NetworkBackingInfo):
nic.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
nic_change_detected = True
net_obj = self.cache.get_network(network_name)
if nic.device.backing.network != net_obj:
nic.device.backing.network = net_obj
nic_change_detected = True
if nic.device.backing.deviceName != network_name:
nic.device.backing.deviceName = network_name
nic_change_detected = True
if nic_change_detected:
# Change to fix the issue found while configuring opaque network
# VMs cloned from a template with opaque network will get disconnected
# Replacing deprecated config parameter with relocation Spec
if isinstance(self.cache.get_network(network_name), vim.OpaqueNetwork):
self.relospec.deviceChange.append(nic)
else:
self.configspec.deviceChange.append(nic)
self.change_detected = True
def configure_vapp_properties(self, vm_obj):
if len(self.params['vapp_properties']) == 0:
return
for x in self.params['vapp_properties']:
if not x.get('id'):
self.module.fail_json(msg="id is required to set vApp property")
new_vmconfig_spec = vim.vApp.VmConfigSpec()
if vm_obj:
# VM exists
# This is primarily for vcsim/integration tests, unset vAppConfig was not seen on my deployments
orig_spec = vm_obj.config.vAppConfig if vm_obj.config.vAppConfig else new_vmconfig_spec
vapp_properties_current = dict((x.id, x) for x in orig_spec.property)
vapp_properties_to_change = dict((x['id'], x) for x in self.params['vapp_properties'])
# each property must have a unique key
# init key counter with max value + 1
all_keys = [x.key for x in orig_spec.property]
new_property_index = max(all_keys) + 1 if all_keys else 0
for property_id, property_spec in vapp_properties_to_change.items():
is_property_changed = False
new_vapp_property_spec = vim.vApp.PropertySpec()
if property_id in vapp_properties_current:
if property_spec.get('operation') == 'remove':
new_vapp_property_spec.operation = 'remove'
new_vapp_property_spec.removeKey = vapp_properties_current[property_id].key
is_property_changed = True
else:
# this is 'edit' branch
new_vapp_property_spec.operation = 'edit'
new_vapp_property_spec.info = vapp_properties_current[property_id]
try:
for property_name, property_value in property_spec.items():
if property_name == 'operation':
# operation is not an info object property
# if set to anything other than 'remove' we don't fail
continue
# Updating attributes only if needed
if getattr(new_vapp_property_spec.info, property_name) != property_value:
setattr(new_vapp_property_spec.info, property_name, property_value)
is_property_changed = True
except Exception as e:
msg = "Failed to set vApp property field='%s' and value='%s'. Error: %s" % (property_name, property_value, to_text(e))
self.module.fail_json(msg=msg)
else:
if property_spec.get('operation') == 'remove':
# attempt to delete non-existent property
continue
# this is add new property branch
new_vapp_property_spec.operation = 'add'
property_info = vim.vApp.PropertyInfo()
property_info.classId = property_spec.get('classId')
property_info.instanceId = property_spec.get('instanceId')
property_info.id = property_spec.get('id')
property_info.category = property_spec.get('category')
property_info.label = property_spec.get('label')
property_info.type = property_spec.get('type', 'string')
property_info.userConfigurable = property_spec.get('userConfigurable', True)
property_info.defaultValue = property_spec.get('defaultValue')
property_info.value = property_spec.get('value', '')
property_info.description = property_spec.get('description')
new_vapp_property_spec.info = property_info
new_vapp_property_spec.info.key = new_property_index
new_property_index += 1
is_property_changed = True
if is_property_changed:
new_vmconfig_spec.property.append(new_vapp_property_spec)
else:
# New VM
all_keys = [x.key for x in new_vmconfig_spec.property]
new_property_index = max(all_keys) + 1 if all_keys else 0
vapp_properties_to_change = dict((x['id'], x) for x in self.params['vapp_properties'])
is_property_changed = False
for property_id, property_spec in vapp_properties_to_change.items():
new_vapp_property_spec = vim.vApp.PropertySpec()
# this is add new property branch
new_vapp_property_spec.operation = 'add'
property_info = vim.vApp.PropertyInfo()
property_info.classId = property_spec.get('classId')
property_info.instanceId = property_spec.get('instanceId')
property_info.id = property_spec.get('id')
property_info.category = property_spec.get('category')
property_info.label = property_spec.get('label')
property_info.type = property_spec.get('type', 'string')
property_info.userConfigurable = property_spec.get('userConfigurable', True)
property_info.defaultValue = property_spec.get('defaultValue')
property_info.value = property_spec.get('value', '')
property_info.description = property_spec.get('description')
new_vapp_property_spec.info = property_info
new_vapp_property_spec.info.key = new_property_index
new_property_index += 1
is_property_changed = True
if is_property_changed:
new_vmconfig_spec.property.append(new_vapp_property_spec)
if new_vmconfig_spec.property:
self.configspec.vAppConfig = new_vmconfig_spec
self.change_detected = True
def customize_customvalues(self, vm_obj):
if len(self.params['customvalues']) == 0:
return
facts = self.gather_facts(vm_obj)
for kv in self.params['customvalues']:
if 'key' not in kv or 'value' not in kv:
self.module.exit_json(msg="customvalues items required both 'key' and 'value' fields.")
key_id = None
for field in self.content.customFieldsManager.field:
if field.name == kv['key']:
key_id = field.key
break
if not key_id:
self.module.fail_json(msg="Unable to find custom value key %s" % kv['key'])
            # If the value differs from the one fetched from facts, change it
if kv['key'] not in facts['customvalues'] or facts['customvalues'][kv['key']] != kv['value']:
self.content.customFieldsManager.SetField(entity=vm_obj, key=key_id, value=kv['value'])
self.change_detected = True
def customize_vm(self, vm_obj):
# User specified customization specification
custom_spec_name = self.params.get('customization_spec')
if custom_spec_name:
cc_mgr = self.content.customizationSpecManager
if cc_mgr.DoesCustomizationSpecExist(name=custom_spec_name):
temp_spec = cc_mgr.GetCustomizationSpec(name=custom_spec_name)
self.customspec = temp_spec.spec
return
else:
self.module.fail_json(msg="Unable to find customization specification"
" '%s' in given configuration." % custom_spec_name)
# Network settings
adaptermaps = []
for network in self.params['networks']:
guest_map = vim.vm.customization.AdapterMapping()
guest_map.adapter = vim.vm.customization.IPSettings()
if 'ip' in network and 'netmask' in network:
guest_map.adapter.ip = vim.vm.customization.FixedIp()
guest_map.adapter.ip.ipAddress = str(network['ip'])
guest_map.adapter.subnetMask = str(network['netmask'])
elif 'type' in network and network['type'] == 'dhcp':
guest_map.adapter.ip = vim.vm.customization.DhcpIpGenerator()
if 'gateway' in network:
guest_map.adapter.gateway = network['gateway']
# On Windows, DNS domain and DNS servers can be set by network interface
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.IPSettings.html
if 'domain' in network:
guest_map.adapter.dnsDomain = network['domain']
elif 'domain' in self.params['customization']:
guest_map.adapter.dnsDomain = self.params['customization']['domain']
if 'dns_servers' in network:
guest_map.adapter.dnsServerList = network['dns_servers']
elif 'dns_servers' in self.params['customization']:
guest_map.adapter.dnsServerList = self.params['customization']['dns_servers']
adaptermaps.append(guest_map)
# Global DNS settings
globalip = vim.vm.customization.GlobalIPSettings()
if 'dns_servers' in self.params['customization']:
globalip.dnsServerList = self.params['customization']['dns_servers']
        # TODO: Maybe list the different domains from the interfaces here by default?
if 'dns_suffix' in self.params['customization']:
dns_suffix = self.params['customization']['dns_suffix']
if isinstance(dns_suffix, list):
globalip.dnsSuffixList = " ".join(dns_suffix)
else:
globalip.dnsSuffixList = dns_suffix
elif 'domain' in self.params['customization']:
globalip.dnsSuffixList = self.params['customization']['domain']
if self.params['guest_id']:
guest_id = self.params['guest_id']
else:
guest_id = vm_obj.summary.config.guestId
# For windows guest OS, use SysPrep
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.Sysprep.html#field_detail
if 'win' in guest_id:
ident = vim.vm.customization.Sysprep()
ident.userData = vim.vm.customization.UserData()
            # Setting hostName, orgName and fullName is mandatory, so we set defaults when missing
ident.userData.computerName = vim.vm.customization.FixedName()
# computer name will be truncated to 15 characters if using VM name
default_name = self.params['name'].replace(' ', '')
punctuation = string.punctuation.replace('-', '')
default_name = ''.join([c for c in default_name if c not in punctuation])
ident.userData.computerName.name = str(self.params['customization'].get('hostname', default_name[0:15]))
ident.userData.fullName = str(self.params['customization'].get('fullname', 'Administrator'))
ident.userData.orgName = str(self.params['customization'].get('orgname', 'ACME'))
if 'productid' in self.params['customization']:
ident.userData.productId = str(self.params['customization']['productid'])
ident.guiUnattended = vim.vm.customization.GuiUnattended()
if 'autologon' in self.params['customization']:
ident.guiUnattended.autoLogon = self.params['customization']['autologon']
ident.guiUnattended.autoLogonCount = self.params['customization'].get('autologoncount', 1)
if 'timezone' in self.params['customization']:
# Check if timezone value is an int before proceeding.
ident.guiUnattended.timeZone = self.device_helper.integer_value(
self.params['customization']['timezone'],
'customization.timezone')
ident.identification = vim.vm.customization.Identification()
if self.params['customization'].get('password', '') != '':
ident.guiUnattended.password = vim.vm.customization.Password()
ident.guiUnattended.password.value = str(self.params['customization']['password'])
ident.guiUnattended.password.plainText = True
if 'joindomain' in self.params['customization']:
if 'domainadmin' not in self.params['customization'] or 'domainadminpassword' not in self.params['customization']:
self.module.fail_json(msg="'domainadmin' and 'domainadminpassword' entries are mandatory in 'customization' section to use "
"joindomain feature")
ident.identification.domainAdmin = str(self.params['customization']['domainadmin'])
ident.identification.joinDomain = str(self.params['customization']['joindomain'])
ident.identification.domainAdminPassword = vim.vm.customization.Password()
ident.identification.domainAdminPassword.value = str(self.params['customization']['domainadminpassword'])
ident.identification.domainAdminPassword.plainText = True
elif 'joinworkgroup' in self.params['customization']:
ident.identification.joinWorkgroup = str(self.params['customization']['joinworkgroup'])
if 'runonce' in self.params['customization']:
ident.guiRunOnce = vim.vm.customization.GuiRunOnce()
ident.guiRunOnce.commandList = self.params['customization']['runonce']
else:
# FIXME: We have no clue whether this non-Windows OS is actually Linux, hence it might fail!
# For Linux guest OS, use LinuxPrep
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.LinuxPrep.html
ident = vim.vm.customization.LinuxPrep()
# TODO: Maybe add domain from interface if missing?
if 'domain' in self.params['customization']:
ident.domain = str(self.params['customization']['domain'])
ident.hostName = vim.vm.customization.FixedName()
hostname = str(self.params['customization'].get('hostname', self.params['name'].split('.')[0]))
# Remove all characters except alphanumeric and minus which is allowed by RFC 952
valid_hostname = re.sub(r"[^a-zA-Z0-9\-]", "", hostname)
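# Sketch with a hypothetical name: when no customization hostname is given
# and the VM name is 'db_01.example.com', the split('.') above yields
# 'db_01', and the RFC 952 filter here strips the underscore, leaving 'db01'.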
ident.hostName.name = valid_hostname
# List of supported time zones for different vSphere versions in Linux/Unix systems
# https://kb.vmware.com/s/article/2145518
if 'timezone' in self.params['customization']:
ident.timeZone = str(self.params['customization']['timezone'])
if 'hwclockUTC' in self.params['customization']:
ident.hwClockUTC = self.params['customization']['hwclockUTC']
self.customspec = vim.vm.customization.Specification()
self.customspec.nicSettingMap = adaptermaps
self.customspec.globalIPSettings = globalip
self.customspec.identity = ident
def get_vm_scsi_controller(self, vm_obj):
# If vm_obj doesn't exist there is no SCSI controller to find
if vm_obj is None:
return None
for device in vm_obj.config.hardware.device:
if self.device_helper.is_scsi_controller(device):
scsi_ctl = vim.vm.device.VirtualDeviceSpec()
scsi_ctl.device = device
return scsi_ctl
return None
def get_configured_disk_size(self, expected_disk_spec):
# what size is it?
if [x for x in expected_disk_spec.keys() if x.startswith('size_') or x == 'size']:
# size, size_tb, size_gb, size_mb, size_kb
if 'size' in expected_disk_spec:
size_regex = re.compile(r'(\d+(?:\.\d+)?)([tgmkTGMK][bB])')
disk_size_m = size_regex.match(expected_disk_spec['size'])
try:
if disk_size_m:
expected = disk_size_m.group(1)
unit = disk_size_m.group(2)
else:
raise ValueError
if re.match(r'\d+\.\d+', expected):
# We found a float value in the string; typecast it
expected = float(expected)
else:
# We found an int value in the string; typecast it
expected = int(expected)
if not expected or not unit:
raise ValueError
except (TypeError, ValueError, NameError):
# Common failure
self.module.fail_json(msg="Failed to parse disk size please review value"
" provided using documentation.")
else:
param = [x for x in expected_disk_spec.keys() if x.startswith('size_')][0]
unit = param.split('_')[-1].lower()
expected = [x[1] for x in expected_disk_spec.items() if x[0].startswith('size_')][0]
expected = int(expected)
disk_units = dict(tb=3, gb=2, mb=1, kb=0)
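# Worked examples (hypothetical values): size '40gb' parses to expected=40
# and unit='gb', returning 40 * 1024**2 KB; size_mb: 512 returns 512 * 1024 KB.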
if unit in disk_units:
unit = unit.lower()
return expected * (1024 ** disk_units[unit])
else:
self.module.fail_json(msg="%s is not a supported unit for disk size."
" Supported units are ['%s']." % (unit,
"', '".join(disk_units.keys())))
# A disk was specified but no size was found; fail
self.module.fail_json(
msg="No size, size_kb, size_mb, size_gb or size_tb attribute found into disk configuration")
def find_vmdk(self, vmdk_path):
"""
Takes a vsphere datastore path in the format
[datastore_name] path/to/file.vmdk
Returns vsphere file object or raises RuntimeError
"""
datastore_name, vmdk_fullpath, vmdk_filename, vmdk_folder = self.vmdk_disk_path_split(vmdk_path)
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
if datastore is None:
self.module.fail_json(msg="Failed to find the datastore %s" % datastore_name)
return self.find_vmdk_file(datastore, vmdk_fullpath, vmdk_filename, vmdk_folder)
def add_existing_vmdk(self, vm_obj, expected_disk_spec, diskspec, scsi_ctl):
"""
Adds vmdk file described by expected_disk_spec['filename'], retrieves the file
information and adds the correct spec to self.configspec.deviceChange.
"""
filename = expected_disk_spec['filename']
# if this is a new disk, or the disk file names are different
if (vm_obj and diskspec.device.backing.fileName != filename) or vm_obj is None:
vmdk_file = self.find_vmdk(expected_disk_spec['filename'])
diskspec.device.backing.fileName = expected_disk_spec['filename']
diskspec.device.capacityInKB = VmomiSupport.vmodlTypes['long'](vmdk_file.fileSize / 1024)
diskspec.device.key = -1
self.change_detected = True
self.configspec.deviceChange.append(diskspec)
def configure_disks(self, vm_obj):
# Ignore an empty disk list; this preserves existing disks when deploying from a template or cloning a VM
if len(self.params['disk']) == 0:
return
scsi_ctl = self.get_vm_scsi_controller(vm_obj)
# Create scsi controller only if we are deploying a new VM, not a template or reconfiguring
if vm_obj is None or scsi_ctl is None:
scsi_ctl = self.device_helper.create_scsi_controller(self.get_scsi_type())
self.change_detected = True
self.configspec.deviceChange.append(scsi_ctl)
disks = [x for x in vm_obj.config.hardware.device if isinstance(x, vim.vm.device.VirtualDisk)] \
if vm_obj is not None else None
if disks is not None and self.params.get('disk') and len(self.params.get('disk')) < len(disks):
self.module.fail_json(msg="Provided disks configuration has less disks than "
"the target object (%d vs %d)" % (len(self.params.get('disk')), len(disks)))
disk_index = 0
for expected_disk_spec in self.params.get('disk'):
disk_modified = False
# If we are manipulating an existing object that has disks and disk_index is within range
if vm_obj is not None and disks is not None and disk_index < len(disks):
diskspec = vim.vm.device.VirtualDeviceSpec()
# set the operation to edit so that it knows to keep other settings
diskspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
diskspec.device = disks[disk_index]
else:
diskspec = self.device_helper.create_scsi_disk(scsi_ctl, disk_index)
disk_modified = True
# increment index for next disk search
disk_index += 1
# index 7 is reserved for the SCSI controller
if disk_index == 7:
disk_index += 1
if 'disk_mode' in expected_disk_spec:
disk_mode = expected_disk_spec.get('disk_mode', 'persistent').lower()
valid_disk_mode = ['persistent', 'independent_persistent', 'independent_nonpersistent']
if disk_mode not in valid_disk_mode:
self.module.fail_json(msg="disk_mode specified is not valid."
" Should be one of ['%s']" % "', '".join(valid_disk_mode))
if (vm_obj and diskspec.device.backing.diskMode != disk_mode) or (vm_obj is None):
diskspec.device.backing.diskMode = disk_mode
disk_modified = True
else:
diskspec.device.backing.diskMode = "persistent"
# is it thin?
if 'type' in expected_disk_spec:
disk_type = expected_disk_spec.get('type', '').lower()
if disk_type == 'thin':
diskspec.device.backing.thinProvisioned = True
elif disk_type == 'eagerzeroedthick':
diskspec.device.backing.eagerlyScrub = True
if 'filename' in expected_disk_spec and expected_disk_spec['filename'] is not None:
self.add_existing_vmdk(vm_obj, expected_disk_spec, diskspec, scsi_ctl)
continue
elif vm_obj is None or self.params['template']:
# We are creating new VM or from Template
# Only create virtual device if not backed by vmdk in original template
if diskspec.device.backing.fileName == '':
diskspec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
# which datastore?
if expected_disk_spec.get('datastore'):
# TODO: This is already handled by the relocation spec,
# but it needs to eventually be handled for all the
# other disks defined
pass
kb = self.get_configured_disk_size(expected_disk_spec)
# VMware doesn't allow reducing disk sizes
if kb < diskspec.device.capacityInKB:
self.module.fail_json(
msg="Given disk size is smaller than found (%d < %d). Reducing disks is not allowed." %
(kb, diskspec.device.capacityInKB))
if kb != diskspec.device.capacityInKB or disk_modified:
diskspec.device.capacityInKB = kb
self.configspec.deviceChange.append(diskspec)
self.change_detected = True
def select_host(self):
hostsystem = self.cache.get_esx_host(self.params['esxi_hostname'])
if not hostsystem:
self.module.fail_json(msg='Failed to find ESX host "%(esxi_hostname)s"' % self.params)
if hostsystem.runtime.connectionState != 'connected' or hostsystem.runtime.inMaintenanceMode:
self.module.fail_json(msg='ESXi "%(esxi_hostname)s" is in an invalid state or in maintenance mode.' % self.params)
return hostsystem
def autoselect_datastore(self):
datastore = None
datastores = self.cache.get_all_objs(self.content, [vim.Datastore])
if datastores is None or len(datastores) == 0:
self.module.fail_json(msg="Unable to find a datastore list when autoselecting")
datastore_freespace = 0
for ds in datastores:
if not self.is_datastore_valid(datastore_obj=ds):
continue
if ds.summary.freeSpace > datastore_freespace:
datastore = ds
datastore_freespace = ds.summary.freeSpace
return datastore
def get_recommended_datastore(self, datastore_cluster_obj=None):
"""
Function to return Storage DRS recommended datastore from datastore cluster
Args:
datastore_cluster_obj: datastore cluster managed object
Returns: Name of recommended datastore from the given datastore cluster
"""
if datastore_cluster_obj is None:
return None
# Check if Datastore Cluster provided by user is SDRS ready
sdrs_status = datastore_cluster_obj.podStorageDrsEntry.storageDrsConfig.podConfig.enabled
if sdrs_status:
# We can get a storage recommendation only if SDRS is enabled on the given datastore cluster
pod_sel_spec = vim.storageDrs.PodSelectionSpec()
pod_sel_spec.storagePod = datastore_cluster_obj
storage_spec = vim.storageDrs.StoragePlacementSpec()
storage_spec.podSelectionSpec = pod_sel_spec
storage_spec.type = 'create'
try:
rec = self.content.storageResourceManager.RecommendDatastores(storageSpec=storage_spec)
rec_action = rec.recommendations[0].action[0]
return rec_action.destination.name
except Exception:
# The recommendation failed, so we fall back to the general workflow
pass
datastore = None
datastore_freespace = 0
for ds in datastore_cluster_obj.childEntity:
if isinstance(ds, vim.Datastore) and ds.summary.freeSpace > datastore_freespace:
# If datastore field is provided, filter destination datastores
if not self.is_datastore_valid(datastore_obj=ds):
continue
datastore = ds
datastore_freespace = ds.summary.freeSpace
if datastore:
return datastore.name
return None
def select_datastore(self, vm_obj=None):
datastore = None
datastore_name = None
if len(self.params['disk']) != 0:
# TODO: really use the datastore for newly created disks
if 'autoselect_datastore' in self.params['disk'][0] and self.params['disk'][0]['autoselect_datastore']:
datastores = self.cache.get_all_objs(self.content, [vim.Datastore])
datastores = [x for x in datastores if self.cache.get_parent_datacenter(x).name == self.params['datacenter']]
if datastores is None or len(datastores) == 0:
self.module.fail_json(msg="Unable to find a datastore list when autoselecting")
datastore_freespace = 0
for ds in datastores:
if not self.is_datastore_valid(datastore_obj=ds):
continue
if (ds.summary.freeSpace > datastore_freespace) or (ds.summary.freeSpace == datastore_freespace and not datastore):
# If datastore field is provided, filter destination datastores
if 'datastore' in self.params['disk'][0] and \
isinstance(self.params['disk'][0]['datastore'], str) and \
ds.name.find(self.params['disk'][0]['datastore']) < 0:
continue
datastore = ds
datastore_name = datastore.name
datastore_freespace = ds.summary.freeSpace
elif 'datastore' in self.params['disk'][0]:
datastore_name = self.params['disk'][0]['datastore']
# Check if user has provided datastore cluster first
datastore_cluster = self.cache.find_obj(self.content, [vim.StoragePod], datastore_name)
if datastore_cluster:
# If user specified a datastore cluster, get the recommended datastore from it
datastore_name = self.get_recommended_datastore(datastore_cluster_obj=datastore_cluster)
# Check if get_recommended_datastore or user specified datastore exists or not
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
else:
self.module.fail_json(msg="Either datastore or autoselect_datastore should be provided to select datastore")
if not datastore and self.params['template']:
# use the template's existing DS
disks = [x for x in vm_obj.config.hardware.device if isinstance(x, vim.vm.device.VirtualDisk)]
if disks:
datastore = disks[0].backing.datastore
datastore_name = datastore.name
# validation
if datastore:
dc = self.cache.get_parent_datacenter(datastore)
if dc.name != self.params['datacenter']:
datastore = self.autoselect_datastore()
datastore_name = datastore.name
if not datastore:
if len(self.params['disk']) != 0 or self.params['template'] is None:
self.module.fail_json(msg="Unable to find the datastore with given parameters."
" This could mean, %s is a non-existent virtual machine and module tried to"
" deploy it as new virtual machine with no disk. Please specify disks parameter"
" or specify template to clone from." % self.params['name'])
self.module.fail_json(msg="Failed to find a matching datastore")
return datastore, datastore_name
def obj_has_parent(self, obj, parent):
if obj is None and parent is None:
raise AssertionError()
current_parent = obj
while True:
if current_parent.name == parent.name:
return True
# Check if we have reached the root folder
moid = current_parent._moId
if moid in ['group-d1', 'ha-folder-root']:
return False
current_parent = current_parent.parent
if current_parent is None:
return False
def get_scsi_type(self):
disk_controller_type = "paravirtual"
# set cpu/memory/etc
if 'hardware' in self.params:
if 'scsi' in self.params['hardware']:
if self.params['hardware']['scsi'] in ['buslogic', 'paravirtual', 'lsilogic', 'lsilogicsas']:
disk_controller_type = self.params['hardware']['scsi']
else:
self.module.fail_json(msg="hardware.scsi attribute should be 'paravirtual' or 'lsilogic'")
return disk_controller_type
def find_folder(self, searchpath):
""" Walk inventory objects one position of the searchpath at a time """
# split the searchpath so we can iterate through it
paths = [x.replace('/', '') for x in searchpath.split('/')]
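# Sketch of the walk over a hypothetical inventory: a searchpath of
# 'DC0/vm/Folder1' yields paths ['DC0', 'vm', 'Folder1']; each loop iteration
# descends one level and gives up (root becomes None) as soon as an element
# cannot be matched.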
paths_total = len(paths) - 1
position = 0
# recursive walk while looking for next element in searchpath
root = self.content.rootFolder
while root and position <= paths_total:
change = False
if hasattr(root, 'childEntity'):
for child in root.childEntity:
if child.name == paths[position]:
root = child
position += 1
change = True
break
elif isinstance(root, vim.Datacenter):
if hasattr(root, 'vmFolder'):
if root.vmFolder.name == paths[position]:
root = root.vmFolder
position += 1
change = True
else:
root = None
if not change:
root = None
return root
def get_resource_pool(self, cluster=None, host=None, resource_pool=None):
""" Get a resource pool, filter on cluster, esxi_hostname or resource_pool if given """
cluster_name = cluster or self.params.get('cluster', None)
host_name = host or self.params.get('esxi_hostname', None)
resource_pool_name = resource_pool or self.params.get('resource_pool', None)
# get the datacenter object
datacenter = find_obj(self.content, [vim.Datacenter], self.params['datacenter'])
if not datacenter:
self.module.fail_json(msg='Unable to find datacenter "%s"' % self.params['datacenter'])
# if cluster is given, get the cluster object
if cluster_name:
cluster = find_obj(self.content, [vim.ComputeResource], cluster_name, folder=datacenter)
if not cluster:
self.module.fail_json(msg='Unable to find cluster "%s"' % cluster_name)
# if host is given, get the cluster object using the host
elif host_name:
host = find_obj(self.content, [vim.HostSystem], host_name, folder=datacenter)
if not host:
self.module.fail_json(msg='Unable to find host "%s"' % host_name)
cluster = host.parent
else:
cluster = None
# get resource pools limiting search to cluster or datacenter
resource_pool = find_obj(self.content, [vim.ResourcePool], resource_pool_name, folder=cluster or datacenter)
if not resource_pool:
if resource_pool_name:
self.module.fail_json(msg='Unable to find resource_pool "%s"' % resource_pool_name)
else:
self.module.fail_json(msg='Unable to find resource pool, need esxi_hostname, resource_pool, or cluster')
return resource_pool
def deploy_vm(self):
# https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/clone_vm.py
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.vm.CloneSpec.html
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.vm.ConfigSpec.html
# https://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/vim.vm.RelocateSpec.html
# FIXME:
# - static IPs
self.folder = self.params.get('folder', None)
if self.folder is None:
self.module.fail_json(msg="Folder is required parameter while deploying new virtual machine")
# Prepend / if it was missing from the folder path, also strip trailing slashes
if not self.folder.startswith('/'):
self.folder = '/%(folder)s' % self.params
self.folder = self.folder.rstrip('/')
datacenter = self.cache.find_obj(self.content, [vim.Datacenter], self.params['datacenter'])
if datacenter is None:
self.module.fail_json(msg='No datacenter named %(datacenter)s was found' % self.params)
dcpath = compile_folder_path_for_object(datacenter)
# Nested folder does not have trailing /
if not dcpath.endswith('/'):
dcpath += '/'
# Check for full path first in case it was already supplied
if (self.folder.startswith(dcpath + self.params['datacenter'] + '/vm') or
self.folder.startswith(dcpath + '/' + self.params['datacenter'] + '/vm')):
fullpath = self.folder
elif self.folder.startswith('/vm/') or self.folder == '/vm':
fullpath = "%s%s%s" % (dcpath, self.params['datacenter'], self.folder)
elif self.folder.startswith('/'):
fullpath = "%s%s/vm%s" % (dcpath, self.params['datacenter'], self.folder)
else:
fullpath = "%s%s/vm/%s" % (dcpath, self.params['datacenter'], self.folder)
f_obj = self.content.searchIndex.FindByInventoryPath(fullpath)
# abort if no strategy was successful
if f_obj is None:
# Add some debugging values on failure.
details = {
'datacenter': datacenter.name,
'datacenter_path': dcpath,
'folder': self.folder,
'full_search_path': fullpath,
}
self.module.fail_json(msg='No folder %s matched in the search path: %s' % (self.folder, fullpath),
details=details)
destfolder = f_obj
if self.params['template']:
vm_obj = self.get_vm_or_template(template_name=self.params['template'])
if vm_obj is None:
self.module.fail_json(msg="Could not find a template named %(template)s" % self.params)
else:
vm_obj = None
# always get a resource_pool
resource_pool = self.get_resource_pool()
# set the destination datastore for VM & disks
if self.params['datastore']:
# Give precedence to datastore value provided by user
# User may want to deploy the VM to a specific datastore.
datastore_name = self.params['datastore']
# Check if user has provided datastore cluster first
datastore_cluster = self.cache.find_obj(self.content, [vim.StoragePod], datastore_name)
if datastore_cluster:
# If user specified a datastore cluster, get the recommended datastore from it
datastore_name = self.get_recommended_datastore(datastore_cluster_obj=datastore_cluster)
# Check if get_recommended_datastore or user specified datastore exists or not
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
else:
(datastore, datastore_name) = self.select_datastore(vm_obj)
self.configspec = vim.vm.ConfigSpec()
self.configspec.deviceChange = []
# create the relocation spec
self.relospec = vim.vm.RelocateSpec()
self.relospec.deviceChange = []
self.configure_guestid(vm_obj=vm_obj, vm_creation=True)
self.configure_cpu_and_memory(vm_obj=vm_obj, vm_creation=True)
self.configure_hardware_params(vm_obj=vm_obj)
self.configure_resource_alloc_info(vm_obj=vm_obj)
self.configure_vapp_properties(vm_obj=vm_obj)
self.configure_disks(vm_obj=vm_obj)
self.configure_network(vm_obj=vm_obj)
self.configure_cdrom(vm_obj=vm_obj)
# Determine whether we need network customizations (look for keys in each dictionary that require customization)
network_changes = False
for nw in self.params['networks']:
for key in nw:
# We don't need customizations for these keys
if key not in ('device_type', 'mac', 'name', 'vlan', 'type', 'start_connected'):
network_changes = True
break
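# Hypothetical example: networks: [{'name': 'VM Network', 'ip': '10.0.0.5'}]
# sets network_changes to True because 'ip' is not in the exempt key set
# above, so a guest customization spec will be built for the static IP.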
if len(self.params['customization']) > 0 or network_changes or self.params.get('customization_spec') is not None:
self.customize_vm(vm_obj=vm_obj)
clonespec = None
clone_method = None
try:
if self.params['template']:
# Only select specific host when ESXi hostname is provided
if self.params['esxi_hostname']:
self.relospec.host = self.select_host()
self.relospec.datastore = datastore
# Convert disks present in the template if 'convert' is set
if self.params['convert']:
for device in vm_obj.config.hardware.device:
if isinstance(device, vim.vm.device.VirtualDisk):
disk_locator = vim.vm.RelocateSpec.DiskLocator()
disk_locator.diskBackingInfo = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
if self.params['convert'] in ['thin']:
disk_locator.diskBackingInfo.thinProvisioned = True
if self.params['convert'] in ['eagerzeroedthick']:
disk_locator.diskBackingInfo.eagerlyScrub = True
if self.params['convert'] in ['thick']:
disk_locator.diskBackingInfo.diskMode = "persistent"
disk_locator.diskId = device.key
disk_locator.datastore = datastore
self.relospec.disk.append(disk_locator)
# https://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/vim.vm.RelocateSpec.html
# > pool: For a clone operation from a template to a virtual machine, this argument is required.
self.relospec.pool = resource_pool
linked_clone = self.params.get('linked_clone')
snapshot_src = self.params.get('snapshot_src', None)
if linked_clone:
if snapshot_src is not None:
self.relospec.diskMoveType = vim.vm.RelocateSpec.DiskMoveOptions.createNewChildDiskBacking
else:
self.module.fail_json(msg="Parameter 'linked_src' and 'snapshot_src' are"
" required together for linked clone operation.")
clonespec = vim.vm.CloneSpec(template=self.params['is_template'], location=self.relospec)
if self.customspec:
clonespec.customization = self.customspec
if snapshot_src is not None:
if vm_obj.snapshot is None:
self.module.fail_json(msg="No snapshots present for virtual machine or template [%(template)s]" % self.params)
snapshot = self.get_snapshots_by_name_recursively(snapshots=vm_obj.snapshot.rootSnapshotList,
snapname=snapshot_src)
if len(snapshot) != 1:
self.module.fail_json(msg='virtual machine "%(template)s" does not contain'
' a snapshot named "%(snapshot_src)s"' % self.params)
clonespec.snapshot = snapshot[0].snapshot
clonespec.config = self.configspec
clone_method = 'Clone'
try:
task = vm_obj.Clone(folder=destfolder, name=self.params['name'], spec=clonespec)
except vim.fault.NoPermission as e:
self.module.fail_json(msg="Failed to clone virtual machine %s to folder %s "
"due to permission issue: %s" % (self.params['name'],
destfolder,
to_native(e.msg)))
self.change_detected = True
else:
# ConfigSpec requires a name for VM creation
self.configspec.name = self.params['name']
self.configspec.files = vim.vm.FileInfo(logDirectory=None,
snapshotDirectory=None,
suspendDirectory=None,
vmPathName="[" + datastore_name + "]")
clone_method = 'CreateVM_Task'
try:
task = destfolder.CreateVM_Task(config=self.configspec, pool=resource_pool)
except vmodl.fault.InvalidRequest as e:
self.module.fail_json(msg="Failed to create virtual machine due to invalid configuration "
"parameter %s" % to_native(e.msg))
except vim.fault.RestrictedVersion as e:
self.module.fail_json(msg="Failed to create virtual machine due to "
"product versioning restrictions: %s" % to_native(e.msg))
self.change_detected = True
self.wait_for_task(task)
except TypeError as e:
self.module.fail_json(msg="TypeError was returned, please ensure to give correct inputs. %s" % to_text(e))
if task.info.state == 'error':
# https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2021361
# https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2173
# provide these to the user for debugging
clonespec_json = serialize_spec(clonespec)
configspec_json = serialize_spec(self.configspec)
kwargs = {
'changed': self.change_applied,
'failed': True,
'msg': task.info.error.msg,
'clonespec': clonespec_json,
'configspec': configspec_json,
'clone_method': clone_method
}
return kwargs
else:
# set annotation
vm = task.info.result
if self.params['annotation']:
annotation_spec = vim.vm.ConfigSpec()
annotation_spec.annotation = str(self.params['annotation'])
task = vm.ReconfigVM_Task(annotation_spec)
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'annotation'}
if self.params['customvalues']:
self.customize_customvalues(vm_obj=vm)
if self.params['wait_for_ip_address'] or self.params['wait_for_customization'] or self.params['state'] in ['poweredon', 'restarted']:
set_vm_power_state(self.content, vm, 'poweredon', force=False)
if self.params['wait_for_ip_address']:
wait_for_vm_ip(self.content, vm, self.params['wait_for_ip_address_timeout'])
if self.params['wait_for_customization']:
is_customization_ok = self.wait_for_customization(vm)
if not is_customization_ok:
vm_facts = self.gather_facts(vm)
return {'changed': self.change_applied, 'failed': True, 'instance': vm_facts, 'op': 'customization'}
vm_facts = self.gather_facts(vm)
return {'changed': self.change_applied, 'failed': False, 'instance': vm_facts}
def get_snapshots_by_name_recursively(self, snapshots, snapname):
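# Depth-first sketch over a hypothetical snapshot tree: for root 'snap1' with
# child 'snap2', searching for 'snap2' recurses through each
# childSnapshotList and returns a one-element list; an empty list means no
# match was found.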
snap_obj = []
for snapshot in snapshots:
if snapshot.name == snapname:
snap_obj.append(snapshot)
else:
snap_obj = snap_obj + self.get_snapshots_by_name_recursively(snapshot.childSnapshotList, snapname)
return snap_obj
def reconfigure_vm(self):
self.configspec = vim.vm.ConfigSpec()
self.configspec.deviceChange = []
# create the relocation spec
self.relospec = vim.vm.RelocateSpec()
self.relospec.deviceChange = []
self.configure_guestid(vm_obj=self.current_vm_obj)
self.configure_cpu_and_memory(vm_obj=self.current_vm_obj)
self.configure_hardware_params(vm_obj=self.current_vm_obj)
self.configure_disks(vm_obj=self.current_vm_obj)
self.configure_network(vm_obj=self.current_vm_obj)
self.configure_cdrom(vm_obj=self.current_vm_obj)
self.customize_customvalues(vm_obj=self.current_vm_obj)
self.configure_resource_alloc_info(vm_obj=self.current_vm_obj)
self.configure_vapp_properties(vm_obj=self.current_vm_obj)
if self.params['annotation'] and self.current_vm_obj.config.annotation != self.params['annotation']:
self.configspec.annotation = str(self.params['annotation'])
self.change_detected = True
if self.params['resource_pool']:
self.relospec.pool = self.get_resource_pool()
if self.relospec.pool != self.current_vm_obj.resourcePool:
task = self.current_vm_obj.RelocateVM_Task(spec=self.relospec)
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'relocate'}
# Only send VMware task if we see a modification
if self.change_detected:
task = None
try:
task = self.current_vm_obj.ReconfigVM_Task(spec=self.configspec)
except vim.fault.RestrictedVersion as e:
self.module.fail_json(msg="Failed to reconfigure virtual machine due to"
" product versioning restrictions: %s" % to_native(e.msg))
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'reconfig'}
# Rename VM
if self.params['uuid'] and self.params['name'] and self.params['name'] != self.current_vm_obj.config.name:
task = self.current_vm_obj.Rename_Task(self.params['name'])
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'rename'}
# Mark VM as Template
if self.params['is_template'] and not self.current_vm_obj.config.template:
try:
self.current_vm_obj.MarkAsTemplate()
self.change_applied = True
except vmodl.fault.NotSupported as e:
self.module.fail_json(msg="Failed to mark virtual machine [%s] "
"as template: %s" % (self.params['name'], e.msg))
# Mark Template as VM
elif not self.params['is_template'] and self.current_vm_obj.config.template:
resource_pool = self.get_resource_pool()
kwargs = dict(pool=resource_pool)
if self.params.get('esxi_hostname', None):
host_system_obj = self.select_host()
kwargs.update(host=host_system_obj)
try:
self.current_vm_obj.MarkAsVirtualMachine(**kwargs)
self.change_applied = True
except vim.fault.InvalidState as invalid_state:
self.module.fail_json(msg="Virtual machine is not marked"
" as template : %s" % to_native(invalid_state.msg))
except vim.fault.InvalidDatastore as invalid_ds:
self.module.fail_json(msg="Converting template to virtual machine"
" operation cannot be performed on the"
" target datastores: %s" % to_native(invalid_ds.msg))
except vim.fault.CannotAccessVmComponent as cannot_access:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" as operation unable access virtual machine"
" component: %s" % to_native(cannot_access.msg))
except vmodl.fault.InvalidArgument as invalid_argument:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" due to : %s" % to_native(invalid_argument.msg))
except Exception as generic_exc:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" due to generic error : %s" % to_native(generic_exc))
# Automatically update VMware UUID when converting template to VM.
# This avoids an interactive prompt during VM startup.
uuid_action = [x for x in self.current_vm_obj.config.extraConfig if x.key == "uuid.action"]
if not uuid_action:
uuid_action_opt = vim.option.OptionValue()
uuid_action_opt.key = "uuid.action"
uuid_action_opt.value = "create"
self.configspec.extraConfig.append(uuid_action_opt)
self.change_detected = True
# Customize the existing VM after reconfiguration
if 'existing_vm' in self.params['customization'] and self.params['customization']['existing_vm']:
if self.current_vm_obj.config.template:
self.module.fail_json(msg="VM is template, not support guest OS customization.")
if self.current_vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
self.module.fail_json(msg="VM is not in poweroff state, can not do guest OS customization.")
cus_result = self.customize_exist_vm()
if cus_result['failed']:
return cus_result
vm_facts = self.gather_facts(self.current_vm_obj)
return {'changed': self.change_applied, 'failed': False, 'instance': vm_facts}
def customize_exist_vm(self):
task = None
# Determine whether we need network customizations (look for keys in each dictionary that require customization)
network_changes = False
for nw in self.params['networks']:
for key in nw:
# We don't need customizations for these keys
if key not in ('device_type', 'mac', 'name', 'vlan', 'type', 'start_connected'):
network_changes = True
break
if len(self.params['customization']) > 1 or network_changes or self.params.get('customization_spec'):
self.customize_vm(vm_obj=self.current_vm_obj)
try:
task = self.current_vm_obj.CustomizeVM_Task(self.customspec)
except vim.fault.CustomizationFault as e:
self.module.fail_json(msg="Failed to customization virtual machine due to CustomizationFault: %s" % to_native(e.msg))
except vim.fault.RuntimeFault as e:
self.module.fail_json(msg="failed to customization virtual machine due to RuntimeFault: %s" % to_native(e.msg))
except Exception as e:
self.module.fail_json(msg="failed to customization virtual machine due to fault: %s" % to_native(e.msg))
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'customize_exist'}
if self.params['wait_for_customization']:
set_vm_power_state(self.content, self.current_vm_obj, 'poweredon', force=False)
is_customization_ok = self.wait_for_customization(self.current_vm_obj)
if not is_customization_ok:
return {'changed': self.change_applied, 'failed': True, 'op': 'wait_for_customize_exist'}
return {'changed': self.change_applied, 'failed': False}
def wait_for_task(self, task, poll_interval=1):
"""
Wait for a VMware task to complete. Terminal states are 'error' and 'success'.
Inputs:
- task: the task to wait for
- poll_interval: polling interval to check the task, in seconds
Modifies:
- self.change_applied
"""
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.Task.html
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.TaskInfo.html
# https://github.com/virtdevninja/pyvmomi-community-samples/blob/master/samples/tools/tasks.py
while task.info.state not in ['error', 'success']:
time.sleep(poll_interval)
self.change_applied = self.change_applied or task.info.state == 'success'
def get_vm_events(self, vm, eventTypeIdList):
byEntity = vim.event.EventFilterSpec.ByEntity(entity=vm, recursion="self")
filterSpec = vim.event.EventFilterSpec(entity=byEntity, eventTypeId=eventTypeIdList)
eventManager = self.content.eventManager
return eventManager.QueryEvent(filterSpec)
def wait_for_customization(self, vm, poll=10000, sleep=10):
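# Each phase polls up to `poll` times with `sleep` seconds between attempts,
# so the defaults (poll=10000, sleep=10) allow roughly poll * sleep = 100000
# seconds (~27.8 hours) per phase in the worst case.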
thispoll = 0
while thispoll <= poll:
eventStarted = self.get_vm_events(vm, ['CustomizationStartedEvent'])
if len(eventStarted):
thispoll = 0
while thispoll <= poll:
eventsFinishedResult = self.get_vm_events(vm, ['CustomizationSucceeded', 'CustomizationFailed'])
if len(eventsFinishedResult):
if not isinstance(eventsFinishedResult[0], vim.event.CustomizationSucceeded):
self.module.fail_json(msg='Customization failed with error {0}:\n{1}'.format(
eventsFinishedResult[0]._wsdlName, eventsFinishedResult[0].fullFormattedMessage))
return False
break
else:
time.sleep(sleep)
thispoll += 1
return True
else:
time.sleep(sleep)
thispoll += 1
self.module.fail_json(msg='Waiting for customization timed out.')
return False
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
state=dict(type='str', default='present',
choices=['absent', 'poweredoff', 'poweredon', 'present', 'rebootguest', 'restarted', 'shutdownguest', 'suspended']),
template=dict(type='str', aliases=['template_src']),
is_template=dict(type='bool', default=False),
annotation=dict(type='str', aliases=['notes']),
customvalues=dict(type='list', default=[]),
name=dict(type='str'),
name_match=dict(type='str', choices=['first', 'last'], default='first'),
uuid=dict(type='str'),
use_instance_uuid=dict(type='bool', default=False),
folder=dict(type='str'),
guest_id=dict(type='str'),
disk=dict(type='list', default=[]),
cdrom=dict(type=list_or_dict, default=[]),
hardware=dict(type='dict', default={}),
force=dict(type='bool', default=False),
datacenter=dict(type='str', default='ha-datacenter'),
esxi_hostname=dict(type='str'),
cluster=dict(type='str'),
wait_for_ip_address=dict(type='bool', default=False),
wait_for_ip_address_timeout=dict(type='int', default=300),
state_change_timeout=dict(type='int', default=0),
snapshot_src=dict(type='str'),
linked_clone=dict(type='bool', default=False),
networks=dict(type='list', default=[]),
resource_pool=dict(type='str'),
customization=dict(type='dict', default={}, no_log=True),
customization_spec=dict(type='str', default=None),
wait_for_customization=dict(type='bool', default=False),
vapp_properties=dict(type='list', default=[]),
datastore=dict(type='str'),
convert=dict(type='str', choices=['thin', 'thick', 'eagerzeroedthick']),
delete_from_inventory=dict(type='bool', default=False),
)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True,
mutually_exclusive=[
['cluster', 'esxi_hostname'],
],
required_one_of=[
['name', 'uuid'],
],
)
result = {'failed': False, 'changed': False}
pyv = PyVmomiHelper(module)
# Check if the VM exists before continuing
vm = pyv.get_vm()
# VM already exists
if vm:
if module.params['state'] == 'absent':
# destroy it
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
current_powerstate=vm.summary.runtime.powerState.lower(),
desired_operation='remove_vm',
)
module.exit_json(**result)
if module.params['force']:
# has to be poweredoff first
set_vm_power_state(pyv.content, vm, 'poweredoff', module.params['force'])
result = pyv.remove_vm(vm, module.params['delete_from_inventory'])
elif module.params['state'] == 'present':
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
desired_operation='reconfigure_vm',
)
module.exit_json(**result)
result = pyv.reconfigure_vm()
elif module.params['state'] in ['poweredon', 'poweredoff', 'restarted', 'suspended', 'shutdownguest', 'rebootguest']:
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
current_powerstate=vm.summary.runtime.powerState.lower(),
desired_operation='set_vm_power_state',
)
module.exit_json(**result)
# set powerstate
tmp_result = set_vm_power_state(pyv.content, vm, module.params['state'], module.params['force'], module.params['state_change_timeout'])
if tmp_result['changed']:
result["changed"] = True
if module.params['state'] in ['poweredon', 'restarted', 'rebootguest'] and module.params['wait_for_ip_address']:
wait_result = wait_for_vm_ip(pyv.content, vm, module.params['wait_for_ip_address_timeout'])
if not wait_result:
module.fail_json(msg='Waiting for IP address timed out')
tmp_result['instance'] = wait_result
if not tmp_result["failed"]:
result["failed"] = False
result['instance'] = tmp_result['instance']
if tmp_result["failed"]:
result["failed"] = True
result["msg"] = tmp_result["msg"]
else:
# This should not happen
raise AssertionError()
# VM doesn't exist
else:
if module.params['state'] in ['poweredon', 'poweredoff', 'present', 'restarted', 'suspended']:
if module.check_mode:
result.update(
changed=True,
desired_operation='deploy_vm',
)
module.exit_json(**result)
result = pyv.deploy_vm()
if result['failed']:
module.fail_json(msg='Failed to create a virtual machine : %s' % result['msg'])
if result['failed']:
module.fail_json(**result)
else:
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,399 |
Slash in network name causes vmware_guest module to return Network ... does not exist
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
We are using the vmware_guest module to deploy a VM template on a vCenter cluster. When we specify a virtual network name that contains a slash (e.g. 0123-network-name-10.0.0.0/22), Ansible returns "Network '0123-network-name-10.0.0.0/22' does not exist." This problem does not occur with network names that do not contain slashes.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Oct 7 2019, 17:58:22) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ansible running within AWX 9.0.0
Target vCenter environment - vCenter 6.7.0 build 14070654
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a playbook with a single task in it to deploy and customize a VMware template using the vmware_guest module. Specify a virtual network name with a "/" in it.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Clone a virtual machine from Linux template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ datacenter }}"
state: present
folder: "{{ vm_folder }}"
template: "{{ template }}"
name: "{{ vm_name }}"
cluster: "{{ vm_cluster }}"
networks:
- name: "{{ vm_network }}"
dvswitch_name: "dvSwitch"
hardware:
num_cpus: "{{ vm_hw_cpus }}"
memory_mb: "{{ vm_hw_mem }}"
wait_for_ip_address: True
customization_spec: "{{ vm_custspec }}"
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Expect vmware_guest to launch or update the VM specified by vm_name variable with the parameters specified.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
PLAY [EDF VMware Deployment playbook] ******************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [deployvm : Clone a virtual machine from Linux template and customize] ****
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "Network '0123-network-name-10.0.0.0/22' does not exist."}
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/64399
|
https://github.com/ansible/ansible/pull/64494
|
575116a584b5bb2fcfa3270611677f37d18295a8
|
47f9873eabab41f4c054d393ea7440bd85d7f95c
| 2019-11-04T16:57:05Z |
python
| 2019-11-12T11:43:57Z |
test/integration/targets/prepare_vmware_tests/tasks/init_real_lab.yml
|
---
- name: load vars
include_vars:
file: real_lab.yml
- include_tasks: teardown.yml
- when: setup_esxi_instance is not defined
block:
- include_tasks: setup_datacenter.yml
- include_tasks: setup_cluster.yml
- include_tasks: setup_attach_hosts.yml
when: setup_attach_host is defined
- include_tasks: setup_datastore.yml
when: setup_datastore is defined
- include_tasks: setup_virtualmachines.yml
when: setup_virtualmachines is defined
- include_tasks: setup_switch.yml
when: setup_switch is defined
- include_tasks: setup_dvswitch.yml
when: setup_dvswitch is defined
- include_tasks: setup_resource_pool.yml
when: setup_resource_pool is defined
- include_tasks: setup_category.yml
when: setup_category is defined
- include_tasks: setup_tag.yml
when: setup_tag is defined
- include_tasks: setup_content_library.yml
when: setup_content_library is defined
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,399 |
Slash in network name causes vmware_guest module to return Network ... does not exist
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
We are using the vmware_guest module to deploy a VM template on a vCenter cluster. When we specify a virtual network name that contains a slash (e.g. 0123-network-name-10.0.0.0/22), Ansible returns "Network '0123-network-name-10.0.0.0/22' does not exist." This problem does not occur with network names that do not contain slashes.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Oct 7 2019, 17:58:22) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ansible running within AWX 9.0.0
Target vCenter environment - vCenter 6.7.0 build 14070654
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a playbook with a single task in it to deploy and customize a VMware template using the vmware_guest module. Specify a virtual network name with a "/" in it.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Clone a virtual machine from Linux template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ datacenter }}"
state: present
folder: "{{ vm_folder }}"
template: "{{ template }}"
name: "{{ vm_name }}"
cluster: "{{ vm_cluster }}"
networks:
- name: "{{ vm_network }}"
dvswitch_name: "dvSwitch"
hardware:
num_cpus: "{{ vm_hw_cpus }}"
memory_mb: "{{ vm_hw_mem }}"
wait_for_ip_address: True
customization_spec: "{{ vm_custspec }}"
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Expect vmware_guest to launch or update the VM specified by vm_name variable with the parameters specified.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
PLAY [EDF VMware Deployment playbook] ******************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [deployvm : Clone a virtual machine from Linux template and customize] ****
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "Network '0123-network-name-10.0.0.0/22' does not exist."}
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/64399
|
https://github.com/ansible/ansible/pull/64494
|
575116a584b5bb2fcfa3270611677f37d18295a8
|
47f9873eabab41f4c054d393ea7440bd85d7f95c
| 2019-11-04T16:57:05Z |
python
| 2019-11-12T11:43:57Z |
test/integration/targets/prepare_vmware_tests/tasks/setup_dvs_portgroup.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,399 |
Slash in network name causes vmware_guest module to return Network ... does not exist
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
We are using the vmware_guest module to deploy a VM template on a vCenter cluster. When we specify a virtual network name that contains a slash (e.g. 0123-network-name-10.0.0.0/22), Ansible returns "Network '0123-network-name-10.0.0.0/22' does not exist." This problem does not occur with network names that do not contain slashes.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Oct 7 2019, 17:58:22) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ansible running within AWX 9.0.0
Target vCenter environment - vCenter 6.7.0 build 14070654
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a playbook with a single task in it to deploy and customize a VMware template using the vmware_guest module. Specify a virtual network name with a "/" in it.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Clone a virtual machine from Linux template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ datacenter }}"
state: present
folder: "{{ vm_folder }}"
template: "{{ template }}"
name: "{{ vm_name }}"
cluster: "{{ vm_cluster }}"
networks:
- name: "{{ vm_network }}"
dvswitch_name: "dvSwitch"
hardware:
num_cpus: "{{ vm_hw_cpus }}"
memory_mb: "{{ vm_hw_mem }}"
wait_for_ip_address: True
customization_spec: "{{ vm_custspec }}"
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Expect vmware_guest to launch or update the VM specified by vm_name variable with the parameters specified.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
PLAY [EDF VMware Deployment playbook] ******************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [deployvm : Clone a virtual machine from Linux template and customize] ****
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "Network '0123-network-name-10.0.0.0/22' does not exist."}
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/64399
|
https://github.com/ansible/ansible/pull/64494
|
575116a584b5bb2fcfa3270611677f37d18295a8
|
47f9873eabab41f4c054d393ea7440bd85d7f95c
| 2019-11-04T16:57:05Z |
python
| 2019-11-12T11:43:57Z |
test/integration/targets/prepare_vmware_tests/tasks/teardown.yml
|
---
- name: Clean up the firewall rules
vmware_host_firewall_manager:
cluster_name: '{{ ccr1 }}'
rules:
- name: vvold
enabled: False
- name: CIMHttpServer
enabled: True
allowed_hosts:
all_ip: True
- name: NFC
enabled: True
allowed_hosts:
all_ip: True
ignore_errors: yes
- name: Remove the VM prepared by prepare_vmware_tests
vmware_guest:
name: "{{ item.name }}"
force: yes
state: absent
with_items: '{{ virtual_machines + virtual_machines_in_cluster }}'
- name: Remove the test_vm* VMs
vmware_guest:
name: "{{ item }}"
force: yes
state: absent
with_items:
- test_vm1
- test_vm2
- test_vm3
- name: Remove the DVSwitch
vmware_dvswitch:
datacenter_name: '{{ dc1 }}'
state: absent
switch_name: '{{ item }}'
loop:
- '{{ dvswitch1 }}'
- dvswitch_0001
- dvswitch_0002
ignore_errors: yes
- name: Remove the vSwitches
vmware_vswitch:
hostname: '{{ item }}'
username: '{{ esxi_user }}'
password: '{{ esxi_password }}'
switch_name: "{{ switch1 }}"
state: absent
with_items: "{{ esxi_hosts }}"
ignore_errors: yes
- name: Remove ESXi hosts from vCenter
vmware_host:
datacenter_name: '{{ dc1 }}'
cluster_name: ccr1
esxi_hostname: '{{ item }}'
esxi_username: '{{ esxi_user }}'
esxi_password: '{{ esxi_password }}'
state: absent
with_items: "{{ esxi_hosts }}"
ignore_errors: yes
- name: Unmount NFS datastores from ESXi (1/2)
vmware_host_datastore:
hostname: '{{ item }}'
username: '{{ esxi_user }}'
password: '{{ esxi_password }}'
esxi_hostname: '{{ item }}' # Won't be necessary with https://github.com/ansible/ansible/pull/56516
datastore_name: '{{ ds1 }}'
state: absent
with_items: "{{ esxi_hosts }}"
- name: Unmount NFS datastores from ESXi (2/2)
vmware_host_datastore:
hostname: '{{ item }}'
username: '{{ esxi_user }}'
password: '{{ esxi_password }}'
esxi_hostname: '{{ item }}' # Won't be necessary with https://github.com/ansible/ansible/pull/56516
datastore_name: '{{ ds2 }}'
state: absent
with_items: "{{ esxi_hosts }}"
- name: Delete datastore clusters from the datacenter
vmware_datastore_cluster:
datacenter_name: "{{ dc1 }}"
datastore_cluster_name: '{{ item }}'
state: absent
with_items:
- DSC1
- DSC2
ignore_errors: yes
- name: Remove the datacenter
vmware_datacenter:
datacenter_name: '{{ item }}'
state: absent
when: vcsim is not defined
with_items:
- '{{ dc1 }}'
- datacenter_0001
- name: kill vcsim
uri:
url: "http://{{ vcsim }}:5000/killall"
when: vcsim is defined
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,399 |
Slash in network name causes vmware_guest module to return Network ... does not exist
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
We are using the vmware_guest module to deploy a VM template on a vCenter cluster. When we specify a virtual network name that contains a slash (e.g. 0123-network-name-10.0.0.0/22), Ansible returns "Network '0123-network-name-10.0.0.0/22' does not exist." This problem does not occur with network names that do not contain slashes.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Oct 7 2019, 17:58:22) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ansible running within AWX 9.0.0
Target vCenter environment - vCenter 6.7.0 build 14070654
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a playbook with a single task that deploys and customizes a VMware template using the vmware_guest module. Specify a virtual network name with a "/" in it.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Clone a virtual machine from Linux template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ datacenter }}"
state: present
folder: "{{ vm_folder }}"
template: "{{ template }}"
name: "{{ vm_name }}"
cluster: "{{ vm_cluster }}"
networks:
- name: "{{ vm_network }}"
dvswitch_name: "dvSwitch"
hardware:
num_cpus: "{{ vm_hw_cpus }}"
memory_mb: "{{ vm_hw_mem }}"
wait_for_ip_address: True
customization_spec: "{{ vm_custspec }}"
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Expect vmware_guest to launch or update the VM specified by vm_name variable with the parameters specified.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
PLAY [EDF VMware Deployment playbook] ******************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [deployvm : Clone a virtual machine from Linux template and customize] ****
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "Network '0123-network-name-10.0.0.0/22' does not exist."}
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
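vSphere percent-encodes the characters `%`, `/` and `\` in inventory object names, so a portgroup created as `0123-network-name-10.0.0.0/22` is reported back by the API as `0123-network-name-10.0.0.0%2f22`, and a plain string comparison never matches. Below is a minimal sketch of a slash-tolerant lookup, assuming an iterable of pyVmomi network objects; the function name is illustrative and the exact approach of the merged fix in PR 64494 may differ.
```python
from urllib.parse import unquote


def find_network_by_name(networks, name):
    """Return the first network whose decoded name matches `name`.

    `networks` is any iterable of objects with a `.name` attribute,
    e.g. vim.Network objects collected from a pyVmomi container view.
    """
    for network in networks:
        # vCenter reports "/" inside names as "%2f"; decode before comparing.
        if unquote(network.name) == name:
            return network
    return None
```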
|
https://github.com/ansible/ansible/issues/64399
|
https://github.com/ansible/ansible/pull/64494
|
575116a584b5bb2fcfa3270611677f37d18295a8
|
47f9873eabab41f4c054d393ea7440bd85d7f95c
| 2019-11-04T16:57:05Z |
python
| 2019-11-12T11:43:57Z |
test/integration/targets/prepare_vmware_tests/vars/common.yml
|
---
dc1: DC0
ccr1: DC0_C0
ds1: LocalDS_0
ds2: LocalDS_1
f0: F0
switch1: switch1
esxi1: '{{ esxi_hosts[0] }}'
esxi2: '{{ esxi_hosts[1] }}'
esxi3: '{{ esxi_hosts[2] }}'
dvswitch1: DVS0
esxi_user: root
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,399 |
Slash in network name causes vmware_guest module to return Network ... does not exist
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
We are using the vmware_guest module to deploy a VM template on a vCenter cluster. When we specify a virtual network name that contains a slash (e.g. 0123-network-name-10.0.0.0/22), Ansible returns "Network '0123-network-name-10.0.0.0/22' does not exist." This problem does not occur with network names that do not contain slashes.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Oct 7 2019, 17:58:22) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ansible running within AWX 9.0.0
Target vCenter environment - vCenter 6.7.0 build 14070654
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a playbook with a single task that deploys and customizes a VMware template using the vmware_guest module. Specify a virtual network name with a "/" in it.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Clone a virtual machine from Linux template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ datacenter }}"
state: present
folder: "{{ vm_folder }}"
template: "{{ template }}"
name: "{{ vm_name }}"
cluster: "{{ vm_cluster }}"
networks:
- name: "{{ vm_network }}"
dvswitch_name: "dvSwitch"
hardware:
num_cpus: "{{ vm_hw_cpus }}"
memory_mb: "{{ vm_hw_mem }}"
wait_for_ip_address: True
customization_spec: "{{ vm_custspec }}"
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Expect vmware_guest to launch or update the VM specified by vm_name variable with the parameters specified.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
PLAY [EDF VMware Deployment playbook] ******************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [deployvm : Clone a virtual machine from Linux template and customize] ****
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "Network '0123-network-name-10.0.0.0/22' does not exist."}
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/64399
|
https://github.com/ansible/ansible/pull/64494
|
575116a584b5bb2fcfa3270611677f37d18295a8
|
47f9873eabab41f4c054d393ea7440bd85d7f95c
| 2019-11-04T16:57:05Z |
python
| 2019-11-12T11:43:57Z |
test/integration/targets/vmware_guest/tasks/main.yml
|
# Test code for the vmware_guest module.
# Copyright: (c) 2017, James Tanner <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
- import_role:
name: prepare_vmware_tests
vars:
setup_attach_host: true
setup_datacenter: true
setup_datastore: true
setup_dvswitch: true
setup_resource_pool: true
setup_virtualmachines: true
- include_tasks: run_test_playbook.yml
with_items: '{{ vmware_guest_test_playbooks }}'
loop_control:
loop_var: test_playbook
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,399 |
Slash in network name causes vmware_guest module to return Network ... does not exist
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
We are using the vmware_guest module to deploy a VM template on a vCenter cluster. When we specify a virtual network name that contains a slash (e.g. 0123-network-name-10.0.0.0/22), Ansible returns "Network '0123-network-name-10.0.0.0/22' does not exist." This problem does not occur with network names that do not contain slashes.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Oct 7 2019, 17:58:22) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ansible running within AWX 9.0.0
Target vCenter environment - vCenter 6.7.0 build 14070654
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a playbook with a single task that deploys and customizes a VMware template using the vmware_guest module. Specify a virtual network name with a "/" in it.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Clone a virtual machine from Linux template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ datacenter }}"
state: present
folder: "{{ vm_folder }}"
template: "{{ template }}"
name: "{{ vm_name }}"
cluster: "{{ vm_cluster }}"
networks:
- name: "{{ vm_network }}"
dvswitch_name: "dvSwitch"
hardware:
num_cpus: "{{ vm_hw_cpus }}"
memory_mb: "{{ vm_hw_mem }}"
wait_for_ip_address: True
customization_spec: "{{ vm_custspec }}"
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Expect vmware_guest to launch or update the VM specified by vm_name variable with the parameters specified.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
/bin/sh: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
PLAY [EDF VMware Deployment playbook] ******************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [deployvm : Clone a virtual machine from Linux template and customize] ****
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "Network '0123-network-name-10.0.0.0/22' does not exist."}
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/64399
|
https://github.com/ansible/ansible/pull/64494
|
575116a584b5bb2fcfa3270611677f37d18295a8
|
47f9873eabab41f4c054d393ea7440bd85d7f95c
| 2019-11-04T16:57:05Z |
python
| 2019-11-12T11:43:57Z |
test/integration/targets/vmware_guest/tasks/network_with_dvpg.yml
|
# Test code for the vmware_guest module.
# Copyright: (c) 2018, Abhijeet Kasurde <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Clone from existing VM with DVPG
- when: vcsim is not defined
block:
- name: create basic DVS portgroup
vmware_dvs_portgroup:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
switch_name: "{{ dvswitch1 }}"
portgroup_name: DC0_DVPG0
vlan_id: 0
num_ports: 32
portgroup_type: earlyBinding
state: present
register: dvs_pg_result_0001
- name: Deploy VM from template
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ dc1 }}"
state: poweredon
folder: "{{ f0 }}"
template: "{{ virtual_machines[0].name }}"
name: test_vm1
disk:
- size: 10gb
autoselect_datastore: yes
guest_id: rhel7_64guest
hardware:
memory_mb: 128
num_cpus: 1
networks:
- name: DC0_DVPG0
register: no_vm_result
- debug: var=no_vm_result
- assert:
that:
- no_vm_result is changed
# New clone with DVPG
- name: Deploy new VM with DVPG
vmware_guest:
esxi_hostname: "{{ esxi_hosts[0] }}"
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ dc1 }}"
state: poweredon
folder: "{{ f0 }}"
name: test_vm2
disk:
- size: 10gb
autoselect_datastore: yes
guest_id: rhel7_64guest
hardware:
memory_mb: 128
num_cpus: 1
networks:
- name: "DC0_DVPG0"
dvswitch_name: "{{ dvswitch1 }}"
register: no_vm_result
- debug: var=no_vm_result
- assert:
that:
- no_vm_result is changed
- name: Deploy same VM again
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ dc1 }}"
state: poweredon
folder: "{{ f0 }}"
name: test_vm2
disk:
- size: 10gb
autoselect_datastore: yes
guest_id: rhel7_64guest
hardware:
memory_mb: 128
num_cpus: 1
networks:
- name: "DC0_DVPG0"
register: no_vm_result
- debug: var=no_vm_result
- assert:
that:
- not (no_vm_result is changed)
always:
- when: vcsim is not defined
name: Remove VM to free the portgroup
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
name: '{{ item }}'
force: yes
state: absent
with_items:
- test_vm1
- test_vm2
- when: vcsim is not defined
name: delete basic portgroup
vmware_dvs_portgroup:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
switch_name: "{{ dvswitch1 }}"
portgroup_name: DC0_DVPG0
vlan_id: 0
num_ports: 32
portgroup_type: earlyBinding
state: absent
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,407 |
ansible/wait_for_connection: temp file from ~/.ansible/tmp not removed
|
##### SUMMARY
A temp file is not removed every time a playbook calling wait_for_connection is run:
a file named AnsiballZ_ping.py is left behind in a tmp folder and never removed.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
wait_for_connection
##### ANSIBLE VERSION
```
ansible 2.8.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/usr/share/ansible']
ansible python module location = /home/user/ansible-2.8/lib/python2.7/site-packages/ansible
executable location = /home/user/ansible-2.8/bin/ansible
python version = 2.7.5 (default, Jul 13 2018, 13:06:57) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
```
```
ansible 2.9.0b1
config file = None
configured module search path = [u'/home/user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/ansible-2.9/local/lib/python2.7/site-packages/ansible
executable location = /home/user/ansible-2.9/bin/ansible
python version = 2.7.16 (default, Apr 6 2019, 01:42:57) [GCC 8.3.0]
```
##### OS / ENVIRONMENT
any linux
##### STEPS TO REPRODUCE
```yaml
- hosts: all
gather_facts: false
tasks:
- name: Wait for system to become reachable
wait_for_connection:
timeout: 10
```
```
PLAY [all] **********************************************************************************************************************************************************************************************************************************
TASK [Wait for system to become reachable] **************************************************************************************************************************************************************************************************
ok: [hostname] => {"changed": false, "elapsed": 2}
PLAY RECAP **********************************************************************************************************************************************************************************************************************************
hostname : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
##### EXPECTED RESULTS
~/.ansible/tmp folder should be empty
##### ACTUAL RESULTS
```
./.ansible
./.ansible/tmp
./.ansible/tmp/ansible-tmp-1568722317.85-268965278526846
./.ansible/tmp/ansible-tmp-1568722317.85-268965278526846/AnsiballZ_ping.py
```
|
https://github.com/ansible/ansible/issues/62407
|
https://github.com/ansible/ansible/pull/64592
|
fed049600542c4c478a2741f3a28ca1dbfd4497a
|
68428efc39313b7fb22b77152ec548ca983b03dd
| 2019-09-17T12:19:25Z |
python
| 2019-11-12T15:07:33Z |
changelogs/fragments/62407-wait_for_connection.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,407 |
ansible/wait_for_connection: temp file from ~/.ansible/tmp not removed
|
##### SUMMARY
A temp file is not removed every time a playbook calling wait_for_connection is run:
a file named AnsiballZ_ping.py is left behind in a tmp folder and never removed.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
wait_for_connection
##### ANSIBLE VERSION
```
ansible 2.8.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/usr/share/ansible']
ansible python module location = /home/user/ansible-2.8/lib/python2.7/site-packages/ansible
executable location = /home/user/ansible-2.8/bin/ansible
python version = 2.7.5 (default, Jul 13 2018, 13:06:57) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
```
```
ansible 2.9.0b1
config file = None
configured module search path = [u'/home/user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/ansible-2.9/local/lib/python2.7/site-packages/ansible
executable location = /home/user/ansible-2.9/bin/ansible
python version = 2.7.16 (default, Apr 6 2019, 01:42:57) [GCC 8.3.0]
```
##### OS / ENVIRONMENT
any linux
##### STEPS TO REPRODUCE
```yaml
- hosts: all
gather_facts: false
tasks:
- name: Wait for system to become reachable
wait_for_connection:
timeout: 10
```
```
PLAY [all] **********************************************************************************************************************************************************************************************************************************
TASK [Wait for system to become reachable] **************************************************************************************************************************************************************************************************
ok: [hostname] => {"changed": false, "elapsed": 2}
PLAY RECAP **********************************************************************************************************************************************************************************************************************************
hostname : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
##### EXPECTED RESULTS
~/.ansible/tmp folder should be empty
##### ACTUAL RESULTS
```
./.ansible
./.ansible/tmp
./.ansible/tmp/ansible-tmp-1568722317.85-268965278526846
./.ansible/tmp/ansible-tmp-1568722317.85-268965278526846/AnsiballZ_ping.py
```
|
https://github.com/ansible/ansible/issues/62407
|
https://github.com/ansible/ansible/pull/64592
|
fed049600542c4c478a2741f3a28ca1dbfd4497a
|
68428efc39313b7fb22b77152ec548ca983b03dd
| 2019-09-17T12:19:25Z |
python
| 2019-11-12T15:07:33Z |
lib/ansible/plugins/action/wait_for_connection.py
|
# (c) 2017, Dag Wieers <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# CI-required python3 boilerplate
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import time
from datetime import datetime, timedelta
from ansible.module_utils._text import to_text
from ansible.plugins.action import ActionBase
from ansible.utils.display import Display
display = Display()
class TimedOutException(Exception):
pass
class ActionModule(ActionBase):
TRANSFERS_FILES = False
_VALID_ARGS = frozenset(('connect_timeout', 'delay', 'sleep', 'timeout'))
DEFAULT_CONNECT_TIMEOUT = 5
DEFAULT_DELAY = 0
DEFAULT_SLEEP = 1
DEFAULT_TIMEOUT = 600
def do_until_success_or_timeout(self, what, timeout, connect_timeout, what_desc, sleep=1):
max_end_time = datetime.utcnow() + timedelta(seconds=timeout)
error = None
while datetime.utcnow() < max_end_time:
try:
what(connect_timeout)
if what_desc:
display.debug("wait_for_connection: %s success" % what_desc)
return
except Exception as e:
error = e # PY3 compatibility to store exception for use outside of this block
if what_desc:
display.debug("wait_for_connection: %s fail (expected), retrying in %d seconds..." % (what_desc, sleep))
time.sleep(sleep)
raise TimedOutException("timed out waiting for %s: %s" % (what_desc, error))
def run(self, tmp=None, task_vars=None):
if task_vars is None:
task_vars = dict()
connect_timeout = int(self._task.args.get('connect_timeout', self.DEFAULT_CONNECT_TIMEOUT))
delay = int(self._task.args.get('delay', self.DEFAULT_DELAY))
sleep = int(self._task.args.get('sleep', self.DEFAULT_SLEEP))
timeout = int(self._task.args.get('timeout', self.DEFAULT_TIMEOUT))
if self._play_context.check_mode:
display.vvv("wait_for_connection: skipping for check_mode")
return dict(skipped=True)
result = super(ActionModule, self).run(tmp, task_vars)
del tmp # tmp no longer has any effect
def ping_module_test(connect_timeout):
''' Test ping module, if available '''
display.vvv("wait_for_connection: attempting ping module test")
# call connection reset between runs if it's there
try:
self._connection.reset()
except AttributeError:
pass
# Use win_ping on winrm/powershell, else use ping
if getattr(self._connection._shell, "_IS_WINDOWS", False):
ping_result = self._execute_module(module_name='win_ping', module_args=dict(), task_vars=task_vars)
else:
ping_result = self._execute_module(module_name='ping', module_args=dict(), task_vars=task_vars)
# Test module output
if ping_result['ping'] != 'pong':
raise Exception('ping test failed')
start = datetime.now()
if delay:
time.sleep(delay)
try:
# If the connection has a transport_test method, use it first
if hasattr(self._connection, 'transport_test'):
self.do_until_success_or_timeout(self._connection.transport_test, timeout, connect_timeout, what_desc="connection port up", sleep=sleep)
# Use the ping module test to determine end-to-end connectivity
self.do_until_success_or_timeout(ping_module_test, timeout, connect_timeout, what_desc="ping module test", sleep=sleep)
except TimedOutException as e:
result['failed'] = True
result['msg'] = to_text(e)
elapsed = datetime.now() - start
result['elapsed'] = elapsed.seconds
return result
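The plugin above never removes the temporary directory that `_execute_module()` creates for each ping attempt, which is what leaves `AnsiballZ_ping.py` behind. A minimal sketch of a cleanup at the end of `run()`, assuming the `_remove_tmp_path()` helper from `ActionBase` and the shell plugin's `tmpdir` attribute (the merged fix in PR 64592 may differ in detail):
```python
    # ... tail of ActionModule.run(), with cleanup added before returning
    elapsed = datetime.now() - start
    result['elapsed'] = elapsed.seconds

    # Each ping retry goes through _execute_module(), which stages
    # AnsiballZ_ping.py in a fresh remote temp dir; removing the shell's
    # tmpdir keeps ~/.ansible/tmp from accumulating those leftovers.
    self._remove_tmp_path(self._connection._shell.tmpdir)

    return result
```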
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,068 |
Failing integration test zabbix_host
|
##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
zabbix_host integration test
##### ANSIBLE VERSION
devel
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
The integration test zabbix_host is disabled because it fails on Ubuntu 16.04 in CI.
##### STEPS TO REPRODUCE
Run the zabbix_host integration test in CI.
##### EXPECTED RESULTS
Tests pass.
##### ACTUAL RESULTS
Tests fail: https://app.shippable.com/github/ansible/ansible/runs/148501/73/tests
> failure: Failed to update apt cache: W:GPG error: http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 68818C72E52529D4, W:The repository 'http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 Release' is not signed., W:Data from such a repository can't be authenticated and is therefore potentially dangerous to use., W:See apt-secure(8) manpage for repository creation and user configuration details., E:Failed to fetch store:/var/lib/apt/lists/partial/repo.mongodb.org_apt_ubuntu_dists_xenial_mongodb-org_4.0_multiverse_binary-amd64_Packages.gz Hash Sum mismatch, E:Some index files failed to download. They have been ignored, or old ones used instead.
```
{
"changed": false,
"msg": "Failed to update apt cache: W:GPG error: http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 68818C72E52529D4, W:The repository 'http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 Release' is not signed., W:Data from such a repository can't be authenticated and is therefore potentially dangerous to use., W:See apt-secure(8) manpage for repository creation and user configuration details., E:Failed to fetch store:/var/lib/apt/lists/partial/repo.mongodb.org_apt_ubuntu_dists_xenial_mongodb-org_4.0_multiverse_binary-amd64_Packages.gz Hash Sum mismatch, E:Some index files failed to download. They have been ignored, or old ones used instead."
}
```
|
https://github.com/ansible/ansible/issues/64068
|
https://github.com/ansible/ansible/pull/64142
|
47bf5deb54ef77a39261bb121b687641492afb4b
|
fadf7a426fef8e61110d35e5b3d6c546a024a8dc
| 2019-10-29T17:27:11Z |
python
| 2019-11-12T17:54:51Z |
test/integration/targets/setup_zabbix/handlers/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,068 |
Failing integration test zabbix_host
|
##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
zabbix_host integration test
##### ANSIBLE VERSION
devel
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
The integration test zabbix_host is disabled because it fails on Ubuntu 16.04 in CI.
##### STEPS TO REPRODUCE
Run the zabbix_host integration test in CI.
##### EXPECTED RESULTS
Tests pass.
##### ACTUAL RESULTS
Tests fail: https://app.shippable.com/github/ansible/ansible/runs/148501/73/tests
> failure: Failed to update apt cache: W:GPG error: http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 68818C72E52529D4, W:The repository 'http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 Release' is not signed., W:Data from such a repository can't be authenticated and is therefore potentially dangerous to use., W:See apt-secure(8) manpage for repository creation and user configuration details., E:Failed to fetch store:/var/lib/apt/lists/partial/repo.mongodb.org_apt_ubuntu_dists_xenial_mongodb-org_4.0_multiverse_binary-amd64_Packages.gz Hash Sum mismatch, E:Some index files failed to download. They have been ignored, or old ones used instead.
```
{
"changed": false,
"msg": "Failed to update apt cache: W:GPG error: http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 68818C72E52529D4, W:The repository 'http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 Release' is not signed., W:Data from such a repository can't be authenticated and is therefore potentially dangerous to use., W:See apt-secure(8) manpage for repository creation and user configuration details., E:Failed to fetch store:/var/lib/apt/lists/partial/repo.mongodb.org_apt_ubuntu_dists_xenial_mongodb-org_4.0_multiverse_binary-amd64_Packages.gz Hash Sum mismatch, E:Some index files failed to download. They have been ignored, or old ones used instead."
}
```
|
https://github.com/ansible/ansible/issues/64068
|
https://github.com/ansible/ansible/pull/64142
|
47bf5deb54ef77a39261bb121b687641492afb4b
|
fadf7a426fef8e61110d35e5b3d6c546a024a8dc
| 2019-10-29T17:27:11Z |
python
| 2019-11-12T17:54:51Z |
test/integration/targets/setup_zabbix/tasks/setup.yml
|
# sets up and starts Zabbix with default settings using a MySQL database.
- name: install zabbix repository key
apt_key:
url: "{{ zabbix_apt_repository_key }}"
state: present
- name: install zabbix repository
apt_repository:
repo: "{{ zabbix_apt_repository }}"
filename: zabbix
state: present
- name: check if dpkg is set to exclude specific destinations
stat:
path: /etc/dpkg/dpkg.cfg.d/excludes
register: dpkg_excludes
- name: ensure documentation installations are allowed for zabbix
lineinfile:
path: /etc/dpkg/dpkg.cfg.d/excludes
regexp: '^path-include=/usr/share/doc/zabbix*$'
line: 'path-include=/usr/share/doc/zabbix*'
state: present
when: dpkg_excludes.stat.exists
- name: install zabbix apt dependencies
apt:
name: "{{ zabbix_packages }}"
state: latest
update_cache: yes
- name: install zabbix-api python package
pip:
name: zabbix-api
state: latest
- name: create mysql user {{ db_user }}
mysql_user:
name: "{{ db_user }}"
password: "{{ db_password }}"
state: present
priv: "{{ db_name }}.*:ALL"
login_unix_socket: '{{ mysql_socket }}'
- name: import initial zabbix database
mysql_db:
name: "{{ db_name }}"
login_user: "{{ db_user }}"
login_password: "{{ db_password }}"
state: import
target: /usr/share/doc/zabbix-server-mysql/create.sql.gz
- name: deploy zabbix-server configuration
template:
src: zabbix_server.conf.j2
dest: /etc/zabbix/zabbix_server.conf
owner: root
group: zabbix
mode: 0640
- name: deploy zabbix web frontend configuration
template:
src: zabbix.conf.php.j2
dest: /etc/zabbix/web/zabbix.conf.php
mode: 0644
- name: Create proper run directory for zabbix-server
file:
path: /var/run/zabbix
state: directory
owner: zabbix
group: zabbix
mode: 0775
- name: restart zabbix-server
service:
name: zabbix-server
state: restarted
enabled: yes
- name: restart apache2
service:
name: apache2
state: restarted
enabled: yes
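The failing step in CI is the blanket `update_cache: yes` above, which aborts when any configured third-party repository (here mongodb.org) is broken. A hedged sketch of one mitigation is to retry the cache refresh, as shown here; the merged fix in PR 64142 may instead simply constrain where the test runs, as the bionic-only guard in the zabbix_host main.yml below suggests.
```yaml
- name: refresh apt cache, retrying past transient mirror failures (sketch)
  apt:
    update_cache: yes
  register: apt_cache_refresh
  retries: 5
  delay: 10
  until: apt_cache_refresh is success
```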
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,068 |
Failing integration test zabbix_host
|
##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
zabbix_host integration test
##### ANSIBLE VERSION
devel
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
The integration test zabbix_host is disabled because it fails on Ubuntu 16.04 in CI.
##### STEPS TO REPRODUCE
Run the zabbix_host integration test in CI.
##### EXPECTED RESULTS
Tests pass.
##### ACTUAL RESULTS
Tests fail: https://app.shippable.com/github/ansible/ansible/runs/148501/73/tests
> failure: Failed to update apt cache: W:GPG error: http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 68818C72E52529D4, W:The repository 'http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 Release' is not signed., W:Data from such a repository can't be authenticated and is therefore potentially dangerous to use., W:See apt-secure(8) manpage for repository creation and user configuration details., E:Failed to fetch store:/var/lib/apt/lists/partial/repo.mongodb.org_apt_ubuntu_dists_xenial_mongodb-org_4.0_multiverse_binary-amd64_Packages.gz Hash Sum mismatch, E:Some index files failed to download. They have been ignored, or old ones used instead.
```
{
"changed": false,
"msg": "Failed to update apt cache: W:GPG error: http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 68818C72E52529D4, W:The repository 'http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 Release' is not signed., W:Data from such a repository can't be authenticated and is therefore potentially dangerous to use., W:See apt-secure(8) manpage for repository creation and user configuration details., E:Failed to fetch store:/var/lib/apt/lists/partial/repo.mongodb.org_apt_ubuntu_dists_xenial_mongodb-org_4.0_multiverse_binary-amd64_Packages.gz Hash Sum mismatch, E:Some index files failed to download. They have been ignored, or old ones used instead."
}
```
|
https://github.com/ansible/ansible/issues/64068
|
https://github.com/ansible/ansible/pull/64142
|
47bf5deb54ef77a39261bb121b687641492afb4b
|
fadf7a426fef8e61110d35e5b3d6c546a024a8dc
| 2019-10-29T17:27:11Z |
python
| 2019-11-12T17:54:51Z |
test/integration/targets/zabbix_host/aliases
|
destructive
shippable/posix/group1
skip/osx
skip/freebsd
skip/rhel
disabled
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,068 |
Failing integration test zabbix_host
|
##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
zabbix_host integration test
##### ANSIBLE VERSION
devel
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
The integration test zabbix_host is disabled because it fails on Ubuntu 16.04 in CI.
##### STEPS TO REPRODUCE
Run the zabbix_host integration test in CI.
##### EXPECTED RESULTS
Tests pass.
##### ACTUAL RESULTS
Tests fail: https://app.shippable.com/github/ansible/ansible/runs/148501/73/tests
> failure: Failed to update apt cache: W:GPG error: http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 68818C72E52529D4, W:The repository 'http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 Release' is not signed., W:Data from such a repository can't be authenticated and is therefore potentially dangerous to use., W:See apt-secure(8) manpage for repository creation and user configuration details., E:Failed to fetch store:/var/lib/apt/lists/partial/repo.mongodb.org_apt_ubuntu_dists_xenial_mongodb-org_4.0_multiverse_binary-amd64_Packages.gz Hash Sum mismatch, E:Some index files failed to download. They have been ignored, or old ones used instead.
```
{
"changed": false,
"msg": "Failed to update apt cache: W:GPG error: http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 68818C72E52529D4, W:The repository 'http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 Release' is not signed., W:Data from such a repository can't be authenticated and is therefore potentially dangerous to use., W:See apt-secure(8) manpage for repository creation and user configuration details., E:Failed to fetch store:/var/lib/apt/lists/partial/repo.mongodb.org_apt_ubuntu_dists_xenial_mongodb-org_4.0_multiverse_binary-amd64_Packages.gz Hash Sum mismatch, E:Some index files failed to download. They have been ignored, or old ones used instead."
}
```
|
https://github.com/ansible/ansible/issues/64068
|
https://github.com/ansible/ansible/pull/64142
|
47bf5deb54ef77a39261bb121b687641492afb4b
|
fadf7a426fef8e61110d35e5b3d6c546a024a8dc
| 2019-10-29T17:27:11Z |
python
| 2019-11-12T17:54:51Z |
test/integration/targets/zabbix_host/tasks/main.yml
|
---
# setup stuff not testing zabbix_host
- block:
- include: zabbix_host_setup.yml
# zabbix_host module tests
- include: zabbix_host_tests.yml
# documentation example tests
- include: zabbix_host_doc.yml
# tear down stuff set up earlier
- include: zabbix_host_teardown.yml
when:
- ansible_distribution == 'Ubuntu'
- ansible_distribution_release == 'bionic'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,828 |
win_iis_website: state=restarted does not actually restart the website
|
##### SUMMARY
win_iis_website module does not restart IIS website when using 'state=restarted' parameter
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_iis_website
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.7.9
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/amesh/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
same issue still present in newer versions according to source code
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS: Windows Server 2016
##### STEPS TO REPRODUCE
1. Have a Windows Server with IIS and 'dummy_website' created in it.
2. Run the following task in a playbook:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "Restart the IIS Site"
win_iis_website:
name: "dummy_website"
state: "restarted"
tags: "restart"
```
##### EXPECTED RESULTS
The website is restarted.
##### ACTUAL RESULTS
The website is not restarted, though Ansible claims the change occurred:
<!--- Paste verbatim command output between quotes -->
```paste below
changed: [iis1.example.com] => {
"changed": true,
"site": {
"ApplicationPool": "dummy_app_pool",
"Bindings": [
"10.101.101.101:80:"
],
"ID": 2,
"Name": "dummy_website",
"PhysicalPath": "C:\\inetpub\\vhosts\\dummy_website",
"State": "Started"
}
}
```
In the module source code we can see that the state=restarted condition is only handled in the spot where the Start-Website PowerShell cmdlet is invoked:
https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/windows/win_iis_website.ps1
```paste below
# Set run state
if (($state -eq 'stopped') -and ($site.State -eq 'Started'))
{
Stop-Website -Name $name -ErrorAction Stop
$result.changed = $true
}
if ((($state -eq 'started') -and ($site.State -eq 'Stopped')) -or ($state -eq 'restarted'))
{
Start-Website -Name $name -ErrorAction Stop
$result.changed = $true
}
```
I don't believe this is the intended behavior; the 'restarted' state should also be included in the condition that invokes Stop-Website.
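A minimal sketch of the suggested change (hypothetical; the merged fix in PR 63829 may differ in detail) lets `restarted` stop a started site before falling through to the start block:
```powershell
# Sketch: let state=restarted stop a running site first, then start it again
if ((($state -eq 'stopped') -or ($state -eq 'restarted')) -and ($site.State -eq 'Started'))
{
  Stop-Website -Name $name -ErrorAction Stop
  $result.changed = $true
}
if ((($state -eq 'started') -and ($site.State -eq 'Stopped')) -or ($state -eq 'restarted'))
{
  Start-Website -Name $name -ErrorAction Stop
  $result.changed = $true
}
```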
|
https://github.com/ansible/ansible/issues/63828
|
https://github.com/ansible/ansible/pull/63829
|
95d613f3ab376af8c06399d256d931c6c00c21d6
|
bd9a0b6700d2f54185d84a772a21605e67f7e077
| 2019-10-23T03:31:41Z |
python
| 2019-11-13T00:24:25Z |
changelogs/fragments/win_iis_website-restarted.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,828 |
win_iis_website: state=restarted does not actually restart the website
|
##### SUMMARY
win_iis_website module does not restart IIS website when using 'state=restarted' parameter
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_iis_website
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.7.9
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/amesh/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
same issue still present in newer versions according to source code
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS: Windows Server 2016
##### STEPS TO REPRODUCE
1. Have a Windows Server with IIS and 'dummy_website' created in it.
2. Run the following task in a playbook:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "Restart the IIS Site"
win_iis_website:
name: "dummy_website"
state: "restarted"
tags: "restart"
```
##### EXPECTED RESULTS
The website is restarted.
##### ACTUAL RESULTS
The website is not restarted, though Ansible claims the change occurred:
<!--- Paste verbatim command output between quotes -->
```paste below
changed: [iis1.example.com] => {
"changed": true,
"site": {
"ApplicationPool": "dummy_app_pool",
"Bindings": [
"10.101.101.101:80:"
],
"ID": 2,
"Name": "dummy_website",
"PhysicalPath": "C:\\inetpub\\vhosts\\dummy_website",
"State": "Started"
}
}
```
In the module source code we can see that the state=restarted condition is only handled in the spot where the Start-Website PowerShell cmdlet is invoked:
https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/windows/win_iis_website.ps1
```paste below
# Set run state
if (($state -eq 'stopped') -and ($site.State -eq 'Started'))
{
Stop-Website -Name $name -ErrorAction Stop
$result.changed = $true
}
if ((($state -eq 'started') -and ($site.State -eq 'Stopped')) -or ($state -eq 'restarted'))
{
Start-Website -Name $name -ErrorAction Stop
$result.changed = $true
}
```
I don't believe this is the intended behavior; the 'restarted' state should also be included in the condition that invokes Stop-Website.
|
https://github.com/ansible/ansible/issues/63828
|
https://github.com/ansible/ansible/pull/63829
|
95d613f3ab376af8c06399d256d931c6c00c21d6
|
bd9a0b6700d2f54185d84a772a21605e67f7e077
| 2019-10-23T03:31:41Z |
python
| 2019-11-13T00:24:25Z |
lib/ansible/modules/windows/win_iis_website.ps1
|
#!powershell
# Copyright: (c) 2015, Henrik Wallström <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#Requires -Module Ansible.ModuleUtils.Legacy
$ErrorActionPreference = "Stop"
$params = Parse-Args $args
$name = Get-AnsibleParam -obj $params -name "name" -type "str" -failifempty $true
$application_pool = Get-AnsibleParam -obj $params -name "application_pool" -type "str"
$physical_path = Get-AnsibleParam -obj $params -name "physical_path" -type "str"
$site_id = Get-AnsibleParam -obj $params -name "site_id" -type "str"
$state = Get-AnsibleParam -obj $params -name "state" -type "str" -validateset "absent","restarted","started","stopped"
# Binding Parameters
$bind_port = Get-AnsibleParam -obj $params -name "port" -type "int"
$bind_ip = Get-AnsibleParam -obj $params -name "ip" -type "str"
$bind_hostname = Get-AnsibleParam -obj $params -name "hostname" -type "str"
# Custom site Parameters from string where properties
# are separated by a pipe and property name/values by colon.
# Ex. "foo:1|bar:2"
$parameters = Get-AnsibleParam -obj $params -name "parameters" -type "str"
if($null -ne $parameters) {
$parameters = @($parameters -split '\|' | ForEach-Object {
return ,($_ -split "\:", 2);
})
}
# Ensure WebAdministration module is loaded
if ($null -eq (Get-Module "WebAdministration" -ErrorAction SilentlyContinue)) {
Import-Module WebAdministration
}
# Result
$result = @{
site = @{}
changed = $false
}
# Site info
$site = Get-Website | Where-Object { $_.Name -eq $name }
Try {
# Add site
If(($state -ne 'absent') -and (-not $site)) {
If (-not $physical_path) {
Fail-Json -obj $result -message "missing required arguments: physical_path"
}
ElseIf (-not (Test-Path $physical_path)) {
Fail-Json -obj $result -message "specified folder must already exist: physical_path"
}
$site_parameters = @{
Name = $name
PhysicalPath = $physical_path
}
If ($application_pool) {
$site_parameters.ApplicationPool = $application_pool
}
If ($site_id) {
$site_parameters.ID = $site_id
}
If ($bind_port) {
$site_parameters.Port = $bind_port
}
If ($bind_ip) {
$site_parameters.IPAddress = $bind_ip
}
If ($bind_hostname) {
$site_parameters.HostHeader = $bind_hostname
}
# Fix for error "New-Item : Index was outside the bounds of the array."
# This is a bug in the New-WebSite cmdlet. Apparently there must be at least one site configured in IIS, otherwise New-WebSite crashes.
# For more details, see http://stackoverflow.com/questions/3573889/ps-c-new-website-blah-throws-index-was-outside-the-bounds-of-the-array
$sites_list = get-childitem -Path IIS:\sites
if ($null -eq $sites_list) {
if ($site_id) {
$site_parameters.ID = $site_id
} else {
$site_parameters.ID = 1
}
}
$site = New-Website @site_parameters -Force
$result.changed = $true
}
# Remove site
If ($state -eq 'absent' -and $site) {
$site = Remove-Website -Name $name
$result.changed = $true
}
$site = Get-Website | Where-Object { $_.Name -eq $name }
If($site) {
# Change Physical Path if needed
if($physical_path) {
If (-not (Test-Path $physical_path)) {
Fail-Json -obj $result -message "specified folder must already exist: physical_path"
}
$folder = Get-Item $physical_path
If($folder.FullName -ne $site.PhysicalPath) {
Set-ItemProperty "IIS:\Sites\$($site.Name)" -name physicalPath -value $folder.FullName
$result.changed = $true
}
}
# Change Application Pool if needed
if($application_pool) {
If($application_pool -ne $site.applicationPool) {
Set-ItemProperty "IIS:\Sites\$($site.Name)" -name applicationPool -value $application_pool
$result.changed = $true
}
}
# Set properties
if($parameters) {
$parameters | ForEach-Object {
$property_value = Get-ItemProperty "IIS:\Sites\$($site.Name)" $_[0]
switch ($property_value.GetType().Name)
{
"ConfigurationAttribute" { $parameter_value = $property_value.value }
"String" { $parameter_value = $property_value }
}
if((-not $parameter_value) -or ($parameter_value) -ne $_[1]) {
Set-ItemProperty -LiteralPath "IIS:\Sites\$($site.Name)" $_[0] $_[1]
$result.changed = $true
}
}
}
# Set run state
if (($state -eq 'stopped') -and ($site.State -eq 'Started'))
{
Stop-Website -Name $name -ErrorAction Stop
$result.changed = $true
}
if ((($state -eq 'started') -and ($site.State -eq 'Stopped')) -or ($state -eq 'restarted'))
{
Start-Website -Name $name -ErrorAction Stop
$result.changed = $true
}
}
}
Catch
{
Fail-Json -obj $result -message $_.Exception.Message
}
if ($state -ne 'absent')
{
$site = Get-Website | Where-Object { $_.Name -eq $name }
}
if ($site)
{
$result.site = @{
Name = $site.Name
ID = $site.ID
State = $site.State
PhysicalPath = $site.PhysicalPath
ApplicationPool = $site.applicationPool
Bindings = @($site.Bindings.Collection | ForEach-Object { $_.BindingInformation })
}
}
Exit-Json -obj $result
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,453 |
eos_lacp_interfaces: lacp port-priority not set when state is replaced
|
##### SUMMARY
When state=replaced, 'no lacp port-priority' is appended to the generated commands. Hence, lacp port-priority never gets set for the interface.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
eos_lacp_interfaces.py
##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/gosriniv/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/gosriniv/ansible/lib/ansible
executable location = /home/gosriniv/ansible/bin/ansible
python version = 3.6.5 (default, Sep 4 2019, 12:23:33) [GCC 9.0.1 20190312 (Red Hat 9.0.1-0.10)]
```
##### OS / ENVIRONMENT
arista eos
##### STEPS TO REPRODUCE
```
eos_lacp_interfaces:
config:
- name: Ethernet2
port_priority: 55
rate: fast
state: replaced
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
'no lacp port-priority' should not be part of commands
##### ACTUAL RESULTS
```
changed: [10.8.38.32] => {
"after": [
{
"name": "Ethernet2",
"rate": "fast"
}
],
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"before": [
{
"name": "Ethernet2",
"port_priority": 45,
"rate": "fast"
}
],
"changed": true,
"commands": [
"interface Ethernet2",
"lacp port-priority 55",
"no lacp port-priority"
],
"invocation": {
"module_args": {
"config": [
{
"name": "Ethernet2",
"port_priority": 55,
"rate": "fast"
}
],
"state": "replaced"
}
}
}
```
|
https://github.com/ansible/ansible/issues/64453
|
https://github.com/ansible/ansible/pull/64530
|
83927c3437ae54f5e07d16230e8d096aed2ef034
|
143bafec9a506aff8f42ca573c7006a8c5549e12
| 2019-11-05T16:37:00Z |
python
| 2019-11-13T14:31:21Z |
lib/ansible/module_utils/network/eos/config/lacp_interfaces/lacp_interfaces.py
|
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""
The eos_lacp_interfaces class
It is in this file where the current configuration (as dict)
is compared to the provided configuration (as dict) and the command set
necessary to bring the current configuration to its desired end-state is
created
"""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.module_utils.network.common.cfg.base import ConfigBase
from ansible.module_utils.network.common.utils import to_list, dict_diff, param_list_to_dict
from ansible.module_utils.network.eos.facts.facts import Facts
from ansible.module_utils.network.eos.utils.utils import normalize_interface
class Lacp_interfaces(ConfigBase):
"""
The eos_lacp_interfaces class
"""
gather_subset = [
'!all',
'!min',
]
gather_network_resources = [
'lacp_interfaces',
]
def get_lacp_interfaces_facts(self):
""" Get the 'facts' (the current configuration)
:rtype: A dictionary
:returns: The current configuration as a dictionary
"""
facts, _warnings = Facts(self._module).get_facts(self.gather_subset, self.gather_network_resources)
lacp_interfaces_facts = facts['ansible_network_resources'].get('lacp_interfaces')
if not lacp_interfaces_facts:
return []
return lacp_interfaces_facts
def execute_module(self):
""" Execute the module
:rtype: A dictionary
:returns: The result from module execution
"""
result = {'changed': False}
warnings = list()
commands = list()
existing_lacp_interfaces_facts = self.get_lacp_interfaces_facts()
commands.extend(self.set_config(existing_lacp_interfaces_facts))
if commands:
if not self._module.check_mode:
self._connection.edit_config(commands)
result['changed'] = True
result['commands'] = commands
changed_lacp_interfaces_facts = self.get_lacp_interfaces_facts()
result['before'] = existing_lacp_interfaces_facts
if result['changed']:
result['after'] = changed_lacp_interfaces_facts
result['warnings'] = warnings
return result
def set_config(self, existing_lacp_interfaces_facts):
""" Collect the configuration from the args passed to the module,
collect the current configuration (as a dict from facts)
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
want = self._module.params['config']
have = existing_lacp_interfaces_facts
resp = self.set_state(want, have)
return to_list(resp)
def set_state(self, want, have):
""" Select the appropriate function based on the state provided
:param want: the desired configuration as a dictionary
:param have: the current configuration as a dictionary
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
state = self._module.params['state']
want = param_list_to_dict(want)
have = param_list_to_dict(have)
if state == 'overridden':
commands = self._state_overridden(want, have)
elif state == 'deleted':
commands = self._state_deleted(want, have)
elif state == 'merged':
commands = self._state_merged(want, have)
elif state == 'replaced':
commands = self._state_replaced(want, have)
return commands
@staticmethod
def _state_replaced(want, have):
""" The command generator when state is replaced
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
commands = []
for key, desired in want.items():
interface_name = normalize_interface(key)
if interface_name in have:
extant = have[interface_name]
else:
extant = dict()
add_config = dict_diff(extant, desired)
del_config = dict_diff(desired, extant)
commands.extend(generate_commands(key, add_config, del_config))
return commands
@staticmethod
def _state_overridden(want, have):
""" The command generator when state is overridden
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
commands = []
for key, extant in have.items():
if key in want:
desired = want[key]
else:
desired = dict()
add_config = dict_diff(extant, desired)
del_config = dict_diff(desired, extant)
commands.extend(generate_commands(key, add_config, del_config))
return commands
@staticmethod
def _state_merged(want, have):
""" The command generator when state is merged
:rtype: A list
:returns: the commands necessary to merge the provided into
the current configuration
"""
commands = []
for key, desired in want.items():
interface_name = normalize_interface(key)
if interface_name in have:
extant = have[interface_name]
else:
extant = dict()
add_config = dict_diff(extant, desired)
commands.extend(generate_commands(key, add_config, {}))
return commands
@staticmethod
def _state_deleted(want, have):
""" The command generator when state is deleted
:rtype: A list
:returns: the commands necessary to remove the current configuration
of the provided objects
"""
commands = []
for key in want:
desired = dict()
if key in have:
extant = have[key]
else:
continue
del_config = dict_diff(desired, extant)
commands.extend(generate_commands(key, {}, del_config))
return commands
def generate_commands(interface, to_set, to_remove):
commands = []
for key, value in to_set.items():
if value is None:
continue
commands.append("lacp {0} {1}".format(key.replace("_", "-"), value))
for key in to_remove.keys():
commands.append("no lacp {0}".format(key.replace("_", "-")))
if commands:
commands.insert(0, "interface {0}".format(interface))
return commands
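In `_state_replaced` and `_state_overridden` above, an attribute whose value merely changes ends up in both `add_config` and `del_config`, so `generate_commands` emits `lacp port-priority 55` immediately followed by `no lacp port-priority`, which is exactly the bug in issue 64453. A minimal sketch of one fix, skipping negation for attributes that are also being set (the merged change in PR 64530 may differ):
```python
def generate_commands(interface, to_set, to_remove):
    commands = []
    for key, value in to_set.items():
        if value is None:
            continue
        commands.append("lacp {0} {1}".format(key.replace("_", "-"), value))
    for key in to_remove:
        # An attribute with a new value is already handled by the loop above;
        # negating it afterwards would undo the value we just set.
        if to_set.get(key) is not None:
            continue
        commands.append("no lacp {0}".format(key.replace("_", "-")))
    if commands:
        commands.insert(0, "interface {0}".format(interface))
    return commands
```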
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,453 |
eos_lacp_interfaces: lacp port-priority not set when state is replaced
|
##### SUMMARY
When state=replaced, 'no lacp port-priority' is appended to the generated commands. Hence, lacp port-priority never gets set for the interface.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
eos_lacp_interfaces.py
##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/gosriniv/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/gosriniv/ansible/lib/ansible
executable location = /home/gosriniv/ansible/bin/ansible
python version = 3.6.5 (default, Sep 4 2019, 12:23:33) [GCC 9.0.1 20190312 (Red Hat 9.0.1-0.10)]
```
##### OS / ENVIRONMENT
arista eos
##### STEPS TO REPRODUCE
```
eos_lacp_interfaces:
config:
- name: Ethernet2
port_priority: 55
rate: fast
state: replaced
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
'no lacp port-priority' should not be part of commands
##### ACTUAL RESULTS
```
changed: [10.8.38.32] => {
"after": [
{
"name": "Ethernet2",
"rate": "fast"
}
],
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"before": [
{
"name": "Ethernet2",
"port_priority": 45,
"rate": "fast"
}
],
"changed": true,
"commands": [
"interface Ethernet2",
"lacp port-priority 55",
"no lacp port-priority"
],
"invocation": {
"module_args": {
"config": [
{
"name": "Ethernet2",
"port_priority": 55,
"rate": "fast"
}
],
"state": "replaced"
}
}
}
```
|
https://github.com/ansible/ansible/issues/64453
|
https://github.com/ansible/ansible/pull/64530
|
83927c3437ae54f5e07d16230e8d096aed2ef034
|
143bafec9a506aff8f42ca573c7006a8c5549e12
| 2019-11-05T16:37:00Z |
python
| 2019-11-13T14:31:21Z |
test/integration/targets/eos_lacp_interfaces/tests/cli/replaced.yaml
|
---
- include_tasks: reset_config.yml
- set_fact:
config:
- name: Ethernet1
rate: fast
other_config:
- name: Ethernet2
rate: fast
- eos_facts:
gather_network_resources: lacp_interfaces
become: yes
- name: Replace device configuration of listed LACP interfaces with provided configuration
eos_lacp_interfaces:
config: "{{ config }}"
state: replaced
register: result
become: yes
- assert:
that:
- "ansible_facts.network_resources.lacp_interfaces|symmetric_difference(result.before) == []"
- eos_facts:
gather_network_resources: lacp_interfaces
become: yes
- assert:
that:
- "ansible_facts.network_resources.lacp_interfaces|symmetric_difference(result.after) == []"
- set_fact:
expected_config: "{{ config }} + {{ other_config }}"
- assert:
that:
- "expected_config|symmetric_difference(ansible_facts.network_resources.lacp_interfaces) == []"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,228 |
ansible-galaxy collection build includes unrelated files in the result
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The "source" folder of my collection contains several files that are not directly related to the collection itself. E.g. a `.gitignore`, a `.github` folder, a `Makefile` etc.
When using `ansible-galaxy collection build <folder>`, these files are included in the generated tarball. This is especially hurtful when the source folder also contains a Python venv that I use for developing.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/egolov/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/egolov/Devel/ansible/ansible/lib/ansible
executable location = /home/egolov/Devel/ansible/ansible/bin/ansible
python version = 2.7.15 (default, Oct 15 2018, 15:26:09) [GCC 8.2.1 20180801 (Red Hat 8.2.1-2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
empty
```
##### OS / ENVIRONMENT
Fedora
##### STEPS TO REPRODUCE
Build a collection :)
##### EXPECTED RESULTS
I'd expect only collection-relevant files/folders (those that are listed in the spec/docs) to be included.
##### ADDITIONAL INFORMATION
Alternatively, an ignore file as suggested by @bcoca in https://github.com/ansible/ansible/pull/59121/files#r304554334 could be a fix for this.
|
https://github.com/ansible/ansible/issues/59228
|
https://github.com/ansible/ansible/pull/64688
|
bf190606835d67998232140541bf848a51510c5c
|
f8f76628500052ad3521fbec16c073ae7f99d287
| 2019-07-18T07:57:44Z |
python
| 2019-11-13T19:02:58Z |
changelogs/fragments/ansible-galaxy-ignore.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,228 |
ansible-galaxy collection build includes unrelated files into the result
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The "source" folder of my collection contains several files that are not directly related to the collection itself. E.g. a `.gitignore`, a `.github` folder, a `Makefile` etc.
When using `ansible-galaxy collection build <folder>`, these files are included in the generated tarball. This is especially harmful when the source folder also contains a Python venv that I use for developing.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/egolov/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/egolov/Devel/ansible/ansible/lib/ansible
executable location = /home/egolov/Devel/ansible/ansible/bin/ansible
python version = 2.7.15 (default, Oct 15 2018, 15:26:09) [GCC 8.2.1 20180801 (Red Hat 8.2.1-2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
empty
```
##### OS / ENVIRONMENT
Fedora
##### STEPS TO REPRODUCE
Build a collection :)
##### EXPECTED RESULTS
I'd expect only collection-relevant files/folders (those that are listed in the spec/docs) to be included.
##### ADDITIONAL INFORMATION
Alternatively, an ignore file as suggested by @bcoca in https://github.com/ansible/ansible/pull/59121/files#r304554334 could be a fix for this.
|
https://github.com/ansible/ansible/issues/59228
|
https://github.com/ansible/ansible/pull/64688
|
bf190606835d67998232140541bf848a51510c5c
|
f8f76628500052ad3521fbec16c073ae7f99d287
| 2019-07-18T07:57:44Z |
python
| 2019-11-13T19:02:58Z |
docs/docsite/_static/ansible.css
|
/*! minified with http://css-minify.online-domain-tools.com/ - all comments
* must have ! to preserve during minifying with that tool *//*! Fix for read the docs theme:
* https://rackerlabs.github.io/docs-rackspace/tools/rtd-tables.html
*//*! override table width restrictions */@media screen and (min-width:767px){/*! If we ever publish to read the docs, we need to use !important for these
* two styles as read the docs itself loads their theme in a way that we
* can't otherwise override it.
*/.wy-table-responsive table td{white-space:normal}.wy-table-responsive{overflow:visible}}/*!
* We use the class documentation-table for attribute tables where the first
* column is the name of an attribute and the second column is the description.
*//*! These tables look like this:
*
* Attribute Name Description
* -------------- -----------
* **NAME** This is a multi-line description
* str/required that can span multiple lines
*
* With multiple paragraphs
* -------------- -----------
*
* **NAME** is given the class .value-name
* str is given the class .value-type
* / is given the class .value-separator
* required is given the class .value-required
*//*! The extra .rst-content is so this will override rtd theme */.rst-content table.documentation-table td{vertical-align:top}table.documentation-table td:first-child{white-space:nowrap;vertical-align:top}table.documentation-table td:first-child p:first-child{font-weight:700;display:inline}/*! This is now redundant with above position-based styling *//*!
table.documentation-table .value-name {
font-weight: bold;
display: inline;
}
*/table.documentation-table .value-type{font-size:x-small;color:purple;display:inline}table.documentation-table .value-separator{font-size:x-small;display:inline}table.documentation-table .value-required{font-size:x-small;color:red;display:inline}/*! Ansible-specific CSS pulled out of rtd theme for 2.9 */.DocSiteProduct-header{flex:1;-webkit-flex:1;padding:10px 20px 20px;display:flex;display:-webkit-flex;flex-direction:column;-webkit-flex-direction:column;align-items:center;-webkit-align-items:center;justify-content:flex-start;-webkit-justify-content:flex-start;margin-left:20px;margin-right:20px;text-decoration:none;font-weight:400;font-family:'Open Sans',sans-serif}.DocSiteProduct-header:active,.DocSiteProduct-header:focus,.DocSiteProduct-header:visited{color:#fff}.DocSiteProduct-header--core{font-size:25px;background-color:#5bbdbf;border:2px solid #5bbdbf;border-top-left-radius:4px;border-top-right-radius:4px;color:#fff;padding-left:2px;margin-left:2px}.DocSiteProduct-headerAlign{width:100%}.DocSiteProduct-logo{width:60px;height:60px;margin-bottom:-9px}.DocSiteProduct-logoText{margin-top:6px;font-size:25px;text-align:left}.DocSiteProduct-CheckVersionPara{margin-left:2px;padding-bottom:4px;margin-right:2px;margin-bottom:10px}/*! Ansible color scheme */.wy-nav-top,.wy-side-nav-search{background-color:#5bbdbf}.wy-menu-vertical header,.wy-menu-vertical p.caption{color:#5bbdbf}.wy-menu-vertical a{padding:0}.wy-menu-vertical a.reference.internal{padding:.4045em 1.618em}/*! Override sphinx rtd theme max-with of 800px */.wy-nav-content{max-width:100%}/*! Override sphinx_rtd_theme - keeps left-nav from overwriting Documentation title */.wy-nav-side{top:45px}/*! Ansible - changed absolute to relative to remove extraneous side scroll bar */.wy-grid-for-nav{position:relative}/*! Ansible - remove so highlight indenting is correct */.rst-content .highlighted{padding:0}.DocSiteBanner{display:flex;display:-webkit-flex;justify-content:center;-webkit-justify-content:center;flex-wrap:wrap;-webkit-flex-wrap:wrap;margin-bottom:25px}.DocSiteBanner-imgWrapper{max-width:100%}td,th{min-width:100px}table{overflow-x:auto;display:block;max-width:100%}.documentation-table td.elbow-placeholder{border-left:1px solid #000;border-top:0;width:30px;min-width:30px}.documentation-table td,.documentation-table th{padding:4px;border-left:1px solid #000;border-top:1px solid #000}.documentation-table{border-right:1px solid #000;border-bottom:1px solid #000}@media print{*{background:0 0!important;color:#000!important;text-shadow:none!important;filter:none!important;-ms-filter:none!important}#nav,a,a:visited{text-decoration:underline}a[href]:after{content:" (" attr(href) ")"}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}/*! Don't show links for images, or javascript/internal links */pre,blockquote{border:0 solid #999;page-break-inside:avoid}thead{display:table-header-group}/*! 
h5bp.com/t */tr,img{page-break-inside:avoid}img{max-width:100%!important}@page{margin:.5cm}h2,h3,p{orphans:3;widows:3}h2,h3{page-break-after:avoid}#google_image_div,.DocSiteBanner{display:none!important}}#sideBanner,.DocSite-globalNav{display:none}.DocSite-sideNav{display:block;margin-bottom:40px}.DocSite-nav{display:none}.ansibleNav{background:#000;padding:0 20px;width:auto;border-bottom:1px solid #444;font-size:14px;z-index:1}.ansibleNav ul{list-style:none;padding-left:0;margin-top:0}.ansibleNav ul li{padding:7px 0;border-bottom:1px solid #444}.ansibleNav ul li:last-child{border:none}.ansibleNav ul li a{color:#fff;text-decoration:none;text-transform:uppercase;padding:6px 0}.ansibleNav ul li a:hover{color:#5bbdbf;background:0 0}@media screen and (min-width:768px){.DocSite-globalNav{display:block;position:fixed}#sideBanner{display:block}.DocSite-sideNav{display:none}.DocSite-nav{flex:initial;-webkit-flex:initial;display:flex;display:-webkit-flex;flex-direction:row;-webkit-flex-direction:row;justify-content:flex-start;-webkit-justify-content:flex-start;padding:15px;background-color:#000;text-decoration:none;font-family:'Open Sans',sans-serif}.DocSiteNav-logo{width:28px;height:28px;margin-right:8px;margin-top:-6px;position:fixed;z-index:1}.DocSiteNav-title{color:#fff;font-size:20px;position:fixed;margin-left:40px;margin-top:-4px;z-index:1}.ansibleNav{height:45px;width:100%;font-size:13px;padding:0 60px 0 0}.ansibleNav ul{float:right;display:flex;flex-wrap:nowrap;margin-top:13px}.ansibleNav ul li{padding:0;border-bottom:none}.ansibleNav ul li a{color:#fff;text-decoration:none;text-transform:uppercase;padding:8px 13px}}@media screen and (min-width:768px){#sideBanner,.DocSite-globalNav{display:block}.DocSite-sideNav{display:none}.DocSite-nav{flex:initial;-webkit-flex:initial;display:flex;display:-webkit-flex;flex-direction:row;-webkit-flex-direction:row;justify-content:flex-start;-webkit-justify-content:flex-start;padding:15px;background-color:#000;text-decoration:none;font-family:'Open Sans',sans-serif}.DocSiteNav-logo{width:28px;height:28px;margin-right:8px;margin-top:-6px;position:fixed}.DocSiteNav-title{color:#fff;font-size:20px;position:fixed;margin-left:40px;margin-top:-4px}.ansibleNav{height:45px;font-size:13px;padding:0 60px 0 0}.ansibleNav ul{float:right;display:flex;flex-wrap:nowrap;margin-top:13px}.ansibleNav ul li{padding:0;border-bottom:none}.ansibleNav ul li a{color:#fff;text-decoration:none;text-transform:uppercase;padding:8px 13px}}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,228 |
ansible-galaxy collection build includes unrelated files into the result
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The "source" folder of my collection contains several files that are not directly related to the collection itself. E.g. a `.gitignore`, a `.github` folder, a `Makefile` etc.
When using `ansible-galaxy collection build <folder>`, these files are included in the generated tarball. This is especially harmful when the source folder also contains a Python venv that I use for developing.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/egolov/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/egolov/Devel/ansible/ansible/lib/ansible
executable location = /home/egolov/Devel/ansible/ansible/bin/ansible
python version = 2.7.15 (default, Oct 15 2018, 15:26:09) [GCC 8.2.1 20180801 (Red Hat 8.2.1-2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
empty
```
##### OS / ENVIRONMENT
Fedora
##### STEPS TO REPRODUCE
Build a collection :)
##### EXPECTED RESULTS
I'd expect only collection-relevant files/folders (those that are listed in the spec/docs) to be included.
##### ADDITIONAL INFORMATION
Alternatively, an ignore file as suggested by @bcoca in https://github.com/ansible/ansible/pull/59121/files#r304554334 could be a fix for this.
|
https://github.com/ansible/ansible/issues/59228
|
https://github.com/ansible/ansible/pull/64688
|
bf190606835d67998232140541bf848a51510c5c
|
f8f76628500052ad3521fbec16c073ae7f99d287
| 2019-07-18T07:57:44Z |
python
| 2019-11-13T19:02:58Z |
docs/docsite/rst/dev_guide/developing_collections.rst
|
.. _developing_collections:
**********************
Developing collections
**********************
Collections are a distribution format for Ansible content. You can use collections to package and distribute playbooks, roles, modules, and plugins.
You can publish and use collections through `Ansible Galaxy <https://galaxy.ansible.com>`_.
.. contents::
:local:
:depth: 2
.. _collection_structure:
Collection structure
====================
Collections follow a simple data structure. None of the directories are required unless you have specific content that belongs in one of them. A collection does require a ``galaxy.yml`` file at the root level of the collection. This file contains all of the metadata that Galaxy
and other tools need in order to package, build and publish the collection::
collection/
├── docs/
├── galaxy.yml
├── plugins/
│ ├── modules/
│ │ └── module1.py
│ ├── inventory/
│ └── .../
├── README.md
├── roles/
│ ├── role1/
│ ├── role2/
│ └── .../
├── playbooks/
│ ├── files/
│ ├── vars/
│ ├── templates/
│ └── tasks/
└── tests/
.. note::
* Ansible only accepts ``.yml`` extensions for :file:`galaxy.yml`, and ``.md`` for the :file:`README` file and any files in the :file:`/docs` folder.
* See the `draft collection <https://github.com/bcoca/collection>`_ for an example of a full collection structure.
* Not all directories are currently in use. Those are placeholders for future features.
.. _galaxy_yml:
galaxy.yml
----------
A collection must have a ``galaxy.yml`` file that contains the necessary information to build a collection artifact.
See :ref:`collections_galaxy_meta` for details.
.. _collections_doc_dir:
docs directory
---------------
Put general documentation for the collection here. Keep the specific documentation for plugins and modules embedded as Python docstrings. Use the ``docs`` folder to describe how to use the roles and plugins the collection provides, role requirements, and so on. Use markdown and do not add subfolders.
Use ``ansible-doc`` to view documentation for plugins inside a collection:
.. code-block:: bash
ansible-doc -t lookup my_namespace.my_collection.lookup1
The ``ansible-doc`` command requires the fully qualified collection name (FQCN) to display specific plugin documentation. In this example, ``my_namespace`` is the namespace and ``my_collection`` is the collection name within that namespace.
.. note:: The Ansible collection namespace is defined in the ``galaxy.yml`` file and is not equivalent to the GitHub repository name.
.. _collections_plugin_dir:
plugins directory
------------------
Add a subdirectory here for each plugin type, including ``module_utils``, which is usable not only by modules but by most plugins via their FQCN. This is a way to distribute modules, lookups, filters, and so on, without having to import a role in every play.
Vars plugins are unsupported in collections. Cache plugins may be used in collections for fact caching, but are not supported for inventory plugins.
module_utils
^^^^^^^^^^^^
When coding with ``module_utils`` in a collection, the Python ``import`` statement needs to take into account the FQCN along with the ``ansible_collections`` convention. The resulting Python import will look like ``from ansible_collections.{namespace}.{collection}.plugins.module_utils.{util} import {something}``
The following example snippets show a Python and PowerShell module using both default Ansible ``module_utils`` and
those provided by a collection. In this example the namespace is ``ansible_example``, the collection is ``community``.
In the Python example the ``module_util`` in question is called ``qradar`` such that the FQCN is
``ansible_example.community.plugins.module_utils.qradar``:
.. code-block:: python
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_text
from ansible.module_utils.six.moves.urllib.parse import urlencode, quote_plus
from ansible.module_utils.six.moves.urllib.error import HTTPError
from ansible_collections.ansible_example.community.plugins.module_utils.qradar import QRadarRequest
argspec = dict(
name=dict(required=True, type='str'),
state=dict(choices=['present', 'absent'], required=True),
)
module = AnsibleModule(
argument_spec=argspec,
supports_check_mode=True
)
qradar_request = QRadarRequest(
module,
headers={"Content-Type": "application/json"},
not_rest_data_keys=['state']
)
Note that importing something from an ``__init__.py`` file requires using the file name:
.. code-block:: python
from ansible_collections.namespace.collection_name.plugins.callback.__init__ import CustomBaseClass
In the PowerShell example the ``module_util`` in question is called ``hyperv`` such that the FQCN is
``ansible_example.community.plugins.module_utils.hyperv``:
.. code-block:: powershell
#!powershell
#AnsibleRequires -CSharpUtil Ansible.Basic
#AnsibleRequires -PowerShell ansible_collections.ansible_example.community.plugins.module_utils.hyperv
$spec = @{
name = @{ required = $true; type = "str" }
state = @{ required = $true; choices = @("present", "absent") }
}
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)
Invoke-HyperVFunction -Name $module.Params.name
$module.ExitJson()
.. _collections_roles_dir:
roles directory
----------------
Collection roles are mostly the same as existing roles, but with a couple of limitations:
- Role names are now limited to lowercase alphanumeric characters plus ``_``, and must start with an alphabetic character.
- Roles in a collection cannot contain plugins any more. Plugins must live in the collection ``plugins`` directory tree. Each plugin is accessible to all roles in the collection.
The directory name of the role is used as the role name. Therefore, the directory name must comply with the
above role name rules.
The collection import into Galaxy will fail if a role name does not comply with these rules.
You can migrate 'traditional roles' into a collection but they must follow the rules above. You may need to rename roles if they don't conform. You will have to move or link any role-based plugins to the collection specific directories.
.. note::
For roles imported into Galaxy directly from a GitHub repository, setting the ``role_name`` value in the role's
metadata overrides the role name used by Galaxy. For collections, that value is ignored. When importing a
collection, Galaxy uses the role directory as the name of the role and ignores the ``role_name`` metadata value.
playbooks directory
--------------------
TBD.
tests directory
----------------
TBD. Expect tests for the collection itself to reside here.
.. _creating_collections:
Creating collections
======================
To create a collection:
#. Initialize a collection with :ref:`ansible-galaxy collection init<creating_collections_skeleton>` to create the skeleton directory structure.
#. Add your content to the collection.
#. Build the collection into a collection artifact with :ref:`ansible-galaxy collection build<building_collections>`.
#. Publish the collection artifact to Galaxy with :ref:`ansible-galaxy collection publish<publishing_collections>`.
A user can then install your collection on their systems.
Currently the ``ansible-galaxy collection`` command implements the following subcommands:
* ``init``: Create a basic collection skeleton based on the default template included with Ansible or your own template.
* ``build``: Create a collection artifact that can be uploaded to Galaxy or your own repository.
* ``publish``: Publish a built collection artifact to Galaxy.
* ``install``: Install one or more collections.
To learn more about the ``ansible-galaxy`` cli tool, see the :ref:`ansible-galaxy` man page.
.. _creating_collections_skeleton:
Creating a collection skeleton
------------------------------
To start a new collection:
.. code-block:: bash
collection_dir#> ansible-galaxy collection init my_namespace.my_collection
Then you can populate the directories with the content you want inside the collection. See
https://github.com/bcoca/collection to get a better idea of what you can place inside a collection.
.. _building_collections:
Building collections
--------------------
To build a collection, run ``ansible-galaxy collection build`` from inside the root directory of the collection:
.. code-block:: bash
collection_dir#> ansible-galaxy collection build
This creates a tarball of the built collection in the current directory, which can be uploaded to Galaxy::
my_collection/
├── galaxy.yml
├── ...
├── my_namespace-my_collection-1.0.0.tar.gz
└── ...
.. note::
* Certain files and folders are excluded when building the collection artifact. This is not currently configurable and is a work in progress, so the collection artifact may contain files you would not wish to distribute.
* If you used the now-deprecated ``Mazer`` tool for any of your collections, delete any and all files it added to your :file:`releases/` directory before you build your collection with ``ansible-galaxy``.
* You must also delete the :file:`tests/output` directory if you have been testing with ``ansible-test``.
* The current Galaxy maximum tarball size is 2 MB.
This tarball is mainly intended for uploading to Galaxy as a distribution method, but you can also use it directly to install the collection on target systems.
.. _trying_collection_locally:
Trying collection locally
-------------------------
You can try your collection locally by installing it from the tarball. The following will enable an adjacent playbook to
access the collection:
.. code-block:: bash
ansible-galaxy collection install my_namespace-my_collection-1.0.0.tar.gz -p ./collections
You should use one of the values configured in :ref:`COLLECTIONS_PATHS` for your path. This is also where Ansible itself will
expect to find collections when attempting to use them. If you don't specify a path value, ``ansible-galaxy collection install``
installs the collection in the first path defined in :ref:`COLLECTIONS_PATHS`, which by default is ``~/.ansible/collections``.
Next, try using the local collection inside a playbook. For examples and more details see :ref:`Using collections <using_collections>`
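As a quick check, a minimal playbook along these lines should resolve content from the installed collection (``module1`` is a placeholder matching the example collection structure above, not a real module):

.. code-block:: yaml

   - hosts: localhost
     gather_facts: no
     tasks:
       - name: Use a module from the locally installed collection
         my_namespace.my_collection.module1: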
.. _publishing_collections:
Publishing collections
----------------------
You can publish collections to Galaxy using the ``ansible-galaxy collection publish`` command or the Galaxy UI itself.
.. note:: Once you upload a version of a collection, you cannot delete or modify that version. Ensure that everything looks okay before you upload it.
.. _upload_collection_ansible_galaxy:
Upload using ansible-galaxy
^^^^^^^^^^^^^^^^^^^^^^^^^^^
To upload the collection artifact with the ``ansible-galaxy`` command:
.. code-block:: bash
ansible-galaxy collection publish path/to/my_namespace-my_collection-1.0.0.tar.gz --api-key=SECRET
The above command triggers an import process, just as if you uploaded the collection through the Galaxy website.
The command waits until the import process completes before reporting the status back. If you wish to continue
without waiting for the import result, use the ``--no-wait`` argument and manually look at the import progress in your
`My Imports <https://galaxy.ansible.com/my-imports/>`_ page.
The API key is a secret token used by Ansible Galaxy to protect your content. You can find your API key at your
`Galaxy profile preferences <https://galaxy.ansible.com/me/preferences>`_ page.
.. _upload_collection_galaxy:
Upload a collection from the Galaxy website
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To upload your collection artifact directly on Galaxy:
#. Go to the `My Content <https://galaxy.ansible.com/my-content/namespaces>`_ page, and click the **Add Content** button on one of your namespaces.
#. From the **Add Content** dialogue, click **Upload New Collection**, and select the collection archive file from your local filesystem.
When uploading collections it doesn't matter which namespace you select. The collection will be uploaded to the
namespace specified in the collection metadata in the ``galaxy.yml`` file. If you're not an owner of the
namespace, the upload request will fail.
Once Galaxy uploads and accepts a collection, you will be redirected to the **My Imports** page, which displays output from the
import process, including any errors or warnings about the metadata and content contained in the collection.
.. _collection_versions:
Collection versions
-------------------
Once you upload a version of a collection, you cannot delete or modify that version. Ensure that everything looks okay before
uploading. The only way to change a collection is to release a new version. The latest version of a collection (by highest version number)
will be the version displayed everywhere in Galaxy; however, users will still be able to download older versions.
Collection versions use `Semantic Versioning <https://semver.org/>`_ for version numbers. Please read the official documentation for details and examples. In summary:
* Increment major (for example: x in `x.y.z`) version number for an incompatible API change.
* Increment minor (for example: y in `x.y.z`) version number for new functionality in a backwards compatible manner.
* Increment patch (for example: z in `x.y.z`) version number for backwards compatible bug fixes.
.. _migrate_to_collection:
Migrating Ansible content to a collection
=========================================
You can experiment with migrating existing modules into a collection using the `content_collector tool <https://github.com/ansible/content_collector>`_. The ``content_collector`` is a playbook that helps you migrate content from an Ansible distribution into a collection.
.. warning::
This tool is in active development and is provided only for experimentation and feedback at this point.
See the `content_collector README <https://github.com/ansible/content_collector>`_ for full details and usage guidelines.
.. seealso::
:ref:`collections`
Learn how to install and use collections.
:ref:`collections_galaxy_meta`
Understand the collections metadata structure.
:ref:`developing_modules_general`
Learn about how to write Ansible modules
`Mailing List <https://groups.google.com/group/ansible-devel>`_
The development mailing list
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,228 |
ansible-galaxy collection build includes unrelated files into the result
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The "source" folder of my collection contains several files that are not directly related to the collection itself. E.g. a `.gitignore`, a `.github` folder, a `Makefile` etc.
When using `ansible-galaxy collection build <folder>`, these files are included in the generated tarball. This is especially harmful when the source folder also contains a Python venv that I use for developing.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/egolov/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/egolov/Devel/ansible/ansible/lib/ansible
executable location = /home/egolov/Devel/ansible/ansible/bin/ansible
python version = 2.7.15 (default, Oct 15 2018, 15:26:09) [GCC 8.2.1 20180801 (Red Hat 8.2.1-2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
empty
```
##### OS / ENVIRONMENT
Fedora
##### STEPS TO REPRODUCE
Build a collection :)
##### EXPECTED RESULTS
I'd expect only collection-relevant files/folders (those that are listed in the spec/docs) to be included.
##### ADDITIONAL INFORMATION
Alternatively, an ignore file as suggested by @bcoca in https://github.com/ansible/ansible/pull/59121/files#r304554334 could be a fix for this.
|
https://github.com/ansible/ansible/issues/59228
|
https://github.com/ansible/ansible/pull/64688
|
bf190606835d67998232140541bf848a51510c5c
|
f8f76628500052ad3521fbec16c073ae7f99d287
| 2019-07-18T07:57:44Z |
python
| 2019-11-13T19:02:58Z |
docs/templates/collections_galaxy_meta.rst.j2
|
.. _collections_galaxy_meta:
************************************
Collection Galaxy metadata structure
************************************
A key component of an Ansible collection is the ``galaxy.yml`` file placed in the root directory of a collection. This
file contains the metadata of the collection that is used to generate a collection artifact.
Structure
=========
The ``galaxy.yml`` file must contain the following keys in valid YAML:
.. rst-class:: documentation-table
.. list-table::
:header-rows: 1
:widths: auto
* - Key
- Comment
{%- for entry in options %}
* - .. rst-class:: value-name
@{ entry.key }@ |br|
.. rst-class:: value-type
@{ entry.type | documented_type }@ |_|
{% if entry.get('required', False) -%}
.. rst-class:: value-separator
/ |_|
.. rst-class:: value-required
required
{%- endif %}
- {% for desc in entry.description -%}
@{ desc | trim | rst_ify }@
{% endfor -%}
{%- endfor %}
Examples
========
.. code-block:: yaml
namespace: "namespace_name"
name: "collection_name"
version: "1.0.12"
readme: "README.md"
authors:
- "Author1"
- "Author2 (https://author2.example.com)"
- "Author3 <[email protected]>"
dependencies:
"other_namespace.collection1": ">=1.0.0"
"other_namespace.collection2": ">=2.0.0,<3.0.0"
"anderson55.my_collection": "*" # note: "*" selects the highest version available
license:
- "MIT"
tags:
- demo
- collection
repository: "https://www.github.com/my_org/my_collection"
.. seealso::
:ref:`developing_collections`
Develop or modify a collection.
:ref:`developing_modules_general`
Learn about how to write Ansible modules
:ref:`collections`
Learn how to install and use collections.
`Mailing List <https://groups.google.com/group/ansible-devel>`_
The development mailing list
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,228 |
ansible-galaxy collection build includes unrelated files into the result
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The "source" folder of my collection contains several files that are not directly related to the collection itself. E.g. a `.gitignore`, a `.github` folder, a `Makefile` etc.
When using `ansible-galaxy collection build <folder>`, these files are included in the generated tarball. This is especially harmful when the source folder also contains a Python venv that I use for developing.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/egolov/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/egolov/Devel/ansible/ansible/lib/ansible
executable location = /home/egolov/Devel/ansible/ansible/bin/ansible
python version = 2.7.15 (default, Oct 15 2018, 15:26:09) [GCC 8.2.1 20180801 (Red Hat 8.2.1-2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
empty
```
##### OS / ENVIRONMENT
Fedora
##### STEPS TO REPRODUCE
Build a collection :)
##### EXPECTED RESULTS
I'd expect only collection-relevant files/folders (those that are listed in the spec/docs) to be included.
##### ADDITIONAL INFORMATION
Alternatively, an ignore file as suggested by @bcoca in https://github.com/ansible/ansible/pull/59121/files#r304554334 could be a fix for this.
|
https://github.com/ansible/ansible/issues/59228
|
https://github.com/ansible/ansible/pull/64688
|
bf190606835d67998232140541bf848a51510c5c
|
f8f76628500052ad3521fbec16c073ae7f99d287
| 2019-07-18T07:57:44Z |
python
| 2019-11-13T19:02:58Z |
lib/ansible/cli/galaxy.py
|
# Copyright: (c) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os.path
import re
import shutil
import textwrap
import time
import yaml
from jinja2 import BaseLoader, Environment, FileSystemLoader
from yaml.error import YAMLError
import ansible.constants as C
from ansible import context
from ansible.cli import CLI
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import build_collection, install_collections, publish_collection, \
validate_collection_name
from ansible.galaxy.login import GalaxyLogin
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken, NoTokenSentinel
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.playbook.role.requirement import RoleRequirement
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
class GalaxyCLI(CLI):
'''command to manage Ansible roles in shared repositories, the default of which is Ansible Galaxy *https://galaxy.ansible.com*.'''
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url")
def __init__(self, args):
# Inject role into sys.argv[1] as a backwards compatibility step
if len(args) > 1 and args[1] not in ['-h', '--help', '--version'] and 'role' not in args and 'collection' not in args:
# TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
# Remove this in Ansible 2.13 when we also remove -v as an option on the root parser for ansible-galaxy.
idx = 2 if args[1].startswith('-v') else 1
args.insert(idx, 'role')
self.api_servers = []
self.galaxy = None
super(GalaxyCLI, self).__init__(args)
def init_parser(self):
''' create an options parser for bin/ansible '''
super(GalaxyCLI, self).init_parser(
desc="Perform various Role and Collection related operations.",
)
# Common arguments that apply to more than 1 action
common = opt_help.argparse.ArgumentParser(add_help=False)
common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL')
common.add_argument('--api-key', dest='api_key',
help='The Ansible Galaxy API key which can be found at '
'https://galaxy.ansible.com/me/preferences. You can also use ansible-galaxy login to '
'retrieve this key or set the token for the GALAXY_SERVER_LIST entry.')
common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs',
default=C.GALAXY_IGNORE_CERTS, help='Ignore SSL certificate validation errors.')
opt_help.add_verbosity_options(common)
force = opt_help.argparse.ArgumentParser(add_help=False)
force.add_argument('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role or collection')
github = opt_help.argparse.ArgumentParser(add_help=False)
github.add_argument('github_user', help='GitHub username')
github.add_argument('github_repo', help='GitHub repository')
offline = opt_help.argparse.ArgumentParser(add_help=False)
offline.add_argument('--offline', dest='offline', default=False, action='store_true',
help="Don't query the galaxy API when creating roles")
default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '')
roles_path = opt_help.argparse.ArgumentParser(add_help=False)
roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True),
default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction,
help='The path to the directory containing your roles. The default is the first '
'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path)
# Add sub parser for the Galaxy role type (role or collection)
type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type')
type_parser.required = True
# Add sub parser for the Galaxy collection actions
collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.')
collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action')
collection_parser.required = True
self.add_init_options(collection_parser, parents=[common, force])
self.add_build_options(collection_parser, parents=[common, force])
self.add_publish_options(collection_parser, parents=[common])
self.add_install_options(collection_parser, parents=[common, force])
# Add sub parser for the Galaxy role actions
role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.')
role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action')
role_parser.required = True
self.add_init_options(role_parser, parents=[common, force, offline])
self.add_remove_options(role_parser, parents=[common, roles_path])
self.add_delete_options(role_parser, parents=[common, github])
self.add_list_options(role_parser, parents=[common, roles_path])
self.add_search_options(role_parser, parents=[common])
self.add_import_options(role_parser, parents=[common, github])
self.add_setup_options(role_parser, parents=[common, roles_path])
self.add_login_options(role_parser, parents=[common])
self.add_info_options(role_parser, parents=[common, roles_path, offline])
self.add_install_options(role_parser, parents=[common, force, roles_path])
def add_init_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
init_parser = parser.add_parser('init', parents=parents,
help='Initialize new {0} with the base structure of a '
'{0}.'.format(galaxy_type))
init_parser.set_defaults(func=self.execute_init)
init_parser.add_argument('--init-path', dest='init_path', default='./',
help='The path in which the skeleton {0} will be created. The default is the '
'current working directory.'.format(galaxy_type))
init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type),
default=C.GALAXY_ROLE_SKELETON,
help='The path to a {0} skeleton that the new {0} should be based '
'upon.'.format(galaxy_type))
obj_name_kwargs = {}
if galaxy_type == 'collection':
obj_name_kwargs['type'] = validate_collection_name
init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()),
**obj_name_kwargs)
if galaxy_type == 'role':
init_parser.add_argument('--type', dest='role_type', action='store', default='default',
help="Initialize using an alternate role type. Valid types include: 'container', "
"'apb' and 'network'.")
def add_remove_options(self, parser, parents=None):
remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.')
remove_parser.set_defaults(func=self.execute_remove)
remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+')
def add_delete_options(self, parser, parents=None):
delete_parser = parser.add_parser('delete', parents=parents,
help='Removes the role from Galaxy. It does not remove or alter the actual '
'GitHub repository.')
delete_parser.set_defaults(func=self.execute_delete)
def add_list_options(self, parser, parents=None):
list_parser = parser.add_parser('list', parents=parents,
help='Show the name and version of each role installed in the roles_path.')
list_parser.set_defaults(func=self.execute_list)
list_parser.add_argument('role', help='Role', nargs='?', metavar='role')
def add_search_options(self, parser, parents=None):
search_parser = parser.add_parser('search', parents=parents,
help='Search the Galaxy database by tags, platforms, author and multiple '
'keywords.')
search_parser.set_defaults(func=self.execute_search)
search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by')
search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by')
search_parser.add_argument('--author', dest='author', help='GitHub username')
search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*')
def add_import_options(self, parser, parents=None):
import_parser = parser.add_parser('import', parents=parents, help='Import a role')
import_parser.set_defaults(func=self.execute_import)
import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import results.")
import_parser.add_argument('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch '
'(usually master)')
import_parser.add_argument('--role-name', dest='role_name',
help='The name the role should have, if different than the repo name')
import_parser.add_argument('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_'
'user/github_repo.')
def add_setup_options(self, parser, parents=None):
setup_parser = parser.add_parser('setup', parents=parents,
help='Manage the integration between Galaxy and the given source.')
setup_parser.set_defaults(func=self.execute_setup)
setup_parser.add_argument('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see '
'ID values.')
setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False,
help='List all of your integrations.')
setup_parser.add_argument('source', help='Source')
setup_parser.add_argument('github_user', help='GitHub username')
setup_parser.add_argument('github_repo', help='GitHub repository')
setup_parser.add_argument('secret', help='Secret')
def add_login_options(self, parser, parents=None):
login_parser = parser.add_parser('login', parents=parents,
help="Login to api.github.com server in order to use ansible-galaxy role sub "
"command such as 'import', 'delete', 'publish', and 'setup'")
login_parser.set_defaults(func=self.execute_login)
login_parser.add_argument('--github-token', dest='token', default=None,
help='Identify with github token rather than username and password.')
def add_info_options(self, parser, parents=None):
info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.')
info_parser.set_defaults(func=self.execute_info)
info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]')
def add_install_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
args_kwargs = {}
if galaxy_type == 'collection':
args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \
'mutually exclusive with --requirements-file.'
ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \
'collection. This will not ignore dependency conflict errors.'
else:
args_kwargs['help'] = 'Role name, URL or tar file'
ignore_errors_help = 'Ignore errors and continue with the next specified role.'
install_parser = parser.add_parser('install', parents=parents,
help='Install {0}(s) from file(s), URL(s) or Ansible '
'Galaxy'.format(galaxy_type))
install_parser.set_defaults(func=self.execute_install)
install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs)
install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help=ignore_errors_help)
install_exclusive = install_parser.add_mutually_exclusive_group()
install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download {0}s listed as dependencies.".format(galaxy_type))
install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
help="Force overwriting an existing {0} and its "
"dependencies.".format(galaxy_type))
if galaxy_type == 'collection':
install_parser.add_argument('-p', '--collections-path', dest='collections_path',
default=C.COLLECTIONS_PATHS[0],
help='The path to the directory containing your collections.')
install_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be installed.')
else:
install_parser.add_argument('-r', '--role-file', dest='role_file',
help='A file containing a list of roles to be imported.')
install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
default=False,
help='Use tar instead of the scm archive option when packaging the role.')
def add_build_options(self, parser, parents=None):
build_parser = parser.add_parser('build', parents=parents,
help='Build an Ansible collection artifact that can be published to Ansible '
'Galaxy.')
build_parser.set_defaults(func=self.execute_build)
build_parser.add_argument('args', metavar='collection', nargs='*', default=('.',),
help='Path to the collection(s) directory to build. This should be the directory '
'that contains the galaxy.yml file. The default is the current working '
'directory.')
build_parser.add_argument('--output-path', dest='output_path', default='./',
help='The path in which the collection artifact will be created. The default is the current '
'working directory.')
def add_publish_options(self, parser, parents=None):
publish_parser = parser.add_parser('publish', parents=parents,
help='Publish a collection artifact to Ansible Galaxy.')
publish_parser.set_defaults(func=self.execute_publish)
publish_parser.add_argument('args', metavar='collection_path',
help='The path to the collection tarball to publish.')
publish_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import validation results.")
publish_parser.add_argument('--import-timeout', dest='import_timeout', type=int, default=0,
help="The time to wait for the collection import process to finish.")
def post_process_args(self, options):
options = super(GalaxyCLI, self).post_process_args(options)
display.verbosity = options.verbosity
return options
def run(self):
super(GalaxyCLI, self).run()
self.galaxy = Galaxy()
def server_config_def(section, key, required):
return {
'description': 'The %s of the %s Galaxy server' % (key, section),
'ini': [
{
'section': 'galaxy_server.%s' % section,
'key': key,
}
],
'env': [
{'name': 'ANSIBLE_GALAXY_SERVER_%s_%s' % (section.upper(), key.upper())},
],
'required': required,
}
server_def = [('url', True), ('username', False), ('password', False), ('token', False),
('auth_url', False)]
config_servers = []
for server_key in (C.GALAXY_SERVER_LIST or []):
# Config definitions are looked up dynamically based on the C.GALAXY_SERVER_LIST entry. We look up the
# section [galaxy_server.<server>] for the values url, username, password, and token.
config_dict = dict((k, server_config_def(server_key, k, req)) for k, req in server_def)
defs = AnsibleLoader(yaml.safe_dump(config_dict)).get_single_data()
C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs)
server_options = C.config.get_plugin_options('galaxy_server', server_key)
# auth_url is used to create the token, but not directly by GalaxyAPI, so
# it doesn't need to be passed as kwarg to GalaxyApi
auth_url = server_options.pop('auth_url', None)
token_val = server_options['token'] or NoTokenSentinel
username = server_options['username']
# default case if no auth info is provided.
server_options['token'] = None
if username:
server_options['token'] = BasicAuthToken(username,
server_options['password'])
else:
if token_val:
if auth_url:
server_options['token'] = KeycloakToken(access_token=token_val,
auth_url=auth_url,
validate_certs=not context.CLIARGS['ignore_certs'])
else:
# The galaxy v1 / github / django / 'Token'
server_options['token'] = GalaxyToken(token=token_val)
config_servers.append(GalaxyAPI(self.galaxy, server_key, **server_options))
cmd_server = context.CLIARGS['api_server']
cmd_token = GalaxyToken(token=context.CLIARGS['api_key'])
if cmd_server:
# Cmd args take precedence over the config entry but first check if the arg was a name and use that config
# entry, otherwise create a new API entry for the server specified.
config_server = next((s for s in config_servers if s.name == cmd_server), None)
if config_server:
self.api_servers.append(config_server)
else:
self.api_servers.append(GalaxyAPI(self.galaxy, 'cmd_arg', cmd_server, token=cmd_token))
else:
self.api_servers = config_servers
# Default to C.GALAXY_SERVER if no servers were defined
if len(self.api_servers) == 0:
self.api_servers.append(GalaxyAPI(self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token))
context.CLIARGS['func']()
@property
def api(self):
return self.api_servers[0]
def _parse_requirements_file(self, requirements_file, allow_old_format=True):
"""
Parses an Ansible requirements.yml file and returns all the roles and/or collections defined in it. There are two
requirements file formats:
# v1 (roles only)
- src: The source of the role, required if include is not set. Can be a Galaxy role name, a URL to an SCM repo, or a tarball.
name: Downloads the role to the specified name; defaults to the name from Galaxy, or the name of the repo if src is a URL.
scm: If src is a URL, specify the SCM. Only git or hg are supported; defaults to git.
version: The version of the role to download. Can also be a tag, commit, or branch name; defaults to master.
include: Path to additional requirements.yml files.
# v2 (roles and collections)
---
roles:
# Same as v1 format just under the roles key
collections:
- namespace.collection
- name: namespace.collection
version: version identifier, multiple identifiers are separated by ','
source: the URL or a predefined source name that relates to C.GALAXY_SERVER_LIST
:param requirements_file: The path to the requirements file.
:param allow_old_format: Will fail if a v1 requirements file is found and this is set to False.
:return: a dict containing the roles and collections found in the requirements file.
"""
requirements = {
'roles': [],
'collections': [],
}
b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict')
if not os.path.exists(b_requirements_file):
raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file))
display.vvv("Reading requirement file at '%s'" % requirements_file)
with open(b_requirements_file, 'rb') as req_obj:
try:
file_requirements = yaml.safe_load(req_obj)
except YAMLError as err:
raise AnsibleError(
"Failed to parse the requirements yml at '%s' with the following error:\n%s"
% (to_native(requirements_file), to_native(err)))
if requirements_file is None:
raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file))
def parse_role_req(requirement):
if "include" not in requirement:
role = RoleRequirement.role_yaml_parse(requirement)
display.vvv("found role %s in yaml file" % to_text(role))
if "name" not in role and "src" not in role:
raise AnsibleError("Must specify name or src for role")
return [GalaxyRole(self.galaxy, self.api, **role)]
else:
b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict")
if not os.path.isfile(b_include_path):
raise AnsibleError("Failed to find include requirements file '%s' in '%s'"
% (to_native(b_include_path), to_native(requirements_file)))
with open(b_include_path, 'rb') as f_include:
try:
return [GalaxyRole(self.galaxy, self.api, **r) for r in
(RoleRequirement.role_yaml_parse(i) for i in yaml.safe_load(f_include))]
except Exception as e:
raise AnsibleError("Unable to load data from include requirements file: %s %s"
% (to_native(requirements_file), to_native(e)))
if isinstance(file_requirements, list):
# Older format that contains only roles
if not allow_old_format:
raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains "
"a list of collections to install")
for role_req in file_requirements:
requirements['roles'] += parse_role_req(role_req)
else:
# Newer format with a collections and/or roles key
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
if extra_keys:
raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements "
"file. Found: %s" % (to_native(", ".join(extra_keys))))
for role_req in file_requirements.get('roles', []):
requirements['roles'] += parse_role_req(role_req)
for collection_req in file_requirements.get('collections', []):
if isinstance(collection_req, dict):
req_name = collection_req.get('name', None)
if req_name is None:
raise AnsibleError("Collections requirement entry should contain the key name.")
req_version = collection_req.get('version', '*')
req_source = collection_req.get('source', None)
if req_source:
# Try and match up the requirement source with our list of Galaxy API servers defined in the
# config, otherwise create a server with that URL without any auth.
req_source = next(iter([a for a in self.api_servers if req_source in [a.name, a.api_server]]),
GalaxyAPI(self.galaxy, "explicit_requirement_%s" % req_name, req_source))
requirements['collections'].append((req_name, req_version, req_source))
else:
requirements['collections'].append((collection_req, '*', None))
return requirements
@staticmethod
def exit_without_ignore(rc=1):
"""
Exits with the specified return code unless the
option --ignore-errors was specified
"""
if not context.CLIARGS['ignore_errors']:
raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')
@staticmethod
def _display_role_info(role_info):
text = [u"", u"Role: %s" % to_text(role_info['name'])]
text.append(u"\tdescription: %s" % role_info.get('description', ''))
for k in sorted(role_info.keys()):
if k in GalaxyCLI.SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
text.append(u"\t%s:" % (k))
for key in sorted(role_info[k].keys()):
if key in GalaxyCLI.SKIP_INFO_KEYS:
continue
text.append(u"\t\t%s: %s" % (key, role_info[k][key]))
else:
text.append(u"\t%s: %s" % (k, role_info[k]))
return u'\n'.join(text)
@staticmethod
def _resolve_path(path):
return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))
@staticmethod
def _get_skeleton_galaxy_yml(template_path, inject_data):
with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj:
meta_template = to_text(template_obj.read(), errors='surrogate_or_strict')
galaxy_meta = get_collections_galaxy_meta_info()
required_config = []
optional_config = []
for meta_entry in galaxy_meta:
config_list = required_config if meta_entry.get('required', False) else optional_config
value = inject_data.get(meta_entry['key'], None)
if not value:
meta_type = meta_entry.get('type', 'str')
if meta_type == 'str':
value = ''
elif meta_type == 'list':
value = []
elif meta_type == 'dict':
value = {}
meta_entry['value'] = value
config_list.append(meta_entry)
link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
const_pattern = re.compile(r"C\(([^)]+)\)")
def comment_ify(v):
if isinstance(v, list):
v = ". ".join([l.rstrip('.') for l in v])
v = link_pattern.sub(r"\1 <\2>", v)
v = const_pattern.sub(r"'\1'", v)
return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False)
def to_yaml(v):
return yaml.safe_dump(v, default_flow_style=False).rstrip()
env = Environment(loader=BaseLoader)
env.filters['comment_ify'] = comment_ify
env.filters['to_yaml'] = to_yaml
template = env.from_string(meta_template)
meta_value = template.render({'required_config': required_config, 'optional_config': optional_config})
return meta_value
############################
# execute actions
############################
def execute_role(self):
"""
Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init
as listed below.
"""
# To satisfy doc build
pass
def execute_collection(self):
"""
Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as
listed below.
"""
# To satisfy doc build
pass
def execute_build(self):
"""
Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy.
By default, this command builds from the current working directory. You can optionally pass in the
collection input path (where the ``galaxy.yml`` file is).
"""
force = context.CLIARGS['force']
output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path'])
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
elif os.path.isfile(b_output_path):
raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path))
for collection_path in context.CLIARGS['args']:
collection_path = GalaxyCLI._resolve_path(collection_path)
build_collection(collection_path, output_path, force)
def execute_init(self):
"""
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format.
Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
"""
galaxy_type = context.CLIARGS['type']
init_path = context.CLIARGS['init_path']
force = context.CLIARGS['force']
obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)]
obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)]
inject_data = dict(
description='your {0} description'.format(galaxy_type),
ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'),
)
if galaxy_type == 'role':
inject_data.update(dict(
author='your name',
company='your company (optional)',
license='license (GPL-2.0-or-later, MIT, etc)',
role_name=obj_name,
role_type=context.CLIARGS['role_type'],
issue_tracker_url='http://example.com/issue/tracker',
repository_url='http://example.com/repository',
documentation_url='http://docs.example.com',
homepage_url='http://example.com',
min_ansible_version=ansible_version[:3], # x.y
))
obj_path = os.path.join(init_path, obj_name)
elif galaxy_type == 'collection':
namespace, collection_name = obj_name.split('.', 1)
inject_data.update(dict(
namespace=namespace,
collection_name=collection_name,
version='1.0.0',
readme='README.md',
authors=['your name <[email protected]>'],
license=['GPL-2.0-or-later'],
repository='http://example.com/repository',
documentation='http://docs.example.com',
homepage='http://example.com',
issues='http://example.com/issue/tracker',
))
obj_path = os.path.join(init_path, namespace, collection_name)
b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict')
if os.path.exists(b_obj_path):
if os.path.isfile(obj_path):
raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path))
elif not force:
raise AnsibleError("- the directory %s already exists. "
"You can use --force to re-initialize this directory,\n"
"however it will reset any main.yml files that may have\n"
"been modified there already." % to_native(obj_path))
if obj_skeleton is not None:
own_skeleton = False
skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE
else:
own_skeleton = True
obj_skeleton = self.galaxy.default_role_skeleton_path
skeleton_ignore_expressions = ['^.*/.git_keep$']
obj_skeleton = os.path.expanduser(obj_skeleton)
skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions]
if not os.path.exists(obj_skeleton):
raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format(
to_native(obj_skeleton), galaxy_type)
)
template_env = Environment(loader=FileSystemLoader(obj_skeleton))
# create role directory
if not os.path.exists(b_obj_path):
os.makedirs(b_obj_path)
for root, dirs, files in os.walk(obj_skeleton, topdown=True):
rel_root = os.path.relpath(root, obj_skeleton)
rel_dirs = rel_root.split(os.sep)
rel_root_dir = rel_dirs[0]
if galaxy_type == 'collection':
# A collection can contain templates in playbooks/*/templates and roles/*/templates
in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs
else:
in_templates_dir = rel_root_dir == 'templates'
dirs[:] = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)]
for f in files:
filename, ext = os.path.splitext(f)
if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re):
continue
elif galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2':
# Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
# dynamically which requires special options to be set.
# The templated data's keys must match the key name but the inject data contains collection_name
# instead of name. We just make a copy and change the key back to name for this file.
template_data = inject_data.copy()
template_data['name'] = template_data.pop('collection_name')
meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data)
b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict')
with open(b_dest_file, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict'))
elif ext == ".j2" and not in_templates_dir:
src_template = os.path.join(rel_root, f)
dest_file = os.path.join(obj_path, rel_root, filename)
template_env.get_template(src_template).stream(inject_data).dump(dest_file, encoding='utf-8')
else:
f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton)
shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path))
for d in dirs:
b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict')
if not os.path.exists(b_dir_path):
os.makedirs(b_dir_path)
display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name))
def execute_info(self):
"""
Prints out detailed information about an installed role, as well as info available from the Galaxy API.
"""
roles_path = context.CLIARGS['roles_path']
data = ''
for role in context.CLIARGS['args']:
role_info = {'path': roles_path}
gr = GalaxyRole(self.galaxy, self.api, role)
install_info = gr.install_info
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
remote_data = False
if not context.CLIARGS['offline']:
remote_data = self.api.lookup_role_by_name(role, False)
if remote_data:
role_info.update(remote_data)
if gr.metadata:
role_info.update(gr.metadata)
req = RoleRequirement()
role_spec = req.role_yaml_parse({'role': role})
if role_spec:
role_info.update(role_spec)
data = self._display_role_info(role_info)
# FIXME: This is broken in both 1.9 and 2.0 as
# _display_role_info() always returns something
if not data:
data = u"\n- the role %s was not found" % role
self.pager(data)
def execute_install(self):
"""
Install one or more roles (``ansible-galaxy role install``), or one or more collections (``ansible-galaxy collection install``).
You can pass in a list (roles or collections) or use the file
option listed below (these are mutually exclusive). Each entry in a list
can be a name (which will be downloaded via the Galaxy API and GitHub) or a local tar archive file.
"""
if context.CLIARGS['type'] == 'collection':
collections = context.CLIARGS['args']
force = context.CLIARGS['force']
output_path = context.CLIARGS['collections_path']
ignore_certs = context.CLIARGS['ignore_certs']
ignore_errors = context.CLIARGS['ignore_errors']
requirements_file = context.CLIARGS['requirements']
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._parse_requirements_file(requirements_file, allow_old_format=False)['collections']
else:
requirements = []
for collection_input in collections:
name, dummy, requirement = collection_input.partition(':')
requirements.append((name, requirement or '*', None))
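# For example, a positional arg of 'my_ns.my_collection:>=1.0.0' becomes the
# tuple ('my_ns.my_collection', '>=1.0.0', None), while a bare
# 'my_ns.my_collection' becomes ('my_ns.my_collection', '*', None).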
output_path = GalaxyCLI._resolve_path(output_path)
collections_path = C.COLLECTIONS_PATHS
if len([p for p in collections_path if p.startswith(output_path)]) == 0:
display.warning("The specified collections path '%s' is not part of the configured Ansible "
"collections paths '%s'. The installed collection won't be picked up in an Ansible "
"run." % (to_text(output_path), to_text(":".join(collections_path))))
if os.path.split(output_path)[1] != 'ansible_collections':
output_path = os.path.join(output_path, 'ansible_collections')
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
install_collections(requirements, output_path, self.api_servers, (not ignore_certs), ignore_errors,
no_deps, force, force_deps)
return 0
role_file = context.CLIARGS['role_file']
if not context.CLIARGS['args'] and role_file is None:
# the user needs to specify one of either --role-file or specify a single user/role name
raise AnsibleOptionsError("- you must specify a user/role name or a roles file")
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
force = context.CLIARGS['force'] or force_deps
roles_left = []
if role_file:
if not (role_file.endswith('.yaml') or role_file.endswith('.yml')):
raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension")
roles_left = self._parse_requirements_file(role_file)['roles']
else:
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
for rname in context.CLIARGS['args']:
role = RoleRequirement.role_yaml_parse(rname.strip())
roles_left.append(GalaxyRole(self.galaxy, self.api, **role))
for role in roles_left:
# only process roles from the roles file whose names match, if names were given
if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']:
display.vvv('Skipping role %s' % role.name)
continue
display.vvv('Processing role %s ' % role.name)
# query the galaxy API for the role data
if role.install_info is not None:
if role.install_info['version'] != role.version or force:
if force:
display.display('- changing role %s from %s to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
role.remove()
else:
display.warning('- %s (%s) is already installed - use --force to change version to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
continue
else:
if not force:
display.display('- %s is already installed, skipping.' % str(role))
continue
try:
installed = role.install()
except AnsibleError as e:
display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e)))
self.exit_without_ignore()
continue
# install dependencies, if we want them
if not no_deps and installed:
if not role.metadata:
display.warning("Meta file %s is empty. Skipping dependencies." % role.path)
else:
role_dependencies = role.metadata.get('dependencies') or []
for dep in role_dependencies:
display.debug('Installing dep %s' % dep)
dep_req = RoleRequirement()
dep_info = dep_req.role_yaml_parse(dep)
dep_role = GalaxyRole(self.galaxy, self.api, **dep_info)
if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None:
# we know we can skip this, as it's not going to
# be found on galaxy.ansible.com
continue
if dep_role.install_info is None:
if dep_role not in roles_left:
display.display('- adding dependency: %s' % to_text(dep_role))
roles_left.append(dep_role)
else:
display.display('- dependency %s already pending installation.' % dep_role.name)
else:
if dep_role.install_info['version'] != dep_role.version:
if force_deps:
display.display('- changing dependent role %s from %s to %s' %
(dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified"))
dep_role.remove()
roles_left.append(dep_role)
else:
display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' %
(to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version']))
else:
if force_deps:
roles_left.append(dep_role)
else:
display.display('- dependency %s is already installed, skipping.' % dep_role.name)
if not installed:
display.warning("- %s was NOT installed successfully." % role.name)
self.exit_without_ignore()
return 0
def execute_remove(self):
"""
Removes the list of roles passed as arguments from the local system.
"""
if not context.CLIARGS['args']:
raise AnsibleOptionsError('- you must specify at least one role to remove.')
for role_name in context.CLIARGS['args']:
role = GalaxyRole(self.galaxy, self.api, role_name)
try:
if role.remove():
display.display('- successfully removed %s' % role_name)
else:
display.display('- %s is not installed, skipping.' % role_name)
except Exception as e:
raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e)))
return 0
def execute_list(self):
"""
Lists the roles installed on the local system, or matches a single role passed as an argument.
"""
def _display_role(gr):
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
display.display("- %s, %s" % (gr.name, version))
if context.CLIARGS['role']:
# show the requested role, if it exists
name = context.CLIARGS['role']
gr = GalaxyRole(self.galaxy, self.api, name)
if gr.metadata:
display.display('# %s' % os.path.dirname(gr.path))
_display_role(gr)
else:
display.display("- the role %s was not found" % name)
else:
# show all valid roles in the roles_path directory
roles_path = context.CLIARGS['roles_path']
path_found = False
warnings = []
for path in roles_path:
role_path = os.path.expanduser(path)
if not os.path.exists(role_path):
warnings.append("- the configured path %s does not exist." % role_path)
continue
elif not os.path.isdir(role_path):
warnings.append("- the configured path %s, exists, but it is not a directory." % role_path)
continue
display.display('# %s' % role_path)
path_files = os.listdir(role_path)
path_found = True
for path_file in path_files:
gr = GalaxyRole(self.galaxy, self.api, path_file, path=path)
if gr.metadata:
_display_role(gr)
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths was usable. Please specify a valid path with --roles-path")
return 0
def execute_publish(self):
"""
Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish.
"""
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args'])
wait = context.CLIARGS['wait']
timeout = context.CLIARGS['import_timeout']
publish_collection(collection_path, self.api, wait, timeout)
def execute_search(self):
''' Searches for roles on the Ansible Galaxy server '''
page_size = 1000
search = None
if context.CLIARGS['args']:
search = '+'.join(context.CLIARGS['args'])
if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']:
raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'],
tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size)
if response['count'] == 0:
display.display("No roles match your search.", color=C.COLOR_ERROR)
return True
data = [u'']
if response['count'] > page_size:
data.append(u"Found %d roles matching your search. Showing first %s." % (response['count'], page_size))
else:
data.append(u"Found %d roles matching your search:" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = u" %%-%ds %%s" % name_len
data.append(u'')
data.append(format_str % (u"Name", u"Description"))
data.append(format_str % (u"----", u"-----------"))
for role in response['results']:
data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description']))
data = u'\n'.join(data)
self.pager(data)
return True
def execute_login(self):
"""
Verify the user's identity via GitHub and retrieve an auth token from Ansible Galaxy.
"""
# Authenticate with github and retrieve a token
if context.CLIARGS['token'] is None:
if C.GALAXY_TOKEN:
github_token = C.GALAXY_TOKEN
else:
login = GalaxyLogin(self.galaxy)
github_token = login.create_github_token()
else:
github_token = context.CLIARGS['token']
galaxy_response = self.api.authenticate(github_token)
if context.CLIARGS['token'] is None and C.GALAXY_TOKEN is None:
# Remove the token we created
login.remove_github_token()
# Store the Galaxy token
token = GalaxyToken()
token.set(galaxy_response['token'])
display.display("Successfully logged into Galaxy as %s" % galaxy_response['username'])
return 0
def execute_import(self):
""" used to import a role into Ansible Galaxy """
colors = {
'INFO': 'normal',
'WARNING': C.COLOR_WARN,
'ERROR': C.COLOR_ERROR,
'SUCCESS': C.COLOR_OK,
'FAILED': C.COLOR_ERROR,
}
github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict')
github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict')
if context.CLIARGS['check_status']:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo,
reference=context.CLIARGS['reference'],
role_name=context.CLIARGS['role_name'])
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user, github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED)
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED)
display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo),
color=C.COLOR_CHANGED)
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not context.CLIARGS['wait']:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo']))
if context.CLIARGS['check_status'] or context.CLIARGS['wait']:
# Get the status of the import
msg_list = []
finished = False
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
def execute_setup(self):
""" Setup an integration from Github or Travis for Ansible Galaxy roles"""
if context.CLIARGS['setup_list']:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK)
display.display("---------- ---------- ----------", color=C.COLOR_OK)
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']), color=C.COLOR_OK)
return 0
if context.CLIARGS['remove_id']:
# Remove a secret
self.api.remove_secret(context.CLIARGS['remove_id'])
display.display("Secret removed. Integrations using this secret will not longer work.", color=C.COLOR_OK)
return 0
source = context.CLIARGS['source']
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
secret = context.CLIARGS['secret']
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
""" Delete a role from Ansible Galaxy. """
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name))
display.display(resp['status'])
return True
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59228 |
ansible-galaxy collection build includes unrelated files into the result
|
##### SUMMARY
The "source" folder of my collection contains several files that are not directly related to the collection itself. E.g. a `.gitignore`, a `.github` folder, a `Makefile` etc.
When using `ansible-galaxy collection build <folder>`, these files are included in the generated tarball. This is especially hurtful when the source folder also contains a Python venv that I use for developing.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
```paste below
ansible 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/egolov/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/egolov/Devel/ansible/ansible/lib/ansible
executable location = /home/egolov/Devel/ansible/ansible/bin/ansible
python version = 2.7.15 (default, Oct 15 2018, 15:26:09) [GCC 8.2.1 20180801 (Red Hat 8.2.1-2)]
```
##### CONFIGURATION
```paste below
empty
```
##### OS / ENVIRONMENT
Fedora
##### STEPS TO REPRODUCE
Build a collection :)
##### EXPECTED RESULTS
I'd expect only collection-relevant files/folders (those that are listed in the spec/docs) to be included.
##### ADDITIONAL INFORMATION
Alternatively, an ignore file as suggested by @bcoca in https://github.com/ansible/ansible/pull/59121/files#r304554334 could be a fix for this.
|
https://github.com/ansible/ansible/issues/59228
|
https://github.com/ansible/ansible/pull/64688
|
bf190606835d67998232140541bf848a51510c5c
|
f8f76628500052ad3521fbec16c073ae7f99d287
| 2019-07-18T07:57:44Z |
python
| 2019-11-13T19:02:58Z |
lib/ansible/galaxy/collection.py
|
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import fnmatch
import json
import operator
import os
import shutil
import sys
import tarfile
import tempfile
import threading
import time
import yaml
from contextlib import contextmanager
from distutils.version import LooseVersion, StrictVersion
from hashlib import sha256
from io import BytesIO
from yaml.error import YAMLError
try:
import queue
except ImportError:
import Queue as queue # Python 2
import ansible.constants as C
from ansible.errors import AnsibleError
from ansible.galaxy import get_collections_galaxy_meta_info
from ansible.galaxy.api import CollectionVersionMetadata, GalaxyError
from ansible.module_utils import six
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash, secure_hash_s
from ansible.module_utils.urls import open_url
urlparse = six.moves.urllib.parse.urlparse
urllib_error = six.moves.urllib.error
display = Display()
MANIFEST_FORMAT = 1
class CollectionRequirement:
_FILE_MAPPING = [(b'MANIFEST.json', 'manifest_file'), (b'FILES.json', 'files_file')]
def __init__(self, namespace, name, b_path, api, versions, requirement, force, parent=None, metadata=None,
files=None, skip=False):
"""
Represents a collection requirement, the versions that are available to be installed as well as any
dependencies the collection has.
:param namespace: The collection namespace.
:param name: The collection name.
:param b_path: Byte str of the path to the collection tarball if it has already been downloaded.
:param api: The GalaxyAPI to use if the collection is from Galaxy.
:param versions: A list of versions of the collection that are available.
:param requirement: The version requirement string used to verify the list of versions fit the requirements.
:param force: Whether the force flag applied to the collection.
:param parent: The name of the parent the collection is a dependency of.
:param metadata: The galaxy.api.CollectionVersionMetadata that has already been retrieved from the Galaxy
server.
:param files: The files that exist inside the collection. This is based on the FILES.json file inside the
collection artifact.
:param skip: Whether to skip installing the collection. Should be set if the collection is already installed
and force is not set.
"""
self.namespace = namespace
self.name = name
self.b_path = b_path
self.api = api
self.versions = set(versions)
self.force = force
self.skip = skip
self.required_by = []
self._metadata = metadata
self._files = files
self.add_requirement(parent, requirement)
def __str__(self):
return to_native("%s.%s" % (self.namespace, self.name))
def __unicode__(self):
return u"%s.%s" % (self.namespace, self.name)
@property
def latest_version(self):
try:
return max([v for v in self.versions if v != '*'], key=LooseVersion)
except ValueError: # ValueError: max() arg is an empty sequence
return '*'
@property
def dependencies(self):
if self._metadata:
return self._metadata.dependencies
elif len(self.versions) > 1:
return None
self._get_metadata()
return self._metadata.dependencies
def add_requirement(self, parent, requirement):
self.required_by.append((parent, requirement))
new_versions = set(v for v in self.versions if self._meets_requirements(v, requirement, parent))
if len(new_versions) == 0:
if self.skip:
force_flag = '--force-with-deps' if parent else '--force'
version = self.latest_version if self.latest_version != '*' else 'unknown'
msg = "Cannot meet requirement %s:%s as it is already installed at version '%s'. Use %s to overwrite" \
% (to_text(self), requirement, version, force_flag)
raise AnsibleError(msg)
elif parent is None:
msg = "Cannot meet requirement %s for dependency %s" % (requirement, to_text(self))
else:
msg = "Cannot meet dependency requirement '%s:%s' for collection %s" \
% (to_text(self), requirement, parent)
collection_source = to_text(self.b_path, nonstring='passthru') or self.api.api_server
req_by = "\n".join(
"\t%s - '%s:%s'" % (to_text(p) if p else 'base', to_text(self), r)
for p, r in self.required_by
)
versions = ", ".join(sorted(self.versions, key=LooseVersion))
raise AnsibleError(
"%s from source '%s'. Available versions before last requirement added: %s\nRequirements from:\n%s"
% (msg, collection_source, versions, req_by)
)
self.versions = new_versions
def install(self, path, b_temp_path):
if self.skip:
display.display("Skipping '%s' as it is already installed" % to_text(self))
return
# Install if it is not
collection_path = os.path.join(path, self.namespace, self.name)
b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
display.display("Installing '%s:%s' to '%s'" % (to_text(self), self.latest_version, collection_path))
if self.b_path is None:
download_url = self._metadata.download_url
artifact_hash = self._metadata.artifact_sha256
headers = {}
self.api._add_auth_token(headers, download_url, required=False)
self.b_path = _download_file(download_url, b_temp_path, artifact_hash, self.api.validate_certs,
headers=headers)
if os.path.exists(b_collection_path):
shutil.rmtree(b_collection_path)
os.makedirs(b_collection_path)
with tarfile.open(self.b_path, mode='r') as collection_tar:
files_member_obj = collection_tar.getmember('FILES.json')
with _tarfile_extract(collection_tar, files_member_obj) as files_obj:
files = json.loads(to_text(files_obj.read(), errors='surrogate_or_strict'))
_extract_tar_file(collection_tar, 'MANIFEST.json', b_collection_path, b_temp_path)
_extract_tar_file(collection_tar, 'FILES.json', b_collection_path, b_temp_path)
for file_info in files['files']:
file_name = file_info['name']
if file_name == '.':
continue
if file_info['ftype'] == 'file':
_extract_tar_file(collection_tar, file_name, b_collection_path, b_temp_path,
expected_hash=file_info['chksum_sha256'])
else:
os.makedirs(os.path.join(b_collection_path, to_bytes(file_name, errors='surrogate_or_strict')))
def set_latest_version(self):
self.versions = set([self.latest_version])
self._get_metadata()
def _get_metadata(self):
if self._metadata:
return
self._metadata = self.api.get_collection_version_metadata(self.namespace, self.name, self.latest_version)
def _meets_requirements(self, version, requirements, parent):
"""
Supported version identifiers are '==', '!=', '>', '>=', '<', '<=', and '*'. Multiple requirements are delimited by ','
"""
op_map = {
'!=': operator.ne,
'==': operator.eq,
'=': operator.eq,
'>=': operator.ge,
'>': operator.gt,
'<=': operator.le,
'<': operator.lt,
}
for req in list(requirements.split(',')):
op_pos = 2 if len(req) > 1 and req[1] == '=' else 1
op = op_map.get(req[:op_pos])
requirement = req[op_pos:]
if not op:
requirement = req
op = operator.eq
# In the case we are checking a new requirement on a base requirement (parent != None) we can't accept
# version as '*' (unknown version) unless the requirement is also '*'.
if parent and version == '*' and requirement != '*':
break
elif requirement == '*' or version == '*':
continue
if not op(LooseVersion(version), LooseVersion(requirement)):
break
else:
return True
# The loop was broken early, it does not meet all the requirements
return False
@staticmethod
def from_tar(b_path, force, parent=None):
if not tarfile.is_tarfile(b_path):
raise AnsibleError("Collection artifact at '%s' is not a valid tar file." % to_native(b_path))
info = {}
with tarfile.open(b_path, mode='r') as collection_tar:
for b_member_name, property_name in CollectionRequirement._FILE_MAPPING:
n_member_name = to_native(b_member_name)
try:
member = collection_tar.getmember(n_member_name)
except KeyError:
raise AnsibleError("Collection at '%s' does not contain the required file %s."
% (to_native(b_path), n_member_name))
with _tarfile_extract(collection_tar, member) as member_obj:
try:
info[property_name] = json.loads(to_text(member_obj.read(), errors='surrogate_or_strict'))
except ValueError:
raise AnsibleError("Collection tar file member %s does not contain a valid json string."
% n_member_name)
meta = info['manifest_file']['collection_info']
files = info['files_file']['files']
namespace = meta['namespace']
name = meta['name']
version = meta['version']
meta = CollectionVersionMetadata(namespace, name, version, None, None, meta['dependencies'])
return CollectionRequirement(namespace, name, b_path, None, [version], version, force, parent=parent,
metadata=meta, files=files)
@staticmethod
def from_path(b_path, force, parent=None):
info = {}
for b_file_name, property_name in CollectionRequirement._FILE_MAPPING:
b_file_path = os.path.join(b_path, b_file_name)
if not os.path.exists(b_file_path):
continue
with open(b_file_path, 'rb') as file_obj:
try:
info[property_name] = json.loads(to_text(file_obj.read(), errors='surrogate_or_strict'))
except ValueError:
raise AnsibleError("Collection file at '%s' does not contain a valid json string."
% to_native(b_file_path))
if 'manifest_file' in info:
manifest = info['manifest_file']['collection_info']
namespace = manifest['namespace']
name = manifest['name']
version = manifest['version']
dependencies = manifest['dependencies']
else:
display.warning("Collection at '%s' does not have a MANIFEST.json file, cannot detect version."
% to_text(b_path))
parent_dir, name = os.path.split(to_text(b_path, errors='surrogate_or_strict'))
namespace = os.path.split(parent_dir)[1]
version = '*'
dependencies = {}
meta = CollectionVersionMetadata(namespace, name, version, None, None, dependencies)
files = info.get('files_file', {}).get('files', {})
return CollectionRequirement(namespace, name, b_path, None, [version], version, force, parent=parent,
metadata=meta, files=files, skip=True)
@staticmethod
def from_name(collection, apis, requirement, force, parent=None):
namespace, name = collection.split('.', 1)
galaxy_meta = None
for api in apis:
try:
if not (requirement == '*' or requirement.startswith('<') or requirement.startswith('>') or
requirement.startswith('!=')):
if requirement.startswith('='):
requirement = requirement.lstrip('=')
resp = api.get_collection_version_metadata(namespace, name, requirement)
galaxy_meta = resp
versions = [resp.version]
else:
resp = api.get_collection_versions(namespace, name)
# Galaxy supports semver but ansible-galaxy does not. We ignore any versions that don't match
# StrictVersion (x.y.z) and only support pre-releases if an explicit version was set (done above).
versions = [v for v in resp if StrictVersion.version_re.match(v)]
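# e.g. '1.0.0' is kept here, while a semver pre-release such as
# '1.0.0-beta.1' does not match StrictVersion and is dropped.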
except GalaxyError as err:
if err.http_code == 404:
display.vvv("Collection '%s' is not available from server %s %s"
% (collection, api.name, api.api_server))
continue
raise
display.vvv("Collection '%s' obtained from server %s %s" % (collection, api.name, api.api_server))
break
else:
raise AnsibleError("Failed to find collection %s:%s" % (collection, requirement))
req = CollectionRequirement(namespace, name, None, api, versions, requirement, force, parent=parent,
metadata=galaxy_meta)
return req
def build_collection(collection_path, output_path, force):
"""
Creates the Ansible collection artifact in a .tar.gz file.
:param collection_path: The path to the collection to build. This should be the directory that contains the
galaxy.yml file.
:param output_path: The path to create the collection build artifact. This should be a directory.
:param force: Whether to overwrite an existing collection build artifact or fail.
:return: The path to the collection build artifact.
"""
b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
b_galaxy_path = os.path.join(b_collection_path, b'galaxy.yml')
if not os.path.exists(b_galaxy_path):
raise AnsibleError("The collection galaxy.yml path '%s' does not exist." % to_native(b_galaxy_path))
collection_meta = _get_galaxy_yml(b_galaxy_path)
file_manifest = _build_files_manifest(b_collection_path, collection_meta['namespace'], collection_meta['name'])
collection_manifest = _build_manifest(**collection_meta)
collection_output = os.path.join(output_path, "%s-%s-%s.tar.gz" % (collection_meta['namespace'],
collection_meta['name'],
collection_meta['version']))
b_collection_output = to_bytes(collection_output, errors='surrogate_or_strict')
if os.path.exists(b_collection_output):
if os.path.isdir(b_collection_output):
raise AnsibleError("The output collection artifact '%s' already exists, "
"but is a directory - aborting" % to_native(collection_output))
elif not force:
raise AnsibleError("The file '%s' already exists. You can use --force to re-create "
"the collection artifact." % to_native(collection_output))
_build_collection_tar(b_collection_path, b_collection_output, collection_manifest, file_manifest)
def publish_collection(collection_path, api, wait, timeout):
"""
Publish an Ansible collection tarball into an Ansible Galaxy server.
:param collection_path: The path to the collection tarball to publish.
:param api: A GalaxyAPI to publish the collection to.
:param wait: Whether to wait until the import process is complete.
:param timeout: The time in seconds to wait for the import process to finish, 0 is indefinite.
"""
import_uri = api.publish_collection(collection_path)
if wait:
# Galaxy returns a url fragment which differs between v2 and v3. The second to last entry is
# always the task_id, though.
# v2: {"task": "https://galaxy-dev.ansible.com/api/v2/collection-imports/35573/"}
# v3: {"task": "/api/automation-hub/v3/imports/collections/838d1308-a8f4-402c-95cb-7823f3806cd8/"}
task_id = None
for path_segment in reversed(import_uri.split('/')):
if path_segment:
task_id = path_segment
break
if not task_id:
raise AnsibleError("Publishing the collection did not return valid task info. Cannot wait for task status. Returned task info: '%s'" % import_uri)
display.display("Collection has been published to the Galaxy server %s %s" % (api.name, api.api_server))
with _display_progress():
api.wait_import_task(task_id, timeout)
display.display("Collection has been successfully published and imported to the Galaxy server %s %s"
% (api.name, api.api_server))
else:
display.display("Collection has been pushed to the Galaxy server %s %s, not waiting until import has "
"completed due to --no-wait being set. Import task results can be found at %s"
% (api.name, api.api_server, import_uri))
def install_collections(collections, output_path, apis, validate_certs, ignore_errors, no_deps, force, force_deps):
"""
Install Ansible collections to the path specified.
:param collections: The collections to install, should be a list of tuples with (name, requirement, Galaxy server).
:param output_path: The path to install the collections to.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param validate_certs: Whether to validate the certificates if downloading a tarball.
:param ignore_errors: Whether to ignore any errors when installing the collection.
:param no_deps: Ignore any collection dependencies and only install the base requirements.
:param force: Re-install a collection if it has already been installed.
:param force_deps: Re-install a collection as well as its dependencies if they have already been installed.
"""
existing_collections = _find_existing_collections(output_path)
with _tempdir() as b_temp_path:
display.display("Process install dependency map")
with _display_progress():
dependency_map = _build_dependency_map(collections, existing_collections, b_temp_path, apis,
validate_certs, force, force_deps, no_deps)
display.display("Starting collection install process")
with _display_progress():
for collection in dependency_map.values():
try:
collection.install(output_path, b_temp_path)
except AnsibleError as err:
if ignore_errors:
display.warning("Failed to install collection %s but skipping due to --ignore-errors being set. "
"Error: %s" % (to_text(collection), to_text(err)))
else:
raise
def validate_collection_name(name):
"""
Validates that the collection name, whether input by the user or read from a requirements file, fits the required format.
:param name: The input name with optional range specifier split by ':'.
:return: The input value, required for argparse validation.
"""
collection, dummy, dummy = name.partition(':')
if AnsibleCollectionRef.is_valid_collection_name(collection):
return name
raise AnsibleError("Invalid collection name '%s', name must be in the format <namespace>.<collection>." % name)
@contextmanager
def _tempdir():
b_temp_path = tempfile.mkdtemp(dir=to_bytes(C.DEFAULT_LOCAL_TMP, errors='surrogate_or_strict'))
yield b_temp_path
shutil.rmtree(b_temp_path)
@contextmanager
def _tarfile_extract(tar, member):
tar_obj = tar.extractfile(member)
yield tar_obj
tar_obj.close()
@contextmanager
def _display_progress():
config_display = C.GALAXY_DISPLAY_PROGRESS
display_wheel = sys.stdout.isatty() if config_display is None else config_display
if not display_wheel:
yield
return
def progress(display_queue, actual_display):
actual_display.debug("Starting display_progress display thread")
t = threading.current_thread()
while True:
for c in "|/-\\":
actual_display.display(c + "\b", newline=False)
time.sleep(0.1)
# Display a message from the main thread
while True:
try:
method, args, kwargs = display_queue.get(block=False, timeout=0.1)
except queue.Empty:
break
else:
func = getattr(actual_display, method)
func(*args, **kwargs)
if getattr(t, "finish", False):
actual_display.debug("Received end signal for display_progress display thread")
return
class DisplayThread(object):
def __init__(self, display_queue):
self.display_queue = display_queue
def __getattr__(self, attr):
def call_display(*args, **kwargs):
self.display_queue.put((attr, args, kwargs))
return call_display
# Temporary override the global display class with our own which add the calls to a queue for the thread to call.
global display
old_display = display
try:
display_queue = queue.Queue()
display = DisplayThread(display_queue)
t = threading.Thread(target=progress, args=(display_queue, old_display))
t.daemon = True
t.start()
try:
yield
finally:
t.finish = True
t.join()
except Exception:
# The exception is re-raised so we can be sure the thread is finished and not using the display anymore
raise
finally:
display = old_display
def _get_galaxy_yml(b_galaxy_yml_path):
meta_info = get_collections_galaxy_meta_info()
mandatory_keys = set()
string_keys = set()
list_keys = set()
dict_keys = set()
for info in meta_info:
if info.get('required', False):
mandatory_keys.add(info['key'])
key_list_type = {
'str': string_keys,
'list': list_keys,
'dict': dict_keys,
}[info.get('type', 'str')]
key_list_type.add(info['key'])
all_keys = frozenset(list(mandatory_keys) + list(string_keys) + list(list_keys) + list(dict_keys))
try:
with open(b_galaxy_yml_path, 'rb') as g_yaml:
galaxy_yml = yaml.safe_load(g_yaml)
except YAMLError as err:
raise AnsibleError("Failed to parse the galaxy.yml at '%s' with the following error:\n%s"
% (to_native(b_galaxy_yml_path), to_native(err)))
set_keys = set(galaxy_yml.keys())
missing_keys = mandatory_keys.difference(set_keys)
if missing_keys:
raise AnsibleError("The collection galaxy.yml at '%s' is missing the following mandatory keys: %s"
% (to_native(b_galaxy_yml_path), ", ".join(sorted(missing_keys))))
extra_keys = set_keys.difference(all_keys)
if len(extra_keys) > 0:
display.warning("Found unknown keys in collection galaxy.yml at '%s': %s"
% (to_text(b_galaxy_yml_path), ", ".join(extra_keys)))
# Add the defaults if they have not been set
for optional_string in string_keys:
if optional_string not in galaxy_yml:
galaxy_yml[optional_string] = None
for optional_list in list_keys:
list_val = galaxy_yml.get(optional_list, None)
if list_val is None:
galaxy_yml[optional_list] = []
elif not isinstance(list_val, list):
galaxy_yml[optional_list] = [list_val]
for optional_dict in dict_keys:
if optional_dict not in galaxy_yml:
galaxy_yml[optional_dict] = {}
# license is a builtin var in Python, to avoid confusion we just rename it to license_ids
galaxy_yml['license_ids'] = galaxy_yml['license']
del galaxy_yml['license']
return galaxy_yml
def _build_files_manifest(b_collection_path, namespace, name):
# Each entry is a tuple of (b_pattern, root_only), where root_only means the pattern only applies to files in the root dir
b_ignore_files = frozenset([(b'*.pyc', False), (b'*.retry', False),
(to_bytes('{0}-{1}-*.tar.gz'.format(namespace, name)), True)])
b_ignore_dirs = frozenset([(b'CVS', False), (b'.bzr', False), (b'.hg', False), (b'.git', False), (b'.svn', False),
(b'__pycache__', False), (b'.tox', False)])
entry_template = {
'name': None,
'ftype': None,
'chksum_type': None,
'chksum_sha256': None,
'format': MANIFEST_FORMAT
}
manifest = {
'files': [
{
'name': '.',
'ftype': 'dir',
'chksum_type': None,
'chksum_sha256': None,
'format': MANIFEST_FORMAT,
},
],
'format': MANIFEST_FORMAT,
}
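# Each file discovered by _walk below is appended as an entry shaped like
# (values here are illustrative):
#   {'name': 'plugins/modules/my_module.py', 'ftype': 'file',
#    'chksum_type': 'sha256', 'chksum_sha256': '<hex digest>', 'format': 1}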
def _walk(b_path, b_top_level_dir):
is_root = b_path == b_top_level_dir
for b_item in os.listdir(b_path):
b_abs_path = os.path.join(b_path, b_item)
b_rel_base_dir = b'' if b_path == b_top_level_dir else b_path[len(b_top_level_dir) + 1:]
rel_path = to_text(os.path.join(b_rel_base_dir, b_item), errors='surrogate_or_strict')
if os.path.isdir(b_abs_path):
if any(b_item == b_path for b_path, root_only in b_ignore_dirs
if not root_only or root_only == is_root):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
if os.path.islink(b_abs_path):
b_link_target = os.path.realpath(b_abs_path)
if not b_link_target.startswith(b_top_level_dir):
display.warning("Skipping '%s' as it is a symbolic link to a directory outside the collection"
% to_text(b_abs_path))
continue
manifest_entry = entry_template.copy()
manifest_entry['name'] = rel_path
manifest_entry['ftype'] = 'dir'
manifest['files'].append(manifest_entry)
_walk(b_abs_path, b_top_level_dir)
else:
if b_item == b'galaxy.yml':
continue
elif any(fnmatch.fnmatch(b_item, b_pattern) for b_pattern, root_only in b_ignore_files
if not root_only or root_only == is_root):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
manifest_entry = entry_template.copy()
manifest_entry['name'] = rel_path
manifest_entry['ftype'] = 'file'
manifest_entry['chksum_type'] = 'sha256'
manifest_entry['chksum_sha256'] = secure_hash(b_abs_path, hash_func=sha256)
manifest['files'].append(manifest_entry)
_walk(b_collection_path, b_collection_path)
return manifest
def _build_manifest(namespace, name, version, authors, readme, tags, description, license_ids, license_file,
dependencies, repository, documentation, homepage, issues, **kwargs):
manifest = {
'collection_info': {
'namespace': namespace,
'name': name,
'version': version,
'authors': authors,
'readme': readme,
'tags': tags,
'description': description,
'license': license_ids,
'license_file': license_file if license_file else None, # Handle galaxy.yml having an empty string (None)
'dependencies': dependencies,
'repository': repository,
'documentation': documentation,
'homepage': homepage,
'issues': issues,
},
'file_manifest_file': {
'name': 'FILES.json',
'ftype': 'file',
'chksum_type': 'sha256',
'chksum_sha256': None, # Filled out in _build_collection_tar
'format': MANIFEST_FORMAT
},
'format': MANIFEST_FORMAT,
}
return manifest
def _build_collection_tar(b_collection_path, b_tar_path, collection_manifest, file_manifest):
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
with _tempdir() as b_temp_path:
b_tar_filepath = os.path.join(b_temp_path, os.path.basename(b_tar_path))
with tarfile.open(b_tar_filepath, mode='w:gz') as tar_file:
# Add the MANIFEST.json and FILES.json file to the archive
for name, b in [('MANIFEST.json', collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_io = BytesIO(b)
tar_info = tarfile.TarInfo(name)
tar_info.size = len(b)
tar_info.mtime = time.time()
tar_info.mode = 0o0644
tar_file.addfile(tarinfo=tar_info, fileobj=b_io)
for file_info in file_manifest['files']:
if file_info['name'] == '.':
continue
# arcname expects a native string, cannot be bytes
filename = to_native(file_info['name'], errors='surrogate_or_strict')
b_src_path = os.path.join(b_collection_path, to_bytes(filename, errors='surrogate_or_strict'))
def reset_stat(tarinfo):
tarinfo.mode = 0o0755 if tarinfo.isdir() else 0o0644
tarinfo.uid = tarinfo.gid = 0
tarinfo.uname = tarinfo.gname = ''
return tarinfo
tar_file.add(os.path.realpath(b_src_path), arcname=filename, recursive=False, filter=reset_stat)
shutil.copy(b_tar_filepath, b_tar_path)
collection_name = "%s.%s" % (collection_manifest['collection_info']['namespace'],
collection_manifest['collection_info']['name'])
display.display('Created collection for %s at %s' % (collection_name, to_text(b_tar_path)))
def _find_existing_collections(path):
collections = []
b_path = to_bytes(path, errors='surrogate_or_strict')
for b_namespace in os.listdir(b_path):
b_namespace_path = os.path.join(b_path, b_namespace)
if os.path.isfile(b_namespace_path):
continue
for b_collection in os.listdir(b_namespace_path):
b_collection_path = os.path.join(b_namespace_path, b_collection)
if os.path.isdir(b_collection_path):
req = CollectionRequirement.from_path(b_collection_path, False)
display.vvv("Found installed collection %s:%s at '%s'" % (to_text(req), req.latest_version,
to_text(b_collection_path)))
collections.append(req)
return collections
def _build_dependency_map(collections, existing_collections, b_temp_path, apis, validate_certs, force, force_deps,
no_deps):
dependency_map = {}
# First build the dependency map on the actual requirements
for name, version, source in collections:
_get_collection_info(dependency_map, existing_collections, name, version, source, b_temp_path, apis,
validate_certs, (force or force_deps))
checked_parents = set([to_text(c) for c in dependency_map.values() if c.skip])
while len(dependency_map) != len(checked_parents):
while not no_deps: # Only parse dependencies if no_deps was not set
parents_to_check = set(dependency_map.keys()).difference(checked_parents)
deps_exhausted = True
for parent in parents_to_check:
parent_info = dependency_map[parent]
if parent_info.dependencies:
deps_exhausted = False
for dep_name, dep_requirement in parent_info.dependencies.items():
_get_collection_info(dependency_map, existing_collections, dep_name, dep_requirement,
parent_info.api, b_temp_path, apis, validate_certs, force_deps,
parent=parent)
checked_parents.add(parent)
# No extra dependencies were resolved, exit loop
if deps_exhausted:
break
# Now we have resolved the deps to our best extent, now select the latest version for collections with
# multiple versions found and go from there
deps_not_checked = set(dependency_map.keys()).difference(checked_parents)
for collection in deps_not_checked:
dependency_map[collection].set_latest_version()
if no_deps or len(dependency_map[collection].dependencies) == 0:
checked_parents.add(collection)
return dependency_map
def _get_collection_info(dep_map, existing_collections, collection, requirement, source, b_temp_path, apis,
validate_certs, force, parent=None):
dep_msg = ""
if parent:
dep_msg = " - as dependency of %s" % parent
display.vvv("Processing requirement collection '%s'%s" % (to_text(collection), dep_msg))
b_tar_path = None
if os.path.isfile(to_bytes(collection, errors='surrogate_or_strict')):
display.vvvv("Collection requirement '%s' is a tar artifact" % to_text(collection))
b_tar_path = to_bytes(collection, errors='surrogate_or_strict')
elif urlparse(collection).scheme:
display.vvvv("Collection requirement '%s' is a URL to a tar artifact" % collection)
b_tar_path = _download_file(collection, b_temp_path, None, validate_certs)
if b_tar_path:
req = CollectionRequirement.from_tar(b_tar_path, force, parent=parent)
collection_name = to_text(req)
if collection_name in dep_map:
collection_info = dep_map[collection_name]
collection_info.add_requirement(None, req.latest_version)
else:
collection_info = req
else:
validate_collection_name(collection)
display.vvvv("Collection requirement '%s' is the name of a collection" % collection)
if collection in dep_map:
collection_info = dep_map[collection]
collection_info.add_requirement(parent, requirement)
else:
apis = [source] if source else apis
collection_info = CollectionRequirement.from_name(collection, apis, requirement, force, parent=parent)
existing = [c for c in existing_collections if to_text(c) == to_text(collection_info)]
if existing and not collection_info.force:
# Test that the installed collection fits the requirement
existing[0].add_requirement(to_text(collection_info), requirement)
collection_info = existing[0]
dep_map[to_text(collection_info)] = collection_info
def _download_file(url, b_path, expected_hash, validate_certs, headers=None):
bufsize = 65536
digest = sha256()
urlsplit = os.path.splitext(to_text(url.rsplit('/', 1)[1]))
b_file_name = to_bytes(urlsplit[0], errors='surrogate_or_strict')
b_file_ext = to_bytes(urlsplit[1], errors='surrogate_or_strict')
b_file_path = tempfile.NamedTemporaryFile(dir=b_path, prefix=b_file_name, suffix=b_file_ext, delete=False).name
display.vvv("Downloading %s to %s" % (url, to_text(b_path)))
# Galaxy redirects downloads to S3, which rejects the request if an Authorization header is attached, so don't forward that header on redirect
resp = open_url(to_native(url, errors='surrogate_or_strict'), validate_certs=validate_certs, headers=headers,
unredirected_headers=['Authorization'])
with open(b_file_path, 'wb') as download_file:
data = resp.read(bufsize)
while data:
digest.update(data)
download_file.write(data)
data = resp.read(bufsize)
if expected_hash:
actual_hash = digest.hexdigest()
display.vvvv("Validating downloaded file hash %s with expected hash %s" % (actual_hash, expected_hash))
if expected_hash != actual_hash:
raise AnsibleError("Mismatch artifact hash with downloaded file")
return b_file_path
def _extract_tar_file(tar, filename, b_dest, b_temp_path, expected_hash=None):
n_filename = to_native(filename, errors='surrogate_or_strict')
try:
member = tar.getmember(n_filename)
except KeyError:
raise AnsibleError("Collection tar at '%s' does not contain the expected file '%s'." % (to_native(tar.name),
n_filename))
with tempfile.NamedTemporaryFile(dir=b_temp_path, delete=False) as tmpfile_obj:
bufsize = 65536
sha256_digest = sha256()
with _tarfile_extract(tar, member) as tar_obj:
data = tar_obj.read(bufsize)
while data:
tmpfile_obj.write(data)
tmpfile_obj.flush()
sha256_digest.update(data)
data = tar_obj.read(bufsize)
actual_hash = sha256_digest.hexdigest()
if expected_hash and actual_hash != expected_hash:
raise AnsibleError("Checksum mismatch for '%s' inside collection at '%s'"
% (n_filename, to_native(tar.name)))
b_dest_filepath = os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict'))
b_parent_dir = os.path.split(b_dest_filepath)[0]
if not os.path.exists(b_parent_dir):
# Seems like Galaxy does not validate if all file entries have a corresponding dir ftype entry. This check
# makes sure we create the parent directory even if it wasn't set in the metadata.
os.makedirs(b_parent_dir)
shutil.move(to_bytes(tmpfile_obj.name, errors='surrogate_or_strict'), b_dest_filepath)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59228 |
ansible-galaxy collection build includes unrelated files into the result
|
|
https://github.com/ansible/ansible/issues/59228
|
https://github.com/ansible/ansible/pull/64688
|
bf190606835d67998232140541bf848a51510c5c
|
f8f76628500052ad3521fbec16c073ae7f99d287
| 2019-07-18T07:57:44Z |
python
| 2019-11-13T19:02:58Z |
lib/ansible/galaxy/data/collections_galaxy_meta.yml
|
# Copyright (c) 2019 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# key: The name of the key as defined in galaxy.yml
# description: Comment/info on the key to be used as the generated doc and auto generated skeleton galaxy.yml file
# required: Whether the key is required (default is no)
# type: The type of value that can be set, aligns to the values in the plugin formatter
---
- key: namespace
description:
- The namespace of the collection.
- This can be a company/brand/organization or product namespace under which all content lives.
- May only contain alphanumeric characters and underscores. Additionally namespaces cannot start with underscores or
numbers and cannot contain consecutive underscores.
required: yes
type: str
- key: name
description:
- The name of the collection.
- Has the same character restrictions as C(namespace).
required: yes
type: str
- key: version
description:
- The version of the collection.
- Must be compatible with semantic versioning.
required: yes
type: str
- key: readme
description:
- The path to the Markdown (.md) readme file.
- This path is relative to the root of the collection.
required: yes
type: str
- key: authors
description:
- A list of the collection's content authors.
- Can be just the name or in the format 'Full Name <email> (url) @nicks:irc/im.site#channel'.
required: yes
type: list
- key: description
description:
- A short summary description of the collection.
type: str
- key: license
description:
- Either a single license or a list of licenses for content inside of a collection.
  - Ansible Galaxy currently only accepts L(SPDX,https://spdx.org/licenses/) licenses.
- This key is mutually exclusive with C(license_file).
type: list
- key: license_file
description:
- The path to the license file for the collection.
- This path is relative to the root of the collection.
- This key is mutually exclusive with C(license).
type: str
- key: tags
description:
- A list of tags you want to associate with the collection for indexing/searching.
- A tag name has the same character requirements as C(namespace) and C(name).
type: list
- key: dependencies
description:
- Collections that this collection requires to be installed for it to be usable.
- The key of the dict is the collection label C(namespace.name).
- The value is a version range
L(specifiers,https://python-semanticversion.readthedocs.io/en/latest/#requirement-specification).
- Multiple version range specifiers can be set and are separated by C(,).
type: dict
- key: repository
description:
- The URL of the originating SCM repository.
type: str
- key: documentation
description:
- The URL to any online docs.
type: str
- key: homepage
description:
- The URL to the homepage of the collection/project.
type: str
- key: issues
description:
- The URL to the collection issue tracker.
type: str
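# Illustrative example (values are hypothetical) of a galaxy.yml that
# satisfies the required keys described above:
#
#   namespace: my_namespace
#   name: my_collection
#   version: 1.0.0
#   readme: README.md
#   authors:
#     - John Doe <[email protected]>
#   license:
#     - GPL-3.0-or-later
#   dependencies:
#     other_namespace.other_collection: '>=1.0.0,<2.0.0'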
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,228 |
ansible-galaxy collection build includes unrelated files into the result
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The "source" folder of my collection contains several files that are not directly related to the collection itself. E.g. a `.gitignore`, a `.github` folder, a `Makefile` etc.
When using `ansible-galaxy collection build <folder>`, these files are included in the generated tarball. This is especially hurtful when the source folder also contains a Python venv that I use for developing.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/egolov/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/egolov/Devel/ansible/ansible/lib/ansible
executable location = /home/egolov/Devel/ansible/ansible/bin/ansible
python version = 2.7.15 (default, Oct 15 2018, 15:26:09) [GCC 8.2.1 20180801 (Red Hat 8.2.1-2)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
empty
```
##### OS / ENVIRONMENT
Fedora
##### STEPS TO REPRODUCE
Build a collection :)
##### EXPECTED RESULTS
I'd expect only collection-relevant files/folders (those that are listed in the spec/docs) to be included.
##### ADDITIONAL INFORMATION
Alternatively, an ignore file as suggested by @bcoca in https://github.com/ansible/ansible/pull/59121/files#r304554334 could be a fix for this.
|
https://github.com/ansible/ansible/issues/59228
|
https://github.com/ansible/ansible/pull/64688
|
bf190606835d67998232140541bf848a51510c5c
|
f8f76628500052ad3521fbec16c073ae7f99d287
| 2019-07-18T07:57:44Z |
python
| 2019-11-13T19:02:58Z |
test/units/galaxy/test_collection.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import pytest
import tarfile
import uuid
from hashlib import sha256
from io import BytesIO
from units.compat.mock import MagicMock
from ansible import context
from ansible.cli.galaxy import GalaxyCLI
from ansible.errors import AnsibleError
from ansible.galaxy import api, collection, token
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.utils import context_objects as co
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash_s
@pytest.fixture(autouse='function')
def reset_cli_args():
co.GlobalCLIArgs._Singleton__instance = None
yield
co.GlobalCLIArgs._Singleton__instance = None
@pytest.fixture()
def collection_input(tmp_path_factory):
''' Creates a collection skeleton directory for build tests '''
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
namespace = 'ansible_namespace'
collection = 'collection'
skeleton = os.path.join(os.path.dirname(os.path.split(__file__)[0]), 'cli', 'test_data', 'collection_skeleton')
galaxy_args = ['ansible-galaxy', 'collection', 'init', '%s.%s' % (namespace, collection),
'-c', '--init-path', test_dir, '--collection-skeleton', skeleton]
GalaxyCLI(args=galaxy_args).run()
collection_dir = os.path.join(test_dir, namespace, collection)
output_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Output'))
return collection_dir, output_dir
@pytest.fixture()
def collection_artifact(monkeypatch, tmp_path_factory):
''' Creates a temp collection artifact and mocked open_url instance for publishing tests '''
mock_open = MagicMock()
monkeypatch.setattr(collection, 'open_url', mock_open)
mock_uuid = MagicMock()
mock_uuid.return_value.hex = 'uuid'
monkeypatch.setattr(uuid, 'uuid4', mock_uuid)
tmp_path = tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections')
input_file = to_text(tmp_path / 'collection.tar.gz')
with tarfile.open(input_file, 'w:gz') as tfile:
b_io = BytesIO(b"\x00\x01\x02\x03")
tar_info = tarfile.TarInfo('test')
tar_info.size = 4
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
return input_file, mock_open
@pytest.fixture()
def galaxy_yml(request, tmp_path_factory):
b_test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
b_galaxy_yml = os.path.join(b_test_dir, b'galaxy.yml')
with open(b_galaxy_yml, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(request.param))
yield b_galaxy_yml
@pytest.fixture()
def tmp_tarfile(tmp_path_factory):
''' Creates a temporary tar file for _extract_tar_file tests '''
filename = u'ÅÑŚÌβŁÈ'
temp_dir = to_bytes(tmp_path_factory.mktemp('test-%s Collections' % to_native(filename)))
tar_file = os.path.join(temp_dir, to_bytes('%s.tar.gz' % filename))
data = os.urandom(8)
with tarfile.open(tar_file, 'w:gz') as tfile:
b_io = BytesIO(data)
tar_info = tarfile.TarInfo(filename)
tar_info.size = len(data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
sha256_hash = sha256()
sha256_hash.update(data)
with tarfile.open(tar_file, 'r') as tfile:
yield temp_dir, tfile, filename, sha256_hash.hexdigest()
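# The fixture above hands tests an open tarfile plus the sha256 hexdigest of
# its single member, so extraction tests can exercise checksum verification
# without building their own artifacts.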
@pytest.fixture()
def galaxy_server():
context.CLIARGS._store = {'ignore_certs': False}
galaxy_api = api.GalaxyAPI(None, 'test_server', 'https://galaxy.ansible.com',
token=token.GalaxyToken(token='key'))
return galaxy_api
def test_build_collection_no_galaxy_yaml():
fake_path = u'/fake/ÅÑŚÌβŁÈ/path'
expected = to_native("The collection galaxy.yml path '%s/galaxy.yml' does not exist." % fake_path)
with pytest.raises(AnsibleError, match=expected):
collection.build_collection(fake_path, 'output', False)
def test_build_existing_output_file(collection_input):
input_dir, output_dir = collection_input
existing_output_dir = os.path.join(output_dir, 'ansible_namespace-collection-0.1.0.tar.gz')
os.makedirs(existing_output_dir)
expected = "The output collection artifact '%s' already exists, but is a directory - aborting" \
% to_native(existing_output_dir)
with pytest.raises(AnsibleError, match=expected):
collection.build_collection(input_dir, output_dir, False)
def test_build_existing_output_without_force(collection_input):
input_dir, output_dir = collection_input
existing_output = os.path.join(output_dir, 'ansible_namespace-collection-0.1.0.tar.gz')
with open(existing_output, 'w+') as out_file:
out_file.write("random garbage")
out_file.flush()
expected = "The file '%s' already exists. You can use --force to re-create the collection artifact." \
% to_native(existing_output)
with pytest.raises(AnsibleError, match=expected):
collection.build_collection(input_dir, output_dir, False)
def test_build_existing_output_with_force(collection_input):
input_dir, output_dir = collection_input
existing_output = os.path.join(output_dir, 'ansible_namespace-collection-0.1.0.tar.gz')
with open(existing_output, 'w+') as out_file:
out_file.write("random garbage")
out_file.flush()
collection.build_collection(input_dir, output_dir, True)
# Verify the file was replaced with an actual tar file
assert tarfile.is_tarfile(existing_output)
@pytest.mark.parametrize('galaxy_yml', [b'namespace: value: broken'], indirect=True)
def test_invalid_yaml_galaxy_file(galaxy_yml):
expected = to_native(b"Failed to parse the galaxy.yml at '%s' with the following error:" % galaxy_yml)
with pytest.raises(AnsibleError, match=expected):
collection._get_galaxy_yml(galaxy_yml)
@pytest.mark.parametrize('galaxy_yml', [b'namespace: test_namespace'], indirect=True)
def test_missing_required_galaxy_key(galaxy_yml):
expected = "The collection galaxy.yml at '%s' is missing the following mandatory keys: authors, name, " \
"readme, version" % to_native(galaxy_yml)
with pytest.raises(AnsibleError, match=expected):
collection._get_galaxy_yml(galaxy_yml)
@pytest.mark.parametrize('galaxy_yml', [b"""
namespace: namespace
name: collection
authors: Jordan
version: 0.1.0
readme: README.md
invalid: value"""], indirect=True)
def test_warning_extra_keys(galaxy_yml, monkeypatch):
display_mock = MagicMock()
monkeypatch.setattr(Display, 'warning', display_mock)
collection._get_galaxy_yml(galaxy_yml)
assert display_mock.call_count == 1
assert display_mock.call_args[0][0] == "Found unknown keys in collection galaxy.yml at '%s': invalid"\
% to_text(galaxy_yml)
@pytest.mark.parametrize('galaxy_yml', [b"""
namespace: namespace
name: collection
authors: Jordan
version: 0.1.0
readme: README.md"""], indirect=True)
def test_defaults_galaxy_yml(galaxy_yml):
actual = collection._get_galaxy_yml(galaxy_yml)
assert actual['namespace'] == 'namespace'
assert actual['name'] == 'collection'
assert actual['authors'] == ['Jordan']
assert actual['version'] == '0.1.0'
assert actual['readme'] == 'README.md'
assert actual['description'] is None
assert actual['repository'] is None
assert actual['documentation'] is None
assert actual['homepage'] is None
assert actual['issues'] is None
assert actual['tags'] == []
assert actual['dependencies'] == {}
assert actual['license_ids'] == []
@pytest.mark.parametrize('galaxy_yml', [(b"""
namespace: namespace
name: collection
authors: Jordan
version: 0.1.0
readme: README.md
license: MIT"""), (b"""
namespace: namespace
name: collection
authors: Jordan
version: 0.1.0
readme: README.md
license:
- MIT""")], indirect=True)
def test_galaxy_yml_list_value(galaxy_yml):
actual = collection._get_galaxy_yml(galaxy_yml)
assert actual['license_ids'] == ['MIT']
def test_build_ignore_files_and_folders(collection_input, monkeypatch):
input_dir = collection_input[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_display)
git_folder = os.path.join(input_dir, '.git')
retry_file = os.path.join(input_dir, 'ansible.retry')
os.makedirs(git_folder)
with open(retry_file, 'w+') as ignore_file:
ignore_file.write('random')
ignore_file.flush()
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection')
assert actual['format'] == 1
for manifest_entry in actual['files']:
assert manifest_entry['name'] not in ['.git', 'ansible.retry', 'galaxy.yml']
expected_msgs = [
"Skipping '%s' for collection build" % to_text(retry_file),
"Skipping '%s' for collection build" % to_text(git_folder),
]
assert mock_display.call_count == 2
assert mock_display.mock_calls[0][1][0] in expected_msgs
assert mock_display.mock_calls[1][1][0] in expected_msgs
def test_build_ignore_older_release_in_root(collection_input, monkeypatch):
input_dir = collection_input[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'vvv', mock_display)
# This is expected to be ignored because it is in the root collection dir.
release_file = os.path.join(input_dir, 'namespace-collection-0.0.0.tar.gz')
# This is not expected to be ignored because it is not in the root collection dir.
fake_release_file = os.path.join(input_dir, 'plugins', 'namespace-collection-0.0.0.tar.gz')
for filename in [release_file, fake_release_file]:
with open(filename, 'w+') as file_obj:
file_obj.write('random')
file_obj.flush()
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection')
assert actual['format'] == 1
plugin_release_found = False
for manifest_entry in actual['files']:
assert manifest_entry['name'] != 'namespace-collection-0.0.0.tar.gz'
if manifest_entry['name'] == 'plugins/namespace-collection-0.0.0.tar.gz':
plugin_release_found = True
assert plugin_release_found
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == "Skipping '%s' for collection build" % to_text(release_file)
def test_build_ignore_symlink_target_outside_collection(collection_input, monkeypatch):
input_dir, outside_dir = collection_input
mock_display = MagicMock()
monkeypatch.setattr(Display, 'warning', mock_display)
link_path = os.path.join(input_dir, 'plugins', 'connection')
os.symlink(outside_dir, link_path)
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection')
for manifest_entry in actual['files']:
assert manifest_entry['name'] != 'plugins/connection'
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == "Skipping '%s' as it is a symbolic link to a directory outside " \
"the collection" % to_text(link_path)
def test_build_copy_symlink_target_inside_collection(collection_input):
input_dir = collection_input[0]
os.makedirs(os.path.join(input_dir, 'playbooks', 'roles'))
roles_link = os.path.join(input_dir, 'playbooks', 'roles', 'linked')
roles_target = os.path.join(input_dir, 'roles', 'linked')
roles_target_tasks = os.path.join(roles_target, 'tasks')
os.makedirs(roles_target_tasks)
with open(os.path.join(roles_target_tasks, 'main.yml'), 'w+') as tasks_main:
tasks_main.write("---\n- hosts: localhost\n tasks:\n - ping:")
tasks_main.flush()
os.symlink(roles_target, roles_link)
actual = collection._build_files_manifest(to_bytes(input_dir), 'namespace', 'collection')
linked_entries = [e for e in actual['files'] if e['name'].startswith('playbooks/roles/linked')]
assert len(linked_entries) == 3
assert linked_entries[0]['name'] == 'playbooks/roles/linked'
assert linked_entries[0]['ftype'] == 'dir'
assert linked_entries[1]['name'] == 'playbooks/roles/linked/tasks'
assert linked_entries[1]['ftype'] == 'dir'
assert linked_entries[2]['name'] == 'playbooks/roles/linked/tasks/main.yml'
assert linked_entries[2]['ftype'] == 'file'
assert linked_entries[2]['chksum_sha256'] == '9c97a1633c51796999284c62236b8d5462903664640079b80c37bf50080fcbc3'
def test_build_with_symlink_inside_collection(collection_input):
input_dir, output_dir = collection_input
os.makedirs(os.path.join(input_dir, 'playbooks', 'roles'))
roles_link = os.path.join(input_dir, 'playbooks', 'roles', 'linked')
file_link = os.path.join(input_dir, 'docs', 'README.md')
roles_target = os.path.join(input_dir, 'roles', 'linked')
roles_target_tasks = os.path.join(roles_target, 'tasks')
os.makedirs(roles_target_tasks)
with open(os.path.join(roles_target_tasks, 'main.yml'), 'w+') as tasks_main:
tasks_main.write("---\n- hosts: localhost\n tasks:\n - ping:")
tasks_main.flush()
os.symlink(roles_target, roles_link)
os.symlink(os.path.join(input_dir, 'README.md'), file_link)
collection.build_collection(input_dir, output_dir, False)
output_artifact = os.path.join(output_dir, 'ansible_namespace-collection-0.1.0.tar.gz')
assert tarfile.is_tarfile(output_artifact)
with tarfile.open(output_artifact, mode='r') as actual:
members = actual.getmembers()
linked_members = [m for m in members if m.path.startswith('playbooks/roles/linked/tasks')]
assert len(linked_members) == 2
assert linked_members[0].name == 'playbooks/roles/linked/tasks'
assert linked_members[0].isdir()
assert linked_members[1].name == 'playbooks/roles/linked/tasks/main.yml'
assert linked_members[1].isreg()
linked_task = actual.extractfile(linked_members[1].name)
actual_task = secure_hash_s(linked_task.read())
linked_task.close()
assert actual_task == 'f4dcc52576b6c2cd8ac2832c52493881c4e54226'
linked_file = [m for m in members if m.path == 'docs/README.md']
assert len(linked_file) == 1
assert linked_file[0].isreg()
linked_file_obj = actual.extractfile(linked_file[0].name)
actual_file = secure_hash_s(linked_file_obj.read())
linked_file_obj.close()
assert actual_file == '63444bfc766154e1bc7557ef6280de20d03fcd81'
def test_publish_no_wait(galaxy_server, collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
artifact_path, mock_open = collection_artifact
fake_import_uri = 'https://galaxy.server.com/api/v2/import/1234'
mock_publish = MagicMock()
mock_publish.return_value = fake_import_uri
monkeypatch.setattr(galaxy_server, 'publish_collection', mock_publish)
collection.publish_collection(artifact_path, galaxy_server, False, 0)
assert mock_publish.call_count == 1
assert mock_publish.mock_calls[0][1][0] == artifact_path
assert mock_display.call_count == 1
assert mock_display.mock_calls[0][1][0] == \
"Collection has been pushed to the Galaxy server %s %s, not waiting until import has completed due to " \
"--no-wait being set. Import task results can be found at %s" % (galaxy_server.name, galaxy_server.api_server,
fake_import_uri)
def test_publish_with_wait(galaxy_server, collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
artifact_path, mock_open = collection_artifact
fake_import_uri = 'https://galaxy.server.com/api/v2/import/1234'
mock_publish = MagicMock()
mock_publish.return_value = fake_import_uri
monkeypatch.setattr(galaxy_server, 'publish_collection', mock_publish)
mock_wait = MagicMock()
monkeypatch.setattr(galaxy_server, 'wait_import_task', mock_wait)
collection.publish_collection(artifact_path, galaxy_server, True, 0)
assert mock_publish.call_count == 1
assert mock_publish.mock_calls[0][1][0] == artifact_path
assert mock_wait.call_count == 1
assert mock_wait.mock_calls[0][1][0] == '1234'
assert mock_display.mock_calls[0][1][0] == "Collection has been published to the Galaxy server test_server %s" \
% galaxy_server.api_server
def test_find_existing_collections(tmp_path_factory, monkeypatch):
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
collection1 = os.path.join(test_dir, 'namespace1', 'collection1')
collection2 = os.path.join(test_dir, 'namespace2', 'collection2')
fake_collection1 = os.path.join(test_dir, 'namespace3', 'collection3')
fake_collection2 = os.path.join(test_dir, 'namespace4')
os.makedirs(collection1)
os.makedirs(collection2)
os.makedirs(os.path.split(fake_collection1)[0])
open(fake_collection1, 'wb+').close()
open(fake_collection2, 'wb+').close()
collection1_manifest = json.dumps({
'collection_info': {
'namespace': 'namespace1',
'name': 'collection1',
'version': '1.2.3',
'authors': ['Jordan Borean'],
'readme': 'README.md',
'dependencies': {},
},
'format': 1,
})
with open(os.path.join(collection1, 'MANIFEST.json'), 'wb') as manifest_obj:
manifest_obj.write(to_bytes(collection1_manifest))
mock_warning = MagicMock()
monkeypatch.setattr(Display, 'warning', mock_warning)
actual = collection._find_existing_collections(test_dir)
assert len(actual) == 2
for actual_collection in actual:
assert actual_collection.skip is True
if str(actual_collection) == 'namespace1.collection1':
assert actual_collection.namespace == 'namespace1'
assert actual_collection.name == 'collection1'
assert actual_collection.b_path == to_bytes(collection1)
assert actual_collection.api is None
assert actual_collection.versions == set(['1.2.3'])
assert actual_collection.latest_version == '1.2.3'
assert actual_collection.dependencies == {}
else:
assert actual_collection.namespace == 'namespace2'
assert actual_collection.name == 'collection2'
assert actual_collection.b_path == to_bytes(collection2)
assert actual_collection.api is None
assert actual_collection.versions == set(['*'])
assert actual_collection.latest_version == '*'
assert actual_collection.dependencies == {}
assert mock_warning.call_count == 1
assert mock_warning.mock_calls[0][1][0] == "Collection at '%s' does not have a MANIFEST.json file, cannot " \
"detect version." % to_text(collection2)
def test_download_file(tmp_path_factory, monkeypatch):
temp_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
data = b"\x00\x01\x02\x03"
sha256_hash = sha256()
sha256_hash.update(data)
mock_open = MagicMock()
mock_open.return_value = BytesIO(data)
monkeypatch.setattr(collection, 'open_url', mock_open)
expected = os.path.join(temp_dir, b'file')
actual = collection._download_file('http://google.com/file', temp_dir, sha256_hash.hexdigest(), True)
assert actual.startswith(expected)
assert os.path.isfile(actual)
with open(actual, 'rb') as file_obj:
assert file_obj.read() == data
assert mock_open.call_count == 1
assert mock_open.mock_calls[0][1][0] == 'http://google.com/file'
def test_download_file_hash_mismatch(tmp_path_factory, monkeypatch):
temp_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
data = b"\x00\x01\x02\x03"
mock_open = MagicMock()
mock_open.return_value = BytesIO(data)
monkeypatch.setattr(collection, 'open_url', mock_open)
expected = "Mismatch artifact hash with downloaded file"
with pytest.raises(AnsibleError, match=expected):
collection._download_file('http://google.com/file', temp_dir, 'bad', True)
def test_extract_tar_file_invalid_hash(tmp_tarfile):
temp_dir, tfile, filename, dummy = tmp_tarfile
expected = "Checksum mismatch for '%s' inside collection at '%s'" % (to_native(filename), to_native(tfile.name))
with pytest.raises(AnsibleError, match=expected):
collection._extract_tar_file(tfile, filename, temp_dir, temp_dir, "fakehash")
def test_extract_tar_file_missing_member(tmp_tarfile):
temp_dir, tfile, dummy, dummy = tmp_tarfile
expected = "Collection tar at '%s' does not contain the expected file 'missing'." % to_native(tfile.name)
with pytest.raises(AnsibleError, match=expected):
collection._extract_tar_file(tfile, 'missing', temp_dir, temp_dir)
def test_extract_tar_file_missing_parent_dir(tmp_tarfile):
temp_dir, tfile, filename, checksum = tmp_tarfile
output_dir = os.path.join(temp_dir, b'output')
output_file = os.path.join(output_dir, to_bytes(filename))
collection._extract_tar_file(tfile, filename, output_dir, temp_dir, checksum)
    assert os.path.isfile(output_file)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,331 |
ec2_eip throws error if you ensure: absent on a non-existent IP
|
##### SUMMARY
If you try to re-release an elastic IP (ensure: absent) which has already been released, ec2_eip throws an error rather than simply returning "ok".
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/cloud/amazon/ec2_eip.py
##### ANSIBLE VERSION
devel
##### CONFIGURATION
N/A ansible-test
##### OS / ENVIRONMENT
N/A ansible-test
##### STEPS TO REPRODUCE
```
#==================================================================
# Allocation from a pool
- name: allocate a new eip from a pool
ec2_eip:
state: present
in_vpc: yes
public_ipv4_pool: amazon
register: eip
- ec2_eip_info:
register: eip_info
- assert:
that:
- eip is defined
- eip is changed
- eip.public_ip is defined and eip.public_ip != ""
- eip.allocation_id is defined and eip.allocation_id != ""
#==================================================================
# EIP Deletion
- name: Release eip
ec2_eip:
state: absent
public_ip: "{{ eip.public_ip }}"
register: eip_release
- ec2_eip_info:
register: eip_info
- assert:
that:
- eip_release is defined
- eip_release is changed
- name: Rerelease eip (no change)
ec2_eip:
state: absent
public_ip: "{{ eip.public_ip }}"
register: eip_release
- ec2_eip_info:
register: eip_info
- assert:
that:
- eip_release is defined
- eip_release is not changed
```
##### EXPECTED RESULTS
Play completes successfully
##### ACTUAL RESULTS
```
TASK [ec2_eip : Rerelease eip (no change)] *************************************
task path: /root/.ansible/test/tmp/ec2_eip-4v9k8sww-ÅÑŚÌβŁÈ/test/integration/targets/ec2_eip/tasks/main.yml:326
<testhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<testhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<testhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029 `" && echo ansible-tmp-1568552830.3837097-100844260545029="` echo /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029 `" ) && sleep 0'
Using module file /root/ansible/lib/ansible/modules/cloud/amazon/ec2_eip.py
<testhost> PUT /root/.ansible/tmp/ansible-local-126l_udgy92/tmppojiib6x TO /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/AnsiballZ_ec2_eip.py
<testhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/ /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/AnsiballZ_ec2_eip.py && sleep 0'
<testhost> EXEC /bin/sh -c 'ANSIBLE_DEBUG_BOTOCORE_LOGS=True /tmp/python-tkg1ink4-ansible/python /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/AnsiballZ_ec2_eip.py && sleep 0'
<testhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_ec2_eip_payload_jseaw6r8/ansible_ec2_eip_payload.zip/ansible/modules/cloud/amazon/ec2_eip.py", line 317, in find_address
File "/tmp/ansible_ec2_eip_payload_jseaw6r8/ansible_ec2_eip_payload.zip/ansible/module_utils/aws/core.py", line 283, in deciding_wrapper
return unwrapped(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.6/dist-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidAddress.NotFound) when calling the DescribeAddresses operation: Address '52.43.70.125' not found.
fatal: [testhost]: FAILED! => {
"boto3_version": "1.9.204",
"botocore_version": "1.12.204",
"changed": false,
"error": {
"code": "InvalidAddress.NotFound",
"message": "Address '52.43.70.125' not found."
},
"invocation": {
"module_args": {
"allow_reassociation": false,
"aws_access_key": "REDACTED",
"aws_secret_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"debug_botocore_endpoint_logs": true,
"device_id": null,
"ec2_url": null,
"in_vpc": true,
"private_ip_address": null,
"profile": null,
"public_ip": "52.43.70.125",
"public_ipv4_pool": null,
"region": "us-west-2",
"release_on_disassociation": false,
"reuse_existing_ip_allowed": false,
"security_token": null,
"state": "absent",
"tag_name": null,
"tag_value": null,
"validate_certs": true,
"wait_timeout": 300
}
},
"msg": "Couldn't obtain list of existing Elastic IP addresses: An error occurred (InvalidAddress.NotFound) when calling the DescribeAddresses operation: Address '52.43.70.125' not found.",
"resource_actions": [
"ec2:DescribeAddresses"
],
"response_metadata": {
"http_headers": {
"connection": "close",
"date": "Sun, 15 Sep 2019 13:07:11 GMT",
"server": "AmazonEC2",
"transfer-encoding": "chunked"
},
"http_status_code": 400,
"request_id": "c4b65cd8-02de-4503-af3a-0e498db40d53",
"retry_attempts": 0
}
}
```
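One possible shape for a fix (a sketch only, not the actual patch from the linked PR): treat `InvalidAddress.NotFound` during the lookup as "address not found", so that `state: absent` on an already-released EIP reports `changed: false` instead of failing.

```python
# Sketch only; the names mirror find_address() in ec2_eip.py, but the body is
# an assumption about how the lookup could be made idempotent.
from ansible.module_utils.aws.core import is_boto3_error_code

def find_address(ec2, module, public_ip, device_id, is_instance=True):
    """Return the matching EIP, or None if it does not (or no longer) exist."""
    kwargs = {'PublicIps': [public_ip]} if public_ip else {}
    try:
        addresses = ec2.describe_addresses(**kwargs)['Addresses']
    except is_boto3_error_code('InvalidAddress.NotFound'):
        # The address was already released; ensure_absent() can then
        # report changed=False instead of aborting the module.
        return None
    return addresses[0] if addresses else None
```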
|
https://github.com/ansible/ansible/issues/62331
|
https://github.com/ansible/ansible/pull/62332
|
f8f76628500052ad3521fbec16c073ae7f99d287
|
b5f484dcc35f2b6adfbf53d075762578b83d942f
| 2019-09-15T15:43:36Z |
python
| 2019-11-13T20:27:35Z |
hacking/aws_config/testing_policies/network-policy.json
|
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ManageRoute53ForTests",
"Effect": "Allow",
"Action": [
"route53:CreateHostedZone",
"route53:DeleteHostedZone",
"route53:GetHostedZone",
"route53:ListHostedZones",
"route53:UpdateHostedZoneComment"
],
"Resource": "*"
},
{
"Sid": "AllowTransitGatewayManagement",
"Effect": "Allow",
"Action": [
"ec2:CreateTransitGateway",
"ec2:DeleteTransitGateway",
"ec2:DescribeTransitGateways"
],
"Resource": "*"
},
{
"Sid": "AllowUnspecifiedEC2NetworkingResource",
"Effect": "Allow",
"Action": [
"ec2:AllocateAddress",
"ec2:AssociateAddress",
"ec2:AssociateDhcpOptions",
"ec2:AssociateRouteTable",
"ec2:AssociateVpcCidrBlock",
"ec2:AssociateSubnetCidrBlock",
"ec2:AttachInternetGateway",
"ec2:AttachNetworkInterface",
"ec2:AttachVpnGateway",
"ec2:CreateCustomerGateway",
"ec2:CreateDhcpOptions",
"ec2:CreateNatGateway",
"ec2:CreateNetworkAcl",
"ec2:CreateNetworkAclEntry",
"ec2:CreateNetworkInterface",
"ec2:CreateRoute",
"ec2:CreateRouteTable",
"ec2:CreateSubnet",
"ec2:CreateVpc",
"ec2:CreateVpnConnection",
"ec2:CreateVpnGateway",
"ec2:DeleteCustomerGateway",
"ec2:DeleteDhcpOptions",
"ec2:DeleteInternetGateway",
"ec2:DeleteNatGateway",
"ec2:DeleteNetworkAcl",
"ec2:DeleteNetworkAclEntry",
"ec2:DeleteNetworkInterface",
"ec2:DeleteRoute",
"ec2:DeleteRouteTable",
"ec2:DeleteSubnet",
"ec2:DeleteVpc",
"ec2:DeleteVpnConnection",
"ec2:DeleteVpnGateway",
"ec2:DetachInternetGateway",
"ec2:DetachVpnGateway",
"ec2:Describe*",
"ec2:DisassociateAddress",
"ec2:DisassociateRouteTable",
"ec2:DisassociateSubnetCidrBlock",
"ec2:ModifySubnetAttribute",
"ec2:ModifyVpcAttribute",
"ec2:ReleaseAddress",
"ec2:ReplaceNetworkAclAssociation",
"ec2:ReplaceNetworkAclEntry",
"ec2:ReplaceRouteTableAssociation"
],
"Resource": "*"
},
{
"Sid": "AllowCloudfrontUsage",
"Effect": "Allow",
"Action": [
"cloudfront:CreateDistribution",
"cloudfront:CreateDistributionWithTags",
"cloudfront:CreateCloudFrontOriginAccessIdentity",
"cloudfront:DeleteDistribution",
"cloudfront:GetDistribution",
"cloudfront:GetStreamingDistribution",
"cloudfront:GetDistributionConfig",
"cloudfront:GetStreamingDistributionConfig",
"cloudfront:GetInvalidation",
"cloudfront:ListDistributions",
"cloudfront:ListDistributionsByWebACLId",
"cloudfront:ListInvalidations",
"cloudfront:ListStreamingDistributions",
"cloudfront:ListTagsForResource",
"cloudfront:TagResource",
"cloudfront:UntagResource",
"cloudfront:UpdateDistribution"
],
"Resource": "*"
}
]
}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,331 |
ec2_eip throws error if you ensure: absent on a non-existent IP
|
##### SUMMARY
If you try to re-release an elastic IP (ensure: absent) which has already been released, ec2_eip throws an error rather than simply returning "ok".
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/cloud/amazon/ec2_eip.py
##### ANSIBLE VERSION
devel
##### CONFIGURATION
N/A ansible-test
##### OS / ENVIRONMENT
N/A ansible-test
##### STEPS TO REPRODUCE
```
#==================================================================
# Allocation from a pool
- name: allocate a new eip from a pool
ec2_eip:
state: present
in_vpc: yes
public_ipv4_pool: amazon
register: eip
- ec2_eip_info:
register: eip_info
- assert:
that:
- eip is defined
- eip is changed
- eip.public_ip is defined and eip.public_ip != ""
- eip.allocation_id is defined and eip.allocation_id != ""
#==================================================================
# EIP Deletion
- name: Release eip
ec2_eip:
state: absent
public_ip: "{{ eip.public_ip }}"
register: eip_release
- ec2_eip_info:
register: eip_info
- assert:
that:
- eip_release is defined
- eip_release is changed
- name: Rerelease eip (no change)
ec2_eip:
state: absent
public_ip: "{{ eip.public_ip }}"
register: eip_release
- ec2_eip_info:
register: eip_info
- assert:
that:
- eip_release is defined
- eip_release is not changed
```
##### EXPECTED RESULTS
Play completes successfully
##### ACTUAL RESULTS
```
TASK [ec2_eip : Rerelease eip (no change)] *************************************
task path: /root/.ansible/test/tmp/ec2_eip-4v9k8sww-ÅÑŚÌβŁÈ/test/integration/targets/ec2_eip/tasks/main.yml:326
<testhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<testhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<testhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029 `" && echo ansible-tmp-1568552830.3837097-100844260545029="` echo /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029 `" ) && sleep 0'
Using module file /root/ansible/lib/ansible/modules/cloud/amazon/ec2_eip.py
<testhost> PUT /root/.ansible/tmp/ansible-local-126l_udgy92/tmppojiib6x TO /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/AnsiballZ_ec2_eip.py
<testhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/ /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/AnsiballZ_ec2_eip.py && sleep 0'
<testhost> EXEC /bin/sh -c 'ANSIBLE_DEBUG_BOTOCORE_LOGS=True /tmp/python-tkg1ink4-ansible/python /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/AnsiballZ_ec2_eip.py && sleep 0'
<testhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_ec2_eip_payload_jseaw6r8/ansible_ec2_eip_payload.zip/ansible/modules/cloud/amazon/ec2_eip.py", line 317, in find_address
File "/tmp/ansible_ec2_eip_payload_jseaw6r8/ansible_ec2_eip_payload.zip/ansible/module_utils/aws/core.py", line 283, in deciding_wrapper
return unwrapped(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.6/dist-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidAddress.NotFound) when calling the DescribeAddresses operation: Address '52.43.70.125' not found.
fatal: [testhost]: FAILED! => {
"boto3_version": "1.9.204",
"botocore_version": "1.12.204",
"changed": false,
"error": {
"code": "InvalidAddress.NotFound",
"message": "Address '52.43.70.125' not found."
},
"invocation": {
"module_args": {
"allow_reassociation": false,
"aws_access_key": "REDACTED",
"aws_secret_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"debug_botocore_endpoint_logs": true,
"device_id": null,
"ec2_url": null,
"in_vpc": true,
"private_ip_address": null,
"profile": null,
"public_ip": "52.43.70.125",
"public_ipv4_pool": null,
"region": "us-west-2",
"release_on_disassociation": false,
"reuse_existing_ip_allowed": false,
"security_token": null,
"state": "absent",
"tag_name": null,
"tag_value": null,
"validate_certs": true,
"wait_timeout": 300
}
},
"msg": "Couldn't obtain list of existing Elastic IP addresses: An error occurred (InvalidAddress.NotFound) when calling the DescribeAddresses operation: Address '52.43.70.125' not found.",
"resource_actions": [
"ec2:DescribeAddresses"
],
"response_metadata": {
"http_headers": {
"connection": "close",
"date": "Sun, 15 Sep 2019 13:07:11 GMT",
"server": "AmazonEC2",
"transfer-encoding": "chunked"
},
"http_status_code": 400,
"request_id": "c4b65cd8-02de-4503-af3a-0e498db40d53",
"retry_attempts": 0
}
}
```
|
https://github.com/ansible/ansible/issues/62331
|
https://github.com/ansible/ansible/pull/62332
|
f8f76628500052ad3521fbec16c073ae7f99d287
|
b5f484dcc35f2b6adfbf53d075762578b83d942f
| 2019-09-15T15:43:36Z |
python
| 2019-11-13T20:27:35Z |
lib/ansible/modules/cloud/amazon/ec2_eip.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ec2_eip
short_description: manages EC2 elastic IP (EIP) addresses.
description:
- This module can allocate or release an EIP.
- This module can associate/disassociate an EIP with instances or network interfaces.
version_added: "1.4"
options:
device_id:
description:
- The id of the device for the EIP. Can be an EC2 Instance id or Elastic Network Interface (ENI) id.
required: false
aliases: [ instance_id ]
version_added: "2.0"
type: str
public_ip:
description:
- The IP address of a previously allocated EIP.
- If C(present) and device is specified, the EIP is associated with the device.
- If C(absent) and device is specified, the EIP is disassociated from the device.
aliases: [ ip ]
type: str
state:
description:
- If C(present), allocate an EIP or associate an existing EIP with a device.
- If C(absent), disassociate the EIP from the device and optionally release it.
choices: ['present', 'absent']
default: present
type: str
in_vpc:
description:
- Allocate an EIP inside a VPC or not. Required if specifying an ENI with I(device_id).
default: false
type: bool
version_added: "1.4"
reuse_existing_ip_allowed:
description:
- Reuse an EIP that is not associated to a device (when available), instead of allocating a new one.
default: false
type: bool
version_added: "1.6"
release_on_disassociation:
description:
- Whether or not to automatically release the EIP when it is disassociated.
default: false
type: bool
version_added: "2.0"
private_ip_address:
description:
- The primary or secondary private IP address to associate with the Elastic IP address.
version_added: "2.3"
type: str
allow_reassociation:
description:
- Specify this option to allow an Elastic IP address that is already associated with another
network interface or instance to be re-associated with the specified instance or interface.
default: false
type: bool
version_added: "2.5"
tag_name:
description:
- When I(reuse_existing_ip_allowed=true), supplement with this option to only reuse
an Elastic IP if it is tagged with I(tag_name).
version_added: "2.9"
type: str
tag_value:
description:
- Supplements I(tag_name) but also checks that the value of the tag provided in I(tag_name) matches I(tag_value).
version_added: "2.9"
type: str
public_ipv4_pool:
description:
      - Allocates the new Elastic IP from the provided public IPv4 pool (BYOIP).
      - Only applies to newly allocated Elastic IPs; it is not validated when I(reuse_existing_ip_allowed=true).
version_added: "2.9"
type: str
wait_timeout:
description:
- The I(wait_timeout) option does nothing and will be removed in Ansible 2.14.
type: int
extends_documentation_fragment:
- aws
- ec2
author: "Rick Mendes (@rickmendes) <[email protected]>"
notes:
- There may be a delay between the time the EIP is assigned and when
the cloud instance is reachable via the new address. Use wait_for and
pause to delay further playbook execution until the instance is reachable,
if necessary.
- This module returns multiple changed statuses on disassociation or release.
It returns an overall status based on any changes occurring. It also returns
individual changed statuses for disassociation and release.
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
- name: associate an elastic IP with an instance
ec2_eip:
device_id: i-1212f003
ip: 93.184.216.119
- name: associate an elastic IP with a device
ec2_eip:
device_id: eni-c8ad70f3
ip: 93.184.216.119
- name: associate an elastic IP with a device and allow reassociation
ec2_eip:
device_id: eni-c8ad70f3
public_ip: 93.184.216.119
allow_reassociation: true
- name: disassociate an elastic IP from an instance
ec2_eip:
device_id: i-1212f003
ip: 93.184.216.119
state: absent
- name: disassociate an elastic IP with a device
ec2_eip:
device_id: eni-c8ad70f3
ip: 93.184.216.119
state: absent
- name: allocate a new elastic IP and associate it with an instance
ec2_eip:
device_id: i-1212f003
- name: allocate a new elastic IP without associating it to anything
ec2_eip:
state: present
register: eip
- name: output the IP
debug:
msg: "Allocated IP is {{ eip.public_ip }}"
- name: provision new instances with ec2
ec2:
keypair: mykey
instance_type: c1.medium
image: ami-40603AD1
wait: true
group: webserver
count: 3
register: ec2
- name: associate new elastic IPs with each of the instances
ec2_eip:
device_id: "{{ item }}"
loop: "{{ ec2.instance_ids }}"
- name: allocate a new elastic IP inside a VPC in us-west-2
ec2_eip:
region: us-west-2
in_vpc: true
register: eip
- name: output the IP
debug:
msg: "Allocated IP inside a VPC is {{ eip.public_ip }}"
- name: allocate eip - reuse unallocated ips (if found) with FREE tag
ec2_eip:
region: us-east-1
in_vpc: true
reuse_existing_ip_allowed: true
tag_name: FREE
- name: allocate eip - reuse unallocated ips if tag reserved is nope
ec2_eip:
region: us-east-1
in_vpc: true
reuse_existing_ip_allowed: true
tag_name: reserved
tag_value: nope
- name: allocate new eip - from servers given ipv4 pool
ec2_eip:
region: us-east-1
in_vpc: true
public_ipv4_pool: ipv4pool-ec2-0588c9b75a25d1a02
- name: allocate eip from pool - reuse a free ip tagged dev-servers if available, otherwise allocate from the pool
ec2_eip:
region: us-east-1
in_vpc: true
reuse_existing_ip_allowed: true
tag_name: dev-servers
public_ipv4_pool: ipv4pool-ec2-0588c9b75a25d1a02
- name: allocate eip from pool - check if tag reserved_for exists and value is our hostname
ec2_eip:
region: us-east-1
in_vpc: true
reuse_existing_ip_allowed: true
tag_name: reserved_for
tag_value: "{{ inventory_hostname }}"
public_ipv4_pool: ipv4pool-ec2-0588c9b75a25d1a02
'''
RETURN = '''
allocation_id:
description: allocation_id of the elastic ip
returned: on success
type: str
sample: eipalloc-51aa3a6c
public_ip:
description: an elastic ip address
returned: on success
type: str
sample: 52.88.159.209
'''
try:
import botocore.exceptions
except ImportError:
pass # Taken care of by ec2.HAS_BOTO3
from ansible.module_utils.aws.core import AnsibleAWSModule, is_boto3_error_code
from ansible.module_utils.ec2 import AWSRetry, ansible_dict_to_boto3_filter_list, ec2_argument_spec
def associate_ip_and_device(ec2, module, address, private_ip_address, device_id, allow_reassociation, check_mode, is_instance=True):
if address_is_associated_with_device(ec2, module, address, device_id, is_instance):
return {'changed': False}
# If we're in check mode, nothing else to do
if not check_mode:
if is_instance:
try:
params = dict(
InstanceId=device_id,
PrivateIpAddress=private_ip_address,
AllowReassociation=allow_reassociation,
)
                if address['Domain'] == "vpc":
params['AllocationId'] = address['AllocationId']
else:
params['PublicIp'] = address['PublicIp']
res = ec2.associate_address(**params)
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
msg = "Couldn't associate Elastic IP address with instance '{0}'".format(device_id)
module.fail_json_aws(e, msg=msg)
else:
params = dict(
NetworkInterfaceId=device_id,
AllocationId=address['AllocationId'],
AllowReassociation=allow_reassociation,
)
if private_ip_address:
params['PrivateIpAddress'] = private_ip_address
try:
res = ec2.associate_address(aws_retry=True, **params)
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
msg = "Couldn't associate Elastic IP address with network interface '{0}'".format(device_id)
module.fail_json_aws(e, msg=msg)
        if not res:
            module.fail_json(msg='Association failed.')
return {'changed': True}
def disassociate_ip_and_device(ec2, module, address, device_id, check_mode, is_instance=True):
if not address_is_associated_with_device(ec2, module, address, device_id, is_instance):
return {'changed': False}
# If we're in check mode, nothing else to do
if not check_mode:
try:
if address['Domain'] == 'vpc':
res = ec2.disassociate_address(
AssociationId=address['AssociationId'], aws_retry=True
)
else:
res = ec2.disassociate_address(
PublicIp=address['PublicIp'], aws_retry=True
)
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Dissassociation of Elastic IP failed")
return {'changed': True}
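# Both helpers above are idempotent: they return {'changed': False} without
# issuing the associate/disassociate call when the address is already in the
# requested state.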
@AWSRetry.jittered_backoff()
def find_address(ec2, module, public_ip, device_id, is_instance=True):
""" Find an existing Elastic IP address """
filters = []
kwargs = {}
if public_ip:
kwargs["PublicIps"] = [public_ip]
elif device_id:
if is_instance:
filters.append({"Name": 'instance-id', "Values": [device_id]})
else:
filters.append({'Name': 'network-interface-id', "Values": [device_id]})
if len(filters) > 0:
kwargs["Filters"] = filters
elif len(filters) == 0 and public_ip is None:
return None
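    # Note: a public IP that was already released makes describe_addresses
    # raise InvalidAddress.NotFound, which the handler below turns into a
    # hard failure (the behaviour reported in issue 62331).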
try:
addresses = ec2.describe_addresses(**kwargs)
except is_boto3_error_code('InvalidAddress.NotFound') as e:
module.fail_json_aws(e, msg="Couldn't obtain list of existing Elastic IP addresses")
addresses = addresses["Addresses"]
if len(addresses) == 1:
return addresses[0]
elif len(addresses) > 1:
msg = "Found more than one address using args {0}".format(kwargs)
msg += "Addresses found: {0}".format(addresses)
module.fail_json_aws(botocore.exceptions.ClientError, msg=msg)
def address_is_associated_with_device(ec2, module, address, device_id, is_instance=True):
""" Check if the elastic IP is currently associated with the device """
address = find_address(ec2, module, address["PublicIp"], device_id, is_instance)
if address:
if is_instance:
if "InstanceId" in address and address["InstanceId"] == device_id:
return address
else:
if "NetworkInterfaceId" in address and address["NetworkInterfaceId"] == device_id:
return address
return False
def allocate_address(ec2, module, domain, reuse_existing_ip_allowed, check_mode, tag_dict=None, public_ipv4_pool=None):
""" Allocate a new elastic IP address (when needed) and return it """
if reuse_existing_ip_allowed:
filters = []
if not domain:
domain = 'standard'
filters.append({'Name': 'domain', "Values": [domain]})
if tag_dict is not None:
filters += ansible_dict_to_boto3_filter_list(tag_dict)
try:
all_addresses = ec2.describe_addresses(Filters=filters, aws_retry=True)
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Couldn't obtain list of existing Elastic IP addresses")
all_addresses = all_addresses["Addresses"]
if domain == 'vpc':
unassociated_addresses = [a for a in all_addresses
if not a.get('AssociationId', None)]
else:
unassociated_addresses = [a for a in all_addresses
if not a['InstanceId']]
if unassociated_addresses:
return unassociated_addresses[0], False
if public_ipv4_pool:
return allocate_address_from_pool(ec2, module, domain, check_mode, public_ipv4_pool), True
try:
result = ec2.allocate_address(Domain=domain, aws_retry=True), True
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Couldn't allocate Elastic IP address")
return result
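# allocate_address() returns a tuple of (boto3 address dict, changed flag):
# changed is False when an existing unassociated address was reused and True
# when a new address was actually allocated.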
def release_address(ec2, module, address, check_mode):
""" Release a previously allocated elastic IP address """
# If we're in check mode, nothing else to do
if not check_mode:
try:
result = ec2.release_address(AllocationId=address['AllocationId'], aws_retry=True)
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Couldn't release Elastic IP address")
return {'changed': True}
@AWSRetry.jittered_backoff()
def describe_eni_with_backoff(ec2, module, device_id):
try:
return ec2.describe_network_interfaces(NetworkInterfaceIds=[device_id])
except is_boto3_error_code('InvalidNetworkInterfaceID.NotFound') as e:
module.fail_json_aws(e, msg="Couldn't get list of network interfaces.")
def find_device(ec2, module, device_id, is_instance=True):
""" Attempt to find the EC2 instance and return it """
if is_instance:
try:
paginator = ec2.get_paginator('describe_instances')
reservations = list(paginator.paginate(InstanceIds=[device_id]).search('Reservations[]'))
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Couldn't get list of instances")
if len(reservations) == 1:
instances = reservations[0]['Instances']
if len(instances) == 1:
return instances[0]
else:
try:
interfaces = describe_eni_with_backoff(ec2, module, device_id)
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Couldn't get list of network interfaces.")
if len(interfaces) == 1:
return interfaces[0]
def ensure_present(ec2, module, domain, address, private_ip_address, device_id,
reuse_existing_ip_allowed, allow_reassociation, check_mode, is_instance=True):
changed = False
# Return the EIP object since we've been given a public IP
if not address:
if check_mode:
return {'changed': True}
address, changed = allocate_address(ec2, module, domain, reuse_existing_ip_allowed, check_mode)
if device_id:
# Allocate an IP for instance since no public_ip was provided
if is_instance:
instance = find_device(ec2, module, device_id)
if reuse_existing_ip_allowed:
                if instance.get('VpcId') and domain is None:
msg = "You must set 'in_vpc' to true to associate an instance with an existing ip in a vpc"
module.fail_json_aws(botocore.exceptions.ClientError, msg=msg)
# Associate address object (provided or allocated) with instance
assoc_result = associate_ip_and_device(
ec2, module, address, private_ip_address, device_id, allow_reassociation,
check_mode
)
else:
instance = find_device(ec2, module, device_id, is_instance=False)
# Associate address object (provided or allocated) with instance
assoc_result = associate_ip_and_device(
ec2, module, address, private_ip_address, device_id, allow_reassociation,
check_mode, is_instance=False
)
changed = changed or assoc_result['changed']
return {'changed': changed, 'public_ip': address['PublicIp'], 'allocation_id': address['AllocationId']}
def ensure_absent(ec2, module, address, device_id, check_mode, is_instance=True):
if not address:
return {'changed': False}
# disassociating address from instance
if device_id:
if is_instance:
return disassociate_ip_and_device(
ec2, module, address, device_id, check_mode
)
else:
return disassociate_ip_and_device(
ec2, module, address, device_id, check_mode, is_instance=False
)
# releasing address
else:
return release_address(ec2, module, address, check_mode)
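# ensure_absent() only decides between disassociation and release; main()
# combines its 'changed' flag with release_address() to build the overall
# result, so disassociation and release are reported separately.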
def allocate_address_from_pool(ec2, module, domain, check_mode, public_ipv4_pool):
    # type: (EC2Connection, AnsibleModule, str, bool, str) -> Address
""" Overrides boto's allocate_address function to support BYOIP """
params = {}
if domain is not None:
params['Domain'] = domain
if public_ipv4_pool is not None:
params['PublicIpv4Pool'] = public_ipv4_pool
if check_mode:
params['DryRun'] = 'true'
try:
result = ec2.allocate_address(aws_retry=True, **params)
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Couldn't allocate Elastic IP address")
return result
def generate_tag_dict(module, tag_name, tag_value):
# type: (AnsibleModule, str, str) -> Optional[Dict]
""" Generates a dictionary to be passed as a filter to Amazon """
if tag_name and not tag_value:
if tag_name.startswith('tag:'):
            # str.strip() removes characters rather than a prefix, so slice off 'tag:'
            tag_name = tag_name[len('tag:'):]
return {'tag-key': tag_name}
elif tag_name and tag_value:
if not tag_name.startswith('tag:'):
tag_name = 'tag:' + tag_name
return {tag_name: tag_value}
elif tag_value and not tag_name:
module.fail_json(msg="parameters are required together: ('tag_name', 'tag_value')")
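# Illustrative mappings (values are hypothetical):
#   generate_tag_dict(module, 'FREE', None)            -> {'tag-key': 'FREE'}
#   generate_tag_dict(module, 'reserved_for', 'web01') -> {'tag:reserved_for': 'web01'}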
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(dict(
device_id=dict(required=False, aliases=['instance_id']),
public_ip=dict(required=False, aliases=['ip']),
state=dict(required=False, default='present',
choices=['present', 'absent']),
in_vpc=dict(required=False, type='bool', default=False),
reuse_existing_ip_allowed=dict(required=False, type='bool',
default=False),
release_on_disassociation=dict(required=False, type='bool', default=False),
allow_reassociation=dict(type='bool', default=False),
wait_timeout=dict(type='int', removed_in_version='2.14'),
private_ip_address=dict(),
tag_name=dict(),
tag_value=dict(),
public_ipv4_pool=dict()
))
module = AnsibleAWSModule(
argument_spec=argument_spec,
supports_check_mode=True,
required_by={
'private_ip_address': ['device_id'],
},
)
ec2 = module.client('ec2', retry_decorator=AWSRetry.jittered_backoff())
device_id = module.params.get('device_id')
instance_id = module.params.get('instance_id')
public_ip = module.params.get('public_ip')
private_ip_address = module.params.get('private_ip_address')
state = module.params.get('state')
in_vpc = module.params.get('in_vpc')
domain = 'vpc' if in_vpc else None
reuse_existing_ip_allowed = module.params.get('reuse_existing_ip_allowed')
release_on_disassociation = module.params.get('release_on_disassociation')
allow_reassociation = module.params.get('allow_reassociation')
tag_name = module.params.get('tag_name')
tag_value = module.params.get('tag_value')
public_ipv4_pool = module.params.get('public_ipv4_pool')
if instance_id:
warnings = ["instance_id is no longer used, please use device_id going forward"]
is_instance = True
device_id = instance_id
else:
if device_id and device_id.startswith('i-'):
is_instance = True
elif device_id:
if device_id.startswith('eni-') and not in_vpc:
module.fail_json(msg="If you are specifying an ENI, in_vpc must be true")
is_instance = False
tag_dict = generate_tag_dict(module, tag_name, tag_value)
try:
if device_id:
address = find_address(ec2, module, public_ip, device_id, is_instance=is_instance)
else:
address = find_address(ec2, module, public_ip, None)
if state == 'present':
if device_id:
result = ensure_present(
ec2, module, domain, address, private_ip_address, device_id,
reuse_existing_ip_allowed, allow_reassociation,
module.check_mode, is_instance=is_instance
)
else:
if address:
changed = False
else:
address, changed = allocate_address(
ec2, module, domain, reuse_existing_ip_allowed,
module.check_mode, tag_dict, public_ipv4_pool
)
result = {
'changed': changed,
'public_ip': address['PublicIp'],
'allocation_id': address['AllocationId']
}
else:
if device_id:
disassociated = ensure_absent(
ec2, module, address, device_id, module.check_mode, is_instance=is_instance
)
if release_on_disassociation and disassociated['changed']:
released = release_address(ec2, module, address, module.check_mode)
result = {
'changed': True,
'disassociated': disassociated,
'released': released
}
else:
result = {
'changed': disassociated['changed'],
'disassociated': disassociated,
'released': {'changed': False}
}
else:
released = release_address(ec2, module, address, module.check_mode)
result = {
'changed': released['changed'],
'disassociated': {'changed': False},
'released': released
}
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(str(e))
if instance_id:
result['warnings'] = warnings
module.exit_json(**result)
if __name__ == '__main__':
main()
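A minimal sketch, not the merged fix, of how the lookup could treat an already-released address as absent instead of failing on InvalidAddress.NotFound (this assumes the is_boto3_error_code helper from ansible.module_utils.aws.core):
```python
# Sketch only: tolerate a vanished Elastic IP during lookup.
from ansible.module_utils.aws.core import is_boto3_error_code

def find_address_tolerant(ec2, module, public_ip):
    """Return the matching address dict, or None if it no longer exists."""
    try:
        addresses = ec2.describe_addresses(PublicIps=[public_ip])['Addresses']
    except is_boto3_error_code('InvalidAddress.NotFound'):
        # Already released; a state=absent caller can report changed=False.
        return None
    return addresses[0] if addresses else None
```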
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,331 |
ec2_eip throws error if you ensure: absent on a non-existent IP
|
##### SUMMARY
If you try to re-release an elastic IP (ensure: absent) which has already been released, ec2_eip throws an error rather than simply returning "ok".
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/cloud/amazon/ec2_eip.py
##### ANSIBLE VERSION
devel
##### CONFIGURATION
N/A ansible-test
##### OS / ENVIRONMENT
N/A ansible-test
##### STEPS TO REPRODUCE
```
#==================================================================
# Allocation from a pool
- name: allocate a new eip from a pool
ec2_eip:
state: present
in_vpc: yes
public_ipv4_pool: amazon
register: eip
- ec2_eip_info:
register: eip_info
- assert:
that:
- eip is defined
- eip is changed
- eip.public_ip is defined and eip.public_ip != ""
- eip.allocation_id is defined and eip.allocation_id != ""
#==================================================================
# EIP Deletion
- name: Release eip
ec2_eip:
state: absent
public_ip: "{{ eip.public_ip }}"
register: eip_release
- ec2_eip_info:
register: eip_info
- assert:
that:
- eip_release is defined
- eip_release is changed
- name: Rerelease eip (no change)
ec2_eip:
state: absent
public_ip: "{{ eip.public_ip }}"
register: eip_release
- ec2_eip_info:
register: eip_info
- assert:
that:
- eip_release is defined
- eip_release is not changed
```
##### EXPECTED RESULTS
Play completes successfully
##### ACTUAL RESULTS
```
TASK [ec2_eip : Rerelease eip (no change)] *************************************
task path: /root/.ansible/test/tmp/ec2_eip-4v9k8sww-ÅÑŚÌβŁÈ/test/integration/targets/ec2_eip/tasks/main.yml:326
<testhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<testhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<testhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029 `" && echo ansible-tmp-1568552830.3837097-100844260545029="` echo /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029 `" ) && sleep 0'
Using module file /root/ansible/lib/ansible/modules/cloud/amazon/ec2_eip.py
<testhost> PUT /root/.ansible/tmp/ansible-local-126l_udgy92/tmppojiib6x TO /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/AnsiballZ_ec2_eip.py
<testhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/ /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/AnsiballZ_ec2_eip.py && sleep 0'
<testhost> EXEC /bin/sh -c 'ANSIBLE_DEBUG_BOTOCORE_LOGS=True /tmp/python-tkg1ink4-ansible/python /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/AnsiballZ_ec2_eip.py && sleep 0'
<testhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_ec2_eip_payload_jseaw6r8/ansible_ec2_eip_payload.zip/ansible/modules/cloud/amazon/ec2_eip.py", line 317, in find_address
File "/tmp/ansible_ec2_eip_payload_jseaw6r8/ansible_ec2_eip_payload.zip/ansible/module_utils/aws/core.py", line 283, in deciding_wrapper
return unwrapped(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.6/dist-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidAddress.NotFound) when calling the DescribeAddresses operation: Address '52.43.70.125' not found.
fatal: [testhost]: FAILED! => {
"boto3_version": "1.9.204",
"botocore_version": "1.12.204",
"changed": false,
"error": {
"code": "InvalidAddress.NotFound",
"message": "Address '52.43.70.125' not found."
},
"invocation": {
"module_args": {
"allow_reassociation": false,
"aws_access_key": "REDACTED",
"aws_secret_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"debug_botocore_endpoint_logs": true,
"device_id": null,
"ec2_url": null,
"in_vpc": true,
"private_ip_address": null,
"profile": null,
"public_ip": "52.43.70.125",
"public_ipv4_pool": null,
"region": "us-west-2",
"release_on_disassociation": false,
"reuse_existing_ip_allowed": false,
"security_token": null,
"state": "absent",
"tag_name": null,
"tag_value": null,
"validate_certs": true,
"wait_timeout": 300
}
},
"msg": "Couldn't obtain list of existing Elastic IP addresses: An error occurred (InvalidAddress.NotFound) when calling the DescribeAddresses operation: Address '52.43.70.125' not found.",
"resource_actions": [
"ec2:DescribeAddresses"
],
"response_metadata": {
"http_headers": {
"connection": "close",
"date": "Sun, 15 Sep 2019 13:07:11 GMT",
"server": "AmazonEC2",
"transfer-encoding": "chunked"
},
"http_status_code": 400,
"request_id": "c4b65cd8-02de-4503-af3a-0e498db40d53",
"retry_attempts": 0
}
}
```
|
https://github.com/ansible/ansible/issues/62331
|
https://github.com/ansible/ansible/pull/62332
|
f8f76628500052ad3521fbec16c073ae7f99d287
|
b5f484dcc35f2b6adfbf53d075762578b83d942f
| 2019-09-15T15:43:36Z |
python
| 2019-11-13T20:27:35Z |
test/integration/targets/ec2_eip/defaults/main.yml
|
---
aws_region: us-east-1
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,331 |
ec2_eip throws error if you ensure: absent on a non-existent IP
|
##### SUMMARY
If you try to re-release an elastic IP (ensure: absent) which has already been released, ec2_eip throws an error rather than simply returning "ok".
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/cloud/amazon/ec2_eip.py
##### ANSIBLE VERSION
devel
##### CONFIGURATION
N/A ansible-test
##### OS / ENVIRONMENT
N/A ansible-test
##### STEPS TO REPRODUCE
```
#==================================================================
# Allocation from a pool
- name: allocate a new eip from a pool
ec2_eip:
state: present
in_vpc: yes
public_ipv4_pool: amazon
register: eip
- ec2_eip_info:
register: eip_info
- assert:
that:
- eip is defined
- eip is changed
- eip.public_ip is defined and eip.public_ip != ""
- eip.allocation_id is defined and eip.allocation_id != ""
#==================================================================
# EIP Deletion
- name: Release eip
ec2_eip:
state: absent
public_ip: "{{ eip.public_ip }}"
register: eip_release
- ec2_eip_info:
register: eip_info
- assert:
that:
- eip_release is defined
- eip_release is changed
- name: Rerelease eip (no change)
ec2_eip:
state: absent
public_ip: "{{ eip.public_ip }}"
register: eip_release
- ec2_eip_info:
register: eip_info
- assert:
that:
- eip_release is defined
- eip_release is not changed
```
##### EXPECTED RESULTS
Play completes successfully
##### ACTUAL RESULTS
```
TASK [ec2_eip : Rerelease eip (no change)] *************************************
task path: /root/.ansible/test/tmp/ec2_eip-4v9k8sww-ÅÑŚÌβŁÈ/test/integration/targets/ec2_eip/tasks/main.yml:326
<testhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<testhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<testhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029 `" && echo ansible-tmp-1568552830.3837097-100844260545029="` echo /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029 `" ) && sleep 0'
Using module file /root/ansible/lib/ansible/modules/cloud/amazon/ec2_eip.py
<testhost> PUT /root/.ansible/tmp/ansible-local-126l_udgy92/tmppojiib6x TO /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/AnsiballZ_ec2_eip.py
<testhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/ /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/AnsiballZ_ec2_eip.py && sleep 0'
<testhost> EXEC /bin/sh -c 'ANSIBLE_DEBUG_BOTOCORE_LOGS=True /tmp/python-tkg1ink4-ansible/python /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/AnsiballZ_ec2_eip.py && sleep 0'
<testhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1568552830.3837097-100844260545029/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_ec2_eip_payload_jseaw6r8/ansible_ec2_eip_payload.zip/ansible/modules/cloud/amazon/ec2_eip.py", line 317, in find_address
File "/tmp/ansible_ec2_eip_payload_jseaw6r8/ansible_ec2_eip_payload.zip/ansible/module_utils/aws/core.py", line 283, in deciding_wrapper
return unwrapped(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.6/dist-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidAddress.NotFound) when calling the DescribeAddresses operation: Address '52.43.70.125' not found.
fatal: [testhost]: FAILED! => {
"boto3_version": "1.9.204",
"botocore_version": "1.12.204",
"changed": false,
"error": {
"code": "InvalidAddress.NotFound",
"message": "Address '52.43.70.125' not found."
},
"invocation": {
"module_args": {
"allow_reassociation": false,
"aws_access_key": "REDACTED",
"aws_secret_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"debug_botocore_endpoint_logs": true,
"device_id": null,
"ec2_url": null,
"in_vpc": true,
"private_ip_address": null,
"profile": null,
"public_ip": "52.43.70.125",
"public_ipv4_pool": null,
"region": "us-west-2",
"release_on_disassociation": false,
"reuse_existing_ip_allowed": false,
"security_token": null,
"state": "absent",
"tag_name": null,
"tag_value": null,
"validate_certs": true,
"wait_timeout": 300
}
},
"msg": "Couldn't obtain list of existing Elastic IP addresses: An error occurred (InvalidAddress.NotFound) when calling the DescribeAddresses operation: Address '52.43.70.125' not found.",
"resource_actions": [
"ec2:DescribeAddresses"
],
"response_metadata": {
"http_headers": {
"connection": "close",
"date": "Sun, 15 Sep 2019 13:07:11 GMT",
"server": "AmazonEC2",
"transfer-encoding": "chunked"
},
"http_status_code": 400,
"request_id": "c4b65cd8-02de-4503-af3a-0e498db40d53",
"retry_attempts": 0
}
}
```
|
https://github.com/ansible/ansible/issues/62331
|
https://github.com/ansible/ansible/pull/62332
|
f8f76628500052ad3521fbec16c073ae7f99d287
|
b5f484dcc35f2b6adfbf53d075762578b83d942f
| 2019-09-15T15:43:36Z |
python
| 2019-11-13T20:27:35Z |
test/integration/targets/ec2_eip/tasks/main.yml
|
---
- name: Integration testing for ec2_eip
block:
- name: set up aws connection info
set_fact:
aws_connection_info: &aws_connection_info
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
security_token: "{{ security_token }}"
region: "{{ aws_region }}"
no_log: True
- name: Allocate a new eip - attempt reusing unallocated ones
ec2_eip:
state: present
in_vpc: yes
reuse_existing_ip_allowed: yes
<<: *aws_connection_info
register: eip
- assert:
that:
- eip is defined
- eip.public_ip is defined and eip.public_ip != ""
- eip.allocation_id is defined and eip.allocation_id != ""
- name: Allocate a new eip
ec2_eip:
state: present
in_vpc: yes
<<: *aws_connection_info
register: new_eip
- assert:
that:
- new_eip is defined
- new_eip is changed
- new_eip.public_ip is defined and new_eip.public_ip != ""
- new_eip.allocation_id is defined and new_eip.allocation_id != ""
- name: Match an existing eip (changed == false)
ec2_eip:
state: present
in_vpc: yes
<<: *aws_connection_info
public_ip: "{{ eip.public_ip }}"
register: existing_eip
- assert:
that:
- existing_eip is defined
- existing_eip is not changed
- existing_eip.public_ip is defined and existing_eip.public_ip != ""
- existing_eip.allocation_id is defined and existing_eip.allocation_id != ""
- name: attempt reusing an existing eip with a tag (or allocate a new one)
ec2_eip:
state: present
in_vpc: yes
<<: *aws_connection_info
reuse_existing_ip_allowed: yes
tag_name: Team
register: tagged_eip
- assert:
that:
- tagged_eip is defined
- tagged_eip.public_ip is defined and tagged_eip.public_ip != ""
- tagged_eip.allocation_id is defined and tagged_eip.allocation_id != ""
- name: attempt reusing an existing eip with a tag and it's value (or allocate a new one)
ec2_eip:
state: present
in_vpc: yes
<<: *aws_connection_info
public_ip: "{{ eip.public_ip }}"
reuse_existing_ip_allowed: yes
tag_name: Team
tag_value: Backend
register: backend_eip
- assert:
that:
- backend_eip is defined
- backend_eip.public_ip is defined and backend_eip.public_ip != ""
- backend_eip.allocation_id is defined and backend_eip.allocation_id != ""
- name: attempt reusing an existing eip with a tag and it's value (or allocate a new one from pool)
ec2_eip:
state: present
in_vpc: yes
<<: *aws_connection_info
reuse_existing_ip_allowed: yes
tag_name: Team
tag_value: Backend
public_ipv4_pool: amazon
register: amazon_eip
- assert:
that:
- amazon_eip is defined
- amazon_eip.public_ip is defined and amazon_eip.public_ip != ""
- amazon_eip.allocation_id is defined and amazon_eip.allocation_id != ""
- name: allocate a new eip from a pool
ec2_eip:
state: present
in_vpc: yes
<<: *aws_connection_info
public_ipv4_pool: amazon
register: pool_eip
- assert:
that:
- pool_eip is defined
- pool_eip is changed
- pool_eip.public_ip is defined and pool_eip.public_ip != ""
- pool_eip.allocation_id is defined and pool_eip.allocation_id != ""
always:
- debug:
msg: "{{ item }}"
when: item is defined and item.public_ip is defined and item.allocation_id is defined
loop:
- eip
- new_eip
- pool_eip
- tagged_eip
- backend_eip
- amazon_eip
- name: Cleanup newly allocated eip
ec2_eip:
state: absent
public_ip: "{{ item.public_ip }}"
in_vpc: yes
<<: *aws_connection_info
when: item is defined and item is changed and item.public_ip is defined and item.public_ip != ""
loop:
- "{{ eip }}"
- "{{ new_eip }}"
- "{{ pool_eip }}"
- "{{ tagged_eip }}"
- "{{ backend_eip }}"
- "{{ amazon_eip }}"
...
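The suite above allocates and cleans up addresses but never exercises the re-release path from the issue report; a hedged sketch of such a check, reusing the aws_connection_info anchor and the new_eip result registered above, might look like:
```yaml
- name: Release eip
  ec2_eip:
    state: absent
    public_ip: "{{ new_eip.public_ip }}"
    <<: *aws_connection_info
  register: eip_release

- name: Rerelease eip (expect ok, not an error)
  ec2_eip:
    state: absent
    public_ip: "{{ new_eip.public_ip }}"
    <<: *aws_connection_info
  register: eip_rerelease

- assert:
    that:
      - eip_release is changed
      - eip_rerelease is not changed
```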
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,507 |
vmware_guest has UptCompatibilityEnabled set to true by default with no option to change
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The vmware_guest module has UptCompatibilityEnabled set to true by default, with no option to change it. This is the "checkbox" in vSphere for "DirectPath I/O." There should be an option to change this value as part of the networks parameter in the vmware_guest module.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/gforster/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/gforster/.local/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Jul 9 2019, 16:51:35) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a VM using the vmware_guest module.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: PROVISION | Create VM
vmware_guest:
validate_certs: "{{ validate_certs }}"
hostname: "{{ vcenter_host }}"
username: '{{ vcenter_username }}'
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter }}"
name: "{{ item }}"
guest_id: "{{ vmware_guest_id }}"
folder: "{{ vmware_folder }}"
state: poweredon
annotation: '{{ vmnotes|default("Provisioning new VM") }}'
cluster: "{{ vmware_cluster }}"
hardware:
num_cpus: "{{ cpu }}"
memory_mb: "{{ mem_mb }}"
hotadd_cpu: true
hotremove_cpu: true
hotadd_memory: true
scsi: lsilogicsas
disk:
- size_gb: "{{ os_disk }}"
type: thick
datastore: "{{ vmware_datastore }}"
# autoselect_datastore: true
networks:
- name: "{{ pxe_vlan }}"
device_type: vmxnet3
wait_for_ip_address: "{{ wait_for_ip }}"
register: new_vm
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
DirectPath I/O (UptCompatibilityEnabled) should not be enabled by default, and/or there should be a parameter to set it.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
DirectPath I/O (UptCompatibilityEnabled) is enabled by default with no parameter to change it.
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/63507
|
https://github.com/ansible/ansible/pull/63610
|
4e7c0b3e21d425b1e31842011e687a5d65fed92d
|
3a9650df98b7e0219f060aa5ec775f22d4170f10
| 2019-10-15T11:46:39Z |
python
| 2019-11-13T21:12:52Z |
lib/ansible/modules/cloud/vmware/vmware_guest_network.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# Copyright: (c) 2019, Diane Wang <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: vmware_guest_network
short_description: Manage network adapters of specified virtual machine in given vCenter infrastructure
description:
- This module is used to add, reconfigure, remove network adapter of given virtual machine.
- All parameters and VMware object names are case sensitive.
version_added: '2.9'
author:
- Diane Wang (@Tomorrow9) <[email protected]>
notes:
- Tested on vSphere 6.0, 6.5 and 6.7
requirements:
- "python >= 2.6"
- PyVmomi
options:
name:
description:
- Name of the virtual machine.
- This is a required parameter, if parameter C(uuid) or C(moid) is not supplied.
type: str
uuid:
description:
- UUID of the instance to gather info if known, this is VMware's unique identifier.
- This is a required parameter, if parameter C(name) or C(moid) is not supplied.
type: str
use_instance_uuid:
description:
- Whether to use the VMware instance UUID rather than the BIOS UUID.
default: False
type: bool
version_added: '2.10'
moid:
description:
- Managed Object ID of the instance to manage if known, this is a unique identifier only within a single vCenter instance.
- This is required if C(name) or C(uuid) is not supplied.
type: str
folder:
description:
- Destination folder, absolute or relative path to find an existing guest.
- This is a required parameter, only if multiple VMs are found with same name.
- The folder should include the datacenter. ESXi server's datacenter is ha-datacenter.
- 'Examples:'
- ' folder: /ha-datacenter/vm'
- ' folder: ha-datacenter/vm'
- ' folder: /datacenter1/vm'
- ' folder: datacenter1/vm'
- ' folder: /datacenter1/vm/folder1'
- ' folder: datacenter1/vm/folder1'
- ' folder: /folder1/datacenter1/vm'
- ' folder: folder1/datacenter1/vm'
- ' folder: /folder1/datacenter1/vm/folder2'
type: str
cluster:
description:
- The name of cluster where the virtual machine will run.
- This is a required parameter, if C(esxi_hostname) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
type: str
esxi_hostname:
description:
- The ESXi hostname where the virtual machine will run.
- This is a required parameter, if C(cluster) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
type: str
datacenter:
default: ha-datacenter
description:
- The datacenter name to which virtual machine belongs to.
type: str
gather_network_info:
description:
- If set to C(True), return settings of all network adapters, other parameters are ignored.
- If set to C(False), will add, reconfigure or remove network adapters according to the parameters in C(networks).
type: bool
default: False
aliases: [ gather_network_facts ]
networks:
type: list
description:
- A list of network adapters.
- C(mac) or C(label) or C(device_type) is required to reconfigure or remove an existing network adapter.
- 'If there are multiple network adapters with the same C(device_type), set C(label) or C(mac) to match
one of them, otherwise changes are applied to all network adapters with the specified C(device_type).'
- 'If all are set, C(mac), C(label) and C(device_type) are evaluated in that order of precedence, from greatest to least.'
- 'Valid attributes are:'
- ' - C(mac) (string): MAC address of the existing network adapter to be reconfigured or removed.'
- ' - C(label) (string): Label of the existing network adapter to be reconfigured or removed, e.g., "Network adapter 1".'
- ' - C(device_type) (string): Valid virtual network device types are:
C(e1000), C(e1000e), C(pcnet32), C(vmxnet2), C(vmxnet3) (default), C(sriov).
Used to add a new network adapter, or to reconfigure or remove an existing network adapter of this type.
If C(mac) and C(label) are not specified, or no network adapter is found by C(mac) or C(label), this parameter is used.'
- ' - C(name) (string): Name of the portgroup or distributed virtual portgroup for this interface.
When specifying distributed virtual portgroup make sure given C(esxi_hostname) or C(cluster) is associated with it.'
- ' - C(vlan) (integer): VLAN number for this interface.'
- ' - C(dvswitch_name) (string): Name of the distributed vSwitch.
This value is required if multiple distributed portgroups exists with the same name.'
- ' - C(state) (string): State of the network adapter.'
- ' If set to C(present), then will do reconfiguration for the specified network adapter.'
- ' If set to C(new), then will add the specified network adapter.'
- ' If set to C(absent), then will remove this network adapter.'
- ' - C(manual_mac) (string): Manually specified MAC address of the network adapter when creating or reconfiguring.
If not specified when creating a new network adapter, the MAC address is generated automatically.
When reconfiguring the MAC address, the VM should be in the powered-off state.'
- ' - C(connected) (bool): Indicates that virtual network adapter connects to the associated virtual machine.'
- ' - C(start_connected) (bool): Indicates that virtual network adapter starts with associated virtual machine powers on.'
- ' - C(directpath_io) (bool): If set, Universal Pass-Through (UPT or DirectPath I/O) will be enabled on the network adapter.
UPT is only compatible with the vmxnet3 adapter.'
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = '''
- name: Change network adapter settings of virtual machine
vmware_guest_network:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter_name }}"
validate_certs: no
name: test-vm
gather_network_info: false
networks:
- name: "VM Network"
state: new
manual_mac: "00:50:56:11:22:33"
- state: present
device_type: e1000e
manual_mac: "00:50:56:44:55:66"
- state: present
label: "Network adapter 3"
connected: false
- state: absent
mac: "00:50:56:44:55:77"
delegate_to: localhost
register: network_info
- name: Change network adapter settings of virtual machine using MoID
vmware_guest_network:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter_name }}"
validate_certs: no
moid: vm-42
gather_network_info: false
networks:
- state: absent
mac: "00:50:56:44:55:77"
delegate_to: localhost
- name: Change network adapter settings of virtual machine using instance UUID
vmware_guest_network:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter_name }}"
validate_certs: no
uuid: 5003b4f5-c705-2f37-ccf6-dfc0b40afeb7
use_instance_uuid: True
gather_network_info: false
networks:
- state: absent
mac: "00:50:56:44:55:77"
delegate_to: localhost
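# Hypothetical example (C(directpath_io) is documented above but has no
# example here): toggle DirectPath I/O on an existing vmxnet3 adapter.
- name: Enable DirectPath I/O on a vmxnet3 network adapter
  vmware_guest_network:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    datacenter: "{{ datacenter_name }}"
    validate_certs: no
    name: test-vm
    networks:
      - state: present
        label: "Network adapter 2"
        directpath_io: True
  delegate_to: localhost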
'''
RETURN = """
network_data:
description: metadata about the virtual machine's network adapter after managing them
returned: always
type: dict
sample: {
"0": {
"label": "Network Adapter 1",
"name": "VM Network",
"device_type": "E1000E",
"directpath_io": "N/A",
"mac_addr": "00:50:56:89:dc:05",
"unit_number": 7,
"wake_onlan": false,
"allow_guest_ctl": true,
"connected": true,
"start_connected": true,
},
"1": {
"label": "Network Adapter 2",
"name": "VM Network",
"device_type": "VMXNET3",
"directpath_io": true,
"mac_addr": "00:50:56:8d:93:8c",
"unit_number": 8,
"start_connected": true,
"wake_on_lan": true,
"connected": true,
}
}
"""
try:
from pyVmomi import vim
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.network import is_mac
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec, wait_for_task, get_all_objs, get_parent_datacenter
class PyVmomiHelper(PyVmomi):
def __init__(self, module):
super(PyVmomiHelper, self).__init__(module)
self.change_detected = False
self.config_spec = vim.vm.ConfigSpec()
self.config_spec.deviceChange = []
self.nic_device_type = dict(
pcnet32=vim.vm.device.VirtualPCNet32,
vmxnet2=vim.vm.device.VirtualVmxnet2,
vmxnet3=vim.vm.device.VirtualVmxnet3,
e1000=vim.vm.device.VirtualE1000,
e1000e=vim.vm.device.VirtualE1000e,
sriov=vim.vm.device.VirtualSriovEthernetCard,
)
def get_device_type(self, device_type=None):
""" Get network adapter device type """
if device_type and device_type in list(self.nic_device_type.keys()):
return self.nic_device_type[device_type]()
else:
self.module.fail_json(msg='Invalid network device_type %s' % device_type)
def get_network_device(self, vm=None, mac=None, device_type=None, device_label=None):
"""
Get network adapter
"""
nic_devices = []
nic_device = None
if vm is None:
if device_type:
return nic_devices
else:
return nic_device
for device in vm.config.hardware.device:
if mac:
if isinstance(device, vim.vm.device.VirtualEthernetCard):
if device.macAddress == mac:
nic_device = device
break
elif device_type:
if isinstance(device, self.nic_device_type[device_type]):
nic_devices.append(device)
elif device_label:
if isinstance(device, vim.vm.device.VirtualEthernetCard):
if device.deviceInfo.label == device_label:
nic_device = device
break
if device_type:
return nic_devices
else:
return nic_device
def get_network_device_by_mac(self, vm=None, mac=None):
""" Get network adapter with the specified mac address"""
return self.get_network_device(vm=vm, mac=mac)
def get_network_devices_by_type(self, vm=None, device_type=None):
""" Get network adapter list with the name type """
return self.get_network_device(vm=vm, device_type=device_type)
def get_network_device_by_label(self, vm=None, device_label=None):
""" Get network adapter with the specified label """
return self.get_network_device(vm=vm, device_label=device_label)
def create_network_adapter(self, device_info):
nic = vim.vm.device.VirtualDeviceSpec()
nic.device = self.get_device_type(device_type=device_info.get('device_type', 'vmxnet3'))
nic.device.deviceInfo = vim.Description()
nic.device.deviceInfo.summary = device_info['name']
nic.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
nic.device.backing.deviceName = device_info['name']
nic.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
nic.device.connectable.startConnected = device_info.get('start_connected', True)
nic.device.connectable.allowGuestControl = True
nic.device.connectable.connected = device_info.get('connected', True)
if 'manual_mac' in device_info:
nic.device.addressType = 'manual'
nic.device.macAddress = device_info['manual_mac']
else:
nic.device.addressType = 'generated'
return nic
def get_network_info(self, vm_obj):
network_info = dict()
if vm_obj is None:
return network_info
nic_index = 0
for nic in vm_obj.config.hardware.device:
nic_type = None
directpath_io = 'N/A'
if isinstance(nic, vim.vm.device.VirtualPCNet32):
nic_type = 'PCNet32'
elif isinstance(nic, vim.vm.device.VirtualVmxnet2):
nic_type = 'VMXNET2'
elif isinstance(nic, vim.vm.device.VirtualVmxnet3):
nic_type = 'VMXNET3'
directpath_io = True
elif isinstance(nic, vim.vm.device.VirtualE1000):
nic_type = 'E1000'
elif isinstance(nic, vim.vm.device.VirtualE1000e):
nic_type = 'E1000E'
elif isinstance(nic, vim.vm.device.VirtualSriovEthernetCard):
nic_type = 'SriovEthernetCard'
if nic_type is not None:
network_info[nic_index] = dict(
device_type=nic_type,
label=nic.deviceInfo.label,
name=nic.deviceInfo.summary,
mac_addr=nic.macAddress,
unit_number=nic.unitNumber,
wake_onlan=nic.wakeOnLanEnabled,
allow_guest_ctl=nic.connectable.allowGuestControl,
connected=nic.connectable.connected,
start_connected=nic.connectable.startConnected,
directpath_io=directpath_io
)
nic_index += 1
return network_info
def sanitize_network_params(self):
network_list = []
valid_state = ['new', 'present', 'absent']
if len(self.params['networks']) != 0:
for network in self.params['networks']:
if 'state' not in network or network['state'].lower() not in valid_state:
self.module.fail_json(msg="Network adapter state not specified or invalid: '%s', valid values: "
"%s" % (network.get('state', ''), valid_state))
# add new network adapter but no name specified
if network['state'].lower() == 'new' and 'name' not in network and 'vlan' not in network:
self.module.fail_json(msg="Please specify at least network name or VLAN name for adding new network adapter.")
if network['state'].lower() == 'new' and 'mac' in network:
self.module.fail_json(msg="networks.mac is used for vNIC reconfigure, but networks.state is set to 'new'.")
if network['state'].lower() == 'present' and 'mac' not in network and 'label' not in network and 'device_type' not in network:
self.module.fail_json(msg="Should specify 'mac', 'label' or 'device_type' parameter to reconfigure network adapter")
if 'connected' in network:
if not isinstance(network['connected'], bool):
self.module.fail_json(msg="networks.connected parameter should be boolean.")
if network['state'].lower() == 'new' and not network['connected']:
network['start_connected'] = False
if 'start_connected' in network:
if not isinstance(network['start_connected'], bool):
self.module.fail_json(msg="networks.start_connected parameter should be boolean.")
if network['state'].lower() == 'new' and not network['start_connected']:
network['connected'] = False
# specified network does not exist
if 'name' in network and not self.network_exists_by_name(network['name']):
self.module.fail_json(msg="Network '%(name)s' does not exist." % network)
elif 'vlan' in network:
objects = get_all_objs(self.content, [vim.dvs.DistributedVirtualPortgroup])
dvps = [x for x in objects if to_text(get_parent_datacenter(x).name) == to_text(self.params['datacenter'])]
for dvp in dvps:
if hasattr(dvp.config.defaultPortConfig, 'vlan') and \
isinstance(dvp.config.defaultPortConfig.vlan.vlanId, int) and \
str(dvp.config.defaultPortConfig.vlan.vlanId) == str(network['vlan']):
network['name'] = dvp.config.name
break
if 'dvswitch_name' in network and \
dvp.config.distributedVirtualSwitch.name == network['dvswitch_name'] and \
dvp.config.name == network['vlan']:
network['name'] = dvp.config.name
break
if dvp.config.name == network['vlan']:
network['name'] = dvp.config.name
break
else:
self.module.fail_json(msg="VLAN '%(vlan)s' does not exist." % network)
if 'device_type' in network and network['device_type'] not in list(self.nic_device_type.keys()):
self.module.fail_json(msg="Device type specified '%s' is invalid. "
"Valid types %s " % (network['device_type'], list(self.nic_device_type.keys())))
if ('mac' in network and not is_mac(network['mac'])) or \
('manual_mac' in network and not is_mac(network['manual_mac'])):
self.module.fail_json(msg="Device MAC address '%s' or manual set MAC address %s is invalid. "
"Please provide correct MAC address." % (network['mac'], network['manual_mac']))
network_list.append(network)
return network_list
def get_network_config_spec(self, vm_obj, network_list):
# create network adapter config spec for adding, editing, removing
for network in network_list:
# add new network adapter
if network['state'].lower() == 'new':
nic_spec = self.create_network_adapter(network)
nic_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
self.change_detected = True
self.config_spec.deviceChange.append(nic_spec)
# reconfigure network adapter or remove network adapter
else:
nic_devices = []
if 'mac' in network:
nic = self.get_network_device_by_mac(vm_obj, mac=network['mac'])
if nic is not None:
nic_devices.append(nic)
if 'label' in network and len(nic_devices) == 0:
nic = self.get_network_device_by_label(vm_obj, device_label=network['label'])
if nic is not None:
nic_devices.append(nic)
if 'device_type' in network and len(nic_devices) == 0:
nic_devices = self.get_network_devices_by_type(vm_obj, device_type=network['device_type'])
if len(nic_devices) != 0:
for nic_device in nic_devices:
nic_spec = vim.vm.device.VirtualDeviceSpec()
if network['state'].lower() == 'present':
nic_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
nic_spec.device = nic_device
if 'start_connected' in network and nic_device.connectable.startConnected != network['start_connected']:
nic_device.connectable.startConnected = network['start_connected']
self.change_detected = True
if 'connected' in network and nic_device.connectable.connected != network['connected']:
nic_device.connectable.connected = network['connected']
self.change_detected = True
if 'name' in network and nic_device.deviceInfo.summary != network['name']:
nic_device.deviceInfo.summary = network['name']
self.change_detected = True
if 'manual_mac' in network and nic_device.macAddress != network['manual_mac']:
if vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
self.module.fail_json(msg='Expected power state is poweredOff to reconfigure MAC address')
nic_device.addressType = 'manual'
nic_device.macAddress = network['manual_mac']
self.change_detected = True
if 'directpath_io' in network:
if isinstance(nic_device, vim.vm.device.VirtualVmxnet3):
if nic_device.uptCompatibilityEnabled != network['directpath_io']:
nic_device.uptCompatibilityEnabled = network['directpath_io']
self.change_detected = True
else:
self.module.fail_json(msg='UPT is only compatible with the vmxnet3 adapter.'
+ ' Clients can set this property enabled or disabled only when the ethernet virtual device is vmxnet3.')
if self.change_detected:
self.config_spec.deviceChange.append(nic_spec)
elif network['state'].lower() == 'absent':
nic_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.remove
nic_spec.device = nic_device
self.change_detected = True
self.config_spec.deviceChange.append(nic_spec)
else:
self.module.fail_json(msg='Unable to find the specified network adapter: %s' % network)
def reconfigure_vm_network(self, vm_obj):
network_list = self.sanitize_network_params()
# gather network adapter info only
if (self.params['gather_network_info'] is not None and self.params['gather_network_info']) or len(network_list) == 0:
results = {'changed': False, 'failed': False, 'network_data': self.get_network_info(vm_obj)}
# do reconfigure then gather info
else:
self.get_network_config_spec(vm_obj, network_list)
try:
task = vm_obj.ReconfigVM_Task(spec=self.config_spec)
wait_for_task(task)
except vim.fault.InvalidDeviceSpec as e:
self.module.fail_json(msg="Failed to configure network adapter on given virtual machine due to invalid"
" device spec : %s" % to_native(e.msg),
details="Please check ESXi server logs for more details.")
except vim.fault.RestrictedVersion as e:
self.module.fail_json(msg="Failed to reconfigure virtual machine due to"
" product versioning restrictions: %s" % to_native(e.msg))
if task.info.state == 'error':
results = {'changed': self.change_detected, 'failed': True, 'msg': task.info.error.msg}
else:
network_info = self.get_network_info(vm_obj)
results = {'changed': self.change_detected, 'failed': False, 'network_data': network_info}
return results
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
name=dict(type='str'),
uuid=dict(type='str'),
use_instance_uuid=dict(type='bool', default=False),
moid=dict(type='str'),
folder=dict(type='str'),
datacenter=dict(type='str', default='ha-datacenter'),
esxi_hostname=dict(type='str'),
cluster=dict(type='str'),
gather_network_info=dict(type='bool', default=False, aliases=['gather_network_facts']),
networks=dict(type='list', default=[])
)
module = AnsibleModule(
argument_spec=argument_spec,
required_one_of=[
['name', 'uuid', 'moid']
]
)
pyv = PyVmomiHelper(module)
vm = pyv.get_vm()
if not vm:
vm_id = (module.params.get('uuid') or module.params.get('name') or module.params.get('moid'))
module.fail_json(msg='Unable to find the specified virtual machine using %s' % vm_id)
result = pyv.reconfigure_vm_network(vm)
if result['failed']:
module.fail_json(**result)
else:
module.exit_json(**result)
if __name__ == '__main__':
main()
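The module above applies C(directpath_io) only on the reconfigure path in get_network_config_spec(); a minimal sketch, assuming the same pyVmomi uptCompatibilityEnabled attribute, of honoring it at creation time inside create_network_adapter():
```python
# Sketch only (hypothetical extension): run after nic.device is built
# in create_network_adapter().
if 'directpath_io' in device_info:
    if isinstance(nic.device, vim.vm.device.VirtualVmxnet3):
        # uptCompatibilityEnabled is the vSphere property behind DirectPath I/O.
        nic.device.uptCompatibilityEnabled = device_info['directpath_io']
    else:
        self.module.fail_json(msg='directpath_io is only supported on vmxnet3 adapters.')
```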
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,507 |
vmware_guest has UptCompatibilityEnabled set to true by default with no option to change
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The vmware_guest module has UptCompatibilityEnabled set to true by default, with no option to change it. This is the "checkbox" in vSphere for "DirectPath I/O." There should be an option to change this value as part of the networks parameter in the vmware_guest module.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/gforster/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/gforster/.local/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Jul 9 2019, 16:51:35) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a VM using the vmware_guest module.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: PROVISION | Create VM
vmware_guest:
validate_certs: "{{ validate_certs }}"
hostname: "{{ vcenter_host }}"
username: '{{ vcenter_username }}'
password: "{{ vcenter_password }}"
datacenter: "{{ datacenter }}"
name: "{{ item }}"
guest_id: "{{ vmware_guest_id }}"
folder: "{{ vmware_folder }}"
state: poweredon
annotation: '{{ vmnotes|default("Provisioning new VM") }}'
cluster: "{{ vmware_cluster }}"
hardware:
num_cpus: "{{ cpu }}"
memory_mb: "{{ mem_mb }}"
hotadd_cpu: true
hotremove_cpu: true
hotadd_memory: true
scsi: lsilogicsas
disk:
- size_gb: "{{ os_disk }}"
type: thick
datastore: "{{ vmware_datastore }}"
# autoselect_datastore: true
networks:
- name: "{{ pxe_vlan }}"
device_type: vmxnet3
wait_for_ip_address: "{{ wait_for_ip }}"
register: new_vm
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
DirectPath I/O (UptCompatibilityEnabled) should not be enabled by default, and/or there should be a parameter to set it.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
DirectPath I/O (UptCompatibilityEnabled) is enabled by default with no parameter to change it.
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/63507
|
https://github.com/ansible/ansible/pull/63610
|
4e7c0b3e21d425b1e31842011e687a5d65fed92d
|
3a9650df98b7e0219f060aa5ec775f22d4170f10
| 2019-10-15T11:46:39Z |
python
| 2019-11-13T21:12:52Z |
test/integration/targets/vmware_guest_network/tasks/main.yml
|
# Test code for the vmware_guest_network module
# Copyright: (c) 2019, Diane Wang (Tomorrow9) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
- when: vcsim is not defined
block:
- import_role:
name: prepare_vmware_tests
vars:
setup_attach_host: true
setup_datastore: true
- name: Create VMs
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ dc1 }}"
validate_certs: no
folder: '/DC0/vm/F0'
name: test_vm1
state: poweredon
guest_id: centos7_64Guest
disk:
- size_gb: 1
type: thin
datastore: '{{ ds2 }}'
hardware:
version: latest
memory_mb: 1024
num_cpus: 1
scsi: paravirtual
cdrom:
type: iso
iso_path: "[{{ ds1 }}] fedora.iso"
networks:
- name: VM Network
- vmware_guest_tools_wait:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
name: test_vm1
- name: gather network adapters' facts of the virtual machine
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
gather_network_info: true
register: netadapter_info
- debug: var=netadapter_info
- name: get number of existing network adapters
set_fact:
netadapter_num: "{{ netadapter_info.network_data | length }}"
- name: add new network adapters to virtual machine
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
networks:
- name: "VM Network"
state: new
device_type: e1000e
manual_mac: "aa:50:56:58:59:60"
connected: True
- name: "VM Network"
state: new
connected: True
device_type: vmxnet3
manual_mac: "aa:50:56:58:59:61"
register: add_netadapter
- debug: var=add_netadapter
- name: assert the new network adapters were added to VM
assert:
that:
- add_netadapter is changed
- "{{ add_netadapter.network_data | length | int }} == {{ netadapter_num | int + 2 }}"
- name: delete one specified network adapter
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
networks:
- state: absent
mac: "aa:50:56:58:59:60"
register: del_netadapter
- debug: var=del_netadapter
- name: assert the network adapter was removed
assert:
that:
- del_netadapter is changed
- "{{ del_netadapter.network_data | length | int }} == {{ netadapter_num | int + 1 }}"
- name: get instance uuid of virtual machines
vmware_guest_info:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
datacenter: '{{ dc1 }}'
register: guest_info
- set_fact: vm1_instance_uuid="{{ guest_info['instance']['instance_uuid'] }}"
- name: add new network adapters to virtual machine with instance uuid
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
uuid: '{{ vm1_instance_uuid }}'
use_instance_uuid: True
networks:
- name: "VM Network"
state: new
connected: True
device_type: e1000e
manual_mac: "bb:50:56:58:59:60"
register: add_netadapter_instanceuuid
- debug: var=add_netadapter_instanceuuid
- name: assert the new network adapters were added to VM
assert:
that:
- add_netadapter_instanceuuid is changed
- "{{ add_netadapter_instanceuuid.network_data | length | int }} == {{ netadapter_num | int + 2 }}"
- name: delete again one specified network adapter
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
networks:
- state: absent
mac: "bb:50:56:58:59:60"
register: del_again_netadapter
- debug: var=del_again_netadapter
- name: assert the network adapter was removed
assert:
that:
- del_again_netadapter is changed
- "{{ del_again_netadapter.network_data | length | int }} == {{ netadapter_num | int + 1 }}"
- name: disable DirectPath I/O on a Vmxnet3 adapter
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: "{{ virtual_machines[0].name }}"
networks:
- state: present
mac: "00:50:56:58:59:61"
directpath_io: False
register: disable_directpath_io
- debug: var=disable_directpath_io
- name: enable DirectPath I/O on a Vmxnet3 adapter
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: "{{ virtual_machines[0].name }}"
networks:
- state: present
mac: "00:50:56:58:59:61"
directpath_io: True
register: enable_directpath_io
- debug: var=enable_directpath_io
- name: disconnect one specified network adapter
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
networks:
- state: present
mac: "aa:50:56:58:59:61"
connected: false
register: disc_netadapter
- debug: var=disc_netadapter
- name: assert the network adapter was disconnected
assert:
that:
- disc_netadapter is changed
- "{{ disc_netadapter.network_data[netadapter_num]['connected'] }} == false"
- name: Check if network does not exist
vmware_guest_network:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: test_vm1
networks:
- name: non-existing-nw
manual_mac: "aa:50:56:11:22:33"
state: new
register: no_nw_details
ignore_errors: yes
- debug: var=no_nw_details
- name: Check that the network does not exist
assert:
that:
- not (no_nw_details is changed)
- no_nw_details.failed
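# Hypothetical follow-up (not in the original suite): assert that the
# DirectPath I/O toggles registered above actually reported a change.
- name: assert DirectPath I/O was toggled on the vmxnet3 adapter
  assert:
    that:
      - disable_directpath_io is changed
      - enable_directpath_io is changed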
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,382 |
Hostname module doesn't work on a Manjaro Linux host
|
##### SUMMARY
The hostname module reports that it doesn't work on Manjaro Linux, even though this is simply an Arch-based distribution. So I think it should work without any modification if Manjaro is 'whitelisted' as a platform that supports this module.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Hostname module.
##### ANSIBLE VERSION
```ansible 2.8.3
config file = /home/overlord/ConfigManagement/ansible/ansible.cfg
configured module search path = ['/home/overlord/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Jul 16 2019, 07:12:58) [GCC 9.1.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ANSIBLE_NOCOWS = True
ANSIBLE_PIPELINING = True
ANSIBLE_SSH_ARGS = -C -o ControlMaster=auto -o ControlPersist=300s -o ServerAliveInterval=2
ANSIBLE_SSH_CONTROL_PATH = %(directory)s/%%h-%%r
ANSIBLE_SSH_CONTROL_PATH_DIR = /tmp/ansible-cp
COMMAND_WARNINGS = False
DEFAULT_ASK_PASS = True
DEFAULT_BECOME_ASK_PASS = True
DEFAULT_FORKS = 50
DEFAULT_GATHERING = smart
DEFAULT_HOST_LIST = ['/home/overlord/ConfigManagement/ansible/inventory']
DEFAULT_INTERNAL_POLL_INTERVAL = 0.005
DEFAULT_LOCAL_TMP = /tmp/ansible-local/ansible-local-13795g9m3yhv0
DEFAULT_MANAGED_STR = This file is managed by Ansible.%n
template: {file}
user: {uid}
host: {host}
DO NOT EDIT BY HAND!!!!
DEFAULT_POLL_INTERVAL = 15
DEFAULT_ROLES_PATH = ['/home/myuser/ConfigManagement/ansible/roles', '/home/myuser/ConfigManagement/ansible/galaxyroles', '/home/myuser/ConfigManagement/ansible/software']
DEFAULT_SSH_TRANSFER_METHOD = piped
DEFAULT_TIMEOUT = 30
DISPLAY_SKIPPED_HOSTS = False
HOST_KEY_CHECKING = False
RETRY_FILES_ENABLED = False
```
##### OS / ENVIRONMENT
Ansible host is Manjaro Linux (current version, all packages up-to-date) and the target host is localhost (in this case).
##### STEPS TO REPRODUCE
My task is:
```
- name: "Zorgen dat de hostname {{ baseline_hostname }} is"
hostname:
name: "{{ baseline_hostname }}"
```
This code works fine on Ubuntu based hosts on the same network, using the same role which includes this task.
##### EXPECTED RESULTS
It should work fine if a Manjaro host is seen by Ansible as an Arch Linux host.
##### ACTUAL RESULTS
```
fatal: [myhost.mydomain.local]: FAILED! => {"changed": false, "msg": "hostname module cannot be used on platform Linux (Manjaro)"}
```
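A minimal sketch of the whitelisting the reporter suggests, assuming the platform/distribution subclass pattern and the SystemdStrategy used elsewhere in the hostname module (not the merged fix itself):
```python
# Sketch only: map the Manjaro distribution onto the systemd strategy,
# following the module's platform/distribution subclass convention.
class ManjaroHostname(Hostname):
    platform = 'Linux'
    distribution = 'Manjaro'
    strategy_class = SystemdStrategy
```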
|
https://github.com/ansible/ansible/issues/61382
|
https://github.com/ansible/ansible/pull/64810
|
e0373a73a87fb5a22b38316f25af3d1c5628ec9f
|
a75a79b84c9f41321b5bfdf57000f0922fc11715
| 2019-08-27T15:03:07Z |
python
| 2019-11-14T09:01:21Z |
changelogs/fragments/64810-hostname-add-manjaro-linux-distribution.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,382 |
Hostname module doesn't work on a Manjaro Linux host
|
##### SUMMARY
The hostname module reports that it doesn't work on Manjaro Linux, even though this is simply an Arch-based distribution. So I think it should work without any modification if Manjaro is 'whitelisted' as a platform that supports this module.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Hostname module.
##### ANSIBLE VERSION
```ansible 2.8.3
config file = /home/overlord/ConfigManagement/ansible/ansible.cfg
configured module search path = ['/home/overlord/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.4 (default, Jul 16 2019, 07:12:58) [GCC 9.1.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
ANSIBLE_NOCOWS = True
ANSIBLE_PIPELINING = True
ANSIBLE_SSH_ARGS = -C -o ControlMaster=auto -o ControlPersist=300s -o ServerAliveInterval=2
ANSIBLE_SSH_CONTROL_PATH = %(directory)s/%%h-%%r
ANSIBLE_SSH_CONTROL_PATH_DIR = /tmp/ansible-cp
COMMAND_WARNINGS = False
DEFAULT_ASK_PASS = True
DEFAULT_BECOME_ASK_PASS = True
DEFAULT_FORKS = 50
DEFAULT_GATHERING = smart
DEFAULT_HOST_LIST = ['/home/overlord/ConfigManagement/ansible/inventory']
DEFAULT_INTERNAL_POLL_INTERVAL = 0.005
DEFAULT_LOCAL_TMP = /tmp/ansible-local/ansible-local-13795g9m3yhv0
DEFAULT_MANAGED_STR = This file is managed by Ansible.%n
template: {file}
user: {uid}
host: {host}
DO NOT EDIT BY HAND!!!!
DEFAULT_POLL_INTERVAL = 15
DEFAULT_ROLES_PATH = ['/home/myuser/ConfigManagement/ansible/roles', '/home/myuser/ConfigManagement/ansible/galaxyroles', '/home/myuser/ConfigManagement/ansible/software']
DEFAULT_SSH_TRANSFER_METHOD = piped
DEFAULT_TIMEOUT = 30
DISPLAY_SKIPPED_HOSTS = False
HOST_KEY_CHECKING = False
RETRY_FILES_ENABLED = False
```
##### OS / ENVIRONMENT
Ansible host is Manjaro Linux (current version, all packages up-to-date) and the target host is localhost (in this case).
##### STEPS TO REPRODUCE
My task is:
```
- name: "Zorgen dat de hostname {{ baseline_hostname }} is"
hostname:
name: "{{ baseline_hostname }}"
```
This code works fine on Ubuntu based hosts on the same network, using the same role which includes this task.
##### EXPECTED RESULTS
It should work fine if a Manjaro host is seen by Ansible as an Arch Linux host.
##### ACTUAL RESULTS
```
fatal: [myhost.mydomain.local]: FAILED! => {"changed": false, "msg": "hostname module cannot be used on platform Linux (Manjaro)"}
```
|
https://github.com/ansible/ansible/issues/61382
|
https://github.com/ansible/ansible/pull/64810
|
e0373a73a87fb5a22b38316f25af3d1c5628ec9f
|
a75a79b84c9f41321b5bfdf57000f0922fc11715
| 2019-08-27T15:03:07Z |
python
| 2019-11-14T09:01:21Z |
lib/ansible/modules/system/hostname.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2013, Hiroaki Nakamura <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: hostname
author:
- Adrian Likins (@alikins)
- Hideki Saito (@saito-hideki)
version_added: "1.4"
short_description: Manage hostname
requirements: [ hostname ]
description:
- Set system's hostname, supports most OSs/Distributions, including those using systemd.
- Note, this module does *NOT* modify C(/etc/hosts). You need to modify it yourself using other modules like template or replace.
- Windows, HP-UX and AIX are not currently supported.
options:
name:
description:
- Name of the host
required: true
use:
description:
- Which strategy to use to update the hostname.
- If not set we try to autodetect, but this can be problematic, especially with containers as they can present misleading information.
choices: ['generic', 'debian','sles', 'redhat', 'alpine', 'systemd', 'openrc', 'openbsd', 'solaris', 'freebsd']
version_added: '2.9'
'''
EXAMPLES = '''
- hostname:
name: web01
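# Hypothetical workaround example: force a strategy on a distribution the
# module does not auto-detect (see the C(use) option above).
- hostname:
    name: web01
    use: systemd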
'''
import os
import platform
import socket
import traceback
from ansible.module_utils.basic import (
AnsibleModule,
get_distribution,
get_distribution_version,
)
from ansible.module_utils.common.sys_info import get_platform_subclass
from ansible.module_utils.facts.system.service_mgr import ServiceMgrFactCollector
from ansible.module_utils._text import to_native
STRATS = {'generic': 'Generic', 'debian': 'Debian', 'sles': 'SLES', 'redhat': 'RedHat', 'alpine': 'Alpine',
'systemd': 'Systemd', 'openrc': 'OpenRC', 'openbsd': 'OpenBSD', 'solaris': 'Solaris', 'freebsd': 'FreeBSD'}
class UnimplementedStrategy(object):
def __init__(self, module):
self.module = module
def update_current_and_permanent_hostname(self):
self.unimplemented_error()
def update_current_hostname(self):
self.unimplemented_error()
def update_permanent_hostname(self):
self.unimplemented_error()
def get_current_hostname(self):
self.unimplemented_error()
def set_current_hostname(self, name):
self.unimplemented_error()
def get_permanent_hostname(self):
self.unimplemented_error()
def set_permanent_hostname(self, name):
self.unimplemented_error()
def unimplemented_error(self):
system = platform.system()
distribution = get_distribution()
if distribution is not None:
msg_platform = '%s (%s)' % (system, distribution)
else:
msg_platform = system
self.module.fail_json(
msg='hostname module cannot be used on platform %s' % msg_platform)
class Hostname(object):
"""
This is a generic Hostname manipulation class that is subclassed
based on platform.
A subclass may wish to set different strategy instance to self.strategy.
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
strategy_class = UnimplementedStrategy
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(Hostname)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.name = module.params['name']
self.use = module.params['use']
if self.use is not None:
strat = globals()['%sStrategy' % STRATS[self.use]]
self.strategy = strat(module)
elif self.platform == 'Linux' and ServiceMgrFactCollector.is_systemd_managed(module):
self.strategy = SystemdStrategy(module)
else:
self.strategy = self.strategy_class(module)
def update_current_and_permanent_hostname(self):
return self.strategy.update_current_and_permanent_hostname()
def get_current_hostname(self):
return self.strategy.get_current_hostname()
def set_current_hostname(self, name):
self.strategy.set_current_hostname(name)
def get_permanent_hostname(self):
return self.strategy.get_permanent_hostname()
def set_permanent_hostname(self, name):
self.strategy.set_permanent_hostname(name)
class GenericStrategy(object):
"""
This is a generic Hostname manipulation strategy class.
A subclass may wish to override some or all of these methods.
- get_current_hostname()
- get_permanent_hostname()
- set_current_hostname(name)
- set_permanent_hostname(name)
"""
def __init__(self, module):
self.module = module
self.changed = False
self.hostname_cmd = self.module.get_bin_path('hostnamectl', False)
if not self.hostname_cmd:
self.hostname_cmd = self.module.get_bin_path('hostname', True)
def update_current_and_permanent_hostname(self):
self.update_current_hostname()
self.update_permanent_hostname()
return self.changed
def update_current_hostname(self):
name = self.module.params['name']
current_name = self.get_current_hostname()
if current_name != name:
if not self.module.check_mode:
self.set_current_hostname(name)
self.changed = True
def update_permanent_hostname(self):
name = self.module.params['name']
permanent_name = self.get_permanent_hostname()
if permanent_name != name:
if not self.module.check_mode:
self.set_permanent_hostname(name)
self.changed = True
def get_current_hostname(self):
cmd = [self.hostname_cmd]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
return 'UNKNOWN'
def set_permanent_hostname(self, name):
pass
class DebianStrategy(GenericStrategy):
"""
This is a Debian family Hostname manipulation strategy class - it edits
the /etc/hostname file.
"""
HOSTNAME_FILE = '/etc/hostname'
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class SLESStrategy(GenericStrategy):
"""
This is a SLES Hostname strategy class - it edits the
/etc/HOSTNAME file.
"""
HOSTNAME_FILE = '/etc/HOSTNAME'
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class RedHatStrategy(GenericStrategy):
"""
This is a Redhat Hostname strategy class - it edits the
/etc/sysconfig/network file.
"""
NETWORK_FILE = '/etc/sysconfig/network'
def get_permanent_hostname(self):
try:
f = open(self.NETWORK_FILE, 'rb')
try:
for line in f.readlines():
if line.startswith('HOSTNAME'):
k, v = line.split('=')
return v.strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
lines = []
found = False
f = open(self.NETWORK_FILE, 'rb')
try:
for line in f.readlines():
if line.startswith('HOSTNAME'):
lines.append("HOSTNAME=%s\n" % name)
found = True
else:
lines.append(line)
finally:
f.close()
if not found:
lines.append("HOSTNAME=%s\n" % name)
f = open(self.NETWORK_FILE, 'w+')
try:
f.writelines(lines)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class AlpineStrategy(GenericStrategy):
"""
This is an Alpine Linux Hostname manipulation strategy class - it edits
the /etc/hostname file, then runs hostname -F /etc/hostname.
"""
HOSTNAME_FILE = '/etc/hostname'
def update_current_and_permanent_hostname(self):
self.update_permanent_hostname()
self.update_current_hostname()
return self.changed
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_current_hostname(self, name):
cmd = [self.hostname_cmd, '-F', self.HOSTNAME_FILE]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class SystemdStrategy(GenericStrategy):
"""
This is a Systemd hostname manipulation strategy class - it uses
the hostnamectl command.
"""
def get_current_hostname(self):
cmd = [self.hostname_cmd, '--transient', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_current_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = [self.hostname_cmd, '--transient', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
cmd = [self.hostname_cmd, '--static', 'status']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
if len(name) > 64:
self.module.fail_json(msg="name cannot be longer than 64 characters on systemd servers, try a shorter name")
cmd = [self.hostname_cmd, '--pretty', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
cmd = [self.hostname_cmd, '--static', 'set-hostname', name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class OpenRCStrategy(GenericStrategy):
"""
This is a Gentoo (OpenRC) Hostname manipulation strategy class - it edits
the /etc/conf.d/hostname file.
"""
HOSTNAME_FILE = '/etc/conf.d/hostname'
def get_permanent_hostname(self):
name = 'UNKNOWN'
try:
try:
f = open(self.HOSTNAME_FILE, 'r')
for line in f:
line = line.strip()
if line.startswith('hostname='):
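# assumes the quoted form hostname="name": the prefix 'hostname="' is
# 10 characters, so slice past it and strip the trailing quote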
name = line[10:].strip('"')
break
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
finally:
f.close()
return name
def set_permanent_hostname(self, name):
try:
try:
f = open(self.HOSTNAME_FILE, 'r')
lines = [x.strip() for x in f]
for i, line in enumerate(lines):
if line.startswith('hostname='):
lines[i] = 'hostname="%s"' % name
break
f.close()
f = open(self.HOSTNAME_FILE, 'w')
f.write('\n'.join(lines) + '\n')
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
finally:
f.close()
class OpenBSDStrategy(GenericStrategy):
"""
This is an OpenBSD family Hostname manipulation strategy class - it edits
the /etc/myname file.
"""
HOSTNAME_FILE = '/etc/myname'
def get_permanent_hostname(self):
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
f = open(self.HOSTNAME_FILE)
try:
return f.read().strip()
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
def set_permanent_hostname(self, name):
try:
f = open(self.HOSTNAME_FILE, 'w+')
try:
f.write("%s\n" % name)
finally:
f.close()
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
class SolarisStrategy(GenericStrategy):
"""
This is a Solaris 11 or later Hostname manipulation strategy class - it
executes the hostname command.
"""
def set_current_hostname(self, name):
cmd_option = '-t'
cmd = [self.hostname_cmd, cmd_option, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
def get_permanent_hostname(self):
fmri = 'svc:/system/identity:node'
pattern = 'config/nodename'
cmd = '/usr/sbin/svccfg -s %s listprop -o value %s' % (fmri, pattern)
rc, out, err = self.module.run_command(cmd, use_unsafe_shell=True)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
return to_native(out).strip()
def set_permanent_hostname(self, name):
cmd = [self.hostname_cmd, name]
rc, out, err = self.module.run_command(cmd)
if rc != 0:
self.module.fail_json(msg="Command failed rc=%d, out=%s, err=%s" % (rc, out, err))
class FreeBSDStrategy(GenericStrategy):
"""
This is a FreeBSD hostname manipulation strategy class - it edits
the /etc/rc.conf.d/hostname file.
"""
HOSTNAME_FILE = '/etc/rc.conf.d/hostname'
def get_permanent_hostname(self):
name = 'UNKNOWN'
if not os.path.isfile(self.HOSTNAME_FILE):
try:
open(self.HOSTNAME_FILE, "a").write("hostname=temporarystub\n")
except IOError as e:
self.module.fail_json(msg="failed to write file: %s" %
to_native(e), exception=traceback.format_exc())
try:
try:
f = open(self.HOSTNAME_FILE, 'r')
for line in f:
line = line.strip()
if line.startswith('hostname='):
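# same assumption as the OpenRC strategy: slice past the 10-character
# 'hostname="' prefix and strip the trailing quote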
name = line[10:].strip('"')
break
except Exception as e:
self.module.fail_json(msg="failed to read hostname: %s" %
to_native(e), exception=traceback.format_exc())
finally:
f.close()
return name
def set_permanent_hostname(self, name):
try:
try:
f = open(self.HOSTNAME_FILE, 'r')
lines = [x.strip() for x in f]
for i, line in enumerate(lines):
if line.startswith('hostname='):
lines[i] = 'hostname="%s"' % name
break
f.close()
f = open(self.HOSTNAME_FILE, 'w')
f.write('\n'.join(lines) + '\n')
except Exception as e:
self.module.fail_json(msg="failed to update hostname: %s" %
to_native(e), exception=traceback.format_exc())
finally:
f.close()
class FedoraHostname(Hostname):
platform = 'Linux'
distribution = 'Fedora'
strategy_class = SystemdStrategy
class SLESHostname(Hostname):
platform = 'Linux'
distribution = 'Sles'
try:
distribution_version = get_distribution_version()
# cast to float may raise ValueError on non-SLES; we use float for a little more safety over int
if distribution_version and 10 <= float(distribution_version) <= 12:
strategy_class = SLESStrategy
else:
raise ValueError()
except ValueError:
strategy_class = UnimplementedStrategy
class OpenSUSEHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse'
strategy_class = SystemdStrategy
class OpenSUSELeapHostname(Hostname):
platform = 'Linux'
distribution = 'Opensuse-leap'
strategy_class = SystemdStrategy
class AsteraHostname(Hostname):
platform = 'Linux'
distribution = '"astralinuxce"'
strategy_class = SystemdStrategy
class ArchHostname(Hostname):
platform = 'Linux'
distribution = 'Arch'
strategy_class = SystemdStrategy
class ArchARMHostname(Hostname):
platform = 'Linux'
distribution = 'Archarm'
strategy_class = SystemdStrategy
class RHELHostname(Hostname):
platform = 'Linux'
distribution = 'Redhat'
strategy_class = RedHatStrategy
class CentOSHostname(Hostname):
platform = 'Linux'
distribution = 'Centos'
strategy_class = RedHatStrategy
class ClearLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Clear-linux-os'
strategy_class = SystemdStrategy
class CloudlinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Cloudlinux'
strategy_class = RedHatStrategy
class CoreosHostname(Hostname):
platform = 'Linux'
distribution = 'Coreos'
strategy_class = SystemdStrategy
class ScientificHostname(Hostname):
platform = 'Linux'
distribution = 'Scientific'
strategy_class = RedHatStrategy
class OracleLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Oracle'
strategy_class = RedHatStrategy
class VirtuozzoLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Virtuozzo'
strategy_class = RedHatStrategy
class AmazonLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Amazon'
strategy_class = RedHatStrategy
class DebianHostname(Hostname):
platform = 'Linux'
distribution = 'Debian'
strategy_class = DebianStrategy
class KylinHostname(Hostname):
platform = 'Linux'
distribution = 'Kylin'
strategy_class = DebianStrategy
class CumulusHostname(Hostname):
platform = 'Linux'
distribution = 'Cumulus-linux'
strategy_class = DebianStrategy
class KaliHostname(Hostname):
platform = 'Linux'
distribution = 'Kali'
strategy_class = DebianStrategy
class UbuntuHostname(Hostname):
platform = 'Linux'
distribution = 'Ubuntu'
strategy_class = DebianStrategy
class LinuxmintHostname(Hostname):
platform = 'Linux'
distribution = 'Linuxmint'
strategy_class = DebianStrategy
class LinaroHostname(Hostname):
platform = 'Linux'
distribution = 'Linaro'
strategy_class = DebianStrategy
class DevuanHostname(Hostname):
platform = 'Linux'
distribution = 'Devuan'
strategy_class = DebianStrategy
class RaspbianHostname(Hostname):
platform = 'Linux'
distribution = 'Raspbian'
strategy_class = DebianStrategy
class GentooHostname(Hostname):
platform = 'Linux'
distribution = 'Gentoo'
strategy_class = OpenRCStrategy
class ALTLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Altlinux'
strategy_class = RedHatStrategy
class AlpineLinuxHostname(Hostname):
platform = 'Linux'
distribution = 'Alpine'
strategy_class = AlpineStrategy
class OpenBSDHostname(Hostname):
platform = 'OpenBSD'
distribution = None
strategy_class = OpenBSDStrategy
class SolarisHostname(Hostname):
platform = 'SunOS'
distribution = None
strategy_class = SolarisStrategy
class FreeBSDHostname(Hostname):
platform = 'FreeBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NetBSDHostname(Hostname):
platform = 'NetBSD'
distribution = None
strategy_class = FreeBSDStrategy
class NeonHostname(Hostname):
platform = 'Linux'
distribution = 'Neon'
strategy_class = DebianStrategy
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(type='str', required=True),
use=dict(type='str', choices=STRATS.keys())
),
supports_check_mode=True,
)
hostname = Hostname(module)
name = module.params['name']
current_hostname = hostname.get_current_hostname()
permanent_hostname = hostname.get_permanent_hostname()
changed = hostname.update_current_and_permanent_hostname()
if name != current_hostname:
name_before = current_hostname
elif name != permanent_hostname:
name_before = permanent_hostname
kw = dict(changed=changed, name=name,
ansible_facts=dict(ansible_hostname=name.split('.')[0],
ansible_nodename=name,
ansible_fqdn=socket.getfqdn(),
ansible_domain='.'.join(socket.getfqdn().split('.')[1:])))
if changed:
kw['diff'] = {'after': 'hostname = ' + name + '\n',
'before': 'hostname = ' + name_before + '\n'}
module.exit_json(**kw)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,806 |
postgres_user does not correctly commit changes if groups is set
|
##### SUMMARY
The `postgres_user` module does not commit changes if the `groups` variable is set. The create action will be committed, but any changes after that which do not change the `groups` variable will not be committed. This is due to [this line](https://github.com/ansible/ansible/blob/80bf24b17c52546c081b27e639499ba128824cf3/lib/ansible/modules/database/postgresql/postgresql_user.py#L855), which overwrites the `changed` variable regardless of its previous value.
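A minimal sketch of one possible fix (hypothetical; the actual pull request may differ) is to OR the membership result into the existing `changed` flag instead of overwriting it:

```python
# Sketch only: inside main(), after user_alter()/grant_privileges() have run.
if groups:
    target_roles = [user]
    pg_membership = PgMembership(module, cursor, groups, target_roles)
    # Preserve any change already recorded earlier in the run instead of
    # overwriting it with the membership grant result.
    changed = pg_membership.grant() or changed
    executed_queries.extend(pg_membership.executed_queries)
```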
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`postgres_user`
##### ANSIBLE VERSION
```paste below
prod@ansible # ansible --version
ansible 2.9.0
config file = /root/ansible.cfg
configured module search path = [u'/root/galaxy_roles/kafka_lib/library']
ansible python module location = /root/env/local/lib/python2.7/site-packages/ansible
executable location = /root/env/bin/ansible
python version = 2.7.12 (default, Oct 8 2019, 14:14:10) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
```paste below
ANSIBLE_PIPELINING(/root/ansible.cfg) = True
ANSIBLE_SSH_RETRIES(/root/ansible.cfg) = 3
CACHE_PLUGIN(/root/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/root/ansible.cfg) = /root/.ansible_jsonfile_cache
CACHE_PLUGIN_TIMEOUT(/root/ansible.cfg) = 86400000
DEFAULT_GATHERING(/root/ansible.cfg) = smart
DEFAULT_MODULE_PATH(env: ANSIBLE_LIBRARY) = [u'/root/galaxy_roles/kafka_lib/library']
HOST_KEY_CHECKING(env: ANSIBLE_HOST_KEY_CHECKING) = False
```
##### OS / ENVIRONMENT
Tested on mac and linux
##### STEPS TO REPRODUCE
Run the following:
```yaml
- name: Create read group
postgresql_user:
name: read
role_attr_flags: NOLOGIN
state: present
become: yes
become_user: postgres
- name: Create/grant postgres user
postgresql_user:
name: some_user
state: present
groups: [read]
password: abc123
become: yes
become_user: postgres
```
Then change something (such as password) without changing groups:
```yaml
- name: Create/grant postgres user
postgresql_user:
name: some_user
state: present
groups: [read]
password: abc1234
become: yes
become_user: postgres
```
And run the playbook again. The play will be reported as `changed: false` and the password will not be updated.
##### EXPECTED RESULTS
Changes are committed without changing the groups var
##### ACTUAL RESULTS
Changes are only committed if the groups var is changed.
|
https://github.com/ansible/ansible/issues/64806
|
https://github.com/ansible/ansible/pull/64807
|
a75a79b84c9f41321b5bfdf57000f0922fc11715
|
9ee601288c45a5ca2c1bc37196b9aade2966ca0d
| 2019-11-13T21:10:55Z |
python
| 2019-11-14T09:39:45Z |
lib/ansible/modules/database/postgresql/postgresql_user.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: postgresql_user
short_description: Add or remove a user (role) from a PostgreSQL server instance
description:
- Adds or removes a user (role) from a PostgreSQL server instance
("cluster" in PostgreSQL terminology) and, optionally,
grants the user access to an existing database or tables.
- A user is a role with login privilege.
- The fundamental function of the module is to create, or delete, users from
a PostgreSQL instance. Privilege assignment, or removal, is an optional
step, which works on one database at a time. This allows for the module to
be called several times in the same playbook to modify the permissions on
different databases, or to grant permissions to already existing users.
- A user cannot be removed until all the privileges have been stripped from
the user. In such a situation, if the module tries to remove the user it
will fail. To avoid this from happening, the fail_on_user option signals
the module to try to remove the user, but to keep going if that is not possible; the
module will report if changes happened and separately if the user was
removed or not.
version_added: '0.6'
options:
name:
description:
- Name of the user (role) to add or remove.
type: str
required: true
aliases:
- user
password:
description:
- Set the user's password; before 1.4 this was required.
- Password can be passed unhashed or hashed (MD5-hashed).
- Unhashed password will automatically be hashed when saved into the
database if the C(encrypted) parameter is set, otherwise it will be saved in
plain text format.
- When passing a hashed password it must be generated with the format
C('str["md5"] + md5[ password + username ]'), resulting in a total of
35 characters. An easy way to do this is C(echo "md5$(echo -n
'verysecretpasswordJOE' | md5sum | awk '{print $1}')").
- Note that if the provided password string is already in MD5-hashed
format, then it is used as-is, regardless of C(encrypted) parameter.
type: str
db:
description:
- Name of database to connect to and where user's permissions will be granted.
type: str
aliases:
- login_db
fail_on_user:
description:
- If C(yes), fail when user (role) can't be removed. Otherwise just log and continue.
default: 'yes'
type: bool
aliases:
- fail_on_role
priv:
description:
- "Slash-separated PostgreSQL privileges string: C(priv1/priv2), where
privileges can be defined for database ( allowed options - 'CREATE',
'CONNECT', 'TEMPORARY', 'TEMP', 'ALL'. For example C(CONNECT) ) or
for table ( allowed options - 'SELECT', 'INSERT', 'UPDATE', 'DELETE',
'TRUNCATE', 'REFERENCES', 'TRIGGER', 'ALL'. For example
C(table:SELECT) ). Mixed example of this string:
C(CONNECT/CREATE/table1:SELECT/table2:INSERT)."
type: str
role_attr_flags:
description:
- "PostgreSQL user attributes string in the format: CREATEDB,CREATEROLE,SUPERUSER."
- Note that '[NO]CREATEUSER' is deprecated.
- To create a simple role for using it like a group, use C(NOLOGIN) flag.
type: str
choices: [ '[NO]SUPERUSER', '[NO]CREATEROLE', '[NO]CREATEDB',
'[NO]INHERIT', '[NO]LOGIN', '[NO]REPLICATION', '[NO]BYPASSRLS' ]
session_role:
version_added: '2.8'
description:
- Switch to session_role after connecting.
- The specified session_role must be a role that the current login_user is a member of.
- Permissions checking for SQL commands is carried out as though the session_role were the one that had logged in originally.
type: str
state:
description:
- The user (role) state.
type: str
default: present
choices: [ absent, present ]
encrypted:
description:
- Whether the password is stored hashed in the database.
- Passwords can be passed already hashed or unhashed, and postgresql
ensures the stored password is hashed when C(encrypted) is set.
- "Note: Postgresql 10 and newer doesn't support unhashed passwords."
- Previous to Ansible 2.6, this was C(no) by default.
default: 'yes'
type: bool
version_added: '1.4'
expires:
description:
- The date at which the user's password is to expire.
- If set to C('infinity'), user's password never expires.
- Note that this value should be a valid SQL date and time type.
type: str
version_added: '1.4'
no_password_changes:
description:
- If C(yes), don't inspect database for password changes. Effective when
C(pg_authid) is not accessible (such as AWS RDS). Otherwise, make
password changes as necessary.
default: 'no'
type: bool
version_added: '2.0'
conn_limit:
description:
- Specifies the user (role) connection limit.
type: int
version_added: '2.4'
ssl_mode:
description:
- Determines whether or with what priority a secure SSL TCP/IP connection will be negotiated with the server.
- See https://www.postgresql.org/docs/current/static/libpq-ssl.html for more information on the modes.
- Default of C(prefer) matches libpq default.
type: str
default: prefer
choices: [ allow, disable, prefer, require, verify-ca, verify-full ]
version_added: '2.3'
ca_cert:
description:
- Specifies the name of a file containing SSL certificate authority (CA) certificate(s).
- If the file exists, the server's certificate will be verified to be signed by one of these authorities.
type: str
aliases: [ ssl_rootcert ]
version_added: '2.3'
groups:
description:
- The list of groups (roles) that need to be granted to the user.
type: list
elements: str
version_added: '2.9'
notes:
- The module creates a user (role) with login privilege by default.
Use NOLOGIN role_attr_flags to change this behaviour.
- If you specify PUBLIC as the user (role), then the privilege changes will apply to all users (roles).
You may not specify password or role_attr_flags when the PUBLIC user is specified.
seealso:
- module: postgresql_privs
- module: postgresql_membership
- module: postgresql_owner
- name: PostgreSQL database roles
description: Complete reference of the PostgreSQL database roles documentation.
link: https://www.postgresql.org/docs/current/user-manag.html
author:
- Ansible Core Team
extends_documentation_fragment: postgres
'''
EXAMPLES = r'''
- name: Connect to acme database, create django user, and grant access to database and products table
postgresql_user:
db: acme
name: django
password: ceec4eif7ya
priv: "CONNECT/products:ALL"
expires: "Jan 31 2020"
# Connect to default database, create rails user, set its password (MD5-hashed),
# and grant privilege to create other databases and demote rails from super user status if user exists
- name: Create rails user, set MD5-hashed password, grant privs
postgresql_user:
name: rails
password: md59543f1d82624df2b31672ec0f7050460
role_attr_flags: CREATEDB,NOSUPERUSER
- name: Connect to acme database and remove test user privileges from there
postgresql_user:
db: acme
name: test
priv: "ALL/products:ALL"
state: absent
fail_on_user: no
- name: Connect to test database, remove test user from cluster
postgresql_user:
db: test
name: test
priv: ALL
state: absent
- name: Connect to acme database and set user's password with no expire date
postgresql_user:
db: acme
name: django
password: mysupersecretword
priv: "CONNECT/products:ALL"
expires: infinity
# Example privileges string format
# INSERT,UPDATE/table:SELECT/anothertable:ALL
- name: Connect to test database and remove an existing user's password
postgresql_user:
db: test
user: test
password: ""
- name: Create user test and grant group user_ro and user_rw to it
postgresql_user:
name: test
groups:
- user_ro
- user_rw
'''
RETURN = r'''
queries:
description: List of executed queries.
returned: always
type: list
sample: ['CREATE USER "alice"', 'GRANT CONNECT ON DATABASE "acme" TO "alice"']
version_added: '2.8'
'''
import itertools
import re
import traceback
from hashlib import md5
try:
import psycopg2
from psycopg2.extras import DictCursor
except ImportError:
# psycopg2 is checked by connect_to_db()
# from ansible.module_utils.postgres
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.database import pg_quote_identifier, SQLParseError
from ansible.module_utils.postgres import (
connect_to_db,
get_conn_params,
PgMembership,
postgres_common_argument_spec,
)
from ansible.module_utils._text import to_bytes, to_native
from ansible.module_utils.six import iteritems
FLAGS = ('SUPERUSER', 'CREATEROLE', 'CREATEDB', 'INHERIT', 'LOGIN', 'REPLICATION')
FLAGS_BY_VERSION = {'BYPASSRLS': 90500}
VALID_PRIVS = dict(table=frozenset(('SELECT', 'INSERT', 'UPDATE', 'DELETE', 'TRUNCATE', 'REFERENCES', 'TRIGGER', 'ALL')),
database=frozenset(
('CREATE', 'CONNECT', 'TEMPORARY', 'TEMP', 'ALL')),
)
# map to cope with idiosyncrasies of SUPERUSER and LOGIN
PRIV_TO_AUTHID_COLUMN = dict(SUPERUSER='rolsuper', CREATEROLE='rolcreaterole',
CREATEDB='rolcreatedb', INHERIT='rolinherit', LOGIN='rolcanlogin',
REPLICATION='rolreplication', BYPASSRLS='rolbypassrls')
executed_queries = []
class InvalidFlagsError(Exception):
pass
class InvalidPrivsError(Exception):
pass
# ===========================================
# PostgreSQL module specific support methods.
#
def user_exists(cursor, user):
# The PUBLIC user is a special case that is always there
if user == 'PUBLIC':
return True
query = "SELECT rolname FROM pg_roles WHERE rolname=%(user)s"
cursor.execute(query, {'user': user})
return cursor.rowcount > 0
def user_add(cursor, user, password, role_attr_flags, encrypted, expires, conn_limit):
"""Create a new database user (role)."""
# Note: role_attr_flags escaped by parse_role_attrs and encrypted is a
# literal
query_password_data = dict(password=password, expires=expires)
query = ['CREATE USER "%(user)s"' %
{"user": user}]
if password is not None and password != '':
query.append("WITH %(crypt)s" % {"crypt": encrypted})
query.append("PASSWORD %(password)s")
if expires is not None:
query.append("VALID UNTIL %(expires)s")
if conn_limit is not None:
query.append("CONNECTION LIMIT %(conn_limit)s" % {"conn_limit": conn_limit})
query.append(role_attr_flags)
query = ' '.join(query)
executed_queries.append(query)
cursor.execute(query, query_password_data)
return True
def user_should_we_change_password(current_role_attrs, user, password, encrypted):
"""Check if we should change the user's password.
Compare the proposed password with the existing one, comparing
hashes if encrypted. If we can't access it, assume yes.
"""
if current_role_attrs is None:
# on some databases, e.g. AWS RDS instances, there is no access to
# the pg_authid relation to check the pre-existing password, so we
# just assume password is different
return True
# Do we actually need to do anything?
pwchanging = False
if password is not None:
# Empty password means that the role shouldn't have a password, which
# means we need to check if the current password is None.
if password == '':
if current_role_attrs['rolpassword'] is not None:
pwchanging = True
# 32: MD5 hashes are represented as a sequence of 32 hexadecimal digits
# 3: The size of the 'md5' prefix
# When the provided password looks like a MD5-hash, value of
# 'encrypted' is ignored.
elif (password.startswith('md5') and len(password) == 32 + 3) or encrypted == 'UNENCRYPTED':
if password != current_role_attrs['rolpassword']:
pwchanging = True
elif encrypted == 'ENCRYPTED':
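# Recompute the stored hash the same way PostgreSQL does:
# 'md5' + md5(password + username), then compare it with rolpassword.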
hashed_password = 'md5{0}'.format(md5(to_bytes(password) + to_bytes(user)).hexdigest())
if hashed_password != current_role_attrs['rolpassword']:
pwchanging = True
return pwchanging
def user_alter(db_connection, module, user, password, role_attr_flags, encrypted, expires, no_password_changes, conn_limit):
"""Change user password and/or attributes. Return True if changed, False otherwise."""
changed = False
cursor = db_connection.cursor(cursor_factory=DictCursor)
# Note: role_attr_flags escaped by parse_role_attrs and encrypted is a
# literal
if user == 'PUBLIC':
if password is not None:
module.fail_json(msg="cannot change the password for PUBLIC user")
elif role_attr_flags != '':
module.fail_json(msg="cannot change the role_attr_flags for PUBLIC user")
else:
return False
# Handle passwords.
if not no_password_changes and (password is not None or role_attr_flags != '' or expires is not None or conn_limit is not None):
# Select password and all flag-like columns in order to verify changes.
try:
select = "SELECT * FROM pg_authid where rolname=%(user)s"
cursor.execute(select, {"user": user})
# Grab current role attributes.
current_role_attrs = cursor.fetchone()
except psycopg2.ProgrammingError:
current_role_attrs = None
db_connection.rollback()
pwchanging = user_should_we_change_password(current_role_attrs, user, password, encrypted)
if current_role_attrs is None:
try:
# AWS RDS instances do not allow the user to access pg_authid,
# so try to get current_role_attrs from pg_roles tables
select = "SELECT * FROM pg_roles where rolname=%(user)s"
cursor.execute(select, {"user": user})
# Grab current role attributes from pg_roles
current_role_attrs = cursor.fetchone()
except psycopg2.ProgrammingError as e:
db_connection.rollback()
module.fail_json(msg="Failed to get role details for current user %s: %s" % (user, e))
role_attr_flags_changing = False
if role_attr_flags:
role_attr_flags_dict = {}
for r in role_attr_flags.split(' '):
if r.startswith('NO'):
role_attr_flags_dict[r.replace('NO', '', 1)] = False
else:
role_attr_flags_dict[r] = True
for role_attr_name, role_attr_value in role_attr_flags_dict.items():
if current_role_attrs[PRIV_TO_AUTHID_COLUMN[role_attr_name]] != role_attr_value:
role_attr_flags_changing = True
if expires is not None:
cursor.execute("SELECT %s::timestamptz;", (expires,))
expires_with_tz = cursor.fetchone()[0]
expires_changing = expires_with_tz != current_role_attrs.get('rolvaliduntil')
else:
expires_changing = False
conn_limit_changing = (conn_limit is not None and conn_limit != current_role_attrs['rolconnlimit'])
if not pwchanging and not role_attr_flags_changing and not expires_changing and not conn_limit_changing:
return False
alter = ['ALTER USER "%(user)s"' % {"user": user}]
if pwchanging:
if password != '':
alter.append("WITH %(crypt)s" % {"crypt": encrypted})
alter.append("PASSWORD %(password)s")
else:
alter.append("WITH PASSWORD NULL")
alter.append(role_attr_flags)
elif role_attr_flags:
alter.append('WITH %s' % role_attr_flags)
if expires is not None:
alter.append("VALID UNTIL %(expires)s")
if conn_limit is not None:
alter.append("CONNECTION LIMIT %(conn_limit)s" % {"conn_limit": conn_limit})
query_password_data = dict(password=password, expires=expires)
try:
cursor.execute(' '.join(alter), query_password_data)
changed = True
except psycopg2.InternalError as e:
if e.pgcode == '25006':
# Handle errors due to read-only transactions indicated by pgcode 25006
# ERROR: cannot execute ALTER ROLE in a read-only transaction
changed = False
module.fail_json(msg=e.pgerror, exception=traceback.format_exc())
return changed
else:
raise psycopg2.InternalError(e)
except psycopg2.NotSupportedError as e:
module.fail_json(msg=e.pgerror, exception=traceback.format_exc())
elif no_password_changes and role_attr_flags != '':
# Grab role information from pg_roles instead of pg_authid
select = "SELECT * FROM pg_roles where rolname=%(user)s"
cursor.execute(select, {"user": user})
# Grab current role attributes.
current_role_attrs = cursor.fetchone()
role_attr_flags_changing = False
if role_attr_flags:
role_attr_flags_dict = {}
for r in role_attr_flags.split(' '):
if r.startswith('NO'):
role_attr_flags_dict[r.replace('NO', '', 1)] = False
else:
role_attr_flags_dict[r] = True
for role_attr_name, role_attr_value in role_attr_flags_dict.items():
if current_role_attrs[PRIV_TO_AUTHID_COLUMN[role_attr_name]] != role_attr_value:
role_attr_flags_changing = True
if not role_attr_flags_changing:
return False
alter = ['ALTER USER "%(user)s"' %
{"user": user}]
if role_attr_flags:
alter.append('WITH %s' % role_attr_flags)
try:
cursor.execute(' '.join(alter))
except psycopg2.InternalError as e:
if e.pgcode == '25006':
# Handle errors due to read-only transactions indicated by pgcode 25006
# ERROR: cannot execute ALTER ROLE in a read-only transaction
changed = False
module.fail_json(msg=e.pgerror, exception=traceback.format_exc())
return changed
else:
raise psycopg2.InternalError(e)
# Grab new role attributes.
cursor.execute(select, {"user": user})
new_role_attrs = cursor.fetchone()
# Detect any differences between current_ and new_role_attrs.
changed = current_role_attrs != new_role_attrs
return changed
def user_delete(cursor, user):
"""Try to remove a user. Returns True if successful otherwise False"""
cursor.execute("SAVEPOINT ansible_pgsql_user_delete")
try:
query = 'DROP USER "%s"' % user
executed_queries.append(query)
cursor.execute(query)
except Exception:
cursor.execute("ROLLBACK TO SAVEPOINT ansible_pgsql_user_delete")
cursor.execute("RELEASE SAVEPOINT ansible_pgsql_user_delete")
return False
cursor.execute("RELEASE SAVEPOINT ansible_pgsql_user_delete")
return True
def has_table_privileges(cursor, user, table, privs):
"""
Return the difference between the privileges that a user already has and
the privileges that they desire to have.
:returns: tuple of:
* privileges that they have and were requested
* privileges they currently hold but were not requested
* privileges requested that they do not hold
"""
cur_privs = get_table_privileges(cursor, user, table)
have_currently = cur_privs.intersection(privs)
other_current = cur_privs.difference(privs)
desired = privs.difference(cur_privs)
return (have_currently, other_current, desired)
def get_table_privileges(cursor, user, table):
if '.' in table:
schema, table = table.split('.', 1)
else:
schema = 'public'
query = ("SELECT privilege_type FROM information_schema.role_table_grants "
"WHERE grantee='%s' AND table_name='%s' AND table_schema='%s'" % (user, table, schema))
cursor.execute(query)
return frozenset([x[0] for x in cursor.fetchall()])
def grant_table_privileges(cursor, user, table, privs):
# Note: priv escaped by parse_privs
privs = ', '.join(privs)
query = 'GRANT %s ON TABLE %s TO "%s"' % (
privs, pg_quote_identifier(table, 'table'), user)
executed_queries.append(query)
cursor.execute(query)
def revoke_table_privileges(cursor, user, table, privs):
# Note: priv escaped by parse_privs
privs = ', '.join(privs)
query = 'REVOKE %s ON TABLE %s FROM "%s"' % (
privs, pg_quote_identifier(table, 'table'), user)
executed_queries.append(query)
cursor.execute(query)
def get_database_privileges(cursor, user, db):
priv_map = {
'C': 'CREATE',
'T': 'TEMPORARY',
'c': 'CONNECT',
}
query = 'SELECT datacl FROM pg_database WHERE datname = %s'
cursor.execute(query, (db,))
datacl = cursor.fetchone()[0]
if datacl is None:
return set()
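# datacl is an aclitem[] rendered like '{owner=CTc/owner,alice=c/owner}';
# pull out the privilege letters (C/T/c) granted to this user.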
r = re.search(r'%s\\?"?=(C?T?c?)/[^,]+,?' % user, datacl)
if r is None:
return set()
o = set()
for v in r.group(1):
o.add(priv_map[v])
return normalize_privileges(o, 'database')
def has_database_privileges(cursor, user, db, privs):
"""
Return the difference between the privileges that a user already has and
the privileges that they desire to have.
:returns: tuple of:
* privileges that they have and were requested
* privileges they currently hold but were not requested
* privileges requested that they do not hold
"""
cur_privs = get_database_privileges(cursor, user, db)
have_currently = cur_privs.intersection(privs)
other_current = cur_privs.difference(privs)
desired = privs.difference(cur_privs)
return (have_currently, other_current, desired)
def grant_database_privileges(cursor, user, db, privs):
# Note: priv escaped by parse_privs
privs = ', '.join(privs)
if user == "PUBLIC":
query = 'GRANT %s ON DATABASE %s TO PUBLIC' % (
privs, pg_quote_identifier(db, 'database'))
else:
query = 'GRANT %s ON DATABASE %s TO "%s"' % (
privs, pg_quote_identifier(db, 'database'), user)
executed_queries.append(query)
cursor.execute(query)
def revoke_database_privileges(cursor, user, db, privs):
# Note: priv escaped by parse_privs
privs = ', '.join(privs)
if user == "PUBLIC":
query = 'REVOKE %s ON DATABASE %s FROM PUBLIC' % (
privs, pg_quote_identifier(db, 'database'))
else:
query = 'REVOKE %s ON DATABASE %s FROM "%s"' % (
privs, pg_quote_identifier(db, 'database'), user)
executed_queries.append(query)
cursor.execute(query)
def revoke_privileges(cursor, user, privs):
if privs is None:
return False
revoke_funcs = dict(table=revoke_table_privileges,
database=revoke_database_privileges)
check_funcs = dict(table=has_table_privileges,
database=has_database_privileges)
changed = False
for type_ in privs:
for name, privileges in iteritems(privs[type_]):
# Check that any of the privileges requested to be removed are
# currently granted to the user
differences = check_funcs[type_](cursor, user, name, privileges)
if differences[0]:
revoke_funcs[type_](cursor, user, name, privileges)
changed = True
return changed
def grant_privileges(cursor, user, privs):
if privs is None:
return False
grant_funcs = dict(table=grant_table_privileges,
database=grant_database_privileges)
check_funcs = dict(table=has_table_privileges,
database=has_database_privileges)
changed = False
for type_ in privs:
for name, privileges in iteritems(privs[type_]):
# Check that any of the privileges requested for the user are
# currently missing
differences = check_funcs[type_](cursor, user, name, privileges)
if differences[2]:
grant_funcs[type_](cursor, user, name, privileges)
changed = True
return changed
def parse_role_attrs(cursor, role_attr_flags):
"""
Parse role attributes string for user creation.
Format:
attributes[,attributes,...]
Where:
attributes := CREATEDB,CREATEROLE,NOSUPERUSER,...
[ "[NO]SUPERUSER","[NO]CREATEROLE", "[NO]CREATEDB",
"[NO]INHERIT", "[NO]LOGIN", "[NO]REPLICATION",
"[NO]BYPASSRLS" ]
Note: "[NO]BYPASSRLS" role attribute introduced in 9.5
Note: "[NO]CREATEUSER" role attribute is deprecated.
"""
flags = frozenset(role.upper() for role in role_attr_flags.split(',') if role)
valid_flags = frozenset(itertools.chain(FLAGS, get_valid_flags_by_version(cursor)))
valid_flags = frozenset(itertools.chain(valid_flags, ('NO%s' % flag for flag in valid_flags)))
if not flags.issubset(valid_flags):
raise InvalidFlagsError('Invalid role_attr_flags specified: %s' %
' '.join(flags.difference(valid_flags)))
return ' '.join(flags)
def normalize_privileges(privs, type_):
new_privs = set(privs)
if 'ALL' in new_privs:
new_privs.update(VALID_PRIVS[type_])
new_privs.remove('ALL')
if 'TEMP' in new_privs:
new_privs.add('TEMPORARY')
new_privs.remove('TEMP')
return new_privs
def parse_privs(privs, db):
"""
Parse privilege string to determine permissions for database db.
Format:
privileges[/privileges/...]
Where:
privileges := DATABASE_PRIVILEGES[,DATABASE_PRIVILEGES,...] |
TABLE_NAME:TABLE_PRIVILEGES[,TABLE_PRIVILEGES,...]
"""
if privs is None:
return privs
o_privs = {
'database': {},
'table': {}
}
for token in privs.split('/'):
if ':' not in token:
type_ = 'database'
name = db
priv_set = frozenset(x.strip().upper()
for x in token.split(',') if x.strip())
else:
type_ = 'table'
name, privileges = token.split(':', 1)
priv_set = frozenset(x.strip().upper()
for x in privileges.split(',') if x.strip())
if not priv_set.issubset(VALID_PRIVS[type_]):
raise InvalidPrivsError('Invalid privs specified for %s: %s' %
(type_, ' '.join(priv_set.difference(VALID_PRIVS[type_]))))
priv_set = normalize_privileges(priv_set, type_)
o_privs[type_][name] = priv_set
return o_privs
def get_valid_flags_by_version(cursor):
"""
Some role attributes were introduced after certain versions. We want to
compile a list of valid flags against the current Postgres version.
"""
current_version = cursor.connection.server_version
return [
flag
for flag, version_introduced in FLAGS_BY_VERSION.items()
if current_version >= version_introduced
]
# ===========================================
# Module execution.
#
def main():
argument_spec = postgres_common_argument_spec()
argument_spec.update(
user=dict(type='str', required=True, aliases=['name']),
password=dict(type='str', default=None, no_log=True),
state=dict(type='str', default='present', choices=['absent', 'present']),
priv=dict(type='str', default=None),
db=dict(type='str', default='', aliases=['login_db']),
fail_on_user=dict(type='bool', default='yes', aliases=['fail_on_role']),
role_attr_flags=dict(type='str', default=''),
encrypted=dict(type='bool', default='yes'),
no_password_changes=dict(type='bool', default='no'),
expires=dict(type='str', default=None),
conn_limit=dict(type='int', default=None),
session_role=dict(type='str'),
groups=dict(type='list'),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True
)
user = module.params["user"]
password = module.params["password"]
state = module.params["state"]
fail_on_user = module.params["fail_on_user"]
if module.params['db'] == '' and module.params["priv"] is not None:
module.fail_json(msg="privileges require a database to be specified")
privs = parse_privs(module.params["priv"], module.params["db"])
no_password_changes = module.params["no_password_changes"]
if module.params["encrypted"]:
encrypted = "ENCRYPTED"
else:
encrypted = "UNENCRYPTED"
expires = module.params["expires"]
conn_limit = module.params["conn_limit"]
role_attr_flags = module.params["role_attr_flags"]
groups = module.params["groups"]
if groups:
groups = [e.strip() for e in groups]
conn_params = get_conn_params(module, module.params, warn_db_default=False)
db_connection = connect_to_db(module, conn_params)
cursor = db_connection.cursor(cursor_factory=DictCursor)
try:
role_attr_flags = parse_role_attrs(cursor, role_attr_flags)
except InvalidFlagsError as e:
module.fail_json(msg=to_native(e), exception=traceback.format_exc())
kw = dict(user=user)
changed = False
user_removed = False
if state == "present":
if user_exists(cursor, user):
try:
changed = user_alter(db_connection, module, user, password,
role_attr_flags, encrypted, expires, no_password_changes, conn_limit)
except SQLParseError as e:
module.fail_json(msg=to_native(e), exception=traceback.format_exc())
else:
try:
changed = user_add(cursor, user, password,
role_attr_flags, encrypted, expires, conn_limit)
except psycopg2.ProgrammingError as e:
module.fail_json(msg="Unable to add user with given requirement "
"due to : %s" % to_native(e),
exception=traceback.format_exc())
except SQLParseError as e:
module.fail_json(msg=to_native(e), exception=traceback.format_exc())
try:
changed = grant_privileges(cursor, user, privs) or changed
except SQLParseError as e:
module.fail_json(msg=to_native(e), exception=traceback.format_exc())
if groups:
target_roles = []
target_roles.append(user)
pg_membership = PgMembership(module, cursor, groups, target_roles)
changed = pg_membership.grant()
executed_queries.extend(pg_membership.executed_queries)
else:
if user_exists(cursor, user):
if module.check_mode:
changed = True
kw['user_removed'] = True
else:
try:
changed = revoke_privileges(cursor, user, privs)
user_removed = user_delete(cursor, user)
except SQLParseError as e:
module.fail_json(msg=to_native(e), exception=traceback.format_exc())
changed = changed or user_removed
if fail_on_user and not user_removed:
msg = "Unable to remove user"
module.fail_json(msg=msg)
kw['user_removed'] = user_removed
if changed:
if module.check_mode:
db_connection.rollback()
else:
db_connection.commit()
kw['changed'] = changed
kw['queries'] = executed_queries
module.exit_json(**kw)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,921 |
gitlab_user requires email, name, and password arguments when deleting a user
|
##### SUMMARY
In the `gitlab_user` module the `email`, `name`, and `password` arguments are required.
That is perfectly fine when creating a new user as this information is needed.
However, these three arguments are also required when deleting a user (state==absent), even though they are not used by the module for this action.
The module fails with an error if these arguments are not passed in. You are forced to pass in blank (or dummy) values.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
gitlab_user
##### ANSIBLE VERSION
```paste below
ansible 2.8.4
```
##### CONFIGURATION
```paste below
```
##### OS / ENVIRONMENT
Manjaro Linux 18.0.4
##### STEPS TO REPRODUCE
```yaml
- name: Delete user account
gitlab_user:
server_url: "{{ gitlab_url }}"
api_token: "{{ token }}"
username: "{{ username }}"
state: absent
```
##### EXPECTED RESULTS
GitLab user should be successfully deleted. Only the `username` is required to find and delete the user.
##### ACTUAL RESULTS
```
fatal: [localhost]: FAILED! => {"changed": false, "msg": "missing required arguments: name, password, email"}
```
##### WORKAROUND
```yaml
- name: Delete user account
gitlab_user:
server_url: "{{ gitlab_url }}"
api_token: "{{ token }}"
username: "{{ username }}"
email: ""
name: ""
password: ""
state: absent
```
|
https://github.com/ansible/ansible/issues/61921
|
https://github.com/ansible/ansible/pull/64832
|
9ee601288c45a5ca2c1bc37196b9aade2966ca0d
|
eac7fa186088bbcb82c1914b124cfb93d9436202
| 2019-09-06T08:39:20Z |
python
| 2019-11-14T11:51:14Z |
changelogs/fragments/61921-gitlab_user.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,921 |
gitlab_user requires email, name, and password arguments when deleting a user
|
##### SUMMARY
In the `gitlab_user` module the `email`, `name`, and `password` arguments are required.
That is perfectly fine when creating a new user as this information is needed.
However, these three arguments are also required when deleting a user (state==absent), even though they are not used by the module for this action.
The module fails with an error if these arguments are not passed in. You are forced to pass in blank (or dummy) values.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
gitlab_user
##### ANSIBLE VERSION
```paste below
ansible 2.8.4
```
##### CONFIGURATION
```paste below
```
##### OS / ENVIRONMENT
Manjaro Linux 18.0.4
##### STEPS TO REPRODUCE
```yaml
- name: Delete user account
gitlab_user:
server_url: "{{ gitlab_url }}"
api_token: "{{ token }}"
username: "{{ username }}"
state: absent
```
##### EXPECTED RESULTS
GitLab user should be successfully deleted. Only the `username` is required to find and delete the user.
##### ACTUAL RESULTS
```
fatal: [localhost]: FAILED! => {"changed": false, "msg": "missing required arguments: name, password, email"}
```
##### WORKAROUND
```yaml
- name: Delete user account
gitlab_user:
server_url: "{{ gitlab_url }}"
api_token: "{{ token }}"
username: "{{ username }}"
email: ""
name: ""
password: ""
state: absent
```
|
https://github.com/ansible/ansible/issues/61921
|
https://github.com/ansible/ansible/pull/64832
|
9ee601288c45a5ca2c1bc37196b9aade2966ca0d
|
eac7fa186088bbcb82c1914b124cfb93d9436202
| 2019-09-06T08:39:20Z |
python
| 2019-11-14T11:51:14Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
No notable changes
Modules
=======
Modules removed
---------------
The following modules no longer exist:
* letsencrypt: use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
The following functionality will be removed in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`openssl_csr <openssl_csr_module>` module's option ``version`` no longer supports values other than ``1`` (the current only standardized CSR version).
* :ref:`docker_container <docker_container_module>`: the ``trust_image_content`` option will be removed. It has always been ignored by the module.
* :ref:`iam_managed_policy <iam_managed_policy_module>`: the ``fail_on_delete`` option will be removed. It has always been ignored by the module.
* :ref:`s3_lifecycle <s3_lifecycle_module>`: the ``requester_pays`` option will be removed. It has always been ignored by the module.
* :ref:`s3_sync <s3_sync_module>`: the ``retries`` option will be removed. It has always been ignored by the module.
* The return values ``err`` and ``out`` of :ref:`docker_stack <docker_stack_module>` have been deprecated. Use ``stdout`` and ``stderr`` from now on instead.
* :ref:`cloudformation <cloudformation_module>`: the ``template_format`` option will be removed. It has been ignored by the module since Ansible 2.3.
* :ref:`data_pipeline <data_pipeline_module>`: the ``version`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_eip <ec2_eip_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.3.
* :ref:`ec2_key <ec2_key_module>`: the ``wait`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_key <ec2_key_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_lc <ec2_lc_module>`: the ``associate_public_ip_address`` option will be removed. It has always been ignored by the module.
The following functionality will change in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`docker_container <docker_container_module>` module has a new option, ``container_default_behavior``, whose default value will change from ``compatibility`` to ``no_defaults``. Set to an explicit value to avoid deprecation warnings.
Noteworthy module changes
-------------------------
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in :ref:`pacman <pacman_module>` module has been removed; you should use ``extra_args=--recursive`` instead.
* :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module does not require VM name, which was a required parameter for releases prior to Ansible 2.10.
* :ref:`zabbix_action <zabbix_action_module>` no longer requires ``esc_period`` and ``event_source`` arguments when ``state=absent``.
Plugins
=======
No notable changes
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,921 |
gitlab_user requires email, name, and password arguments when deleting a user
|
##### SUMMARY
In the `gitlab_user` module the `email`, `name`, and `password` arguments are required.
That is perfectly fine when creating a new user as this information is needed.
However, these three arguments are also required when deleting a user (state==absent), even though they are not used by the module for this action.
The module fails with an error if these arguments are not passed in. You are forced to pass in blank (or dummy) values.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
gitlab_user
##### ANSIBLE VERSION
```paste below
ansible 2.8.4
```
##### CONFIGURATION
```paste below
```
##### OS / ENVIRONMENT
Manjaro Linux 18.0.4
##### STEPS TO REPRODUCE
```yaml
- name: Delete user account
gitlab_user:
server_url: "{{ gitlab_url }}"
api_token: "{{ token }}"
username: "{{ username }}"
state: absent
```
##### EXPECTED RESULTS
GitLab user should be successfully deleted. Only the `username` is required to find and delete the user.
##### ACTUAL RESULTS
```
fatal: [localhost]: FAILED! => {"changed": false, "msg": "missing required arguments: name, password, email"}
```
##### WORKAROUND
```yaml
- name: Delete user account
gitlab_user:
server_url: "{{ gitlab_url }}"
api_token: "{{ token }}"
username: "{{ username }}"
email: ""
name: ""
password: ""
state: absent
```
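One plausible fix (a sketch under assumptions, not necessarily what the linked pull request implements) is to stop marking `name`, `email`, and `password` as unconditionally required and only require them when creating a user, via `AnsibleModule`'s `required_if` mechanism:

```python
# Hypothetical argument_spec change (sketch only; other arguments omitted).
module = AnsibleModule(
    argument_spec=dict(
        api_token=dict(type='str', no_log=True),
        username=dict(type='str', required=True),
        name=dict(type='str'),                    # was required=True
        password=dict(type='str', no_log=True),   # was required=True
        email=dict(type='str'),                   # was required=True
        state=dict(type='str', default='present', choices=['present', 'absent']),
    ),
    required_if=[
        # Only creating/updating a user needs the identity fields.
        ('state', 'present', ['name', 'email']),
    ],
    supports_check_mode=True,
)
```

With this change, `state: absent` would need only `username`, while `state: present` would still enforce the fields needed to create the user.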
|
https://github.com/ansible/ansible/issues/61921
|
https://github.com/ansible/ansible/pull/64832
|
9ee601288c45a5ca2c1bc37196b9aade2966ca0d
|
eac7fa186088bbcb82c1914b124cfb93d9436202
| 2019-09-06T08:39:20Z |
python
| 2019-11-14T11:51:14Z |
lib/ansible/modules/source_control/gitlab/gitlab_user.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2015, Werner Dijkerman ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_user
short_description: Creates/updates/deletes GitLab Users
description:
- When the user does not exist in GitLab, it will be created.
- When the user does exist and state=absent, the user will be deleted.
- When changes are made to user, the user will be updated.
version_added: "2.1"
author:
- Werner Dijkerman (@dj-wasabi)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
- administrator rights on the GitLab server
extends_documentation_fragment:
- auth_basic
options:
api_token:
description:
- GitLab token for logging in.
type: str
name:
description:
- Name of the user you want to create
required: true
type: str
username:
description:
- The username of the user.
required: true
type: str
password:
description:
- The password of the user.
- GitLab server enforces a minimum password length of 8 characters, so set this value to 8 or more characters.
required: true
type: str
email:
description:
- The email that belongs to the user.
required: true
type: str
sshkey_name:
description:
- The name of the sshkey
type: str
sshkey_file:
description:
- The ssh key itself.
type: str
group:
description:
- Id or Full path of parent group in the form of group/name
- Add user as a member of this group.
type: str
access_level:
description:
- The access level to the group. One of the following can be used.
- guest
- reporter
- developer
- master (alias for maintainer)
- maintainer
- owner
default: guest
type: str
choices: ["guest", "reporter", "developer", "master", "maintainer", "owner"]
state:
description:
- Create or delete the user.
- Possible values are present and absent.
default: present
type: str
choices: ["present", "absent"]
confirm:
description:
- Require confirmation.
type: bool
default: yes
version_added: "2.4"
isadmin:
description:
- Grant admin privileges to the user
type: bool
default: no
version_added: "2.8"
external:
description:
- Define external parameter for this user
type: bool
default: no
version_added: "2.8"
'''
EXAMPLES = '''
- name: "Delete GitLab User"
gitlab_user:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
validate_certs: False
username: myusername
state: absent
delegate_to: localhost
- name: "Create GitLab User"
gitlab_user:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: My Name
username: myusername
password: mysecretpassword
email: [email protected]
sshkey_name: MySSH
sshkey_file: ssh-rsa AAAAB3NzaC1yc...
state: present
group: super_group/mon_group
access_level: owner
delegate_to: localhost
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
user:
description: API object
returned: always
type: dict
'''
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findGroup
class GitLabUser(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.userObject = None
self.ACCESS_LEVEL = {
'guest': gitlab.GUEST_ACCESS,
'reporter': gitlab.REPORTER_ACCESS,
'developer': gitlab.DEVELOPER_ACCESS,
'master': gitlab.MAINTAINER_ACCESS,
'maintainer': gitlab.MAINTAINER_ACCESS,
'owner': gitlab.OWNER_ACCESS}
'''
@param username Username of the user
@param options User options
'''
def createOrUpdateUser(self, username, options):
changed = False
# existsUser() has already been called in main(), so self.userObject is set when the user exists.
if self.userObject is None:
user = self.createUser({
'name': options['name'],
'username': username,
'password': options['password'],
'email': options['email'],
'skip_confirmation': not options['confirm'],
'admin': options['isadmin'],
'external': options['external']})
changed = True
else:
changed, user = self.updateUser(self.userObject, {
'name': options['name'],
'email': options['email'],
'is_admin': options['isadmin'],
'external': options['external']})
# Assign ssh keys
if options['sshkey_name'] and options['sshkey_file']:
key_changed = self.addSshKeyToUser(user, {
'name': options['sshkey_name'],
'file': options['sshkey_file']})
changed = changed or key_changed
# Assign group
if options['group_path']:
group_changed = self.assignUserToGroup(user, options['group_path'], options['access_level'])
changed = changed or group_changed
self.userObject = user
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the user %s" % username)
try:
user.save()
except Exception as e:
self._module.fail_json(msg="Failed to update user: %s " % to_native(e))
return True
else:
return False
'''
@param user User object
'''
def getUserId(self, user):
if user is not None:
return user.id
return None
'''
@param user User object
@param sshkey_name Name of the ssh key
'''
def sshKeyExists(self, user, sshkey_name):
keyList = map(lambda k: k.title, user.keys.list())
return sshkey_name in keyList
'''
@param user User object
@param sshkey Dict containing sshkey infos {"name": "", "file": ""}
'''
def addSshKeyToUser(self, user, sshkey):
if not self.sshKeyExists(user, sshkey['name']):
if self._module.check_mode:
return True
try:
user.keys.create({
'title': sshkey['name'],
'key': sshkey['file']})
except gitlab.exceptions.GitlabCreateError as e:
self._module.fail_json(msg="Failed to assign sshkey to user: %s" % to_native(e))
return True
return False
'''
@param group Group object
@param user_id Id of the user to find
'''
def findMember(self, group, user_id):
try:
member = group.members.get(user_id)
except gitlab.exceptions.GitlabGetError:
return None
return member
'''
@param group Group object
@param user_id Id of the user to check
'''
def memberExists(self, group, user_id):
member = self.findMember(group, user_id)
return member is not None
'''
@param group Group object
@param user_id Id of the user to check
@param access_level GitLab access_level to check
'''
def memberAsGoodAccessLevel(self, group, user_id, access_level):
member = self.findMember(group, user_id)
return member.access_level == access_level
'''
@param user User object
@param group_path Complete path of the Group including parent group path. <parent_path>/<group_path>
@param access_level GitLab access_level to assign
'''
def assignUserToGroup(self, user, group_identifier, access_level):
group = findGroup(self._gitlab, group_identifier)
if self._module.check_mode:
return True
if group is None:
return False
if self.memberExists(group, self.getUserId(user)):
member = self.findMember(group, self.getUserId(user))
if not self.memberAsGoodAccessLevel(group, member.id, self.ACCESS_LEVEL[access_level]):
member.access_level = self.ACCESS_LEVEL[access_level]
member.save()
return True
else:
try:
group.members.create({
'user_id': self.getUserId(user),
'access_level': self.ACCESS_LEVEL[access_level]})
except gitlab.exceptions.GitlabCreateError as e:
self._module.fail_json(msg="Failed to assign user to group: %s" % to_native(e))
return True
return False
'''
@param user User object
@param arguments User attributes
'''
def updateUser(self, user, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arg_value is not None:
if getattr(user, arg_key) != arg_value:
setattr(user, arg_key, arg_value)
changed = True
return (changed, user)
'''
@param arguments User attributes
'''
def createUser(self, arguments):
if self._module.check_mode:
return True
try:
user = self._gitlab.users.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create user: %s " % to_native(e))
return user
'''
@param username Username of the user
'''
def findUser(self, username):
users = self._gitlab.users.list(search=username)
for user in users:
if user.username == username:
return user
'''
@param username Username of the user
'''
def existsUser(self, username):
# When user exists, object will be stored in self.userObject.
user = self.findUser(username)
if user:
self.userObject = user
return True
return False
def deleteUser(self):
if self._module.check_mode:
return True
user = self.userObject
return user.delete()
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
api_token=dict(type='str', no_log=True),
name=dict(type='str', required=True),
state=dict(type='str', default="present", choices=["absent", "present"]),
username=dict(type='str', required=True),
password=dict(type='str', required=True, no_log=True),
email=dict(type='str', required=True),
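# Note: name/password/email being unconditionally required here is what forces
# callers to pass dummy values for state=absent (see the issue above); a
# required_if rule tied to state=present would relax this.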
sshkey_name=dict(type='str'),
sshkey_file=dict(type='str'),
group=dict(type='str'),
access_level=dict(type='str', default="guest", choices=["developer", "guest", "maintainer", "master", "owner", "reporter"]),
confirm=dict(type='bool', default=True),
isadmin=dict(type='bool', default=False),
external=dict(type='bool', default=False),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token'],
],
required_together=[
['api_username', 'api_password'],
],
required_one_of=[
['api_username', 'api_token']
],
supports_check_mode=True,
)
gitlab_url = module.params['api_url']
validate_certs = module.params['validate_certs']
gitlab_user = module.params['api_username']
gitlab_password = module.params['api_password']
gitlab_token = module.params['api_token']
user_name = module.params['name']
state = module.params['state']
user_username = module.params['username'].lower()
user_password = module.params['password']
user_email = module.params['email']
user_sshkey_name = module.params['sshkey_name']
user_sshkey_file = module.params['sshkey_file']
group_path = module.params['group']
access_level = module.params['access_level']
confirm = module.params['confirm']
user_isadmin = module.params['isadmin']
user_external = module.params['external']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e))
gitlab_user = GitLabUser(module, gitlab_instance)
user_exists = gitlab_user.existsUser(user_username)
if state == 'absent':
if user_exists:
gitlab_user.deleteUser()
module.exit_json(changed=True, msg="Successfully deleted user %s" % user_username)
else:
module.exit_json(changed=False, msg="User deleted or does not exists")
if state == 'present':
if gitlab_user.createOrUpdateUser(user_username, {
"name": user_name,
"password": user_password,
"email": user_email,
"sshkey_name": user_sshkey_name,
"sshkey_file": user_sshkey_file,
"group_path": group_path,
"access_level": access_level,
"confirm": confirm,
"isadmin": user_isadmin,
"external": user_external}):
module.exit_json(changed=True, msg="Successfully created or updated the user %s" % user_username, user=gitlab_user.userObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the user %s" % user_username, user=gitlab_user.userObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,267 |
kinesis_stream Wait Calls DescribeStream In a Loop Without Pause
|
##### SUMMARY
If you use kinesis_stream with wait set to yes, Ansible will call the DescribeStream API in a loop with no sleep between calls. I've seen up to 95 calls per second. You will most likely receive rate exceeded errors, which are masked by Ansible but can be seen in your CloudTrail logs.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
kinesis_stream
##### ANSIBLE VERSION
```
ansible 2.8.6
config file = /home/ansible/ansible.cfg
configured module search path = [u'/home/ansible/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
```
COMMAND_WARNINGS(/home/ansible/ansible.cfg) = False
DEFAULT_CALLBACK_WHITELIST(/home/ansible/ansible.cfg) = [u'profile_roles', u'timer', u'log_plays']
DEFAULT_LOG_PATH(/home/ansible/ansible.cfg) = /var/log/ansible/ansible.log
DEFAULT_ROLES_PATH(/home/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/etc/ansible/xxx']
DEFAULT_STDOUT_CALLBACK(/home/ansible/ansible.cfg) = default
HOST_KEY_CHECKING(/home/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/home/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
Ubuntu 18.04.3 LTS
##### STEPS TO REPRODUCE
Create a kinesis_stream with wait set to yes. Run it and look at your CloudTrail logs for Kinesis DescribeStream calls.
```
kinesis_stream:
aws_access_key: "xxx"
aws_secret_key: "xxx"
region: "xxx"
name: "my_stream"
shards: "1"
wait: yes
wait_timeout: 600
```
##### EXPECTED RESULTS
DescribeStream should only be called once per 5 seconds.
##### ACTUAL RESULTS
DescribeStream is called dozens of times per second.
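For reference, here is a minimal sketch of the throttled polling pattern the waiter should follow. It is illustrative only, not the module's actual implementation, and it omits the error handling needed for streams that are being deleted:

```python
import time

import boto3


def wait_for_stream_status(client, stream_name, status, timeout=300, poll_secs=5):
    # Poll DescribeStream at a fixed interval instead of in a tight loop.
    deadline = time.time() + timeout
    while time.time() < deadline:
        description = client.describe_stream(StreamName=stream_name)['StreamDescription']
        if description['StreamStatus'] == status:
            return True
        # The crucial pause the buggy loop skipped: without it, DescribeStream
        # is hammered dozens of times per second and gets rate limited.
        time.sleep(poll_secs)
    return False


client = boto3.client('kinesis')
wait_for_stream_status(client, 'my_stream', 'ACTIVE')
```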
|
https://github.com/ansible/ansible/issues/64267
|
https://github.com/ansible/ansible/pull/64283
|
fbdd295cef36b898a97e7f1fa5fab221f145be58
|
d6a51807cd17fd0e2307dd89c8cb790279c822b1
| 2019-11-01T13:35:42Z |
python
| 2019-11-14T15:53:01Z |
lib/ansible/modules/cloud/amazon/kinesis_stream.py
|
#!/usr/bin/python
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: kinesis_stream
short_description: Manage a Kinesis Stream.
description:
- Create or Delete a Kinesis Stream.
- Update the retention period of a Kinesis Stream.
- Update Tags on a Kinesis Stream.
- Enable/disable server side encryption on a Kinesis Stream.
version_added: "2.2"
requirements: [ boto3 ]
author: Allen Sanabria (@linuxdynasty)
options:
name:
description:
- The name of the Kinesis Stream you are managing.
required: true
type: str
shards:
description:
- The number of shards you want to have with this stream.
- This is required when I(state=present)
type: int
retention_period:
description:
- The length of time (in hours) data records are accessible after they are added to
the stream.
- The default retention period is 24 hours and can not be less than 24 hours.
- The maximum retention period is 168 hours.
- The retention period can be modified during any point in time.
type: int
state:
description:
- Create or Delete the Kinesis Stream.
default: present
choices: [ 'present', 'absent' ]
type: str
wait:
description:
- Wait for operation to complete before returning.
default: true
type: bool
wait_timeout:
description:
- How many seconds to wait for an operation to complete before timing out.
default: 300
type: int
tags:
description:
- "A dictionary of resource tags of the form: C({ tag1: value1, tag2: value2 })."
aliases: [ "resource_tags" ]
type: dict
encryption_state:
description:
- Enable or Disable encryption on the Kinesis Stream.
choices: [ 'enabled', 'disabled' ]
version_added: "2.5"
type: str
encryption_type:
description:
- The type of encryption.
- Defaults to C(KMS)
choices: ['KMS', 'NONE']
version_added: "2.5"
type: str
key_id:
description:
- The GUID or alias for the KMS key.
version_added: "2.5"
type: str
extends_documentation_fragment:
- aws
- ec2
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
# Basic creation example:
- name: Set up Kinesis Stream with 10 shards and wait for the stream to become ACTIVE
kinesis_stream:
name: test-stream
shards: 10
wait: yes
wait_timeout: 600
register: test_stream
# Basic creation example with tags:
- name: Set up Kinesis Stream with 10 shards, tag the environment, and wait for the stream to become ACTIVE
kinesis_stream:
name: test-stream
shards: 10
tags:
Env: development
wait: yes
wait_timeout: 600
register: test_stream
# Basic creation example with tags and increase the retention period from the default 24 hours to 48 hours:
- name: Set up Kinesis Stream with 10 shards, tag the environment, increase the retention period and wait for the stream to become ACTIVE
kinesis_stream:
name: test-stream
retention_period: 48
shards: 10
tags:
Env: development
wait: yes
wait_timeout: 600
register: test_stream
# Basic delete example:
- name: Delete Kinesis Stream test-stream and wait for it to finish deleting.
kinesis_stream:
name: test-stream
state: absent
wait: yes
wait_timeout: 600
register: test_stream
# Basic enable encryption example:
- name: Encrypt Kinesis Stream test-stream.
kinesis_stream:
name: test-stream
state: present
encryption_state: enabled
encryption_type: KMS
key_id: alias/aws/kinesis
wait: yes
wait_timeout: 600
register: test_stream
# Basic disable encryption example:
- name: Encrypt Kinesis Stream test-stream.
kinesis_stream:
name: test-stream
state: present
encryption_state: disabled
encryption_type: KMS
key_id: alias/aws/kinesis
wait: yes
wait_timeout: 600
register: test_stream
'''
RETURN = '''
stream_name:
description: The name of the Kinesis Stream.
returned: when state == present.
type: str
sample: "test-stream"
stream_arn:
description: The amazon resource identifier
returned: when state == present.
type: str
sample: "arn:aws:kinesis:east-side:123456789:stream/test-stream"
stream_status:
description: The current state of the Kinesis Stream.
returned: when state == present.
type: str
sample: "ACTIVE"
retention_period_hours:
description: Number of hours messages will be kept for a Kinesis Stream.
returned: when state == present.
type: int
sample: 24
tags:
description: Dictionary containing all the tags associated with the Kinesis stream.
returned: when state == present.
type: dict
sample: {
"Name": "Splunk",
"Env": "development"
}
'''
import re
import datetime
import time
try:
import botocore.exceptions
except ImportError:
pass # Taken care of by ec2.HAS_BOTO3
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ec2 import HAS_BOTO3, boto3_conn, ec2_argument_spec, get_aws_connection_info
from ansible.module_utils._text import to_native
def convert_to_lower(data):
"""Convert all uppercase keys in dict with lowercase_
Args:
data (dict): Dictionary with keys that have upper cases in them
Example.. FooBar == foo_bar
if a val is of type datetime.datetime, it will be converted to
the ISO 8601
Basic Usage:
>>> test = {'FooBar': []}
>>> test = convert_to_lower(test)
{
'foo_bar': []
}
Returns:
Dictionary
"""
results = dict()
if isinstance(data, dict):
for key, val in data.items():
key = re.sub(r'(([A-Z]{1,3}){1})', r'_\1', key).lower()
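# Prefix each run of up to three capital letters with an underscore, then
# lowercase everything, e.g. 'RetentionPeriodHours' -> '_retention_period_hours';
# a leading underscore produced this way is stripped just below.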
if key[0] == '_':
key = key[1:]
if isinstance(val, datetime.datetime):
results[key] = val.isoformat()
elif isinstance(val, dict):
results[key] = convert_to_lower(val)
elif isinstance(val, list):
converted = list()
for item in val:
converted.append(convert_to_lower(item))
results[key] = converted
else:
results[key] = val
return results
def make_tags_in_proper_format(tags):
"""Take a dictionary of tags and convert them into the AWS Tags format.
Args:
tags (list): The tags you want applied.
Basic Usage:
>>> tags = [{'Key': 'env', 'Value': 'development'}]
>>> make_tags_in_proper_format(tags)
{
"env": "development",
}
Returns:
Dict
"""
formatted_tags = dict()
for tag in tags:
formatted_tags[tag.get('Key')] = tag.get('Value')
return formatted_tags
def make_tags_in_aws_format(tags):
"""Take a dictionary of tags and convert them into the AWS Tags format.
Args:
tags (dict): The tags you want applied.
Basic Usage:
>>> tags = {'env': 'development', 'service': 'web'}
>>> make_tags_in_proper_format(tags)
[
{
"Value": "web",
"Key": "service"
},
{
"Value": "development",
"key": "env"
}
]
Returns:
List
"""
formatted_tags = list()
for key, val in tags.items():
formatted_tags.append({
'Key': key,
'Value': val
})
return formatted_tags
def get_tags(client, stream_name, check_mode=False):
"""Retrieve the tags for a Kinesis Stream.
Args:
client (botocore.client.EC2): Boto3 client.
stream_name (str): Name of the Kinesis stream.
Kwargs:
check_mode (bool): This will pass DryRun as one of the parameters to the aws api.
default=False
Basic Usage:
>>> client = boto3.client('kinesis')
>>> stream_name = 'test-stream'
>> get_tags(client, stream_name)
Returns:
Tuple (bool, str, dict)
"""
err_msg = ''
success = False
params = {
'StreamName': stream_name,
}
results = dict()
try:
if not check_mode:
results = (
client.list_tags_for_stream(**params)['Tags']
)
else:
results = [
{
'Key': 'DryRunMode',
'Value': 'true'
},
]
success = True
except botocore.exceptions.ClientError as e:
err_msg = to_native(e)
return success, err_msg, results
def find_stream(client, stream_name, check_mode=False):
"""Retrieve a Kinesis Stream.
Args:
client (botocore.client.EC2): Boto3 client.
stream_name (str): Name of the Kinesis stream.
Kwargs:
check_mode (bool): This will pass DryRun as one of the parameters to the aws api.
default=False
Basic Usage:
>>> client = boto3.client('kinesis')
>>> stream_name = 'test-stream'
Returns:
Tuple (bool, str, dict)
"""
err_msg = ''
success = False
params = {
'StreamName': stream_name,
}
results = dict()
has_more_shards = True
shards = list()
try:
if not check_mode:
while has_more_shards:
results = (
client.describe_stream(**params)['StreamDescription']
)
shards.extend(results.pop('Shards'))
has_more_shards = results['HasMoreShards']
results['Shards'] = shards
num_closed_shards = len([s for s in shards if 'EndingSequenceNumber' in s['SequenceNumberRange']])
results['OpenShardsCount'] = len(shards) - num_closed_shards
results['ClosedShardsCount'] = num_closed_shards
results['ShardsCount'] = len(shards)
else:
results = {
'OpenShardsCount': 5,
'ClosedShardsCount': 0,
'ShardsCount': 5,
'HasMoreShards': True,
'RetentionPeriodHours': 24,
'StreamName': stream_name,
'StreamARN': 'arn:aws:kinesis:east-side:123456789:stream/{0}'.format(stream_name),
'StreamStatus': 'ACTIVE',
'EncryptionType': 'NONE'
}
success = True
except botocore.exceptions.ClientError as e:
err_msg = to_native(e)
return success, err_msg, results
def wait_for_status(client, stream_name, status, wait_timeout=300,
check_mode=False):
"""Wait for the status to change for a Kinesis Stream.
Args:
client (botocore.client.EC2): Boto3 client
stream_name (str): The name of the kinesis stream.
status (str): The status to wait for.
examples. status=available, status=deleted
Kwargs:
wait_timeout (int): Number of seconds to wait, until this timeout is reached.
check_mode (bool): This will pass DryRun as one of the parameters to the aws api.
default=False
Basic Usage:
>>> client = boto3.client('kinesis')
>>> stream_name = 'test-stream'
>>> wait_for_status(client, stream_name, 'ACTIVE', 300)
Returns:
Tuple (bool, str, dict)
"""
polling_increment_secs = 5
wait_timeout = time.time() + wait_timeout
status_achieved = False
stream = dict()
err_msg = ""
while wait_timeout > time.time():
try:
find_success, find_msg, stream = (
find_stream(client, stream_name, check_mode=check_mode)
)
if check_mode:
status_achieved = True
break
elif status != 'DELETING':
if find_success and stream:
if stream.get('StreamStatus') == status:
status_achieved = True
break
elif status == 'DELETING' and not check_mode:
if not find_success:
status_achieved = True
break
# Always pause between DescribeStream calls. The previous 'else' branch was
# unreachable (check_mode already breaks above), so the loop polled the API
# with no delay at all and triggered rate limiting.
time.sleep(polling_increment_secs)
except botocore.exceptions.ClientError as e:
err_msg = to_native(e)
if not status_achieved:
err_msg = "Wait time out reached, while waiting for results"
else:
err_msg = "Status {0} achieved successfully".format(status)
return status_achieved, err_msg, stream
def tags_action(client, stream_name, tags, action='create', check_mode=False):
"""Create or delete multiple tags from a Kinesis Stream.
Args:
client (botocore.client.EC2): Boto3 client.
resource_id (str): The Amazon resource id.
tags (list): List of dictionaries.
examples.. [{Name: "", Values: [""]}]
Kwargs:
action (str): The action to perform.
valid actions == create and delete
default=create
check_mode (bool): This will pass DryRun as one of the parameters to the aws api.
default=False
Basic Usage:
>>> client = boto3.client('ec2')
>>> resource_id = 'pcx-123345678'
>>> tags = {'env': 'development'}
>>> update_tags(client, resource_id, tags)
[True, '']
Returns:
List (bool, str)
"""
success = False
err_msg = ""
params = {'StreamName': stream_name}
try:
if not check_mode:
if action == 'create':
params['Tags'] = tags
client.add_tags_to_stream(**params)
success = True
elif action == 'delete':
params['TagKeys'] = list(tags)
client.remove_tags_from_stream(**params)
success = True
else:
err_msg = 'Invalid action {0}'.format(action)
else:
if action == 'create':
success = True
elif action == 'delete':
success = True
else:
err_msg = 'Invalid action {0}'.format(action)
except botocore.exceptions.ClientError as e:
err_msg = to_native(e)
return success, err_msg
def recreate_tags_from_list(list_of_tags):
"""Recreate tags from a list of tuples into the Amazon Tag format.
Args:
list_of_tags (list): List of tuples.
Basic Usage:
>>> list_of_tags = [('Env', 'Development')]
>>> recreate_tags_from_list(list_of_tags)
[
{
"Value": "Development",
"Key": "Env"
}
]
Returns:
List
"""
tags = list()
for key_name, key_val in list_of_tags:
tags.append(
{
'Key': key_name,
'Value': key_val
}
)
return tags
def update_tags(client, stream_name, tags, check_mode=False):
"""Update tags for an amazon resource.
Args:
resource_id (str): The Amazon resource id.
tags (dict): Dictionary of tags you want applied to the Kinesis stream.
Kwargs:
check_mode (bool): This will pass DryRun as one of the parameters to the aws api.
default=False
Basic Usage:
>>> client = boto3.client('ec2')
>>> stream_name = 'test-stream'
>>> tags = {'env': 'development'}
>>> update_tags(client, stream_name, tags)
[True, '']
Return:
Tuple (bool, str)
"""
success = False
changed = False
err_msg = ''
tag_success, tag_msg, current_tags = (
get_tags(client, stream_name, check_mode=check_mode)
)
if current_tags:
tags = make_tags_in_aws_format(tags)
# The reduce() over a single-element list was a no-op; building the sets of
# (key, value) pairs directly expresses the same comparison.
current_tags_set = set(make_tags_in_proper_format(current_tags).items())
new_tags_set = set(make_tags_in_proper_format(tags).items())
tags_to_delete = list(current_tags_set.difference(new_tags_set))
tags_to_update = list(new_tags_set.difference(current_tags_set))
if tags_to_delete:
tags_to_delete = make_tags_in_proper_format(
recreate_tags_from_list(tags_to_delete)
)
delete_success, delete_msg = (
tags_action(
client, stream_name, tags_to_delete, action='delete',
check_mode=check_mode
)
)
if not delete_success:
return delete_success, changed, delete_msg
if tags_to_update:
tags = make_tags_in_proper_format(
recreate_tags_from_list(tags_to_update)
)
else:
return True, changed, 'Tags do not need to be updated'
if tags:
create_success, create_msg = (
tags_action(
client, stream_name, tags, action='create',
check_mode=check_mode
)
)
if create_success:
changed = True
return create_success, changed, create_msg
return success, changed, err_msg
def stream_action(client, stream_name, shard_count=1, action='create',
timeout=300, check_mode=False):
"""Create or Delete an Amazon Kinesis Stream.
Args:
client (botocore.client.EC2): Boto3 client.
stream_name (str): The name of the kinesis stream.
Kwargs:
shard_count (int): Number of shards this stream will use.
action (str): The action to perform.
valid actions == create and delete
default=create
check_mode (bool): This will pass DryRun as one of the parameters to the aws api.
default=False
Basic Usage:
>>> client = boto3.client('kinesis')
>>> stream_name = 'test-stream'
>>> shard_count = 20
>>> stream_action(client, stream_name, shard_count, action='create')
Returns:
List (bool, str)
"""
success = False
err_msg = ''
params = {
'StreamName': stream_name
}
try:
if not check_mode:
if action == 'create':
params['ShardCount'] = shard_count
client.create_stream(**params)
success = True
elif action == 'delete':
client.delete_stream(**params)
success = True
else:
err_msg = 'Invalid action {0}'.format(action)
else:
if action == 'create':
success = True
elif action == 'delete':
success = True
else:
err_msg = 'Invalid action {0}'.format(action)
except botocore.exceptions.ClientError as e:
err_msg = to_native(e)
return success, err_msg
def stream_encryption_action(client, stream_name, action='start_encryption', encryption_type='', key_id='',
timeout=300, check_mode=False):
"""Create, Encrypt or Delete an Amazon Kinesis Stream.
Args:
client (botocore.client.EC2): Boto3 client.
stream_name (str): The name of the kinesis stream.
Kwargs:
shard_count (int): Number of shards this stream will use.
action (str): The action to perform.
valid actions == create and delete
default=create
encryption_type (str): NONE or KMS
key_id (str): The GUID or alias for the KMS key
check_mode (bool): This will pass DryRun as one of the parameters to the aws api.
default=False
Basic Usage:
>>> client = boto3.client('kinesis')
>>> stream_name = 'test-stream'
>>> shard_count = 20
>>> stream_action(client, stream_name, shard_count, action='create', encryption_type='KMS',key_id='alias/aws')
Returns:
List (bool, str)
"""
success = False
err_msg = ''
params = {
'StreamName': stream_name
}
try:
if not check_mode:
if action == 'start_encryption':
params['EncryptionType'] = encryption_type
params['KeyId'] = key_id
client.start_stream_encryption(**params)
success = True
elif action == 'stop_encryption':
params['EncryptionType'] = encryption_type
params['KeyId'] = key_id
client.stop_stream_encryption(**params)
success = True
else:
err_msg = 'Invalid encryption action {0}'.format(action)
else:
if action == 'start_encryption':
success = True
elif action == 'stop_encryption':
success = True
else:
err_msg = 'Invalid encryption action {0}'.format(action)
except botocore.exceptions.ClientError as e:
err_msg = to_native(e)
return success, err_msg
def retention_action(client, stream_name, retention_period=24,
action='increase', check_mode=False):
"""Increase or Decrease the retention of messages in the Kinesis stream.
Args:
client (botocore.client.EC2): Boto3 client.
stream_name (str): The name of the kinesis stream.
Kwargs:
retention_period (int): This is how long messages will be kept before
they are discarded. This can not be less than 24 hours.
action (str): The action to perform.
valid actions == create and delete
default=create
check_mode (bool): This will pass DryRun as one of the parameters to the aws api.
default=False
Basic Usage:
>>> client = boto3.client('kinesis')
>>> stream_name = 'test-stream'
>>> retention_period = 48
>>> retention_action(client, stream_name, retention_period, action='increase')
Returns:
Tuple (bool, str)
"""
success = False
err_msg = ''
params = {
'StreamName': stream_name
}
try:
if not check_mode:
if action == 'increase':
params['RetentionPeriodHours'] = retention_period
client.increase_stream_retention_period(**params)
success = True
err_msg = (
'Retention Period increased successfully to {0}'.format(retention_period)
)
elif action == 'decrease':
params['RetentionPeriodHours'] = retention_period
client.decrease_stream_retention_period(**params)
success = True
err_msg = (
'Retention Period decreased successfully to {0}'.format(retention_period)
)
else:
err_msg = 'Invalid action {0}'.format(action)
else:
if action == 'increase':
success = True
elif action == 'decrease':
success = True
else:
err_msg = 'Invalid action {0}'.format(action)
except botocore.exceptions.ClientError as e:
err_msg = to_native(e)
return success, err_msg
def update_shard_count(client, stream_name, number_of_shards=1, check_mode=False):
"""Increase or Decrease the number of shards in the Kinesis stream.
Args:
client (botocore.client.EC2): Boto3 client.
stream_name (str): The name of the kinesis stream.
Kwargs:
number_of_shards (int): Number of shards this stream will use.
default=1
check_mode (bool): This will pass DryRun as one of the parameters to the aws api.
default=False
Basic Usage:
>>> client = boto3.client('kinesis')
>>> stream_name = 'test-stream'
>>> number_of_shards = 3
>>> update_shard_count(client, stream_name, number_of_shards)
Returns:
Tuple (bool, str)
"""
success = True
err_msg = ''
params = {
'StreamName': stream_name,
'ScalingType': 'UNIFORM_SCALING'
}
if not check_mode:
params['TargetShardCount'] = number_of_shards
try:
client.update_shard_count(**params)
except botocore.exceptions.ClientError as e:
return False, str(e)
return success, err_msg
def update(client, current_stream, stream_name, number_of_shards=1, retention_period=None,
tags=None, wait=False, wait_timeout=300, check_mode=False):
"""Update an Amazon Kinesis Stream.
Args:
client (botocore.client.EC2): Boto3 client.
stream_name (str): The name of the kinesis stream.
Kwargs:
number_of_shards (int): Number of shards this stream will use.
default=1
retention_period (int): This is how long messages will be kept before
they are discarded. This can not be less than 24 hours.
tags (dict): The tags you want applied.
wait (bool): Wait until Stream is ACTIVE.
default=False
wait_timeout (int): How long to wait until this operation is considered failed.
default=300
check_mode (bool): This will pass DryRun as one of the parameters to the aws api.
default=False
Basic Usage:
>>> client = boto3.client('kinesis')
>>> current_stream = {
'ShardCount': 3,
'HasMoreShards': True,
'RetentionPeriodHours': 24,
'StreamName': 'test-stream',
'StreamARN': 'arn:aws:kinesis:us-west-2:123456789:stream/test-stream',
'StreamStatus': "ACTIVE'
}
>>> stream_name = 'test-stream'
>>> retention_period = 48
>>> number_of_shards = 10
>>> update(client, current_stream, stream_name,
number_of_shards, retention_period )
Returns:
Tuple (bool, bool, str)
"""
success = True
changed = False
err_msg = ''
if retention_period:
if wait:
wait_success, wait_msg, current_stream = (
wait_for_status(
client, stream_name, 'ACTIVE', wait_timeout,
check_mode=check_mode
)
)
if not wait_success:
return wait_success, False, wait_msg
if current_stream.get('StreamStatus') == 'ACTIVE':
retention_changed = False
if retention_period > current_stream['RetentionPeriodHours']:
retention_changed, retention_msg = (
retention_action(
client, stream_name, retention_period, action='increase',
check_mode=check_mode
)
)
elif retention_period < current_stream['RetentionPeriodHours']:
retention_changed, retention_msg = (
retention_action(
client, stream_name, retention_period, action='decrease',
check_mode=check_mode
)
)
elif retention_period == current_stream['RetentionPeriodHours']:
retention_msg = (
'Retention {0} is the same as {1}'
.format(
retention_period,
current_stream['RetentionPeriodHours']
)
)
success = True
if retention_changed:
success = True
changed = True
err_msg = retention_msg
if changed and wait:
wait_success, wait_msg, current_stream = (
wait_for_status(
client, stream_name, 'ACTIVE', wait_timeout,
check_mode=check_mode
)
)
if not wait_success:
return wait_success, False, wait_msg
elif changed and not wait:
stream_found, stream_msg, current_stream = (
find_stream(client, stream_name, check_mode=check_mode)
)
if stream_found:
if current_stream['StreamStatus'] != 'ACTIVE':
err_msg = (
'Retention Period for {0} is in the process of updating'
.format(stream_name)
)
return success, changed, err_msg
else:
err_msg = (
'StreamStatus has to be ACTIVE in order to modify the retention period. Current status is {0}'
.format(current_stream.get('StreamStatus', 'UNKNOWN'))
)
return success, changed, err_msg
if current_stream['OpenShardsCount'] != number_of_shards:
success, err_msg = (
update_shard_count(client, stream_name, number_of_shards, check_mode=check_mode)
)
if not success:
return success, changed, err_msg
changed = True
if wait:
wait_success, wait_msg, current_stream = (
wait_for_status(
client, stream_name, 'ACTIVE', wait_timeout,
check_mode=check_mode
)
)
if not wait_success:
return wait_success, changed, wait_msg
else:
stream_found, stream_msg, current_stream = (
find_stream(client, stream_name, check_mode=check_mode)
)
if stream_found and current_stream['StreamStatus'] != 'ACTIVE':
err_msg = (
'Number of shards for {0} is in the process of updating'
.format(stream_name)
)
return success, changed, err_msg
if tags:
tag_success, tag_changed, err_msg = (
update_tags(client, stream_name, tags, check_mode=check_mode)
)
if wait:
success, err_msg, status_stream = (
wait_for_status(
client, stream_name, 'ACTIVE', wait_timeout,
check_mode=check_mode
)
)
if success and changed:
err_msg = 'Kinesis Stream {0} updated successfully.'.format(stream_name)
elif success and not changed:
err_msg = 'Kinesis Stream {0} did not change.'.format(stream_name)
return success, changed, err_msg
def create_stream(client, stream_name, number_of_shards=1, retention_period=None,
tags=None, wait=False, wait_timeout=300, check_mode=False):
"""Create an Amazon Kinesis Stream.
Args:
client (botocore.client.EC2): Boto3 client.
stream_name (str): The name of the kinesis stream.
Kwargs:
number_of_shards (int): Number of shards this stream will use.
default=1
retention_period (int): Can not be less than 24 hours
default=None
tags (dict): The tags you want applied.
default=None
wait (bool): Wait until Stream is ACTIVE.
default=False
wait_timeout (int): How long to wait until this operation is considered failed.
default=300
check_mode (bool): This will pass DryRun as one of the parameters to the aws api.
default=False
Basic Usage:
>>> client = boto3.client('kinesis')
>>> stream_name = 'test-stream'
>>> number_of_shards = 10
>>> tags = {'env': 'test'}
>>> create_stream(client, stream_name, number_of_shards, tags=tags)
Returns:
Tuple (bool, bool, str, dict)
"""
success = False
changed = False
err_msg = ''
results = dict()
stream_found, stream_msg, current_stream = (
find_stream(client, stream_name, check_mode=check_mode)
)
if stream_found and current_stream.get('StreamStatus') == 'DELETING' and wait:
wait_success, wait_msg, current_stream = (
wait_for_status(
client, stream_name, 'ACTIVE', wait_timeout,
check_mode=check_mode
)
)
if stream_found and current_stream.get('StreamStatus') != 'DELETING':
success, changed, err_msg = update(
client, current_stream, stream_name, number_of_shards,
retention_period, tags, wait, wait_timeout, check_mode=check_mode
)
else:
create_success, create_msg = (
stream_action(
client, stream_name, number_of_shards, action='create',
check_mode=check_mode
)
)
if not create_success:
changed = True
err_msg = 'Failed to create Kinesis stream: {0}'.format(create_msg)
return False, True, err_msg, {}
else:
changed = True
if wait:
wait_success, wait_msg, results = (
wait_for_status(
client, stream_name, 'ACTIVE', wait_timeout,
check_mode=check_mode
)
)
err_msg = (
'Kinesis Stream {0} is in the process of being created'
.format(stream_name)
)
if not wait_success:
return wait_success, True, wait_msg, results
else:
err_msg = (
'Kinesis Stream {0} created successfully'
.format(stream_name)
)
if tags:
changed, err_msg = (
tags_action(
client, stream_name, tags, action='create',
check_mode=check_mode
)
)
if changed:
success = True
if not success:
return success, changed, err_msg, results
stream_found, stream_msg, current_stream = (
find_stream(client, stream_name, check_mode=check_mode)
)
if retention_period and current_stream.get('StreamStatus') == 'ACTIVE':
changed, err_msg = (
retention_action(
client, stream_name, retention_period, action='increase',
check_mode=check_mode
)
)
if changed:
success = True
if not success:
return success, changed, err_msg, results
else:
err_msg = (
'StreamStatus has to be ACTIVE in order to modify the retention period. Current status is {0}'
.format(current_stream.get('StreamStatus', 'UNKNOWN'))
)
success = create_success
changed = True
if success:
stream_found, stream_msg, results = (
find_stream(client, stream_name, check_mode=check_mode)
)
tag_success, tag_msg, current_tags = (
get_tags(client, stream_name, check_mode=check_mode)
)
if current_tags and not check_mode:
current_tags = make_tags_in_proper_format(current_tags)
results['Tags'] = current_tags
elif check_mode and tags:
results['Tags'] = tags
else:
results['Tags'] = dict()
results = convert_to_lower(results)
return success, changed, err_msg, results
def delete_stream(client, stream_name, wait=False, wait_timeout=300,
check_mode=False):
"""Delete an Amazon Kinesis Stream.
Args:
client (botocore.client.EC2): Boto3 client.
stream_name (str): The name of the kinesis stream.
Kwargs:
wait (bool): Wait until Stream is ACTIVE.
default=False
wait_timeout (int): How long to wait until this operation is considered failed.
default=300
check_mode (bool): This will pass DryRun as one of the parameters to the aws api.
default=False
Basic Usage:
>>> client = boto3.client('kinesis')
>>> stream_name = 'test-stream'
>>> delete_stream(client, stream_name)
Returns:
Tuple (bool, bool, str, dict)
"""
success = False
changed = False
err_msg = ''
results = dict()
stream_found, stream_msg, current_stream = (
find_stream(client, stream_name, check_mode=check_mode)
)
if stream_found:
success, err_msg = (
stream_action(
client, stream_name, action='delete', check_mode=check_mode
)
)
if success:
changed = True
if wait:
success, err_msg, results = (
wait_for_status(
client, stream_name, 'DELETING', wait_timeout,
check_mode=check_mode
)
)
err_msg = 'Stream {0} deleted successfully'.format(stream_name)
if not success:
return success, True, err_msg, results
else:
err_msg = (
'Stream {0} is in the process of being deleted'
.format(stream_name)
)
else:
success = True
changed = False
err_msg = 'Stream {0} does not exist'.format(stream_name)
return success, changed, err_msg, results
def start_stream_encryption(client, stream_name, encryption_type='', key_id='',
wait=False, wait_timeout=300, check_mode=False):
"""Start encryption on an Amazon Kinesis Stream.
Args:
client (botocore.client.EC2): Boto3 client.
stream_name (str): The name of the kinesis stream.
Kwargs:
encryption_type (str): KMS or NONE
key_id (str): KMS key GUID or alias
wait (bool): Wait until Stream is ACTIVE.
default=False
wait_timeout (int): How long to wait until this operation is considered failed.
default=300
check_mode (bool): This will pass DryRun as one of the parameters to the aws api.
default=False
Basic Usage:
>>> client = boto3.client('kinesis')
>>> stream_name = 'test-stream'
>>> key_id = 'alias/aws'
>>> encryption_type = 'KMS'
>>> start_stream_encryption(client, stream_name,encryption_type,key_id)
Returns:
Tuple (bool, bool, str, dict)
"""
success = False
changed = False
err_msg = ''
params = {
'StreamName': stream_name
}
results = dict()
stream_found, stream_msg, current_stream = (
find_stream(client, stream_name, check_mode=check_mode)
)
if stream_found:
success, err_msg = (
stream_encryption_action(
client, stream_name, action='start_encryption', encryption_type=encryption_type, key_id=key_id, check_mode=check_mode
)
)
if success:
changed = True
if wait:
success, err_msg, results = (
wait_for_status(
client, stream_name, 'ACTIVE', wait_timeout,
check_mode=check_mode
)
)
err_msg = 'Kinesis Stream {0} encryption started successfully.'.format(stream_name)
if not success:
return success, True, err_msg, results
else:
err_msg = (
'Kinesis Stream {0} is in the process of starting encryption.'.format(stream_name)
)
else:
success = True
changed = False
err_msg = 'Kinesis Stream {0} does not exist'.format(stream_name)
return success, changed, err_msg, results
def stop_stream_encryption(client, stream_name, encryption_type='', key_id='',
wait=True, wait_timeout=300, check_mode=False):
"""Stop encryption on an Amazon Kinesis Stream.
Args:
client (botocore.client.EC2): Boto3 client.
stream_name (str): The name of the kinesis stream.
Kwargs:
encryption_type (str): KMS or NONE
key_id (str): KMS key GUID or alias
wait (bool): Wait until Stream is ACTIVE.
default=False
wait_timeout (int): How long to wait until this operation is considered failed.
default=300
check_mode (bool): This will pass DryRun as one of the parameters to the aws api.
default=False
Basic Usage:
>>> client = boto3.client('kinesis')
>>> stream_name = 'test-stream'
>>> start_stream_encryption(client, stream_name,encryption_type, key_id)
Returns:
Tuple (bool, bool, str, dict)
"""
success = False
changed = False
err_msg = ''
params = {
'StreamName': stream_name
}
results = dict()
stream_found, stream_msg, current_stream = (
find_stream(client, stream_name, check_mode=check_mode)
)
if stream_found:
if current_stream.get('EncryptionType') == 'KMS':
success, err_msg = (
stream_encryption_action(
client, stream_name, action='stop_encryption', key_id=key_id, encryption_type=encryption_type, check_mode=check_mode
)
)
elif current_stream.get('EncryptionType') == 'NONE':
success = True
if success:
changed = True
if wait:
success, err_msg, results = (
wait_for_status(
client, stream_name, 'ACTIVE', wait_timeout,
check_mode=check_mode
)
)
err_msg = 'Kinesis Stream {0} encryption stopped successfully.'.format(stream_name)
if not success:
return success, True, err_msg, results
else:
err_msg = (
'Stream {0} is in the process of stopping encryption.'.format(stream_name)
)
else:
success = True
changed = False
err_msg = 'Stream {0} does not exist.'.format(stream_name)
return success, changed, err_msg, results
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
name=dict(required=True),
shards=dict(default=None, required=False, type='int'),
retention_period=dict(default=None, required=False, type='int'),
tags=dict(default=None, required=False, type='dict', aliases=['resource_tags']),
wait=dict(default=True, required=False, type='bool'),
wait_timeout=dict(default=300, required=False, type='int'),
state=dict(default='present', choices=['present', 'absent']),
encryption_type=dict(required=False, choices=['NONE', 'KMS']),
key_id=dict(required=False, type='str'),
encryption_state=dict(required=False, choices=['enabled', 'disabled']),
)
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
retention_period = module.params.get('retention_period')
stream_name = module.params.get('name')
shards = module.params.get('shards')
state = module.params.get('state')
tags = module.params.get('tags')
wait = module.params.get('wait')
wait_timeout = module.params.get('wait_timeout')
encryption_type = module.params.get('encryption_type')
key_id = module.params.get('key_id')
encryption_state = module.params.get('encryption_state')
if state == 'present' and not shards:
module.fail_json(msg='Shards is required when state == present.')
if retention_period:
if retention_period < 24:
module.fail_json(msg='Retention period can not be less than 24 hours.')
if not HAS_BOTO3:
module.fail_json(msg='boto3 is required.')
check_mode = module.check_mode
try:
region, ec2_url, aws_connect_kwargs = (
get_aws_connection_info(module, boto3=True)
)
client = (
boto3_conn(
module, conn_type='client', resource='kinesis',
region=region, endpoint=ec2_url, **aws_connect_kwargs
)
)
except botocore.exceptions.ClientError as e:
err_msg = 'Boto3 Client Error - {0}'.format(to_native(e.msg))
module.fail_json(
success=False, changed=False, result={}, msg=err_msg
)
if state == 'present':
success, changed, err_msg, results = (
create_stream(
client, stream_name, shards, retention_period, tags,
wait, wait_timeout, check_mode
)
)
if encryption_state == 'enabled':
success, changed, err_msg, results = (
start_stream_encryption(
client, stream_name, encryption_type, key_id, wait, wait_timeout, check_mode
)
)
elif encryption_state == 'disabled':
success, changed, err_msg, results = (
stop_stream_encryption(
client, stream_name, encryption_type, key_id, wait, wait_timeout, check_mode
)
)
elif state == 'absent':
success, changed, err_msg, results = (
delete_stream(client, stream_name, wait, wait_timeout, check_mode)
)
if success:
module.exit_json(
success=success, changed=changed, msg=err_msg, **results
)
else:
module.fail_json(
success=success, changed=changed, msg=err_msg, result=results
)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,043 |
ipv4 address of an arbitrary interface access method is incorrect in faq docs
|
##### SUMMARY
I tried accessing the ipv4 address of the interface 'tap2f186e31-b8' in my host.
It looks like Ansible substitutes '-' with '_' in the fact key, so we need to replace '-' with '_' in the interface name before querying.
Current method:
`{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}`
Correct method:
`{{ hostvars[inventory_hostname]['ansible_' + which_interface | replace('-', '_') ]['ipv4']['address'] }}`
Source to where the interface is substituted.
https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/facts/network/linux.py#L303
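A quick illustration of the mismatch (hypothetical values; the `replace` mirrors what the linked fact-gathering code does to the key):

```python
# Interface name from the issue; dashes are legal in Linux interface names
# but not in Ansible fact variable names.
which_interface = "tap2f186e31-b8"

# Key as the current FAQ example builds it -- this key does not exist:
missing_key = "ansible_" + which_interface                    # 'ansible_tap2f186e31-b8'

# Key as Ansible actually stores the fact, with '-' replaced by '_':
actual_key = "ansible_" + which_interface.replace("-", "_")   # 'ansible_tap2f186e31_b8'
```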
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
https://github.com/ansible/ansible/blob/devel/docs/docsite/rst/reference_appendices/faq.rst
##### ANSIBLE VERSION
```
Affects 2.10
```
|
https://github.com/ansible/ansible/issues/64043
|
https://github.com/ansible/ansible/pull/64041
|
a6f45713fc8aaf31373d79a1f7f07221f2f9cd6d
|
161e0be89b6365d9a629cfdda012f88b6dd4384e
| 2019-10-28T21:09:44Z |
python
| 2019-11-14T21:38:40Z |
docs/docsite/rst/reference_appendices/faq.rst
|
.. _ansible_faq:
Frequently Asked Questions
==========================
Here are some commonly asked questions and their answers.
.. _set_environment:
How can I set the PATH or any other environment variable for a task or entire playbook?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Setting environment variables can be done with the `environment` keyword. It can be used at the task or other levels in the play::
environment:
PATH: "{{ ansible_env.PATH }}:/thingy/bin"
SOME: value
.. note:: Starting in 2.0.1, the setup task from gather_facts also inherits the environment directive from the play; you might need to use the `|default` filter to avoid errors if setting this at play level.
.. _faq_setting_users_and_ports:
How do I handle different machines needing different user accounts or ports to log in with?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Setting inventory variables in the inventory file is the easiest way.
For instance, suppose these hosts have different usernames and ports:
.. code-block:: ini
[webservers]
asdf.example.com ansible_port=5000 ansible_user=alice
jkl.example.com ansible_port=5001 ansible_user=bob
You can also dictate the connection type to be used, if you want:
.. code-block:: ini
[testcluster]
localhost ansible_connection=local
/path/to/chroot1 ansible_connection=chroot
foo.example.com ansible_connection=paramiko
You may also wish to keep these in group variables instead, or file them in a group_vars/<groupname> file.
See the rest of the documentation for more information about how to organize variables.
.. _use_ssh:
How do I get ansible to reuse connections, enable Kerberized SSH, or have Ansible pay attention to my local SSH config file?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Switch your default connection type in the configuration file to 'ssh', or use '-c ssh' to use
native OpenSSH for connections instead of the Python paramiko library. In Ansible 1.2.1 and later, 'ssh' will be used
by default if OpenSSH is new enough to support ControlPersist as an option.
Paramiko is great for starting out, but the OpenSSH type offers many advanced options. You will want to run Ansible
from a machine new enough to support ControlPersist, if you are using this connection type. You can still manage
older clients. If you are using RHEL 6, CentOS 6, SLES 10 or SLES 11 the version of OpenSSH is still a bit old, so
consider managing from a Fedora or openSUSE client even though you are managing older nodes, or just use paramiko.
We keep paramiko as the default because, if you are first installing Ansible on an EL box, it offers a better experience
for new users.
.. _use_ssh_jump_hosts:
How do I configure a jump host to access servers that I have no direct access to?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can set a `ProxyCommand` in the
`ansible_ssh_common_args` inventory variable. Any arguments specified in
this variable are added to the sftp/scp/ssh command line when connecting
to the relevant host(s). Consider the following inventory group:
.. code-block:: ini
[gatewayed]
foo ansible_host=192.0.2.1
bar ansible_host=192.0.2.2
You can create `group_vars/gatewayed.yml` with the following contents::
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q [email protected]"'
Ansible will append these arguments to the command line when trying to
connect to any hosts in the group `gatewayed`. (These arguments are used
in addition to any `ssh_args` from `ansible.cfg`, so you do not need to
repeat global `ControlPersist` settings in `ansible_ssh_common_args`.)
Note that `ssh -W` is available only with OpenSSH 5.4 or later. With
older versions, it's necessary to execute `nc %h:%p` or some equivalent
command on the bastion host.
With earlier versions of Ansible, it was necessary to configure a
suitable `ProxyCommand` for one or more hosts in `~/.ssh/config`,
or globally by setting `ssh_args` in `ansible.cfg`.
.. _ssh_serveraliveinterval:
How do I get Ansible to notice a dead target in a timely manner?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can add ``-o ServerAliveInterval=NumberOfSeconds`` in ``ssh_args`` from ``ansible.cfg``. Without this option, SSH and therefore Ansible will wait until the TCP connection times out. Another solution is to add ``ServerAliveInterval`` into your global SSH configuration. A good value for ``ServerAliveInterval`` is up to you to decide; keep in mind that ``ServerAliveCountMax=3`` is the SSH default so any value you set will be tripled before terminating the SSH session.
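A quick worked example of that tripling, assuming an illustrative interval of 30 seconds:

```python
# OpenSSH declares the connection dead only after ServerAliveCountMax
# unanswered keepalive probes, so the effective detection time is:
server_alive_interval = 30   # seconds, example value passed via ssh_args
server_alive_count_max = 3   # OpenSSH default mentioned above

print(server_alive_interval * server_alive_count_max)  # 90 seconds before the session terminates
```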
.. _ec2_cloud_performance:
How do I speed up management inside EC2?
++++++++++++++++++++++++++++++++++++++++
Don't try to manage a fleet of EC2 machines from your laptop. Connect to a management node inside EC2 first
and run Ansible from there.
.. _python_interpreters:
How do I handle not having a Python interpreter at /usr/bin/python on a remote machine?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
While you can write Ansible modules in any language, most Ansible modules are written in Python,
including the ones central to letting Ansible work.
By default, Ansible assumes it can find a :command:`/usr/bin/python` on your remote system that is
either Python2, version 2.6 or higher or Python3, 3.5 or higher.
Setting the inventory variable ``ansible_python_interpreter`` on any host will tell Ansible to
auto-replace the Python interpreter with that value instead. Thus, you can point to any Python you
want on the system if :command:`/usr/bin/python` on your system does not point to a compatible
Python interpreter.
Some platforms may only have Python 3 installed by default. If it is not installed as
:command:`/usr/bin/python`, you will need to configure the path to the interpreter via
``ansible_python_interpreter``. Although most core modules will work with Python 3, there may be some
special purpose ones which do not or you may encounter a bug in an edge case. As a temporary
workaround you can install Python 2 on the managed host and configure Ansible to use that Python via
``ansible_python_interpreter``. If there's no mention in the module's documentation that the module
requires Python 2, you can also report a bug on our `bug tracker
<https://github.com/ansible/ansible/issues>`_ so that the incompatibility can be fixed in a future release.
Do not replace the shebang lines of your Python modules. Ansible will do this for you automatically at deploy time.
Also, this works for ANY interpreter, for example Ruby: `ansible_ruby_interpreter`, Perl: `ansible_perl_interpreter`, and so on,
so you can use this for custom modules written in any scripting language and control the interpreter location.
Keep in mind that if you put `env` in your module shebang line (`#!/usr/bin/env <other>`),
this facility will be ignored and you will be at the mercy of the remote `$PATH`.
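As an illustrative inventory sketch for a non-Python interpreter (the host name and Ruby path are assumptions, not defaults):
.. code-block:: ini
rubyhost ansible_ruby_interpreter=/usr/local/bin/ruby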
.. _installation_faqs:
How do I handle the package dependencies required by Ansible's package dependencies during installation?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
While installing Ansible, sometimes you may encounter errors such as `No package 'libffi' found` or `fatal error: Python.h: No such file or directory`.
These errors are generally caused by missing packages that are dependencies of the packages required by Ansible.
For example, the `libffi` package is a dependency of `pynacl` and `paramiko` (Ansible -> paramiko -> pynacl -> libffi).
To resolve these kinds of dependency issues, you may need to install the required packages using the OS-native package managers, such as `yum`, `dnf`, or `apt`, as described in the package installation guide.
Refer to the documentation of the respective package for such dependencies and their installation methods.
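For example, on a Debian or Ubuntu control node you might install the development headers like this (treat this as a sketch; package names vary by distribution):
.. code-block:: shell
$ sudo apt install libffi-dev libssl-dev python3-dev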
Common Platform Issues
++++++++++++++++++++++
What customer platforms does Red Hat support?
---------------------------------------------
A number of them! For a definitive list please see this `Knowledge Base article <https://access.redhat.com/articles/3168091>`_.
Running in a virtualenv
-----------------------
You can install Ansible into a virtualenv on the controller quite simply:
.. code-block:: shell
$ virtualenv ansible
$ source ./ansible/bin/activate
$ pip install ansible
If you want to run under Python 3 instead of Python 2 you may want to change that slightly:
.. code-block:: shell
$ virtualenv -p python3 ansible
$ source ./ansible/bin/activate
$ pip install ansible
If you need to use any libraries which are not available via pip (for instance, SELinux Python
bindings on systems such as Red Hat Enterprise Linux or Fedora that have SELinux enabled), then you
need to install them into the virtualenv. There are two methods:
* When you create the virtualenv, specify ``--system-site-packages`` to make use of any libraries
installed in the system's Python:
.. code-block:: shell
$ virtualenv ansible --system-site-packages
* Copy those files in manually from the system. For instance, for SELinux bindings you might do:
.. code-block:: shell
$ virtualenv ansible
$ cp -r -v /usr/lib64/python3.*/site-packages/selinux/ ./ansible/lib64/python3.*/site-packages/
$ cp -v /usr/lib64/python3.*/site-packages/*selinux*.so ./ansible/lib64/python3.*/site-packages/
Running on BSD
--------------
.. seealso:: :ref:`working_with_bsd`
Running on Solaris
------------------
By default, Solaris 10 and earlier run a non-POSIX shell which does not correctly expand the default
tmp directory Ansible uses (:file:`~/.ansible/tmp`). If you see module failures on Solaris machines, this
is likely the problem. There are several workarounds:
* You can set ``remote_tmp`` to a path that will expand correctly with the shell you are using (see the plugin documentation for :ref:`C shell<csh_shell>`, :ref:`fish shell<fish_shell>`, and :ref:`Powershell<powershell_shell>`). For
example, in the ansible config file you can set::
remote_tmp=$HOME/.ansible/tmp
In Ansible 2.5 and later, you can also set it per-host in inventory like this::
solaris1 ansible_remote_tmp=$HOME/.ansible/tmp
* You can set :ref:`ansible_shell_executable<ansible_shell_executable>` to the path to a POSIX compatible shell. For
instance, many Solaris hosts have a POSIX shell located at :file:`/usr/xpg4/bin/sh` so you can set
this in inventory like so::
solaris1 ansible_shell_executable=/usr/xpg4/bin/sh
(bash, ksh, and zsh should also be POSIX compatible if you have any of those installed).
Running on z/OS
---------------
There are a few common errors that one might run into when trying to execute Ansible on z/OS as a target.
* Version 2.7.6 of python for z/OS will not work with Ansible because it represents strings internally as EBCDIC.
To get around this limitation, download and install a later version of `python for z/OS <https://www.rocketsoftware.com/zos-open-source>`_ (2.7.13 or 3.6.1) that represents strings internally as ASCII. Version 2.7.13 is verified to work.
* When ``pipelining = False`` in `/etc/ansible/ansible.cfg`, Ansible modules are transferred in binary mode via sftp; however, execution of python fails with
.. error::
SyntaxError: Non-UTF-8 code starting with \'\\x83\' in file /a/user1/.ansible/tmp/ansible-tmp-1548232945.35-274513842609025/AnsiballZ_stat.py on line 1, but no encoding declared; see https://python.org/dev/peps/pep-0263/ for details
To fix it set ``pipelining = True`` in `/etc/ansible/ansible.cfg`.
* The Python interpreter cannot be found in the default location ``/usr/bin/python`` on the target host.
.. error::
/usr/bin/python: EDC5129I No such file or directory
To fix this, set the path to the python installation in your inventory like so::
zos1 ansible_python_interpreter=/usr/lpp/python/python-2017-04-12-py27/python27/bin/python
* Start of python fails with ``The module libpython2.7.so was not found.``
.. error::
EE3501S The module libpython2.7.so was not found.
On z/OS, you must execute python from gnu bash. If gnu bash is installed at ``/usr/lpp/bash``, you can fix this in your inventory by specifying an ``ansible_shell_executable``::
zos1 ansible_shell_executable=/usr/lpp/bash/bin/bash
.. _use_roles:
What is the best way to make content reusable/redistributable?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
If you have not done so already, read all about "Roles" in the playbooks documentation. This helps you make playbook content
self-contained, and works well with things like git submodules for sharing content with others.
If some of these plugin types look strange to you, see the API documentation for more details about ways Ansible can be extended.
.. _configuration_file:
Where does the configuration file live and what can I configure in it?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
See :ref:`intro_configuration`.
.. _who_would_ever_want_to_disable_cowsay_but_ok_here_is_how:
How do I disable cowsay?
++++++++++++++++++++++++
If cowsay is installed, Ansible takes it upon itself to make your day happier when running playbooks. If you decide
that you would like to work in a professional cow-free environment, you can either uninstall cowsay, set ``nocows=1`` in ansible.cfg, or set the :envvar:`ANSIBLE_NOCOWS` environment variable:
.. code-block:: shell-session
export ANSIBLE_NOCOWS=1
.. _browse_facts:
How do I see a list of all of the ansible\_ variables?
++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ansible by default gathers "facts" about the machines under management, and these facts can be accessed in Playbooks and in templates. To see a list of all of the facts that are available about a machine, you can run the "setup" module as an ad-hoc action:
.. code-block:: shell-session
ansible -m setup hostname
This will print out a dictionary of all of the facts that are available for that particular host. You might want to pipe the output to a pager. This does NOT include inventory variables or internal 'magic' variables. See the next question if you need more than just 'facts'.
.. _browse_inventory_vars:
How do I see all the inventory variables defined for my host?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
By running the following command, you can see inventory variables for a host:
.. code-block:: shell-session
ansible-inventory --list --yaml
.. _browse_host_vars:
How do I see all the variables specific to my host?
+++++++++++++++++++++++++++++++++++++++++++++++++++
To see all host specific variables, which might include facts and other sources:
.. code-block:: shell-session
ansible -m debug -a "var=hostvars['hostname']" localhost
Unless you are using a fact cache, you normally need to run a play that gathers facts first so that the facts referenced in the task above are populated.
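A minimal sketch of such a play (the host name is illustrative):
.. code-block:: yaml
- hosts: hostname
  gather_facts: yes
  tasks:
    - debug:
        var: hostvars[inventory_hostname]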
.. _host_loops:
How do I loop over a list of hosts in a group, inside of a template?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A pretty common pattern is to iterate over a list of hosts inside of a host group, perhaps to populate a template configuration
file with a list of servers. To do this, you can just access the "$groups" dictionary in your template, like this:
.. code-block:: jinja
{% for host in groups['db_servers'] %}
{{ host }}
{% endfor %}
If you need to access facts about these hosts, for instance, the IP address of each hostname, you need to make sure that the facts have been populated. For example, make sure you have a play that talks to db_servers::
- hosts: db_servers
tasks:
- debug: msg="doesn't matter what you do, just that they were talked to previously."
Then you can use the facts inside your template, like this:
.. code-block:: jinja
{% for host in groups['db_servers'] %}
{{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}
.. _programatic_access_to_a_variable:
How do I access a variable name programmatically?
+++++++++++++++++++++++++++++++++++++++++++++++++
An example may come up where we need to get the ipv4 address of an arbitrary interface, where the interface to be used may be supplied
via a role parameter or other input. Variable names can be built by adding strings together, like so:
.. code-block:: jinja
{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}
The trick about going through hostvars is necessary because it's a dictionary of the entire namespace of variables. 'inventory_hostname'
is a magic variable that indicates the current host you are looping over in the host loop.
Also see dynamic_variables_.
.. _access_group_variable:
How do I access a group variable?
+++++++++++++++++++++++++++++++++
Technically, you don't; Ansible does not really use groups directly. Groups are labels for host selection and a way to bulk-assign variables; they are not a first-class entity, and Ansible only cares about Hosts and Tasks.
That said, you could just access the variable by selecting a host that is part of that group; see first_host_in_a_group_ below for an example.
.. _first_host_in_a_group:
How do I access a variable of the first host in a group?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
What happens if we want the ip address of the first webserver in the webservers group? Well, we can do that too. Note that if we
are using dynamic inventory, which host is the 'first' may not be consistent, so you wouldn't want to do this unless your inventory
is static and predictable. (If you are using :ref:`ansible_tower`, it will use database order, so this isn't a problem even if you are using cloud-based
inventory scripts).
Anyway, here's the trick:
.. code-block:: jinja
{{ hostvars[groups['webservers'][0]]['ansible_eth0']['ipv4']['address'] }}
Notice how we're pulling out the hostname of the first machine of the webservers group. If you are doing this in a template, you
could use the Jinja2 ``set`` directive to simplify this, or in a playbook, you could also use set_fact::
- set_fact: headnode={{ groups['webservers'][0] }}
- debug: msg={{ hostvars[headnode].ansible_eth0.ipv4.address }}
Notice how we interchanged the bracket syntax for dots -- that can be done anywhere.
.. _file_recursion:
How do I copy files recursively onto a target host?
+++++++++++++++++++++++++++++++++++++++++++++++++++
The "copy" module has a recursive parameter. However, take a look at the "synchronize" module if you want to do something more efficient for a large number of files. The "synchronize" module wraps rsync. See the module index for info on both of these modules.
.. _shell_env:
How do I access shell environment variables?
++++++++++++++++++++++++++++++++++++++++++++
If you just need to access existing variables ON THE CONTROLLER, use the 'env' lookup plugin.
For example, to access the value of the HOME environment variable on the management machine::
---
# ...
vars:
local_home: "{{ lookup('env','HOME') }}"
For environment variables on the TARGET machines, they are available via facts in the 'ansible_env' variable:
.. code-block:: jinja
{{ ansible_env.SOME_VARIABLE }}
If you need to set environment variables for TASK execution, see :ref:`playbooks_environment` in the :ref:`Advanced Playbooks <playbooks_special_topics>` section.
There are several ways to set environment variables on your target machines. You can use the :ref:`template <template_module>`, :ref:`replace <replace_module>`, or :ref:`lineinfile <lineinfile_module>` modules to introduce environment variables into files.
The exact files to edit vary depending on your OS and distribution and local configuration.
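For instance, a hedged sketch using ``lineinfile`` (the file path and variable name are assumptions; the right file depends on your OS and shell):
.. code-block:: yaml
- name: Export an environment variable for login shells
  lineinfile:
    path: /etc/profile.d/myapp.sh   # illustrative path
    line: 'export MYAPP_HOME=/opt/myapp'
    create: yes
    mode: '0644'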
.. _user_passwords:
How do I generate encrypted passwords for the user module?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
An ad-hoc Ansible command is the easiest option:
.. code-block:: shell-session
ansible all -i localhost, -m debug -a "msg={{ 'mypassword' | password_hash('sha512', 'mysecretsalt') }}"
The mkpasswd utility that is available on most Linux systems is also a great option:
.. code-block:: shell-session
mkpasswd --method=sha-512
If this utility is not installed on your system (e.g. you are using macOS) then you can still easily
generate these passwords using Python. First, ensure that the `Passlib <https://bitbucket.org/ecollins/passlib/wiki/Home>`_
password hashing library is installed:
.. code-block:: shell-session
pip install passlib
Once the library is ready, SHA512 password values can then be generated as follows:
.. code-block:: shell-session
python -c "from passlib.hash import sha512_crypt; import getpass; print(sha512_crypt.using(rounds=5000).hash(getpass.getpass()))"
Use the integrated :ref:`hash_filters` to generate a hashed version of a password.
You shouldn't put plaintext passwords in your playbook or host_vars; instead, use :ref:`playbooks_vault` to encrypt sensitive data.
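For example, the hashed value can be fed straight to the user module (the user name, password, and salt are placeholders; in real playbooks the secrets should come from vault rather than being written inline):
.. code-block:: yaml
- name: Create a user with a hashed password
  user:
    name: testuser
    password: "{{ 'mypassword' | password_hash('sha512', 'mysecretsalt') }}"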
In OpenBSD, a similar option is available in the base system called encrypt(1):
.. code-block:: shell-session
encrypt
.. _dot_or_array_notation:
Ansible allows dot notation and array notation for variables. Which notation should I use?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The dot notation comes from Jinja and works fine for variables without special
characters. If your variable contains dots (.), colons (:), or dashes (-), if
a key begins and ends with two underscores, or if a key uses any of the known
public attributes, it is safer to use the array notation. See :ref:`playbooks_variables`
for a list of the known public attributes.
.. code-block:: jinja
item[0]['checksum:md5']
item['section']['2.1']
item['region']['Mid-Atlantic']
It is {{ temperature['Celsius']['-3'] }} outside.
Also array notation allows for dynamic variable composition, see dynamic_variables_.
Another problem with 'dot notation' is that some keys can cause problems because they collide with attributes and methods of Python dictionaries.
.. code-block:: jinja
item.update # this breaks if item is a dictionary, as 'update()' is a python method for dictionaries
item['update'] # this works
.. _argsplat_unsafe:
When is it unsafe to bulk-set task arguments from a variable?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can set all of a task's arguments from a dictionary-typed variable. This
technique can be useful in some dynamic execution scenarios. However, it
introduces a security risk. We do not recommend it, so Ansible issues a
warning when you do something like this::
#...
vars:
usermod_args:
name: testuser
state: present
update_password: always
tasks:
- user: '{{ usermod_args }}'
This particular example is safe. However, constructing tasks like this is
risky because the parameters and values passed to ``usermod_args`` could
be overwritten by malicious values in the ``host facts`` on a compromised
target machine. To mitigate this risk:
* set bulk variables at a level of precedence greater than ``host facts`` in the order of precedence found in :ref:`ansible_variable_precedence` (the example above is safe because play vars take precedence over facts)
* disable the :ref:`inject_facts_as_vars` configuration setting to prevent fact values from colliding with variables (this will also disable the original warning); see the sketch below
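The second mitigation is a one-line configuration change; a minimal ``ansible.cfg`` sketch:
.. code-block:: ini
[defaults]
# stop injecting fact values as top-level variables; facts remain
# available under the ansible_facts namespace
inject_facts_as_vars = False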
.. _commercial_support:
Can I get training on Ansible?
++++++++++++++++++++++++++++++
Yes! See our `services page <https://www.ansible.com/products/consulting>`_ for information on our services and training offerings. Email `[email protected] <mailto:[email protected]>`_ for further details.
We also offer free web-based training classes on a regular basis. See our `webinar page <https://www.ansible.com/resources/webinars-training>`_ for more info on upcoming webinars.
.. _web_interface:
Is there a web interface / REST API / etc?
++++++++++++++++++++++++++++++++++++++++++
Yes! Ansible, Inc makes a great product that makes Ansible even more powerful and easy to use. See :ref:`ansible_tower`.
.. _docs_contributions:
How do I submit a change to the documentation?
++++++++++++++++++++++++++++++++++++++++++++++
Great question! Documentation for Ansible is kept in the main project git repository, and complete instructions for contributing can be found in the docs README `viewable on GitHub <https://github.com/ansible/ansible/blob/devel/docs/docsite/README.md>`_. Thanks!
.. _keep_secret_data:
How do I keep secret data in my playbook?
+++++++++++++++++++++++++++++++++++++++++
If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control, see :ref:`playbooks_vault`.
If you have a task that you don't want to show the results or command given to it when using -v (verbose) mode, the following task or playbook attribute can be useful::
- name: secret task
shell: /usr/bin/do_something --value={{ secret_value }}
no_log: True
This can be used to keep verbose output but hide sensitive information from others who would otherwise like to be able to see the output.
The no_log attribute can also apply to an entire play::
- hosts: all
no_log: True
Though this will make the play somewhat difficult to debug. It's recommended that this
be applied to single tasks only, once a playbook is completed. Note that the use of the
no_log attribute does not prevent data from being shown when debugging Ansible itself via
the :envvar:`ANSIBLE_DEBUG` environment variable.
.. _when_to_use_brackets:
.. _dynamic_variables:
.. _interpolate_variables:
When should I use {{ }}? Also, how to interpolate variables or dynamic variable names
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A steadfast rule is 'always use ``{{ }}`` except when ``when:``'.
Conditionals are always run through Jinja2 so as to resolve the expression,
so ``when:``, ``failed_when:`` and ``changed_when:`` are always templated and you should avoid adding ``{{ }}``.
In most other cases you should always use the brackets, even where you could previously use variables without them (such as in ``loop`` or ``with_`` clauses), as this made it hard to distinguish between an undefined variable and a string.
Another rule is 'moustaches don't stack'. We often see this:
.. code-block:: jinja
{{ somevar_{{other_var}} }}
The above DOES NOT WORK as you expect. If you need to use a dynamic variable, use the following as appropriate:
.. code-block:: jinja
{{ hostvars[inventory_hostname]['somevar_' + other_var] }}
For 'non host vars' you can use the :ref:`vars lookup<vars_lookup>` plugin:
.. code-block:: jinja
{{ lookup('vars', 'somevar_' + other_var) }}
.. _why_no_wheel:
Why don't you ship in X format?
+++++++++++++++++++++++++++++++
In most cases it has to do with maintainability. There are many ways to ship software and we do not have the resources to release Ansible on every platform.
In some cases there are technical issues. For example, some of our dependencies are not available as Python wheels.
.. _ansible_host_delegated:
How do I get the original ansible_host when I delegate a task?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
As the documentation states, connection variables are taken from the ``delegate_to`` host so ``ansible_host`` is overwritten,
but you can still access the original via ``hostvars``::
original_host: "{{ hostvars[inventory_hostname]['ansible_host'] }}"
This works for all overridden connection variables, like ``ansible_user``, ``ansible_port``, etc.
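A short sketch (the proxy host and command are made up):
.. code-block:: yaml
- name: Run on a proxy host but refer to the original target
  command: /usr/bin/check_host {{ hostvars[inventory_hostname]['ansible_host'] }}
  delegate_to: proxy.example.com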
.. _scp_protocol_error_filename:
How do I fix 'protocol error: filename does not match request' when fetching a file?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Newer releases of OpenSSH have a `bug <https://bugzilla.mindrot.org/show_bug.cgi?id=2966>`_ in the SCP client that can trigger this error on the Ansible controller when using SCP as the file transfer mechanism::
failed to transfer file to /tmp/ansible/file.txt\r\nprotocol error: filename does not match request
In these releases, SCP tries to validate that the path of the file to fetch matches the requested path.
The validation fails if the remote filename requires quotes to escape spaces or non-ASCII characters in its path. To avoid this error:
* Use SFTP instead of SCP by setting ``scp_if_ssh`` to ``smart`` (which tries SFTP first) or to ``False``. You can do this in one of four ways:
* Rely on the default setting, which is ``smart`` - this works if ``scp_if_ssh`` is not explicitly set anywhere
* Set a :ref:`host variable <host_variables>` or :ref:`group variable <group_variables>` in inventory: ``ansible_scp_if_ssh: False``
* Set an environment variable on your control node: ``export ANSIBLE_SCP_IF_SSH=False``
* Pass an environment variable when you run Ansible: ``ANSIBLE_SCP_IF_SSH=smart ansible-playbook``
* Modify your ``ansible.cfg`` file: add ``scp_if_ssh=False`` to the ``[ssh_connection]`` section
* If you must use SCP, set the ``-T`` arg to tell the SCP client to ignore path validation. You can do this in one of three ways:
* Set a :ref:`host variable <host_variables>` or :ref:`group variable <group_variables>`: ``ansible_scp_extra_args=-T``,
* Export or pass an environment variable: ``ANSIBLE_SCP_EXTRA_ARGS=-T``
* Modify your ``ansible.cfg`` file: add ``scp_extra_args=-T`` to the ``[ssh_connection]`` section (see the combined sketch below)
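Putting the ``ansible.cfg`` options together, a sketch of the relevant section (pick one approach, not both):
.. code-block:: ini
[ssh_connection]
# prefer SFTP over SCP for file transfers...
scp_if_ssh = False
# ...or, if you must use SCP, disable its path validation instead:
#scp_extra_args = -T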
.. note:: If you see an ``invalid argument`` error when using ``-T``, then your SCP client is not performing filename validation and will not trigger this error.
.. _i_dont_see_my_question:
I don't see my question here
++++++++++++++++++++++++++++
Please see the section below for links to IRC and the Google Group, where you can ask your question.
.. seealso::
:ref:`working_with_playbooks`
An introduction to playbooks
:ref:`playbooks_best_practices`
Best practices advice
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,905 |
VMware: guest autostartup parameters on VMware
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
guest autostartup parameters on VMware
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_host
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Hello,
I am trying to find a way to change the startup parameters for VMs.
Does such a module exist? I'm speaking about this.

<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/63905
|
https://github.com/ansible/ansible/pull/64605
|
9a8d0cf0063d1fbd2ee93342675dfb39e882d16a
|
8301ad47c3fdb13f10ef6a184f3a80f0f985a46d
| 2019-10-24T13:58:15Z |
python
| 2019-11-16T03:16:38Z |
lib/ansible/modules/cloud/vmware/vmware_host_auto_start.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,905 |
VMware: guest autostartup parameters on VMware
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
guest autostartup parameters on VMware
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_host
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Hello,
I am trying to find a way to change the startup parameters for VMs.
Does such a module exist? I'm speaking about this.

<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/63905
|
https://github.com/ansible/ansible/pull/64605
|
9a8d0cf0063d1fbd2ee93342675dfb39e882d16a
|
8301ad47c3fdb13f10ef6a184f3a80f0f985a46d
| 2019-10-24T13:58:15Z |
python
| 2019-11-16T03:16:38Z |
test/integration/targets/vmware_host_auto_start/aliases
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,905 |
VMware: guest autostartup parameters on VMware
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
guest autostartup parameters on VMware
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_host
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Hello,
I am trying to find a way to change the startup parameters for VMs.
Does such a module exist? I'm speaking about this.

<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/63905
|
https://github.com/ansible/ansible/pull/64605
|
9a8d0cf0063d1fbd2ee93342675dfb39e882d16a
|
8301ad47c3fdb13f10ef6a184f3a80f0f985a46d
| 2019-10-24T13:58:15Z |
python
| 2019-11-16T03:16:38Z |
test/integration/targets/vmware_host_auto_start/tasks/esxi_auto_start_ops.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,905 |
VMware: guest autostartup parameters on VMware
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
guest autostartup parameters on VMware
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_host
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Hello,
I am trying to find a way to change the startup parameters for VMs.
Does such a module exist? I'm speaking about this.

<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/63905
|
https://github.com/ansible/ansible/pull/64605
|
9a8d0cf0063d1fbd2ee93342675dfb39e882d16a
|
8301ad47c3fdb13f10ef6a184f3a80f0f985a46d
| 2019-10-24T13:58:15Z |
python
| 2019-11-16T03:16:38Z |
test/integration/targets/vmware_host_auto_start/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,905 |
VMware: guest autostartup parameters on VMware
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
guest autostartup parameters on VMware
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_host
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Hello,
I am trying to find a way to change the startup parameters for VMs.
Does such a module exist? I'm speaking about this.

<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/63905
|
https://github.com/ansible/ansible/pull/64605
|
9a8d0cf0063d1fbd2ee93342675dfb39e882d16a
|
8301ad47c3fdb13f10ef6a184f3a80f0f985a46d
| 2019-10-24T13:58:15Z |
python
| 2019-11-16T03:16:38Z |
test/integration/targets/vmware_host_auto_start/tasks/reset_auto_start_config.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,905 |
VMware: guest autostartup parameters on VMware
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
guest autostartup parameters on VMware
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_host
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Hello,
I am trying to find a way to change the startup parameters for VMs.
Does such a module exist? I'm speaking about this.

<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/63905
|
https://github.com/ansible/ansible/pull/64605
|
9a8d0cf0063d1fbd2ee93342675dfb39e882d16a
|
8301ad47c3fdb13f10ef6a184f3a80f0f985a46d
| 2019-10-24T13:58:15Z |
python
| 2019-11-16T03:16:38Z |
test/integration/targets/vmware_host_auto_start/tasks/vcenter_auto_start_ops.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,513 |
pulp_repo - using repo with tls client auth fails
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When you want to create / update a repo which needs TLS client auth for sync (like Red Hat repos),
this breaks because this module uses the following vars:
```python
ca_cert
client_cert
client_key
```
which overlap with the fetch_url vars of the same names.
As a result, the API calls to your local pulp server are also made with TLS client auth,
not only the sync of the remote repo.
This only affects users who want to sync Red Hat repos, which need TLS client auth.
The bug happens only in the call to **server.set_repo_list()** in the module:
```python
server = pulp_server(module, pulp_host, repo_type, wait_for_completion=wait_for_completion)
server.set_repo_list()
repo_exists = server.check_repo_exists(repo)
```
The call to **fetch_url** picks up the overlapping vars the module defines:
```python
def set_repo_list(self):
url = "%s/pulp/api/v2/repositories/?details=true" % self.host
response, info = fetch_url(self.module, url, method='GET')
if info['status'] != 200:
print(response)
print(info)
self.module.fail_json(
msg="Request failed",
status_code=info['status'],
response=info['msg'],
url=url)
self.repo_list = json.load(response)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
pulp_repo
##### ANSIBLE VERSION
```
ansible 2.8.2
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
pulp_repo:
# connect / auth options
pulp_host: REDACTED
url_username: admin
url_password: "{{ pulp_admin_password }}"
force_basic_auth: true
# actual repo options
name: "SOME REPO NAME"
relative_url: "/redhat/7/example"
ca_cert: "/etc/rhsm/ca/redhat-uep.pem"
client_cert: "/etc/pki/entitlement/999999999999999999.pem "
client_key: "/etc/pki/entitlement/9999999999999999999-key.pem"
feed: "https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/os/"
wait_for_completion: yes
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
create / modify of the pulp repo on the server
##### ACTUAL RESULTS
Connection failure: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake failure (_ssl.c:1822)
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/59513
|
https://github.com/ansible/ansible/pull/59522
|
1d0a83269222e508ec0757759382b7f4156d2f69
|
1e59017d272eda0125ae200c29bd3c0b3197c9e5
| 2019-07-24T11:44:54Z |
python
| 2019-11-18T19:41:40Z |
changelogs/fragments/59522-renamed-module-tls-client-auth-params-to-avoid-overlaping-with-fetch_url.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,513 |
pulp_repo - using repo with tls client auth fails
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When you want to create / update a repo which needs TLS client auth for sync (like Red Hat repos),
this breaks because this module uses the following vars:
```python
ca_cert
client_cert
client_key
```
which overlap with the fetch_url vars of the same names.
As a result, the API calls to your local pulp server are also made with TLS client auth,
not only the sync of the remote repo.
This only affects users who want to sync Red Hat repos, which need TLS client auth.
The bug happens only in the call to **server.set_repo_list()** in the module:
```python
server = pulp_server(module, pulp_host, repo_type, wait_for_completion=wait_for_completion)
server.set_repo_list()
repo_exists = server.check_repo_exists(repo)
```
The call to **fetch_url** picks up the overlapping vars the module defines:
```python
def set_repo_list(self):
url = "%s/pulp/api/v2/repositories/?details=true" % self.host
response, info = fetch_url(self.module, url, method='GET')
if info['status'] != 200:
print(response)
print(info)
self.module.fail_json(
msg="Request failed",
status_code=info['status'],
response=info['msg'],
url=url)
self.repo_list = json.load(response)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
pulp_repo
##### ANSIBLE VERSION
```
ansible 2.8.2
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
pulp_repo:
# connect / auth options
pulp_host: REDACTED
url_username: admin
url_password: "{{ pulp_admin_password }}"
force_basic_auth: true
# actual repo options
name: "SOME REPO NAME"
relative_url: "/redhat/7/example"
ca_cert: "/etc/rhsm/ca/redhat-uep.pem"
client_cert: "/etc/pki/entitlement/999999999999999999.pem "
client_key: "/etc/pki/entitlement/9999999999999999999-key.pem"
feed: "https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/os/"
wait_for_completion: yes
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
create / modify of the pulp repo on the server
##### ACTUAL RESULTS
Connection failure: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake failure (_ssl.c:1822)
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/59513
|
https://github.com/ansible/ansible/pull/59522
|
1d0a83269222e508ec0757759382b7f4156d2f69
|
1e59017d272eda0125ae200c29bd3c0b3197c9e5
| 2019-07-24T11:44:54Z |
python
| 2019-11-18T19:41:40Z |
lib/ansible/modules/packaging/os/pulp_repo.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2016, Joe Adams <@sysadmind>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: pulp_repo
author: "Joe Adams (@sysadmind)"
short_description: Add or remove Pulp repos from a remote host.
description:
- Add or remove Pulp repos from a remote host.
version_added: "2.3"
options:
add_export_distributor:
description:
- Whether or not to add the export distributor to new C(rpm) repositories.
type: bool
default: 'no'
feed:
description:
- Upstream feed URL to receive updates from.
force_basic_auth:
description:
- httplib2, the library used by the M(uri) module only sends
authentication information when a webservice responds to an initial
request with a 401 status. Since some basic auth services do not
properly send a 401, logins will fail. This option forces the sending of
the Basic authentication header upon initial request.
type: bool
default: 'no'
generate_sqlite:
description:
- Boolean flag to indicate whether sqlite files should be generated during
a repository publish.
required: false
type: bool
default: 'no'
version_added: "2.8"
ca_cert:
description:
- CA certificate string used to validate the feed source SSL certificate.
This can be the file content or the path to the file.
type: str
aliases: [ importer_ssl_ca_cert ]
client_cert:
description:
- Certificate used as the client certificate when synchronizing the
repository. This is used to communicate authentication information to
the feed source. The value to this option must be the full path to the
certificate. The specified file may be the certificate itself or a
single file containing both the certificate and private key. This can be
the file content or the path to the file.
type: str
aliases: [ importer_ssl_client_cert ]
client_key:
description:
- Private key to the certificate specified in I(importer_ssl_client_cert),
assuming it is not included in the certificate file itself. This can be
the file content or the path to the file.
type: str
aliases: [ importer_ssl_client_key ]
name:
description:
- Name of the repo to add or remove. This correlates to repo-id in Pulp.
required: true
proxy_host:
description:
- Proxy url setting for the pulp repository importer. This is in the
format scheme://host.
required: false
default: null
proxy_port:
description:
- Proxy port setting for the pulp repository importer.
required: false
default: null
proxy_username:
description:
- Proxy username for the pulp repository importer.
required: false
default: null
version_added: "2.8"
proxy_password:
description:
- Proxy password for the pulp repository importer.
required: false
default: null
version_added: "2.8"
publish_distributor:
description:
- Distributor to use when state is C(publish). The default is to
publish all distributors.
pulp_host:
description:
- URL of the pulp server to connect to.
default: http://127.0.0.1
relative_url:
description:
- Relative URL for the local repository.
required: true
repo_type:
description:
- Repo plugin type to use (i.e. C(rpm), C(docker)).
default: rpm
repoview:
description:
- Whether to generate repoview files for a published repository. Setting
this to "yes" automatically activates `generate_sqlite`.
required: false
type: bool
default: 'no'
version_added: "2.8"
serve_http:
description:
- Make the repo available over HTTP.
type: bool
default: 'no'
serve_https:
description:
- Make the repo available over HTTPS.
type: bool
default: 'yes'
state:
description:
- The repo state. A state of C(sync) will queue a sync of the repo.
This is asynchronous but not delayed like a scheduled sync. A state of
C(publish) will use the repository's distributor to publish the content.
default: present
choices: [ "present", "absent", "sync", "publish" ]
url_password:
description:
- The password for use in HTTP basic authentication to the pulp API.
If the I(url_username) parameter is not specified, the I(url_password)
parameter will not be used.
url_username:
description:
- The username for use in HTTP basic authentication to the pulp API.
validate_certs:
description:
- If C(no), SSL certificates will not be validated. This should only be
used on personally controlled sites using self-signed certificates.
type: bool
default: 'yes'
wait_for_completion:
description:
- Wait for asynchronous tasks to complete before returning.
type: bool
default: 'no'
notes:
- This module can currently only create distributors and importers on rpm
repositories. Contributions to support other repo types are welcome.
extends_documentation_fragment:
- url
'''
EXAMPLES = '''
- name: Create a new repo with name 'my_repo'
pulp_repo:
name: my_repo
relative_url: my/repo
state: present
- name: Create a repo with a feed and a relative URL
pulp_repo:
name: my_centos_updates
repo_type: rpm
feed: http://mirror.centos.org/centos/6/updates/x86_64/
relative_url: centos/6/updates
url_username: admin
url_password: admin
force_basic_auth: yes
state: present
- name: Remove a repo from the pulp server
pulp_repo:
name: my_old_repo
repo_type: rpm
state: absent
'''
RETURN = '''
repo:
description: Name of the repo that the action was performed on.
returned: success
type: str
sample: my_repo
'''
import json
import os
from time import sleep
# import module snippets
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.urls import fetch_url
from ansible.module_utils.urls import url_argument_spec
class pulp_server(object):
"""
Class to interact with a Pulp server
"""
def __init__(self, module, pulp_host, repo_type, wait_for_completion=False):
self.module = module
self.host = pulp_host
self.repo_type = repo_type
self.repo_cache = dict()
self.wait_for_completion = wait_for_completion
def check_repo_exists(self, repo_id):
try:
self.get_repo_config_by_id(repo_id)
except IndexError:
return False
else:
return True
def compare_repo_distributor_config(self, repo_id, **kwargs):
repo_config = self.get_repo_config_by_id(repo_id)
for distributor in repo_config['distributors']:
for key, value in kwargs.items():
if key not in distributor['config'].keys():
return False
if not distributor['config'][key] == value:
return False
return True
def compare_repo_importer_config(self, repo_id, **kwargs):
repo_config = self.get_repo_config_by_id(repo_id)
for importer in repo_config['importers']:
for key, value in kwargs.items():
if value is not None:
if key not in importer['config'].keys():
return False
if not importer['config'][key] == value:
return False
return True
def create_repo(
self,
repo_id,
relative_url,
feed=None,
generate_sqlite=False,
serve_http=False,
serve_https=True,
proxy_host=None,
proxy_port=None,
proxy_username=None,
proxy_password=None,
repoview=False,
ssl_ca_cert=None,
ssl_client_cert=None,
ssl_client_key=None,
add_export_distributor=False
):
url = "%s/pulp/api/v2/repositories/" % self.host
data = dict()
data['id'] = repo_id
data['distributors'] = []
if self.repo_type == 'rpm':
yum_distributor = dict()
yum_distributor['distributor_id'] = "yum_distributor"
yum_distributor['distributor_type_id'] = "yum_distributor"
yum_distributor['auto_publish'] = True
yum_distributor['distributor_config'] = dict()
yum_distributor['distributor_config']['http'] = serve_http
yum_distributor['distributor_config']['https'] = serve_https
yum_distributor['distributor_config']['relative_url'] = relative_url
yum_distributor['distributor_config']['repoview'] = repoview
yum_distributor['distributor_config']['generate_sqlite'] = generate_sqlite or repoview
data['distributors'].append(yum_distributor)
if add_export_distributor:
export_distributor = dict()
export_distributor['distributor_id'] = "export_distributor"
export_distributor['distributor_type_id'] = "export_distributor"
export_distributor['auto_publish'] = False
export_distributor['distributor_config'] = dict()
export_distributor['distributor_config']['http'] = serve_http
export_distributor['distributor_config']['https'] = serve_https
export_distributor['distributor_config']['relative_url'] = relative_url
export_distributor['distributor_config']['repoview'] = repoview
export_distributor['distributor_config']['generate_sqlite'] = generate_sqlite or repoview
data['distributors'].append(export_distributor)
data['importer_type_id'] = "yum_importer"
data['importer_config'] = dict()
if feed:
data['importer_config']['feed'] = feed
if proxy_host:
data['importer_config']['proxy_host'] = proxy_host
if proxy_port:
data['importer_config']['proxy_port'] = proxy_port
if proxy_username:
data['importer_config']['proxy_username'] = proxy_username
if proxy_password:
data['importer_config']['proxy_password'] = proxy_password
if ssl_ca_cert:
data['importer_config']['ssl_ca_cert'] = ssl_ca_cert
if ssl_client_cert:
data['importer_config']['ssl_client_cert'] = ssl_client_cert
if ssl_client_key:
data['importer_config']['ssl_client_key'] = ssl_client_key
data['notes'] = {
"_repo-type": "rpm-repo"
}
response, info = fetch_url(
self.module,
url,
data=json.dumps(data),
method='POST')
if info['status'] != 201:
self.module.fail_json(
msg="Failed to create repo.",
status_code=info['status'],
response=info['msg'],
url=url)
else:
return True
def delete_repo(self, repo_id):
url = "%s/pulp/api/v2/repositories/%s/" % (self.host, repo_id)
response, info = fetch_url(self.module, url, data='', method='DELETE')
if info['status'] != 202:
self.module.fail_json(
msg="Failed to delete repo.",
status_code=info['status'],
response=info['msg'],
url=url)
if self.wait_for_completion:
self.verify_tasks_completed(json.load(response))
return True
def get_repo_config_by_id(self, repo_id):
if repo_id not in self.repo_cache.keys():
repo_array = [x for x in self.repo_list if x['id'] == repo_id]
self.repo_cache[repo_id] = repo_array[0]
return self.repo_cache[repo_id]
def publish_repo(self, repo_id, publish_distributor):
url = "%s/pulp/api/v2/repositories/%s/actions/publish/" % (self.host, repo_id)
# If there's no distributor specified, we will publish them all
if publish_distributor is None:
repo_config = self.get_repo_config_by_id(repo_id)
for distributor in repo_config['distributors']:
data = dict()
data['id'] = distributor['id']
response, info = fetch_url(
self.module,
url,
data=json.dumps(data),
method='POST')
if info['status'] != 202:
self.module.fail_json(
msg="Failed to publish the repo.",
status_code=info['status'],
response=info['msg'],
url=url,
distributor=distributor['id'])
else:
data = dict()
data['id'] = publish_distributor
response, info = fetch_url(
self.module,
url,
data=json.dumps(data),
method='POST')
if info['status'] != 202:
self.module.fail_json(
msg="Failed to publish the repo",
status_code=info['status'],
response=info['msg'],
url=url,
distributor=publish_distributor)
if self.wait_for_completion:
self.verify_tasks_completed(json.load(response))
return True
def sync_repo(self, repo_id):
url = "%s/pulp/api/v2/repositories/%s/actions/sync/" % (self.host, repo_id)
response, info = fetch_url(self.module, url, data='', method='POST')
if info['status'] != 202:
self.module.fail_json(
msg="Failed to schedule a sync of the repo.",
status_code=info['status'],
response=info['msg'],
url=url)
if self.wait_for_completion:
self.verify_tasks_completed(json.load(response))
return True
def update_repo_distributor_config(self, repo_id, **kwargs):
url = "%s/pulp/api/v2/repositories/%s/distributors/" % (self.host, repo_id)
repo_config = self.get_repo_config_by_id(repo_id)
for distributor in repo_config['distributors']:
distributor_url = "%s%s/" % (url, distributor['id'])
data = dict()
data['distributor_config'] = dict()
for key, value in kwargs.items():
data['distributor_config'][key] = value
response, info = fetch_url(
self.module,
distributor_url,
data=json.dumps(data),
method='PUT')
if info['status'] != 202:
self.module.fail_json(
msg="Failed to set the relative url for the repository.",
status_code=info['status'],
response=info['msg'],
url=url)
def update_repo_importer_config(self, repo_id, **kwargs):
url = "%s/pulp/api/v2/repositories/%s/importers/" % (self.host, repo_id)
data = dict()
importer_config = dict()
for key, value in kwargs.items():
if value is not None:
importer_config[key] = value
data['importer_config'] = importer_config
if self.repo_type == 'rpm':
data['importer_type_id'] = "yum_importer"
response, info = fetch_url(
self.module,
url,
data=json.dumps(data),
method='POST')
if info['status'] != 202:
self.module.fail_json(
msg="Failed to set the repo importer configuration",
status_code=info['status'],
response=info['msg'],
importer_config=importer_config,
url=url)
def set_repo_list(self):
url = "%s/pulp/api/v2/repositories/?details=true" % self.host
response, info = fetch_url(self.module, url, method='GET')
if info['status'] != 200:
self.module.fail_json(
msg="Request failed",
status_code=info['status'],
response=info['msg'],
url=url)
self.repo_list = json.load(response)
def verify_tasks_completed(self, response_dict):
for task in response_dict['spawned_tasks']:
task_url = "%s%s" % (self.host, task['_href'])
while True:
response, info = fetch_url(
self.module,
task_url,
data='',
method='GET')
if info['status'] != 200:
self.module.fail_json(
msg="Failed to check async task status.",
status_code=info['status'],
response=info['msg'],
url=task_url)
task_dict = json.load(response)
if task_dict['state'] == 'finished':
return True
if task_dict['state'] == 'error':
self.module.fail_json(msg="Asynchronous task failed to complete.", error=task_dict['error'])
sleep(2)
def main():
argument_spec = url_argument_spec()
argument_spec.update(
add_export_distributor=dict(default=False, type='bool'),
feed=dict(),
generate_sqlite=dict(default=False, type='bool'),
ca_cert=dict(aliases=['importer_ssl_ca_cert']),
client_cert=dict(aliases=['importer_ssl_client_cert']),
client_key=dict(aliases=['importer_ssl_client_key']),
name=dict(required=True, aliases=['repo']),
proxy_host=dict(),
proxy_port=dict(),
proxy_username=dict(),
proxy_password=dict(no_log=True),
publish_distributor=dict(),
pulp_host=dict(default="https://127.0.0.1"),
relative_url=dict(),
repo_type=dict(default="rpm"),
repoview=dict(default=False, type='bool'),
serve_http=dict(default=False, type='bool'),
serve_https=dict(default=True, type='bool'),
state=dict(
default="present",
choices=['absent', 'present', 'sync', 'publish']),
wait_for_completion=dict(default=False, type="bool"))
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True)
add_export_distributor = module.params['add_export_distributor']
feed = module.params['feed']
generate_sqlite = module.params['generate_sqlite']
importer_ssl_ca_cert = module.params['ca_cert']
importer_ssl_client_cert = module.params['client_cert']
importer_ssl_client_key = module.params['client_key']
proxy_host = module.params['proxy_host']
proxy_port = module.params['proxy_port']
proxy_username = module.params['proxy_username']
proxy_password = module.params['proxy_password']
publish_distributor = module.params['publish_distributor']
pulp_host = module.params['pulp_host']
relative_url = module.params['relative_url']
repo = module.params['name']
repo_type = module.params['repo_type']
repoview = module.params['repoview']
serve_http = module.params['serve_http']
serve_https = module.params['serve_https']
state = module.params['state']
wait_for_completion = module.params['wait_for_completion']
if (state == 'present') and (not relative_url):
module.fail_json(msg="When state is present, relative_url is required.")
# Ensure that the importer_ssl_* is the content and not a file path
if importer_ssl_ca_cert is not None:
importer_ssl_ca_cert_file_path = os.path.abspath(importer_ssl_ca_cert)
if os.path.isfile(importer_ssl_ca_cert_file_path):
importer_ssl_ca_cert_file_object = open(importer_ssl_ca_cert_file_path, 'r')
try:
importer_ssl_ca_cert = importer_ssl_ca_cert_file_object.read()
finally:
importer_ssl_ca_cert_file_object.close()
if importer_ssl_client_cert is not None:
importer_ssl_client_cert_file_path = os.path.abspath(importer_ssl_client_cert)
if os.path.isfile(importer_ssl_client_cert_file_path):
importer_ssl_client_cert_file_object = open(importer_ssl_client_cert_file_path, 'r')
try:
importer_ssl_client_cert = importer_ssl_client_cert_file_object.read()
finally:
importer_ssl_client_cert_file_object.close()
if importer_ssl_client_key is not None:
importer_ssl_client_key_file_path = os.path.abspath(importer_ssl_client_key)
if os.path.isfile(importer_ssl_client_key_file_path):
importer_ssl_client_key_file_object = open(importer_ssl_client_key_file_path, 'r')
try:
importer_ssl_client_key = importer_ssl_client_key_file_object.read()
finally:
importer_ssl_client_key_file_object.close()
server = pulp_server(module, pulp_host, repo_type, wait_for_completion=wait_for_completion)
server.set_repo_list()
repo_exists = server.check_repo_exists(repo)
changed = False
if state == 'absent' and repo_exists:
if not module.check_mode:
server.delete_repo(repo)
changed = True
if state == 'sync':
if not repo_exists:
module.fail_json(msg="Repository was not found. The repository can not be synced.")
if not module.check_mode:
server.sync_repo(repo)
changed = True
if state == 'publish':
if not repo_exists:
module.fail_json(msg="Repository was not found. The repository can not be published.")
if not module.check_mode:
server.publish_repo(repo, publish_distributor)
changed = True
if state == 'present':
if not repo_exists:
if not module.check_mode:
server.create_repo(
repo_id=repo,
relative_url=relative_url,
feed=feed,
generate_sqlite=generate_sqlite,
serve_http=serve_http,
serve_https=serve_https,
proxy_host=proxy_host,
proxy_port=proxy_port,
proxy_username=proxy_username,
proxy_password=proxy_password,
repoview=repoview,
ssl_ca_cert=importer_ssl_ca_cert,
ssl_client_cert=importer_ssl_client_cert,
ssl_client_key=importer_ssl_client_key,
add_export_distributor=add_export_distributor)
changed = True
else:
# Check to make sure all the settings are correct
# The importer config gets overwritten on set and not updated, so
# we set the whole config at the same time.
if not server.compare_repo_importer_config(
repo,
feed=feed,
proxy_host=proxy_host,
proxy_port=proxy_port,
proxy_username=proxy_username,
proxy_password=proxy_password,
ssl_ca_cert=importer_ssl_ca_cert,
ssl_client_cert=importer_ssl_client_cert,
ssl_client_key=importer_ssl_client_key
):
if not module.check_mode:
server.update_repo_importer_config(
repo,
feed=feed,
proxy_host=proxy_host,
proxy_port=proxy_port,
proxy_username=proxy_username,
proxy_password=proxy_password,
ssl_ca_cert=importer_ssl_ca_cert,
ssl_client_cert=importer_ssl_client_cert,
ssl_client_key=importer_ssl_client_key)
changed = True
if relative_url is not None:
if not server.compare_repo_distributor_config(
repo,
relative_url=relative_url
):
if not module.check_mode:
server.update_repo_distributor_config(
repo,
relative_url=relative_url)
changed = True
if not server.compare_repo_distributor_config(repo, generate_sqlite=generate_sqlite):
if not module.check_mode:
server.update_repo_distributor_config(repo, generate_sqlite=generate_sqlite)
changed = True
if not server.compare_repo_distributor_config(repo, repoview=repoview):
if not module.check_mode:
server.update_repo_distributor_config(repo, repoview=repoview)
changed = True
if not server.compare_repo_distributor_config(repo, http=serve_http):
if not module.check_mode:
server.update_repo_distributor_config(repo, http=serve_http)
changed = True
if not server.compare_repo_distributor_config(repo, https=serve_https):
if not module.check_mode:
server.update_repo_distributor_config(repo, https=serve_https)
changed = True
module.exit_json(changed=changed, repo=repo)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,101 |
ansible_distribution_version parsed wrong on Debian with Plesk
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The ansible_distribution_version fact is reported as "8" on Debian 9 when Plesk 18 is installed, because the /etc/plesk-release file is parsed.
It seems the setup module (_distro.py?) parses all /etc/*-release files on a target host, which can corrupt the ansible_distribution_version fact.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
setup
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.5/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.5.3 (default, Sep 27 2018, 17:25:39) [GCC 6.3.0 20170516]
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
root@web003:~# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
VERSION_CODENAME=stretch
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
root@web003:~# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 9.11 (stretch)
Release: 9.11
Codename: stretch
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Install Debian with Plesk 18, and run ansible setup against it.
If you can't install Plesk, just create the file /etc/plesk-release with this content:
```
18.0.19.3
Plesk Obsidian 18.0
```
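For illustration, here is a minimal sketch of how that first line gets mis-parsed (it reuses the release-content regex from lib/ansible/module_utils/distro/_distro.py; the snippet itself is only a repro aid, not part of Ansible):
```python
import re
# Same regex as _DISTRO_RELEASE_CONTENT_REVERSED_PATTERN in _distro.py;
# the line is matched in reverse.
pattern = re.compile(
    r'(?:[^)]*\)(.*)\()? *(?:STL )?([\d.+\-a-z]*\d) *(?:esaeler *)?(.+)')
line = '18.0.19.3'  # first line of /etc/plesk-release
match = pattern.match(line.strip()[::-1])
# The trailing (.+) "name" group swallows the leading '1', so the
# parsed version_id is '8.0.19.3' instead of '18.0.19.3'.
print(match.group(2)[::-1])  # -> 8.0.19.3
```
This is exactly the bogus "8.0.19.3" value shown under ACTUAL RESULTS below.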
<!--- Paste example playbooks or commands between quotes below -->
```yaml
ansible web003 -m setup -a "filter=ansible_dist*"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
`ansible_distribution_version` reports 9
```
"ansible_facts": {
"ansible_distribution": "Debian",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "9",
"ansible_distribution_release": "stretch",
"ansible_distribution_version": "9.11",
"discovered_interpreter_python": "/usr/bin/python"
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
`ansible_distribution_version` reports 8
```paste below
"ansible_facts": {
"ansible_distribution": "Debian",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/os-release",
"ansible_distribution_file_variety": "Debian",
"ansible_distribution_major_version": "8",
"ansible_distribution_release": "stretch",
"ansible_distribution_version": "8.0.19.3",
"discovered_interpreter_python": "/usr/bin/python"
```
|
https://github.com/ansible/ansible/issues/64101
|
https://github.com/ansible/ansible/pull/64665
|
2749090bc65afa0136301981e1794bb92b38b725
|
d5fd588b34e8c402157e1596bbee32f7d418f258
| 2019-10-30T11:38:41Z |
python
| 2019-11-18T20:05:23Z |
lib/ansible/module_utils/distro/_distro.py
|
# Copyright 2015,2016,2017 Nir Cohen
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# A local copy of the license can be found in licenses/Apache-License.txt
#
# Modifications to this code have been made by Ansible Project
"""
The ``distro`` package (``distro`` stands for Linux Distribution) provides
information about the Linux distribution it runs on, such as a reliable
machine-readable distro ID, or version information.
It is the recommended replacement for Python's original
:py:func:`platform.linux_distribution` function, but it provides much more
functionality. An alternative implementation became necessary because Python
3.5 deprecated this function, and Python 3.8 will remove it altogether.
Its predecessor function :py:func:`platform.dist` has been deprecated
since Python 2.6 and will also be removed in Python 3.8.
Still, there are many cases in which access to OS distribution information
is needed. See `Python issue 1322 <https://bugs.python.org/issue1322>`_ for
more information.
"""
import os
import re
import sys
import json
import shlex
import logging
import optparse
import subprocess
_UNIXCONFDIR = os.environ.get('UNIXCONFDIR', '/etc')
_OS_RELEASE_BASENAME = 'os-release'
#: Translation table for normalizing the "ID" attribute defined in os-release
#: files, for use by the :func:`distro.id` method.
#:
#: * Key: Value as defined in the os-release file, translated to lower case,
#: with blanks translated to underscores.
#:
#: * Value: Normalized value.
NORMALIZED_OS_ID = {
'ol': 'oracle', # Oracle Enterprise Linux
}
#: Translation table for normalizing the "Distributor ID" attribute returned by
#: the lsb_release command, for use by the :func:`distro.id` method.
#:
#: * Key: Value as returned by the lsb_release command, translated to lower
#: case, with blanks translated to underscores.
#:
#: * Value: Normalized value.
NORMALIZED_LSB_ID = {
'enterpriseenterprise': 'oracle', # Oracle Enterprise Linux
'redhatenterpriseworkstation': 'rhel', # RHEL 6, 7 Workstation
'redhatenterpriseserver': 'rhel', # RHEL 6, 7 Server
}
#: Translation table for normalizing the distro ID derived from the file name
#: of distro release files, for use by the :func:`distro.id` method.
#:
#: * Key: Value as derived from the file name of a distro release file,
#: translated to lower case, with blanks translated to underscores.
#:
#: * Value: Normalized value.
NORMALIZED_DISTRO_ID = {
'redhat': 'rhel', # RHEL 6.x, 7.x
}
# Pattern for content of distro release file (reversed)
_DISTRO_RELEASE_CONTENT_REVERSED_PATTERN = re.compile(
r'(?:[^)]*\)(.*)\()? *(?:STL )?([\d.+\-a-z]*\d) *(?:esaeler *)?(.+)')
# Pattern for base file name of distro release file
_DISTRO_RELEASE_BASENAME_PATTERN = re.compile(
r'(\w+)[-_](release|version)$')
# Base file names to be ignored when searching for distro release file
_DISTRO_RELEASE_IGNORE_BASENAMES = (
'debian_version',
'lsb-release',
'oem-release',
_OS_RELEASE_BASENAME,
'system-release'
)
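# Note: this ignore list is not exhaustive. Third-party files such as
# /etc/plesk-release also match _DISTRO_RELEASE_BASENAME_PATTERN below
# and may be picked up as the distro release file when no
# earlier-sorting distro-specific file is present.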
#
# Python 2.6 does not have subprocess.check_output so replicate it here
#
def _my_check_output(*popenargs, **kwargs):
r"""Run command with arguments and return its output as a byte string.
If the exit code was non-zero it raises a CalledProcessError. The
CalledProcessError object will have the return code in the returncode
attribute and output in the output attribute.
The arguments are the same as for the Popen constructor. Example:
>>> check_output(["ls", "-l", "/dev/null"])
'crw-rw-rw- 1 root root 1, 3 Oct 18 2007 /dev/null\n'
The stdout argument is not allowed as it is used internally.
To capture standard error in the result, use stderr=STDOUT.
>>> check_output(["/bin/sh", "-c",
... "ls -l non_existent_file ; exit 0"],
... stderr=STDOUT)
'ls: non_existent_file: No such file or directory\n'
This is a backport of Python-2.7's check output to Python-2.6
"""
if 'stdout' in kwargs:
raise ValueError(
'stdout argument not allowed, it will be overridden.'
)
process = subprocess.Popen(
stdout=subprocess.PIPE, *popenargs, **kwargs
)
output, unused_err = process.communicate()
retcode = process.poll()
if retcode:
cmd = kwargs.get("args")
if cmd is None:
cmd = popenargs[0]
# Deviation from Python-2.7: Python-2.6's CalledProcessError does not
# have an argument for the stdout so simply omit it.
raise subprocess.CalledProcessError(retcode, cmd)
return output
try:
_check_output = subprocess.check_output
except AttributeError:
_check_output = _my_check_output
def linux_distribution(full_distribution_name=True):
"""
Return information about the current OS distribution as a tuple
``(id_name, version, codename)`` with items as follows:
* ``id_name``: If *full_distribution_name* is false, the result of
:func:`distro.id`. Otherwise, the result of :func:`distro.name`.
* ``version``: The result of :func:`distro.version`.
* ``codename``: The result of :func:`distro.codename`.
The interface of this function is compatible with the original
:py:func:`platform.linux_distribution` function, supporting a subset of
its parameters.
The data it returns may not exactly be the same, because it uses more data
sources than the original function, and that may lead to different data if
the OS distribution is not consistent across multiple data sources it
provides (there are indeed such distributions ...).
Another reason for differences is the fact that the :func:`distro.id`
method normalizes the distro ID string to a reliable machine-readable value
for a number of popular OS distributions.
"""
return _distro.linux_distribution(full_distribution_name)
def id():
"""
Return the distro ID of the current distribution, as a
machine-readable string.
For a number of OS distributions, the returned distro ID value is
*reliable*, in the sense that it is documented and that it does not change
across releases of the distribution.
This package maintains the following reliable distro ID values:
============== =========================================
Distro ID Distribution
============== =========================================
"ubuntu" Ubuntu
"debian" Debian
"rhel" RedHat Enterprise Linux
"centos" CentOS
"fedora" Fedora
"sles" SUSE Linux Enterprise Server
"opensuse" openSUSE
"amazon" Amazon Linux
"arch" Arch Linux
"cloudlinux" CloudLinux OS
"exherbo" Exherbo Linux
"gentoo" GenToo Linux
"ibm_powerkvm" IBM PowerKVM
"kvmibm" KVM for IBM z Systems
"linuxmint" Linux Mint
"mageia" Mageia
"mandriva" Mandriva Linux
"parallels" Parallels
"pidora" Pidora
"raspbian" Raspbian
"oracle" Oracle Linux (and Oracle Enterprise Linux)
"scientific" Scientific Linux
"slackware" Slackware
"xenserver" XenServer
"openbsd" OpenBSD
"netbsd" NetBSD
"freebsd" FreeBSD
============== =========================================
If you have a need to get distros for reliable IDs added into this set,
or if you find that the :func:`distro.id` function returns a different
distro ID for one of the listed distros, please create an issue in the
`distro issue tracker`_.
**Lookup hierarchy and transformations:**
First, the ID is obtained from the following sources, in the specified
order. The first available and non-empty value is used:
* the value of the "ID" attribute of the os-release file,
* the value of the "Distributor ID" attribute returned by the lsb_release
command,
* the first part of the file name of the distro release file,
The so determined ID value then passes the following transformations,
before it is returned by this method:
* it is translated to lower case,
* blanks (which should not be there anyway) are translated to underscores,
* a normalization of the ID is performed, based upon
`normalization tables`_. The purpose of this normalization is to ensure
that the ID is as reliable as possible, even across incompatible changes
in the OS distributions. A common reason for an incompatible change is
the addition of an os-release file, or the addition of the lsb_release
command, with ID values that differ from what was previously determined
from the distro release file name.
"""
return _distro.id()
def name(pretty=False):
"""
Return the name of the current OS distribution, as a human-readable
string.
If *pretty* is false, the name is returned without version or codename.
(e.g. "CentOS Linux")
If *pretty* is true, the version and codename are appended.
(e.g. "CentOS Linux 7.1.1503 (Core)")
**Lookup hierarchy:**
The name is obtained from the following sources, in the specified order.
The first available and non-empty value is used:
* If *pretty* is false:
- the value of the "NAME" attribute of the os-release file,
- the value of the "Distributor ID" attribute returned by the lsb_release
command,
- the value of the "<name>" field of the distro release file.
* If *pretty* is true:
- the value of the "PRETTY_NAME" attribute of the os-release file,
- the value of the "Description" attribute returned by the lsb_release
command,
- the value of the "<name>" field of the distro release file, appended
with the value of the pretty version ("<version_id>" and "<codename>"
fields) of the distro release file, if available.
"""
return _distro.name(pretty)
def version(pretty=False, best=False):
"""
Return the version of the current OS distribution, as a human-readable
string.
If *pretty* is false, the version is returned without codename (e.g.
"7.0").
If *pretty* is true, the codename in parenthesis is appended, if the
codename is non-empty (e.g. "7.0 (Maipo)").
Some distributions provide version numbers with different precisions in
the different sources of distribution information. Examining the different
sources in a fixed priority order does not always yield the most precise
version (e.g. for Debian 8.2, or CentOS 7.1).
The *best* parameter can be used to control the approach for the returned
version:
If *best* is false, the first non-empty version number in priority order of
the examined sources is returned.
If *best* is true, the most precise version number out of all examined
sources is returned.
**Lookup hierarchy:**
In all cases, the version number is obtained from the following sources.
If *best* is false, this order represents the priority order:
* the value of the "VERSION_ID" attribute of the os-release file,
* the value of the "Release" attribute returned by the lsb_release
command,
* the version number parsed from the "<version_id>" field of the first line
of the distro release file,
* the version number parsed from the "PRETTY_NAME" attribute of the
os-release file, if it follows the format of the distro release files.
* the version number parsed from the "Description" attribute returned by
the lsb_release command, if it follows the format of the distro release
files.
"""
return _distro.version(pretty, best)
def version_parts(best=False):
"""
Return the version of the current OS distribution as a tuple
``(major, minor, build_number)`` with items as follows:
* ``major``: The result of :func:`distro.major_version`.
* ``minor``: The result of :func:`distro.minor_version`.
* ``build_number``: The result of :func:`distro.build_number`.
For a description of the *best* parameter, see the :func:`distro.version`
method.
"""
return _distro.version_parts(best)
def major_version(best=False):
"""
Return the major version of the current OS distribution, as a string,
if provided.
Otherwise, the empty string is returned. The major version is the first
part of the dot-separated version string.
For a description of the *best* parameter, see the :func:`distro.version`
method.
"""
return _distro.major_version(best)
def minor_version(best=False):
"""
Return the minor version of the current OS distribution, as a string,
if provided.
Otherwise, the empty string is returned. The minor version is the second
part of the dot-separated version string.
For a description of the *best* parameter, see the :func:`distro.version`
method.
"""
return _distro.minor_version(best)
def build_number(best=False):
"""
Return the build number of the current OS distribution, as a string,
if provided.
Otherwise, the empty string is returned. The build number is the third part
of the dot-separated version string.
For a description of the *best* parameter, see the :func:`distro.version`
method.
"""
return _distro.build_number(best)
def like():
"""
Return a space-separated list of distro IDs of distributions that are
closely related to the current OS distribution in regards to packaging
and programming interfaces, for example distributions the current
distribution is a derivative from.
**Lookup hierarchy:**
This information item is only provided by the os-release file.
For details, see the description of the "ID_LIKE" attribute in the
`os-release man page
<http://www.freedesktop.org/software/systemd/man/os-release.html>`_.
"""
return _distro.like()
def codename():
"""
Return the codename for the release of the current OS distribution,
as a string.
If the distribution does not have a codename, an empty string is returned.
Note that the returned codename is not always really a codename. For
example, openSUSE returns "x86_64". This function does not handle such
cases in any special way and just returns the string it finds, if any.
**Lookup hierarchy:**
* the codename within the "VERSION" attribute of the os-release file, if
provided,
* the value of the "Codename" attribute returned by the lsb_release
command,
* the value of the "<codename>" field of the distro release file.
"""
return _distro.codename()
def info(pretty=False, best=False):
"""
Return certain machine-readable information items about the current OS
distribution in a dictionary, as shown in the following example:
.. sourcecode:: python
{
'id': 'rhel',
'version': '7.0',
'version_parts': {
'major': '7',
'minor': '0',
'build_number': ''
},
'like': 'fedora',
'codename': 'Maipo'
}
The dictionary structure and keys are always the same, regardless of which
information items are available in the underlying data sources. The values
for the various keys are as follows:
* ``id``: The result of :func:`distro.id`.
* ``version``: The result of :func:`distro.version`.
* ``version_parts -> major``: The result of :func:`distro.major_version`.
* ``version_parts -> minor``: The result of :func:`distro.minor_version`.
* ``version_parts -> build_number``: The result of
:func:`distro.build_number`.
* ``like``: The result of :func:`distro.like`.
* ``codename``: The result of :func:`distro.codename`.
For a description of the *pretty* and *best* parameters, see the
:func:`distro.version` method.
"""
return _distro.info(pretty, best)
def os_release_info():
"""
Return a dictionary containing key-value pairs for the information items
from the os-release file data source of the current OS distribution.
See `os-release file`_ for details about these information items.
"""
return _distro.os_release_info()
def lsb_release_info():
"""
Return a dictionary containing key-value pairs for the information items
from the lsb_release command data source of the current OS distribution.
See `lsb_release command output`_ for details about these information
items.
"""
return _distro.lsb_release_info()
def distro_release_info():
"""
Return a dictionary containing key-value pairs for the information items
from the distro release file data source of the current OS distribution.
See `distro release file`_ for details about these information items.
"""
return _distro.distro_release_info()
def uname_info():
"""
Return a dictionary containing key-value pairs for the information items
from the distro release file data source of the current OS distribution.
"""
return _distro.uname_info()
def os_release_attr(attribute):
"""
Return a single named information item from the os-release file data source
of the current OS distribution.
Parameters:
* ``attribute`` (string): Key of the information item.
Returns:
* (string): Value of the information item, if the item exists.
The empty string, if the item does not exist.
See `os-release file`_ for details about these information items.
"""
return _distro.os_release_attr(attribute)
def lsb_release_attr(attribute):
"""
Return a single named information item from the lsb_release command output
data source of the current OS distribution.
Parameters:
* ``attribute`` (string): Key of the information item.
Returns:
* (string): Value of the information item, if the item exists.
The empty string, if the item does not exist.
See `lsb_release command output`_ for details about these information
items.
"""
return _distro.lsb_release_attr(attribute)
def distro_release_attr(attribute):
"""
Return a single named information item from the distro release file
data source of the current OS distribution.
Parameters:
* ``attribute`` (string): Key of the information item.
Returns:
* (string): Value of the information item, if the item exists.
The empty string, if the item does not exist.
See `distro release file`_ for details about these information items.
"""
return _distro.distro_release_attr(attribute)
def uname_attr(attribute):
"""
Return a single named information item from the distro release file
data source of the current OS distribution.
Parameters:
* ``attribute`` (string): Key of the information item.
Returns:
* (string): Value of the information item, if the item exists.
The empty string, if the item does not exist.
"""
return _distro.uname_attr(attribute)
class cached_property(object):
"""A version of @property which caches the value. On access, it calls the
underlying function and sets the value in `__dict__` so future accesses
will not re-call the property.
"""
def __init__(self, f):
self._fname = f.__name__
self._f = f
def __get__(self, obj, owner):
assert obj is not None, 'call {0} on an instance'.format(self._fname)
ret = obj.__dict__[self._fname] = self._f(obj)
return ret
class LinuxDistribution(object):
"""
    Provides information about an OS distribution.
This package creates a private module-global instance of this class with
default initialization arguments, that is used by the
`consolidated accessor functions`_ and `single source accessor functions`_.
By using default initialization arguments, that module-global instance
returns data about the current OS distribution (i.e. the distro this
package runs on).
Normally, it is not necessary to create additional instances of this class.
However, in situations where control is needed over the exact data sources
that are used, instances of this class can be created with a specific
distro release file, or a specific os-release file, or without invoking the
lsb_release command.
"""
def __init__(self,
include_lsb=True,
os_release_file='',
distro_release_file='',
include_uname=True):
"""
The initialization method of this class gathers information from the
available data sources, and stores that in private instance attributes.
Subsequent access to the information items uses these private instance
attributes, so that the data sources are read only once.
Parameters:
* ``include_lsb`` (bool): Controls whether the
`lsb_release command output`_ is included as a data source.
If the lsb_release command is not available in the program execution
path, the data source for the lsb_release command will be empty.
* ``os_release_file`` (string): The path name of the
`os-release file`_ that is to be used as a data source.
An empty string (the default) will cause the default path name to
be used (see `os-release file`_ for details).
If the specified or defaulted os-release file does not exist, the
data source for the os-release file will be empty.
* ``distro_release_file`` (string): The path name of the
`distro release file`_ that is to be used as a data source.
An empty string (the default) will cause a default search algorithm
to be used (see `distro release file`_ for details).
If the specified distro release file does not exist, or if no default
distro release file can be found, the data source for the distro
release file will be empty.
        * ``include_uname`` (bool): Controls whether uname command output is
included as a data source. If the uname command is not available in
the program execution path the data source for the uname command will
be empty.
Public instance attributes:
* ``os_release_file`` (string): The path name of the
`os-release file`_ that is actually used as a data source. The
empty string if no distro release file is used as a data source.
* ``distro_release_file`` (string): The path name of the
`distro release file`_ that is actually used as a data source. The
empty string if no distro release file is used as a data source.
* ``include_lsb`` (bool): The result of the ``include_lsb`` parameter.
This controls whether the lsb information will be loaded.
* ``include_uname`` (bool): The result of the ``include_uname``
parameter. This controls whether the uname information will
be loaded.
Raises:
* :py:exc:`IOError`: Some I/O issue with an os-release file or distro
release file.
* :py:exc:`subprocess.CalledProcessError`: The lsb_release command had
some issue (other than not being available in the program execution
path).
* :py:exc:`UnicodeError`: A data source has unexpected characters or
uses an unexpected encoding.
"""
self.os_release_file = os_release_file or \
os.path.join(_UNIXCONFDIR, _OS_RELEASE_BASENAME)
self.distro_release_file = distro_release_file or '' # updated later
self.include_lsb = include_lsb
self.include_uname = include_uname
def __repr__(self):
"""Return repr of all info
"""
return \
"LinuxDistribution(" \
"os_release_file={self.os_release_file!r}, " \
"distro_release_file={self.distro_release_file!r}, " \
"include_lsb={self.include_lsb!r}, " \
"include_uname={self.include_uname!r}, " \
"_os_release_info={self._os_release_info!r}, " \
"_lsb_release_info={self._lsb_release_info!r}, " \
"_distro_release_info={self._distro_release_info!r}, " \
"_uname_info={self._uname_info!r})".format(
self=self)
def linux_distribution(self, full_distribution_name=True):
"""
Return information about the OS distribution that is compatible
with Python's :func:`platform.linux_distribution`, supporting a subset
of its parameters.
For details, see :func:`distro.linux_distribution`.
"""
return (
self.name() if full_distribution_name else self.id(),
self.version(),
self.codename()
)
def id(self):
"""Return the distro ID of the OS distribution, as a string.
For details, see :func:`distro.id`.
"""
def normalize(distro_id, table):
distro_id = distro_id.lower().replace(' ', '_')
return table.get(distro_id, distro_id)
distro_id = self.os_release_attr('id')
if distro_id:
return normalize(distro_id, NORMALIZED_OS_ID)
distro_id = self.lsb_release_attr('distributor_id')
if distro_id:
return normalize(distro_id, NORMALIZED_LSB_ID)
distro_id = self.distro_release_attr('id')
if distro_id:
return normalize(distro_id, NORMALIZED_DISTRO_ID)
distro_id = self.uname_attr('id')
if distro_id:
return normalize(distro_id, NORMALIZED_DISTRO_ID)
return ''
def name(self, pretty=False):
"""
Return the name of the OS distribution, as a string.
For details, see :func:`distro.name`.
"""
name = self.os_release_attr('name') \
or self.lsb_release_attr('distributor_id') \
or self.distro_release_attr('name') \
or self.uname_attr('name')
if pretty:
name = self.os_release_attr('pretty_name') \
or self.lsb_release_attr('description')
if not name:
name = self.distro_release_attr('name') \
or self.uname_attr('name')
version = self.version(pretty=True)
if version:
name = name + ' ' + version
return name or ''
def version(self, pretty=False, best=False):
"""
Return the version of the OS distribution, as a string.
For details, see :func:`distro.version`.
"""
versions = [
self.os_release_attr('version_id'),
self.lsb_release_attr('release'),
self.distro_release_attr('version_id'),
self._parse_distro_release_content(
self.os_release_attr('pretty_name')).get('version_id', ''),
self._parse_distro_release_content(
self.lsb_release_attr('description')).get('version_id', ''),
self.uname_attr('release')
]
version = ''
if best:
# This algorithm uses the last version in priority order that has
# the best precision. If the versions are not in conflict, that
# does not matter; otherwise, using the last one instead of the
# first one might be considered a surprise.
for v in versions:
if v.count(".") > version.count(".") or version == '':
version = v
else:
for v in versions:
if v != '':
version = v
break
if pretty and version and self.codename():
version = u'{0} ({1})'.format(version, self.codename())
return version
def version_parts(self, best=False):
"""
Return the version of the OS distribution, as a tuple of version
numbers.
For details, see :func:`distro.version_parts`.
"""
version_str = self.version(best=best)
if version_str:
version_regex = re.compile(r'(\d+)\.?(\d+)?\.?(\d+)?')
matches = version_regex.match(version_str)
if matches:
major, minor, build_number = matches.groups()
return major, minor or '', build_number or ''
return '', '', ''
def major_version(self, best=False):
"""
Return the major version number of the current distribution.
For details, see :func:`distro.major_version`.
"""
return self.version_parts(best)[0]
def minor_version(self, best=False):
"""
Return the minor version number of the current distribution.
For details, see :func:`distro.minor_version`.
"""
return self.version_parts(best)[1]
def build_number(self, best=False):
"""
Return the build number of the current distribution.
For details, see :func:`distro.build_number`.
"""
return self.version_parts(best)[2]
def like(self):
"""
Return the IDs of distributions that are like the OS distribution.
For details, see :func:`distro.like`.
"""
return self.os_release_attr('id_like') or ''
def codename(self):
"""
Return the codename of the OS distribution.
For details, see :func:`distro.codename`.
"""
try:
# Handle os_release specially since distros might purposefully set
# this to empty string to have no codename
return self._os_release_info['codename']
except KeyError:
return self.lsb_release_attr('codename') \
or self.distro_release_attr('codename') \
or ''
def info(self, pretty=False, best=False):
"""
Return certain machine-readable information about the OS
distribution.
For details, see :func:`distro.info`.
"""
return dict(
id=self.id(),
version=self.version(pretty, best),
version_parts=dict(
major=self.major_version(best),
minor=self.minor_version(best),
build_number=self.build_number(best)
),
like=self.like(),
codename=self.codename(),
)
def os_release_info(self):
"""
Return a dictionary containing key-value pairs for the information
items from the os-release file data source of the OS distribution.
For details, see :func:`distro.os_release_info`.
"""
return self._os_release_info
def lsb_release_info(self):
"""
Return a dictionary containing key-value pairs for the information
items from the lsb_release command data source of the OS
distribution.
For details, see :func:`distro.lsb_release_info`.
"""
return self._lsb_release_info
def distro_release_info(self):
"""
Return a dictionary containing key-value pairs for the information
items from the distro release file data source of the OS
distribution.
For details, see :func:`distro.distro_release_info`.
"""
return self._distro_release_info
def uname_info(self):
"""
Return a dictionary containing key-value pairs for the information
items from the uname command data source of the OS distribution.
For details, see :func:`distro.uname_info`.
"""
return self._uname_info
def os_release_attr(self, attribute):
"""
Return a single named information item from the os-release file data
source of the OS distribution.
For details, see :func:`distro.os_release_attr`.
"""
return self._os_release_info.get(attribute, '')
def lsb_release_attr(self, attribute):
"""
Return a single named information item from the lsb_release command
output data source of the OS distribution.
For details, see :func:`distro.lsb_release_attr`.
"""
return self._lsb_release_info.get(attribute, '')
def distro_release_attr(self, attribute):
"""
Return a single named information item from the distro release file
data source of the OS distribution.
For details, see :func:`distro.distro_release_attr`.
"""
return self._distro_release_info.get(attribute, '')
def uname_attr(self, attribute):
"""
Return a single named information item from the uname command
output data source of the OS distribution.
        For details, see :func:`distro.uname_attr`.
"""
return self._uname_info.get(attribute, '')
@cached_property
def _os_release_info(self):
"""
Get the information items from the specified os-release file.
Returns:
A dictionary containing all information items.
"""
if os.path.isfile(self.os_release_file):
with open(self.os_release_file) as release_file:
return self._parse_os_release_content(release_file)
return {}
@staticmethod
def _parse_os_release_content(lines):
"""
Parse the lines of an os-release file.
Parameters:
* lines: Iterable through the lines in the os-release file.
Each line must be a unicode string or a UTF-8 encoded byte
string.
Returns:
A dictionary containing all information items.
"""
props = {}
lexer = shlex.shlex(lines, posix=True)
lexer.whitespace_split = True
# The shlex module defines its `wordchars` variable using literals,
# making it dependent on the encoding of the Python source file.
# In Python 2.6 and 2.7, the shlex source file is encoded in
# 'iso-8859-1', and the `wordchars` variable is defined as a byte
# string. This causes a UnicodeDecodeError to be raised when the
# parsed content is a unicode object. The following fix resolves that
# (... but it should be fixed in shlex...):
if sys.version_info[0] == 2 and isinstance(lexer.wordchars, bytes):
lexer.wordchars = lexer.wordchars.decode('iso-8859-1')
tokens = list(lexer)
for token in tokens:
# At this point, all shell-like parsing has been done (i.e.
# comments processed, quotes and backslash escape sequences
# processed, multi-line values assembled, trailing newlines
# stripped, etc.), so the tokens are now either:
# * variable assignments: var=value
# * commands or their arguments (not allowed in os-release)
if '=' in token:
k, v = token.split('=', 1)
if isinstance(v, bytes):
v = v.decode('utf-8')
props[k.lower()] = v
else:
# Ignore any tokens that are not variable assignments
pass
if 'version_codename' in props:
# os-release added a version_codename field. Use that in
            # preference to anything else. Note that some distros purposefully
# do not have code names. They should be setting
# version_codename=""
props['codename'] = props['version_codename']
elif 'ubuntu_codename' in props:
# Same as above but a non-standard field name used on older Ubuntus
props['codename'] = props['ubuntu_codename']
elif 'version' in props:
# If there is no version_codename, parse it from the version
codename = re.search(r'(\(\D+\))|,(\s+)?\D+', props['version'])
if codename:
codename = codename.group()
codename = codename.strip('()')
codename = codename.strip(',')
codename = codename.strip()
                # codename appears within parentheses.
props['codename'] = codename
return props
@cached_property
def _lsb_release_info(self):
"""
Get the information items from the lsb_release command output.
Returns:
A dictionary containing all information items.
"""
if not self.include_lsb:
return {}
with open(os.devnull, 'w') as devnull:
try:
cmd = ('lsb_release', '-a')
stdout = _check_output(cmd, stderr=devnull)
except OSError: # Command not found
return {}
content = stdout.decode(sys.getfilesystemencoding()).splitlines()
return self._parse_lsb_release_content(content)
@staticmethod
def _parse_lsb_release_content(lines):
"""
Parse the output of the lsb_release command.
Parameters:
* lines: Iterable through the lines of the lsb_release output.
Each line must be a unicode string or a UTF-8 encoded byte
string.
Returns:
A dictionary containing all information items.
"""
props = {}
for line in lines:
kv = line.strip('\n').split(':', 1)
if len(kv) != 2:
# Ignore lines without colon.
continue
k, v = kv
props.update({k.replace(' ', '_').lower(): v.strip()})
return props
@cached_property
def _uname_info(self):
with open(os.devnull, 'w') as devnull:
try:
cmd = ('uname', '-rs')
stdout = _check_output(cmd, stderr=devnull)
except OSError:
return {}
content = stdout.decode(sys.getfilesystemencoding()).splitlines()
return self._parse_uname_content(content)
@staticmethod
def _parse_uname_content(lines):
props = {}
match = re.search(r'^([^\s]+)\s+([\d\.]+)', lines[0].strip())
if match:
name, version = match.groups()
# This is to prevent the Linux kernel version from
# appearing as the 'best' version on otherwise
# identifiable distributions.
if name == 'Linux':
return {}
props['id'] = name.lower()
props['name'] = name
props['release'] = version
return props
@cached_property
def _distro_release_info(self):
"""
Get the information items from the specified distro release file.
Returns:
A dictionary containing all information items.
"""
if self.distro_release_file:
# If it was specified, we use it and parse what we can, even if
# its file name or content does not match the expected pattern.
distro_info = self._parse_distro_release_file(
self.distro_release_file)
basename = os.path.basename(self.distro_release_file)
# The file name pattern for user-specified distro release files
# is somewhat more tolerant (compared to when searching for the
# file), because we want to use what was specified as best as
# possible.
match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename)
if 'name' in distro_info \
and 'cloudlinux' in distro_info['name'].lower():
distro_info['id'] = 'cloudlinux'
elif match:
distro_info['id'] = match.group(1)
return distro_info
else:
try:
basenames = os.listdir(_UNIXCONFDIR)
# We sort for repeatability in cases where there are multiple
# distro specific files; e.g. CentOS, Oracle, Enterprise all
# containing `redhat-release` on top of their own.
basenames.sort()
except OSError:
# This may occur when /etc is not readable but we can't be
# sure about the *-release files. Check common entries of
# /etc for information. If they turn out to not be there the
# error is handled in `_parse_distro_release_file()`.
basenames = ['SuSE-release',
'arch-release',
'base-release',
'centos-release',
'fedora-release',
'gentoo-release',
'mageia-release',
'mandrake-release',
'mandriva-release',
'mandrivalinux-release',
'manjaro-release',
'oracle-release',
'redhat-release',
'sl-release',
'slackware-version']
for basename in basenames:
if basename in _DISTRO_RELEASE_IGNORE_BASENAMES:
continue
match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename)
if match:
filepath = os.path.join(_UNIXCONFDIR, basename)
distro_info = self._parse_distro_release_file(filepath)
if 'name' in distro_info:
# The name is always present if the pattern matches
self.distro_release_file = filepath
distro_info['id'] = match.group(1)
if 'cloudlinux' in distro_info['name'].lower():
distro_info['id'] = 'cloudlinux'
return distro_info
return {}
def _parse_distro_release_file(self, filepath):
"""
Parse a distro release file.
Parameters:
* filepath: Path name of the distro release file.
Returns:
A dictionary containing all information items.
"""
try:
with open(filepath) as fp:
# Only parse the first line. For instance, on SLES there
# are multiple lines. We don't want them...
return self._parse_distro_release_content(fp.readline())
except (OSError, IOError):
# Ignore not being able to read a specific, seemingly version
# related file.
# See https://github.com/nir0s/distro/issues/162
return {}
@staticmethod
def _parse_distro_release_content(line):
"""
Parse a line from a distro release file.
Parameters:
* line: Line from the distro release file. Must be a unicode string
or a UTF-8 encoded byte string.
Returns:
A dictionary containing all information items.
"""
if isinstance(line, bytes):
line = line.decode('utf-8')
matches = _DISTRO_RELEASE_CONTENT_REVERSED_PATTERN.match(
line.strip()[::-1])
distro_info = {}
if matches:
# regexp ensures non-None
distro_info['name'] = matches.group(3)[::-1]
if matches.group(2):
distro_info['version_id'] = matches.group(2)[::-1]
if matches.group(1):
distro_info['codename'] = matches.group(1)[::-1]
elif line:
distro_info['name'] = line.strip()
return distro_info
_distro = LinuxDistribution()
def main():
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler(sys.stdout))
parser = optparse.OptionParser(description="OS distro info tool")
parser.add_option(
'--json',
'-j',
help="Output in machine readable format",
action="store_true")
    options, args = parser.parse_args()
    if options.json:
logger.info(json.dumps(info(), indent=4, sort_keys=True))
else:
logger.info('Name: %s', name(pretty=True))
distribution_version = version(pretty=True)
logger.info('Version: %s', distribution_version)
distribution_codename = codename()
logger.info('Codename: %s', distribution_codename)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,616 |
Inventory cache settings 2.7.13
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
According to the documentation, if _cache_plugin_ and _cache_connection_ are not set in the _[inventory]_ section of ansible.cfg and cache is set to True, the values should be taken from fact_caching and fact_caching_connection in the [defaults] section.
This works with ansible 2.8.5 but returns an error in 2.7.13.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
Inventory cache settings
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.7.13
config file = /home/xxx/automation-cisco/ansible/ansible.cfg
configured module search path = ['/home/xxx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Oct 7 2019, 12:59:55) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
CACHE_PLUGIN(/home/xxx/automation-cisco/ansible/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/xxx/automation-cisco/ansible/ansible.cfg) = cache
```
The config in the [inventory] section doesn't show up in this output (whether cache_plugin is set or not) - OK, found #46097
In Ansible 2.8.5 :
```
INVENTORY_CACHE_ENABLED(/home/xxx/automation-cisco/ansible/ansible.cfg) = True
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Set the following in ansible.cfg
```
[defaults]
fact_caching = jsonfile
fact_caching_connection = cache
[inventory]
cache = True
#linked to fact_caching options if not specified (OK in 2.8+, not in 2.7)
#cache_plugin = jsonfile
#cache_connection = cache
```
Will return an error; uncomment cache_plugin AND cache_connection and it will work
<!--- Paste example playbooks or commands between quotes below -->
```bash
ansible-inventory --graph
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The inventory cache options should inherit from the fact_caching options; even the error message states it.
https://docs.ansible.com/ansible/2.7/plugins/cache.html
> If an inventory-specific cache plugin is not provided and inventory caching is enabled, the fact cache plugin is used for inventory.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
[WARNING]: * Failed to parse /home/xxx/automation-cisco/ansible/inventory/01-netbox.yml with auto plugin: error, 'None' inventory cache plugin requires the one of the following to be set: ansible.cfg: [default]: fact_caching_connection, [inventory]: cache_connection;
Environment: ANSIBLE_INVENTORY_CACHE_CONNECTION, ANSIBLE_CACHE_PLUGIN_CONNECTION.to be set to a writeable directory path
```
|
https://github.com/ansible/ansible/issues/63616
|
https://github.com/ansible/ansible/pull/63635
|
7d4800deb12c0b2894b0715065e1e0ab4caea99f
|
a4b36b2e6a1e66ee8479e36adcad4419fa058d3e
| 2019-10-17T08:30:34Z |
python
| 2019-11-18T22:00:26Z |
docs/docsite/rst/plugins/cache.rst
|
.. _cache_plugins:
Cache Plugins
=============
.. contents::
:local:
:depth: 2
Cache plugins implement a backend caching mechanism that allows Ansible to store gathered facts or inventory source data
without the performance hit of retrieving them from source.
The default cache plugin is the :ref:`memory <memory_cache>` plugin, which only caches the data for the current execution of Ansible. Other plugins with persistent storage are available to allow caching the data across runs.
You can use a separate cache plugin for inventory and facts. If an inventory-specific cache plugin is not provided and inventory caching is enabled, the fact cache plugin is used for inventory.
.. _enabling_cache:
Enabling Fact Cache Plugins
---------------------------
Only one fact cache plugin can be active at a time.
You can enable a cache plugin in the Ansible configuration, either via an environment variable:
.. code-block:: shell
export ANSIBLE_CACHE_PLUGIN=jsonfile
or in the ``ansible.cfg`` file:
.. code-block:: ini
[defaults]
fact_caching=redis
If the cache plugin is in a collection, use the fully qualified name:
.. code-block:: ini
[defaults]
fact_caching = namespace.collection_name.cache_plugin_name
You will also need to configure other settings specific to each plugin. Consult the individual plugin documentation
or the Ansible :ref:`configuration <ansible_configuration_settings>` for more details.
A custom cache plugin is enabled by dropping it into a ``cache_plugins`` directory adjacent to your play, inside a role, or by putting it in one of the directory sources configured in :ref:`ansible.cfg <ansible_configuration_settings>`.
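As a concrete illustration, a minimal custom cache plugin could look like the following sketch (the file name is arbitrary and the plugin is deliberately trivial; see the bundled ``memory`` plugin for a complete implementation):
.. code-block:: python
    # cache_plugins/my_cache.py - illustrative skeleton only
    from ansible.plugins.cache import BaseCacheModule
    class CacheModule(BaseCacheModule):
        def __init__(self, *args, **kwargs):
            self._cache = {}
        def get(self, key):
            return self._cache.get(key)
        def set(self, key, value):
            self._cache[key] = value
        def keys(self):
            return self._cache.keys()
        def contains(self, key):
            return key in self._cache
        def delete(self, key):
            del self._cache[key]
        def flush(self):
            self._cache = {}
        def copy(self):
            return self._cache.copy()
Once the file is on the configured plugin path, enable it with ``fact_caching = my_cache``.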
Enabling Inventory Cache Plugins
--------------------------------
Inventory may be cached using a file-based cache plugin (like jsonfile). Check the specific inventory plugin to see if it supports caching. Cache plugins inside a collection are not supported for caching inventory.
If an inventory-specific cache plugin is not specified, Ansible will fall back to caching inventory with the fact cache plugin options.
The inventory cache is disabled by default. You may enable it via an environment variable:
.. code-block:: shell
export ANSIBLE_INVENTORY_CACHE=True
or in the ``ansible.cfg`` file:
.. code-block:: ini
[inventory]
cache=True
or if the inventory plugin accepts a YAML configuration source, in the configuration file:
.. code-block:: yaml
# dev.aws_ec2.yaml
plugin: aws_ec2
cache: True
As with fact cache plugins, only one inventory cache plugin can be active at a time. It may be set via an environment variable:
.. code-block:: shell
export ANSIBLE_INVENTORY_CACHE_PLUGIN=jsonfile
or in the ansible.cfg file:
.. code-block:: ini
[inventory]
cache_plugin=jsonfile
or if the inventory plugin accepts a YAML configuration source, in the configuration file:
.. code-block:: yaml
# dev.aws_ec2.yaml
plugin: aws_ec2
cache_plugin: jsonfile
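Putting these together, a complete file-backed inventory cache configuration in ``ansible.cfg`` might look like this (the ``cache_connection`` path is illustrative):
.. code-block:: ini
    [inventory]
    cache = True
    cache_plugin = jsonfile
    cache_connection = /tmp/ansible_inventory_cache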
Consult the individual inventory plugin documentation or the Ansible :ref:`configuration <ansible_configuration_settings>` for more details.
.. _using_cache:
Using Cache Plugins
-------------------
Cache plugins are used automatically once they are enabled.
.. _cache_plugin_list:
Plugin List
-----------
You can use ``ansible-doc -t cache -l`` to see the list of available plugins.
Use ``ansible-doc -t cache <plugin name>`` to see specific documentation and examples.
.. toctree:: :maxdepth: 1
:glob:
cache/*
.. seealso::
:ref:`action_plugins`
Ansible Action plugins
:ref:`callback_plugins`
Ansible callback plugins
:ref:`connection_plugins`
Ansible connection plugins
:ref:`inventory_plugins`
Ansible inventory plugins
:ref:`shell_plugins`
Ansible Shell plugins
:ref:`strategy_plugins`
Ansible Strategy plugins
:ref:`vars_plugins`
Ansible Vars plugins
`User Mailing List <https://groups.google.com/forum/#!forum/ansible-devel>`_
Have a question? Stop by the google group!
`webchat.freenode.net <https://webchat.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,816 |
VMware: customize guest OS through vmware_guest module too long to timeout
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Customizing a Windows 10 guest OS with the wait_for_customization parameter set, but after many hours there is no return and the playbook is just stuck there.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /root/.local/lib/python2.7/site-packages/ansible
executable location = ./bin/ansible
python version = 2.7.15+ (default, Jul 9 2019, 16:51:35) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
vSphere 6.7U3, Windows 10 guest OS 32bit with BIOS firmware
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
If something goes wrong with the customization in the guest OS and no event is posted in VC, then wait_for_customization should return a timeout error within a reasonable time.
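Conceptually I would expect a bounded wait along these lines (purely illustrative, not the module's actual code; ``get_customization_events`` is a hypothetical helper):
```python
import time
def wait_for_customization(vm, timeout=3600, sleep=10):
    # Poll for guest customization events, but give up after
    # `timeout` seconds instead of blocking forever.
    deadline = time.time() + timeout
    while time.time() < deadline:
        events = get_customization_events(vm)  # hypothetical helper
        if events:
            return events
        time.sleep(sleep)
    return None  # the caller should raise a timeout error here
```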
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
No timeout error is returned after more than 10 hours; the task is just stuck.
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/62816
|
https://github.com/ansible/ansible/pull/64493
|
f1bf15bf63fce93c4bdfee709a44e48b525b6050
|
067e96b152d7bcf397143a0f4625cc4f7c89c3a7
| 2019-09-25T03:27:45Z |
python
| 2019-11-19T02:33:42Z |
lib/ansible/modules/cloud/vmware/vmware_guest.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# This module is also sponsored by E.T.A.I. (www.etai.fr)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: vmware_guest
short_description: Manages virtual machines in vCenter
description: >
This module can be used to create new virtual machines from templates or other virtual machines,
manage power state of virtual machine such as power on, power off, suspend, shutdown, reboot, restart etc.,
modify various virtual machine components like network, disk, customization etc.,
rename a virtual machine and remove a virtual machine with associated components.
version_added: '2.2'
author:
- Loic Blot (@nerzhul) <[email protected]>
- Philippe Dellaert (@pdellaert) <[email protected]>
- Abhijeet Kasurde (@Akasurde) <[email protected]>
requirements:
- python >= 2.6
- PyVmomi
notes:
- Please make sure that the user used for vmware_guest has the correct level of privileges.
- For example, following is the list of minimum privileges required by users to create virtual machines.
- " DataStore > Allocate Space"
- " Virtual Machine > Configuration > Add New Disk"
- " Virtual Machine > Configuration > Add or Remove Device"
- " Virtual Machine > Inventory > Create New"
- " Network > Assign Network"
- " Resource > Assign Virtual Machine to Resource Pool"
- "Module may require additional privileges as well, which may be required for gathering facts - e.g. ESXi configurations."
- Tested on vSphere 5.5, 6.0, 6.5 and 6.7
- Use SCSI disks instead of IDE when you want to expand online disks by specifying a SCSI controller
- "For additional information please visit Ansible VMware community wiki - U(https://github.com/ansible/community/wiki/VMware)."
options:
state:
description:
- Specify the state the virtual machine should be in.
- 'If C(state) is set to C(present) and virtual machine exists, ensure the virtual machine
configurations conforms to task arguments.'
- 'If C(state) is set to C(absent) and virtual machine exists, then the specified virtual machine
is removed with its associated components.'
- 'If C(state) is set to one of the following C(poweredon), C(poweredoff), C(present), C(restarted), C(suspended)
and virtual machine does not exist, then virtual machine is deployed with given parameters.'
- 'If C(state) is set to C(poweredon) and virtual machine exists with powerstate other than powered on,
then the specified virtual machine is powered on.'
- 'If C(state) is set to C(poweredoff) and virtual machine exists with powerstate other than powered off,
then the specified virtual machine is powered off.'
- 'If C(state) is set to C(restarted) and virtual machine exists, then the virtual machine is restarted.'
- 'If C(state) is set to C(suspended) and virtual machine exists, then the virtual machine is set to suspended mode.'
- 'If C(state) is set to C(shutdownguest) and virtual machine exists, then the virtual machine is shutdown.'
- 'If C(state) is set to C(rebootguest) and virtual machine exists, then the virtual machine is rebooted.'
default: present
choices: [ present, absent, poweredon, poweredoff, restarted, suspended, shutdownguest, rebootguest ]
name:
description:
- Name of the virtual machine to work with.
- Virtual machine names in vCenter are not necessarily unique, which may be problematic, see C(name_match).
- 'If multiple virtual machines with the same name exist, then C(folder) is a required parameter to
identify uniqueness of the virtual machine.'
- This parameter is required, if C(state) is set to C(poweredon), C(poweredoff), C(present), C(restarted), C(suspended)
and virtual machine does not exist.
- This parameter is case sensitive.
required: yes
name_match:
description:
- If multiple virtual machines match the name, use the first or last one found.
default: 'first'
choices: [ first, last ]
uuid:
description:
- UUID of the virtual machine to manage if known, this is VMware's unique identifier.
- This is required if C(name) is not supplied.
- If virtual machine does not exist, then this parameter is ignored.
- Please note that a supplied UUID will be ignored on virtual machine creation, as VMware creates the UUID internally.
use_instance_uuid:
description:
- Whether to use the VMware instance UUID rather than the BIOS UUID.
default: no
type: bool
version_added: '2.8'
template:
description:
- Template or existing virtual machine used to create new virtual machine.
- If this value is not set, virtual machine is created without using a template.
- If the virtual machine already exists, this parameter will be ignored.
- This parameter is case sensitive.
- You can also specify template or VM UUID for identifying source. version_added 2.8. Use C(hw_product_uuid) from M(vmware_guest_facts) as UUID value.
- From version 2.8 onwards, absolute path to virtual machine or template can be used.
aliases: [ 'template_src' ]
is_template:
description:
- Flag the instance as a template.
- This will mark the given virtual machine as template.
default: 'no'
type: bool
version_added: '2.3'
folder:
description:
- Destination folder, absolute path to find an existing guest or create the new guest.
- The folder should include the datacenter. ESX's datacenter is ha-datacenter.
- This parameter is case sensitive.
- This parameter is required while deploying a new virtual machine. version_added 2.5.
- 'If multiple machines are found with same name, this parameter is used to identify
uniqueness of the virtual machine. version_added 2.5'
- 'Examples:'
- ' folder: /ha-datacenter/vm'
- ' folder: ha-datacenter/vm'
- ' folder: /datacenter1/vm'
- ' folder: datacenter1/vm'
- ' folder: /datacenter1/vm/folder1'
- ' folder: datacenter1/vm/folder1'
- ' folder: /folder1/datacenter1/vm'
- ' folder: folder1/datacenter1/vm'
- ' folder: /folder1/datacenter1/vm/folder2'
hardware:
description:
- Manage virtual machine's hardware attributes.
- All parameters are case sensitive.
- 'Valid attributes are:'
- ' - C(hotadd_cpu) (boolean): Allow virtual CPUs to be added while the virtual machine is running.'
- ' - C(hotremove_cpu) (boolean): Allow virtual CPUs to be removed while the virtual machine is running.
version_added: 2.5'
- ' - C(hotadd_memory) (boolean): Allow memory to be added while the virtual machine is running.'
- ' - C(memory_mb) (integer): Amount of memory in MB.'
- ' - C(nested_virt) (bool): Enable nested virtualization. version_added: 2.5'
- ' - C(num_cpus) (integer): Number of CPUs.'
- ' - C(num_cpu_cores_per_socket) (integer): Number of Cores Per Socket.'
- " C(num_cpus) must be a multiple of C(num_cpu_cores_per_socket).
For example to create a VM with 2 sockets of 4 cores, specify C(num_cpus): 8 and C(num_cpu_cores_per_socket): 4"
- ' - C(scsi) (string): Valid values are C(buslogic), C(lsilogic), C(lsilogicsas) and C(paravirtual) (default).'
- " - C(memory_reservation_lock) (boolean): If set true, memory resource reservation for the virtual machine
will always be equal to the virtual machine's memory size. version_added: 2.5"
- ' - C(max_connections) (integer): Maximum number of active remote display connections for the virtual machines.
version_added: 2.5.'
- ' - C(mem_limit) (integer): The memory utilization of a virtual machine will not exceed this limit. Unit is MB.
version_added: 2.5'
- ' - C(mem_reservation) (integer): The amount of memory resource that is guaranteed available to the virtual
machine. Unit is MB. C(memory_reservation) is alias to this. version_added: 2.5'
- ' - C(cpu_limit) (integer): The CPU utilization of a virtual machine will not exceed this limit. Unit is MHz.
version_added: 2.5'
- ' - C(cpu_reservation) (integer): The amount of CPU resource that is guaranteed available to the virtual machine.
Unit is MHz. version_added: 2.5'
- ' - C(version) (integer): The virtual machine hardware version. Default is 10 (ESXi 5.5 and onwards).
If value specified as C(latest), version is set to the most current virtual hardware supported on the host.
C(latest) is added in version 2.10.
Please check VMware documentation for correct virtual machine hardware version.
Incorrect hardware version may lead to failure in deployment. If hardware version is already equal to the given
version then no action is taken. version_added: 2.6'
- ' - C(boot_firmware) (string): Choose which firmware should be used to boot the virtual machine.
Allowed values are "bios" and "efi". version_added: 2.7'
- ' - C(virt_based_security) (bool): Enable Virtualization Based Security feature for Windows 10.
(Support from Virtual machine hardware version 14, Guest OS Windows 10 64 bit, Windows Server 2016)'
guest_id:
description:
- Set the guest ID.
- This parameter is case sensitive.
- 'Examples:'
- " virtual machine with RHEL7 64 bit, will be 'rhel7_64Guest'"
- " virtual machine with CentOS 64 bit, will be 'centos64Guest'"
- " virtual machine with Ubuntu 64 bit, will be 'ubuntu64Guest'"
- This field is required when creating a virtual machine, not required when creating from the template.
- >
Valid values are referenced here:
U(https://code.vmware.com/apis/358/doc/vim.vm.GuestOsDescriptor.GuestOsIdentifier.html)
version_added: '2.3'
disk:
description:
- A list of disks to add.
- This parameter is case sensitive.
- Shrinking disks is not supported.
- Removing existing disks of the virtual machine is not supported.
- 'Valid attributes are:'
- ' - C(size_[tb,gb,mb,kb]) (integer): Disk storage size in specified unit.'
- ' - C(type) (string): Valid values are:'
- ' - C(thin) thin disk'
- ' - C(eagerzeroedthick) eagerzeroedthick disk, added in version 2.5'
- ' Default: C(None) thick disk, no eagerzero.'
- ' - C(datastore) (string): The name of the datastore which will be used for the disk. If C(autoselect_datastore) is set to True,
then the least used datastore whose name contains this "disk.datastore" string will be selected.'
- ' - C(filename) (string): Existing disk image to be used. Filename must already exist on the datastore.'
- ' Specify filename string in C([datastore_name] path/to/file.vmdk) format. Added in version 2.8.'
- ' - C(autoselect_datastore) (bool): Select the least used datastore. "disk.datastore" and "disk.autoselect_datastore"
will not be used if C(datastore) is specified outside this C(disk) configuration.'
- ' - C(disk_mode) (string): Type of disk mode. Added in version 2.6'
- ' - Available options are :'
- ' - C(persistent): Changes are immediately and permanently written to the virtual disk. This is default.'
- ' - C(independent_persistent): Same as persistent, but not affected by snapshots.'
- ' - C(independent_nonpersistent): Changes to virtual disk are made to a redo log and discarded at power off, but not affected by snapshots.'
cdrom:
description:
- A CD-ROM configuration for the virtual machine.
- Or a list of CD-ROM configurations for the virtual machine. Added in version 2.9.
- 'Parameters C(controller_type), C(controller_number), C(unit_number) and C(state) were added to support a list of
CD-ROM configurations.'
- 'Valid attributes are:'
- ' - C(type) (string): The type of CD-ROM, valid options are C(none), C(client) or C(iso). With C(none) the CD-ROM
will be disconnected but present.'
- ' - C(iso_path) (string): The datastore path to the ISO file to use, in the form of C([datastore1] path/to/file.iso).
Required if type is set C(iso).'
- ' - C(controller_type) (string): Default value is C(ide). Only C(ide) controller type for CD-ROM is supported for
now, will add SATA controller type in the future.'
- ' - C(controller_number) (int): For C(ide) controller, valid value is 0 or 1.'
- ' - C(unit_number) (int): For CD-ROM device attach to C(ide) controller, valid value is 0 or 1.
C(controller_number) and C(unit_number) are mandatory attributes.'
- ' - C(state) (string): Valid value is C(present) or C(absent). Default is C(present). If set to C(absent), then
the specified CD-ROM will be removed. For C(ide) controller, hot-add or hot-remove CD-ROM is not supported.'
version_added: '2.5'
resource_pool:
description:
- Use the given resource pool for virtual machine operation.
- This parameter is case sensitive.
- Resource pool should be child of the selected host parent.
version_added: '2.3'
wait_for_ip_address:
description:
- Wait until vCenter detects an IP address for the virtual machine.
- This requires vmware-tools (vmtoolsd) to properly work after creation.
- "vmware-tools needs to be installed on the given virtual machine in order to work with this parameter."
default: 'no'
type: bool
wait_for_ip_address_timeout:
description:
- Define a timeout (in seconds) for the wait_for_ip_address parameter.
default: '300'
type: int
version_added: '2.10'
wait_for_customization:
description:
- Wait until vCenter detects all guest customizations as successfully completed.
- When enabled, the VM will automatically be powered on.
default: 'no'
type: bool
version_added: '2.8'
state_change_timeout:
description:
- If the C(state) is set to C(shutdownguest), by default the module will return immediately after sending the shutdown signal.
- If this argument is set to a positive integer, the module will instead wait for the virtual machine to reach the poweredoff state.
- The value sets a timeout in seconds for the module to wait for the state change.
default: 0
version_added: '2.6'
snapshot_src:
description:
- Name of the existing snapshot to use to create a clone of a virtual machine.
- This parameter is case sensitive.
- While creating linked clone using C(linked_clone) parameter, this parameter is required.
version_added: '2.4'
linked_clone:
description:
- Whether to create a linked clone from the snapshot specified.
- If specified, then C(snapshot_src) is required parameter.
default: 'no'
type: bool
version_added: '2.4'
force:
description:
- Ignore warnings and complete the actions.
- This parameter is useful while removing a virtual machine which is in powered-on state.
- 'This module reflects the VMware vCenter API and UI workflow, as such, in some cases the `force` flag will
be mandatory to perform the action to ensure you are certain the action has to be taken, no matter what the consequence.
This is specifically the case for removing a powered-on virtual machine when C(state) is set to C(absent).'
default: 'no'
type: bool
delete_from_inventory:
description:
- Whether to remove the virtual machine from the inventory only, instead of deleting it from disk.
default: False
type: bool
version_added: '2.10'
datacenter:
description:
- Destination datacenter for the deploy operation.
- This parameter is case sensitive.
default: ha-datacenter
cluster:
description:
- The cluster name where the virtual machine will run.
- This is a required parameter, if C(esxi_hostname) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
- This parameter is case sensitive.
version_added: '2.3'
esxi_hostname:
description:
- The ESXi hostname where the virtual machine will run.
- This is a required parameter, if C(cluster) is not set.
- C(esxi_hostname) and C(cluster) are mutually exclusive parameters.
- This parameter is case sensitive.
annotation:
description:
- A note or annotation to include in the virtual machine.
version_added: '2.3'
customvalues:
description:
- Define a list of custom values to set on virtual machine.
- A custom value object takes two fields C(key) and C(value).
- Incorrect key and values will be ignored.
version_added: '2.3'
networks:
description:
- A list of networks (in the order of the NICs).
- Removing NICs is not allowed, while reconfiguring the virtual machine.
- All parameters and VMware object names are case sensitive.
- 'One of the below parameters is required per entry:'
- ' - C(name) (string): Name of the portgroup or distributed virtual portgroup for this interface.
When specifying distributed virtual portgroup make sure given C(esxi_hostname) or C(cluster) is associated with it.'
- ' - C(vlan) (integer): VLAN number for this interface.'
- 'Optional parameters per entry (used for virtual hardware):'
- ' - C(device_type) (string): Virtual network device (one of C(e1000), C(e1000e), C(pcnet32), C(vmxnet2), C(vmxnet3) (default), C(sriov)).'
- ' - C(mac) (string): Customize MAC address.'
- ' - C(dvswitch_name) (string): Name of the distributed vSwitch.
This value is required if multiple distributed portgroups exists with the same name. version_added 2.7'
- ' - C(start_connected) (bool): Indicates that virtual network adapter starts with associated virtual machine powers on. version_added: 2.5'
- 'Optional parameters per entry (used for OS customization):'
- ' - C(type) (string): Type of IP assignment (either C(dhcp) or C(static)). C(dhcp) is default.'
- ' - C(ip) (string): Static IP address (implies C(type: static)).'
- ' - C(netmask) (string): Static netmask required for C(ip).'
- ' - C(gateway) (string): Static gateway.'
- ' - C(dns_servers) (string): DNS servers for this network interface (Windows).'
- ' - C(domain) (string): Domain name for this network interface (Windows).'
- ' - C(wake_on_lan) (bool): Indicates if wake-on-LAN is enabled on this virtual network adapter. version_added: 2.5'
- ' - C(allow_guest_control) (bool): Enables guest control over whether the connectable device is connected. version_added: 2.5'
version_added: '2.3'
customization:
description:
- Parameters for OS customization when cloning from the template or the virtual machine, or apply to the existing virtual machine directly.
- Not all operating systems are supported for customization with respective vCenter version,
please check VMware documentation for respective OS customization.
- For supported customization operating system matrix, (see U(http://partnerweb.vmware.com/programs/guestOS/guest-os-customization-matrix.pdf))
- All parameters and VMware object names are case sensitive.
- Linux based OSes requires Perl package to be installed for OS customizations.
- 'Common parameters (Linux/Windows):'
- ' - C(existing_vm) (bool): If set to C(True), do OS customization on the specified virtual machine directly.
If set to C(False) or not specified, do OS customization when cloning from the template or the virtual machine. version_added: 2.8'
- ' - C(dns_servers) (list): List of DNS servers to configure.'
- ' - C(dns_suffix) (list): List of domain suffixes, also known as DNS search path (default: C(domain) parameter).'
- ' - C(domain) (string): DNS domain name to use.'
- ' - C(hostname) (string): Computer hostname (default: shortened C(name) parameter). Allowed characters are alphanumeric (uppercase and lowercase)
and minus, rest of the characters are dropped as per RFC 952.'
- 'Parameters related to Linux customization:'
- ' - C(timezone) (string): Timezone (See List of supported time zones for different vSphere versions in Linux/Unix
systems (2145518) U(https://kb.vmware.com/s/article/2145518)). version_added: 2.9'
- ' - C(hwclockUTC) (bool): Specifies whether the hardware clock is in UTC or local time.
True when the hardware clock is in UTC, False when the hardware clock is in local time. version_added: 2.9'
- 'Parameters related to Windows customization:'
- ' - C(autologon) (bool): Auto logon after virtual machine customization (default: False).'
- ' - C(autologoncount) (int): Number of autologon after reboot (default: 1).'
- ' - C(domainadmin) (string): User used to join in AD domain (mandatory with C(joindomain)).'
- ' - C(domainadminpassword) (string): Password used to join in AD domain (mandatory with C(joindomain)).'
- ' - C(fullname) (string): Server owner name (default: Administrator).'
- ' - C(joindomain) (string): AD domain to join (Not compatible with C(joinworkgroup)).'
- ' - C(joinworkgroup) (string): Workgroup to join (Not compatible with C(joindomain), default: WORKGROUP).'
- ' - C(orgname) (string): Organisation name (default: ACME).'
- ' - C(password) (string): Local administrator password.'
- ' - C(productid) (string): Product ID.'
- ' - C(runonce) (list): List of commands to run at first user logon.'
- ' - C(timezone) (int): Timezone (See U(https://msdn.microsoft.com/en-us/library/ms912391.aspx)).'
version_added: '2.3'
vapp_properties:
description:
- A list of vApp properties.
- 'For full list of attributes and types refer to: U(https://github.com/vmware/pyvmomi/blob/master/docs/vim/vApp/PropertyInfo.rst)'
- 'Basic attributes are:'
- ' - C(id) (string): Property id - required.'
- ' - C(value) (string): Property value.'
- ' - C(type) (string): Value type, string type by default.'
- ' - C(operation): C(remove): This attribute is required only when removing properties.'
version_added: '2.6'
customization_spec:
description:
- Unique name identifying the requested customization specification.
- This parameter is case sensitive.
- If set, then overrides C(customization) parameter values.
version_added: '2.6'
datastore:
description:
- Specify datastore or datastore cluster to provision virtual machine.
- 'This parameter takes precedence over "disk.datastore" parameter.'
- 'This parameter can be used to override datastore or datastore cluster setting of the virtual machine when deployed
from the template.'
- Please see example for more usage.
version_added: '2.7'
convert:
description:
- Specify convert disk type while cloning template or virtual machine.
choices: [ thin, thick, eagerzeroedthick ]
version_added: '2.8'
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- name: Create a virtual machine on given ESXi hostname
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
folder: /DC1/vm/
name: test_vm_0001
state: poweredon
guest_id: centos64Guest
# This is hostname of particular ESXi server on which user wants VM to be deployed
esxi_hostname: "{{ esxi_hostname }}"
disk:
- size_gb: 10
type: thin
datastore: datastore1
hardware:
memory_mb: 512
num_cpus: 4
scsi: paravirtual
networks:
- name: VM Network
mac: aa:bb:dd:aa:00:14
ip: 10.10.10.100
netmask: 255.255.255.0
device_type: vmxnet3
wait_for_ip_address: yes
wait_for_ip_address_timeout: 600
delegate_to: localhost
register: deploy_vm
- name: Create a virtual machine from a template
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
folder: /testvms
name: testvm_2
state: poweredon
template: template_el7
disk:
- size_gb: 10
type: thin
datastore: g73_datastore
hardware:
memory_mb: 512
num_cpus: 6
num_cpu_cores_per_socket: 3
scsi: paravirtual
memory_reservation_lock: True
mem_limit: 8096
mem_reservation: 4096
cpu_limit: 8096
cpu_reservation: 4096
max_connections: 5
hotadd_cpu: True
hotremove_cpu: True
hotadd_memory: False
version: 12 # Hardware version of virtual machine
boot_firmware: "efi"
cdrom:
type: iso
iso_path: "[datastore1] livecd.iso"
networks:
- name: VM Network
mac: aa:bb:dd:aa:00:14
wait_for_ip_address: yes
delegate_to: localhost
register: deploy
- name: Clone a virtual machine from Windows template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: datacenter1
cluster: cluster
name: testvm-2
template: template_windows
networks:
- name: VM Network
ip: 192.168.1.100
netmask: 255.255.255.0
gateway: 192.168.1.1
mac: aa:bb:dd:aa:00:14
domain: my_domain
dns_servers:
- 192.168.1.1
- 192.168.1.2
- vlan: 1234
type: dhcp
customization:
autologon: yes
dns_servers:
- 192.168.1.1
- 192.168.1.2
domain: my_domain
password: new_vm_password
runonce:
- powershell.exe -ExecutionPolicy Unrestricted -File C:\Windows\Temp\ConfigureRemotingForAnsible.ps1 -ForceNewSSLCert -EnableCredSSP
delegate_to: localhost
- name: Clone a virtual machine from Linux template and customize
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ datacenter }}"
state: present
folder: /DC1/vm
template: "{{ template }}"
name: "{{ vm_name }}"
cluster: DC1_C1
networks:
- name: VM Network
ip: 192.168.10.11
netmask: 255.255.255.0
wait_for_ip_address: True
customization:
domain: "{{ guest_domain }}"
dns_servers:
- 8.9.9.9
- 7.8.8.9
dns_suffix:
- example.com
- example2.com
delegate_to: localhost
- name: Rename a virtual machine (requires the virtual machine's uuid)
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_uuid }}"
name: new_name
state: present
delegate_to: localhost
- name: Remove a virtual machine by uuid
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_uuid }}"
state: absent
delegate_to: localhost
- name: Remove a virtual machine from inventory
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
name: vm_name
delete_from_inventory: True
state: absent
delegate_to: localhost
- name: Manipulate vApp properties
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
name: vm_name
state: present
vapp_properties:
- id: remoteIP
category: Backup
label: Backup server IP
type: str
value: 10.10.10.1
- id: old_property
operation: remove
delegate_to: localhost
- name: Set powerstate of a virtual machine to poweroff by using UUID
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_uuid }}"
state: poweredoff
delegate_to: localhost
- name: Deploy a virtual machine in a datastore different from the datastore of the template
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: "{{ vm_name }}"
state: present
template: "{{ template_name }}"
# Here datastore can be different which holds template
datastore: "{{ virtual_machine_datastore }}"
hardware:
memory_mb: 512
num_cpus: 2
scsi: paravirtual
delegate_to: localhost
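# The following task is an illustrative sketch (placeholder VM name and
# datastore path) of the list form of the cdrom parameter added in version
# 2.9; only the 'ide' controller type is supported at the time of writing.
- name: Attach an ISO and remove a second CD-ROM using the cdrom list form
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    name: vm_name
    state: present
    cdrom:
      - controller_type: ide
        controller_number: 0
        unit_number: 0
        type: iso
        iso_path: "[datastore1] livecd.iso"
        state: present
      - controller_type: ide
        controller_number: 0
        unit_number: 1
        state: absent
  delegate_to: localhost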
'''
RETURN = r'''
instance:
description: metadata about the new virtual machine
returned: always
type: dict
sample: None
'''
import re
import time
import string
HAS_PYVMOMI = False
try:
from pyVmomi import vim, vmodl, VmomiSupport
HAS_PYVMOMI = True
except ImportError:
pass
from random import randint
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.network import is_mac
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.vmware import (find_obj, gather_vm_facts, get_all_objs,
compile_folder_path_for_object, serialize_spec,
vmware_argument_spec, set_vm_power_state, PyVmomi,
find_dvs_by_name, find_dvspg_by_name, wait_for_vm_ip,
wait_for_task, TaskError, quote_obj_name)
def list_or_dict(value):
if isinstance(value, (list, dict)):
    return value
else:
    raise ValueError("'%s' is not valid, valid type is 'list' or 'dict'." % value)
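# Usage sketch (illustrative): list_or_dict serves as a custom 'type'
# callable in the module's argument spec, so both of these validate:
#   list_or_dict([{'type': 'iso'}])  ->  returns the list unchanged
#   list_or_dict({'type': 'iso'})    ->  returns the dict unchanged
# while a plain string such as list_or_dict('iso') raises ValueError.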
class PyVmomiDeviceHelper(object):
""" This class is a helper to create easily VMware Objects for PyVmomiHelper """
def __init__(self, module):
self.module = module
self.next_disk_unit_number = 0
self.scsi_device_type = {
'lsilogic': vim.vm.device.VirtualLsiLogicController,
'paravirtual': vim.vm.device.ParaVirtualSCSIController,
'buslogic': vim.vm.device.VirtualBusLogicController,
'lsilogicsas': vim.vm.device.VirtualLsiLogicSASController,
}
def create_scsi_controller(self, scsi_type):
scsi_ctl = vim.vm.device.VirtualDeviceSpec()
scsi_ctl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
scsi_device = self.scsi_device_type.get(scsi_type, vim.vm.device.ParaVirtualSCSIController)
scsi_ctl.device = scsi_device()
scsi_ctl.device.busNumber = 0
# While creating a new SCSI controller, temporary key value
# should be unique negative integers
scsi_ctl.device.key = -randint(1000, 9999)
scsi_ctl.device.hotAddRemove = True
scsi_ctl.device.sharedBus = 'noSharing'
scsi_ctl.device.scsiCtlrUnitNumber = 7
return scsi_ctl
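    # Illustrative sketch (hypothetical variable names): a controller spec
    # produced here is appended to a config spec before reconfiguring the VM:
    #   scsi_ctl = device_helper.create_scsi_controller('paravirtual')
    #   configspec.deviceChange.append(scsi_ctl)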
def is_scsi_controller(self, device):
return isinstance(device, tuple(self.scsi_device_type.values()))
@staticmethod
def create_ide_controller(bus_number=0):
ide_ctl = vim.vm.device.VirtualDeviceSpec()
ide_ctl.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
ide_ctl.device = vim.vm.device.VirtualIDEController()
ide_ctl.device.deviceInfo = vim.Description()
# While creating a new IDE controller, temporary key value
# should be unique negative integers
ide_ctl.device.key = -randint(200, 299)
ide_ctl.device.busNumber = bus_number
return ide_ctl
@staticmethod
def create_cdrom(ide_device, cdrom_type, iso_path=None, unit_number=0):
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
cdrom_spec.device = vim.vm.device.VirtualCdrom()
cdrom_spec.device.controllerKey = ide_device.key
cdrom_spec.device.key = -randint(3000, 3999)
cdrom_spec.device.unitNumber = unit_number
cdrom_spec.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
cdrom_spec.device.connectable.allowGuestControl = True
cdrom_spec.device.connectable.startConnected = (cdrom_type != "none")
if cdrom_type in ["none", "client"]:
cdrom_spec.device.backing = vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo()
elif cdrom_type == "iso":
cdrom_spec.device.backing = vim.vm.device.VirtualCdrom.IsoBackingInfo(fileName=iso_path)
return cdrom_spec
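    # Illustrative sketch (hypothetical names): create an ISO-backed CD-ROM
    # attached to an existing IDE controller device and queue it for adding:
    #   cdrom_spec = PyVmomiDeviceHelper.create_cdrom(
    #       ide_device, 'iso', iso_path='[datastore1] livecd.iso', unit_number=0)
    #   configspec.deviceChange.append(cdrom_spec)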
@staticmethod
def is_equal_cdrom(vm_obj, cdrom_device, cdrom_type, iso_path):
if cdrom_type == "none":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo) and
cdrom_device.connectable.allowGuestControl and
not cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or not cdrom_device.connectable.connected))
elif cdrom_type == "client":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo) and
cdrom_device.connectable.allowGuestControl and
cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or cdrom_device.connectable.connected))
elif cdrom_type == "iso":
return (isinstance(cdrom_device.backing, vim.vm.device.VirtualCdrom.IsoBackingInfo) and
cdrom_device.backing.fileName == iso_path and
cdrom_device.connectable.allowGuestControl and
cdrom_device.connectable.startConnected and
(vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOn or cdrom_device.connectable.connected))
@staticmethod
def update_cdrom_config(vm_obj, cdrom_spec, cdrom_device, iso_path=None):
# Updating an existing CD-ROM
if cdrom_spec["type"] in ["client", "none"]:
cdrom_device.backing = vim.vm.device.VirtualCdrom.RemotePassthroughBackingInfo()
elif cdrom_spec["type"] == "iso" and iso_path is not None:
cdrom_device.backing = vim.vm.device.VirtualCdrom.IsoBackingInfo(fileName=iso_path)
cdrom_device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
cdrom_device.connectable.allowGuestControl = True
cdrom_device.connectable.startConnected = (cdrom_spec["type"] != "none")
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
cdrom_device.connectable.connected = (cdrom_spec["type"] != "none")
def remove_cdrom(self, cdrom_device):
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.remove
cdrom_spec.device = cdrom_device
return cdrom_spec
def create_scsi_disk(self, scsi_ctl, disk_index=None):
diskspec = vim.vm.device.VirtualDeviceSpec()
diskspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
diskspec.device = vim.vm.device.VirtualDisk()
diskspec.device.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
diskspec.device.controllerKey = scsi_ctl.device.key
if self.next_disk_unit_number == 7:
raise AssertionError()
if disk_index == 7:
raise AssertionError()
"""
Configure disk unit number.
"""
if disk_index is not None:
diskspec.device.unitNumber = disk_index
self.next_disk_unit_number = disk_index + 1
else:
diskspec.device.unitNumber = self.next_disk_unit_number
self.next_disk_unit_number += 1
# unit number 7 is reserved for the SCSI controller, skip to the next index
if self.next_disk_unit_number == 7:
self.next_disk_unit_number += 1
return diskspec
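    # Note on unit numbering: unit number 7 is reserved for the SCSI
    # controller itself, so successive calls hand out 0, 1, ..., 6, 8, 9, ...
    # Passing disk_index=7, or entering with next_disk_unit_number already
    # at 7, raises AssertionError (checked above).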
def get_device(self, device_type, name):
nic_dict = dict(pcnet32=vim.vm.device.VirtualPCNet32(),
vmxnet2=vim.vm.device.VirtualVmxnet2(),
vmxnet3=vim.vm.device.VirtualVmxnet3(),
e1000=vim.vm.device.VirtualE1000(),
e1000e=vim.vm.device.VirtualE1000e(),
sriov=vim.vm.device.VirtualSriovEthernetCard(),
)
if device_type in nic_dict:
return nic_dict[device_type]
else:
self.module.fail_json(msg='Invalid device_type "%s"'
' for network "%s"' % (device_type, name))
def create_nic(self, device_type, device_label, device_infos):
nic = vim.vm.device.VirtualDeviceSpec()
nic.device = self.get_device(device_type, device_infos['name'])
nic.device.wakeOnLanEnabled = bool(device_infos.get('wake_on_lan', True))
nic.device.deviceInfo = vim.Description()
nic.device.deviceInfo.label = device_label
nic.device.deviceInfo.summary = device_infos['name']
nic.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
nic.device.connectable.startConnected = bool(device_infos.get('start_connected', True))
nic.device.connectable.allowGuestControl = bool(device_infos.get('allow_guest_control', True))
nic.device.connectable.connected = True
if 'mac' in device_infos and is_mac(device_infos['mac']):
nic.device.addressType = 'manual'
nic.device.macAddress = device_infos['mac']
else:
nic.device.addressType = 'generated'
return nic
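    # Illustrative sketch (hypothetical values): build a vmxnet3 NIC with a
    # manually assigned MAC; the caller sets the add/edit operation itself:
    #   nic = device_helper.create_nic('vmxnet3', 'Network adapter 1',
    #                                  {'name': 'VM Network', 'mac': 'aa:bb:cc:dd:ee:ff'})
    #   nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.add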
def integer_value(self, input_value, name):
"""
Return an int value for the given input, else fail with an error.
Args:
    input_value: Input value to retrieve the int value from
    name: Name of the input value (used to build the error message)
Returns: (int) if an integer value can be obtained, otherwise fails with an error message.
"""
if isinstance(input_value, int):
return input_value
elif isinstance(input_value, str) and input_value.isdigit():
return int(input_value)
else:
self.module.fail_json(msg='"%s" attribute should be an'
' integer value.' % name)
class PyVmomiCache(object):
""" This class caches references to objects which are requested multiples times but not modified """
def __init__(self, content, dc_name=None):
self.content = content
self.dc_name = dc_name
self.networks = {}
self.clusters = {}
self.esx_hosts = {}
self.parent_datacenters = {}
def find_obj(self, content, types, name, confine_to_datacenter=True):
""" Wrapper around find_obj to set datacenter context """
result = find_obj(content, types, name)
if result and confine_to_datacenter:
if to_text(self.get_parent_datacenter(result).name) != to_text(self.dc_name):
result = None
objects = self.get_all_objs(content, types, confine_to_datacenter=True)
for obj in objects:
if name is None or to_text(obj.name) == to_text(name):
return obj
return result
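    # Note: if the quick find_obj() lookup resolves to an object in a
    # different datacenter, the fallback above scans all objects of the
    # requested types and returns the first name match confined to this
    # cache's datacenter.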
def get_all_objs(self, content, types, confine_to_datacenter=True):
""" Wrapper around get_all_objs to set datacenter context """
objects = get_all_objs(content, types)
if confine_to_datacenter:
if hasattr(objects, 'items'):
# resource pools come back as a dictionary
# make a copy
for k, v in tuple(objects.items()):
parent_dc = self.get_parent_datacenter(k)
if parent_dc.name != self.dc_name:
del objects[k]
else:
# everything else should be a list
objects = [x for x in objects if self.get_parent_datacenter(x).name == self.dc_name]
return objects
def get_network(self, network):
network = quote_obj_name(network)
if network not in self.networks:
self.networks[network] = self.find_obj(self.content, [vim.Network], network)
return self.networks[network]
def get_cluster(self, cluster):
if cluster not in self.clusters:
self.clusters[cluster] = self.find_obj(self.content, [vim.ClusterComputeResource], cluster)
return self.clusters[cluster]
def get_esx_host(self, host):
if host not in self.esx_hosts:
self.esx_hosts[host] = self.find_obj(self.content, [vim.HostSystem], host)
return self.esx_hosts[host]
def get_parent_datacenter(self, obj):
""" Walk the parent tree to find the objects datacenter """
if isinstance(obj, vim.Datacenter):
return obj
if obj in self.parent_datacenters:
return self.parent_datacenters[obj]
datacenter = None
while True:
if not hasattr(obj, 'parent'):
break
obj = obj.parent
if isinstance(obj, vim.Datacenter):
datacenter = obj
break
self.parent_datacenters[obj] = datacenter
return datacenter
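    # Illustrative sketch: for a VM nested as
    #   Datacenter "DC1" -> folder "vm" -> folder "prod" -> vm_obj
    # get_parent_datacenter(vm_obj) walks .parent upwards and returns the
    # "DC1" datacenter object, caching the result per input object.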
class PyVmomiHelper(PyVmomi):
def __init__(self, module):
super(PyVmomiHelper, self).__init__(module)
self.device_helper = PyVmomiDeviceHelper(self.module)
self.configspec = None
self.relospec = None
self.change_detected = False # a change was detected and needs to be applied through reconfiguration
self.change_applied = False # a change was applied meaning at least one task succeeded
self.customspec = None
self.cache = PyVmomiCache(self.content, dc_name=self.params['datacenter'])
def gather_facts(self, vm):
return gather_vm_facts(self.content, vm)
def remove_vm(self, vm, delete_from_inventory=False):
# https://www.vmware.com/support/developer/converter-sdk/conv60_apireference/vim.ManagedEntity.html#destroy
if vm.summary.runtime.powerState.lower() == 'poweredon':
self.module.fail_json(msg="Virtual machine %s found in 'powered on' state, "
"please use 'force' parameter to remove or poweroff VM "
"and try removing VM again." % vm.name)
# Delete VM from Inventory
if delete_from_inventory:
try:
vm.UnregisterVM()
except (vim.fault.TaskInProgress,
vmodl.RuntimeFault) as e:
return {'changed': self.change_applied, 'failed': True, 'msg': e.msg, 'op': 'UnregisterVM'}
self.change_applied = True
return {'changed': self.change_applied, 'failed': False}
# Delete VM from Disk
task = vm.Destroy()
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'destroy'}
else:
return {'changed': self.change_applied, 'failed': False}
def configure_guestid(self, vm_obj, vm_creation=False):
# guest_id is not required when using templates
if self.params['template']:
return
# guest_id is only mandatory on VM creation
if vm_creation and self.params['guest_id'] is None:
self.module.fail_json(msg="guest_id attribute is mandatory for VM creation")
if self.params['guest_id'] and \
(vm_obj is None or self.params['guest_id'].lower() != vm_obj.summary.config.guestId.lower()):
self.change_detected = True
self.configspec.guestId = self.params['guest_id']
def configure_resource_alloc_info(self, vm_obj):
"""
Function to configure resource allocation information about virtual machine
:param vm_obj: VM object in case of reconfigure, None in case of deploy
:return: None
"""
rai_change_detected = False
memory_allocation = vim.ResourceAllocationInfo()
cpu_allocation = vim.ResourceAllocationInfo()
if 'hardware' in self.params:
if 'mem_limit' in self.params['hardware']:
mem_limit = None
try:
mem_limit = int(self.params['hardware'].get('mem_limit'))
except ValueError:
self.module.fail_json(msg="hardware.mem_limit attribute should be an integer value.")
memory_allocation.limit = mem_limit
if vm_obj is None or memory_allocation.limit != vm_obj.config.memoryAllocation.limit:
rai_change_detected = True
if 'mem_reservation' in self.params['hardware'] or 'memory_reservation' in self.params['hardware']:
mem_reservation = self.params['hardware'].get('mem_reservation')
if mem_reservation is None:
mem_reservation = self.params['hardware'].get('memory_reservation')
try:
mem_reservation = int(mem_reservation)
except ValueError:
self.module.fail_json(msg="hardware.mem_reservation or hardware.memory_reservation should be an integer value.")
memory_allocation.reservation = mem_reservation
if vm_obj is None or \
memory_allocation.reservation != vm_obj.config.memoryAllocation.reservation:
rai_change_detected = True
if 'cpu_limit' in self.params['hardware']:
cpu_limit = None
try:
cpu_limit = int(self.params['hardware'].get('cpu_limit'))
except ValueError:
self.module.fail_json(msg="hardware.cpu_limit attribute should be an integer value.")
cpu_allocation.limit = cpu_limit
if vm_obj is None or cpu_allocation.limit != vm_obj.config.cpuAllocation.limit:
rai_change_detected = True
if 'cpu_reservation' in self.params['hardware']:
cpu_reservation = None
try:
cpu_reservation = int(self.params['hardware'].get('cpu_reservation'))
except ValueError:
self.module.fail_json(msg="hardware.cpu_reservation should be an integer value.")
cpu_allocation.reservation = cpu_reservation
if vm_obj is None or \
cpu_allocation.reservation != vm_obj.config.cpuAllocation.reservation:
rai_change_detected = True
if rai_change_detected:
self.configspec.memoryAllocation = memory_allocation
self.configspec.cpuAllocation = cpu_allocation
self.change_detected = True
def configure_cpu_and_memory(self, vm_obj, vm_creation=False):
# set cpu/memory/etc
if 'hardware' in self.params:
if 'num_cpus' in self.params['hardware']:
try:
num_cpus = int(self.params['hardware']['num_cpus'])
except ValueError:
self.module.fail_json(msg="hardware.num_cpus attribute should be an integer value.")
# check VM power state and cpu hot-add/hot-remove state before re-config VM
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
if not vm_obj.config.cpuHotRemoveEnabled and num_cpus < vm_obj.config.hardware.numCPU:
self.module.fail_json(msg="Configured cpu number is less than the cpu number of the VM, "
"cpuHotRemove is not enabled")
if not vm_obj.config.cpuHotAddEnabled and num_cpus > vm_obj.config.hardware.numCPU:
self.module.fail_json(msg="Configured cpu number is more than the cpu number of the VM, "
"cpuHotAdd is not enabled")
if 'num_cpu_cores_per_socket' in self.params['hardware']:
try:
num_cpu_cores_per_socket = int(self.params['hardware']['num_cpu_cores_per_socket'])
except ValueError:
self.module.fail_json(msg="hardware.num_cpu_cores_per_socket attribute "
"should be an integer value.")
if num_cpus % num_cpu_cores_per_socket != 0:
self.module.fail_json(msg="hardware.num_cpus attribute should be a multiple "
"of hardware.num_cpu_cores_per_socket")
self.configspec.numCoresPerSocket = num_cpu_cores_per_socket
if vm_obj is None or self.configspec.numCoresPerSocket != vm_obj.config.hardware.numCoresPerSocket:
self.change_detected = True
self.configspec.numCPUs = num_cpus
if vm_obj is None or self.configspec.numCPUs != vm_obj.config.hardware.numCPU:
self.change_detected = True
# num_cpu is mandatory for VM creation
elif vm_creation and not self.params['template']:
self.module.fail_json(msg="hardware.num_cpus attribute is mandatory for VM creation")
if 'memory_mb' in self.params['hardware']:
try:
memory_mb = int(self.params['hardware']['memory_mb'])
except ValueError:
self.module.fail_json(msg="Failed to parse hardware.memory_mb value."
" Please refer the documentation and provide"
" correct value.")
# check VM power state and memory hotadd state before re-config VM
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
if vm_obj.config.memoryHotAddEnabled and memory_mb < vm_obj.config.hardware.memoryMB:
self.module.fail_json(msg="Configured memory is less than memory size of the VM, "
"operation is not supported")
elif not vm_obj.config.memoryHotAddEnabled and memory_mb != vm_obj.config.hardware.memoryMB:
self.module.fail_json(msg="memoryHotAdd is not enabled")
self.configspec.memoryMB = memory_mb
if vm_obj is None or self.configspec.memoryMB != vm_obj.config.hardware.memoryMB:
self.change_detected = True
# memory_mb is mandatory for VM creation
elif vm_creation and not self.params['template']:
self.module.fail_json(msg="hardware.memory_mb attribute is mandatory for VM creation")
if 'hotadd_memory' in self.params['hardware']:
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
vm_obj.config.memoryHotAddEnabled != bool(self.params['hardware']['hotadd_memory']):
self.module.fail_json(msg="Configure hotadd memory operation is not supported when VM is power on")
self.configspec.memoryHotAddEnabled = bool(self.params['hardware']['hotadd_memory'])
if vm_obj is None or self.configspec.memoryHotAddEnabled != vm_obj.config.memoryHotAddEnabled:
self.change_detected = True
if 'hotadd_cpu' in self.params['hardware']:
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
vm_obj.config.cpuHotAddEnabled != bool(self.params['hardware']['hotadd_cpu']):
self.module.fail_json(msg="Configure hotadd cpu operation is not supported when VM is power on")
self.configspec.cpuHotAddEnabled = bool(self.params['hardware']['hotadd_cpu'])
if vm_obj is None or self.configspec.cpuHotAddEnabled != vm_obj.config.cpuHotAddEnabled:
self.change_detected = True
if 'hotremove_cpu' in self.params['hardware']:
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn and \
vm_obj.config.cpuHotRemoveEnabled != bool(self.params['hardware']['hotremove_cpu']):
self.module.fail_json(msg="Configure hotremove cpu operation is not supported when VM is power on")
self.configspec.cpuHotRemoveEnabled = bool(self.params['hardware']['hotremove_cpu'])
if vm_obj is None or self.configspec.cpuHotRemoveEnabled != vm_obj.config.cpuHotRemoveEnabled:
self.change_detected = True
if 'memory_reservation_lock' in self.params['hardware']:
self.configspec.memoryReservationLockedToMax = bool(self.params['hardware']['memory_reservation_lock'])
if vm_obj is None or self.configspec.memoryReservationLockedToMax != vm_obj.config.memoryReservationLockedToMax:
self.change_detected = True
if 'boot_firmware' in self.params['hardware']:
# boot firmware re-config can cause boot issue
if vm_obj is not None:
return
boot_firmware = self.params['hardware']['boot_firmware'].lower()
if boot_firmware not in ('bios', 'efi'):
self.module.fail_json(msg="hardware.boot_firmware value is invalid [%s]."
" Need one of ['bios', 'efi']." % boot_firmware)
self.configspec.firmware = boot_firmware
self.change_detected = True
def sanitize_cdrom_params(self):
# cdroms {'ide': [{num: 0, cdrom: []}, {}], 'sata': [{num: 0, cdrom: []}, {}, ...]}
cdroms = {'ide': [], 'sata': []}
expected_cdrom_spec = self.params.get('cdrom')
if expected_cdrom_spec:
for cdrom_spec in expected_cdrom_spec:
cdrom_spec['controller_type'] = cdrom_spec.get('controller_type', 'ide').lower()
if cdrom_spec['controller_type'] not in ['ide', 'sata']:
self.module.fail_json(msg="Invalid cdrom.controller_type: %s, valid value is 'ide' or 'sata'."
% cdrom_spec['controller_type'])
cdrom_spec['state'] = cdrom_spec.get('state', 'present').lower()
if cdrom_spec['state'] not in ['present', 'absent']:
self.module.fail_json(msg="Invalid cdrom.state: %s, valid value is 'present', 'absent'."
% cdrom_spec['state'])
if cdrom_spec['state'] == 'present':
if 'type' in cdrom_spec and cdrom_spec.get('type') not in ['none', 'client', 'iso']:
self.module.fail_json(msg="Invalid cdrom.type: %s, valid value is 'none', 'client' or 'iso'."
% cdrom_spec.get('type'))
if cdrom_spec.get('type') == 'iso' and not cdrom_spec.get('iso_path'):
self.module.fail_json(msg="cdrom.iso_path is mandatory when cdrom.type is set to iso.")
if cdrom_spec['controller_type'] == 'ide' and \
(cdrom_spec.get('controller_number') not in [0, 1] or cdrom_spec.get('unit_number') not in [0, 1]):
self.module.fail_json(msg="Invalid cdrom.controller_number: %s or cdrom.unit_number: %s, valid"
" values are 0 or 1 for IDE controller." % (cdrom_spec.get('controller_number'), cdrom_spec.get('unit_number')))
if cdrom_spec['controller_type'] == 'sata' and \
(cdrom_spec.get('controller_number') not in range(0, 4) or cdrom_spec.get('unit_number') not in range(0, 30)):
self.module.fail_json(msg="Invalid cdrom.controller_number: %s or cdrom.unit_number: %s,"
" valid controller_number value is 0-3, valid unit_number is 0-29"
" for SATA controller." % (cdrom_spec.get('controller_number'), cdrom_spec.get('unit_number')))
ctl_exist = False
for exist_spec in cdroms.get(cdrom_spec['controller_type']):
if exist_spec['num'] == cdrom_spec['controller_number']:
ctl_exist = True
exist_spec['cdrom'].append(cdrom_spec)
break
if not ctl_exist:
cdroms.get(cdrom_spec['controller_type']).append({'num': cdrom_spec['controller_number'], 'cdrom': [cdrom_spec]})
return cdroms
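    # Illustrative result (hypothetical input): two CD-ROMs on IDE bus 0
    # sanitize into:
    #   {'ide': [{'num': 0, 'cdrom': [<spec for unit 0>, <spec for unit 1>]}],
    #    'sata': []}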
def configure_cdrom(self, vm_obj):
# Configure the VM CD-ROM
if self.params.get('cdrom'):
if vm_obj and vm_obj.config.template:
# Changing CD-ROM settings on a template is not supported
return
if isinstance(self.params.get('cdrom'), dict):
self.configure_cdrom_dict(vm_obj)
elif isinstance(self.params.get('cdrom'), list):
self.configure_cdrom_list(vm_obj)
def configure_cdrom_dict(self, vm_obj):
if self.params["cdrom"].get('type') not in ['none', 'client', 'iso']:
self.module.fail_json(msg="cdrom.type is mandatory. Options are 'none', 'client', and 'iso'.")
if self.params["cdrom"]['type'] == 'iso' and not self.params["cdrom"].get('iso_path'):
self.module.fail_json(msg="cdrom.iso_path is mandatory when cdrom.type is set to iso.")
cdrom_spec = None
cdrom_devices = self.get_vm_cdrom_devices(vm=vm_obj)
iso_path = self.params["cdrom"].get("iso_path")
if len(cdrom_devices) == 0:
# Creating new CD-ROM
ide_devices = self.get_vm_ide_devices(vm=vm_obj)
if len(ide_devices) == 0:
# Creating new IDE device
ide_ctl = self.device_helper.create_ide_controller()
ide_device = ide_ctl.device
self.change_detected = True
self.configspec.deviceChange.append(ide_ctl)
else:
ide_device = ide_devices[0]
if len(ide_device.device) > 3:
self.module.fail_json(msg="hardware.cdrom specified for a VM or template which already has 4"
" IDE devices of which none are a cdrom")
cdrom_spec = self.device_helper.create_cdrom(ide_device=ide_device, cdrom_type=self.params["cdrom"]["type"],
iso_path=iso_path)
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
cdrom_spec.device.connectable.connected = (self.params["cdrom"]["type"] != "none")
elif not self.device_helper.is_equal_cdrom(vm_obj=vm_obj, cdrom_device=cdrom_devices[0],
cdrom_type=self.params["cdrom"]["type"], iso_path=iso_path):
self.device_helper.update_cdrom_config(vm_obj, self.params["cdrom"], cdrom_devices[0], iso_path=iso_path)
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
cdrom_spec.device = cdrom_devices[0]
if cdrom_spec:
self.change_detected = True
self.configspec.deviceChange.append(cdrom_spec)
def configure_cdrom_list(self, vm_obj):
configured_cdroms = self.sanitize_cdrom_params()
cdrom_devices = self.get_vm_cdrom_devices(vm=vm_obj)
# configure IDE CD-ROMs
if configured_cdroms['ide']:
ide_devices = self.get_vm_ide_devices(vm=vm_obj)
for expected_cdrom_spec in configured_cdroms['ide']:
ide_device = None
for device in ide_devices:
if device.busNumber == expected_cdrom_spec['num']:
ide_device = device
break
# if no matching IDE controller is found, or none exists, create a new one
if not ide_device:
ide_ctl = self.device_helper.create_ide_controller(bus_number=expected_cdrom_spec['num'])
ide_device = ide_ctl.device
self.change_detected = True
self.configspec.deviceChange.append(ide_ctl)
for cdrom in expected_cdrom_spec['cdrom']:
cdrom_device = None
iso_path = cdrom.get('iso_path')
unit_number = cdrom.get('unit_number')
for target_cdrom in cdrom_devices:
if target_cdrom.controllerKey == ide_device.key and target_cdrom.unitNumber == unit_number:
cdrom_device = target_cdrom
break
# create new CD-ROM
if not cdrom_device and cdrom.get('state') != 'absent':
if vm_obj and vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
self.module.fail_json(msg='CD-ROMs attached to an IDE controller do not support hot-add.')
if len(ide_device.device) == 2:
self.module.fail_json(msg='Maximum number of CD-ROMs attached to IDE controller is 2.')
cdrom_spec = self.device_helper.create_cdrom(ide_device=ide_device, cdrom_type=cdrom['type'],
iso_path=iso_path, unit_number=unit_number)
self.change_detected = True
self.configspec.deviceChange.append(cdrom_spec)
# re-configure CD-ROM
elif cdrom_device and cdrom.get('state') != 'absent' and \
not self.device_helper.is_equal_cdrom(vm_obj=vm_obj, cdrom_device=cdrom_device,
cdrom_type=cdrom['type'], iso_path=iso_path):
self.device_helper.update_cdrom_config(vm_obj, cdrom, cdrom_device, iso_path=iso_path)
cdrom_spec = vim.vm.device.VirtualDeviceSpec()
cdrom_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
cdrom_spec.device = cdrom_device
self.change_detected = True
self.configspec.deviceChange.append(cdrom_spec)
# delete CD-ROM
elif cdrom_device and cdrom.get('state') == 'absent':
if vm_obj and vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
self.module.fail_json(msg='CD-ROMs attached to an IDE controller do not support hot-remove.')
cdrom_spec = self.device_helper.remove_cdrom(cdrom_device)
self.change_detected = True
self.configspec.deviceChange.append(cdrom_spec)
# configuring SATA CD-ROMs is not supported yet
if configured_cdroms['sata']:
pass
def configure_hardware_params(self, vm_obj):
"""
Function to configure hardware related configuration of virtual machine
Args:
vm_obj: virtual machine object
"""
if 'hardware' in self.params:
if 'max_connections' in self.params['hardware']:
# maxMksConnections == max_connections
self.configspec.maxMksConnections = int(self.params['hardware']['max_connections'])
if vm_obj is None or self.configspec.maxMksConnections != vm_obj.config.maxMksConnections:
self.change_detected = True
if 'nested_virt' in self.params['hardware']:
self.configspec.nestedHVEnabled = bool(self.params['hardware']['nested_virt'])
if vm_obj is None or self.configspec.nestedHVEnabled != bool(vm_obj.config.nestedHVEnabled):
self.change_detected = True
if 'version' in self.params['hardware']:
hw_version_check_failed = False
temp_version = self.params['hardware'].get('version', 10)
if isinstance(temp_version, str) and temp_version.lower() == 'latest':
# Check is to make sure vm_obj is not of type template
if vm_obj and not vm_obj.config.template:
try:
task = vm_obj.UpgradeVM_Task()
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'upgrade'}
except vim.fault.AlreadyUpgraded:
# Don't fail if VM is already upgraded.
pass
else:
try:
temp_version = int(temp_version)
except ValueError:
hw_version_check_failed = True
if temp_version not in range(3, 16):
hw_version_check_failed = True
if hw_version_check_failed:
self.module.fail_json(msg="Failed to set hardware.version '%s' value as valid"
" values range from 3 (ESX 2.x) to 14 (ESXi 6.5 and greater)." % temp_version)
# Hardware version is denoted as "vmx-10"
version = "vmx-%02d" % temp_version
self.configspec.version = version
if vm_obj is None or self.configspec.version != vm_obj.config.version:
self.change_detected = True
# Check is to make sure vm_obj is not of type template
if vm_obj and not vm_obj.config.template:
# VM exists and we need to update the hardware version
current_version = vm_obj.config.version
# current_version = "vmx-10"
version_digit = int(current_version.split("-", 1)[-1])
if temp_version < version_digit:
self.module.fail_json(msg="Current hardware version '%d' which is greater than the specified"
" version '%d'. Downgrading hardware version is"
" not supported. Please specify version greater"
" than the current version." % (version_digit,
temp_version))
new_version = "vmx-%02d" % temp_version
try:
task = vm_obj.UpgradeVM_Task(new_version)
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'upgrade'}
except vim.fault.AlreadyUpgraded:
# Don't fail if VM is already upgraded.
pass
if 'virt_based_security' in self.params['hardware']:
host_version = self.select_host().summary.config.product.version
if int(host_version.split('.')[0]) < 6 or (int(host_version.split('.')[0]) == 6 and int(host_version.split('.')[1]) < 7):
self.module.fail_json(msg="ESXi version %s not support VBS." % host_version)
guest_ids = ['windows9_64Guest', 'windows9Server64Guest']
if vm_obj is None:
guestid = self.configspec.guestId
else:
guestid = vm_obj.summary.config.guestId
if guestid not in guest_ids:
self.module.fail_json(msg="Guest '%s' not support VBS." % guestid)
if (vm_obj is None and int(self.configspec.version.split('-')[1]) >= 14) or \
(vm_obj and int(vm_obj.config.version.split('-')[1]) >= 14 and (vm_obj.runtime.powerState == vim.VirtualMachinePowerState.poweredOff)):
self.configspec.flags = vim.vm.FlagInfo()
self.configspec.flags.vbsEnabled = bool(self.params['hardware']['virt_based_security'])
if bool(self.params['hardware']['virt_based_security']):
self.configspec.flags.vvtdEnabled = True
self.configspec.nestedHVEnabled = True
if (vm_obj is None and self.configspec.firmware == 'efi') or \
(vm_obj and vm_obj.config.firmware == 'efi'):
self.configspec.bootOptions = vim.vm.BootOptions()
self.configspec.bootOptions.efiSecureBootEnabled = True
else:
self.module.fail_json(msg="Not support VBS when firmware is BIOS.")
if vm_obj is None or self.configspec.flags.vbsEnabled != vm_obj.config.flags.vbsEnabled:
self.change_detected = True
def get_device_by_type(self, vm=None, type=None):
device_list = []
if vm is None or type is None:
return device_list
for device in vm.config.hardware.device:
if isinstance(device, type):
device_list.append(device)
return device_list
def get_vm_cdrom_devices(self, vm=None):
return self.get_device_by_type(vm=vm, type=vim.vm.device.VirtualCdrom)
def get_vm_ide_devices(self, vm=None):
return self.get_device_by_type(vm=vm, type=vim.vm.device.VirtualIDEController)
def get_vm_network_interfaces(self, vm=None):
device_list = []
if vm is None:
return device_list
nw_device_types = (vim.vm.device.VirtualPCNet32, vim.vm.device.VirtualVmxnet2,
vim.vm.device.VirtualVmxnet3, vim.vm.device.VirtualE1000,
vim.vm.device.VirtualE1000e, vim.vm.device.VirtualSriovEthernetCard)
for device in vm.config.hardware.device:
if isinstance(device, nw_device_types):
device_list.append(device)
return device_list
def sanitize_network_params(self):
"""
Sanitize user-provided network parameters.
Returns: A sanitized list of network params, else fails
"""
network_devices = list()
# Clean up user data here
for network in self.params['networks']:
if 'name' not in network and 'vlan' not in network:
self.module.fail_json(msg="Please specify at least a network name or"
" a VLAN name under VM network list.")
if 'name' in network and self.cache.get_network(network['name']) is None:
self.module.fail_json(msg="Network '%(name)s' does not exist." % network)
elif 'vlan' in network:
dvps = self.cache.get_all_objs(self.content, [vim.dvs.DistributedVirtualPortgroup])
for dvp in dvps:
if hasattr(dvp.config.defaultPortConfig, 'vlan') and \
isinstance(dvp.config.defaultPortConfig.vlan.vlanId, int) and \
str(dvp.config.defaultPortConfig.vlan.vlanId) == str(network['vlan']):
network['name'] = dvp.config.name
break
if 'dvswitch_name' in network and \
dvp.config.distributedVirtualSwitch.name == network['dvswitch_name'] and \
dvp.config.name == network['vlan']:
network['name'] = dvp.config.name
break
if dvp.config.name == network['vlan']:
network['name'] = dvp.config.name
break
else:
self.module.fail_json(msg="VLAN '%(vlan)s' does not exist." % network)
if 'type' in network:
if network['type'] not in ['dhcp', 'static']:
self.module.fail_json(msg="Network type '%(type)s' is not a valid parameter."
" Valid parameters are ['dhcp', 'static']." % network)
if network['type'] != 'static' and ('ip' in network or 'netmask' in network):
self.module.fail_json(msg='Static IP information provided for network "%(name)s",'
' but "type" is set to "%(type)s".' % network)
else:
# 'type' is an optional parameter; if the user provided an IP or netmask,
# assume the network type is 'static'
if 'ip' in network or 'netmask' in network:
network['type'] = 'static'
else:
# User wants network type as 'dhcp'
network['type'] = 'dhcp'
if network.get('type') == 'static':
if 'ip' in network and 'netmask' not in network:
self.module.fail_json(msg="'netmask' is required if 'ip' is"
" specified under VM network list.")
if 'ip' not in network and 'netmask' in network:
self.module.fail_json(msg="'ip' is required if 'netmask' is"
" specified under VM network list.")
validate_device_types = ['pcnet32', 'vmxnet2', 'vmxnet3', 'e1000', 'e1000e', 'sriov']
if 'device_type' in network and network['device_type'] not in validate_device_types:
self.module.fail_json(msg="Device type specified '%s' is not valid."
" Please specify correct device"
" type from ['%s']." % (network['device_type'],
"', '".join(validate_device_types)))
if 'mac' in network and not is_mac(network['mac']):
self.module.fail_json(msg="Device MAC address '%s' is invalid."
" Please provide correct MAC address." % network['mac'])
network_devices.append(network)
return network_devices
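# A sketch of one entry this sanitizer accepts (all values illustrative):
#   {'name': 'VM Network', 'device_type': 'vmxnet3', 'type': 'static',
#    'ip': '192.168.1.10', 'netmask': '255.255.255.0', 'mac': 'aa:bb:cc:dd:ee:ff'}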
def configure_network(self, vm_obj):
# Ignore empty networks, this permits to keep networks when deploying a template/cloning a VM
if len(self.params['networks']) == 0:
return
network_devices = self.sanitize_network_params()
# List current device for Clone or Idempotency
current_net_devices = self.get_vm_network_interfaces(vm=vm_obj)
if len(network_devices) < len(current_net_devices):
self.module.fail_json(msg="Given network device list is lesser than current VM device list (%d < %d). "
"Removing interfaces is not allowed"
% (len(network_devices), len(current_net_devices)))
for key in range(0, len(network_devices)):
nic_change_detected = False
network_name = network_devices[key]['name']
if key < len(current_net_devices) and (vm_obj or self.params['template']):
# We are editing existing network devices, this is either when
# are cloning from VM or Template
nic = vim.vm.device.VirtualDeviceSpec()
nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
nic.device = current_net_devices[key]
if ('wake_on_lan' in network_devices[key] and
nic.device.wakeOnLanEnabled != network_devices[key].get('wake_on_lan')):
nic.device.wakeOnLanEnabled = network_devices[key].get('wake_on_lan')
nic_change_detected = True
if ('start_connected' in network_devices[key] and
nic.device.connectable.startConnected != network_devices[key].get('start_connected')):
nic.device.connectable.startConnected = network_devices[key].get('start_connected')
nic_change_detected = True
if ('allow_guest_control' in network_devices[key] and
nic.device.connectable.allowGuestControl != network_devices[key].get('allow_guest_control')):
nic.device.connectable.allowGuestControl = network_devices[key].get('allow_guest_control')
nic_change_detected = True
if nic.device.deviceInfo.summary != network_name:
nic.device.deviceInfo.summary = network_name
nic_change_detected = True
if 'device_type' in network_devices[key]:
device = self.device_helper.get_device(network_devices[key]['device_type'], network_name)
device_class = type(device)
if not isinstance(nic.device, device_class):
self.module.fail_json(msg="Changing the device type is not possible when interface is already present. "
"The failing device type is %s" % network_devices[key]['device_type'])
# Changing mac address has no effect when editing interface
if 'mac' in network_devices[key] and nic.device.macAddress != current_net_devices[key].macAddress:
self.module.fail_json(msg="Changing MAC address has not effect when interface is already present. "
"The failing new MAC address is %s" % nic.device.macAddress)
else:
# Default device type is vmxnet3, VMware best practice
device_type = network_devices[key].get('device_type', 'vmxnet3')
nic = self.device_helper.create_nic(device_type,
'Network Adapter %s' % (key + 1),
network_devices[key])
nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
nic_change_detected = True
if hasattr(self.cache.get_network(network_name), 'portKeys'):
# VDS switch
pg_obj = None
if 'dvswitch_name' in network_devices[key]:
dvs_name = network_devices[key]['dvswitch_name']
dvs_obj = find_dvs_by_name(self.content, dvs_name)
if dvs_obj is None:
self.module.fail_json(msg="Unable to find distributed virtual switch %s" % dvs_name)
pg_obj = find_dvspg_by_name(dvs_obj, network_name)
if pg_obj is None:
self.module.fail_json(msg="Unable to find distributed port group %s" % network_name)
else:
pg_obj = self.cache.find_obj(self.content, [vim.dvs.DistributedVirtualPortgroup], network_name)
# TODO: (akasurde) There is no way to find association between resource pool and distributed virtual portgroup
# For now, check if we are able to find distributed virtual switch
if not pg_obj.config.distributedVirtualSwitch:
self.module.fail_json(msg="Failed to find distributed virtual switch which is associated with"
" distributed virtual portgroup '%s'. Make sure hostsystem is associated with"
" the given distributed virtual portgroup. Also, check if user has correct"
" permission to access distributed virtual switch in the given portgroup." % pg_obj.name)
if (nic.device.backing and
(not hasattr(nic.device.backing, 'port') or
(nic.device.backing.port.portgroupKey != pg_obj.key or
nic.device.backing.port.switchUuid != pg_obj.config.distributedVirtualSwitch.uuid))):
nic_change_detected = True
dvs_port_connection = vim.dvs.PortConnection()
dvs_port_connection.portgroupKey = pg_obj.key
# If the user specifies a distributed port group without associating it to the
# host system on which the virtual machine is going to be deployed, we get an
# error. We can infer that there is no association between the given
# distributed port group and the host system.
host_system = self.params.get('esxi_hostname')
if host_system and host_system not in [host.config.host.name for host in pg_obj.config.distributedVirtualSwitch.config.host]:
self.module.fail_json(msg="It seems that host system '%s' is not associated with distributed"
" virtual portgroup '%s'. Please make sure host system is associated"
" with given distributed virtual portgroup" % (host_system, pg_obj.name))
dvs_port_connection.switchUuid = pg_obj.config.distributedVirtualSwitch.uuid
nic.device.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo()
nic.device.backing.port = dvs_port_connection
elif isinstance(self.cache.get_network(network_name), vim.OpaqueNetwork):
# NSX-T Logical Switch
nic.device.backing = vim.vm.device.VirtualEthernetCard.OpaqueNetworkBackingInfo()
network_id = self.cache.get_network(network_name).summary.opaqueNetworkId
nic.device.backing.opaqueNetworkType = 'nsx.LogicalSwitch'
nic.device.backing.opaqueNetworkId = network_id
nic.device.deviceInfo.summary = 'nsx.LogicalSwitch: %s' % network_id
nic_change_detected = True
else:
# vSwitch
if not isinstance(nic.device.backing, vim.vm.device.VirtualEthernetCard.NetworkBackingInfo):
nic.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
nic_change_detected = True
net_obj = self.cache.get_network(network_name)
if nic.device.backing.network != net_obj:
nic.device.backing.network = net_obj
nic_change_detected = True
if nic.device.backing.deviceName != network_name:
nic.device.backing.deviceName = network_name
nic_change_detected = True
if nic_change_detected:
# Change to fix the issue found while configuring opaque network
# VMs cloned from a template with opaque network will get disconnected
# Replacing deprecated config parameter with relocation Spec
if isinstance(self.cache.get_network(network_name), vim.OpaqueNetwork):
self.relospec.deviceChange.append(nic)
else:
self.configspec.deviceChange.append(nic)
self.change_detected = True
def configure_vapp_properties(self, vm_obj):
if len(self.params['vapp_properties']) == 0:
return
for x in self.params['vapp_properties']:
if not x.get('id'):
self.module.fail_json(msg="id is required to set vApp property")
new_vmconfig_spec = vim.vApp.VmConfigSpec()
if vm_obj:
# VM exists
# This is primarily for vcsim/integration tests, unset vAppConfig was not seen on my deployments
orig_spec = vm_obj.config.vAppConfig if vm_obj.config.vAppConfig else new_vmconfig_spec
vapp_properties_current = dict((x.id, x) for x in orig_spec.property)
vapp_properties_to_change = dict((x['id'], x) for x in self.params['vapp_properties'])
# each property must have a unique key
# init key counter with max value + 1
all_keys = [x.key for x in orig_spec.property]
new_property_index = max(all_keys) + 1 if all_keys else 0
for property_id, property_spec in vapp_properties_to_change.items():
is_property_changed = False
new_vapp_property_spec = vim.vApp.PropertySpec()
if property_id in vapp_properties_current:
if property_spec.get('operation') == 'remove':
new_vapp_property_spec.operation = 'remove'
new_vapp_property_spec.removeKey = vapp_properties_current[property_id].key
is_property_changed = True
else:
# this is 'edit' branch
new_vapp_property_spec.operation = 'edit'
new_vapp_property_spec.info = vapp_properties_current[property_id]
try:
for property_name, property_value in property_spec.items():
if property_name == 'operation':
# operation is not an info object property
# if set to anything other than 'remove' we don't fail
continue
# Updating attributes only if needed
if getattr(new_vapp_property_spec.info, property_name) != property_value:
setattr(new_vapp_property_spec.info, property_name, property_value)
is_property_changed = True
except Exception as e:
msg = "Failed to set vApp property field='%s' and value='%s'. Error: %s" % (property_name, property_value, to_text(e))
self.module.fail_json(msg=msg)
else:
if property_spec.get('operation') == 'remove':
# attempt to delete non-existent property
continue
# this is add new property branch
new_vapp_property_spec.operation = 'add'
property_info = vim.vApp.PropertyInfo()
property_info.classId = property_spec.get('classId')
property_info.instanceId = property_spec.get('instanceId')
property_info.id = property_spec.get('id')
property_info.category = property_spec.get('category')
property_info.label = property_spec.get('label')
property_info.type = property_spec.get('type', 'string')
property_info.userConfigurable = property_spec.get('userConfigurable', True)
property_info.defaultValue = property_spec.get('defaultValue')
property_info.value = property_spec.get('value', '')
property_info.description = property_spec.get('description')
new_vapp_property_spec.info = property_info
new_vapp_property_spec.info.key = new_property_index
new_property_index += 1
is_property_changed = True
if is_property_changed:
new_vmconfig_spec.property.append(new_vapp_property_spec)
else:
# New VM
all_keys = [x.key for x in new_vmconfig_spec.property]
new_property_index = max(all_keys) + 1 if all_keys else 0
vapp_properties_to_change = dict((x['id'], x) for x in self.params['vapp_properties'])
is_property_changed = False
for property_id, property_spec in vapp_properties_to_change.items():
new_vapp_property_spec = vim.vApp.PropertySpec()
# this is add new property branch
new_vapp_property_spec.operation = 'add'
property_info = vim.vApp.PropertyInfo()
property_info.classId = property_spec.get('classId')
property_info.instanceId = property_spec.get('instanceId')
property_info.id = property_spec.get('id')
property_info.category = property_spec.get('category')
property_info.label = property_spec.get('label')
property_info.type = property_spec.get('type', 'string')
property_info.userConfigurable = property_spec.get('userConfigurable', True)
property_info.defaultValue = property_spec.get('defaultValue')
property_info.value = property_spec.get('value', '')
property_info.description = property_spec.get('description')
new_vapp_property_spec.info = property_info
new_vapp_property_spec.info.key = new_property_index
new_property_index += 1
is_property_changed = True
if is_property_changed:
new_vmconfig_spec.property.append(new_vapp_property_spec)
if new_vmconfig_spec.property:
self.configspec.vAppConfig = new_vmconfig_spec
self.change_detected = True
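# A sketch of one vapp_properties entry handled above (values illustrative):
#   {'id': 'remoteIP', 'category': 'Backup', 'label': 'Backup server IP',
#    'type': 'string', 'value': '10.10.10.1'}
# An entry with 'operation': 'remove' deletes an existing property by id.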
def customize_customvalues(self, vm_obj):
if len(self.params['customvalues']) == 0:
return
facts = self.gather_facts(vm_obj)
for kv in self.params['customvalues']:
if 'key' not in kv or 'value' not in kv:
self.module.exit_json(msg="customvalues items required both 'key' and 'value' fields.")
key_id = None
for field in self.content.customFieldsManager.field:
if field.name == kv['key']:
key_id = field.key
break
if not key_id:
self.module.fail_json(msg="Unable to find custom value key %s" % kv['key'])
# If the kv differs from the kv fetched from facts, change it
if kv['key'] not in facts['customvalues'] or facts['customvalues'][kv['key']] != kv['value']:
self.content.customFieldsManager.SetField(entity=vm_obj, key=key_id, value=kv['value'])
self.change_detected = True
def customize_vm(self, vm_obj):
# User specified customization specification
custom_spec_name = self.params.get('customization_spec')
if custom_spec_name:
cc_mgr = self.content.customizationSpecManager
if cc_mgr.DoesCustomizationSpecExist(name=custom_spec_name):
temp_spec = cc_mgr.GetCustomizationSpec(name=custom_spec_name)
self.customspec = temp_spec.spec
return
else:
self.module.fail_json(msg="Unable to find customization specification"
" '%s' in given configuration." % custom_spec_name)
# Network settings
adaptermaps = []
for network in self.params['networks']:
guest_map = vim.vm.customization.AdapterMapping()
guest_map.adapter = vim.vm.customization.IPSettings()
if 'ip' in network and 'netmask' in network:
guest_map.adapter.ip = vim.vm.customization.FixedIp()
guest_map.adapter.ip.ipAddress = str(network['ip'])
guest_map.adapter.subnetMask = str(network['netmask'])
elif 'type' in network and network['type'] == 'dhcp':
guest_map.adapter.ip = vim.vm.customization.DhcpIpGenerator()
if 'gateway' in network:
guest_map.adapter.gateway = network['gateway']
# On Windows, DNS domain and DNS servers can be set by network interface
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.IPSettings.html
if 'domain' in network:
guest_map.adapter.dnsDomain = network['domain']
elif 'domain' in self.params['customization']:
guest_map.adapter.dnsDomain = self.params['customization']['domain']
if 'dns_servers' in network:
guest_map.adapter.dnsServerList = network['dns_servers']
elif 'dns_servers' in self.params['customization']:
guest_map.adapter.dnsServerList = self.params['customization']['dns_servers']
adaptermaps.append(guest_map)
# Global DNS settings
globalip = vim.vm.customization.GlobalIPSettings()
if 'dns_servers' in self.params['customization']:
globalip.dnsServerList = self.params['customization']['dns_servers']
# TODO: Maybe list the different domains from the interfaces here by default ?
if 'dns_suffix' in self.params['customization']:
dns_suffix = self.params['customization']['dns_suffix']
if isinstance(dns_suffix, list):
globalip.dnsSuffixList = " ".join(dns_suffix)
else:
globalip.dnsSuffixList = dns_suffix
elif 'domain' in self.params['customization']:
globalip.dnsSuffixList = self.params['customization']['domain']
if self.params['guest_id']:
guest_id = self.params['guest_id']
else:
guest_id = vm_obj.summary.config.guestId
# For windows guest OS, use SysPrep
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.Sysprep.html#field_detail
if 'win' in guest_id:
ident = vim.vm.customization.Sysprep()
ident.userData = vim.vm.customization.UserData()
# Setting hostName, orgName and fullName is mandatory, so we set defaults when missing
ident.userData.computerName = vim.vm.customization.FixedName()
# computer name will be truncated to 15 characters if using VM name
default_name = self.params['name'].replace(' ', '')
punctuation = string.punctuation.replace('-', '')
default_name = ''.join([c for c in default_name if c not in punctuation])
ident.userData.computerName.name = str(self.params['customization'].get('hostname', default_name[0:15]))
ident.userData.fullName = str(self.params['customization'].get('fullname', 'Administrator'))
ident.userData.orgName = str(self.params['customization'].get('orgname', 'ACME'))
if 'productid' in self.params['customization']:
ident.userData.productId = str(self.params['customization']['productid'])
ident.guiUnattended = vim.vm.customization.GuiUnattended()
if 'autologon' in self.params['customization']:
ident.guiUnattended.autoLogon = self.params['customization']['autologon']
ident.guiUnattended.autoLogonCount = self.params['customization'].get('autologoncount', 1)
if 'timezone' in self.params['customization']:
# Check if timezone value is an int before proceeding.
ident.guiUnattended.timeZone = self.device_helper.integer_value(
self.params['customization']['timezone'],
'customization.timezone')
ident.identification = vim.vm.customization.Identification()
if self.params['customization'].get('password', '') != '':
ident.guiUnattended.password = vim.vm.customization.Password()
ident.guiUnattended.password.value = str(self.params['customization']['password'])
ident.guiUnattended.password.plainText = True
if 'joindomain' in self.params['customization']:
if 'domainadmin' not in self.params['customization'] or 'domainadminpassword' not in self.params['customization']:
self.module.fail_json(msg="'domainadmin' and 'domainadminpassword' entries are mandatory in 'customization' section to use "
"joindomain feature")
ident.identification.domainAdmin = str(self.params['customization']['domainadmin'])
ident.identification.joinDomain = str(self.params['customization']['joindomain'])
ident.identification.domainAdminPassword = vim.vm.customization.Password()
ident.identification.domainAdminPassword.value = str(self.params['customization']['domainadminpassword'])
ident.identification.domainAdminPassword.plainText = True
elif 'joinworkgroup' in self.params['customization']:
ident.identification.joinWorkgroup = str(self.params['customization']['joinworkgroup'])
if 'runonce' in self.params['customization']:
ident.guiRunOnce = vim.vm.customization.GuiRunOnce()
ident.guiRunOnce.commandList = self.params['customization']['runonce']
else:
# FIXME: We have no clue whether this non-Windows OS is actually Linux, hence it might fail!
# For Linux guest OS, use LinuxPrep
# https://pubs.vmware.com/vi3/sdk/ReferenceGuide/vim.vm.customization.LinuxPrep.html
ident = vim.vm.customization.LinuxPrep()
# TODO: Maybe add domain from interface if missing ?
if 'domain' in self.params['customization']:
ident.domain = str(self.params['customization']['domain'])
ident.hostName = vim.vm.customization.FixedName()
hostname = str(self.params['customization'].get('hostname', self.params['name'].split('.')[0]))
# Remove all characters except alphanumeric and minus which is allowed by RFC 952
valid_hostname = re.sub(r"[^a-zA-Z0-9\-]", "", hostname)
ident.hostName.name = valid_hostname
# List of supported time zones for different vSphere versions in Linux/Unix systems
# https://kb.vmware.com/s/article/2145518
if 'timezone' in self.params['customization']:
ident.timeZone = str(self.params['customization']['timezone'])
if 'hwclockUTC' in self.params['customization']:
ident.hwClockUTC = self.params['customization']['hwclockUTC']
self.customspec = vim.vm.customization.Specification()
self.customspec.nicSettingMap = adaptermaps
self.customspec.globalIPSettings = globalip
self.customspec.identity = ident
def get_vm_scsi_controller(self, vm_obj):
# If vm_obj doesn't exist there is no SCSI controller to find
if vm_obj is None:
return None
for device in vm_obj.config.hardware.device:
if self.device_helper.is_scsi_controller(device):
scsi_ctl = vim.vm.device.VirtualDeviceSpec()
scsi_ctl.device = device
return scsi_ctl
return None
def get_configured_disk_size(self, expected_disk_spec):
# what size is it?
if [x for x in expected_disk_spec.keys() if x.startswith('size_') or x == 'size']:
# size, size_tb, size_gb, size_mb, size_kb
if 'size' in expected_disk_spec:
size_regex = re.compile(r'(\d+(?:\.\d+)?)([tgmkTGMK][bB])')
disk_size_m = size_regex.match(expected_disk_spec['size'])
try:
if disk_size_m:
expected = disk_size_m.group(1)
unit = disk_size_m.group(2)
else:
raise ValueError
if re.match(r'\d+\.\d+', expected):
# We found float value in string, let's typecast it
expected = float(expected)
else:
# We found int value in string, let's typecast it
expected = int(expected)
if not expected or not unit:
raise ValueError
except (TypeError, ValueError, NameError):
# Common failure
self.module.fail_json(msg="Failed to parse disk size please review value"
" provided using documentation.")
else:
param = [x for x in expected_disk_spec.keys() if x.startswith('size_')][0]
unit = param.split('_')[-1].lower()
expected = [x[1] for x in expected_disk_spec.items() if x[0].startswith('size_')][0]
expected = int(expected)
disk_units = dict(tb=3, gb=2, mb=1, kb=0)
if unit in disk_units:
unit = unit.lower()
return expected * (1024 ** disk_units[unit])
else:
self.module.fail_json(msg="%s is not a supported unit for disk size."
" Supported units are ['%s']." % (unit,
"', '".join(disk_units.keys())))
# A disk was provided without any size attribute, fail
self.module.fail_json(
msg="No size, size_kb, size_mb, size_gb or size_tb attribute found in the disk configuration")
def find_vmdk(self, vmdk_path):
"""
Takes a vsphere datastore path in the format
[datastore_name] path/to/file.vmdk
Returns vsphere file object or raises RuntimeError
"""
datastore_name, vmdk_fullpath, vmdk_filename, vmdk_folder = self.vmdk_disk_path_split(vmdk_path)
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
if datastore is None:
self.module.fail_json(msg="Failed to find the datastore %s" % datastore_name)
return self.find_vmdk_file(datastore, vmdk_fullpath, vmdk_filename, vmdk_folder)
def add_existing_vmdk(self, vm_obj, expected_disk_spec, diskspec, scsi_ctl):
"""
Adds vmdk file described by expected_disk_spec['filename'], retrieves the file
information and adds the correct spec to self.configspec.deviceChange.
"""
filename = expected_disk_spec['filename']
# if this is a new disk, or the disk file names are different
if (vm_obj and diskspec.device.backing.fileName != filename) or vm_obj is None:
vmdk_file = self.find_vmdk(expected_disk_spec['filename'])
diskspec.device.backing.fileName = expected_disk_spec['filename']
diskspec.device.capacityInKB = VmomiSupport.vmodlTypes['long'](vmdk_file.fileSize / 1024)
diskspec.device.key = -1
self.change_detected = True
self.configspec.deviceChange.append(diskspec)
def configure_disks(self, vm_obj):
# Ignore empty disk list, this permits to keep disks when deploying a template/cloning a VM
if len(self.params['disk']) == 0:
return
scsi_ctl = self.get_vm_scsi_controller(vm_obj)
# Create scsi controller only if we are deploying a new VM, not a template or reconfiguring
if vm_obj is None or scsi_ctl is None:
scsi_ctl = self.device_helper.create_scsi_controller(self.get_scsi_type())
self.change_detected = True
self.configspec.deviceChange.append(scsi_ctl)
disks = [x for x in vm_obj.config.hardware.device if isinstance(x, vim.vm.device.VirtualDisk)] \
if vm_obj is not None else None
if disks is not None and self.params.get('disk') and len(self.params.get('disk')) < len(disks):
self.module.fail_json(msg="Provided disks configuration has less disks than "
"the target object (%d vs %d)" % (len(self.params.get('disk')), len(disks)))
disk_index = 0
for expected_disk_spec in self.params.get('disk'):
disk_modified = False
# If we are manipulating an existing object which has disks and disk_index is in disks
if vm_obj is not None and disks is not None and disk_index < len(disks):
diskspec = vim.vm.device.VirtualDeviceSpec()
# set the operation to edit so that it knows to keep other settings
diskspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
diskspec.device = disks[disk_index]
else:
diskspec = self.device_helper.create_scsi_disk(scsi_ctl, disk_index)
disk_modified = True
# increment index for next disk search
disk_index += 1
# index 7 is reserved for the SCSI controller
if disk_index == 7:
disk_index += 1
if 'disk_mode' in expected_disk_spec:
disk_mode = expected_disk_spec.get('disk_mode', 'persistent').lower()
valid_disk_mode = ['persistent', 'independent_persistent', 'independent_nonpersistent']
if disk_mode not in valid_disk_mode:
self.module.fail_json(msg="disk_mode specified is not valid."
" Should be one of ['%s']" % "', '".join(valid_disk_mode))
if (vm_obj and diskspec.device.backing.diskMode != disk_mode) or (vm_obj is None):
diskspec.device.backing.diskMode = disk_mode
disk_modified = True
else:
diskspec.device.backing.diskMode = "persistent"
# is it thin?
if 'type' in expected_disk_spec:
disk_type = expected_disk_spec.get('type', '').lower()
if disk_type == 'thin':
diskspec.device.backing.thinProvisioned = True
elif disk_type == 'eagerzeroedthick':
diskspec.device.backing.eagerlyScrub = True
if 'filename' in expected_disk_spec and expected_disk_spec['filename'] is not None:
self.add_existing_vmdk(vm_obj, expected_disk_spec, diskspec, scsi_ctl)
continue
elif vm_obj is None or self.params['template']:
# We are creating new VM or from Template
# Only create virtual device if not backed by vmdk in original template
if diskspec.device.backing.fileName == '':
diskspec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
# which datastore?
if expected_disk_spec.get('datastore'):
# TODO: This is already handled by the relocation spec,
# but it needs to eventually be handled for all the
# other disks defined
pass
kb = self.get_configured_disk_size(expected_disk_spec)
# VMware doesn't allow reducing disk sizes
if kb < diskspec.device.capacityInKB:
self.module.fail_json(
msg="Given disk size is smaller than found (%d < %d). Reducing disks is not allowed." %
(kb, diskspec.device.capacityInKB))
if kb != diskspec.device.capacityInKB or disk_modified:
diskspec.device.capacityInKB = kb
self.configspec.deviceChange.append(diskspec)
self.change_detected = True
def select_host(self):
hostsystem = self.cache.get_esx_host(self.params['esxi_hostname'])
if not hostsystem:
self.module.fail_json(msg='Failed to find ESX host "%(esxi_hostname)s"' % self.params)
if hostsystem.runtime.connectionState != 'connected' or hostsystem.runtime.inMaintenanceMode:
self.module.fail_json(msg='ESXi "%(esxi_hostname)s" is in invalid state or in maintenance mode.' % self.params)
return hostsystem
def autoselect_datastore(self):
datastore = None
datastores = self.cache.get_all_objs(self.content, [vim.Datastore])
if datastores is None or len(datastores) == 0:
self.module.fail_json(msg="Unable to find a datastore list when autoselecting")
datastore_freespace = 0
for ds in datastores:
if not self.is_datastore_valid(datastore_obj=ds):
continue
if ds.summary.freeSpace > datastore_freespace:
datastore = ds
datastore_freespace = ds.summary.freeSpace
return datastore
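# Note: autoselection simply keeps the datastore with the most free space among
# all datastores passing is_datastore_valid(); no SDRS recommendation is used here.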
def get_recommended_datastore(self, datastore_cluster_obj=None):
"""
Function to return Storage DRS recommended datastore from datastore cluster
Args:
datastore_cluster_obj: datastore cluster managed object
Returns: Name of recommended datastore from the given datastore cluster
"""
if datastore_cluster_obj is None:
return None
# Check if Datastore Cluster provided by user is SDRS ready
sdrs_status = datastore_cluster_obj.podStorageDrsEntry.storageDrsConfig.podConfig.enabled
if sdrs_status:
# We can get a storage recommendation only if SDRS is enabled on the given datastore cluster
pod_sel_spec = vim.storageDrs.PodSelectionSpec()
pod_sel_spec.storagePod = datastore_cluster_obj
storage_spec = vim.storageDrs.StoragePlacementSpec()
storage_spec.podSelectionSpec = pod_sel_spec
storage_spec.type = 'create'
try:
rec = self.content.storageResourceManager.RecommendDatastores(storageSpec=storage_spec)
rec_action = rec.recommendations[0].action[0]
return rec_action.destination.name
except Exception:
# There was some error, so we fall back to the general workflow
pass
datastore = None
datastore_freespace = 0
for ds in datastore_cluster_obj.childEntity:
if isinstance(ds, vim.Datastore) and ds.summary.freeSpace > datastore_freespace:
# If datastore field is provided, filter destination datastores
if not self.is_datastore_valid(datastore_obj=ds):
continue
datastore = ds
datastore_freespace = ds.summary.freeSpace
if datastore:
return datastore.name
return None
def select_datastore(self, vm_obj=None):
datastore = None
datastore_name = None
if len(self.params['disk']) != 0:
# TODO: really use the datastore for newly created disks
if 'autoselect_datastore' in self.params['disk'][0] and self.params['disk'][0]['autoselect_datastore']:
datastores = self.cache.get_all_objs(self.content, [vim.Datastore])
datastores = [x for x in datastores if self.cache.get_parent_datacenter(x).name == self.params['datacenter']]
if datastores is None or len(datastores) == 0:
self.module.fail_json(msg="Unable to find a datastore list when autoselecting")
datastore_freespace = 0
for ds in datastores:
if not self.is_datastore_valid(datastore_obj=ds):
continue
if (ds.summary.freeSpace > datastore_freespace) or (ds.summary.freeSpace == datastore_freespace and not datastore):
# If datastore field is provided, filter destination datastores
if 'datastore' in self.params['disk'][0] and \
isinstance(self.params['disk'][0]['datastore'], str) and \
ds.name.find(self.params['disk'][0]['datastore']) < 0:
continue
datastore = ds
datastore_name = datastore.name
datastore_freespace = ds.summary.freeSpace
elif 'datastore' in self.params['disk'][0]:
datastore_name = self.params['disk'][0]['datastore']
# Check if user has provided datastore cluster first
datastore_cluster = self.cache.find_obj(self.content, [vim.StoragePod], datastore_name)
if datastore_cluster:
# If the user specified a datastore cluster, get the recommended datastore
datastore_name = self.get_recommended_datastore(datastore_cluster_obj=datastore_cluster)
# Check if get_recommended_datastore or user specified datastore exists or not
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
else:
self.module.fail_json(msg="Either datastore or autoselect_datastore should be provided to select datastore")
if not datastore and self.params['template']:
# use the template's existing DS
disks = [x for x in vm_obj.config.hardware.device if isinstance(x, vim.vm.device.VirtualDisk)]
if disks:
datastore = disks[0].backing.datastore
datastore_name = datastore.name
# validation
if datastore:
dc = self.cache.get_parent_datacenter(datastore)
if dc.name != self.params['datacenter']:
datastore = self.autoselect_datastore()
datastore_name = datastore.name
if not datastore:
if len(self.params['disk']) != 0 or self.params['template'] is None:
self.module.fail_json(msg="Unable to find the datastore with given parameters."
" This could mean, %s is a non-existent virtual machine and module tried to"
" deploy it as new virtual machine with no disk. Please specify disks parameter"
" or specify template to clone from." % self.params['name'])
self.module.fail_json(msg="Failed to find a matching datastore")
return datastore, datastore_name
def obj_has_parent(self, obj, parent):
if obj is None and parent is None:
raise AssertionError()
current_parent = obj
while True:
if current_parent.name == parent.name:
return True
# Check if we have reached the root folder ('group-d1' on vCenter, 'ha-folder-root' on standalone ESXi)
moid = current_parent._moId
if moid in ['group-d1', 'ha-folder-root']:
return False
current_parent = current_parent.parent
if current_parent is None:
return False
def get_scsi_type(self):
disk_controller_type = "paravirtual"
# set cpu/memory/etc
if 'hardware' in self.params:
if 'scsi' in self.params['hardware']:
if self.params['hardware']['scsi'] in ['buslogic', 'paravirtual', 'lsilogic', 'lsilogicsas']:
disk_controller_type = self.params['hardware']['scsi']
else:
self.module.fail_json(msg="hardware.scsi attribute should be 'paravirtual' or 'lsilogic'")
return disk_controller_type
def find_folder(self, searchpath):
""" Walk inventory objects one position of the searchpath at a time """
# split the searchpath so we can iterate through it
paths = [x.replace('/', '') for x in searchpath.split('/')]
paths_total = len(paths) - 1
position = 0
# recursive walk while looking for next element in searchpath
root = self.content.rootFolder
while root and position <= paths_total:
change = False
if hasattr(root, 'childEntity'):
for child in root.childEntity:
if child.name == paths[position]:
root = child
position += 1
change = True
break
elif isinstance(root, vim.Datacenter):
if hasattr(root, 'vmFolder'):
if root.vmFolder.name == paths[position]:
root = root.vmFolder
position += 1
change = True
else:
root = None
if not change:
root = None
return root
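# Illustrative walk (names assumed): searchpath 'DC1/vm/prod' visits
# rootFolder -> child 'DC1' (a Datacenter) -> its vmFolder 'vm' -> child 'prod'
# and returns that folder object, or None as soon as an element is not found.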
def get_resource_pool(self, cluster=None, host=None, resource_pool=None):
""" Get a resource pool, filter on cluster, esxi_hostname or resource_pool if given """
cluster_name = cluster or self.params.get('cluster', None)
host_name = host or self.params.get('esxi_hostname', None)
resource_pool_name = resource_pool or self.params.get('resource_pool', None)
# get the datacenter object
datacenter = find_obj(self.content, [vim.Datacenter], self.params['datacenter'])
if not datacenter:
self.module.fail_json(msg='Unable to find datacenter "%s"' % self.params['datacenter'])
# if cluster is given, get the cluster object
if cluster_name:
cluster = find_obj(self.content, [vim.ComputeResource], cluster_name, folder=datacenter)
if not cluster:
self.module.fail_json(msg='Unable to find cluster "%s"' % cluster_name)
# if host is given, get the cluster object using the host
elif host_name:
host = find_obj(self.content, [vim.HostSystem], host_name, folder=datacenter)
if not host:
self.module.fail_json(msg='Unable to find host "%s"' % host_name)
cluster = host.parent
else:
cluster = None
# get resource pools limiting search to cluster or datacenter
resource_pool = find_obj(self.content, [vim.ResourcePool], resource_pool_name, folder=cluster or datacenter)
if not resource_pool:
if resource_pool_name:
self.module.fail_json(msg='Unable to find resource_pool "%s"' % resource_pool_name)
else:
self.module.fail_json(msg='Unable to find resource pool, need esxi_hostname, resource_pool, or cluster')
return resource_pool
def deploy_vm(self):
# https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/clone_vm.py
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.vm.CloneSpec.html
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.vm.ConfigSpec.html
# https://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/vim.vm.RelocateSpec.html
# FIXME:
# - static IPs
self.folder = self.params.get('folder', None)
if self.folder is None:
self.module.fail_json(msg="Folder is required parameter while deploying new virtual machine")
# Prepend / if it was missing from the folder path, also strip trailing slashes
if not self.folder.startswith('/'):
self.folder = '/%(folder)s' % self.params
self.folder = self.folder.rstrip('/')
datacenter = self.cache.find_obj(self.content, [vim.Datacenter], self.params['datacenter'])
if datacenter is None:
self.module.fail_json(msg='No datacenter named %(datacenter)s was found' % self.params)
dcpath = compile_folder_path_for_object(datacenter)
# Nested folder does not have trailing /
if not dcpath.endswith('/'):
dcpath += '/'
# Check for full path first in case it was already supplied
if (self.folder.startswith(dcpath + self.params['datacenter'] + '/vm') or
self.folder.startswith(dcpath + '/' + self.params['datacenter'] + '/vm')):
fullpath = self.folder
elif self.folder.startswith('/vm/') or self.folder == '/vm':
fullpath = "%s%s%s" % (dcpath, self.params['datacenter'], self.folder)
elif self.folder.startswith('/'):
fullpath = "%s%s/vm%s" % (dcpath, self.params['datacenter'], self.folder)
else:
fullpath = "%s%s/vm/%s" % (dcpath, self.params['datacenter'], self.folder)
f_obj = self.content.searchIndex.FindByInventoryPath(fullpath)
# abort if no strategy was successful
if f_obj is None:
# Add some debugging values in failure.
details = {
'datacenter': datacenter.name,
'datacenter_path': dcpath,
'folder': self.folder,
'full_search_path': fullpath,
}
self.module.fail_json(msg='No folder %s matched in the search path: %s' % (self.folder, fullpath),
details=details)
destfolder = f_obj
if self.params['template']:
vm_obj = self.get_vm_or_template(template_name=self.params['template'])
if vm_obj is None:
self.module.fail_json(msg="Could not find a template named %(template)s" % self.params)
else:
vm_obj = None
# always get a resource_pool
resource_pool = self.get_resource_pool()
# set the destination datastore for VM & disks
if self.params['datastore']:
# Give precedence to datastore value provided by user
# User may want to deploy VM to specific datastore.
datastore_name = self.params['datastore']
# Check if user has provided datastore cluster first
datastore_cluster = self.cache.find_obj(self.content, [vim.StoragePod], datastore_name)
if datastore_cluster:
# If the user specified a datastore cluster, get the recommended datastore
datastore_name = self.get_recommended_datastore(datastore_cluster_obj=datastore_cluster)
# Check if get_recommended_datastore or user specified datastore exists or not
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
else:
(datastore, datastore_name) = self.select_datastore(vm_obj)
self.configspec = vim.vm.ConfigSpec()
self.configspec.deviceChange = []
# create the relocation spec
self.relospec = vim.vm.RelocateSpec()
self.relospec.deviceChange = []
self.configure_guestid(vm_obj=vm_obj, vm_creation=True)
self.configure_cpu_and_memory(vm_obj=vm_obj, vm_creation=True)
self.configure_hardware_params(vm_obj=vm_obj)
self.configure_resource_alloc_info(vm_obj=vm_obj)
self.configure_vapp_properties(vm_obj=vm_obj)
self.configure_disks(vm_obj=vm_obj)
self.configure_network(vm_obj=vm_obj)
self.configure_cdrom(vm_obj=vm_obj)
# Find out if we need network customizations (find keys in the dictionary that require customization)
network_changes = False
for nw in self.params['networks']:
for key in nw:
# We don't need customizations for these keys
if key not in ('device_type', 'mac', 'name', 'vlan', 'type', 'start_connected'):
network_changes = True
break
if len(self.params['customization']) > 0 or network_changes or self.params.get('customization_spec') is not None:
self.customize_vm(vm_obj=vm_obj)
clonespec = None
clone_method = None
try:
if self.params['template']:
# Only select specific host when ESXi hostname is provided
if self.params['esxi_hostname']:
self.relospec.host = self.select_host()
self.relospec.datastore = datastore
# Convert the disks present in the template if 'convert' is set
if self.params['convert']:
for device in vm_obj.config.hardware.device:
if isinstance(device, vim.vm.device.VirtualDisk):
disk_locator = vim.vm.RelocateSpec.DiskLocator()
disk_locator.diskBackingInfo = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
if self.params['convert'] in ['thin']:
disk_locator.diskBackingInfo.thinProvisioned = True
if self.params['convert'] in ['eagerzeroedthick']:
disk_locator.diskBackingInfo.eagerlyScrub = True
if self.params['convert'] in ['thick']:
disk_locator.diskBackingInfo.diskMode = "persistent"
disk_locator.diskId = device.key
disk_locator.datastore = datastore
self.relospec.disk.append(disk_locator)
# https://www.vmware.com/support/developer/vc-sdk/visdk41pubs/ApiReference/vim.vm.RelocateSpec.html
# > pool: For a clone operation from a template to a virtual machine, this argument is required.
self.relospec.pool = resource_pool
linked_clone = self.params.get('linked_clone')
snapshot_src = self.params.get('snapshot_src', None)
if linked_clone:
if snapshot_src is not None:
self.relospec.diskMoveType = vim.vm.RelocateSpec.DiskMoveOptions.createNewChildDiskBacking
else:
self.module.fail_json(msg="Parameter 'linked_src' and 'snapshot_src' are"
" required together for linked clone operation.")
clonespec = vim.vm.CloneSpec(template=self.params['is_template'], location=self.relospec)
if self.customspec:
clonespec.customization = self.customspec
if snapshot_src is not None:
if vm_obj.snapshot is None:
self.module.fail_json(msg="No snapshots present for virtual machine or template [%(template)s]" % self.params)
snapshot = self.get_snapshots_by_name_recursively(snapshots=vm_obj.snapshot.rootSnapshotList,
snapname=snapshot_src)
if len(snapshot) != 1:
self.module.fail_json(msg='virtual machine "%(template)s" does not contain'
' snapshot named "%(snapshot_src)s"' % self.params)
clonespec.snapshot = snapshot[0].snapshot
clonespec.config = self.configspec
clone_method = 'Clone'
try:
task = vm_obj.Clone(folder=destfolder, name=self.params['name'], spec=clonespec)
except vim.fault.NoPermission as e:
self.module.fail_json(msg="Failed to clone virtual machine %s to folder %s "
"due to permission issue: %s" % (self.params['name'],
destfolder,
to_native(e.msg)))
self.change_detected = True
else:
# ConfigSpec requires a name for VM creation
self.configspec.name = self.params['name']
self.configspec.files = vim.vm.FileInfo(logDirectory=None,
snapshotDirectory=None,
suspendDirectory=None,
vmPathName="[" + datastore_name + "]")
clone_method = 'CreateVM_Task'
try:
task = destfolder.CreateVM_Task(config=self.configspec, pool=resource_pool)
except vmodl.fault.InvalidRequest as e:
self.module.fail_json(msg="Failed to create virtual machine due to invalid configuration "
"parameter %s" % to_native(e.msg))
except vim.fault.RestrictedVersion as e:
self.module.fail_json(msg="Failed to create virtual machine due to "
"product versioning restrictions: %s" % to_native(e.msg))
self.change_detected = True
self.wait_for_task(task)
except TypeError as e:
self.module.fail_json(msg="TypeError was returned, please ensure to give correct inputs. %s" % to_text(e))
if task.info.state == 'error':
# https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2021361
# https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2173
# provide these to the user for debugging
clonespec_json = serialize_spec(clonespec)
configspec_json = serialize_spec(self.configspec)
kwargs = {
'changed': self.change_applied,
'failed': True,
'msg': task.info.error.msg,
'clonespec': clonespec_json,
'configspec': configspec_json,
'clone_method': clone_method
}
return kwargs
else:
# set annotation
vm = task.info.result
if self.params['annotation']:
annotation_spec = vim.vm.ConfigSpec()
annotation_spec.annotation = str(self.params['annotation'])
task = vm.ReconfigVM_Task(annotation_spec)
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'annotation'}
if self.params['customvalues']:
self.customize_customvalues(vm_obj=vm)
if self.params['wait_for_ip_address'] or self.params['wait_for_customization'] or self.params['state'] in ['poweredon', 'restarted']:
set_vm_power_state(self.content, vm, 'poweredon', force=False)
if self.params['wait_for_ip_address']:
wait_for_vm_ip(self.content, vm, self.params['wait_for_ip_address_timeout'])
if self.params['wait_for_customization']:
is_customization_ok = self.wait_for_customization(vm)
if not is_customization_ok:
vm_facts = self.gather_facts(vm)
return {'changed': self.change_applied, 'failed': True, 'instance': vm_facts, 'op': 'customization'}
vm_facts = self.gather_facts(vm)
return {'changed': self.change_applied, 'failed': False, 'instance': vm_facts}
def get_snapshots_by_name_recursively(self, snapshots, snapname):
snap_obj = []
for snapshot in snapshots:
if snapshot.name == snapname:
snap_obj.append(snapshot)
else:
snap_obj = snap_obj + self.get_snapshots_by_name_recursively(snapshot.childSnapshotList, snapname)
return snap_obj
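# Illustrative: with a snapshot tree 'base' -> child 'patched', searching for
# 'patched' recurses through each childSnapshotList and collects every match;
# the caller treats any result length other than 1 as missing or ambiguous.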
def reconfigure_vm(self):
self.configspec = vim.vm.ConfigSpec()
self.configspec.deviceChange = []
# create the relocation spec
self.relospec = vim.vm.RelocateSpec()
self.relospec.deviceChange = []
self.configure_guestid(vm_obj=self.current_vm_obj)
self.configure_cpu_and_memory(vm_obj=self.current_vm_obj)
self.configure_hardware_params(vm_obj=self.current_vm_obj)
self.configure_disks(vm_obj=self.current_vm_obj)
self.configure_network(vm_obj=self.current_vm_obj)
self.configure_cdrom(vm_obj=self.current_vm_obj)
self.customize_customvalues(vm_obj=self.current_vm_obj)
self.configure_resource_alloc_info(vm_obj=self.current_vm_obj)
self.configure_vapp_properties(vm_obj=self.current_vm_obj)
if self.params['annotation'] and self.current_vm_obj.config.annotation != self.params['annotation']:
self.configspec.annotation = str(self.params['annotation'])
self.change_detected = True
if self.params['resource_pool']:
self.relospec.pool = self.get_resource_pool()
if self.relospec.pool != self.current_vm_obj.resourcePool:
task = self.current_vm_obj.RelocateVM_Task(spec=self.relospec)
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'relocate'}
# Only send VMware task if we see a modification
if self.change_detected:
task = None
try:
task = self.current_vm_obj.ReconfigVM_Task(spec=self.configspec)
except vim.fault.RestrictedVersion as e:
self.module.fail_json(msg="Failed to reconfigure virtual machine due to"
" product versioning restrictions: %s" % to_native(e.msg))
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'reconfig'}
# Rename VM
if self.params['uuid'] and self.params['name'] and self.params['name'] != self.current_vm_obj.config.name:
task = self.current_vm_obj.Rename_Task(self.params['name'])
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'rename'}
# Mark VM as Template
if self.params['is_template'] and not self.current_vm_obj.config.template:
try:
self.current_vm_obj.MarkAsTemplate()
self.change_applied = True
except vmodl.fault.NotSupported as e:
self.module.fail_json(msg="Failed to mark virtual machine [%s] "
"as template: %s" % (self.params['name'], e.msg))
# Mark Template as VM
elif not self.params['is_template'] and self.current_vm_obj.config.template:
resource_pool = self.get_resource_pool()
kwargs = dict(pool=resource_pool)
if self.params.get('esxi_hostname', None):
host_system_obj = self.select_host()
kwargs.update(host=host_system_obj)
try:
self.current_vm_obj.MarkAsVirtualMachine(**kwargs)
self.change_applied = True
except vim.fault.InvalidState as invalid_state:
self.module.fail_json(msg="Virtual machine is not marked"
" as template : %s" % to_native(invalid_state.msg))
except vim.fault.InvalidDatastore as invalid_ds:
self.module.fail_json(msg="Converting template to virtual machine"
" operation cannot be performed on the"
" target datastores: %s" % to_native(invalid_ds.msg))
except vim.fault.CannotAccessVmComponent as cannot_access:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" as operation unable access virtual machine"
" component: %s" % to_native(cannot_access.msg))
except vmodl.fault.InvalidArgument as invalid_argument:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" due to : %s" % to_native(invalid_argument.msg))
except Exception as generic_exc:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" due to generic error : %s" % to_native(generic_exc))
# Automatically update VMware UUID when converting template to VM.
# This avoids an interactive prompt during VM startup.
uuid_action = [x for x in self.current_vm_obj.config.extraConfig if x.key == "uuid.action"]
if not uuid_action:
uuid_action_opt = vim.option.OptionValue()
uuid_action_opt.key = "uuid.action"
uuid_action_opt.value = "create"
self.configspec.extraConfig.append(uuid_action_opt)
self.change_detected = True
# Customize the existing VM after reconfiguration, if requested
if 'existing_vm' in self.params['customization'] and self.params['customization']['existing_vm']:
if self.current_vm_obj.config.template:
self.module.fail_json(msg="VM is template, not support guest OS customization.")
if self.current_vm_obj.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
self.module.fail_json(msg="VM is not in poweroff state, can not do guest OS customization.")
cus_result = self.customize_exist_vm()
if cus_result['failed']:
return cus_result
vm_facts = self.gather_facts(self.current_vm_obj)
return {'changed': self.change_applied, 'failed': False, 'instance': vm_facts}
def customize_exist_vm(self):
task = None
# Find out if we need network customizations (find keys in the dictionary that require customization)
network_changes = False
for nw in self.params['networks']:
for key in nw:
# We don't need customizations for these keys
if key not in ('device_type', 'mac', 'name', 'vlan', 'type', 'start_connected'):
network_changes = True
break
if len(self.params['customization']) > 1 or network_changes or self.params.get('customization_spec'):
self.customize_vm(vm_obj=self.current_vm_obj)
try:
task = self.current_vm_obj.CustomizeVM_Task(self.customspec)
except vim.fault.CustomizationFault as e:
self.module.fail_json(msg="Failed to customization virtual machine due to CustomizationFault: %s" % to_native(e.msg))
except vim.fault.RuntimeFault as e:
self.module.fail_json(msg="failed to customization virtual machine due to RuntimeFault: %s" % to_native(e.msg))
except Exception as e:
self.module.fail_json(msg="failed to customization virtual machine due to fault: %s" % to_native(e.msg))
self.wait_for_task(task)
if task.info.state == 'error':
return {'changed': self.change_applied, 'failed': True, 'msg': task.info.error.msg, 'op': 'customize_exist'}
if self.params['wait_for_customization']:
set_vm_power_state(self.content, self.current_vm_obj, 'poweredon', force=False)
is_customization_ok = self.wait_for_customization(self.current_vm_obj)
if not is_customization_ok:
return {'changed': self.change_applied, 'failed': True, 'op': 'wait_for_customize_exist'}
return {'changed': self.change_applied, 'failed': False}
def wait_for_task(self, task, poll_interval=1):
"""
Wait for a VMware task to complete. Terminal states are 'error' and 'success'.
Inputs:
- task: the task to wait for
- poll_interval: polling interval to check the task, in seconds
Modifies:
- self.change_applied
"""
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.Task.html
# https://www.vmware.com/support/developer/vc-sdk/visdk25pubs/ReferenceGuide/vim.TaskInfo.html
# https://github.com/virtdevninja/pyvmomi-community-samples/blob/master/samples/tools/tasks.py
while task.info.state not in ['error', 'success']:
time.sleep(poll_interval)
self.change_applied = self.change_applied or task.info.state == 'success'
def get_vm_events(self, vm, eventTypeIdList):
byEntity = vim.event.EventFilterSpec.ByEntity(entity=vm, recursion="self")
filterSpec = vim.event.EventFilterSpec(entity=byEntity, eventTypeId=eventTypeIdList)
eventManager = self.content.eventManager
return eventManager.QueryEvent(filterSpec)
def wait_for_customization(self, vm, poll=10000, sleep=10):
thispoll = 0
while thispoll <= poll:
eventStarted = self.get_vm_events(vm, ['CustomizationStartedEvent'])
if len(eventStarted):
thispoll = 0
while thispoll <= poll:
eventsFinishedResult = self.get_vm_events(vm, ['CustomizationSucceeded', 'CustomizationFailed'])
if len(eventsFinishedResult):
if not isinstance(eventsFinishedResult[0], vim.event.CustomizationSucceeded):
self.module.fail_json(msg='Customization failed with error {0}:\n{1}'.format(
eventsFinishedResult[0]._wsdlName, eventsFinishedResult[0].fullFormattedMessage))
return False
break
else:
time.sleep(sleep)
thispoll += 1
return True
else:
time.sleep(sleep)
thispoll += 1
self.module.fail_json(msg='Waiting for customization timed out.')
return False
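# Note on the defaults above (illustrative arithmetic): poll=10000 iterations
# at sleep=10 seconds each allows up to 100000 s (roughly 27.7 hours) of
# waiting before the timeout failure is raised.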
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
state=dict(type='str', default='present',
choices=['absent', 'poweredoff', 'poweredon', 'present', 'rebootguest', 'restarted', 'shutdownguest', 'suspended']),
template=dict(type='str', aliases=['template_src']),
is_template=dict(type='bool', default=False),
annotation=dict(type='str', aliases=['notes']),
customvalues=dict(type='list', default=[]),
name=dict(type='str'),
name_match=dict(type='str', choices=['first', 'last'], default='first'),
uuid=dict(type='str'),
use_instance_uuid=dict(type='bool', default=False),
folder=dict(type='str'),
guest_id=dict(type='str'),
disk=dict(type='list', default=[]),
cdrom=dict(type=list_or_dict, default=[]),
hardware=dict(type='dict', default={}),
force=dict(type='bool', default=False),
datacenter=dict(type='str', default='ha-datacenter'),
esxi_hostname=dict(type='str'),
cluster=dict(type='str'),
wait_for_ip_address=dict(type='bool', default=False),
wait_for_ip_address_timeout=dict(type='int', default=300),
state_change_timeout=dict(type='int', default=0),
snapshot_src=dict(type='str'),
linked_clone=dict(type='bool', default=False),
networks=dict(type='list', default=[]),
resource_pool=dict(type='str'),
customization=dict(type='dict', default={}, no_log=True),
customization_spec=dict(type='str', default=None),
wait_for_customization=dict(type='bool', default=False),
vapp_properties=dict(type='list', default=[]),
datastore=dict(type='str'),
convert=dict(type='str', choices=['thin', 'thick', 'eagerzeroedthick']),
delete_from_inventory=dict(type='bool', default=False),
)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True,
mutually_exclusive=[
['cluster', 'esxi_hostname'],
],
required_one_of=[
['name', 'uuid'],
],
)
result = {'failed': False, 'changed': False}
pyv = PyVmomiHelper(module)
# Check if the VM exists before continuing
vm = pyv.get_vm()
# VM already exists
if vm:
if module.params['state'] == 'absent':
# destroy it
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
current_powerstate=vm.summary.runtime.powerState.lower(),
desired_operation='remove_vm',
)
module.exit_json(**result)
if module.params['force']:
# has to be poweredoff first
set_vm_power_state(pyv.content, vm, 'poweredoff', module.params['force'])
result = pyv.remove_vm(vm, module.params['delete_from_inventory'])
elif module.params['state'] == 'present':
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
desired_operation='reconfigure_vm',
)
module.exit_json(**result)
result = pyv.reconfigure_vm()
elif module.params['state'] in ['poweredon', 'poweredoff', 'restarted', 'suspended', 'shutdownguest', 'rebootguest']:
if module.check_mode:
result.update(
vm_name=vm.name,
changed=True,
current_powerstate=vm.summary.runtime.powerState.lower(),
desired_operation='set_vm_power_state',
)
module.exit_json(**result)
# set powerstate
tmp_result = set_vm_power_state(pyv.content, vm, module.params['state'], module.params['force'], module.params['state_change_timeout'])
if tmp_result['changed']:
result["changed"] = True
if module.params['state'] in ['poweredon', 'restarted', 'rebootguest'] and module.params['wait_for_ip_address']:
wait_result = wait_for_vm_ip(pyv.content, vm, module.params['wait_for_ip_address_timeout'])
if not wait_result:
module.fail_json(msg='Waiting for IP address timed out')
tmp_result['instance'] = wait_result
if not tmp_result["failed"]:
result["failed"] = False
result['instance'] = tmp_result['instance']
if tmp_result["failed"]:
result["failed"] = True
result["msg"] = tmp_result["msg"]
else:
# This should not happen
raise AssertionError()
# VM doesn't exist
else:
if module.params['state'] in ['poweredon', 'poweredoff', 'present', 'restarted', 'suspended']:
if module.check_mode:
result.update(
changed=True,
desired_operation='deploy_vm',
)
module.exit_json(**result)
result = pyv.deploy_vm()
if result['failed']:
module.fail_json(msg='Failed to create a virtual machine: %s' % result['msg'])
if result['failed']:
module.fail_json(**result)
else:
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,770 |
gitlab modules: user/password method is deprecated
|
##### SUMMARY
The `python-gitlab` library released a new version, [v1.13.0](https://github.com/python-gitlab/python-gitlab/releases/tag/v1.13.0). User/password authentication has been removed completely, causing all Ansible GitLab modules to fail...
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
gitlab_group
gitlab_deploy_key
gitlab_hook
gitlab_project
gitlab_project_variable
gitlab_runner
gitlab_user
##### ANSIBLE VERSION
All versions
|
https://github.com/ansible/ansible/issues/64770
|
https://github.com/ansible/ansible/pull/64989
|
bc479fcafc3e73cb660577c6a4ef1480277bbd4f
|
4e6fa59ec1b5f31332b5f19f6e51607ada6345dd
| 2019-11-13T09:59:06Z |
python
| 2019-11-19T10:00:34Z |
changelogs/fragments/64989-gitlab-handle-lib-new-version.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,770 |
gitlab modules: user/password method is deprecated
|
##### SUMMARY
The `python-gitlab` library released a new version, [v1.13.0](https://github.com/python-gitlab/python-gitlab/releases/tag/v1.13.0). User/password authentication has been removed completely, causing all Ansible GitLab modules to fail...
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
gitlab_group
gitlab_deploy_key
gitlab_hook
gitlab_project
gitlab_project_variable
gitlab_runner
gitlab_user
##### ANSIBLE VERSION
All versions
|
https://github.com/ansible/ansible/issues/64770
|
https://github.com/ansible/ansible/pull/64989
|
bc479fcafc3e73cb660577c6a4ef1480277bbd4f
|
4e6fa59ec1b5f31332b5f19f6e51607ada6345dd
| 2019-11-13T09:59:06Z |
python
| 2019-11-19T10:00:34Z |
lib/ansible/module_utils/gitlab.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2018, Marcus Watkins <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
from ansible.module_utils.urls import fetch_url
try:
from urllib import quote_plus # Python 2.X
except ImportError:
from urllib.parse import quote_plus # Python 3+
def request(module, api_url, project, path, access_token, private_token, rawdata='', method='GET'):
    """Call a project-scoped GitLab REST API endpoint and return a (success, payload) tuple."""
url = "%s/v4/projects/%s%s" % (api_url, quote_plus(project), path)
headers = {}
if access_token:
headers['Authorization'] = "Bearer %s" % access_token
else:
headers['Private-Token'] = private_token
headers['Accept'] = "application/json"
headers['Content-Type'] = "application/json"
response, info = fetch_url(module=module, url=url, headers=headers, data=rawdata, method=method)
status = info['status']
content = ""
if response:
content = response.read()
if status == 204:
return True, content
elif status == 200 or status == 201:
return True, json.loads(content)
else:
return False, str(status) + ": " + content
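# A minimal usage sketch for request() (hypothetical values; 'module' must be a
# real AnsibleModule instance so fetch_url() can honour its connection options):
#
#   ok, data = request(module,
#                      api_url='https://gitlab.example.com/api',
#                      project='my_group/my_project',
#                      path='/hooks',
#                      access_token=None,
#                      private_token='s3cr3t')
#   if not ok:
#       module.fail_json(msg="GitLab API error: %s" % data)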
def findProject(gitlab_instance, identifier):
    # Try the identifier as-is first (numeric id or full path of the project).
    try:
        project = gitlab_instance.projects.get(identifier)
    except Exception:
        # Fall back to a path relative to the authenticated user's namespace.
        current_user = gitlab_instance.user
        try:
            project = gitlab_instance.projects.get(current_user.username + '/' + identifier)
        except Exception:
            return None

    return project
def findGroup(gitlab_instance, identifier):
    # The identifier may be a numeric id or the full path of the group.
    try:
        group = gitlab_instance.groups.get(identifier)
    except Exception:
        return None

    return group
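# The linked issue is caused by python-gitlab 1.13.0 dropping user/password
# authentication. A sketch of one possible mitigation (an assumption about the
# eventual fix, not the merged change verbatim): gate the email/password keyword
# arguments on the installed library version and otherwise rely on a token.
#
#   from distutils.version import LooseVersion
#   import gitlab
#
#   def gitlab_authentication_sketch(module):
#       params = module.params
#       if LooseVersion(gitlab.__version__) < LooseVersion('1.13.0'):
#           instance = gitlab.Gitlab(url=params['api_url'], ssl_verify=params['validate_certs'],
#                                    email=params['api_username'], password=params['api_password'],
#                                    private_token=params['api_token'], api_version=4)
#       else:
#           instance = gitlab.Gitlab(url=params['api_url'], ssl_verify=params['validate_certs'],
#                                    private_token=params['api_token'], api_version=4)
#       instance.auth()  # raises GitlabAuthenticationError on bad credentials
#       return instance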
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,770 |
gitlab modules: user/password method is deprecated
|
##### SUMMARY
The `python-gitlab` library released a new version, [v1.13.0](https://github.com/python-gitlab/python-gitlab/releases/tag/v1.13.0). User/password authentication has been removed completely, causing all Ansible GitLab modules to fail...
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
gitlab_group
gitlab_deploy_key
gitlab_hook
gitlab_project
gitlab_project_variable
gitlab_runner
gitlab_user
##### ANSIBLE VERSION
All versions
|
https://github.com/ansible/ansible/issues/64770
|
https://github.com/ansible/ansible/pull/64989
|
bc479fcafc3e73cb660577c6a4ef1480277bbd4f
|
4e6fa59ec1b5f31332b5f19f6e51607ada6345dd
| 2019-11-13T09:59:06Z |
python
| 2019-11-19T10:00:34Z |
lib/ansible/modules/source_control/gitlab/gitlab_deploy_key.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2018, Marcus Watkins <[email protected]>
# Based on code:
# Copyright: (c) 2013, Phillip Gentry <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
module: gitlab_deploy_key
short_description: Manages GitLab project deploy keys
description:
  - Adds, updates and removes project deploy keys.
version_added: "2.6"
author:
- Marcus Watkins (@marwatk)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
api_token:
description:
- GitLab token for logging in.
version_added: "2.8"
type: str
project:
description:
      - ID or full path of the project in the form of group/name.
required: true
type: str
title:
description:
- Deploy key's title.
required: true
type: str
key:
description:
      - Deploy key.
required: true
type: str
can_push:
description:
- Whether this key can push to the project.
type: bool
default: no
state:
description:
      - When C(present), the deploy key is added to the project if it doesn't exist.
      - When C(absent), it will be removed from the project if it exists.
required: true
default: present
type: str
choices: [ "present", "absent" ]
'''
EXAMPLES = '''
- name: "Adding a project deploy key"
gitlab_deploy_key:
api_url: https://gitlab.example.com/
api_token: "{{ api_token }}"
project: "my_group/my_project"
title: "Jenkins CI"
state: present
key: "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIEAiPWx6WM4lhHNedGfBpPJNPpZ7yKu+dnn1SJejgt4596k6YjzGGphH2TUxwKzxcKDKKezwkpfnxPkSMkuEspGRt/aZZ9w..."
- name: "Update the above deploy key to add push access"
gitlab_deploy_key:
api_url: https://gitlab.example.com/
api_token: "{{ api_token }}"
project: "my_group/my_project"
title: "Jenkins CI"
state: present
can_push: yes
- name: "Remove the previous deploy key from the project"
gitlab_deploy_key:
api_url: https://gitlab.example.com/
api_token: "{{ api_token }}"
project: "my_group/my_project"
state: absent
key: "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAIEAiPWx6WM4lhHNedGfBpPJNPpZ7yKu+dnn1SJejgt4596k6YjzGGphH2TUxwKzxcKDKKezwkpfnxPkSMkuEspGRt/aZZ9w..."
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: key is already in use"
deploy_key:
description: API object
returned: always
type: dict
'''
import re
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findProject
class GitLabDeployKey(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.deployKeyObject = None
    '''
    @param project Project object
    @param key_title Title of the key
    @param key_key String of the key
    @param options Deploy key options (e.g. can_push)
    '''
def createOrUpdateDeployKey(self, project, key_title, key_key, options):
changed = False
        # Because we have already called existsDeployKey() in main()
if self.deployKeyObject is None:
deployKey = self.createDeployKey(project, {
'title': key_title,
'key': key_key,
'can_push': options['can_push']})
changed = True
else:
changed, deployKey = self.updateDeployKey(self.deployKeyObject, {
'can_push': options['can_push']})
self.deployKeyObject = deployKey
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the deploy key %s" % key_title)
try:
deployKey.save()
except Exception as e:
self._module.fail_json(msg="Failed to update deploy key: %s " % e)
return True
else:
return False
'''
@param project Project Object
@param arguments Attributes of the deployKey
'''
def createDeployKey(self, project, arguments):
if self._module.check_mode:
return True
try:
deployKey = project.keys.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create deploy key: %s " % to_native(e))
return deployKey
'''
@param deployKey Deploy Key Object
@param arguments Attributes of the deployKey
'''
def updateDeployKey(self, deployKey, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(deployKey, arg_key) != arguments[arg_key]:
setattr(deployKey, arg_key, arguments[arg_key])
changed = True
return (changed, deployKey)
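    # The diff-and-set loop in updateDeployKey() can be exercised without any
    # GitLab connection; a self-contained illustration (hypothetical plain object,
    # not a python-gitlab one -- None values mean "leave the attribute unchanged"):
    #
    #   from types import SimpleNamespace
    #   key = SimpleNamespace(title='Jenkins CI', can_push=False)
    #   changed = False
    #   for attr, value in {'can_push': True, 'title': None}.items():
    #       if value is not None and getattr(key, attr) != value:
    #           setattr(key, attr, value)
    #           changed = True
    #   assert changed and key.can_push is True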
'''
@param project Project object
@param key_title Title of the key
'''
def findDeployKey(self, project, key_title):
deployKeys = project.keys.list()
for deployKey in deployKeys:
            if deployKey.title == key_title:
return deployKey
'''
@param project Project object
@param key_title Title of the key
'''
def existsDeployKey(self, project, key_title):
        # When the deploy key exists, the object is stored in self.deployKeyObject.
deployKey = self.findDeployKey(project, key_title)
if deployKey:
self.deployKeyObject = deployKey
return True
return False
def deleteDeployKey(self):
if self._module.check_mode:
return True
return self.deployKeyObject.delete()
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
api_token=dict(type='str', no_log=True),
state=dict(type='str', default="present", choices=["absent", "present"]),
project=dict(type='str', required=True),
key=dict(type='str', required=True),
can_push=dict(type='bool', default=False),
title=dict(type='str', required=True)
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token']
],
required_together=[
['api_username', 'api_password']
],
required_one_of=[
['api_username', 'api_token']
],
supports_check_mode=True,
)
gitlab_url = re.sub('/api.*', '', module.params['api_url'])
validate_certs = module.params['validate_certs']
gitlab_user = module.params['api_username']
gitlab_password = module.params['api_password']
gitlab_token = module.params['api_token']
state = module.params['state']
project_identifier = module.params['project']
key_title = module.params['title']
key_keyfile = module.params['key']
key_can_push = module.params['can_push']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e))
gitlab_deploy_key = GitLabDeployKey(module, gitlab_instance)
project = findProject(gitlab_instance, project_identifier)
if project is None:
module.fail_json(msg="Failed to create deploy key: project %s doesn't exists" % project_identifier)
deployKey_exists = gitlab_deploy_key.existsDeployKey(project, key_title)
if state == 'absent':
if deployKey_exists:
gitlab_deploy_key.deleteDeployKey()
module.exit_json(changed=True, msg="Successfully deleted deploy key %s" % key_title)
else:
module.exit_json(changed=False, msg="Deploy key deleted or does not exists")
if state == 'present':
if gitlab_deploy_key.createOrUpdateDeployKey(project, key_title, key_keyfile, {'can_push': key_can_push}):
module.exit_json(changed=True, msg="Successfully created or updated the deploy key %s" % key_title,
deploy_key=gitlab_deploy_key.deployKeyObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the deploy key %s" % key_title,
deploy_key=gitlab_deploy_key.deployKeyObject._attrs)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,770 |
gitlab modules: user/password method is deprecated
|
##### SUMMARY
The `python-gitlab` library released a new version, [v1.13.0](https://github.com/python-gitlab/python-gitlab/releases/tag/v1.13.0). User/password authentication has been removed completely, causing all Ansible GitLab modules to fail...
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
gitlab_group
gitlab_deploy_key
gitlab_hook
gitlab_project
gitlab_project_variable
gitlab_runner
gitlab_user
##### ANSIBLE VERSION
All versions
|
https://github.com/ansible/ansible/issues/64770
|
https://github.com/ansible/ansible/pull/64989
|
bc479fcafc3e73cb660577c6a4ef1480277bbd4f
|
4e6fa59ec1b5f31332b5f19f6e51607ada6345dd
| 2019-11-13T09:59:06Z |
python
| 2019-11-19T10:00:34Z |
lib/ansible/modules/source_control/gitlab/gitlab_group.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Guillaume Martinez ([email protected])
# Copyright: (c) 2015, Werner Dijkerman ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: gitlab_group
short_description: Creates/updates/deletes GitLab Groups
description:
- When the group does not exist in GitLab, it will be created.
- When the group does exist and state=absent, the group will be deleted.
version_added: "2.1"
author:
- Werner Dijkerman (@dj-wasabi)
- Guillaume Martinez (@Lunik)
requirements:
- python >= 2.7
- python-gitlab python module
extends_documentation_fragment:
- auth_basic
options:
api_token:
description:
- GitLab token for logging in.
type: str
name:
description:
- Name of the group you want to create.
required: true
type: str
path:
description:
      - The path of the group you want to create; this will be api_url/group_path.
- If not supplied, the group_name will be used.
type: str
description:
description:
- A description for the group.
version_added: "2.7"
type: str
state:
description:
      - Create or delete the group.
      - Possible values are present and absent.
default: present
type: str
choices: ["present", "absent"]
parent:
description:
      - Allows creating subgroups.
      - ID or full path of the parent group in the form of group/name.
version_added: "2.8"
type: str
visibility:
description:
- Default visibility of the group
version_added: "2.8"
choices: ["private", "internal", "public"]
default: private
type: str
'''
EXAMPLES = '''
- name: "Delete GitLab Group"
gitlab_group:
api_url: https://gitlab.example.com/
api_token: "{{ access_token }}"
validate_certs: False
name: my_first_group
state: absent
- name: "Create GitLab Group"
gitlab_group:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: my_first_group
path: my_first_group
state: present
# The group will be created at https://gitlab.dj-wasabi.local/super_parent/parent/my_first_group
- name: "Create GitLab SubGroup"
gitlab_group:
api_url: https://gitlab.example.com/
validate_certs: True
api_username: dj-wasabi
api_password: "MySecretPassword"
name: my_first_group
path: my_first_group
state: present
parent: "super_parent/parent"
'''
RETURN = '''
msg:
description: Success or failure message
returned: always
type: str
sample: "Success"
result:
description: json parsed response from the server
returned: always
type: dict
error:
description: the error message returned by the GitLab API
returned: failed
type: str
sample: "400: path is already in use"
group:
description: API object
returned: always
type: dict
'''
import traceback
GITLAB_IMP_ERR = None
try:
import gitlab
HAS_GITLAB_PACKAGE = True
except Exception:
GITLAB_IMP_ERR = traceback.format_exc()
HAS_GITLAB_PACKAGE = False
from ansible.module_utils.api import basic_auth_argument_spec
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
from ansible.module_utils.gitlab import findGroup
class GitLabGroup(object):
def __init__(self, module, gitlab_instance):
self._module = module
self._gitlab = gitlab_instance
self.groupObject = None
'''
@param group Group object
'''
def getGroupId(self, group):
if group is not None:
return group.id
return None
'''
@param name Name of the group
@param parent Parent group full path
@param options Group options
'''
def createOrUpdateGroup(self, name, parent, options):
changed = False
        # Because we have already called existsGroup() in main()
if self.groupObject is None:
parent_id = self.getGroupId(parent)
group = self.createGroup({
'name': name,
'path': options['path'],
'parent_id': parent_id,
'visibility': options['visibility']})
changed = True
else:
changed, group = self.updateGroup(self.groupObject, {
'name': name,
'description': options['description'],
'visibility': options['visibility']})
self.groupObject = group
if changed:
if self._module.check_mode:
self._module.exit_json(changed=True, msg="Successfully created or updated the group %s" % name)
try:
group.save()
except Exception as e:
self._module.fail_json(msg="Failed to update group: %s " % e)
return True
else:
return False
'''
@param arguments Attributes of the group
'''
def createGroup(self, arguments):
if self._module.check_mode:
return True
try:
group = self._gitlab.groups.create(arguments)
except (gitlab.exceptions.GitlabCreateError) as e:
self._module.fail_json(msg="Failed to create group: %s " % to_native(e))
return group
'''
@param group Group Object
@param arguments Attributes of the group
'''
def updateGroup(self, group, arguments):
changed = False
for arg_key, arg_value in arguments.items():
if arguments[arg_key] is not None:
if getattr(group, arg_key) != arguments[arg_key]:
setattr(group, arg_key, arguments[arg_key])
changed = True
return (changed, group)
def deleteGroup(self):
group = self.groupObject
if len(group.projects.list()) >= 1:
self._module.fail_json(
msg="There are still projects in this group. These needs to be moved or deleted before this group can be removed.")
else:
if self._module.check_mode:
return True
try:
group.delete()
except Exception as e:
self._module.fail_json(msg="Failed to delete group: %s " % to_native(e))
    '''
    @param project_identifier Full path of the group, including any parent group path: <parent_path>/<group_path>
    '''
def existsGroup(self, project_identifier):
        # When the group exists, the object is stored in self.groupObject.
group = findGroup(self._gitlab, project_identifier)
if group:
self.groupObject = group
return True
return False
def main():
argument_spec = basic_auth_argument_spec()
argument_spec.update(dict(
api_token=dict(type='str', no_log=True),
name=dict(type='str', required=True),
path=dict(type='str'),
description=dict(type='str'),
state=dict(type='str', default="present", choices=["absent", "present"]),
parent=dict(type='str'),
visibility=dict(type='str', default="private", choices=["internal", "private", "public"]),
))
module = AnsibleModule(
argument_spec=argument_spec,
mutually_exclusive=[
['api_username', 'api_token'],
['api_password', 'api_token'],
],
required_together=[
['api_username', 'api_password'],
],
required_one_of=[
['api_username', 'api_token']
],
supports_check_mode=True,
)
validate_certs = module.params['validate_certs']
gitlab_url = module.params['api_url']
gitlab_user = module.params['api_username']
gitlab_password = module.params['api_password']
gitlab_token = module.params['api_token']
group_name = module.params['name']
group_path = module.params['path']
description = module.params['description']
state = module.params['state']
parent_identifier = module.params['parent']
group_visibility = module.params['visibility']
if not HAS_GITLAB_PACKAGE:
module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR)
try:
gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password,
private_token=gitlab_token, api_version=4)
gitlab_instance.auth()
except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e))
except (gitlab.exceptions.GitlabHttpError) as e:
module.fail_json(msg="Failed to connect to GitLab server: %s. \
GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2" % to_native(e))
# Define default group_path based on group_name
if group_path is None:
group_path = group_name.replace(" ", "_")
gitlab_group = GitLabGroup(module, gitlab_instance)
parent_group = None
if parent_identifier:
parent_group = findGroup(gitlab_instance, parent_identifier)
if not parent_group:
module.fail_json(msg="Failed create GitLab group: Parent group doesn't exists")
group_exists = gitlab_group.existsGroup(parent_group.full_path + '/' + group_path)
else:
group_exists = gitlab_group.existsGroup(group_path)
if state == 'absent':
if group_exists:
gitlab_group.deleteGroup()
module.exit_json(changed=True, msg="Successfully deleted group %s" % group_name)
else:
module.exit_json(changed=False, msg="Group deleted or does not exists")
if state == 'present':
if gitlab_group.createOrUpdateGroup(group_name, parent_group, {
"path": group_path,
"description": description,
"visibility": group_visibility}):
module.exit_json(changed=True, msg="Successfully created or updated the group %s" % group_name, group=gitlab_group.groupObject._attrs)
else:
module.exit_json(changed=False, msg="No need to update the group %s" % group_name, group=gitlab_group.groupObject._attrs)
if __name__ == '__main__':
main()
|