Dataset schema (one row per updated file; the rows below are pipe-delimited in this column order):
- status: string (1 class)
- repo_name: string (31 values)
- repo_url: string (31 values)
- issue_id: int64 (1 to 104k)
- title: string (length 4 to 369)
- body: string (length 0 to 254k, nullable)
- issue_url: string (length 37 to 56)
- pull_url: string (length 37 to 54)
- before_fix_sha: string (length 40)
- after_fix_sha: string (length 40)
- report_datetime: timestamp[us, tz=UTC]
- language: string (5 values)
- commit_datetime: timestamp[us, tz=UTC]
- updated_file: string (length 4 to 188)
- file_content: string (length 0 to 5.12M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,015 |
win_stat has option which should be removed for Ansible 2.10
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, this module has an option marked with `removed_in_version='2.10'`. This option should be removed before Ansible 2.10 is released.
```
lib/ansible/modules/windows/win_stat.ps1:0:0: ansible-deprecated-version: Argument 'get_md5' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
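For context, an editor's sketch (not part of the original issue text or the linked PR) of how such a deprecation appears in an `Ansible.Basic` argument spec, using the `get_md5` case flagged above. Removing the option amounts to deleting that entry, together with its documentation and the code paths that read it, after which the module rejects the parameter as unsupported:

```
# Before the fix: the option carries a removal marker that the
# ansible-deprecated-version sanity test checks against the current version.
$spec = @{
    options = @{
        get_checksum = @{ type = 'bool'; default = $true }
        get_md5      = @{ type = 'bool'; default = $false; removed_in_version = '2.9' }
    }
}

# After the fix: the entry is gone, so passing get_md5 is reported by
# Ansible.Basic as an unsupported parameter.
$spec = @{
    options = @{
        get_checksum = @{ type = 'bool'; default = $true }
    }
}
```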
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/windows/win_stat.ps1
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67015
|
https://github.com/ansible/ansible/pull/67105
|
1bb94ec92fe837a30177b192a477522b30132aa1
|
78470c43c21d834a9513fb309fb219b74a5d1cee
| 2020-02-01T13:51:09Z |
python
| 2020-02-04T23:02:04Z |
lib/ansible/modules/windows/win_stat.ps1
|
#!powershell
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#AnsibleRequires -CSharpUtil Ansible.Basic
#Requires -Module Ansible.ModuleUtils.FileUtil
#Requires -Module Ansible.ModuleUtils.LinkUtil
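# Returns the number of seconds between $start_date and $end_date; used below to
# express file times as seconds since the 1970-01-01 epoch.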
function ConvertTo-Timestamp($start_date, $end_date) {
if ($start_date -and $end_date) {
return (New-TimeSpan -Start $start_date -End $end_date).TotalSeconds
}
}
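# Computes the lowercase hex digest of the file at $path using the requested
# algorithm. The handle is opened with FileShare ReadWrite so files held open by
# other processes can still be hashed, and is always disposed afterwards.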
function Get-FileChecksum($path, $algorithm) {
switch ($algorithm) {
'md5' { $sp = New-Object -TypeName System.Security.Cryptography.MD5CryptoServiceProvider }
'sha1' { $sp = New-Object -TypeName System.Security.Cryptography.SHA1CryptoServiceProvider }
'sha256' { $sp = New-Object -TypeName System.Security.Cryptography.SHA256CryptoServiceProvider }
'sha384' { $sp = New-Object -TypeName System.Security.Cryptography.SHA384CryptoServiceProvider }
'sha512' { $sp = New-Object -TypeName System.Security.Cryptography.SHA512CryptoServiceProvider }
default { Fail-Json -obj $result -message "Unsupported hash algorithm supplied '$algorithm'" }
}
$fp = [System.IO.File]::Open($path, [System.IO.Filemode]::Open, [System.IO.FileAccess]::Read, [System.IO.FileShare]::ReadWrite)
try {
$hash = [System.BitConverter]::ToString($sp.ComputeHash($fp)).Replace("-", "").ToLower()
} finally {
$fp.Dispose()
}
return $hash
}
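# Returns a pair of (file info, link info) for $Path. With -Follow, symbolic links
# and junction points are resolved recursively until the final target is reached.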
function Get-FileInfo {
param([String]$Path, [Switch]$Follow)
$info = Get-AnsibleItem -Path $Path -ErrorAction SilentlyContinue
$link_info = $null
if ($null -ne $info) {
try {
$link_info = Get-Link -link_path $info.FullName
} catch {
$module.Warn("Failed to check/get link info for file: $($_.Exception.Message)")
}
# If follow=true we want to follow the link all the way back to root object
if ($Follow -and $null -ne $link_info -and $link_info.Type -in @("SymbolicLink", "JunctionPoint")) {
$info, $link_info = Get-FileInfo -Path $link_info.AbsolutePath -Follow
}
}
return $info, $link_info
}
$spec = @{
options = @{
path = @{ type='path'; required=$true; aliases=@( 'dest', 'name' ) }
get_checksum = @{ type='bool'; default=$true }
checksum_algorithm = @{ type='str'; default='sha1'; choices=@( 'md5', 'sha1', 'sha256', 'sha384', 'sha512' ) }
get_md5 = @{ type='bool'; default=$false; removed_in_version='2.9' }
follow = @{ type='bool'; default=$false }
}
supports_check_mode = $true
}
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)
$path = $module.Params.path
$get_md5 = $module.Params.get_md5
$get_checksum = $module.Params.get_checksum
$checksum_algorithm = $module.Params.checksum_algorithm
$follow = $module.Params.follow
$module.Result.stat = @{ exists=$false }
Load-LinkUtils
$info, $link_info = Get-FileInfo -Path $path -Follow:$follow
If ($null -ne $info) {
$epoch_date = Get-Date -Date "01/01/1970"
$attributes = @()
foreach ($attribute in ($info.Attributes -split ',')) {
$attributes += $attribute.Trim()
}
# default values that are always set, specific values are set below this
# but are kept commented for easier readability
$stat = @{
exists = $true
attributes = $info.Attributes.ToString()
isarchive = ($attributes -contains "Archive")
isdir = $false
ishidden = ($attributes -contains "Hidden")
isjunction = $false
islnk = $false
isreadonly = ($attributes -contains "ReadOnly")
isreg = $false
isshared = $false
nlink = 1 # Number of links to the file (hard links), overridden below if islnk
# lnk_target = islnk or isjunction Target of the symlink. Note that relative paths remain relative
# lnk_source = islnk or isjunction Target of the symlink normalized for the remote filesystem
hlnk_targets = @()
creationtime = (ConvertTo-Timestamp -start_date $epoch_date -end_date $info.CreationTime)
lastaccesstime = (ConvertTo-Timestamp -start_date $epoch_date -end_date $info.LastAccessTime)
lastwritetime = (ConvertTo-Timestamp -start_date $epoch_date -end_date $info.LastWriteTime)
# size = a file and directory - calculated below
path = $info.FullName
filename = $info.Name
# extension = a file
# owner = set outside this dict in case it fails
# sharename = a directory and isshared is True
# checksum = a file and get_checksum: True
# md5 = a file and get_md5: True
}
try {
$stat.owner = $info.GetAccessControl().Owner
} catch {
# may not have rights, historical behaviour was to just set to $null
# due to ErrorActionPreference being set to "Continue"
$stat.owner = $null
}
# values that are set according to the type of file
if ($info.Attributes.HasFlag([System.IO.FileAttributes]::Directory)) {
$stat.isdir = $true
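# The -replace doubles the backslashes so the path is valid inside the WQL filter
# used to check whether this directory is shared.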
$share_info = Get-CimInstance -ClassName Win32_Share -Filter "Path='$($stat.path -replace '\\', '\\')'"
if ($null -ne $share_info) {
$stat.isshared = $true
$stat.sharename = $share_info.Name
}
try {
$size = 0
foreach ($file in $info.EnumerateFiles("*", [System.IO.SearchOption]::AllDirectories)) {
$size += $file.Length
}
$stat.size = $size
} catch {
$stat.size = 0
}
} else {
$stat.extension = $info.Extension
$stat.isreg = $true
$stat.size = $info.Length
if ($get_md5) {
try {
$stat.md5 = Get-FileChecksum -path $path -algorithm "md5"
} catch {
$module.FailJson("Failed to get MD5 hash of file, remove get_md5 to ignore this error: $($_.Exception.Message)", $_)
}
}
if ($get_checksum) {
try {
$stat.checksum = Get-FileChecksum -path $path -algorithm $checksum_algorithm
} catch {
$module.FailJson("Failed to get hash of file, set get_checksum to False to ignore this error: $($_.Exception.Message)", $_)
}
}
}
# Get symbolic link, junction point, hard link info
if ($null -ne $link_info) {
switch ($link_info.Type) {
"SymbolicLink" {
$stat.islnk = $true
$stat.isreg = $false
$stat.lnk_target = $link_info.TargetPath
$stat.lnk_source = $link_info.AbsolutePath
break
}
"JunctionPoint" {
$stat.isjunction = $true
$stat.isreg = $false
$stat.lnk_target = $link_info.TargetPath
$stat.lnk_source = $link_info.AbsolutePath
break
}
"HardLink" {
$stat.lnk_type = "hard"
$stat.nlink = $link_info.HardTargets.Count
# remove current path from the targets
$hlnk_targets = $link_info.HardTargets | Where-Object { $_ -ne $stat.path }
$stat.hlnk_targets = @($hlnk_targets)
break
}
}
}
$module.Result.stat = $stat
}
$module.ExitJson()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,015 |
win_stat has option which should be removed for Ansible 2.10
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, this module has an option marked with `removed_in_version='2.10'`. This option should be removed before Ansible 2.10 is released.
```
lib/ansible/modules/windows/win_stat.ps1:0:0: ansible-deprecated-version: Argument 'get_md5' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/windows/win_stat.ps1
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67015
|
https://github.com/ansible/ansible/pull/67105
|
1bb94ec92fe837a30177b192a477522b30132aa1
|
78470c43c21d834a9513fb309fb219b74a5d1cee
| 2020-02-01T13:51:09Z |
python
| 2020-02-04T23:02:04Z |
lib/ansible/modules/windows/win_stat.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# this is a windows documentation stub. actual code lives in the .ps1
# file of the same name
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: win_stat
version_added: "1.7"
short_description: Get information about Windows files
description:
- Returns information about a Windows file.
- For non-Windows targets, use the M(stat) module instead.
options:
path:
description:
- The full path of the file/object to get the facts of; both forward and
back slashes are accepted.
type: path
required: yes
aliases: [ dest, name ]
get_md5:
description:
- Whether to return the MD5 checksum of the file. Between Ansible 1.9
and Ansible 2.2 this is no longer an MD5, but a SHA1 instead. As of Ansible
2.3 this is back to an MD5. Will return None if the host is unable to
use the specified algorithm.
- The default of this option changed from C(yes) to C(no) in Ansible 2.5
and will be removed altogether in Ansible 2.9.
- Use C(get_checksum=yes) with C(checksum_algorithm=md5) to return an
md5 hash under the C(checksum) return value.
type: bool
default: no
get_checksum:
description:
- Whether to return a checksum of the file (default sha1)
type: bool
default: yes
version_added: "2.1"
checksum_algorithm:
description:
- Algorithm to determine checksum of file.
- Will throw an error if the host is unable to use the specified algorithm.
type: str
default: sha1
choices: [ md5, sha1, sha256, sha384, sha512 ]
version_added: "2.3"
follow:
description:
- Whether to follow symlinks or junction points.
- If C(path) points to another link, that link will be followed until
no more links are found.
type: bool
default: no
version_added: "2.8"
seealso:
- module: stat
- module: win_acl
- module: win_file
- module: win_owner
author:
- Chris Church (@cchurch)
'''
EXAMPLES = r'''
- name: Obtain information about a file
win_stat:
path: C:\foo.ini
register: file_info
- name: Obtain information about a folder
win_stat:
path: C:\bar
register: folder_info
- name: Get MD5 checksum of a file
win_stat:
path: C:\foo.ini
get_checksum: yes
checksum_algorithm: md5
register: md5_checksum
- debug:
var: md5_checksum.stat.checksum
- name: Get SHA1 checksum of file
win_stat:
path: C:\foo.ini
get_checksum: yes
register: sha1_checksum
- debug:
var: sha1_checksum.stat.checksum
- name: Get SHA256 checksum of file
win_stat:
path: C:\foo.ini
get_checksum: yes
checksum_algorithm: sha256
register: sha256_checksum
- debug:
var: sha256_checksum.stat.checksum
'''
RETURN = r'''
changed:
description: Whether anything was changed
returned: always
type: bool
sample: true
stat:
description: dictionary containing all the stat data
returned: success
type: complex
contains:
attributes:
description: Attributes of the file at path in raw form.
returned: success, path exists
type: str
sample: "Archive, Hidden"
checksum:
description: The checksum of a file based on checksum_algorithm specified.
returned: success, path exists, path is a file, get_checksum == True and the
specified checksum_algorithm is supported
type: str
sample: 09cb79e8fc7453c84a07f644e441fd81623b7f98
creationtime:
description: The create time of the file represented in seconds since epoch.
returned: success, path exists
type: float
sample: 1477984205.15
exists:
description: If the path exists or not.
returned: success
type: bool
sample: true
extension:
description: The extension of the file at path.
returned: success, path exists, path is a file
type: str
sample: ".ps1"
filename:
description: The name of the file (without path).
returned: success, path exists, path is a file
type: str
sample: foo.ini
hlnk_targets:
description: List of other files pointing to the same file (hard links), excludes the current file.
returned: success, path exists
type: list
sample:
- C:\temp\file.txt
- C:\Windows\update.log
isarchive:
description: If the path is ready for archiving or not.
returned: success, path exists
type: bool
sample: true
isdir:
description: If the path is a directory or not.
returned: success, path exists
type: bool
sample: true
ishidden:
description: If the path is hidden or not.
returned: success, path exists
type: bool
sample: true
isjunction:
description: If the path is a junction point or not.
returned: success, path exists
type: bool
sample: true
islnk:
description: If the path is a symbolic link or not.
returned: success, path exists
type: bool
sample: true
isreadonly:
description: If the path is read only or not.
returned: success, path exists
type: bool
sample: true
isreg:
description: If the path is a regular file.
returned: success, path exists
type: bool
sample: true
isshared:
description: If the path is shared or not.
returned: success, path exists
type: bool
sample: true
lastaccesstime:
description: The last access time of the file represented in seconds since epoch.
returned: success, path exists
type: float
sample: 1477984205.15
lastwritetime:
description: The last modification time of the file represented in seconds since epoch.
returned: success, path exists
type: float
sample: 1477984205.15
lnk_source:
description: Target of the symlink normalized for the remote filesystem.
returned: success, path exists and the path is a symbolic link or junction point
type: str
sample: C:\temp\link
lnk_target:
description: Target of the symlink. Note that relative paths remain relative.
returned: success, path exists and the path is a symbolic link or junction point
type: str
sample: ..\link
md5:
description: The MD5 checksum of a file (Between Ansible 1.9 and Ansible 2.2 this was returned as a SHA1 hash), will be removed in Ansible 2.9.
returned: success, path exists, path is a file, get_md5 == True
type: str
sample: 09cb79e8fc7453c84a07f644e441fd81623b7f98
nlink:
description: Number of links to the file (hard links).
returned: success, path exists
type: int
sample: 1
owner:
description: The owner of the file.
returned: success, path exists
type: str
sample: BUILTIN\Administrators
path:
description: The full absolute path to the file.
returned: success, path exists, file exists
type: str
sample: C:\foo.ini
sharename:
description: The name of share if folder is shared.
returned: success, path exists, file is a directory and isshared == True
type: str
sample: file-share
size:
description: The size in bytes of a file or folder.
returned: success, path exists, file is not a link
type: int
sample: 1024
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,015 |
win_stat has option which should be removed for Ansible 2.10
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, this module has an option marked with `removed_in_version='2.10'`. This option should better be removed before Ansible 2.10 is released.
```
lib/ansible/modules/windows/win_stat.ps1:0:0: ansible-deprecated-version: Argument 'get_md5' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/windows/win_stat.ps1
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67015
|
https://github.com/ansible/ansible/pull/67105
|
1bb94ec92fe837a30177b192a477522b30132aa1
|
78470c43c21d834a9513fb309fb219b74a5d1cee
| 2020-02-01T13:51:09Z |
python
| 2020-02-04T23:02:04Z |
test/integration/targets/win_stat/tasks/tests.yml
|
---
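# Note: the expected checksum values below are the SHA-1/SHA-256/SHA-384/SHA-512
# digests of the three-byte string 'abc', which is also why the size assertions
# expect 3 bytes.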
- name: test win_stat module on file
win_stat:
path: '{{win_stat_dir}}\nested\file.ps1'
register: stat_file
- name: check actual for file
assert:
that:
- stat_file.stat.attributes == 'Archive'
- stat_file.stat.checksum == 'a9993e364706816aba3e25717850c26c9cd0d89d'
- stat_file.stat.creationtime == 1477984205
- stat_file.stat.exists == True
- stat_file.stat.extension == '.ps1'
- stat_file.stat.filename == 'file.ps1'
- stat_file.stat.hlnk_targets == []
- stat_file.stat.isarchive == True
- stat_file.stat.isdir == False
- stat_file.stat.ishidden == False
- stat_file.stat.isjunction == False
- stat_file.stat.islnk == False
- stat_file.stat.isreadonly == False
- stat_file.stat.isreg == True
- stat_file.stat.isshared == False
- stat_file.stat.lastaccesstime == 1477984205
- stat_file.stat.lastwritetime == 1477984205
- stat_file.stat.md5 is not defined
- stat_file.stat.nlink == 1
- stat_file.stat.owner == 'BUILTIN\Administrators'
- stat_file.stat.path == win_stat_dir + '\\nested\\file.ps1'
- stat_file.stat.size == 3
# get_md5 will be undocumented in 2.9, remove this test then
- name: test win_stat module on file with md5
win_stat:
path: '{{win_stat_dir}}\nested\file.ps1'
get_md5: True
register: stat_file_md5
- name: check actual for file without md5
assert:
that:
- stat_file_md5.stat.checksum == 'a9993e364706816aba3e25717850c26c9cd0d89d'
- name: test win_stat module on file with sha256
win_stat:
path: '{{win_stat_dir}}\nested\file.ps1'
checksum_algorithm: sha256
register: stat_file_sha256
- name: check actual for file with sha256
assert:
that:
- stat_file_sha256.stat.checksum == 'ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad'
- name: test win_stat module on file with sha384
win_stat:
path: '{{win_stat_dir}}\nested\file.ps1'
checksum_algorithm: sha384
register: stat_file_sha384
- name: check actual for file with sha384
assert:
that:
- stat_file_sha384.stat.checksum == 'cb00753f45a35e8bb5a03d699ac65007272c32ab0eded1631a8b605a43ff5bed8086072ba1e7cc2358baeca134c825a7'
- name: test win_stat module on file with sha512
win_stat:
path: '{{win_stat_dir}}\nested\file.ps1'
checksum_algorithm: sha512
register: stat_file_sha512
- name: check actual for file with sha512
assert:
that:
- stat_file_sha512.stat.checksum == 'ddaf35a193617abacc417349ae20413112e6fa4e89a97ea20a9eeee64b55d39a2192992a274fc1a836ba3c23a3feebbd454d4423643ce80e2a9ac94fa54ca49f'
- name: test win_stat on hidden file
win_stat:
path: '{{win_stat_dir}}\nested\hidden.ps1'
register: stat_file_hidden
- name: check actual for hidden file
assert:
that:
- stat_file_hidden.stat.attributes == 'Hidden, Archive'
- stat_file_hidden.stat.checksum == 'a9993e364706816aba3e25717850c26c9cd0d89d'
- stat_file_hidden.stat.creationtime == 1477984205
- stat_file_hidden.stat.exists == True
- stat_file_hidden.stat.extension == '.ps1'
- stat_file_hidden.stat.filename == 'hidden.ps1'
- stat_file_hidden.stat.hlnk_targets == []
- stat_file_hidden.stat.isarchive == True
- stat_file_hidden.stat.isdir == False
- stat_file_hidden.stat.ishidden == True
- stat_file_hidden.stat.isjunction == False
- stat_file_hidden.stat.islnk == False
- stat_file_hidden.stat.isreadonly == False
- stat_file_hidden.stat.isreg == True
- stat_file_hidden.stat.isshared == False
- stat_file_hidden.stat.lastaccesstime == 1477984205
- stat_file_hidden.stat.lastwritetime == 1477984205
- stat_file_hidden.stat.md5 is not defined
- stat_file_hidden.stat.nlink == 1
- stat_file_hidden.stat.owner == 'BUILTIN\Administrators'
- stat_file_hidden.stat.path == win_stat_dir + '\\nested\\hidden.ps1'
- stat_file_hidden.stat.size == 3
- name: test win_stat on readonly file
win_stat:
path: '{{win_stat_dir}}\nested\read-only.ps1'
register: stat_readonly
- name: check actual for readonly file
assert:
that:
- stat_readonly.stat.attributes == 'ReadOnly, Archive'
- stat_readonly.stat.checksum == 'a9993e364706816aba3e25717850c26c9cd0d89d'
- stat_readonly.stat.creationtime == 1477984205
- stat_readonly.stat.exists == True
- stat_readonly.stat.extension == '.ps1'
- stat_readonly.stat.filename == 'read-only.ps1'
- stat_readonly.stat.hlnk_targets == []
- stat_readonly.stat.isarchive == True
- stat_readonly.stat.isdir == False
- stat_readonly.stat.ishidden == False
- stat_readonly.stat.isjunction == False
- stat_readonly.stat.islnk == False
- stat_readonly.stat.isreadonly == True
- stat_readonly.stat.isreg == True
- stat_readonly.stat.isshared == False
- stat_readonly.stat.lastaccesstime == 1477984205
- stat_readonly.stat.lastwritetime == 1477984205
- stat_readonly.stat.md5 is not defined
- stat_readonly.stat.nlink == 1
- stat_readonly.stat.owner == 'BUILTIN\Administrators'
- stat_readonly.stat.path == win_stat_dir + '\\nested\\read-only.ps1'
- stat_readonly.stat.size == 3
- name: test win_stat on hard link file
win_stat:
path: '{{win_stat_dir}}\nested\hard-link.ps1'
follow: True # just verifies we don't do any weird follow logic for hard links
register: stat_hard_link
- name: check actual for hard link file
assert:
that:
- stat_hard_link.stat.attributes == 'Archive'
- stat_hard_link.stat.checksum == 'a9993e364706816aba3e25717850c26c9cd0d89d'
- stat_hard_link.stat.creationtime == 1477984205
- stat_hard_link.stat.exists == True
- stat_hard_link.stat.extension == '.ps1'
- stat_hard_link.stat.filename == 'hard-link.ps1'
- stat_hard_link.stat.hlnk_targets == [ win_stat_dir + '\\nested\hard-target.txt' ]
- stat_hard_link.stat.isarchive == True
- stat_hard_link.stat.isdir == False
- stat_hard_link.stat.ishidden == False
- stat_hard_link.stat.isjunction == False
- stat_hard_link.stat.islnk == False
- stat_hard_link.stat.isreadonly == False
- stat_hard_link.stat.isshared == False
- stat_hard_link.stat.lastaccesstime == 1477984205
- stat_hard_link.stat.lastwritetime == 1477984205
- stat_hard_link.stat.md5 is not defined
- stat_hard_link.stat.nlink == 2
- stat_hard_link.stat.owner == 'BUILTIN\Administrators'
- stat_hard_link.stat.path == win_stat_dir + '\\nested\\hard-link.ps1'
- stat_hard_link.stat.size == 3
- name: test win_stat on directory
win_stat:
path: '{{win_stat_dir}}\nested'
register: stat_directory
- name: check actual for directory
assert:
that:
- stat_directory.stat.attributes == 'Directory'
- stat_directory.stat.checksum is not defined
- stat_directory.stat.creationtime == 1477984205
- stat_directory.stat.exists == True
- stat_directory.stat.extension is not defined
- stat_directory.stat.filename == 'nested'
- stat_directory.stat.hlnk_targets == []
- stat_directory.stat.isarchive == False
- stat_directory.stat.isdir == True
- stat_directory.stat.ishidden == False
- stat_directory.stat.isjunction == False
- stat_directory.stat.islnk == False
- stat_directory.stat.isreadonly == False
- stat_directory.stat.isreg == False
- stat_directory.stat.isshared == False
- stat_directory.stat.lastaccesstime == 1477984205
- stat_directory.stat.lastwritetime == 1477984205
- stat_directory.stat.md5 is not defined
- stat_directory.stat.nlink == 1
- stat_directory.stat.owner == 'BUILTIN\Administrators'
- stat_directory.stat.path == win_stat_dir + '\\nested'
- stat_directory.stat.size == 24
- name: test win_stat on empty directory
win_stat:
path: '{{win_stat_dir}}\folder'
register: stat_directory_empty
- name: check actual for empty directory
assert:
that:
- stat_directory_empty.stat.attributes == 'Directory'
- stat_directory_empty.stat.checksum is not defined
- stat_directory_empty.stat.creationtime == 1477984205
- stat_directory_empty.stat.exists == True
- stat_directory_empty.stat.extension is not defined
- stat_directory_empty.stat.filename == 'folder'
- stat_directory_empty.stat.hlnk_targets == []
- stat_directory_empty.stat.isarchive == False
- stat_directory_empty.stat.isdir == True
- stat_directory_empty.stat.ishidden == False
- stat_directory_empty.stat.isjunction == False
- stat_directory_empty.stat.islnk == False
- stat_directory_empty.stat.isreadonly == False
- stat_directory_empty.stat.isreg == False
- stat_directory_empty.stat.isshared == False
- stat_directory_empty.stat.lastaccesstime == 1477984205
- stat_directory_empty.stat.lastwritetime == 1477984205
- stat_directory_empty.stat.md5 is not defined
- stat_directory_empty.stat.nlink == 1
- stat_directory_empty.stat.owner == 'BUILTIN\Administrators'
- stat_directory_empty.stat.path == win_stat_dir + '\\folder'
- stat_directory_empty.stat.size == 0
- name: test win_stat on directory with space in name
win_stat:
path: '{{win_stat_dir}}\folder space'
register: stat_directory_space
- name: check actual for directory with space in name
assert:
that:
- stat_directory_space.stat.attributes == 'Directory'
- stat_directory_space.stat.checksum is not defined
- stat_directory_space.stat.creationtime == 1477984205
- stat_directory_space.stat.exists == True
- stat_directory_space.stat.extension is not defined
- stat_directory_space.stat.filename == 'folder space'
- stat_directory_space.stat.hlnk_targets == []
- stat_directory_space.stat.isarchive == False
- stat_directory_space.stat.isdir == True
- stat_directory_space.stat.ishidden == False
- stat_directory_space.stat.isjunction == False
- stat_directory_space.stat.islnk == False
- stat_directory_space.stat.isreadonly == False
- stat_directory_space.stat.isreg == False
- stat_directory_space.stat.isshared == False
- stat_directory_space.stat.lastaccesstime == 1477984205
- stat_directory_space.stat.lastwritetime == 1477984205
- stat_directory_space.stat.md5 is not defined
- stat_directory_space.stat.nlink == 1
- stat_directory_space.stat.owner == 'BUILTIN\Administrators'
- stat_directory_space.stat.path == win_stat_dir + '\\folder space'
- stat_directory_space.stat.size == 3
- name: test win_stat on hidden directory
win_stat:
path: '{{win_stat_dir}}\hidden'
register: stat_hidden
- name: check actual for hidden directory
assert:
that:
- stat_hidden.stat.attributes == 'Hidden, Directory'
- stat_hidden.stat.checksum is not defined
- stat_hidden.stat.creationtime == 1477984205
- stat_hidden.stat.exists == True
- stat_hidden.stat.extension is not defined
- stat_hidden.stat.filename == 'hidden'
- stat_hidden.stat.hlnk_targets == []
- stat_hidden.stat.isarchive == False
- stat_hidden.stat.isdir == True
- stat_hidden.stat.ishidden == True
- stat_hidden.stat.isjunction == False
- stat_hidden.stat.islnk == False
- stat_hidden.stat.isreadonly == False
- stat_hidden.stat.isreg == False
- stat_hidden.stat.isshared == False
- stat_hidden.stat.lastaccesstime == 1477984205
- stat_hidden.stat.lastwritetime == 1477984205
- stat_hidden.stat.md5 is not defined
- stat_hidden.stat.nlink == 1
- stat_hidden.stat.owner == 'BUILTIN\Administrators'
- stat_hidden.stat.path == win_stat_dir + '\\hidden'
- stat_hidden.stat.size == 0
- name: test win_stat on shared directory
win_stat:
path: '{{win_stat_dir}}\shared'
register: stat_shared
- name: check actual for shared directory
assert:
that:
- stat_shared.stat.attributes == 'Directory'
- stat_shared.stat.checksum is not defined
- stat_shared.stat.creationtime == 1477984205
- stat_shared.stat.exists == True
- stat_shared.stat.extension is not defined
- stat_shared.stat.filename == 'shared'
- stat_shared.stat.hlnk_targets == []
- stat_shared.stat.isarchive == False
- stat_shared.stat.isdir == True
- stat_shared.stat.ishidden == False
- stat_shared.stat.isjunction == False
- stat_shared.stat.islnk == False
- stat_shared.stat.isreadonly == False
- stat_shared.stat.isreg == False
- stat_shared.stat.isshared == True
- stat_shared.stat.lastaccesstime == 1477984205
- stat_shared.stat.lastwritetime == 1477984205
- stat_shared.stat.md5 is not defined
- stat_shared.stat.nlink == 1
- stat_shared.stat.owner == 'BUILTIN\Administrators'
- stat_shared.stat.path == win_stat_dir + '\\shared'
- stat_shared.stat.sharename == 'folder-share'
- stat_shared.stat.size == 0
- name: test win_stat on directory symlink
win_stat:
path: '{{win_stat_dir}}\link'
register: stat_symlink
- name: assert directory symlink actual
assert:
that:
- stat_symlink.stat.attributes == 'Directory, ReparsePoint'
- stat_symlink.stat.creationtime is defined
- stat_symlink.stat.exists == True
- stat_symlink.stat.filename == 'link'
- stat_symlink.stat.hlnk_targets == []
- stat_symlink.stat.isarchive == False
- stat_symlink.stat.isdir == True
- stat_symlink.stat.ishidden == False
- stat_symlink.stat.islnk == True
- stat_symlink.stat.isjunction == False
- stat_symlink.stat.isreadonly == False
- stat_symlink.stat.isreg == False
- stat_symlink.stat.isshared == False
- stat_symlink.stat.lastaccesstime is defined
- stat_symlink.stat.lastwritetime is defined
- stat_symlink.stat.lnk_source == win_stat_dir + '\\link-dest'
- stat_symlink.stat.lnk_target == win_stat_dir + '\\link-dest'
- stat_symlink.stat.nlink == 1
- stat_symlink.stat.owner == 'BUILTIN\\Administrators'
- stat_symlink.stat.path == win_stat_dir + '\\link'
- stat_symlink.stat.checksum is not defined
- stat_symlink.stat.md5 is not defined
- name: test win_stat on file symlink
win_stat:
path: '{{win_stat_dir}}\file-link.txt'
register: stat_file_symlink
- name: assert file symlink actual
assert:
that:
- stat_file_symlink.stat.attributes == 'Archive, ReparsePoint'
- stat_file_symlink.stat.checksum == 'a9993e364706816aba3e25717850c26c9cd0d89d'
- stat_file_symlink.stat.creationtime is defined
- stat_file_symlink.stat.exists == True
- stat_file_symlink.stat.extension == '.txt'
- stat_file_symlink.stat.filename == 'file-link.txt'
- stat_file_symlink.stat.hlnk_targets == []
- stat_file_symlink.stat.isarchive == True
- stat_file_symlink.stat.isdir == False
- stat_file_symlink.stat.ishidden == False
- stat_file_symlink.stat.isjunction == False
- stat_file_symlink.stat.islnk == True
- stat_file_symlink.stat.isreadonly == False
- stat_file_symlink.stat.isreg == False
- stat_file_symlink.stat.isshared == False
- stat_file_symlink.stat.lastaccesstime is defined
- stat_file_symlink.stat.lastwritetime is defined
- stat_file_symlink.stat.lnk_source == win_stat_dir + '\\nested\\file.ps1'
- stat_file_symlink.stat.lnk_target == win_stat_dir + '\\nested\\file.ps1'
- stat_file_symlink.stat.md5 is not defined
- stat_file_symlink.stat.nlink == 1
- stat_file_symlink.stat.owner == 'BUILTIN\\Administrators'
- stat_file_symlink.stat.path == win_stat_dir + '\\file-link.txt'
- name: test win_stat of file symlink with follow
win_stat:
path: '{{win_stat_dir}}\file-link.txt'
follow: True
register: stat_file_symlink_follow
- name: assert file system with follow actual
assert:
that:
- stat_file_symlink_follow.stat.attributes == 'Archive'
- stat_file_symlink_follow.stat.checksum == 'a9993e364706816aba3e25717850c26c9cd0d89d'
- stat_file_symlink_follow.stat.creationtime is defined
- stat_file_symlink_follow.stat.exists == True
- stat_file_symlink_follow.stat.extension == '.ps1'
- stat_file_symlink_follow.stat.filename == 'file.ps1'
- stat_file_symlink_follow.stat.hlnk_targets == []
- stat_file_symlink_follow.stat.isarchive == True
- stat_file_symlink_follow.stat.isdir == False
- stat_file_symlink_follow.stat.ishidden == False
- stat_file_symlink_follow.stat.isjunction == False
- stat_file_symlink_follow.stat.islnk == False
- stat_file_symlink_follow.stat.isreadonly == False
- stat_file_symlink_follow.stat.isreg == True
- stat_file_symlink_follow.stat.isshared == False
- stat_file_symlink_follow.stat.lastaccesstime is defined
- stat_file_symlink_follow.stat.lastwritetime is defined
- stat_file_symlink_follow.stat.md5 is not defined
- stat_file_symlink_follow.stat.nlink == 1
- stat_file_symlink_follow.stat.owner == 'BUILTIN\\Administrators'
- stat_file_symlink_follow.stat.path == win_stat_dir + '\\nested\\file.ps1'
- name: test win_stat on relative symlink
win_stat:
path: '{{win_stat_dir}}\nested\nested\link-rel'
register: stat_rel_symlink
- name: assert directory relative symlink actual
assert:
that:
- stat_rel_symlink.stat.attributes == 'Directory, ReparsePoint'
- stat_rel_symlink.stat.creationtime is defined
- stat_rel_symlink.stat.exists == True
- stat_rel_symlink.stat.filename == 'link-rel'
- stat_rel_symlink.stat.hlnk_targets == []
- stat_rel_symlink.stat.isarchive == False
- stat_rel_symlink.stat.isdir == True
- stat_rel_symlink.stat.ishidden == False
- stat_rel_symlink.stat.isjunction == False
- stat_rel_symlink.stat.islnk == True
- stat_rel_symlink.stat.isreadonly == False
- stat_rel_symlink.stat.isreg == False
- stat_rel_symlink.stat.isshared == False
- stat_rel_symlink.stat.lastaccesstime is defined
- stat_rel_symlink.stat.lastwritetime is defined
- stat_rel_symlink.stat.lnk_source == win_stat_dir + '\\link-dest'
- stat_rel_symlink.stat.lnk_target == '..\\..\\link-dest'
- stat_rel_symlink.stat.nlink == 1
- stat_rel_symlink.stat.owner == 'BUILTIN\\Administrators'
- stat_rel_symlink.stat.path == win_stat_dir + '\\nested\\nested\\link-rel'
- stat_rel_symlink.stat.checksum is not defined
- stat_rel_symlink.stat.md5 is not defined
- name: test win_stat on relative multiple symlink with follow
win_stat:
path: '{{win_stat_dir}}\outer-link'
follow: True
register: stat_symlink_follow
- name: assert directory relative symlink actual
assert:
that:
- stat_symlink_follow.stat.attributes == 'Directory'
- stat_symlink_follow.stat.creationtime is defined
- stat_symlink_follow.stat.exists == True
- stat_symlink_follow.stat.filename == 'link-dest'
- stat_symlink_follow.stat.hlnk_targets == []
- stat_symlink_follow.stat.isarchive == False
- stat_symlink_follow.stat.isdir == True
- stat_symlink_follow.stat.ishidden == False
- stat_symlink_follow.stat.isjunction == False
- stat_symlink_follow.stat.islnk == False
- stat_symlink_follow.stat.isreadonly == False
- stat_symlink_follow.stat.isreg == False
- stat_symlink_follow.stat.isshared == False
- stat_symlink_follow.stat.lastaccesstime is defined
- stat_symlink_follow.stat.lastwritetime is defined
- stat_symlink_follow.stat.nlink == 1
- stat_symlink_follow.stat.owner == 'BUILTIN\\Administrators'
- stat_symlink_follow.stat.path == win_stat_dir + '\\link-dest'
- stat_symlink_follow.stat.checksum is not defined
- stat_symlink_follow.stat.md5 is not defined
- name: test win_stat on junction
win_stat:
path: '{{win_stat_dir}}\junction-link'
register: stat_junction_point
- name: assert junction actual
assert:
that:
- stat_junction_point.stat.attributes == 'Directory, ReparsePoint'
- stat_junction_point.stat.creationtime is defined
- stat_junction_point.stat.exists == True
- stat_junction_point.stat.filename == 'junction-link'
- stat_junction_point.stat.hlnk_targets == []
- stat_junction_point.stat.isarchive == False
- stat_junction_point.stat.isdir == True
- stat_junction_point.stat.ishidden == False
- stat_junction_point.stat.isjunction == True
- stat_junction_point.stat.islnk == False
- stat_junction_point.stat.isreadonly == False
- stat_junction_point.stat.isreg == False
- stat_junction_point.stat.isshared == False
- stat_junction_point.stat.lastaccesstime is defined
- stat_junction_point.stat.lastwritetime is defined
- stat_junction_point.stat.lnk_source == win_stat_dir + '\\junction-dest'
- stat_junction_point.stat.lnk_target == win_stat_dir + '\\junction-dest'
- stat_junction_point.stat.nlink == 1
- stat_junction_point.stat.owner == 'BUILTIN\\Administrators'
- stat_junction_point.stat.path == win_stat_dir + '\\junction-link'
- stat_junction_point.stat.size == 0
- name: test win_stat on junction with follow
win_stat:
path: '{{win_stat_dir}}\junction-link'
follow: True
register: stat_junction_point_follow
- name: assert junction with follow actual
assert:
that:
- stat_junction_point_follow.stat.attributes == 'Directory'
- stat_junction_point_follow.stat.creationtime is defined
- stat_junction_point_follow.stat.exists == True
- stat_junction_point_follow.stat.filename == 'junction-dest'
- stat_junction_point_follow.stat.hlnk_targets == []
- stat_junction_point_follow.stat.isarchive == False
- stat_junction_point_follow.stat.isdir == True
- stat_junction_point_follow.stat.ishidden == False
- stat_junction_point_follow.stat.isjunction == False
- stat_junction_point_follow.stat.islnk == False
- stat_junction_point_follow.stat.isreadonly == False
- stat_junction_point_follow.stat.isreg == False
- stat_junction_point_follow.stat.isshared == False
- stat_junction_point_follow.stat.lastaccesstime is defined
- stat_junction_point_follow.stat.lastwritetime is defined
- stat_junction_point_follow.stat.nlink == 1
- stat_junction_point_follow.stat.owner == 'BUILTIN\\Administrators'
- stat_junction_point_follow.stat.path == win_stat_dir + '\\junction-dest'
- stat_junction_point_follow.stat.size == 0
- name: test win_stat module non-existent path
win_stat:
path: '{{win_stat_dir}}\this_file_should_not_exist'
register: win_stat_missing
- name: check win_stat missing result
assert:
that:
- not win_stat_missing.stat.exists
- win_stat_missing is not failed
- win_stat_missing is not changed
- name: test win_stat module without path argument
win_stat:
register: win_stat_no_args
failed_when: "win_stat_no_args.msg != 'missing required arguments: path'"
# https://github.com/ansible/ansible/issues/30258
- name: get path of pagefile
win_shell: |
$pagefile = $null
$cs = Get-CimInstance -ClassName Win32_ComputerSystem
if ($cs.AutomaticManagedPagefile) {
$pagefile = "$($env:SystemRoot.Substring(0, 1)):\pagefile.sys"
} else {
$pf = Get-CimInstance -ClassName Win32_PageFileSetting
if ($pf -ne $null) {
$pagefile = $pf[0].Name
}
}
$pagefile
register: pagefile_path
- name: get stat of pagefile
win_stat:
path: '{{pagefile_path.stdout_lines[0]}}'
get_md5: no
get_checksum: no
register: pagefile_stat
when: pagefile_path.stdout_lines|count != 0
- name: assert get stat of pagefile
assert:
that:
- pagefile_stat.stat.exists == True
when: pagefile_path.stdout_lines|count != 0
# Tests with normal user
- set_fact:
gen_pw: password123! + {{ lookup('password', '/dev/null chars=ascii_letters,digits length=8') }}
- name: create test user
win_user:
name: '{{win_stat_user}}'
password: '{{gen_pw}}'
update_password: always
groups: Users
- name: get become user profile dir so we can clean it up later
vars: &become_vars
ansible_become_user: '{{win_stat_user}}'
ansible_become_password: '{{gen_pw}}'
ansible_become_method: runas
ansible_become: yes
win_shell: $env:USERPROFILE
register: profile_dir_out
- name: ensure profile dir contains test username (eg, if become fails silently, prevent deletion of real user profile)
assert:
that:
- win_stat_user in profile_dir_out.stdout_lines[0]
- name: test stat with non admin user on a normal file
vars: *become_vars
win_stat:
path: '{{win_stat_dir}}\nested\file.ps1'
register: user_file
- name: assert test stat with non admin user on a normal file
assert:
that:
- user_file.stat.attributes == 'Archive'
- user_file.stat.checksum == 'a9993e364706816aba3e25717850c26c9cd0d89d'
- user_file.stat.creationtime == 1477984205
- user_file.stat.exists == True
- user_file.stat.extension == '.ps1'
- user_file.stat.filename == 'file.ps1'
- user_file.stat.hlnk_targets == []
- user_file.stat.isarchive == True
- user_file.stat.isdir == False
- user_file.stat.ishidden == False
- user_file.stat.isjunction == False
- user_file.stat.islnk == False
- user_file.stat.isreadonly == False
- user_file.stat.isreg == True
- user_file.stat.isshared == False
- user_file.stat.lastaccesstime == 1477984205
- user_file.stat.lastwritetime == 1477984205
- user_file.stat.md5 is not defined
- user_file.stat.nlink == 1
- user_file.stat.owner == 'BUILTIN\\Administrators'
- user_file.stat.path == win_stat_dir + '\\nested\\file.ps1'
- user_file.stat.size == 3
- name: test stat on a symbolic link as normal user
vars: *become_vars
win_stat:
path: '{{win_stat_dir}}\link'
register: user_symlink
- name: assert test stat on a symbolic link as normal user
assert:
that:
- user_symlink.stat.attributes == 'Directory, ReparsePoint'
- user_symlink.stat.creationtime is defined
- user_symlink.stat.exists == True
- user_symlink.stat.filename == 'link'
- user_symlink.stat.hlnk_targets == []
- user_symlink.stat.isarchive == False
- user_symlink.stat.isdir == True
- user_symlink.stat.ishidden == False
- user_symlink.stat.islnk == True
- user_symlink.stat.isjunction == False
- user_symlink.stat.isreadonly == False
- user_symlink.stat.isreg == False
- user_symlink.stat.isshared == False
- user_symlink.stat.lastaccesstime is defined
- user_symlink.stat.lastwritetime is defined
- user_symlink.stat.lnk_source == win_stat_dir + '\\link-dest'
- user_symlink.stat.lnk_target == win_stat_dir + '\\link-dest'
- user_symlink.stat.nlink == 1
- user_symlink.stat.owner == 'BUILTIN\\Administrators'
- user_symlink.stat.path == win_stat_dir + '\\link'
- user_symlink.stat.checksum is not defined
- user_symlink.stat.md5 is not defined
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,014 |
win_psexec has option which should be removed for Ansible 2.10
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, this module has an option marked with `removed_in_version='2.10'`. This option should be removed before Ansible 2.10 is released.
```
lib/ansible/modules/windows/win_psexec.ps1:0:0: ansible-deprecated-version: Argument 'extra_opts' in argument_spec has a deprecated removed_in_version '2.10', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/windows/win_psexec.ps1
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67014
|
https://github.com/ansible/ansible/pull/67105
|
1bb94ec92fe837a30177b192a477522b30132aa1
|
78470c43c21d834a9513fb309fb219b74a5d1cee
| 2020-02-01T13:50:55Z |
python
| 2020-02-04T23:02:04Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
* Windows Server 2008 and 2008 R2 will no longer be supported or tested in the next Ansible release, see :ref:`windows_faq_server2008`.
Modules
=======
Modules removed
---------------
The following modules no longer exist:
* letsencrypt use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ldap_attr use :ref:`ldap_attrs <ldap_attrs_module>` instead.
The following functionality will be removed in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`openssl_csr <openssl_csr_module>` module's option ``version`` no longer supports values other than ``1`` (currently the only standardized CSR version).
* :ref:`docker_container <docker_container_module>`: the ``trust_image_content`` option will be removed. It has always been ignored by the module.
* :ref:`iam_managed_policy <iam_managed_policy_module>`: the ``fail_on_delete`` option will be removed. It has always been ignored by the module.
* :ref:`s3_lifecycle <s3_lifecycle_module>`: the ``requester_pays`` option will be removed. It has always been ignored by the module.
* :ref:`s3_sync <s3_sync_module>`: the ``retries`` option will be removed. It has always been ignored by the module.
* The return values ``err`` and ``out`` of :ref:`docker_stack <docker_stack_module>` have been deprecated. Use ``stdout`` and ``stderr`` from now on instead.
* :ref:`cloudformation <cloudformation_module>`: the ``template_format`` option will be removed. It has been ignored by the module since Ansible 2.3.
* :ref:`data_pipeline <data_pipeline_module>`: the ``version`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_eip <ec2_eip_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.3.
* :ref:`ec2_key <ec2_key_module>`: the ``wait`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_key <ec2_key_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_lc <ec2_lc_module>`: the ``associate_public_ip_address`` option will be removed. It has always been ignored by the module.
* :ref:`iam_policy <iam_policy_module>`: the ``policy_document`` option will be removed. To maintain the existing behavior use the ``policy_json`` option and read the file with the ``lookup`` plugin.
* :ref:`redfish_config <redfish_config_module>`: the ``bios_attribute_name`` and ``bios_attribute_value`` options will be removed. To maintain the existing behavior use the ``bios_attributes`` option instead.
* :ref:`clc_aa_policy <clc_aa_policy_module>`: the ``wait`` parameter will be removed. It has always been ignored by the module.
* :ref:`redfish_config <redfish_config_module>`, :ref:`redfish_command <redfish_command_module>`: the behavior to select the first System, Manager, or Chassis resource to modify when multiple are present will be removed. Use the new ``resource_id`` option to specify target resource to modify.
The following functionality will change in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`docker_container <docker_container_module>` module has a new option, ``container_default_behavior``, whose default value will change from ``compatibility`` to ``no_defaults``. Set to an explicit value to avoid deprecation warnings.
* The :ref:`docker_container <docker_container_module>` module's ``network_mode`` option will be set by default to the name of the first network in ``networks`` if at least one network is given and ``networks_cli_compatible`` is ``true`` (will be default from Ansible 2.12 on). Set to an explicit value to avoid deprecation warnings if you specify networks and set ``networks_cli_compatible`` to ``true``. The current default (not specifying it) is equivalent to the value ``default``.
* :ref:`ec2 <ec2_module>`: the ``group`` and ``group_id`` options will become mutually exclusive. Currently ``group_id`` is ignored if you pass both.
* :ref:`iam_policy <iam_policy_module>`: the default value for the ``skip_duplicates`` option will change from ``true`` to ``false``. To maintain the existing behavior explicitly set it to ``true``.
* :ref:`iam_role <iam_role_module>`: the ``purge_policies`` option (also known as ``purge_policy``) default value will change from ``true`` to ``false``.
* :ref:`elb_network_lb <elb_network_lb_module>`: the default behaviour for the ``state`` option will change from ``absent`` to ``present``. To maintain the existing behavior explicitly set state to ``absent``.
* :ref:`vmware_tag_info <vmware_tag_info_module>`: the module will not return ``tag_facts`` since it does not return multiple tags with the same name and different category id. To maintain the existing behavior use ``tag_info`` which is a list of tag metadata.
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ``vmware_dns_config`` use :ref:`vmware_host_dns <vmware_host_dns_module>` instead.
Noteworthy module changes
-------------------------
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in :ref:`pacman <pacman_module>` module has been removed, you should use ``extra_args=--recursive`` instead.
* The :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module no longer requires the VM name, which was a required parameter in releases prior to Ansible 2.10.
* :ref:`zabbix_action <zabbix_action_module>` no longer requires ``esc_period`` and ``event_source`` arguments when ``state=absent``.
* :ref:`zabbix_proxy <zabbix_proxy_module>` deprecates ``interface`` sub-options ``type`` and ``main`` when proxy type is set to passive via ``status=passive``. Make sure these suboptions are removed from your playbook as they were never supported by Zabbix in the first place.
* :ref:`gitlab_user <gitlab_user_module>` no longer requires ``name``, ``email`` and ``password`` arguments when ``state=absent``.
* :ref:`win_pester <win_pester_module>` no longer runs all ``*.ps1`` files in the directory specified, due to it executing potentially unknown scripts. It will follow the default behaviour, built into Pester itself, of only running tests for files that match ``*.tests.ps1``.
* :ref:`win_find <win_find_module>` has been refactored to better match the behaviour of the ``find`` module. Here is what has changed:
* When the directory specified by ``paths`` does not exist or is a file, it will no longer fail and will just warn the user
* Junction points are no longer reported as ``islnk``, use ``isjunction`` to properly report these files. This behaviour matches the :ref:`win_stat <win_stat_module>` module
* Directories no longer return a ``size``, this matches the ``stat`` and ``find`` behaviour and has been removed due to the difficulties in correctly reporting the size of a directory
* :ref:`docker_container <docker_container_module>` no longer passes information on non-anonymous volumes or binds as ``Volumes`` to the Docker daemon. This increases compatibility with the ``docker`` CLI program. Note that if you specify ``volumes: strict`` in ``comparisons``, this could cause existing containers created with docker_container from Ansible 2.9 or earlier to restart.
* :ref:`docker_container <docker_container_module>`'s support for port ranges was adjusted to be more compatible to the ``docker`` command line utility: a one-port container range combined with a multiple-port host range will no longer result in only the first host port be used, but the whole range being passed to Docker so that a free port in that range will be used.
* :ref:`purefb_fs <purefb_fs_module>` no longer supports the deprecated ``nfs`` option. This has been superseded by ``nfsv3``.
Plugins
=======
Lookup plugin names case-sensitivity
------------------------------------
* Prior to Ansible ``2.10``, lookup plugin names passed as an argument to the ``lookup()`` function were treated as case-insensitive, unlike lookups invoked via ``with_<lookup_name>``. In ``2.10`` both ``lookup()`` and ``with_`` are consistently case-sensitive.
Noteworthy plugin changes
-------------------------
* The ``hashi_vault`` lookup plugin now returns the latest version when using the KV v2 secrets engine. Previously, it returned all versions of the secret which required additional steps to extract and filter the desired version.
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,014 |
win_psexec has option which should be removed for Ansible 2.10
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, this module has an option marked with `removed_in_version='2.10'`. This option should be removed before Ansible 2.10 is released.
```
lib/ansible/modules/windows/win_psexec.ps1:0:0: ansible-deprecated-version: Argument 'extra_opts' in argument_spec has a deprecated removed_in_version '2.10', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/windows/win_psexec.ps1
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67014
|
https://github.com/ansible/ansible/pull/67105
|
1bb94ec92fe837a30177b192a477522b30132aa1
|
78470c43c21d834a9513fb309fb219b74a5d1cee
| 2020-02-01T13:50:55Z |
python
| 2020-02-04T23:02:04Z |
lib/ansible/modules/windows/win_psexec.ps1
|
#!powershell
# Copyright: (c) 2017, Dag Wieers (@dagwieers) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#AnsibleRequires -CSharpUtil Ansible.Basic
#Requires -Module Ansible.ModuleUtils.ArgvParser
#Requires -Module Ansible.ModuleUtils.CommandUtil
# See also: https://technet.microsoft.com/en-us/sysinternals/psexec.aspx
$spec = @{
options = @{
command = @{ type='str'; required=$true }
executable = @{ type='path'; default='psexec.exe' }
hostnames = @{ type='list' }
username = @{ type='str' }
password = @{ type='str'; no_log=$true }
chdir = @{ type='path' }
wait = @{ type='bool'; default=$true }
nobanner = @{ type='bool'; default=$false }
noprofile = @{ type='bool'; default=$false }
elevated = @{ type='bool'; default=$false }
limited = @{ type='bool'; default=$false }
system = @{ type='bool'; default=$false }
interactive = @{ type='bool'; default=$false }
session = @{ type='int' }
priority = @{ type='str'; choices=@( 'background', 'low', 'belownormal', 'abovenormal', 'high', 'realtime' ) }
timeout = @{ type='int' }
extra_opts = @{ type='list'; removed_in_version = '2.10' }
}
}
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)
$command = $module.Params.command
$executable = $module.Params.executable
$hostnames = $module.Params.hostnames
$username = $module.Params.username
$password = $module.Params.password
$chdir = $module.Params.chdir
$wait = $module.Params.wait
$nobanner = $module.Params.nobanner
$noprofile = $module.Params.noprofile
$elevated = $module.Params.elevated
$limited = $module.Params.limited
$system = $module.Params.system
$interactive = $module.Params.interactive
$session = $module.Params.session
$priority = $module.Params.Priority
$timeout = $module.Params.timeout
$extra_opts = $module.Params.extra_opts
$module.Result.changed = $true
If (-Not (Get-Command $executable -ErrorAction SilentlyContinue)) {
$module.FailJson("Executable '$executable' was not found.")
}
$arguments = [System.Collections.Generic.List`1[String]]@($executable)
If ($nobanner -eq $true) {
$arguments.Add("-nobanner")
}
# Support running on local system if no hostname is specified
If ($hostnames) {
$hostname_argument = ($hostnames | sort -Unique) -join ','
$arguments.Add("\\$hostname_argument")
}
# Username is optional
If ($null -ne $username) {
$arguments.Add("-u")
$arguments.Add($username)
}
# Password is optional
If ($null -ne $password) {
$arguments.Add("-p")
$arguments.Add($password)
}
If ($null -ne $chdir) {
$arguments.Add("-w")
$arguments.Add($chdir)
}
If ($wait -eq $false) {
$arguments.Add("-d")
}
If ($noprofile -eq $true) {
$arguments.Add("-e")
}
If ($elevated -eq $true) {
$arguments.Add("-h")
}
If ($system -eq $true) {
$arguments.Add("-s")
}
If ($interactive -eq $true) {
$arguments.Add("-i")
If ($null -ne $session) {
$arguments.Add($session)
}
}
If ($limited -eq $true) {
$arguments.Add("-l")
}
If ($null -ne $priority) {
$arguments.Add("-$priority")
}
If ($null -ne $timeout) {
$arguments.Add("-n")
$arguments.Add($timeout)
}
# Add additional advanced options
If ($extra_opts) {
ForEach ($opt in $extra_opts) {
$arguments.Add($opt)
}
}
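# Accept the Sysinternals EULA up front so psexec never blocks on an interactive prompt.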
$arguments.Add("-accepteula")
$argument_string = Argv-ToString -arguments $arguments
# Add the command at the end of the argument string, we don't want to escape
# that as psexec doesn't expect it to be one arg
$argument_string += " $command"
$start_datetime = [DateTime]::UtcNow
$module.Result.psexec_command = $argument_string
$command_result = Run-Command -command $argument_string
$end_datetime = [DateTime]::UtcNow
$module.Result.stdout = $command_result.stdout
$module.Result.stderr = $command_result.stderr
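# When wait=false psexec is invoked with -d and does not wait for the process; its
# return code is then the PID of the spawned process rather than an exit status.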
If ($wait -eq $true) {
$module.Result.rc = $command_result.rc
} else {
$module.Result.rc = 0
$module.Result.pid = $command_result.rc
}
$module.Result.start = $start_datetime.ToString("yyyy-MM-dd hh:mm:ss.ffffff")
$module.Result.end = $end_datetime.ToString("yyyy-MM-dd hh:mm:ss.ffffff")
$module.Result.delta = $($end_datetime - $start_datetime).ToString("h\:mm\:ss\.ffffff")
$module.ExitJson()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,014 |
win_psexec has option which should be removed for Ansible 2.10
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, this module has an option marked with `removed_in_version='2.10'`. This option should better be removed before Ansible 2.10 is released.
```
lib/ansible/modules/windows/win_psexec.ps1:0:0: ansible-deprecated-version: Argument 'extra_opts' in argument_spec has a deprecated removed_in_version '2.10', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/windows/win_psexec.ps1
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67014
|
https://github.com/ansible/ansible/pull/67105
|
1bb94ec92fe837a30177b192a477522b30132aa1
|
78470c43c21d834a9513fb309fb219b74a5d1cee
| 2020-02-01T13:50:55Z |
python
| 2020-02-04T23:02:04Z |
lib/ansible/modules/windows/win_psexec.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: 2017, Dag Wieers (@dagwieers) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: win_psexec
version_added: '2.3'
short_description: Runs commands (remotely) as another (privileged) user
description:
- Run commands (remotely) through the PsExec service.
- Run commands as another (domain) user (with elevated privileges).
requirements:
- Microsoft PsExec
options:
command:
description:
- The command line to run through PsExec (limited to 260 characters).
type: str
required: yes
executable:
description:
- The location of the PsExec utility (in case it is not located in your PATH).
type: path
default: psexec.exe
extra_opts:
description:
- Specify additional options to add onto the PsExec invocation.
- This option was undocumented in older releases and will be removed in
Ansible 2.10.
type: list
hostnames:
description:
- The hostnames to run the command.
- If not provided, the command is run locally.
type: list
username:
description:
- The (remote) user to run the command as.
- If not provided, the current user is used.
type: str
password:
description:
- The password for the (remote) user to run the command as.
- This is mandatory in order to authenticate yourself.
type: str
chdir:
description:
- Run the command from this (remote) directory.
type: path
nobanner:
description:
- Do not display the startup banner and copyright message.
- This only works for specific versions of the PsExec binary.
type: bool
default: no
version_added: '2.4'
noprofile:
description:
- Run the command without loading the account's profile.
type: bool
default: no
elevated:
description:
- Run the command with elevated privileges.
type: bool
default: no
interactive:
description:
- Run the program so that it interacts with the desktop on the remote system.
type: bool
default: no
session:
description:
- Specifies the session ID to use.
- This parameter works in conjunction with I(interactive).
- It has no effect when I(interactive) is set to C(no).
type: int
version_added: '2.7'
limited:
description:
- Run the command as limited user (strips the Administrators group and allows only privileges assigned to the Users group).
type: bool
default: no
system:
description:
- Run the remote command in the System account.
type: bool
default: no
priority:
description:
- Used to run the command at a different priority.
choices: [ abovenormal, background, belownormal, high, low, realtime ]
timeout:
description:
- The connection timeout in seconds
type: int
wait:
description:
- Wait for the application to terminate.
- Only use for non-interactive applications.
type: bool
default: yes
notes:
- More information related to Microsoft PsExec is available from
U(https://technet.microsoft.com/en-us/sysinternals/bb897553.aspx)
seealso:
- module: psexec
- module: raw
- module: win_command
- module: win_shell
author:
- Dag Wieers (@dagwieers)
'''
EXAMPLES = r'''
- name: Test the PsExec connection to the local system (target node) with your user
win_psexec:
command: whoami.exe
- name: Run regedit.exe locally (on target node) as SYSTEM and interactively
win_psexec:
command: regedit.exe
interactive: yes
system: yes
- name: Run the setup.exe installer on multiple servers using the Domain Administrator
win_psexec:
command: E:\setup.exe /i /IACCEPTEULA
hostnames:
- remote_server1
- remote_server2
username: DOMAIN\Administrator
password: some_password
priority: high
- name: Run PsExec from custom location C:\Program Files\sysinternals\
win_psexec:
command: netsh advfirewall set allprofiles state off
executable: C:\Program Files\sysinternals\psexec.exe
hostnames: [ remote_server ]
password: some_password
priority: low
'''
RETURN = r'''
cmd:
description: The complete command line used by the module, including PsExec call and additional options.
returned: always
type: str
sample: psexec.exe -nobanner \\remote_server -u "DOMAIN\Administrator" -p "some_password" -accepteula E:\setup.exe
pid:
description: The PID of the async process created by PsExec.
returned: when C(wait=False)
type: int
sample: 1532
rc:
description: The return code for the command.
returned: always
type: int
sample: 0
stdout:
description: The standard output from the command.
returned: always
type: str
sample: Success.
stderr:
description: The error output from the command.
returned: always
type: str
sample: Error 15 running E:\setup.exe
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,014 |
win_psexec has option which should be removed for Ansible 2.10
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, this module has an option marked with `removed_in_version='2.10'`. This option should better be removed before Ansible 2.10 is released.
```
lib/ansible/modules/windows/win_psexec.ps1:0:0: ansible-deprecated-version: Argument 'extra_opts' in argument_spec has a deprecated removed_in_version '2.10', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/windows/win_psexec.ps1
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67014
|
https://github.com/ansible/ansible/pull/67105
|
1bb94ec92fe837a30177b192a477522b30132aa1
|
78470c43c21d834a9513fb309fb219b74a5d1cee
| 2020-02-01T13:50:55Z |
python
| 2020-02-04T23:02:04Z |
lib/ansible/modules/windows/win_stat.ps1
|
#!powershell
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#AnsibleRequires -CSharpUtil Ansible.Basic
#Requires -Module Ansible.ModuleUtils.FileUtil
#Requires -Module Ansible.ModuleUtils.LinkUtil
function ConvertTo-Timestamp($start_date, $end_date) {
if ($start_date -and $end_date) {
return (New-TimeSpan -Start $start_date -End $end_date).TotalSeconds
}
}
function Get-FileChecksum($path, $algorithm) {
switch ($algorithm) {
'md5' { $sp = New-Object -TypeName System.Security.Cryptography.MD5CryptoServiceProvider }
'sha1' { $sp = New-Object -TypeName System.Security.Cryptography.SHA1CryptoServiceProvider }
'sha256' { $sp = New-Object -TypeName System.Security.Cryptography.SHA256CryptoServiceProvider }
'sha384' { $sp = New-Object -TypeName System.Security.Cryptography.SHA384CryptoServiceProvider }
'sha512' { $sp = New-Object -TypeName System.Security.Cryptography.SHA512CryptoServiceProvider }
default { Fail-Json -obj $result -message "Unsupported hash algorithm supplied '$algorithm'" }
}
$fp = [System.IO.File]::Open($path, [System.IO.Filemode]::Open, [System.IO.FileAccess]::Read, [System.IO.FileShare]::ReadWrite)
try {
$hash = [System.BitConverter]::ToString($sp.ComputeHash($fp)).Replace("-", "").ToLower()
} finally {
$fp.Dispose()
}
return $hash
}
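# Illustrative usage of the helper above (assumed example, not part of the module flow):
#   Get-FileChecksum -path 'C:\temp\file.txt' -algorithm 'sha256'
# returns the lowercase hex digest of the file contents.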
function Get-FileInfo {
param([String]$Path, [Switch]$Follow)
$info = Get-AnsibleItem -Path $Path -ErrorAction SilentlyContinue
$link_info = $null
if ($null -ne $info) {
try {
$link_info = Get-Link -link_path $info.FullName
} catch {
$module.Warn("Failed to check/get link info for file: $($_.Exception.Message)")
}
# If follow=true we want to follow the link all the way back to root object
if ($Follow -and $null -ne $link_info -and $link_info.Type -in @("SymbolicLink", "JunctionPoint")) {
$info, $link_info = Get-FileInfo -Path $link_info.AbsolutePath -Follow
}
}
return $info, $link_info
}
$spec = @{
options = @{
path = @{ type='path'; required=$true; aliases=@( 'dest', 'name' ) }
get_checksum = @{ type='bool'; default=$true }
checksum_algorithm = @{ type='str'; default='sha1'; choices=@( 'md5', 'sha1', 'sha256', 'sha384', 'sha512' ) }
get_md5 = @{ type='bool'; default=$false; removed_in_version='2.9' }
follow = @{ type='bool'; default=$false }
}
supports_check_mode = $true
}
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)
$path = $module.Params.path
$get_md5 = $module.Params.get_md5
$get_checksum = $module.Params.get_checksum
$checksum_algorithm = $module.Params.checksum_algorithm
$follow = $module.Params.follow
$module.Result.stat = @{ exists=$false }
Load-LinkUtils
$info, $link_info = Get-FileInfo -Path $path -Follow:$follow
If ($null -ne $info) {
$epoch_date = Get-Date -Date "01/01/1970"
$attributes = @()
foreach ($attribute in ($info.Attributes -split ',')) {
$attributes += $attribute.Trim()
}
# default values that are always set, specific values are set below this
# but are kept commented for easier readability
$stat = @{
exists = $true
attributes = $info.Attributes.ToString()
isarchive = ($attributes -contains "Archive")
isdir = $false
ishidden = ($attributes -contains "Hidden")
isjunction = $false
islnk = $false
isreadonly = ($attributes -contains "ReadOnly")
isreg = $false
isshared = $false
nlink = 1 # Number of links to the file (hard links), overridden below if islnk
# lnk_target = islnk or isjunction Target of the symlink. Note that relative paths remain relative
# lnk_source = islnk or isjunction Target of the symlink normalized for the remote filesystem
hlnk_targets = @()
creationtime = (ConvertTo-Timestamp -start_date $epoch_date -end_date $info.CreationTime)
lastaccesstime = (ConvertTo-Timestamp -start_date $epoch_date -end_date $info.LastAccessTime)
lastwritetime = (ConvertTo-Timestamp -start_date $epoch_date -end_date $info.LastWriteTime)
# size = a file and directory - calculated below
path = $info.FullName
filename = $info.Name
# extension = a file
# owner = set outside this dict in case it fails
# sharename = a directory and isshared is True
# checksum = a file and get_checksum: True
# md5 = a file and get_md5: True
}
try {
$stat.owner = $info.GetAccessControl().Owner
} catch {
# may not have rights, historical behaviour was to just set to $null
# due to ErrorActionPreference being set to "Continue"
$stat.owner = $null
}
# values that are set according to the type of file
if ($info.Attributes.HasFlag([System.IO.FileAttributes]::Directory)) {
$stat.isdir = $true
$share_info = Get-CimInstance -ClassName Win32_Share -Filter "Path='$($stat.path -replace '\\', '\\')'"
if ($null -ne $share_info) {
$stat.isshared = $true
$stat.sharename = $share_info.Name
}
try {
$size = 0
foreach ($file in $info.EnumerateFiles("*", [System.IO.SearchOption]::AllDirectories)) {
$size += $file.Length
}
$stat.size = $size
} catch {
$stat.size = 0
}
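# EnumerateFiles can throw (e.g. access denied on a nested path); the catch above
# falls back to reporting a size of 0 rather than failing the module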
} else {
$stat.extension = $info.Extension
$stat.isreg = $true
$stat.size = $info.Length
if ($get_md5) {
try {
$stat.md5 = Get-FileChecksum -path $path -algorithm "md5"
} catch {
$module.FailJson("Failed to get MD5 hash of file, remove get_md5 to ignore this error: $($_.Exception.Message)", $_)
}
}
if ($get_checksum) {
try {
$stat.checksum = Get-FileChecksum -path $path -algorithm $checksum_algorithm
} catch {
$module.FailJson("Failed to get hash of file, set get_checksum to False to ignore this error: $($_.Exception.Message)", $_)
}
}
}
# Get symbolic link, junction point, hard link info
if ($null -ne $link_info) {
switch ($link_info.Type) {
"SymbolicLink" {
$stat.islnk = $true
$stat.isreg = $false
$stat.lnk_target = $link_info.TargetPath
$stat.lnk_source = $link_info.AbsolutePath
break
}
"JunctionPoint" {
$stat.isjunction = $true
$stat.isreg = $false
$stat.lnk_target = $link_info.TargetPath
$stat.lnk_source = $link_info.AbsolutePath
break
}
"HardLink" {
$stat.lnk_type = "hard"
$stat.nlink = $link_info.HardTargets.Count
# remove current path from the targets
$hlnk_targets = $link_info.HardTargets | Where-Object { $_ -ne $stat.path }
$stat.hlnk_targets = @($hlnk_targets)
break
}
}
}
$module.Result.stat = $stat
}
$module.ExitJson()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,014 |
win_psexec has option which should be removed for Ansible 2.10
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, this module has an option marked with `removed_in_version='2.10'`. This option should better be removed before Ansible 2.10 is released.
```
lib/ansible/modules/windows/win_psexec.ps1:0:0: ansible-deprecated-version: Argument 'extra_opts' in argument_spec has a deprecated removed_in_version '2.10', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/windows/win_psexec.ps1
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67014
|
https://github.com/ansible/ansible/pull/67105
|
1bb94ec92fe837a30177b192a477522b30132aa1
|
78470c43c21d834a9513fb309fb219b74a5d1cee
| 2020-02-01T13:50:55Z |
python
| 2020-02-04T23:02:04Z |
lib/ansible/modules/windows/win_stat.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# this is a windows documentation stub. actual code lives in the .ps1
# file of the same name
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: win_stat
version_added: "1.7"
short_description: Get information about Windows files
description:
- Returns information about a Windows file.
- For non-Windows targets, use the M(stat) module instead.
options:
path:
description:
- The full path of the file/object to get the facts of; both forward and
back slashes are accepted.
type: path
required: yes
aliases: [ dest, name ]
get_md5:
description:
- Whether to return the MD5 checksum of the file. Between Ansible 1.9
and Ansible 2.2 this was actually a SHA1 hash rather than an MD5. As of Ansible
2.3 this is an MD5 again. Will return None if the host is unable to
use the specified algorithm.
- The default of this option changed from C(yes) to C(no) in Ansible 2.5
and the option will be removed altogether in Ansible 2.9.
- Use C(get_checksum=yes) with C(checksum_algorithm=md5) to return an
md5 hash under the C(checksum) return value.
type: bool
default: no
get_checksum:
description:
- Whether to return a checksum of the file (default sha1)
type: bool
default: yes
version_added: "2.1"
checksum_algorithm:
description:
- Algorithm to determine checksum of file.
- Will throw an error if the host is unable to use specified algorithm.
type: str
default: sha1
choices: [ md5, sha1, sha256, sha384, sha512 ]
version_added: "2.3"
follow:
description:
- Whether to follow symlinks or junction points.
- If C(path) points to another link, that link will in turn
be followed until no more links are found.
type: bool
default: no
version_added: "2.8"
seealso:
- module: stat
- module: win_acl
- module: win_file
- module: win_owner
author:
- Chris Church (@cchurch)
'''
EXAMPLES = r'''
- name: Obtain information about a file
win_stat:
path: C:\foo.ini
register: file_info
- name: Obtain information about a folder
win_stat:
path: C:\bar
register: folder_info
- name: Get MD5 checksum of a file
win_stat:
path: C:\foo.ini
get_checksum: yes
checksum_algorithm: md5
register: md5_checksum
- debug:
var: md5_checksum.stat.checksum
- name: Get SHA1 checksum of file
win_stat:
path: C:\foo.ini
get_checksum: yes
register: sha1_checksum
- debug:
var: sha1_checksum.stat.checksum
- name: Get SHA256 checksum of file
win_stat:
path: C:\foo.ini
get_checksum: yes
checksum_algorithm: sha256
register: sha256_checksum
- debug:
var: sha256_checksum.stat.checksum
'''
RETURN = r'''
changed:
description: Whether anything was changed
returned: always
type: bool
sample: true
stat:
description: dictionary containing all the stat data
returned: success
type: complex
contains:
attributes:
description: Attributes of the file at path in raw form.
returned: success, path exists
type: str
sample: "Archive, Hidden"
checksum:
description: The checksum of a file based on checksum_algorithm specified.
returned: success, path exists, path is a file, get_checksum == True and the
specified checksum_algorithm is supported
type: str
sample: 09cb79e8fc7453c84a07f644e441fd81623b7f98
creationtime:
description: The create time of the file represented in seconds since epoch.
returned: success, path exists
type: float
sample: 1477984205.15
exists:
description: If the path exists or not.
returned: success
type: bool
sample: true
extension:
description: The extension of the file at path.
returned: success, path exists, path is a file
type: str
sample: ".ps1"
filename:
description: The name of the file (without path).
returned: success, path exists, path is a file
type: str
sample: foo.ini
hlnk_targets:
description: List of other files pointing to the same file (hard links), excludes the current file.
returned: success, path exists
type: list
sample:
- C:\temp\file.txt
- C:\Windows\update.log
isarchive:
description: If the path is ready for archiving or not.
returned: success, path exists
type: bool
sample: true
isdir:
description: If the path is a directory or not.
returned: success, path exists
type: bool
sample: true
ishidden:
description: If the path is hidden or not.
returned: success, path exists
type: bool
sample: true
isjunction:
description: If the path is a junction point or not.
returned: success, path exists
type: bool
sample: true
islnk:
description: If the path is a symbolic link or not.
returned: success, path exists
type: bool
sample: true
isreadonly:
description: If the path is read only or not.
returned: success, path exists
type: bool
sample: true
isreg:
description: If the path is a regular file.
returned: success, path exists
type: bool
sample: true
isshared:
description: If the path is shared or not.
returned: success, path exists
type: bool
sample: true
lastaccesstime:
description: The last access time of the file represented in seconds since epoch.
returned: success, path exists
type: float
sample: 1477984205.15
lastwritetime:
description: The last modification time of the file represented in seconds since epoch.
returned: success, path exists
type: float
sample: 1477984205.15
lnk_source:
description: Target of the symlink normalized for the remote filesystem.
returned: success, path exists and the path is a symbolic link or junction point
type: str
sample: C:\temp\link
lnk_target:
description: Target of the symlink. Note that relative paths remain relative.
returned: success, path exists and the path is a symbolic link or junction point
type: str
sample: ..\link
md5:
description: The MD5 checksum of a file (Between Ansible 1.9 and Ansible 2.2 this was returned as a SHA1 hash), will be removed in Ansible 2.9.
returned: success, path exists, path is a file, get_md5 == True
type: str
sample: 09cb79e8fc7453c84a07f644e441fd81623b7f98
nlink:
description: Number of links to the file (hard links).
returned: success, path exists
type: int
sample: 1
owner:
description: The owner of the file.
returned: success, path exists
type: str
sample: BUILTIN\Administrators
path:
description: The full absolute path to the file.
returned: success, path exists, file exists
type: str
sample: C:\foo.ini
sharename:
description: The name of share if folder is shared.
returned: success, path exists, file is a directory and isshared == True
type: str
sample: file-share
size:
description: The size in bytes of a file or folder.
returned: success, path exists, file is not a link
type: int
sample: 1024
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,014 |
win_psexec has option which should be removed for Ansible 2.10
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, this module has an option marked with `removed_in_version='2.10'`. This option should better be removed before Ansible 2.10 is released.
```
lib/ansible/modules/windows/win_psexec.ps1:0:0: ansible-deprecated-version: Argument 'extra_opts' in argument_spec has a deprecated removed_in_version '2.10', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/windows/win_psexec.ps1
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67014
|
https://github.com/ansible/ansible/pull/67105
|
1bb94ec92fe837a30177b192a477522b30132aa1
|
78470c43c21d834a9513fb309fb219b74a5d1cee
| 2020-02-01T13:50:55Z |
python
| 2020-02-04T23:02:04Z |
test/integration/targets/win_stat/tasks/tests.yml
|
---
- name: test win_stat module on file
win_stat:
path: '{{win_stat_dir}}\nested\file.ps1'
register: stat_file
- name: check actual for file
assert:
that:
- stat_file.stat.attributes == 'Archive'
- stat_file.stat.checksum == 'a9993e364706816aba3e25717850c26c9cd0d89d'
- stat_file.stat.creationtime == 1477984205
- stat_file.stat.exists == True
- stat_file.stat.extension == '.ps1'
- stat_file.stat.filename == 'file.ps1'
- stat_file.stat.hlnk_targets == []
- stat_file.stat.isarchive == True
- stat_file.stat.isdir == False
- stat_file.stat.ishidden == False
- stat_file.stat.isjunction == False
- stat_file.stat.islnk == False
- stat_file.stat.isreadonly == False
- stat_file.stat.isreg == True
- stat_file.stat.isshared == False
- stat_file.stat.lastaccesstime == 1477984205
- stat_file.stat.lastwritetime == 1477984205
- stat_file.stat.md5 is not defined
- stat_file.stat.nlink == 1
- stat_file.stat.owner == 'BUILTIN\Administrators'
- stat_file.stat.path == win_stat_dir + '\\nested\\file.ps1'
- stat_file.stat.size == 3
# get_md5 will be undocumented in 2.9, remove this test then
- name: test win_stat module on file with md5
win_stat:
path: '{{win_stat_dir}}\nested\file.ps1'
get_md5: True
register: stat_file_md5
- name: check actual for file with md5
assert:
that:
- stat_file_md5.stat.checksum == 'a9993e364706816aba3e25717850c26c9cd0d89d'
- name: test win_stat module on file with sha256
win_stat:
path: '{{win_stat_dir}}\nested\file.ps1'
checksum_algorithm: sha256
register: stat_file_sha256
- name: check actual for file with sha256
assert:
that:
- stat_file_sha256.stat.checksum == 'ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad'
- name: test win_stat module on file with sha384
win_stat:
path: '{{win_stat_dir}}\nested\file.ps1'
checksum_algorithm: sha384
register: stat_file_sha384
- name: check actual for file with sha384
assert:
that:
- stat_file_sha384.stat.checksum == 'cb00753f45a35e8bb5a03d699ac65007272c32ab0eded1631a8b605a43ff5bed8086072ba1e7cc2358baeca134c825a7'
- name: test win_stat module on file with sha512
win_stat:
path: '{{win_stat_dir}}\nested\file.ps1'
checksum_algorithm: sha512
register: stat_file_sha512
- name: check actual for file with sha512
assert:
that:
- stat_file_sha512.stat.checksum == 'ddaf35a193617abacc417349ae20413112e6fa4e89a97ea20a9eeee64b55d39a2192992a274fc1a836ba3c23a3feebbd454d4423643ce80e2a9ac94fa54ca49f'
- name: test win_stat on hidden file
win_stat:
path: '{{win_stat_dir}}\nested\hidden.ps1'
register: stat_file_hidden
- name: check actual for hidden file
assert:
that:
- stat_file_hidden.stat.attributes == 'Hidden, Archive'
- stat_file_hidden.stat.checksum == 'a9993e364706816aba3e25717850c26c9cd0d89d'
- stat_file_hidden.stat.creationtime == 1477984205
- stat_file_hidden.stat.exists == True
- stat_file_hidden.stat.extension == '.ps1'
- stat_file_hidden.stat.filename == 'hidden.ps1'
- stat_file_hidden.stat.hlnk_targets == []
- stat_file_hidden.stat.isarchive == True
- stat_file_hidden.stat.isdir == False
- stat_file_hidden.stat.ishidden == True
- stat_file_hidden.stat.isjunction == False
- stat_file_hidden.stat.islnk == False
- stat_file_hidden.stat.isreadonly == False
- stat_file_hidden.stat.isreg == True
- stat_file_hidden.stat.isshared == False
- stat_file_hidden.stat.lastaccesstime == 1477984205
- stat_file_hidden.stat.lastwritetime == 1477984205
- stat_file_hidden.stat.md5 is not defined
- stat_file_hidden.stat.nlink == 1
- stat_file_hidden.stat.owner == 'BUILTIN\Administrators'
- stat_file_hidden.stat.path == win_stat_dir + '\\nested\\hidden.ps1'
- stat_file_hidden.stat.size == 3
- name: test win_stat on readonly file
win_stat:
path: '{{win_stat_dir}}\nested\read-only.ps1'
register: stat_readonly
- name: check actual for readonly file
assert:
that:
- stat_readonly.stat.attributes == 'ReadOnly, Archive'
- stat_readonly.stat.checksum == 'a9993e364706816aba3e25717850c26c9cd0d89d'
- stat_readonly.stat.creationtime == 1477984205
- stat_readonly.stat.exists == True
- stat_readonly.stat.extension == '.ps1'
- stat_readonly.stat.filename == 'read-only.ps1'
- stat_readonly.stat.hlnk_targets == []
- stat_readonly.stat.isarchive == True
- stat_readonly.stat.isdir == False
- stat_readonly.stat.ishidden == False
- stat_readonly.stat.isjunction == False
- stat_readonly.stat.islnk == False
- stat_readonly.stat.isreadonly == True
- stat_readonly.stat.isreg == True
- stat_readonly.stat.isshared == False
- stat_readonly.stat.lastaccesstime == 1477984205
- stat_readonly.stat.lastwritetime == 1477984205
- stat_readonly.stat.md5 is not defined
- stat_readonly.stat.nlink == 1
- stat_readonly.stat.owner == 'BUILTIN\Administrators'
- stat_readonly.stat.path == win_stat_dir + '\\nested\\read-only.ps1'
- stat_readonly.stat.size == 3
- name: test win_stat on hard link file
win_stat:
path: '{{win_stat_dir}}\nested\hard-link.ps1'
follow: True # just verifies we don't do any weird follow logic for hard links
register: stat_hard_link
- name: check actual for hard link file
assert:
that:
- stat_hard_link.stat.attributes == 'Archive'
- stat_hard_link.stat.checksum == 'a9993e364706816aba3e25717850c26c9cd0d89d'
- stat_hard_link.stat.creationtime == 1477984205
- stat_hard_link.stat.exists == True
- stat_hard_link.stat.extension == '.ps1'
- stat_hard_link.stat.filename == 'hard-link.ps1'
- stat_hard_link.stat.hlnk_targets == [ win_stat_dir + '\\nested\hard-target.txt' ]
- stat_hard_link.stat.isarchive == True
- stat_hard_link.stat.isdir == False
- stat_hard_link.stat.ishidden == False
- stat_hard_link.stat.isjunction == False
- stat_hard_link.stat.islnk == False
- stat_hard_link.stat.isreadonly == False
- stat_hard_link.stat.isshared == False
- stat_hard_link.stat.lastaccesstime == 1477984205
- stat_hard_link.stat.lastwritetime == 1477984205
- stat_hard_link.stat.md5 is not defined
- stat_hard_link.stat.nlink == 2
- stat_hard_link.stat.owner == 'BUILTIN\Administrators'
- stat_hard_link.stat.path == win_stat_dir + '\\nested\\hard-link.ps1'
- stat_hard_link.stat.size == 3
- name: test win_stat on directory
win_stat:
path: '{{win_stat_dir}}\nested'
register: stat_directory
- name: check actual for directory
assert:
that:
- stat_directory.stat.attributes == 'Directory'
- stat_directory.stat.checksum is not defined
- stat_directory.stat.creationtime == 1477984205
- stat_directory.stat.exists == True
- stat_directory.stat.extension is not defined
- stat_directory.stat.filename == 'nested'
- stat_directory.stat.hlnk_targets == []
- stat_directory.stat.isarchive == False
- stat_directory.stat.isdir == True
- stat_directory.stat.ishidden == False
- stat_directory.stat.isjunction == False
- stat_directory.stat.islnk == False
- stat_directory.stat.isreadonly == False
- stat_directory.stat.isreg == False
- stat_directory.stat.isshared == False
- stat_directory.stat.lastaccesstime == 1477984205
- stat_directory.stat.lastwritetime == 1477984205
- stat_directory.stat.md5 is not defined
- stat_directory.stat.nlink == 1
- stat_directory.stat.owner == 'BUILTIN\Administrators'
- stat_directory.stat.path == win_stat_dir + '\\nested'
- stat_directory.stat.size == 24
- name: test win_stat on empty directory
win_stat:
path: '{{win_stat_dir}}\folder'
register: stat_directory_empty
- name: check actual for empty directory
assert:
that:
- stat_directory_empty.stat.attributes == 'Directory'
- stat_directory_empty.stat.checksum is not defined
- stat_directory_empty.stat.creationtime == 1477984205
- stat_directory_empty.stat.exists == True
- stat_directory_empty.stat.extension is not defined
- stat_directory_empty.stat.filename == 'folder'
- stat_directory_empty.stat.hlnk_targets == []
- stat_directory_empty.stat.isarchive == False
- stat_directory_empty.stat.isdir == True
- stat_directory_empty.stat.ishidden == False
- stat_directory_empty.stat.isjunction == False
- stat_directory_empty.stat.islnk == False
- stat_directory_empty.stat.isreadonly == False
- stat_directory_empty.stat.isreg == False
- stat_directory_empty.stat.isshared == False
- stat_directory_empty.stat.lastaccesstime == 1477984205
- stat_directory_empty.stat.lastwritetime == 1477984205
- stat_directory_empty.stat.md5 is not defined
- stat_directory_empty.stat.nlink == 1
- stat_directory_empty.stat.owner == 'BUILTIN\Administrators'
- stat_directory_empty.stat.path == win_stat_dir + '\\folder'
- stat_directory_empty.stat.size == 0
- name: test win_stat on directory with space in name
win_stat:
path: '{{win_stat_dir}}\folder space'
register: stat_directory_space
- name: check actual for directory with space in name
assert:
that:
- stat_directory_space.stat.attributes == 'Directory'
- stat_directory_space.stat.checksum is not defined
- stat_directory_space.stat.creationtime == 1477984205
- stat_directory_space.stat.exists == True
- stat_directory_space.stat.extension is not defined
- stat_directory_space.stat.filename == 'folder space'
- stat_directory_space.stat.hlnk_targets == []
- stat_directory_space.stat.isarchive == False
- stat_directory_space.stat.isdir == True
- stat_directory_space.stat.ishidden == False
- stat_directory_space.stat.isjunction == False
- stat_directory_space.stat.islnk == False
- stat_directory_space.stat.isreadonly == False
- stat_directory_space.stat.isreg == False
- stat_directory_space.stat.isshared == False
- stat_directory_space.stat.lastaccesstime == 1477984205
- stat_directory_space.stat.lastwritetime == 1477984205
- stat_directory_space.stat.md5 is not defined
- stat_directory_space.stat.nlink == 1
- stat_directory_space.stat.owner == 'BUILTIN\Administrators'
- stat_directory_space.stat.path == win_stat_dir + '\\folder space'
- stat_directory_space.stat.size == 3
- name: test win_stat on hidden directory
win_stat:
path: '{{win_stat_dir}}\hidden'
register: stat_hidden
- name: check actual for hidden directory
assert:
that:
- stat_hidden.stat.attributes == 'Hidden, Directory'
- stat_hidden.stat.checksum is not defined
- stat_hidden.stat.creationtime == 1477984205
- stat_hidden.stat.exists == True
- stat_hidden.stat.extension is not defined
- stat_hidden.stat.filename == 'hidden'
- stat_hidden.stat.hlnk_targets == []
- stat_hidden.stat.isarchive == False
- stat_hidden.stat.isdir == True
- stat_hidden.stat.ishidden == True
- stat_hidden.stat.isjunction == False
- stat_hidden.stat.islnk == False
- stat_hidden.stat.isreadonly == False
- stat_hidden.stat.isreg == False
- stat_hidden.stat.isshared == False
- stat_hidden.stat.lastaccesstime == 1477984205
- stat_hidden.stat.lastwritetime == 1477984205
- stat_hidden.stat.md5 is not defined
- stat_hidden.stat.nlink == 1
- stat_hidden.stat.owner == 'BUILTIN\Administrators'
- stat_hidden.stat.path == win_stat_dir + '\\hidden'
- stat_hidden.stat.size == 0
- name: test win_stat on shared directory
win_stat:
path: '{{win_stat_dir}}\shared'
register: stat_shared
- name: check actual for shared directory
assert:
that:
- stat_shared.stat.attributes == 'Directory'
- stat_shared.stat.checksum is not defined
- stat_shared.stat.creationtime == 1477984205
- stat_shared.stat.exists == True
- stat_shared.stat.extension is not defined
- stat_shared.stat.filename == 'shared'
- stat_shared.stat.hlnk_targets == []
- stat_shared.stat.isarchive == False
- stat_shared.stat.isdir == True
- stat_shared.stat.ishidden == False
- stat_shared.stat.isjunction == False
- stat_shared.stat.islnk == False
- stat_shared.stat.isreadonly == False
- stat_shared.stat.isreg == False
- stat_shared.stat.isshared == True
- stat_shared.stat.lastaccesstime == 1477984205
- stat_shared.stat.lastwritetime == 1477984205
- stat_shared.stat.md5 is not defined
- stat_shared.stat.nlink == 1
- stat_shared.stat.owner == 'BUILTIN\Administrators'
- stat_shared.stat.path == win_stat_dir + '\\shared'
- stat_shared.stat.sharename == 'folder-share'
- stat_shared.stat.size == 0
- name: test win_stat on directory symlink
win_stat:
path: '{{win_stat_dir}}\link'
register: stat_symlink
- name: assert directory symlink actual
assert:
that:
- stat_symlink.stat.attributes == 'Directory, ReparsePoint'
- stat_symlink.stat.creationtime is defined
- stat_symlink.stat.exists == True
- stat_symlink.stat.filename == 'link'
- stat_symlink.stat.hlnk_targets == []
- stat_symlink.stat.isarchive == False
- stat_symlink.stat.isdir == True
- stat_symlink.stat.ishidden == False
- stat_symlink.stat.islnk == True
- stat_symlink.stat.isjunction == False
- stat_symlink.stat.isreadonly == False
- stat_symlink.stat.isreg == False
- stat_symlink.stat.isshared == False
- stat_symlink.stat.lastaccesstime is defined
- stat_symlink.stat.lastwritetime is defined
- stat_symlink.stat.lnk_source == win_stat_dir + '\\link-dest'
- stat_symlink.stat.lnk_target == win_stat_dir + '\\link-dest'
- stat_symlink.stat.nlink == 1
- stat_symlink.stat.owner == 'BUILTIN\\Administrators'
- stat_symlink.stat.path == win_stat_dir + '\\link'
- stat_symlink.stat.checksum is not defined
- stat_symlink.stat.md5 is not defined
- name: test win_stat on file symlink
win_stat:
path: '{{win_stat_dir}}\file-link.txt'
register: stat_file_symlink
- name: assert file symlink actual
assert:
that:
- stat_file_symlink.stat.attributes == 'Archive, ReparsePoint'
- stat_file_symlink.stat.checksum == 'a9993e364706816aba3e25717850c26c9cd0d89d'
- stat_file_symlink.stat.creationtime is defined
- stat_file_symlink.stat.exists == True
- stat_file_symlink.stat.extension == '.txt'
- stat_file_symlink.stat.filename == 'file-link.txt'
- stat_file_symlink.stat.hlnk_targets == []
- stat_file_symlink.stat.isarchive == True
- stat_file_symlink.stat.isdir == False
- stat_file_symlink.stat.ishidden == False
- stat_file_symlink.stat.isjunction == False
- stat_file_symlink.stat.islnk == True
- stat_file_symlink.stat.isreadonly == False
- stat_file_symlink.stat.isreg == False
- stat_file_symlink.stat.isshared == False
- stat_file_symlink.stat.lastaccesstime is defined
- stat_file_symlink.stat.lastwritetime is defined
- stat_file_symlink.stat.lnk_source == win_stat_dir + '\\nested\\file.ps1'
- stat_file_symlink.stat.lnk_target == win_stat_dir + '\\nested\\file.ps1'
- stat_file_symlink.stat.md5 is not defined
- stat_file_symlink.stat.nlink == 1
- stat_file_symlink.stat.owner == 'BUILTIN\\Administrators'
- stat_file_symlink.stat.path == win_stat_dir + '\\file-link.txt'
- name: test win_stat of file symlink with follow
win_stat:
path: '{{win_stat_dir}}\file-link.txt'
follow: True
register: stat_file_symlink_follow
- name: assert file system with follow actual
assert:
that:
- stat_file_symlink_follow.stat.attributes == 'Archive'
- stat_file_symlink_follow.stat.checksum == 'a9993e364706816aba3e25717850c26c9cd0d89d'
- stat_file_symlink_follow.stat.creationtime is defined
- stat_file_symlink_follow.stat.exists == True
- stat_file_symlink_follow.stat.extension == '.ps1'
- stat_file_symlink_follow.stat.filename == 'file.ps1'
- stat_file_symlink_follow.stat.hlnk_targets == []
- stat_file_symlink_follow.stat.isarchive == True
- stat_file_symlink_follow.stat.isdir == False
- stat_file_symlink_follow.stat.ishidden == False
- stat_file_symlink_follow.stat.isjunction == False
- stat_file_symlink_follow.stat.islnk == False
- stat_file_symlink_follow.stat.isreadonly == False
- stat_file_symlink_follow.stat.isreg == True
- stat_file_symlink_follow.stat.isshared == False
- stat_file_symlink_follow.stat.lastaccesstime is defined
- stat_file_symlink_follow.stat.lastwritetime is defined
- stat_file_symlink_follow.stat.md5 is not defined
- stat_file_symlink_follow.stat.nlink == 1
- stat_file_symlink_follow.stat.owner == 'BUILTIN\\Administrators'
- stat_file_symlink_follow.stat.path == win_stat_dir + '\\nested\\file.ps1'
- name: test win_stat on relative symlink
win_stat:
path: '{{win_stat_dir}}\nested\nested\link-rel'
register: stat_rel_symlink
- name: assert directory relative symlink actual
assert:
that:
- stat_rel_symlink.stat.attributes == 'Directory, ReparsePoint'
- stat_rel_symlink.stat.creationtime is defined
- stat_rel_symlink.stat.exists == True
- stat_rel_symlink.stat.filename == 'link-rel'
- stat_rel_symlink.stat.hlnk_targets == []
- stat_rel_symlink.stat.isarchive == False
- stat_rel_symlink.stat.isdir == True
- stat_rel_symlink.stat.ishidden == False
- stat_rel_symlink.stat.isjunction == False
- stat_rel_symlink.stat.islnk == True
- stat_rel_symlink.stat.isreadonly == False
- stat_rel_symlink.stat.isreg == False
- stat_rel_symlink.stat.isshared == False
- stat_rel_symlink.stat.lastaccesstime is defined
- stat_rel_symlink.stat.lastwritetime is defined
- stat_rel_symlink.stat.lnk_source == win_stat_dir + '\\link-dest'
- stat_rel_symlink.stat.lnk_target == '..\\..\\link-dest'
- stat_rel_symlink.stat.nlink == 1
- stat_rel_symlink.stat.owner == 'BUILTIN\\Administrators'
- stat_rel_symlink.stat.path == win_stat_dir + '\\nested\\nested\\link-rel'
- stat_rel_symlink.stat.checksum is not defined
- stat_rel_symlink.stat.md5 is not defined
- name: test win_stat on relative multiple symlink with follow
win_stat:
path: '{{win_stat_dir}}\outer-link'
follow: True
register: stat_symlink_follow
- name: assert directory relative symlink actual
assert:
that:
- stat_symlink_follow.stat.attributes == 'Directory'
- stat_symlink_follow.stat.creationtime is defined
- stat_symlink_follow.stat.exists == True
- stat_symlink_follow.stat.filename == 'link-dest'
- stat_symlink_follow.stat.hlnk_targets == []
- stat_symlink_follow.stat.isarchive == False
- stat_symlink_follow.stat.isdir == True
- stat_symlink_follow.stat.ishidden == False
- stat_symlink_follow.stat.isjunction == False
- stat_symlink_follow.stat.islnk == False
- stat_symlink_follow.stat.isreadonly == False
- stat_symlink_follow.stat.isreg == False
- stat_symlink_follow.stat.isshared == False
- stat_symlink_follow.stat.lastaccesstime is defined
- stat_symlink_follow.stat.lastwritetime is defined
- stat_symlink_follow.stat.nlink == 1
- stat_symlink_follow.stat.owner == 'BUILTIN\\Administrators'
- stat_symlink_follow.stat.path == win_stat_dir + '\\link-dest'
- stat_symlink_follow.stat.checksum is not defined
- stat_symlink_follow.stat.md5 is not defined
- name: test win_stat on junction
win_stat:
path: '{{win_stat_dir}}\junction-link'
register: stat_junction_point
- name: assert junction actual
assert:
that:
- stat_junction_point.stat.attributes == 'Directory, ReparsePoint'
- stat_junction_point.stat.creationtime is defined
- stat_junction_point.stat.exists == True
- stat_junction_point.stat.filename == 'junction-link'
- stat_junction_point.stat.hlnk_targets == []
- stat_junction_point.stat.isarchive == False
- stat_junction_point.stat.isdir == True
- stat_junction_point.stat.ishidden == False
- stat_junction_point.stat.isjunction == True
- stat_junction_point.stat.islnk == False
- stat_junction_point.stat.isreadonly == False
- stat_junction_point.stat.isreg == False
- stat_junction_point.stat.isshared == False
- stat_junction_point.stat.lastaccesstime is defined
- stat_junction_point.stat.lastwritetime is defined
- stat_junction_point.stat.lnk_source == win_stat_dir + '\\junction-dest'
- stat_junction_point.stat.lnk_target == win_stat_dir + '\\junction-dest'
- stat_junction_point.stat.nlink == 1
- stat_junction_point.stat.owner == 'BUILTIN\\Administrators'
- stat_junction_point.stat.path == win_stat_dir + '\\junction-link'
- stat_junction_point.stat.size == 0
- name: test win_stat on junction with follow
win_stat:
path: '{{win_stat_dir}}\junction-link'
follow: True
register: stat_junction_point_follow
- name: assert junction with follow actual
assert:
that:
- stat_junction_point_follow.stat.attributes == 'Directory'
- stat_junction_point_follow.stat.creationtime is defined
- stat_junction_point_follow.stat.exists == True
- stat_junction_point_follow.stat.filename == 'junction-dest'
- stat_junction_point_follow.stat.hlnk_targets == []
- stat_junction_point_follow.stat.isarchive == False
- stat_junction_point_follow.stat.isdir == True
- stat_junction_point_follow.stat.ishidden == False
- stat_junction_point_follow.stat.isjunction == False
- stat_junction_point_follow.stat.islnk == False
- stat_junction_point_follow.stat.isreadonly == False
- stat_junction_point_follow.stat.isreg == False
- stat_junction_point_follow.stat.isshared == False
- stat_junction_point_follow.stat.lastaccesstime is defined
- stat_junction_point_follow.stat.lastwritetime is defined
- stat_junction_point_follow.stat.nlink == 1
- stat_junction_point_follow.stat.owner == 'BUILTIN\\Administrators'
- stat_junction_point_follow.stat.path == win_stat_dir + '\\junction-dest'
- stat_junction_point_follow.stat.size == 0
- name: test win_stat module non-existent path
win_stat:
path: '{{win_stat_dir}}\this_file_should_not_exist'
register: win_stat_missing
- name: check win_stat missing result
assert:
that:
- not win_stat_missing.stat.exists
- win_stat_missing is not failed
- win_stat_missing is not changed
- name: test win_stat module without path argument
win_stat:
register: win_stat_no_args
failed_when: "win_stat_no_args.msg != 'missing required arguments: path'"
# https://github.com/ansible/ansible/issues/30258
- name: get path of pagefile
win_shell: |
$pagefile = $null
$cs = Get-CimInstance -ClassName Win32_ComputerSystem
if ($cs.AutomaticManagedPagefile) {
$pagefile = "$($env:SystemRoot.Substring(0, 1)):\pagefile.sys"
} else {
$pf = Get-CimInstance -ClassName Win32_PageFileSetting
if ($pf -ne $null) {
$pagefile = $pf[0].Name
}
}
$pagefile
register: pagefile_path
- name: get stat of pagefile
win_stat:
path: '{{pagefile_path.stdout_lines[0]}}'
get_md5: no
get_checksum: no
register: pagefile_stat
when: pagefile_path.stdout_lines|count != 0
- name: assert get stat of pagefile
assert:
that:
- pagefile_stat.stat.exists == True
when: pagefile_path.stdout_lines|count != 0
# Tests with normal user
- set_fact:
gen_pw: password123! + {{ lookup('password', '/dev/null chars=ascii_letters,digits length=8') }}
- name: create test user
win_user:
name: '{{win_stat_user}}'
password: '{{gen_pw}}'
update_password: always
groups: Users
- name: get become user profile dir so we can clean it up later
vars: &become_vars
ansible_become_user: '{{win_stat_user}}'
ansible_become_password: '{{gen_pw}}'
ansible_become_method: runas
ansible_become: yes
win_shell: $env:USERPROFILE
register: profile_dir_out
- name: ensure profile dir contains test username (eg, if become fails silently, prevent deletion of real user profile)
assert:
that:
- win_stat_user in profile_dir_out.stdout_lines[0]
- name: test stat with non admin user on a normal file
vars: *become_vars
win_stat:
path: '{{win_stat_dir}}\nested\file.ps1'
register: user_file
- name: assert test stat with non admin user on a normal file
assert:
that:
- user_file.stat.attributes == 'Archive'
- user_file.stat.checksum == 'a9993e364706816aba3e25717850c26c9cd0d89d'
- user_file.stat.creationtime == 1477984205
- user_file.stat.exists == True
- user_file.stat.extension == '.ps1'
- user_file.stat.filename == 'file.ps1'
- user_file.stat.hlnk_targets == []
- user_file.stat.isarchive == True
- user_file.stat.isdir == False
- user_file.stat.ishidden == False
- user_file.stat.isjunction == False
- user_file.stat.islnk == False
- user_file.stat.isreadonly == False
- user_file.stat.isreg == True
- user_file.stat.isshared == False
- user_file.stat.lastaccesstime == 1477984205
- user_file.stat.lastwritetime == 1477984205
- user_file.stat.md5 is not defined
- user_file.stat.nlink == 1
- user_file.stat.owner == 'BUILTIN\\Administrators'
- user_file.stat.path == win_stat_dir + '\\nested\\file.ps1'
- user_file.stat.size == 3
- name: test stat on a symbolic link as normal user
vars: *become_vars
win_stat:
path: '{{win_stat_dir}}\link'
register: user_symlink
- name: assert test stat on a symbolic link as normal user
assert:
that:
- user_symlink.stat.attributes == 'Directory, ReparsePoint'
- user_symlink.stat.creationtime is defined
- user_symlink.stat.exists == True
- user_symlink.stat.filename == 'link'
- user_symlink.stat.hlnk_targets == []
- user_symlink.stat.isarchive == False
- user_symlink.stat.isdir == True
- user_symlink.stat.ishidden == False
- user_symlink.stat.islnk == True
- user_symlink.stat.isjunction == False
- user_symlink.stat.isreadonly == False
- user_symlink.stat.isreg == False
- user_symlink.stat.isshared == False
- user_symlink.stat.lastaccesstime is defined
- user_symlink.stat.lastwritetime is defined
- user_symlink.stat.lnk_source == win_stat_dir + '\\link-dest'
- user_symlink.stat.lnk_target == win_stat_dir + '\\link-dest'
- user_symlink.stat.nlink == 1
- user_symlink.stat.owner == 'BUILTIN\\Administrators'
- user_symlink.stat.path == win_stat_dir + '\\link'
- user_symlink.stat.checksum is not defined
- user_symlink.stat.md5 is not defined
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,984 |
win_mapped_drive unable to add webdav network location
|
##### SUMMARY
Can't use webdav url for path in this module
<img width="592" alt="image" src="https://user-images.githubusercontent.com/8070665/73541233-ce143480-4429-11ea-8c03-016c608d0e10.png">
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_mapped_drive
##### ANSIBLE VERSION
```
ansible 2.9.3
config file = None
configured module search path = ['/Users/sianob/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.3/libexec/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.1 (default, Dec 27 2019, 18:05:45) [Clang 11.0.0 (clang-1100.0.33.16)]
```
##### OS / ENVIRONMENT
mac-os mojave
##### STEPS TO REPRODUCE
```
- name: Create mapped drive with credentials that do not persist on the next logon
win_mapped_drive:
letter: Z
path: "{{ artifactory_url }}"
state: present
username: "{{ artifactory_user }}"
password: "{{ artifactory_token }}"
```
##### EXPECTED RESULTS
mount the webdav share
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
TASK [win_common : Create mapped drive with credentials that do not persist on the next logon] *************************************
fatal: [server]: FAILED! => {"changed": false, "msg": "argument for path is of type System.String and we were unable to convert to path: The given path's format is not supported."}
```
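For context, the error text matches what .NET Framework path normalisation raises for URL-style strings, so the failure most likely happens in the module's `path` type conversion rather than on the WebDAV share itself. A minimal sketch, assuming it is run in plain Windows PowerShell outside the module:
```powershell
# Illustrative: URL-style values are rejected by .NET Framework path parsing
[System.IO.Path]::GetFullPath('https://artifactory.example.com/webdav/repo')
# -> System.NotSupportedException: The given path's format is not supported.
```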
|
https://github.com/ansible/ansible/issues/66984
|
https://github.com/ansible/ansible/pull/67111
|
12e3adb23a793844baaf4d91b798a1b418c75179
|
f23cee214592cb252a96ad808c5d99ca99b81826
| 2020-01-31T13:01:48Z |
python
| 2020-02-05T03:23:52Z |
lib/ansible/modules/windows/win_mapped_drive.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# this is a windows documentation stub, actual code lives in the .ps1
# file of the same name
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: win_mapped_drive
version_added: '2.4'
short_description: Map network drives for users
description:
- Allows you to modify mapped network drives for individual users.
options:
letter:
description:
- The letter of the network path to map to.
- This letter must not already be in use with Windows.
type: str
required: yes
password:
description:
- The password for C(username) that is used when testing the initial
connection.
- This is never saved with a mapped drive, use the M(win_credential) module
to persist a username and password for a host.
type: str
path:
description:
- The UNC path to map the drive to.
- This is required if C(state=present).
- If C(state=absent) and I(path) is not set, the module will delete the
mapped drive regardless of the target.
- If C(state=absent) and the I(path) is set, the module will throw an error
if path does not match the target of the mapped drive.
type: path
state:
description:
- If C(present) will ensure the mapped drive exists.
- If C(absent) will ensure the mapped drive does not exist.
type: str
choices: [ absent, present ]
default: present
username:
description:
- The username that is used when testing the initial connection.
- This is never saved with a mapped drive, use the M(win_credential) module
to persist a username and password for a host.
- This is required if the mapped drive requires authentication with
custom credentials and become, or CredSSP cannot be used.
- If become or CredSSP is used, any credentials saved with
M(win_credential) will automatically be used instead.
type: str
notes:
- You cannot use this module to access a mapped drive in another Ansible task,
drives mapped with this module are only accessible when logging in
interactively with the user through the console or RDP.
- It is recommended to run this module with become or CredSSP when the remote
path requires authentication.
- When using become or CredSSP, the task will have access to any local
credentials stored in the user's vault.
- If become or CredSSP is not available, the I(username) and I(password)
options can be used for the initial authentication but these are not
persisted.
seealso:
- module: win_credential
author:
- Jordan Borean (@jborean93)
'''
EXAMPLES = r'''
- name: Create a mapped drive under Z
win_mapped_drive:
letter: Z
path: \\domain\appdata\accounting
- name: Delete any mapped drives under Z
win_mapped_drive:
letter: Z
state: absent
- name: Only delete the mapped drive Z if the paths match (error is thrown otherwise)
win_mapped_drive:
letter: Z
path: \\domain\appdata\accounting
state: absent
- name: Create mapped drive with credentials and save the username and password
block:
- name: Save the network credentials required for the mapped drive
win_credential:
name: server
type: domain_password
username: username@DOMAIN
secret: Password01
state: present
- name: Create a mapped drive that requires authentication
win_mapped_drive:
letter: M
path: \\SERVER\C$
state: present
vars:
# become is required to save and retrieve the credentials in the tasks
ansible_become: yes
ansible_become_method: runas
ansible_become_user: '{{ ansible_user }}'
ansible_become_pass: '{{ ansible_password }}'
- name: Create mapped drive with credentials that do not persist on the next logon
win_mapped_drive:
letter: M
path: \\SERVER\C$
state: present
username: '{{ ansible_user }}'
password: '{{ ansible_password }}'
'''
RETURN = r'''
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,074 |
Add Custom Properties to ovirt_host_network
|
##### SUMMARY
Add field to ovirt_host_network to configure "Custom Properties" (from oVirt)
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ovirt_host_network
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
This feature will be used to configure specific options during the host's network setup (e.g. custom properties for FCoE)
```yaml
- ovirt_host_network:
name: myhost
interface: eth0-fcoe
networks:
- name: myvlan1
- name: myvlan2
custom_properties:
- type: fcoe
options: "enable=yes,dcb=no,auto_vlan=yes"
```
|
https://github.com/ansible/ansible/issues/67074
|
https://github.com/ansible/ansible/pull/67117
|
822077fefd9929018bea480e2561015f9c65ffae
|
52f2081e62a8d12dcd7be31aac84b3ab9d105c90
| 2020-02-04T07:36:19Z |
python
| 2020-02-05T12:03:32Z |
lib/ansible/modules/cloud/ovirt/ovirt_host_network.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2016, 2018 Red Hat, Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ovirt_host_network
short_description: Module to manage host networks in oVirt/RHV
version_added: "2.3"
author: "Ondra Machacek (@machacekondra)"
description:
- "Module to manage host networks in oVirt/RHV."
options:
name:
description:
- "Name of the host to manage networks for."
required: true
aliases:
- 'host'
state:
description:
- "Should the host be present/absent."
choices: ['present', 'absent']
default: present
bond:
description:
- "Dictionary describing network bond:"
- "C(name) - Bond name."
- "C(mode) - Bonding mode."
- "C(options) - Bonding options."
- "C(interfaces) - List of interfaces to create a bond."
interface:
description:
- "Name of the network interface where logical network should be attached."
networks:
description:
- "List of dictionary describing networks to be attached to interface or bond:"
- "C(name) - Name of the logical network to be assigned to bond or interface."
- "C(boot_protocol) - Boot protocol one of the I(none), I(static) or I(dhcp)."
- "C(address) - IP address in case of I(static) boot protocol is used."
- "C(netmask) - Subnet mask in case of I(static) boot protocol is used."
- "C(gateway) - Gateway in case of I(static) boot protocol is used."
- "C(version) - IP version. Either v4 or v6. Default is v4."
labels:
description:
- "List of names of the network label to be assigned to bond or interface."
check:
description:
- "If I(true) verify connectivity between host and engine."
- "Network configuration changes will be rolled back if connectivity between
engine and the host is lost after changing network configuration."
type: bool
save:
description:
- "If I(true) network configuration will be persistent, otherwise it is temporary. Default I(true) since Ansible 2.8."
type: bool
default: True
sync_networks:
description:
- "If I(true) all networks will be synchronized before modification"
type: bool
default: false
version_added: 2.8
extends_documentation_fragment: ovirt
'''
EXAMPLES = '''
# Examples don't contain auth parameter for simplicity,
# look at ovirt_auth module to see how to reuse authentication:
# In all examples the durability of the configuration created is dependent on the 'save' option value:
# Create bond on eth0 and eth1 interface, and put 'myvlan' network on top of it and persist the new configuration:
- name: Bonds
ovirt_host_network:
name: myhost
save: yes
bond:
name: bond0
mode: 2
interfaces:
- eth1
- eth2
networks:
- name: myvlan
boot_protocol: static
address: 1.2.3.4
netmask: 255.255.255.0
gateway: 1.2.3.4
version: v4
# Create bond on eth1 and eth2 interface, specifying both mode and miimon:
- name: Bonds
ovirt_host_network:
name: myhost
bond:
name: bond0
mode: 1
options:
miimon: 200
interfaces:
- eth1
- eth2
# Remove bond0 bond from host interfaces:
- ovirt_host_network:
state: absent
name: myhost
bond:
name: bond0
# Assign myvlan1 and myvlan2 vlans to host eth0 interface:
- ovirt_host_network:
name: myhost
interface: eth0
networks:
- name: myvlan1
- name: myvlan2
# Remove myvlan2 vlan from host eth0 interface:
- ovirt_host_network:
state: absent
name: myhost
interface: eth0
networks:
- name: myvlan2
# Remove all networks/vlans from host eth0 interface:
- ovirt_host_network:
state: absent
name: myhost
interface: eth0
'''
RETURN = '''
id:
description: ID of the host NIC which is managed
returned: On success if host NIC is found.
type: str
sample: 7de90f31-222c-436c-a1ca-7e655bd5b60c
host_nic:
description: "Dictionary of all the host NIC attributes. Host NIC attributes can be found on your oVirt/RHV instance
at following url: http://ovirt.github.io/ovirt-engine-api-model/master/#types/host_nic."
returned: On success if host NIC is found.
type: dict
'''
import traceback
try:
import ovirtsdk4.types as otypes
except ImportError:
pass
from ansible.module_utils import six
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ovirt import (
BaseModule,
check_sdk,
create_connection,
equal,
get_dict_of_struct,
get_entity,
get_link_name,
ovirt_full_argument_spec,
search_by_name,
engine_supported
)
def get_bond_options(mode, usr_opts):
MIIMON_100 = dict(miimon='100')
DEFAULT_MODE_OPTS = {
'1': MIIMON_100,
'2': MIIMON_100,
'3': MIIMON_100,
'4': dict(xmit_hash_policy='2', **MIIMON_100)
}
options = []
if mode is None:
return options
def get_type_name(mode_number):
"""
We need to maintain these type strings for the __compare_options method,
for easier comparison.
"""
modes = [
'Active-Backup',
'Load balance (balance-xor)',
None,
'Dynamic link aggregation (802.3ad)',
]
if (not 0 < mode_number <= len(modes)):
return None
return modes[mode_number - 1]
try:
mode_number = int(mode)
except ValueError:
raise Exception('Bond mode must be a number.')
options.append(
otypes.Option(
name='mode',
type=get_type_name(mode_number),
value=str(mode_number)
)
)
opts_dict = DEFAULT_MODE_OPTS.get(str(mode), {})
if usr_opts is not None:
opts_dict.update(**usr_opts)
options.extend(
[otypes.Option(name=opt, value=str(value))
for opt, value in six.iteritems(opts_dict)]
)
return options
class HostNetworksModule(BaseModule):
def __compare_options(self, new_options, old_options):
return sorted((get_dict_of_struct(opt) for opt in new_options),
key=lambda x: x["name"]) != sorted((get_dict_of_struct(opt) for opt in old_options),
key=lambda x: x["name"])
def build_entity(self):
return otypes.Host()
def update_address(self, attachments_service, attachment, network):
# Check if there is any change in address assignments and
# update it if needed:
for ip in attachment.ip_address_assignments:
if str(ip.ip.version) == network.get('version', 'v4'):
changed = False
if not equal(network.get('boot_protocol'), str(ip.assignment_method)):
ip.assignment_method = otypes.BootProtocol(network.get('boot_protocol'))
changed = True
if not equal(network.get('address'), ip.ip.address):
ip.ip.address = network.get('address')
changed = True
if not equal(network.get('gateway'), ip.ip.gateway):
ip.ip.gateway = network.get('gateway')
changed = True
if not equal(network.get('netmask'), ip.ip.netmask):
ip.ip.netmask = network.get('netmask')
changed = True
if changed:
if not self._module.check_mode:
attachments_service.service(attachment.id).update(attachment)
self.changed = True
break
def has_update(self, nic_service):
update = False
bond = self._module.params['bond']
networks = self._module.params['networks']
labels = self._module.params['labels']
nic = get_entity(nic_service)
if nic is None:
return update
# Check if bond configuration should be updated:
if bond:
update = self.__compare_options(get_bond_options(bond.get('mode'), bond.get('options')), getattr(nic.bonding, 'options', []))
update = update or not equal(
sorted(bond.get('interfaces')) if bond.get('interfaces') else None,
sorted(get_link_name(self._connection, s) for s in nic.bonding.slaves)
)
# Check if labels need to be updated on interface/bond:
if labels:
net_labels = nic_service.network_labels_service().list()
# If any labels the user passed aren't assigned, relabel the interface:
if sorted(labels) != sorted([lbl.id for lbl in net_labels]):
return True
if not networks:
return update
# Check if networks attachments configuration should be updated:
attachments_service = nic_service.network_attachments_service()
network_names = [network.get('name') for network in networks]
attachments = {}
for attachment in attachments_service.list():
name = get_link_name(self._connection, attachment.network)
if name in network_names:
attachments[name] = attachment
for network in networks:
attachment = attachments.get(network.get('name'))
# If the attachment doesn't exist, we need to create it:
if attachment is None:
return True
self.update_address(attachments_service, attachment, network)
return update
def _action_save_configuration(self, entity):
if not self._module.check_mode:
self._service.service(entity.id).commit_net_config()
self.changed = True
def needs_sync(nics_service):
nics = nics_service.list()
for nic in nics:
nic_service = nics_service.nic_service(nic.id)
for network_attachment_service in nic_service.network_attachments_service().list():
if not network_attachment_service.in_sync:
return True
return False
def main():
argument_spec = ovirt_full_argument_spec(
state=dict(
choices=['present', 'absent'],
default='present',
),
name=dict(aliases=['host'], required=True),
bond=dict(default=None, type='dict'),
interface=dict(default=None),
networks=dict(default=None, type='list'),
labels=dict(default=None, type='list'),
check=dict(default=None, type='bool'),
save=dict(default=True, type='bool'),
sync_networks=dict(default=False, type='bool'),
)
module = AnsibleModule(argument_spec=argument_spec)
check_sdk(module)
try:
auth = module.params.pop('auth')
connection = create_connection(auth)
hosts_service = connection.system_service().hosts_service()
host_networks_module = HostNetworksModule(
connection=connection,
module=module,
service=hosts_service,
)
host = host_networks_module.search_entity()
if host is None:
raise Exception("Host '%s' was not found." % module.params['name'])
bond = module.params['bond']
interface = module.params['interface']
networks = module.params['networks']
labels = module.params['labels']
nic_name = bond.get('name') if bond else module.params['interface']
host_service = hosts_service.host_service(host.id)
nics_service = host_service.nics_service()
nic = search_by_name(nics_service, nic_name)
if module.params["sync_networks"]:
if needs_sync(nics_service):
if not module.check_mode:
host_service.sync_all_networks()
host_networks_module.changed = True
network_names = [network['name'] for network in networks or []]
state = module.params['state']
if (
state == 'present' and
(nic is None or host_networks_module.has_update(nics_service.service(nic.id)))
):
# Remove networks which are attached to a different interface than the user wants:
attachments_service = host_service.network_attachments_service()
# Append the attachment ID to the network if it needs an update:
for a in attachments_service.list():
current_network_name = get_link_name(connection, a.network)
if current_network_name in network_names:
for n in networks:
if n['name'] == current_network_name:
n['id'] = a.id
# Check if we have to break some bonds:
removed_bonds = []
if nic is not None:
for host_nic in nics_service.list():
if host_nic.bonding and nic.id in [slave.id for slave in host_nic.bonding.slaves]:
removed_bonds.append(otypes.HostNic(id=host_nic.id))
# Assign the networks:
setup_params = dict(
entity=host,
action='setup_networks',
check_connectivity=module.params['check'],
removed_bonds=removed_bonds if removed_bonds else None,
modified_bonds=[
otypes.HostNic(
name=bond.get('name'),
bonding=otypes.Bonding(
options=get_bond_options(bond.get('mode'), bond.get('options')),
slaves=[
otypes.HostNic(name=i) for i in bond.get('interfaces', [])
],
),
),
] if bond else None,
modified_labels=[
otypes.NetworkLabel(
id=str(name),
host_nic=otypes.HostNic(
name=bond.get('name') if bond else interface
),
) for name in labels
] if labels else None,
modified_network_attachments=[
otypes.NetworkAttachment(
id=network.get('id'),
network=otypes.Network(
name=network['name']
) if network['name'] else None,
host_nic=otypes.HostNic(
name=bond.get('name') if bond else interface
),
ip_address_assignments=[
otypes.IpAddressAssignment(
assignment_method=otypes.BootProtocol(
network.get('boot_protocol', 'none')
),
ip=otypes.Ip(
address=network.get('address'),
gateway=network.get('gateway'),
netmask=network.get('netmask'),
version=otypes.IpVersion(
network.get('version')
) if network.get('version') else None,
),
),
],
) for network in networks
] if networks else None,
)
if engine_supported(connection, '4.3'):
setup_params['commit_on_success'] = module.params['save']
elif module.params['save']:
setup_params['post_action'] = host_networks_module._action_save_configuration
host_networks_module.action(**setup_params)
elif state == 'absent' and nic:
attachments = []
nic_service = nics_service.nic_service(nic.id)
attached_labels = set([str(lbl.id) for lbl in nic_service.network_labels_service().list()])
if networks:
attachments_service = nic_service.network_attachments_service()
attachments = attachments_service.list()
attachments = [
attachment for attachment in attachments
if get_link_name(connection, attachment.network) in network_names
]
# Remove unmanaged networks:
unmanaged_networks_service = host_service.unmanaged_networks_service()
unmanaged_networks = [(u.id, u.name) for u in unmanaged_networks_service.list()]
for net_id, net_name in unmanaged_networks:
if net_name in network_names:
if not module.check_mode:
unmanaged_networks_service.unmanaged_network_service(net_id).remove()
host_networks_module.changed = True
# Need to check if there are any labels to be removed, as the backend fails
# if we try to remove a non-existing label; for bonds and attachments it's OK:
if (labels and set(labels).intersection(attached_labels)) or bond or attachments:
setup_params = dict(
entity=host,
action='setup_networks',
check_connectivity=module.params['check'],
removed_bonds=[
otypes.HostNic(
name=bond.get('name'),
),
] if bond else None,
removed_labels=[
otypes.NetworkLabel(id=str(name)) for name in labels
] if labels else None,
removed_network_attachments=attachments if attachments else None,
)
if engine_supported(connection, '4.3'):
setup_params['commit_on_success'] = module.params['save']
elif module.params['save']:
setup_params['post_action'] = host_networks_module._action_save_configuration
host_networks_module.action(**setup_params)
nic = search_by_name(nics_service, nic_name)
module.exit_json(**{
'changed': host_networks_module.changed,
'id': nic.id if nic else None,
'host_nic': get_dict_of_struct(nic),
})
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
finally:
connection.close(logout=auth.get('token') is None)
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,149 |
How to distribute Azure VMs in Availability Zones
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The current 'Microsoft Azure Guide' does not provide enough guidance for placing Azure VMs in **availability zones**, so I suggest adding some notes to the guide.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
Microsoft Azure Guide(docs/docsite/rst/scenario_guides/guide_azure.rst)
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
```paste below
$ ansible --version
ansible 2.9.2
config file = None
configured module search path = ['/Users/mitsuru/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/mitsuru/.local/share/virtualenvs/hive-builder-bq6LUPdg/lib/python3.7/site-packages/ansible
executable location = /Users/mitsuru/.local/share/virtualenvs/hive-builder-bq6LUPdg/bin/ansible
python version = 3.7.3 (default, Jun 7 2019, 11:23:14) [Clang 10.0.1 (clang-1001.0.46.4)]
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
Mac OS catalina 10.15.2
##### ADDITIONAL INFORMATION
When I tried to place VMs in availability zones, I struggled with the following (a hedged example sketch follows this list):
- OS disks and data disks must be 'Managed Disks', not 'Unmanaged Disks', to place a VM in an availability zone.
- When creating a VM with azure_rm_virtualmachine, you need to explicitly specify managed_disk_type to make the OS disk a 'Managed Disk'. Otherwise, the OS disk becomes an 'Unmanaged Disk'.
- When creating a data disk with azure_rm_manageddisk, you need to explicitly specify storage_account_type to make it a 'Managed Disk'. Otherwise, the data disk will be an 'Unmanaged Disk'.
- Unlike an 'Unmanaged Disk', a 'Managed Disk' does not require a 'Storage Account' or 'Storage Container'.
In particular, note that once a VM is created on an 'Unmanaged Disk', an unnecessary 'Storage Container' named "vhds" is automatically created.
- When creating an IP address with azure_rm_publicipaddress, you must set the 'sku' property to 'standard'. Otherwise, the IP address cannot be used in availability zones.
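For illustration only, here is a hedged sketch of tasks that follow the notes above. The resource and object names are placeholders, and the parameter names (`zones`, `zone`, `managed_disk_type`, `storage_account_type`, `sku`) are taken from the `azure_rm_*` module documentation; please verify them against the Ansible version in use.
```yaml
- name: Create a standard-SKU public IP that can be used in availability zones
  azure_rm_publicipaddress:
    resource_group: Testing
    name: zonedip001
    allocation_method: Static
    sku: standard          # exact casing may differ between Ansible versions

- name: Create a managed data disk pinned to zone 1 (assumed 'zone' parameter)
  azure_rm_manageddisk:
    resource_group: Testing
    name: zoneddisk001
    disk_size_gb: 10
    storage_account_type: Premium_LRS
    zone: "1"

- name: Create a VM in zone 1 with a managed OS disk (assumed 'zones' parameter)
  azure_rm_virtualmachine:
    resource_group: Testing
    name: zonedvm001
    vm_size: Standard_DS1_v2
    managed_disk_type: Premium_LRS
    zones: ["1"]
    admin_username: azureuser
    ssh_password_enabled: false
    ssh_public_keys: "{{ ssh_keys }}"
    image:
      offer: CentOS
      publisher: OpenLogic
      sku: '7.5'
      version: latest
```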
|
https://github.com/ansible/ansible/issues/66149
|
https://github.com/ansible/ansible/pull/66200
|
0a8f5aba747b8da5ff6afd3f886371452eba646b
|
c55ba658c68b0fd6a2cbc26920b66278514818ec
| 2020-01-01T02:37:15Z |
python
| 2020-02-05T17:04:47Z |
docs/docsite/rst/scenario_guides/guide_azure.rst
|
Microsoft Azure Guide
=====================
Ansible includes a suite of modules for interacting with Azure Resource Manager, giving you the tools to easily create
and orchestrate infrastructure on the Microsoft Azure Cloud.
Requirements
------------
Using the Azure Resource Manager modules requires having specific Azure SDK modules
installed on the host running Ansible.
.. code-block:: bash
$ pip install 'ansible[azure]'
If you are running Ansible from source, you can install the dependencies from the
root directory of the Ansible repo.
.. code-block:: bash
$ pip install .[azure]
You can also directly run Ansible in `Azure Cloud Shell <https://shell.azure.com>`_, where Ansible is pre-installed.
Authenticating with Azure
-------------------------
Using the Azure Resource Manager modules requires authenticating with the Azure API. You can choose from two authentication strategies:
* Active Directory Username/Password
* Service Principal Credentials
Follow the directions for the strategy you wish to use, then proceed to `Providing Credentials to Azure Modules`_ for
instructions on how to actually use the modules and authenticate with the Azure API.
Using Service Principal
.......................
There is now a detailed official tutorial describing `how to create a service principal <https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal>`_.
After stepping through the tutorial you will have:
* Your Client ID, which is found in the "client id" box in the "Configure" page of your application in the Azure portal
* Your Secret key, generated when you created the application. You cannot show the key after creation.
If you lost the key, you must create a new one in the "Configure" page of your application.
* And finally, a tenant ID. It's a UUID (e.g. ABCDEFGH-1234-ABCD-1234-ABCDEFGHIJKL) pointing to the AD containing your
application. You will find it in the URL from within the Azure portal, or in the "view endpoints" of any given URL.
Using Active Directory Username/Password
........................................
To create an Active Directory username/password:
* Connect to the Azure Classic Portal with your admin account
* Create a user in your default AAD. You must NOT activate Multi-Factor Authentication
* Go to Settings - Administrators
* Click on Add and enter the email of the new user.
* Check the checkbox of the subscription you want to test with this user.
* Login to Azure Portal with this new user to change the temporary password to a new one. You will not be able to use the
temporary password for OAuth login.
Providing Credentials to Azure Modules
......................................
The modules offer several ways to provide your credentials. For a CI/CD tool such as Ansible Tower or Jenkins, you will
most likely want to use environment variables. For local development you may wish to store your credentials in a file
within your home directory. And of course, you can always pass credentials as parameters to a task within a playbook. The
order of precedence is parameters, then environment variables, and finally a file found in your home directory.
Using Environment Variables
```````````````````````````
To pass service principal credentials via the environment, define the following variables:
* AZURE_CLIENT_ID
* AZURE_SECRET
* AZURE_SUBSCRIPTION_ID
* AZURE_TENANT
To pass Active Directory username/password via the environment, define the following variables:
* AZURE_AD_USER
* AZURE_PASSWORD
* AZURE_SUBSCRIPTION_ID
To pass Active Directory username/password in ADFS via the environment, define the following variables:
* AZURE_AD_USER
* AZURE_PASSWORD
* AZURE_CLIENT_ID
* AZURE_TENANT
* AZURE_ADFS_AUTHORITY_URL
"AZURE_ADFS_AUTHORITY_URL" is optional. It's necessary only when you have own ADFS authority like https://yourdomain.com/adfs.
Storing in a File
`````````````````
When working in a development environment, it may be desirable to store credentials in a file. The modules will look
for credentials in ``$HOME/.azure/credentials``. This file is an ini style file. It will look as follows:
.. code-block:: ini
[default]
subscription_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
client_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
secret=xxxxxxxxxxxxxxxxx
tenant=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
.. note:: If your secret values contain non-ASCII characters, you must `URL Encode <https://www.w3schools.com/tags/ref_urlencode.asp>`_ them to avoid login errors.
It is possible to store multiple sets of credentials within the credentials file by creating multiple sections. Each
section is considered a profile. The modules look for the [default] profile automatically. Define AZURE_PROFILE in the
environment or pass a profile parameter to specify a specific profile.
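As a small illustrative sketch (not in the original guide), assuming the credentials file also contains a ``[staging]`` section with the same keys as ``[default]``, a task can select that profile through the ``profile`` parameter:
.. code-block:: yaml
    - name: Use the credentials stored under the [staging] profile
      azure_rm_resourcegroup_info:
        name: Testing
        profile: staging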
Passing as Parameters
`````````````````````
If you wish to pass credentials as parameters to a task, use the following parameters for service principal:
* client_id
* secret
* subscription_id
* tenant
Or, pass the following parameters for Active Directory username/password:
* ad_user
* password
* subscription_id
Or, pass the following parameters for ADFS username/password:
* ad_user
* password
* client_id
* tenant
* adfs_authority_url
"adfs_authority_url" is optional. It's necessary only when you have own ADFS authority like https://yourdomain.com/adfs.
Other Cloud Environments
------------------------
To use an Azure Cloud other than the default public cloud (eg, Azure China Cloud, Azure US Government Cloud, Azure Stack),
pass the "cloud_environment" argument to modules, configure it in a credential profile, or set the "AZURE_CLOUD_ENVIRONMENT"
environment variable. The value is either a cloud name as defined by the Azure Python SDK (eg, "AzureChinaCloud",
"AzureUSGovernment"; defaults to "AzureCloud") or an Azure metadata discovery URL (for Azure Stack).
Creating Virtual Machines
-------------------------
There are two ways to create a virtual machine, both involving the azure_rm_virtualmachine module. We can either create
a storage account, network interface, security group and public IP address and pass the names of these objects to the
module as parameters, or we can let the module do the work for us and accept the defaults it chooses.
Creating Individual Components
..............................
An Azure module is available to help you create a storage account, virtual network, subnet, network interface,
security group and public IP. Here is a full example of creating each of these and passing the names to the
azure_rm_virtualmachine module at the end:
.. code-block:: yaml
- name: Create storage account
azure_rm_storageaccount:
resource_group: Testing
name: testaccount001
account_type: Standard_LRS
- name: Create virtual network
azure_rm_virtualnetwork:
resource_group: Testing
name: testvn001
address_prefixes: "10.10.0.0/16"
- name: Add subnet
azure_rm_subnet:
resource_group: Testing
name: subnet001
address_prefix: "10.10.0.0/24"
virtual_network: testvn001
- name: Create public ip
azure_rm_publicipaddress:
resource_group: Testing
allocation_method: Static
name: publicip001
- name: Create security group that allows SSH
azure_rm_securitygroup:
resource_group: Testing
name: secgroup001
rules:
- name: SSH
protocol: Tcp
destination_port_range: 22
access: Allow
priority: 101
direction: Inbound
- name: Create NIC
azure_rm_networkinterface:
resource_group: Testing
name: testnic001
virtual_network: testvn001
subnet: subnet001
public_ip_name: publicip001
security_group: secgroup001
- name: Create virtual machine
azure_rm_virtualmachine:
resource_group: Testing
name: testvm001
vm_size: Standard_D1
storage_account: testaccount001
storage_container: testvm001
storage_blob: testvm001.vhd
admin_username: admin
admin_password: Password!
network_interfaces: testnic001
image:
offer: CentOS
publisher: OpenLogic
sku: '7.1'
version: latest
Each of the Azure modules offers a variety of parameter options. Not all options are demonstrated in the above example.
See each individual module for further details and examples.
Creating a Virtual Machine with Default Options
...............................................
If you simply want to create a virtual machine without specifying all the details, you can do that as well. The only
caveat is that you will need a virtual network with one subnet already in your resource group. Assuming you have a
virtual network already with an existing subnet, you can run the following to create a VM:
.. code-block:: yaml
azure_rm_virtualmachine:
resource_group: Testing
name: testvm10
vm_size: Standard_D1
admin_username: chouseknecht
ssh_password_enabled: false
ssh_public_keys: "{{ ssh_keys }}"
image:
offer: CentOS
publisher: OpenLogic
sku: '7.1'
version: latest
Dynamic Inventory Script
------------------------
If you are not familiar with Ansible's dynamic inventory scripts, check out :ref:`Intro to Dynamic Inventory <intro_dynamic_inventory>`.
The Azure Resource Manager inventory script is called `azure_rm.py <https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/azure_rm.py>`_. It authenticates with the Azure API exactly the same as the
Azure modules, which means you will either define the same environment variables described above in `Using Environment Variables`_,
create a ``$HOME/.azure/credentials`` file (also described above in `Storing in a File`_), or pass command line parameters. To see available command
line options execute the following:
.. code-block:: bash
$ ./ansible/contrib/inventory/azure_rm.py --help
As with all dynamic inventory scripts, the script can be executed directly, passed as a parameter to the ansible command,
or passed directly to ansible-playbook using the -i option. No matter how it is executed the script produces JSON representing
all of the hosts found in your Azure subscription. You can narrow this down to just hosts found in a specific set of
Azure resource groups, or even down to a specific host.
For a given host, the inventory script provides the following host variables:
.. code-block:: JSON
{
"ansible_host": "XXX.XXX.XXX.XXX",
"computer_name": "computer_name2",
"fqdn": null,
"id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Compute/virtualMachines/object-name",
"image": {
"offer": "CentOS",
"publisher": "OpenLogic",
"sku": "7.1",
"version": "latest"
},
"location": "westus",
"mac_address": "00-00-5E-00-53-FE",
"name": "object-name",
"network_interface": "interface-name",
"network_interface_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/networkInterfaces/object-name1",
"network_security_group": null,
"network_security_group_id": null,
"os_disk": {
"name": "object-name",
"operating_system_type": "Linux"
},
"plan": null,
"powerstate": "running",
"private_ip": "172.26.3.6",
"private_ip_alloc_method": "Static",
"provisioning_state": "Succeeded",
"public_ip": "XXX.XXX.XXX.XXX",
"public_ip_alloc_method": "Static",
"public_ip_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/publicIPAddresses/object-name",
"public_ip_name": "object-name",
"resource_group": "galaxy-production",
"security_group": "object-name",
"security_group_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/networkSecurityGroups/object-name",
"tags": {
"db": "mysql"
},
"type": "Microsoft.Compute/virtualMachines",
"virtual_machine_size": "Standard_DS4"
}
Host Groups
...........
By default hosts are grouped by:
* azure (all hosts)
* location name
* resource group name
* security group name
* tag key
* tag key_value
* os_disk operating_system_type (Windows/Linux)
You can control host groupings and host selection by either defining environment variables or creating an
azure_rm.ini file in your current working directory.
NOTE: An .ini file will take precedence over environment variables.
NOTE: The name of the .ini file is the basename of the inventory script (i.e. 'azure_rm') with a '.ini'
extension. This allows you to copy, rename and customize the inventory script and have matching .ini files all in
the same directory.
Control grouping using the following variables defined in the environment:
* AZURE_GROUP_BY_RESOURCE_GROUP=yes
* AZURE_GROUP_BY_LOCATION=yes
* AZURE_GROUP_BY_SECURITY_GROUP=yes
* AZURE_GROUP_BY_TAG=yes
* AZURE_GROUP_BY_OS_FAMILY=yes
Select hosts within specific resource groups by assigning a comma separated list to:
* AZURE_RESOURCE_GROUPS=resource_group_a,resource_group_b
Select hosts for specific tag key by assigning a comma separated list of tag keys to:
* AZURE_TAGS=key1,key2,key3
Select hosts for specific locations by assigning a comma separated list of locations to:
* AZURE_LOCATIONS=eastus,eastus2,westus
Or, select hosts for specific tag key:value pairs by assigning a comma separated list key:value pairs to:
* AZURE_TAGS=key1:value1,key2:value2
If you don't need the powerstate, you can improve performance by turning off powerstate fetching:
* AZURE_INCLUDE_POWERSTATE=no
A sample azure_rm.ini file is included along with the inventory script in contrib/inventory. An .ini
file will contain the following:
.. code-block:: ini
[azure]
# Control which resource groups are included. By default all resource groups are included.
# Set resource_groups to a comma separated list of resource groups names.
#resource_groups=
# Control which tags are included. Set tags to a comma separated list of keys or key:value pairs
#tags=
# Control which locations are included. Set locations to a comma separated list of locations.
#locations=
# Include powerstate. If you don't need powerstate information, turning it off improves runtime performance.
# Valid values: yes, no, true, false, True, False, 0, 1.
include_powerstate=yes
# Control grouping with the following boolean flags. Valid values: yes, no, true, false, True, False, 0, 1.
group_by_resource_group=yes
group_by_location=yes
group_by_security_group=yes
group_by_tag=yes
group_by_os_family=yes
Examples
........
Here are some examples using the inventory script:
.. code-block:: bash
# Execute /bin/uname on all instances in the Testing resource group
$ ansible -i azure_rm.py Testing -m shell -a "/bin/uname -a"
# Execute win_ping on all Windows instances
$ ansible -i azure_rm.py windows -m win_ping
# Execute ping on all Linux instances
$ ansible -i azure_rm.py linux -m ping
# Use the inventory script to print instance specific information
$ ./ansible/contrib/inventory/azure_rm.py --host my_instance_host_name --resource-groups=Testing --pretty
# Use the inventory script with ansible-playbook
$ ansible-playbook -i ./ansible/contrib/inventory/azure_rm.py test_playbook.yml
Here is a simple playbook to exercise the Azure inventory script:
.. code-block:: yaml
- name: Test the inventory script
hosts: azure
connection: local
gather_facts: no
tasks:
- debug: msg="{{ inventory_hostname }} has powerstate {{ powerstate }}"
You can execute the playbook with something like:
.. code-block:: bash
$ ansible-playbook -i ./ansible/contrib/inventory/azure_rm.py test_azure_inventory.yml
Disabling certificate validation on Azure endpoints
...................................................
When an HTTPS proxy is present, or when using Azure Stack, it may be necessary to disable certificate validation for
Azure endpoints in the Azure modules. This is not a recommended security practice, but may be necessary when the system
CA store cannot be altered to include the necessary CA certificate. Certificate validation can be controlled by setting
the "cert_validation_mode" value in a credential profile, via the "AZURE_CERT_VALIDATION_MODE" environment variable, or
by passing the "cert_validation_mode" argument to any Azure module. The default value is "validate"; setting the value
to "ignore" will prevent all certificate validation. The module argument takes precedence over a credential profile value,
which takes precedence over the environment value.
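As a hedged sketch of the per-task form described above (the module and resource group name are placeholders), and only for cases where the CA store genuinely cannot be fixed:
.. code-block:: yaml
    - name: Query a resource group on an Azure Stack endpoint without certificate validation
      azure_rm_resourcegroup_info:
        name: Testing
        cert_validation_mode: ignore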
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,956 |
Modules using add_file_common_args=True and the files document fragment have multiple undocumented arguments
|
##### SUMMARY
If a module sets `add_file_common_args=True` when calling `AnsibleModule`, all elements from [FILE_COMMON_ARGUMENTS](https://github.com/ansible/ansible/blob/e9d29b1fe4285d90d7a4506b80260a9e24c3bcea/lib/ansible/module_utils/basic.py#L229-L257) are included in the argument spec. The [files document fragment](https://github.com/ansible/ansible/blob/e9d29b1fe4285d90d7a4506b80260a9e24c3bcea/lib/ansible/plugins/doc_fragments/files.py#L15-L80) only documents a subset of them, though. Missing are:
- src
- follow
- force
- content
- backup
- remote_src
- regexp
- delimiter
- directory_mode
Most module authors using `add_file_common_args=True` are probably not aware that their modules have these options as well.
I don't think these extra options should be added by default if `add_file_common_args=True` is specified.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/basic.py
##### ANSIBLE VERSION
```
devel
```
|
https://github.com/ansible/ansible/issues/64956
|
https://github.com/ansible/ansible/pull/66389
|
802cc602429ea2b37eb7d75a8bb1dc2ebcfc05e1
|
f725dce9368dc4d33c2cddd4790c57e1d00496f0
| 2019-11-17T13:42:30Z |
python
| 2020-02-07T23:56:01Z |
changelogs/fragments/66389-file-common-arguments.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,956 |
Modules using add_file_common_args=True and the files document fragment have multiple undocumented arguments
|
##### SUMMARY
If a module sets `add_file_common_args=True` when calling `AnsibleModule`, all elements from [FILE_COMMON_ARGUMENTS](https://github.com/ansible/ansible/blob/e9d29b1fe4285d90d7a4506b80260a9e24c3bcea/lib/ansible/module_utils/basic.py#L229-L257) are included in the argument spec. The [files document fragment](https://github.com/ansible/ansible/blob/e9d29b1fe4285d90d7a4506b80260a9e24c3bcea/lib/ansible/plugins/doc_fragments/files.py#L15-L80) only documents a subset of them, though. Missing are:
- src
- follow
- force
- content
- backup
- remote_src
- regexp
- delimiter
- directory_mode
Most module authors using `add_file_common_args=True` are probably not aware that their modules have these options as well.
I don't think these extra options should be added by default if `add_file_common_args=True` is specified.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/basic.py
##### ANSIBLE VERSION
```
devel
```
|
https://github.com/ansible/ansible/issues/64956
|
https://github.com/ansible/ansible/pull/66389
|
802cc602429ea2b37eb7d75a8bb1dc2ebcfc05e1
|
f725dce9368dc4d33c2cddd4790c57e1d00496f0
| 2019-11-17T13:42:30Z |
python
| 2020-02-07T23:56:01Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
* Windows Server 2008 and 2008 R2 will no longer be supported or tested in the next Ansible release, see :ref:`windows_faq_server2008`.
* The :ref:`win_stat <win_stat_module>` module has removed the deprecated ``get_md5`` option and ``md5`` return value.
* The :ref:`win_psexec <win_psexec_module>` module has removed the deprecated ``extra_opts`` option.
Modules
=======
Modules removed
---------------
The following modules no longer exist:
* letsencrypt use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ldap_attr use :ref:`ldap_attrs <ldap_attrs_module>` instead.
The following functionality will be removed in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`openssl_csr <openssl_csr_module>` module's option ``version`` no longer supports values other than ``1`` (the current only standardized CSR version).
* :ref:`docker_container <docker_container_module>`: the ``trust_image_content`` option will be removed. It has always been ignored by the module.
* :ref:`iam_managed_policy <iam_managed_policy_module>`: the ``fail_on_delete`` option will be removed. It has always been ignored by the module.
* :ref:`s3_lifecycle <s3_lifecycle_module>`: the ``requester_pays`` option will be removed. It has always been ignored by the module.
* :ref:`s3_sync <s3_sync_module>`: the ``retries`` option will be removed. It has always been ignored by the module.
* The return values ``err`` and ``out`` of :ref:`docker_stack <docker_stack_module>` have been deprecated. Use ``stdout`` and ``stderr`` from now on instead.
* :ref:`cloudformation <cloudformation_module>`: the ``template_format`` option will be removed. It has been ignored by the module since Ansible 2.3.
* :ref:`data_pipeline <data_pipeline_module>`: the ``version`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_eip <ec2_eip_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.3.
* :ref:`ec2_key <ec2_key_module>`: the ``wait`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_key <ec2_key_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_lc <ec2_lc_module>`: the ``associate_public_ip_address`` option will be removed. It has always been ignored by the module.
* :ref:`iam_policy <iam_policy_module>`: the ``policy_document`` option will be removed. To maintain the existing behavior use the ``policy_json`` option and read the file with the ``lookup`` plugin.
* :ref:`redfish_config <redfish_config_module>`: the ``bios_attribute_name`` and ``bios_attribute_value`` options will be removed. To maintain the existing behavior use the ``bios_attributes`` option instead.
* :ref:`clc_aa_policy <clc_aa_policy_module>`: the ``wait`` parameter will be removed. It has always been ignored by the module.
* :ref:`redfish_config <redfish_config_module>`, :ref:`redfish_command <redfish_command_module>`: the behavior to select the first System, Manager, or Chassis resource to modify when multiple are present will be removed. Use the new ``resource_id`` option to specify target resource to modify.
* :ref:`win_domain_controller <win_domain_controller_module>`: the ``log_path`` option will be removed. This was undocumented and only related to debugging information for module development.
The following functionality will change in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`docker_container <docker_container_module>` module has a new option, ``container_default_behavior``, whose default value will change from ``compatibility`` to ``no_defaults``. Set to an explicit value to avoid deprecation warnings.
* The :ref:`docker_container <docker_container_module>` module's ``network_mode`` option will be set by default to the name of the first network in ``networks`` if at least one network is given and ``networks_cli_compatible`` is ``true`` (will be default from Ansible 2.12 on). Set to an explicit value to avoid deprecation warnings if you specify networks and set ``networks_cli_compatible`` to ``true``. The current default (not specifying it) is equivalent to the value ``default``.
* :ref:`ec2 <ec2_module>`: the ``group`` and ``group_id`` options will become mutually exclusive. Currently ``group_id`` is ignored if you pass both.
* :ref:`iam_policy <iam_policy_module>`: the default value for the ``skip_duplicates`` option will change from ``true`` to ``false``. To maintain the existing behavior explicitly set it to ``true``.
* :ref:`iam_role <iam_role_module>`: the ``purge_policies`` option (also known as ``purge_policy``) default value will change from ``true`` to ``false``.
* :ref:`elb_network_lb <elb_network_lb_module>`: the default behaviour for the ``state`` option will change from ``absent`` to ``present``. To maintain the existing behavior explicitly set state to ``absent``.
* :ref:`vmware_tag_info <vmware_tag_info_module>`: the module will not return ``tag_facts`` since it does not return multiple tags with the same name and different category id. To maintain the existing behavior use ``tag_info`` which is a list of tag metadata.
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ``vmware_dns_config`` use :ref:`vmware_host_dns <vmware_host_dns_module>` instead.
Noteworthy module changes
-------------------------
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in :ref:`pacman <pacman_module>` module has been removed, you should use ``extra_args=--recursive`` instead.
* :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module does not require VM name which was a required parameter for releases prior to Ansible 2.10.
* :ref:`zabbix_action <zabbix_action_module>` no longer requires ``esc_period`` and ``event_source`` arguments when ``state=absent``.
* :ref:`zabbix_proxy <zabbix_proxy_module>` deprecates ``interface`` sub-options ``type`` and ``main`` when proxy type is set to passive via ``status=passive``. Make sure these suboptions are removed from your playbook as they were never supported by Zabbix in the first place.
* :ref:`gitlab_user <gitlab_user_module>` no longer requires ``name``, ``email`` and ``password`` arguments when ``state=absent``.
* :ref:`win_pester <win_pester_module>` no longer runs all ``*.ps1`` files in the specified directory, since doing so could execute potentially unknown scripts. It follows Pester's own default behaviour of only running tests in files matching ``*.tests.ps1``.
* :ref:`win_find <win_find_module>` has been refactored to better match the behaviour of the ``find`` module. Here is what has changed:
* When the directory specified by ``paths`` does not exist or is a file, it will no longer fail and will just warn the user
* Junction points are no longer reported as ``islnk``, use ``isjunction`` to properly report these files. This behaviour matches the :ref:`win_stat <win_stat_module>` module.
* Directories no longer return a ``size``, this matches the ``stat`` and ``find`` behaviour and has been removed due to the difficulties in correctly reporting the size of a directory
* :ref:`docker_container <docker_container_module>` no longer passes information on non-anonymous volumes or binds as ``Volumes`` to the Docker daemon. This increases compatibility with the ``docker`` CLI program. Note that if you specify ``volumes: strict`` in ``comparisons``, this could cause existing containers created with docker_container from Ansible 2.9 or earlier to restart.
* :ref:`docker_container <docker_container_module>`'s support for port ranges was adjusted to be more compatible to the ``docker`` command line utility: a one-port container range combined with a multiple-port host range will no longer result in only the first host port be used, but the whole range being passed to Docker so that a free port in that range will be used.
* :ref:`purefb_fs <purefb_fs_module>` no longer supports the deprecated ``nfs`` option. This has been superseded by ``nfsv3``.
Plugins
=======
Lookup plugin names case-sensitivity
------------------------------------
* Prior to Ansible ``2.10`` lookup plugin names passed in as an argument to the ``lookup()`` function were treated as case-insensitive as opposed to lookups invoked via ``with_<lookup_name>``. ``2.10`` brings consistency to ``lookup()`` and ``with_`` to be both case-sensitive.
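For example (an illustrative sketch, not from the changelog), a lookup name that previously relied on case-insensitive matching must now match the plugin name exactly:
.. code-block:: yaml
    # Worked before 2.10 because lookup() names were matched case-insensitively;
    # fails in 2.10 and later.
    - debug:
        msg: "{{ lookup('File', '/etc/hostname') }}"
    # Case-sensitive form that works across versions.
    - debug:
        msg: "{{ lookup('file', '/etc/hostname') }}"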
Noteworthy plugin changes
-------------------------
* The ``hashi_vault`` lookup plugin now returns the latest version when using the KV v2 secrets engine. Previously, it returned all versions of the secret which required additional steps to extract and filter the desired version.
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,956 |
Modules using add_file_common_args=True and the files document fragment have multiple undocumented arguments
|
##### SUMMARY
If a module sets `add_file_common_args=True` when calling `AnsibleModule`, all elements from [FILE_COMMON_ARGUMENTS](https://github.com/ansible/ansible/blob/e9d29b1fe4285d90d7a4506b80260a9e24c3bcea/lib/ansible/module_utils/basic.py#L229-L257) are included in the argument spec. The [files document fragment](https://github.com/ansible/ansible/blob/e9d29b1fe4285d90d7a4506b80260a9e24c3bcea/lib/ansible/plugins/doc_fragments/files.py#L15-L80) only documents a subset of them, though. Missing are:
- src
- follow
- force
- content
- backup
- remote_src
- regexp
- delimiter
- directory_mode
Most module authors using `add_file_common_args=True` are probably not aware that their modules have these options as well.
I don't think these extra options should be added by default if `add_file_common_args=True` is specified.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/basic.py
##### ANSIBLE VERSION
```
devel
```
|
https://github.com/ansible/ansible/issues/64956
|
https://github.com/ansible/ansible/pull/66389
|
802cc602429ea2b37eb7d75a8bb1dc2ebcfc05e1
|
f725dce9368dc4d33c2cddd4790c57e1d00496f0
| 2019-11-17T13:42:30Z |
python
| 2020-02-07T23:56:01Z |
lib/ansible/module_utils/basic.py
|
# Copyright (c), Michael DeHaan <[email protected]>, 2012-2013
# Copyright (c), Toshio Kuratomi <[email protected]> 2016
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
from __future__ import absolute_import, division, print_function
FILE_ATTRIBUTES = {
'A': 'noatime',
'a': 'append',
'c': 'compressed',
'C': 'nocow',
'd': 'nodump',
'D': 'dirsync',
'e': 'extents',
'E': 'encrypted',
'h': 'blocksize',
'i': 'immutable',
'I': 'indexed',
'j': 'journalled',
'N': 'inline',
's': 'zero',
'S': 'synchronous',
't': 'notail',
'T': 'blockroot',
'u': 'undelete',
'X': 'compressedraw',
'Z': 'compresseddirty',
}
# Ansible modules can be written in any language.
# The functions available here can be used to do many common tasks,
# to simplify development of Python modules.
import __main__
import atexit
import errno
import datetime
import grp
import fcntl
import locale
import os
import pwd
import platform
import re
import select
import shlex
import shutil
import signal
import stat
import subprocess
import sys
import tempfile
import time
import traceback
import types
from collections import deque
from itertools import chain, repeat
try:
import syslog
HAS_SYSLOG = True
except ImportError:
HAS_SYSLOG = False
try:
from systemd import journal
has_journal = True
except ImportError:
has_journal = False
HAVE_SELINUX = False
try:
import selinux
HAVE_SELINUX = True
except ImportError:
pass
# Python2 & 3 way to get NoneType
NoneType = type(None)
from ._text import to_native, to_bytes, to_text
from ansible.module_utils.common.text.converters import (
jsonify,
container_to_bytes as json_dict_unicode_to_bytes,
container_to_text as json_dict_bytes_to_unicode,
)
from ansible.module_utils.common.text.formatters import (
lenient_lowercase,
bytes_to_human,
human_to_bytes,
SIZE_RANGES,
)
try:
from ansible.module_utils.common._json_compat import json
except ImportError as e:
print('\n{{"msg": "Error: ansible requires the stdlib json: {0}", "failed": true}}'.format(to_native(e)))
sys.exit(1)
AVAILABLE_HASH_ALGORITHMS = dict()
try:
import hashlib
# python 2.7.9+ and 2.7.0+
for attribute in ('available_algorithms', 'algorithms'):
algorithms = getattr(hashlib, attribute, None)
if algorithms:
break
if algorithms is None:
# python 2.5+
algorithms = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
for algorithm in algorithms:
AVAILABLE_HASH_ALGORITHMS[algorithm] = getattr(hashlib, algorithm)
# we may have been able to import md5 but it could still not be available
try:
hashlib.md5()
except ValueError:
AVAILABLE_HASH_ALGORITHMS.pop('md5', None)
except Exception:
import sha
AVAILABLE_HASH_ALGORITHMS = {'sha1': sha.sha}
try:
import md5
AVAILABLE_HASH_ALGORITHMS['md5'] = md5.md5
except Exception:
pass
from ansible.module_utils.common._collections_compat import (
KeysView,
Mapping, MutableMapping,
Sequence, MutableSequence,
Set, MutableSet,
)
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.file import (
_PERM_BITS as PERM_BITS,
_EXEC_PERM_BITS as EXEC_PERM_BITS,
_DEFAULT_PERM as DEFAULT_PERM,
is_executable,
format_attributes,
get_flags_from_attributes,
)
from ansible.module_utils.common.sys_info import (
get_distribution,
get_distribution_version,
get_platform_subclass,
)
from ansible.module_utils.pycompat24 import get_exception, literal_eval
from ansible.module_utils.common.parameters import (
handle_aliases,
list_deprecations,
list_no_log_values,
PASS_VARS,
PASS_BOOLS,
)
from ansible.module_utils.six import (
PY2,
PY3,
b,
binary_type,
integer_types,
iteritems,
string_types,
text_type,
)
from ansible.module_utils.six.moves import map, reduce, shlex_quote
from ansible.module_utils.common.validation import (
check_missing_parameters,
check_mutually_exclusive,
check_required_arguments,
check_required_by,
check_required_if,
check_required_one_of,
check_required_together,
count_terms,
check_type_bool,
check_type_bits,
check_type_bytes,
check_type_float,
check_type_int,
check_type_jsonarg,
check_type_list,
check_type_dict,
check_type_path,
check_type_raw,
check_type_str,
safe_eval,
)
from ansible.module_utils.common._utils import get_all_subclasses as _get_all_subclasses
from ansible.module_utils.parsing.convert_bool import BOOLEANS, BOOLEANS_FALSE, BOOLEANS_TRUE, boolean
from ansible.module_utils.common.warnings import (
deprecate,
get_deprecation_messages,
get_warning_messages,
warn,
)
# Note: When getting Sequence from collections, it matches with strings. If
# this matters, make sure to check for strings before checking for sequencetype
SEQUENCETYPE = frozenset, KeysView, Sequence
PASSWORD_MATCH = re.compile(r'^(?:.+[-_\s])?pass(?:[-_\s]?(?:word|phrase|wrd|wd)?)(?:[-_\s].+)?$', re.I)
imap = map
try:
# Python 2
unicode
except NameError:
# Python 3
unicode = text_type
try:
# Python 2
basestring
except NameError:
# Python 3
basestring = string_types
_literal_eval = literal_eval
# End of deprecated names
# Internal global holding passed in params. This is consulted in case
# multiple AnsibleModules are created. Otherwise each AnsibleModule would
# attempt to read from stdin. Other code should not use this directly as it
# is an internal implementation detail
_ANSIBLE_ARGS = None
FILE_COMMON_ARGUMENTS = dict(
# These are things we want. About setting metadata (mode, ownership, permissions in general) on
# created files (these are used by set_fs_attributes_if_different and included in
# load_file_common_arguments)
mode=dict(type='raw'),
owner=dict(),
group=dict(),
seuser=dict(),
serole=dict(),
selevel=dict(),
setype=dict(),
attributes=dict(aliases=['attr']),
# The following are not about perms and should not be in a rewritten file_common_args
src=dict(), # Maybe dest or path would be appropriate but src is not
follow=dict(type='bool', default=False), # Maybe follow is appropriate because it determines whether to follow symlinks for permission purposes too
force=dict(type='bool'),
# not taken by the file module, but other action plugins call the file module so this ignores
# them for now. In the future, the caller should take care of removing these from the module
# arguments before calling the file module.
content=dict(no_log=True), # used by copy
backup=dict(), # Used by a few modules to create a remote backup before updating the file
remote_src=dict(), # used by assemble
regexp=dict(), # used by assemble
delimiter=dict(), # used by assemble
directory_mode=dict(), # used by copy
unsafe_writes=dict(type='bool'), # should be available to any module using atomic_move
)
PASSWD_ARG_RE = re.compile(r'^[-]{0,2}pass[-]?(word|wd)?')
# Used for parsing symbolic file perms
MODE_OPERATOR_RE = re.compile(r'[+=-]')
USERS_RE = re.compile(r'[^ugo]')
PERMS_RE = re.compile(r'[^rwxXstugo]')
# Used for determining if the system is running a new enough python version
# and should only restrict on our documented minimum versions
_PY3_MIN = sys.version_info[:2] >= (3, 5)
_PY2_MIN = (2, 6) <= sys.version_info[:2] < (3,)
_PY_MIN = _PY3_MIN or _PY2_MIN
if not _PY_MIN:
print(
'\n{"failed": true, '
'"msg": "Ansible requires a minimum of Python2 version 2.6 or Python3 version 3.5. Current version: %s"}' % ''.join(sys.version.splitlines())
)
sys.exit(1)
#
# Deprecated functions
#
def get_platform():
'''
**Deprecated** Use :py:func:`platform.system` directly.
:returns: Name of the platform the module is running on in a native string
Returns a native string that labels the platform ("Linux", "Solaris", etc). Currently, this is
the result of calling :py:func:`platform.system`.
'''
return platform.system()
# End deprecated functions
#
# Compat shims
#
def load_platform_subclass(cls, *args, **kwargs):
"""**Deprecated**: Use ansible.module_utils.common.sys_info.get_platform_subclass instead"""
platform_cls = get_platform_subclass(cls)
return super(cls, platform_cls).__new__(platform_cls)
def get_all_subclasses(cls):
"""**Deprecated**: Use ansible.module_utils.common._utils.get_all_subclasses instead"""
return list(_get_all_subclasses(cls))
# End compat shims
def _remove_values_conditions(value, no_log_strings, deferred_removals):
"""
Helper function for :meth:`remove_values`.
:arg value: The value to check for strings that need to be stripped
:arg no_log_strings: set of strings which must be stripped out of any values
:arg deferred_removals: List which holds information about nested
containers that have to be iterated for removals. It is passed into
this function so that more entries can be added to it if value is
a container type. The format of each entry is a 2-tuple where the first
element is the ``value`` parameter and the second value is a new
container to copy the elements of ``value`` into once iterated.
:returns: if ``value`` is a scalar, returns ``value`` with two exceptions:
1. :class:`~datetime.datetime` objects which are changed into a string representation.
2. objects which are in no_log_strings are replaced with a placeholder
so that no sensitive data is leaked.
If ``value`` is a container type, returns a new empty container.
``deferred_removals`` is added to as a side-effect of this function.
.. warning:: It is up to the caller to make sure the order in which value
is passed in is correct. For instance, higher level containers need
to be passed in before lower level containers. For example, given
``{'level1': {'level2': {'level3': [True]}}}`` first pass in the
dictionary for ``level1``, then the dict for ``level2``, and finally
the list for ``level3``.
"""
if isinstance(value, (text_type, binary_type)):
# Need native str type
native_str_value = value
if isinstance(value, text_type):
value_is_text = True
if PY2:
native_str_value = to_bytes(value, errors='surrogate_or_strict')
elif isinstance(value, binary_type):
value_is_text = False
if PY3:
native_str_value = to_text(value, errors='surrogate_or_strict')
if native_str_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
native_str_value = native_str_value.replace(omit_me, '*' * 8)
if value_is_text and isinstance(native_str_value, binary_type):
value = to_text(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
elif not value_is_text and isinstance(native_str_value, text_type):
value = to_bytes(native_str_value, encoding='utf-8', errors='surrogate_then_replace')
else:
value = native_str_value
elif isinstance(value, Sequence):
if isinstance(value, MutableSequence):
new_value = type(value)()
else:
new_value = [] # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Set):
if isinstance(value, MutableSet):
new_value = type(value)()
else:
new_value = set() # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, Mapping):
if isinstance(value, MutableMapping):
new_value = type(value)()
else:
new_value = {} # Need a mutable value
deferred_removals.append((value, new_value))
value = new_value
elif isinstance(value, tuple(chain(integer_types, (float, bool, NoneType)))):
stringy_value = to_native(value, encoding='utf-8', errors='surrogate_or_strict')
if stringy_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
if omit_me in stringy_value:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
elif isinstance(value, datetime.datetime):
value = value.isoformat()
else:
raise TypeError('Value of unknown type: %s, %s' % (type(value), value))
return value
def remove_values(value, no_log_strings):
""" Remove strings in no_log_strings from value. If value is a container
type, then remove a lot more"""
deferred_removals = deque()
no_log_strings = [to_native(s, errors='surrogate_or_strict') for s in no_log_strings]
new_value = _remove_values_conditions(value, no_log_strings, deferred_removals)
while deferred_removals:
old_data, new_data = deferred_removals.popleft()
if isinstance(new_data, Mapping):
for old_key, old_elem in old_data.items():
new_elem = _remove_values_conditions(old_elem, no_log_strings, deferred_removals)
new_data[old_key] = new_elem
else:
for elem in old_data:
new_elem = _remove_values_conditions(elem, no_log_strings, deferred_removals)
if isinstance(new_data, MutableSequence):
new_data.append(new_elem)
elif isinstance(new_data, MutableSet):
new_data.add(new_elem)
else:
raise TypeError('Unknown container type encountered when removing private values from output')
return new_value
def heuristic_log_sanitize(data, no_log_values=None):
''' Remove strings that look like passwords from log messages '''
# Currently filters:
# user:pass@foo/whatever and http://username:pass@wherever/foo
# This code has false positives and consumes parts of logs that are
# not passwds
# begin: start of a passwd containing string
# end: end of a passwd containing string
# sep: char between user and passwd
# prev_begin: where in the overall string to start a search for
# a passwd
# sep_search_end: where in the string to end a search for the sep
data = to_native(data)
output = []
begin = len(data)
prev_begin = begin
sep = 1
while sep:
# Find the potential end of a passwd
try:
end = data.rindex('@', 0, begin)
except ValueError:
# No passwd in the rest of the data
output.insert(0, data[0:begin])
break
# Search for the beginning of a passwd
sep = None
sep_search_end = end
while not sep:
# URL-style username+password
try:
begin = data.rindex('://', 0, sep_search_end)
except ValueError:
# No url style in the data, check for ssh style in the
# rest of the string
begin = 0
# Search for separator
try:
sep = data.index(':', begin + 3, end)
except ValueError:
# No separator; choices:
if begin == 0:
# Searched the whole string so there's no password
# here. Return the remaining data
output.insert(0, data[0:begin])
break
# Search for a different beginning of the password field.
sep_search_end = begin
continue
if sep:
# Password was found; remove it.
output.insert(0, data[end:prev_begin])
output.insert(0, '********')
output.insert(0, data[begin:sep + 1])
prev_begin = begin
output = ''.join(output)
if no_log_values:
output = remove_values(output, no_log_values)
return output
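# Illustrative sketch (comment only, hypothetical values): the heuristic above
# looks for "user:pass@host" shapes, so embedded URL credentials get scrubbed:
#
#     sanitized = heuristic_log_sanitize('fetch https://bob:s3cret@example.com/repo')
#     # sanitized == 'fetch https://bob:********@example.com/repo'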
def _load_params():
''' read the modules parameters and store them globally.
This function may be needed for certain very dynamic custom modules which
    want to process the parameters that are being handed to the module. Since
this is so closely tied to the implementation of modules we cannot
guarantee API stability for it (it may change between versions) however we
will try not to break it gratuitously. It is certainly more future-proof
to call this function and consume its outputs than to implement the logic
inside it as a copy in your own code.
'''
global _ANSIBLE_ARGS
if _ANSIBLE_ARGS is not None:
buffer = _ANSIBLE_ARGS
else:
# debug overrides to read args from file or cmdline
# Avoid tracebacks when locale is non-utf8
# We control the args and we pass them as utf8
if len(sys.argv) > 1:
if os.path.isfile(sys.argv[1]):
fd = open(sys.argv[1], 'rb')
buffer = fd.read()
fd.close()
else:
buffer = sys.argv[1]
if PY3:
buffer = buffer.encode('utf-8', errors='surrogateescape')
# default case, read from stdin
else:
if PY2:
buffer = sys.stdin.read()
else:
buffer = sys.stdin.buffer.read()
_ANSIBLE_ARGS = buffer
try:
params = json.loads(buffer.decode('utf-8'))
except ValueError:
# This helper used too early for fail_json to work.
print('\n{"msg": "Error: Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed", "failed": true}')
sys.exit(1)
if PY2:
params = json_dict_unicode_to_bytes(params)
try:
return params['ANSIBLE_MODULE_ARGS']
except KeyError:
# This helper does not have access to fail_json so we have to print
# json output on our own.
print('\n{"msg": "Error: Module unable to locate ANSIBLE_MODULE_ARGS in json data from stdin. Unable to figure out what parameters were passed", '
'"failed": true}')
sys.exit(1)
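# Illustrative sketch (comment only): a very dynamic custom module could inspect
# the raw parameters before building its AnsibleModule, e.g. to construct an
# argument_spec on the fly. The filtering shown here is just an example.
#
#     raw_params = _load_params()
#     user_keys = [k for k in raw_params if not k.startswith('_ansible_')]
#     # ... derive an argument_spec from user_keys, then instantiate AnsibleModule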
def env_fallback(*args, **kwargs):
''' Load value from environment '''
for arg in args:
if arg in os.environ:
return os.environ[arg]
raise AnsibleFallbackNotFound
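# Illustrative sketch (comment only): env_fallback is normally referenced from an
# argument_spec 'fallback' entry so an option can be read from the environment
# when it is not supplied explicitly. The option and variable names are examples.
#
#     argument_spec = dict(
#         api_token=dict(type='str', no_log=True,
#                        fallback=(env_fallback, ['EXAMPLE_API_TOKEN'])),
#     )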
def missing_required_lib(library, reason=None, url=None):
hostname = platform.node()
msg = "Failed to import the required Python library (%s) on %s's Python %s." % (library, hostname, sys.executable)
if reason:
msg += " This is required %s." % reason
if url:
msg += " See %s for more info." % url
msg += (" Please read module documentation and install in the appropriate location."
" If the required library is installed, but Ansible is using the wrong Python interpreter,"
" please consult the documentation on ansible_python_interpreter")
return msg
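# Illustrative sketch (comment only): the usual pattern is to record the import
# failure at the top of a module and report it via missing_required_lib() once
# an AnsibleModule instance exists. 'requests' is just an example dependency.
#
#     import traceback
#     try:
#         import requests
#         HAS_REQUESTS = True
#     except ImportError:
#         REQUESTS_IMP_ERR = traceback.format_exc()
#         HAS_REQUESTS = False
#
#     # later, inside main(), after module = AnsibleModule(...):
#     if not HAS_REQUESTS:
#         module.fail_json(msg=missing_required_lib('requests'),
#                          exception=REQUESTS_IMP_ERR)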
class AnsibleFallbackNotFound(Exception):
pass
class AnsibleModule(object):
def __init__(self, argument_spec, bypass_checks=False, no_log=False,
mutually_exclusive=None, required_together=None,
required_one_of=None, add_file_common_args=False,
supports_check_mode=False, required_if=None, required_by=None):
'''
Common code for quickly building an ansible module in Python
(although you can write modules with anything that can return JSON).
See :ref:`developing_modules_general` for a general introduction
and :ref:`developing_program_flow_modules` for more detailed explanation.
'''
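        # Illustrative sketch (comment only; the option names are hypothetical):
        #
        #     module = AnsibleModule(
        #         argument_spec=dict(
        #             name=dict(type='str', required=True),
        #             state=dict(type='str', default='present',
        #                        choices=['present', 'absent']),
        #         ),
        #         supports_check_mode=True,
        #     )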
self._name = os.path.basename(__file__) # initialize name until we can parse from options
self.argument_spec = argument_spec
self.supports_check_mode = supports_check_mode
self.check_mode = False
self.bypass_checks = bypass_checks
self.no_log = no_log
self.mutually_exclusive = mutually_exclusive
self.required_together = required_together
self.required_one_of = required_one_of
self.required_if = required_if
self.required_by = required_by
self.cleanup_files = []
self._debug = False
self._diff = False
self._socket_path = None
self._shell = None
self._verbosity = 0
# May be used to set modifications to the environment for any
# run_command invocation
self.run_command_environ_update = {}
self._clean = {}
self._string_conversion_action = ''
self.aliases = {}
self._legal_inputs = []
self._options_context = list()
self._tmpdir = None
if add_file_common_args:
for k, v in FILE_COMMON_ARGUMENTS.items():
if k not in self.argument_spec:
self.argument_spec[k] = v
self._load_params()
self._set_fallbacks()
# append to legal_inputs and then possibly check against them
try:
self.aliases = self._handle_aliases()
except (ValueError, TypeError) as e:
# Use exceptions here because it isn't safe to call fail_json until no_log is processed
print('\n{"failed": true, "msg": "Module alias error: %s"}' % to_native(e))
sys.exit(1)
# Save parameter values that should never be logged
self.no_log_values = set()
self._handle_no_log_values()
# check the locale as set by the current environment, and reset to
# a known valid (LANG=C) if it's an invalid/unavailable locale
self._check_locale()
self._check_arguments()
# check exclusive early
if not bypass_checks:
self._check_mutually_exclusive(mutually_exclusive)
self._set_defaults(pre=True)
self._CHECK_ARGUMENT_TYPES_DISPATCHER = {
'str': self._check_type_str,
'list': self._check_type_list,
'dict': self._check_type_dict,
'bool': self._check_type_bool,
'int': self._check_type_int,
'float': self._check_type_float,
'path': self._check_type_path,
'raw': self._check_type_raw,
'jsonarg': self._check_type_jsonarg,
'json': self._check_type_jsonarg,
'bytes': self._check_type_bytes,
'bits': self._check_type_bits,
}
if not bypass_checks:
self._check_required_arguments()
self._check_argument_types()
self._check_argument_values()
self._check_required_together(required_together)
self._check_required_one_of(required_one_of)
self._check_required_if(required_if)
self._check_required_by(required_by)
self._set_defaults(pre=False)
# deal with options sub-spec
self._handle_options()
if not self.no_log:
self._log_invocation()
# finally, make sure we're in a sane working dir
self._set_cwd()
@property
def tmpdir(self):
# if _ansible_tmpdir was not set and we have a remote_tmp,
# the module needs to create it and clean it up once finished.
# otherwise we create our own module tmp dir from the system defaults
if self._tmpdir is None:
basedir = None
if self._remote_tmp is not None:
basedir = os.path.expanduser(os.path.expandvars(self._remote_tmp))
if basedir is not None and not os.path.exists(basedir):
try:
os.makedirs(basedir, mode=0o700)
except (OSError, IOError) as e:
                    self.warn("Unable to use %s as temporary directory, "
                              "falling back to system: %s" % (basedir, to_native(e)))
basedir = None
else:
self.warn("Module remote_tmp %s did not exist and was "
"created with a mode of 0700, this may cause"
" issues when running as another user. To "
"avoid this, create the remote_tmp dir with "
"the correct permissions manually" % basedir)
basefile = "ansible-moduletmp-%s-" % time.time()
try:
tmpdir = tempfile.mkdtemp(prefix=basefile, dir=basedir)
except (OSError, IOError) as e:
self.fail_json(
msg="Failed to create remote module tmp path at dir %s "
"with prefix %s: %s" % (basedir, basefile, to_native(e))
)
if not self._keep_remote_files:
atexit.register(shutil.rmtree, tmpdir)
self._tmpdir = tmpdir
return self._tmpdir
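    # Illustrative sketch (comment only, hypothetical file name): module code can
    # treat tmpdir as a ready-made scratch directory that is removed automatically
    # unless keep_remote_files is in effect.
    #
    #     staging = os.path.join(module.tmpdir, 'staging.conf')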
def warn(self, warning):
warn(warning)
self.log('[WARNING] %s' % warning)
def deprecate(self, msg, version=None):
deprecate(msg, version)
self.log('[DEPRECATION WARNING] %s %s' % (msg, version))
def load_file_common_arguments(self, params):
'''
many modules deal with files, this encapsulates common
options that the file module accepts such that it is directly
available to all modules and they can share code.
'''
path = params.get('path', params.get('dest', None))
if path is None:
return {}
else:
path = os.path.expanduser(os.path.expandvars(path))
b_path = to_bytes(path, errors='surrogate_or_strict')
# if the path is a symlink, and we're following links, get
# the target of the link instead for testing
if params.get('follow', False) and os.path.islink(b_path):
b_path = os.path.realpath(b_path)
path = to_native(b_path)
mode = params.get('mode', None)
owner = params.get('owner', None)
group = params.get('group', None)
# selinux related options
seuser = params.get('seuser', None)
serole = params.get('serole', None)
setype = params.get('setype', None)
selevel = params.get('selevel', None)
secontext = [seuser, serole, setype]
if self.selinux_mls_enabled():
secontext.append(selevel)
default_secontext = self.selinux_default_context(path)
for i in range(len(default_secontext)):
if i is not None and secontext[i] == '_default':
secontext[i] = default_secontext[i]
attributes = params.get('attributes', None)
return dict(
path=path, mode=mode, owner=owner, group=group,
seuser=seuser, serole=serole, setype=setype,
selevel=selevel, secontext=secontext, attributes=attributes,
)
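    # Illustrative sketch (comment only): modules created with
    # add_file_common_args=True typically feed their params through
    # load_file_common_arguments() and then apply the result, e.g.:
    #
    #     file_args = module.load_file_common_arguments(module.params)
    #     changed = module.set_fs_attributes_if_different(file_args, changed)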
# Detect whether using selinux that is MLS-aware.
# While this means you can set the level/range with
# selinux.lsetfilecon(), it may or may not mean that you
# will get the selevel as part of the context returned
# by selinux.lgetfilecon().
def selinux_mls_enabled(self):
if not HAVE_SELINUX:
return False
if selinux.is_selinux_mls_enabled() == 1:
return True
else:
return False
def selinux_enabled(self):
if not HAVE_SELINUX:
seenabled = self.get_bin_path('selinuxenabled')
if seenabled is not None:
(rc, out, err) = self.run_command(seenabled)
if rc == 0:
self.fail_json(msg="Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!")
return False
if selinux.is_selinux_enabled() == 1:
return True
else:
return False
# Determine whether we need a placeholder for selevel/mls
def selinux_initial_context(self):
context = [None, None, None]
if self.selinux_mls_enabled():
context.append(None)
return context
# If selinux fails to find a default, return an array of None
def selinux_default_context(self, path, mode=0):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.matchpathcon(to_native(path, errors='surrogate_or_strict'), mode)
except OSError:
return context
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def selinux_context(self, path):
context = self.selinux_initial_context()
if not HAVE_SELINUX or not self.selinux_enabled():
return context
try:
ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
except OSError as e:
if e.errno == errno.ENOENT:
self.fail_json(path=path, msg='path %s does not exist' % path)
else:
self.fail_json(path=path, msg='failed to retrieve selinux context')
if ret[0] == -1:
return context
# Limit split to 4 because the selevel, the last in the list,
# may contain ':' characters
context = ret[1].split(':', 3)
return context
def user_and_group(self, path, expand=True):
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
st = os.lstat(b_path)
uid = st.st_uid
gid = st.st_gid
return (uid, gid)
def find_mount_point(self, path):
path_is_bytes = False
if isinstance(path, binary_type):
path_is_bytes = True
b_path = os.path.realpath(to_bytes(os.path.expanduser(os.path.expandvars(path)), errors='surrogate_or_strict'))
while not os.path.ismount(b_path):
b_path = os.path.dirname(b_path)
if path_is_bytes:
return b_path
return to_text(b_path, errors='surrogate_or_strict')
def is_special_selinux_path(self, path):
"""
Returns a tuple containing (True, selinux_context) if the given path is on a
NFS or other 'special' fs mount point, otherwise the return will be (False, None).
"""
try:
f = open('/proc/mounts', 'r')
mount_data = f.readlines()
f.close()
except Exception:
return (False, None)
path_mount_point = self.find_mount_point(path)
for line in mount_data:
(device, mount_point, fstype, options, rest) = line.split(' ', 4)
if path_mount_point == mount_point:
for fs in self._selinux_special_fs:
if fs in fstype:
special_context = self.selinux_context(path_mount_point)
return (True, special_context)
return (False, None)
def set_default_selinux_context(self, path, changed):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
context = self.selinux_default_context(path)
return self.set_context_if_different(path, context, False)
def set_context_if_different(self, path, context, changed, diff=None):
if not HAVE_SELINUX or not self.selinux_enabled():
return changed
if self.check_file_absent_if_check_mode(path):
return True
cur_context = self.selinux_context(path)
new_context = list(cur_context)
# Iterate over the current context instead of the
# argument context, which may have selevel.
(is_special_se, sp_context) = self.is_special_selinux_path(path)
if is_special_se:
new_context = sp_context
else:
for i in range(len(cur_context)):
if len(context) > i:
if context[i] is not None and context[i] != cur_context[i]:
new_context[i] = context[i]
elif context[i] is None:
new_context[i] = cur_context[i]
if cur_context != new_context:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['secontext'] = cur_context
if 'after' not in diff:
diff['after'] = {}
diff['after']['secontext'] = new_context
try:
if self.check_mode:
return True
rc = selinux.lsetfilecon(to_native(path), ':'.join(new_context))
except OSError as e:
self.fail_json(path=path, msg='invalid selinux context: %s' % to_native(e),
new_context=new_context, cur_context=cur_context, input_was=context)
if rc != 0:
self.fail_json(path=path, msg='set selinux context failed')
changed = True
return changed
def set_owner_if_different(self, path, owner, changed, diff=None, expand=True):
if owner is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
uid = int(owner)
except ValueError:
try:
uid = pwd.getpwnam(owner).pw_uid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner)
if orig_uid != uid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['owner'] = orig_uid
if 'after' not in diff:
diff['after'] = {}
diff['after']['owner'] = uid
if self.check_mode:
return True
try:
os.lchown(b_path, uid, -1)
except (IOError, OSError) as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chown failed: %s' % (to_text(e)))
changed = True
return changed
def set_group_if_different(self, path, group, changed, diff=None, expand=True):
if group is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
orig_uid, orig_gid = self.user_and_group(b_path, expand)
try:
gid = int(group)
except ValueError:
try:
gid = grp.getgrnam(group).gr_gid
except KeyError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group)
if orig_gid != gid:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['group'] = orig_gid
if 'after' not in diff:
diff['after'] = {}
diff['after']['group'] = gid
if self.check_mode:
return True
try:
os.lchown(b_path, -1, gid)
except OSError:
path = to_text(b_path)
self.fail_json(path=path, msg='chgrp failed')
changed = True
return changed
def set_mode_if_different(self, path, mode, changed, diff=None, expand=True):
if mode is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
        if self.check_file_absent_if_check_mode(b_path):
            return True
        path_stat = os.lstat(b_path)
if not isinstance(mode, int):
try:
mode = int(mode, 8)
except Exception:
try:
mode = self._symbolic_mode_to_octal(path_stat, mode)
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path,
msg="mode must be in octal or symbolic form",
details=to_native(e))
if mode != stat.S_IMODE(mode):
            # prevent mode from having extra info or being an invalid long number
path = to_text(b_path)
self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode)
prev_mode = stat.S_IMODE(path_stat.st_mode)
if prev_mode != mode:
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['mode'] = '0%03o' % prev_mode
if 'after' not in diff:
diff['after'] = {}
diff['after']['mode'] = '0%03o' % mode
if self.check_mode:
return True
# FIXME: comparison against string above will cause this to be executed
# every time
try:
if hasattr(os, 'lchmod'):
os.lchmod(b_path, mode)
else:
if not os.path.islink(b_path):
os.chmod(b_path, mode)
else:
# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
underlying_stat = os.stat(b_path)
os.chmod(b_path, mode)
new_underlying_stat = os.stat(b_path)
if underlying_stat.st_mode != new_underlying_stat.st_mode:
os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))
except OSError as e:
if os.path.islink(b_path) and e.errno in (errno.EPERM, errno.EROFS): # Can't set mode on symbolic links
pass
elif e.errno in (errno.ENOENT, errno.ELOOP): # Can't set mode on broken symbolic links
pass
else:
raise
except Exception as e:
path = to_text(b_path)
self.fail_json(path=path, msg='chmod failed', details=to_native(e),
exception=traceback.format_exc())
path_stat = os.lstat(b_path)
new_mode = stat.S_IMODE(path_stat.st_mode)
if new_mode != prev_mode:
changed = True
return changed
def set_attributes_if_different(self, path, attributes, changed, diff=None, expand=True):
if attributes is None:
return changed
b_path = to_bytes(path, errors='surrogate_or_strict')
if expand:
b_path = os.path.expanduser(os.path.expandvars(b_path))
if self.check_file_absent_if_check_mode(b_path):
return True
existing = self.get_file_attributes(b_path)
attr_mod = '='
if attributes.startswith(('-', '+')):
attr_mod = attributes[0]
attributes = attributes[1:]
if existing.get('attr_flags', '') != attributes or attr_mod == '-':
attrcmd = self.get_bin_path('chattr')
if attrcmd:
attrcmd = [attrcmd, '%s%s' % (attr_mod, attributes), b_path]
changed = True
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
diff['before']['attributes'] = existing.get('attr_flags')
if 'after' not in diff:
diff['after'] = {}
diff['after']['attributes'] = '%s%s' % (attr_mod, attributes)
if not self.check_mode:
try:
rc, out, err = self.run_command(attrcmd)
if rc != 0 or err:
raise Exception("Error while setting attributes: %s" % (out + err))
except Exception as e:
self.fail_json(path=to_text(b_path), msg='chattr failed',
details=to_native(e), exception=traceback.format_exc())
return changed
def get_file_attributes(self, path):
output = {}
attrcmd = self.get_bin_path('lsattr', False)
if attrcmd:
attrcmd = [attrcmd, '-vd', path]
try:
rc, out, err = self.run_command(attrcmd)
if rc == 0:
res = out.split()
output['attr_flags'] = res[1].replace('-', '').strip()
output['version'] = res[0].strip()
output['attributes'] = format_attributes(output['attr_flags'])
except Exception:
pass
return output
@classmethod
def _symbolic_mode_to_octal(cls, path_stat, symbolic_mode):
"""
This enables symbolic chmod string parsing as stated in the chmod man-page
This includes things like: "u=rw-x+X,g=r-x+X,o=r-x+X"
"""
new_mode = stat.S_IMODE(path_stat.st_mode)
# Now parse all symbolic modes
for mode in symbolic_mode.split(','):
# Per single mode. This always contains a '+', '-' or '='
# Split it on that
permlist = MODE_OPERATOR_RE.split(mode)
# And find all the operators
opers = MODE_OPERATOR_RE.findall(mode)
            # The user(s) the permissions apply to come first in 'permlist'.
            # Take that element and remove it from the list.
# An empty user or 'a' means 'all'.
users = permlist.pop(0)
use_umask = (users == '')
if users == 'a' or users == '':
users = 'ugo'
# Check if there are illegal characters in the user list
# They can end up in 'users' because they are not split
if USERS_RE.match(users):
raise ValueError("bad symbolic permission for mode: %s" % mode)
            # Now we have two lists of equal length, one with the requested
            # permissions and one with the corresponding operators.
for idx, perms in enumerate(permlist):
# Check if there are illegal characters in the permissions
if PERMS_RE.match(perms):
raise ValueError("bad symbolic permission for mode: %s" % mode)
for user in users:
mode_to_apply = cls._get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask)
new_mode = cls._apply_operation_to_mode(user, opers[idx], mode_to_apply, new_mode)
return new_mode
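    # Illustrative sketch (comment only; this is a private helper, shown here just
    # to make the parsing above concrete, and the path is hypothetical):
    #
    #     st = os.lstat('/tmp/example.conf')
    #     octal = AnsibleModule._symbolic_mode_to_octal(st, 'u=rw,g=r,o=r')
    #     # octal == 0o644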
@staticmethod
def _apply_operation_to_mode(user, operator, mode_to_apply, current_mode):
if operator == '=':
if user == 'u':
mask = stat.S_IRWXU | stat.S_ISUID
elif user == 'g':
mask = stat.S_IRWXG | stat.S_ISGID
elif user == 'o':
mask = stat.S_IRWXO | stat.S_ISVTX
# mask out u, g, or o permissions from current_mode and apply new permissions
inverse_mask = mask ^ PERM_BITS
new_mode = (current_mode & inverse_mask) | mode_to_apply
elif operator == '+':
new_mode = current_mode | mode_to_apply
elif operator == '-':
new_mode = current_mode - (current_mode & mode_to_apply)
return new_mode
@staticmethod
def _get_octal_mode_from_symbolic_perms(path_stat, user, perms, use_umask):
prev_mode = stat.S_IMODE(path_stat.st_mode)
is_directory = stat.S_ISDIR(path_stat.st_mode)
has_x_permissions = (prev_mode & EXEC_PERM_BITS) > 0
apply_X_permission = is_directory or has_x_permissions
# Get the umask, if the 'user' part is empty, the effect is as if (a) were
# given, but bits that are set in the umask are not affected.
# We also need the "reversed umask" for masking
umask = os.umask(0)
os.umask(umask)
rev_umask = umask ^ PERM_BITS
# Permission bits constants documented at:
# http://docs.python.org/2/library/stat.html#stat.S_ISUID
if apply_X_permission:
X_perms = {
'u': {'X': stat.S_IXUSR},
'g': {'X': stat.S_IXGRP},
'o': {'X': stat.S_IXOTH},
}
else:
X_perms = {
'u': {'X': 0},
'g': {'X': 0},
'o': {'X': 0},
}
user_perms_to_modes = {
'u': {
'r': rev_umask & stat.S_IRUSR if use_umask else stat.S_IRUSR,
'w': rev_umask & stat.S_IWUSR if use_umask else stat.S_IWUSR,
'x': rev_umask & stat.S_IXUSR if use_umask else stat.S_IXUSR,
's': stat.S_ISUID,
't': 0,
'u': prev_mode & stat.S_IRWXU,
'g': (prev_mode & stat.S_IRWXG) << 3,
'o': (prev_mode & stat.S_IRWXO) << 6},
'g': {
'r': rev_umask & stat.S_IRGRP if use_umask else stat.S_IRGRP,
'w': rev_umask & stat.S_IWGRP if use_umask else stat.S_IWGRP,
'x': rev_umask & stat.S_IXGRP if use_umask else stat.S_IXGRP,
's': stat.S_ISGID,
't': 0,
'u': (prev_mode & stat.S_IRWXU) >> 3,
'g': prev_mode & stat.S_IRWXG,
'o': (prev_mode & stat.S_IRWXO) << 3},
'o': {
'r': rev_umask & stat.S_IROTH if use_umask else stat.S_IROTH,
'w': rev_umask & stat.S_IWOTH if use_umask else stat.S_IWOTH,
'x': rev_umask & stat.S_IXOTH if use_umask else stat.S_IXOTH,
's': 0,
't': stat.S_ISVTX,
'u': (prev_mode & stat.S_IRWXU) >> 6,
'g': (prev_mode & stat.S_IRWXG) >> 3,
'o': prev_mode & stat.S_IRWXO},
}
# Insert X_perms into user_perms_to_modes
for key, value in X_perms.items():
user_perms_to_modes[key].update(value)
def or_reduce(mode, perm):
return mode | user_perms_to_modes[user][perm]
return reduce(or_reduce, perms, 0)
def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True):
# set modes owners and context as needed
changed = self.set_context_if_different(
file_args['path'], file_args['secontext'], changed, diff
)
changed = self.set_owner_if_different(
file_args['path'], file_args['owner'], changed, diff, expand
)
changed = self.set_group_if_different(
file_args['path'], file_args['group'], changed, diff, expand
)
changed = self.set_mode_if_different(
file_args['path'], file_args['mode'], changed, diff, expand
)
changed = self.set_attributes_if_different(
file_args['path'], file_args['attributes'], changed, diff, expand
)
return changed
def check_file_absent_if_check_mode(self, file_path):
return self.check_mode and not os.path.exists(file_path)
def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def set_file_attributes_if_different(self, file_args, changed, diff=None, expand=True):
return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
def add_path_info(self, kwargs):
'''
for results that are files, supplement the info about the file
in the return path with stats about the file path.
'''
path = kwargs.get('path', kwargs.get('dest', None))
if path is None:
return kwargs
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.exists(b_path):
(uid, gid) = self.user_and_group(path)
kwargs['uid'] = uid
kwargs['gid'] = gid
try:
user = pwd.getpwuid(uid)[0]
except KeyError:
user = str(uid)
try:
group = grp.getgrgid(gid)[0]
except KeyError:
group = str(gid)
kwargs['owner'] = user
kwargs['group'] = group
st = os.lstat(b_path)
kwargs['mode'] = '0%03o' % stat.S_IMODE(st[stat.ST_MODE])
# secontext not yet supported
if os.path.islink(b_path):
kwargs['state'] = 'link'
elif os.path.isdir(b_path):
kwargs['state'] = 'directory'
elif os.stat(b_path).st_nlink > 1:
kwargs['state'] = 'hard'
else:
kwargs['state'] = 'file'
if HAVE_SELINUX and self.selinux_enabled():
kwargs['secontext'] = ':'.join(self.selinux_context(path))
kwargs['size'] = st[stat.ST_SIZE]
return kwargs
def _check_locale(self):
'''
Uses the locale module to test the currently set locale
(per the LANG and LC_CTYPE environment settings)
'''
try:
# setting the locale to '' uses the default locale
# as it would be returned by locale.getdefaultlocale()
locale.setlocale(locale.LC_ALL, '')
except locale.Error:
# fallback to the 'C' locale, which may cause unicode
# issues but is preferable to simply failing because
# of an unknown locale
locale.setlocale(locale.LC_ALL, 'C')
os.environ['LANG'] = 'C'
os.environ['LC_ALL'] = 'C'
os.environ['LC_MESSAGES'] = 'C'
except Exception as e:
self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" %
to_native(e), exception=traceback.format_exc())
def _handle_aliases(self, spec=None, param=None, option_prefix=''):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
# this uses exceptions as it happens before we can safely call fail_json
alias_warnings = []
alias_results, self._legal_inputs = handle_aliases(spec, param, alias_warnings=alias_warnings)
for option, alias in alias_warnings:
warn('Both option %s and its alias %s are set.' % (option_prefix + option, option_prefix + alias))
deprecated_aliases = []
for i in spec.keys():
if 'deprecated_aliases' in spec[i].keys():
for alias in spec[i]['deprecated_aliases']:
deprecated_aliases.append(alias)
for deprecation in deprecated_aliases:
if deprecation['name'] in param.keys():
deprecate("Alias '%s' is deprecated. See the module docs for more information" % deprecation['name'], deprecation['version'])
return alias_results
def _handle_no_log_values(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
self.no_log_values.update(list_no_log_values(spec, param))
except TypeError as te:
self.fail_json(msg="Failure when processing no_log parameters. Module invocation will be hidden. "
"%s" % to_native(te), invocation={'module_args': 'HIDDEN DUE TO FAILURE'})
for message in list_deprecations(spec, param):
deprecate(message['msg'], message['version'])
def _check_arguments(self, spec=None, param=None, legal_inputs=None):
self._syslog_facility = 'LOG_USER'
unsupported_parameters = set()
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
if legal_inputs is None:
legal_inputs = self._legal_inputs
for k in list(param.keys()):
if k not in legal_inputs:
unsupported_parameters.add(k)
for k in PASS_VARS:
# handle setting internal properties from internal ansible vars
param_key = '_ansible_%s' % k
if param_key in param:
if k in PASS_BOOLS:
setattr(self, PASS_VARS[k][0], self.boolean(param[param_key]))
else:
setattr(self, PASS_VARS[k][0], param[param_key])
# clean up internal top level params:
if param_key in self.params:
del self.params[param_key]
else:
# use defaults if not already set
if not hasattr(self, PASS_VARS[k][0]):
setattr(self, PASS_VARS[k][0], PASS_VARS[k][1])
if unsupported_parameters:
msg = "Unsupported parameters for (%s) module: %s" % (self._name, ', '.join(sorted(list(unsupported_parameters))))
if self._options_context:
msg += " found in %s." % " -> ".join(self._options_context)
msg += " Supported parameters include: %s" % (', '.join(sorted(spec.keys())))
self.fail_json(msg=msg)
if self.check_mode and not self.supports_check_mode:
self.exit_json(skipped=True, msg="remote module (%s) does not support check mode" % self._name)
def _count_terms(self, check, param=None):
if param is None:
param = self.params
return count_terms(check, param)
def _check_mutually_exclusive(self, spec, param=None):
if param is None:
param = self.params
try:
check_mutually_exclusive(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_one_of(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_one_of(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_together(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_together(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_by(self, spec, param=None):
if spec is None:
return
if param is None:
param = self.params
try:
check_required_by(spec, param)
except TypeError as e:
self.fail_json(msg=to_native(e))
def _check_required_arguments(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
try:
check_required_arguments(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_required_if(self, spec, param=None):
        ''' ensure that parameters which are conditionally required are present '''
if spec is None:
return
if param is None:
param = self.params
try:
check_required_if(spec, param)
except TypeError as e:
msg = to_native(e)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def _check_argument_values(self, spec=None, param=None):
''' ensure all arguments have the requested values, and there are no stray arguments '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
choices = v.get('choices', None)
if choices is None:
continue
if isinstance(choices, SEQUENCETYPE) and not isinstance(choices, (binary_type, text_type)):
if k in param:
# Allow one or more when type='list' param with choices
if isinstance(param[k], list):
diff_list = ", ".join([item for item in param[k] if item not in choices])
if diff_list:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one or more of: %s. Got no match for: %s" % (k, choices_str, diff_list)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
elif param[k] not in choices:
# PyYaml converts certain strings to bools. If we can unambiguously convert back, do so before checking
# the value. If we can't figure this out, module author is responsible.
lowered_choices = None
if param[k] == 'False':
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_FALSE.intersection(choices)
if len(overlap) == 1:
# Extract from a set
(param[k],) = overlap
if param[k] == 'True':
if lowered_choices is None:
lowered_choices = lenient_lowercase(choices)
overlap = BOOLEANS_TRUE.intersection(choices)
if len(overlap) == 1:
(param[k],) = overlap
if param[k] not in choices:
choices_str = ", ".join([to_native(c) for c in choices])
msg = "value of %s must be one of: %s, got: %s" % (k, choices_str, param[k])
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
else:
msg = "internal error: choices for argument %s are not iterable: %s" % (k, choices)
if self._options_context:
msg += " found in %s" % " -> ".join(self._options_context)
self.fail_json(msg=msg)
def safe_eval(self, value, locals=None, include_exceptions=False):
return safe_eval(value, locals, include_exceptions)
def _check_type_str(self, value):
opts = {
'error': False,
'warn': False,
'ignore': True
}
# Ignore, warn, or error when converting to a string.
allow_conversion = opts.get(self._string_conversion_action, True)
try:
return check_type_str(value, allow_conversion)
except TypeError:
common_msg = 'quote the entire value to ensure it does not change.'
if self._string_conversion_action == 'error':
msg = common_msg.capitalize()
raise TypeError(to_native(msg))
elif self._string_conversion_action == 'warn':
msg = ('The value {0!r} (type {0.__class__.__name__}) in a string field was converted to {1!r} (type string). '
'If this does not look like what you expect, {2}').format(value, to_text(value), common_msg)
self.warn(to_native(msg))
return to_native(value, errors='surrogate_or_strict')
def _check_type_list(self, value):
return check_type_list(value)
def _check_type_dict(self, value):
return check_type_dict(value)
def _check_type_bool(self, value):
return check_type_bool(value)
def _check_type_int(self, value):
return check_type_int(value)
def _check_type_float(self, value):
return check_type_float(value)
def _check_type_path(self, value):
return check_type_path(value)
def _check_type_jsonarg(self, value):
return check_type_jsonarg(value)
def _check_type_raw(self, value):
return check_type_raw(value)
def _check_type_bytes(self, value):
return check_type_bytes(value)
def _check_type_bits(self, value):
return check_type_bits(value)
def _handle_options(self, argument_spec=None, params=None, prefix=''):
''' deal with options to create sub spec '''
if argument_spec is None:
argument_spec = self.argument_spec
if params is None:
params = self.params
for (k, v) in argument_spec.items():
wanted = v.get('type', None)
if wanted == 'dict' or (wanted == 'list' and v.get('elements', '') == 'dict'):
spec = v.get('options', None)
if v.get('apply_defaults', False):
if spec is not None:
if params.get(k) is None:
params[k] = {}
else:
continue
elif spec is None or k not in params or params[k] is None:
continue
self._options_context.append(k)
if isinstance(params[k], dict):
elements = [params[k]]
else:
elements = params[k]
for idx, param in enumerate(elements):
if not isinstance(param, dict):
self.fail_json(msg="value of %s must be of type dict or list of dict" % k)
new_prefix = prefix + k
if wanted == 'list':
new_prefix += '[%d]' % idx
new_prefix += '.'
self._set_fallbacks(spec, param)
options_aliases = self._handle_aliases(spec, param, option_prefix=new_prefix)
options_legal_inputs = list(spec.keys()) + list(options_aliases.keys())
self._check_arguments(spec, param, options_legal_inputs)
# check exclusive early
if not self.bypass_checks:
self._check_mutually_exclusive(v.get('mutually_exclusive', None), param)
self._set_defaults(pre=True, spec=spec, param=param)
if not self.bypass_checks:
self._check_required_arguments(spec, param)
self._check_argument_types(spec, param)
self._check_argument_values(spec, param)
self._check_required_together(v.get('required_together', None), param)
self._check_required_one_of(v.get('required_one_of', None), param)
self._check_required_if(v.get('required_if', None), param)
self._check_required_by(v.get('required_by', None), param)
self._set_defaults(pre=False, spec=spec, param=param)
# handle multi level options (sub argspec)
self._handle_options(spec, param, new_prefix)
self._options_context.pop()
def _get_wanted_type(self, wanted, k):
if not callable(wanted):
if wanted is None:
# Mostly we want to default to str.
# For values set to None explicitly, return None instead as
# that allows a user to unset a parameter
wanted = 'str'
try:
type_checker = self._CHECK_ARGUMENT_TYPES_DISPATCHER[wanted]
except KeyError:
self.fail_json(msg="implementation error: unknown type %s requested for %s" % (wanted, k))
else:
# set the type_checker to the callable, and reset wanted to the callable's name (or type if it doesn't have one, ala MagicMock)
type_checker = wanted
wanted = getattr(wanted, '__name__', to_native(type(wanted)))
return type_checker, wanted
def _handle_elements(self, wanted, param, values):
type_checker, wanted_name = self._get_wanted_type(wanted, param)
validated_params = []
for value in values:
try:
validated_params.append(type_checker(value))
except (TypeError, ValueError) as e:
msg = "Elements value for option %s" % param
if self._options_context:
msg += " found in '%s'" % " -> ".join(self._options_context)
msg += " is of type %s and we were unable to convert to %s: %s" % (type(value), wanted_name, to_native(e))
self.fail_json(msg=msg)
return validated_params
def _check_argument_types(self, spec=None, param=None):
''' ensure all arguments have the requested type '''
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
wanted = v.get('type', None)
if k not in param:
continue
value = param[k]
if value is None:
continue
type_checker, wanted_name = self._get_wanted_type(wanted, k)
try:
param[k] = type_checker(value)
wanted_elements = v.get('elements', None)
if wanted_elements:
if wanted != 'list' or not isinstance(param[k], list):
                        msg = "Invalid type %s for option '%s'" % (wanted_name, k)
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += ", elements value check is supported only with 'list' type"
self.fail_json(msg=msg)
param[k] = self._handle_elements(wanted_elements, k, param[k])
except (TypeError, ValueError) as e:
msg = "argument %s is of type %s" % (k, type(value))
if self._options_context:
msg += " found in '%s'." % " -> ".join(self._options_context)
msg += " and we were unable to convert to %s: %s" % (wanted_name, to_native(e))
self.fail_json(msg=msg)
def _set_defaults(self, pre=True, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
default = v.get('default', None)
if pre is True:
# this prevents setting defaults on required items
if default is not None and k not in param:
param[k] = default
else:
# make sure things without a default still get set None
if k not in param:
param[k] = default
def _set_fallbacks(self, spec=None, param=None):
if spec is None:
spec = self.argument_spec
if param is None:
param = self.params
for (k, v) in spec.items():
fallback = v.get('fallback', (None,))
fallback_strategy = fallback[0]
fallback_args = []
fallback_kwargs = {}
if k not in param and fallback_strategy is not None:
for item in fallback[1:]:
if isinstance(item, dict):
fallback_kwargs = item
else:
fallback_args = item
try:
param[k] = fallback_strategy(*fallback_args, **fallback_kwargs)
except AnsibleFallbackNotFound:
continue
def _load_params(self):
''' read the input and set the params attribute.
This method is for backwards compatibility. The guts of the function
were moved out in 2.1 so that custom modules could read the parameters.
'''
# debug overrides to read args from file or cmdline
self.params = _load_params()
def _log_to_syslog(self, msg):
if HAS_SYSLOG:
module = 'ansible-%s' % self._name
facility = getattr(syslog, self._syslog_facility, syslog.LOG_USER)
syslog.openlog(str(module), 0, facility)
syslog.syslog(syslog.LOG_INFO, msg)
def debug(self, msg):
if self._debug:
self.log('[debug] %s' % msg)
def log(self, msg, log_args=None):
if not self.no_log:
if log_args is None:
log_args = dict()
module = 'ansible-%s' % self._name
if isinstance(module, binary_type):
module = module.decode('utf-8', 'replace')
# 6655 - allow for accented characters
if not isinstance(msg, (binary_type, text_type)):
raise TypeError("msg should be a string (got %s)" % type(msg))
# We want journal to always take text type
# syslog takes bytes on py2, text type on py3
if isinstance(msg, binary_type):
journal_msg = remove_values(msg.decode('utf-8', 'replace'), self.no_log_values)
else:
# TODO: surrogateescape is a danger here on Py3
journal_msg = remove_values(msg, self.no_log_values)
if PY3:
syslog_msg = journal_msg
else:
syslog_msg = journal_msg.encode('utf-8', 'replace')
if has_journal:
journal_args = [("MODULE", os.path.basename(__file__))]
for arg in log_args:
journal_args.append((arg.upper(), str(log_args[arg])))
try:
if HAS_SYSLOG:
# If syslog_facility specified, it needs to convert
# from the facility name to the facility code, and
# set it as SYSLOG_FACILITY argument of journal.send()
facility = getattr(syslog,
self._syslog_facility,
syslog.LOG_USER) >> 3
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
SYSLOG_FACILITY=facility,
**dict(journal_args))
else:
journal.send(MESSAGE=u"%s %s" % (module, journal_msg),
**dict(journal_args))
except IOError:
# fall back to syslog since logging to journal failed
self._log_to_syslog(syslog_msg)
else:
self._log_to_syslog(syslog_msg)
def _log_invocation(self):
''' log that ansible ran the module '''
# TODO: generalize a separate log function and make log_invocation use it
# Sanitize possible password argument when logging.
log_args = dict()
for param in self.params:
canon = self.aliases.get(param, param)
arg_opts = self.argument_spec.get(canon, {})
no_log = arg_opts.get('no_log', None)
# try to proactively capture password/passphrase fields
if no_log is None and PASSWORD_MATCH.search(param):
log_args[param] = 'NOT_LOGGING_PASSWORD'
self.warn('Module did not set no_log for %s' % param)
elif self.boolean(no_log):
log_args[param] = 'NOT_LOGGING_PARAMETER'
else:
param_val = self.params[param]
if not isinstance(param_val, (text_type, binary_type)):
param_val = str(param_val)
elif isinstance(param_val, text_type):
param_val = param_val.encode('utf-8')
log_args[param] = heuristic_log_sanitize(param_val, self.no_log_values)
msg = ['%s=%s' % (to_native(arg), to_native(val)) for arg, val in log_args.items()]
if msg:
msg = 'Invoked with %s' % ' '.join(msg)
else:
msg = 'Invoked'
self.log(msg, log_args=log_args)
def _set_cwd(self):
try:
cwd = os.getcwd()
if not os.access(cwd, os.F_OK | os.R_OK):
raise Exception()
return cwd
except Exception:
# we don't have access to the cwd, probably because of sudo.
# Try and move to a neutral location to prevent errors
for cwd in [self.tmpdir, os.path.expandvars('$HOME'), tempfile.gettempdir()]:
try:
if os.access(cwd, os.F_OK | os.R_OK):
os.chdir(cwd)
return cwd
except Exception:
pass
# we won't error here, as it may *not* be a problem,
# and we don't want to break modules unnecessarily
return None
def get_bin_path(self, arg, required=False, opt_dirs=None):
'''
Find system executable in PATH.
:param arg: The executable to find.
:param required: if executable is not found and required is ``True``, fail_json
:param opt_dirs: optional list of directories to search in addition to ``PATH``
:returns: if found return full path; otherwise return None
'''
bin_path = None
try:
bin_path = get_bin_path(arg=arg, opt_dirs=opt_dirs)
except ValueError as e:
if required:
self.fail_json(msg=to_text(e))
else:
return bin_path
return bin_path
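    # Illustrative sketch (comment only; 'git' is an arbitrary example binary):
    #
    #     git_path = module.get_bin_path('git', required=True)
    #     rc, out, err = module.run_command([git_path, '--version'])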
def boolean(self, arg):
'''Convert the argument to a boolean'''
if arg is None:
return arg
try:
return boolean(arg)
except TypeError as e:
self.fail_json(msg=to_native(e))
def jsonify(self, data):
try:
return jsonify(data)
except UnicodeError as e:
self.fail_json(msg=to_text(e))
def from_json(self, data):
return json.loads(data)
def add_cleanup_file(self, path):
if path not in self.cleanup_files:
self.cleanup_files.append(path)
def do_cleanup_files(self):
for path in self.cleanup_files:
self.cleanup(path)
def _return_formatted(self, kwargs):
self.add_path_info(kwargs)
if 'invocation' not in kwargs:
kwargs['invocation'] = {'module_args': self.params}
if 'warnings' in kwargs:
if isinstance(kwargs['warnings'], list):
for w in kwargs['warnings']:
self.warn(w)
else:
self.warn(kwargs['warnings'])
warnings = get_warning_messages()
if warnings:
kwargs['warnings'] = warnings
if 'deprecations' in kwargs:
if isinstance(kwargs['deprecations'], list):
for d in kwargs['deprecations']:
if isinstance(d, SEQUENCETYPE) and len(d) == 2:
self.deprecate(d[0], version=d[1])
elif isinstance(d, Mapping):
self.deprecate(d['msg'], version=d.get('version', None))
else:
self.deprecate(d) # pylint: disable=ansible-deprecated-no-version
else:
self.deprecate(kwargs['deprecations']) # pylint: disable=ansible-deprecated-no-version
deprecations = get_deprecation_messages()
if deprecations:
kwargs['deprecations'] = deprecations
kwargs = remove_values(kwargs, self.no_log_values)
print('\n%s' % self.jsonify(kwargs))
def exit_json(self, **kwargs):
''' return from the module, without error '''
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(0)
def fail_json(self, **kwargs):
''' return from the module, with an error message '''
if 'msg' not in kwargs:
raise AssertionError("implementation error -- msg to explain the error is required")
kwargs['failed'] = True
# Add traceback if debug or high verbosity and it is missing
# NOTE: Badly named as exception, it really always has been a traceback
if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3):
if PY2:
# On Python 2 this is the last (stack frame) exception and as such may be unrelated to the failure
kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' +\
''.join(traceback.format_tb(sys.exc_info()[2]))
else:
kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2]))
self.do_cleanup_files()
self._return_formatted(kwargs)
sys.exit(1)
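    # Illustrative sketch (comment only): a module body normally finishes by
    # calling exactly one of exit_json()/fail_json(); the extra keys shown are
    # arbitrary examples and become part of the module's return data.
    #
    #     if rc != 0:
    #         module.fail_json(msg='widget frobnication failed', rc=rc, stderr=err)
    #     module.exit_json(changed=True, widget_count=3)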
def fail_on_missing_params(self, required_params=None):
if not required_params:
return
try:
check_missing_parameters(self.params, required_params)
except TypeError as e:
self.fail_json(msg=to_native(e))
def digest_from_file(self, filename, algorithm):
''' Return hex digest of local file for a digest_method specified by name, or None if file is not present. '''
b_filename = to_bytes(filename, errors='surrogate_or_strict')
if not os.path.exists(b_filename):
return None
if os.path.isdir(b_filename):
self.fail_json(msg="attempted to take checksum of directory: %s" % filename)
# preserve old behaviour where the third parameter was a hash algorithm object
if hasattr(algorithm, 'hexdigest'):
digest_method = algorithm
else:
try:
digest_method = AVAILABLE_HASH_ALGORITHMS[algorithm]()
except KeyError:
self.fail_json(msg="Could not hash file '%s' with algorithm '%s'. Available algorithms: %s" %
(filename, algorithm, ', '.join(AVAILABLE_HASH_ALGORITHMS)))
blocksize = 64 * 1024
infile = open(os.path.realpath(b_filename), 'rb')
block = infile.read(blocksize)
while block:
digest_method.update(block)
block = infile.read(blocksize)
infile.close()
return digest_method.hexdigest()
def md5(self, filename):
''' Return MD5 hex digest of local file using digest_from_file().
Do not use this function unless you have no other choice for:
1) Optional backwards compatibility
2) Compatibility with a third party protocol
This function will not work on systems complying with FIPS-140-2.
Most uses of this function can use the module.sha1 function instead.
'''
if 'md5' not in AVAILABLE_HASH_ALGORITHMS:
raise ValueError('MD5 not available. Possibly running in FIPS mode')
return self.digest_from_file(filename, 'md5')
def sha1(self, filename):
''' Return SHA1 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha1')
def sha256(self, filename):
''' Return SHA-256 hex digest of local file using digest_from_file(). '''
return self.digest_from_file(filename, 'sha256')
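    # Illustrative sketch (comment only, hypothetical path): sha256()/sha1() are
    # handy for cheap change detection.
    #
    #     before = module.sha256('/etc/example.conf')
    #     # ... rewrite the file ...
    #     changed = module.sha256('/etc/example.conf') != before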
def backup_local(self, fn):
        '''make a date-marked backup of the specified file; return the backup path, or an empty string if the file does not exist (the module fails if the copy itself fails)'''
backupdest = ''
if os.path.exists(fn):
# backups named basename.PID.YYYY-MM-DD@HH:MM:SS~
ext = time.strftime("%Y-%m-%d@%H:%M:%S~", time.localtime(time.time()))
backupdest = '%s.%s.%s' % (fn, os.getpid(), ext)
try:
self.preserved_copy(fn, backupdest)
except (shutil.Error, IOError) as e:
self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, to_native(e)))
return backupdest
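    # Illustrative sketch (comment only; 'dest' is a hypothetical variable): modules
    # with a 'backup' option usually call this just before overwriting a file and
    # return the path they get back.
    #
    #     backup_file = None
    #     if module.params.get('backup') and os.path.exists(dest):
    #         backup_file = module.backup_local(dest)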
def cleanup(self, tmpfile):
if os.path.exists(tmpfile):
try:
os.unlink(tmpfile)
except OSError as e:
sys.stderr.write("could not cleanup %s: %s" % (tmpfile, to_native(e)))
def preserved_copy(self, src, dest):
"""Copy a file with preserved ownership, permissions and context"""
# shutil.copy2(src, dst)
# Similar to shutil.copy(), but metadata is copied as well - in fact,
# this is just shutil.copy() followed by copystat(). This is similar
# to the Unix command cp -p.
#
# shutil.copystat(src, dst)
# Copy the permission bits, last access time, last modification time,
# and flags from src to dst. The file contents, owner, and group are
# unaffected. src and dst are path names given as strings.
shutil.copy2(src, dest)
# Set the context
if self.selinux_enabled():
context = self.selinux_context(src)
self.set_context_if_different(dest, context, False)
# chown it
try:
dest_stat = os.stat(src)
tmp_stat = os.stat(dest)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(dest, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
# Set the attributes
current_attribs = self.get_file_attributes(src)
current_attribs = current_attribs.get('attr_flags', '')
self.set_attributes_if_different(dest, current_attribs, True)
def atomic_move(self, src, dest, unsafe_writes=False):
        '''atomically move src to dest, copying attributes from dest; returns True on success.
        It relies on os.rename because that operation is atomic; the rest of the function works
        around limitations and corner cases, and preserves the selinux context where possible.'''
context = None
dest_stat = None
b_src = to_bytes(src, errors='surrogate_or_strict')
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
try:
dest_stat = os.stat(b_dest)
# copy mode and ownership
os.chmod(b_src, dest_stat.st_mode & PERM_BITS)
os.chown(b_src, dest_stat.st_uid, dest_stat.st_gid)
# try to copy flags if possible
if hasattr(os, 'chflags') and hasattr(dest_stat, 'st_flags'):
try:
os.chflags(b_src, dest_stat.st_flags)
except OSError as e:
for err in 'EOPNOTSUPP', 'ENOTSUP':
if hasattr(errno, err) and e.errno == getattr(errno, err):
break
else:
raise
except OSError as e:
if e.errno != errno.EPERM:
raise
if self.selinux_enabled():
context = self.selinux_context(dest)
else:
if self.selinux_enabled():
context = self.selinux_default_context(dest)
creating = not os.path.exists(b_dest)
try:
# Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
os.rename(b_src, b_dest)
except (IOError, OSError) as e:
if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY, errno.EBUSY]:
                # only try workarounds for errno 18 (cross device), 1 (not permitted), 13 (permission denied),
                # 16 (device or resource busy) and 26 (text file busy), the latter of which happens on vagrant
                # synced folders and other 'exotic' non posix file systems
self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
else:
# Use bytes here. In the shippable CI, this fails with
# a UnicodeError with surrogateescape'd strings for an unknown
# reason (doesn't happen in a local Ubuntu16.04 VM)
b_dest_dir = os.path.dirname(b_dest)
b_suffix = os.path.basename(b_dest)
error_msg = None
tmp_dest_name = None
try:
tmp_dest_fd, tmp_dest_name = tempfile.mkstemp(prefix=b'.ansible_tmp',
dir=b_dest_dir, suffix=b_suffix)
except (OSError, IOError) as e:
error_msg = 'The destination directory (%s) is not writable by the current user. Error was: %s' % (os.path.dirname(dest), to_native(e))
except TypeError:
# We expect that this is happening because python3.4.x and
# below can't handle byte strings in mkstemp(). Traceback
# would end in something like:
# file = _os.path.join(dir, pre + name + suf)
# TypeError: can't concat bytes to str
error_msg = ('Failed creating tmp file for atomic move. This usually happens when using Python3 less than Python3.5. '
'Please use Python2.x or Python3.5 or greater.')
finally:
if error_msg:
if unsafe_writes:
self._unsafe_writes(b_src, b_dest)
else:
self.fail_json(msg=error_msg, exception=traceback.format_exc())
if tmp_dest_name:
b_tmp_dest_name = to_bytes(tmp_dest_name, errors='surrogate_or_strict')
try:
try:
# close tmp file handle before file operations to prevent text file busy errors on vboxfs synced folders (windows host)
os.close(tmp_dest_fd)
# leaves tmp file behind when sudo and not root
try:
shutil.move(b_src, b_tmp_dest_name)
except OSError:
# cleanup will happen by 'rm' of tmpdir
# copy2 will preserve some metadata
shutil.copy2(b_src, b_tmp_dest_name)
if self.selinux_enabled():
self.set_context_if_different(
b_tmp_dest_name, context, False)
try:
tmp_stat = os.stat(b_tmp_dest_name)
if dest_stat and (tmp_stat.st_uid != dest_stat.st_uid or tmp_stat.st_gid != dest_stat.st_gid):
os.chown(b_tmp_dest_name, dest_stat.st_uid, dest_stat.st_gid)
except OSError as e:
if e.errno != errno.EPERM:
raise
try:
os.rename(b_tmp_dest_name, b_dest)
except (shutil.Error, OSError, IOError) as e:
if unsafe_writes and e.errno == errno.EBUSY:
self._unsafe_writes(b_tmp_dest_name, b_dest)
else:
                                    self.fail_json(msg='Unable to make %s into %s, failed final rename from %s: %s' %
(src, dest, b_tmp_dest_name, to_native(e)),
exception=traceback.format_exc())
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Failed to replace file: %s to %s: %s' % (src, dest, to_native(e)),
exception=traceback.format_exc())
finally:
self.cleanup(b_tmp_dest_name)
if creating:
# make sure the file has the correct permissions
# based on the current value of umask
umask = os.umask(0)
os.umask(umask)
os.chmod(b_dest, DEFAULT_PERM & ~umask)
try:
os.chown(b_dest, os.geteuid(), os.getegid())
except OSError:
# We're okay with trying our best here. If the user is not
# root (or old Unices) they won't be able to chown.
pass
if self.selinux_enabled():
# rename might not preserve context
self.set_context_if_different(dest, context, False)
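    # Illustrative sketch (comment only; names and the destination path are
    # examples): the usual write pattern is to build the new content in a
    # temporary file under module.tmpdir and then promote it with atomic_move().
    #
    #     fd, tmp_path = tempfile.mkstemp(dir=module.tmpdir)
    #     with os.fdopen(fd, 'wb') as f:
    #         f.write(b'new contents\n')
    #     module.atomic_move(tmp_path, '/etc/example.conf',
    #                        unsafe_writes=module.params.get('unsafe_writes', False))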
def _unsafe_writes(self, src, dest):
# sadly there are some situations where we cannot ensure atomicity, but only if
# the user insists and we get the appropriate error we update the file unsafely
try:
out_dest = in_src = None
try:
out_dest = open(dest, 'wb')
in_src = open(src, 'rb')
shutil.copyfileobj(in_src, out_dest)
finally: # assuring closed files in 2.4 compatible way
if out_dest:
out_dest.close()
if in_src:
in_src.close()
except (shutil.Error, OSError, IOError) as e:
self.fail_json(msg='Could not write data to file (%s) from (%s): %s' % (dest, src, to_native(e)),
exception=traceback.format_exc())
def _read_from_pipes(self, rpipes, rfds, file_descriptor):
data = b('')
if file_descriptor in rfds:
data = os.read(file_descriptor.fileno(), self.get_buffer_size(file_descriptor))
if data == b(''):
rpipes.remove(file_descriptor)
return data
def _clean_args(self, args):
if not self._clean:
# create a printable version of the command for use in reporting later,
# which strips out things like passwords from the args list
to_clean_args = args
if PY2:
if isinstance(args, text_type):
to_clean_args = to_bytes(args)
else:
if isinstance(args, binary_type):
to_clean_args = to_text(args)
if isinstance(args, (text_type, binary_type)):
to_clean_args = shlex.split(to_clean_args)
clean_args = []
is_passwd = False
for arg in (to_native(a) for a in to_clean_args):
if is_passwd:
is_passwd = False
clean_args.append('********')
continue
if PASSWD_ARG_RE.match(arg):
sep_idx = arg.find('=')
if sep_idx > -1:
clean_args.append('%s=********' % arg[:sep_idx])
continue
else:
is_passwd = True
arg = heuristic_log_sanitize(arg, self.no_log_values)
clean_args.append(arg)
self._clean = ' '.join(shlex_quote(arg) for arg in clean_args)
return self._clean
def _restore_signal_handlers(self):
# Reset SIGPIPE to SIG_DFL, otherwise in Python2.7 it gets ignored in subprocesses.
if PY2 and sys.platform != 'win32':
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None,
use_unsafe_shell=False, prompt_regex=None, environ_update=None, umask=None, encoding='utf-8', errors='surrogate_or_strict',
expand_user_and_vars=True, pass_fds=None, before_communicate_callback=None):
'''
Execute a command, returns rc, stdout, and stderr.
:arg args: is the command to run
* If args is a list, the command will be run with shell=False.
* If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False
* If args is a string and use_unsafe_shell=True it runs with shell=True.
:kw check_rc: Whether to call fail_json in case of non-zero RC.
Default False
:kw close_fds: See documentation for subprocess.Popen(). Default True
:kw executable: See documentation for subprocess.Popen(). Default None
:kw data: If given, information to write to the stdin of the command
:kw binary_data: If False, append a newline to the data. Default False
:kw path_prefix: If given, additional path to find the command in.
This adds to the PATH environment variable so helper commands in
the same directory can also be found
:kw cwd: If given, working directory to run the command inside
:kw use_unsafe_shell: See `args` parameter. Default False
:kw prompt_regex: Regex string (not a compiled regex) which can be
used to detect prompts in the stdout which would otherwise cause
the execution to hang (especially if no input data is specified)
:kw environ_update: dictionary to *update* os.environ with
:kw umask: Umask to be used when running the command. Default None
:kw encoding: Since we return native strings, on python3 we need to
know the encoding to use to transform from bytes to text. If you
want to always get bytes back, use encoding=None. The default is
"utf-8". This does not affect transformation of strings given as
args.
:kw errors: Since we return native strings, on python3 we need to
transform stdout and stderr from bytes to text. If the bytes are
undecodable in the ``encoding`` specified, then use this error
handler to deal with them. The default is ``surrogate_or_strict``
which means that the bytes will be decoded using the
surrogateescape error handler if available (available on all
python3 versions we support) otherwise a UnicodeError traceback
will be raised. This does not affect transformations of strings
given as args.
:kw expand_user_and_vars: When ``use_unsafe_shell=False`` this argument
dictates whether ``~`` is expanded in paths and environment variables
are expanded before running the command. When ``True`` a string such as
``$SHELL`` will be expanded regardless of escaping. When ``False`` and
``use_unsafe_shell=False`` no path or variable expansion will be done.
:kw pass_fds: When running on Python 3 this argument
dictates which file descriptors should be passed
to an underlying ``Popen`` constructor. On Python 2, this will
set ``close_fds`` to False.
:kw before_communicate_callback: This function will be called
after the ``Popen`` object is created
but before communicating with the process.
(The ``Popen`` object is passed to the callback as its first argument)
:returns: A 3-tuple of return code (integer), stdout (native string),
and stderr (native string). On python2, stdout and stderr are both
byte strings. On python3, stdout and stderr are text strings converted
according to the encoding and errors parameters. If you want byte
strings on python3, use encoding=None to turn decoding to text off.
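Example (illustrative only; assumes ``self`` is an initialized AnsibleModule)::

    rc, out, err = self.run_command(['/bin/ls', '-l', '/tmp'], check_rc=True)
    rc, out, err = self.run_command('ls /tmp | wc -l', use_unsafe_shell=True)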
'''
# used by clean args later on
self._clean = None
if not isinstance(args, (list, binary_type, text_type)):
msg = "Argument 'args' to run_command must be list or string"
self.fail_json(rc=257, cmd=args, msg=msg)
shell = False
if use_unsafe_shell:
# stringify args for unsafe/direct shell usage
if isinstance(args, list):
args = b" ".join([to_bytes(shlex_quote(x), errors='surrogate_or_strict') for x in args])
else:
args = to_bytes(args, errors='surrogate_or_strict')
# not set explicitly, check if set by controller
if executable:
executable = to_bytes(executable, errors='surrogate_or_strict')
args = [executable, b'-c', args]
elif self._shell not in (None, '/bin/sh'):
args = [to_bytes(self._shell, errors='surrogate_or_strict'), b'-c', args]
else:
shell = True
else:
# ensure args are a list
if isinstance(args, (binary_type, text_type)):
# On python2.6 and below, shlex has problems with text type
# On python3, shlex needs a text type.
if PY2:
args = to_bytes(args, errors='surrogate_or_strict')
elif PY3:
args = to_text(args, errors='surrogateescape')
args = shlex.split(args)
# expand ``~`` in paths, and all environment vars
if expand_user_and_vars:
args = [to_bytes(os.path.expanduser(os.path.expandvars(x)), errors='surrogate_or_strict') for x in args if x is not None]
else:
args = [to_bytes(x, errors='surrogate_or_strict') for x in args if x is not None]
prompt_re = None
if prompt_regex:
if isinstance(prompt_regex, text_type):
if PY3:
prompt_regex = to_bytes(prompt_regex, errors='surrogateescape')
elif PY2:
prompt_regex = to_bytes(prompt_regex, errors='surrogate_or_strict')
try:
prompt_re = re.compile(prompt_regex, re.MULTILINE)
except re.error:
self.fail_json(msg="invalid prompt regular expression given to run_command")
rc = 0
msg = None
st_in = None
# Manipulate the environ we'll send to the new process
old_env_vals = {}
# We can set this from both an attribute and per call
for key, val in self.run_command_environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if environ_update:
for key, val in environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if path_prefix:
old_env_vals['PATH'] = os.environ['PATH']
os.environ['PATH'] = "%s:%s" % (path_prefix, os.environ['PATH'])
# If using test-module.py and explode, the remote lib path will resemble:
# /tmp/test_module_scratch/debug_dir/ansible/module_utils/basic.py
# If using ansible or ansible-playbook with a remote system:
# /tmp/ansible_vmweLQ/ansible_modlib.zip/ansible/module_utils/basic.py
# Clean out python paths set by ansiballz
if 'PYTHONPATH' in os.environ:
pypaths = os.environ['PYTHONPATH'].split(':')
pypaths = [x for x in pypaths
if not x.endswith('/ansible_modlib.zip') and
not x.endswith('/debug_dir')]
os.environ['PYTHONPATH'] = ':'.join(pypaths)
if not os.environ['PYTHONPATH']:
del os.environ['PYTHONPATH']
if data:
st_in = subprocess.PIPE
kwargs = dict(
executable=executable,
shell=shell,
close_fds=close_fds,
stdin=st_in,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=self._restore_signal_handlers,
)
if PY3 and pass_fds:
kwargs["pass_fds"] = pass_fds
elif PY2 and pass_fds:
kwargs['close_fds'] = False
# store the pwd
prev_dir = os.getcwd()
# make sure we're in the right working directory
if cwd and os.path.isdir(cwd):
cwd = to_bytes(os.path.abspath(os.path.expanduser(cwd)), errors='surrogate_or_strict')
kwargs['cwd'] = cwd
try:
os.chdir(cwd)
except (OSError, IOError) as e:
self.fail_json(rc=e.errno, msg="Could not open %s, %s" % (cwd, to_native(e)),
exception=traceback.format_exc())
old_umask = None
if umask:
old_umask = os.umask(umask)
try:
if self._debug:
self.log('Executing: ' + self._clean_args(args))
cmd = subprocess.Popen(args, **kwargs)
if before_communicate_callback:
before_communicate_callback(cmd)
# the communication logic here is essentially taken from that
# of the _communicate() function in ssh.py
stdout = b('')
stderr = b('')
rpipes = [cmd.stdout, cmd.stderr]
if data:
if not binary_data:
data += '\n'
if isinstance(data, text_type):
data = to_bytes(data)
cmd.stdin.write(data)
cmd.stdin.close()
while True:
rfds, wfds, efds = select.select(rpipes, [], rpipes, 1)
stdout += self._read_from_pipes(rpipes, rfds, cmd.stdout)
stderr += self._read_from_pipes(rpipes, rfds, cmd.stderr)
# if we're checking for prompts, do it now
if prompt_re:
if prompt_re.search(stdout) and not data:
if encoding:
stdout = to_native(stdout, encoding=encoding, errors=errors)
return (257, stdout, "A prompt was encountered while running a command, but no input data was specified")
# only break out if no pipes are left to read or
# the pipes are completely read and
# the process is terminated
if (not rpipes or not rfds) and cmd.poll() is not None:
break
# No pipes are left to read but process is not yet terminated
# Only then it is safe to wait for the process to be finished
# NOTE: Actually cmd.poll() is always None here if rpipes is empty
elif not rpipes and cmd.poll() is None:
cmd.wait()
# The process is terminated. Since no pipes to read from are
# left, there is no need to call select() again.
break
cmd.stdout.close()
cmd.stderr.close()
rc = cmd.returncode
except (OSError, IOError) as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(e)))
self.fail_json(rc=e.errno, msg=to_native(e), cmd=self._clean_args(args))
except Exception as e:
self.log("Error Executing CMD:%s Exception:%s" % (self._clean_args(args), to_native(traceback.format_exc())))
self.fail_json(rc=257, msg=to_native(e), exception=traceback.format_exc(), cmd=self._clean_args(args))
# Restore env settings
for key, val in old_env_vals.items():
if val is None:
del os.environ[key]
else:
os.environ[key] = val
if old_umask:
os.umask(old_umask)
if rc != 0 and check_rc:
msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values)
self.fail_json(cmd=self._clean_args(args), rc=rc, stdout=stdout, stderr=stderr, msg=msg)
# reset the pwd
os.chdir(prev_dir)
if encoding is not None:
return (rc, to_native(stdout, encoding=encoding, errors=errors),
to_native(stderr, encoding=encoding, errors=errors))
return (rc, stdout, stderr)
def append_to_file(self, filename, str):
filename = os.path.expandvars(os.path.expanduser(filename))
fh = open(filename, 'a')
fh.write(str)
fh.close()
def bytes_to_human(self, size):
return bytes_to_human(size)
# for backwards compatibility
pretty_bytes = bytes_to_human
def human_to_bytes(self, number, isbits=False):
return human_to_bytes(number, isbits)
#
# Backwards compat
#
# In 2.0, moved from inside the module to the toplevel
is_executable = is_executable
@staticmethod
def get_buffer_size(fd):
try:
# 1032 == F_GETPIPE_SZ (fcntl constant used to query the pipe buffer size)
buffer_size = fcntl.fcntl(fd, 1032)
except Exception:
try:
# not as exact as above, but should be good enough for most platforms that fail the previous call
buffer_size = select.PIPE_BUF
except Exception:
buffer_size = 9000 # use sane default JIC
return buffer_size
def get_module_path():
return os.path.dirname(os.path.realpath(__file__))
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,956 |
Modules using add_file_common_args=True and the files document fragment have multiple undocumented arguments
|
##### SUMMARY
If a module sets `add_file_common_args=True` when calling `AnsibleModule`, all elements from [FILE_COMMON_ARGUMENTS](https://github.com/ansible/ansible/blob/e9d29b1fe4285d90d7a4506b80260a9e24c3bcea/lib/ansible/module_utils/basic.py#L229-L257) are included in the argument spec. The [files document fragment](https://github.com/ansible/ansible/blob/e9d29b1fe4285d90d7a4506b80260a9e24c3bcea/lib/ansible/plugins/doc_fragments/files.py#L15-L80) only documents a subset of them, though. Missing are:
- src
- follow
- force
- content
- backup
- remote_src
- regexp
- delimiter
- directory_mode
Most module authors using `add_file_common_args=True` are probably not aware that their modules have these options as well.
I don't think these extra options should be added by default if `add_file_common_args=True` is specified.
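A minimal sketch (hypothetical module, names chosen for illustration) of how the extra options get accepted:
```
from ansible.module_utils.basic import AnsibleModule

# Only 'path' is documented here, yet src, follow, force, content, backup,
# remote_src, regexp, delimiter and directory_mode are also accepted because
# FILE_COMMON_ARGUMENTS is merged into the argument spec.
module = AnsibleModule(
    argument_spec=dict(path=dict(type='path', required=True)),
    add_file_common_args=True,
)
module.exit_json(changed=False)
```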
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/basic.py
##### ANSIBLE VERSION
```
devel
```
|
https://github.com/ansible/ansible/issues/64956
|
https://github.com/ansible/ansible/pull/66389
|
802cc602429ea2b37eb7d75a8bb1dc2ebcfc05e1
|
f725dce9368dc4d33c2cddd4790c57e1d00496f0
| 2019-11-17T13:42:30Z |
python
| 2020-02-07T23:56:01Z |
lib/ansible/modules/database/postgresql/postgresql_pg_hba.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Sebastiaan Mannem (@sebasmannem) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
'''
This module is used to manage postgres pg_hba files with Ansible.
'''
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: postgresql_pg_hba
short_description: Add, remove or modify a rule in a pg_hba file
description:
- The fundamental function of the module is to create, or delete lines in pg_hba files.
- The lines in the file should be in a typical pg_hba form and lines should be unique per key (type, databases, users, source).
If they are not unique and the SID is 'the one to change', only one of them will remain for C(state=present), or none for C(state=absent).
extends_documentation_fragment: files
version_added: '2.8'
options:
address:
description:
- The source address/net where the connections could come from.
- Will not be used for entries of I(type)=C(local).
- You can also use keywords C(all), C(samehost), and C(samenet).
default: samehost
type: str
aliases: [ source, src ]
backup:
description:
- If set, create a backup of the C(pg_hba) file before it is modified.
The location of the backup is returned in the C(backup_file) return value by this module.
default: false
type: bool
backup_file:
description:
- Write the backup to a specific backup file rather than a temporary file.
type: str
create:
description:
- Create a C(pg_hba) file if none exists.
- When set to false, an error is raised when the C(pg_hba) file doesn't exist.
default: false
type: bool
contype:
description:
- Type of the rule. If not set, C(postgresql_pg_hba) will only return contents.
type: str
choices: [ local, host, hostnossl, hostssl ]
databases:
description:
- Databases this line applies to.
default: all
type: str
dest:
description:
- Path to C(pg_hba) file to modify.
type: path
required: true
method:
description:
- Authentication method to be used.
type: str
choices: [ cert, gss, ident, krb5, ldap, md5, pam, password, peer, radius, reject, scram-sha-256 , sspi, trust ]
default: md5
netmask:
description:
- The netmask of the source address.
type: str
options:
description:
- Additional options for the authentication I(method).
type: str
order:
description:
- The entries will be written out in a specific order.
With this option you can control by which field they are ordered first, second and last.
s=source, d=databases, u=users.
This option is deprecated since 2.9 and will be removed in 2.11.
Sortorder is now hardcoded to sdu.
type: str
default: sdu
choices: [ sdu, sud, dsu, dus, usd, uds ]
state:
description:
- The lines will be added/modified when C(state=present) and removed when C(state=absent).
type: str
default: present
choices: [ absent, present ]
users:
description:
- Users this line applies to.
type: str
default: all
notes:
- The default authentication assumes that on the host, you are either logging in as or
sudo'ing to an account with appropriate permissions to read and modify the file.
- This module also returns the pg_hba info. You can use this module to only retrieve it by specifying only I(dest).
  The info can be found in the returned data under the key pg_hba, which is a list containing one dict per rule.
- This module will sort the resulting C(pg_hba) file if a rule change is required.
  This can give unexpected results with manually created hba files that were improperly sorted.
  For example, a rule may have been created for a network first and for an IP inside that network next.
  In that situation the IP-specific rule never matches, making it effectively obsolete in the C(pg_hba) file.
  After the C(pg_hba) file is rewritten by the M(postgresql_pg_hba) module, the IP-specific rule is sorted above the range rule
  and then it does match, which can give unexpected results.
- With the 'order' parameter you can control which field is used to sort first, next and last.
- The module supports a check mode and a diff mode.
seealso:
- name: PostgreSQL pg_hba.conf file reference
description: Complete reference of the PostgreSQL pg_hba.conf file documentation.
link: https://www.postgresql.org/docs/current/auth-pg-hba-conf.html
requirements:
- ipaddress
author: Sebastiaan Mannem (@sebasmannem)
'''
EXAMPLES = '''
- name: Grant users joe and simon access to databases sales and logistics from ipv6 localhost ::1/128 using peer authentication.
postgresql_pg_hba:
dest: /var/lib/postgres/data/pg_hba.conf
contype: host
users: joe,simon
source: ::1
databases: sales,logistics
method: peer
create: true
- name: Grant user replication from network 192.168.0.100/24 access for replication with client cert authentication.
postgresql_pg_hba:
dest: /var/lib/postgres/data/pg_hba.conf
contype: host
users: replication
source: 192.168.0.100/24
databases: replication
method: cert
- name: Revoke access from local user mary on database mydb.
postgresql_pg_hba:
dest: /var/lib/postgres/data/pg_hba.conf
contype: local
users: mary
databases: mydb
state: absent
'''
RETURN = r'''
msgs:
description: List of textual messages describing what was done
returned: always
type: list
sample:
"msgs": [
"Removing",
"Changed",
"Writing"
]
backup_file:
description: File that the original pg_hba file was backed up to
returned: changed
type: str
sample: /tmp/pg_hba_jxobj_p
pg_hba:
description: List of the pg_hba rules as they are configured in the specified hba file
returned: always
type: list
sample:
"pg_hba": [
{
"db": "all",
"method": "md5",
"src": "samehost",
"type": "host",
"usr": "all"
}
]
'''
import os
import re
import traceback
IPADDRESS_IMP_ERR = None
try:
import ipaddress
except ImportError:
IPADDRESS_IMP_ERR = traceback.format_exc()
import tempfile
import shutil
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
# from ansible.module_utils.postgres import postgres_common_argument_spec
PG_HBA_METHODS = ["trust", "reject", "md5", "password", "gss", "sspi", "krb5", "ident", "peer",
"ldap", "radius", "cert", "pam", "scram-sha-256"]
PG_HBA_TYPES = ["local", "host", "hostssl", "hostnossl"]
PG_HBA_ORDERS = ["sdu", "sud", "dsu", "dus", "usd", "uds"]
PG_HBA_HDR = ['type', 'db', 'usr', 'src', 'mask', 'method', 'options']
WHITESPACES_RE = re.compile(r'\s+')
class PgHbaError(Exception):
'''
This exception is raised when parsing the pg_hba file ends in an error.
'''
class PgHbaRuleError(PgHbaError):
'''
This exception is raised when parsing the pg_hba file ends in an error.
'''
class PgHbaRuleChanged(PgHbaRuleError):
'''
This exception is raised when a new parsed rule is a changed version of an existing rule.
'''
class PgHbaValueError(PgHbaError):
'''
This exception is raised when a parsed value is invalid.
'''
class PgHbaRuleValueError(PgHbaRuleError):
'''
This exception is raised when a rule contains an invalid value.
'''
class PgHba(object):
"""
PgHba object to read/write entries to/from.
pg_hba_file - the pg_hba file almost always /etc/pg_hba
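Illustrative usage (hypothetical path, not part of the original module):

    pg_hba = PgHba('/var/lib/postgresql/data/pg_hba.conf', backup=True, create=True)
    pg_hba.add_rule(PgHbaRule('host', 'all', 'all', '192.168.0.0/24', method='md5'))
    pg_hba.write()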
"""
def __init__(self, pg_hba_file=None, order="sdu", backup=False, create=False):
if order not in PG_HBA_ORDERS:
msg = "invalid order setting {0} (should be one of '{1}')."
raise PgHbaError(msg.format(order, "', '".join(PG_HBA_ORDERS)))
self.pg_hba_file = pg_hba_file
self.rules = None
self.comment = None
self.order = order
self.backup = backup
self.last_backup = None
self.create = create
self.unchanged()
# self.databases will be updated by add_rule and gives some idea of the number of databases
# (at least that are handled by this pg_hba)
self.databases = set(['postgres', 'template0', 'template1'])
# self.users will be updated by add_rule and gives some idea of the number of users
# (at least that are handled by this pg_hba) since this might also be groups with multiple
# users, this might be totally off, but at least it is some info...
self.users = set(['postgres'])
self.read()
def unchanged(self):
'''
This method resets self.diff to an empty default
'''
self.diff = {'before': {'file': self.pg_hba_file, 'pg_hba': []},
'after': {'file': self.pg_hba_file, 'pg_hba': []}}
def read(self):
'''
Read in the pg_hba from the system
'''
self.rules = {}
self.comment = []
# read the pg_hba file
try:
with open(self.pg_hba_file, 'r') as file:
for line in file:
line = line.strip()
# split off any trailing comment and keep it separately
if '#' in line:
line, comment = line.split('#', 1)
self.comment.append('#' + comment)
try:
self.add_rule(PgHbaRule(line=line))
except PgHbaRuleError:
pass
self.unchanged()
except IOError:
pass
def write(self, backup_file=''):
'''
This method writes the PgHba rules (back) to a file.
'''
if not self.changed():
return False
contents = self.render()
if self.pg_hba_file:
if not (os.path.isfile(self.pg_hba_file) or self.create):
raise PgHbaError("pg_hba file '{0}' doesn't exist. "
"Use create option to autocreate.".format(self.pg_hba_file))
if self.backup and os.path.isfile(self.pg_hba_file):
if backup_file:
self.last_backup = backup_file
else:
__backup_file_h, self.last_backup = tempfile.mkstemp(prefix='pg_hba')
shutil.copy(self.pg_hba_file, self.last_backup)
fileh = open(self.pg_hba_file, 'w')
else:
filed, __path = tempfile.mkstemp(prefix='pg_hba')
fileh = os.fdopen(filed, 'w')
fileh.write(contents)
self.unchanged()
fileh.close()
return True
def add_rule(self, rule):
'''
This method can be used to add a rule to the list of rules in this PgHba object
'''
key = rule.key()
try:
try:
oldrule = self.rules[key]
except KeyError:
raise PgHbaRuleChanged
ekeys = set(list(oldrule.keys()) + list(rule.keys()))
ekeys.remove('line')
for k in ekeys:
if oldrule[k] != rule[k]:
raise PgHbaRuleChanged('{0} changes {1}'.format(rule, oldrule))
except PgHbaRuleChanged:
self.rules[key] = rule
self.diff['after']['pg_hba'].append(rule.line())
if rule['db'] not in ['all', 'samerole', 'samegroup', 'replication']:
databases = set(rule['db'].split(','))
self.databases.update(databases)
if rule['usr'] != 'all':
user = rule['usr']
if user[0] == '+':
user = user[1:]
self.users.add(user)
def remove_rule(self, rule):
'''
This method can be used to find and remove a rule. It doesn't look for the exact rule, only
the rule with the same key.
'''
keys = rule.key()
try:
del self.rules[keys]
self.diff['before']['pg_hba'].append(rule.line())
except KeyError:
pass
def get_rules(self, with_lines=False):
'''
This method returns all the rules of the PgHba object
'''
rules = sorted(self.rules.values())
for rule in rules:
ret = {}
for key, value in rule.items():
ret[key] = value
if not with_lines:
if 'line' in ret:
del ret['line']
else:
ret['line'] = rule.line()
yield ret
def render(self):
'''
This method renders the content of the PgHba rules and comments.
The return value can be used directly to write to a new file.
'''
comment = '\n'.join(self.comment)
rule_lines = '\n'.join([rule['line'] for rule in self.get_rules(with_lines=True)])
result = comment + '\n' + rule_lines
# End it properly with a linefeed (if not already).
if result and result[-1] not in ['\n', '\r']:
result += '\n'
return result
def changed(self):
'''
This method can be called to detect if the PgHba file has been changed.
'''
return bool(self.diff['before']['pg_hba'] or self.diff['after']['pg_hba'])
class PgHbaRule(dict):
'''
This class represents one rule as defined in a line in a PgHbaFile.
'''
def __init__(self, contype=None, databases=None, users=None, source=None, netmask=None,
method=None, options=None, line=None):
'''
This function can be called with a comma separated list of databases and a comma separated
list of users and it will act as a generator that returns an expanded list of rules one by
one.
'''
super(PgHbaRule, self).__init__()
if line:
# Read values from line if parsed
self.fromline(line)
# read rule cols from parsed items
rule = dict(zip(PG_HBA_HDR, [contype, databases, users, source, netmask, method, options]))
for key, value in rule.items():
if value:
self[key] = value
# Some sanity checks
for key in ['method', 'type']:
if key not in self:
raise PgHbaRuleError('Missing {0} in rule {1}'.format(key, self))
if self['method'] not in PG_HBA_METHODS:
msg = "invalid method {0} (should be one of '{1}')."
raise PgHbaRuleValueError(msg.format(self['method'], "', '".join(PG_HBA_METHODS)))
if self['type'] not in PG_HBA_TYPES:
msg = "invalid connection type {0} (should be one of '{1}')."
raise PgHbaRuleValueError(msg.format(self['type'], "', '".join(PG_HBA_TYPES)))
if self['type'] == 'local':
self.unset('src')
self.unset('mask')
elif 'src' not in self:
raise PgHbaRuleError('Missing src in rule {0}'.format(self))
elif '/' in self['src']:
self.unset('mask')
else:
self['src'] = str(self.source())
self.unset('mask')
def unset(self, key):
'''
This method is used to unset certain columns if they exist
'''
if key in self:
del self[key]
def line(self):
'''
This method can be used to return (or generate) the line
'''
try:
return self['line']
except KeyError:
self['line'] = "\t".join([self[k] for k in PG_HBA_HDR if k in self.keys()])
return self['line']
def fromline(self, line):
'''
split into 'type', 'db', 'usr', 'src', 'mask', 'method', 'options' cols
'''
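# Illustrative example of the split below (hypothetical line, not from the original source):
#   'host    all    all    192.168.0.0/24    md5'
#   -> {'type': 'host', 'db': 'all', 'usr': 'all', 'src': '192.168.0.0/24', 'method': 'md5'}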
if WHITESPACES_RE.sub('', line) == '':
# empty line. skip this one...
return
cols = WHITESPACES_RE.split(line)
if len(cols) < 4:
msg = "Rule {0} has too few columns."
raise PgHbaValueError(msg.format(line))
if cols[0] not in PG_HBA_TYPES:
msg = "Rule {0} has unknown type: {1}."
raise PgHbaValueError(msg.format(line, cols[0]))
if cols[0] == 'local':
cols.insert(3, None) # No address
cols.insert(3, None) # No IP-mask
if len(cols) < 6:
cols.insert(4, None) # No IP-mask
elif cols[5] not in PG_HBA_METHODS:
cols.insert(4, None) # No IP-mask
if cols[5] not in PG_HBA_METHODS:
raise PgHbaValueError("Rule {0} of '{1}' type has invalid auth-method '{2}'".format(line, cols[0], cols[5]))
if len(cols) < 7:
cols.insert(6, None) # No auth-options
else:
cols[6] = " ".join(cols[6:]) # combine all auth-options
rule = dict(zip(PG_HBA_HDR, cols[:7]))
for key, value in rule.items():
if value:
self[key] = value
def key(self):
'''
This method can be used to get the key from a rule.
'''
if self['type'] == 'local':
source = 'local'
else:
source = str(self.source())
return (source, self['db'], self['usr'])
def source(self):
'''
This method is used to get the source of a rule as an ipaddress object if possible.
'''
if 'mask' in self.keys():
try:
ipaddress.ip_address(u'{0}'.format(self['src']))
except ValueError:
raise PgHbaValueError('Mask was specified, but source "{0}" '
'is not a valid ip'.format(self['src']))
# ipaddress module cannot work with ipv6 netmask, so lets convert it to prefixlen
# furthermore, an ipv4 address with a bad netmask throws a "Rule {} doesn't seem to be an ip, but has a
# mask" error that doesn't really describe what is going on.
try:
mask_as_ip = ipaddress.ip_address(u'{0}'.format(self['mask']))
except ValueError:
raise PgHbaValueError('Mask {0} seems to be invalid'.format(self['mask']))
binvalue = "{0:b}".format(int(mask_as_ip))
if '01' in binvalue:
raise PgHbaValueError('IP mask {0} seems invalid '
'(binary value has 1 after 0)'.format(self['mask']))
prefixlen = binvalue.count('1')
sourcenw = '{0}/{1}'.format(self['src'], prefixlen)
try:
return ipaddress.ip_network(u'{0}'.format(sourcenw), strict=False)
except ValueError:
raise PgHbaValueError('{0} is not a valid address range'.format(sourcenw))
try:
return ipaddress.ip_network(u'{0}'.format(self['src']), strict=False)
except ValueError:
return self['src']
def __lt__(self, other):
"""This function helps sorted to decide how to sort.
It just checks itself against the other and decides on some key values
if it should be sorted higher or lower in the list.
The way it works:
For networks, every 1 in 'netmask in binary' makes the subnet more specific.
Therefore I chose to use prefix as the weight.
So a single IP (/32) should have twice the weight of a /16 network.
To keep everything in the same weight scale,
- for ipv6, we use a weight scale of 0 (all possible ipv6 addresses) to 128 (single ip)
- for ipv4, we use a weight scale of 0 (all possible ipv4 addresses) to 128 (single ip)
Therefore for ipv4, we use prefixlen (0-32) * 4 for weight,
which corresponds to ipv6 (0-128).
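For example (illustrative): a single IPv4 host rule (/32, weight 128) sorts
before a /24 network rule (weight 96), which in turn sorts before an 'all'
rule (weight 0).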
"""
myweight = self.source_weight()
hisweight = other.source_weight()
if myweight != hisweight:
return myweight > hisweight
myweight = self.db_weight()
hisweight = other.db_weight()
if myweight != hisweight:
return myweight < hisweight
myweight = self.user_weight()
hisweight = other.user_weight()
if myweight != hisweight:
return myweight < hisweight
try:
return self['src'] < other['src']
except TypeError:
return self.source_type_weight() < other.source_type_weight()
except Exception:
# When all else fails, just compare the exact line.
return self.line() < other.line()
def source_weight(self):
"""Report the weight of this source net.
Basically this is the netmask, where IPv4 is normalized to IPv6
(IPv4/32 has the same weight as IPv6/128).
"""
if self['type'] == 'local':
return 130
sourceobj = self.source()
if isinstance(sourceobj, ipaddress.IPv4Network):
return sourceobj.prefixlen * 4
if isinstance(sourceobj, ipaddress.IPv6Network):
return sourceobj.prefixlen
if isinstance(sourceobj, str):
# You can also write all to match any IP address,
# samehost to match any of the server's own IP addresses,
# or samenet to match any address in any subnet that the server is connected to.
if sourceobj == 'all':
# (all is considered the full range of all ips, which has a weight of 0)
return 0
if sourceobj == 'samehost':
# (sort samehost second after local)
return 129
if sourceobj == 'samenet':
# Might write some fancy code to determine all prefixes
# from all interfaces and find a sane value for this one.
# For now, let's assume IPv4/24 or IPv6/96 (both have weight 96).
return 96
if sourceobj[0] == '.':
# suffix matching (domain name), let's assume a very large scale
# and therefore a very low weight IPv4/16 or IPv6/64 (both have weight 64).
return 64
# hostname, let's assume only one host matches, which is
# IPv4/32 or IPv6/128 (both have weight 128)
return 128
raise PgHbaValueError('Cannot deduce the source weight of this source {0}'.format(sourceobj))
def source_type_weight(self):
"""Give a weight on the type of this source.
Basically make sure that IPv6Networks are sorted higher than IPv4Networks.
This is a 'when all else fails' solution in __lt__.
"""
if self['type'] == 'local':
return 3
sourceobj = self.source()
if isinstance(sourceobj, ipaddress.IPv4Network):
return 2
if isinstance(sourceobj, ipaddress.IPv6Network):
return 1
if isinstance(sourceobj, str):
return 0
raise PgHbaValueError('This source {0} is of an unknown type...'.format(sourceobj))
def db_weight(self):
"""Report the weight of the database.
Normally, just 1, but for replication this is 0, and for 'all', this is more than 2.
"""
if self['db'] == 'all':
return 100000
if self['db'] == 'replication':
return 0
if self['db'] in ['samerole', 'samegroup']:
return 1
return 1 + self['db'].count(',')
def user_weight(self):
"""Report weight when comparing users."""
if self['usr'] == 'all':
return 1000000
return 1
def main():
'''
This function is the main function of this module
'''
# argument_spec = postgres_common_argument_spec()
argument_spec = dict()
argument_spec.update(
address=dict(type='str', default='samehost', aliases=['source', 'src']),
backup_file=dict(type='str'),
contype=dict(type='str', default=None, choices=PG_HBA_TYPES),
create=dict(type='bool', default=False),
databases=dict(type='str', default='all'),
dest=dict(type='path', required=True),
method=dict(type='str', default='md5', choices=PG_HBA_METHODS),
netmask=dict(type='str'),
options=dict(type='str'),
order=dict(type='str', default="sdu", choices=PG_HBA_ORDERS),
state=dict(type='str', default="present", choices=["absent", "present"]),
users=dict(type='str', default='all')
)
module = AnsibleModule(
argument_spec=argument_spec,
add_file_common_args=True,
supports_check_mode=True
)
if IPADDRESS_IMP_ERR is not None:
module.fail_json(msg=missing_required_lib('ipaddress'), exception=IPADDRESS_IMP_ERR)
contype = module.params["contype"]
create = bool(module.params["create"] or module.check_mode)
if module.check_mode:
backup = False
else:
backup = module.params['backup']
backup_file = module.params['backup_file']
databases = module.params["databases"]
dest = module.params["dest"]
method = module.params["method"]
netmask = module.params["netmask"]
options = module.params["options"]
order = module.params["order"]
source = module.params["address"]
state = module.params["state"]
users = module.params["users"]
ret = {'msgs': []}
try:
pg_hba = PgHba(dest, order, backup=backup, create=create)
except PgHbaError as error:
module.fail_json(msg='Error reading file:\n{0}'.format(error))
if contype:
try:
for database in databases.split(','):
for user in users.split(','):
rule = PgHbaRule(contype, database, user, source, netmask, method, options)
if state == "present":
ret['msgs'].append('Adding')
pg_hba.add_rule(rule)
else:
ret['msgs'].append('Removing')
pg_hba.remove_rule(rule)
except PgHbaError as error:
module.fail_json(msg='Error modifying rules:\n{0}'.format(error))
file_args = module.load_file_common_arguments(module.params)
ret['changed'] = changed = pg_hba.changed()
if changed:
ret['msgs'].append('Changed')
ret['diff'] = pg_hba.diff
if not module.check_mode:
ret['msgs'].append('Writing')
try:
if pg_hba.write(backup_file):
module.set_fs_attributes_if_different(file_args, True, pg_hba.diff,
expand=False)
except PgHbaError as error:
module.fail_json(msg='Error writing file:\n{0}'.format(error))
if pg_hba.last_backup:
ret['backup_file'] = pg_hba.last_backup
ret['pg_hba'] = list(pg_hba.get_rules())
module.exit_json(**ret)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,956 |
Modules using add_file_common_args=True and the files document fragment have multiple undocumented arguments
|
##### SUMMARY
If a module sets `add_file_common_args=True` when calling `AnsibleModule`, all elements from [FILE_COMMON_ARGUMENTS](https://github.com/ansible/ansible/blob/e9d29b1fe4285d90d7a4506b80260a9e24c3bcea/lib/ansible/module_utils/basic.py#L229-L257) are included in the argument spec. The [files document fragment](https://github.com/ansible/ansible/blob/e9d29b1fe4285d90d7a4506b80260a9e24c3bcea/lib/ansible/plugins/doc_fragments/files.py#L15-L80) only documents a subset of them, though. Missing are:
- src
- follow
- force
- content
- backup
- remote_src
- regexp
- delimiter
- directory_mode
Most module authors using `add_file_common_args=True` are probably not aware that their modules have these options as well.
I don't think these extra options should be added by default if `add_file_common_args=True` is specified.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/basic.py
##### ANSIBLE VERSION
```
devel
```
|
https://github.com/ansible/ansible/issues/64956
|
https://github.com/ansible/ansible/pull/66389
|
802cc602429ea2b37eb7d75a8bb1dc2ebcfc05e1
|
f725dce9368dc4d33c2cddd4790c57e1d00496f0
| 2019-11-17T13:42:30Z |
python
| 2020-02-07T23:56:01Z |
lib/ansible/modules/files/copy.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: copy
version_added: historical
short_description: Copy files to remote locations
description:
- The C(copy) module copies a file from the local or remote machine to a location on the remote machine.
- Use the M(fetch) module to copy files from remote locations to the local box.
- If you need variable interpolation in copied files, use the M(template) module. Using a variable in the C(content)
field will result in unpredictable output.
- For Windows targets, use the M(win_copy) module instead.
options:
src:
description:
- Local path to a file to copy to the remote server.
- This can be absolute or relative.
- If path is a directory, it is copied recursively. In this case, if path ends
with "/", only inside contents of that directory are copied to destination.
Otherwise, if it does not end with "/", the directory itself with all contents
is copied. This behavior is similar to the C(rsync) command line tool.
type: path
content:
description:
- When used instead of C(src), sets the contents of a file directly to the specified value.
- Works only when C(dest) is a file. Creates the file if it does not exist.
- For advanced formatting or if C(content) contains a variable, use the M(template) module.
type: str
version_added: '1.1'
dest:
description:
- Remote absolute path where the file should be copied to.
- If C(src) is a directory, this must be a directory too.
- If C(dest) is a non-existent path and if either C(dest) ends with "/" or C(src) is a directory, C(dest) is created.
- If I(dest) is a relative path, the starting directory is determined by the remote host.
- If C(src) and C(dest) are files, the parent directory of C(dest) is not created and the task fails if it does not already exist.
type: path
required: yes
backup:
description:
- Create a backup file including the timestamp information so you can get the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
version_added: '0.7'
force:
description:
- Influence whether the remote file must always be replaced.
- If C(yes), the remote file will be replaced when contents are different than the source.
- If C(no), the file will only be transferred if the destination does not exist.
- Alias C(thirsty) has been deprecated and will be removed in 2.13.
type: bool
default: yes
aliases: [ thirsty ]
version_added: '1.1'
mode:
description:
- The permissions of the destination file or directory.
- For those used to C(/usr/bin/chmod) remember that modes are actually octal numbers.
You must either add a leading zero so that Ansible's YAML parser knows it is an octal number
(like C(0644) or C(01777)) or quote it (like C('644') or C('1777')) so Ansible receives a string
and can do its own conversion from string into number. Giving Ansible a number without following
one of these rules will end up with a decimal number which will have unexpected results.
- As of Ansible 1.8, the mode may be specified as a symbolic mode (for example, C(u+rwx) or C(u=rw,g=r,o=r)).
- As of Ansible 2.3, the mode may also be the special string C(preserve).
- C(preserve) means that the file will be given the same permissions as the source file.
type: path
directory_mode:
description:
- When doing a recursive copy set the mode for the directories.
- If this is not set we will use the system defaults.
- The mode is only set on directories which are newly created, and will not affect those that already existed.
type: raw
version_added: '1.5'
remote_src:
description:
- Influence whether C(src) needs to be transferred or already is present remotely.
- If C(no), it will search for C(src) at originating/master machine.
- If C(yes) it will go to the remote/target machine for the C(src).
- C(remote_src) supports recursive copying as of version 2.8.
- C(remote_src) only works with C(mode=preserve) as of version 2.6.
type: bool
default: no
version_added: '2.0'
follow:
description:
- This flag indicates that filesystem links in the destination, if they exist, should be followed.
type: bool
default: no
version_added: '1.8'
local_follow:
description:
- This flag indicates that filesystem links in the source tree, if they exist, should be followed.
type: bool
default: yes
version_added: '2.4'
checksum:
description:
- SHA1 checksum of the file being transferred.
- Used to validate that the copy of the file was successful.
- If this is not provided, Ansible will use the locally calculated checksum of the src file.
type: str
version_added: '2.5'
extends_documentation_fragment:
- decrypt
- files
- validate
notes:
- The M(copy) module's recursive copy facility does not scale to lots (>hundreds) of files.
seealso:
- module: assemble
- module: fetch
- module: file
- module: synchronize
- module: template
- module: win_copy
author:
- Ansible Core Team
- Michael DeHaan
'''
EXAMPLES = r'''
- name: Copy file with owner and permissions
copy:
src: /srv/myfiles/foo.conf
dest: /etc/foo.conf
owner: foo
group: foo
mode: '0644'
- name: Copy file with owner and permission, using symbolic representation
copy:
src: /srv/myfiles/foo.conf
dest: /etc/foo.conf
owner: foo
group: foo
mode: u=rw,g=r,o=r
- name: Another symbolic mode example, adding some permissions and removing others
copy:
src: /srv/myfiles/foo.conf
dest: /etc/foo.conf
owner: foo
group: foo
mode: u+rw,g-wx,o-rwx
- name: Copy a new "ntp.conf" file into place, backing up the original if it differs from the copied version
copy:
src: /mine/ntp.conf
dest: /etc/ntp.conf
owner: root
group: root
mode: '0644'
backup: yes
- name: Copy a new "sudoers" file into place, after passing validation with visudo
copy:
src: /mine/sudoers
dest: /etc/sudoers
validate: /usr/sbin/visudo -csf %s
- name: Copy a "sudoers" file on the remote machine for editing
copy:
src: /etc/sudoers
dest: /etc/sudoers.edit
remote_src: yes
validate: /usr/sbin/visudo -csf %s
- name: Copy using inline content
copy:
content: '# This file was moved to /etc/other.conf'
dest: /etc/mine.conf
- name: If follow=yes, /path/to/file will be overwritten by contents of foo.conf
copy:
src: /etc/foo.conf
dest: /path/to/link # link to /path/to/file
follow: yes
- name: If follow=no, /path/to/link will become a file and be overwritten by contents of foo.conf
copy:
src: /etc/foo.conf
dest: /path/to/link # link to /path/to/file
follow: no
'''
RETURN = r'''
dest:
description: Destination file/path
returned: success
type: str
sample: /path/to/file.txt
src:
description: Source file used for the copy on the target machine
returned: changed
type: str
sample: /home/httpd/.ansible/tmp/ansible-tmp-1423796390.97-147729857856000/source
md5sum:
description: MD5 checksum of the file after running copy
returned: when supported
type: str
sample: 2a5aeecc61dc98c4d780b14b330e3282
checksum:
description: SHA1 checksum of the file after running copy
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
backup_file:
description: Name of backup file created
returned: changed and if backup=yes
type: str
sample: /path/to/file.txt.2015-02-12@22:09~
gid:
description: Group id of the file, after execution
returned: success
type: int
sample: 100
group:
description: Group of the file, after execution
returned: success
type: str
sample: httpd
owner:
description: Owner of the file, after execution
returned: success
type: str
sample: httpd
uid:
description: Owner id of the file, after execution
returned: success
type: int
sample: 100
mode:
description: Permissions of the target, after execution
returned: success
type: str
sample: 0644
size:
description: Size of the target, after execution
returned: success
type: int
sample: 1220
state:
description: State of the target, after execution
returned: success
type: str
sample: file
'''
import errno
import filecmp
import grp
import os
import os.path
import platform
import pwd
import shutil
import stat
import tempfile
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils._text import to_bytes, to_native
from ansible.module_utils.six import PY3
# The AnsibleModule object
module = None
class AnsibleModuleError(Exception):
def __init__(self, results):
self.results = results
# Once we get run_command moved into common, we can move this into a common/files module. We can't
# until then because of the module.run_command() method. We may need to move it into
# basic::AnsibleModule() until then but if so, make it a private function so that we don't have to
# keep it for backwards compatibility later.
def clear_facls(path):
setfacl = get_bin_path('setfacl')
# FIXME "setfacl -b" is available on Linux and FreeBSD. There is "setfacl -D e" on z/OS. Others?
acl_command = [setfacl, '-b', path]
b_acl_command = [to_bytes(x) for x in acl_command]
rc, out, err = module.run_command(b_acl_command, environ_update=dict(LANG='C', LC_ALL='C', LC_MESSAGES='C'))
if rc != 0:
raise RuntimeError('Error running "{0}": stdout: "{1}"; stderr: "{2}"'.format(' '.join(b_acl_command), out, err))
def split_pre_existing_dir(dirname):
'''
Return the first pre-existing directory and a list of the new directories that will be created.
'''
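# Illustrative (hypothetical paths, not from the original source):
#   split_pre_existing_dir('/srv/www/new1/new2') -> ('/srv/www', ['new1', 'new2'])
#   when /srv/www exists but new1/new2 do not.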
head, tail = os.path.split(dirname)
b_head = to_bytes(head, errors='surrogate_or_strict')
if head == '':
return ('.', [tail])
if not os.path.exists(b_head):
if head == '/':
raise AnsibleModuleError(results={'msg': "The '/' directory doesn't exist on this machine."})
(pre_existing_dir, new_directory_list) = split_pre_existing_dir(head)
else:
return (head, [tail])
new_directory_list.append(tail)
return (pre_existing_dir, new_directory_list)
def adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed):
'''
Walk the new directories list and make sure that permissions are as we would expect
'''
if new_directory_list:
working_dir = os.path.join(pre_existing_dir, new_directory_list.pop(0))
directory_args['path'] = working_dir
changed = module.set_fs_attributes_if_different(directory_args, changed)
changed = adjust_recursive_directory_permissions(working_dir, new_directory_list, module, directory_args, changed)
return changed
def chown_recursive(path, module):
changed = False
owner = module.params['owner']
group = module.params['group']
if owner is not None:
if not module.check_mode:
for dirpath, dirnames, filenames in os.walk(path):
owner_changed = module.set_owner_if_different(dirpath, owner, False)
if owner_changed is True:
changed = owner_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
owner_changed = module.set_owner_if_different(dir, owner, False)
if owner_changed is True:
changed = owner_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
owner_changed = module.set_owner_if_different(file, owner, False)
if owner_changed is True:
changed = owner_changed
else:
uid = pwd.getpwnam(owner).pw_uid
for dirpath, dirnames, filenames in os.walk(path):
owner_changed = (os.stat(dirpath).st_uid != uid)
if owner_changed is True:
changed = owner_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
owner_changed = (os.stat(dir).st_uid != uid)
if owner_changed is True:
changed = owner_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
owner_changed = (os.stat(file).st_uid != uid)
if owner_changed is True:
changed = owner_changed
if group is not None:
if not module.check_mode:
for dirpath, dirnames, filenames in os.walk(path):
group_changed = module.set_group_if_different(dirpath, group, False)
if group_changed is True:
changed = group_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
group_changed = module.set_group_if_different(dir, group, False)
if group_changed is True:
changed = group_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
group_changed = module.set_group_if_different(file, group, False)
if group_changed is True:
changed = group_changed
else:
gid = grp.getgrnam(group).gr_gid
for dirpath, dirnames, filenames in os.walk(path):
group_changed = (os.stat(dirpath).st_gid != gid)
if group_changed is True:
changed = group_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
group_changed = (os.stat(dir).st_gid != gid)
if group_changed is True:
changed = group_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
group_changed = (os.stat(file).st_gid != gid)
if group_changed is True:
changed = group_changed
return changed
def copy_diff_files(src, dest, module):
changed = False
owner = module.params['owner']
group = module.params['group']
local_follow = module.params['local_follow']
diff_files = filecmp.dircmp(src, dest).diff_files
if len(diff_files):
changed = True
if not module.check_mode:
for item in diff_files:
src_item_path = os.path.join(src, item)
dest_item_path = os.path.join(dest, item)
b_src_item_path = to_bytes(src_item_path, errors='surrogate_or_strict')
b_dest_item_path = to_bytes(dest_item_path, errors='surrogate_or_strict')
if os.path.islink(b_src_item_path) and local_follow is False:
linkto = os.readlink(b_src_item_path)
os.symlink(linkto, b_dest_item_path)
else:
shutil.copyfile(b_src_item_path, b_dest_item_path)
if owner is not None:
module.set_owner_if_different(b_dest_item_path, owner, False)
if group is not None:
module.set_group_if_different(b_dest_item_path, group, False)
changed = True
return changed
def copy_left_only(src, dest, module):
changed = False
owner = module.params['owner']
group = module.params['group']
local_follow = module.params['local_follow']
left_only = filecmp.dircmp(src, dest).left_only
if len(left_only):
changed = True
if not module.check_mode:
for item in left_only:
src_item_path = os.path.join(src, item)
dest_item_path = os.path.join(dest, item)
b_src_item_path = to_bytes(src_item_path, errors='surrogate_or_strict')
b_dest_item_path = to_bytes(dest_item_path, errors='surrogate_or_strict')
if os.path.islink(b_src_item_path) and os.path.isdir(b_src_item_path) and local_follow is True:
shutil.copytree(b_src_item_path, b_dest_item_path, symlinks=not(local_follow))
chown_recursive(b_dest_item_path, module)
if os.path.islink(b_src_item_path) and os.path.isdir(b_src_item_path) and local_follow is False:
linkto = os.readlink(b_src_item_path)
os.symlink(linkto, b_dest_item_path)
if os.path.islink(b_src_item_path) and os.path.isfile(b_src_item_path) and local_follow is True:
shutil.copyfile(b_src_item_path, b_dest_item_path)
if owner is not None:
module.set_owner_if_different(b_dest_item_path, owner, False)
if group is not None:
module.set_group_if_different(b_dest_item_path, group, False)
if os.path.islink(b_src_item_path) and os.path.isfile(b_src_item_path) and local_follow is False:
linkto = os.readlink(b_src_item_path)
os.symlink(linkto, b_dest_item_path)
if not os.path.islink(b_src_item_path) and os.path.isfile(b_src_item_path):
shutil.copyfile(b_src_item_path, b_dest_item_path)
if owner is not None:
module.set_owner_if_different(b_dest_item_path, owner, False)
if group is not None:
module.set_group_if_different(b_dest_item_path, group, False)
if not os.path.islink(b_src_item_path) and os.path.isdir(b_src_item_path):
shutil.copytree(b_src_item_path, b_dest_item_path, symlinks=not(local_follow))
chown_recursive(b_dest_item_path, module)
changed = True
return changed
def copy_common_dirs(src, dest, module):
changed = False
common_dirs = filecmp.dircmp(src, dest).common_dirs
for item in common_dirs:
src_item_path = os.path.join(src, item)
dest_item_path = os.path.join(dest, item)
b_src_item_path = to_bytes(src_item_path, errors='surrogate_or_strict')
b_dest_item_path = to_bytes(dest_item_path, errors='surrogate_or_strict')
diff_files_changed = copy_diff_files(b_src_item_path, b_dest_item_path, module)
left_only_changed = copy_left_only(b_src_item_path, b_dest_item_path, module)
if diff_files_changed or left_only_changed:
changed = True
# recurse into subdirectory
changed = changed or copy_common_dirs(os.path.join(src, item), os.path.join(dest, item), module)
return changed
def main():
global module
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec=dict(
src=dict(type='path'),
_original_basename=dict(type='str'), # used to handle 'dest is a directory' via template, a slight hack
content=dict(type='str', no_log=True),
dest=dict(type='path', required=True),
backup=dict(type='bool', default=False),
force=dict(type='bool', default=True, aliases=['thirsty']),
validate=dict(type='str'),
directory_mode=dict(type='raw'),
remote_src=dict(type='bool'),
local_follow=dict(type='bool'),
checksum=dict(type='str'),
),
add_file_common_args=True,
supports_check_mode=True,
)
if module.params.get('thirsty'):
module.deprecate('The alias "thirsty" has been deprecated and will be removed, use "force" instead', version='2.13')
src = module.params['src']
b_src = to_bytes(src, errors='surrogate_or_strict')
dest = module.params['dest']
# Make sure we always have a directory component for later processing
if os.path.sep not in dest:
dest = '.{0}{1}'.format(os.path.sep, dest)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
backup = module.params['backup']
force = module.params['force']
_original_basename = module.params.get('_original_basename', None)
validate = module.params.get('validate', None)
follow = module.params['follow']
local_follow = module.params['local_follow']
mode = module.params['mode']
owner = module.params['owner']
group = module.params['group']
remote_src = module.params['remote_src']
checksum = module.params['checksum']
if not os.path.exists(b_src):
module.fail_json(msg="Source %s not found" % (src))
if not os.access(b_src, os.R_OK):
module.fail_json(msg="Source %s not readable" % (src))
# Preserve is usually handled in the action plugin but mode + remote_src has to be done on the
# remote host
if module.params['mode'] == 'preserve':
module.params['mode'] = '0%03o' % stat.S_IMODE(os.stat(b_src).st_mode)
mode = module.params['mode']
checksum_dest = None
if os.path.isfile(src):
checksum_src = module.sha1(src)
else:
checksum_src = None
# Backwards compat only. This will be None in FIPS mode
try:
if os.path.isfile(src):
md5sum_src = module.md5(src)
else:
md5sum_src = None
except ValueError:
md5sum_src = None
changed = False
if checksum and checksum_src != checksum:
module.fail_json(
msg='Copied file does not match the expected checksum. Transfer failed.',
checksum=checksum_src,
expected_checksum=checksum
)
# Special handling for recursive copy - create intermediate dirs
if _original_basename and dest.endswith(os.sep):
dest = os.path.join(dest, _original_basename)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
dirname = os.path.dirname(dest)
b_dirname = to_bytes(dirname, errors='surrogate_or_strict')
if not os.path.exists(b_dirname):
try:
(pre_existing_dir, new_directory_list) = split_pre_existing_dir(dirname)
except AnsibleModuleError as e:
e.results['msg'] += ' Could not copy to {0}'.format(dest)
module.fail_json(**e.results)
os.makedirs(b_dirname)
directory_args = module.load_file_common_arguments(module.params)
directory_mode = module.params["directory_mode"]
if directory_mode is not None:
directory_args['mode'] = directory_mode
else:
directory_args['mode'] = None
adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed)
if os.path.isdir(b_dest):
basename = os.path.basename(src)
if _original_basename:
basename = _original_basename
dest = os.path.join(dest, basename)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
if os.path.islink(b_dest) and follow:
b_dest = os.path.realpath(b_dest)
dest = to_native(b_dest, errors='surrogate_or_strict')
if not force:
module.exit_json(msg="file already exists", src=src, dest=dest, changed=False)
if os.access(b_dest, os.R_OK) and os.path.isfile(b_dest):
checksum_dest = module.sha1(dest)
else:
if not os.path.exists(os.path.dirname(b_dest)):
try:
# os.path.exists() can return false in some
# circumstances where the directory does not have
# the execute bit for the current user set, in
# which case the stat() call will raise an OSError
os.stat(os.path.dirname(b_dest))
except OSError as e:
if "permission denied" in to_native(e).lower():
module.fail_json(msg="Destination directory %s is not accessible" % (os.path.dirname(dest)))
module.fail_json(msg="Destination directory %s does not exist" % (os.path.dirname(dest)))
if not os.access(os.path.dirname(b_dest), os.W_OK) and not module.params['unsafe_writes']:
module.fail_json(msg="Destination %s not writable" % (os.path.dirname(dest)))
backup_file = None
if checksum_src != checksum_dest or os.path.islink(b_dest):
if not module.check_mode:
try:
if backup:
if os.path.exists(b_dest):
backup_file = module.backup_local(dest)
# allow for conversion from symlink.
if os.path.islink(b_dest):
os.unlink(b_dest)
open(b_dest, 'w').close()
if validate:
# if we have a mode, make sure we set it on the temporary
# file source as some validations may require it
if mode is not None:
module.set_mode_if_different(src, mode, False)
if owner is not None:
module.set_owner_if_different(src, owner, False)
if group is not None:
module.set_group_if_different(src, group, False)
if "%s" not in validate:
module.fail_json(msg="validate must contain %%s: %s" % (validate))
(rc, out, err) = module.run_command(validate % src)
if rc != 0:
module.fail_json(msg="failed to validate", exit_status=rc, stdout=out, stderr=err)
b_mysrc = b_src
if remote_src and os.path.isfile(b_src):
_, b_mysrc = tempfile.mkstemp(dir=os.path.dirname(b_dest))
shutil.copyfile(b_src, b_mysrc)
try:
shutil.copystat(b_src, b_mysrc)
except OSError as err:
if err.errno == errno.ENOSYS and mode == "preserve":
module.warn("Unable to copy stats {0}".format(to_native(b_src)))
else:
raise
# Remember whether the source carries POSIX ACLs; might be needed below after atomic_move
if PY3 and hasattr(os, 'listxattr'):
try:
src_has_acls = 'system.posix_acl_access' in os.listxattr(src)
except Exception:
# assume unwanted ACLs by default
src_has_acls = True
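# Illustrative example: on Linux, os.listxattr(src) can return entries such as
# ['system.posix_acl_access', 'user.comment'] for a file carrying a POSIX ACL,
# which is what the membership test in the try block above detects.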
module.atomic_move(b_mysrc, dest, unsafe_writes=module.params['unsafe_writes'])
if PY3 and hasattr(os, 'listxattr') and platform.system() == 'Linux' and not remote_src:
# atomic_move used above to copy src into dest might, in some cases,
# use shutil.copy2 which in turn uses shutil.copystat.
# Since Python 3.3, shutil.copystat copies file extended attributes:
# https://docs.python.org/3/library/shutil.html#shutil.copystat
# os.listxattr (along with others) was added to handle the operation.
# This means that on Python 3 we are copying the extended attributes which includes
# the ACLs on some systems - further limited to Linux as the documentation above claims
# that the extended attributes are copied only on Linux. Also, os.listxattr is only
# available on Linux.
# If not remote_src, then the file was copied from the controller. In that
# case, any filesystem ACLs are artifacts of the copy rather than preservation
# of existing attributes. Get rid of them:
if src_has_acls:
# FIXME If dest has any default ACLs, they are not applied to src now because
# they were overridden by copystat. Should/can we do anything about this?
# 'system.posix_acl_default' in os.listxattr(os.path.dirname(b_dest))
try:
clear_facls(dest)
except ValueError as e:
if 'setfacl' in to_native(e):
# No setfacl so we're okay. The controller couldn't have set a facl
# without the setfacl command
pass
else:
raise
except RuntimeError as e:
# setfacl failed.
if 'Operation not supported' in to_native(e):
# The file system does not support ACLs.
pass
else:
raise
except (IOError, OSError):
module.fail_json(msg="failed to copy: %s to %s" % (src, dest), traceback=traceback.format_exc())
changed = True
else:
changed = False
if checksum_src is None and checksum_dest is None:
if remote_src and os.path.isdir(module.params['src']):
b_src = to_bytes(module.params['src'], errors='surrogate_or_strict')
b_dest = to_bytes(module.params['dest'], errors='surrogate_or_strict')
if src.endswith(os.path.sep) and os.path.isdir(module.params['dest']):
diff_files_changed = copy_diff_files(b_src, b_dest, module)
left_only_changed = copy_left_only(b_src, b_dest, module)
common_dirs_changed = copy_common_dirs(b_src, b_dest, module)
owner_group_changed = chown_recursive(b_dest, module)
if diff_files_changed or left_only_changed or common_dirs_changed or owner_group_changed:
changed = True
if src.endswith(os.path.sep) and not os.path.exists(module.params['dest']):
b_basename = to_bytes(os.path.basename(src), errors='surrogate_or_strict')
b_dest = to_bytes(os.path.join(b_dest, b_basename), errors='surrogate_or_strict')
b_src = to_bytes(os.path.join(module.params['src'], ""), errors='surrogate_or_strict')
if not module.check_mode:
shutil.copytree(b_src, b_dest, symlinks=not(local_follow))
chown_recursive(dest, module)
changed = True
if not src.endswith(os.path.sep) and os.path.isdir(module.params['dest']):
b_basename = to_bytes(os.path.basename(src), errors='surrogate_or_strict')
b_dest = to_bytes(os.path.join(b_dest, b_basename), errors='surrogate_or_strict')
b_src = to_bytes(os.path.join(module.params['src'], ""), errors='surrogate_or_strict')
if not module.check_mode and not os.path.exists(b_dest):
shutil.copytree(b_src, b_dest, symlinks=not(local_follow))
changed = True
chown_recursive(dest, module)
if module.check_mode and not os.path.exists(b_dest):
changed = True
if os.path.exists(b_dest):
diff_files_changed = copy_diff_files(b_src, b_dest, module)
left_only_changed = copy_left_only(b_src, b_dest, module)
common_dirs_changed = copy_common_dirs(b_src, b_dest, module)
owner_group_changed = chown_recursive(b_dest, module)
if diff_files_changed or left_only_changed or common_dirs_changed or owner_group_changed:
changed = True
if not src.endswith(os.path.sep) and not os.path.exists(module.params['dest']):
b_basename = to_bytes(os.path.basename(module.params['src']), errors='surrogate_or_strict')
b_dest = to_bytes(os.path.join(b_dest, b_basename), errors='surrogate_or_strict')
if not module.check_mode and not os.path.exists(b_dest):
os.makedirs(b_dest)
b_src = to_bytes(os.path.join(module.params['src'], ""), errors='surrogate_or_strict')
diff_files_changed = copy_diff_files(b_src, b_dest, module)
left_only_changed = copy_left_only(b_src, b_dest, module)
common_dirs_changed = copy_common_dirs(b_src, b_dest, module)
owner_group_changed = chown_recursive(b_dest, module)
if diff_files_changed or left_only_changed or common_dirs_changed or owner_group_changed:
changed = True
if module.check_mode and not os.path.exists(b_dest):
changed = True
res_args = dict(
dest=dest, src=src, md5sum=md5sum_src, checksum=checksum_src, changed=changed
)
if backup_file:
res_args['backup_file'] = backup_file
module.params['dest'] = dest
if not module.check_mode:
file_args = module.load_file_common_arguments(module.params)
res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'])
module.exit_json(**res_args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,956 |
Modules using add_file_common_args=True and the files document fragment have multiple undocumented arguments
|
##### SUMMARY
If a module sets `add_file_common_args=True` when calling `AnsibleModule`, all elements from [FILE_COMMON_ARGUMENTS](https://github.com/ansible/ansible/blob/e9d29b1fe4285d90d7a4506b80260a9e24c3bcea/lib/ansible/module_utils/basic.py#L229-L257) are included in the argument spec. The [files document fragment](https://github.com/ansible/ansible/blob/e9d29b1fe4285d90d7a4506b80260a9e24c3bcea/lib/ansible/plugins/doc_fragments/files.py#L15-L80) only documents a subset of them, though. Missing are:
- src
- follow
- force
- content
- backup
- remote_src
- regexp
- delimiter
- directory_mode
Most module authors using `add_file_common_args=True` are probably not aware that their modules have these options as well.
I don't think these extra options should be added by default if `add_file_common_args=True` is specified.
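As a rough illustration of the behaviour described above (a simplified sketch, not the actual implementation in `basic.py`; the helper name `merge_file_common_args` is made up for this example):
```python
# Minimal sketch: how add_file_common_args=True effectively widens a module's
# argument spec. FILE_COMMON_ARGUMENTS below is abbreviated; the real dict in
# lib/ansible/module_utils/basic.py also carries src, follow, force, content,
# backup, remote_src, regexp, delimiter and directory_mode.
FILE_COMMON_ARGUMENTS = dict(
    mode=dict(type='raw'),
    owner=dict(type='str'),
    group=dict(type='str'),
    directory_mode=dict(type='raw'),
    remote_src=dict(type='bool'),
)

def merge_file_common_args(argument_spec):
    """Return a spec containing every file common argument the module did not define itself."""
    merged = dict(argument_spec)
    for name, spec in FILE_COMMON_ARGUMENTS.items():
        merged.setdefault(name, spec)  # options defined by the module win
    return merged

# A module that only documents 'dest' still ends up accepting the extras:
spec = merge_file_common_args({'dest': dict(type='path', required=True)})
assert 'directory_mode' in spec and 'remote_src' in spec
```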
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/basic.py
##### ANSIBLE VERSION
```
devel
```
|
https://github.com/ansible/ansible/issues/64956
|
https://github.com/ansible/ansible/pull/66389
|
802cc602429ea2b37eb7d75a8bb1dc2ebcfc05e1
|
f725dce9368dc4d33c2cddd4790c57e1d00496f0
| 2019-11-17T13:42:30Z |
python
| 2020-02-07T23:56:01Z |
lib/ansible/modules/net_tools/basics/uri.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2013, Romeo Theriault <romeot () hawaii.edu>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: uri
short_description: Interacts with webservices
description:
- Interacts with HTTP and HTTPS web services and supports Digest, Basic and WSSE
HTTP authentication mechanisms.
- For Windows targets, use the M(win_uri) module instead.
version_added: "1.1"
options:
url:
description:
- HTTP or HTTPS URL in the form (http|https)://host.domain[:port]/path
type: str
required: true
dest:
description:
- A path of where to download the file to (if desired). If I(dest) is a
directory, the basename of the file on the remote server will be used.
type: path
url_username:
description:
- A username for the module to use for Digest, Basic or WSSE authentication.
type: str
aliases: [ user ]
url_password:
description:
- A password for the module to use for Digest, Basic or WSSE authentication.
type: str
aliases: [ password ]
body:
description:
- The body of the http request/response to the web service. If C(body_format) is set
to 'json' it will take an already formatted JSON string or convert a data structure
into JSON. If C(body_format) is set to 'form-urlencoded' it will convert a dictionary
or list of tuples into an 'application/x-www-form-urlencoded' string. (Added in v2.7)
type: raw
body_format:
description:
- The serialization format of the body. When set to C(json) or C(form-urlencoded), encodes the
body argument, if needed, and automatically sets the Content-Type header accordingly.
As of C(2.3) it is possible to override the `Content-Type` header when I(body_format)
is set to C(json) or C(form-urlencoded), via the I(headers) option.
type: str
choices: [ form-urlencoded, json, raw ]
default: raw
version_added: "2.0"
method:
description:
- The HTTP method of the request or response.
- In more recent versions we do not restrict the method at the module level anymore
but it still must be a valid method accepted by the service handling the request.
type: str
default: GET
return_content:
description:
- Whether or not to return the body of the response as a "content" key in
the dictionary result.
- Independently of this option, if the reported Content-type is "application/json", then the JSON is
always loaded into a key called C(json) in the dictionary results.
type: bool
default: no
force_basic_auth:
description:
- Force the sending of the Basic authentication header upon initial request.
- The library used by the uri module only sends authentication information when a webservice
responds to an initial request with a 401 status. Since some basic auth services do not properly
send a 401, logins will fail.
type: bool
default: no
follow_redirects:
description:
- Whether or not the URI module should follow redirects. C(all) will follow all redirects.
C(safe) will follow only "safe" redirects, where "safe" means that the client is only
doing a GET or HEAD on the URI to which it is being redirected. C(none) will not follow
any redirects. Note that C(yes) and C(no) choices are accepted for backwards compatibility,
where C(yes) is the equivalent of C(all) and C(no) is the equivalent of C(safe). C(yes) and C(no)
are deprecated and will be removed in some future version of Ansible.
type: str
choices: ['all', 'no', 'none', 'safe', 'urllib2', 'yes']
default: safe
creates:
description:
- A filename; when it already exists, this step will not be run.
type: path
removes:
description:
- A filename; when it does not exist, this step will not be run.
type: path
status_code:
description:
- A list of valid, numeric, HTTP status codes that signify success of the request.
type: list
default: [ 200 ]
timeout:
description:
- The socket level timeout in seconds
type: int
default: 30
headers:
description:
- Add custom HTTP headers to a request in the format of a YAML hash. As
of C(2.3) supplying C(Content-Type) here will override the header
generated by supplying C(json) or C(form-urlencoded) for I(body_format).
type: dict
version_added: '2.1'
validate_certs:
description:
- If C(no), SSL certificates will not be validated.
- This should only be set to C(no) on personally controlled sites using self-signed certificates.
- Prior to 1.9.2 the code defaulted to C(no).
type: bool
default: yes
version_added: '1.9.2'
client_cert:
description:
- PEM formatted certificate chain file to be used for SSL client authentication.
- This file can also include the key; if the key is included, I(client_key) is not required
type: path
version_added: '2.4'
client_key:
description:
- PEM formatted file that contains your private key to be used for SSL client authentication.
- If I(client_cert) contains both the certificate and key, this option is not required.
type: path
version_added: '2.4'
src:
description:
- Path to file to be submitted to the remote server.
- Cannot be used with I(body).
type: path
version_added: '2.7'
remote_src:
description:
- If C(no), the module will search for src on the originating/master machine.
- If C(yes), the module will use the C(src) path on the remote/target machine.
type: bool
default: no
version_added: '2.7'
force:
description:
- If C(yes) do not get a cached copy.
- Alias C(thirsty) has been deprecated and will be removed in 2.13.
type: bool
default: no
aliases: [ thirsty ]
use_proxy:
description:
- If C(no), it will not use a proxy, even if one is defined in an environment variable on the target hosts.
type: bool
default: yes
unix_socket:
description:
- Path to Unix domain socket to use for connection
version_added: '2.8'
http_agent:
description:
- Header to identify as, generally appears in web server logs.
type: str
default: ansible-httpget
notes:
- The dependency on httplib2 was removed in Ansible 2.1.
- The module returns all the HTTP headers in lower-case.
- For Windows targets, use the M(win_uri) module instead.
seealso:
- module: get_url
- module: win_uri
author:
- Romeo Theriault (@romeotheriault)
extends_documentation_fragment: files
'''
EXAMPLES = r'''
- name: Check that you can connect (GET) to a page and it returns a status 200
uri:
url: http://www.example.com
- name: Check that a page returns a status 200 and fail if the word AWESOME is not in the page contents
uri:
url: http://www.example.com
return_content: yes
register: this
failed_when: "'AWESOME' not in this.content"
- name: Create a JIRA issue
uri:
url: https://your.jira.example.com/rest/api/2/issue/
user: your_username
password: your_pass
method: POST
body: "{{ lookup('file','issue.json') }}"
force_basic_auth: yes
status_code: 201
body_format: json
- name: Login to a form based webpage, then use the returned cookie to access the app in later tasks
uri:
url: https://your.form.based.auth.example.com/index.php
method: POST
body_format: form-urlencoded
body:
name: your_username
password: your_password
enter: Sign in
status_code: 302
register: login
- name: Login to a form based webpage using a list of tuples
uri:
url: https://your.form.based.auth.example.com/index.php
method: POST
body_format: form-urlencoded
body:
- [ name, your_username ]
- [ password, your_password ]
- [ enter, Sign in ]
status_code: 302
register: login
- name: Connect to website using a previously stored cookie
uri:
url: https://your.form.based.auth.example.com/dashboard.php
method: GET
return_content: yes
headers:
Cookie: "{{ login.cookies_string }}"
- name: Queue build of a project in Jenkins
uri:
url: http://{{ jenkins.host }}/job/{{ jenkins.job }}/build?token={{ jenkins.token }}
user: "{{ jenkins.user }}"
password: "{{ jenkins.password }}"
method: GET
force_basic_auth: yes
status_code: 201
- name: POST from contents of local file
uri:
url: https://httpbin.org/post
method: POST
src: file.json
- name: POST from contents of remote file
uri:
url: https://httpbin.org/post
method: POST
src: /path/to/my/file.json
remote_src: yes
- name: Pause play until a URL is reachable from this host
uri:
url: "http://192.0.2.1/some/test"
follow_redirects: none
method: GET
register: _result
until: _result.status == 200
retries: 720 # 720 * 5 seconds = 1hour (60*60/5)
delay: 5 # Every 5 seconds
# There are issues in a supporting Python library that are discussed in
# https://github.com/ansible/ansible/issues/52705 where a proxy is defined
# but you want to bypass proxy use on CIDR masks by using no_proxy
- name: Work around a python issue that doesn't support no_proxy envvar
uri:
follow_redirects: none
validate_certs: false
timeout: 5
url: "http://{{ ip_address }}:{{ port | default(80) }}"
register: uri_data
failed_when: false
changed_when: false
vars:
ip_address: 192.0.2.1
environment: |
{
{% for no_proxy in (lookup('env', 'no_proxy') | regex_replace('\s*,\s*', ' ') ).split() %}
{% if no_proxy | regex_search('\/') and
no_proxy | ipaddr('net') != '' and
no_proxy | ipaddr('net') != false and
ip_address | ipaddr(no_proxy) is not none and
ip_address | ipaddr(no_proxy) != false %}
'no_proxy': '{{ ip_address }}'
{% elif no_proxy | regex_search(':') != '' and
no_proxy | regex_search(':') != false and
no_proxy == ip_address + ':' + (port | default(80)) %}
'no_proxy': '{{ ip_address }}:{{ port | default(80) }}'
{% elif no_proxy | ipaddr('host') != '' and
no_proxy | ipaddr('host') != false and
no_proxy == ip_address %}
'no_proxy': '{{ ip_address }}'
{% elif no_proxy | regex_search('^(\*|)\.') != '' and
no_proxy | regex_search('^(\*|)\.') != false and
no_proxy | regex_replace('\*', '') in ip_address %}
'no_proxy': '{{ ip_address }}'
{% endif %}
{% endfor %}
}
'''
RETURN = r'''
# The return information includes all the HTTP headers in lower-case.
content:
description: The response body content.
returned: status not in status_code or return_content is true
type: str
sample: "{}"
cookies:
description: The cookie values placed in cookie jar.
returned: on success
type: dict
sample: {"SESSIONID": "[SESSIONID]"}
version_added: "2.4"
cookies_string:
description: The value for future request Cookie headers.
returned: on success
type: str
sample: "SESSIONID=[SESSIONID]"
version_added: "2.6"
elapsed:
description: The number of seconds that elapsed while performing the download.
returned: on success
type: int
sample: 23
msg:
description: The HTTP message from the request.
returned: always
type: str
sample: OK (unknown bytes)
redirected:
description: Whether the request was redirected.
returned: on success
type: bool
sample: false
status:
description: The HTTP status code from the request.
returned: always
type: int
sample: 200
url:
description: The actual URL used for the request.
returned: always
type: str
sample: https://www.ansible.com/
'''
import cgi
import datetime
import json
import os
import re
import shutil
import sys
import tempfile
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six import PY2, iteritems, string_types
from ansible.module_utils.six.moves.urllib.parse import urlencode, urlsplit
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.common._collections_compat import Mapping, Sequence
from ansible.module_utils.urls import fetch_url, url_argument_spec
JSON_CANDIDATES = ('text', 'json', 'javascript')
def format_message(err, resp):
msg = resp.pop('msg')
return err + (' %s' % msg if msg else '')
def write_file(module, url, dest, content, resp):
# write the fetched content to a temporary file
fd, tmpsrc = tempfile.mkstemp(dir=module.tmpdir)
f = open(tmpsrc, 'wb')
try:
f.write(content)
except Exception as e:
os.remove(tmpsrc)
msg = format_message("Failed to create temporary content file: %s" % to_native(e), resp)
module.fail_json(msg=msg, **resp)
f.close()
checksum_src = None
checksum_dest = None
# raise an error if there is no tmpsrc file
if not os.path.exists(tmpsrc):
os.remove(tmpsrc)
msg = format_message("Source '%s' does not exist" % tmpsrc, resp)
module.fail_json(msg=msg, **resp)
if not os.access(tmpsrc, os.R_OK):
os.remove(tmpsrc)
msg = format_message("Source '%s' not readable" % tmpsrc, resp)
module.fail_json(msg=msg, **resp)
checksum_src = module.sha1(tmpsrc)
# check whether the dest file already exists
if os.path.exists(dest):
# raise an error if copy has no permission on dest
if not os.access(dest, os.W_OK):
os.remove(tmpsrc)
msg = format_message("Destination '%s' not writable" % dest, resp)
module.fail_json(msg=msg, **resp)
if not os.access(dest, os.R_OK):
os.remove(tmpsrc)
msg = format_message("Destination '%s' not readable" % dest, resp)
module.fail_json(msg=msg, **resp)
checksum_dest = module.sha1(dest)
else:
if not os.access(os.path.dirname(dest), os.W_OK):
os.remove(tmpsrc)
msg = format_message("Destination dir '%s' not writable" % os.path.dirname(dest), resp)
module.fail_json(msg=msg, **resp)
if checksum_src != checksum_dest:
try:
shutil.copyfile(tmpsrc, dest)
except Exception as e:
os.remove(tmpsrc)
msg = format_message("failed to copy %s to %s: %s" % (tmpsrc, dest, to_native(e)), resp)
module.fail_json(msg=msg, **resp)
os.remove(tmpsrc)
def url_filename(url):
fn = os.path.basename(urlsplit(url)[2])
if fn == '':
return 'index.html'
return fn
def absolute_location(url, location):
"""Attempts to create an absolute URL based on initial URL, and
next URL, specifically in the case of a ``Location`` header.
"""
if '://' in location:
return location
elif location.startswith('/'):
parts = urlsplit(url)
base = url.replace(parts[2], '')
return '%s%s' % (base, location)
elif not location.startswith('/'):
base = os.path.dirname(url)
return '%s/%s' % (base, location)
else:
return location
def kv_list(data):
''' Convert data into a list of key-value tuples '''
if data is None:
return None
if isinstance(data, Sequence):
return list(data)
if isinstance(data, Mapping):
return list(data.items())
raise TypeError('cannot form-urlencode body, expect list or dict')
def form_urlencoded(body):
''' Convert data into a form-urlencoded string '''
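# Illustrative example:
#   form_urlencoded({'name': 'ansible', 'tags': ['a', 'b']}) -> 'name=ansible&tags=a&tags=b'
# (list values are expanded into repeated keys by urlencode(..., doseq=True))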
if isinstance(body, string_types):
return body
if isinstance(body, (Mapping, Sequence)):
result = []
# Turn a list of lists into a list of tuples that urlencode accepts
for key, values in kv_list(body):
if isinstance(values, string_types) or not isinstance(values, (Mapping, Sequence)):
values = [values]
for value in values:
if value is not None:
result.append((to_text(key), to_text(value)))
return urlencode(result, doseq=True)
return body
def uri(module, url, dest, body, body_format, method, headers, socket_timeout):
# if dest is set and is a directory, check whether we get redirected and
# set the filename from that url
redirected = False
redir_info = {}
r = {}
src = module.params['src']
if src:
try:
headers.update({
'Content-Length': os.stat(src).st_size
})
data = open(src, 'rb')
except OSError:
module.fail_json(msg='Unable to open source file %s' % src, elapsed=0)
else:
data = body
kwargs = {}
if dest is not None:
# Stash follow_redirects, in this block we don't want to follow
# we'll reset back to the supplied value soon
follow_redirects = module.params['follow_redirects']
module.params['follow_redirects'] = False
if os.path.isdir(dest):
# first check if we are redirected to a file download
_, redir_info = fetch_url(module, url, data=body,
headers=headers,
method=method,
timeout=socket_timeout, unix_socket=module.params['unix_socket'])
# if we are redirected, update the url with the location header,
# and update dest with the new url filename
if redir_info['status'] in (301, 302, 303, 307):
url = redir_info['location']
redirected = True
dest = os.path.join(dest, url_filename(url))
# if the destination file already exists, only download if the file is newer
if os.path.exists(dest):
kwargs['last_mod_time'] = datetime.datetime.utcfromtimestamp(os.path.getmtime(dest))
# Reset follow_redirects back to the stashed value
module.params['follow_redirects'] = follow_redirects
resp, info = fetch_url(module, url, data=data, headers=headers,
method=method, timeout=socket_timeout, unix_socket=module.params['unix_socket'],
**kwargs)
try:
content = resp.read()
except AttributeError:
# there was no content to read; the body of the error response
# may have been stored in the info as 'body'
content = info.pop('body', '')
if src:
# Try to close the open file handle
try:
data.close()
except Exception:
pass
r['redirected'] = redirected or info['url'] != url
r.update(redir_info)
r.update(info)
return r, content, dest
def main():
argument_spec = url_argument_spec()
argument_spec.update(
dest=dict(type='path'),
url_username=dict(type='str', aliases=['user']),
url_password=dict(type='str', aliases=['password'], no_log=True),
body=dict(type='raw'),
body_format=dict(type='str', default='raw', choices=['form-urlencoded', 'json', 'raw']),
src=dict(type='path'),
method=dict(type='str', default='GET'),
return_content=dict(type='bool', default=False),
follow_redirects=dict(type='str', default='safe', choices=['all', 'no', 'none', 'safe', 'urllib2', 'yes']),
creates=dict(type='path'),
removes=dict(type='path'),
status_code=dict(type='list', default=[200]),
timeout=dict(type='int', default=30),
headers=dict(type='dict', default={}),
unix_socket=dict(type='path'),
)
module = AnsibleModule(
argument_spec=argument_spec,
add_file_common_args=True,
mutually_exclusive=[['body', 'src']],
)
if module.params.get('thirsty'):
module.deprecate('The alias "thirsty" has been deprecated and will be removed, use "force" instead', version='2.13')
url = module.params['url']
body = module.params['body']
body_format = module.params['body_format'].lower()
method = module.params['method'].upper()
dest = module.params['dest']
return_content = module.params['return_content']
creates = module.params['creates']
removes = module.params['removes']
status_code = [int(x) for x in list(module.params['status_code'])]
socket_timeout = module.params['timeout']
dict_headers = module.params['headers']
if not re.match('^[A-Z]+$', method):
module.fail_json(msg="Parameter 'method' needs to be a single word in uppercase, like GET or POST.")
if body_format == 'json':
# Encode the body unless it's a string, in which case assume it is pre-formatted JSON
if not isinstance(body, string_types):
body = json.dumps(body)
if 'content-type' not in [header.lower() for header in dict_headers]:
dict_headers['Content-Type'] = 'application/json'
elif body_format == 'form-urlencoded':
if not isinstance(body, string_types):
try:
body = form_urlencoded(body)
except ValueError as e:
module.fail_json(msg='failed to parse body as form_urlencoded: %s' % to_native(e), elapsed=0)
if 'content-type' not in [header.lower() for header in dict_headers]:
dict_headers['Content-Type'] = 'application/x-www-form-urlencoded'
if creates is not None:
# do not send the request if 'creates' points to a file
# that already exists. This allows idempotence
# of uri executions.
if os.path.exists(creates):
module.exit_json(stdout="skipped, since '%s' exists" % creates, changed=False)
if removes is not None:
# do not send the request if 'removes' points to a file
# that does not exist. This allows idempotence
# of uri executions.
if not os.path.exists(removes):
module.exit_json(stdout="skipped, since '%s' does not exist" % removes, changed=False)
# Make the request
start = datetime.datetime.utcnow()
resp, content, dest = uri(module, url, dest, body, body_format, method,
dict_headers, socket_timeout)
resp['elapsed'] = (datetime.datetime.utcnow() - start).seconds
resp['status'] = int(resp['status'])
resp['changed'] = False
# Write the file out if requested
if dest is not None:
if resp['status'] in status_code and resp['status'] != 304:
write_file(module, url, dest, content, resp)
# allow file attribute changes
resp['changed'] = True
module.params['path'] = dest
file_args = module.load_file_common_arguments(module.params)
file_args['path'] = dest
resp['changed'] = module.set_fs_attributes_if_different(file_args, resp['changed'])
resp['path'] = dest
# Transmogrify the headers, replacing '-' with '_', since variables don't
# work with dashes.
# In python3, the headers are title cased. Lowercase them to be
# compatible with the python2 behaviour.
uresp = {}
for key, value in iteritems(resp):
ukey = key.replace("-", "_").lower()
uresp[ukey] = value
if 'location' in uresp:
uresp['location'] = absolute_location(url, uresp['location'])
# Default content_encoding to try
content_encoding = 'utf-8'
if 'content_type' in uresp:
# Handle multiple Content-Type headers
charsets = []
content_types = []
for value in uresp['content_type'].split(','):
ct, params = cgi.parse_header(value)
if ct not in content_types:
content_types.append(ct)
if 'charset' in params:
if params['charset'] not in charsets:
charsets.append(params['charset'])
if content_types:
content_type = content_types[0]
if len(content_types) > 1:
module.warn(
'Received multiple conflicting Content-Type values (%s), using %s' % (', '.join(content_types), content_type)
)
if charsets:
content_encoding = charsets[0]
if len(charsets) > 1:
module.warn(
'Received multiple conflicting charset values (%s), using %s' % (', '.join(charsets), content_encoding)
)
u_content = to_text(content, encoding=content_encoding)
if any(candidate in content_type for candidate in JSON_CANDIDATES):
try:
js = json.loads(u_content)
uresp['json'] = js
except Exception:
if PY2:
sys.exc_clear() # Avoid false positive traceback in fail_json() on Python 2
else:
u_content = to_text(content, encoding=content_encoding)
if resp['status'] not in status_code:
uresp['msg'] = 'Status code was %s and not %s: %s' % (resp['status'], status_code, uresp.get('msg', ''))
module.fail_json(content=u_content, **uresp)
elif return_content:
module.exit_json(content=u_content, **uresp)
else:
module.exit_json(**uresp)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,956 |
Modules using add_file_common_args=True and the files document fragment have multiple undocumented arguments
|
##### SUMMARY
If a module sets `add_file_common_args=True` when calling `AnsibleModule`, all elements from [FILE_COMMON_ARGUMENTS](https://github.com/ansible/ansible/blob/e9d29b1fe4285d90d7a4506b80260a9e24c3bcea/lib/ansible/module_utils/basic.py#L229-L257) are included in the argument spec. The [files document fragment](https://github.com/ansible/ansible/blob/e9d29b1fe4285d90d7a4506b80260a9e24c3bcea/lib/ansible/plugins/doc_fragments/files.py#L15-L80) only documents a subset of them, though. Missing are:
- src
- follow
- force
- content
- backup
- remote_src
- regexp
- delimiter
- directory_mode
Most module authors using `add_file_common_args=True` are probably not aware that their modules have these options as well.
I don't think these extra options should be added by default if `add_file_common_args=True` is specified.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/basic.py
##### ANSIBLE VERSION
```
devel
```
|
https://github.com/ansible/ansible/issues/64956
|
https://github.com/ansible/ansible/pull/66389
|
802cc602429ea2b37eb7d75a8bb1dc2ebcfc05e1
|
f725dce9368dc4d33c2cddd4790c57e1d00496f0
| 2019-11-17T13:42:30Z |
python
| 2020-02-07T23:56:01Z |
lib/ansible/modules/packaging/language/maven_artifact.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2014, Chris Schmidt <chris.schmidt () contrastsecurity.com>
#
# Built using https://github.com/hamnis/useful-scripts/blob/master/python/download-maven-artifact
# as a reference and starting point.
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: maven_artifact
short_description: Downloads an Artifact from a Maven Repository
version_added: "2.0"
description:
- Downloads an artifact from a maven repository given the maven coordinates provided to the module.
- Can retrieve snapshots or release versions of the artifact and will resolve the latest available
version if one is not specified.
author: "Chris Schmidt (@chrisisbeef)"
requirements:
- lxml
- boto if using a S3 repository (s3://...)
options:
group_id:
description:
- The Maven groupId coordinate
required: true
artifact_id:
description:
- The maven artifactId coordinate
required: true
version:
description:
- The maven version coordinate
- Mutually exclusive with I(version_by_spec).
version_by_spec:
description:
- The maven dependency version ranges.
- See supported version ranges on U(https://cwiki.apache.org/confluence/display/MAVENOLD/Dependency+Mediation+and+Conflict+Resolution)
- The range type "(,1.0],[1.2,)" and "(,1.1),(1.1,)" is not supported.
- Mutually exclusive with I(version).
version_added: "2.10"
classifier:
description:
- The maven classifier coordinate
extension:
description:
- The maven type/extension coordinate
default: jar
repository_url:
description:
- The URL of the Maven Repository to download from.
- Use s3://... if the repository is hosted on Amazon S3, added in version 2.2.
- Use file://... if the repository is local, added in version 2.6
default: https://repo1.maven.org/maven2
username:
description:
- The username to authenticate as to the Maven Repository. Use the AWS secret key if the repository is hosted on S3
aliases: [ "aws_secret_key" ]
password:
description:
- The password to authenticate with to the Maven Repository. Use the AWS secret access key if the repository is hosted on S3
aliases: [ "aws_secret_access_key" ]
headers:
description:
- Add custom HTTP headers to a request in hash/dict format.
type: dict
version_added: "2.8"
force_basic_auth:
version_added: "2.10"
description:
- httplib2, the library used by the uri module, only sends authentication information when a webservice
responds to an initial request with a 401 status. Since some basic auth services do not properly
send a 401, logins will fail. This option forces the sending of the Basic authentication header
upon initial request.
default: 'no'
type: bool
dest:
description:
- The path where the artifact should be written to
- If file mode or ownerships are specified and destination path already exists, they affect the downloaded file
required: true
state:
description:
- The desired state of the artifact
default: present
choices: [present,absent]
timeout:
description:
- Specifies a timeout in seconds for the connection attempt
default: 10
version_added: "2.3"
validate_certs:
description:
- If C(no), SSL certificates will not be validated. This should only be set to C(no) when no other option exists.
type: bool
default: 'yes'
version_added: "1.9.3"
keep_name:
description:
- If C(yes), the downloaded artifact's name is preserved, i.e. the version number remains part of it.
- This option only has effect when C(dest) is a directory and C(version) is set to C(latest) or C(version_by_spec)
is defined.
type: bool
default: 'no'
version_added: "2.4"
verify_checksum:
description:
- If C(never), the md5 checksum will never be downloaded and verified.
- If C(download), the md5 checksum will be downloaded and verified only after artifact download. This is the default.
- If C(change), the md5 checksum will be downloaded and verified if the destination already exists,
to verify if they are identical. This was the behaviour before 2.6. Since it downloads the md5 before (maybe)
downloading the artifact, and since some repository software, when acting as a proxy/cache, returns a 404 error
if the artifact has not been cached yet, it may fail unexpectedly.
If you still need it, you should consider using C(always) instead - if you deal with a checksum, it is better to
use it to verify integrity after download.
- C(always) combines C(download) and C(change).
required: false
default: 'download'
choices: ['never', 'download', 'change', 'always']
version_added: "2.6"
extends_documentation_fragment:
- files
'''
EXAMPLES = '''
# Download the latest version of the JUnit framework artifact from Maven Central
- maven_artifact:
group_id: junit
artifact_id: junit
dest: /tmp/junit-latest.jar
# Download JUnit 4.11 from Maven Central
- maven_artifact:
group_id: junit
artifact_id: junit
version: 4.11
dest: /tmp/junit-4.11.jar
# Download an artifact from a private repository requiring authentication
- maven_artifact:
group_id: com.company
artifact_id: library-name
repository_url: 'https://repo.company.com/maven'
username: user
password: pass
dest: /tmp/library-name-latest.jar
# Download a WAR File to the Tomcat webapps directory to be deployed
- maven_artifact:
group_id: com.company
artifact_id: web-app
extension: war
repository_url: 'https://repo.company.com/maven'
dest: /var/lib/tomcat7/webapps/web-app.war
# Keep a downloaded artifact's name, i.e. retain the version
- maven_artifact:
version: latest
artifact_id: spring-core
group_id: org.springframework
dest: /tmp/
keep_name: yes
# Download the latest version of the JUnit framework artifact from Maven local
- maven_artifact:
group_id: junit
artifact_id: junit
dest: /tmp/junit-latest.jar
repository_url: "file://{{ lookup('env','HOME') }}/.m2/repository"
# Download the latest version between 3.8 and 4.0 (exclusive) of the JUnit framework artifact from Maven Central
- maven_artifact:
group_id: junit
artifact_id: junit
version_by_spec: "[3.8,4.0)"
dest: /tmp/
'''
import hashlib
import os
import posixpath
import shutil
import io
import tempfile
import traceback
from ansible.module_utils.ansible_release import __version__ as ansible_version
from re import match
LXML_ETREE_IMP_ERR = None
try:
from lxml import etree
HAS_LXML_ETREE = True
except ImportError:
LXML_ETREE_IMP_ERR = traceback.format_exc()
HAS_LXML_ETREE = False
BOTO_IMP_ERR = None
try:
import boto3
HAS_BOTO = True
except ImportError:
BOTO_IMP_ERR = traceback.format_exc()
HAS_BOTO = False
SEMANTIC_VERSION_IMP_ERR = None
try:
from semantic_version import Version, Spec
HAS_SEMANTIC_VERSION = True
except ImportError:
SEMANTIC_VERSION_IMP_ERR = traceback.format_exc()
HAS_SEMANTIC_VERSION = False
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.six.moves.urllib.parse import urlparse
from ansible.module_utils.urls import fetch_url
from ansible.module_utils._text import to_bytes, to_native, to_text
def split_pre_existing_dir(dirname):
'''
Return the first pre-existing directory and a list of the new directories that will be created.
'''
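# Illustrative example: if '/opt' exists but '/opt/app/conf' does not,
# split_pre_existing_dir('/opt/app/conf') returns ('/opt', ['app', 'conf']).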
head, tail = os.path.split(dirname)
b_head = to_bytes(head, errors='surrogate_or_strict')
if not os.path.exists(b_head):
if head == dirname:
return None, [head]
else:
(pre_existing_dir, new_directory_list) = split_pre_existing_dir(head)
else:
return head, [tail]
new_directory_list.append(tail)
return pre_existing_dir, new_directory_list
def adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed):
'''
Walk the new directories list and make sure that permissions are as we would expect
'''
if new_directory_list:
first_sub_dir = new_directory_list.pop(0)
if not pre_existing_dir:
working_dir = first_sub_dir
else:
working_dir = os.path.join(pre_existing_dir, first_sub_dir)
directory_args['path'] = working_dir
changed = module.set_fs_attributes_if_different(directory_args, changed)
changed = adjust_recursive_directory_permissions(working_dir, new_directory_list, module, directory_args, changed)
return changed
class Artifact(object):
def __init__(self, group_id, artifact_id, version, version_by_spec, classifier='', extension='jar'):
if not group_id:
raise ValueError("group_id must be set")
if not artifact_id:
raise ValueError("artifact_id must be set")
self.group_id = group_id
self.artifact_id = artifact_id
self.version = version
self.version_by_spec = version_by_spec
self.classifier = classifier
if not extension:
self.extension = "jar"
else:
self.extension = extension
def is_snapshot(self):
return self.version and self.version.endswith("SNAPSHOT")
def path(self, with_version=True):
base = posixpath.join(self.group_id.replace(".", "/"), self.artifact_id)
if with_version and self.version:
base = posixpath.join(base, self.version)
return base
def _generate_filename(self):
filename = self.artifact_id + "-" + self.classifier + "." + self.extension
if not self.classifier:
filename = self.artifact_id + "." + self.extension
return filename
def get_filename(self, filename=None):
if not filename:
filename = self._generate_filename()
elif os.path.isdir(filename):
filename = os.path.join(filename, self._generate_filename())
return filename
def __str__(self):
result = "%s:%s:%s" % (self.group_id, self.artifact_id, self.version)
if self.classifier:
result = "%s:%s:%s:%s:%s" % (self.group_id, self.artifact_id, self.extension, self.classifier, self.version)
elif self.extension != "jar":
result = "%s:%s:%s:%s" % (self.group_id, self.artifact_id, self.extension, self.version)
return result
@staticmethod
def parse(input):
parts = input.split(":")
if len(parts) >= 3:
g = parts[0]
a = parts[1]
v = parts[len(parts) - 1]
t = None
c = None
if len(parts) == 4:
t = parts[2]
if len(parts) == 5:
t = parts[2]
c = parts[3]
return Artifact(g, a, v, c, t)
else:
return None
class MavenDownloader:
def __init__(self, module, base, local=False, headers=None):
self.module = module
if base.endswith("/"):
base = base.rstrip("/")
self.base = base
self.local = local
self.headers = headers
self.user_agent = "Ansible {0} maven_artifact".format(ansible_version)
self.latest_version_found = None
self.metadata_file_name = "maven-metadata-local.xml" if local else "maven-metadata.xml"
def find_version_by_spec(self, artifact):
path = "/%s/%s" % (artifact.path(False), self.metadata_file_name)
content = self._getContent(self.base + path, "Failed to retrieve the maven metadata file: " + path)
xml = etree.fromstring(content)
original_versions = xml.xpath("/metadata/versioning/versions/version/text()")
versions = []
for version in original_versions:
try:
versions.append(Version.coerce(version))
except ValueError:
# This means that the version string is not a valid semantic version
pass
parse_versions_syntax = {
# example -> (,1.0]
r"^\(,(?P<upper_bound>[0-9.]*)]$": "<={upper_bound}",
# example -> 1.0
r"^(?P<version>[0-9.]*)$": "~={version}",
# example -> [1.0]
r"^\[(?P<version>[0-9.]*)\]$": "=={version}",
# example -> [1.2, 1.3]
r"^\[(?P<lower_bound>[0-9.]*),\s*(?P<upper_bound>[0-9.]*)\]$": ">={lower_bound},<={upper_bound}",
# example -> [1.2, 1.3)
r"^\[(?P<lower_bound>[0-9.]*),\s*(?P<upper_bound>[0-9.]+)\)$": ">={lower_bound},<{upper_bound}",
# example -> [1.5,)
r"^\[(?P<lower_bound>[0-9.]*),\)$": ">={lower_bound}",
}
for regex, spec_format in parse_versions_syntax.items():
regex_result = match(regex, artifact.version_by_spec)
if regex_result:
spec = Spec(spec_format.format(**regex_result.groupdict()))
selected_version = spec.select(versions)
if not selected_version:
raise ValueError("No version found with this spec version: {0}".format(artifact.version_by_spec))
# Handle repos on Maven where the first build has no patch number (e.g. 3.8 instead of 3.8.0)
if str(selected_version) not in original_versions:
selected_version.patch = None
return str(selected_version)
raise ValueError("The spec version {0} is not supported! ".format(artifact.version_by_spec))
def find_latest_version_available(self, artifact):
if self.latest_version_found:
return self.latest_version_found
path = "/%s/%s" % (artifact.path(False), self.metadata_file_name)
content = self._getContent(self.base + path, "Failed to retrieve the maven metadata file: " + path)
xml = etree.fromstring(content)
v = xml.xpath("/metadata/versioning/versions/version[last()]/text()")
if v:
self.latest_version_found = v[0]
return v[0]
def find_uri_for_artifact(self, artifact):
if artifact.version_by_spec:
artifact.version = self.find_version_by_spec(artifact)
if artifact.version == "latest":
artifact.version = self.find_latest_version_available(artifact)
if artifact.is_snapshot():
if self.local:
return self._uri_for_artifact(artifact, artifact.version)
path = "/%s/%s" % (artifact.path(), self.metadata_file_name)
content = self._getContent(self.base + path, "Failed to retrieve the maven metadata file: " + path)
xml = etree.fromstring(content)
for snapshotArtifact in xml.xpath("/metadata/versioning/snapshotVersions/snapshotVersion"):
classifier = snapshotArtifact.xpath("classifier/text()")
artifact_classifier = classifier[0] if classifier else ''
extension = snapshotArtifact.xpath("extension/text()")
artifact_extension = extension[0] if extension else ''
if artifact_classifier == artifact.classifier and artifact_extension == artifact.extension:
return self._uri_for_artifact(artifact, snapshotArtifact.xpath("value/text()")[0])
timestamp_xmlpath = xml.xpath("/metadata/versioning/snapshot/timestamp/text()")
if timestamp_xmlpath:
timestamp = timestamp_xmlpath[0]
build_number = xml.xpath("/metadata/versioning/snapshot/buildNumber/text()")[0]
return self._uri_for_artifact(artifact, artifact.version.replace("SNAPSHOT", timestamp + "-" + build_number))
return self._uri_for_artifact(artifact, artifact.version)
def _uri_for_artifact(self, artifact, version=None):
if artifact.is_snapshot() and not version:
raise ValueError("Expected uniqueversion for snapshot artifact " + str(artifact))
elif not artifact.is_snapshot():
version = artifact.version
if artifact.classifier:
return posixpath.join(self.base, artifact.path(), artifact.artifact_id + "-" + version + "-" + artifact.classifier + "." + artifact.extension)
return posixpath.join(self.base, artifact.path(), artifact.artifact_id + "-" + version + "." + artifact.extension)
# for small files, directly get the full content
def _getContent(self, url, failmsg, force=True):
if self.local:
parsed_url = urlparse(url)
if os.path.isfile(parsed_url.path):
with io.open(parsed_url.path, 'rb') as f:
return f.read()
if force:
raise ValueError(failmsg + " because can not find file: " + url)
return None
response = self._request(url, failmsg, force)
if response:
return response.read()
return None
# only for HTTP request
def _request(self, url, failmsg, force=True):
url_to_use = url
parsed_url = urlparse(url)
if parsed_url.scheme == 's3':
parsed_url = urlparse(url)
bucket_name = parsed_url.netloc
key_name = parsed_url.path[1:]
client = boto3.client('s3', aws_access_key_id=self.module.params.get('username', ''), aws_secret_access_key=self.module.params.get('password', ''))
url_to_use = client.generate_presigned_url('get_object', Params={'Bucket': bucket_name, 'Key': key_name}, ExpiresIn=10)
req_timeout = self.module.params.get('timeout')
# Hack to add parameters in the way that fetch_url expects
self.module.params['url_username'] = self.module.params.get('username', '')
self.module.params['url_password'] = self.module.params.get('password', '')
self.module.params['http_agent'] = self.user_agent
response, info = fetch_url(self.module, url_to_use, timeout=req_timeout, headers=self.headers)
if info['status'] == 200:
return response
if force:
raise ValueError(failmsg + " because of " + info['msg'] + "for URL " + url_to_use)
return None
def download(self, tmpdir, artifact, verify_download, filename=None):
if (not artifact.version and not artifact.version_by_spec) or artifact.version == "latest":
artifact = Artifact(artifact.group_id, artifact.artifact_id, self.find_latest_version_available(artifact), None,
artifact.classifier, artifact.extension)
url = self.find_uri_for_artifact(artifact)
tempfd, tempname = tempfile.mkstemp(dir=tmpdir)
try:
# copy to temp file
if self.local:
parsed_url = urlparse(url)
if os.path.isfile(parsed_url.path):
shutil.copy2(parsed_url.path, tempname)
else:
return "Can not find local file: " + parsed_url.path
else:
response = self._request(url, "Failed to download artifact " + str(artifact))
with os.fdopen(tempfd, 'wb') as f:
shutil.copyfileobj(response, f)
if verify_download:
invalid_md5 = self.is_invalid_md5(tempname, url)
if invalid_md5:
# if verify_change was set, the previous file would be deleted
os.remove(tempname)
return invalid_md5
except Exception as e:
os.remove(tempname)
raise e
# all good, now copy temp file to target
shutil.move(tempname, artifact.get_filename(filename))
return None
def is_invalid_md5(self, file, remote_url):
if os.path.exists(file):
local_md5 = self._local_md5(file)
if self.local:
parsed_url = urlparse(remote_url)
remote_md5 = self._local_md5(parsed_url.path)
else:
try:
remote_md5 = to_text(self._getContent(remote_url + '.md5', "Failed to retrieve MD5", False), errors='strict')
except UnicodeError as e:
return "Cannot retrieve a valid md5 from %s: %s" % (remote_url, to_native(e))
if not remote_md5:
return "Cannot find md5 from " + remote_url
try:
# Check if remote md5 only contains md5 or md5 + filename
_remote_md5 = remote_md5.split(None)[0]
remote_md5 = _remote_md5
except IndexError:
# remote_md5 was empty, so keep the original md5 string.
# This should not happen since we check for remote_md5 above.
pass
if local_md5 == remote_md5:
return None
else:
return "Checksum does not match: we computed " + local_md5 + "but the repository states " + remote_md5
return "Path does not exist: " + file
def _local_md5(self, file):
md5 = hashlib.md5()
with io.open(file, 'rb') as f:
for chunk in iter(lambda: f.read(8192), b''):
md5.update(chunk)
return md5.hexdigest()
def main():
module = AnsibleModule(
argument_spec=dict(
group_id=dict(required=True),
artifact_id=dict(required=True),
version=dict(default=None),
version_by_spec=dict(default=None),
classifier=dict(default=''),
extension=dict(default='jar'),
repository_url=dict(default='https://repo1.maven.org/maven2'),
username=dict(default=None, aliases=['aws_secret_key']),
password=dict(default=None, no_log=True, aliases=['aws_secret_access_key']),
headers=dict(type='dict'),
force_basic_auth=dict(default=False, type='bool'),
state=dict(default="present", choices=["present", "absent"]), # TODO - Implement a "latest" state
timeout=dict(default=10, type='int'),
dest=dict(type="path", required=True),
validate_certs=dict(required=False, default=True, type='bool'),
keep_name=dict(required=False, default=False, type='bool'),
verify_checksum=dict(required=False, default='download', choices=['never', 'download', 'change', 'always'])
),
add_file_common_args=True,
mutually_exclusive=([('version', 'version_by_spec')])
)
if not HAS_LXML_ETREE:
module.fail_json(msg=missing_required_lib('lxml'), exception=LXML_ETREE_IMP_ERR)
if module.params['version_by_spec'] and not HAS_SEMANTIC_VERSION:
module.fail_json(msg=missing_required_lib('semantic_version'), exception=SEMANTIC_VERSION_IMP_ERR)
repository_url = module.params["repository_url"]
if not repository_url:
repository_url = "https://repo1.maven.org/maven2"
try:
parsed_url = urlparse(repository_url)
except AttributeError as e:
module.fail_json(msg='url parsing went wrong %s' % e)
local = parsed_url.scheme == "file"
if parsed_url.scheme == 's3' and not HAS_BOTO:
module.fail_json(msg=missing_required_lib('boto3', reason='when using s3:// repository URLs'),
exception=BOTO_IMP_ERR)
group_id = module.params["group_id"]
artifact_id = module.params["artifact_id"]
version = module.params["version"]
version_by_spec = module.params["version_by_spec"]
classifier = module.params["classifier"]
extension = module.params["extension"]
headers = module.params['headers']
state = module.params["state"]
dest = module.params["dest"]
b_dest = to_bytes(dest, errors='surrogate_or_strict')
keep_name = module.params["keep_name"]
verify_checksum = module.params["verify_checksum"]
verify_download = verify_checksum in ['download', 'always']
verify_change = verify_checksum in ['change', 'always']
downloader = MavenDownloader(module, repository_url, local, headers)
if not version_by_spec and not version:
version = "latest"
try:
artifact = Artifact(group_id, artifact_id, version, version_by_spec, classifier, extension)
except ValueError as e:
module.fail_json(msg=e.args[0])
changed = False
prev_state = "absent"
if dest.endswith(os.sep):
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if not os.path.exists(b_dest):
(pre_existing_dir, new_directory_list) = split_pre_existing_dir(dest)
os.makedirs(b_dest)
directory_args = module.load_file_common_arguments(module.params)
directory_mode = module.params["directory_mode"]
if directory_mode is not None:
directory_args['mode'] = directory_mode
else:
directory_args['mode'] = None
changed = adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed)
if os.path.isdir(b_dest):
version_part = version
if version == 'latest':
version_part = downloader.find_latest_version_available(artifact)
elif version_by_spec:
version_part = downloader.find_version_by_spec(artifact)
filename = "{artifact_id}{version_part}{classifier}.{extension}".format(
artifact_id=artifact_id,
version_part="-{0}".format(version_part) if keep_name else "",
classifier="-{0}".format(classifier) if classifier else "",
extension=extension
)
dest = posixpath.join(dest, filename)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.lexists(b_dest) and ((not verify_change) or not downloader.is_invalid_md5(dest, downloader.find_uri_for_artifact(artifact))):
prev_state = "present"
if prev_state == "absent":
try:
download_error = downloader.download(module.tmpdir, artifact, verify_download, b_dest)
if download_error is None:
changed = True
else:
module.fail_json(msg="Cannot retrieve the artifact to destination: " + download_error)
except ValueError as e:
module.fail_json(msg=e.args[0])
module.params['dest'] = dest
file_args = module.load_file_common_arguments(module.params)
changed = module.set_fs_attributes_if_different(file_args, changed)
if changed:
module.exit_json(state=state, dest=dest, group_id=group_id, artifact_id=artifact_id, version=version, classifier=classifier,
extension=extension, repository_url=repository_url, changed=changed)
else:
module.exit_json(state=state, dest=dest, changed=changed)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,956 |
Modules using add_file_common_args=True and the files document fragment have multiple undocumented arguments
|
##### SUMMARY
If a module sets `add_file_common_args=True` when calling `AnsibleModule`, all elements from [FILE_COMMON_ARGUMENTS](https://github.com/ansible/ansible/blob/e9d29b1fe4285d90d7a4506b80260a9e24c3bcea/lib/ansible/module_utils/basic.py#L229-L257) are included in the argument spec. The [files document fragment](https://github.com/ansible/ansible/blob/e9d29b1fe4285d90d7a4506b80260a9e24c3bcea/lib/ansible/plugins/doc_fragments/files.py#L15-L80) only documents a subset of them, though. Missing are:
- src
- follow
- force
- content
- backup
- remote_src
- regexp
- delimiter
- directory_mode
Most module authors using `add_file_common_args=True` are probably not aware that their modules have these options as well.
I don't think these extra options should be added by default if `add_file_common_args=True` is specified.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/basic.py
##### ANSIBLE VERSION
```
devel
```
|
https://github.com/ansible/ansible/issues/64956
|
https://github.com/ansible/ansible/pull/66389
|
802cc602429ea2b37eb7d75a8bb1dc2ebcfc05e1
|
f725dce9368dc4d33c2cddd4790c57e1d00496f0
| 2019-11-17T13:42:30Z |
python
| 2020-02-07T23:56:01Z |
lib/ansible/plugins/action/copy.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Toshio Kuratomi <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import os
import os.path
import stat
import tempfile
import traceback
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleFileNotFound
from ansible.module_utils.basic import FILE_COMMON_ARGUMENTS
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.plugins.action import ActionBase
from ansible.utils.hashing import checksum
# Supplement the FILE_COMMON_ARGUMENTS with arguments that are specific to file
# FILE_COMMON_ARGUMENTS contains things that are not arguments of file so remove those as well
REAL_FILE_ARGS = frozenset(FILE_COMMON_ARGUMENTS.keys()).union(
('state', 'path', '_original_basename', 'recurse', 'force',
'_diff_peek', 'src')).difference(
('content', 'decrypt', 'backup', 'remote_src', 'regexp', 'delimiter',
'directory_mode', 'unsafe_writes'))
def _create_remote_file_args(module_args):
"""remove keys that are not relevant to file"""
return dict((k, v) for k, v in module_args.items() if k in REAL_FILE_ARGS)
def _create_remote_copy_args(module_args):
"""remove action plugin only keys"""
return dict((k, v) for k, v in module_args.items() if k not in ('content', 'decrypt'))
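# Illustrative note with hypothetical task args (not taken from a real play): given
# {'src': 'a.conf', 'dest': '/etc/a.conf', 'mode': '0644', 'content': None, 'decrypt': True},
# _create_remote_copy_args() drops only the action-plugin keys 'content' and 'decrypt',
# while _create_remote_file_args() keeps only the keys present in REAL_FILE_ARGS (here
# 'src' and 'mode'), stripping copy-only options such as 'backup', 'remote_src' or
# 'directory_mode' before the arguments are handed to the file module.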
def _walk_dirs(topdir, base_path=None, local_follow=False, trailing_slash_detector=None):
"""
Walk a filesystem tree returning enough information to copy the files
:arg topdir: The directory that the filesystem tree is rooted at
:kwarg base_path: The initial directory structure to strip off of the
files for the destination directory. If this is None (the default),
the base_path is set to ``top_dir``.
:kwarg local_follow: Whether to follow symlinks on the source. When set
to False, no symlinks are dereferenced. When set to True (the
default), the code will dereference most symlinks. However, symlinks
can still be present if needed to break a circular link.
:kwarg trailing_slash_detector: Function to determine if a path has
a trailing directory separator. Only needed when dealing with paths on
a remote machine (in which case, pass in a function that is aware of the
directory separator conventions on the remote machine).
    :returns: dictionary of lists of tuples. All of the path elements in the structure are text strings.
This separates all the files, directories, and symlinks along with
important information about each::
{ 'files': [('/absolute/path/to/copy/from', 'relative/path/to/copy/to'), ...],
'directories': [('/absolute/path/to/copy/from', 'relative/path/to/copy/to'), ...],
'symlinks': [('/symlink/target/path', 'relative/path/to/copy/to'), ...],
}
The ``symlinks`` field is only populated if ``local_follow`` is set to False
*or* a circular symlink cannot be dereferenced.
"""
# Convert the path segments into byte strings
r_files = {'files': [], 'directories': [], 'symlinks': []}
def _recurse(topdir, rel_offset, parent_dirs, rel_base=u''):
"""
        This is a closure (a function utilizing variables from its parent
function's scope) so that we only need one copy of all the containers.
Note that this function uses side effects (See the Variables used from
outer scope).
:arg topdir: The directory we are walking for files
:arg rel_offset: Integer defining how many characters to strip off of
the beginning of a path
:arg parent_dirs: Directories that we're copying that this directory is in.
:kwarg rel_base: String to prepend to the path after ``rel_offset`` is
applied to form the relative path.
Variables used from the outer scope
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:r_files: Dictionary of files in the hierarchy. See the return value
for :func:`walk` for the structure of this dictionary.
:local_follow: Read-only inside of :func:`_recurse`. Whether to follow symlinks
"""
for base_path, sub_folders, files in os.walk(topdir):
for filename in files:
filepath = os.path.join(base_path, filename)
dest_filepath = os.path.join(rel_base, filepath[rel_offset:])
if os.path.islink(filepath):
                    # Dereference the symlink
real_file = os.path.realpath(filepath)
if local_follow and os.path.isfile(real_file):
# Add the file pointed to by the symlink
r_files['files'].append((real_file, dest_filepath))
else:
# Mark this file as a symlink to copy
r_files['symlinks'].append((os.readlink(filepath), dest_filepath))
else:
# Just a normal file
r_files['files'].append((filepath, dest_filepath))
for dirname in sub_folders:
dirpath = os.path.join(base_path, dirname)
dest_dirpath = os.path.join(rel_base, dirpath[rel_offset:])
real_dir = os.path.realpath(dirpath)
dir_stats = os.stat(real_dir)
if os.path.islink(dirpath):
if local_follow:
if (dir_stats.st_dev, dir_stats.st_ino) in parent_dirs:
# Just insert the symlink if the target directory
# exists inside of the copy already
r_files['symlinks'].append((os.readlink(dirpath), dest_dirpath))
else:
# Walk the dirpath to find all parent directories.
new_parents = set()
parent_dir_list = os.path.dirname(dirpath).split(os.path.sep)
for parent in range(len(parent_dir_list), 0, -1):
parent_stat = os.stat(u'/'.join(parent_dir_list[:parent]))
if (parent_stat.st_dev, parent_stat.st_ino) in parent_dirs:
# Reached the point at which the directory
# tree is already known. Don't add any
# more or we might go to an ancestor that
# isn't being copied.
break
new_parents.add((parent_stat.st_dev, parent_stat.st_ino))
if (dir_stats.st_dev, dir_stats.st_ino) in new_parents:
                                # This was a circular symlink. So add it as
# a symlink
r_files['symlinks'].append((os.readlink(dirpath), dest_dirpath))
else:
# Walk the directory pointed to by the symlink
r_files['directories'].append((real_dir, dest_dirpath))
offset = len(real_dir) + 1
_recurse(real_dir, offset, parent_dirs.union(new_parents), rel_base=dest_dirpath)
else:
# Add the symlink to the destination
r_files['symlinks'].append((os.readlink(dirpath), dest_dirpath))
else:
# Just a normal directory
r_files['directories'].append((dirpath, dest_dirpath))
# Check if the source ends with a "/" so that we know which directory
# level to work at (similar to rsync)
source_trailing_slash = False
if trailing_slash_detector:
source_trailing_slash = trailing_slash_detector(topdir)
else:
source_trailing_slash = topdir.endswith(os.path.sep)
# Calculate the offset needed to strip the base_path to make relative
# paths
if base_path is None:
base_path = topdir
if not source_trailing_slash:
base_path = os.path.dirname(base_path)
if topdir.startswith(base_path):
offset = len(base_path)
# Make sure we're making the new paths relative
if trailing_slash_detector and not trailing_slash_detector(base_path):
offset += 1
elif not base_path.endswith(os.path.sep):
offset += 1
if os.path.islink(topdir) and not local_follow:
r_files['symlinks'] = (os.readlink(topdir), os.path.basename(topdir))
return r_files
dir_stats = os.stat(topdir)
parents = frozenset(((dir_stats.st_dev, dir_stats.st_ino),))
# Actually walk the directory hierarchy
_recurse(topdir, offset, parents)
return r_files
class ActionModule(ActionBase):
TRANSFERS_FILES = True
def _ensure_invocation(self, result):
# NOTE: adding invocation arguments here needs to be kept in sync with
# any no_log specified in the argument_spec in the module.
# This is not automatic.
if 'invocation' not in result:
if self._play_context.no_log:
result['invocation'] = "CENSORED: no_log is set"
else:
# NOTE: Should be removed in the future. For now keep this broken
# behaviour, have a look in the PR 51582
result['invocation'] = self._task.args.copy()
result['invocation']['module_args'] = self._task.args.copy()
if isinstance(result['invocation'], dict) and 'content' in result['invocation']:
result['invocation']['content'] = 'CENSORED: content is a no_log parameter'
return result
def _copy_file(self, source_full, source_rel, content, content_tempfile,
dest, task_vars, follow):
decrypt = boolean(self._task.args.get('decrypt', True), strict=False)
force = boolean(self._task.args.get('force', 'yes'), strict=False)
raw = boolean(self._task.args.get('raw', 'no'), strict=False)
result = {}
result['diff'] = []
# If the local file does not exist, get_real_file() raises AnsibleFileNotFound
try:
source_full = self._loader.get_real_file(source_full, decrypt=decrypt)
except AnsibleFileNotFound as e:
result['failed'] = True
result['msg'] = "could not find src=%s, %s" % (source_full, to_text(e))
return result
# Get the local mode and set if user wanted it preserved
# https://github.com/ansible/ansible-modules-core/issues/1124
lmode = None
if self._task.args.get('mode', None) == 'preserve':
lmode = '0%03o' % stat.S_IMODE(os.stat(source_full).st_mode)
        # This is a small optimization: if the user told us the destination is a
        # dir, do the path manipulation right away; otherwise we still check
        # for dest being a dir via the remote call below.
if self._connection._shell.path_has_trailing_slash(dest):
dest_file = self._connection._shell.join_path(dest, source_rel)
else:
dest_file = dest
# Attempt to get remote file info
dest_status = self._execute_remote_stat(dest_file, all_vars=task_vars, follow=follow, checksum=force)
if dest_status['exists'] and dest_status['isdir']:
# The dest is a directory.
if content is not None:
# If source was defined as content remove the temporary file and fail out.
self._remove_tempfile_if_content_defined(content, content_tempfile)
result['failed'] = True
result['msg'] = "can not use content with a dir as dest"
return result
else:
# Append the relative source location to the destination and get remote stats again
dest_file = self._connection._shell.join_path(dest, source_rel)
dest_status = self._execute_remote_stat(dest_file, all_vars=task_vars, follow=follow, checksum=force)
if dest_status['exists'] and not force:
# remote_file exists so continue to next iteration.
return None
# Generate a hash of the local file.
local_checksum = checksum(source_full)
if local_checksum != dest_status['checksum']:
# The checksums don't match and we will change or error out.
if self._play_context.diff and not raw:
result['diff'].append(self._get_diff_data(dest_file, source_full, task_vars))
if self._play_context.check_mode:
self._remove_tempfile_if_content_defined(content, content_tempfile)
result['changed'] = True
return result
# Define a remote directory that we will copy the file to.
tmp_src = self._connection._shell.join_path(self._connection._shell.tmpdir, 'source')
remote_path = None
if not raw:
remote_path = self._transfer_file(source_full, tmp_src)
else:
self._transfer_file(source_full, dest_file)
# We have copied the file remotely and no longer require our content_tempfile
self._remove_tempfile_if_content_defined(content, content_tempfile)
self._loader.cleanup_tmp_file(source_full)
# FIXME: I don't think this is needed when PIPELINING=0 because the source is created
# world readable. Access to the directory itself is controlled via fixup_perms2() as
# part of executing the module. Check that umask with scp/sftp/piped doesn't cause
# a problem before acting on this idea. (This idea would save a round-trip)
# fix file permissions when the copy is done as a different user
if remote_path:
self._fixup_perms2((self._connection._shell.tmpdir, remote_path))
if raw:
# Continue to next iteration if raw is defined.
return None
# Run the copy module
# src and dest here come after original and override them
# we pass dest only to make sure it includes trailing slash in case of recursive copy
new_module_args = _create_remote_copy_args(self._task.args)
new_module_args.update(
dict(
src=tmp_src,
dest=dest,
_original_basename=source_rel,
follow=follow
)
)
if not self._task.args.get('checksum'):
new_module_args['checksum'] = local_checksum
if lmode:
new_module_args['mode'] = lmode
module_return = self._execute_module(module_name='copy', module_args=new_module_args, task_vars=task_vars)
else:
# no need to transfer the file, already correct hash, but still need to call
# the file module in case we want to change attributes
self._remove_tempfile_if_content_defined(content, content_tempfile)
self._loader.cleanup_tmp_file(source_full)
if raw:
return None
# Fix for https://github.com/ansible/ansible-modules-core/issues/1568.
# If checksums match, and follow = True, find out if 'dest' is a link. If so,
# change it to point to the source of the link.
if follow:
dest_status_nofollow = self._execute_remote_stat(dest_file, all_vars=task_vars, follow=False)
if dest_status_nofollow['islnk'] and 'lnk_source' in dest_status_nofollow.keys():
dest = dest_status_nofollow['lnk_source']
# Build temporary module_args.
new_module_args = _create_remote_file_args(self._task.args)
new_module_args.update(
dict(
dest=dest,
_original_basename=source_rel,
recurse=False,
state='file',
)
)
# src is sent to the file module in _original_basename, not in src
try:
del new_module_args['src']
except KeyError:
pass
if lmode:
new_module_args['mode'] = lmode
# Execute the file module.
module_return = self._execute_module(module_name='file', module_args=new_module_args, task_vars=task_vars)
if not module_return.get('checksum'):
module_return['checksum'] = local_checksum
result.update(module_return)
return result
def _create_content_tempfile(self, content):
''' Create a tempfile containing defined content '''
fd, content_tempfile = tempfile.mkstemp(dir=C.DEFAULT_LOCAL_TMP)
f = os.fdopen(fd, 'wb')
content = to_bytes(content)
try:
f.write(content)
except Exception as err:
os.remove(content_tempfile)
raise Exception(err)
finally:
f.close()
return content_tempfile
def _remove_tempfile_if_content_defined(self, content, content_tempfile):
if content is not None:
os.remove(content_tempfile)
def run(self, tmp=None, task_vars=None):
''' handler for file transfer operations '''
if task_vars is None:
task_vars = dict()
result = super(ActionModule, self).run(tmp, task_vars)
del tmp # tmp no longer has any effect
source = self._task.args.get('src', None)
content = self._task.args.get('content', None)
dest = self._task.args.get('dest', None)
remote_src = boolean(self._task.args.get('remote_src', False), strict=False)
local_follow = boolean(self._task.args.get('local_follow', True), strict=False)
result['failed'] = True
if not source and content is None:
result['msg'] = 'src (or content) is required'
elif not dest:
result['msg'] = 'dest is required'
elif source and content is not None:
result['msg'] = 'src and content are mutually exclusive'
elif content is not None and dest is not None and dest.endswith("/"):
result['msg'] = "can not use content with a dir as dest"
else:
del result['failed']
if result.get('failed'):
return self._ensure_invocation(result)
# Define content_tempfile in case we set it after finding content populated.
content_tempfile = None
# If content is defined make a tmp file and write the content into it.
if content is not None:
try:
# If content comes to us as a dict it should be decoded json.
# We need to encode it back into a string to write it out.
if isinstance(content, dict) or isinstance(content, list):
content_tempfile = self._create_content_tempfile(json.dumps(content))
else:
content_tempfile = self._create_content_tempfile(content)
source = content_tempfile
except Exception as err:
result['failed'] = True
result['msg'] = "could not write content temp file: %s" % to_native(err)
return self._ensure_invocation(result)
# if we have first_available_file in our vars
# look up the files and use the first one we find as src
elif remote_src:
result.update(self._execute_module(module_name='copy', task_vars=task_vars))
return self._ensure_invocation(result)
else:
# find_needle returns a path that may not have a trailing slash on
# a directory so we need to determine that now (we use it just
# like rsync does to figure out whether to include the directory
            # or only the files inside the directory)
trailing_slash = source.endswith(os.path.sep)
try:
# find in expected paths
source = self._find_needle('files', source)
except AnsibleError as e:
result['failed'] = True
result['msg'] = to_text(e)
result['exception'] = traceback.format_exc()
return self._ensure_invocation(result)
if trailing_slash != source.endswith(os.path.sep):
if source[-1] == os.path.sep:
source = source[:-1]
else:
source = source + os.path.sep
# A list of source file tuples (full_path, relative_path) which will try to copy to the destination
source_files = {'files': [], 'directories': [], 'symlinks': []}
# If source is a directory populate our list else source is a file and translate it to a tuple.
if os.path.isdir(to_bytes(source, errors='surrogate_or_strict')):
# Get a list of the files we want to replicate on the remote side
source_files = _walk_dirs(source, local_follow=local_follow,
trailing_slash_detector=self._connection._shell.path_has_trailing_slash)
# If it's recursive copy, destination is always a dir,
# explicitly mark it so (note - copy module relies on this).
if not self._connection._shell.path_has_trailing_slash(dest):
dest = self._connection._shell.join_path(dest, '')
# FIXME: Can we optimize cases where there's only one file, no
# symlinks and any number of directories? In the original code,
# empty directories are not copied....
else:
source_files['files'] = [(source, os.path.basename(source))]
changed = False
module_return = dict(changed=False)
# A register for if we executed a module.
# Used to cut down on command calls when not recursive.
module_executed = False
# expand any user home dir specifier
dest = self._remote_expand_user(dest)
implicit_directories = set()
for source_full, source_rel in source_files['files']:
# copy files over. This happens first as directories that have
# a file do not need to be created later
# We only follow symlinks for files in the non-recursive case
if source_files['directories']:
follow = False
else:
follow = boolean(self._task.args.get('follow', False), strict=False)
module_return = self._copy_file(source_full, source_rel, content, content_tempfile, dest, task_vars, follow)
if module_return is None:
continue
if module_return.get('failed'):
result.update(module_return)
return self._ensure_invocation(result)
paths = os.path.split(source_rel)
dir_path = ''
for dir_component in paths:
                dir_path = os.path.join(dir_path, dir_component)  # accumulate the relative directory path
implicit_directories.add(dir_path)
if 'diff' in result and not result['diff']:
del result['diff']
module_executed = True
changed = changed or module_return.get('changed', False)
for src, dest_path in source_files['directories']:
# Find directories that are leaves as they might not have been
# created yet.
if dest_path in implicit_directories:
continue
# Use file module to create these
new_module_args = _create_remote_file_args(self._task.args)
new_module_args['path'] = os.path.join(dest, dest_path)
new_module_args['state'] = 'directory'
new_module_args['mode'] = self._task.args.get('directory_mode', None)
new_module_args['recurse'] = False
del new_module_args['src']
module_return = self._execute_module(module_name='file', module_args=new_module_args, task_vars=task_vars)
if module_return.get('failed'):
result.update(module_return)
return self._ensure_invocation(result)
module_executed = True
changed = changed or module_return.get('changed', False)
for target_path, dest_path in source_files['symlinks']:
# Copy symlinks over
new_module_args = _create_remote_file_args(self._task.args)
new_module_args['path'] = os.path.join(dest, dest_path)
new_module_args['src'] = target_path
new_module_args['state'] = 'link'
new_module_args['force'] = True
# Only follow remote symlinks in the non-recursive case
if source_files['directories']:
new_module_args['follow'] = False
module_return = self._execute_module(module_name='file', module_args=new_module_args, task_vars=task_vars)
module_executed = True
if module_return.get('failed'):
result.update(module_return)
return self._ensure_invocation(result)
changed = changed or module_return.get('changed', False)
if module_executed and len(source_files['files']) == 1:
result.update(module_return)
# the file module returns the file path as 'path', but
# the copy module uses 'dest', so add it if it's not there
if 'path' in result and 'dest' not in result:
result['dest'] = result['path']
else:
result.update(dict(dest=dest, src=source, changed=changed))
# Delete tmp path
self._remove_tmp_path(self._connection._shell.tmpdir)
return self._ensure_invocation(result)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,956 |
Modules using add_file_common_args=True and the files document fragment have multiple undocumented arguments
|
##### SUMMARY
If a module sets `add_file_common_args=True` when calling `AnsibleModule`, all elements from [FILE_COMMON_ARGUMENTS](https://github.com/ansible/ansible/blob/e9d29b1fe4285d90d7a4506b80260a9e24c3bcea/lib/ansible/module_utils/basic.py#L229-L257) are included in the argument spec. The [files document fragment](https://github.com/ansible/ansible/blob/e9d29b1fe4285d90d7a4506b80260a9e24c3bcea/lib/ansible/plugins/doc_fragments/files.py#L15-L80) only documents a subset of them, though. Missing are:
- src
- follow
- force
- content
- backup
- remote_src
- regexp
- delimiter
- directory_mode
Most module authors using `add_file_common_args=True` are probably not aware that their modules have these options as well.
I don't think these extra options should be added by default if `add_file_common_args=True` is specified.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/module_utils/basic.py
##### ANSIBLE VERSION
```
devel
```
|
https://github.com/ansible/ansible/issues/64956
|
https://github.com/ansible/ansible/pull/66389
|
802cc602429ea2b37eb7d75a8bb1dc2ebcfc05e1
|
f725dce9368dc4d33c2cddd4790c57e1d00496f0
| 2019-11-17T13:42:30Z |
python
| 2020-02-07T23:56:01Z |
test/sanity/ignore.txt
|
contrib/inventory/abiquo.py future-import-boilerplate
contrib/inventory/abiquo.py metaclass-boilerplate
contrib/inventory/apache-libcloud.py future-import-boilerplate
contrib/inventory/apache-libcloud.py metaclass-boilerplate
contrib/inventory/apstra_aos.py future-import-boilerplate
contrib/inventory/apstra_aos.py metaclass-boilerplate
contrib/inventory/azure_rm.py future-import-boilerplate
contrib/inventory/azure_rm.py metaclass-boilerplate
contrib/inventory/brook.py future-import-boilerplate
contrib/inventory/brook.py metaclass-boilerplate
contrib/inventory/cloudforms.py future-import-boilerplate
contrib/inventory/cloudforms.py metaclass-boilerplate
contrib/inventory/cobbler.py future-import-boilerplate
contrib/inventory/cobbler.py metaclass-boilerplate
contrib/inventory/collins.py future-import-boilerplate
contrib/inventory/collins.py metaclass-boilerplate
contrib/inventory/consul_io.py future-import-boilerplate
contrib/inventory/consul_io.py metaclass-boilerplate
contrib/inventory/digital_ocean.py future-import-boilerplate
contrib/inventory/digital_ocean.py metaclass-boilerplate
contrib/inventory/ec2.py future-import-boilerplate
contrib/inventory/ec2.py metaclass-boilerplate
contrib/inventory/fleet.py future-import-boilerplate
contrib/inventory/fleet.py metaclass-boilerplate
contrib/inventory/foreman.py future-import-boilerplate
contrib/inventory/foreman.py metaclass-boilerplate
contrib/inventory/freeipa.py future-import-boilerplate
contrib/inventory/freeipa.py metaclass-boilerplate
contrib/inventory/gce.py future-import-boilerplate
contrib/inventory/gce.py metaclass-boilerplate
contrib/inventory/gce.py pylint:blacklisted-name
contrib/inventory/infoblox.py future-import-boilerplate
contrib/inventory/infoblox.py metaclass-boilerplate
contrib/inventory/jail.py future-import-boilerplate
contrib/inventory/jail.py metaclass-boilerplate
contrib/inventory/landscape.py future-import-boilerplate
contrib/inventory/landscape.py metaclass-boilerplate
contrib/inventory/libvirt_lxc.py future-import-boilerplate
contrib/inventory/libvirt_lxc.py metaclass-boilerplate
contrib/inventory/linode.py future-import-boilerplate
contrib/inventory/linode.py metaclass-boilerplate
contrib/inventory/lxc_inventory.py future-import-boilerplate
contrib/inventory/lxc_inventory.py metaclass-boilerplate
contrib/inventory/lxd.py future-import-boilerplate
contrib/inventory/lxd.py metaclass-boilerplate
contrib/inventory/mdt_dynamic_inventory.py future-import-boilerplate
contrib/inventory/mdt_dynamic_inventory.py metaclass-boilerplate
contrib/inventory/nagios_livestatus.py future-import-boilerplate
contrib/inventory/nagios_livestatus.py metaclass-boilerplate
contrib/inventory/nagios_ndo.py future-import-boilerplate
contrib/inventory/nagios_ndo.py metaclass-boilerplate
contrib/inventory/nsot.py future-import-boilerplate
contrib/inventory/nsot.py metaclass-boilerplate
contrib/inventory/openshift.py future-import-boilerplate
contrib/inventory/openshift.py metaclass-boilerplate
contrib/inventory/openstack_inventory.py future-import-boilerplate
contrib/inventory/openstack_inventory.py metaclass-boilerplate
contrib/inventory/openvz.py future-import-boilerplate
contrib/inventory/openvz.py metaclass-boilerplate
contrib/inventory/ovirt.py future-import-boilerplate
contrib/inventory/ovirt.py metaclass-boilerplate
contrib/inventory/ovirt4.py future-import-boilerplate
contrib/inventory/ovirt4.py metaclass-boilerplate
contrib/inventory/packet_net.py future-import-boilerplate
contrib/inventory/packet_net.py metaclass-boilerplate
contrib/inventory/proxmox.py future-import-boilerplate
contrib/inventory/proxmox.py metaclass-boilerplate
contrib/inventory/rackhd.py future-import-boilerplate
contrib/inventory/rackhd.py metaclass-boilerplate
contrib/inventory/rax.py future-import-boilerplate
contrib/inventory/rax.py metaclass-boilerplate
contrib/inventory/rudder.py future-import-boilerplate
contrib/inventory/rudder.py metaclass-boilerplate
contrib/inventory/scaleway.py future-import-boilerplate
contrib/inventory/scaleway.py metaclass-boilerplate
contrib/inventory/serf.py future-import-boilerplate
contrib/inventory/serf.py metaclass-boilerplate
contrib/inventory/softlayer.py future-import-boilerplate
contrib/inventory/softlayer.py metaclass-boilerplate
contrib/inventory/spacewalk.py future-import-boilerplate
contrib/inventory/spacewalk.py metaclass-boilerplate
contrib/inventory/ssh_config.py future-import-boilerplate
contrib/inventory/ssh_config.py metaclass-boilerplate
contrib/inventory/stacki.py future-import-boilerplate
contrib/inventory/stacki.py metaclass-boilerplate
contrib/inventory/vagrant.py future-import-boilerplate
contrib/inventory/vagrant.py metaclass-boilerplate
contrib/inventory/vbox.py future-import-boilerplate
contrib/inventory/vbox.py metaclass-boilerplate
contrib/inventory/vmware.py future-import-boilerplate
contrib/inventory/vmware.py metaclass-boilerplate
contrib/inventory/vmware_inventory.py future-import-boilerplate
contrib/inventory/vmware_inventory.py metaclass-boilerplate
contrib/inventory/zabbix.py future-import-boilerplate
contrib/inventory/zabbix.py metaclass-boilerplate
contrib/inventory/zone.py future-import-boilerplate
contrib/inventory/zone.py metaclass-boilerplate
contrib/vault/azure_vault.py future-import-boilerplate
contrib/vault/azure_vault.py metaclass-boilerplate
contrib/vault/vault-keyring-client.py future-import-boilerplate
contrib/vault/vault-keyring-client.py metaclass-boilerplate
contrib/vault/vault-keyring.py future-import-boilerplate
contrib/vault/vault-keyring.py metaclass-boilerplate
docs/bin/find-plugin-refs.py future-import-boilerplate
docs/bin/find-plugin-refs.py metaclass-boilerplate
docs/docsite/_extensions/pygments_lexer.py future-import-boilerplate
docs/docsite/_extensions/pygments_lexer.py metaclass-boilerplate
docs/docsite/_themes/sphinx_rtd_theme/__init__.py future-import-boilerplate
docs/docsite/_themes/sphinx_rtd_theme/__init__.py metaclass-boilerplate
docs/docsite/rst/conf.py future-import-boilerplate
docs/docsite/rst/conf.py metaclass-boilerplate
docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst no-smart-quotes
examples/scripts/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSUseApprovedVerbs
examples/scripts/uptime.py future-import-boilerplate
examples/scripts/uptime.py metaclass-boilerplate
hacking/build-ansible.py shebang # only run by release engineers, Python 3.6+ required
hacking/build_library/build_ansible/announce.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/announce.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/announce.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/dump_config.py compile-2.6!skip # docs build only, 2.7+ required
hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-2.6!skip # docs build only, 2.7+ required
hacking/build_library/build_ansible/command_plugins/generate_man.py compile-2.6!skip # docs build only, 2.7+ required
hacking/build_library/build_ansible/command_plugins/plugin_formatter.py compile-2.6!skip # docs build only, 2.7+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-2.6!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-2.7!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-3.5!skip # release process only, 3.6+ required
hacking/build_library/build_ansible/command_plugins/update_intersphinx.py compile-2.6!skip # release process and docs build only, 3.5+ required
hacking/build_library/build_ansible/command_plugins/update_intersphinx.py compile-2.7!skip # release process and docs build only, 3.5+ required
hacking/fix_test_syntax.py future-import-boilerplate
hacking/fix_test_syntax.py metaclass-boilerplate
hacking/get_library.py future-import-boilerplate
hacking/get_library.py metaclass-boilerplate
hacking/report.py future-import-boilerplate
hacking/report.py metaclass-boilerplate
hacking/return_skeleton_generator.py future-import-boilerplate
hacking/return_skeleton_generator.py metaclass-boilerplate
hacking/test-module.py future-import-boilerplate
hacking/test-module.py metaclass-boilerplate
hacking/tests/gen_distribution_version_testcase.py future-import-boilerplate
hacking/tests/gen_distribution_version_testcase.py metaclass-boilerplate
lib/ansible/cli/console.py pylint:blacklisted-name
lib/ansible/cli/scripts/ansible_cli_stub.py shebang
lib/ansible/cli/scripts/ansible_connection_cli_stub.py shebang
lib/ansible/compat/selectors/_selectors2.py future-import-boilerplate # ignore bundled
lib/ansible/compat/selectors/_selectors2.py metaclass-boilerplate # ignore bundled
lib/ansible/compat/selectors/_selectors2.py pylint:blacklisted-name
lib/ansible/config/base.yml no-unwanted-files
lib/ansible/config/module_defaults.yml no-unwanted-files
lib/ansible/executor/playbook_executor.py pylint:blacklisted-name
lib/ansible/executor/powershell/async_watchdog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/async_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/exec_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/task_queue_manager.py pylint:blacklisted-name
lib/ansible/module_utils/_text.py future-import-boilerplate
lib/ansible/module_utils/_text.py metaclass-boilerplate
lib/ansible/module_utils/alicloud_ecs.py future-import-boilerplate
lib/ansible/module_utils/alicloud_ecs.py metaclass-boilerplate
lib/ansible/module_utils/ansible_tower.py future-import-boilerplate
lib/ansible/module_utils/ansible_tower.py metaclass-boilerplate
lib/ansible/module_utils/api.py future-import-boilerplate
lib/ansible/module_utils/api.py metaclass-boilerplate
lib/ansible/module_utils/azure_rm_common.py future-import-boilerplate
lib/ansible/module_utils/azure_rm_common.py metaclass-boilerplate
lib/ansible/module_utils/azure_rm_common_ext.py future-import-boilerplate
lib/ansible/module_utils/azure_rm_common_ext.py metaclass-boilerplate
lib/ansible/module_utils/azure_rm_common_rest.py future-import-boilerplate
lib/ansible/module_utils/azure_rm_common_rest.py metaclass-boilerplate
lib/ansible/module_utils/basic.py metaclass-boilerplate
lib/ansible/module_utils/cloud.py future-import-boilerplate
lib/ansible/module_utils/cloud.py metaclass-boilerplate
lib/ansible/module_utils/common/network.py future-import-boilerplate
lib/ansible/module_utils/common/network.py metaclass-boilerplate
lib/ansible/module_utils/compat/ipaddress.py future-import-boilerplate
lib/ansible/module_utils/compat/ipaddress.py metaclass-boilerplate
lib/ansible/module_utils/compat/ipaddress.py no-assert
lib/ansible/module_utils/compat/ipaddress.py no-unicode-literals
lib/ansible/module_utils/connection.py future-import-boilerplate
lib/ansible/module_utils/connection.py metaclass-boilerplate
lib/ansible/module_utils/database.py future-import-boilerplate
lib/ansible/module_utils/database.py metaclass-boilerplate
lib/ansible/module_utils/digital_ocean.py future-import-boilerplate
lib/ansible/module_utils/digital_ocean.py metaclass-boilerplate
lib/ansible/module_utils/dimensiondata.py future-import-boilerplate
lib/ansible/module_utils/dimensiondata.py metaclass-boilerplate
lib/ansible/module_utils/distro/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/distro/_distro.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py no-assert
lib/ansible/module_utils/distro/_distro.py pep8!skip # bundled code we don't want to modify
lib/ansible/module_utils/f5_utils.py future-import-boilerplate
lib/ansible/module_utils/f5_utils.py metaclass-boilerplate
lib/ansible/module_utils/facts/__init__.py empty-init # breaks namespacing, deprecate and eventually remove
lib/ansible/module_utils/facts/network/linux.py pylint:blacklisted-name
lib/ansible/module_utils/facts/sysctl.py future-import-boilerplate
lib/ansible/module_utils/facts/sysctl.py metaclass-boilerplate
lib/ansible/module_utils/facts/system/distribution.py pylint:ansible-bad-function
lib/ansible/module_utils/facts/utils.py future-import-boilerplate
lib/ansible/module_utils/facts/utils.py metaclass-boilerplate
lib/ansible/module_utils/firewalld.py future-import-boilerplate
lib/ansible/module_utils/firewalld.py metaclass-boilerplate
lib/ansible/module_utils/gcdns.py future-import-boilerplate
lib/ansible/module_utils/gcdns.py metaclass-boilerplate
lib/ansible/module_utils/gce.py future-import-boilerplate
lib/ansible/module_utils/gce.py metaclass-boilerplate
lib/ansible/module_utils/gcp.py future-import-boilerplate
lib/ansible/module_utils/gcp.py metaclass-boilerplate
lib/ansible/module_utils/gcp_utils.py future-import-boilerplate
lib/ansible/module_utils/gcp_utils.py metaclass-boilerplate
lib/ansible/module_utils/gitlab.py future-import-boilerplate
lib/ansible/module_utils/gitlab.py metaclass-boilerplate
lib/ansible/module_utils/hwc_utils.py future-import-boilerplate
lib/ansible/module_utils/hwc_utils.py metaclass-boilerplate
lib/ansible/module_utils/infinibox.py future-import-boilerplate
lib/ansible/module_utils/infinibox.py metaclass-boilerplate
lib/ansible/module_utils/ipa.py future-import-boilerplate
lib/ansible/module_utils/ipa.py metaclass-boilerplate
lib/ansible/module_utils/ismount.py future-import-boilerplate
lib/ansible/module_utils/ismount.py metaclass-boilerplate
lib/ansible/module_utils/json_utils.py future-import-boilerplate
lib/ansible/module_utils/json_utils.py metaclass-boilerplate
lib/ansible/module_utils/k8s/common.py metaclass-boilerplate
lib/ansible/module_utils/k8s/raw.py metaclass-boilerplate
lib/ansible/module_utils/k8s/scale.py metaclass-boilerplate
lib/ansible/module_utils/known_hosts.py future-import-boilerplate
lib/ansible/module_utils/known_hosts.py metaclass-boilerplate
lib/ansible/module_utils/kubevirt.py future-import-boilerplate
lib/ansible/module_utils/kubevirt.py metaclass-boilerplate
lib/ansible/module_utils/linode.py future-import-boilerplate
lib/ansible/module_utils/linode.py metaclass-boilerplate
lib/ansible/module_utils/lxd.py future-import-boilerplate
lib/ansible/module_utils/lxd.py metaclass-boilerplate
lib/ansible/module_utils/manageiq.py future-import-boilerplate
lib/ansible/module_utils/manageiq.py metaclass-boilerplate
lib/ansible/module_utils/memset.py future-import-boilerplate
lib/ansible/module_utils/memset.py metaclass-boilerplate
lib/ansible/module_utils/mysql.py future-import-boilerplate
lib/ansible/module_utils/mysql.py metaclass-boilerplate
lib/ansible/module_utils/net_tools/netbox/netbox_utils.py future-import-boilerplate
lib/ansible/module_utils/net_tools/nios/api.py future-import-boilerplate
lib/ansible/module_utils/net_tools/nios/api.py metaclass-boilerplate
lib/ansible/module_utils/netapp.py future-import-boilerplate
lib/ansible/module_utils/netapp.py metaclass-boilerplate
lib/ansible/module_utils/netapp_elementsw_module.py future-import-boilerplate
lib/ansible/module_utils/netapp_elementsw_module.py metaclass-boilerplate
lib/ansible/module_utils/netapp_module.py future-import-boilerplate
lib/ansible/module_utils/netapp_module.py metaclass-boilerplate
lib/ansible/module_utils/network/a10/a10.py future-import-boilerplate
lib/ansible/module_utils/network/a10/a10.py metaclass-boilerplate
lib/ansible/module_utils/network/aireos/aireos.py future-import-boilerplate
lib/ansible/module_utils/network/aireos/aireos.py metaclass-boilerplate
lib/ansible/module_utils/network/aos/aos.py future-import-boilerplate
lib/ansible/module_utils/network/aos/aos.py metaclass-boilerplate
lib/ansible/module_utils/network/aruba/aruba.py future-import-boilerplate
lib/ansible/module_utils/network/aruba/aruba.py metaclass-boilerplate
lib/ansible/module_utils/network/asa/asa.py future-import-boilerplate
lib/ansible/module_utils/network/asa/asa.py metaclass-boilerplate
lib/ansible/module_utils/network/avi/ansible_utils.py future-import-boilerplate
lib/ansible/module_utils/network/avi/ansible_utils.py metaclass-boilerplate
lib/ansible/module_utils/network/avi/avi.py future-import-boilerplate
lib/ansible/module_utils/network/avi/avi.py metaclass-boilerplate
lib/ansible/module_utils/network/avi/avi_api.py future-import-boilerplate
lib/ansible/module_utils/network/avi/avi_api.py metaclass-boilerplate
lib/ansible/module_utils/network/bigswitch/bigswitch.py future-import-boilerplate
lib/ansible/module_utils/network/bigswitch/bigswitch.py metaclass-boilerplate
lib/ansible/module_utils/network/checkpoint/checkpoint.py metaclass-boilerplate
lib/ansible/module_utils/network/cloudengine/ce.py future-import-boilerplate
lib/ansible/module_utils/network/cloudengine/ce.py metaclass-boilerplate
lib/ansible/module_utils/network/cnos/cnos.py future-import-boilerplate
lib/ansible/module_utils/network/cnos/cnos.py metaclass-boilerplate
lib/ansible/module_utils/network/cnos/cnos_devicerules.py future-import-boilerplate
lib/ansible/module_utils/network/cnos/cnos_devicerules.py metaclass-boilerplate
lib/ansible/module_utils/network/cnos/cnos_errorcodes.py future-import-boilerplate
lib/ansible/module_utils/network/cnos/cnos_errorcodes.py metaclass-boilerplate
lib/ansible/module_utils/network/common/cfg/base.py future-import-boilerplate
lib/ansible/module_utils/network/common/cfg/base.py metaclass-boilerplate
lib/ansible/module_utils/network/common/config.py future-import-boilerplate
lib/ansible/module_utils/network/common/config.py metaclass-boilerplate
lib/ansible/module_utils/network/common/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/common/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/common/netconf.py future-import-boilerplate
lib/ansible/module_utils/network/common/netconf.py metaclass-boilerplate
lib/ansible/module_utils/network/common/network.py future-import-boilerplate
lib/ansible/module_utils/network/common/network.py metaclass-boilerplate
lib/ansible/module_utils/network/common/parsing.py future-import-boilerplate
lib/ansible/module_utils/network/common/parsing.py metaclass-boilerplate
lib/ansible/module_utils/network/common/utils.py future-import-boilerplate
lib/ansible/module_utils/network/common/utils.py metaclass-boilerplate
lib/ansible/module_utils/network/dellos10/dellos10.py future-import-boilerplate
lib/ansible/module_utils/network/dellos10/dellos10.py metaclass-boilerplate
lib/ansible/module_utils/network/dellos6/dellos6.py future-import-boilerplate
lib/ansible/module_utils/network/dellos6/dellos6.py metaclass-boilerplate
lib/ansible/module_utils/network/dellos9/dellos9.py future-import-boilerplate
lib/ansible/module_utils/network/dellos9/dellos9.py metaclass-boilerplate
lib/ansible/module_utils/network/edgeos/edgeos.py future-import-boilerplate
lib/ansible/module_utils/network/edgeos/edgeos.py metaclass-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch.py future-import-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch.py metaclass-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch_interface.py future-import-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch_interface.py metaclass-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch_interface.py pylint:duplicate-string-formatting-argument
lib/ansible/module_utils/network/enos/enos.py future-import-boilerplate
lib/ansible/module_utils/network/enos/enos.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/eos.py future-import-boilerplate
lib/ansible/module_utils/network/eos/eos.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/address_family.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/address_family.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/neighbors.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/neighbors.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/process.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/process.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/module.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/module.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/providers.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/providers.py metaclass-boilerplate
lib/ansible/module_utils/network/exos/exos.py future-import-boilerplate
lib/ansible/module_utils/network/exos/exos.py metaclass-boilerplate
lib/ansible/module_utils/network/fortimanager/common.py future-import-boilerplate
lib/ansible/module_utils/network/fortimanager/common.py metaclass-boilerplate
lib/ansible/module_utils/network/fortimanager/fortimanager.py future-import-boilerplate
lib/ansible/module_utils/network/fortimanager/fortimanager.py metaclass-boilerplate
lib/ansible/module_utils/network/fortios/fortios.py future-import-boilerplate
lib/ansible/module_utils/network/fortios/fortios.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/frr.py future-import-boilerplate
lib/ansible/module_utils/network/frr/frr.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/base.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/base.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/address_family.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/address_family.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/neighbors.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/neighbors.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/process.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/process.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/module.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/module.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/providers.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/providers.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/common.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/common.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/configuration.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/configuration.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/device.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/device.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/fdm_swagger_client.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/fdm_swagger_client.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/operation.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/operation.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/ios.py future-import-boilerplate
lib/ansible/module_utils/network/ios/ios.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/base.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/base.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/address_family.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/address_family.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/neighbors.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/neighbors.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/process.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/process.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/module.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/module.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/providers.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/providers.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/iosxr.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/iosxr.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/address_family.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/address_family.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/neighbors.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/neighbors.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/process.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/process.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/module.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/module.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/providers.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/providers.py metaclass-boilerplate
lib/ansible/module_utils/network/junos/argspec/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/junos/argspec/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/junos/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/junos/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/junos/facts/legacy/base.py future-import-boilerplate
lib/ansible/module_utils/network/junos/facts/legacy/base.py metaclass-boilerplate
lib/ansible/module_utils/network/junos/junos.py future-import-boilerplate
lib/ansible/module_utils/network/junos/junos.py metaclass-boilerplate
lib/ansible/module_utils/network/meraki/meraki.py future-import-boilerplate
lib/ansible/module_utils/network/meraki/meraki.py metaclass-boilerplate
lib/ansible/module_utils/network/netconf/netconf.py future-import-boilerplate
lib/ansible/module_utils/network/netconf/netconf.py metaclass-boilerplate
lib/ansible/module_utils/network/netscaler/netscaler.py future-import-boilerplate
lib/ansible/module_utils/network/netscaler/netscaler.py metaclass-boilerplate
lib/ansible/module_utils/network/nos/nos.py future-import-boilerplate
lib/ansible/module_utils/network/nos/nos.py metaclass-boilerplate
lib/ansible/module_utils/network/nso/nso.py future-import-boilerplate
lib/ansible/module_utils/network/nso/nso.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/argspec/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/argspec/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/facts/legacy/base.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/facts/legacy/base.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/nxos.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/nxos.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/utils/utils.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/utils/utils.py metaclass-boilerplate
lib/ansible/module_utils/network/onyx/onyx.py future-import-boilerplate
lib/ansible/module_utils/network/onyx/onyx.py metaclass-boilerplate
lib/ansible/module_utils/network/ordnance/ordnance.py future-import-boilerplate
lib/ansible/module_utils/network/ordnance/ordnance.py metaclass-boilerplate
lib/ansible/module_utils/network/restconf/restconf.py future-import-boilerplate
lib/ansible/module_utils/network/restconf/restconf.py metaclass-boilerplate
lib/ansible/module_utils/network/routeros/routeros.py future-import-boilerplate
lib/ansible/module_utils/network/routeros/routeros.py metaclass-boilerplate
lib/ansible/module_utils/network/skydive/api.py future-import-boilerplate
lib/ansible/module_utils/network/skydive/api.py metaclass-boilerplate
lib/ansible/module_utils/network/slxos/slxos.py future-import-boilerplate
lib/ansible/module_utils/network/slxos/slxos.py metaclass-boilerplate
lib/ansible/module_utils/network/sros/sros.py future-import-boilerplate
lib/ansible/module_utils/network/sros/sros.py metaclass-boilerplate
lib/ansible/module_utils/network/voss/voss.py future-import-boilerplate
lib/ansible/module_utils/network/voss/voss.py metaclass-boilerplate
lib/ansible/module_utils/network/vyos/vyos.py future-import-boilerplate
lib/ansible/module_utils/network/vyos/vyos.py metaclass-boilerplate
lib/ansible/module_utils/oneandone.py future-import-boilerplate
lib/ansible/module_utils/oneandone.py metaclass-boilerplate
lib/ansible/module_utils/oneview.py metaclass-boilerplate
lib/ansible/module_utils/opennebula.py future-import-boilerplate
lib/ansible/module_utils/opennebula.py metaclass-boilerplate
lib/ansible/module_utils/openstack.py future-import-boilerplate
lib/ansible/module_utils/openstack.py metaclass-boilerplate
lib/ansible/module_utils/oracle/oci_utils.py future-import-boilerplate
lib/ansible/module_utils/oracle/oci_utils.py metaclass-boilerplate
lib/ansible/module_utils/ovirt.py future-import-boilerplate
lib/ansible/module_utils/ovirt.py metaclass-boilerplate
lib/ansible/module_utils/parsing/convert_bool.py future-import-boilerplate
lib/ansible/module_utils/parsing/convert_bool.py metaclass-boilerplate
lib/ansible/module_utils/postgres.py future-import-boilerplate
lib/ansible/module_utils/postgres.py metaclass-boilerplate
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.ArgvParser.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSProvideCommentHelp # need to agree on best format for comment location
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSProvideCommentHelp
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.LinkUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/pure.py future-import-boilerplate
lib/ansible/module_utils/pure.py metaclass-boilerplate
lib/ansible/module_utils/pycompat24.py future-import-boilerplate
lib/ansible/module_utils/pycompat24.py metaclass-boilerplate
lib/ansible/module_utils/pycompat24.py no-get-exception
lib/ansible/module_utils/rax.py future-import-boilerplate
lib/ansible/module_utils/rax.py metaclass-boilerplate
lib/ansible/module_utils/redhat.py future-import-boilerplate
lib/ansible/module_utils/redhat.py metaclass-boilerplate
lib/ansible/module_utils/remote_management/dellemc/dellemc_idrac.py future-import-boilerplate
lib/ansible/module_utils/remote_management/intersight.py future-import-boilerplate
lib/ansible/module_utils/remote_management/intersight.py metaclass-boilerplate
lib/ansible/module_utils/remote_management/lxca/common.py future-import-boilerplate
lib/ansible/module_utils/remote_management/lxca/common.py metaclass-boilerplate
lib/ansible/module_utils/remote_management/ucs.py future-import-boilerplate
lib/ansible/module_utils/remote_management/ucs.py metaclass-boilerplate
lib/ansible/module_utils/scaleway.py future-import-boilerplate
lib/ansible/module_utils/scaleway.py metaclass-boilerplate
lib/ansible/module_utils/service.py future-import-boilerplate
lib/ansible/module_utils/service.py metaclass-boilerplate
lib/ansible/module_utils/six/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/six/__init__.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py no-basestring
lib/ansible/module_utils/six/__init__.py no-dict-iteritems
lib/ansible/module_utils/six/__init__.py no-dict-iterkeys
lib/ansible/module_utils/six/__init__.py no-dict-itervalues
lib/ansible/module_utils/six/__init__.py replace-urlopen
lib/ansible/module_utils/splitter.py future-import-boilerplate
lib/ansible/module_utils/splitter.py metaclass-boilerplate
lib/ansible/module_utils/storage/hpe3par/hpe3par.py future-import-boilerplate
lib/ansible/module_utils/storage/hpe3par/hpe3par.py metaclass-boilerplate
lib/ansible/module_utils/univention_umc.py future-import-boilerplate
lib/ansible/module_utils/univention_umc.py metaclass-boilerplate
lib/ansible/module_utils/urls.py future-import-boilerplate
lib/ansible/module_utils/urls.py metaclass-boilerplate
lib/ansible/module_utils/urls.py pylint:blacklisted-name
lib/ansible/module_utils/urls.py replace-urlopen
lib/ansible/module_utils/vca.py future-import-boilerplate
lib/ansible/module_utils/vca.py metaclass-boilerplate
lib/ansible/module_utils/vexata.py future-import-boilerplate
lib/ansible/module_utils/vexata.py metaclass-boilerplate
lib/ansible/module_utils/yumdnf.py future-import-boilerplate
lib/ansible/module_utils/yumdnf.py metaclass-boilerplate
lib/ansible/modules/cloud/alicloud/ali_instance.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/alicloud/ali_instance.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/alicloud/ali_instance_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/alicloud/ali_instance_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/alicloud/ali_instance_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/alicloud/ali_instance_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/amazon/aws_acm_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/aws_acm_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/aws_batch_compute_environment.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/aws_batch_compute_environment.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/aws_batch_job_definition.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/aws_batch_job_definition.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/aws_batch_job_queue.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/aws_batch_job_queue.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/aws_codebuild.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/aws_codebuild.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/aws_codepipeline.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/aws_codepipeline.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/aws_config_aggregator.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/aws_config_aggregator.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/aws_direct_connect_virtual_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/aws_direct_connect_virtual_interface.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/aws_eks_cluster.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/aws_eks_cluster.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/aws_glue_connection.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/aws_glue_connection.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/aws_glue_job.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/aws_glue_job.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/aws_kms.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/aws_kms.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/aws_netapp_cvs_FileSystems.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/aws_s3.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/aws_s3.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/aws_s3_cors.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/aws_waf_condition.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/aws_waf_condition.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/aws_waf_rule.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/aws_waf_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/aws_waf_web_acl.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/aws_waf_web_acl.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/cloudformation.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/cloudformation.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/cloudformation_stack_set.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/cloudformation_stack_set.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/cloudfront_distribution.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/cloudfront_distribution.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/cloudfront_invalidation.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/cloudfront_invalidation.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/cloudwatchevent_rule.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/cloudwatchevent_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/data_pipeline.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/data_pipeline.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/dynamodb_table.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/dynamodb_table.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_ami.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_ami.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_ami_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_ami_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_asg.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_asg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_customer_gateway_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_customer_gateway_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_elb.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_elb_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_elb_lb.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_eni.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_eni.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_group.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_instance.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_instance_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_launch_template.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_launch_template.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_lc.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_lc.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_lc_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_lc_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_metric_alarm.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_metric_alarm.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_placement_group_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_placement_group_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_snapshot_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_snapshot_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_tag.py validate-modules:parameter-state-invalid-choice
lib/ansible/modules/cloud/amazon/ec2_transit_gateway_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_vol.py validate-modules:parameter-state-invalid-choice
lib/ansible/modules/cloud/amazon/ec2_vpc_dhcp_option.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_vpc_dhcp_option.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_vpc_dhcp_option_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_vpc_dhcp_option_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_vpc_endpoint.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_vpc_endpoint.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_vpc_endpoint_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_vpc_endpoint_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_vpc_igw_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_vpc_igw_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_vpc_nacl.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_vpc_nacl_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_vpc_nat_gateway_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_vpc_nat_gateway_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_vpc_net.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_vpc_net.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_vpc_net_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_vpc_net_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_vpc_peering_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_vpc_peering_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_vpc_route_table.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_vpc_route_table.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_vpc_subnet_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_vpc_subnet_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_vpc_vgw_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_vpc_vgw_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_vpc_vpn.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_vpc_vpn.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ec2_vpc_vpn_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ec2_vpc_vpn_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ecs_attribute.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ecs_attribute.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ecs_service.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ecs_service.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ecs_service_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ecs_service_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ecs_task.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ecs_task.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/ecs_taskdefinition.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/ecs_taskdefinition.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/efs.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/efs.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/efs_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/efs_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/elasticache.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/elasticache.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/elasticache_subnet_group.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/elasticache_subnet_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/elb_application_lb.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/elb_application_lb.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/elb_application_lb_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/elb_classic_lb.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/elb_classic_lb_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/elb_instance.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/elb_network_lb.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/elb_network_lb.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/elb_target_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/elb_target_group_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/iam.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/iam_group.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/iam_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/iam_role.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/iam_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/lambda.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/lambda.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/rds.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/rds.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/rds_instance.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/rds_subnet_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/redshift.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/redshift.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/redshift_subnet_group.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/redshift_subnet_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/route53.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/route53.py validate-modules:parameter-state-invalid-choice
lib/ansible/modules/cloud/amazon/route53_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/route53_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/s3_bucket_notification.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/s3_bucket_notification.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/s3_lifecycle.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/amazon/sns_topic.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/amazon/sns_topic.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/atomic/atomic_container.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/atomic/atomic_container.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/atomic/atomic_container.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/atomic/atomic_container.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/atomic/atomic_container.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/atomic/atomic_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/atomic/atomic_image.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_acs.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_acs.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_aks_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_aks_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_aksversion_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_appgateway.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_appgateway.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_appgateway.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_appgateway.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_applicationsecuritygroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_applicationsecuritygroup_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_applicationsecuritygroup_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_appserviceplan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_appserviceplan_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_appserviceplan_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_automationaccount_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_autoscale.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_autoscale.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_autoscale.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_autoscale_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_autoscale_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_availabilityset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_availabilityset_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_availabilityset_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_azurefirewall.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/azure/azure_rm_azurefirewall.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/azure/azure_rm_azurefirewall.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_azurefirewall.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_azurefirewall.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_batchaccount.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/azure/azure_rm_batchaccount.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_cdnendpoint.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_cdnendpoint.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_cdnendpoint.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_cdnendpoint_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_cdnendpoint_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_cdnendpoint_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_cdnprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_cdnprofile_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_cdnprofile_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_containerinstance.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_containerinstance.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_containerinstance.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_containerinstance.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_containerinstance.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_containerinstance_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_containerinstance_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_containerregistry.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_containerregistry_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_containerregistry_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_deployment.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_deployment.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_deployment.py yamllint:unparsable-with-libyaml
lib/ansible/modules/cloud/azure/azure_rm_deployment_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_deployment_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlab.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlab_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_devtestlab_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabarmtemplate_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabartifact_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabartifactsource.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlabartifactsource_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_devtestlabartifactsource_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabcustomimage.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlabcustomimage_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_devtestlabcustomimage_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_devtestlabcustomimage_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabenvironment.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_devtestlabenvironment.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlabenvironment_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_devtestlabenvironment_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabpolicy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlabpolicy_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_devtestlabpolicy_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabschedule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlabschedule_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_devtestlabschedule_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualmachine.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualmachine.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualmachine.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualmachine.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualmachine_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualmachine_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualnetwork.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_dnsrecordset.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_dnsrecordset.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/azure/azure_rm_dnsrecordset.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/azure/azure_rm_dnsrecordset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_dnsrecordset_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_dnsrecordset_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_dnszone.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_dnszone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_dnszone_info.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_dnszone_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_dnszone_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_dnszone_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_functionapp.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_functionapp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_functionapp_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_functionapp_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_gallery.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/azure/azure_rm_galleryimage.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_galleryimage.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/azure/azure_rm_galleryimage.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_galleryimage_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_galleryimageversion.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_galleryimageversion.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_galleryimageversion.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_galleryimageversion.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/azure/azure_rm_galleryimageversion.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_galleryimageversion.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_hdinsightcluster.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_hdinsightcluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_hdinsightcluster_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_hdinsightcluster_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_hdinsightcluster_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_image.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_image.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_image.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_image_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_image_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_image_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_iothub.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_iothub_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_iothubconsumergroup.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_keyvault.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_keyvault.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_keyvault.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_keyvault.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_keyvault_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_keyvault_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_keyvaultkey.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_keyvaultkey_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_keyvaultkey_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_keyvaultsecret.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_lock_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_loganalyticsworkspace.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_loganalyticsworkspace_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_loganalyticsworkspace_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_manageddisk.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_manageddisk_info.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_manageddisk_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_mariadbconfiguration.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mariadbconfiguration_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_mariadbdatabase.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mariadbfirewallrule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mariadbserver.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mariadbserver_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_monitorlogprofile.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_mysqlconfiguration.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mysqlconfiguration_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_mysqldatabase.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mysqlfirewallrule.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_mysqlfirewallrule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mysqlserver.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mysqlserver_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_networkinterface_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_networkinterface_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_networkinterface_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_postgresqlconfiguration.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_postgresqlconfiguration_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_postgresqldatabase.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_postgresqlfirewallrule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_postgresqlserver.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_postgresqlserver_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_publicipaddress.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_publicipaddress.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_publicipaddress.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_publicipaddress_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_publicipaddress_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_rediscache.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_rediscache.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_rediscache.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_rediscache.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_rediscache_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_rediscache_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_rediscachefirewallrule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_resource.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_resource.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_resource_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_resource_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_resourcegroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_resourcegroup_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_resourcegroup_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_roleassignment.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_roleassignment_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_roledefinition.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_roledefinition.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/azure/azure_rm_roledefinition.py validate-modules:invalid-argument-spec
lib/ansible/modules/cloud/azure/azure_rm_roledefinition.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/azure/azure_rm_roledefinition.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_roledefinition.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_roledefinition_info.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/azure/azure_rm_roledefinition_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_roledefinition_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_routetable.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_routetable_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_routetable_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_securitygroup_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_securitygroup_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_servicebus.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_servicebus_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_servicebus_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_servicebus_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_servicebusqueue.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_servicebussaspolicy.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_servicebussaspolicy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_servicebustopic.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_servicebustopic.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_servicebustopicsubscription.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_snapshot.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_snapshot.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/azure/azure_rm_sqldatabase.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/azure/azure_rm_sqldatabase.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_sqldatabase_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_sqldatabase_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_sqlfirewallrule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_sqlfirewallrule_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_sqlserver.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_sqlserver_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_storageaccount.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_storageaccount.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/azure/azure_rm_storageaccount.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_storageaccount.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_storageaccount_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_storageaccount_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_storageaccount_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_storageblob.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_subnet.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_subnet.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_subnet_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerendpoint.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerendpoint.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerendpoint_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachine.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_virtualmachine.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_virtualmachine.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_virtualmachine.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachine_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_virtualmachine_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachineextension.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachineextension_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_virtualmachineextension_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachineimage_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescaleset.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescaleset.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescaleset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescaleset_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescaleset_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetextension.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetextension.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetextension_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetinstance.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetinstance.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetinstance_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetinstance_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualnetwork.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_virtualnetwork.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualnetwork_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_virtualnetwork_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkpeering.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkpeering.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkpeering_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_webapp.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_webapp.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_webapp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_webapp_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/azure/azure_rm_webapp_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_webappslot.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/azure/azure_rm_webappslot.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/azure/azure_rm_webappslot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_aa_policy.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_aa_policy.py yamllint:unparsable-with-libyaml
lib/ansible/modules/cloud/centurylink/clc_alert_policy.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_alert_policy.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/centurylink/clc_alert_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/centurylink/clc_alert_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_alert_policy.py yamllint:unparsable-with-libyaml
lib/ansible/modules/cloud/centurylink/clc_blueprint_package.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_blueprint_package.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/cloud/centurylink/clc_blueprint_package.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/centurylink/clc_blueprint_package.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py yamllint:unparsable-with-libyaml
lib/ansible/modules/cloud/centurylink/clc_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_group.py yamllint:unparsable-with-libyaml
lib/ansible/modules/cloud/centurylink/clc_loadbalancer.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_loadbalancer.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/centurylink/clc_loadbalancer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_modify_server.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_modify_server.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/centurylink/clc_modify_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_publicip.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_publicip.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/centurylink/clc_publicip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_server.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_server.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/centurylink/clc_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_server_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_server_snapshot.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/cloud/centurylink/clc_server_snapshot.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/centurylink/clc_server_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/cloudscale/cloudscale_floating_ip.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/cloudscale/cloudscale_server.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/cloudscale/cloudscale_server.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudscale/cloudscale_server_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/cloudscale/cloudscale_volume.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/cloudscale/cloudscale_volume.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_disk_offering.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_firewall.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_host.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_instance.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_ip_address.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_iso.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_loadbalancer_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/cloudstack/cs_loadbalancer_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_loadbalancer_rule_member.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_network.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_network_acl_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_network_offering.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_physical_network.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_portforward.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_project.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_resourcelimit.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/cloudstack/cs_service_offering.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_storage_pool.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_template.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_vmsnapshot.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_volume.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_vpc.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_vpc_offering.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/cloudstack/cs_vpn_customer_gateway.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/digital_ocean/_digital_ocean.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/_digital_ocean.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/digital_ocean/_digital_ocean.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/_digital_ocean.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/digital_ocean/digital_ocean_block_storage.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_block_storage.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/digital_ocean/digital_ocean_block_storage.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_certificate.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_certificate.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/digital_ocean/digital_ocean_certificate.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_certificate_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_domain.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_domain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_domain_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_droplet.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_droplet.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/digital_ocean/digital_ocean_droplet.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/digital_ocean/digital_ocean_droplet.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_firewall_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_floating_ip.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/digital_ocean/digital_ocean_floating_ip.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_floating_ip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_floating_ip.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/digital_ocean/digital_ocean_image_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_load_balancer_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_snapshot_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_sshkey.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/digital_ocean/digital_ocean_sshkey.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_sshkey.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_sshkey.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/digital_ocean/digital_ocean_tag.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_tag.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_tag_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_volume_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/dimensiondata/dimensiondata_network.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/dimensiondata/dimensiondata_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/dimensiondata/dimensiondata_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/dimensiondata/dimensiondata_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/dimensiondata/dimensiondata_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/dimensiondata/dimensiondata_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/dimensiondata/dimensiondata_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/docker/docker_container.py use-argspec-type-path # uses colon-separated paths, can't use type=path
lib/ansible/modules/cloud/docker/docker_stack.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/_gcdns_record.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/_gcdns_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcdns_zone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gce.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/_gce.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/_gce.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/_gce.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/google/_gce.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/_gce.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gce.py yamllint:unparsable-with-libyaml
lib/ansible/modules/cloud/google/_gcp_backend_service.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/_gcp_healthcheck.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/_gcspanner.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcspanner.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gce_eip.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_eip.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_eip.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gce_eip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_eip.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gce_img.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_img.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_img.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_instance_template.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gce_lb.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_lb.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gce_lb.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_lb.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/google/gce_lb.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gce_lb.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_mig.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_mig.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_mig.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gce_mig.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_mig.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gce_net.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_net.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gce_net.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_net.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/google/gce_net.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gce_net.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_pd.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_pd.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gce_pd.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_pd.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gce_pd.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_pd.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gce_snapshot.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_snapshot.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/gce_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_snapshot.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/google/gce_snapshot.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gce_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_tag.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_tag.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gce_tag.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gcp_appengine_firewall_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_appengine_firewall_rule_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_bigquery_dataset.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_bigquery_dataset.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_bigquery_dataset_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_bigquery_table.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/gcp_bigquery_table.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_bigquery_table.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_bigquery_table_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_cloudbuild_trigger.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_cloudbuild_trigger.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_cloudbuild_trigger_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_cloudfunctions_cloud_function.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_cloudfunctions_cloud_function_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_cloudscheduler_job.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_cloudscheduler_job_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_cloudtasks_queue.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_cloudtasks_queue_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_address.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_address_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_address_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_autoscaler.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_autoscaler.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_autoscaler_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_autoscaler_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_backend_bucket.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_backend_bucket_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_backend_bucket_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_backend_service.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_backend_service.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_backend_service_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_backend_service_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_disk.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_disk.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_disk_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_disk_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_firewall.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_firewall.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_firewall_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_firewall_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_forwarding_rule.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_forwarding_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_forwarding_rule_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_forwarding_rule_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_global_address.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_global_address_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_global_address_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_global_forwarding_rule.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_global_forwarding_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_global_forwarding_rule_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_global_forwarding_rule_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_health_check.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_health_check_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_health_check_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_http_health_check.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_http_health_check_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_http_health_check_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_https_health_check.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_https_health_check_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_https_health_check_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_image.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_image.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_image_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_image_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_instance.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_instance.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_instance_group.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_instance_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_instance_group_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_instance_group_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_instance_group_manager.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_instance_group_manager.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_instance_group_manager_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_instance_group_manager_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_instance_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_instance_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_instance_template.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_instance_template.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_instance_template_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_instance_template_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_interconnect_attachment.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_interconnect_attachment.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_interconnect_attachment_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_interconnect_attachment_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_network.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_network_endpoint_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_network_endpoint_group_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_network_endpoint_group_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_network_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_network_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_node_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_node_group_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_node_group_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_node_template.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_node_template_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_node_template_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_region_backend_service.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_region_backend_service.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_region_backend_service_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_region_backend_service_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_region_disk.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_region_disk.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_region_disk_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_region_disk_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_reservation.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_reservation.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_reservation_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_reservation_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_route.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_route.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_route_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_route_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_router.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_router.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_router_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_router_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_snapshot.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_snapshot_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_snapshot_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_ssl_certificate.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_ssl_certificate_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_ssl_certificate_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_ssl_policy.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_ssl_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_ssl_policy_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_ssl_policy_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_subnetwork.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_subnetwork.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_subnetwork_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_subnetwork_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_target_http_proxy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_target_http_proxy_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_target_http_proxy_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_target_https_proxy.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_target_https_proxy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_target_https_proxy_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_target_https_proxy_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_target_instance.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_target_instance_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_target_instance_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_target_pool.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_target_pool.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_target_pool_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_target_pool_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_target_ssl_proxy.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_target_ssl_proxy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_target_ssl_proxy_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_target_ssl_proxy_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_target_tcp_proxy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_target_tcp_proxy_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_target_tcp_proxy_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_target_vpn_gateway.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_target_vpn_gateway_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_target_vpn_gateway_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_url_map.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_url_map.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_url_map_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_url_map_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_vpn_tunnel.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_vpn_tunnel.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_compute_vpn_tunnel_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_compute_vpn_tunnel_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_container_cluster.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_container_cluster.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_container_cluster_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_container_node_pool.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_container_node_pool.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_container_node_pool_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_dns_managed_zone.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_dns_managed_zone.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_dns_managed_zone_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_dns_managed_zone_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_dns_resource_record_set.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_dns_resource_record_set.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_dns_resource_record_set_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_filestore_instance.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_filestore_instance.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_filestore_instance_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_iam_role.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_iam_role.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_iam_role_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_iam_service_account.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_iam_service_account_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_iam_service_account_key.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_kms_crypto_key.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_kms_crypto_key_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_kms_key_ring.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_kms_key_ring_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_logging_metric.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_logging_metric.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_logging_metric_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_mlengine_model.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_mlengine_model.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_mlengine_model_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_mlengine_version.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_mlengine_version_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_pubsub_subscription.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_pubsub_subscription_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_pubsub_topic.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_pubsub_topic.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_pubsub_topic_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_redis_instance.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_redis_instance_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_resourcemanager_project.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_resourcemanager_project_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_runtimeconfig_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_runtimeconfig_config_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_runtimeconfig_variable.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_runtimeconfig_variable_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_serviceusage_service.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_serviceusage_service_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_sourcerepo_repository.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_sourcerepo_repository_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_spanner_database.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_spanner_database.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_spanner_database_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_spanner_instance.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_spanner_instance_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_sql_database.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_sql_database_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_sql_instance.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_sql_instance.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_sql_instance_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_sql_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_sql_user_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_storage_bucket.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/google/gcp_storage_bucket.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_storage_bucket_access_control.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_storage_object.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_tpu_node.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcp_tpu_node_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcpubsub.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/google/gcpubsub.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/google/gcpubsub.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gcpubsub.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:parameter-state-invalid-choice
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/hcloud/hcloud_network_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/hcloud/hcloud_server.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/hcloud/hcloud_server_network.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/heroku/heroku_collaborator.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/heroku/heroku_collaborator.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/huawei/hwc_ecs_instance.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/huawei/hwc_vpc_port.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/huawei/hwc_vpc_subnet.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/kubevirt/kubevirt_cdi_upload.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/kubevirt/kubevirt_cdi_upload.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/kubevirt/kubevirt_cdi_upload.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/kubevirt/kubevirt_preset.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/kubevirt/kubevirt_preset.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/kubevirt/kubevirt_preset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/kubevirt/kubevirt_template.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/kubevirt/kubevirt_template.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/kubevirt/kubevirt_vm.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/kubevirt/kubevirt_vm.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/kubevirt/kubevirt_vm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/linode/linode.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/linode/linode.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/linode/linode.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/linode/linode.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/linode/linode_v4.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/linode/linode_v4.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/lxc/lxc_container.py pylint:blacklisted-name
lib/ansible/modules/cloud/lxc/lxc_container.py use-argspec-type-path
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:use-run-command-not-popen
lib/ansible/modules/cloud/lxd/lxd_container.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/lxd/lxd_container.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/lxd/lxd_container.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/lxd/lxd_container.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/lxd/lxd_container.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/lxd/lxd_container.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/lxd/lxd_profile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/lxd/lxd_profile.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/lxd/lxd_profile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/memset/memset_dns_reload.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/memset/memset_memstore_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/memset/memset_server_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/memset/memset_zone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/memset/memset_zone_domain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/memset/memset_zone_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/cloud_init_data_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/helm.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/helm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/ovirt.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/misc/ovirt.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/ovirt.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/misc/proxmox.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/proxmox.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/proxmox_kvm.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/misc/proxmox_kvm.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/proxmox_kvm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/proxmox_kvm.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/misc/proxmox_template.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/proxmox_template.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/misc/proxmox_template.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/misc/proxmox_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/rhevm.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/misc/rhevm.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/misc/rhevm.py validate-modules:parameter-state-invalid-choice
lib/ansible/modules/cloud/misc/serverless.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/misc/terraform.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/misc/terraform.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/terraform.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/misc/terraform.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/terraform.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/misc/virt.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/misc/virt.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/virt.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/misc/virt_net.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/virt_net.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/misc/virt_pool.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/misc/virt_pool.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/oneandone/oneandone_firewall_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/oneandone/oneandone_firewall_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/oneandone/oneandone_firewall_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/oneandone/oneandone_load_balancer.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/oneandone/oneandone_load_balancer.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/oneandone/oneandone_load_balancer.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/oneandone/oneandone_load_balancer.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/oneandone/oneandone_load_balancer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/oneandone/oneandone_monitoring_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/oneandone/oneandone_monitoring_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/oneandone/oneandone_monitoring_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/oneandone/oneandone_private_network.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/oneandone/oneandone_private_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/oneandone/oneandone_private_network.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/oneandone/oneandone_private_network.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/oneandone/oneandone_private_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/oneandone/oneandone_public_ip.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/oneandone/oneandone_public_ip.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/oneandone/oneandone_public_ip.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/oneandone/oneandone_public_ip.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/oneandone/oneandone_public_ip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/oneandone/oneandone_server.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/oneandone/oneandone_server.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/oneandone/oneandone_server.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/oneandone/oneandone_server.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/oneandone/oneandone_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/online/_online_server_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/online/_online_server_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/online/_online_user_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/online/_online_user_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/online/online_server_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/online/online_server_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/online/online_user_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/online/online_user_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/opennebula/one_host.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/opennebula/one_host.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/opennebula/one_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/opennebula/one_image.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/opennebula/one_image_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/opennebula/one_image_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/opennebula/one_service.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/opennebula/one_vm.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/opennebula/one_vm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_auth.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_client_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/openstack/os_client_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_coe_cluster.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_coe_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_coe_cluster_template.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_coe_cluster_template.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_coe_cluster_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_flavor_info.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/openstack/os_flavor_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_flavor_info.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/cloud/openstack/os_flavor_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_floating_ip.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_group_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_image.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/openstack/os_image.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/openstack/os_image.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_image.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_image_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/openstack/os_ironic_inspect.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/openstack/os_keypair.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_keystone_domain.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_keystone_domain_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_keystone_domain_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_keystone_endpoint.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_keystone_endpoint.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_keystone_endpoint.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/openstack/os_keystone_role.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_keystone_service.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_listener.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_listener.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_loadbalancer.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_loadbalancer.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/openstack/os_loadbalancer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_member.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_member.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_networks_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_networks_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_nova_flavor.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_nova_flavor.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_nova_flavor.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_nova_host_aggregate.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_nova_host_aggregate.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/openstack/os_nova_host_aggregate.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_object.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_pool.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_port.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_port.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_port.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/openstack/os_port.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_port_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_port_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_project.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_project_access.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_project_access.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_project_access.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_project_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_project_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_project_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/openstack/os_recordset.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_recordset.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_recordset.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/openstack/os_recordset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_router.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_router.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/openstack/os_router.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_security_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_security_group_rule.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_security_group_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_server.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/openstack/os_server.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_server.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_server.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/openstack/os_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_server.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/openstack/os_server_action.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/openstack/os_server_action.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_server_action.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_server_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_server_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/openstack/os_server_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_server_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_server_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_server_metadata.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_server_metadata.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_server_volume.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_stack.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/openstack/os_stack.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_stack.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/openstack/os_stack.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_subnet.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/openstack/os_subnet.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_subnet.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/openstack/os_subnet.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_subnets_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_subnets_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_user.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_user_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_user_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_user_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/openstack/os_user_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_user_role.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_volume.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_volume.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/openstack/os_volume_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_zone.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_zone.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/openstack/os_zone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/oracle/oci_vcn.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/oracle/oci_vcn.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovh/ovh_ip_failover.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovh/ovh_ip_failover.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovh/ovh_ip_loadbalancing_backend.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovh/ovh_ip_loadbalancing_backend.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_affinity_group.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_affinity_group.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_affinity_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/ovirt/ovirt_affinity_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_affinity_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_affinity_label_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_api_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_auth.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_auth.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_auth.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/ovirt/ovirt_auth.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_auth.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/ovirt/ovirt_auth.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_auth.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/ovirt/ovirt_cluster_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_cluster_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_cluster_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_cluster_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_datacenter.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_datacenter.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_datacenter.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_datacenter.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/ovirt/ovirt_datacenter.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_datacenter_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_datacenter_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_datacenter_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_datacenter_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_disk.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_disk.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/ovirt/ovirt_disk_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_disk_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_disk_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_disk_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_event.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_event_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/ovirt/ovirt_external_provider_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_external_provider_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_external_provider_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_external_provider_info.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/ovirt/ovirt_external_provider_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_external_provider_info.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/ovirt/ovirt_group.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_group.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_group_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_group_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_group_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_group_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_host.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_host.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/cloud/ovirt/ovirt_host.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_host_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_host_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_host_network.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_network.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_host_network.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_host_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_host_storage_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_storage_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_host_storage_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_host_storage_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_host_storage_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_instance_type.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_instance_type.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_instance_type.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_instance_type.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_job.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_job.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_job.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_job.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/ovirt/ovirt_job.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_mac_pool.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_mac_pool.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_mac_pool.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_mac_pool.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_mac_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_network.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_network.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_network.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/ovirt/ovirt_network.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_network_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_network_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_network_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_network_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_nic.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_nic.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_nic.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_nic.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_nic.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_nic_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_nic_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_nic_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_nic_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_permission.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_permission.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_permission.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_permission.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_permission_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_permission_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_permission_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_permission_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_quota.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_quota.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_quota.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_quota.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_quota.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_quota_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_quota_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_quota_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_quota_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_role.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_role.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_role.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_role.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_role.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_scheduling_policy_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_scheduling_policy_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_scheduling_policy_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_scheduling_policy_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/ovirt/ovirt_scheduling_policy_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_snapshot.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_snapshot.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_snapshot.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_snapshot_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_snapshot_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_snapshot_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_snapshot_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_storage_connection.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_connection.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_connection.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_storage_connection.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_storage_connection.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_storage_template_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_template_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_template_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_storage_template_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_storage_template_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_storage_vm_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_vm_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_storage_vm_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_storage_vm_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_storage_vm_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_tag.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_tag.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_tag.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_tag.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_tag.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_tag_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_tag_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_tag_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_tag_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_template.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_template.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_template.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_template.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_template_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_template_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_template_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_template_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_user.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_user.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_user.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_user_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_user_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_user_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_user_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_vm.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vm.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vm.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_vm.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_vm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_vm_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vm_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vm_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_vm_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_vm_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_vmpool.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vmpool.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vmpool.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_vmpool.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_vmpool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_vmpool_info.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vmpool_info.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vmpool_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/ovirt/ovirt_vmpool_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_vnic_profile.py future-import-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vnic_profile.py metaclass-boilerplate
lib/ansible/modules/cloud/ovirt/ovirt_vnic_profile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/ovirt/ovirt_vnic_profile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/ovirt/ovirt_vnic_profile_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/packet/packet_device.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/packet/packet_device.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/packet/packet_device.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/packet/packet_ip_subnet.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/packet/packet_sshkey.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/packet/packet_sshkey.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/packet/packet_sshkey.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/packet/packet_volume_attachment.py pylint:ansible-bad-function
lib/ansible/modules/cloud/packet/packet_volume_attachment.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/podman/podman_image.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/cloud/podman/podman_image.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/podman/podman_image.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/podman/podman_image.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/podman/podman_image_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/podman/podman_image_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/profitbricks/profitbricks_datacenter.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/profitbricks/profitbricks_datacenter.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/profitbricks/profitbricks_datacenter.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/profitbricks/profitbricks_datacenter.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/profitbricks/profitbricks_nic.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/profitbricks/profitbricks_nic.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/profitbricks/profitbricks_nic.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/profitbricks/profitbricks_nic.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/profitbricks/profitbricks_nic.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/profitbricks/profitbricks_volume_attachments.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/profitbricks/profitbricks_volume_attachments.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/profitbricks/profitbricks_volume_attachments.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/profitbricks/profitbricks_volume_attachments.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/pubnub/pubnub_blocks.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/pubnub/pubnub_blocks.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/pubnub/pubnub_blocks.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax.py use-argspec-type-path # fix needed
lib/ansible/modules/cloud/rackspace/rax.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/rackspace/rax.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/rackspace/rax_cbs.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_cbs.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_cbs.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/rackspace/rax_cbs.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_cbs_attachments.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_cbs_attachments.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_cbs_attachments.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/rackspace/rax_cbs_attachments.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_cdb.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_cdb.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_cdb.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_cdb.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/rackspace/rax_cdb.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_cdb_database.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_cdb_database.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_cdb_database.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/rackspace/rax_cdb_database.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_cdb_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_cdb_user.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_cdb_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/rackspace/rax_cdb_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/rackspace/rax_cdb_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_clb.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_clb.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_clb.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/rackspace/rax_clb.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_clb_nodes.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_clb_nodes.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_clb_nodes.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_clb_nodes.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/rackspace/rax_clb_ssl.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_clb_ssl.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_clb_ssl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_dns.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_dns.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_dns.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_dns_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_dns_record.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_dns_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:parameter-state-invalid-choice
lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_files_objects.py use-argspec-type-path
lib/ansible/modules/cloud/rackspace/rax_files_objects.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_files_objects.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_files_objects.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/rackspace/rax_files_objects.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_identity.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_identity.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_identity.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_identity.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_keypair.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_keypair.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_keypair.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_meta.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_meta.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_meta.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_mon_alarm.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_mon_alarm.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_mon_alarm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_mon_check.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_mon_check.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_mon_check.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_mon_check.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_mon_entity.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_mon_entity.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_mon_entity.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_mon_notification.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_mon_notification.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_mon_notification.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_mon_notification_plan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_mon_notification_plan.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_mon_notification_plan.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/rackspace/rax_mon_notification_plan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_network.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_network.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/rackspace/rax_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_queue.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_queue.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_queue.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_scaling_group.py use-argspec-type-path # fix needed
lib/ansible/modules/cloud/rackspace/rax_scaling_group.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_scaling_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_scaling_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/rackspace/rax_scaling_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_scaling_policy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_scaling_policy.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_scaling_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/scaleway/_scaleway_image_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/_scaleway_image_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/_scaleway_image_facts.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/_scaleway_image_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/_scaleway_ip_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/_scaleway_ip_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/_scaleway_ip_facts.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/_scaleway_ip_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/_scaleway_organization_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/_scaleway_organization_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/_scaleway_security_group_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/_scaleway_security_group_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/_scaleway_security_group_facts.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/_scaleway_security_group_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/_scaleway_server_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/_scaleway_server_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/_scaleway_server_facts.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/_scaleway_server_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/_scaleway_snapshot_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/_scaleway_snapshot_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/_scaleway_snapshot_facts.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/_scaleway_snapshot_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/_scaleway_volume_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/_scaleway_volume_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/_scaleway_volume_facts.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/_scaleway_volume_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_compute.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_compute.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_compute.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/scaleway_compute.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/scaleway/scaleway_compute.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/scaleway/scaleway_image_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_image_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_image_info.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/scaleway_image_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_ip.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_ip.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_ip.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/scaleway_ip_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_ip_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_ip_info.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/scaleway_ip_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_lb.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_lb.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_lb.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/scaleway_lb.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/scaleway/scaleway_lb.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/scaleway/scaleway_organization_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_organization_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_security_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_security_group.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/scaleway_security_group_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_security_group_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_security_group_info.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/scaleway_security_group_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_security_group_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_security_group_rule.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/scaleway_security_group_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/scaleway/scaleway_server_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_server_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_server_info.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/scaleway_server_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_snapshot_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_snapshot_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_snapshot_info.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/scaleway_snapshot_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_sshkey.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_sshkey.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_user_data.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_user_data.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_user_data.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/scaleway_user_data.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/scaleway/scaleway_volume.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_volume.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_volume.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/scaleway_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/scaleway/scaleway_volume_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_volume_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/scaleway/scaleway_volume_info.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/scaleway/scaleway_volume_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/smartos/imgadm.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/smartos/imgadm.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/smartos/smartos_image_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/softlayer/sl_vm.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/softlayer/sl_vm.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/softlayer/sl_vm.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/softlayer/sl_vm.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/softlayer/sl_vm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/univention/udm_dns_record.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/univention/udm_dns_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/univention/udm_dns_zone.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/univention/udm_dns_zone.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/univention/udm_dns_zone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/univention/udm_dns_zone.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/univention/udm_group.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/univention/udm_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/univention/udm_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/univention/udm_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/univention/udm_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/univention/udm_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/_vmware_dns_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/_vmware_drs_group_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vca_vapp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vca_vapp.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vmware/vca_vapp.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vca_vapp.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/vmware/vca_vapp.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_category.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/vmware/vmware_category.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/vmware/vmware_cfg_backup.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_cluster.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/vmware/vmware_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_cluster_drs.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/cloud/vmware/vmware_cluster_ha.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/vmware/vmware_content_library_manager.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_datastore_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_deploy_ovf.py use-argspec-type-path
lib/ansible/modules/cloud/vmware/vmware_deploy_ovf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_drs_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_drs_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_drs_group_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_dvs_host.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/vmware/vmware_dvs_host.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_dvs_host.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_dvs_host.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_dvs_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_dvs_host.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_dvswitch.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_dvswitch.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_dvswitch.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_dvswitch_lacp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_dvswitch_pvlans.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_guest.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_guest.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_guest.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_guest.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_guest_boot_manager.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_guest_controller.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/vmware/vmware_guest_custom_attribute_defs.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_guest_custom_attributes.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_guest_custom_attributes.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_guest_custom_attributes.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_guest_custom_attributes.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_guest_custom_attributes.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_guest_disk.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_guest_file_operation.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_guest_file_operation.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_guest_file_operation.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_guest_file_operation.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_guest_file_operation.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_guest_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_guest_network.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_guest_sendkey.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_guest_serial_port.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_guest_snapshot.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_host_acceptance.py validate-modules:parameter-state-invalid-choice
lib/ansible/modules/cloud/vmware/vmware_host_datastore.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_host_dns.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_host_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_host_firewall_manager.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_host_lockdown.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_host_ntp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_host_snmp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_local_role_manager.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_portgroup.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_portgroup.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_portgroup.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_portgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_portgroup.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_tag_manager.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_vcenter_settings.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_vcenter_settings.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_vcenter_settings.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_vcenter_settings.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_vm_host_drs_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware/vmware_vm_shell.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_vm_vm_drs_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_vspan_session.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_vspan_session.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_vspan_session.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_vswitch.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware/vsphere_copy.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware_httpapi/vmware_appliance_access_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware_httpapi/vmware_appliance_access_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware_httpapi/vmware_appliance_health_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware_httpapi/vmware_appliance_health_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware_httpapi/vmware_cis_category_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware_httpapi/vmware_cis_category_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vmware_httpapi/vmware_core_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vmware_httpapi/vmware_core_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vultr/_vultr_block_storage_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_dns_domain_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_firewall_group_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_network_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_os_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_region_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_server_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_ssh_key_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_startup_script_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_user_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_block_storage.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vultr/vultr_block_storage.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/vultr/vultr_block_storage.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vultr/vultr_dns_domain.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vultr/vultr_dns_domain_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_dns_record.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vultr/vultr_dns_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vultr/vultr_firewall_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vultr/vultr_firewall_group_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_firewall_rule.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vultr/vultr_firewall_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vultr/vultr_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vultr/vultr_network_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_region_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_server.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/vultr/vultr_server_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_startup_script_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/webfaction/webfaction_app.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/webfaction/webfaction_db.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/webfaction/webfaction_domain.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/webfaction/webfaction_domain.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/webfaction/webfaction_domain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/webfaction/webfaction_mailbox.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/webfaction/webfaction_site.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/webfaction/webfaction_site.py validate-modules:parameter-list-no-elements
lib/ansible/modules/cloud/webfaction/webfaction_site.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:doc-elements-mismatch
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/xenserver/xenserver_guest_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/cloud/xenserver/xenserver_guest_powerstate.py validate-modules:doc-required-mismatch
lib/ansible/modules/clustering/consul/consul.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/consul/consul.py validate-modules:parameter-list-no-elements
lib/ansible/modules/clustering/consul/consul.py validate-modules:undocumented-parameter
lib/ansible/modules/clustering/consul/consul_acl.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/consul/consul_acl.py validate-modules:doc-required-mismatch
lib/ansible/modules/clustering/consul/consul_acl.py validate-modules:parameter-list-no-elements
lib/ansible/modules/clustering/consul/consul_kv.py validate-modules:doc-required-mismatch
lib/ansible/modules/clustering/consul/consul_kv.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/consul/consul_session.py validate-modules:parameter-list-no-elements
lib/ansible/modules/clustering/consul/consul_session.py validate-modules:parameter-state-invalid-choice
lib/ansible/modules/clustering/etcd3.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/clustering/etcd3.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/k8s/k8s.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/clustering/k8s/k8s.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/k8s/k8s.py validate-modules:doc-required-mismatch
lib/ansible/modules/clustering/k8s/k8s.py validate-modules:parameter-list-no-elements
lib/ansible/modules/clustering/k8s/k8s.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/k8s/k8s.py validate-modules:return-syntax-error
lib/ansible/modules/clustering/k8s/k8s_auth.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/k8s/k8s_auth.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/k8s/k8s_info.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/k8s/k8s_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/clustering/k8s/k8s_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/k8s/k8s_scale.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/k8s/k8s_scale.py validate-modules:doc-required-mismatch
lib/ansible/modules/clustering/k8s/k8s_scale.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/k8s/k8s_scale.py validate-modules:return-syntax-error
lib/ansible/modules/clustering/k8s/k8s_service.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/k8s/k8s_service.py validate-modules:parameter-list-no-elements
lib/ansible/modules/clustering/k8s/k8s_service.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/k8s/k8s_service.py validate-modules:return-syntax-error
lib/ansible/modules/clustering/pacemaker_cluster.py validate-modules:doc-required-mismatch
lib/ansible/modules/clustering/pacemaker_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/znode.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/clustering/znode.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/znode.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/commands/command.py validate-modules:doc-missing-type
lib/ansible/modules/commands/command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/commands/command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/commands/command.py validate-modules:undocumented-parameter
lib/ansible/modules/commands/expect.py validate-modules:doc-missing-type
lib/ansible/modules/crypto/acme/acme_account_info.py validate-modules:return-syntax-error
lib/ansible/modules/crypto/acme/acme_certificate.py validate-modules:doc-elements-mismatch
lib/ansible/modules/database/aerospike/aerospike_migrations.py yamllint:unparsable-with-libyaml
lib/ansible/modules/database/influxdb/influxdb_database.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/database/influxdb/influxdb_database.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/influxdb/influxdb_query.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/database/influxdb/influxdb_query.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/influxdb/influxdb_retention_policy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/database/influxdb/influxdb_retention_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/influxdb/influxdb_retention_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/influxdb/influxdb_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/database/influxdb/influxdb_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/database/influxdb/influxdb_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/influxdb/influxdb_write.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/database/influxdb/influxdb_write.py validate-modules:parameter-list-no-elements
lib/ansible/modules/database/influxdb/influxdb_write.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/misc/elasticsearch_plugin.py validate-modules:doc-missing-type
lib/ansible/modules/database/misc/elasticsearch_plugin.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/database/misc/elasticsearch_plugin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/misc/kibana_plugin.py validate-modules:doc-missing-type
lib/ansible/modules/database/misc/kibana_plugin.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/database/misc/kibana_plugin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/misc/redis.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/misc/redis.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/misc/riak.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/database/misc/riak.py validate-modules:doc-missing-type
lib/ansible/modules/database/misc/riak.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/mongodb/mongodb_parameter.py use-argspec-type-path
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:doc-missing-type
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/mongodb/mongodb_replicaset.py use-argspec-type-path
lib/ansible/modules/database/mongodb/mongodb_replicaset.py validate-modules:parameter-list-no-elements
lib/ansible/modules/database/mongodb/mongodb_shard.py use-argspec-type-path
lib/ansible/modules/database/mongodb/mongodb_shard.py validate-modules:doc-missing-type
lib/ansible/modules/database/mongodb/mongodb_shard.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/mongodb/mongodb_shard.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/mongodb/mongodb_user.py use-argspec-type-path
lib/ansible/modules/database/mongodb/mongodb_user.py validate-modules:doc-missing-type
lib/ansible/modules/database/mongodb/mongodb_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/database/mongodb/mongodb_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/mongodb/mongodb_user.py validate-modules:undocumented-parameter
lib/ansible/modules/database/mssql/mssql_db.py validate-modules:doc-missing-type
lib/ansible/modules/database/mssql/mssql_db.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/mysql/mysql_db.py validate-modules:doc-elements-mismatch
lib/ansible/modules/database/mysql/mysql_db.py validate-modules:parameter-list-no-elements
lib/ansible/modules/database/mysql/mysql_db.py validate-modules:use-run-command-not-popen
lib/ansible/modules/database/mysql/mysql_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/database/mysql/mysql_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/database/mysql/mysql_query.py validate-modules:parameter-list-no-elements
lib/ansible/modules/database/mysql/mysql_user.py validate-modules:undocumented-parameter
lib/ansible/modules/database/mysql/mysql_variables.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/postgresql/postgresql_db.py use-argspec-type-path
lib/ansible/modules/database/postgresql/postgresql_db.py validate-modules:use-run-command-not-popen
lib/ansible/modules/database/postgresql/postgresql_privs.py validate-modules:parameter-documented-multiple-times
lib/ansible/modules/database/postgresql/postgresql_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/database/proxysql/proxysql_backend_servers.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_backend_servers.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_backend_servers.py validate-modules:undocumented-parameter
lib/ansible/modules/database/proxysql/proxysql_global_variables.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_global_variables.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_global_variables.py validate-modules:undocumented-parameter
lib/ansible/modules/database/proxysql/proxysql_manage_config.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_manage_config.py validate-modules:undocumented-parameter
lib/ansible/modules/database/proxysql/proxysql_mysql_users.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_mysql_users.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_mysql_users.py validate-modules:undocumented-parameter
lib/ansible/modules/database/proxysql/proxysql_query_rules.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_query_rules.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_query_rules.py validate-modules:undocumented-parameter
lib/ansible/modules/database/proxysql/proxysql_replication_hostgroups.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_replication_hostgroups.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_replication_hostgroups.py validate-modules:undocumented-parameter
lib/ansible/modules/database/proxysql/proxysql_scheduler.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_scheduler.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_scheduler.py validate-modules:undocumented-parameter
lib/ansible/modules/database/vertica/vertica_configuration.py validate-modules:doc-missing-type
lib/ansible/modules/database/vertica/vertica_configuration.py validate-modules:doc-required-mismatch
lib/ansible/modules/database/vertica/vertica_info.py validate-modules:doc-missing-type
lib/ansible/modules/database/vertica/vertica_role.py validate-modules:doc-missing-type
lib/ansible/modules/database/vertica/vertica_role.py validate-modules:undocumented-parameter
lib/ansible/modules/database/vertica/vertica_schema.py validate-modules:doc-missing-type
lib/ansible/modules/database/vertica/vertica_schema.py validate-modules:undocumented-parameter
lib/ansible/modules/database/vertica/vertica_user.py validate-modules:doc-missing-type
lib/ansible/modules/database/vertica/vertica_user.py validate-modules:undocumented-parameter
lib/ansible/modules/files/acl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/files/archive.py use-argspec-type-path # fix needed
lib/ansible/modules/files/archive.py validate-modules:parameter-list-no-elements
lib/ansible/modules/files/assemble.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/files/blockinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/files/blockinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/files/copy.py pylint:blacklisted-name
lib/ansible/modules/files/copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/files/copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/files/copy.py validate-modules:undocumented-parameter
lib/ansible/modules/files/file.py pylint:ansible-bad-function
lib/ansible/modules/files/file.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/files/file.py validate-modules:undocumented-parameter
lib/ansible/modules/files/find.py use-argspec-type-path # fix needed
lib/ansible/modules/files/find.py validate-modules:parameter-list-no-elements
lib/ansible/modules/files/find.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/files/iso_extract.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/files/iso_extract.py validate-modules:parameter-list-no-elements
lib/ansible/modules/files/lineinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/files/lineinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/files/lineinfile.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/files/patch.py pylint:blacklisted-name
lib/ansible/modules/files/read_csv.py validate-modules:parameter-list-no-elements
lib/ansible/modules/files/replace.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/files/stat.py validate-modules:parameter-invalid
lib/ansible/modules/files/stat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/files/stat.py validate-modules:undocumented-parameter
lib/ansible/modules/files/synchronize.py pylint:blacklisted-name
lib/ansible/modules/files/synchronize.py use-argspec-type-path
lib/ansible/modules/files/synchronize.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/files/synchronize.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/files/synchronize.py validate-modules:parameter-list-no-elements
lib/ansible/modules/files/synchronize.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/files/synchronize.py validate-modules:undocumented-parameter
lib/ansible/modules/files/unarchive.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/files/unarchive.py validate-modules:parameter-list-no-elements
lib/ansible/modules/files/xml.py validate-modules:doc-required-mismatch
lib/ansible/modules/files/xml.py validate-modules:parameter-list-no-elements
lib/ansible/modules/identity/cyberark/cyberark_authentication.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/ipa/ipa_hbacrule.py validate-modules:doc-elements-mismatch
lib/ansible/modules/identity/keycloak/keycloak_client.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/identity/keycloak/keycloak_client.py validate-modules:doc-elements-mismatch
lib/ansible/modules/identity/keycloak/keycloak_client.py validate-modules:doc-missing-type
lib/ansible/modules/identity/keycloak/keycloak_client.py validate-modules:parameter-list-no-elements
lib/ansible/modules/identity/keycloak/keycloak_client.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/keycloak/keycloak_clienttemplate.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/identity/keycloak/keycloak_clienttemplate.py validate-modules:doc-elements-mismatch
lib/ansible/modules/identity/keycloak/keycloak_clienttemplate.py validate-modules:doc-missing-type
lib/ansible/modules/identity/keycloak/keycloak_clienttemplate.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/onepassword_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/identity/opendj/opendj_backendprop.py validate-modules:doc-missing-type
lib/ansible/modules/identity/opendj/opendj_backendprop.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_binding.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/messaging/rabbitmq/rabbitmq_binding.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_exchange.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/messaging/rabbitmq/rabbitmq_exchange.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/messaging/rabbitmq/rabbitmq_exchange.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_global_parameter.py validate-modules:doc-missing-type
lib/ansible/modules/messaging/rabbitmq/rabbitmq_global_parameter.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_parameter.py validate-modules:doc-missing-type
lib/ansible/modules/messaging/rabbitmq/rabbitmq_plugin.py validate-modules:doc-missing-type
lib/ansible/modules/messaging/rabbitmq/rabbitmq_policy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/messaging/rabbitmq/rabbitmq_policy.py validate-modules:doc-missing-type
lib/ansible/modules/messaging/rabbitmq/rabbitmq_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_queue.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/messaging/rabbitmq/rabbitmq_queue.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/messaging/rabbitmq/rabbitmq_queue.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_user.py validate-modules:doc-missing-type
lib/ansible/modules/messaging/rabbitmq/rabbitmq_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/messaging/rabbitmq/rabbitmq_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_vhost.py validate-modules:doc-missing-type
lib/ansible/modules/messaging/rabbitmq/rabbitmq_vhost_limits.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/airbrake_deployment.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/airbrake_deployment.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/bigpanda.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/bigpanda.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/bigpanda.py validate-modules:undocumented-parameter
lib/ansible/modules/monitoring/circonus_annotation.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/monitoring/circonus_annotation.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/circonus_annotation.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/datadog/datadog_event.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/datadog/datadog_event.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/monitoring/datadog/datadog_event.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/datadog/datadog_event.py validate-modules:parameter-list-no-elements
lib/ansible/modules/monitoring/datadog/datadog_event.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/datadog/datadog_monitor.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/datadog/datadog_monitor.py validate-modules:parameter-list-no-elements
lib/ansible/modules/monitoring/datadog/datadog_monitor.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/grafana/grafana_dashboard.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/grafana/grafana_dashboard.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/grafana/grafana_dashboard.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/grafana/grafana_datasource.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/grafana/grafana_datasource.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/grafana/grafana_datasource.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/grafana/grafana_datasource.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/grafana/grafana_plugin.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/grafana/grafana_plugin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/honeybadger_deployment.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/icinga2_feature.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/icinga2_host.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/icinga2_host.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/icinga2_host.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/icinga2_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/icinga2_host.py validate-modules:undocumented-parameter
lib/ansible/modules/monitoring/librato_annotation.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/librato_annotation.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/librato_annotation.py validate-modules:parameter-list-no-elements
lib/ansible/modules/monitoring/librato_annotation.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/logentries.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/monitoring/logentries.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/logentries.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/logentries.py validate-modules:undocumented-parameter
lib/ansible/modules/monitoring/logicmonitor.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/logicmonitor.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/logicmonitor.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/monitoring/logicmonitor.py validate-modules:parameter-list-no-elements
lib/ansible/modules/monitoring/logicmonitor.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/logicmonitor.py yamllint:unparsable-with-libyaml
lib/ansible/modules/monitoring/logicmonitor_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/logicmonitor_facts.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/logicmonitor_facts.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/monitoring/logstash_plugin.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/logstash_plugin.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/monitoring/logstash_plugin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/monit.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/monit.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/nagios.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/nagios.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/nagios.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/nagios.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/monitoring/newrelic_deployment.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/pagerduty.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/pagerduty.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/pagerduty.py validate-modules:parameter-list-no-elements
lib/ansible/modules/monitoring/pagerduty.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/pagerduty_alert.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/pagerduty_alert.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/pingdom.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/monitoring/pingdom.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/rollbar_deployment.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/sensu/sensu_check.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/sensu/sensu_check.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/sensu/sensu_check.py validate-modules:parameter-list-no-elements
lib/ansible/modules/monitoring/sensu/sensu_check.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/sensu/sensu_client.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/sensu/sensu_client.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/sensu/sensu_client.py validate-modules:parameter-list-no-elements
lib/ansible/modules/monitoring/sensu/sensu_client.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/sensu/sensu_handler.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/monitoring/sensu/sensu_handler.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/sensu/sensu_handler.py validate-modules:parameter-list-no-elements
lib/ansible/modules/monitoring/sensu/sensu_handler.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/sensu/sensu_silence.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/sensu/sensu_silence.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/sensu/sensu_silence.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/sensu/sensu_subscription.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/spectrum_device.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/spectrum_device.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/spectrum_device.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/stackdriver.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/stackdriver.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/statusio_maintenance.py pylint:blacklisted-name
lib/ansible/modules/monitoring/statusio_maintenance.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/statusio_maintenance.py validate-modules:parameter-list-no-elements
lib/ansible/modules/monitoring/statusio_maintenance.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/uptimerobot.py validate-modules:doc-missing-type
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:doc-elements-mismatch
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:doc-required-mismatch
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:missing-suboption-docs
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:parameter-list-no-elements
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:undocumented-parameter
lib/ansible/modules/monitoring/zabbix/zabbix_group.py validate-modules:doc-elements-mismatch
lib/ansible/modules/monitoring/zabbix/zabbix_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/monitoring/zabbix/zabbix_group_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/monitoring/zabbix/zabbix_group_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/monitoring/zabbix/zabbix_host.py validate-modules:doc-elements-mismatch
lib/ansible/modules/monitoring/zabbix/zabbix_host.py validate-modules:parameter-list-no-elements
lib/ansible/modules/monitoring/zabbix/zabbix_host_info.py validate-modules:doc-elements-mismatch
lib/ansible/modules/monitoring/zabbix/zabbix_host_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/monitoring/zabbix/zabbix_maintenance.py validate-modules:doc-elements-mismatch
lib/ansible/modules/monitoring/zabbix/zabbix_maintenance.py validate-modules:parameter-list-no-elements
lib/ansible/modules/monitoring/zabbix/zabbix_mediatype.py validate-modules:doc-elements-mismatch
lib/ansible/modules/monitoring/zabbix/zabbix_mediatype.py validate-modules:parameter-list-no-elements
lib/ansible/modules/monitoring/zabbix/zabbix_template.py validate-modules:doc-elements-mismatch
lib/ansible/modules/monitoring/zabbix/zabbix_template.py validate-modules:parameter-list-no-elements
lib/ansible/modules/monitoring/zabbix/zabbix_user.py validate-modules:doc-elements-mismatch
lib/ansible/modules/monitoring/zabbix/zabbix_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/net_tools/basics/get_url.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/basics/uri.py pylint:blacklisted-name
lib/ansible/modules/net_tools/basics/uri.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/basics/uri.py validate-modules:parameter-list-no-elements
lib/ansible/modules/net_tools/basics/uri.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/cloudflare_dns.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/dnsimple.py validate-modules:parameter-list-no-elements
lib/ansible/modules/net_tools/dnsmadeeasy.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/dnsmadeeasy.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/dnsmadeeasy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/ip_netns.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/ipinfoio_facts.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/ipinfoio_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/ldap/ldap_entry.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/ldap/ldap_entry.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/ldap/ldap_passwd.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/ldap/ldap_passwd.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/netbox/netbox_device.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/netbox/netbox_device.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/netbox/netbox_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/netbox/netbox_ip_address.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/netbox/netbox_ip_address.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/netbox/netbox_prefix.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/netbox/netbox_prefix.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/netbox/netbox_site.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/netcup_dns.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/netcup_dns.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:doc-elements-mismatch
lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:parameter-alias-self
lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:doc-elements-mismatch
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:parameter-alias-self
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:parameter-list-no-elements
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:doc-elements-mismatch
lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:parameter-list-no-elements
lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:doc-elements-mismatch
lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:doc-elements-mismatch
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:missing-suboption-docs
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:doc-elements-mismatch
lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:doc-missing-type
lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:doc-required-mismatch
lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:parameter-alias-self
lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:undocumented-parameter
lib/ansible/modules/net_tools/nmcli.py validate-modules:parameter-list-no-elements
lib/ansible/modules/net_tools/nmcli.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/nsupdate.py validate-modules:parameter-list-no-elements
lib/ansible/modules/net_tools/nsupdate.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/net_tools/omapi_host.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/a10/a10_server.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/a10/a10_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/a10/a10_server_axapi3.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/a10/a10_server_axapi3.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/a10/a10_server_axapi3.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/a10/a10_service_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/a10/a10_service_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/a10/a10_virtual_server.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/a10/a10_virtual_server.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/a10/a10_virtual_server.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/a10/a10_virtual_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/aci/aci_aaa_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_aaa_user_certificate.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_access_port_block_to_access_port.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_access_port_to_interface_policy_leaf_profile.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_access_sub_port_block_to_access_port.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_aep.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_aep_to_domain.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_ap.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_bd.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_bd_subnet.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_bd_subnet.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aci/aci_bd_to_l3out.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_config_rollback.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_config_snapshot.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_contract.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_contract_subject.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_contract_subject_to_filter.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_domain.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_domain_to_encap_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_domain_to_vlan_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_encap_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_encap_pool_range.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_epg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_epg_monitoring_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_epg_to_contract.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_epg_to_domain.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_fabric_node.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_fabric_scheduler.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_fabric_scheduler.py validate-modules:parameter-alias-self
lib/ansible/modules/network/aci/aci_filter.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_filter_entry.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_firmware_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_firmware_group.py validate-modules:parameter-alias-self
lib/ansible/modules/network/aci/aci_firmware_group_node.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_firmware_group_node.py validate-modules:parameter-alias-self
lib/ansible/modules/network/aci/aci_firmware_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_firmware_policy.py validate-modules:parameter-alias-self
lib/ansible/modules/network/aci/aci_firmware_source.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_cdp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_fc.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_l2.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_leaf_policy_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_leaf_profile.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_lldp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_mcp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_ospf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_ospf.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aci/aci_interface_policy_port_channel.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_policy_port_security.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_interface_selector_to_switch_policy_leaf_profile.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_l3out.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_l3out.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aci/aci_l3out_extepg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_l3out_extsubnet.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_l3out_extsubnet.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aci/aci_l3out_route_tag_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_maintenance_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_maintenance_group_node.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_maintenance_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_rest.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_static_binding_to_epg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_static_binding_to_epg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aci/aci_switch_leaf_selector.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_switch_policy_leaf_profile.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_switch_policy_vpc_protection_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_taboo_contract.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_tenant.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_tenant_action_rule_profile.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_tenant_ep_retention_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_tenant_span_dst_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_tenant_span_src_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_tenant_span_src_group_to_dst_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_vlan_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_vlan_pool_encap_block.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_vmm_credential.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/aci_vmm_credential.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/aci/aci_vrf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_label.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_role.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_role.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aci/mso_schema.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aci/mso_schema_site.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site_anp_epg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site_anp_epg_domain.py pylint:ansible-bad-function
lib/ansible/modules/network/aci/mso_schema_site_anp_epg_domain.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site_anp_epg_staticleaf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site_anp_epg_staticport.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site_anp_epg_subnet.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site_bd_l3out.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site_vrf_region.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site_vrf_region_cidr.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_site_vrf_region_cidr_subnet.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_template_anp_epg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aci/mso_schema_template_bd.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aci/mso_schema_template_contract_filter.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/aci/mso_schema_template_contract_filter.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aci/mso_schema_template_deploy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_schema_template_external_epg_subnet.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aci/mso_schema_template_filter_entry.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aci/mso_site.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_site.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aci/mso_tenant.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_tenant.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aci/mso_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aci/mso_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aireos/aireos_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/aireos/aireos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/aireos/aireos_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aireos/aireos_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aireos/aireos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/aireos/aireos_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/aireos/aireos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/aireos/aireos_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aireos/aireos_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aireos/aireos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/apconos/apconos_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aruba/aruba_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/aruba/aruba_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/aruba/aruba_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aruba/aruba_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aruba/aruba_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/aruba/aruba_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/aruba/aruba_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/aruba/aruba_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/aruba/aruba_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/aruba/aruba_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/asa/asa_acl.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/asa/asa_acl.py validate-modules:doc-missing-type
lib/ansible/modules/network/asa/asa_acl.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/asa/asa_acl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/asa/asa_acl.py validate-modules:undocumented-parameter
lib/ansible/modules/network/asa/asa_acl.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/asa/asa_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/asa/asa_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/asa/asa_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/asa/asa_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/asa/asa_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/asa/asa_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/asa/asa_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/asa/asa_config.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/network/asa/asa_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/asa/asa_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/asa/asa_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/asa/asa_config.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/asa/asa_og.py validate-modules:doc-missing-type
lib/ansible/modules/network/asa/asa_og.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/asa/asa_og.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_actiongroupconfig.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_actiongroupconfig.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_actiongroupconfig.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_actiongroupconfig.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_alertconfig.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_alertconfig.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_alertconfig.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_alertconfig.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_alertemailconfig.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_alertemailconfig.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_alertemailconfig.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_alertemailconfig.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_alertscriptconfig.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_alertscriptconfig.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_alertscriptconfig.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_alertscriptconfig.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_alertsyslogconfig.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_alertsyslogconfig.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_alertsyslogconfig.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_alertsyslogconfig.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_alertsyslogconfig.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_analyticsprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_analyticsprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_analyticsprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_analyticsprofile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_analyticsprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_api_session.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_api_session.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_api_session.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_api_session.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/avi/avi_api_session.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_api_version.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_api_version.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_api_version.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_api_version.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_applicationpersistenceprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_applicationpersistenceprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_applicationpersistenceprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_applicationpersistenceprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_applicationprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_applicationprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_applicationprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_applicationprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_authprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_authprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_authprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_authprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_autoscalelaunchconfig.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_autoscalelaunchconfig.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_autoscalelaunchconfig.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_autoscalelaunchconfig.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_backup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_backup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_backup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_backup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_backupconfiguration.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_backupconfiguration.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_backupconfiguration.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_backupconfiguration.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_certificatemanagementprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_certificatemanagementprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_certificatemanagementprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_certificatemanagementprofile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_certificatemanagementprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_cloud.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_cloud.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_cloud.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_cloud.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_cloud.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_cloudconnectoruser.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_cloudconnectoruser.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_cloudconnectoruser.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_cloudconnectoruser.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_cloudproperties.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_cloudproperties.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_cloudproperties.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_cloudproperties.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_cloudproperties.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_cluster.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_cluster.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_cluster.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_cluster.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_clusterclouddetails.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_clusterclouddetails.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_clusterclouddetails.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_clusterclouddetails.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_controllerproperties.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_controllerproperties.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_controllerproperties.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_controllerproperties.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_controllerproperties.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_customipamdnsprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_customipamdnsprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_customipamdnsprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_customipamdnsprofile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_customipamdnsprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_dnspolicy.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_dnspolicy.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_dnspolicy.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_dnspolicy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_dnspolicy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_errorpagebody.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_errorpagebody.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_errorpagebody.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_errorpagebody.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_errorpageprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_errorpageprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_errorpageprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_errorpageprofile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_errorpageprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_gslb.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_gslb.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_gslb.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_gslb.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_gslb.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_gslbgeodbprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_gslbgeodbprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_gslbgeodbprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_gslbgeodbprofile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_gslbgeodbprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_gslbservice.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_gslbservice.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_gslbservice.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_gslbservice.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_gslbservice.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_gslbservice_patch_member.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_gslbservice_patch_member.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_gslbservice_patch_member.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_gslbservice_patch_member.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_hardwaresecuritymodulegroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_hardwaresecuritymodulegroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_hardwaresecuritymodulegroup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_hardwaresecuritymodulegroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_healthmonitor.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_healthmonitor.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_healthmonitor.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_healthmonitor.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_httppolicyset.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_httppolicyset.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_httppolicyset.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_httppolicyset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_ipaddrgroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_ipaddrgroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_ipaddrgroup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_ipaddrgroup.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_ipaddrgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_ipamdnsproviderprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_ipamdnsproviderprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_ipamdnsproviderprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_ipamdnsproviderprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_l4policyset.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_l4policyset.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_l4policyset.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_l4policyset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_microservicegroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_microservicegroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_microservicegroup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_microservicegroup.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_microservicegroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_network.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_network.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_network.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_network.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_networkprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_networkprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_networkprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_networkprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_networksecuritypolicy.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_networksecuritypolicy.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_networksecuritypolicy.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_networksecuritypolicy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_networksecuritypolicy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_pkiprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_pkiprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_pkiprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_pkiprofile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_pkiprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_pool.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_pool.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_pool.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_pool.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_poolgroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_poolgroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_poolgroup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_poolgroup.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_poolgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_poolgroupdeploymentpolicy.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_poolgroupdeploymentpolicy.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_poolgroupdeploymentpolicy.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_poolgroupdeploymentpolicy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_poolgroupdeploymentpolicy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_prioritylabels.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_prioritylabels.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_prioritylabels.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_prioritylabels.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_prioritylabels.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_role.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_role.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_role.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_role.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_role.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_scheduler.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_scheduler.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_scheduler.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_scheduler.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_seproperties.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_seproperties.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_seproperties.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_seproperties.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_serverautoscalepolicy.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_serverautoscalepolicy.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_serverautoscalepolicy.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_serverautoscalepolicy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_serverautoscalepolicy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_serviceengine.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_serviceengine.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_serviceengine.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_serviceengine.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_serviceengine.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_serviceenginegroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_serviceenginegroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_serviceenginegroup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_serviceenginegroup.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_serviceenginegroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_snmptrapprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_snmptrapprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_snmptrapprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_snmptrapprofile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_snmptrapprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_sslkeyandcertificate.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_sslkeyandcertificate.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_sslkeyandcertificate.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_sslkeyandcertificate.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_sslkeyandcertificate.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_sslprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_sslprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_sslprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_sslprofile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_sslprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_stringgroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_stringgroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_stringgroup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_stringgroup.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_stringgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_systemconfiguration.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_systemconfiguration.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_systemconfiguration.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_systemconfiguration.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_systemconfiguration.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_tenant.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_tenant.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_tenant.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_tenant.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_trafficcloneprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_trafficcloneprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_trafficcloneprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_trafficcloneprofile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_trafficcloneprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_useraccount.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_useraccount.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_useraccount.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_useraccount.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/avi/avi_useraccount.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_useraccountprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_useraccountprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_useraccountprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_useraccountprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_virtualservice.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_virtualservice.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_virtualservice.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_virtualservice.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_virtualservice.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_vrfcontext.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_vrfcontext.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_vrfcontext.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_vrfcontext.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_vrfcontext.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_vsdatascriptset.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_vsdatascriptset.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_vsdatascriptset.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_vsdatascriptset.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_vsdatascriptset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_vsvip.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_vsvip.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_vsvip.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_vsvip.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/avi/avi_vsvip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_webhook.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_webhook.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_webhook.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_webhook.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/bigswitch/bcf_switch.py validate-modules:doc-missing-type
lib/ansible/modules/network/bigswitch/bcf_switch.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/bigswitch/bigmon_chain.py validate-modules:doc-missing-type
lib/ansible/modules/network/bigswitch/bigmon_chain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/bigswitch/bigmon_policy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/bigswitch/bigmon_policy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/bigswitch/bigmon_policy.py validate-modules:doc-missing-type
lib/ansible/modules/network/bigswitch/bigmon_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/check_point/checkpoint_access_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/check_point/checkpoint_access_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/checkpoint_host.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/checkpoint_object_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/check_point/checkpoint_run_script.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/checkpoint_session.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/check_point/checkpoint_task_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/check_point/cp_mgmt_access_layer.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_access_layer_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_access_role.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_access_role_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_access_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_access_rule_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_address_range.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_address_range_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_administrator.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_administrator_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_application_site.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_application_site_category.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_application_site_category_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_application_site_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_application_site_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_application_site_group_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_assign_global_assignment.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_dns_domain.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_dns_domain_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_dynamic_object.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_dynamic_object_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_exception_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_exception_group_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_global_assignment_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_group_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_group_with_exclusion.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_group_with_exclusion_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_host.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_host_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_install_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_mds_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_multicast_address_range.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_multicast_address_range_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_network.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_network_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_package.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_package_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_put_file.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_run_script.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_security_zone.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_security_zone_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_service_dce_rpc.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_service_dce_rpc_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_service_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_service_group_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_service_icmp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_service_icmp6.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_service_icmp6_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_service_icmp_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_service_other.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_service_other_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_service_rpc.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_service_rpc_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_service_sctp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_service_sctp_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_service_tcp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_service_tcp_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_service_udp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_service_udp_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_session_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_simple_gateway.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_simple_gateway_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_tag.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_tag_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_threat_exception.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_threat_exception_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_threat_indicator.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_threat_indicator_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_threat_layer.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_threat_layer_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_threat_profile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_threat_profile_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_threat_protection_override.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_threat_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_threat_rule_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_time.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_time_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_vpn_community_meshed.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_vpn_community_meshed_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_vpn_community_star.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_vpn_community_star_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_wildcard.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/check_point/cp_mgmt_wildcard_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cli/cli_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cli/cli_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cli/cli_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/cli/cli_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cli/cli_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_aaa_server.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_aaa_server.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_acl.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_acl_advance.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl_advance.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_acl_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bfd_global.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_global.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bfd_session.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_session.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bfd_view.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_view.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_view.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/cloudengine/ce_bfd_view.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bfd_view.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cloudengine/ce_bfd_view.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bfd_view.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bgp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bgp_af.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_af.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_command.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_command.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_command.py pylint:blacklisted-name
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_config.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_config.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_dldp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_dldp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py pylint:blacklisted-name
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_evpn_global.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_global.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_facts.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_facts.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_file_copy.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_file_copy.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_info_center_global.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_global.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_info_center_log.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_log.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_ip_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_ip_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_is_is_view.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cloudengine/ce_link_status.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_link_status.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_mlag_config.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_mlag_config.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py pylint:blacklisted-name
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_mtu.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_mtu.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_netconf.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_netconf.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_netstream_export.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_export.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_netstream_global.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_global.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_netstream_template.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_template.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_ntp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_ntp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_ospf.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_ospf.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_reboot.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_reboot.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_rollback.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_rollback.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_sflow.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_sflow.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_snmp_community.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_community.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_snmp_location.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_location.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_snmp_user.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_user.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_startup.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_startup.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_static_route.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_static_route.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_static_route_bfd.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cloudengine/ce_static_route_bfd.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cloudengine/ce_stp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_stp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_switchport.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_switchport.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vlan.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vlan.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vrf.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrf.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vrf_af.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrf_af.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vrrp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrrp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudvision/cv_server_provision.py pylint:blacklisted-name
lib/ansible/modules/network/cloudvision/cv_server_provision.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudvision/cv_server_provision.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cloudvision/cv_server_provision.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/cnos/cnos_backup.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_backup.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_backup.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_backup.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/cnos/cnos_backup.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_backup.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_banner.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cnos/cnos_banner.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_banner.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_banner.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_banner.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_bgp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_bgp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_bgp.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cnos/cnos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_conditional_command.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_conditional_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_conditional_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_conditional_command.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_conditional_template.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_conditional_template.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_conditional_template.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_conditional_template.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cnos/cnos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_config.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_factory.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_factory.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_factory.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_facts.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/cnos/cnos_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cnos/cnos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_facts.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_image.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_image.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_image.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_image.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_lldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_reload.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_reload.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_reload.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_rollback.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_rollback.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_rollback.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_rollback.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/cnos/cnos_rollback.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_rollback.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_save.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_save.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_save.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_showrun.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_showrun.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/cnos/cnos_showrun.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_system.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_system.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cnos/cnos_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_template.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_template.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_template.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_template.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_vlag.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_vlag.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_vlag.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_vlag.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cumulus/nclu.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/cumulus/nclu.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/edgeos/edgeos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/edgeos/edgeos_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/edgeos/edgeos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/edgeos/edgeos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/edgeos/edgeos_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/edgeos/edgeos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/edgeos/edgeos_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/edgeos/edgeos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/edgeswitch/edgeswitch_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/edgeswitch/edgeswitch_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/enos/enos_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/enos/enos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/enos/enos_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/enos/enos_command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/enos/enos_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/enos/enos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/enos/enos_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/enos/enos_command.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/enos/enos_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/enos/enos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/enos/enos_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/enos/enos_config.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/enos/enos_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/enos/enos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/enos/enos_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/enos/enos_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/enos/enos_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/enos/enos_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/enos/enos_facts.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/enos/enos_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/enos/enos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/enos/enos_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/enos/enos_facts.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/eos/_eos_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/_eos_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/eos/_eos_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/_eos_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/_eos_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/_eos_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/_eos_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/_eos_l2_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/_eos_l2_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/eos/_eos_l2_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/_eos_l2_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/_eos_l2_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/_eos_l2_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/_eos_l2_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/_eos_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/_eos_l3_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/eos/_eos_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/_eos_l3_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/_eos_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/_eos_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/_eos_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/_eos_vlan.py future-import-boilerplate
lib/ansible/modules/network/eos/_eos_vlan.py metaclass-boilerplate
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/eos_banner.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_banner.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_banner.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_banner.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_bgp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/eos_bgp.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/eos/eos_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_bgp.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/network/eos/eos_bgp.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/eos/eos_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_command.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_command.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/eos/eos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_config.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_config.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/eos/eos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_eapi.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_eapi.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_eapi.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/eos/eos_eapi.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_eapi.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_eapi.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/eos/eos_interfaces.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/eos/eos_l2_interfaces.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/eos/eos_lldp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/eos_lldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_lldp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_logging.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_logging.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_logging.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/eos_logging.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/eos/eos_logging.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_logging.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_logging.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/eos_logging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_logging.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/eos_system.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_system.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_system.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_system.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_system.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/eos/eos_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_user.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_user.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/eos_user.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/eos/eos_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/eos_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/eos_vrf.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_vrf.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eric_eccli/eric_eccli_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/exos/exos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/exos/exos_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/exos/exos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/exos/exos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/exos/exos_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/exos/exos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/exos/exos_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/exos/exos_l2_interfaces.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/_bigip_asm_policy.py validate-modules:doc-missing-type
lib/ansible/modules/network/f5/_bigip_asm_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/_bigip_asm_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/_bigip_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/_bigip_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/_bigip_gtm_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/f5/_bigip_gtm_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/_bigip_gtm_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/_bigip_gtm_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/bigip_apm_acl.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/f5/bigip_apm_acl.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_apm_network_access.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_apm_network_access.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_apm_policy_fetch.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_apm_policy_import.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_appsvcs_extension.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_asm_dos_application.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/f5/bigip_asm_dos_application.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_asm_dos_application.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_asm_policy_fetch.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_asm_policy_import.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_asm_policy_manage.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_asm_policy_server_technology.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_asm_policy_signature_set.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_cli_alias.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_cli_script.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_configsync_action.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_data_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_data_group.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/f5/bigip_data_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_device_auth.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_auth_ldap.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_auth_ldap.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_device_certificate.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_connectivity.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_connectivity.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_device_dns.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_dns.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_device_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_group_member.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_ha_group.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/f5/bigip_device_ha_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_httpd.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_httpd.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_device_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_device_info.py validate-modules:return-syntax-error
lib/ansible/modules/network/f5/bigip_device_license.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_ntp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_ntp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_device_sshd.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_sshd.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_device_syslog.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_traffic_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_device_traffic_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_device_trust.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_dns_cache_resolver.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/f5/bigip_dns_cache_resolver.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_dns_nameserver.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_dns_resolver.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_dns_zone.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_dns_zone.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_file_copy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_address_list.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/f5/bigip_firewall_address_list.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/f5/bigip_firewall_address_list.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_address_list.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_firewall_dos_profile.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_dos_vector.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_global_rules.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_log_profile.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_log_profile_network.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_log_profile_network.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/network/f5/bigip_firewall_log_profile_network.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_firewall_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_firewall_port_list.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_port_list.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_firewall_rule.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/f5/bigip_firewall_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_rule_list.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_rule_list.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_firewall_schedule.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_firewall_schedule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_gtm_datacenter.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_global.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_monitor_bigip.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_monitor_external.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_monitor_firepass.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_monitor_http.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_monitor_https.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_monitor_tcp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_monitor_tcp_half_open.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_pool.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:doc-missing-type
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:undocumented-parameter
lib/ansible/modules/network/f5/bigip_gtm_server.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_server.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_gtm_topology_record.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_topology_region.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/f5/bigip_gtm_topology_region.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_virtual_server.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_virtual_server.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_gtm_wide_ip.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_gtm_wide_ip.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_hostname.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_iapp_service.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_iapp_service.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_iapp_template.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_ike_peer.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_ike_peer.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_imish_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_imish_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_ipsec_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_irule.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_log_destination.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_log_destination.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/f5/bigip_log_publisher.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_log_publisher.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_lx_package.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_management_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_message_routing_peer.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_message_routing_protocol.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_message_routing_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_message_routing_route.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_message_routing_router.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_message_routing_router.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_message_routing_transport_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_message_routing_transport_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_monitor_dns.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_external.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_gateway_icmp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_http.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_https.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_ldap.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_snmp_dca.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_tcp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_tcp_echo.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_tcp_half_open.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_monitor_udp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_node.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_node.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_partition.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_password_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_policy_rule.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/f5/bigip_policy_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_policy_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:doc-missing-type
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:undocumented-parameter
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:doc-missing-type
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:undocumented-parameter
lib/ansible/modules/network/f5/bigip_profile_analytics.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_analytics.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_profile_client_ssl.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_client_ssl.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_profile_dns.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_fastl4.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_http.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_http.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_profile_http2.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_http2.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/f5/bigip_profile_http2.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_profile_http_compression.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_oneconnect.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_persistence_cookie.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_persistence_src_addr.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_server_ssl.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_tcp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_profile_udp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_provision.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_qkview.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_qkview.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_remote_role.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_remote_syslog.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_remote_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_routedomain.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_routedomain.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_selfip.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_selfip.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_service_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_smtp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_snat_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_snat_pool.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_snat_translation.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_snmp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_snmp_community.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_snmp_trap.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_software_image.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_software_install.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_software_update.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_ssl_certificate.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_ssl_key.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_ssl_ocsp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_static_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_sys_daemon_log_tmm.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_sys_db.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_sys_global.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_timer_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_timer_policy.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/f5/bigip_timer_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_traffic_selector.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_trunk.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_trunk.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_tunnel.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_tunnel.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/f5/bigip_ucs.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_ucs_fetch.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_vcmp_guest.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_vcmp_guest.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_virtual_address.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_virtual_server.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_virtual_server.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigip_vlan.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigip_wait.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_application_fasthttp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_application_fasthttp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigiq_application_fastl4_tcp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_application_fastl4_tcp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigiq_application_fastl4_udp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_application_fastl4_udp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigiq_application_http.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_application_http.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigiq_application_https_offload.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_application_https_offload.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/f5/bigiq_application_https_offload.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigiq_application_https_waf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_application_https_waf.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/f5/bigiq_application_https_waf.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigiq_device_discovery.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_device_discovery.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigiq_device_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_device_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/f5/bigiq_device_info.py validate-modules:return-syntax-error
lib/ansible/modules/network/f5/bigiq_regkey_license.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_regkey_license_assignment.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_regkey_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_utility_license.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/f5/bigiq_utility_license_assignment.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/fortianalyzer/faz_device.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/fortimanager/fmgr_device.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/fortimanager/fmgr_device.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_device_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_device_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/fortimanager/fmgr_device_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_device_provision_template.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/fortimanager/fmgr_device_provision_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwobj_address.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwobj_ippool.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortimanager/fmgr_fwobj_ippool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwobj_ippool6.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortimanager/fmgr_fwobj_ippool6.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwobj_service.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwobj_vip.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortimanager/fmgr_fwobj_vip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwpol_ipv4.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortimanager/fmgr_fwpol_ipv4.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwpol_package.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/fortimanager/fmgr_fwpol_package.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_ha.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_provisioning.py validate-modules:doc-missing-type
lib/ansible/modules/network/fortimanager/fmgr_provisioning.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/fortimanager/fmgr_provisioning.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_query.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortimanager/fmgr_query.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_script.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/fortimanager/fmgr_script.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/fortimanager/fmgr_script.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_appctrl.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortimanager/fmgr_secprof_appctrl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_av.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortimanager/fmgr_secprof_av.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_dns.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_ips.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortimanager/fmgr_secprof_ips.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_profile_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_proxy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortimanager/fmgr_secprof_proxy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_spam.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortimanager/fmgr_secprof_spam.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_ssl_ssh.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortimanager/fmgr_secprof_ssl_ssh.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_voip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_waf.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortimanager/fmgr_secprof_waf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_wanopt.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_web.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortimanager/fmgr_secprof_web.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortios/fortios_address.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/fortios/fortios_address.py validate-modules:doc-missing-type
lib/ansible/modules/network/fortios/fortios_antivirus_quarantine.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_application_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_application_list.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_application_name.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_authentication_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_authentication_scheme.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortios/fortios_dlp_filepattern.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_dlp_sensor.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_dnsfilter_domain_filter.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_dnsfilter_profile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_endpoint_control_profile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_DoS_policy.py validate-modules:parameter-invalid
lib/ansible/modules/network/fortios/fortios_firewall_DoS_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_DoS_policy6.py validate-modules:parameter-invalid
lib/ansible/modules/network/fortios/fortios_firewall_DoS_policy6.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_address.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_address6.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_address6_template.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_addrgrp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_addrgrp6.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_auth_portal.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_central_snat_map.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_identity_based_route.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_interface_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_interface_policy6.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_internet_service.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_internet_service_custom.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_internet_service_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_local_in_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_local_in_policy6.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_multicast_address.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_multicast_address6.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_multicast_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_multicast_policy6.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_policy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_firewall_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_policy46.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_policy6.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_policy64.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_proxy_address.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_proxy_addrgrp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_proxy_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_schedule_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_service_custom.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_service_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_shaping_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_shaping_profile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_sniffer.py validate-modules:parameter-invalid
lib/ansible/modules/network/fortios/fortios_firewall_sniffer.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_ssl_ssh_profile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_ttl_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_vip.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_vip46.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_vip6.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_vip64.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_vipgrp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_vipgrp46.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_vipgrp6.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_vipgrp64.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_firewall_wildcard_fqdn_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_ips_decoder.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_ips_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_ips_sensor.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_ipv4_policy.py validate-modules:doc-missing-type
lib/ansible/modules/network/fortios/fortios_ipv4_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_ipv4_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortios/fortios_log_setting.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_log_syslogd2_setting.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_log_syslogd3_setting.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_log_syslogd4_setting.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_log_syslogd_override_setting.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_log_syslogd_setting.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_log_threat_weight.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_report_chart.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_report_chart.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_report_dataset.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_report_layout.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_access_list.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_access_list6.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_aspath_list.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_bfd.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_bfd6.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_bgp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_community_list.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_isis.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_key_chain.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_multicast.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_multicast6.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_multicast_flow.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_ospf.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_ospf6.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_prefix_list.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_prefix_list6.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_rip.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_ripng.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_router_route_map.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_spamfilter_bwl.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_spamfilter_bword.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_spamfilter_dnsbl.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_spamfilter_iptrust.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_spamfilter_mheader.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_spamfilter_profile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_ssh_filter_profile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_switch_controller_global.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_switch_controller_lldp_profile.py validate-modules:parameter-invalid
lib/ansible/modules/network/fortios/fortios_switch_controller_lldp_profile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_switch_controller_managed_switch.py validate-modules:parameter-invalid
lib/ansible/modules/network/fortios/fortios_switch_controller_managed_switch.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_switch_controller_qos_ip_dscp_map.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_switch_controller_qos_queue_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_switch_controller_quarantine.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_switch_controller_security_policy_802_1X.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_switch_controller_switch_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_switch_controller_vlan.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_admin.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_alarm.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_api_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_automation_action.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_automation_destination.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_automation_stitch.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_central_management.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_cluster_sync.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_csf.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_ddns.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_dhcp6_server.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_dhcp_server.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_system_dhcp_server.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_dns.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_dns_database.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_geoip_override.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_global.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_system_global.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_ha.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_interface.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_link_monitor.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_mobile_tunnel.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_nat64.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_nd_proxy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_ntp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_object_tagging.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_replacemsg_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_sdn_connector.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_session_ttl.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_settings.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_snmp_community.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_snmp_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_switch_interface.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_vdom_exception.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_virtual_wan_link.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_virtual_wire_pair.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_vxlan.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_system_zone.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_user_device.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_user_device_access_list.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_user_device_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_user_fsso_polling.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_user_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_user_peergrp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_user_quarantine.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_user_radius.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_user_security_exempt_list.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_user_setting.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_voip_profile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_concentrator.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_manualkey.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_manualkey_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_phase1.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_phase1_interface.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_phase2_interface.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_vpn_ssl_settings.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_vpn_ssl_web_host_check_software.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_vpn_ssl_web_portal.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_vpn_ssl_web_user_bookmark.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_vpn_ssl_web_user_group_bookmark.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_waf_profile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wanopt_cache_service.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wanopt_content_delivery_network_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_web_proxy_explicit.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_web_proxy_forward_server_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_web_proxy_global.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_web_proxy_profile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:doc-choices-incompatible-type
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:parameter-invalid
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortios/fortios_webfilter_content.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_webfilter_content_header.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_webfilter_profile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_webfilter_urlfilter.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wireless_controller_bonjour_profile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wireless_controller_hotspot20_anqp_3gpp_cellular.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wireless_controller_hotspot20_anqp_nai_realm.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wireless_controller_hotspot20_anqp_roaming_consortium.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wireless_controller_hotspot20_anqp_venue_name.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wireless_controller_hotspot20_h2qp_operator_name.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wireless_controller_hotspot20_h2qp_osu_provider.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wireless_controller_hotspot20_hs_profile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wireless_controller_hotspot20_icon.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wireless_controller_hotspot20_qos_map.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wireless_controller_inter_controller.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wireless_controller_qos_profile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wireless_controller_setting.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_wireless_controller_timers.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wireless_controller_vap.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wireless_controller_vap_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wireless_controller_wtp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_wireless_controller_wtp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wireless_controller_wtp_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/fortios/fortios_wireless_controller_wtp_profile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_wireless_controller_wtp_profile.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/frr/frr_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/frr/frr_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/icx/icx_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/icx/icx_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/icx/icx_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/icx/icx_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/icx/icx_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/icx/icx_l3_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/icx/icx_l3_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/icx/icx_linkagg.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/icx/icx_linkagg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/icx/icx_linkagg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/icx/icx_lldp.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/icx/icx_lldp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/icx/icx_logging.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/icx/icx_logging.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/icx/icx_static_route.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/icx/icx_static_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/icx/icx_system.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/icx/icx_system.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/icx/icx_user.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/icx/icx_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/icx/icx_vlan.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/icx/icx_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/icx/icx_vlan.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/illumos/dladm_etherstub.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_etherstub.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/dladm_iptun.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_iptun.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/dladm_iptun.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/illumos/dladm_linkprop.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_linkprop.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/dladm_linkprop.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/network/illumos/dladm_linkprop.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/illumos/dladm_vlan.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_vlan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/illumos/dladm_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/dladm_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/illumos/dladm_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/illumos/dladm_vnic.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_vnic.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/illumos/dladm_vnic.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/flowadm.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/flowadm.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/illumos/flowadm.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/ipadm_addr.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_addr.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/ipadm_addr.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/illumos/ipadm_addrprop.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_addrprop.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/ipadm_addrprop.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/network/illumos/ipadm_if.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_if.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/ipadm_ifprop.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_ifprop.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/illumos/ipadm_ifprop.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/ipadm_ifprop.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/network/illumos/ipadm_prop.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_prop.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/illumos/ipadm_prop.py validate-modules:doc-missing-type
lib/ansible/modules/network/ingate/ig_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/ingate/ig_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ingate/ig_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ingate/ig_config.py validate-modules:return-syntax-error
lib/ansible/modules/network/ingate/ig_unit_information.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ingate/ig_unit_information.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/ios_banner.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_banner.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_banner.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_banner.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_banner.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_bgp.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/ios/ios_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_bgp.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/ios/ios_bgp.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/ios/ios_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_command.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_command.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/ios/ios_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_config.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_config.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/ios/ios_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_facts.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_facts.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/ios/ios_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_interfaces.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/ios/ios_l2_interfaces.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/ios/ios_l3_interfaces.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/ios/ios_l3_interfaces.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/ios/ios_l3_interfaces.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/ios/ios_lag_interfaces.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/ios/ios_lag_interfaces.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/ios_lldp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/ios_lldp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_lldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_lldp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_logging.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_logging.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_logging.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/ios_logging.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_logging.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/ios/ios_logging.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_logging.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_logging.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/ios_logging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_logging.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/ios_ntp.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_ntp.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_ntp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_ntp.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_ntp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_ping.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_ping.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_ping.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_static_route.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_static_route.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/ios_system.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_system.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_system.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_system.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_system.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_system.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/ios/ios_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_user.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_user.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/ios_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_user.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/ios/ios_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/ios_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/ios/ios_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/ios_vrf.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_vrf.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_vrf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_vrf.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_vrf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ios/ios_vrf.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/ios/ios_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_l2_interfaces.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/iosxr/iosxr_l2_interfaces.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/iosxr/iosxr_l3_interfaces.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/iosxr/iosxr_l3_interfaces.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/iosxr/iosxr_lacp_interfaces.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ironware/ironware_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ironware/ironware_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/ironware/ironware_command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/ironware/ironware_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/ironware/ironware_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ironware/ironware_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ironware/ironware_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/ironware/ironware_config.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/ironware/ironware_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/ironware/ironware_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ironware/ironware_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ironware/ironware_facts.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/ironware/ironware_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/ironware/ironware_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/itential/iap_start_workflow.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/itential/iap_token.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/_junos_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/_junos_lldp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/_junos_lldp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/_junos_lldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/_junos_lldp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/_junos_lldp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/_junos_lldp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/_junos_lldp_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/_junos_lldp_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/_junos_lldp_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/_junos_lldp_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/_junos_lldp_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/_junos_static_route.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/_junos_static_route.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/_junos_static_route.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/junos/_junos_static_route.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/_junos_static_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/_junos_static_route.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/junos/_junos_static_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/_junos_static_route.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_banner.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_banner.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_banner.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_banner.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_banner.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_command.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/junos/junos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_config.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/junos/junos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/junos/junos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_interfaces.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/junos/junos_interfaces.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/network/junos/junos_l2_interfaces.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/junos/junos_lag_interfaces.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/junos/junos_lag_interfaces.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_logging.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_logging.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_logging.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/junos/junos_logging.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_logging.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_logging.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/junos/junos_logging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_logging.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_netconf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_netconf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_netconf.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_netconf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_netconf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_netconf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_package.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_package.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_package.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_package.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_package.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_package.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_ping.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_ping.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_ping.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_ping.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_ping.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_ping.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_rpc.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_rpc.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_rpc.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_rpc.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_rpc.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_rpc.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_scp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_scp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_scp.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_scp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_scp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/junos/junos_scp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_scp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_system.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_system.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_system.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_system.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_system.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/junos/junos_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_system.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_user.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/junos/junos_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/junos/junos_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/junos/junos_vlans.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:doc-missing-type
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/junos/junos_vrf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/meraki/meraki_admin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/meraki/meraki_config_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/meraki/meraki_content_filtering.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/meraki/meraki_firewalled_services.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/meraki/meraki_malware.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/meraki/meraki_malware.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/meraki/meraki_mr_l3_firewall.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/meraki/meraki_mr_l3_firewall.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/network/meraki/meraki_mx_l3_firewall.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/meraki/meraki_mx_l3_firewall.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/meraki/meraki_mx_l7_firewall.py pylint:ansible-bad-function
lib/ansible/modules/network/meraki/meraki_mx_l7_firewall.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/meraki/meraki_mx_l7_firewall.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/meraki/meraki_mx_l7_firewall.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/meraki/meraki_mx_l7_firewall.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/meraki/meraki_nat.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/meraki/meraki_nat.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/meraki/meraki_nat.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/meraki/meraki_network.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/meraki/meraki_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/meraki/meraki_organization.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/meraki/meraki_snmp.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/meraki/meraki_snmp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/meraki/meraki_ssid.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/meraki/meraki_ssid.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/meraki/meraki_ssid.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/meraki/meraki_static_route.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/meraki/meraki_switchport.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/meraki/meraki_switchport.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/meraki/meraki_switchport.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/meraki/meraki_syslog.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/meraki/meraki_syslog.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/meraki/meraki_vlan.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/meraki/meraki_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/meraki/meraki_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/meraki/meraki_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/netact/netact_cm_command.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/netact/netact_cm_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netconf/netconf_config.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/netconf/netconf_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/netconf/netconf_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netconf/netconf_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netconf/netconf_get.py validate-modules:doc-missing-type
lib/ansible/modules/network/netconf/netconf_get.py validate-modules:return-syntax-error
lib/ansible/modules/network/netconf/netconf_rpc.py validate-modules:doc-missing-type
lib/ansible/modules/network/netconf/netconf_rpc.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netconf/netconf_rpc.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netconf/netconf_rpc.py validate-modules:return-syntax-error
lib/ansible/modules/network/netscaler/netscaler_cs_action.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/netscaler/netscaler_cs_action.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_cs_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_cs_vserver.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/netscaler/netscaler_cs_vserver.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/netscaler/netscaler_cs_vserver.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/netscaler/netscaler_cs_vserver.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_cs_vserver.py validate-modules:undocumented-parameter
lib/ansible/modules/network/netscaler/netscaler_gslb_service.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/netscaler/netscaler_gslb_service.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_gslb_site.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_gslb_vserver.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/netscaler/netscaler_gslb_vserver.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_gslb_vserver.py validate-modules:undocumented-parameter
lib/ansible/modules/network/netscaler/netscaler_lb_monitor.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/netscaler/netscaler_lb_monitor.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/netscaler/netscaler_lb_monitor.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/netscaler/netscaler_lb_monitor.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_lb_vserver.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/netscaler/netscaler_lb_vserver.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/netscaler/netscaler_lb_vserver.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_nitro_request.py pylint:ansible-bad-function
lib/ansible/modules/network/netscaler/netscaler_nitro_request.py validate-modules:doc-missing-type
lib/ansible/modules/network/netscaler/netscaler_nitro_request.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netscaler/netscaler_nitro_request.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/netscaler/netscaler_nitro_request.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_save_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/netscaler/netscaler_save_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_server.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/netscaler/netscaler_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_service.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/netscaler/netscaler_service.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/netscaler/netscaler_service.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_servicegroup.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/netscaler/netscaler_servicegroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netscaler/netscaler_ssl_certkey.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_cluster.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_cluster.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_ospf.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_ospf.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_ospf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netvisor/_pn_ospf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_ospfarea.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_ospfarea.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_ospfarea.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_show.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_show.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_show.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netvisor/_pn_show.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_trunk.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_trunk.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_trunk.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_vlag.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_vlag.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_vlag.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_vlan.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_vlan.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_vrouter.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouter.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouter.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_vrouterbgp.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouterbgp.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouterbgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_vrouterif.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouterif.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouterif.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netvisor/_pn_vrouterif.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/_pn_vrouterlbif.py future-import-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouterlbif.py metaclass-boilerplate
lib/ansible/modules/network/netvisor/_pn_vrouterlbif.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netvisor/_pn_vrouterlbif.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_access_list.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_access_list.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_access_list_ip.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_access_list_ip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_admin_service.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_admin_session_timeout.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_admin_syslog.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_connection_stats_settings.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_cpu_class.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_cpu_class.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_cpu_mgmt_class.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_dhcp_filter.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_dscp_map.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_dscp_map.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_dscp_map_pri_map.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_fabric_local.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_fabric_local.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_igmp_snooping.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_igmp_snooping.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_ipv6security_raguard.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_ipv6security_raguard_port.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_ipv6security_raguard_vlan.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_log_audit_exception.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netvisor/pn_log_audit_exception.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_port_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_port_cos_bw.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_port_cos_rate_setting.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_prefix_list.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_prefix_list_network.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_role.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netvisor/pn_role.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_snmp_community.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_snmp_community.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_snmp_trap_sink.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_snmp_vacm.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_stp.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_stp_port.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_switch_setup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_user.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_vflow_table_profile.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_vrouter_bgp.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_vrouter_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/netvisor/pn_vrouter_bgp_network.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_vrouter_interface_ip.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_vrouter_loopback_interface.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_vrouter_ospf.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_vrouter_ospf6.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_vrouter_packet_relay.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_vrouter_pim_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/netvisor/pn_vrouter_pim_config.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/netvisor/pn_vtep.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/nos/nos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/nos/nos_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nos/nos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nos/nos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/nos/nos_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nos/nos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nos/nos_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nos/nos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nso/nso_action.py validate-modules:doc-missing-type
lib/ansible/modules/network/nso/nso_action.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nso/nso_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nso/nso_config.py validate-modules:return-syntax-error
lib/ansible/modules/network/nso/nso_query.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nso/nso_query.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nso/nso_show.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nso/nso_verify.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nuage/nuage_vspk.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nuage/nuage_vspk.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nuage/nuage_vspk.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nuage/nuage_vspk.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nuage/nuage_vspk.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/nxos_aaa_server.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_aaa_server.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_acl.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_acl.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_acl_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_acl_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_acl_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_acl_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_acl_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_acl_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_acl_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_banner.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_banner.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_banner.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_banner.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_banner.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_banner.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_bfd_global.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bfd_global.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bfd_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_bfd_global.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_bfd_global.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_bgp.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_bgp_af.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_af.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_config.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_config.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_evpn_global.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_evpn_global.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_evpn_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_evpn_global.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_evpn_global.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_evpn_vni.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_evpn_vni.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_facts.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_facts.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_facts.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nxos/nxos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_feature.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_feature.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_feature.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_feature.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_feature.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_feature.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_feature.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_gir.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_gir.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_hsrp.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_hsrp.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_hsrp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_hsrp.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_hsrp.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_hsrp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_hsrp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_igmp.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_igmp.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_igmp.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_igmp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_igmp_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_install_os.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_install_os.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_install_os.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_install_os.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_install_os.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_install_os.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_interface_ospf.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_interface_ospf.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_interface_ospf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_interface_ospf.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_interface_ospf.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_interface_ospf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_interface_ospf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_lag_interfaces.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_lag_interfaces.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/nxos/nxos_lldp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_lldp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_lldp.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_lldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_lldp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_logging.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_logging.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_ntp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_ntp.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_ntp.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_ntp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_ntp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_ntp_auth.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ntp_auth.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ntp_auth.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_ntp_auth.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_ntp_auth.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_ntp_auth.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_ntp_auth.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_ntp_options.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ntp_options.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ntp_options.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_ntp_options.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_ntp_options.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_ntp_options.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_ntp_options.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_nxapi.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_nxapi.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_ospf.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ospf.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ospf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_ospf.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_ospf.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_ospf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_ospf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_overlay_global.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_overlay_global.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_overlay_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_overlay_global.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_overlay_global.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_overlay_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_pim.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_pim.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_pim.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_pim.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_pim.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_pim.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nxos/nxos_pim.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_pim_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_pim_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_pim_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_pim_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_pim_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_pim_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_ping.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ping.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ping.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_ping.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_ping.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_ping.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_ping.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_reboot.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_reboot.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_reboot.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_reboot.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_reboot.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_rollback.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_rollback.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_rollback.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_rollback.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_rollback.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_rollback.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_rpm.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_rpm.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/nxos_smu.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_smu.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_smu.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_smu.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_smu.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_smu.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_snapshot.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snapshot.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snapshot.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snapshot.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snapshot.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_snmp_community.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_community.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_community.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snmp_community.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snmp_community.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snmp_community.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_snmp_community.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_snmp_contact.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_contact.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_contact.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snmp_contact.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snmp_contact.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snmp_contact.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_snmp_contact.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_snmp_host.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_host.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_host.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snmp_host.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snmp_host.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snmp_host.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_snmp_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_snmp_location.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_location.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_location.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snmp_location.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snmp_location.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snmp_location.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_snmp_location.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_snmp_traps.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_traps.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_traps.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snmp_traps.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snmp_traps.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snmp_traps.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_snmp_user.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_user.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snmp_user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snmp_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snmp_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_snmp_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_static_route.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_static_route.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/nxos_system.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_system.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_telemetry.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nxos/nxos_udld.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_udld.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_udld.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_udld.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_udld.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_udld.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_udld.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_udld_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_udld_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_udld_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_udld_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_udld_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_udld_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_udld_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_user.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_user.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/nxos_vlans.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/nxos/nxos_vpc.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vpc.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vpc.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vpc.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vpc.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vpc.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vpc.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vpc_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vpc_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vpc_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vpc_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vpc_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vpc_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vpc_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vrf.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/nxos_vrf_af.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf_af.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf_af.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vrf_af.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vrf_af.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vrf_af.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vrf_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vrf_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vrf_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vrf_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vrf_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vrrp.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vrrp.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vrrp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vrrp.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vrrp.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vrrp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vrrp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vtp_domain.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_domain.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_domain.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vtp_domain.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vtp_domain.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vtp_domain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vtp_password.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_password.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_password.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vtp_password.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vtp_password.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vtp_password.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vtp_password.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vtp_version.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_version.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_version.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vtp_version.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vtp_version.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vtp_version.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/storage/nxos_devicealias.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/nxos/storage/nxos_vsan.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/nxos/storage/nxos_zone_zoneset.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/onyx/onyx_bgp.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/onyx/onyx_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_buffer_pool.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_buffer_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/onyx/onyx_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/onyx/onyx_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_config.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/onyx/onyx_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/onyx/onyx_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/onyx/onyx_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_igmp.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_igmp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_igmp_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_igmp_vlan.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/onyx/onyx_igmp_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_igmp_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/onyx/onyx_igmp_vlan.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/onyx/onyx_igmp_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/onyx/onyx_lldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/onyx/onyx_magp.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_magp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_mlag_ipl.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_mlag_vip.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/onyx/onyx_mlag_vip.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_mlag_vip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_ntp.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/onyx/onyx_ntp_servers_peers.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/onyx/onyx_ospf.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/onyx/onyx_ospf.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_ospf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/onyx/onyx_protocol.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_ptp_global.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_ptp_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_ptp_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_ptp_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_qos.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_qos.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/onyx/onyx_qos.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_snmp.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/onyx/onyx_snmp_hosts.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/onyx/onyx_snmp_users.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/onyx/onyx_syslog_remote.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/onyx/onyx_traffic_class.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_traffic_class.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/onyx/onyx_traffic_class.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/onyx/onyx_vxlan.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/onyx/onyx_vxlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/onyx/onyx_vxlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/onyx/onyx_vxlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_vxlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/opx/opx_cps.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/opx/opx_cps.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ordnance/ordnance_config.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ordnance/ordnance_facts.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/ovs/openvswitch_bridge.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ovs/openvswitch_bridge.py validate-modules:doc-missing-type
lib/ansible/modules/network/ovs/openvswitch_bridge.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ovs/openvswitch_db.py validate-modules:doc-missing-type
lib/ansible/modules/network/ovs/openvswitch_db.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ovs/openvswitch_port.py validate-modules:doc-missing-type
lib/ansible/modules/network/ovs/openvswitch_port.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/panos/_panos_admin.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_admin.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_admin.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_admin.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_admpwd.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_admpwd.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_admpwd.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_cert_gen_ssh.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_cert_gen_ssh.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_cert_gen_ssh.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_cert_gen_ssh.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_check.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_check.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_check.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/panos/_panos_commit.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_commit.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_commit.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_commit.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_commit.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/panos/_panos_commit.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/panos/_panos_dag.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_dag.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_dag.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_dag_tags.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_dag_tags.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_dag_tags.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_dag_tags.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_dag_tags.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/panos/_panos_dag_tags.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/panos/_panos_import.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_import.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_import.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_interface.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_interface.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_lic.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_lic.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_lic.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_lic.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_loadcfg.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_loadcfg.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_loadcfg.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_match_rule.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_match_rule.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_match_rule.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_match_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_match_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/panos/_panos_mgtconfig.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_mgtconfig.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_mgtconfig.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_nat_rule.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_nat_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_nat_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/panos/_panos_nat_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/panos/_panos_object.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_object.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_object.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_object.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_object.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/panos/_panos_object.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/panos/_panos_op.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_op.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_op.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_op.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_pg.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_pg.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_pg.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_query_rules.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_query_rules.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_query_rules.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_query_rules.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_restart.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_restart.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_restart.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_sag.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_sag.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_sag.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_sag.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/panos/_panos_sag.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/panos/_panos_security_rule.py validate-modules:doc-missing-type
lib/ansible/modules/network/panos/_panos_security_rule.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/panos/_panos_security_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/panos/_panos_security_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/panos/_panos_set.py future-import-boilerplate
lib/ansible/modules/network/panos/_panos_set.py metaclass-boilerplate
lib/ansible/modules/network/panos/_panos_set.py validate-modules:doc-missing-type
lib/ansible/modules/network/radware/vdirect_commit.py validate-modules:doc-missing-type
lib/ansible/modules/network/radware/vdirect_commit.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/radware/vdirect_commit.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/radware/vdirect_file.py validate-modules:doc-missing-type
lib/ansible/modules/network/radware/vdirect_file.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/radware/vdirect_runnable.py validate-modules:doc-missing-type
lib/ansible/modules/network/radware/vdirect_runnable.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/restconf/restconf_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/restconf/restconf_get.py validate-modules:doc-missing-type
lib/ansible/modules/network/routeros/routeros_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/routeros/routeros_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/routeros/routeros_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/routeros/routeros_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/routeros/routeros_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:doc-missing-type
lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:undocumented-parameter
lib/ansible/modules/network/skydive/skydive_edge.py validate-modules:doc-missing-type
lib/ansible/modules/network/skydive/skydive_edge.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/skydive/skydive_edge.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/skydive/skydive_edge.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/skydive/skydive_edge.py validate-modules:undocumented-parameter
lib/ansible/modules/network/skydive/skydive_node.py validate-modules:doc-missing-type
lib/ansible/modules/network/skydive/skydive_node.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/skydive/skydive_node.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/network/skydive/skydive_node.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/skydive/skydive_node.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/skydive/skydive_node.py validate-modules:undocumented-parameter
lib/ansible/modules/network/slxos/slxos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/slxos/slxos_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/slxos/slxos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/slxos/slxos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/slxos/slxos_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/slxos/slxos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/slxos/slxos_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/slxos/slxos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/slxos/slxos_lldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/sros/sros_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/sros/sros_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/sros/sros_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/sros/sros_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/sros/sros_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/sros/sros_command.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/sros/sros_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/sros/sros_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/sros/sros_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/sros/sros_config.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/sros/sros_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/sros/sros_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/sros/sros_config.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/sros/sros_rollback.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/sros/sros_rollback.py validate-modules:doc-missing-type
lib/ansible/modules/network/sros/sros_rollback.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/sros/sros_rollback.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/sros/sros_rollback.py yamllint:unparsable-with-libyaml
lib/ansible/modules/network/voss/voss_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/voss/voss_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/voss/voss_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/voss/voss_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/voss/voss_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/voss/voss_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/voss/voss_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/voss/voss_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/_vyos_interface.py future-import-boilerplate
lib/ansible/modules/network/vyos/_vyos_interface.py metaclass-boilerplate
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/vyos/_vyos_l3_interface.py future-import-boilerplate
lib/ansible/modules/network/vyos/_vyos_l3_interface.py metaclass-boilerplate
lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/vyos/_vyos_linkagg.py future-import-boilerplate
lib/ansible/modules/network/vyos/_vyos_linkagg.py metaclass-boilerplate
lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/vyos/_vyos_lldp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/_vyos_lldp.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/_vyos_lldp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/vyos/_vyos_lldp_interface.py future-import-boilerplate
lib/ansible/modules/network/vyos/_vyos_lldp_interface.py metaclass-boilerplate
lib/ansible/modules/network/vyos/_vyos_lldp_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/vyos/_vyos_lldp_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/_vyos_lldp_interface.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/vyos/_vyos_lldp_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/_vyos_lldp_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/_vyos_lldp_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/vyos/_vyos_lldp_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/vyos/vyos_banner.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_banner.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_banner.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_banner.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/vyos_banner.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_command.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_command.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_command.py pylint:blacklisted-name
lib/ansible/modules/network/vyos/vyos_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/vyos_command.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/vyos/vyos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/vyos_config.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_config.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/vyos_config.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/vyos/vyos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/vyos_facts.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_facts.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/vyos/vyos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/vyos_interfaces.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/vyos/vyos_lag_interfaces.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/vyos/vyos_lag_interfaces.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/vyos/vyos_lldp_global.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/vyos/vyos_lldp_interfaces.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/vyos/vyos_lldp_interfaces.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_logging.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_logging.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:undocumented-parameter
lib/ansible/modules/network/vyos/vyos_ping.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_ping.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_ping.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/vyos_static_route.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_static_route.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:undocumented-parameter
lib/ansible/modules/network/vyos/vyos_system.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_system.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_system.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_system.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_system.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/vyos/vyos_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/vyos_user.py future-import-boilerplate
lib/ansible/modules/network/vyos/vyos_user.py metaclass-boilerplate
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/vyos_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:doc-elements-mismatch
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:doc-required-mismatch
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:parameter-list-no-elements
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/notification/bearychat.py validate-modules:parameter-list-no-elements
lib/ansible/modules/notification/bearychat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/campfire.py validate-modules:doc-missing-type
lib/ansible/modules/notification/catapult.py validate-modules:doc-missing-type
lib/ansible/modules/notification/catapult.py validate-modules:parameter-list-no-elements
lib/ansible/modules/notification/catapult.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/cisco_spark.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/notification/cisco_spark.py validate-modules:doc-missing-type
lib/ansible/modules/notification/cisco_spark.py validate-modules:undocumented-parameter
lib/ansible/modules/notification/flowdock.py validate-modules:doc-missing-type
lib/ansible/modules/notification/grove.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/hipchat.py validate-modules:doc-missing-type
lib/ansible/modules/notification/hipchat.py validate-modules:undocumented-parameter
lib/ansible/modules/notification/irc.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/notification/irc.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/notification/irc.py validate-modules:doc-missing-type
lib/ansible/modules/notification/irc.py validate-modules:doc-required-mismatch
lib/ansible/modules/notification/irc.py validate-modules:parameter-list-no-elements
lib/ansible/modules/notification/irc.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/irc.py validate-modules:undocumented-parameter
lib/ansible/modules/notification/jabber.py validate-modules:doc-missing-type
lib/ansible/modules/notification/jabber.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/logentries_msg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/mail.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/notification/mail.py validate-modules:parameter-list-no-elements
lib/ansible/modules/notification/mail.py validate-modules:undocumented-parameter
lib/ansible/modules/notification/matrix.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/mattermost.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/mqtt.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/notification/mqtt.py validate-modules:doc-missing-type
lib/ansible/modules/notification/mqtt.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/nexmo.py validate-modules:doc-missing-type
lib/ansible/modules/notification/nexmo.py validate-modules:parameter-list-no-elements
lib/ansible/modules/notification/nexmo.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/office_365_connector_card.py validate-modules:doc-missing-type
lib/ansible/modules/notification/office_365_connector_card.py validate-modules:parameter-list-no-elements
lib/ansible/modules/notification/office_365_connector_card.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/pushbullet.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/pushbullet.py validate-modules:undocumented-parameter
lib/ansible/modules/notification/pushover.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/notification/pushover.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/notification/pushover.py validate-modules:doc-missing-type
lib/ansible/modules/notification/pushover.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/rabbitmq_publish.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/rocketchat.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/notification/rocketchat.py validate-modules:parameter-list-no-elements
lib/ansible/modules/notification/rocketchat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/say.py validate-modules:doc-missing-type
lib/ansible/modules/notification/sendgrid.py validate-modules:doc-missing-type
lib/ansible/modules/notification/sendgrid.py validate-modules:doc-required-mismatch
lib/ansible/modules/notification/sendgrid.py validate-modules:parameter-list-no-elements
lib/ansible/modules/notification/sendgrid.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/sendgrid.py validate-modules:undocumented-parameter
lib/ansible/modules/notification/slack.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/notification/slack.py validate-modules:parameter-list-no-elements
lib/ansible/modules/notification/slack.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/snow_record_find.py validate-modules:parameter-list-no-elements
lib/ansible/modules/notification/syslogger.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/telegram.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/twilio.py validate-modules:doc-missing-type
lib/ansible/modules/notification/twilio.py validate-modules:parameter-list-no-elements
lib/ansible/modules/notification/twilio.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/notification/typetalk.py validate-modules:doc-missing-type
lib/ansible/modules/notification/typetalk.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/language/bower.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/language/bower.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/language/bundler.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/language/bundler.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/language/bundler.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/language/bundler.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/language/composer.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/language/composer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/language/cpanm.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/language/cpanm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/language/easy_install.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/language/easy_install.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/language/easy_install.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/language/gem.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/language/maven_artifact.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/language/maven_artifact.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/language/pear.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/language/pear.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/language/pear.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/language/pear.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/language/pip.py pylint:blacklisted-name
lib/ansible/modules/packaging/language/pip.py validate-modules:doc-elements-mismatch
lib/ansible/modules/packaging/language/pip.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/packaging/language/pip_package_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/language/yarn.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/language/yarn.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/apk.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/apk.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/apk.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/apk.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/apt.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/apt.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/apt.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/apt.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/apt_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/apt_key.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/apt_repo.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/apt_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/apt_repository.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/apt_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/apt_repository.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/dnf.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/dnf.py validate-modules:doc-required-mismatch
lib/ansible/modules/packaging/os/dnf.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/dnf.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/dnf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/dpkg_selections.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/dpkg_selections.py validate-modules:doc-required-mismatch
lib/ansible/modules/packaging/os/flatpak.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/flatpak.py validate-modules:use-run-command-not-popen
lib/ansible/modules/packaging/os/flatpak_remote.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/flatpak_remote.py validate-modules:use-run-command-not-popen
lib/ansible/modules/packaging/os/homebrew.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/homebrew.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/homebrew.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/homebrew.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/homebrew.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/homebrew_cask.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/homebrew_cask.py validate-modules:doc-required-mismatch
lib/ansible/modules/packaging/os/homebrew_cask.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/homebrew_cask.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/homebrew_tap.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/homebrew_tap.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/homebrew_tap.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/installp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/layman.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/layman.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/macports.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/macports.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/macports.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/openbsd_pkg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/openbsd_pkg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/openbsd_pkg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/opkg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/opkg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/opkg.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/opkg.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/opkg.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/package_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/package_facts.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/package_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/package_facts.py validate-modules:return-syntax-error
lib/ansible/modules/packaging/os/pacman.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/pacman.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/pacman.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/pkg5.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/pkg5.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/pkg5.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/pkg5_publisher.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/pkg5_publisher.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/pkg5_publisher.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/pkgin.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/pkgin.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/pkgin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/pkgin.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/pkgng.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/pkgng.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/pkgng.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/pkgng.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/pkgutil.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/portage.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/portage.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/portage.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/portinstall.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/portinstall.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/pulp_repo.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/pulp_repo.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/pulp_repo.py validate-modules:doc-required-mismatch
lib/ansible/modules/packaging/os/pulp_repo.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/redhat_subscription.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/redhat_subscription.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/redhat_subscription.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/redhat_subscription.py validate-modules:return-syntax-error
lib/ansible/modules/packaging/os/rhn_channel.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/rhn_channel.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/rhn_channel.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/rhn_register.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/rhsm_release.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/rhsm_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/rhsm_repository.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/rhsm_repository.py validate-modules:doc-required-mismatch
lib/ansible/modules/packaging/os/rhsm_repository.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/rhsm_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/rpm_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/snap.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/snap.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/sorcery.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/sorcery.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/sorcery.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/svr4pkg.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/swdepot.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/swdepot.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/swupd.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/urpmi.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/urpmi.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/urpmi.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/urpmi.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/urpmi.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/urpmi.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/xbps.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/xbps.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/xbps.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/xbps.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/xbps.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/xbps.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/yum.py pylint:blacklisted-name
lib/ansible/modules/packaging/os/yum.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/yum.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/yum.py validate-modules:parameter-invalid
lib/ansible/modules/packaging/os/yum.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/yum.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/yum.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/yum_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/packaging/os/yum_repository.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/yum_repository.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/yum_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/yum_repository.py validate-modules:undocumented-parameter
lib/ansible/modules/packaging/os/zypper.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/packaging/os/zypper.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/zypper.py validate-modules:parameter-list-no-elements
lib/ansible/modules/packaging/os/zypper.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/packaging/os/zypper_repository.py validate-modules:doc-missing-type
lib/ansible/modules/packaging/os/zypper_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/cobbler/cobbler_sync.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/cobbler/cobbler_sync.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/cobbler/cobbler_system.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/cobbler/cobbler_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/cpm/cpm_plugconfig.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/cpm/cpm_plugconfig.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/cpm/cpm_plugconfig.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/cpm/cpm_plugcontrol.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/cpm/cpm_plugcontrol.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/cpm/cpm_plugcontrol.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/cpm/cpm_serial_port_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/cpm/cpm_serial_port_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/cpm/cpm_serial_port_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/cpm/cpm_serial_port_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/cpm/cpm_user.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/cpm/cpm_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/dellemc/idrac_server_config_profile.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/dellemc/idrac_server_config_profile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/dellemc/ome_device_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/foreman/_foreman.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/foreman/_katello.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/foreman/_katello.py yamllint:unparsable-with-libyaml
lib/ansible/modules/remote_management/hpilo/hpilo_boot.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/remote_management/hpilo/hpilo_boot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/hpilo/hpilo_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/hpilo/hponcfg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/imc/imc_rest.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/intersight/intersight_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/intersight/intersight_rest_api.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ipmi/ipmi_boot.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/remote_management/ipmi/ipmi_boot.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/ipmi/ipmi_boot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ipmi/ipmi_power.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/remote_management/ipmi/ipmi_power.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/ipmi/ipmi_power.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/lxca/lxca_cmms.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/lxca/lxca_nodes.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/manageiq/manageiq_alert_profiles.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/manageiq/manageiq_alert_profiles.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_alert_profiles.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_alert_profiles.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/manageiq/manageiq_alert_profiles.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/manageiq/manageiq_alerts.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/manageiq/manageiq_alerts.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_alerts.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_alerts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/manageiq/manageiq_group.py validate-modules:doc-elements-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_group.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/manageiq/manageiq_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_group.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/manageiq/manageiq_policies.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_policies.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_policies.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/remote_management/manageiq/manageiq_policies.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/manageiq/manageiq_policies.py validate-modules:parameter-state-invalid-choice
lib/ansible/modules/remote_management/manageiq/manageiq_policies.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/manageiq/manageiq_tags.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_tags.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_tags.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/remote_management/manageiq/manageiq_tags.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/manageiq/manageiq_tags.py validate-modules:parameter-state-invalid-choice
lib/ansible/modules/remote_management/manageiq/manageiq_tags.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/manageiq/manageiq_tenant.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/manageiq/manageiq_tenant.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_tenant.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_tenant.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/manageiq/manageiq_user.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/manageiq/manageiq_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_user.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/remote_management/manageiq/manageiq_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_datacenter_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/oneview/oneview_datacenter_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_datacenter_info.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_enclosure_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/oneview/oneview_enclosure_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_enclosure_info.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_ethernet_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_ethernet_network.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_ethernet_network_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/oneview/oneview_ethernet_network_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_ethernet_network_info.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_fc_network.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/oneview/oneview_fc_network.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/oneview/oneview_fc_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_fc_network.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_fc_network_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_fc_network_info.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_fcoe_network.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/oneview/oneview_fcoe_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_fcoe_network.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_fcoe_network_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_fcoe_network_info.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group_info.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_network_set.py validate-modules:doc-missing-type
lib/ansible/modules/remote_management/oneview/oneview_network_set.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_network_set.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_network_set_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/oneview/oneview_network_set_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_network_set_info.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_san_manager.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_san_manager.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/oneview/oneview_san_manager_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/oneview/oneview_san_manager_info.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/redfish/idrac_redfish_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/redfish/idrac_redfish_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/redfish/idrac_redfish_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/redfish/redfish_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/redfish/redfish_config.py validate-modules:doc-elements-mismatch
lib/ansible/modules/remote_management/redfish/redfish_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/redfish/redfish_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:doc-elements-mismatch
lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_ip_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_lan_connectivity.py validate-modules:doc-elements-mismatch
lib/ansible/modules/remote_management/ucs/ucs_lan_connectivity.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_mac_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_managed_objects.py validate-modules:doc-elements-mismatch
lib/ansible/modules/remote_management/ucs/ucs_managed_objects.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/ucs/ucs_managed_objects.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_managed_objects.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/ucs/ucs_ntp_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_san_connectivity.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/ucs/ucs_san_connectivity.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/remote_management/ucs/ucs_san_connectivity.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/ucs/ucs_san_connectivity.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_san_connectivity.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/ucs/ucs_service_profile_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_storage_profile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/remote_management/ucs/ucs_storage_profile.py validate-modules:doc-elements-mismatch
lib/ansible/modules/remote_management/ucs/ucs_storage_profile.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/remote_management/ucs/ucs_storage_profile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_timezone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_uuid_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_vhba_template.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/ucs/ucs_vhba_template.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/remote_management/ucs/ucs_vhba_template.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/ucs/ucs_vhba_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_vhba_template.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/ucs/ucs_vlans.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/ucs/ucs_vlans.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_vnic_template.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/remote_management/ucs/ucs_vnic_template.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/ucs/ucs_vnic_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_vsans.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/ucs/ucs_vsans.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/ucs/ucs_vsans.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_vsans.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/ucs/ucs_wwn_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/remote_management/ucs/ucs_wwn_pool.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/remote_management/ucs/ucs_wwn_pool.py validate-modules:parameter-list-no-elements
lib/ansible/modules/remote_management/ucs/ucs_wwn_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/remote_management/ucs/ucs_wwn_pool.py validate-modules:undocumented-parameter
lib/ansible/modules/remote_management/wakeonlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/bzr.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/git.py pylint:blacklisted-name
lib/ansible/modules/source_control/git.py use-argspec-type-path
lib/ansible/modules/source_control/git.py validate-modules:doc-missing-type
lib/ansible/modules/source_control/git.py validate-modules:doc-required-mismatch
lib/ansible/modules/source_control/git.py validate-modules:parameter-list-no-elements
lib/ansible/modules/source_control/git.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/git_config.py validate-modules:doc-missing-type
lib/ansible/modules/source_control/git_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/github/_github_hooks.py validate-modules:doc-missing-type
lib/ansible/modules/source_control/github/github_deploy_key.py validate-modules:doc-missing-type
lib/ansible/modules/source_control/github/github_deploy_key.py validate-modules:parameter-invalid
lib/ansible/modules/source_control/github/github_deploy_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/github/github_issue.py validate-modules:doc-missing-type
lib/ansible/modules/source_control/github/github_issue.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/github/github_key.py validate-modules:doc-missing-type
lib/ansible/modules/source_control/github/github_release.py validate-modules:doc-missing-type
lib/ansible/modules/source_control/github/github_release.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/github/github_webhook.py validate-modules:doc-elements-mismatch
lib/ansible/modules/source_control/github/github_webhook.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/github/github_webhook_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/gitlab/gitlab_deploy_key.py validate-modules:doc-required-mismatch
lib/ansible/modules/source_control/gitlab/gitlab_hook.py validate-modules:doc-required-mismatch
lib/ansible/modules/source_control/gitlab/gitlab_runner.py validate-modules:doc-required-mismatch
lib/ansible/modules/source_control/gitlab/gitlab_runner.py validate-modules:parameter-list-no-elements
lib/ansible/modules/source_control/hg.py validate-modules:doc-required-mismatch
lib/ansible/modules/source_control/hg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/subversion.py validate-modules:doc-required-mismatch
lib/ansible/modules/source_control/subversion.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/source_control/subversion.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/emc/emc_vnx_sg_member.py validate-modules:doc-missing-type
lib/ansible/modules/storage/emc/emc_vnx_sg_member.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/glusterfs/gluster_heal_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/glusterfs/gluster_peer.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/glusterfs/gluster_peer.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/glusterfs/gluster_peer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/glusterfs/gluster_volume.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/glusterfs/gluster_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/ibm/ibm_sa_domain.py validate-modules:doc-missing-type
lib/ansible/modules/storage/ibm/ibm_sa_domain.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/ibm/ibm_sa_host.py validate-modules:doc-missing-type
lib/ansible/modules/storage/ibm/ibm_sa_host.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/ibm/ibm_sa_host_ports.py validate-modules:doc-missing-type
lib/ansible/modules/storage/ibm/ibm_sa_host_ports.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/ibm/ibm_sa_pool.py validate-modules:doc-missing-type
lib/ansible/modules/storage/ibm/ibm_sa_pool.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/ibm/ibm_sa_vol.py validate-modules:doc-missing-type
lib/ansible/modules/storage/ibm/ibm_sa_vol.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/ibm/ibm_sa_vol_map.py validate-modules:doc-missing-type
lib/ansible/modules/storage/ibm/ibm_sa_vol_map.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/infinidat/infini_export.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/storage/infinidat/infini_export.py validate-modules:doc-missing-type
lib/ansible/modules/storage/infinidat/infini_export.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/storage/infinidat/infini_export.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/infinidat/infini_export.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/infinidat/infini_export_client.py validate-modules:doc-missing-type
lib/ansible/modules/storage/infinidat/infini_export_client.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/storage/infinidat/infini_fs.py validate-modules:doc-missing-type
lib/ansible/modules/storage/infinidat/infini_host.py validate-modules:doc-missing-type
lib/ansible/modules/storage/infinidat/infini_host.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/infinidat/infini_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/infinidat/infini_pool.py validate-modules:doc-missing-type
lib/ansible/modules/storage/infinidat/infini_vol.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_na_cdot_aggregate.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_na_cdot_aggregate.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_na_cdot_license.py validate-modules:incompatible-default-type
lib/ansible/modules/storage/netapp/_na_cdot_license.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_na_cdot_lun.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_na_cdot_lun.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_na_cdot_qtree.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_na_cdot_qtree.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_na_cdot_svm.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_na_cdot_svm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_na_cdot_user.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_na_cdot_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_na_cdot_user_role.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_na_cdot_user_role.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_na_cdot_volume.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_na_cdot_volume.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/storage/netapp/_na_cdot_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_na_cdot_volume.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/_na_ontap_gather_facts.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_na_ontap_gather_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/_na_ontap_gather_facts.py validate-modules:parameter-state-invalid-choice
lib/ansible/modules/storage/netapp/_na_ontap_gather_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_sf_account_manager.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_sf_account_manager.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_sf_check_connections.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_sf_snapshot_schedule_manager.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_sf_snapshot_schedule_manager.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/_sf_snapshot_schedule_manager.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_sf_volume_access_group_manager.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_sf_volume_access_group_manager.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/_sf_volume_access_group_manager.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_sf_volume_manager.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/_sf_volume_manager.py validate-modules:parameter-invalid
lib/ansible/modules/storage/netapp/_sf_volume_manager.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/_sf_volume_manager.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/na_elementsw_access_group.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_access_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_elementsw_access_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_account.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_account.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_admin_users.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_admin_users.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_elementsw_admin_users.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_backup.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_backup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_check_connections.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_cluster_config.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_elementsw_cluster_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_cluster_pair.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_cluster_pair.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_cluster_snmp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_drive.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_drive.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_initiators.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_initiators.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/na_elementsw_initiators.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_elementsw_initiators.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_initiators.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/na_elementsw_ldap.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_network_interfaces.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_elementsw_network_interfaces.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_node.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_node.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_elementsw_node.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_snapshot_restore.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_snapshot_schedule.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_snapshot_schedule.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/na_elementsw_snapshot_schedule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_elementsw_snapshot_schedule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_vlan.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_elementsw_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_volume.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_volume.py validate-modules:parameter-invalid
lib/ansible/modules/storage/netapp/na_elementsw_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_volume_clone.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_volume_clone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_volume_pair.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_volume_pair.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_aggregate.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_aggregate.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_aggregate.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_autosupport.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_autosupport.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_autosupport.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain_ports.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain_ports.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain_ports.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain_ports.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_cg_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_cg_snapshot.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_cg_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_cifs.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_cifs.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_cifs_acl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_cifs_server.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_cifs_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_cluster.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_cluster_ha.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_cluster_peer.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_cluster_peer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_command.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_disks.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_dns.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_dns.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_dns.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_export_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_export_policy_rule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_export_policy_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_fcp.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_fcp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_firewall_policy.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_firewall_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_firewall_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_firmware_upgrade.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/na_ontap_flexcache.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_flexcache.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_igroup.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_igroup.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_igroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_igroup_initiator.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_igroup_initiator.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_igroup_initiator.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_info.py validate-modules:parameter-state-invalid-choice
lib/ansible/modules/storage/netapp/na_ontap_interface.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_interface.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_ipspace.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_ipspace.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_iscsi.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_iscsi.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_job_schedule.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_job_schedule.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_job_schedule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_kerberos_realm.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/storage/netapp/na_ontap_ldap_client.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/storage/netapp/na_ontap_ldap_client.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_license.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_license.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_license.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_lun.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_lun.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_lun_copy.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_lun_copy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_lun_map.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_lun_map.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_motd.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_ndmp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_net_ifgrp.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_net_ifgrp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_net_ifgrp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_net_port.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_net_port.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_net_port.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_net_routes.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_net_routes.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_net_subnet.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_net_subnet.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/na_ontap_net_subnet.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_net_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_net_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_nfs.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_nfs.py validate-modules:parameter-invalid
lib/ansible/modules/storage/netapp/na_ontap_nfs.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_node.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_ntp.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_ntp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_nvme.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_nvme_namespace.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/na_ontap_nvme_namespace.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_nvme_subsystem.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_nvme_subsystem.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_ports.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_portset.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_portset.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_portset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_qos_policy_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/na_ontap_qos_policy_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_qtree.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_qtree.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/na_ontap_security_key_manager.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_security_key_manager.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_service_processor_network.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_service_processor_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_snapmirror.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_snapshot_policy.py validate-modules:doc-elements-mismatch
lib/ansible/modules/storage/netapp/na_ontap_snapshot_policy.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_snapshot_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_snmp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_software_update.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_software_update.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_svm.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_svm.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_svm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_svm_options.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_ucadapter.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_ucadapter.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_unix_group.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_unix_group.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_unix_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_unix_user.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_unix_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_user.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_user_role.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_user_role.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_volume.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_volume.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_volume_clone.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_vscan_on_access_policy.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_vscan_on_access_policy.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_vscan_on_access_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_vscan_on_demand_task.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_vscan_on_demand_task.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_vscan_on_demand_task.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_vscan_scanner_pool.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_vscan_scanner_pool.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_vscan_scanner_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_vserver_peer.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/na_ontap_vserver_peer.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/na_ontap_vserver_peer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_alerts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/netapp_e_alerts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_amg.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_amg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_amg.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/netapp_e_amg_role.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_amg_role.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_amg_role.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_amg_role.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/netapp_e_amg_sync.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_amg_sync.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_asup.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/netapp_e_asup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_auditlog.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_auth.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_auth.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_auth.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_drive_firmware.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/netapp_e_facts.py validate-modules:return-syntax-error
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/netapp_e_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_host.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/netapp_e_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_hostgroup.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/netapp_e_hostgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_iscsi_interface.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_iscsi_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_iscsi_target.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_ldap.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_ldap.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/netapp_e_ldap.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_lun_mapping.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_lun_mapping.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_mgmt_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_snapshot_group.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_snapshot_group.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_snapshot_group.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_snapshot_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_snapshot_group.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/netapp_e_snapshot_images.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_snapshot_images.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_snapshot_images.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_snapshot_images.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/netapp_e_snapshot_volume.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_snapshot_volume.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_snapshot_volume.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_snapshot_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/netapp_e_storagepool.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_storagepool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_syslog.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_syslog.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/netapp/netapp_e_syslog.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_volume.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_volume.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/storage/netapp/netapp_e_volume.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_volume.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/purestorage/_purefa_facts.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/_purefa_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/purestorage/_purefa_facts.py validate-modules:return-syntax-error
lib/ansible/modules/storage/purestorage/_purefb_facts.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/purestorage/_purefb_facts.py validate-modules:return-syntax-error
lib/ansible/modules/storage/purestorage/purefa_alert.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_arrayname.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_banner.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_connect.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_dns.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_dns.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/purestorage/purefa_ds.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_ds.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/purestorage/purefa_dsrole.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_dsrole.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/purestorage/purefa_hg.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_hg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/purestorage/purefa_host.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_host.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/purestorage/purefa_info.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/purestorage/purefa_info.py validate-modules:return-syntax-error
lib/ansible/modules/storage/purestorage/purefa_ntp.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_ntp.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/purestorage/purefa_offload.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_pg.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_pg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/purestorage/purefa_pgsnap.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_pgsnap.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/purestorage/purefa_phonehome.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_ra.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_smtp.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_snap.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_snmp.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_syslog.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_vg.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefa_volume.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefb_ds.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefb_ds.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/purestorage/purefb_dsrole.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefb_fs.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/storage/purestorage/purefb_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/purestorage/purefb_info.py validate-modules:return-syntax-error
lib/ansible/modules/storage/purestorage/purefb_s3acc.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/purestorage/purefb_s3user.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/zfs/zfs.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/zfs/zfs_delegate_admin.py validate-modules:doc-required-mismatch
lib/ansible/modules/storage/zfs/zfs_delegate_admin.py validate-modules:parameter-list-no-elements
lib/ansible/modules/storage/zfs/zfs_delegate_admin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/zfs/zfs_facts.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/storage/zfs/zfs_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/zfs/zpool_facts.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/storage/zfs/zpool_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/aix_devices.py validate-modules:doc-required-mismatch
lib/ansible/modules/system/aix_filesystem.py validate-modules:doc-required-mismatch
lib/ansible/modules/system/aix_filesystem.py validate-modules:parameter-list-no-elements
lib/ansible/modules/system/aix_inittab.py validate-modules:doc-required-mismatch
lib/ansible/modules/system/aix_lvg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/system/aix_lvol.py validate-modules:parameter-list-no-elements
lib/ansible/modules/system/alternatives.py pylint:blacklisted-name
lib/ansible/modules/system/at.py validate-modules:doc-required-mismatch
lib/ansible/modules/system/authorized_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/awall.py validate-modules:parameter-list-no-elements
lib/ansible/modules/system/beadm.py pylint:blacklisted-name
lib/ansible/modules/system/cronvar.py pylint:blacklisted-name
lib/ansible/modules/system/dconf.py pylint:blacklisted-name
lib/ansible/modules/system/dconf.py validate-modules:doc-missing-type
lib/ansible/modules/system/dconf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/filesystem.py pylint:blacklisted-name
lib/ansible/modules/system/filesystem.py validate-modules:doc-missing-type
lib/ansible/modules/system/gconftool2.py pylint:blacklisted-name
lib/ansible/modules/system/gconftool2.py validate-modules:parameter-state-invalid-choice
lib/ansible/modules/system/gconftool2.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/getent.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/hostname.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/system/hostname.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/interfaces_file.py pylint:blacklisted-name
lib/ansible/modules/system/interfaces_file.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/iptables.py pylint:blacklisted-name
lib/ansible/modules/system/iptables.py validate-modules:parameter-list-no-elements
lib/ansible/modules/system/java_cert.py pylint:blacklisted-name
lib/ansible/modules/system/java_keystore.py validate-modules:doc-missing-type
lib/ansible/modules/system/kernel_blacklist.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/known_hosts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/system/known_hosts.py validate-modules:doc-missing-type
lib/ansible/modules/system/known_hosts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/lbu.py validate-modules:doc-elements-mismatch
lib/ansible/modules/system/locale_gen.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/lvg.py pylint:blacklisted-name
lib/ansible/modules/system/lvg.py validate-modules:parameter-list-no-elements
lib/ansible/modules/system/lvol.py pylint:blacklisted-name
lib/ansible/modules/system/lvol.py validate-modules:doc-required-mismatch
lib/ansible/modules/system/lvol.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/mksysb.py validate-modules:doc-missing-type
lib/ansible/modules/system/modprobe.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/nosh.py validate-modules:doc-missing-type
lib/ansible/modules/system/nosh.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/nosh.py validate-modules:return-syntax-error
lib/ansible/modules/system/openwrt_init.py validate-modules:doc-missing-type
lib/ansible/modules/system/openwrt_init.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/osx_defaults.py validate-modules:doc-required-mismatch
lib/ansible/modules/system/osx_defaults.py validate-modules:parameter-state-invalid-choice
lib/ansible/modules/system/pam_limits.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/pamd.py validate-modules:parameter-list-no-elements
lib/ansible/modules/system/parted.py pylint:blacklisted-name
lib/ansible/modules/system/parted.py validate-modules:parameter-list-no-elements
lib/ansible/modules/system/parted.py validate-modules:parameter-state-invalid-choice
lib/ansible/modules/system/puppet.py use-argspec-type-path
lib/ansible/modules/system/puppet.py validate-modules:parameter-invalid
lib/ansible/modules/system/puppet.py validate-modules:parameter-list-no-elements
lib/ansible/modules/system/puppet.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/puppet.py validate-modules:undocumented-parameter
lib/ansible/modules/system/python_requirements_info.py validate-modules:parameter-list-no-elements
lib/ansible/modules/system/python_requirements_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/runit.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/system/runit.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/runit.py validate-modules:undocumented-parameter
lib/ansible/modules/system/seboolean.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/sefcontext.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/system/selinux.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/system/selinux.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/selogin.py validate-modules:doc-required-mismatch
lib/ansible/modules/system/selogin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/seport.py validate-modules:parameter-list-no-elements
lib/ansible/modules/system/service.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/system/service.py validate-modules:use-run-command-not-popen
lib/ansible/modules/system/setup.py validate-modules:doc-missing-type
lib/ansible/modules/system/setup.py validate-modules:parameter-list-no-elements
lib/ansible/modules/system/setup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/solaris_zone.py validate-modules:doc-required-mismatch
lib/ansible/modules/system/sysctl.py validate-modules:doc-missing-type
lib/ansible/modules/system/sysctl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/syspatch.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/systemd.py validate-modules:parameter-invalid
lib/ansible/modules/system/systemd.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/systemd.py validate-modules:return-syntax-error
lib/ansible/modules/system/sysvinit.py validate-modules:parameter-list-no-elements
lib/ansible/modules/system/sysvinit.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/sysvinit.py validate-modules:return-syntax-error
lib/ansible/modules/system/timezone.py pylint:blacklisted-name
lib/ansible/modules/system/user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/system/user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/system/user.py validate-modules:parameter-list-no-elements
lib/ansible/modules/system/user.py validate-modules:use-run-command-not-popen
lib/ansible/modules/system/vdo.py validate-modules:doc-required-mismatch
lib/ansible/modules/system/xfconf.py validate-modules:parameter-state-invalid-choice
lib/ansible/modules/system/xfconf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/utilities/logic/async_status.py use-argspec-type-path
lib/ansible/modules/utilities/logic/async_status.py validate-modules!skip
lib/ansible/modules/utilities/logic/async_wrapper.py ansible-doc!skip # not an actual module
lib/ansible/modules/utilities/logic/async_wrapper.py pylint:ansible-bad-function
lib/ansible/modules/utilities/logic/async_wrapper.py use-argspec-type-path
lib/ansible/modules/utilities/logic/wait_for.py validate-modules:parameter-list-no-elements
lib/ansible/modules/web_infrastructure/_nginx_status_facts.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/_nginx_status_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_credential.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/web_infrastructure/ansible_tower/tower_credential.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/web_infrastructure/ansible_tower/tower_credential_type.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_credential_type.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/web_infrastructure/ansible_tower/tower_credential_type.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_group.py use-argspec-type-path
lib/ansible/modules/web_infrastructure/ansible_tower/tower_group.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/web_infrastructure/ansible_tower/tower_group.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_host.py use-argspec-type-path
lib/ansible/modules/web_infrastructure/ansible_tower/tower_host.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_inventory.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_inventory_source.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_inventory_source.py validate-modules:invalid-ansiblemodule-schema
lib/ansible/modules/web_infrastructure/ansible_tower/tower_inventory_source.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_cancel.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_launch.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_launch.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_launch.py validate-modules:parameter-list-no-elements
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_launch.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_list.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_list.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_template.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_template.py validate-modules:undocumented-parameter
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_wait.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_label.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_notification.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_notification.py validate-modules:parameter-list-no-elements
lib/ansible/modules/web_infrastructure/ansible_tower/tower_notification.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_organization.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_project.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_project.py validate-modules:doc-required-mismatch
lib/ansible/modules/web_infrastructure/ansible_tower/tower_project.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_receive.py validate-modules:parameter-list-no-elements
lib/ansible/modules/web_infrastructure/ansible_tower/tower_receive.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_role.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_role.py validate-modules:doc-required-mismatch
lib/ansible/modules/web_infrastructure/ansible_tower/tower_send.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_send.py validate-modules:parameter-list-no-elements
lib/ansible/modules/web_infrastructure/ansible_tower/tower_send.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_settings.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_team.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_team.py validate-modules:undocumented-parameter
lib/ansible/modules/web_infrastructure/ansible_tower/tower_user.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_workflow_launch.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_workflow_launch.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_workflow_template.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/apache2_mod_proxy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/web_infrastructure/apache2_mod_proxy.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/web_infrastructure/apache2_mod_proxy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/apache2_module.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/apache2_module.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/deploy_helper.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/deploy_helper.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:undocumented-parameter
lib/ansible/modules/web_infrastructure/ejabberd_user.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ejabberd_user.py validate-modules:doc-required-mismatch
lib/ansible/modules/web_infrastructure/ejabberd_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/gunicorn.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/gunicorn.py validate-modules:undocumented-parameter
lib/ansible/modules/web_infrastructure/htpasswd.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/web_infrastructure/htpasswd.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/jenkins_job.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/jenkins_job_info.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/jenkins_plugin.py use-argspec-type-path
lib/ansible/modules/web_infrastructure/jenkins_plugin.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/jenkins_plugin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/jenkins_plugin.py validate-modules:undocumented-parameter
lib/ansible/modules/web_infrastructure/jenkins_script.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/jira.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/jira.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/jira.py validate-modules:undocumented-parameter
lib/ansible/modules/web_infrastructure/rundeck_acl_policy.py pylint:blacklisted-name
lib/ansible/modules/web_infrastructure/rundeck_acl_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/rundeck_project.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_aaa_group.py validate-modules:doc-elements-mismatch
lib/ansible/modules/web_infrastructure/sophos_utm/utm_aaa_group_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_ca_host_key_cert.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_ca_host_key_cert_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_dns_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_network_interface_address.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_network_interface_address_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_auth_profile.py validate-modules:doc-elements-mismatch
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_auth_profile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_exception.py validate-modules:doc-elements-mismatch
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_exception.py validate-modules:return-syntax-error
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_frontend.py validate-modules:doc-elements-mismatch
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_frontend.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_frontend_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_location.py validate-modules:doc-elements-mismatch
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_location.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_location_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/supervisorctl.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/supervisorctl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/taiga_issue.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/taiga_issue.py validate-modules:parameter-list-no-elements
lib/ansible/modules/web_infrastructure/taiga_issue.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/windows/async_status.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/setup.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_acl_inheritance.ps1 pslint:PSAvoidTrailingWhitespace
lib/ansible/modules/windows/win_audit_rule.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_certificate_store.ps1 validate-modules:parameter-type-not-in-doc
lib/ansible/modules/windows/win_chocolatey.ps1 validate-modules:doc-elements-mismatch
lib/ansible/modules/windows/win_chocolatey_config.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_chocolatey_facts.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_chocolatey_source.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_copy.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_credential.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_credential.ps1 validate-modules:doc-elements-mismatch
lib/ansible/modules/windows/win_credential.ps1 validate-modules:parameter-type-not-in-doc
lib/ansible/modules/windows/win_defrag.ps1 validate-modules:parameter-list-no-elements
lib/ansible/modules/windows/win_dns_client.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_dns_record.ps1 validate-modules:doc-elements-mismatch
lib/ansible/modules/windows/win_domain.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep
lib/ansible/modules/windows/win_domain.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_domain_controller.ps1 pslint:PSAvoidGlobalVars # New PR
lib/ansible/modules/windows/win_domain_controller.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_domain_controller.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_domain_membership.ps1 pslint:PSAvoidGlobalVars # New PR
lib/ansible/modules/windows/win_domain_membership.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_domain_membership.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_dotnet_ngen.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_dsc.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep
lib/ansible/modules/windows/win_dsc.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_eventlog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_feature.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_file_version.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_find.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep
lib/ansible/modules/windows/win_find.ps1 validate-modules:doc-elements-mismatch
lib/ansible/modules/windows/win_firewall_rule.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_hosts.ps1 validate-modules:doc-elements-mismatch
lib/ansible/modules/windows/win_hotfix.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_hotfix.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_http_proxy.ps1 validate-modules:parameter-list-no-elements
lib/ansible/modules/windows/win_http_proxy.ps1 validate-modules:parameter-type-not-in-doc
lib/ansible/modules/windows/win_iis_virtualdirectory.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_iis_webapplication.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_iis_webapppool.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_iis_webbinding.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_iis_webbinding.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_iis_website.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_inet_proxy.ps1 validate-modules:parameter-list-no-elements
lib/ansible/modules/windows/win_inet_proxy.ps1 validate-modules:parameter-type-not-in-doc
lib/ansible/modules/windows/win_lineinfile.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_mapped_drive.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_netbios.ps1 validate-modules:parameter-list-no-elements
lib/ansible/modules/windows/win_optional_feature.ps1 validate-modules:parameter-list-no-elements
lib/ansible/modules/windows/win_package.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_package.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_package.ps1 validate-modules:doc-elements-mismatch
lib/ansible/modules/windows/win_pagefile.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_pagefile.ps1 pslint:PSUseDeclaredVarsMoreThanAssignments # New PR - bug test_path should be testPath
lib/ansible/modules/windows/win_pagefile.ps1 pslint:PSUseSupportsShouldProcess
lib/ansible/modules/windows/win_pester.ps1 validate-modules:doc-elements-mismatch
lib/ansible/modules/windows/win_product_facts.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_psexec.ps1 validate-modules:parameter-list-no-elements
lib/ansible/modules/windows/win_psexec.ps1 validate-modules:parameter-type-not-in-doc
lib/ansible/modules/windows/win_rabbitmq_plugin.ps1 pslint:PSAvoidUsingInvokeExpression
lib/ansible/modules/windows/win_rabbitmq_plugin.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_rds_cap.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_rds_rap.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_rds_settings.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_regedit.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_region.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep
lib/ansible/modules/windows/win_region.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_regmerge.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_robocopy.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_say.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_security_policy.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_security_policy.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_share.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_shell.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_shortcut.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_snmp.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_unzip.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_updates.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_uri.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep
lib/ansible/modules/windows/win_user_profile.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_user_profile.ps1 validate-modules:parameter-type-not-in-doc
lib/ansible/modules/windows/win_wait_for.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_wait_for_process.ps1 validate-modules:parameter-list-no-elements
lib/ansible/modules/windows/win_webpicmd.ps1 pslint:PSAvoidUsingInvokeExpression
lib/ansible/modules/windows/win_xml.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/parsing/vault/__init__.py pylint:blacklisted-name
lib/ansible/playbook/base.py pylint:blacklisted-name
lib/ansible/playbook/collectionsearch.py required-and-default-attributes # https://github.com/ansible/ansible/issues/61460
lib/ansible/playbook/helpers.py pylint:blacklisted-name
lib/ansible/playbook/role/__init__.py pylint:blacklisted-name
lib/ansible/plugins/action/aireos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/aruba.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/asa.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/bigip.py action-plugin-docs # undocumented action plugin to fix, existed before sanity test was added
lib/ansible/plugins/action/bigiq.py action-plugin-docs # undocumented action plugin to fix, existed before sanity test was added
lib/ansible/plugins/action/ce.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/ce_template.py action-plugin-docs # undocumented action plugin to fix, existed before sanity test was added
lib/ansible/plugins/action/cnos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/dellos10.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/dellos6.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/dellos9.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/enos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/eos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/exos.py action-plugin-docs # undocumented action plugin to fix
lib/ansible/plugins/action/ios.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/iosxr.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/ironware.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/junos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/net_base.py action-plugin-docs # base class for other net_* action plugins which have a matching module
lib/ansible/plugins/action/netconf.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/network.py action-plugin-docs # base class for network action plugins
lib/ansible/plugins/action/normal.py action-plugin-docs # default action plugin for modules without a dedicated action plugin
lib/ansible/plugins/action/nxos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/slxos.py action-plugin-docs # undocumented action plugin to fix
lib/ansible/plugins/action/sros.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/action/voss.py action-plugin-docs # undocumented action plugin to fix
lib/ansible/plugins/action/vyos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local`
lib/ansible/plugins/cache/base.py ansible-doc!skip # not a plugin, but a stub for backwards compatibility
lib/ansible/plugins/callback/hipchat.py pylint:blacklisted-name
lib/ansible/plugins/connection/lxc.py pylint:blacklisted-name
lib/ansible/plugins/connection/vmware_tools.py yamllint:unparsable-with-libyaml
lib/ansible/plugins/doc_fragments/a10.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/a10.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/aireos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/aireos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/alicloud.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/alicloud.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/aruba.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/aruba.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/asa.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/asa.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/auth_basic.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/auth_basic.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/avi.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/avi.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/aws.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/aws.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/aws_credentials.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/aws_credentials.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/aws_region.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/aws_region.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/azure.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/azure.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/azure_tags.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/azure_tags.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/backup.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/backup.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ce.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ce.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/cnos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/cnos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/constructed.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/constructed.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/decrypt.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/decrypt.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/default_callback.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/default_callback.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/dellos10.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/dellos10.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/dellos6.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/dellos6.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/dellos9.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/dellos9.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/digital_ocean.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/digital_ocean.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/dimensiondata.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/dimensiondata.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/dimensiondata_wait.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/dimensiondata_wait.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ec2.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ec2.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/emc.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/emc.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/enos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/enos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/eos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/eos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/f5.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/f5.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/files.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/files.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/fortios.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/fortios.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/gcp.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/gcp.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/hcloud.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/hcloud.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/hetzner.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/hetzner.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/hpe3par.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/hpe3par.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/hwc.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/hwc.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/infinibox.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/infinibox.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/influxdb.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/influxdb.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ingate.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ingate.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/intersight.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/intersight.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/inventory_cache.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/inventory_cache.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ios.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ios.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/iosxr.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/iosxr.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ipa.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ipa.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ironware.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ironware.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/junos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/junos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/k8s_auth_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/k8s_auth_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/k8s_name_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/k8s_name_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/k8s_resource_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/k8s_resource_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/k8s_scale_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/k8s_scale_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/k8s_state_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/k8s_state_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/keycloak.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/keycloak.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/kubevirt_common_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/kubevirt_common_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/kubevirt_vm_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/kubevirt_vm_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ldap.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ldap.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/lxca_common.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/lxca_common.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/manageiq.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/manageiq.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/meraki.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/meraki.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/mysql.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/mysql.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/netapp.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/netapp.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/netconf.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/netconf.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/netscaler.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/netscaler.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/network_agnostic.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/network_agnostic.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/nios.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/nios.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/nso.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/nso.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/nxos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/nxos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oneview.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oneview.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/online.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/online.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/onyx.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/onyx.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/opennebula.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/opennebula.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/openstack.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/openstack.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/openswitch.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/openswitch.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oracle.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oracle.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oracle_creatable_resource.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oracle_creatable_resource.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oracle_display_name_option.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oracle_display_name_option.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oracle_name_option.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oracle_name_option.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oracle_tags.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oracle_tags.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/oracle_wait_options.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/oracle_wait_options.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ovirt.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ovirt.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ovirt_info.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ovirt_info.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/panos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/panos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/postgres.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/postgres.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/proxysql.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/proxysql.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/purestorage.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/purestorage.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/rabbitmq.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/rabbitmq.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/rackspace.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/rackspace.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/return_common.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/return_common.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/scaleway.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/scaleway.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/shell_common.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/shell_common.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/shell_windows.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/shell_windows.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/skydive.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/skydive.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/sros.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/sros.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/tower.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/tower.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/ucs.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/ucs.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/url.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/url.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/utm.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/utm.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/validate.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/validate.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/vca.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/vca.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/vexata.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/vexata.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/vmware.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/vmware.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/vmware_rest_client.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/vmware_rest_client.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/vultr.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/vultr.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/vyos.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/vyos.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/xenserver.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/xenserver.py metaclass-boilerplate
lib/ansible/plugins/doc_fragments/zabbix.py future-import-boilerplate
lib/ansible/plugins/doc_fragments/zabbix.py metaclass-boilerplate
lib/ansible/plugins/lookup/sequence.py pylint:blacklisted-name
lib/ansible/plugins/strategy/__init__.py pylint:blacklisted-name
lib/ansible/plugins/strategy/linear.py pylint:blacklisted-name
lib/ansible/vars/hostvars.py pylint:blacklisted-name
setup.py future-import-boilerplate
setup.py metaclass-boilerplate
test/integration/targets/ansible-runner/files/adhoc_example1.py future-import-boilerplate
test/integration/targets/ansible-runner/files/adhoc_example1.py metaclass-boilerplate
test/integration/targets/ansible-runner/files/playbook_example1.py future-import-boilerplate
test/integration/targets/ansible-runner/files/playbook_example1.py metaclass-boilerplate
test/integration/targets/async/library/async_test.py future-import-boilerplate
test/integration/targets/async/library/async_test.py metaclass-boilerplate
test/integration/targets/async_fail/library/async_test.py future-import-boilerplate
test/integration/targets/async_fail/library/async_test.py metaclass-boilerplate
test/integration/targets/aws_lambda/files/mini_lambda.py future-import-boilerplate
test/integration/targets/aws_lambda/files/mini_lambda.py metaclass-boilerplate
test/integration/targets/collections_plugin_namespace/collection_root/ansible_collections/my_ns/my_col/plugins/lookup/lookup_no_future_boilerplate.py future-import-boilerplate
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util2.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util3.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/modules/my_module.py pylint:relative-beyond-top-level
test/integration/targets/expect/files/test_command.py future-import-boilerplate
test/integration/targets/expect/files/test_command.py metaclass-boilerplate
test/integration/targets/get_url/files/testserver.py future-import-boilerplate
test/integration/targets/get_url/files/testserver.py metaclass-boilerplate
test/integration/targets/group/files/gidget.py future-import-boilerplate
test/integration/targets/group/files/gidget.py metaclass-boilerplate
test/integration/targets/ignore_unreachable/fake_connectors/bad_exec.py future-import-boilerplate
test/integration/targets/ignore_unreachable/fake_connectors/bad_exec.py metaclass-boilerplate
test/integration/targets/ignore_unreachable/fake_connectors/bad_put_file.py future-import-boilerplate
test/integration/targets/ignore_unreachable/fake_connectors/bad_put_file.py metaclass-boilerplate
test/integration/targets/inventory_kubevirt_conformance/inventory_diff.py future-import-boilerplate
test/integration/targets/inventory_kubevirt_conformance/inventory_diff.py metaclass-boilerplate
test/integration/targets/inventory_kubevirt_conformance/server.py future-import-boilerplate
test/integration/targets/inventory_kubevirt_conformance/server.py metaclass-boilerplate
test/integration/targets/jinja2_native_types/filter_plugins/native_plugins.py future-import-boilerplate
test/integration/targets/jinja2_native_types/filter_plugins/native_plugins.py metaclass-boilerplate
test/integration/targets/lambda_policy/files/mini_http_lambda.py future-import-boilerplate
test/integration/targets/lambda_policy/files/mini_http_lambda.py metaclass-boilerplate
test/integration/targets/lookup_properties/lookup-8859-15.ini no-smart-quotes
test/integration/targets/module_precedence/lib_with_extension/ping.py future-import-boilerplate
test/integration/targets/module_precedence/lib_with_extension/ping.py metaclass-boilerplate
test/integration/targets/module_precedence/multiple_roles/bar/library/ping.py future-import-boilerplate
test/integration/targets/module_precedence/multiple_roles/bar/library/ping.py metaclass-boilerplate
test/integration/targets/module_precedence/multiple_roles/foo/library/ping.py future-import-boilerplate
test/integration/targets/module_precedence/multiple_roles/foo/library/ping.py metaclass-boilerplate
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.py future-import-boilerplate
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.py metaclass-boilerplate
test/integration/targets/module_utils/library/test.py future-import-boilerplate
test/integration/targets/module_utils/library/test.py metaclass-boilerplate
test/integration/targets/module_utils/library/test_env_override.py future-import-boilerplate
test/integration/targets/module_utils/library/test_env_override.py metaclass-boilerplate
test/integration/targets/module_utils/library/test_failure.py future-import-boilerplate
test/integration/targets/module_utils/library/test_failure.py metaclass-boilerplate
test/integration/targets/module_utils/library/test_override.py future-import-boilerplate
test/integration/targets/module_utils/library/test_override.py metaclass-boilerplate
test/integration/targets/module_utils/module_utils/bar0/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/foo.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/__init__.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/sub/bar/bar.py pylint:blacklisted-name
test/integration/targets/module_utils/module_utils/yak/zebra/foo.py pylint:blacklisted-name
test/integration/targets/old_style_modules_posix/library/helloworld.sh shebang
test/integration/targets/pause/test-pause.py future-import-boilerplate
test/integration/targets/pause/test-pause.py metaclass-boilerplate
test/integration/targets/pip/files/ansible_test_pip_chdir/__init__.py future-import-boilerplate
test/integration/targets/pip/files/ansible_test_pip_chdir/__init__.py metaclass-boilerplate
test/integration/targets/pip/files/setup.py future-import-boilerplate
test/integration/targets/pip/files/setup.py metaclass-boilerplate
test/integration/targets/run_modules/library/test.py future-import-boilerplate
test/integration/targets/run_modules/library/test.py metaclass-boilerplate
test/integration/targets/s3_bucket_notification/files/mini_lambda.py future-import-boilerplate
test/integration/targets/s3_bucket_notification/files/mini_lambda.py metaclass-boilerplate
test/integration/targets/script/files/no_shebang.py future-import-boilerplate
test/integration/targets/script/files/no_shebang.py metaclass-boilerplate
test/integration/targets/service/files/ansible_test_service.py future-import-boilerplate
test/integration/targets/service/files/ansible_test_service.py metaclass-boilerplate
test/integration/targets/setup_rpm_repo/files/create-repo.py future-import-boilerplate
test/integration/targets/setup_rpm_repo/files/create-repo.py metaclass-boilerplate
test/integration/targets/sns_topic/files/sns_topic_lambda/sns_topic_lambda.py future-import-boilerplate
test/integration/targets/sns_topic/files/sns_topic_lambda/sns_topic_lambda.py metaclass-boilerplate
test/integration/targets/supervisorctl/files/sendProcessStdin.py future-import-boilerplate
test/integration/targets/supervisorctl/files/sendProcessStdin.py metaclass-boilerplate
test/integration/targets/template/files/encoding_1252_utf-8.expected no-smart-quotes
test/integration/targets/template/files/encoding_1252_windows-1252.expected no-smart-quotes
test/integration/targets/template/files/foo.dos.txt line-endings
test/integration/targets/template/role_filter/filter_plugins/myplugin.py future-import-boilerplate
test/integration/targets/template/role_filter/filter_plugins/myplugin.py metaclass-boilerplate
test/integration/targets/template/templates/encoding_1252.j2 no-smart-quotes
test/integration/targets/test_infra/library/test.py future-import-boilerplate
test/integration/targets/test_infra/library/test.py metaclass-boilerplate
test/integration/targets/unicode/unicode.yml no-smart-quotes
test/integration/targets/uri/files/testserver.py future-import-boilerplate
test/integration/targets/uri/files/testserver.py metaclass-boilerplate
test/integration/targets/var_precedence/ansible-var-precedence-check.py future-import-boilerplate
test/integration/targets/var_precedence/ansible-var-precedence-check.py metaclass-boilerplate
test/integration/targets/vars_prompt/test-vars_prompt.py future-import-boilerplate
test/integration/targets/vars_prompt/test-vars_prompt.py metaclass-boilerplate
test/integration/targets/vault/test-vault-client.py future-import-boilerplate
test/integration/targets/vault/test-vault-client.py metaclass-boilerplate
test/integration/targets/wait_for/files/testserver.py future-import-boilerplate
test/integration/targets/wait_for/files/testserver.py metaclass-boilerplate
test/integration/targets/want_json_modules_posix/library/helloworld.py future-import-boilerplate
test/integration/targets/want_json_modules_posix/library/helloworld.py metaclass-boilerplate
test/integration/targets/win_audit_rule/library/test_get_audit_rule.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_chocolatey/files/tools/chocolateyUninstall.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_chocolatey_source/library/choco_source.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_csharp_utils/library/ansible_basic_tests.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_csharp_utils/library/ansible_basic_tests.ps1 pslint:PSUseDeclaredVarsMoreThanAssignments # test setup requires vars to be set globally and not referenced in the same scope
test/integration/targets/win_csharp_utils/library/ansible_become_tests.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xSetReboot/ANSIBLE_xSetReboot.psm1 pslint!skip
test/integration/targets/win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/win_dsc/files/xTestDsc/1.0.0/xTestDsc.psd1 pslint!skip
test/integration/targets/win_dsc/files/xTestDsc/1.0.1/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/win_dsc/files/xTestDsc/1.0.1/xTestDsc.psd1 pslint!skip
test/integration/targets/win_exec_wrapper/library/test_fail.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_iis_webbinding/library/test_get_webbindings.ps1 pslint:PSUseApprovedVerbs
test/integration/targets/win_module_utils/library/argv_parser_test.ps1 pslint:PSUseApprovedVerbs
test/integration/targets/win_module_utils/library/backup_file_test.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_module_utils/library/command_util_test.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_module_utils/library/legacy_only_new_way_win_line_ending.ps1 line-endings
test/integration/targets/win_module_utils/library/legacy_only_old_way_win_line_ending.ps1 line-endings
test/integration/targets/win_ping/library/win_ping_syntax_error.ps1 pslint!skip
test/integration/targets/win_psmodule/files/module/template.psd1 pslint!skip
test/integration/targets/win_psmodule/files/module/template.psm1 pslint!skip
test/integration/targets/win_psmodule/files/setup_modules.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_reboot/templates/post_reboot.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_regmerge/templates/win_line_ending.j2 line-endings
test/integration/targets/win_script/files/test_script.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_creates_file.ps1 pslint:PSAvoidUsingCmdletAliases
test/integration/targets/win_script/files/test_script_removes_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_script/files/test_script_with_args.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_with_splatting.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_stat/library/test_symlink_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_template/files/foo.dos.txt line-endings
test/integration/targets/win_user_right/library/test_get_right.ps1 pslint:PSCustomUseLiteralPath
test/legacy/cleanup_gce.py future-import-boilerplate
test/legacy/cleanup_gce.py metaclass-boilerplate
test/legacy/cleanup_gce.py pylint:blacklisted-name
test/legacy/cleanup_rax.py future-import-boilerplate
test/legacy/cleanup_rax.py metaclass-boilerplate
test/legacy/consul_running.py future-import-boilerplate
test/legacy/consul_running.py metaclass-boilerplate
test/legacy/gce_credentials.py future-import-boilerplate
test/legacy/gce_credentials.py metaclass-boilerplate
test/legacy/gce_credentials.py pylint:blacklisted-name
test/legacy/setup_gce.py future-import-boilerplate
test/legacy/setup_gce.py metaclass-boilerplate
test/lib/ansible_test/_data/requirements/constraints.txt test-constraints
test/lib/ansible_test/_data/requirements/integration.cloud.azure.txt test-constraints
test/lib/ansible_test/_data/sanity/pylint/plugins/string_format.py use-compat-six
test/lib/ansible_test/_data/setup/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
test/lib/ansible_test/_data/setup/windows-httptester.ps1 pslint:PSCustomUseLiteralPath
test/units/config/manager/test_find_ini_config_file.py future-import-boilerplate
test/units/contrib/inventory/test_vmware_inventory.py future-import-boilerplate
test/units/contrib/inventory/test_vmware_inventory.py metaclass-boilerplate
test/units/contrib/inventory/test_vmware_inventory.py pylint:blacklisted-name
test/units/executor/test_play_iterator.py pylint:blacklisted-name
test/units/mock/path.py future-import-boilerplate
test/units/mock/path.py metaclass-boilerplate
test/units/mock/yaml_helper.py future-import-boilerplate
test/units/mock/yaml_helper.py metaclass-boilerplate
test/units/module_utils/aws/test_aws_module.py metaclass-boilerplate
test/units/module_utils/basic/test__symbolic_mode_to_octal.py future-import-boilerplate
test/units/module_utils/basic/test_deprecate_warn.py future-import-boilerplate
test/units/module_utils/basic/test_deprecate_warn.py metaclass-boilerplate
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-no-version
test/units/module_utils/basic/test_exit_json.py future-import-boilerplate
test/units/module_utils/basic/test_get_file_attributes.py future-import-boilerplate
test/units/module_utils/basic/test_heuristic_log_sanitize.py future-import-boilerplate
test/units/module_utils/basic/test_run_command.py future-import-boilerplate
test/units/module_utils/basic/test_run_command.py pylint:blacklisted-name
test/units/module_utils/basic/test_safe_eval.py future-import-boilerplate
test/units/module_utils/basic/test_tmpdir.py future-import-boilerplate
test/units/module_utils/cloud/test_backoff.py future-import-boilerplate
test/units/module_utils/cloud/test_backoff.py metaclass-boilerplate
test/units/module_utils/common/test_dict_transformations.py future-import-boilerplate
test/units/module_utils/common/test_dict_transformations.py metaclass-boilerplate
test/units/module_utils/conftest.py future-import-boilerplate
test/units/module_utils/conftest.py metaclass-boilerplate
test/units/module_utils/facts/base.py future-import-boilerplate
test/units/module_utils/facts/hardware/test_sunos_get_uptime_facts.py future-import-boilerplate
test/units/module_utils/facts/hardware/test_sunos_get_uptime_facts.py metaclass-boilerplate
test/units/module_utils/facts/network/test_generic_bsd.py future-import-boilerplate
test/units/module_utils/facts/other/test_facter.py future-import-boilerplate
test/units/module_utils/facts/other/test_ohai.py future-import-boilerplate
test/units/module_utils/facts/system/test_lsb.py future-import-boilerplate
test/units/module_utils/facts/test_ansible_collector.py future-import-boilerplate
test/units/module_utils/facts/test_collector.py future-import-boilerplate
test/units/module_utils/facts/test_collectors.py future-import-boilerplate
test/units/module_utils/facts/test_facts.py future-import-boilerplate
test/units/module_utils/facts/test_timeout.py future-import-boilerplate
test/units/module_utils/facts/test_utils.py future-import-boilerplate
test/units/module_utils/gcp/test_auth.py future-import-boilerplate
test/units/module_utils/gcp/test_auth.py metaclass-boilerplate
test/units/module_utils/gcp/test_gcp_utils.py future-import-boilerplate
test/units/module_utils/gcp/test_gcp_utils.py metaclass-boilerplate
test/units/module_utils/gcp/test_utils.py future-import-boilerplate
test/units/module_utils/gcp/test_utils.py metaclass-boilerplate
test/units/module_utils/hwc/test_dict_comparison.py future-import-boilerplate
test/units/module_utils/hwc/test_dict_comparison.py metaclass-boilerplate
test/units/module_utils/hwc/test_hwc_utils.py future-import-boilerplate
test/units/module_utils/hwc/test_hwc_utils.py metaclass-boilerplate
test/units/module_utils/json_utils/test_filter_non_json_lines.py future-import-boilerplate
test/units/module_utils/net_tools/test_netbox.py future-import-boilerplate
test/units/module_utils/net_tools/test_netbox.py metaclass-boilerplate
test/units/module_utils/network/avi/test_avi_api_utils.py future-import-boilerplate
test/units/module_utils/network/avi/test_avi_api_utils.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_common.py future-import-boilerplate
test/units/module_utils/network/ftd/test_common.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_configuration.py future-import-boilerplate
test/units/module_utils/network/ftd/test_configuration.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_device.py future-import-boilerplate
test/units/module_utils/network/ftd/test_device.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_parser.py future-import-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_parser.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_validator.py future-import-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_validator.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_with_real_data.py future-import-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_with_real_data.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_upsert_functionality.py future-import-boilerplate
test/units/module_utils/network/ftd/test_upsert_functionality.py metaclass-boilerplate
test/units/module_utils/network/nso/test_nso.py metaclass-boilerplate
test/units/module_utils/parsing/test_convert_bool.py future-import-boilerplate
test/units/module_utils/postgresql/test_postgres.py future-import-boilerplate
test/units/module_utils/postgresql/test_postgres.py metaclass-boilerplate
test/units/module_utils/remote_management/dellemc/test_ome.py future-import-boilerplate
test/units/module_utils/remote_management/dellemc/test_ome.py metaclass-boilerplate
test/units/module_utils/test_database.py future-import-boilerplate
test/units/module_utils/test_database.py metaclass-boilerplate
test/units/module_utils/test_distro.py future-import-boilerplate
test/units/module_utils/test_distro.py metaclass-boilerplate
test/units/module_utils/test_hetzner.py future-import-boilerplate
test/units/module_utils/test_hetzner.py metaclass-boilerplate
test/units/module_utils/test_kubevirt.py future-import-boilerplate
test/units/module_utils/test_kubevirt.py metaclass-boilerplate
test/units/module_utils/test_netapp.py future-import-boilerplate
test/units/module_utils/test_text.py future-import-boilerplate
test/units/module_utils/test_utm_utils.py future-import-boilerplate
test/units/module_utils/test_utm_utils.py metaclass-boilerplate
test/units/module_utils/urls/test_Request.py replace-urlopen
test/units/module_utils/urls/test_fetch_url.py replace-urlopen
test/units/module_utils/xenserver/FakeAnsibleModule.py future-import-boilerplate
test/units/module_utils/xenserver/FakeAnsibleModule.py metaclass-boilerplate
test/units/module_utils/xenserver/FakeXenAPI.py future-import-boilerplate
test/units/module_utils/xenserver/FakeXenAPI.py metaclass-boilerplate
test/units/modules/cloud/google/test_gce_tag.py future-import-boilerplate
test/units/modules/cloud/google/test_gce_tag.py metaclass-boilerplate
test/units/modules/cloud/google/test_gcp_forwarding_rule.py future-import-boilerplate
test/units/modules/cloud/google/test_gcp_forwarding_rule.py metaclass-boilerplate
test/units/modules/cloud/google/test_gcp_url_map.py future-import-boilerplate
test/units/modules/cloud/google/test_gcp_url_map.py metaclass-boilerplate
test/units/modules/cloud/kubevirt/test_kubevirt_rs.py future-import-boilerplate
test/units/modules/cloud/kubevirt/test_kubevirt_rs.py metaclass-boilerplate
test/units/modules/cloud/kubevirt/test_kubevirt_vm.py future-import-boilerplate
test/units/modules/cloud/kubevirt/test_kubevirt_vm.py metaclass-boilerplate
test/units/modules/cloud/linode/conftest.py future-import-boilerplate
test/units/modules/cloud/linode/conftest.py metaclass-boilerplate
test/units/modules/cloud/linode/test_linode.py metaclass-boilerplate
test/units/modules/cloud/linode_v4/conftest.py future-import-boilerplate
test/units/modules/cloud/linode_v4/conftest.py metaclass-boilerplate
test/units/modules/cloud/linode_v4/test_linode_v4.py metaclass-boilerplate
test/units/modules/cloud/misc/test_terraform.py future-import-boilerplate
test/units/modules/cloud/misc/test_terraform.py metaclass-boilerplate
test/units/modules/cloud/misc/virt_net/conftest.py future-import-boilerplate
test/units/modules/cloud/misc/virt_net/conftest.py metaclass-boilerplate
test/units/modules/cloud/misc/virt_net/test_virt_net.py future-import-boilerplate
test/units/modules/cloud/misc/virt_net/test_virt_net.py metaclass-boilerplate
test/units/modules/cloud/openstack/test_os_server.py future-import-boilerplate
test/units/modules/cloud/openstack/test_os_server.py metaclass-boilerplate
test/units/modules/cloud/xenserver/FakeAnsibleModule.py future-import-boilerplate
test/units/modules/cloud/xenserver/FakeAnsibleModule.py metaclass-boilerplate
test/units/modules/cloud/xenserver/FakeXenAPI.py future-import-boilerplate
test/units/modules/cloud/xenserver/FakeXenAPI.py metaclass-boilerplate
test/units/modules/conftest.py future-import-boilerplate
test/units/modules/conftest.py metaclass-boilerplate
test/units/modules/files/test_copy.py future-import-boilerplate
test/units/modules/messaging/rabbitmq/test_rabbimq_user.py future-import-boilerplate
test/units/modules/messaging/rabbitmq/test_rabbimq_user.py metaclass-boilerplate
test/units/modules/monitoring/test_circonus_annotation.py future-import-boilerplate
test/units/modules/monitoring/test_circonus_annotation.py metaclass-boilerplate
test/units/modules/monitoring/test_icinga2_feature.py future-import-boilerplate
test/units/modules/monitoring/test_icinga2_feature.py metaclass-boilerplate
test/units/modules/monitoring/test_pagerduty.py future-import-boilerplate
test/units/modules/monitoring/test_pagerduty.py metaclass-boilerplate
test/units/modules/monitoring/test_pagerduty_alert.py future-import-boilerplate
test/units/modules/monitoring/test_pagerduty_alert.py metaclass-boilerplate
test/units/modules/net_tools/test_nmcli.py future-import-boilerplate
test/units/modules/net_tools/test_nmcli.py metaclass-boilerplate
test/units/modules/network/avi/test_avi_user.py future-import-boilerplate
test/units/modules/network/avi/test_avi_user.py metaclass-boilerplate
test/units/modules/network/check_point/test_checkpoint_access_rule.py future-import-boilerplate
test/units/modules/network/check_point/test_checkpoint_access_rule.py metaclass-boilerplate
test/units/modules/network/check_point/test_checkpoint_host.py future-import-boilerplate
test/units/modules/network/check_point/test_checkpoint_host.py metaclass-boilerplate
test/units/modules/network/check_point/test_checkpoint_session.py future-import-boilerplate
test/units/modules/network/check_point/test_checkpoint_session.py metaclass-boilerplate
test/units/modules/network/check_point/test_checkpoint_task_facts.py future-import-boilerplate
test/units/modules/network/check_point/test_checkpoint_task_facts.py metaclass-boilerplate
test/units/modules/network/cloudvision/test_cv_server_provision.py future-import-boilerplate
test/units/modules/network/cloudvision/test_cv_server_provision.py metaclass-boilerplate
test/units/modules/network/cumulus/test_nclu.py future-import-boilerplate
test/units/modules/network/cumulus/test_nclu.py metaclass-boilerplate
test/units/modules/network/ftd/test_ftd_configuration.py future-import-boilerplate
test/units/modules/network/ftd/test_ftd_configuration.py metaclass-boilerplate
test/units/modules/network/ftd/test_ftd_file_download.py future-import-boilerplate
test/units/modules/network/ftd/test_ftd_file_download.py metaclass-boilerplate
test/units/modules/network/ftd/test_ftd_file_upload.py future-import-boilerplate
test/units/modules/network/ftd/test_ftd_file_upload.py metaclass-boilerplate
test/units/modules/network/ftd/test_ftd_install.py future-import-boilerplate
test/units/modules/network/ftd/test_ftd_install.py metaclass-boilerplate
test/units/modules/network/netscaler/netscaler_module.py future-import-boilerplate
test/units/modules/network/netscaler/netscaler_module.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_action.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_action.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_policy.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_policy.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_vserver.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_vserver.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_service.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_service.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_site.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_site.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_vserver.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_vserver.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_lb_monitor.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_lb_monitor.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_lb_vserver.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_lb_vserver.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_module_utils.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_module_utils.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_nitro_request.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_nitro_request.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_save_config.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_save_config.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_server.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_server.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_service.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_service.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_servicegroup.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_servicegroup.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_ssl_certkey.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_ssl_certkey.py metaclass-boilerplate
test/units/modules/network/nso/nso_module.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_action.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_config.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_query.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_show.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_verify.py metaclass-boilerplate
test/units/modules/network/nuage/nuage_module.py future-import-boilerplate
test/units/modules/network/nuage/nuage_module.py metaclass-boilerplate
test/units/modules/network/nuage/test_nuage_vspk.py future-import-boilerplate
test/units/modules/network/nuage/test_nuage_vspk.py metaclass-boilerplate
test/units/modules/network/nxos/test_nxos_acl_interface.py metaclass-boilerplate
test/units/modules/network/radware/test_vdirect_commit.py future-import-boilerplate
test/units/modules/network/radware/test_vdirect_commit.py metaclass-boilerplate
test/units/modules/network/radware/test_vdirect_file.py future-import-boilerplate
test/units/modules/network/radware/test_vdirect_file.py metaclass-boilerplate
test/units/modules/network/radware/test_vdirect_runnable.py future-import-boilerplate
test/units/modules/network/radware/test_vdirect_runnable.py metaclass-boilerplate
test/units/modules/notification/test_slack.py future-import-boilerplate
test/units/modules/notification/test_slack.py metaclass-boilerplate
test/units/modules/packaging/language/test_gem.py future-import-boilerplate
test/units/modules/packaging/language/test_gem.py metaclass-boilerplate
test/units/modules/packaging/language/test_pip.py future-import-boilerplate
test/units/modules/packaging/language/test_pip.py metaclass-boilerplate
test/units/modules/packaging/os/conftest.py future-import-boilerplate
test/units/modules/packaging/os/conftest.py metaclass-boilerplate
test/units/modules/packaging/os/test_apk.py future-import-boilerplate
test/units/modules/packaging/os/test_apk.py metaclass-boilerplate
test/units/modules/packaging/os/test_apt.py future-import-boilerplate
test/units/modules/packaging/os/test_apt.py metaclass-boilerplate
test/units/modules/packaging/os/test_apt.py pylint:blacklisted-name
test/units/modules/packaging/os/test_rhn_channel.py future-import-boilerplate
test/units/modules/packaging/os/test_rhn_channel.py metaclass-boilerplate
test/units/modules/packaging/os/test_rhn_register.py future-import-boilerplate
test/units/modules/packaging/os/test_rhn_register.py metaclass-boilerplate
test/units/modules/packaging/os/test_yum.py future-import-boilerplate
test/units/modules/packaging/os/test_yum.py metaclass-boilerplate
test/units/modules/remote_management/dellemc/test_ome_device_info.py future-import-boilerplate
test/units/modules/remote_management/dellemc/test_ome_device_info.py metaclass-boilerplate
test/units/modules/remote_management/lxca/test_lxca_cmms.py future-import-boilerplate
test/units/modules/remote_management/lxca/test_lxca_cmms.py metaclass-boilerplate
test/units/modules/remote_management/lxca/test_lxca_nodes.py future-import-boilerplate
test/units/modules/remote_management/lxca/test_lxca_nodes.py metaclass-boilerplate
test/units/modules/remote_management/oneview/conftest.py future-import-boilerplate
test/units/modules/remote_management/oneview/conftest.py metaclass-boilerplate
test/units/modules/remote_management/oneview/hpe_test_utils.py future-import-boilerplate
test/units/modules/remote_management/oneview/hpe_test_utils.py metaclass-boilerplate
test/units/modules/remote_management/oneview/oneview_module_loader.py future-import-boilerplate
test/units/modules/remote_management/oneview/oneview_module_loader.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_datacenter_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_datacenter_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_enclosure_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_enclosure_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_ethernet_network.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_ethernet_network.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_ethernet_network_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_ethernet_network_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fc_network.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fc_network.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fc_network_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fc_network_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fcoe_network.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fcoe_network.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fcoe_network_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fcoe_network_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_logical_interconnect_group.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_logical_interconnect_group.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_logical_interconnect_group_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_logical_interconnect_group_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_network_set.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_network_set.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_network_set_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_network_set_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_san_manager.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_san_manager.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_san_manager_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_san_manager_info.py metaclass-boilerplate
test/units/modules/source_control/bitbucket/test_bitbucket_access_key.py future-import-boilerplate
test/units/modules/source_control/bitbucket/test_bitbucket_access_key.py metaclass-boilerplate
test/units/modules/source_control/bitbucket/test_bitbucket_pipeline_key_pair.py future-import-boilerplate
test/units/modules/source_control/bitbucket/test_bitbucket_pipeline_key_pair.py metaclass-boilerplate
test/units/modules/source_control/bitbucket/test_bitbucket_pipeline_known_host.py future-import-boilerplate
test/units/modules/source_control/bitbucket/test_bitbucket_pipeline_known_host.py metaclass-boilerplate
test/units/modules/source_control/bitbucket/test_bitbucket_pipeline_variable.py future-import-boilerplate
test/units/modules/source_control/bitbucket/test_bitbucket_pipeline_variable.py metaclass-boilerplate
test/units/modules/source_control/gitlab/gitlab.py future-import-boilerplate
test/units/modules/source_control/gitlab/gitlab.py metaclass-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_deploy_key.py future-import-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_deploy_key.py metaclass-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_group.py future-import-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_group.py metaclass-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_hook.py future-import-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_hook.py metaclass-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_project.py future-import-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_project.py metaclass-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_runner.py future-import-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_runner.py metaclass-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_user.py future-import-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_user.py metaclass-boilerplate
test/units/modules/storage/hpe3par/test_ss_3par_cpg.py future-import-boilerplate
test/units/modules/storage/hpe3par/test_ss_3par_cpg.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_cluster_config.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_cluster_config.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_cluster_snmp.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_cluster_snmp.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_initiators.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_initiators.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_aggregate.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_aggregate.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_autosupport.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_autosupport.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_broadcast_domain.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_broadcast_domain.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cifs.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cifs.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cifs_server.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cifs_server.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cluster.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cluster.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cluster_peer.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cluster_peer.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_command.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_command.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_export_policy_rule.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_export_policy_rule.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_firewall_policy.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_firewall_policy.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_flexcache.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_flexcache.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_igroup.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_igroup.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_igroup_initiator.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_igroup_initiator.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_info.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_info.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_interface.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_interface.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_ipspace.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_ipspace.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_job_schedule.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_job_schedule.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_lun_copy.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_lun_copy.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_lun_map.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_lun_map.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_motd.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_motd.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_ifgrp.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_ifgrp.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_port.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_port.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_routes.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_routes.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_subnet.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_subnet.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nfs.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nfs.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme_namespace.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme_namespace.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme_subsystem.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme_subsystem.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_portset.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_portset.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_qos_policy_group.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_qos_policy_group.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_quotas.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_quotas.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_security_key_manager.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_security_key_manager.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_service_processor_network.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_service_processor_network.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapmirror.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapmirror.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapshot.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapshot.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapshot_policy.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapshot_policy.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_software_update.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_software_update.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_svm.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_svm.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_ucadapter.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_ucadapter.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_unix_group.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_unix_group.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_unix_user.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_unix_user.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_user.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_user.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_user_role.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_user_role.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_volume.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_volume.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_volume_clone.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_volume_clone.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_on_access_policy.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_on_access_policy.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_on_demand_task.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_on_demand_task.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_scanner_pool.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_scanner_pool.py metaclass-boilerplate
test/units/modules/storage/netapp/test_netapp.py metaclass-boilerplate
test/units/modules/storage/netapp/test_netapp_e_alerts.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_asup.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_auditlog.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_global.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_host.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_iscsi_interface.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_iscsi_target.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_ldap.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_mgmt_interface.py future-import-boilerplate
test/units/modules/storage/netapp/test_netapp_e_syslog.py future-import-boilerplate
test/units/modules/system/interfaces_file/test_interfaces_file.py future-import-boilerplate
test/units/modules/system/interfaces_file/test_interfaces_file.py metaclass-boilerplate
test/units/modules/system/interfaces_file/test_interfaces_file.py pylint:blacklisted-name
test/units/modules/system/test_iptables.py future-import-boilerplate
test/units/modules/system/test_iptables.py metaclass-boilerplate
test/units/modules/system/test_java_keystore.py future-import-boilerplate
test/units/modules/system/test_java_keystore.py metaclass-boilerplate
test/units/modules/system/test_known_hosts.py future-import-boilerplate
test/units/modules/system/test_known_hosts.py metaclass-boilerplate
test/units/modules/system/test_known_hosts.py pylint:ansible-bad-function
test/units/modules/system/test_linux_mountinfo.py future-import-boilerplate
test/units/modules/system/test_linux_mountinfo.py metaclass-boilerplate
test/units/modules/system/test_pamd.py metaclass-boilerplate
test/units/modules/system/test_parted.py future-import-boilerplate
test/units/modules/system/test_systemd.py future-import-boilerplate
test/units/modules/system/test_systemd.py metaclass-boilerplate
test/units/modules/system/test_ufw.py future-import-boilerplate
test/units/modules/system/test_ufw.py metaclass-boilerplate
test/units/modules/utils.py future-import-boilerplate
test/units/modules/utils.py metaclass-boilerplate
test/units/modules/web_infrastructure/test_apache2_module.py future-import-boilerplate
test/units/modules/web_infrastructure/test_apache2_module.py metaclass-boilerplate
test/units/modules/web_infrastructure/test_jenkins_plugin.py future-import-boilerplate
test/units/modules/web_infrastructure/test_jenkins_plugin.py metaclass-boilerplate
test/units/parsing/utils/test_addresses.py future-import-boilerplate
test/units/parsing/utils/test_addresses.py metaclass-boilerplate
test/units/parsing/vault/test_vault.py pylint:blacklisted-name
test/units/playbook/role/test_role.py pylint:blacklisted-name
test/units/playbook/test_attribute.py future-import-boilerplate
test/units/playbook/test_attribute.py metaclass-boilerplate
test/units/playbook/test_conditional.py future-import-boilerplate
test/units/playbook/test_conditional.py metaclass-boilerplate
test/units/plugins/action/test_synchronize.py future-import-boilerplate
test/units/plugins/action/test_synchronize.py metaclass-boilerplate
test/units/plugins/httpapi/test_ftd.py future-import-boilerplate
test/units/plugins/httpapi/test_ftd.py metaclass-boilerplate
test/units/plugins/inventory/test_constructed.py future-import-boilerplate
test/units/plugins/inventory/test_constructed.py metaclass-boilerplate
test/units/plugins/inventory/test_group.py future-import-boilerplate
test/units/plugins/inventory/test_group.py metaclass-boilerplate
test/units/plugins/inventory/test_host.py future-import-boilerplate
test/units/plugins/inventory/test_host.py metaclass-boilerplate
test/units/plugins/loader_fixtures/import_fixture.py future-import-boilerplate
test/units/plugins/shell/test_cmd.py future-import-boilerplate
test/units/plugins/shell/test_cmd.py metaclass-boilerplate
test/units/plugins/shell/test_powershell.py future-import-boilerplate
test/units/plugins/shell/test_powershell.py metaclass-boilerplate
test/units/plugins/test_plugins.py pylint:blacklisted-name
test/units/template/test_templar.py pylint:blacklisted-name
test/units/test_constants.py future-import-boilerplate
test/units/test_context.py future-import-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/action/my_action.py future-import-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/action/my_action.py metaclass-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_other_util.py future-import-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_other_util.py metaclass-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_util.py future-import-boilerplate
test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_util.py metaclass-boilerplate
test/units/utils/kubevirt_fixtures.py future-import-boilerplate
test/units/utils/kubevirt_fixtures.py metaclass-boilerplate
test/units/utils/test_cleanup_tmp_file.py future-import-boilerplate
test/units/utils/test_encrypt.py future-import-boilerplate
test/units/utils/test_encrypt.py metaclass-boilerplate
test/units/utils/test_helpers.py future-import-boilerplate
test/units/utils/test_helpers.py metaclass-boilerplate
test/units/utils/test_shlex.py future-import-boilerplate
test/units/utils/test_shlex.py metaclass-boilerplate
test/utils/shippable/check_matrix.py replace-urlopen
test/utils/shippable/timing.py shebang
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66457 |
VMware: request that the "vmware_host_facts" module return ESXi server update version info
|
##### SUMMARY
Now "vmware_host_facts" module returns the ESXi version and build number, no update version info, so request getting that info too.
References:
https://www.virtuallyghetto.com/2016/08/quick-tip-how-to-retrieve-the-esxi-update-level-using-the-vsphere-api.html
MOB path:
https://esxi_hostname/mob/?moid=ha-adv-options&doPath=setting%5b%22Misc.HostAgentUpdateLevel%22%5d
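For reference, a minimal pyVmomi sketch (not the module implementation, just an illustration of the API path above) showing how the update level could be read from the `Misc.HostAgentUpdateLevel` advanced option; the hostname and credentials are placeholders:
```python
# Minimal sketch, not the module implementation: read the ESXi update level via
# the Misc.HostAgentUpdateLevel advanced option. Hostname/credentials are placeholders.
from pyVim.connect import SmartConnectNoSSL, Disconnect

si = SmartConnectNoSSL(host='esxi01.example.com', user='root', pwd='secret')
try:
    content = si.RetrieveContent()
    # Look up the HostSystem object for the ESXi host we are interested in.
    host = content.searchIndex.FindByDnsName(dnsName='esxi01.example.com', vmSearch=False)
    # QueryOptions returns a list of OptionValue objects for the given key.
    options = host.configManager.advancedOption.QueryOptions('Misc.HostAgentUpdateLevel')
    update_level = options[0].value if options else None
    print(update_level)  # e.g. '3' for an "Update 3" build
finally:
    Disconnect(si)
```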
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_host_facts
##### ADDITIONAL INFORMATION
Some ESXi features are only introduced in a specific ESXi update release, so it would be useful to gather that information and use it as a condition before running certain tasks.
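A hypothetical playbook sketch of how such a fact could gate a task; the fact name `ansible_host_update_level` is illustrative only and is not an existing return value of the module:
```yaml
# Hypothetical usage; the fact name ansible_host_update_level is illustrative only.
- name: Gather ESXi host facts (including the requested update level)
  vmware_host_facts:
    hostname: "{{ esxi_server }}"
    username: "{{ esxi_username }}"
    password: "{{ esxi_password }}"
  register: host_facts
  delegate_to: localhost

- name: Only run when the host is at Update 2 or later
  debug:
    msg: "Host update level is {{ host_facts.ansible_facts.ansible_host_update_level }}"
  when: host_facts.ansible_facts.ansible_host_update_level | int >= 2
```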
|
https://github.com/ansible/ansible/issues/66457
|
https://github.com/ansible/ansible/pull/67162
|
4dd2513371800c649eeb45ea0bd819ac3ebd153b
|
1b263e77de4c5714a558fe0fe5934363a2d85649
| 2020-01-14T02:58:02Z |
python
| 2020-02-10T16:47:29Z |
lib/ansible/modules/cloud/vmware/vmware_host_facts.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Wei Gao <[email protected]>
# Copyright: (c) 2018, Ansible Project
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: vmware_host_facts
short_description: Gathers facts about remote ESXi hostsystem
description:
- This module can be used to gather facts like CPU, memory, datastore, network and system information about an ESXi host system.
- Please specify hostname or IP address of ESXi host system as C(hostname).
- If hostname or IP address of vCenter is provided as C(hostname) and C(esxi_hostname) is not specified, then the
module will throw an error.
- VSAN facts added in 2.7 version.
- SYSTEM fact uuid added in 2.10 version.
version_added: 2.5
author:
- Wei Gao (@woshihaoren)
requirements:
- python >= 2.6
- PyVmomi
options:
esxi_hostname:
description:
- ESXi hostname.
- Host facts about the specified ESXi server will be returned.
- By specifying this option, you can select which ESXi hostsystem is returned if connecting to a vCenter.
version_added: 2.8
type: str
show_tag:
description:
- Tags related to Host are shown if set to C(True).
default: False
type: bool
required: False
version_added: 2.9
schema:
description:
- Specify the output schema desired.
- The 'summary' output schema is the legacy output from the module
- The 'vsphere' output schema is the vSphere API class definition
which requires pyvmomi>6.7.1
choices: ['summary', 'vsphere']
default: 'summary'
type: str
version_added: '2.10'
properties:
description:
- Specify the properties to retrieve.
- If not specified, all properties are retrieved (deeply).
- Results are returned in a structure identical to the vsphere API.
- 'Example:'
- ' properties: ['
- ' "hardware.memorySize",'
- ' "hardware.cpuInfo.numCpuCores",'
- ' "config.product.apiVersion",'
- ' "overallStatus"'
- ' ]'
- Only valid when C(schema) is C(vsphere).
type: list
required: False
version_added: '2.10'
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- name: Gather vmware host facts
vmware_host_facts:
hostname: "{{ esxi_server }}"
username: "{{ esxi_username }}"
password: "{{ esxi_password }}"
register: host_facts
delegate_to: localhost
- name: Gather vmware host facts from vCenter
vmware_host_facts:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
esxi_hostname: "{{ esxi_hostname }}"
register: host_facts
delegate_to: localhost
- name: Gather vmware host facts from vCenter with tag information
vmware_host_facts:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
esxi_hostname: "{{ esxi_hostname }}"
show_tag: True
register: host_facts_tag
delegate_to: localhost
- name: Get VSAN Cluster UUID from host facts
vmware_host_facts:
hostname: "{{ esxi_server }}"
username: "{{ esxi_username }}"
password: "{{ esxi_password }}"
register: host_facts
- set_fact:
cluster_uuid: "{{ host_facts['ansible_facts']['vsan_cluster_uuid'] }}"
- name: Gather some info from a host using the vSphere API output schema
vmware_host_facts:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
esxi_hostname: "{{ esxi_hostname }}"
schema: vsphere
properties:
- hardware.memorySize
- hardware.cpuInfo.numCpuCores
- config.product.apiVersion
- overallStatus
register: host_facts
'''
RETURN = r'''
ansible_facts:
description: system info about the host machine
returned: always
type: dict
sample:
{
"ansible_all_ipv4_addresses": [
"10.76.33.200"
],
"ansible_bios_date": "2011-01-01T00:00:00+00:00",
"ansible_bios_version": "0.5.1",
"ansible_datastore": [
{
"free": "11.63 GB",
"name": "datastore1",
"total": "12.50 GB"
}
],
"ansible_distribution": "VMware ESXi",
"ansible_distribution_build": "4887370",
"ansible_distribution_version": "6.5.0",
"ansible_hostname": "10.76.33.100",
"ansible_in_maintenance_mode": true,
"ansible_interfaces": [
"vmk0"
],
"ansible_memfree_mb": 2702,
"ansible_memtotal_mb": 4095,
"ansible_os_type": "vmnix-x86",
"ansible_processor": "Intel Xeon E312xx (Sandy Bridge)",
"ansible_processor_cores": 2,
"ansible_processor_count": 2,
"ansible_processor_vcpus": 2,
"ansible_product_name": "KVM",
"ansible_product_serial": "NA",
"ansible_system_vendor": "Red Hat",
"ansible_uptime": 1791680,
"ansible_uuid": "4c4c4544-0052-3410-804c-b2c04f4e3632",
"ansible_vmk0": {
"device": "vmk0",
"ipv4": {
"address": "10.76.33.100",
"netmask": "255.255.255.0"
},
"macaddress": "52:54:00:56:7d:59",
"mtu": 1500
},
"vsan_cluster_uuid": null,
"vsan_node_uuid": null,
"vsan_health": "unknown",
"tags": [
{
"category_id": "urn:vmomi:InventoryServiceCategory:8eb81431-b20d-49f5-af7b-126853aa1189:GLOBAL",
"category_name": "host_category_0001",
"description": "",
"id": "urn:vmomi:InventoryServiceTag:e9398232-46fd-461a-bf84-06128e182a4a:GLOBAL",
"name": "host_tag_0001"
}
],
}
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.text.formatters import bytes_to_human
from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec, find_obj
try:
from pyVmomi import vim
except ImportError:
pass
from ansible.module_utils.vmware_rest_client import VmwareRestClient
class VMwareHostFactManager(PyVmomi):
def __init__(self, module):
super(VMwareHostFactManager, self).__init__(module)
esxi_host_name = self.params.get('esxi_hostname', None)
if self.is_vcenter():
if esxi_host_name is None:
self.module.fail_json(msg="Connected to a vCenter system without specifying esxi_hostname")
self.host = self.get_all_host_objs(esxi_host_name=esxi_host_name)
if len(self.host) > 1:
self.module.fail_json(msg="esxi_hostname matched multiple hosts")
self.host = self.host[0]
else:
self.host = find_obj(self.content, [vim.HostSystem], None)
if self.host is None:
self.module.fail_json(msg="Failed to find host system.")
def all_facts(self):
ansible_facts = {}
ansible_facts.update(self.get_cpu_facts())
ansible_facts.update(self.get_memory_facts())
ansible_facts.update(self.get_datastore_facts())
ansible_facts.update(self.get_network_facts())
ansible_facts.update(self.get_system_facts())
ansible_facts.update(self.get_vsan_facts())
ansible_facts.update(self.get_cluster_facts())
if self.params.get('show_tag'):
vmware_client = VmwareRestClient(self.module)
tag_info = {
'tags': vmware_client.get_tags_for_hostsystem(hostsystem_mid=self.host._moId)
}
ansible_facts.update(tag_info)
self.module.exit_json(changed=False, ansible_facts=ansible_facts)
def get_cluster_facts(self):
cluster_facts = {'cluster': None}
if self.host.parent and isinstance(self.host.parent, vim.ClusterComputeResource):
cluster_facts.update(cluster=self.host.parent.name)
return cluster_facts
def get_vsan_facts(self):
config_mgr = self.host.configManager.vsanSystem
if config_mgr is None:
return {
'vsan_cluster_uuid': None,
'vsan_node_uuid': None,
'vsan_health': "unknown",
}
status = config_mgr.QueryHostStatus()
return {
'vsan_cluster_uuid': status.uuid,
'vsan_node_uuid': status.nodeUuid,
'vsan_health': status.health,
}
def get_cpu_facts(self):
return {
'ansible_processor': self.host.summary.hardware.cpuModel,
'ansible_processor_cores': self.host.summary.hardware.numCpuCores,
'ansible_processor_count': self.host.summary.hardware.numCpuPkgs,
'ansible_processor_vcpus': self.host.summary.hardware.numCpuThreads,
}
def get_memory_facts(self):
return {
'ansible_memfree_mb': self.host.hardware.memorySize // 1024 // 1024 - self.host.summary.quickStats.overallMemoryUsage,
'ansible_memtotal_mb': self.host.hardware.memorySize // 1024 // 1024,
}
def get_datastore_facts(self):
facts = dict()
facts['ansible_datastore'] = []
for store in self.host.datastore:
_tmp = {
'name': store.summary.name,
'total': bytes_to_human(store.summary.capacity),
'free': bytes_to_human(store.summary.freeSpace),
}
facts['ansible_datastore'].append(_tmp)
return facts
def get_network_facts(self):
facts = dict()
facts['ansible_interfaces'] = []
facts['ansible_all_ipv4_addresses'] = []
for nic in self.host.config.network.vnic:
device = nic.device
facts['ansible_interfaces'].append(device)
facts['ansible_all_ipv4_addresses'].append(nic.spec.ip.ipAddress)
_tmp = {
'device': device,
'ipv4': {
'address': nic.spec.ip.ipAddress,
'netmask': nic.spec.ip.subnetMask,
},
'macaddress': nic.spec.mac,
'mtu': nic.spec.mtu,
}
facts['ansible_' + device] = _tmp
return facts
def get_system_facts(self):
sn = 'NA'
for info in self.host.hardware.systemInfo.otherIdentifyingInfo:
if info.identifierType.key == 'ServiceTag':
sn = info.identifierValue
facts = {
'ansible_distribution': self.host.config.product.name,
'ansible_distribution_version': self.host.config.product.version,
'ansible_distribution_build': self.host.config.product.build,
'ansible_os_type': self.host.config.product.osType,
'ansible_system_vendor': self.host.hardware.systemInfo.vendor,
'ansible_hostname': self.host.summary.config.name,
'ansible_product_name': self.host.hardware.systemInfo.model,
'ansible_product_serial': sn,
'ansible_bios_date': self.host.hardware.biosInfo.releaseDate,
'ansible_bios_version': self.host.hardware.biosInfo.biosVersion,
'ansible_uptime': self.host.summary.quickStats.uptime,
'ansible_in_maintenance_mode': self.host.runtime.inMaintenanceMode,
'ansible_uuid': self.host.hardware.systemInfo.uuid,
}
return facts
def properties_facts(self):
ansible_facts = self.to_json(self.host, self.params.get('properties'))
if self.params.get('show_tag'):
vmware_client = VmwareRestClient(self.module)
tag_info = {
'tags': vmware_client.get_tags_for_hostsystem(hostsystem_mid=self.host._moId)
}
ansible_facts.update(tag_info)
self.module.exit_json(changed=False, ansible_facts=ansible_facts)
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
esxi_hostname=dict(type='str', required=False),
show_tag=dict(type='bool', default=False),
schema=dict(type='str', choices=['summary', 'vsphere'], default='summary'),
properties=dict(type='list')
)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True)
vm_host_manager = VMwareHostFactManager(module)
if module.params['schema'] == 'summary':
vm_host_manager.all_facts()
else:
vm_host_manager.properties_facts()
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,012 |
nxos_igmp_interface has options which should be removed for Ansible 2.10
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, this module has options marked with `removed_in_version='2.10'`. These options should be removed before Ansible 2.10 is released.
```
lib/ansible/modules/network/nxos/nxos_igmp_interface.py:0:0: ansible-deprecated-version: Argument 'oif_prefix' in argument_spec has a deprecated removed_in_version '2.10', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/nxos/nxos_igmp_interface.py:0:0: ansible-deprecated-version: Argument 'oif_source' in argument_spec has a deprecated removed_in_version '2.10', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/network/nxos/nxos_igmp_interface.py
lib/ansible/modules/network/nxos/nxos_igmp_interface.py
##### ANSIBLE VERSION
```paste below
2.10
```
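For illustration only (a hedged sketch, not the merged pull request), removing the deprecated options would roughly amount to dropping them from the argument spec and from the mutual-exclusion rules:
```python
# Hypothetical cleaned-up spec; option names mirror the module's existing ones,
# and unrelated options are omitted for brevity.
argument_spec = dict(
    interface=dict(required=True, type='str'),
    oif_routemap=dict(required=False, type='str'),
    oif_ps=dict(required=False, type='raw'),
    restart=dict(type='bool', default=False),
    state=dict(choices=['present', 'absent', 'default'], default='present'),
)
# With oif_prefix and oif_source gone, only one exclusion rule remains.
mutually_exclusive = [('oif_ps', 'oif_routemap')]
```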
|
https://github.com/ansible/ansible/issues/67012
|
https://github.com/ansible/ansible/pull/67186
|
11eee1181a9ee8f69bc36c44bbe63cf0554b0bff
|
88f0c8522882467d512eb4f1769e0eaf78404760
| 2020-02-01T13:48:39Z |
python
| 2020-02-11T11:27:07Z |
changelogs/fragments/67186_remove_deprecated_keys_nxos.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,012 |
nxos_igmp_interface has options which should be removed for Ansible 2.10
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, this module has options marked with `removed_in_version='2.10'`. These options should be removed before Ansible 2.10 is released.
```
lib/ansible/modules/network/nxos/nxos_igmp_interface.py:0:0: ansible-deprecated-version: Argument 'oif_prefix' in argument_spec has a deprecated removed_in_version '2.10', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/nxos/nxos_igmp_interface.py:0:0: ansible-deprecated-version: Argument 'oif_source' in argument_spec has a deprecated removed_in_version '2.10', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/network/nxos/nxos_igmp_interface.py
lib/ansible/modules/network/nxos/nxos_igmp_interface.py
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67012
|
https://github.com/ansible/ansible/pull/67186
|
11eee1181a9ee8f69bc36c44bbe63cf0554b0bff
|
88f0c8522882467d512eb4f1769e0eaf78404760
| 2020-02-01T13:48:39Z |
python
| 2020-02-11T11:27:07Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
* Windows Server 2008 and 2008 R2 will no longer be supported or tested in the next Ansible release, see :ref:`windows_faq_server2008`.
* The :ref:`win_stat <win_stat_module>` module has removed the deprecated ``get_md5`` option and ``md5`` return value.
* The :ref:`win_psexec <win_psexec_module>` module has removed the deprecated ``extra_opts`` option.
Modules
=======
Modules removed
---------------
The following modules no longer exist:
* letsencrypt use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ldap_attr use :ref:`ldap_attrs <ldap_attrs_module>` instead.
The following functionality will be removed in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`openssl_csr <openssl_csr_module>` module's option ``version`` no longer supports values other than ``1`` (the current only standardized CSR version).
* :ref:`docker_container <docker_container_module>`: the ``trust_image_content`` option will be removed. It has always been ignored by the module.
* :ref:`iam_managed_policy <iam_managed_policy_module>`: the ``fail_on_delete`` option will be removed. It has always been ignored by the module.
* :ref:`s3_lifecycle <s3_lifecycle_module>`: the ``requester_pays`` option will be removed. It has always been ignored by the module.
* :ref:`s3_sync <s3_sync_module>`: the ``retries`` option will be removed. It has always been ignored by the module.
* The return values ``err`` and ``out`` of :ref:`docker_stack <docker_stack_module>` have been deprecated. Use ``stdout`` and ``stderr`` from now on instead.
* :ref:`cloudformation <cloudformation_module>`: the ``template_format`` option will be removed. It has been ignored by the module since Ansible 2.3.
* :ref:`data_pipeline <data_pipeline_module>`: the ``version`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_eip <ec2_eip_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.3.
* :ref:`ec2_key <ec2_key_module>`: the ``wait`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_key <ec2_key_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_lc <ec2_lc_module>`: the ``associate_public_ip_address`` option will be removed. It has always been ignored by the module.
* :ref:`iam_policy <iam_policy_module>`: the ``policy_document`` option will be removed. To maintain the existing behavior use the ``policy_json`` option and read the file with the ``lookup`` plugin.
* :ref:`redfish_config <redfish_config_module>`: the ``bios_attribute_name`` and ``bios_attribute_value`` options will be removed. To maintain the existing behavior use the ``bios_attributes`` option instead.
* :ref:`clc_aa_policy <clc_aa_policy_module>`: the ``wait`` parameter will be removed. It has always been ignored by the module.
* :ref:`redfish_config <redfish_config_module>`, :ref:`redfish_command <redfish_command_module>`: the behavior to select the first System, Manager, or Chassis resource to modify when multiple are present will be removed. Use the new ``resource_id`` option to specify target resource to modify.
* :ref:`win_domain_controller <win_domain_controller_module>`: the ``log_path`` option will be removed. This was undocumented and only related to debugging information for module development.
The following functionality will change in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`docker_container <docker_container_module>` module has a new option, ``container_default_behavior``, whose default value will change from ``compatibility`` to ``no_defaults``. Set to an explicit value to avoid deprecation warnings.
* The :ref:`docker_container <docker_container_module>` module's ``network_mode`` option will be set by default to the name of the first network in ``networks`` if at least one network is given and ``networks_cli_compatible`` is ``true`` (will be default from Ansible 2.12 on). Set to an explicit value to avoid deprecation warnings if you specify networks and set ``networks_cli_compatible`` to ``true``. The current default (not specifying it) is equivalent to the value ``default``.
* :ref:`ec2 <ec2_module>`: the ``group`` and ``group_id`` options will become mutually exclusive. Currently ``group_id`` is ignored if you pass both.
* :ref:`iam_policy <iam_policy_module>`: the default value for the ``skip_duplicates`` option will change from ``true`` to ``false``. To maintain the existing behavior explicitly set it to ``true``.
* :ref:`iam_role <iam_role_module>`: the ``purge_policies`` option (also known as ``purge_policy``) default value will change from ``true`` to ``false``.
* :ref:`elb_network_lb <elb_network_lb_module>`: the default behaviour for the ``state`` option will change from ``absent`` to ``present``. To maintain the existing behavior explicitly set state to ``absent``.
* :ref:`vmware_tag_info <vmware_tag_info_module>`: the module will not return ``tag_facts`` since it does not return multiple tags with the same name and different category id. To maintain the existing behavior use ``tag_info`` which is a list of tag metadata.
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ``vmware_dns_config`` use :ref:`vmware_host_dns <vmware_host_dns_module>` instead.
Noteworthy module changes
-------------------------
* Ansible modules created with ``add_file_common_args=True`` added a number of undocumented arguments which were mostly there to ease implementing certain action plugins. The undocumented arguments ``src``, ``follow``, ``force``, ``content``, ``backup``, ``remote_src``, ``regexp``, ``delimiter``, and ``directory_mode`` are now no longer added. Modules relying on these options to be added need to specify them by themselves (see the short sketch after this list).
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in :ref:`pacman <pacman_module>` module has been removed, you should use ``extra_args=--recursive`` instead.
* :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module does not require VM name which was a required parameter for releases prior to Ansible 2.10.
* :ref:`zabbix_action <zabbix_action_module>` no longer requires ``esc_period`` and ``event_source`` arguments when ``state=absent``.
* :ref:`zabbix_proxy <zabbix_proxy_module>` deprecates ``interface`` sub-options ``type`` and ``main`` when proxy type is set to passive via ``status=passive``. Make sure these suboptions are removed from your playbook as they were never supported by Zabbix in the first place.
* :ref:`gitlab_user <gitlab_user_module>` no longer requires ``name``, ``email`` and ``password`` arguments when ``state=absent``.
* :ref:`win_pester <win_pester_module>` no longer runs all ``*.ps1`` files in the specified directory, as that could execute potentially unknown scripts. It follows Pester's built-in default of only running tests for files matching ``*.tests.ps1``.
* :ref:`win_find <win_find_module>` has been refactored to better match the behaviour of the ``find`` module. Here is what has changed:
* When the directory specified by ``paths`` does not exist or is a file, it will no longer fail and will just warn the user
* Junction points are no longer reported as ``islnk``, use ``isjunction`` to properly report these files. This behaviour matches that of :ref:`win_stat <win_stat_module>`.
* Directories no longer return a ``size``, this matches the ``stat`` and ``find`` behaviour and has been removed due to the difficulties in correctly reporting the size of a directory
* :ref:`docker_container <docker_container_module>` no longer passes information on non-anonymous volumes or binds as ``Volumes`` to the Docker daemon. This increases compatibility with the ``docker`` CLI program. Note that if you specify ``volumes: strict`` in ``comparisons``, this could cause existing containers created with docker_container from Ansible 2.9 or earlier to restart.
* :ref:`docker_container <docker_container_module>`'s support for port ranges was adjusted to be more compatible to the ``docker`` command line utility: a one-port container range combined with a multiple-port host range will no longer result in only the first host port be used, but the whole range being passed to Docker so that a free port in that range will be used.
* :ref:`purefb_fs <purefb_fs_module>` no longer supports the deprecated ``nfs`` option. This has been superseded by ``nfsv3``.
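As a sketch of the ``add_file_common_args`` change above, a module that previously relied on the automatically added arguments can declare the ones it actually uses itself. The option names below are the formerly auto-added ones; the defaults are assumptions for illustration, not prescriptions.

.. code-block:: python

    # Sketch: explicitly declaring arguments that used to be injected by
    # add_file_common_args=True; the permission-related arguments are still added.
    from ansible.module_utils.basic import AnsibleModule

    def main():
        module = AnsibleModule(
            argument_spec=dict(
                path=dict(type='path', required=True),
                remote_src=dict(type='bool', default=False),  # formerly auto-added
                backup=dict(type='bool', default=False),      # formerly auto-added
            ),
            add_file_common_args=True,  # still adds owner/group/mode and friends
            supports_check_mode=True,
        )
        module.exit_json(changed=False)

    if __name__ == '__main__':
        main()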
Plugins
=======
Lookup plugin names case-sensitivity
------------------------------------
* Prior to Ansible ``2.10``, lookup plugin names passed as an argument to the ``lookup()`` function were treated as case-insensitive, unlike lookups invoked via ``with_<lookup_name>``. Ansible ``2.10`` makes this consistent: both ``lookup()`` and ``with_`` are now case-sensitive.
Noteworthy plugin changes
-------------------------
* The ``hashi_vault`` lookup plugin now returns the latest version when using the KV v2 secrets engine. Previously, it returned all versions of the secret which required additional steps to extract and filter the desired version.
* Some undocumented arguments from ``FILE_COMMON_ARGUMENTS`` have been removed; plugins using these, in particular action plugins, need to be adjusted. The undocumented arguments which were removed are ``src``, ``follow``, ``force``, ``content``, ``backup``, ``remote_src``, ``regexp``, ``delimiter``, and ``directory_mode``.
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,012 |
nxos_igmp_interface has options which should be removed for Ansible 2.10
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, this module has options marked with `removed_in_version='2.10'`. These options should be removed before Ansible 2.10 is released.
```
lib/ansible/modules/network/nxos/nxos_igmp_interface.py:0:0: ansible-deprecated-version: Argument 'oif_prefix' in argument_spec has a deprecated removed_in_version '2.10', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/nxos/nxos_igmp_interface.py:0:0: ansible-deprecated-version: Argument 'oif_source' in argument_spec has a deprecated removed_in_version '2.10', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/network/nxos/nxos_igmp_interface.py
lib/ansible/modules/network/nxos/nxos_igmp_interface.py
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67012
|
https://github.com/ansible/ansible/pull/67186
|
11eee1181a9ee8f69bc36c44bbe63cf0554b0bff
|
88f0c8522882467d512eb4f1769e0eaf78404760
| 2020-02-01T13:48:39Z |
python
| 2020-02-11T11:27:07Z |
lib/ansible/modules/network/nxos/nxos_igmp_interface.py
|
#!/usr/bin/python
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = '''
---
module: nxos_igmp_interface
extends_documentation_fragment: nxos
version_added: "2.2"
short_description: Manages IGMP interface configuration.
description:
- Manages IGMP interface configuration settings.
author:
- Jason Edelman (@jedelman8)
- Gabriele Gerbino (@GGabriele)
notes:
- Tested against NXOSv 7.3.(0)D1(1) on VIRL
- When C(state=default), supported params will be reset to a default state.
These include C(version), C(startup_query_interval),
C(startup_query_count), C(robustness), C(querier_timeout), C(query_mrt),
C(query_interval), C(last_member_qrt), C(last_member_query_count),
C(group_timeout), C(report_llg), and C(immediate_leave).
- When C(state=absent), all configs for C(oif_ps), and
C(oif_routemap) will be removed.
- PIM must be enabled to use this module.
- This module is for Layer 3 interfaces.
- A route-map check is not performed (same as the CLI) when configuring a route-map with 'static-oif'.
- If restart is set to true with other params set, the restart will happen
last, i.e. after the configuration takes place. However, 'restart' itself
is not idempotent as it is an action and not configuration.
options:
interface:
description:
- The full interface name for IGMP configuration.
e.g. I(Ethernet1/2).
required: true
version:
description:
- IGMP version. It can be 2 or 3 or keyword 'default'.
choices: ['2', '3', 'default']
startup_query_interval:
description:
- Query interval used when the IGMP process starts up.
The range is from 1 to 18000 or keyword 'default'.
The default is 31.
startup_query_count:
description:
- Query count used when the IGMP process starts up.
The range is from 1 to 10 or keyword 'default'.
The default is 2.
robustness:
description:
- Sets the robustness variable. Values can range from 1 to 7 or
keyword 'default'. The default is 2.
querier_timeout:
description:
- Sets the querier timeout that the software uses when deciding
to take over as the querier. Values can range from 1 to 65535
seconds or keyword 'default'. The default is 255 seconds.
query_mrt:
description:
- Sets the response time advertised in IGMP queries.
Values can range from 1 to 25 seconds or keyword 'default'.
The default is 10 seconds.
query_interval:
description:
- Sets the frequency at which the software sends IGMP host query
messages. Values can range from 1 to 18000 seconds or keyword
'default'. The default is 125 seconds.
last_member_qrt:
description:
- Sets the query interval waited after sending membership reports
before the software deletes the group state. Values can range
from 1 to 25 seconds or keyword 'default'. The default is 1 second.
last_member_query_count:
description:
- Sets the number of times that the software sends an IGMP query
in response to a host leave message.
Values can range from 1 to 5 or keyword 'default'. The default is 2.
group_timeout:
description:
- Sets the group membership timeout for IGMPv2.
Values can range from 3 to 65,535 seconds or keyword 'default'.
The default is 260 seconds.
report_llg:
description:
- Configures report-link-local-groups.
Enables sending reports for groups in 224.0.0.0/24.
Reports are always sent for non-link-local groups.
By default, reports are not sent for link local groups.
type: bool
immediate_leave:
description:
- Enables the device to remove the group entry from the multicast
routing table immediately upon receiving a leave message for
the group. Use this command to minimize the leave latency of
IGMPv2 group memberships on a given IGMP interface because the
device does not send group-specific queries.
The default is disabled.
type: bool
oif_routemap:
description:
- Configure a routemap for static outgoing interface (OIF) or
keyword 'default'.
oif_prefix:
description:
- This argument is deprecated, please use oif_ps instead.
Configure a prefix for static outgoing interface (OIF).
oif_source:
description:
- This argument is deprecated, please use oif_ps instead.
Configure a source for static outgoing interface (OIF).
oif_ps:
description:
- Configure prefixes and sources for static outgoing interface (OIF). This
is a list of dict where each dict has source and prefix defined or just
prefix if source is not needed. The specified values will be configured
on the device and if any previous prefix/sources exist, they will be removed.
Keyword 'default' is also accepted which removes all existing prefix/sources.
version_added: 2.6
restart:
description:
- Restart IGMP. This is NOT idempotent as this is action only.
type: bool
default: False
state:
description:
- Manages desired state of the resource.
default: present
choices: ['present', 'absent', 'default']
'''
EXAMPLES = '''
- nxos_igmp_interface:
interface: ethernet1/32
startup_query_interval: 30
oif_ps:
- { 'prefix': '238.2.2.6' }
- { 'source': '192.168.0.1', 'prefix': '238.2.2.5'}
state: present
'''
RETURN = '''
proposed:
description: k/v pairs of parameters passed into module
returned: always
type: dict
sample: {"startup_query_count": "30",
"oif_ps": [{'prefix': '238.2.2.6'}, {'source': '192.168.0.1', 'prefix': '238.2.2.5'}]}
existing:
description: k/v pairs of existing igmp_interface configuration
returned: always
type: dict
sample: {"startup_query_count": "2", "oif_ps": []}
end_state:
description: k/v pairs of igmp interface configuration after module execution
returned: always
type: dict
sample: {"startup_query_count": "30",
"oif_ps": [{'prefix': '238.2.2.6'}, {'source': '192.168.0.1', 'prefix': '238.2.2.5'}]}
updates:
description: commands sent to the device
returned: always
type: list
sample: ["interface Ethernet1/32", "ip igmp startup-query-count 30",
"ip igmp static-oif 238.2.2.6", "ip igmp static-oif 238.2.2.5 source 192.168.0.1"]
changed:
description: check to see if a change was made on the device
returned: always
type: bool
sample: true
'''
from ansible.module_utils.network.nxos.nxos import load_config, run_commands
from ansible.module_utils.network.nxos.nxos import nxos_argument_spec
from ansible.module_utils.network.nxos.nxos import get_interface_type
from ansible.module_utils.basic import AnsibleModule
import re
def execute_show_command(command, module, command_type='cli_show'):
if command_type == 'cli_show_ascii':
cmds = [{
'command': command,
'output': 'text',
}]
else:
cmds = [{
'command': command,
'output': 'json',
}]
return run_commands(module, cmds)
def get_interface_mode(interface, intf_type, module):
command = 'show interface {0}'.format(interface)
interface = {}
mode = 'unknown'
if intf_type in ['ethernet', 'portchannel']:
body = execute_show_command(command, module)[0]
interface_table = body['TABLE_interface']['ROW_interface']
mode = str(interface_table.get('eth_mode', 'layer3'))
if mode == 'access' or mode == 'trunk':
mode = 'layer2'
elif intf_type == 'loopback' or intf_type == 'svi':
mode = 'layer3'
return mode
def apply_key_map(key_map, table):
new_dict = {}
for key, value in table.items():
new_key = key_map.get(key)
if new_key:
value = table.get(key)
if value:
new_dict[new_key] = value
else:
new_dict[new_key] = value
return new_dict
def flatten_list(command_lists):
flat_command_list = []
for command in command_lists:
if isinstance(command, list):
flat_command_list.extend(command)
else:
flat_command_list.append(command)
return flat_command_list
def get_igmp_interface(module, interface):
command = 'show ip igmp interface {0}'.format(interface)
igmp = {}
key_map = {
'IGMPVersion': 'version',
'ConfiguredStartupQueryInterval': 'startup_query_interval',
'StartupQueryCount': 'startup_query_count',
'RobustnessVariable': 'robustness',
'ConfiguredQuerierTimeout': 'querier_timeout',
'ConfiguredMaxResponseTime': 'query_mrt',
'ConfiguredQueryInterval': 'query_interval',
'LastMemberMTR': 'last_member_qrt',
'LastMemberQueryCount': 'last_member_query_count',
'ConfiguredGroupTimeout': 'group_timeout'
}
body = execute_show_command(command, module)[0]
if body:
if 'not running' in body:
return igmp
resource = body['TABLE_vrf']['ROW_vrf']['TABLE_if']['ROW_if']
igmp = apply_key_map(key_map, resource)
report_llg = str(resource['ReportingForLinkLocal']).lower()
if report_llg == 'true':
igmp['report_llg'] = True
elif report_llg == 'false':
igmp['report_llg'] = False
immediate_leave = str(resource['ImmediateLeave']).lower() # returns en or dis
if re.search(r'^en|^true|^enabled', immediate_leave):
igmp['immediate_leave'] = True
elif re.search(r'^dis|^false|^disabled', immediate_leave):
igmp['immediate_leave'] = False
# the next block of code is used to retrieve anything with:
# ip igmp static-oif *** i.e.. could be route-map ROUTEMAP
# or PREFIX source <ip>, etc.
command = 'show run interface {0} | inc oif'.format(interface)
body = execute_show_command(
command, module, command_type='cli_show_ascii')[0]
staticoif = []
if body:
split_body = body.split('\n')
route_map_regex = (r'.*ip igmp static-oif route-map\s+'
r'(?P<route_map>\S+).*')
prefix_source_regex = (r'.*ip igmp static-oif\s+(?P<prefix>'
r'((\d+.){3}\d+))(\ssource\s'
r'(?P<source>\S+))?.*')
for line in split_body:
temp = {}
try:
match_route_map = re.match(route_map_regex, line, re.DOTALL)
route_map = match_route_map.groupdict()['route_map']
except AttributeError:
route_map = ''
try:
match_prefix_source = re.match(
prefix_source_regex, line, re.DOTALL)
prefix_source_group = match_prefix_source.groupdict()
prefix = prefix_source_group['prefix']
source = prefix_source_group['source']
except AttributeError:
prefix = ''
source = ''
if route_map:
temp['route_map'] = route_map
if prefix:
temp['prefix'] = prefix
if source:
temp['source'] = source
if temp:
staticoif.append(temp)
igmp['oif_routemap'] = None
igmp['oif_prefix_source'] = []
if staticoif:
if len(staticoif) == 1 and staticoif[0].get('route_map'):
igmp['oif_routemap'] = staticoif[0]['route_map']
else:
igmp['oif_prefix_source'] = staticoif
return igmp
def config_igmp_interface(delta, existing, existing_oif_prefix_source):
CMDS = {
'version': 'ip igmp version {0}',
'startup_query_interval': 'ip igmp startup-query-interval {0}',
'startup_query_count': 'ip igmp startup-query-count {0}',
'robustness': 'ip igmp robustness-variable {0}',
'querier_timeout': 'ip igmp querier-timeout {0}',
'query_mrt': 'ip igmp query-max-response-time {0}',
'query_interval': 'ip igmp query-interval {0}',
'last_member_qrt': 'ip igmp last-member-query-response-time {0}',
'last_member_query_count': 'ip igmp last-member-query-count {0}',
'group_timeout': 'ip igmp group-timeout {0}',
'report_llg': 'ip igmp report-link-local-groups',
'immediate_leave': 'ip igmp immediate-leave',
'oif_prefix_source': 'ip igmp static-oif {0} source {1} ',
'oif_routemap': 'ip igmp static-oif route-map {0}',
'oif_prefix': 'ip igmp static-oif {0}',
}
commands = []
command = None
def_vals = get_igmp_interface_defaults()
for key, value in delta.items():
if key == 'oif_ps' and value != 'default':
for each in value:
if each in existing_oif_prefix_source:
existing_oif_prefix_source.remove(each)
else:
# add new prefix/sources
pf = each['prefix']
src = ''
if 'source' in each.keys():
src = each['source']
if src:
commands.append(CMDS.get('oif_prefix_source').format(pf, src))
else:
commands.append(CMDS.get('oif_prefix').format(pf))
if existing_oif_prefix_source:
for each in existing_oif_prefix_source:
# remove stale prefix/sources
pf = each['prefix']
src = ''
if 'source' in each.keys():
src = each['source']
if src:
commands.append('no ' + CMDS.get('oif_prefix_source').format(pf, src))
else:
commands.append('no ' + CMDS.get('oif_prefix').format(pf))
elif key == 'oif_routemap':
if value == 'default':
if existing.get(key):
command = 'no ' + CMDS.get(key).format(existing.get(key))
else:
command = CMDS.get(key).format(value)
elif value:
if value == 'default':
if def_vals.get(key) != existing.get(key):
command = CMDS.get(key).format(def_vals.get(key))
else:
command = CMDS.get(key).format(value)
elif not value:
command = 'no {0}'.format(CMDS.get(key).format(value))
if command:
if command not in commands:
commands.append(command)
command = None
return commands
def get_igmp_interface_defaults():
version = '2'
startup_query_interval = '31'
startup_query_count = '2'
robustness = '2'
querier_timeout = '255'
query_mrt = '10'
query_interval = '125'
last_member_qrt = '1'
last_member_query_count = '2'
group_timeout = '260'
report_llg = False
immediate_leave = False
args = dict(version=version, startup_query_interval=startup_query_interval,
startup_query_count=startup_query_count, robustness=robustness,
querier_timeout=querier_timeout, query_mrt=query_mrt,
query_interval=query_interval, last_member_qrt=last_member_qrt,
last_member_query_count=last_member_query_count,
group_timeout=group_timeout, report_llg=report_llg,
immediate_leave=immediate_leave)
default = dict((param, value) for (param, value) in args.items()
if value is not None)
return default
def config_default_igmp_interface(existing, delta):
commands = []
proposed = get_igmp_interface_defaults()
delta = dict(set(proposed.items()).difference(existing.items()))
if delta:
command = config_igmp_interface(delta, existing, existing_oif_prefix_source=None)
if command:
for each in command:
commands.append(each)
return commands
def config_remove_oif(existing, existing_oif_prefix_source):
commands = []
command = None
if existing.get('oif_routemap'):
commands.append('no ip igmp static-oif route-map {0}'.format(existing.get('oif_routemap')))
elif existing_oif_prefix_source:
for each in existing_oif_prefix_source:
if each.get('prefix') and each.get('source'):
command = 'no ip igmp static-oif {0} source {1} '.format(
each.get('prefix'), each.get('source')
)
elif each.get('prefix'):
command = 'no ip igmp static-oif {0}'.format(
each.get('prefix')
)
if command:
commands.append(command)
command = None
return commands
def main():
argument_spec = dict(
interface=dict(required=True, type='str'),
version=dict(required=False, type='str'),
startup_query_interval=dict(required=False, type='str'),
startup_query_count=dict(required=False, type='str'),
robustness=dict(required=False, type='str'),
querier_timeout=dict(required=False, type='str'),
query_mrt=dict(required=False, type='str'),
query_interval=dict(required=False, type='str'),
last_member_qrt=dict(required=False, type='str'),
last_member_query_count=dict(required=False, type='str'),
group_timeout=dict(required=False, type='str'),
report_llg=dict(type='bool'),
immediate_leave=dict(type='bool'),
oif_routemap=dict(required=False, type='str'),
oif_prefix=dict(required=False, type='str', removed_in_version='2.10'),
oif_source=dict(required=False, type='str', removed_in_version='2.10'),
oif_ps=dict(required=False, type='raw'),
restart=dict(type='bool', default=False),
state=dict(choices=['present', 'absent', 'default'],
default='present')
)
argument_spec.update(nxos_argument_spec)
mutually_exclusive = [('oif_ps', 'oif_prefix'),
('oif_ps', 'oif_source'),
('oif_ps', 'oif_routemap'),
('oif_prefix', 'oif_routemap')]
module = AnsibleModule(argument_spec=argument_spec,
mutually_exclusive=mutually_exclusive,
supports_check_mode=True)
warnings = list()
state = module.params['state']
interface = module.params['interface']
oif_prefix = module.params['oif_prefix']
oif_source = module.params['oif_source']
oif_routemap = module.params['oif_routemap']
oif_ps = module.params['oif_ps']
if oif_source and not oif_prefix:
module.fail_json(msg='oif_prefix required when setting oif_source')
elif oif_source and oif_prefix:
oif_ps = [{'source': oif_source, 'prefix': oif_prefix}]
elif not oif_source and oif_prefix:
oif_ps = [{'prefix': oif_prefix}]
intf_type = get_interface_type(interface)
if get_interface_mode(interface, intf_type, module) == 'layer2':
module.fail_json(msg='this module only works on Layer 3 interfaces')
existing = get_igmp_interface(module, interface)
existing_copy = existing.copy()
end_state = existing_copy
if not existing.get('version'):
module.fail_json(msg='pim needs to be enabled on the interface')
existing_oif_prefix_source = existing.get('oif_prefix_source')
# not json serializable
existing.pop('oif_prefix_source')
if oif_routemap and existing_oif_prefix_source:
module.fail_json(msg='Delete static-oif configurations on this '
'interface if you want to use a routemap')
if oif_ps and existing.get('oif_routemap'):
module.fail_json(msg='Delete static-oif route-map configuration '
'on this interface if you want to config '
'static entries')
args = [
'version',
'startup_query_interval',
'startup_query_count',
'robustness',
'querier_timeout',
'query_mrt',
'query_interval',
'last_member_qrt',
'last_member_query_count',
'group_timeout',
'report_llg',
'immediate_leave',
'oif_routemap',
]
changed = False
commands = []
proposed = dict((k, v) for k, v in module.params.items()
if v is not None and k in args)
CANNOT_ABSENT = ['version', 'startup_query_interval',
'startup_query_count', 'robustness', 'querier_timeout',
'query_mrt', 'query_interval', 'last_member_qrt',
'last_member_query_count', 'group_timeout', 'report_llg',
'immediate_leave']
if state == 'absent':
for each in CANNOT_ABSENT:
if each in proposed:
module.fail_json(msg='only params: oif_prefix, oif_source, '
'oif_ps, oif_routemap can be used when '
'state=absent')
# delta check for all params except oif_ps
delta = dict(set(proposed.items()).difference(existing.items()))
if oif_ps:
if oif_ps == 'default':
delta['oif_ps'] = []
else:
delta['oif_ps'] = oif_ps
if state == 'present':
if delta:
command = config_igmp_interface(delta, existing, existing_oif_prefix_source)
if command:
commands.append(command)
elif state == 'default':
command = config_default_igmp_interface(existing, delta)
if command:
commands.append(command)
elif state == 'absent':
command = None
if existing.get('oif_routemap') or existing_oif_prefix_source:
command = config_remove_oif(existing, existing_oif_prefix_source)
if command:
commands.append(command)
command = config_default_igmp_interface(existing, delta)
if command:
commands.append(command)
cmds = []
results = {}
if commands:
commands.insert(0, ['interface {0}'.format(interface)])
cmds = flatten_list(commands)
if module.check_mode:
module.exit_json(changed=True, commands=cmds)
else:
load_config(module, cmds)
changed = True
end_state = get_igmp_interface(module, interface)
if 'configure' in cmds:
cmds.pop(0)
if module.params['restart']:
cmd = {'command': 'restart igmp', 'output': 'text'}
run_commands(module, cmd)
results['proposed'] = proposed
results['existing'] = existing_copy
results['updates'] = cmds
results['changed'] = changed
results['warnings'] = warnings
results['end_state'] = end_state
module.exit_json(**results)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,012 |
nxos_igmp_interface has options which should be removed for Ansible 2.10
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, this module has options marked with `removed_in_version='2.10'`. These options should be removed before Ansible 2.10 is released.
```
lib/ansible/modules/network/nxos/nxos_igmp_interface.py:0:0: ansible-deprecated-version: Argument 'oif_prefix' in argument_spec has a deprecated removed_in_version '2.10', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/network/nxos/nxos_igmp_interface.py:0:0: ansible-deprecated-version: Argument 'oif_source' in argument_spec has a deprecated removed_in_version '2.10', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/network/nxos/nxos_igmp_interface.py
lib/ansible/modules/network/nxos/nxos_igmp_interface.py
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67012
|
https://github.com/ansible/ansible/pull/67186
|
11eee1181a9ee8f69bc36c44bbe63cf0554b0bff
|
88f0c8522882467d512eb4f1769e0eaf78404760
| 2020-02-01T13:48:39Z |
python
| 2020-02-11T11:27:07Z |
test/integration/targets/nxos_igmp_interface/tests/common/sanity.yaml
|
---
- debug: msg="START connection={{ ansible_connection }} nxos_igmp_interface sanity test"
# Select interface for test
- set_fact: intname="{{ nxos_int1 }}"
- set_fact: restart="true"
when: platform is not match("N35")
- name: "Enable feature PIM"
nxos_feature:
feature: pim
state: enabled
ignore_errors: yes
- name: Put interface in default mode
nxos_config:
commands:
- "default interface {{ intname }}"
match: none
ignore_errors: yes
- block:
- name: put interface in L3 and enable PIM
nxos_config:
commands:
- no switchport
- ip pim sparse-mode
parents:
- "interface {{ intname }}"
match: none
- name: Configure igmp interface with non-default values
nxos_igmp_interface: &non-default
interface: "{{ intname }}"
version: 3
startup_query_interval: 60
startup_query_count: 5
robustness: 6
querier_timeout: 2000
query_mrt: 12
query_interval: 200
last_member_qrt: 2
last_member_query_count: 4
report_llg: true
immediate_leave: true
group_timeout: 300
# deprecated
oif_prefix: 239.255.255.2
oif_source: 192.0.2.1
state: present
register: result
- assert: &true
that:
- "result.changed == true"
- name: "Check Idempotence - Configure igmp interface with non-default values"
nxos_igmp_interface: *non-default
register: result
- assert: &false
that:
- "result.changed == false"
- name: Configure igmp interface with some default values
nxos_igmp_interface: &sdef
interface: "{{ intname }}"
version: default
startup_query_interval: default
startup_query_count: default
robustness: default
querier_timeout: default
query_mrt: default
query_interval: default
last_member_qrt: default
last_member_query_count: default
group_timeout: default
oif_ps:
- {'prefix': '238.2.2.6'}
- {'prefix': '238.2.2.5'}
- {'source': '192.0.2.1', 'prefix': '238.2.2.5'}
state: present
register: result
- assert: *true
- name: "Check Idempotence - Configure igmp interface with some default values"
nxos_igmp_interface: *sdef
register: result
- assert: *false
- name: restart igmp
nxos_igmp_interface: &restart
interface: "{{ intname }}"
restart: "{{restart|default(omit)}}"
- name: Configure igmp interface with default oif_ps
nxos_igmp_interface: &defoif
interface: "{{ intname }}"
oif_ps: default
state: present
register: result
- assert: *true
- name: "Check Idempotence - Configure igmp interface with default oif_ps"
nxos_igmp_interface: *defoif
register: result
- assert: *false
- name: Configure igmp interface with oif_routemap
nxos_igmp_interface: &orm
interface: "{{ intname }}"
version: 3
startup_query_interval: 60
startup_query_count: 5
robustness: 6
oif_routemap: abcd
state: present
register: result
- assert: *true
- name: "Check Idempotence - Configure igmp interface with oif_routemap"
nxos_igmp_interface: *orm
register: result
- assert: *false
- name: Configure igmp interface with default state
nxos_igmp_interface: &default
interface: "{{ intname }}"
state: default
register: result
- assert: *true
- name: "Check Idempotence - Configure igmp interface with default state"
nxos_igmp_interface: *default
register: result
- assert: *false
- name: Configure igmp interface with absent state
nxos_igmp_interface: &absent
interface: "{{ intname }}"
state: absent
register: result
- assert: *true
- name: "Check Idempotence - Configure igmp interface with absent state"
nxos_igmp_interface: *absent
register: result
- assert: *false
always:
- name: Configure igmp interface with absent state
nxos_igmp_interface: *absent
register: result
- name: Put interface in default mode
nxos_config:
commands:
- "default interface {{ intname }}"
match: none
- name: "Disable feature PIM"
nxos_feature:
feature: pim
state: disabled
ignore_errors: yes
- debug: msg="END connection={{ ansible_connection }} nxos_igmp_interface sanity test"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,016 |
some vmware modules have options which should have been removed for Ansible 2.9
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, (some of) these modules have options marked with `removed_in_version='2.9'`. These options should have been removed before Ansible 2.9 was released. Since that is too late, it would be good if they could be removed before Ansible 2.10 is released.
```
lib/ansible/modules/cloud/vmware/vmware_guest_find.py:0:0: ansible-deprecated-version: Argument 'datacenter' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py:0:0: ansible-deprecated-version: Argument 'ip_address' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py:0:0: ansible-deprecated-version: Argument 'subnet_mask' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/cloud/vmware/vmware_guest_find.py
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67016
|
https://github.com/ansible/ansible/pull/67282
|
88f0c8522882467d512eb4f1769e0eaf78404760
|
808bf02588febe08f109364f20ad5d4a96a28100
| 2020-02-01T13:58:36Z |
python
| 2020-02-11T11:30:22Z |
changelogs/fragments/67282-remove_options_from_some_vmware_modules_that_aren't_used_in_the_code.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,016 |
some vmware modules have options which should have been removed for Ansible 2.9
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, (some of) these modules have options marked with `removed_in_version='2.9'`. These options should have been removed before Ansible 2.9 was released. Since that is too late, it would be good if they could be removed before Ansible 2.10 is released.
```
lib/ansible/modules/cloud/vmware/vmware_guest_find.py:0:0: ansible-deprecated-version: Argument 'datacenter' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py:0:0: ansible-deprecated-version: Argument 'ip_address' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py:0:0: ansible-deprecated-version: Argument 'subnet_mask' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/cloud/vmware/vmware_guest_find.py
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67016
|
https://github.com/ansible/ansible/pull/67282
|
88f0c8522882467d512eb4f1769e0eaf78404760
|
808bf02588febe08f109364f20ad5d4a96a28100
| 2020-02-01T13:58:36Z |
python
| 2020-02-11T11:30:22Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
* Windows Server 2008 and 2008 R2 will no longer be supported or tested in the next Ansible release, see :ref:`windows_faq_server2008`.
* The :ref:`win_stat <win_stat_module>` module has removed the deprecated ``get_md5`` option and ``md5`` return value.
* The :ref:`win_psexec <win_psexec_module>` module has removed the deprecated ``extra_opts`` option.
Modules
=======
Modules removed
---------------
The following modules no longer exist:
* letsencrypt use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ldap_attr use :ref:`ldap_attrs <ldap_attrs_module>` instead.
The following functionality will be removed in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`openssl_csr <openssl_csr_module>` module's option ``version`` no longer supports values other than ``1`` (the current only standardized CSR version).
* :ref:`docker_container <docker_container_module>`: the ``trust_image_content`` option will be removed. It has always been ignored by the module.
* :ref:`iam_managed_policy <iam_managed_policy_module>`: the ``fail_on_delete`` option will be removed. It has always been ignored by the module.
* :ref:`s3_lifecycle <s3_lifecycle_module>`: the ``requester_pays`` option will be removed. It has always been ignored by the module.
* :ref:`s3_sync <s3_sync_module>`: the ``retries`` option will be removed. It has always been ignored by the module.
* The return values ``err`` and ``out`` of :ref:`docker_stack <docker_stack_module>` have been deprecated. Use ``stdout`` and ``stderr`` from now on instead.
* :ref:`cloudformation <cloudformation_module>`: the ``template_format`` option will be removed. It has been ignored by the module since Ansible 2.3.
* :ref:`data_pipeline <data_pipeline_module>`: the ``version`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_eip <ec2_eip_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.3.
* :ref:`ec2_key <ec2_key_module>`: the ``wait`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_key <ec2_key_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_lc <ec2_lc_module>`: the ``associate_public_ip_address`` option will be removed. It has always been ignored by the module.
* :ref:`iam_policy <iam_policy_module>`: the ``policy_document`` option will be removed. To maintain the existing behavior use the ``policy_json`` option and read the file with the ``lookup`` plugin (see the sketch after this list).
* :ref:`redfish_config <redfish_config_module>`: the ``bios_attribute_name`` and ``bios_attribute_value`` options will be removed. To maintain the existing behavior use the ``bios_attributes`` option instead.
* :ref:`clc_aa_policy <clc_aa_policy_module>`: the ``wait`` parameter will be removed. It has always been ignored by the module.
* :ref:`redfish_config <redfish_config_module>`, :ref:`redfish_command <redfish_command_module>`: the behavior to select the first System, Manager, or Chassis resource to modify when multiple are present will be removed. Use the new ``resource_id`` option to specify target resource to modify.
* :ref:`win_domain_controller <win_domain_controller_module>`: the ``log_path`` option will be removed. This was undocumented and only related to debugging information for module development.
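As a minimal sketch of the ``policy_json`` plus ``lookup`` pattern referenced above (the IAM user, policy name, and file name are hypothetical):
```yaml
- name: Attach an inline policy loaded from a local JSON file
  iam_policy:
    iam_type: user
    iam_name: example_user           # hypothetical IAM user
    policy_name: ExamplePolicy       # hypothetical policy name
    policy_json: "{{ lookup('file', 'example_policy.json') }}"
    state: present
```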
The following functionality will change in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`docker_container <docker_container_module>` module has a new option, ``container_default_behavior``, whose default value will change from ``compatibility`` to ``no_defaults``. Set it to an explicit value to avoid deprecation warnings (see the sketch after this list).
* The :ref:`docker_container <docker_container_module>` module's ``network_mode`` option will be set by default to the name of the first network in ``networks`` if at least one network is given and ``networks_cli_compatible`` is ``true`` (will be default from Ansible 2.12 on). Set to an explicit value to avoid deprecation warnings if you specify networks and set ``networks_cli_compatible`` to ``true``. The current default (not specifying it) is equivalent to the value ``default``.
* :ref:`ec2 <ec2_module>`: the ``group`` and ``group_id`` options will become mutually exclusive. Currently ``group_id`` is ignored if you pass both.
* :ref:`iam_policy <iam_policy_module>`: the default value for the ``skip_duplicates`` option will change from ``true`` to ``false``. To maintain the existing behavior explicitly set it to ``true``.
* :ref:`iam_role <iam_role_module>`: the ``purge_policies`` option (also known as ``purge_policy``) default value will change from ``true`` to ``false``.
* :ref:`elb_network_lb <elb_network_lb_module>`: the default behaviour for the ``state`` option will change from ``absent`` to ``present``. To maintain the existing behavior explicitly set state to ``absent``.
* :ref:`vmware_tag_info <vmware_tag_info_module>`: the module will not return ``tag_facts`` since it does not return multiple tags with the same name and different category id. To maintain the existing behavior use ``tag_info`` which is a list of tag metadata.
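As a minimal sketch of pinning the new ``docker_container`` default behaviour explicitly (the container and image names are hypothetical):
```yaml
- name: Start a container with an explicit defaults mode to avoid the deprecation warning
  docker_container:
    name: example_app                # hypothetical container name
    image: nginx:latest              # hypothetical image
    container_default_behavior: no_defaults
```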
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ``vmware_dns_config`` use :ref:`vmware_host_dns <vmware_host_dns_module>` instead.
Noteworthy module changes
-------------------------
* Ansible modules created with ``add_file_common_args=True`` added a number of undocumented arguments which were mostly there to ease implementing certain action plugins. The undocumented arguments ``src``, ``follow``, ``force``, ``content``, ``backup``, ``remote_src``, ``regexp``, ``delimiter``, and ``directory_mode`` are now no longer added. Modules relying on these options to be added need to specify them by themselves.
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in the :ref:`pacman <pacman_module>` module has been removed; use ``extra_args=--recursive`` instead (see the sketch after this list).
* :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module does not require VM name which was a required parameter for releases prior to Ansible 2.10.
* :ref:`zabbix_action <zabbix_action_module>` no longer requires ``esc_period`` and ``event_source`` arguments when ``state=absent``.
* :ref:`zabbix_proxy <zabbix_proxy_module>` deprecates ``interface`` sub-options ``type`` and ``main`` when proxy type is set to passive via ``status=passive``. Make sure these suboptions are removed from your playbook as they were never supported by Zabbix in the first place.
* :ref:`gitlab_user <gitlab_user_module>` no longer requires ``name``, ``email`` and ``password`` arguments when ``state=absent``.
* :ref:`win_pester <win_pester_module>` no longer runs every ``*.ps1`` file in the specified directory, because doing so could execute potentially unknown scripts. It now follows Pester's own default behaviour of only running tests in files that match ``*.tests.ps1``.
* :ref:`win_find <win_find_module>` has been refactored to better match the behaviour of the ``find`` module. Here is what has changed:
* When the directory specified by ``paths`` does not exist or is a file, it will no longer fail and will just warn the user
* Junction points are no longer reported as ``islnk``; use ``isjunction`` to properly report these files. This behaviour matches the :ref:`win_stat <win_stat_module>` module.
* Directories no longer return a ``size``, this matches the ``stat`` and ``find`` behaviour and has been removed due to the difficulties in correctly reporting the size of a directory
* :ref:`docker_container <docker_container_module>` no longer passes information on non-anonymous volumes or binds as ``Volumes`` to the Docker daemon. This increases compatibility with the ``docker`` CLI program. Note that if you specify ``volumes: strict`` in ``comparisons``, this could cause existing containers created with docker_container from Ansible 2.9 or earlier to restart.
* :ref:`docker_container <docker_container_module>`'s support for port ranges was adjusted to be more compatible to the ``docker`` command line utility: a one-port container range combined with a multiple-port host range will no longer result in only the first host port be used, but the whole range being passed to Docker so that a free port in that range will be used.
* :ref:`purefb_fs <purefb_fs_module>` no longer supports the deprecated ``nfs`` option. This has been superseded by ``nfsv3``.
* :ref:`nxos_igmp_interface <nxos_igmp_interface_module>` no longer supports the deprecated ``oif_prefix`` and ``oif_source`` options. These have been superseded by ``oif_ps``.
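As a minimal sketch of the ``extra_args`` replacement for the removed ``recurse`` option of ``pacman`` (the package name is hypothetical):
```yaml
- name: Remove a package together with its no-longer-needed dependencies
  pacman:
    name: example-package            # hypothetical package name
    state: absent
    extra_args: --recursive
```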
Plugins
=======
Lookup plugin names case-sensitivity
------------------------------------
* Prior to Ansible ``2.10`` lookup plugin names passed in as an argument to the ``lookup()`` function were treated as case-insensitive as opposed to lookups invoked via ``with_<lookup_name>``. ``2.10`` brings consistency to ``lookup()`` and ``with_`` to be both case-sensitive.
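A minimal sketch of the difference, using the built-in ``env`` lookup:
```yaml
# Accepted in 2.9 because lookup() matched plugin names case-insensitively;
# fails in 2.10 now that lookup() is case-sensitive
- debug:
    msg: "{{ lookup('ENV', 'HOME') }}"

# Portable spelling that works in both releases and matches with_env
- debug:
    msg: "{{ lookup('env', 'HOME') }}"
```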
Noteworthy plugin changes
-------------------------
* The ``hashi_vault`` lookup plugin now returns the latest version when using the KV v2 secrets engine. Previously, it returned all versions of the secret which required additional steps to extract and filter the desired version.
* Some undocumented arguments from ``FILE_COMMON_ARGUMENTS`` have been removed; plugins using these, in particular action plugins, need to be adjusted. The undocumented arguments which were removed are ``src``, ``follow``, ``force``, ``content``, ``backup``, ``remote_src``, ``regexp``, ``delimiter``, and ``directory_mode``.
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,016 |
some vmware modules have options which should have been removed for Ansible 2.9
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, (some of) these modules have options marked with `removed_in_version='2.9'`. These options should have been removed before Ansible 2.9 was released. Since that is too late, it would be good if they could be removed before Ansible 2.10 is released.
```
lib/ansible/modules/cloud/vmware/vmware_guest_find.py:0:0: ansible-deprecated-version: Argument 'datacenter' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py:0:0: ansible-deprecated-version: Argument 'ip_address' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py:0:0: ansible-deprecated-version: Argument 'subnet_mask' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/cloud/vmware/vmware_guest_find.py
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67016
|
https://github.com/ansible/ansible/pull/67282
|
88f0c8522882467d512eb4f1769e0eaf78404760
|
808bf02588febe08f109364f20ad5d4a96a28100
| 2020-02-01T13:58:36Z |
python
| 2020-02-11T11:30:22Z |
lib/ansible/modules/cloud/vmware/vmware_guest_find.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: vmware_guest_find
short_description: Find the folder path(s) for a virtual machine by name or UUID
description:
- Find the folder path(s) for a virtual machine by name or UUID
version_added: 2.4
author:
- Abhijeet Kasurde (@Akasurde) <[email protected]>
notes:
- Tested on vSphere 6.5
requirements:
- "python >= 2.6"
- PyVmomi
options:
name:
description:
- Name of the VM to work with.
- This is required if C(uuid) parameter is not supplied.
type: str
uuid:
description:
- UUID of the instance to manage if known, this is VMware's BIOS UUID by default.
- This is required if C(name) parameter is not supplied.
type: str
use_instance_uuid:
description:
- Whether to use the VMware instance UUID rather than the BIOS UUID.
default: no
type: bool
version_added: '2.8'
datacenter:
description:
- Destination datacenter for the find operation.
- Deprecated in 2.5, will be removed in 2.9 release.
type: str
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- name: Find Guest's Folder using name
vmware_guest_find:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
name: testvm
delegate_to: localhost
register: vm_folder
- name: Find Guest's Folder using UUID
vmware_guest_find:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
uuid: 38c4c89c-b3d7-4ae6-ae4e-43c5118eae49
delegate_to: localhost
register: vm_folder
'''
RETURN = r"""
folders:
description: List of folders for user specified virtual machine
returned: on success
type: list
sample: [
'/DC0/vm',
]
"""
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec, find_vm_by_id
try:
from pyVmomi import vim
except ImportError:
pass
class PyVmomiHelper(PyVmomi):
def __init__(self, module):
super(PyVmomiHelper, self).__init__(module)
self.name = self.params['name']
self.uuid = self.params['uuid']
self.use_instance_uuid = self.params['use_instance_uuid']
def getvm_folder_paths(self):
results = []
vms = []
if self.uuid:
if self.use_instance_uuid:
vm_obj = find_vm_by_id(self.content, vm_id=self.uuid, vm_id_type="instance_uuid")
else:
vm_obj = find_vm_by_id(self.content, vm_id=self.uuid, vm_id_type="uuid")
if vm_obj is None:
self.module.fail_json(msg="Failed to find the virtual machine with UUID : %s" % self.uuid)
vms = [vm_obj]
elif self.name:
objects = self.get_managed_objects_properties(vim_type=vim.VirtualMachine, properties=['name'])
for temp_vm_object in objects:
if temp_vm_object.obj.name == self.name:
vms.append(temp_vm_object.obj)
for vm in vms:
folder_path = self.get_vm_path(self.content, vm)
results.append(folder_path)
return results
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
name=dict(type='str'),
uuid=dict(type='str'),
use_instance_uuid=dict(type='bool', default=False),
datacenter=dict(removed_in_version=2.9, type='str')
)
module = AnsibleModule(argument_spec=argument_spec,
required_one_of=[['name', 'uuid']],
mutually_exclusive=[['name', 'uuid']],
)
pyv = PyVmomiHelper(module)
# Check if the VM exists before continuing
folders = pyv.getvm_folder_paths()
# VM already exists
if folders:
try:
module.exit_json(folders=folders)
except Exception as exc:
module.fail_json(msg="Folder enumeration failed with exception %s" % to_native(exc))
else:
module.fail_json(msg="Unable to find folders for virtual machine %s" % (module.params.get('name') or
module.params.get('uuid')))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,016 |
some vmware modules have options which should have been removed for Ansible 2.9
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, (some of) these modules have options marked with `removed_in_version='2.9'`. These options should have been removed before Ansible 2.9 was released. Since that is too late, it would be good if they could be removed before Ansible 2.10 is released.
```
lib/ansible/modules/cloud/vmware/vmware_guest_find.py:0:0: ansible-deprecated-version: Argument 'datacenter' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py:0:0: ansible-deprecated-version: Argument 'ip_address' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py:0:0: ansible-deprecated-version: Argument 'subnet_mask' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/cloud/vmware/vmware_guest_find.py
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67016
|
https://github.com/ansible/ansible/pull/67282
|
88f0c8522882467d512eb4f1769e0eaf78404760
|
808bf02588febe08f109364f20ad5d4a96a28100
| 2020-02-01T13:58:36Z |
python
| 2020-02-11T11:30:22Z |
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Joseph Callen <jcallen () csc.com>
# Copyright: (c) 2017-18, Ansible Project
# Copyright: (c) 2017-18, Abhijeet Kasurde <[email protected]>
# Copyright: (c) 2018, Christian Kotte <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: vmware_vmkernel
short_description: Manages a VMware VMkernel Adapter of an ESXi host.
description:
- This module can be used to manage the VMKernel adapters / VMKernel network interfaces of an ESXi host.
- The module assumes that the host is already configured with the Port Group in case of a vSphere Standard Switch (vSS).
- The module assumes that the host is already configured with the Distributed Port Group in case of a vSphere Distributed Switch (vDS).
- The module automatically migrates the VMKernel adapter from vSS to vDS or vice versa if present.
version_added: 2.0
author:
- Joseph Callen (@jcpowermac)
- Russell Teague (@mtnbikenc)
- Abhijeet Kasurde (@Akasurde)
- Christian Kotte (@ckotte)
notes:
- The option C(device) needs to be used with DHCP because otherwise it's not possible to check if a VMkernel device is already present
- You can only change from DHCP to static, and vSS to vDS, or vice versa, in one step, without creating a new device, with C(device) specified.
- You can only create the VMKernel adapter on a vDS if authenticated to vCenter and not if authenticated to ESXi.
- Tested on vSphere 5.5 and 6.5
requirements:
- "python >= 2.6"
- PyVmomi
options:
vswitch_name:
description:
- The name of the vSwitch where to add the VMKernel interface.
- Required parameter only if C(state) is set to C(present).
- Optional parameter from version 2.5 and onwards.
type: str
aliases: ['vswitch']
dvswitch_name:
description:
- The name of the vSphere Distributed Switch (vDS) where to add the VMKernel interface.
- Required parameter only if C(state) is set to C(present).
- Optional parameter from version 2.8 and onwards.
type: str
aliases: ['dvswitch']
version_added: 2.8
portgroup_name:
description:
- The name of the port group for the VMKernel interface.
required: True
aliases: ['portgroup']
type: str
network:
description:
- A dictionary of network details.
- 'The following parameter is required:'
- ' - C(type) (string): Type of IP assignment (either C(dhcp) or C(static)).'
- 'The following parameters are required when C(type) is set to C(static):'
- ' - C(ip_address) (string): Static IP address (implies C(type: static)).'
- ' - C(subnet_mask) (string): Static netmask required for C(ip_address).'
- 'The following parameter is optional when C(type) is set to C(static):'
- ' - C(default_gateway) (string): Default gateway (Override default gateway for this adapter).'
- 'The following parameter is optional:'
- ' - C(tcpip_stack) (string): The TCP/IP stack for the VMKernel interface. Can be default, provisioning, vmotion, or vxlan. (default: default)'
type: dict
default: {
type: 'static',
tcpip_stack: 'default',
}
version_added: 2.5
ip_address:
description:
- The IP Address for the VMKernel interface.
- Use C(network) parameter with C(ip_address) instead.
- Deprecated option, will be removed in version 2.9.
type: str
subnet_mask:
description:
- The Subnet Mask for the VMKernel interface.
- Use C(network) parameter with C(subnet_mask) instead.
- Deprecated option, will be removed in version 2.9.
type: str
mtu:
description:
- The MTU for the VMKernel interface.
- The default value of 1500 is valid from version 2.5 and onwards.
default: 1500
type: int
device:
description:
- Search VMkernel adapter by device name.
- The parameter is required only when C(type) is set to C(dhcp).
version_added: 2.8
type: str
enable_vsan:
description:
- Enable VSAN traffic on the VMKernel adapter.
- This option is only allowed if the default TCP/IP stack is used.
type: bool
enable_vmotion:
description:
- Enable vMotion traffic on the VMKernel adapter.
- This option is only allowed if the default TCP/IP stack is used.
- You cannot enable vMotion on an additional adapter if you already have an adapter with the vMotion TCP/IP stack configured.
type: bool
enable_mgmt:
description:
- Enable Management traffic on the VMKernel adapter.
- This option is only allowed if the default TCP/IP stack is used.
type: bool
enable_ft:
description:
- Enable Fault Tolerance traffic on the VMKernel adapter.
- This option is only allowed if the default TCP/IP stack is used.
type: bool
enable_provisioning:
description:
- Enable Provisioning traffic on the VMKernel adapter.
- This option is only allowed if the default TCP/IP stack is used.
type: bool
version_added: 2.8
enable_replication:
description:
- Enable vSphere Replication traffic on the VMKernel adapter.
- This option is only allowed if the default TCP/IP stack is used.
type: bool
version_added: 2.8
enable_replication_nfc:
description:
- Enable vSphere Replication NFC traffic on the VMKernel adapter.
- This option is only allowed if the default TCP/IP stack is used.
type: bool
version_added: 2.8
state:
description:
- If set to C(present), the VMKernel adapter will be created with the given specifications.
- If set to C(absent), the VMKernel adapter will be removed.
- If set to C(present) and VMKernel adapter exists, the configurations will be updated.
choices: [ present, absent ]
default: present
version_added: 2.5
type: str
esxi_hostname:
description:
- Name of ESXi host to which VMKernel is to be managed.
- "From version 2.5 onwards, this parameter is required."
required: True
version_added: 2.5
type: str
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = '''
- name: Add Management vmkernel port using static network type
vmware_vmkernel:
hostname: '{{ esxi_hostname }}'
username: '{{ esxi_username }}'
password: '{{ esxi_password }}'
esxi_hostname: '{{ esxi_hostname }}'
vswitch_name: vSwitch0
portgroup_name: PG_0001
network:
type: 'static'
ip_address: 192.168.127.10
subnet_mask: 255.255.255.0
state: present
enable_mgmt: True
delegate_to: localhost
- name: Add Management vmkernel port using DHCP network type
vmware_vmkernel:
hostname: '{{ esxi_hostname }}'
username: '{{ esxi_username }}'
password: '{{ esxi_password }}'
esxi_hostname: '{{ esxi_hostname }}'
vswitch_name: vSwitch0
portgroup_name: PG_0002
state: present
network:
type: 'dhcp'
enable_mgmt: True
delegate_to: localhost
- name: Change IP allocation from static to dhcp
vmware_vmkernel:
hostname: '{{ esxi_hostname }}'
username: '{{ esxi_username }}'
password: '{{ esxi_password }}'
esxi_hostname: '{{ esxi_hostname }}'
vswitch_name: vSwitch0
portgroup_name: PG_0002
state: present
device: vmk1
network:
type: 'dhcp'
enable_mgmt: True
delegate_to: localhost
- name: Delete VMkernel port
vmware_vmkernel:
hostname: '{{ esxi_hostname }}'
username: '{{ esxi_username }}'
password: '{{ esxi_password }}'
esxi_hostname: '{{ esxi_hostname }}'
vswitch_name: vSwitch0
portgroup_name: PG_0002
state: absent
delegate_to: localhost
- name: Add Management vmkernel port to Distributed Switch
vmware_vmkernel:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
esxi_hostname: '{{ esxi_hostname }}'
dvswitch_name: dvSwitch1
portgroup_name: dvPG_0001
network:
type: 'static'
ip_address: 192.168.127.10
subnet_mask: 255.255.255.0
state: present
enable_mgmt: True
delegate_to: localhost
- name: Add vMotion vmkernel port with vMotion TCP/IP stack
vmware_vmkernel:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
esxi_hostname: '{{ esxi_hostname }}'
dvswitch_name: dvSwitch1
portgroup_name: dvPG_0001
network:
type: 'static'
ip_address: 192.168.127.10
subnet_mask: 255.255.255.0
tcpip_stack: vmotion
state: present
delegate_to: localhost
'''
RETURN = r'''
result:
description: metadata about VMKernel name
returned: always
type: dict
sample: {
"changed": false,
"msg": "VMkernel Adapter already configured properly",
"device": "vmk1",
"ipv4": "static",
"ipv4_gw": "No override",
"ipv4_ip": "192.168.1.15",
"ipv4_sm": "255.255.255.0",
"mtu": 9000,
"services": "vMotion",
"switch": "vDS"
}
'''
try:
from pyVmomi import vim, vmodl
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import (
PyVmomi, TaskError, vmware_argument_spec, wait_for_task,
find_dvspg_by_name, find_dvs_by_name, get_all_objs
)
from ansible.module_utils._text import to_native
class PyVmomiHelper(PyVmomi):
"""Class to manage VMkernel configuration of an ESXi host system"""
def __init__(self, module):
super(PyVmomiHelper, self).__init__(module)
if self.params['network']:
self.network_type = self.params['network'].get('type')
self.ip_address = self.params['network'].get('ip_address', None)
self.subnet_mask = self.params['network'].get('subnet_mask', None)
self.default_gateway = self.params['network'].get('default_gateway', None)
self.tcpip_stack = self.params['network'].get('tcpip_stack')
self.device = self.params['device']
if self.network_type == 'dhcp' and not self.device:
module.fail_json(msg="device is a required parameter when network type is set to 'dhcp'")
self.mtu = self.params['mtu']
self.enable_vsan = self.params['enable_vsan']
self.enable_vmotion = self.params['enable_vmotion']
self.enable_mgmt = self.params['enable_mgmt']
self.enable_ft = self.params['enable_ft']
self.enable_provisioning = self.params['enable_provisioning']
self.enable_replication = self.params['enable_replication']
self.enable_replication_nfc = self.params['enable_replication_nfc']
self.vswitch_name = self.params['vswitch_name']
self.vds_name = self.params['dvswitch_name']
self.port_group_name = self.params['portgroup_name']
self.esxi_host_name = self.params['esxi_hostname']
hosts = self.get_all_host_objs(esxi_host_name=self.esxi_host_name)
if hosts:
self.esxi_host_obj = hosts[0]
else:
self.module.fail_json(
msg="Failed to get details of ESXi server. Please specify esxi_hostname."
)
if self.network_type == 'static':
if self.module.params['state'] == 'absent':
pass
elif not self.ip_address:
module.fail_json(msg="ip_address is a required parameter when network type is set to 'static'")
elif not self.subnet_mask:
module.fail_json(msg="subnet_mask is a required parameter when network type is set to 'static'")
# find Port Group
if self.vswitch_name:
self.port_group_obj = self.get_port_group_by_name(
host_system=self.esxi_host_obj,
portgroup_name=self.port_group_name,
vswitch_name=self.vswitch_name
)
if not self.port_group_obj:
module.fail_json(msg="Portgroup '%s' not found on vSS '%s'" % (self.port_group_name, self.vswitch_name))
elif self.vds_name:
self.dv_switch_obj = find_dvs_by_name(self.content, self.vds_name)
if not self.dv_switch_obj:
module.fail_json(msg="vDS '%s' not found" % self.vds_name)
self.port_group_obj = find_dvspg_by_name(self.dv_switch_obj, self.port_group_name)
if not self.port_group_obj:
module.fail_json(msg="Portgroup '%s' not found on vDS '%s'" % (self.port_group_name, self.vds_name))
# find VMkernel Adapter
if self.device:
self.vnic = self.get_vmkernel_by_device(device_name=self.device)
else:
# config change (e.g. DHCP to static, or vice versa); doesn't work with virtual port change
self.vnic = self.get_vmkernel_by_portgroup_new(port_group_name=self.port_group_name)
if not self.vnic and self.network_type == 'static':
# vDS to vSS or vSS to vSS (static IP)
self.vnic = self.get_vmkernel_by_ip(ip_address=self.ip_address)
def get_port_group_by_name(self, host_system, portgroup_name, vswitch_name):
"""
Get specific port group by given name
Args:
host_system: Name of Host System
portgroup_name: Name of Port Group
vswitch_name: Name of the vSwitch
Returns: List of port groups by given specifications
"""
portgroups = self.get_all_port_groups_by_host(host_system=host_system)
for portgroup in portgroups:
if portgroup.spec.vswitchName == vswitch_name and portgroup.spec.name == portgroup_name:
return portgroup
return None
def ensure(self):
"""
Manage internal VMKernel management
Returns: NA
"""
host_vmk_states = {
'absent': {
'present': self.host_vmk_delete,
'absent': self.host_vmk_unchange,
},
'present': {
'present': self.host_vmk_update,
'absent': self.host_vmk_create,
}
}
try:
host_vmk_states[self.module.params['state']][self.check_state()]()
except vmodl.RuntimeFault as runtime_fault:
self.module.fail_json(msg=to_native(runtime_fault.msg))
except vmodl.MethodFault as method_fault:
self.module.fail_json(msg=to_native(method_fault.msg))
def get_vmkernel_by_portgroup_new(self, port_group_name=None):
"""
Check if vmkernel available or not
Args:
port_group_name: name of port group
Returns: vmkernel managed object if vmkernel found, false if not
"""
for vnic in self.esxi_host_obj.config.network.vnic:
# check if it's a vSS Port Group
if vnic.spec.portgroup == port_group_name:
return vnic
# check if it's a vDS Port Group
try:
if vnic.spec.distributedVirtualPort.portgroupKey == self.port_group_obj.key:
return vnic
except AttributeError:
pass
return False
def get_vmkernel_by_ip(self, ip_address):
"""
Check if vmkernel available or not
Args:
ip_address: IP address of vmkernel device
Returns: vmkernel managed object if vmkernel found, false if not
"""
for vnic in self.esxi_host_obj.config.network.vnic:
if vnic.spec.ip.ipAddress == ip_address:
return vnic
return None
def get_vmkernel_by_device(self, device_name):
"""
Check if vmkernel available or not
Args:
device_name: name of vmkernel device
Returns: vmkernel managed object if vmkernel found, false if not
"""
for vnic in self.esxi_host_obj.config.network.vnic:
if vnic.device == device_name:
return vnic
return None
def check_state(self):
"""
Check internal state management
Returns: Present if found and absent if not found
"""
return 'present' if self.vnic else 'absent'
def host_vmk_delete(self):
"""
Delete VMKernel
Returns: NA
"""
results = dict(changed=False, msg='')
vmk_device = self.vnic.device
try:
if self.module.check_mode:
results['msg'] = "VMkernel Adapter would be deleted"
else:
self.esxi_host_obj.configManager.networkSystem.RemoveVirtualNic(vmk_device)
results['msg'] = "VMkernel Adapter deleted"
results['changed'] = True
results['device'] = vmk_device
except vim.fault.NotFound as not_found:
self.module.fail_json(
msg="Failed to find vmk to delete due to %s" %
to_native(not_found.msg)
)
except vim.fault.HostConfigFault as host_config_fault:
self.module.fail_json(
msg="Failed to delete vmk due host config issues : %s" %
to_native(host_config_fault.msg)
)
self.module.exit_json(**results)
def host_vmk_unchange(self):
"""
Denote no change in VMKernel
Returns: NA
"""
self.module.exit_json(changed=False)
def host_vmk_update(self):
"""
Update VMKernel with given parameters
Returns: NA
"""
changed = changed_settings = changed_vds = changed_services = \
changed_service_vmotion = changed_service_mgmt = changed_service_ft = \
changed_service_vsan = changed_service_prov = changed_service_rep = changed_service_rep_nfc = False
changed_list = []
results = dict(changed=False, msg='')
results['tcpip_stack'] = self.tcpip_stack
net_stack_instance_key = self.get_api_net_stack_instance(self.tcpip_stack)
if self.vnic.spec.netStackInstanceKey != net_stack_instance_key:
self.module.fail_json(msg="The TCP/IP stack cannot be changed on an existing VMkernel adapter!")
# Check MTU
results['mtu'] = self.mtu
if self.vnic.spec.mtu != self.mtu:
changed_settings = True
changed_list.append("MTU")
results['mtu_previous'] = self.vnic.spec.mtu
# Check IPv4 settings
results['ipv4'] = self.network_type
results['ipv4_ip'] = self.ip_address
results['ipv4_sm'] = self.subnet_mask
if self.default_gateway:
results['ipv4_gw'] = self.default_gateway
else:
results['ipv4_gw'] = "No override"
if self.vnic.spec.ip.dhcp:
if self.network_type == 'static':
changed_settings = True
changed_list.append("IPv4 settings")
results['ipv4_previous'] = "DHCP"
if not self.vnic.spec.ip.dhcp:
if self.network_type == 'dhcp':
changed_settings = True
changed_list.append("IPv4 settings")
results['ipv4_previous'] = "static"
elif self.network_type == 'static':
if self.ip_address != self.vnic.spec.ip.ipAddress:
changed_settings = True
changed_list.append("IP")
results['ipv4_ip_previous'] = self.vnic.spec.ip.ipAddress
if self.subnet_mask != self.vnic.spec.ip.subnetMask:
changed_settings = True
changed_list.append("SM")
results['ipv4_sm_previous'] = self.vnic.spec.ip.subnetMask
if self.default_gateway:
try:
if self.default_gateway != self.vnic.spec.ipRouteSpec.ipRouteConfig.defaultGateway:
changed_settings = True
changed_list.append("GW override")
results['ipv4_gw_previous'] = self.vnic.spec.ipRouteSpec.ipRouteConfig.defaultGateway
except AttributeError:
changed_settings = True
changed_list.append("GW override")
results['ipv4_gw_previous'] = "No override"
else:
try:
if self.vnic.spec.ipRouteSpec.ipRouteConfig.defaultGateway:
changed_settings = True
changed_list.append("GW override")
results['ipv4_gw_previous'] = self.vnic.spec.ipRouteSpec.ipRouteConfig.defaultGateway
except AttributeError:
pass
# Check virtual port (vSS or vDS)
results['portgroup'] = self.port_group_name
dvs_uuid = None
if self.vswitch_name:
results['switch'] = self.vswitch_name
try:
if self.vnic.spec.distributedVirtualPort.switchUuid:
changed_vds = True
changed_list.append("Virtual Port")
dvs_uuid = self.vnic.spec.distributedVirtualPort.switchUuid
except AttributeError:
pass
if changed_vds:
results['switch_previous'] = self.find_dvs_by_uuid(dvs_uuid)
self.dv_switch_obj = find_dvs_by_name(self.content, results['switch_previous'])
results['portgroup_previous'] = self.find_dvspg_by_key(
self.dv_switch_obj, self.vnic.spec.distributedVirtualPort.portgroupKey
)
elif self.vds_name:
results['switch'] = self.vds_name
try:
if self.vnic.spec.distributedVirtualPort.switchUuid != self.dv_switch_obj.uuid:
changed_vds = True
changed_list.append("Virtual Port")
dvs_uuid = self.vnic.spec.distributedVirtualPort.switchUuid
except AttributeError:
changed_vds = True
changed_list.append("Virtual Port")
if changed_vds:
results['switch_previous'] = self.find_dvs_by_uuid(dvs_uuid)
results['portgroup_previous'] = self.vnic.spec.portgroup
portgroups = self.get_all_port_groups_by_host(host_system=self.esxi_host_obj)
for portgroup in portgroups:
if portgroup.spec.name == self.vnic.spec.portgroup:
results['switch_previous'] = portgroup.spec.vswitchName
results['services'] = self.create_enabled_services_string()
# Check configuration of service types (only if default TCP/IP stack is used)
if self.vnic.spec.netStackInstanceKey == 'defaultTcpipStack':
service_type_vmks = self.get_all_vmks_by_service_type()
if (self.enable_vmotion and self.vnic.device not in service_type_vmks['vmotion']) or \
(not self.enable_vmotion and self.vnic.device in service_type_vmks['vmotion']):
changed_services = changed_service_vmotion = True
if (self.enable_mgmt and self.vnic.device not in service_type_vmks['management']) or \
(not self.enable_mgmt and self.vnic.device in service_type_vmks['management']):
changed_services = changed_service_mgmt = True
if (self.enable_ft and self.vnic.device not in service_type_vmks['faultToleranceLogging']) or \
(not self.enable_ft and self.vnic.device in service_type_vmks['faultToleranceLogging']):
changed_services = changed_service_ft = True
if (self.enable_vsan and self.vnic.device not in service_type_vmks['vsan']) or \
(not self.enable_vsan and self.vnic.device in service_type_vmks['vsan']):
changed_services = changed_service_vsan = True
if (self.enable_provisioning and self.vnic.device not in service_type_vmks['vSphereProvisioning']) or \
(not self.enable_provisioning and self.vnic.device in service_type_vmks['vSphereProvisioning']):
changed_services = changed_service_prov = True
if (self.enable_replication and self.vnic.device not in service_type_vmks['vSphereReplication']) or \
(not self.enable_replication and self.vnic.device in service_type_vmks['vSphereReplication']):
changed_services = changed_service_rep = True
if (self.enable_replication_nfc and self.vnic.device not in service_type_vmks['vSphereReplicationNFC']) or \
(not self.enable_replication_nfc and self.vnic.device in service_type_vmks['vSphereReplicationNFC']):
changed_services = changed_service_rep_nfc = True
if changed_services:
changed_list.append("services")
if changed_settings or changed_vds or changed_services:
changed = True
if self.module.check_mode:
changed_suffix = ' would be updated'
else:
changed_suffix = ' updated'
if len(changed_list) > 2:
message = ', '.join(changed_list[:-1]) + ', and ' + str(changed_list[-1])
elif len(changed_list) == 2:
message = ' and '.join(changed_list)
elif len(changed_list) == 1:
message = changed_list[0]
message = "VMkernel Adapter " + message + changed_suffix
if changed_settings or changed_vds:
vnic_config = vim.host.VirtualNic.Specification()
ip_spec = vim.host.IpConfig()
if self.network_type == 'dhcp':
ip_spec.dhcp = True
else:
ip_spec.dhcp = False
ip_spec.ipAddress = self.ip_address
ip_spec.subnetMask = self.subnet_mask
if self.default_gateway:
vnic_config.ipRouteSpec = vim.host.VirtualNic.IpRouteSpec()
vnic_config.ipRouteSpec.ipRouteConfig = vim.host.IpRouteConfig()
vnic_config.ipRouteSpec.ipRouteConfig.defaultGateway = self.default_gateway
else:
vnic_config.ipRouteSpec = vim.host.VirtualNic.IpRouteSpec()
vnic_config.ipRouteSpec.ipRouteConfig = vim.host.IpRouteConfig()
vnic_config.ip = ip_spec
vnic_config.mtu = self.mtu
if changed_vds:
if self.vswitch_name:
vnic_config.portgroup = self.port_group_name
elif self.vds_name:
vnic_config.distributedVirtualPort = vim.dvs.PortConnection()
vnic_config.distributedVirtualPort.switchUuid = self.dv_switch_obj.uuid
vnic_config.distributedVirtualPort.portgroupKey = self.port_group_obj.key
try:
if not self.module.check_mode:
self.esxi_host_obj.configManager.networkSystem.UpdateVirtualNic(self.vnic.device, vnic_config)
except vim.fault.NotFound as not_found:
self.module.fail_json(
msg="Failed to update vmk as virtual network adapter cannot be found %s" %
to_native(not_found.msg)
)
except vim.fault.HostConfigFault as host_config_fault:
self.module.fail_json(
msg="Failed to update vmk due to host config issues : %s" %
to_native(host_config_fault.msg)
)
except vim.fault.InvalidState as invalid_state:
self.module.fail_json(
msg="Failed to update vmk as ipv6 address is specified in an ipv4 only system : %s" %
to_native(invalid_state.msg)
)
except vmodl.fault.InvalidArgument as invalid_arg:
self.module.fail_json(
msg="Failed to update vmk as IP address or Subnet Mask in the IP configuration "
"are invalid or PortGroup does not exist : %s" % to_native(invalid_arg.msg)
)
if changed_services:
changed_list.append("Services")
services_previous = []
vnic_manager = self.esxi_host_obj.configManager.virtualNicManager
if changed_service_mgmt:
if self.vnic.device in service_type_vmks['management']:
services_previous.append('Mgmt')
operation = 'select' if self.enable_mgmt else 'deselect'
self.set_service_type(
vnic_manager=vnic_manager, vmk=self.vnic, service_type='management', operation=operation
)
if changed_service_vmotion:
if self.vnic.device in service_type_vmks['vmotion']:
services_previous.append('vMotion')
operation = 'select' if self.enable_vmotion else 'deselect'
self.set_service_type(
vnic_manager=vnic_manager, vmk=self.vnic, service_type='vmotion', operation=operation
)
if changed_service_ft:
if self.vnic.device in service_type_vmks['faultToleranceLogging']:
services_previous.append('FT')
operation = 'select' if self.enable_ft else 'deselect'
self.set_service_type(
vnic_manager=vnic_manager, vmk=self.vnic, service_type='faultToleranceLogging', operation=operation
)
if changed_service_prov:
if self.vnic.device in service_type_vmks['vSphereProvisioning']:
services_previous.append('Prov')
operation = 'select' if self.enable_provisioning else 'deselect'
self.set_service_type(
vnic_manager=vnic_manager, vmk=self.vnic, service_type='vSphereProvisioning', operation=operation
)
if changed_service_rep:
if self.vnic.device in service_type_vmks['vSphereReplication']:
services_previous.append('Repl')
operation = 'select' if self.enable_replication else 'deselect'
self.set_service_type(
vnic_manager=vnic_manager, vmk=self.vnic, service_type='vSphereReplication', operation=operation
)
if changed_service_rep_nfc:
if self.vnic.device in service_type_vmks['vSphereReplicationNFC']:
services_previous.append('Repl_NFC')
operation = 'select' if self.enable_replication_nfc else 'deselect'
self.set_service_type(
vnic_manager=vnic_manager, vmk=self.vnic, service_type='vSphereReplicationNFC', operation=operation
)
if changed_service_vsan:
if self.vnic.device in service_type_vmks['vsan']:
services_previous.append('VSAN')
if self.enable_vsan:
results['vsan'] = self.set_vsan_service_type()
else:
self.set_service_type(
vnic_manager=vnic_manager, vmk=self.vnic, service_type='vsan', operation='deselect'
)
results['services_previous'] = ', '.join(services_previous)
else:
message = "VMkernel Adapter already configured properly"
results['changed'] = changed
results['msg'] = message
results['device'] = self.vnic.device
self.module.exit_json(**results)
def find_dvs_by_uuid(self, uuid):
"""
Find DVS by UUID
Returns: DVS name
"""
dvs_list = get_all_objs(self.content, [vim.DistributedVirtualSwitch])
for dvs in dvs_list:
if dvs.uuid == uuid:
return dvs.summary.name
return None
def find_dvspg_by_key(self, dv_switch, portgroup_key):
"""
Find dvPortgroup by key
Returns: dvPortgroup name
"""
portgroups = dv_switch.portgroup
for portgroup in portgroups:
if portgroup.key == portgroup_key:
return portgroup.name
return None
def set_vsan_service_type(self):
"""
Set VSAN service type
Returns: result of UpdateVsan_Task
"""
result = None
vsan_system = self.esxi_host_obj.configManager.vsanSystem
vsan_port_config = vim.vsan.host.ConfigInfo.NetworkInfo.PortConfig()
vsan_port_config.device = self.vnic.device
vsan_config = vim.vsan.host.ConfigInfo()
vsan_config.networkInfo = vim.vsan.host.ConfigInfo.NetworkInfo()
vsan_config.networkInfo.port = [vsan_port_config]
if not self.module.check_mode:
try:
vsan_task = vsan_system.UpdateVsan_Task(vsan_config)
wait_for_task(vsan_task)
except TaskError as task_err:
self.module.fail_json(
msg="Failed to set service type to vsan for %s : %s" % (self.vnic.device, to_native(task_err))
)
return result
def host_vmk_create(self):
"""
Create VMKernel
Returns: NA
"""
results = dict(changed=False, message='')
if self.vswitch_name:
results['switch'] = self.vswitch_name
elif self.vds_name:
results['switch'] = self.vds_name
results['portgroup'] = self.port_group_name
vnic_config = vim.host.VirtualNic.Specification()
ip_spec = vim.host.IpConfig()
results['ipv4'] = self.network_type
if self.network_type == 'dhcp':
ip_spec.dhcp = True
else:
ip_spec.dhcp = False
results['ipv4_ip'] = self.ip_address
results['ipv4_sm'] = self.subnet_mask
ip_spec.ipAddress = self.ip_address
ip_spec.subnetMask = self.subnet_mask
if self.default_gateway:
vnic_config.ipRouteSpec = vim.host.VirtualNic.IpRouteSpec()
vnic_config.ipRouteSpec.ipRouteConfig = vim.host.IpRouteConfig()
vnic_config.ipRouteSpec.ipRouteConfig.defaultGateway = self.default_gateway
vnic_config.ip = ip_spec
results['mtu'] = self.mtu
vnic_config.mtu = self.mtu
results['tcpip_stack'] = self.tcpip_stack
vnic_config.netStackInstanceKey = self.get_api_net_stack_instance(self.tcpip_stack)
vmk_device = None
try:
if self.module.check_mode:
results['msg'] = "VMkernel Adapter would be created"
else:
if self.vswitch_name:
vmk_device = self.esxi_host_obj.configManager.networkSystem.AddVirtualNic(
self.port_group_name,
vnic_config
)
elif self.vds_name:
vnic_config.distributedVirtualPort = vim.dvs.PortConnection()
vnic_config.distributedVirtualPort.switchUuid = self.dv_switch_obj.uuid
vnic_config.distributedVirtualPort.portgroupKey = self.port_group_obj.key
vmk_device = self.esxi_host_obj.configManager.networkSystem.AddVirtualNic(portgroup="", nic=vnic_config)
results['msg'] = "VMkernel Adapter created"
results['changed'] = True
results['device'] = vmk_device
if self.network_type != 'dhcp':
if self.default_gateway:
results['ipv4_gw'] = self.default_gateway
else:
results['ipv4_gw'] = "No override"
results['services'] = self.create_enabled_services_string()
except vim.fault.AlreadyExists as already_exists:
self.module.fail_json(
msg="Failed to add vmk as portgroup already has a virtual network adapter %s" %
to_native(already_exists.msg)
)
except vim.fault.HostConfigFault as host_config_fault:
self.module.fail_json(
msg="Failed to add vmk due to host config issues : %s" %
to_native(host_config_fault.msg)
)
except vim.fault.InvalidState as invalid_state:
self.module.fail_json(
msg="Failed to add vmk as ipv6 address is specified in an ipv4 only system : %s" %
to_native(invalid_state.msg)
)
except vmodl.fault.InvalidArgument as invalid_arg:
self.module.fail_json(
msg="Failed to add vmk as IP address or Subnet Mask in the IP configuration "
"are invalid or PortGroup does not exist : %s" % to_native(invalid_arg.msg)
)
# do service type configuration
if self.tcpip_stack == 'default' and not all(
option is False for option in [self.enable_vsan, self.enable_vmotion,
self.enable_mgmt, self.enable_ft,
self.enable_provisioning, self.enable_replication,
self.enable_replication_nfc]):
self.vnic = self.get_vmkernel_by_device(device_name=vmk_device)
# VSAN
if self.enable_vsan:
results['vsan'] = self.set_vsan_service_type()
# Other service type
host_vnic_manager = self.esxi_host_obj.configManager.virtualNicManager
if self.enable_vmotion:
self.set_service_type(host_vnic_manager, self.vnic, 'vmotion')
if self.enable_mgmt:
self.set_service_type(host_vnic_manager, self.vnic, 'management')
if self.enable_ft:
self.set_service_type(host_vnic_manager, self.vnic, 'faultToleranceLogging')
if self.enable_provisioning:
self.set_service_type(host_vnic_manager, self.vnic, 'vSphereProvisioning')
if self.enable_replication:
self.set_service_type(host_vnic_manager, self.vnic, 'vSphereReplication')
if self.enable_replication_nfc:
self.set_service_type(host_vnic_manager, self.vnic, 'vSphereReplicationNFC')
self.module.exit_json(**results)
def set_service_type(self, vnic_manager, vmk, service_type, operation='select'):
"""
Set service type to given VMKernel
Args:
vnic_manager: Virtual NIC manager object
vmk: VMkernel managed object
service_type: Name of service type
operation: Select to select service type, deselect to deselect service type
"""
try:
if operation == 'select':
if not self.module.check_mode:
vnic_manager.SelectVnicForNicType(service_type, vmk.device)
elif operation == 'deselect':
if not self.module.check_mode:
vnic_manager.DeselectVnicForNicType(service_type, vmk.device)
except vmodl.fault.InvalidArgument as invalid_arg:
self.module.fail_json(
msg="Failed to %s VMK service type '%s' on '%s' due to : %s" %
(operation, service_type, vmk.device, to_native(invalid_arg.msg))
)
def get_all_vmks_by_service_type(self):
"""
Return information about service types and VMKernel
Returns: Dictionary of service type as key and VMKernel list as value
"""
service_type_vmk = dict(
vmotion=[],
vsan=[],
management=[],
faultToleranceLogging=[],
vSphereProvisioning=[],
vSphereReplication=[],
vSphereReplicationNFC=[],
)
for service_type in list(service_type_vmk):
vmks_list = self.query_service_type_for_vmks(service_type)
service_type_vmk[service_type] = vmks_list
return service_type_vmk
def query_service_type_for_vmks(self, service_type):
"""
Return list of VMKernels
Args:
service_type: Name of service type
Returns: List of VMKernel which belongs to that service type
"""
vmks_list = []
query = None
try:
query = self.esxi_host_obj.configManager.virtualNicManager.QueryNetConfig(service_type)
except vim.fault.HostConfigFault as config_fault:
self.module.fail_json(
msg="Failed to get all VMKs for service type %s due to host config fault : %s" %
(service_type, to_native(config_fault.msg))
)
except vmodl.fault.InvalidArgument as invalid_argument:
self.module.fail_json(
msg="Failed to get all VMKs for service type %s due to invalid arguments : %s" %
(service_type, to_native(invalid_argument.msg))
)
if not query.selectedVnic:
return vmks_list
selected_vnics = [vnic for vnic in query.selectedVnic]
vnics_with_service_type = [vnic.device for vnic in query.candidateVnic if vnic.key in selected_vnics]
return vnics_with_service_type
def create_enabled_services_string(self):
"""Create services list"""
services = []
if self.enable_mgmt:
services.append('Mgmt')
if self.enable_vmotion:
services.append('vMotion')
if self.enable_ft:
services.append('FT')
if self.enable_vsan:
services.append('VSAN')
if self.enable_provisioning:
services.append('Prov')
if self.enable_replication:
services.append('Repl')
if self.enable_replication_nfc:
services.append('Repl_NFC')
return ', '.join(services)
@staticmethod
def get_api_net_stack_instance(tcpip_stack):
"""Get TCP/IP stack instance name or key"""
net_stack_instance = None
if tcpip_stack == 'default':
net_stack_instance = 'defaultTcpipStack'
elif tcpip_stack == 'provisioning':
net_stack_instance = 'vSphereProvisioning'
# vmotion and vxlan stay the same
elif tcpip_stack == 'vmotion':
net_stack_instance = 'vmotion'
elif tcpip_stack == 'vxlan':
net_stack_instance = 'vxlan'
elif tcpip_stack == 'defaultTcpipStack':
net_stack_instance = 'default'
elif tcpip_stack == 'vSphereProvisioning':
net_stack_instance = 'provisioning'
return net_stack_instance
def main():
"""Main"""
argument_spec = vmware_argument_spec()
argument_spec.update(dict(
esxi_hostname=dict(required=True, type='str'),
portgroup_name=dict(required=True, type='str', aliases=['portgroup']),
ip_address=dict(removed_in_version=2.9, type='str'),
subnet_mask=dict(removed_in_version=2.9, type='str'),
mtu=dict(required=False, type='int', default=1500),
device=dict(type='str'),
enable_vsan=dict(required=False, type='bool', default=False),
enable_vmotion=dict(required=False, type='bool', default=False),
enable_mgmt=dict(required=False, type='bool', default=False),
enable_ft=dict(required=False, type='bool', default=False),
enable_provisioning=dict(type='bool', default=False),
enable_replication=dict(type='bool', default=False),
enable_replication_nfc=dict(type='bool', default=False),
vswitch_name=dict(required=False, type='str', aliases=['vswitch']),
dvswitch_name=dict(required=False, type='str', aliases=['dvswitch']),
network=dict(
type='dict',
options=dict(
type=dict(type='str', default='static', choices=['static', 'dhcp']),
ip_address=dict(type='str'),
subnet_mask=dict(type='str'),
default_gateway=dict(type='str'),
tcpip_stack=dict(type='str', default='default', choices=['default', 'provisioning', 'vmotion', 'vxlan']),
),
default=dict(
type='static',
tcpip_stack='default',
),
),
state=dict(
type='str',
default='present',
choices=['absent', 'present']
),
))
module = AnsibleModule(argument_spec=argument_spec,
mutually_exclusive=[
['vswitch_name', 'dvswitch_name'],
['tcpip_stack', 'enable_vsan'],
['tcpip_stack', 'enable_vmotion'],
['tcpip_stack', 'enable_mgmt'],
['tcpip_stack', 'enable_ft'],
['tcpip_stack', 'enable_provisioning'],
['tcpip_stack', 'enable_replication'],
['tcpip_stack', 'enable_replication_nfc'],
],
required_one_of=[
['vswitch_name', 'dvswitch_name'],
['portgroup_name', 'device'],
],
required_if=[
['state', 'present', ['portgroup_name']],
['state', 'absent', ['device']]
],
supports_check_mode=True)
pyv = PyVmomiHelper(module)
pyv.ensure()
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,016 |
some vmware modules have options which should have been removed for Ansible 2.9
|
##### SUMMARY
As detected by https://github.com/ansible/ansible/pull/66920, (some of) these modules have options marked with `removed_in_version='2.9'`. These options should have been removed before Ansible 2.9 was released. Since that is too late, it would be good if they could be removed before Ansible 2.10 is released.
```
lib/ansible/modules/cloud/vmware/vmware_guest_find.py:0:0: ansible-deprecated-version: Argument 'datacenter' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py:0:0: ansible-deprecated-version: Argument 'ip_address' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py:0:0: ansible-deprecated-version: Argument 'subnet_mask' in argument_spec has a deprecated removed_in_version '2.9', i.e. the version is less than or equal to the current version of Ansible (2.10.0.dev0)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/modules/cloud/vmware/vmware_guest_find.py
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py
##### ANSIBLE VERSION
```paste below
2.10
```
|
https://github.com/ansible/ansible/issues/67016
|
https://github.com/ansible/ansible/pull/67282
|
88f0c8522882467d512eb4f1769e0eaf78404760
|
808bf02588febe08f109364f20ad5d4a96a28100
| 2020-02-01T13:58:36Z |
python
| 2020-02-11T11:30:22Z |
test/integration/targets/vmware_guest_find/tasks/main.yml
|
# Test code for the vmware_guest_find module.
# Copyright: (c) 2017, James Tanner <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
- import_role:
name: prepare_vmware_tests
vars:
setup_attach_host: true
setup_datastore: true
setup_virtualmachines: true
- name: find folders for each vm
vmware_guest_find:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: "{{ item.name }}"
datacenter: "{{ dc1 }}"
with_items: "{{ virtual_machines }}"
register: folders
- debug: var=item
with_items: "{{ folders.results }}"
# We only care that each VM was found, not that the folder path
# is completely accurate. Eventually the test should be extended
# to validate the full path for each VM.
- assert:
that:
- "{{ 'folders' in item }}"
- "{{ item['folders']|length == 1 }}"
with_items: "{{ folders.results }}"
- name: get fact of the first VM
vmware_guest_info:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: "{{ dc1 }}"
name: "{{ virtual_machines[0].name }}"
folder: "{{ virtual_machines[0].folder }}"
register: guest_info_0001
- debug: var=guest_info_0001
- name: find folders for each vm using UUID
vmware_guest_find:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
uuid: "{{ guest_info_0001['instance']['hw_product_uuid'] }}"
register: folder_uuid
- debug: var=folder_uuid
- assert:
that:
- "{{ 'folders' in folder_uuid }}"
- "{{ folder_uuid['folders']|length == 1 }}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,144 |
The nios_a_record module updates values on existing records when in check mode
|
##### SUMMARY
The `nios_a_record` module updates values on existing records when in `--check` mode.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`nios`
`nios_a_record`
##### ANSIBLE VERSION
```
$ ansible --version
ansible 2.9.4
config file = /var/lib/jenkins/workspace/zure-infoblox-dns_initial_create/ansible.cfg
configured module search path = [u'/var/lib/jenkins/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
```
$ ansible-config dump --only-changed
DEFAULT_STDOUT_CALLBACK(/var/lib/jenkins/workspace/zure-infoblox-dns_initial_create/ansible.cfg) = yaml
```
##### OS / ENVIRONMENT
Red Hat Enterprise Linux 7.7
Infoblox 8.3.4-381259
```
$ pip show infoblox-client
DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
Name: infoblox-client
Version: 0.4.23
Summary: Client for interacting with Infoblox NIOS over WAPI
Home-page: https://github.com/infobloxopen/infoblox-client
Author: John Belamaric
Author-email: [email protected]
License: Apache
Location: /usr/lib/python2.7/site-packages
Requires: urllib3, setuptools, oslo.log, requests, oslo.serialization
Required-by:
```
##### STEPS TO REPRODUCE
There is a host record named test.example.com already present in Infoblox, with no comment. Ansible-playbook is run in `--check` mode to validate what would be changed.
```
ansible-playbook --check main.yml
```
```yaml
---
- hosts: localhost
gather_facts: false
connection: local
tasks:
- name: Manage A Record
nios_a_record:
comment: "Test"
ipv4addr: "10.10.1.1"
name: "test.example.com"
provider: "{{ nios_provider }}"
```
##### EXPECTED RESULTS
Task shows 'changed' but no data is changed.
##### ACTUAL RESULTS
The comment of the existing record is changed to 'Test'.
```
$ ansible-playbook --check main.yml
PLAY [localhost] ***************************************************************
TASK [Manage A Record] ********************************************************
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
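Judging from the `WapiModule.run()` code shown below, only the create and delete paths are wrapped in a `check_mode` test, while several update branches call `update_object` unconditionally. As a minimal sketch of the guard pattern a module is expected to follow around a modifying call (the `connector` object and the `apply_update` helper are illustrative stand-ins, not the actual fix):
```python
# Minimal sketch of honouring --check around a modifying WAPI call.
# Only `module.check_mode` is real AnsibleModule behaviour; `connector`,
# `ref` and `proposed_object` are hypothetical stand-ins for the module's internals.
def apply_update(module, connector, ref, proposed_object):
    result = {'changed': False}
    if ref and proposed_object:       # an existing record would be modified
        result['changed'] = True      # report the pending change ...
        if not module.check_mode:     # ... but only touch WAPI outside --check
            connector.update_object(ref, proposed_object)
    return result
```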
|
https://github.com/ansible/ansible/issues/67144
|
https://github.com/ansible/ansible/pull/67145
|
a4ec18d8a3cd6284b35d3d26b70256eb31ad9ef2
|
bc2419c47ceb85fb4d83a9d64e1137fd077a95c7
| 2020-02-05T23:13:10Z |
python
| 2020-02-12T14:40:10Z |
lib/ansible/module_utils/net_tools/nios/api.py
|
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# (c) 2018 Red Hat Inc.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
import os
from functools import partial
from ansible.module_utils._text import to_native
from ansible.module_utils.six import iteritems
from ansible.module_utils._text import to_text
from ansible.module_utils.basic import env_fallback
try:
from infoblox_client.connector import Connector
from infoblox_client.exceptions import InfobloxException
HAS_INFOBLOX_CLIENT = True
except ImportError:
HAS_INFOBLOX_CLIENT = False
# defining nios constants
NIOS_DNS_VIEW = 'view'
NIOS_NETWORK_VIEW = 'networkview'
NIOS_HOST_RECORD = 'record:host'
NIOS_IPV4_NETWORK = 'network'
NIOS_IPV6_NETWORK = 'ipv6network'
NIOS_ZONE = 'zone_auth'
NIOS_PTR_RECORD = 'record:ptr'
NIOS_A_RECORD = 'record:a'
NIOS_AAAA_RECORD = 'record:aaaa'
NIOS_CNAME_RECORD = 'record:cname'
NIOS_MX_RECORD = 'record:mx'
NIOS_SRV_RECORD = 'record:srv'
NIOS_NAPTR_RECORD = 'record:naptr'
NIOS_TXT_RECORD = 'record:txt'
NIOS_NSGROUP = 'nsgroup'
NIOS_IPV4_FIXED_ADDRESS = 'fixedaddress'
NIOS_IPV6_FIXED_ADDRESS = 'ipv6fixedaddress'
NIOS_NEXT_AVAILABLE_IP = 'func:nextavailableip'
NIOS_IPV4_NETWORK_CONTAINER = 'networkcontainer'
NIOS_IPV6_NETWORK_CONTAINER = 'ipv6networkcontainer'
NIOS_MEMBER = 'member'
NIOS_PROVIDER_SPEC = {
'host': dict(fallback=(env_fallback, ['INFOBLOX_HOST'])),
'username': dict(fallback=(env_fallback, ['INFOBLOX_USERNAME'])),
'password': dict(fallback=(env_fallback, ['INFOBLOX_PASSWORD']), no_log=True),
'validate_certs': dict(type='bool', default=False, fallback=(env_fallback, ['INFOBLOX_SSL_VERIFY']), aliases=['ssl_verify']),
'silent_ssl_warnings': dict(type='bool', default=True),
'http_request_timeout': dict(type='int', default=10, fallback=(env_fallback, ['INFOBLOX_HTTP_REQUEST_TIMEOUT'])),
'http_pool_connections': dict(type='int', default=10),
'http_pool_maxsize': dict(type='int', default=10),
'max_retries': dict(type='int', default=3, fallback=(env_fallback, ['INFOBLOX_MAX_RETRIES'])),
'wapi_version': dict(default='2.1', fallback=(env_fallback, ['INFOBLOX_WAP_VERSION'])),
'max_results': dict(type='int', default=1000, fallback=(env_fallback, ['INFOBLOX_MAX_RETRIES']))
}
def get_connector(*args, **kwargs):
''' Returns an instance of infoblox_client.connector.Connector
:params args: positional arguments are silently ignored
:params kwargs: dict that is passed to Connector init
:returns: Connector
'''
if not HAS_INFOBLOX_CLIENT:
raise Exception('infoblox-client is required but does not appear '
'to be installed. It can be installed using the '
'command `pip install infoblox-client`')
if not set(kwargs.keys()).issubset(list(NIOS_PROVIDER_SPEC.keys()) + ['ssl_verify']):
raise Exception('invalid or unsupported keyword argument for connector')
for key, value in iteritems(NIOS_PROVIDER_SPEC):
if key not in kwargs:
# apply default values from NIOS_PROVIDER_SPEC since we cannot just
# assume the provider values are coming from AnsibleModule
if 'default' in value:
kwargs[key] = value['default']
# override any values with env variables unless they were
# explicitly set
env = ('INFOBLOX_%s' % key).upper()
if env in os.environ:
kwargs[key] = os.environ.get(env)
if 'validate_certs' in kwargs.keys():
kwargs['ssl_verify'] = kwargs['validate_certs']
kwargs.pop('validate_certs', None)
return Connector(kwargs)
def normalize_extattrs(value):
''' Normalize extattrs field to expected format
The module accepts extattrs as key/value pairs. This method will
transform the key/value pairs into a structure suitable for
sending across WAPI in the format of:
extattrs: {
key: {
value: <value>
}
}
'''
return dict([(k, {'value': v}) for k, v in iteritems(value)])
def flatten_extattrs(value):
''' Flatten the key/value struct for extattrs
WAPI returns extattrs field as a dict in form of:
extattrs: {
key: {
value: <value>
}
}
This method will flatten the structure to:
extattrs: {
key: value
}
'''
return dict([(k, v['value']) for k, v in iteritems(value)])
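# Illustrative example (not part of the original module):
#   normalize_extattrs({'Site': 'HQ'})           -> {'Site': {'value': 'HQ'}}
#   flatten_extattrs({'Site': {'value': 'HQ'}})  -> {'Site': 'HQ'}
# The two helpers are inverses, used when sending extattrs to WAPI and when
# flattening the WAPI response for comparison.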
def member_normalize(member_spec):
''' Transforms the member module arguments into a valid WAPI struct
This function will transform the arguments into a structure that
is a valid WAPI structure in the format of:
{
key: <value>,
}
It will remove any arguments that are set to None since WAPI will error on
that condition.
The remainder of the value validation is performed by WAPI
Some parameters in ib_spec are passed as a list in order to pass the validation for elements.
In this function, they are converted to dictionary.
'''
member_elements = ['vip_setting', 'ipv6_setting', 'lan2_port_setting', 'mgmt_port_setting',
'pre_provisioning', 'network_setting', 'v6_network_setting',
'ha_port_setting', 'lan_port_setting', 'lan2_physical_setting',
'lan_ha_port_setting', 'mgmt_network_setting', 'v6_mgmt_network_setting']
for key in member_spec.keys():
if key in member_elements and member_spec[key] is not None:
member_spec[key] = member_spec[key][0]
if isinstance(member_spec[key], dict):
member_spec[key] = member_normalize(member_spec[key])
elif isinstance(member_spec[key], list):
for x in member_spec[key]:
if isinstance(x, dict):
x = member_normalize(x)
elif member_spec[key] is None:
del member_spec[key]
return member_spec
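# Illustrative example (not part of the original module): a member argument such as
#   {'vip_setting': [{'address': '192.168.1.20'}], 'comment': None}
# is normalized to
#   {'vip_setting': {'address': '192.168.1.20'}}
# i.e. the single-element list is unwrapped and None values are dropped before the WAPI call.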
class WapiBase(object):
''' Base class for implementing Infoblox WAPI API '''
provider_spec = {'provider': dict(type='dict', options=NIOS_PROVIDER_SPEC)}
def __init__(self, provider):
self.connector = get_connector(**provider)
def __getattr__(self, name):
try:
return self.__dict__[name]
except KeyError:
if name.startswith('_'):
raise AttributeError("'%s' object has no attribute '%s'" % (self.__class__.__name__, name))
return partial(self._invoke_method, name)
def _invoke_method(self, name, *args, **kwargs):
try:
method = getattr(self.connector, name)
return method(*args, **kwargs)
except InfobloxException as exc:
if hasattr(self, 'handle_exception'):
self.handle_exception(name, exc)
else:
raise
class WapiLookup(WapiBase):
''' Implements WapiBase for lookup plugins '''
def handle_exception(self, method_name, exc):
if ('text' in exc.response):
raise Exception(exc.response['text'])
else:
raise Exception(exc)
class WapiInventory(WapiBase):
''' Implements WapiBase for dynamic inventory script '''
pass
class WapiModule(WapiBase):
''' Implements WapiBase for executing a NIOS module '''
def __init__(self, module):
self.module = module
provider = module.params['provider']
try:
super(WapiModule, self).__init__(provider)
except Exception as exc:
self.module.fail_json(msg=to_text(exc))
def handle_exception(self, method_name, exc):
''' Handles any exceptions raised
This method will be called if an InfobloxException is raised for
any call to the instance of Connector, and also in case of a generic
exception. This method will then gracefully fail the module.
:args exc: instance of InfobloxException
'''
if ('text' in exc.response):
self.module.fail_json(
msg=exc.response['text'],
type=exc.response['Error'].split(':')[0],
code=exc.response.get('code'),
operation=method_name
)
else:
self.module.fail_json(msg=to_native(exc))
def run(self, ib_obj_type, ib_spec):
''' Runs the module and performs configuration tasks
:args ib_obj_type: the WAPI object type to operate against
:args ib_spec: the specification for the WAPI object as a dict
:returns: a results dict
'''
update = new_name = None
state = self.module.params['state']
if state not in ('present', 'absent'):
self.module.fail_json(msg='state must be one of `present`, `absent`, got `%s`' % state)
result = {'changed': False}
obj_filter = dict([(k, self.module.params[k]) for k, v in iteritems(ib_spec) if v.get('ib_req')])
# get object reference
ib_obj_ref, update, new_name = self.get_object_ref(self.module, ib_obj_type, obj_filter, ib_spec)
proposed_object = {}
for key, value in iteritems(ib_spec):
if self.module.params[key] is not None:
if 'transform' in value:
proposed_object[key] = value['transform'](self.module)
else:
proposed_object[key] = self.module.params[key]
# If configure_for_dns is set to False, delete the default DNS view from the params; fail if a non-default view is set
if not proposed_object.get('configure_for_dns') and proposed_object.get('view') == 'default'\
and ib_obj_type == NIOS_HOST_RECORD:
del proposed_object['view']
elif not proposed_object.get('configure_for_dns') and proposed_object.get('view') != 'default'\
and ib_obj_type == NIOS_HOST_RECORD:
self.module.fail_json(msg='DNS Bypass is not allowed if DNS view is set other than \'default\'')
if ib_obj_ref:
if len(ib_obj_ref) > 1:
for each in ib_obj_ref:
# To check for existing A_record with same name with input A_record by IP
if each.get('ipv4addr') and each.get('ipv4addr') == proposed_object.get('ipv4addr'):
current_object = each
# To check for existing Host_record with same name with input Host_record by IP
elif each.get('ipv4addrs')[0].get('ipv4addr') and each.get('ipv4addrs')[0].get('ipv4addr')\
== proposed_object.get('ipv4addrs')[0].get('ipv4addr'):
current_object = each
# Else set the current_object with input value
else:
current_object = obj_filter
ref = None
else:
current_object = ib_obj_ref[0]
if 'extattrs' in current_object:
current_object['extattrs'] = flatten_extattrs(current_object['extattrs'])
if current_object.get('_ref'):
ref = current_object.pop('_ref')
else:
current_object = obj_filter
ref = None
# checks if the object type is member to normalize the attributes being passed
if (ib_obj_type == NIOS_MEMBER):
proposed_object = member_normalize(proposed_object)
# checks if the name's field has been updated
if update and new_name:
proposed_object['name'] = new_name
check_remove = []
if (ib_obj_type == NIOS_HOST_RECORD):
# this check is for idempotency: if the same IP address is passed, the
# 'add' param is removed, and the same holds true for the 'remove' case.
if 'ipv4addrs' in [current_object and proposed_object]:
for each in current_object['ipv4addrs']:
if each['ipv4addr'] == proposed_object['ipv4addrs'][0]['ipv4addr']:
if 'add' in proposed_object['ipv4addrs'][0]:
del proposed_object['ipv4addrs'][0]['add']
break
check_remove += each.values()
if proposed_object['ipv4addrs'][0]['ipv4addr'] not in check_remove:
if 'remove' in proposed_object['ipv4addrs'][0]:
del proposed_object['ipv4addrs'][0]['remove']
res = None
modified = not self.compare_objects(current_object, proposed_object)
if 'extattrs' in proposed_object:
proposed_object['extattrs'] = normalize_extattrs(proposed_object['extattrs'])
# Checks if nios_next_ip param is passed in ipv4addrs/ipv4addr args
proposed_object = self.check_if_nios_next_ip_exists(proposed_object)
if state == 'present':
if ref is None:
if not self.module.check_mode:
self.create_object(ib_obj_type, proposed_object)
result['changed'] = True
# Check if NIOS_MEMBER and the flag to call function create_token is set
elif (ib_obj_type == NIOS_MEMBER) and (proposed_object['create_token']):
proposed_object = None
# the function creates a token that can be used by a pre-provisioned member to join the grid
result['api_results'] = self.call_func('create_token', ref, proposed_object)
result['changed'] = True
elif modified:
if 'ipv4addrs' in proposed_object:
if ('add' not in proposed_object['ipv4addrs'][0]) and ('remove' not in proposed_object['ipv4addrs'][0]):
self.check_if_recordname_exists(obj_filter, ib_obj_ref, ib_obj_type, current_object, proposed_object)
if (ib_obj_type in (NIOS_HOST_RECORD, NIOS_NETWORK_VIEW, NIOS_DNS_VIEW)):
run_update = True
proposed_object = self.on_update(proposed_object, ib_spec)
if 'ipv4addrs' in proposed_object:
if ('add' or 'remove') in proposed_object['ipv4addrs'][0]:
run_update, proposed_object = self.check_if_add_remove_ip_arg_exists(proposed_object)
if run_update:
res = self.update_object(ref, proposed_object)
result['changed'] = True
else:
res = ref
if (ib_obj_type in (NIOS_A_RECORD, NIOS_AAAA_RECORD, NIOS_PTR_RECORD, NIOS_SRV_RECORD)):
# popping 'view' key as update of 'view' is not supported with respect to a:record/aaaa:record/srv:record/ptr:record
proposed_object = self.on_update(proposed_object, ib_spec)
del proposed_object['view']
res = self.update_object(ref, proposed_object)
result['changed'] = True
elif 'network_view' in proposed_object:
proposed_object.pop('network_view')
result['changed'] = True
if not self.module.check_mode and res is None:
proposed_object = self.on_update(proposed_object, ib_spec)
self.update_object(ref, proposed_object)
result['changed'] = True
elif state == 'absent':
if ref is not None:
if 'ipv4addrs' in proposed_object:
if 'remove' in proposed_object['ipv4addrs'][0]:
self.check_if_add_remove_ip_arg_exists(proposed_object)
self.update_object(ref, proposed_object)
result['changed'] = True
elif not self.module.check_mode:
self.delete_object(ref)
result['changed'] = True
return result
def check_if_recordname_exists(self, obj_filter, ib_obj_ref, ib_obj_type, current_object, proposed_object):
''' Send POST request if the host record input name and the retrieved ref name are the same,
but the input IP and the retrieved IP are different'''
if 'name' in (obj_filter and ib_obj_ref[0]) and ib_obj_type == NIOS_HOST_RECORD:
obj_host_name = obj_filter['name']
ref_host_name = ib_obj_ref[0]['name']
if 'ipv4addrs' in (current_object and proposed_object):
current_ip_addr = current_object['ipv4addrs'][0]['ipv4addr']
proposed_ip_addr = proposed_object['ipv4addrs'][0]['ipv4addr']
elif 'ipv6addrs' in (current_object and proposed_object):
current_ip_addr = current_object['ipv6addrs'][0]['ipv6addr']
proposed_ip_addr = proposed_object['ipv6addrs'][0]['ipv6addr']
if obj_host_name == ref_host_name and current_ip_addr != proposed_ip_addr:
self.create_object(ib_obj_type, proposed_object)
def check_if_nios_next_ip_exists(self, proposed_object):
''' Check if the nios_next_ip argument is passed in ipaddr while creating a
host record; if so, format the proposed object ipv4addrs to pass
func:nextavailableip and the ipaddr range, so the host record is created with the next
available IP in one call and any race condition is avoided '''
if 'ipv4addrs' in proposed_object:
if 'nios_next_ip' in proposed_object['ipv4addrs'][0]['ipv4addr']:
ip_range = self.module._check_type_dict(proposed_object['ipv4addrs'][0]['ipv4addr'])['nios_next_ip']
proposed_object['ipv4addrs'][0]['ipv4addr'] = NIOS_NEXT_AVAILABLE_IP + ':' + ip_range
elif 'ipv4addr' in proposed_object:
if 'nios_next_ip' in proposed_object['ipv4addr']:
ip_range = self.module._check_type_dict(proposed_object['ipv4addr'])['nios_next_ip']
proposed_object['ipv4addr'] = NIOS_NEXT_AVAILABLE_IP + ':' + ip_range
return proposed_object
def check_if_add_remove_ip_arg_exists(self, proposed_object):
'''
This function checks whether the add/remove param is passed in the args
and set to true; if so, the proposed dictionary is updated to add/remove
an IP to the existing host_record. If the user passes the argument with a
false value, nothing is done.
:returns: True if param is changed based on add/remove, and also the
changed proposed_object.
'''
update = False
if 'add' in proposed_object['ipv4addrs'][0]:
if proposed_object['ipv4addrs'][0]['add']:
proposed_object['ipv4addrs+'] = proposed_object['ipv4addrs']
del proposed_object['ipv4addrs']
del proposed_object['ipv4addrs+'][0]['add']
update = True
else:
del proposed_object['ipv4addrs'][0]['add']
elif 'remove' in proposed_object['ipv4addrs'][0]:
if proposed_object['ipv4addrs'][0]['remove']:
proposed_object['ipv4addrs-'] = proposed_object['ipv4addrs']
del proposed_object['ipv4addrs']
del proposed_object['ipv4addrs-'][0]['remove']
update = True
else:
del proposed_object['ipv4addrs'][0]['remove']
return update, proposed_object
def issubset(self, item, objects):
''' Checks if item is a subset of objects
:args item: the subset item to validate
:args objects: superset list of objects to validate against
:returns: True if item is a subset of one entry in objects otherwise
this method will return None
'''
for obj in objects:
if isinstance(item, dict):
if all(entry in obj.items() for entry in item.items()):
return True
else:
if item in obj:
return True
def compare_objects(self, current_object, proposed_object):
for key, proposed_item in iteritems(proposed_object):
current_item = current_object.get(key)
# if proposed has a key that current doesn't then the objects are
# not equal and False will be immediately returned
if current_item is None:
return False
elif isinstance(proposed_item, list):
for subitem in proposed_item:
if not self.issubset(subitem, current_item):
return False
elif isinstance(proposed_item, dict):
return self.compare_objects(current_item, proposed_item)
else:
if current_item != proposed_item:
return False
return True
def get_object_ref(self, module, ib_obj_type, obj_filter, ib_spec):
''' this function gets the reference object of pre-existing nios objects '''
update = False
old_name = new_name = None
if ('name' in obj_filter):
# gets and returns the current object based on name/old_name passed
try:
name_obj = self.module._check_type_dict(obj_filter['name'])
old_name = name_obj['old_name']
new_name = name_obj['new_name']
except TypeError:
name = obj_filter['name']
if old_name and new_name:
if (ib_obj_type == NIOS_HOST_RECORD):
test_obj_filter = dict([('name', old_name), ('view', obj_filter['view'])])
elif (ib_obj_type in (NIOS_AAAA_RECORD, NIOS_A_RECORD)):
test_obj_filter = obj_filter
else:
test_obj_filter = dict([('name', old_name)])
# get the object reference
ib_obj = self.get_object(ib_obj_type, test_obj_filter, return_fields=ib_spec.keys())
if ib_obj:
obj_filter['name'] = new_name
else:
test_obj_filter['name'] = new_name
ib_obj = self.get_object(ib_obj_type, test_obj_filter, return_fields=ib_spec.keys())
update = True
return ib_obj, update, new_name
if (ib_obj_type == NIOS_HOST_RECORD):
# to check only by name if dns bypassing is set
if not obj_filter['configure_for_dns']:
test_obj_filter = dict([('name', name)])
else:
test_obj_filter = dict([('name', name), ('view', obj_filter['view'])])
elif (ib_obj_type == NIOS_IPV4_FIXED_ADDRESS or ib_obj_type == NIOS_IPV6_FIXED_ADDRESS and 'mac' in obj_filter):
test_obj_filter = dict([['mac', obj_filter['mac']]])
elif (ib_obj_type == NIOS_A_RECORD):
# resolves issue where a_record with uppercase name was returning null and was failing
test_obj_filter = obj_filter
test_obj_filter['name'] = test_obj_filter['name'].lower()
# resolves issue where multiple a_records with same name and different IP address
try:
ipaddr_obj = self.module._check_type_dict(obj_filter['ipv4addr'])
ipaddr = ipaddr_obj['old_ipv4addr']
except TypeError:
ipaddr = obj_filter['ipv4addr']
test_obj_filter['ipv4addr'] = ipaddr
elif (ib_obj_type == NIOS_TXT_RECORD):
# resolves issue where multiple txt_records with same name and different text
test_obj_filter = obj_filter
try:
text_obj = self.module._check_type_dict(obj_filter['text'])
txt = text_obj['old_text']
except TypeError:
txt = obj_filter['text']
test_obj_filter['text'] = txt
# check if test_obj_filter is empty copy passed obj_filter
else:
test_obj_filter = obj_filter
ib_obj = self.get_object(ib_obj_type, test_obj_filter.copy(), return_fields=ib_spec.keys())
elif (ib_obj_type == NIOS_A_RECORD):
# resolves issue where multiple a_records with same name and different IP address
test_obj_filter = obj_filter
try:
ipaddr_obj = self.module._check_type_dict(obj_filter['ipv4addr'])
ipaddr = ipaddr_obj['old_ipv4addr']
except TypeError:
ipaddr = obj_filter['ipv4addr']
test_obj_filter['ipv4addr'] = ipaddr
ib_obj = self.get_object(ib_obj_type, test_obj_filter.copy(), return_fields=ib_spec.keys())
elif (ib_obj_type == NIOS_TXT_RECORD):
# resolves issue where multiple txt_records with same name and different text
test_obj_filter = obj_filter
try:
text_obj = self.module._check_type_dict(obj_filter['text'])
txt = text_obj['old_text']
except TypeError:
txt = obj_filter['text']
test_obj_filter['text'] = txt
ib_obj = self.get_object(ib_obj_type, test_obj_filter.copy(), return_fields=ib_spec.keys())
elif (ib_obj_type == NIOS_ZONE):
# del key 'restart_if_needed' as nios_zone get_object fails with the key present
temp = ib_spec['restart_if_needed']
del ib_spec['restart_if_needed']
ib_obj = self.get_object(ib_obj_type, obj_filter.copy(), return_fields=ib_spec.keys())
# reinstate restart_if_needed if ib_obj is none, meaning there's no existing nios_zone ref
if not ib_obj:
ib_spec['restart_if_needed'] = temp
elif (ib_obj_type == NIOS_MEMBER):
# del key 'create_token' as nios_member get_object fails with the key present
temp = ib_spec['create_token']
del ib_spec['create_token']
ib_obj = self.get_object(ib_obj_type, obj_filter.copy(), return_fields=ib_spec.keys())
if temp:
# reinstate 'create_token' key
ib_spec['create_token'] = temp
else:
ib_obj = self.get_object(ib_obj_type, obj_filter.copy(), return_fields=ib_spec.keys())
return ib_obj, update, new_name
def on_update(self, proposed_object, ib_spec):
''' Event called before the update is sent to the API endpoint
This method will allow the final proposed object to be changed
and/or keys filtered before it is sent to the API endpoint to
be processed.
:args proposed_object: A dict item that will be encoded and sent
the API endpoint with the updated data structure
:returns: updated object to be sent to API endpoint
'''
keys = set()
for key, value in iteritems(proposed_object):
update = ib_spec[key].get('update', True)
if not update:
keys.add(key)
return dict([(k, v) for k, v in iteritems(proposed_object) if k not in keys])
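# Illustrative example (not part of the original module): with
#   ib_spec = {'name': {'ib_req': True}, 'view': {'update': False}}
#   proposed_object = {'name': 'a.test.com', 'view': 'default'}
# on_update() drops 'view', so only fields flagged as updatable are sent to WAPI.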
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,313 |
eos_vlans using state parameter replaced is giving odd behavior
|
##### SUMMARY
I assume that the vlan-id is the winning key that will replace other data. I am seeing some odd behavior where if I have something like
on-box before
```
- vlan_id: 80
```
on-box after
```
- vlan_id: 80
```
but I am actually sending a key/value pair, name: sean
```
commands:
- vlan 80
- name sean
- no name
```
but for some reason it also emits `no name`, stripping the name....
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
eos_vlans
##### ANSIBLE VERSION
```paste below
ansible 2.9.2
config file = /home/student1/.ansible.cfg
configured module search path = [u'/home/student1/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST(/home/student1/.ansible.cfg) = [u'/home/student1/networking-workshop/lab_inventory/
DEFAULT_STDOUT_CALLBACK(/home/student1/.ansible.cfg) = yaml
DEFAULT_TIMEOUT(/home/student1/.ansible.cfg) = 60
DEPRECATION_WARNINGS(/home/student1/.ansible.cfg) = False
HOST_KEY_CHECKING(/home/student1/.ansible.cfg) = False
PERSISTENT_COMMAND_TIMEOUT(/home/student1/.ansible.cfg) = 200
PERSISTENT_CONNECT_TIMEOUT(/home/student1/.ansible.cfg) = 200
RETRY_FILES_ENABLED(/home/student1/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
```
[student1@ansible playbooks]$ cat /etc/*release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.7 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.7"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.7 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.7:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.7
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.7"
Red Hat Enterprise Linux Server release 7.7 (Maipo)
Red Hat Enterprise Linux Server release 7.7 (Maipo)
```
##### STEPS TO REPRODUCE
https://gist.github.com/IPvSean/028b36bab5783dfd4f4a01a2c4063613
##### EXPECTED RESULTS
vlan_id would win and override
##### ACTUAL RESULTS
the vlan name is being stripped out for some reason; see the gist link above
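Judging from the `generate_commands()` and `_state_replaced()` code included below, the replaced state computes both an add-diff and a remove-diff, and when a key such as `name` has different values in the desired and on-box config it lands in both diffs, so the module emits `name sean` followed by `no name`. A small self-contained sketch of that collision and an obvious guard (my reading of the behaviour, not the actual patch):
```python
# Sketch of the suspected collision in command generation for state: replaced.
# `to_set` is the add-diff and `to_remove` the remove-diff computed by dict_diff;
# the skip_shared_keys guard is an assumption about a fix, not the module's code.
def generate_vlan_commands(vlan_id, to_set, to_remove, skip_shared_keys=False):
    commands = []
    for key, value in to_set.items():
        if key != "vlan_id" and value is not None:
            commands.append("{0} {1}".format(key, value))
    for key in to_remove:
        if key == "vlan_id" or (skip_shared_keys and key in to_set):
            continue
        commands.append("no {0}".format(key))
    if commands:
        commands.insert(0, "vlan {0}".format(vlan_id))
    return commands

print(generate_vlan_commands(80, {"name": "sean"}, {"name": "VLAN0080"}))
# -> ['vlan 80', 'name sean', 'no name']  (the reported behaviour)
print(generate_vlan_commands(80, {"name": "sean"}, {"name": "VLAN0080"}, skip_shared_keys=True))
# -> ['vlan 80', 'name sean']             (the expected behaviour)
```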
|
https://github.com/ansible/ansible/issues/67313
|
https://github.com/ansible/ansible/pull/67318
|
cd146b836e032df785ecd9eb711c6ef23c2228b8
|
4ec1437212b2fb3c313e44ed5a76b105f2151622
| 2020-02-11T18:17:40Z |
python
| 2020-02-12T16:12:12Z |
lib/ansible/module_utils/network/eos/config/vlans/vlans.py
|
# -*- coding: utf-8 -*-
# Copyright 2019 Red Hat
# GNU General Public License v3.0+
# (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""
The eos_vlans class
It is in this file where the current configuration (as dict)
is compared to the provided configuration (as dict) and the command set
necessary to bring the current configuration to its desired end-state is
created
"""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.module_utils.network.common.cfg.base import ConfigBase
from ansible.module_utils.network.common.utils import to_list, dict_diff, param_list_to_dict
from ansible.module_utils.network.eos.facts.facts import Facts
class Vlans(ConfigBase):
"""
The eos_vlans class
"""
gather_subset = [
'!all',
'!min',
]
gather_network_resources = [
'vlans',
]
def get_vlans_facts(self):
""" Get the 'facts' (the current configuration)
:rtype: A dictionary
:returns: The current configuration as a dictionary
"""
facts, _warnings = Facts(self._module).get_facts(self.gather_subset, self.gather_network_resources)
vlans_facts = facts['ansible_network_resources'].get('vlans')
if not vlans_facts:
return []
return vlans_facts
def execute_module(self):
""" Execute the module
:rtype: A dictionary
:returns: The result from module execution
"""
result = {'changed': False}
warnings = list()
commands = list()
existing_vlans_facts = self.get_vlans_facts()
commands.extend(self.set_config(existing_vlans_facts))
if commands:
if not self._module.check_mode:
self._connection.edit_config(commands)
result['changed'] = True
result['commands'] = commands
changed_vlans_facts = self.get_vlans_facts()
result['before'] = existing_vlans_facts
if result['changed']:
result['after'] = changed_vlans_facts
result['warnings'] = warnings
return result
def set_config(self, existing_vlans_facts):
""" Collect the configuration from the args passed to the module,
collect the current configuration (as a dict from facts)
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
want = self._module.params['config']
have = existing_vlans_facts
resp = self.set_state(want, have)
return to_list(resp)
def set_state(self, want, have):
""" Select the appropriate function based on the state provided
:param want: the desired configuration as a dictionary
:param have: the current configuration as a dictionary
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
state = self._module.params['state']
want = param_list_to_dict(want, "vlan_id", remove_key=False)
have = param_list_to_dict(have, "vlan_id", remove_key=False)
if state == 'overridden':
commands = self._state_overridden(want, have)
elif state == 'deleted':
commands = self._state_deleted(want, have)
elif state == 'merged':
commands = self._state_merged(want, have)
elif state == 'replaced':
commands = self._state_replaced(want, have)
return commands
@staticmethod
def _state_replaced(want, have):
""" The command generator when state is replaced
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
commands = []
for vlan_id, desired in want.items():
if vlan_id in have:
extant = have[vlan_id]
else:
extant = dict()
add_config = dict_diff(extant, desired)
del_config = dict_diff(desired, extant)
commands.extend(generate_commands(vlan_id, add_config, del_config))
return commands
@staticmethod
def _state_overridden(want, have):
""" The command generator when state is overridden
:rtype: A list
:returns: the commands necessary to migrate the current configuration
to the desired configuration
"""
commands = []
for vlan_id, extant in have.items():
if vlan_id in want:
desired = want[vlan_id]
else:
desired = dict()
add_config = dict_diff(extant, desired)
del_config = dict_diff(desired, extant)
commands.extend(generate_commands(vlan_id, add_config, del_config))
# Handle vlans not already in config
new_vlans = [vlan_id for vlan_id in want if vlan_id not in have]
for vlan_id in new_vlans:
desired = want[vlan_id]
extant = dict(vlan_id=vlan_id)
add_config = dict_diff(extant, desired)
commands.extend(generate_commands(vlan_id, add_config, {}))
return commands
@staticmethod
def _state_merged(want, have):
""" The command generator when state is merged
:rtype: A list
:returns: the commands necessary to merge the provided into
the current configuration
"""
commands = []
for vlan_id, desired in want.items():
if vlan_id in have:
extant = have[vlan_id]
else:
extant = dict()
add_config = dict_diff(extant, desired)
commands.extend(generate_commands(vlan_id, add_config, {}))
return commands
@staticmethod
def _state_deleted(want, have):
""" The command generator when state is deleted
:rtype: A list
:returns: the commands necessary to remove the current configuration
of the provided objects
"""
commands = []
for vlan_id in want:
desired = dict()
if vlan_id in have:
extant = have[vlan_id]
else:
continue
del_config = dict_diff(desired, extant)
commands.extend(generate_commands(vlan_id, {}, del_config))
return commands
def generate_commands(vlan_id, to_set, to_remove):
commands = []
if "vlan_id" in to_remove:
return ["no vlan {0}".format(vlan_id)]
for key, value in to_set.items():
if key == "vlan_id" or value is None:
continue
commands.append("{0} {1}".format(key, value))
for key in to_remove:
commands.append("no {0}".format(key))
if commands:
commands.insert(0, "vlan {0}".format(vlan_id))
return commands
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,313 |
eos_vlans using state parameter replaced is giving odd behavior
|
##### SUMMARY
I assume that the vlan-id is the winning key that will replace other data. I am seeing some odd behavior where if I have something like
on-box before
```
- vlan_id: 80
```
on-box after
```
- vlan_id: 80
```
but I am actually sending a key/value pair, name: sean
```
commands:
- vlan 80
- name sean
- no name
```
but for some reason it also emits `no name`, stripping the name....
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
eos_vlans
##### ANSIBLE VERSION
```paste below
ansible 2.9.2
config file = /home/student1/.ansible.cfg
configured module search path = [u'/home/student1/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST(/home/student1/.ansible.cfg) = [u'/home/student1/networking-workshop/lab_inventory/
DEFAULT_STDOUT_CALLBACK(/home/student1/.ansible.cfg) = yaml
DEFAULT_TIMEOUT(/home/student1/.ansible.cfg) = 60
DEPRECATION_WARNINGS(/home/student1/.ansible.cfg) = False
HOST_KEY_CHECKING(/home/student1/.ansible.cfg) = False
PERSISTENT_COMMAND_TIMEOUT(/home/student1/.ansible.cfg) = 200
PERSISTENT_CONNECT_TIMEOUT(/home/student1/.ansible.cfg) = 200
RETRY_FILES_ENABLED(/home/student1/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
```
[student1@ansible playbooks]$ cat /etc/*release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.7 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.7"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.7 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.7:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.7
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.7"
Red Hat Enterprise Linux Server release 7.7 (Maipo)
Red Hat Enterprise Linux Server release 7.7 (Maipo)
```
##### STEPS TO REPRODUCE
https://gist.github.com/IPvSean/028b36bab5783dfd4f4a01a2c4063613
##### EXPECTED RESULTS
vlan_id would win and override
##### ACTUAL RESULTS
the vlan name is being stripped out for some reason; see the gist link above
|
https://github.com/ansible/ansible/issues/67313
|
https://github.com/ansible/ansible/pull/67318
|
cd146b836e032df785ecd9eb711c6ef23c2228b8
|
4ec1437212b2fb3c313e44ed5a76b105f2151622
| 2020-02-11T18:17:40Z |
python
| 2020-02-12T16:12:12Z |
test/integration/targets/eos_vlans/tests/cli/replaced.yaml
|
---
- include_tasks: reset_config.yml
- set_fact:
config:
- vlan_id: 20
state: suspend
other_config:
- vlan_id: 10
name: ten
- eos_facts:
gather_network_resources: vlans
become: yes
- name: Replaces device configuration of listed vlans with provided configuration
eos_vlans:
config: "{{ config }}"
state: replaced
register: result
become: yes
- assert:
that:
- "ansible_facts.network_resources.vlans|symmetric_difference(result.before) == []"
- eos_facts:
gather_network_resources: vlans
become: yes
- assert:
that:
- "ansible_facts.network_resources.vlans|symmetric_difference(result.after) == []"
- set_fact:
expected_config: "{{ config }} + {{ other_config }}"
- assert:
that:
- "expected_config|symmetric_difference(ansible_facts.network_resources.vlans) == []"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,313 |
eos_vlans using state parameter replaced is giving odd behavior
|
##### SUMMARY
I assume that the vlan-id is the winning key that will replace other data. I am seeing some odd behavior where if I have something like
on-box before
```
- vlan_id: 80
```
on-box after
```
- vlan_id: 80
```
but I am actually sending a key/value pair, name: sean
```
commands:
- vlan 80
- name sean
- no name
```
but for some reason it also emits `no name`, stripping the name....
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
eos_vlans
##### ANSIBLE VERSION
```paste below
ansible 2.9.2
config file = /home/student1/.ansible.cfg
configured module search path = [u'/home/student1/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST(/home/student1/.ansible.cfg) = [u'/home/student1/networking-workshop/lab_inventory/
DEFAULT_STDOUT_CALLBACK(/home/student1/.ansible.cfg) = yaml
DEFAULT_TIMEOUT(/home/student1/.ansible.cfg) = 60
DEPRECATION_WARNINGS(/home/student1/.ansible.cfg) = False
HOST_KEY_CHECKING(/home/student1/.ansible.cfg) = False
PERSISTENT_COMMAND_TIMEOUT(/home/student1/.ansible.cfg) = 200
PERSISTENT_CONNECT_TIMEOUT(/home/student1/.ansible.cfg) = 200
RETRY_FILES_ENABLED(/home/student1/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
```
[student1@ansible playbooks]$ cat /etc/*release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.7 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.7"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.7 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.7:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.7
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.7"
Red Hat Enterprise Linux Server release 7.7 (Maipo)
Red Hat Enterprise Linux Server release 7.7 (Maipo)
```
##### STEPS TO REPRODUCE
https://gist.github.com/IPvSean/028b36bab5783dfd4f4a01a2c4063613
##### EXPECTED RESULTS
vlan_id would win and override
##### ACTUAL RESULTS
the vlan name is being stripped out for some reason; see the gist link above
|
https://github.com/ansible/ansible/issues/67313
|
https://github.com/ansible/ansible/pull/67318
|
cd146b836e032df785ecd9eb711c6ef23c2228b8
|
4ec1437212b2fb3c313e44ed5a76b105f2151622
| 2020-02-11T18:17:40Z |
python
| 2020-02-12T16:12:12Z |
test/units/modules/network/eos/test_eos_vlans.py
|
#
# (c) 2019, Ansible by Red Hat, inc
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from units.compat.mock import patch
from ansible.modules.network.eos import eos_vlans
from units.modules.utils import set_module_args
from .eos_module import TestEosModule, load_fixture
class TestEosVlansModule(TestEosModule):
module = eos_vlans
def setUp(self):
super(TestEosVlansModule, self).setUp()
self.mock_get_config = patch('ansible.module_utils.network.common.network.Config.get_config')
self.get_config = self.mock_get_config.start()
self.mock_load_config = patch('ansible.module_utils.network.common.network.Config.load_config')
self.load_config = self.mock_load_config.start()
self.mock_get_resource_connection_config = patch('ansible.module_utils.network.common.cfg.base.get_resource_connection')
self.get_resource_connection_config = self.mock_get_resource_connection_config.start()
self.mock_get_resource_connection_facts = patch('ansible.module_utils.network.common.facts.facts.get_resource_connection')
self.get_resource_connection_facts = self.mock_get_resource_connection_facts.start()
self.mock_edit_config = patch('ansible.module_utils.network.eos.providers.providers.CliProvider.edit_config')
self.edit_config = self.mock_edit_config.start()
self.mock_execute_show_command = patch('ansible.module_utils.network.eos.config.vlans.vlans.Vlans.get_vlans_facts')
self.execute_show_command = self.mock_execute_show_command.start()
def tearDown(self):
super(TestEosVlansModule, self).tearDown()
self.mock_get_resource_connection_config.stop()
self.mock_get_resource_connection_facts.stop()
self.mock_edit_config.stop()
self.mock_get_config.stop()
self.mock_load_config.stop()
self.mock_execute_show_command.stop()
def load_fixtures(self, commands=None, transport='cli'):
file_cmd = load_fixture('eos_vlan_config.cfg').split()
file_cmd_dict = {}
for i in range(0, len(file_cmd), 2):
if file_cmd[i] == 'vlan_id':
y = int(file_cmd[i + 1])
else:
y = file_cmd[i + 1]
file_cmd_dict.update({file_cmd[i]: y})
self.execute_show_command.return_value = [file_cmd_dict]
def test_eos_vlan_default(self):
self.execute_show_command.return_value = []
set_module_args(dict(
config=[dict(
vlan_id=30,
name="thirty"
)]
))
commands = ['vlan 30', 'name thirty']
self.execute_module(changed=True, commands=commands)
def test_eos_vlan_default_idempotent(self):
self.execute_show_command.return_value = load_fixture('eos_vlan_config.cfg')
set_module_args(dict(
config=[dict(
vlan_id=10,
name="ten"
)]
))
self.execute_module(changed=False, commands=[])
def test_eos_vlan_merged(self):
self.execute_show_command.return_value = []
set_module_args(dict(
config=[dict(
vlan_id=30,
name="thirty"
)], state="merged"
))
commands = ['vlan 30', 'name thirty']
self.execute_module(changed=True, commands=commands)
def test_eos_vlan_merged_idempotent(self):
self.execute_show_command.return_value = load_fixture('eos_vlan_config.cfg')
set_module_args(dict(
config=[dict(
vlan_id=10,
name="ten"
)], state="merged"
))
self.execute_module(changed=False, commands=[])
def test_eos_vlan_replaced(self):
self.execute_show_command.return_value = []
set_module_args(dict(
config=[dict(
vlan_id=30,
name="thirty",
state="suspend"
)], state="replaced"
))
commands = ['vlan 30', 'name thirty', 'state suspend']
self.execute_module(changed=True, commands=commands)
def test_eos_vlan_replaced_idempotent(self):
self.execute_show_command.return_value = load_fixture('eos_vlan_config.cfg')
set_module_args(dict(
config=[dict(
vlan_id=10,
name="ten"
)], state="replaced"
))
self.execute_module(changed=False, commands=[])
def test_eos_vlan_overridden(self):
self.execute_show_command.return_value = []
set_module_args(dict(
config=[dict(
vlan_id=30,
name="thirty",
state="suspend"
)], state="overridden"
))
commands = ['no vlan 10', 'vlan 30', 'name thirty', 'state suspend']
self.execute_module(changed=True, commands=commands)
def test_eos_vlan_overridden_idempotent(self):
self.execute_show_command.return_value = load_fixture('eos_vlan_config.cfg')
set_module_args(dict(
config=[dict(
vlan_id=10,
name="ten"
)], state="overridden"
))
self.execute_module(changed=False, commands=[])
def test_eos_vlan_deleted(self):
set_module_args(dict(
config=[dict(
vlan_id=10,
name="ten",
)], state="deleted"
))
commands = ['no vlan 10']
self.execute_module(changed=True, commands=commands)
def test_eos_vlan_id_datatype(self):
set_module_args(dict(
config=[dict(
vlan_id="thirty"
)]
))
result = self.execute_module(failed=True)
self.assertIn("we were unable to convert to int", result['msg'])
def test_eos_vlan_state_datatype(self):
set_module_args(dict(
config=[dict(
vlan_id=30,
state=10
)]
))
result = self.execute_module(failed=True)
self.assertIn("value of state must be one of: active, suspend", result['msg'])
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,386 |
Deep merge of dictionaries
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The [__combine__ filter of Ansible](https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#combining-hashes-dictionaries) is limited when it comes to nested elements that contain lists. The __recursive__ functionality only merges dict elements, not nested list elements.
* The current implementation simply takes the second list as the result.
* The expected result would be to merge both lists into a single list.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
Jinja2 integration
##### ADDITIONAL INFORMATION
Example:
```yaml
---
- hosts: localhost
gather_facts: false
vars:
foo:
list:
- a
- b
dict:
foo: 1
bar:
list:
- c
- d
dict:
bar: 2
tasks:
- debug:
msg: '{{ {} | combine(foo, bar, recursive=True) }}'
```
Expected:
```yaml
{
"dict": {
"bar": 2,
"foo": 1
},
"list": [
"a",
"b",
"c",
"d"
]
}
```
Actual:
```yaml
{
"dict": {
"bar": 2,
"foo": 1
},
"list": [
"c",
"d"
]
}
```
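For reference, the behaviour being requested is roughly the following pure-Python merge, where nested dicts are merged recursively and lists are concatenated rather than replaced (a sketch of the desired semantics, not the filter's implementation):
```python
# Sketch of the requested semantics: recursive dict merge that also appends lists.
def deep_merge(a, b):
    result = dict(a)
    for key, b_value in b.items():
        a_value = result.get(key)
        if isinstance(a_value, dict) and isinstance(b_value, dict):
            result[key] = deep_merge(a_value, b_value)
        elif isinstance(a_value, list) and isinstance(b_value, list):
            result[key] = a_value + b_value
        else:
            result[key] = b_value
    return result

foo = {'list': ['a', 'b'], 'dict': {'foo': 1}}
bar = {'list': ['c', 'd'], 'dict': {'bar': 2}}
print(deep_merge(foo, bar))
# -> {'list': ['a', 'b', 'c', 'd'], 'dict': {'foo': 1, 'bar': 2}}
```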
|
https://github.com/ansible/ansible/issues/59386
|
https://github.com/ansible/ansible/pull/57894
|
33f136292b06a14c98fa4c05bdb6409a5e84e352
|
53e043b5febd30f258a233f51b180a543300151b
| 2019-07-22T13:50:47Z |
python
| 2020-02-12T21:40:36Z |
changelogs/fragments/57894-combine-filter-rework.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,386 |
Deep merge of dictionaries
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The [__combine__ filter of Ansible](https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#combining-hashes-dictionaries) is limited when it comes to nested elements that contain lists. The __recursive__ functionality only merges dict elements, not nested list elements.
* The current implementation simply takes the second list as the result.
* The expected result would be to merge both lists into a single list.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
Jinja2 integration
##### ADDITIONAL INFORMATION
Example:
```yaml
---
- hosts: localhost
gather_facts: false
vars:
foo:
list:
- a
- b
dict:
foo: 1
bar:
list:
- c
- d
dict:
bar: 2
tasks:
- debug:
msg: '{{ {} | combine(foo, bar, recursive=True) }}'
```
Expected:
```yaml
{
"dict": {
"bar": 2,
"foo": 1
},
"list": [
"a",
"b",
"c",
"d"
]
}
```
Actual:
```yaml
{
"dict": {
"bar": 2,
"foo": 1
},
"list": [
"c",
"d"
]
}
```
|
https://github.com/ansible/ansible/issues/59386
|
https://github.com/ansible/ansible/pull/57894
|
33f136292b06a14c98fa4c05bdb6409a5e84e352
|
53e043b5febd30f258a233f51b180a543300151b
| 2019-07-22T13:50:47Z |
python
| 2020-02-12T21:40:36Z |
docs/docsite/rst/user_guide/playbooks_filters.rst
|
.. _playbooks_filters:
*******
Filters
*******
Filters let you transform data inside template expressions. This page documents mainly Ansible-specific filters, but you can use any of the standard filters shipped with Jinja2 - see the list of :ref:`builtin filters <jinja2:builtin-filters>` in the official Jinja2 template documentation. You can also use :ref:`Python methods <jinja2:python-methods>` to manipulate variables. A few useful filters are typically added with each new Ansible release. The development documentation shows
how to create custom Ansible filters as plugins, though we generally welcome new filters into the core code so everyone can use them.
Templating happens on the Ansible controller, **not** on the target host, so filters execute on the controller and manipulate data locally.
.. contents::
:local:
Handling undefined variables
============================
Filters can help you manage missing or undefined variables by providing defaults or making some variables optional. If you configure Ansible to ignore most undefined variables, you can mark some variables as requiring values with the ``mandatory`` filter.
.. _defaulting_undefined_variables:
Providing default values
------------------------
You can provide default values for variables directly in your templates using the Jinja2 'default' filter. This is often a better approach than failing if a variable is not defined::
{{ some_variable | default(5) }}
In the above example, if the variable 'some_variable' is not defined, Ansible uses the default value 5, rather than raising an "undefined variable" error and failing. If you are working within a role, you can also add a ``defaults/main.yml`` to define the default values for variables in your role.
Beginning in version 2.8, attempting to access an attribute of an Undefined value in Jinja will return another Undefined value, rather than throwing an error immediately. This means that you can now simply use
a default with a value in a nested data structure (i.e. :code:`{{ foo.bar.baz | default('DEFAULT') }}`) when you do not know if the intermediate values are defined.
If you want to use the default value when variables evaluate to false or an empty string, you have to set the second parameter to ``true``::
{{ lookup('env', 'MY_USER') | default('admin', true) }}
.. _omitting_undefined_variables:
Making variables optional
-------------------------
In some cases, you want to make a variable optional. For example, if you want to use a system default for some items and control the value for others. To make a variable optional, set the default value to the special variable ``omit``::
- name: touch files with an optional mode
file:
dest: "{{ item.path }}"
state: touch
mode: "{{ item.mode | default(omit) }}"
loop:
- path: /tmp/foo
- path: /tmp/bar
- path: /tmp/baz
mode: "0444"
In this example, the default mode for the files ``/tmp/foo`` and ``/tmp/bar`` is determined by the umask of the system. Ansible does not send a value for ``mode``. Only the third file, ``/tmp/baz``, receives the `mode=0444` option.
.. note:: If you are "chaining" additional filters after the ``default(omit)`` filter, you should instead do something like this:
``"{{ foo | default(None) | some_filter or omit }}"``. In this example, the default ``None`` (Python null) value will cause the
later filters to fail, which will trigger the ``or omit`` portion of the logic. Using ``omit`` in this manner is very specific to
the later filters you're chaining though, so be prepared for some trial and error if you do this.
.. _forcing_variables_to_be_defined:
Defining mandatory values
-------------------------
If you configure Ansible to ignore undefined variables, you may want to define some values as mandatory. By default, Ansible fails if a variable in your playbook or command is undefined. You can configure Ansible to allow undefined variables by setting :ref:`DEFAULT_UNDEFINED_VAR_BEHAVIOR` to ``false``. In that case, you may want to require some variables to be defined. You can do this with::
{{ variable | mandatory }}
The variable value will be used as is, but the template evaluation will raise an error if it is undefined.
Defining different values for true/false/null
=============================================
You can create a test, then define one value to use when the test returns true and another when the test returns false (new in version 1.9)::
{{ (name == "John") | ternary('Mr','Ms') }}
In addition, you can define one value to use on true, one value on false and a third value on null (new in version 2.8)::
{{ enabled | ternary('no shutdown', 'shutdown', omit) }}
Manipulating data types
=======================
Sometimes a variables file or registered variable contains a dictionary when your playbook needs a list. Sometimes you have a list when your template needs a dictionary. These filters help you transform these data types.
.. _dict_filter:
Transforming dictionaries into lists
------------------------------------
.. versionadded:: 2.6
To turn a dictionary into a list of items, suitable for looping, use `dict2items`::
{{ dict | dict2items }}
Which turns::
tags:
Application: payment
Environment: dev
into::
- key: Application
value: payment
- key: Environment
value: dev
.. versionadded:: 2.8
``dict2items`` accepts 2 keyword arguments, ``key_name`` and ``value_name`` that allow configuration of the names of the keys to use for the transformation::
{{ files | dict2items(key_name='file', value_name='path') }}
Which turns::
files:
users: /etc/passwd
groups: /etc/group
into::
- file: users
path: /etc/passwd
- file: groups
path: /etc/group
Transforming lists into dictionaries
------------------------------------
.. versionadded:: 2.7
This filter turns a list of dicts with 2 keys, into a dict, mapping the values of those keys into ``key: value`` pairs::
{{ tags | items2dict }}
Which turns::
tags:
- key: Application
value: payment
- key: Environment
value: dev
into::
Application: payment
Environment: dev
This is the reverse of the ``dict2items`` filter.
``items2dict`` accepts 2 keyword arguments, ``key_name`` and ``value_name`` that allow configuration of the names of the keys to use for the transformation::
{{ tags | items2dict(key_name='key', value_name='value') }}
Discovering the data type
-------------------------
.. versionadded:: 2.3
If you are unsure of the underlying Python type of a variable, you can use the ``type_debug`` filter to display it. This is useful in debugging when you need a particular type of variable::
{{ myvar | type_debug }}
Forcing the data type
---------------------
You can cast values as certain types. For example, if you expect the input "True" from a :ref:`vars_prompt <playbooks_prompts>` and you want Ansible to recognize it as a Boolean value instead of a string::
- debug:
msg: test
when: some_string_value | bool
.. versionadded:: 1.6
.. _filters_for_formatting_data:
Controlling data formats: YAML and JSON
=======================================
The following filters will take a data structure in a template and manipulate it or switch it from or to JSON or YAML format. These are occasionally useful for debugging::
{{ some_variable | to_json }}
{{ some_variable | to_yaml }}
For human readable output, you can use::
{{ some_variable | to_nice_json }}
{{ some_variable | to_nice_yaml }}
You can change the indentation of either format::
{{ some_variable | to_nice_json(indent=2) }}
{{ some_variable | to_nice_yaml(indent=8) }}
The ``to_yaml`` and ``to_nice_yaml`` filters use the `PyYAML library`_, which has a default 80-character string length limit. That causes an unexpected line break after the 80th character (if there is a space after the 80th character).
To avoid such behavior and generate long lines, use the ``width`` option. You must use a hardcoded number to define the width, instead of a construction like ``float("inf")``, because the filter does not support proxying Python functions. For example::
{{ some_variable | to_yaml(indent=8, width=1337) }}
{{ some_variable | to_nice_yaml(indent=8, width=1337) }}
The filter does support passing through other YAML parameters. For a full list, see the `PyYAML documentation`_.
If you are reading in some already formatted data::
{{ some_variable | from_json }}
{{ some_variable | from_yaml }}
for example::
tasks:
- shell: cat /some/path/to/file.json
register: result
- set_fact:
myvar: "{{ result.stdout | from_json }}"
.. versionadded:: 2.7
To parse multi-document YAML strings, the ``from_yaml_all`` filter is provided.
The ``from_yaml_all`` filter will return a generator of parsed YAML documents.
for example::
tasks:
- shell: cat /some/path/to/multidoc-file.yaml
register: result
- debug:
msg: '{{ item }}'
loop: '{{ result.stdout | from_yaml_all | list }}'
Combining and selecting data
============================
These filters let you manipulate data from multiple sources and types and manage large data structures, giving you precise control over complex data.
.. _zip_filter:
Combining items from multiple lists: zip and zip_longest
--------------------------------------------------------
.. versionadded:: 2.3
To get a list combining the elements of other lists use ``zip``::
- name: give me list combo of two lists
debug:
msg: "{{ [1,2,3,4,5] | zip(['a','b','c','d','e','f']) | list }}"
- name: give me shortest combo of two lists
debug:
msg: "{{ [1,2,3] | zip(['a','b','c','d','e','f']) | list }}"
To always exhaust all lists, use ``zip_longest``::
- name: give me longest combo of three lists , fill with X
debug:
msg: "{{ [1,2,3] | zip_longest(['a','b','c','d','e','f'], [21, 22, 23], fillvalue='X') | list }}"
Similarly to the output of the ``items2dict`` filter mentioned above, these filters can be used to construct a ``dict``::
{{ dict(keys_list | zip(values_list)) }}
Which turns::
keys_list:
- one
- two
values_list:
- apple
- orange
into::
one: apple
two: orange
Combining objects and subelements
---------------------------------
.. versionadded:: 2.7
The ``subelements`` filter produces a product of an object and the subelement values of that object, similar to the ``subelements`` lookup. This lets you specify individual subelements to use in a template. For example, this expression::
{{ users | subelements('groups', skip_missing=True) }}
turns this data::
users:
- name: alice
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
groups:
- wheel
- docker
- name: bob
authorized:
- /tmp/bob/id_rsa.pub
groups:
- docker
Into this data::
-
- name: alice
groups:
- wheel
- docker
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
- wheel
-
- name: alice
groups:
- wheel
- docker
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
- docker
-
- name: bob
authorized:
- /tmp/bob/id_rsa.pub
groups:
- docker
- docker
You can use the transformed data with ``loop`` to iterate over the same subelement for multiple objects::
- name: Set authorized ssh key, extracting just that data from 'users'
authorized_key:
user: "{{ item.0.name }}"
key: "{{ lookup('file', item.1) }}"
loop: "{{ users | subelements('authorized') }}"
.. _combine_filter:
Combining hashes
----------------
.. versionadded:: 2.0
The `combine` filter allows hashes to be merged. For example, the following would override keys in one hash::
{{ {'a':1, 'b':2} | combine({'b':3}) }}
The resulting hash would be::
{'a':1, 'b':3}
The filter also accepts an optional `recursive=True` parameter to not
only override keys in the first hash, but also recurse into nested
hashes and merge their keys too:
.. code-block:: jinja
{{ {'a':{'foo':1, 'bar':2}, 'b':2} | combine({'a':{'bar':3, 'baz':4}}, recursive=True) }}
This would result in::
{'a':{'foo':1, 'bar':3, 'baz':4}, 'b':2}
The filter can also take multiple arguments to merge::
{{ a | combine(b, c, d) }}
In this case, keys in `d` would override those in `c`, which would override those in `b`, and so on.
This behavior does not depend on the value of the `hash_behaviour` setting in `ansible.cfg`.
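For example, a quick sketch of that precedence with small literal hashes (values chosen only for illustration)::

    {{ {'a': 1, 'b': 2} | combine({'a': 10}, {'a': 100, 'c': 3}) }}
    # => {'a': 100, 'b': 2, 'c': 3}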
.. _extract_filter:
Selecting values from arrays or hashtables
-------------------------------------------
.. versionadded:: 2.1
The `extract` filter is used to map from a list of indices to a list of
values from a container (hash or array)::
{{ [0,2] | map('extract', ['x','y','z']) | list }}
{{ ['x','y'] | map('extract', {'x': 42, 'y': 31}) | list }}
The results of the above expressions would be::
['x', 'z']
[42, 31]
The filter can take another argument::
{{ groups['x'] | map('extract', hostvars, 'ec2_ip_address') | list }}
This takes the list of hosts in group 'x', looks them up in `hostvars`,
and then looks up the `ec2_ip_address` of the result. The final result
is a list of IP addresses for the hosts in group 'x'.
The third argument to the filter can also be a list, for a recursive
lookup inside the container::
{{ ['a'] | map('extract', b, ['x','y']) | list }}
This would return a list containing the value of `b['a']['x']['y']`.
Combining lists
---------------
This set of filters returns a list of combined lists.
permutations
^^^^^^^^^^^^
To get permutations of a list::
- name: give me largest permutations (order matters)
debug:
msg: "{{ [1,2,3,4,5] | permutations | list }}"
- name: give me permutations of sets of three
debug:
msg: "{{ [1,2,3,4,5] | permutations(3) | list }}"
combinations
^^^^^^^^^^^^
Combinations always require a set size::
- name: give me combinations for sets of two
debug:
msg: "{{ [1,2,3,4,5] | combinations(2) | list }}"
Also see the :ref:`zip_filter`.
products
^^^^^^^^
The product filter returns the `cartesian product <https://docs.python.org/3/library/itertools.html#itertools.product>`_ of the input iterables.
This is roughly equivalent to nested for-loops in a generator expression.
For example::
- name: generate multiple hostnames
debug:
msg: "{{ ['foo', 'bar'] | product(['com']) | map('join', '.') | join(',') }}"
This would result in::
{ "msg": "foo.com,bar.com" }
.. _json_query_filter:
Selecting JSON data: JSON queries
---------------------------------
Sometimes you end up with a complex data structure in JSON format and you need to extract only a small set of data within it. The **json_query** filter lets you query a complex JSON structure and iterate over it using a loop structure.
.. note:: This filter is built upon **jmespath**, and you can use the same syntax. For examples, see `jmespath examples <http://jmespath.org/examples.html>`_.
Consider this data structure::
{
"domain_definition": {
"domain": {
"cluster": [
{
"name": "cluster1"
},
{
"name": "cluster2"
}
],
"server": [
{
"name": "server11",
"cluster": "cluster1",
"port": "8080"
},
{
"name": "server12",
"cluster": "cluster1",
"port": "8090"
},
{
"name": "server21",
"cluster": "cluster2",
"port": "9080"
},
{
"name": "server22",
"cluster": "cluster2",
"port": "9090"
}
],
"library": [
{
"name": "lib1",
"target": "cluster1"
},
{
"name": "lib2",
"target": "cluster2"
}
]
}
}
}
To extract all clusters from this structure, you can use the following query::
- name: "Display all cluster names"
debug:
var: item
loop: "{{ domain_definition | json_query('domain.cluster[*].name') }}"
Same thing for all server names::
- name: "Display all server names"
debug:
var: item
loop: "{{ domain_definition | json_query('domain.server[*].name') }}"
This example shows ports from cluster1::
- name: "Display all ports from cluster1"
debug:
var: item
loop: "{{ domain_definition | json_query(server_name_cluster1_query) }}"
vars:
server_name_cluster1_query: "domain.server[?cluster=='cluster1'].port"
.. note:: You can use a variable to make the query more readable.
Or, alternatively print out the ports in a comma separated string::
- name: "Display all ports from cluster1 as a string"
debug:
msg: "{{ domain_definition | json_query('domain.server[?cluster==`cluster1`].port') | join(', ') }}"
.. note:: Here, quoting literals using backticks avoids escaping quotes and maintains readability.
Or, using YAML `single quote escaping <https://yaml.org/spec/current.html#id2534365>`_::
- name: "Display all ports from cluster1"
debug:
var: item
loop: "{{ domain_definition | json_query('domain.server[?cluster==''cluster1''].port') }}"
.. note:: Escaping single quotes within single quotes in YAML is done by doubling the single quote.
In this example, we get a hash map with all ports and names of a cluster::
- name: "Display all server ports and names from cluster1"
debug:
var: item
loop: "{{ domain_definition | json_query(server_name_cluster1_query) }}"
vars:
server_name_cluster1_query: "domain.server[?cluster=='cluster1'].{name: name, port: port}"
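With the sample ``domain_definition`` shown earlier, the loop above would display items along these lines::

    { "name": "server11", "port": "8080" }
    { "name": "server12", "port": "8090" }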
Randomizing data
================
When you need a randomly generated value, use one of these filters.
.. _random_mac_filter:
Random MAC addresses
--------------------
.. versionadded:: 2.6
This filter can be used to generate a random MAC address from a string prefix.
To get a random MAC address from a string prefix starting with '52:54:00'::
"{{ '52:54:00' | random_mac }}"
# => '52:54:00:ef:1c:03'
Note that if anything is wrong with the prefix string, the filter will issue an error.
.. versionadded:: 2.9
As of Ansible version 2.9, you can also initialize the random number generator from a seed. This way, you can create random-but-idempotent MAC addresses::
"{{ '52:54:00' | random_mac(seed=inventory_hostname) }}"
.. _random_filter:
Random items or numbers
-----------------------
This filter can be used similarly to the default Jinja2 random filter (returning a random item from a sequence of
items), but it can also generate a random number based on a range.
To get a random item from a list::
"{{ ['a','b','c'] | random }}"
# => 'c'
To get a random number between 0 and a specified number::
"{{ 60 | random }} * * * * root /script/from/cron"
# => '21 * * * * root /script/from/cron'
Get a random number from 0 to 100 but in steps of 10::
{{ 101 | random(step=10) }}
# => 70
Get a random number from 1 to 100 but in steps of 10::
{{ 101 | random(1, 10) }}
# => 31
{{ 101 | random(start=1, step=10) }}
# => 51
It's also possible to initialize the random number generator from a seed. This way, you can create random-but-idempotent numbers::
"{{ 60 | random(seed=inventory_hostname) }} * * * * root /script/from/cron"
Shuffling a list
----------------
This filter will randomize an existing list, giving a different order every invocation.
To get a random list from an existing list::
{{ ['a','b','c'] | shuffle }}
# => ['c','a','b']
{{ ['a','b','c'] | shuffle }}
# => ['b','c','a']
It's also possible to shuffle a list idempotently. All you need is a seed::
{{ ['a','b','c'] | shuffle(seed=inventory_hostname) }}
# => ['b','a','c']
The shuffle filter returns a list whenever possible. If you use it with a non-'listable' item, the filter does nothing.
.. _list_filters:
List filters
============
These filters all operate on list variables.
To get the minimum value from a list of numbers::
{{ list1 | min }}
To get the maximum value from a list of numbers::
{{ [3, 4, 2] | max }}
.. versionadded:: 2.5
Flatten a list (same thing the `flatten` lookup does)::
{{ [3, [4, 2] ] | flatten }}
Flatten only the first level of a list (akin to the `items` lookup)::
{{ [3, [4, [2]] ] | flatten(levels=1) }}
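For example, with the nested lists shown above, these expressions evaluate to::

    {{ [3, [4, 2] ] | flatten }}
    # => [3, 4, 2]

    {{ [3, [4, [2]] ] | flatten(levels=1) }}
    # => [3, 4, [2]]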
.. _set_theory_filters:
Set theory filters
==================
These functions return a unique set from sets or lists.
.. versionadded:: 1.4
To get a unique set from a list::
{{ list1 | unique }}
To get a union of two lists::
{{ list1 | union(list2) }}
To get the intersection of 2 lists (unique list of all items in both)::
{{ list1 | intersect(list2) }}
To get the difference of 2 lists (items in 1 that don't exist in 2)::
{{ list1 | difference(list2) }}
To get the symmetric difference of 2 lists (items exclusive to each list)::
{{ list1 | symmetric_difference(list2) }}
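As a rough sketch of what these return, using two small example lists (the exact ordering of the results may vary by version)::

    {{ [1, 2, 3, 4] | union([3, 4, 5]) }}
    # => [1, 2, 3, 4, 5]

    {{ [1, 2, 3, 4] | intersect([3, 4, 5]) }}
    # => [3, 4]

    {{ [1, 2, 3, 4] | difference([3, 4, 5]) }}
    # => [1, 2]

    {{ [1, 2, 3, 4] | symmetric_difference([3, 4, 5]) }}
    # => [1, 2, 5]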
.. _math_stuff:
Math filters
============
.. versionadded:: 1.9
Get the logarithm (default is e)::
{{ myvar | log }}
Get the base 10 logarithm::
{{ myvar | log(10) }}
Give me the power of 2! (or 5)::
{{ myvar | pow(2) }}
{{ myvar | pow(5) }}
Square root, or the 5th::
{{ myvar | root }}
{{ myvar | root(5) }}
Note that Jinja2 already provides some math functions, such as ``abs()`` and ``round()``.
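A few worked examples of the filters above (the results are floating point numbers, so the exact formatting may differ slightly)::

    {{ 8 | pow(3) }}
    # => 512.0

    {{ 9 | root }}
    # => 3.0

    {{ 100 | log(10) }}
    # => 2.0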
Network filters
===============
These filters help you with common network tasks.
.. _ipaddr_filter:
IP address filters
------------------
.. versionadded:: 1.9
To test if a string is a valid IP address::
{{ myvar | ipaddr }}
You can also require a specific IP protocol version::
{{ myvar | ipv4 }}
{{ myvar | ipv6 }}
The IP address filter can also be used to extract specific information from an IP
address. For example, to get the IP address itself from a CIDR, you can use::
{{ '192.0.2.1/24' | ipaddr('address') }}
More information about ``ipaddr`` filter and complete usage guide can be found
in :ref:`playbooks_filters_ipaddr`.
.. _network_filters:
Network CLI filters
-------------------
.. versionadded:: 2.4
To convert the output of a network device CLI command into structured JSON
output, use the ``parse_cli`` filter::
{{ output | parse_cli('path/to/spec') }}
The ``parse_cli`` filter will load the spec file and pass the command output
through it, returning JSON output. The spec file is a YAML document that defines
how to parse the CLI output and return JSON data. Below is an example of a valid
spec file that will parse the output from the ``show vlan`` command.
.. code-block:: yaml
---
vars:
vlan:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
enabled: "{{ item.state != 'act/lshut' }}"
state: "{{ item.state }}"
keys:
vlans:
value: "{{ vlan }}"
items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)"
state_static:
value: present
The spec file above will return a JSON data structure that is a list of hashes
with the parsed VLAN information.
The same command could be parsed into a hash by using the key and values
directives. Here is an example of how to parse the output into a hash
value using the same ``show vlan`` command.
.. code-block:: yaml
---
vars:
vlan:
key: "{{ item.vlan_id }}"
values:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
enabled: "{{ item.state != 'act/lshut' }}"
state: "{{ item.state }}"
keys:
vlans:
value: "{{ vlan }}"
items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)"
state_static:
value: present
Another common use case for parsing CLI commands is to break large command
output into blocks. This can be done using the ``start_block`` and
``end_block`` directives to mark the beginning and end of each block to be parsed.
.. code-block:: yaml
---
vars:
interface:
name: "{{ item[0].match[0] }}"
state: "{{ item[1].state }}"
mode: "{{ item[2].match[0] }}"
keys:
interfaces:
value: "{{ interface }}"
start_block: "^Ethernet.*$"
end_block: "^$"
items:
- "^(?P<name>Ethernet\\d\\/\\d*)"
- "admin state is (?P<state>.+),"
- "Port mode is (.+)"
The example above will parse the output of ``show interface`` into a list of
hashes.
The network filters also support parsing the output of a CLI command using the
TextFSM library. To parse the CLI output with TextFSM, use the following
filter::
{{ output.stdout[0] | parse_cli_textfsm('path/to/fsm') }}
Use of the TextFSM filter requires the TextFSM library to be installed.
Network XML filters
-------------------
.. versionadded:: 2.5
To convert the XML output of a network device command into structured JSON
output, use the ``parse_xml`` filter::
{{ output | parse_xml('path/to/spec') }}
The ``parse_xml`` filter will load the spec file and pass the command output
through it, returning JSON output.
The spec file should be valid, properly formatted YAML. It defines how to parse the XML
output and return JSON data.
Below is an example of a valid spec file that
will parse the output from the ``show vlan | display xml`` command.
.. code-block:: yaml
---
vars:
vlan:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
desc: "{{ item.desc }}"
enabled: "{{ item.state.get('inactive') != 'inactive' }}"
state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}"
keys:
vlans:
value: "{{ vlan }}"
top: configuration/vlans/vlan
items:
vlan_id: vlan-id
name: name
desc: description
state: ".[@inactive='inactive']"
The spec file above will return a JSON data structure that is a list of hashes
with the parsed VLAN information.
The same command could be parsed into a hash by using the key and values
directives. Here is an example of how to parse the output into a hash
value using the same ``show vlan | display xml`` command.
.. code-block:: yaml
---
vars:
vlan:
key: "{{ item.vlan_id }}"
values:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
desc: "{{ item.desc }}"
enabled: "{{ item.state.get('inactive') != 'inactive' }}"
state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}"
keys:
vlans:
value: "{{ vlan }}"
top: configuration/vlans/vlan
items:
vlan_id: vlan-id
name: name
desc: description
state: ".[@inactive='inactive']"
The value of ``top`` is the XPath relative to the XML root node.
In the example XML output given below, the value of ``top`` is ``configuration/vlans/vlan``,
which is an XPath expression relative to the root node (<rpc-reply>).
``configuration`` in the value of ``top`` is the outermost container node, and ``vlan``
is the innermost container node.
``items`` is a dictionary of key-value pairs that map user-defined names to XPath expressions
that select elements. Each XPath expression is relative to the XPath value contained in ``top``.
For example, ``vlan_id`` in the spec file is a user-defined name and its value ``vlan-id`` is an
XPath expression relative to the value of ``top``.
Attributes of XML tags can be extracted using XPath expressions. The value of ``state`` in the spec
is an XPath expression used to get the attributes of the ``vlan`` tag in the output XML::
<rpc-reply>
<configuration>
<vlans>
<vlan inactive="inactive">
<name>vlan-1</name>
<vlan-id>200</vlan-id>
<description>This is vlan-1</description>
</vlan>
</vlans>
</configuration>
</rpc-reply>
.. note:: For more information on supported XPath expressions, see `<https://docs.python.org/2/library/xml.etree.elementtree.html#xpath-support>`_.
Network VLAN filters
--------------------
.. versionadded:: 2.8
Use the ``vlan_parser`` filter to manipulate an unsorted list of VLAN integers into a
sorted string list of integers according to IOS-like VLAN list rules. This list has the following properties:
* VLANs are listed in ascending order.
* Three or more consecutive VLANs are listed with a dash.
* The first line of the list can be ``first_line_len`` characters long.
* Subsequent list lines can be ``other_line_len`` characters long.
To sort a VLAN list::
{{ [3003, 3004, 3005, 100, 1688, 3002, 3999] | vlan_parser }}
This example renders the following sorted list::
['100,1688,3002-3005,3999']
Another example Jinja template::
{% set parsed_vlans = vlans | vlan_parser %}
switchport trunk allowed vlan {{ parsed_vlans[0] }}
{% for i in range (1, parsed_vlans | count) %}
switchport trunk allowed vlan add {{ parsed_vlans[i] }}
{% endfor %}
This allows for dynamic generation of VLAN lists on a Cisco IOS tagged interface. You can store an exhaustive raw list of the exact VLANs required for an interface and then compare that to the parsed IOS output that would actually be generated for the configuration.
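For instance, with the sorted single-element list from the example above, the template would render roughly as::

    switchport trunk allowed vlan 100,1688,3002-3005,3999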
.. _hash_filters:
Encryption filters
==================
.. versionadded:: 1.9
To get the sha1 hash of a string::
{{ 'test1' | hash('sha1') }}
To get the md5 hash of a string::
{{ 'test1' | hash('md5') }}
Get a string checksum::
{{ 'test2' | checksum }}
Other hashes (platform dependent)::
{{ 'test2' | hash('blowfish') }}
To get a sha512 password hash (random salt)::
{{ 'passwordsaresecret' | password_hash('sha512') }}
To get a sha256 password hash with a specific salt::
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt') }}
An idempotent method to generate unique hashes per system is to use a salt that is consistent between runs::
{{ 'secretpassword' | password_hash('sha512', 65534 | random(seed=inventory_hostname) | string) }}
Hash types available depend on the master system running Ansible: 'hash' depends on hashlib,
'password_hash' depends on passlib (https://passlib.readthedocs.io/en/stable/lib/passlib.hash.html).
.. versionadded:: 2.7
Some hash types allow providing a rounds parameter::
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt', rounds=10000) }}
.. _other_useful_filters:
Text filters
============
These filters work with strings and text.
.. _comment_filter:
Adding comments to files
------------------------
The `comment` filter lets you turn text in a template into comments in a file, with a variety of comment styles. By default Ansible uses ``#`` to start a comment line and adds a blank comment line above and below your comment text. For example the following::
{{ "Plain style (default)" | comment }}
produces this output:
.. code-block:: text
#
# Plain style (default)
#
Ansible offers styles for comments in C (``//...``), C block
(``/*...*/``), Erlang (``%...``) and XML (``<!--...-->``)::
{{ "C style" | comment('c') }}
{{ "C block style" | comment('cblock') }}
{{ "Erlang style" | comment('erlang') }}
{{ "XML style" | comment('xml') }}
You can define a custom comment character. This filter::
{{ "My Special Case" | comment(decoration="! ") }}
produces:
.. code-block:: text
!
! My Special Case
!
You can fully customize the comment style::
{{ "Custom style" | comment('plain', prefix='#######\n#', postfix='#\n#######\n ###\n #') }}
That creates the following output:
.. code-block:: text
#######
#
# Custom style
#
#######
###
#
The filter can also be applied to any Ansible variable. For example, to
make the output of the ``ansible_managed`` variable more readable, we can
change the definition in the ``ansible.cfg`` file to this:
.. code-block:: jinja
[defaults]
ansible_managed = This file is managed by Ansible.%n
template: {file}
date: %Y-%m-%d %H:%M:%S
user: {uid}
host: {host}
and then use the variable with the `comment` filter::
{{ ansible_managed | comment }}
which produces this output:
.. code-block:: sh
#
# This file is managed by Ansible.
#
# template: /home/ansible/env/dev/ansible_managed/roles/role1/templates/test.j2
# date: 2015-09-10 11:02:58
# user: ansible
# host: myhost
#
Splitting URLs
--------------
.. versionadded:: 2.4
The ``urlsplit`` filter extracts the fragment, hostname, netloc, password, path, port, query, scheme, and username from a URL. With no arguments, it returns a dictionary of all the fields::
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('hostname') }}
# => 'www.acme.com'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('netloc') }}
# => 'user:[email protected]:9000'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('username') }}
# => 'user'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('password') }}
# => 'password'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('path') }}
# => '/dir/index.html'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('port') }}
# => '9000'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('scheme') }}
# => 'http'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('query') }}
# => 'query=term'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('fragment') }}
# => 'fragment'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit }}
# =>
# {
# "fragment": "fragment",
# "hostname": "www.acme.com",
# "netloc": "user:[email protected]:9000",
# "password": "password",
# "path": "/dir/index.html",
# "port": 9000,
# "query": "query=term",
# "scheme": "http",
# "username": "user"
# }
Searching strings with regular expressions
------------------------------------------
To search a string with a regex, use the "regex_search" filter::
# search for "foo" in "foobar"
{{ 'foobar' | regex_search('(foo)') }}
# will return empty if it cannot find a match
{{ 'ansible' | regex_search('(foobar)') }}
# case insensitive search in multiline mode
{{ 'foo\nBAR' | regex_search("^bar", multiline=True, ignorecase=True) }}
To search for all occurrences of regex matches, use the "regex_findall" filter::
# Return a list of all IPv4 addresses in the string
{{ 'Some DNS servers are 8.8.8.8 and 8.8.4.4' | regex_findall('\\b(?:[0-9]{1,3}\\.){3}[0-9]{1,3}\\b') }}
To replace text in a string with regex, use the "regex_replace" filter::
# convert "ansible" to "able"
{{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }}
# convert "foobar" to "bar"
{{ 'foobar' | regex_replace('^f.*o(.*)$', '\\1') }}
# convert "localhost:80" to "localhost, 80" using named groups
{{ 'localhost:80' | regex_replace('^(?P<host>.+):(?P<port>\\d+)$', '\\g<host>, \\g<port>') }}
# convert "localhost:80" to "localhost"
{{ 'localhost:80' | regex_replace(':80') }}
.. note:: If you want to match the whole string and you are using ``*``, make sure to always wrap your regular expression with the start/end anchors.
For example ``^(.*)$`` will always match only one result, while ``(.*)`` on some Python versions will match the whole string and an empty string at the
end, which means it will make two replacements::
# add "https://" prefix to each item in a list
GOOD:
{{ hosts | map('regex_replace', '^(.*)$', 'https://\\1') | list }}
{{ hosts | map('regex_replace', '(.+)', 'https://\\1') | list }}
{{ hosts | map('regex_replace', '^', 'https://') | list }}
BAD:
{{ hosts | map('regex_replace', '(.*)', 'https://\\1') | list }}
# append ':80' to each item in a list
GOOD:
{{ hosts | map('regex_replace', '^(.*)$', '\\1:80') | list }}
{{ hosts | map('regex_replace', '(.+)', '\\1:80') | list }}
{{ hosts | map('regex_replace', '$', ':80') | list }}
BAD:
{{ hosts | map('regex_replace', '(.*)', '\\1:80') | list }}
.. note:: Prior to Ansible 2.0, if the "regex_replace" filter was used with variables inside YAML arguments (as opposed to simpler 'key=value' arguments),
then you needed to escape backreferences (e.g. ``\\1``) with 4 backslashes (``\\\\``) instead of 2 (``\\``).
.. versionadded:: 2.0
To escape special characters within a standard Python regex, use the "regex_escape" filter (using the default re_type='python' option)::
# convert '^f.*o(.*)$' to '\^f\.\*o\(\.\*\)\$'
{{ '^f.*o(.*)$' | regex_escape() }}
.. versionadded:: 2.8
To escape special characters within a POSIX basic regex, use the "regex_escape" filter with the re_type='posix_basic' option::
# convert '^f.*o(.*)$' to '\^f\.\*o(\.\*)\$'
{{ '^f.*o(.*)$' | regex_escape('posix_basic') }}
Working with filenames and pathnames
------------------------------------
To get the last name of a file path, like 'foo.txt' out of '/etc/asdf/foo.txt'::
{{ path | basename }}
To get the last name of a windows style file path (new in version 2.0)::
{{ path | win_basename }}
To separate the windows drive letter from the rest of a file path (new in version 2.0)::
{{ path | win_splitdrive }}
To get only the windows drive letter::
{{ path | win_splitdrive | first }}
To get the rest of the path without the drive letter::
{{ path | win_splitdrive | last }}
To get the directory from a path::
{{ path | dirname }}
To get the directory from a windows path (new in version 2.0)::
{{ path | win_dirname }}
To expand a path containing a tilde (`~`) character (new in version 1.5)::
{{ path | expanduser }}
To expand a path containing environment variables::
{{ path | expandvars }}
.. note:: `expandvars` expands local variables; using it on remote paths can lead to errors.
.. versionadded:: 2.6
To get the real path of a link (new in version 1.8)::
{{ path | realpath }}
To get the relative path of a link, from a start point (new in version 1.7)::
{{ path | relpath('/etc') }}
To get the root and extension of a path or filename (new in version 2.0)::
# with path == 'nginx.conf' the return would be ('nginx', '.conf')
{{ path | splitext }}
To join one or more path components::
{{ ('/etc', path, 'subdir', file) | path_join }}
.. versionadded:: 2.10
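For example (a small sketch, assuming a POSIX-style control node and literal path components)::

    {{ ('/etc', 'ansible', 'facts.d') | path_join }}
    # => '/etc/ansible/facts.d'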
String filters
==============
To add quotes for shell usage::
- shell: echo {{ string_value | quote }}
To concatenate a list into a string::
{{ list | join(" ") }}
To work with Base64 encoded strings::
{{ encoded | b64decode }}
{{ decoded | b64encode }}
As of version 2.6, you can define the type of encoding to use; the default is ``utf-8``::
{{ encoded | b64decode(encoding='utf-16-le') }}
{{ decoded | b64encode(encoding='utf-16-le') }}
.. versionadded:: 2.6
UUID filters
============
To create a namespaced UUIDv5::
{{ string | to_uuid(namespace='11111111-2222-3333-4444-555555555555') }}
.. versionadded:: 2.10
To create a namespaced UUIDv5 using the default Ansible namespace '361E6D51-FAEC-444A-9079-341386DA8E2E'::
{{ string | to_uuid }}
.. versionadded:: 1.9
To make use of one attribute from each item in a list of complex variables, use the :func:`Jinja2 map filter <jinja2:map>`::
# get a comma-separated list of the mount points (e.g. "/,/mnt/stuff") on a host
{{ ansible_mounts | map(attribute='mount') | join(',') }}
Date and time filters
=====================
To get a date object from a string, use the `to_datetime` filter::
# Get total amount of seconds between two dates. Default date format is %Y-%m-%d %H:%M:%S but you can pass your own format
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).total_seconds() }}
# Get remaining seconds after the delta has been calculated. NOTE: the .seconds attribute does NOT include full days; to convert the whole delta to seconds, use total_seconds()
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2016-08-14 18:00:00" | to_datetime)).seconds }}
# The delta above is 2 hours and 12 seconds, so .seconds evaluates to 7212
# get amount of days between two dates. This returns only number of days and discards remaining hours, minutes, and seconds
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).days }}
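For the sample dates used above, the three expressions work out to the following values (listed here as a quick sanity check)::

    20203212.0   # total_seconds() for the full delta
    7212         # .seconds of the 2 hour, 12 second delta
    233          # .days of the full delta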
.. versionadded:: 2.4
To format a date using a string (like with the shell date command), use the "strftime" filter::
# Display year-month-day
{{ '%Y-%m-%d' | strftime }}
# Display hour:min:sec
{{ '%H:%M:%S' | strftime }}
# Use ansible_date_time.epoch fact
{{ '%Y-%m-%d %H:%M:%S' | strftime(ansible_date_time.epoch) }}
# Use arbitrary epoch value
{{ '%Y-%m-%d' | strftime(0) }} # => 1970-01-01
{{ '%Y-%m-%d' | strftime(1441357287) }} # => 2015-09-04
.. note:: To get all string possibilities, check https://docs.python.org/2/library/time.html#time.strftime
Kubernetes filters
==================
Use the "k8s_config_resource_name" filter to obtain the name of a Kubernetes ConfigMap or Secret,
including its hash::
{{ configmap_resource_definition | k8s_config_resource_name }}
This can then be used to reference hashes in Pod specifications::
my_secret:
kind: Secret
name: my_secret_name
deployment_resource:
kind: Deployment
spec:
template:
spec:
containers:
- envFrom:
- secretRef:
name: {{ my_secret | k8s_config_resource_name }}
.. versionadded:: 2.8
.. _PyYAML library: https://pyyaml.org/
.. _PyYAML documentation: https://pyyaml.org/wiki/PyYAMLDocumentation
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`playbooks_conditionals`
Conditional statements in playbooks
:ref:`playbooks_variables`
All about variables
:ref:`playbooks_loops`
Looping in playbooks
:ref:`playbooks_reuse_roles`
Playbook organization by roles
:ref:`playbooks_best_practices`
Best practices in playbooks
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,386 |
Deep merge of dictionaries
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The [__combine__ filter of Ansible](https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#combining-hashes-dictionaries) is limited when it comes to nested elements that contain lists. The __recursive__ functionality only merges dict elements, not nested list elements.
* The current implementation will simply take the second list as result.
* The expected results would be to merge both lists into a single list.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
Jinja2 integration
##### ADDITIONAL INFORMATION
Example:
```yaml
---
- hosts: localhost
gather_facts: false
vars:
foo:
list:
- a
- b
dict:
foo: 1
bar:
list:
- c
- d
dict:
bar: 2
tasks:
- debug:
msg: '{{ {} | combine(foo, bar, recursive=True) }}'
```
Expected:
```yaml
{
"dict": {
"bar": 2,
"foo": 1
},
"list": [
"a",
"b",
"c",
"d"
]
}
```
Actual:
```yaml
{
"dict": {
"bar": 2,
"foo": 1
},
"list": [
"c",
"d"
]
}
```
|
https://github.com/ansible/ansible/issues/59386
|
https://github.com/ansible/ansible/pull/57894
|
33f136292b06a14c98fa4c05bdb6409a5e84e352
|
53e043b5febd30f258a233f51b180a543300151b
| 2019-07-22T13:50:47Z |
python
| 2020-02-12T21:40:36Z |
lib/ansible/plugins/filter/core.py
|
# (c) 2012, Jeroen Hoekx <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import base64
import crypt
import glob
import hashlib
import itertools
import json
import ntpath
import os.path
import re
import string
import sys
import time
import uuid
import yaml
import datetime
from functools import partial
from random import Random, SystemRandom, shuffle
from jinja2.filters import environmentfilter, do_groupby as _do_groupby
from ansible.errors import AnsibleError, AnsibleFilterError
from ansible.module_utils.six import iteritems, string_types, integer_types, reraise
from ansible.module_utils.six.moves import reduce, shlex_quote
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.common._collections_compat import Mapping, MutableMapping
from ansible.parsing.ajson import AnsibleJSONEncoder
from ansible.parsing.yaml.dumper import AnsibleDumper
from ansible.template import recursive_check_defined
from ansible.utils.display import Display
from ansible.utils.encrypt import passlib_or_crypt
from ansible.utils.hashing import md5s, checksum_s
from ansible.utils.unicode import unicode_wrap
from ansible.utils.vars import merge_hash
display = Display()
UUID_NAMESPACE_ANSIBLE = uuid.UUID('361E6D51-FAEC-444A-9079-341386DA8E2E')
def to_yaml(a, *args, **kw):
'''Make verbose, human readable yaml'''
default_flow_style = kw.pop('default_flow_style', None)
transformed = yaml.dump(a, Dumper=AnsibleDumper, allow_unicode=True, default_flow_style=default_flow_style, **kw)
return to_text(transformed)
def to_nice_yaml(a, indent=4, *args, **kw):
'''Make verbose, human readable yaml'''
transformed = yaml.dump(a, Dumper=AnsibleDumper, indent=indent, allow_unicode=True, default_flow_style=False, **kw)
return to_text(transformed)
def to_json(a, *args, **kw):
''' Convert the value to JSON '''
return json.dumps(a, cls=AnsibleJSONEncoder, *args, **kw)
def to_nice_json(a, indent=4, sort_keys=True, *args, **kw):
'''Make verbose, human readable JSON'''
try:
return json.dumps(a, indent=indent, sort_keys=sort_keys, separators=(',', ': '), cls=AnsibleJSONEncoder, *args, **kw)
except Exception as e:
# Fallback to the to_json filter
display.warning(u'Unable to convert data using to_nice_json, falling back to to_json: %s' % to_text(e))
return to_json(a, *args, **kw)
def to_bool(a):
''' return a bool for the arg '''
if a is None or isinstance(a, bool):
return a
if isinstance(a, string_types):
a = a.lower()
if a in ('yes', 'on', '1', 'true', 1):
return True
return False
def to_datetime(string, format="%Y-%m-%d %H:%M:%S"):
return datetime.datetime.strptime(string, format)
def strftime(string_format, second=None):
''' return a date string using string. See https://docs.python.org/2/library/time.html#time.strftime for format '''
if second is not None:
try:
second = int(second)
except Exception:
raise AnsibleFilterError('Invalid value for epoch value (%s)' % second)
return time.strftime(string_format, time.localtime(second))
def quote(a):
''' return its argument quoted for shell usage '''
return shlex_quote(to_text(a))
def fileglob(pathname):
''' return list of matched regular files for glob '''
return [g for g in glob.glob(pathname) if os.path.isfile(g)]
def regex_replace(value='', pattern='', replacement='', ignorecase=False):
''' Perform a `re.sub` returning a string '''
value = to_text(value, errors='surrogate_or_strict', nonstring='simplerepr')
if ignorecase:
flags = re.I
else:
flags = 0
_re = re.compile(pattern, flags=flags)
return _re.sub(replacement, value)
def regex_findall(value, regex, multiline=False, ignorecase=False):
''' Perform re.findall and return the list of matches '''
flags = 0
if ignorecase:
flags |= re.I
if multiline:
flags |= re.M
return re.findall(regex, value, flags)
def regex_search(value, regex, *args, **kwargs):
''' Perform re.search and return the list of matches or a backref '''
groups = list()
for arg in args:
if arg.startswith('\\g'):
match = re.match(r'\\g<(\S+)>', arg).group(1)
groups.append(match)
elif arg.startswith('\\'):
match = int(re.match(r'\\(\d+)', arg).group(1))
groups.append(match)
else:
raise AnsibleFilterError('Unknown argument')
flags = 0
if kwargs.get('ignorecase'):
flags |= re.I
if kwargs.get('multiline'):
flags |= re.M
match = re.search(regex, value, flags)
if match:
if not groups:
return match.group()
else:
items = list()
for item in groups:
items.append(match.group(item))
return items
def ternary(value, true_val, false_val, none_val=None):
''' value ? true_val : false_val '''
if value is None and none_val is not None:
return none_val
elif bool(value):
return true_val
else:
return false_val
def regex_escape(string, re_type='python'):
'''Escape all regular expressions special characters from STRING.'''
if re_type == 'python':
return re.escape(string)
elif re_type == 'posix_basic':
# list of BRE special chars:
# https://en.wikibooks.org/wiki/Regular_Expressions/POSIX_Basic_Regular_Expressions
return regex_replace(string, r'([].[^$*\\])', r'\\\1')
# TODO: implement posix_extended
# It's similar to, but different from python regex, which is similar to,
# but different from PCRE. It's possible that re.escape would work here.
# https://remram44.github.io/regex-cheatsheet/regex.html#programs
elif re_type == 'posix_extended':
raise AnsibleFilterError('Regex type (%s) not yet implemented' % re_type)
else:
raise AnsibleFilterError('Invalid regex type (%s)' % re_type)
def from_yaml(data):
if isinstance(data, string_types):
return yaml.safe_load(data)
return data
def from_yaml_all(data):
if isinstance(data, string_types):
return yaml.safe_load_all(data)
return data
@environmentfilter
def rand(environment, end, start=None, step=None, seed=None):
if seed is None:
r = SystemRandom()
else:
r = Random(seed)
if isinstance(end, integer_types):
if not start:
start = 0
if not step:
step = 1
return r.randrange(start, end, step)
elif hasattr(end, '__iter__'):
if start or step:
raise AnsibleFilterError('start and step can only be used with integer values')
return r.choice(end)
else:
raise AnsibleFilterError('random can only be used on sequences and integers')
def randomize_list(mylist, seed=None):
try:
mylist = list(mylist)
if seed:
r = Random(seed)
r.shuffle(mylist)
else:
shuffle(mylist)
except Exception:
pass
return mylist
def get_hash(data, hashtype='sha1'):
try: # see if hash is supported
h = hashlib.new(hashtype)
except Exception:
return None
h.update(to_bytes(data, errors='surrogate_or_strict'))
return h.hexdigest()
def get_encrypted_password(password, hashtype='sha512', salt=None, salt_size=None, rounds=None):
passlib_mapping = {
'md5': 'md5_crypt',
'blowfish': 'bcrypt',
'sha256': 'sha256_crypt',
'sha512': 'sha512_crypt',
}
hashtype = passlib_mapping.get(hashtype, hashtype)
try:
return passlib_or_crypt(password, hashtype, salt=salt, salt_size=salt_size, rounds=rounds)
except AnsibleError as e:
reraise(AnsibleFilterError, AnsibleFilterError(to_native(e), orig_exc=e), sys.exc_info()[2])
def to_uuid(string, namespace=UUID_NAMESPACE_ANSIBLE):
uuid_namespace = namespace
if not isinstance(uuid_namespace, uuid.UUID):
try:
uuid_namespace = uuid.UUID(namespace)
except (AttributeError, ValueError) as e:
raise AnsibleFilterError("Invalid value '%s' for 'namespace': %s" % (to_native(namespace), to_native(e)))
# uuid.uuid5() requires bytes on Python 2 and bytes or text or Python 3
return to_text(uuid.uuid5(uuid_namespace, to_native(string, errors='surrogate_or_strict')))
def mandatory(a, msg=None):
from jinja2.runtime import Undefined
''' Make a variable mandatory '''
if isinstance(a, Undefined):
if a._undefined_name is not None:
name = "'%s' " % to_text(a._undefined_name)
else:
name = ''
if msg is not None:
raise AnsibleFilterError(to_native(msg))
else:
raise AnsibleFilterError("Mandatory variable %s not defined." % name)
return a
def combine(*terms, **kwargs):
recursive = kwargs.get('recursive', False)
if len(kwargs) > 1 or (len(kwargs) == 1 and 'recursive' not in kwargs):
raise AnsibleFilterError("'recursive' is the only valid keyword argument")
dicts = []
for t in terms:
if isinstance(t, MutableMapping):
recursive_check_defined(t)
dicts.append(t)
elif isinstance(t, list):
recursive_check_defined(t)
dicts.append(combine(*t, **kwargs))
else:
raise AnsibleFilterError("|combine expects dictionaries, got " + repr(t))
if recursive:
return reduce(merge_hash, dicts)
else:
return dict(itertools.chain(*map(iteritems, dicts)))
def comment(text, style='plain', **kw):
# Predefined comment types
comment_styles = {
'plain': {
'decoration': '# '
},
'erlang': {
'decoration': '% '
},
'c': {
'decoration': '// '
},
'cblock': {
'beginning': '/*',
'decoration': ' * ',
'end': ' */'
},
'xml': {
'beginning': '<!--',
'decoration': ' - ',
'end': '-->'
}
}
# Pointer to the right comment type
style_params = comment_styles[style]
if 'decoration' in kw:
prepostfix = kw['decoration']
else:
prepostfix = style_params['decoration']
# Default params
p = {
'newline': '\n',
'beginning': '',
'prefix': (prepostfix).rstrip(),
'prefix_count': 1,
'decoration': '',
'postfix': (prepostfix).rstrip(),
'postfix_count': 1,
'end': ''
}
# Update default params
p.update(style_params)
p.update(kw)
# Compose substrings for the final string
str_beginning = ''
if p['beginning']:
str_beginning = "%s%s" % (p['beginning'], p['newline'])
str_prefix = ''
if p['prefix']:
if p['prefix'] != p['newline']:
str_prefix = str(
"%s%s" % (p['prefix'], p['newline'])) * int(p['prefix_count'])
else:
str_prefix = str(
"%s" % (p['newline'])) * int(p['prefix_count'])
str_text = ("%s%s" % (
p['decoration'],
# Prepend each line of the text with the decorator
text.replace(
p['newline'], "%s%s" % (p['newline'], p['decoration'])))).replace(
# Remove trailing spaces when only decorator is on the line
"%s%s" % (p['decoration'], p['newline']),
"%s%s" % (p['decoration'].rstrip(), p['newline']))
str_postfix = p['newline'].join(
[''] + [p['postfix'] for x in range(p['postfix_count'])])
str_end = ''
if p['end']:
str_end = "%s%s" % (p['newline'], p['end'])
# Return the final string
return "%s%s%s%s%s" % (
str_beginning,
str_prefix,
str_text,
str_postfix,
str_end)
@environmentfilter
def extract(environment, item, container, morekeys=None):
if morekeys is None:
keys = [item]
elif isinstance(morekeys, list):
keys = [item] + morekeys
else:
keys = [item, morekeys]
value = container
for key in keys:
value = environment.getitem(value, key)
return value
@environmentfilter
def do_groupby(environment, value, attribute):
"""Overridden groupby filter for jinja2, to address an issue with
jinja2>=2.9.0,<2.9.5 where a namedtuple was returned which
has repr that prevents ansible.template.safe_eval.safe_eval from being
able to parse and eval the data.
jinja2<2.9.0,>=2.9.5 is not affected, as <2.9.0 uses a tuple, and
>=2.9.5 uses a standard tuple repr on the namedtuple.
The adaptation here, is to run the jinja2 `do_groupby` function, and
cast all of the namedtuples to a regular tuple.
See https://github.com/ansible/ansible/issues/20098
We may be able to remove this in the future.
"""
return [tuple(t) for t in _do_groupby(environment, value, attribute)]
def b64encode(string, encoding='utf-8'):
return to_text(base64.b64encode(to_bytes(string, encoding=encoding, errors='surrogate_or_strict')))
def b64decode(string, encoding='utf-8'):
return to_text(base64.b64decode(to_bytes(string, errors='surrogate_or_strict')), encoding=encoding)
def flatten(mylist, levels=None):
ret = []
for element in mylist:
if element in (None, 'None', 'null'):
# ignore undefined items
break
elif is_sequence(element):
if levels is None:
ret.extend(flatten(element))
elif levels >= 1:
# decrement as we go down the stack
ret.extend(flatten(element, levels=(int(levels) - 1)))
else:
ret.append(element)
else:
ret.append(element)
return ret
def subelements(obj, subelements, skip_missing=False):
'''Accepts a dict or list of dicts, and a dotted accessor and produces a product
of the element and the results of the dotted accessor
>>> obj = [{"name": "alice", "groups": ["wheel"], "authorized": ["/tmp/alice/onekey.pub"]}]
>>> subelements(obj, 'groups')
[({'name': 'alice', 'groups': ['wheel'], 'authorized': ['/tmp/alice/onekey.pub']}, 'wheel')]
'''
if isinstance(obj, dict):
element_list = list(obj.values())
elif isinstance(obj, list):
element_list = obj[:]
else:
raise AnsibleFilterError('obj must be a list of dicts or a nested dict')
if isinstance(subelements, list):
subelement_list = subelements[:]
elif isinstance(subelements, string_types):
subelement_list = subelements.split('.')
else:
raise AnsibleFilterError('subelements must be a list or a string')
results = []
for element in element_list:
values = element
for subelement in subelement_list:
try:
values = values[subelement]
except KeyError:
if skip_missing:
values = []
break
raise AnsibleFilterError("could not find %r key in iterated item %r" % (subelement, values))
except TypeError:
raise AnsibleFilterError("the key %s should point to a dictionary, got '%s'" % (subelement, values))
if not isinstance(values, list):
raise AnsibleFilterError("the key %r should point to a list, got %r" % (subelement, values))
for value in values:
results.append((element, value))
return results
def dict_to_list_of_dict_key_value_elements(mydict, key_name='key', value_name='value'):
''' takes a dictionary and transforms it into a list of dictionaries,
with each having a 'key' and 'value' keys that correspond to the keys and values of the original '''
if not isinstance(mydict, Mapping):
raise AnsibleFilterError("dict2items requires a dictionary, got %s instead." % type(mydict))
ret = []
for key in mydict:
ret.append({key_name: key, value_name: mydict[key]})
return ret
def list_of_dict_key_value_elements_to_dict(mylist, key_name='key', value_name='value'):
''' takes a list of dicts with each having a 'key' and 'value' keys, and transforms the list into a dictionary,
effectively as the reverse of dict2items '''
if not is_sequence(mylist):
raise AnsibleFilterError("items2dict requires a list, got %s instead." % type(mylist))
return dict((item[key_name], item[value_name]) for item in mylist)
def path_join(paths):
''' takes a sequence or a string, and return a concatenation
of the different members '''
if isinstance(paths, string_types):
return os.path.join(paths)
elif is_sequence(paths):
return os.path.join(*paths)
else:
raise AnsibleFilterError("|path_join expects string or sequence, got %s instead." % type(paths))
class FilterModule(object):
''' Ansible core jinja2 filters '''
def filters(self):
return {
# jinja2 overrides
'groupby': do_groupby,
# base 64
'b64decode': b64decode,
'b64encode': b64encode,
# uuid
'to_uuid': to_uuid,
# json
'to_json': to_json,
'to_nice_json': to_nice_json,
'from_json': json.loads,
# yaml
'to_yaml': to_yaml,
'to_nice_yaml': to_nice_yaml,
'from_yaml': from_yaml,
'from_yaml_all': from_yaml_all,
# path
'basename': partial(unicode_wrap, os.path.basename),
'dirname': partial(unicode_wrap, os.path.dirname),
'expanduser': partial(unicode_wrap, os.path.expanduser),
'expandvars': partial(unicode_wrap, os.path.expandvars),
'path_join': path_join,
'realpath': partial(unicode_wrap, os.path.realpath),
'relpath': partial(unicode_wrap, os.path.relpath),
'splitext': partial(unicode_wrap, os.path.splitext),
'win_basename': partial(unicode_wrap, ntpath.basename),
'win_dirname': partial(unicode_wrap, ntpath.dirname),
'win_splitdrive': partial(unicode_wrap, ntpath.splitdrive),
# file glob
'fileglob': fileglob,
# types
'bool': to_bool,
'to_datetime': to_datetime,
# date formatting
'strftime': strftime,
# quote string for shell usage
'quote': quote,
# hash filters
# md5 hex digest of string
'md5': md5s,
# sha1 hex digest of string
'sha1': checksum_s,
# checksum of string as used by ansible for checksumming files
'checksum': checksum_s,
# generic hashing
'password_hash': get_encrypted_password,
'hash': get_hash,
# regex
'regex_replace': regex_replace,
'regex_escape': regex_escape,
'regex_search': regex_search,
'regex_findall': regex_findall,
# ? : ;
'ternary': ternary,
# random stuff
'random': rand,
'shuffle': randomize_list,
# undefined
'mandatory': mandatory,
# comment-style decoration
'comment': comment,
# debug
'type_debug': lambda o: o.__class__.__name__,
# Data structures
'combine': combine,
'extract': extract,
'flatten': flatten,
'dict2items': dict_to_list_of_dict_key_value_elements,
'items2dict': list_of_dict_key_value_elements_to_dict,
'subelements': subelements,
}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,386 |
Deep merge of dictionaries
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The [__combine__ filter of Ansible](https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#combining-hashes-dictionaries) is limited when it comes to nested elements that contain lists. The __recursive__ functionality only merges dict elements, not nested list elements.
* The current implementation will simply take the second list as result.
* The expected results would be to merge both lists into a single list.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
Jinja2 integration
##### ADDITIONAL INFORMATION
Example:
```yaml
---
- hosts: localhost
gather_facts: false
vars:
foo:
list:
- a
- b
dict:
foo: 1
bar:
list:
- c
- d
dict:
bar: 2
tasks:
- debug:
msg: '{{ {} | combine(foo, bar, recursive=True) }}'
```
Expected:
```yaml
{
"dict": {
"bar": 2,
"foo": 1
},
"list": [
"a",
"b",
"c",
"d"
]
}
```
Actual:
```yaml
{
"dict": {
"bar": 2,
"foo": 1
},
"list": [
"c",
"d"
]
}
```
|
https://github.com/ansible/ansible/issues/59386
|
https://github.com/ansible/ansible/pull/57894
|
33f136292b06a14c98fa4c05bdb6409a5e84e352
|
53e043b5febd30f258a233f51b180a543300151b
| 2019-07-22T13:50:47Z |
python
| 2020-02-12T21:40:36Z |
lib/ansible/utils/vars.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import random
import uuid
from json import dumps
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.module_utils.six import iteritems, string_types
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.common._collections_compat import MutableMapping
from ansible.parsing.splitter import parse_kv
_MAXSIZE = 2 ** 32
cur_id = 0
node_mac = ("%012x" % uuid.getnode())[:12]
random_int = ("%08x" % random.randint(0, _MAXSIZE))[:8]
def get_unique_id():
global cur_id
cur_id += 1
return "-".join([
node_mac[0:8],
node_mac[8:12],
random_int[0:4],
random_int[4:8],
("%012x" % cur_id)[:12],
])
def _validate_mutable_mappings(a, b):
"""
Internal convenience function to ensure arguments are MutableMappings
This checks that all arguments are MutableMappings or raises an error
:raises AnsibleError: if one of the arguments is not a MutableMapping
"""
# If this becomes generally needed, change the signature to operate on
# a variable number of arguments instead.
if not (isinstance(a, MutableMapping) and isinstance(b, MutableMapping)):
myvars = []
for x in [a, b]:
try:
myvars.append(dumps(x))
except Exception:
myvars.append(to_native(x))
raise AnsibleError("failed to combine variables, expected dicts but got a '{0}' and a '{1}': \n{2}\n{3}".format(
a.__class__.__name__, b.__class__.__name__, myvars[0], myvars[1])
)
def combine_vars(a, b):
"""
Return a copy of dictionaries of variables based on configured hash behavior
"""
if C.DEFAULT_HASH_BEHAVIOUR == "merge":
return merge_hash(a, b)
else:
# HASH_BEHAVIOUR == 'replace'
_validate_mutable_mappings(a, b)
result = a.copy()
result.update(b)
return result
def merge_hash(a, b):
"""
Recursively merges hash b into a so that keys from b take precedence over keys from a
"""
_validate_mutable_mappings(a, b)
# if a is empty or equal to b, return b
if a == {} or a == b:
return b.copy()
# if b is empty the below unfolds quickly
result = a.copy()
# next, iterate over b keys and values
for k, v in iteritems(b):
# if there's already such key in a
# and that key contains a MutableMapping
if k in result and isinstance(result[k], MutableMapping) and isinstance(v, MutableMapping):
# merge those dicts recursively
result[k] = merge_hash(result[k], v)
else:
# otherwise, just copy the value from b to a
result[k] = v
return result
def load_extra_vars(loader):
extra_vars = {}
for extra_vars_opt in context.CLIARGS.get('extra_vars', tuple()):
data = None
extra_vars_opt = to_text(extra_vars_opt, errors='surrogate_or_strict')
if extra_vars_opt is None or not extra_vars_opt:
continue
if extra_vars_opt.startswith(u"@"):
# Argument is a YAML file (JSON is a subset of YAML)
data = loader.load_from_file(extra_vars_opt[1:])
elif extra_vars_opt[0] in [u'/', u'.']:
raise AnsibleOptionsError("Please prepend extra_vars filename '%s' with '@'" % extra_vars_opt)
elif extra_vars_opt[0] in [u'[', u'{']:
# Arguments as YAML
data = loader.load(extra_vars_opt)
else:
# Arguments as Key-value
data = parse_kv(extra_vars_opt)
if isinstance(data, MutableMapping):
extra_vars = combine_vars(extra_vars, data)
else:
raise AnsibleOptionsError("Invalid extra vars data supplied. '%s' could not be made into a dictionary" % extra_vars_opt)
return extra_vars
def load_options_vars(version):
if version is None:
version = 'Unknown'
options_vars = {'ansible_version': version}
attrs = {'check': 'check_mode',
'diff': 'diff_mode',
'forks': 'forks',
'inventory': 'inventory_sources',
'skip_tags': 'skip_tags',
'subset': 'limit',
'tags': 'run_tags',
'verbosity': 'verbosity'}
for attr, alias in attrs.items():
opt = context.CLIARGS.get(attr)
if opt is not None:
options_vars['ansible_%s' % alias] = opt
return options_vars
def isidentifier(ident):
"""
Determines, if string is valid Python identifier using the ast module.
Originally posted at: http://stackoverflow.com/a/29586366
"""
if not isinstance(ident, string_types):
return False
try:
root = ast.parse(ident)
except SyntaxError:
return False
if not isinstance(root, ast.Module):
return False
if len(root.body) != 1:
return False
if not isinstance(root.body[0], ast.Expr):
return False
if not isinstance(root.body[0].value, ast.Name):
return False
if root.body[0].value.id != ident:
return False
return True
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,386 |
Deep merge of dictionaries
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The [__combine__ filter of Ansible](https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#combining-hashes-dictionaries) is limited when it comes to nested elements that contain lists. The __recursive__ functionality only merges dict elements, not nested list elements.
* The current implementation will simply take the second list as result.
* The expected results would be to merge both lists into a single list.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
Jinja2 integration
##### ADDITIONAL INFORMATION
Example:
```yaml
---
- hosts: localhost
gather_facts: false
vars:
foo:
list:
- a
- b
dict:
foo: 1
bar:
list:
- c
- d
dict:
bar: 2
tasks:
- debug:
msg: '{{ {} | combine(foo, bar, recursive=True) }}'
```
Expected:
```yaml
{
"dict": {
"bar": 2,
"foo": 1
},
"list": [
"a",
"b",
"c",
"d"
]
}
```
Actual:
```yaml
{
"dict": {
"bar": 2,
"foo": 1
},
"list": [
"c",
"d"
]
}
```
|
https://github.com/ansible/ansible/issues/59386
|
https://github.com/ansible/ansible/pull/57894
|
33f136292b06a14c98fa4c05bdb6409a5e84e352
|
53e043b5febd30f258a233f51b180a543300151b
| 2019-07-22T13:50:47Z |
python
| 2020-02-12T21:40:36Z |
test/integration/targets/filter_core/tasks/main.yml
|
# test code for filters
# Copyright: (c) 2014, Michael DeHaan <[email protected]>
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
- set_fact:
output_dir: "{{ lookup('env', 'OUTPUT_DIR') }}"
- name: a dummy task to test the changed and success filters
shell: echo hi
register: some_registered_var
- debug:
var: some_registered_var
- name: Verify that we workaround a py26 json bug
template:
src: py26json.j2
dest: "{{ output_dir }}/py26json.templated"
mode: 0644
- name: 9851 - Verify that we don't trigger https://github.com/ansible/ansible/issues/9851
copy:
content: " [{{ item | to_nice_json }}]"
dest: "{{ output_dir }}/9851.out"
with_items:
- {"k": "Quotes \"'\n"}
- name: 9851 - copy known good output into place
copy:
src: 9851.txt
dest: "{{ output_dir }}/9851.txt"
- name: 9851 - Compare generated json to known good
shell: diff -w {{ output_dir }}/9851.out {{ output_dir }}/9851.txt
register: diff_result_9851
- name: 9851 - verify generated file matches known good
assert:
that:
- 'diff_result_9851.stdout == ""'
- name: fill in a basic template
template:
src: foo.j2
dest: "{{ output_dir }}/foo.templated"
mode: 0644
register: template_result
- name: copy known good into place
copy:
src: foo.txt
dest: "{{ output_dir }}/foo.txt"
- name: compare templated file to known good
shell: diff -w {{ output_dir }}/foo.templated {{ output_dir }}/foo.txt
register: diff_result
- name: verify templated file matches known good
assert:
that:
- 'diff_result.stdout == ""'
- name: Test extract
assert:
that:
- '"c" == 2 | extract(["a", "b", "c"])'
- '"b" == 1 | extract(["a", "b", "c"])'
- '"a" == 0 | extract(["a", "b", "c"])'
- name: Container lookups with extract
assert:
that:
- "'x' == [0]|map('extract',['x','y'])|list|first"
- "'y' == [1]|map('extract',['x','y'])|list|first"
- "42 == ['x']|map('extract',{'x':42,'y':31})|list|first"
- "31 == ['x','y']|map('extract',{'x':42,'y':31})|list|last"
- "'local' == ['localhost']|map('extract',hostvars,'ansible_connection')|list|first"
- "'local' == ['localhost']|map('extract',hostvars,['ansible_connection'])|list|first"
# map was added to jinja2 in version 2.7
when: lookup('pipe', ansible_python.executable ~ ' -c "import jinja2; print(jinja2.__version__)"') is version('2.7', '>=')
- name: Test extract filter with defaults
vars:
container:
key:
subkey: value
assert:
that:
- "'key' | extract(badcontainer) | default('a') == 'a'"
- "'key' | extract(badcontainer, 'subkey') | default('a') == 'a'"
- "('key' | extract(badcontainer)).subkey | default('a') == 'a'"
- "'badkey' | extract(container) | default('a') == 'a'"
- "'badkey' | extract(container, 'subkey') | default('a') == 'a'"
- "('badkey' | extract(container)).subsubkey | default('a') == 'a'"
- "'key' | extract(container, 'badsubkey') | default('a') == 'a'"
- "'key' | extract(container, ['badsubkey', 'subsubkey']) | default('a') == 'a'"
- "('key' | extract(container, 'badsubkey')).subsubkey | default('a') == 'a'"
- "'badkey' | extract(hostvars) | default('a') == 'a'"
- "'badkey' | extract(hostvars, 'subkey') | default('a') == 'a'"
- "('badkey' | extract(hostvars)).subsubkey | default('a') == 'a'"
- "'localhost' | extract(hostvars, 'badsubkey') | default('a') == 'a'"
- "'localhost' | extract(hostvars, ['badsubkey', 'subsubkey']) | default('a') == 'a'"
- "('localhost' | extract(hostvars, 'badsubkey')).subsubkey | default('a') == 'a'"
- name: Test hash filter
assert:
that:
- '"{{ "hash" | hash("sha1") }}" == "2346ad27d7568ba9896f1b7da6b5991251debdf2"'
- '"{{ "café" | hash("sha1") }}" == "f424452a9673918c6f09b0cdd35b20be8e6ae7d7"'
- name: Flatten tests
block:
- name: use flatten
set_fact:
flat_full: '{{orig_list|flatten}}'
flat_one: '{{orig_list|flatten(levels=1)}}'
flat_two: '{{orig_list|flatten(levels=2)}}'
flat_tuples: '{{ [1,3] | zip([2,4]) | list | flatten }}'
- name: Verify flatten filter works as expected
assert:
that:
- flat_full == [1, 2, 3, 4, 5, 6, 7]
- flat_one == [1, 2, 3, [4, [5]], 6, 7]
- flat_two == [1, 2, 3, 4, [5], 6, 7]
- flat_tuples == [1, 2, 3, 4]
vars:
orig_list: [1, 2, [3, [4, [5]], 6], 7]
- name: Test base64 filter
assert:
that:
- "'Ansible - くらとみ\n' | b64encode == 'QW5zaWJsZSAtIOOBj+OCieOBqOOBvwo='"
- "'QW5zaWJsZSAtIOOBj+OCieOBqOOBvwo=' | b64decode == 'Ansible - くらとみ\n'"
- "'Ansible - くらとみ\n' | b64encode(encoding='utf-16-le') == 'QQBuAHMAaQBiAGwAZQAgAC0AIABPMIkwaDB/MAoA'"
- "'QQBuAHMAaQBiAGwAZQAgAC0AIABPMIkwaDB/MAoA' | b64decode(encoding='utf-16-le') == 'Ansible - くらとみ\n'"
- name: Ensure combining two dictionaries containing undefined variables provides a helpful error
block:
- set_fact:
foo:
key1: value1
- set_fact:
combined: "{{ foo | combine({'key2': undef_variable}) }}"
ignore_errors: yes
register: result
- assert:
that:
- "result.msg.startswith('The task includes an option with an undefined variable')"
- set_fact:
combined: "{{ foo | combine({'key2': {'nested': [undef_variable]}})}}"
ignore_errors: yes
register: result
- assert:
that:
- "result.msg.startswith('The task includes an option with an undefined variable')"
- set_fact:
key2: is_defined
- set_fact:
combined: "{{ foo | combine({'key2': key2}) }}"
- assert:
that:
- "combined.key2 == 'is_defined'"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,386 |
Deep merge of dictionaries
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The [__combine__ filter of Ansible](https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#combining-hashes-dictionaries) is limited when it comes to nested elements that contain lists. The __recursive__ functionality only merges dict elements, not nested list elements.
* The current implementation simply takes the second list as the result.
* The expected result would be to merge both lists into a single list.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
Jinja2 integration
##### ADDITIONAL INFORMATION
Example:
```yaml
---
- hosts: localhost
gather_facts: false
vars:
foo:
list:
- a
- b
dict:
foo: 1
bar:
list:
- c
- d
dict:
bar: 2
tasks:
- debug:
msg: '{{ {} | combine(foo, bar, recursive=True) }}'
```
Expected:
```yaml
{
"dict": {
"bar": 2,
"foo": 1
},
"list": [
"a",
"b",
"c",
"d"
]
}
```
Actual:
```yaml
{
"dict": {
"bar": 2,
"foo": 1
},
"list": [
"c",
"d"
]
}
```
|
https://github.com/ansible/ansible/issues/59386
|
https://github.com/ansible/ansible/pull/57894
|
33f136292b06a14c98fa4c05bdb6409a5e84e352
|
53e043b5febd30f258a233f51b180a543300151b
| 2019-07-22T13:50:47Z |
python
| 2020-02-12T21:40:36Z |
test/units/utils/test_vars.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2015, Toshio Kuratomi <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from collections import defaultdict
from units.compat import mock, unittest
from ansible.errors import AnsibleError
from ansible.utils.vars import combine_vars, merge_hash
class TestVariableUtils(unittest.TestCase):
test_merge_data = (
dict(
a=dict(a=1),
b=dict(b=2),
result=dict(a=1, b=2)
),
dict(
a=dict(a=1, c=dict(foo='bar')),
b=dict(b=2, c=dict(baz='bam')),
result=dict(a=1, b=2, c=dict(foo='bar', baz='bam'))
),
dict(
a=defaultdict(a=1, c=defaultdict(foo='bar')),
b=dict(b=2, c=dict(baz='bam')),
result=defaultdict(a=1, b=2, c=defaultdict(foo='bar', baz='bam'))
),
)
test_replace_data = (
dict(
a=dict(a=1),
b=dict(b=2),
result=dict(a=1, b=2)
),
dict(
a=dict(a=1, c=dict(foo='bar')),
b=dict(b=2, c=dict(baz='bam')),
result=dict(a=1, b=2, c=dict(baz='bam'))
),
dict(
a=defaultdict(a=1, c=dict(foo='bar')),
b=dict(b=2, c=defaultdict(baz='bam')),
result=defaultdict(a=1, b=2, c=defaultdict(baz='bam'))
),
)
def test_merge_hash(self):
for test in self.test_merge_data:
self.assertEqual(merge_hash(test['a'], test['b']), test['result'])
def test_improper_args(self):
with mock.patch('ansible.constants.DEFAULT_HASH_BEHAVIOUR', 'replace'):
with self.assertRaises(AnsibleError):
combine_vars([1, 2, 3], dict(a=1))
with self.assertRaises(AnsibleError):
combine_vars(dict(a=1), [1, 2, 3])
with mock.patch('ansible.constants.DEFAULT_HASH_BEHAVIOUR', 'merge'):
with self.assertRaises(AnsibleError):
combine_vars([1, 2, 3], dict(a=1))
with self.assertRaises(AnsibleError):
combine_vars(dict(a=1), [1, 2, 3])
def test_combine_vars_replace(self):
with mock.patch('ansible.constants.DEFAULT_HASH_BEHAVIOUR', 'replace'):
for test in self.test_replace_data:
self.assertEqual(combine_vars(test['a'], test['b']), test['result'])
def test_combine_vars_merge(self):
with mock.patch('ansible.constants.DEFAULT_HASH_BEHAVIOUR', 'merge'):
for test in self.test_merge_data:
self.assertEqual(combine_vars(test['a'], test['b']), test['result'])
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,357 |
Fix batch of broken links in module docs
|
We're trying to fix as many broken links as possible before modules move into collections. This is the batch of broken links on some Ansible modules.
NOTE: the link checker sometimes reports an error where a link actually works. Ignore those if you find them.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com
###### BROKEN LINKS
~~https://docs.ansible.com/ansible/latest/modules/openssl_privatekey_module.html#openssl-privatekey-module~~
~~├─BROKEN─ https://en.wikipedia.org/wiki/RSA_(cryptosystem~~
~~https://docs.ansible.com/ansible/devel/modules/elb_target_info_module.html#elb-target-info-module~~
~~└─BROKEN─ https://boto3.readthedocs.io/en/latest/%20reference/services/elbv2.html#ElasticLoadBalancingv2.Client.describe_target_health~~
~~https://docs.ansible.com/ansible/devel/plugins/lookup/laps_password.html~~
~~├─BROKEN─ https://keathmilligan.net/python-ldap-and-macos/~~
~~https://docs.ansible.com/ansible/devel/modules/acme_certificate_revoke_module.html#acme-certificate-revoke-module~~
~~─BROKEN─ http://0.0.0.0:1337/modules/Section%205.3.1%20of%20RFC5280~~
https://docs.ansible.com/ansible/devel/modules/airbrake_deployment_module.html#airbrake-deployment-module
└─BROKEN─ http://help.airbrake.io/kb/api-2/deploy-tracking
https://docs.ansible.com/ansible/devel/modules/consul_module.html#consul-module
├─BROKEN─ http://0.0.0.0:1337/v1/agent/service/register
~~https://docs.ansible.com/ansible/devel/modules/hetzner_firewall_module.html#hetzner-firewall-module~~
~~─BROKEN─ http://0.0.0.0:1337/modules/the%20documentation,https:/wiki.hetzner.de/index.php/Robot_Firewall/en#Parameter~~
~~https://docs.ansible.com/ansible/devel/modules/keycloak_client_module.html#keycloak-client-module~~
~~├─BROKEN─ http://www.keycloak.org/docs-api/3.3/rest-api/~~
~~├─BROKEN─ http://www.keycloak.org/docs-api/3.3/rest-api/index.html#_resourceserverrepresentation~~
~~https://docs.ansible.com/ansible/devel/modules/keycloak_clienttemplate_module.html#keycloak-clienttemplate-module~~
~~├─BROKEN─ http://www.keycloak.org/docs-api/3.3/rest-api/~~
~~https://docs.ansible.com/ansible/devel/modules/keycloak_group_module.html#keycloak-group-module~~
~~├─BROKEN─ http://www.keycloak.org/docs-api/3.3/rest-api/~~
https://docs.ansible.com/ansible/devel/modules/meraki_network_module.html#meraki-network-module
├─BROKEN─ http://0.0.0.0:1337/modules/my.meraki.com
~~https://docs.ansible.com/ansible/devel/modules/os_ironic_module.html#os-ironic-module~~
~~├─BROKEN─ https://docs.openstack.org/ironic/latest/install/include/root-device-hints.html~~
~~https://docs.ansible.com/ansible/devel/modules/ovh_ip_failover_module.html#ovh-ip-failover-module~~
~~├─BROKEN─ https://eu.api.ovh.com/g934.first_step_with_api~~
~~https://docs.ansible.com/ansible/devel/modules/ovh_ip_loadbalancing_backend_module.html#ovh-ip-loadbalancing-backend-module~~
~~├─BROKEN─ https://eu.api.ovh.com/g934.first_step_with_api~~
~~https://docs.ansible.com/ansible/devel/modules/packet_volume_attachment_module.html#packet-volume-attachment-module~~
~~├─BROKEN─ https://www.packet.net/developers/api/volumeattachments/~~
https://docs.ansible.com/ansible/devel/modules/postgresql_info_module.html#postgresql-info-module
├─BROKEN─ https://www.postgresql.org/docs/current/catalog-pg-replication-slots.html
~~https://docs.ansible.com/ansible/devel/modules/postgresql_table_module.html#postgresql-table-module~~
~~├─BROKEN─ http://0.0.0.0:1337/modules/postgresql.org/docs/current/datatype.html~~
https://docs.ansible.com/ansible/devel/modules/win_credential_module.html#win-credential-module
├─BROKEN─ https://docs.microsoft.com/en-us/windows/desktop/api/wincred/ns-wincred-_credentiala
https://docs.ansible.com/ansible/devel/modules/win_dsc_module.html#win-dsc-module
├─BROKEN─ https://docs.microsoft.com/en-us/powershell/dsc/resources/resources
https://docs.ansible.com/ansible/devel/modules/win_inet_proxy_module.html#win-inet-proxy-module
├─BROKEN─ http://0.0.0.0:1337/modules/host
https://docs.ansible.com/ansible/devel/modules/win_webpicmd_module.html#win-webpicmd-module
└─BROKEN─ http://www.iis.net/learn/install/web-platform-installer/web-platform-installer-v4-command-line-webpicmdexe-rtw-release
~~https://docs.ansible.com/ansible/devel/modules/xenserver_guest_module.html#xenserver-guest-module~~
~~├─BROKEN─ https://raw.githubusercontent.com/xapi-project/xen-api/master/scripts/examples/python/XenAPI.py~~
~~https://docs.ansible.com/ansible/devel/modules/xenserver_guest_powerstate_module.html#xenserver-guest-powerstate-module~~
~~├─BROKEN─ https://raw.githubusercontent.com/xapi-project/xen-api/master/scripts/examples/python/XenAPI.py~~
~~https://docs.ansible.com/ansible/devel/modules/xenserver_guest_info_module.html#xenserver-guest-info-module~~
~~├─BROKEN─ https://raw.githubusercontent.com/xapi-project/xen-api/master/scripts/examples/python/XenAPI.py~~
https://docs.ansible.com/ansible/devel/modules/zabbix_map_module.html#zabbix-map-module
├─BROKEN─ https://en.wikipedia.org/wiki/DOT_(graph_description_language
|
https://github.com/ansible/ansible/issues/67357
|
https://github.com/ansible/ansible/pull/67360
|
53e043b5febd30f258a233f51b180a543300151b
|
11e75b0af256f9f09c54365282a4969a5fe0390e
| 2020-02-12T20:18:07Z |
python
| 2020-02-12T21:41:40Z |
lib/ansible/modules/crypto/acme/acme_certificate_revoke.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2016 Michael Gruener <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: acme_certificate_revoke
author: "Felix Fontein (@felixfontein)"
version_added: "2.7"
short_description: Revoke certificates with the ACME protocol
description:
- "Allows to revoke certificates issued by a CA supporting the
L(ACME protocol,https://tools.ietf.org/html/rfc8555),
such as L(Let's Encrypt,https://letsencrypt.org/)."
notes:
- "Exactly one of C(account_key_src), C(account_key_content),
C(private_key_src) or C(private_key_content) must be specified."
- "Trying to revoke an already revoked certificate
should result in an unchanged status, even if the revocation reason
was different than the one specified here. Also, depending on the
server, it can happen that some other error is returned if the
certificate has already been revoked."
seealso:
- name: The Let's Encrypt documentation
description: Documentation for the Let's Encrypt Certification Authority.
Provides useful information for example on rate limits.
link: https://letsencrypt.org/docs/
- name: Automatic Certificate Management Environment (ACME)
description: The specification of the ACME protocol (RFC 8555).
link: https://tools.ietf.org/html/rfc8555
- module: acme_inspect
description: Allows to debug problems.
extends_documentation_fragment:
- acme
options:
certificate:
description:
- "Path to the certificate to revoke."
type: path
required: yes
account_key_src:
description:
- "Path to a file containing the ACME account RSA or Elliptic Curve
key."
- "RSA keys can be created with C(openssl rsa ...). Elliptic curve keys can
be created with C(openssl ecparam -genkey ...). Any other tool creating
private keys in PEM format can be used as well."
- "Mutually exclusive with C(account_key_content)."
- "Required if C(account_key_content) is not used."
type: path
account_key_content:
description:
- "Content of the ACME account RSA or Elliptic Curve key."
- "Note that exactly one of C(account_key_src), C(account_key_content),
C(private_key_src) or C(private_key_content) must be specified."
- "I(Warning): the content will be written into a temporary file, which will
be deleted by Ansible when the module completes. Since this is an
important private key — it can be used to change the account key,
or to revoke your certificates without knowing their private keys
—, this might not be acceptable."
- "In case C(cryptography) is used, the content is not written into a
temporary file. It can still happen that it is written to disk by
Ansible in the process of moving the module with its argument to
the node where it is executed."
type: str
private_key_src:
description:
- "Path to the certificate's private key."
- "Note that exactly one of C(account_key_src), C(account_key_content),
C(private_key_src) or C(private_key_content) must be specified."
type: path
private_key_content:
description:
- "Content of the certificate's private key."
- "Note that exactly one of C(account_key_src), C(account_key_content),
C(private_key_src) or C(private_key_content) must be specified."
- "I(Warning): the content will be written into a temporary file, which will
be deleted by Ansible when the module completes. Since this is an
important private key — it can be used to change the account key,
or to revoke your certificates without knowing their private keys
—, this might not be acceptable."
- "In case C(cryptography) is used, the content is not written into a
temporary file. It can still happen that it is written to disk by
Ansible in the process of moving the module with its argument to
the node where it is executed."
type: str
revoke_reason:
description:
- "One of the revocation reasonCodes defined in
L(https://tools.ietf.org/html/rfc5280#section-5.3.1, Section 5.3.1 of RFC5280)."
- "Possible values are C(0) (unspecified), C(1) (keyCompromise),
C(2) (cACompromise), C(3) (affiliationChanged), C(4) (superseded),
C(5) (cessationOfOperation), C(6) (certificateHold),
C(8) (removeFromCRL), C(9) (privilegeWithdrawn),
C(10) (aACompromise)"
type: int
'''
EXAMPLES = '''
- name: Revoke certificate with account key
acme_certificate_revoke:
account_key_src: /etc/pki/cert/private/account.key
certificate: /etc/httpd/ssl/sample.com.crt
- name: Revoke certificate with certificate's private key
acme_certificate_revoke:
private_key_src: /etc/httpd/ssl/sample.com.key
certificate: /etc/httpd/ssl/sample.com.crt
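# A hedged additional example (not in the original module): passing an explicit
# revocation reason code as documented above (1 = keyCompromise).
- name: Revoke certificate with account key and an explicit revocation reason
  acme_certificate_revoke:
    account_key_src: /etc/pki/cert/private/account.key
    certificate: /etc/httpd/ssl/sample.com.crt
    revoke_reason: 1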
'''
RETURN = '''
'''
from ansible.module_utils.acme import (
ModuleFailException,
ACMEAccount,
nopad_b64,
pem_to_der,
handle_standard_module_arguments,
get_default_argspec,
)
from ansible.module_utils.basic import AnsibleModule
def main():
argument_spec = get_default_argspec()
argument_spec.update(dict(
private_key_src=dict(type='path'),
private_key_content=dict(type='str', no_log=True),
certificate=dict(type='path', required=True),
revoke_reason=dict(type='int'),
))
module = AnsibleModule(
argument_spec=argument_spec,
required_one_of=(
['account_key_src', 'account_key_content', 'private_key_src', 'private_key_content'],
),
mutually_exclusive=(
['account_key_src', 'account_key_content', 'private_key_src', 'private_key_content'],
),
supports_check_mode=False,
)
handle_standard_module_arguments(module)
try:
account = ACMEAccount(module)
# Load certificate
certificate = pem_to_der(module.params.get('certificate'))
certificate = nopad_b64(certificate)
# Construct payload
payload = {
'certificate': certificate
}
if module.params.get('revoke_reason') is not None:
payload['reason'] = module.params.get('revoke_reason')
# Determine endpoint
if module.params.get('acme_version') == 1:
endpoint = account.directory['revoke-cert']
payload['resource'] = 'revoke-cert'
else:
endpoint = account.directory['revokeCert']
# Get hold of private key (if available) and make sure it comes from disk
private_key = module.params.get('private_key_src')
private_key_content = module.params.get('private_key_content')
# Revoke certificate
if private_key or private_key_content:
# Step 1: load and parse private key
error, private_key_data = account.parse_key(private_key, private_key_content)
if error:
raise ModuleFailException("error while parsing private key: %s" % error)
# Step 2: sign revocation request with private key
jws_header = {
"alg": private_key_data['alg'],
"jwk": private_key_data['jwk'],
}
result, info = account.send_signed_request(endpoint, payload, key_data=private_key_data, jws_header=jws_header)
else:
# Step 1: get hold of account URI
created, account_data = account.setup_account(allow_creation=False)
if created:
raise AssertionError('Unwanted account creation')
if account_data is None:
raise ModuleFailException(msg='Account does not exist or is deactivated.')
# Step 2: sign revocation request with account key
result, info = account.send_signed_request(endpoint, payload)
if info['status'] != 200:
already_revoked = False
# Standardized error from draft 14 on (https://tools.ietf.org/html/rfc8555#section-7.6)
if result.get('type') == 'urn:ietf:params:acme:error:alreadyRevoked':
already_revoked = True
else:
# Hack for Boulder errors
if module.params.get('acme_version') == 1:
error_type = 'urn:acme:error:malformed'
else:
error_type = 'urn:ietf:params:acme:error:malformed'
if result.get('type') == error_type and result.get('detail') == 'Certificate already revoked':
# Fallback: boulder returns this in case the certificate was already revoked.
already_revoked = True
# If we know the certificate was already revoked, we don't fail,
# but successfully terminate while indicating no change
if already_revoked:
module.exit_json(changed=False)
raise ModuleFailException('Error revoking certificate: {0} {1}'.format(info['status'], result))
module.exit_json(changed=True)
except ModuleFailException as e:
e.do_fail(module)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,357 |
Fix batch of broken links in module docs
|
We're trying to fix as many broken links as possible before modules move into collections. This is the batch of broken links on some Ansible modules.
NOTE: the link checker sometimes reports an error where a link actually works. Ignore those if you find them.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com
###### BROKEN LINKS
~~https://docs.ansible.com/ansible/latest/modules/openssl_privatekey_module.html#openssl-privatekey-module~~
~~├─BROKEN─ https://en.wikipedia.org/wiki/RSA_(cryptosystem~~
~~https://docs.ansible.com/ansible/devel/modules/elb_target_info_module.html#elb-target-info-module~~
~~└─BROKEN─ https://boto3.readthedocs.io/en/latest/%20reference/services/elbv2.html#ElasticLoadBalancingv2.Client.describe_target_health~~
~~https://docs.ansible.com/ansible/devel/plugins/lookup/laps_password.html~~
~~├─BROKEN─ https://keathmilligan.net/python-ldap-and-macos/~~
~~https://docs.ansible.com/ansible/devel/modules/acme_certificate_revoke_module.html#acme-certificate-revoke-module~~
~~─BROKEN─ http://0.0.0.0:1337/modules/Section%205.3.1%20of%20RFC5280~~
https://docs.ansible.com/ansible/devel/modules/airbrake_deployment_module.html#airbrake-deployment-module
└─BROKEN─ http://help.airbrake.io/kb/api-2/deploy-tracking
https://docs.ansible.com/ansible/devel/modules/consul_module.html#consul-module
├─BROKEN─ http://0.0.0.0:1337/v1/agent/service/register
~~https://docs.ansible.com/ansible/devel/modules/hetzner_firewall_module.html#hetzner-firewall-module~~
~~─BROKEN─ http://0.0.0.0:1337/modules/the%20documentation,https:/wiki.hetzner.de/index.php/Robot_Firewall/en#Parameter~~
~~https://docs.ansible.com/ansible/devel/modules/keycloak_client_module.html#keycloak-client-module~~
~~├─BROKEN─ http://www.keycloak.org/docs-api/3.3/rest-api/~~
~~├─BROKEN─ http://www.keycloak.org/docs-api/3.3/rest-api/index.html#_resourceserverrepresentation~~
~~https://docs.ansible.com/ansible/devel/modules/keycloak_clienttemplate_module.html#keycloak-clienttemplate-module~~
~~├─BROKEN─ http://www.keycloak.org/docs-api/3.3/rest-api/~~
~~https://docs.ansible.com/ansible/devel/modules/keycloak_group_module.html#keycloak-group-module~~
~~├─BROKEN─ http://www.keycloak.org/docs-api/3.3/rest-api/~~
https://docs.ansible.com/ansible/devel/modules/meraki_network_module.html#meraki-network-module
├─BROKEN─ http://0.0.0.0:1337/modules/my.meraki.com
~~https://docs.ansible.com/ansible/devel/modules/os_ironic_module.html#os-ironic-module~~
~~├─BROKEN─ https://docs.openstack.org/ironic/latest/install/include/root-device-hints.html~~
~~https://docs.ansible.com/ansible/devel/modules/ovh_ip_failover_module.html#ovh-ip-failover-module~~
~~├─BROKEN─ https://eu.api.ovh.com/g934.first_step_with_api~~
~~https://docs.ansible.com/ansible/devel/modules/ovh_ip_loadbalancing_backend_module.html#ovh-ip-loadbalancing-backend-module~~
~~├─BROKEN─ https://eu.api.ovh.com/g934.first_step_with_api~~
~~https://docs.ansible.com/ansible/devel/modules/packet_volume_attachment_module.html#packet-volume-attachment-module~~
~~├─BROKEN─ https://www.packet.net/developers/api/volumeattachments/~~
https://docs.ansible.com/ansible/devel/modules/postgresql_info_module.html#postgresql-info-module
├─BROKEN─ https://www.postgresql.org/docs/current/catalog-pg-replication-slots.html
~~https://docs.ansible.com/ansible/devel/modules/postgresql_table_module.html#postgresql-table-module~~
~~├─BROKEN─ http://0.0.0.0:1337/modules/postgresql.org/docs/current/datatype.html~~
https://docs.ansible.com/ansible/devel/modules/win_credential_module.html#win-credential-module
├─BROKEN─ https://docs.microsoft.com/en-us/windows/desktop/api/wincred/ns-wincred-_credentiala
https://docs.ansible.com/ansible/devel/modules/win_dsc_module.html#win-dsc-module
├─BROKEN─ https://docs.microsoft.com/en-us/powershell/dsc/resources/resources
https://docs.ansible.com/ansible/devel/modules/win_inet_proxy_module.html#win-inet-proxy-module
├─BROKEN─ http://0.0.0.0:1337/modules/host
https://docs.ansible.com/ansible/devel/modules/win_webpicmd_module.html#win-webpicmd-module
└─BROKEN─ http://www.iis.net/learn/install/web-platform-installer/web-platform-installer-v4-command-line-webpicmdexe-rtw-release
~~https://docs.ansible.com/ansible/devel/modules/xenserver_guest_module.html#xenserver-guest-module~~
~~├─BROKEN─ https://raw.githubusercontent.com/xapi-project/xen-api/master/scripts/examples/python/XenAPI.py~~
~~https://docs.ansible.com/ansible/devel/modules/xenserver_guest_powerstate_module.html#xenserver-guest-powerstate-module~~
~~├─BROKEN─ https://raw.githubusercontent.com/xapi-project/xen-api/master/scripts/examples/python/XenAPI.py~~
~~https://docs.ansible.com/ansible/devel/modules/xenserver_guest_info_module.html#xenserver-guest-info-module~~
~~├─BROKEN─ https://raw.githubusercontent.com/xapi-project/xen-api/master/scripts/examples/python/XenAPI.py~~
https://docs.ansible.com/ansible/devel/modules/zabbix_map_module.html#zabbix-map-module
├─BROKEN─ https://en.wikipedia.org/wiki/DOT_(graph_description_language
|
https://github.com/ansible/ansible/issues/67357
|
https://github.com/ansible/ansible/pull/67360
|
53e043b5febd30f258a233f51b180a543300151b
|
11e75b0af256f9f09c54365282a4969a5fe0390e
| 2020-02-12T20:18:07Z |
python
| 2020-02-12T21:41:40Z |
lib/ansible/modules/crypto/openssl_privatekey.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2016, Yanis Guenane <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: openssl_privatekey
version_added: "2.3"
short_description: Generate OpenSSL private keys
description:
- This module allows one to (re)generate OpenSSL private keys.
- One can generate L(RSA,https://en.wikipedia.org/wiki/RSA_(cryptosystem)),
L(DSA,https://en.wikipedia.org/wiki/Digital_Signature_Algorithm),
L(ECC,https://en.wikipedia.org/wiki/Elliptic-curve_cryptography) or
L(EdDSA,https://en.wikipedia.org/wiki/EdDSA) private keys.
- Keys are generated in PEM format.
- "Please note that the module regenerates private keys if they don't match
the module's options. In particular, if you provide another passphrase
(or specify none), change the keysize, etc., the private key will be
regenerated. If you are concerned that this could **overwrite your private key**,
consider using the I(backup) option."
- The module can use the cryptography Python library, or the pyOpenSSL Python
library. By default, it tries to detect which one is available. This can be
overridden with the I(select_crypto_backend) option. Please note that the
PyOpenSSL backend was deprecated in Ansible 2.9 and will be removed in Ansible 2.13.
requirements:
- Either cryptography >= 1.2.3 (older versions might work as well)
- Or pyOpenSSL
author:
- Yanis Guenane (@Spredzy)
- Felix Fontein (@felixfontein)
options:
state:
description:
- Whether the private key should exist or not, taking action if the state is different from what is stated.
type: str
default: present
choices: [ absent, present ]
size:
description:
- Size (in bits) of the TLS/SSL key to generate.
type: int
default: 4096
type:
description:
- The algorithm used to generate the TLS/SSL private key.
- Note that C(ECC), C(X25519), C(X448), C(Ed25519) and C(Ed448) require the C(cryptography) backend.
C(X25519) needs cryptography 2.5 or newer, while C(X448), C(Ed25519) and C(Ed448) require
cryptography 2.6 or newer. For C(ECC), the minimal cryptography version required depends on the
I(curve) option.
type: str
default: RSA
choices: [ DSA, ECC, Ed25519, Ed448, RSA, X25519, X448 ]
curve:
description:
- Note that not all curves are supported by all versions of C(cryptography).
- For maximal interoperability, C(secp384r1) or C(secp256r1) should be used.
- We use the curve names as defined in the
L(IANA registry for TLS,https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-8).
type: str
choices:
- secp384r1
- secp521r1
- secp224r1
- secp192r1
- secp256r1
- secp256k1
- brainpoolP256r1
- brainpoolP384r1
- brainpoolP512r1
- sect571k1
- sect409k1
- sect283k1
- sect233k1
- sect163k1
- sect571r1
- sect409r1
- sect283r1
- sect233r1
- sect163r2
version_added: "2.8"
force:
description:
- Should the key be regenerated even if it already exists.
type: bool
default: no
path:
description:
- Name of the file in which the generated TLS/SSL private key will be written. It will have 0600 mode.
type: path
required: true
passphrase:
description:
- The passphrase for the private key.
type: str
version_added: "2.4"
cipher:
description:
- The cipher to encrypt the private key. (Valid values can be found by
running `openssl list -cipher-algorithms` or `openssl list-cipher-algorithms`,
depending on your OpenSSL version.)
- When using the C(cryptography) backend, use C(auto).
type: str
version_added: "2.4"
select_crypto_backend:
description:
- Determines which crypto backend to use.
- The default choice is C(auto), which tries to use C(cryptography) if available, and falls back to C(pyopenssl).
- If set to C(pyopenssl), will try to use the L(pyOpenSSL,https://pypi.org/project/pyOpenSSL/) library.
- If set to C(cryptography), will try to use the L(cryptography,https://cryptography.io/) library.
- Please note that the C(pyopenssl) backend has been deprecated in Ansible 2.9, and will be removed in Ansible 2.13.
From that point on, only the C(cryptography) backend will be available.
type: str
default: auto
choices: [ auto, cryptography, pyopenssl ]
version_added: "2.8"
format:
description:
- Determines which format the private key is written in. By default, PKCS1 (traditional OpenSSL format)
is used for all keys which support it. Please note that not every key can be exported in any format.
- The value C(auto) selects a format based on the key format. The value C(auto_ignore) does the same,
but for existing private key files, it will not force a regenerate when its format is not the automatically
selected one for generation.
- Note that if the format for an existing private key mismatches, the key is *regenerated* by default.
To change this behavior, use the I(format_mismatch) option.
- The I(format) option is only supported by the C(cryptography) backend. The C(pyopenssl) backend will
fail if a value different from C(auto_ignore) is used.
type: str
default: auto_ignore
choices: [ pkcs1, pkcs8, raw, auto, auto_ignore ]
version_added: "2.10"
format_mismatch:
description:
- Determines behavior of the module if the format of a private key does not match the expected format, but all
other parameters are as expected.
- If set to C(regenerate) (default), generates a new private key.
- If set to C(convert), the key will be converted to the new format instead.
- Only supported by the C(cryptography) backend.
type: str
default: regenerate
choices: [ regenerate, convert ]
version_added: "2.10"
backup:
description:
- Create a backup file including a timestamp so you can get
the original private key back if you overwrote it with a new one by accident.
type: bool
default: no
version_added: "2.8"
return_content:
description:
- If set to C(yes), will return the (current or generated) private key's content as I(privatekey).
- Note that especially if the private key is not encrypted, you have to make sure that the returned
value is treated appropriately and not accidentally written to logs etc.! Use with care!
type: bool
default: no
version_added: "2.10"
extends_documentation_fragment:
- files
seealso:
- module: openssl_certificate
- module: openssl_csr
- module: openssl_dhparam
- module: openssl_pkcs12
- module: openssl_publickey
'''
EXAMPLES = r'''
- name: Generate an OpenSSL private key with the default values (4096 bits, RSA)
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
- name: Generate an OpenSSL private key with the default values (4096 bits, RSA) and a passphrase
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
passphrase: ansible
cipher: aes256
- name: Generate an OpenSSL private key with a different size (2048 bits)
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
size: 2048
- name: Force regenerate an OpenSSL private key if it already exists
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
force: yes
- name: Generate an OpenSSL private key with a different algorithm (DSA)
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
type: DSA
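# A hedged additional example (not in the original module): exercising the I(format)
# and I(format_mismatch) options documented above; values are illustrative only.
- name: Ensure an existing key is stored in PKCS#8 format, converting instead of regenerating
  openssl_privatekey:
    path: /etc/ssl/private/ansible.com.pem
    format: pkcs8
    format_mismatch: convert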
'''
RETURN = r'''
size:
description: Size (in bits) of the TLS/SSL private key.
returned: changed or success
type: int
sample: 4096
type:
description: Algorithm used to generate the TLS/SSL private key.
returned: changed or success
type: str
sample: RSA
curve:
description: Elliptic curve used to generate the TLS/SSL private key.
returned: changed or success, and I(type) is C(ECC)
type: str
sample: secp256r1
filename:
description: Path to the generated TLS/SSL private key file.
returned: changed or success
type: str
sample: /etc/ssl/private/ansible.com.pem
fingerprint:
description:
- The fingerprint of the public key. Fingerprint will be generated for each C(hashlib.algorithms) available.
- The PyOpenSSL backend requires PyOpenSSL >= 16.0 for meaningful output.
returned: changed or success
type: dict
sample:
md5: "84:75:71:72:8d:04:b5:6c:4d:37:6d:66:83:f5:4c:29"
sha1: "51:cc:7c:68:5d:eb:41:43:88:7e:1a:ae:c7:f8:24:72:ee:71:f6:10"
sha224: "b1:19:a6:6c:14:ac:33:1d:ed:18:50:d3:06:5c:b2:32:91:f1:f1:52:8c:cb:d5:75:e9:f5:9b:46"
sha256: "41:ab:c7:cb:d5:5f:30:60:46:99:ac:d4:00:70:cf:a1:76:4f:24:5d:10:24:57:5d:51:6e:09:97:df:2f:de:c7"
sha384: "85:39:50:4e:de:d9:19:33:40:70:ae:10:ab:59:24:19:51:c3:a2:e4:0b:1c:b1:6e:dd:b3:0c:d9:9e:6a:46:af:da:18:f8:ef:ae:2e:c0:9a:75:2c:9b:b3:0f:3a:5f:3d"
sha512: "fd:ed:5e:39:48:5f:9f:fe:7f:25:06:3f:79:08:cd:ee:a5:e7:b3:3d:13:82:87:1f:84:e1:f5:c7:28:77:53:94:86:56:38:69:f0:d9:35:22:01:1e:a6:60:...:0f:9b"
backup_file:
description: Name of backup file created.
returned: changed and if I(backup) is C(yes)
type: str
sample: /path/to/privatekey.pem.2019-03-09@11:22~
privatekey:
description:
- The (current or generated) private key's content.
- Will be Base64-encoded if the key is in raw format.
returned: if I(state) is C(present) and I(return_content) is C(yes)
type: str
version_added: "2.10"
'''
import abc
import base64
import os
import traceback
from distutils.version import LooseVersion
MINIMAL_PYOPENSSL_VERSION = '0.6'
MINIMAL_CRYPTOGRAPHY_VERSION = '1.2.3'
PYOPENSSL_IMP_ERR = None
try:
import OpenSSL
from OpenSSL import crypto
PYOPENSSL_VERSION = LooseVersion(OpenSSL.__version__)
except ImportError:
PYOPENSSL_IMP_ERR = traceback.format_exc()
PYOPENSSL_FOUND = False
else:
PYOPENSSL_FOUND = True
CRYPTOGRAPHY_IMP_ERR = None
try:
import cryptography
import cryptography.exceptions
import cryptography.hazmat.backends
import cryptography.hazmat.primitives.serialization
import cryptography.hazmat.primitives.asymmetric.rsa
import cryptography.hazmat.primitives.asymmetric.dsa
import cryptography.hazmat.primitives.asymmetric.ec
import cryptography.hazmat.primitives.asymmetric.utils
CRYPTOGRAPHY_VERSION = LooseVersion(cryptography.__version__)
except ImportError:
CRYPTOGRAPHY_IMP_ERR = traceback.format_exc()
CRYPTOGRAPHY_FOUND = False
else:
CRYPTOGRAPHY_FOUND = True
from ansible.module_utils.crypto import (
CRYPTOGRAPHY_HAS_X25519,
CRYPTOGRAPHY_HAS_X25519_FULL,
CRYPTOGRAPHY_HAS_X448,
CRYPTOGRAPHY_HAS_ED25519,
CRYPTOGRAPHY_HAS_ED448,
)
from ansible.module_utils import crypto as crypto_utils
from ansible.module_utils._text import to_native, to_bytes
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class PrivateKeyError(crypto_utils.OpenSSLObjectError):
pass
class PrivateKeyBase(crypto_utils.OpenSSLObject):
def __init__(self, module):
super(PrivateKeyBase, self).__init__(
module.params['path'],
module.params['state'],
module.params['force'],
module.check_mode
)
self.size = module.params['size']
self.passphrase = module.params['passphrase']
self.cipher = module.params['cipher']
self.privatekey = None
self.fingerprint = {}
self.format = module.params['format']
self.format_mismatch = module.params['format_mismatch']
self.privatekey_bytes = None
self.return_content = module.params['return_content']
self.backup = module.params['backup']
self.backup_file = None
if module.params['mode'] is None:
module.params['mode'] = '0600'
@abc.abstractmethod
def _generate_private_key(self):
"""(Re-)Generate private key."""
pass
@abc.abstractmethod
def _get_private_key_data(self):
"""Return bytes for self.privatekey"""
pass
@abc.abstractmethod
def _get_fingerprint(self):
pass
def generate(self, module):
"""Generate a keypair."""
if not self.check(module, perms_required=False, ignore_conversion=True) or self.force:
# Regenerate
if self.backup:
self.backup_file = module.backup_local(self.path)
self._generate_private_key()
privatekey_data = self._get_private_key_data()
if self.return_content:
self.privatekey_bytes = privatekey_data
crypto_utils.write_file(module, privatekey_data, 0o600)
self.changed = True
elif not self.check(module, perms_required=False, ignore_conversion=False):
# Convert
if self.backup:
self.backup_file = module.backup_local(self.path)
privatekey_data = self._get_private_key_data()
if self.return_content:
self.privatekey_bytes = privatekey_data
crypto_utils.write_file(module, privatekey_data, 0o600)
self.changed = True
self.fingerprint = self._get_fingerprint()
file_args = module.load_file_common_arguments(module.params)
if module.set_fs_attributes_if_different(file_args, False):
self.changed = True
def remove(self, module):
if self.backup:
self.backup_file = module.backup_local(self.path)
super(PrivateKeyBase, self).remove(module)
@abc.abstractmethod
def _check_passphrase(self):
pass
@abc.abstractmethod
def _check_size_and_type(self):
pass
@abc.abstractmethod
def _check_format(self):
pass
def check(self, module, perms_required=True, ignore_conversion=True):
"""Ensure the resource is in its desired state."""
state_and_perms = super(PrivateKeyBase, self).check(module, perms_required)
if not state_and_perms or not self._check_passphrase():
return False
if not self._check_size_and_type():
return False
if not self._check_format():
if not ignore_conversion or self.format_mismatch != 'convert':
return False
return True
def dump(self):
"""Serialize the object into a dictionary."""
result = {
'size': self.size,
'filename': self.path,
'changed': self.changed,
'fingerprint': self.fingerprint,
}
if self.backup_file:
result['backup_file'] = self.backup_file
if self.return_content:
if self.privatekey_bytes is None:
self.privatekey_bytes = crypto_utils.load_file_if_exists(self.path, ignore_errors=True)
if self.privatekey_bytes:
if crypto_utils.identify_private_key_format(self.privatekey_bytes) == 'raw':
result['privatekey'] = base64.b64encode(self.privatekey_bytes)
else:
result['privatekey'] = self.privatekey_bytes.decode('utf-8')
else:
result['privatekey'] = None
return result
# Implementation with using pyOpenSSL
class PrivateKeyPyOpenSSL(PrivateKeyBase):
def __init__(self, module):
super(PrivateKeyPyOpenSSL, self).__init__(module)
if module.params['type'] == 'RSA':
self.type = crypto.TYPE_RSA
elif module.params['type'] == 'DSA':
self.type = crypto.TYPE_DSA
else:
module.fail_json(msg="PyOpenSSL backend only supports RSA and DSA keys.")
if self.format != 'auto_ignore':
module.fail_json(msg="PyOpenSSL backend only supports auto_ignore format.")
def _generate_private_key(self):
"""(Re-)Generate private key."""
self.privatekey = crypto.PKey()
try:
self.privatekey.generate_key(self.type, self.size)
except (TypeError, ValueError) as exc:
raise PrivateKeyError(exc)
def _get_private_key_data(self):
"""Return bytes for self.privatekey"""
if self.cipher and self.passphrase:
return crypto.dump_privatekey(crypto.FILETYPE_PEM, self.privatekey,
self.cipher, to_bytes(self.passphrase))
else:
return crypto.dump_privatekey(crypto.FILETYPE_PEM, self.privatekey)
def _get_fingerprint(self):
return crypto_utils.get_fingerprint(self.path, self.passphrase)
def _check_passphrase(self):
try:
crypto_utils.load_privatekey(self.path, self.passphrase)
return True
except Exception as dummy:
return False
def _check_size_and_type(self):
def _check_size(privatekey):
return self.size == privatekey.bits()
def _check_type(privatekey):
return self.type == privatekey.type()
try:
privatekey = crypto_utils.load_privatekey(self.path, self.passphrase)
except crypto_utils.OpenSSLBadPassphraseError as exc:
raise PrivateKeyError(exc)
return _check_size(privatekey) and _check_type(privatekey)
def _check_format(self):
# Not supported by this backend
return True
def dump(self):
"""Serialize the object into a dictionary."""
result = super(PrivateKeyPyOpenSSL, self).dump()
if self.type == crypto.TYPE_RSA:
result['type'] = 'RSA'
else:
result['type'] = 'DSA'
return result
# Implementation with using cryptography
class PrivateKeyCryptography(PrivateKeyBase):
def _get_ec_class(self, ectype):
ecclass = cryptography.hazmat.primitives.asymmetric.ec.__dict__.get(ectype)
if ecclass is None:
self.module.fail_json(msg='Your cryptography version does not support {0}'.format(ectype))
return ecclass
def _add_curve(self, name, ectype, deprecated=False):
def create(size):
ecclass = self._get_ec_class(ectype)
return ecclass()
def verify(privatekey):
ecclass = self._get_ec_class(ectype)
return isinstance(privatekey.private_numbers().public_numbers.curve, ecclass)
self.curves[name] = {
'create': create,
'verify': verify,
'deprecated': deprecated,
}
def __init__(self, module):
super(PrivateKeyCryptography, self).__init__(module)
self.curves = dict()
self._add_curve('secp384r1', 'SECP384R1')
self._add_curve('secp521r1', 'SECP521R1')
self._add_curve('secp224r1', 'SECP224R1')
self._add_curve('secp192r1', 'SECP192R1')
self._add_curve('secp256r1', 'SECP256R1')
self._add_curve('secp256k1', 'SECP256K1')
self._add_curve('brainpoolP256r1', 'BrainpoolP256R1', deprecated=True)
self._add_curve('brainpoolP384r1', 'BrainpoolP384R1', deprecated=True)
self._add_curve('brainpoolP512r1', 'BrainpoolP512R1', deprecated=True)
self._add_curve('sect571k1', 'SECT571K1', deprecated=True)
self._add_curve('sect409k1', 'SECT409K1', deprecated=True)
self._add_curve('sect283k1', 'SECT283K1', deprecated=True)
self._add_curve('sect233k1', 'SECT233K1', deprecated=True)
self._add_curve('sect163k1', 'SECT163K1', deprecated=True)
self._add_curve('sect571r1', 'SECT571R1', deprecated=True)
self._add_curve('sect409r1', 'SECT409R1', deprecated=True)
self._add_curve('sect283r1', 'SECT283R1', deprecated=True)
self._add_curve('sect233r1', 'SECT233R1', deprecated=True)
self._add_curve('sect163r2', 'SECT163R2', deprecated=True)
self.module = module
self.cryptography_backend = cryptography.hazmat.backends.default_backend()
self.type = module.params['type']
self.curve = module.params['curve']
if not CRYPTOGRAPHY_HAS_X25519 and self.type == 'X25519':
self.module.fail_json(msg='Your cryptography version does not support X25519')
if not CRYPTOGRAPHY_HAS_X25519_FULL and self.type == 'X25519':
self.module.fail_json(msg='Your cryptography version does not support X25519 serialization')
if not CRYPTOGRAPHY_HAS_X448 and self.type == 'X448':
self.module.fail_json(msg='Your cryptography version does not support X448')
if not CRYPTOGRAPHY_HAS_ED25519 and self.type == 'Ed25519':
self.module.fail_json(msg='Your cryptography version does not support Ed25519')
if not CRYPTOGRAPHY_HAS_ED448 and self.type == 'Ed448':
self.module.fail_json(msg='Your cryptography version does not support Ed448')
def _get_wanted_format(self):
if self.format not in ('auto', 'auto_ignore'):
return self.format
if self.type in ('X25519', 'X448', 'Ed25519', 'Ed448'):
return 'pkcs8'
else:
return 'pkcs1'
def _generate_private_key(self):
"""(Re-)Generate private key."""
try:
if self.type == 'RSA':
self.privatekey = cryptography.hazmat.primitives.asymmetric.rsa.generate_private_key(
public_exponent=65537, # OpenSSL always uses this
key_size=self.size,
backend=self.cryptography_backend
)
if self.type == 'DSA':
self.privatekey = cryptography.hazmat.primitives.asymmetric.dsa.generate_private_key(
key_size=self.size,
backend=self.cryptography_backend
)
if CRYPTOGRAPHY_HAS_X25519_FULL and self.type == 'X25519':
self.privatekey = cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey.generate()
if CRYPTOGRAPHY_HAS_X448 and self.type == 'X448':
self.privatekey = cryptography.hazmat.primitives.asymmetric.x448.X448PrivateKey.generate()
if CRYPTOGRAPHY_HAS_ED25519 and self.type == 'Ed25519':
self.privatekey = cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PrivateKey.generate()
if CRYPTOGRAPHY_HAS_ED448 and self.type == 'Ed448':
self.privatekey = cryptography.hazmat.primitives.asymmetric.ed448.Ed448PrivateKey.generate()
if self.type == 'ECC' and self.curve in self.curves:
if self.curves[self.curve]['deprecated']:
self.module.warn('Elliptic curves of type {0} should not be used for new keys!'.format(self.curve))
self.privatekey = cryptography.hazmat.primitives.asymmetric.ec.generate_private_key(
curve=self.curves[self.curve]['create'](self.size),
backend=self.cryptography_backend
)
except cryptography.exceptions.UnsupportedAlgorithm as dummy:
self.module.fail_json(msg='Cryptography backend does not support the algorithm required for {0}'.format(self.type))
def _get_private_key_data(self):
"""Return bytes for self.privatekey"""
# Select export format and encoding
try:
export_format = self._get_wanted_format()
export_encoding = cryptography.hazmat.primitives.serialization.Encoding.PEM
if export_format == 'pkcs1':
# "TraditionalOpenSSL" format is PKCS1
export_format = cryptography.hazmat.primitives.serialization.PrivateFormat.TraditionalOpenSSL
elif export_format == 'pkcs8':
export_format = cryptography.hazmat.primitives.serialization.PrivateFormat.PKCS8
elif export_format == 'raw':
export_format = cryptography.hazmat.primitives.serialization.PrivateFormat.Raw
export_encoding = cryptography.hazmat.primitives.serialization.Encoding.Raw
except AttributeError:
self.module.fail_json(msg='Cryptography backend does not support the selected output format "{0}"'.format(self.format))
# Select key encryption
encryption_algorithm = cryptography.hazmat.primitives.serialization.NoEncryption()
if self.cipher and self.passphrase:
if self.cipher == 'auto':
encryption_algorithm = cryptography.hazmat.primitives.serialization.BestAvailableEncryption(to_bytes(self.passphrase))
else:
self.module.fail_json(msg='Cryptography backend can only use "auto" for cipher option.')
# Serialize key
try:
return self.privatekey.private_bytes(
encoding=export_encoding,
format=export_format,
encryption_algorithm=encryption_algorithm
)
except ValueError as dummy:
self.module.fail_json(
msg='Cryptography backend cannot serialize the private key in the required format "{0}"'.format(self.format)
)
except Exception as dummy:
self.module.fail_json(
msg='Error while serializing the private key in the required format "{0}"'.format(self.format),
exception=traceback.format_exc()
)
def _load_privatekey(self):
try:
# Read bytes
with open(self.path, 'rb') as f:
data = f.read()
# Interpret bytes depending on format.
format = crypto_utils.identify_private_key_format(data)
if format == 'raw':
if len(data) == 56 and CRYPTOGRAPHY_HAS_X448:
return cryptography.hazmat.primitives.asymmetric.x448.X448PrivateKey.from_private_bytes(data)
if len(data) == 57 and CRYPTOGRAPHY_HAS_ED448:
return cryptography.hazmat.primitives.asymmetric.ed448.Ed448PrivateKey.from_private_bytes(data)
if len(data) == 32:
if CRYPTOGRAPHY_HAS_X25519 and (self.type == 'X25519' or not CRYPTOGRAPHY_HAS_ED25519):
return cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey.from_private_bytes(data)
if CRYPTOGRAPHY_HAS_ED25519 and (self.type == 'Ed25519' or not CRYPTOGRAPHY_HAS_X25519):
return cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PrivateKey.from_private_bytes(data)
if CRYPTOGRAPHY_HAS_X25519 and CRYPTOGRAPHY_HAS_ED25519:
try:
return cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey.from_private_bytes(data)
except Exception:
return cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PrivateKey.from_private_bytes(data)
raise PrivateKeyError('Cannot load raw key')
else:
return cryptography.hazmat.primitives.serialization.load_pem_private_key(
data,
None if self.passphrase is None else to_bytes(self.passphrase),
backend=self.cryptography_backend
)
except Exception as e:
raise PrivateKeyError(e)
def _get_fingerprint(self):
# Get bytes of public key
private_key = self._load_privatekey()
public_key = private_key.public_key()
public_key_bytes = public_key.public_bytes(
cryptography.hazmat.primitives.serialization.Encoding.DER,
cryptography.hazmat.primitives.serialization.PublicFormat.SubjectPublicKeyInfo
)
# Get fingerprints of public_key_bytes
return crypto_utils.get_fingerprint_of_bytes(public_key_bytes)
def _check_passphrase(self):
try:
with open(self.path, 'rb') as f:
data = f.read()
format = crypto_utils.identify_private_key_format(data)
if format == 'raw':
# Raw keys cannot be encrypted
return self.passphrase is None
else:
return cryptography.hazmat.primitives.serialization.load_pem_private_key(
data,
None if self.passphrase is None else to_bytes(self.passphrase),
backend=self.cryptography_backend
)
except Exception as dummy:
return False
def _check_size_and_type(self):
privatekey = self._load_privatekey()
self.privatekey = privatekey
if isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.rsa.RSAPrivateKey):
return self.type == 'RSA' and self.size == privatekey.key_size
if isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.dsa.DSAPrivateKey):
return self.type == 'DSA' and self.size == privatekey.key_size
if CRYPTOGRAPHY_HAS_X25519 and isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey):
return self.type == 'X25519'
if CRYPTOGRAPHY_HAS_X448 and isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.x448.X448PrivateKey):
return self.type == 'X448'
if CRYPTOGRAPHY_HAS_ED25519 and isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PrivateKey):
return self.type == 'Ed25519'
if CRYPTOGRAPHY_HAS_ED448 and isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.ed448.Ed448PrivateKey):
return self.type == 'Ed448'
if isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePrivateKey):
if self.type != 'ECC':
return False
if self.curve not in self.curves:
return False
return self.curves[self.curve]['verify'](privatekey)
return False
def _check_format(self):
if self.format == 'auto_ignore':
return True
try:
with open(self.path, 'rb') as f:
content = f.read()
format = crypto_utils.identify_private_key_format(content)
return format == self._get_wanted_format()
except Exception as dummy:
return False
def dump(self):
"""Serialize the object into a dictionary."""
result = super(PrivateKeyCryptography, self).dump()
result['type'] = self.type
if self.type == 'ECC':
result['curve'] = self.curve
return result
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['present', 'absent']),
size=dict(type='int', default=4096),
type=dict(type='str', default='RSA', choices=[
'DSA', 'ECC', 'Ed25519', 'Ed448', 'RSA', 'X25519', 'X448'
]),
curve=dict(type='str', choices=[
'secp384r1', 'secp521r1', 'secp224r1', 'secp192r1', 'secp256r1',
'secp256k1', 'brainpoolP256r1', 'brainpoolP384r1', 'brainpoolP512r1',
'sect571k1', 'sect409k1', 'sect283k1', 'sect233k1', 'sect163k1',
'sect571r1', 'sect409r1', 'sect283r1', 'sect233r1', 'sect163r2',
]),
force=dict(type='bool', default=False),
path=dict(type='path', required=True),
passphrase=dict(type='str', no_log=True),
cipher=dict(type='str'),
backup=dict(type='bool', default=False),
format=dict(type='str', default='auto_ignore', choices=['pkcs1', 'pkcs8', 'raw', 'auto', 'auto_ignore']),
format_mismatch=dict(type='str', default='regenerate', choices=['regenerate', 'convert']),
select_crypto_backend=dict(type='str', choices=['auto', 'pyopenssl', 'cryptography'], default='auto'),
return_content=dict(type='bool', default=False),
),
supports_check_mode=True,
add_file_common_args=True,
required_together=[
['cipher', 'passphrase']
],
required_if=[
['type', 'ECC', ['curve']],
],
)
base_dir = os.path.dirname(module.params['path']) or '.'
if not os.path.isdir(base_dir):
module.fail_json(
name=base_dir,
msg='The directory %s does not exist or the file is not a directory' % base_dir
)
backend = module.params['select_crypto_backend']
if backend == 'auto':
# Detection what is possible
can_use_cryptography = CRYPTOGRAPHY_FOUND and CRYPTOGRAPHY_VERSION >= LooseVersion(MINIMAL_CRYPTOGRAPHY_VERSION)
can_use_pyopenssl = PYOPENSSL_FOUND and PYOPENSSL_VERSION >= LooseVersion(MINIMAL_PYOPENSSL_VERSION)
# Decision
if module.params['cipher'] and module.params['passphrase'] and module.params['cipher'] != 'auto':
# First try pyOpenSSL, then cryptography
if can_use_pyopenssl:
backend = 'pyopenssl'
elif can_use_cryptography:
backend = 'cryptography'
else:
# First try cryptography, then pyOpenSSL
if can_use_cryptography:
backend = 'cryptography'
elif can_use_pyopenssl:
backend = 'pyopenssl'
# Success?
if backend == 'auto':
module.fail_json(msg=("Can't detect any of the required Python libraries "
"cryptography (>= {0}) or PyOpenSSL (>= {1})").format(
MINIMAL_CRYPTOGRAPHY_VERSION,
MINIMAL_PYOPENSSL_VERSION))
try:
if backend == 'pyopenssl':
if not PYOPENSSL_FOUND:
module.fail_json(msg=missing_required_lib('pyOpenSSL >= {0}'.format(MINIMAL_PYOPENSSL_VERSION)),
exception=PYOPENSSL_IMP_ERR)
module.deprecate('The module is using the PyOpenSSL backend. This backend has been deprecated', version='2.13')
private_key = PrivateKeyPyOpenSSL(module)
elif backend == 'cryptography':
if not CRYPTOGRAPHY_FOUND:
module.fail_json(msg=missing_required_lib('cryptography >= {0}'.format(MINIMAL_CRYPTOGRAPHY_VERSION)),
exception=CRYPTOGRAPHY_IMP_ERR)
private_key = PrivateKeyCryptography(module)
if private_key.state == 'present':
if module.check_mode:
result = private_key.dump()
result['changed'] = module.params['force'] or not private_key.check(module)
module.exit_json(**result)
private_key.generate(module)
else:
if module.check_mode:
result = private_key.dump()
result['changed'] = os.path.exists(module.params['path'])
module.exit_json(**result)
private_key.remove(module)
result = private_key.dump()
module.exit_json(**result)
except crypto_utils.OpenSSLObjectError as exc:
module.fail_json(msg=to_native(exc))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,357 |
Fix batch of broken links in module docs
|
We're trying to fix as many broken links as possible before modules move into collections. This is the batch of broken links on some Ansible modules.
NOTE: the link checker sometimes reports an error where a link actually works. Ignore those if you find them.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com
###### BROKEN LINKS
~~https://docs.ansible.com/ansible/latest/modules/openssl_privatekey_module.html#openssl-privatekey-module~~
~~├─BROKEN─ https://en.wikipedia.org/wiki/RSA_(cryptosystem~~
~~https://docs.ansible.com/ansible/devel/modules/elb_target_info_module.html#elb-target-info-module~~
~~└─BROKEN─ https://boto3.readthedocs.io/en/latest/%20reference/services/elbv2.html#ElasticLoadBalancingv2.Client.describe_target_health~~
~~https://docs.ansible.com/ansible/devel/plugins/lookup/laps_password.html~~
~~├─BROKEN─ https://keathmilligan.net/python-ldap-and-macos/~~
~~https://docs.ansible.com/ansible/devel/modules/acme_certificate_revoke_module.html#acme-certificate-revoke-module~~
~~─BROKEN─ http://0.0.0.0:1337/modules/Section%205.3.1%20of%20RFC5280~~
https://docs.ansible.com/ansible/devel/modules/airbrake_deployment_module.html#airbrake-deployment-module
└─BROKEN─ http://help.airbrake.io/kb/api-2/deploy-tracking
https://docs.ansible.com/ansible/devel/modules/consul_module.html#consul-module
├─BROKEN─ http://0.0.0.0:1337/v1/agent/service/register
~~https://docs.ansible.com/ansible/devel/modules/hetzner_firewall_module.html#hetzner-firewall-module~~
~~─BROKEN─ http://0.0.0.0:1337/modules/the%20documentation,https:/wiki.hetzner.de/index.php/Robot_Firewall/en#Parameter~~
~~https://docs.ansible.com/ansible/devel/modules/keycloak_client_module.html#keycloak-client-module~~
~~├─BROKEN─ http://www.keycloak.org/docs-api/3.3/rest-api/~~
~~├─BROKEN─ http://www.keycloak.org/docs-api/3.3/rest-api/index.html#_resourceserverrepresentation~~
~~https://docs.ansible.com/ansible/devel/modules/keycloak_clienttemplate_module.html#keycloak-clienttemplate-module~~
~~├─BROKEN─ http://www.keycloak.org/docs-api/3.3/rest-api/~~
~~https://docs.ansible.com/ansible/devel/modules/keycloak_group_module.html#keycloak-group-module~~
~~├─BROKEN─ http://www.keycloak.org/docs-api/3.3/rest-api/~~
https://docs.ansible.com/ansible/devel/modules/meraki_network_module.html#meraki-network-module
├─BROKEN─ http://0.0.0.0:1337/modules/my.meraki.com
~~https://docs.ansible.com/ansible/devel/modules/os_ironic_module.html#os-ironic-module~~
~~├─BROKEN─ https://docs.openstack.org/ironic/latest/install/include/root-device-hints.html~~
~~https://docs.ansible.com/ansible/devel/modules/ovh_ip_failover_module.html#ovh-ip-failover-module~~
~~├─BROKEN─ https://eu.api.ovh.com/g934.first_step_with_api~~
~~https://docs.ansible.com/ansible/devel/modules/ovh_ip_loadbalancing_backend_module.html#ovh-ip-loadbalancing-backend-module~~
~~├─BROKEN─ https://eu.api.ovh.com/g934.first_step_with_api~~
~~https://docs.ansible.com/ansible/devel/modules/packet_volume_attachment_module.html#packet-volume-attachment-module~~
~~├─BROKEN─ https://www.packet.net/developers/api/volumeattachments/~~
https://docs.ansible.com/ansible/devel/modules/postgresql_info_module.html#postgresql-info-module
├─BROKEN─ https://www.postgresql.org/docs/current/catalog-pg-replication-slots.html
~~https://docs.ansible.com/ansible/devel/modules/postgresql_table_module.html#postgresql-table-module~~
~~├─BROKEN─ http://0.0.0.0:1337/modules/postgresql.org/docs/current/datatype.html~~
https://docs.ansible.com/ansible/devel/modules/win_credential_module.html#win-credential-module
├─BROKEN─ https://docs.microsoft.com/en-us/windows/desktop/api/wincred/ns-wincred-_credentiala
https://docs.ansible.com/ansible/devel/modules/win_dsc_module.html#win-dsc-module
├─BROKEN─ https://docs.microsoft.com/en-us/powershell/dsc/resources/resources
https://docs.ansible.com/ansible/devel/modules/win_inet_proxy_module.html#win-inet-proxy-module
├─BROKEN─ http://0.0.0.0:1337/modules/host
https://docs.ansible.com/ansible/devel/modules/win_webpicmd_module.html#win-webpicmd-module
└─BROKEN─ http://www.iis.net/learn/install/web-platform-installer/web-platform-installer-v4-command-line-webpicmdexe-rtw-release
~~https://docs.ansible.com/ansible/devel/modules/xenserver_guest_module.html#xenserver-guest-module~~
~~├─BROKEN─ https://raw.githubusercontent.com/xapi-project/xen-api/master/scripts/examples/python/XenAPI.py~~
~~https://docs.ansible.com/ansible/devel/modules/xenserver_guest_powerstate_module.html#xenserver-guest-powerstate-module~~
~~├─BROKEN─ https://raw.githubusercontent.com/xapi-project/xen-api/master/scripts/examples/python/XenAPI.py~~
~~https://docs.ansible.com/ansible/devel/modules/xenserver_guest_info_module.html#xenserver-guest-info-module~~
~~├─BROKEN─ https://raw.githubusercontent.com/xapi-project/xen-api/master/scripts/examples/python/XenAPI.py~~
https://docs.ansible.com/ansible/devel/modules/zabbix_map_module.html#zabbix-map-module
├─BROKEN─ https://en.wikipedia.org/wiki/DOT_(graph_description_language
|
https://github.com/ansible/ansible/issues/67357
|
https://github.com/ansible/ansible/pull/67360
|
53e043b5febd30f258a233f51b180a543300151b
|
11e75b0af256f9f09c54365282a4969a5fe0390e
| 2020-02-12T20:18:07Z |
python
| 2020-02-12T21:41:40Z |
lib/ansible/modules/net_tools/hetzner_firewall.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2019 Felix Fontein <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: hetzner_firewall
version_added: "2.10"
short_description: Manage Hetzner's dedicated server firewall
author:
- Felix Fontein (@felixfontein)
description:
- Manage Hetzner's dedicated server firewall.
- Note that the idempotency check for TCP flags simply compares strings and does not
  try to interpret the rules. This might change in the future.
seealso:
- name: Firewall documentation
description: Hetzner's documentation on the stateless firewall for dedicated servers
link: https://wiki.hetzner.de/index.php/Robot_Firewall/en
- module: hetzner_firewall_info
description: Retrieve information on firewall configuration.
extends_documentation_fragment:
- hetzner
options:
server_ip:
description: The server's main IP address.
required: yes
type: str
port:
description:
- Switch port of firewall.
type: str
choices: [ main, kvm ]
default: main
state:
description:
- Status of the firewall.
- Firewall is active if state is C(present), and disabled if state is C(absent).
type: str
default: present
choices: [ present, absent ]
whitelist_hos:
description:
- Whether Hetzner services have access.
type: bool
rules:
description:
- Firewall rules.
type: dict
suboptions:
input:
description:
- Input firewall rules.
type: list
elements: dict
suboptions:
name:
description:
- Name of the firewall rule.
type: str
ip_version:
description:
- Internet protocol version.
- Note that currently, only IPv4 is supported by Hetzner.
required: yes
type: str
choices: [ ipv4, ipv6 ]
dst_ip:
description:
- Destination IP address or subnet address.
- CIDR notation.
type: str
dst_port:
description:
- Destination port or port range.
type: str
src_ip:
description:
- Source IP address or subnet address.
- CIDR notation.
type: str
src_port:
description:
- Source port or port range.
type: str
protocol:
description:
- Protocol above the IP layer.
type: str
tcp_flags:
description:
- TCP flags or logical combination of flags.
- Flags supported by Hetzner are C(syn), C(fin), C(rst), C(psh) and C(urg).
- They can be combined with C(|) (logical or) and C(&) (logical and).
- See L(the documentation,https://wiki.hetzner.de/index.php/Robot_Firewall/en#Parameter)
for more information.
type: str
action:
description:
- Action if rule matches.
required: yes
type: str
choices: [ accept, discard ]
update_timeout:
description:
- Timeout to use when configuring the firewall.
- Note that the API call returns before the firewall has been
successfully set up.
type: int
default: 30
wait_for_configured:
description:
- Whether to wait until the firewall has been successfully configured before
determining what to do, and before returning from the module.
- The API returns status C(in process) when the firewall is currently
being configured. If this happens, the module will try again until
the status changes to C(active) or C(disabled).
- Please note that there is a request limit. If you have to do multiple
updates, it can be better to disable waiting, and regularly use
M(hetzner_firewall_info) to query status.
type: bool
default: yes
wait_delay:
description:
- Delay to wait (in seconds) before checking again whether the firewall has
been configured.
type: int
default: 10
timeout:
description:
- Timeout (in seconds) for waiting for firewall to be configured.
type: int
default: 180
'''
EXAMPLES = r'''
- name: Configure firewall for server with main IP 1.2.3.4
hetzner_firewall:
hetzner_user: foo
hetzner_password: bar
server_ip: 1.2.3.4
state: present
whitelist_hos: yes
rules:
input:
- name: Allow everything to ports 20-23 from 4.3.2.1/24
ip_version: ipv4
src_ip: 4.3.2.1/24
dst_port: '20-23'
action: accept
- name: Allow everything to port 443
ip_version: ipv4
dst_port: '443'
action: accept
- name: Drop everything else
ip_version: ipv4
action: discard
register: result
- debug:
msg: "{{ result }}"
'''
RETURN = r'''
firewall:
description:
- The firewall configuration.
type: dict
returned: success
contains:
port:
description:
- Switch port of firewall.
- C(main) or C(kvm).
type: str
sample: main
server_ip:
description:
- Server's main IP address.
type: str
sample: 1.2.3.4
server_number:
description:
- Hetzner's internal server number.
type: int
sample: 12345
status:
description:
- Status of the firewall.
- C(active) or C(disabled).
- Will be C(in process) if the firewall is currently being updated, and
  I(wait_for_configured) is set to C(no) or I(timeout) is set too low.
type: str
sample: active
whitelist_hos:
description:
- Whether Hetzner services have access.
type: bool
sample: true
rules:
description:
- Firewall rules.
type: dict
contains:
input:
description:
- Input firewall rules.
type: list
elements: dict
contains:
name:
description:
- Name of the firewall rule.
type: str
sample: Allow HTTP access to server
ip_version:
description:
- Internet protocol version.
type: str
sample: ipv4
dst_ip:
description:
- Destination IP address or subnet address.
- CIDR notation.
type: str
sample: 1.2.3.4/32
dst_port:
description:
- Destination port or port range.
type: str
sample: "443"
src_ip:
description:
- Source IP address or subnet address.
- CIDR notation.
type: str
sample: null
src_port:
description:
- Source port or port range.
type: str
sample: null
protocol:
description:
- Protocol above the IP layer.
type: str
sample: tcp
tcp_flags:
description:
- TCP flags or logical combination of flags.
type: str
sample: null
action:
description:
- Action if rule matches.
- C(accept) or C(discard).
type: str
sample: accept
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.compat import ipaddress as compat_ipaddress
from ansible.module_utils.hetzner import (
HETZNER_DEFAULT_ARGUMENT_SPEC,
BASE_URL,
fetch_url_json,
fetch_url_json_with_retries,
CheckDoneTimeoutException,
)
from ansible.module_utils.six.moves.urllib.parse import urlencode
from ansible.module_utils._text import to_native, to_text
RULE_OPTION_NAMES = [
'name', 'ip_version', 'dst_ip', 'dst_port', 'src_ip', 'src_port',
'protocol', 'tcp_flags', 'action',
]
RULES = ['input']
def restrict_dict(dictionary, fields):
result = dict()
for k, v in dictionary.items():
if k in fields:
result[k] = v
return result
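# Reduce the full firewall object returned by the API to just the fields this module
# manages, so that current and desired configuration can be compared field by field.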
def restrict_firewall_config(config):
result = restrict_dict(config, ['port', 'status', 'whitelist_hos'])
result['rules'] = dict()
for ruleset in RULES:
result['rules'][ruleset] = [
restrict_dict(rule, RULE_OPTION_NAMES)
for rule in config['rules'].get(ruleset) or []
]
return result
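# Carry a single top-level setting from the current state over to the desired state,
# override it with the module parameter when one was given, and report whether it changed.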
def update(before, after, params, name):
bv = before.get(name)
after[name] = bv
changed = False
pv = params[name]
if pv is not None:
changed = pv != bv
if changed:
after[name] = pv
return changed
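# Normalize an address to a canonical CIDR string (compressed address plus prefix length);
# a missing prefix becomes /32 for IPv4 and /128 for IPv6, keeping rule comparison idempotent.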
def normalize_ip(ip, ip_version):
if ip is None:
return ip
if '/' in ip:
ip, range = ip.split('/')
else:
ip, range = ip, ''
ip_addr = to_native(compat_ipaddress.ip_address(to_text(ip)).compressed)
if range == '':
range = '32' if ip_version.lower() == 'ipv4' else '128'
return ip_addr + '/' + range
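# Compare the requested rules of a ruleset with the existing ones (after IP normalization)
# and build the desired rule list; returns True if the ruleset needs to be updated.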
def update_rules(before, after, params, ruleset):
before_rules = before['rules'][ruleset]
after_rules = after['rules'][ruleset]
params_rules = params['rules'][ruleset]
changed = len(before_rules) != len(params_rules)
for no, rule in enumerate(params_rules):
rule['src_ip'] = normalize_ip(rule['src_ip'], rule['ip_version'])
rule['dst_ip'] = normalize_ip(rule['dst_ip'], rule['ip_version'])
if no < len(before_rules):
before_rule = before_rules[no]
before_rule['src_ip'] = normalize_ip(before_rule['src_ip'], before_rule['ip_version'])
before_rule['dst_ip'] = normalize_ip(before_rule['dst_ip'], before_rule['ip_version'])
if before_rule != rule:
changed = True
after_rules.append(rule)
return changed
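# Flatten a ruleset into the urlencoded form keys used in the POST body,
# e.g. rules[input][0][action]=accept; keys whose value is None are skipped.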
def encode_rule(output, rulename, input):
for i, rule in enumerate(input['rules'][rulename]):
for k, v in rule.items():
if v is not None:
output['rules[{0}][{1}][{2}]'.format(rulename, i, k)] = v
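# Build an empty rules structure containing every managed ruleset (currently only 'input').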
def create_default_rules_object():
rules = dict()
for ruleset in RULES:
rules[ruleset] = []
return rules
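# The firewall counts as configured once the API no longer reports the status 'in process'.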
def firewall_configured(result, error):
return result['firewall']['status'] != 'in process'
def main():
argument_spec = dict(
server_ip=dict(type='str', required=True),
port=dict(type='str', default='main', choices=['main', 'kvm']),
state=dict(type='str', default='present', choices=['present', 'absent']),
whitelist_hos=dict(type='bool'),
rules=dict(type='dict', options=dict(
input=dict(type='list', elements='dict', options=dict(
name=dict(type='str'),
ip_version=dict(type='str', required=True, choices=['ipv4', 'ipv6']),
dst_ip=dict(type='str'),
dst_port=dict(type='str'),
src_ip=dict(type='str'),
src_port=dict(type='str'),
protocol=dict(type='str'),
tcp_flags=dict(type='str'),
action=dict(type='str', required=True, choices=['accept', 'discard']),
)),
)),
update_timeout=dict(type='int', default=30),
wait_for_configured=dict(type='bool', default=True),
wait_delay=dict(type='int', default=10),
timeout=dict(type='int', default=180),
)
argument_spec.update(HETZNER_DEFAULT_ARGUMENT_SPEC)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
# Sanitize input
module.params['status'] = 'active' if (module.params['state'] == 'present') else 'disabled'
if module.params['rules'] is None:
module.params['rules'] = {}
if module.params['rules'].get('input') is None:
module.params['rules']['input'] = []
server_ip = module.params['server_ip']
# https://robot.your-server.de/doc/webservice/en.html#get-firewall-server-ip
url = "{0}/firewall/{1}".format(BASE_URL, server_ip)
if module.params['wait_for_configured']:
try:
result, error = fetch_url_json_with_retries(
module,
url,
check_done_callback=firewall_configured,
check_done_delay=module.params['wait_delay'],
check_done_timeout=module.params['timeout'],
)
except CheckDoneTimeoutException as dummy:
module.fail_json(msg='Timeout while waiting for firewall to be configured.')
else:
result, error = fetch_url_json(module, url)
if not firewall_configured(result, error):
module.fail_json(msg='Firewall configuration cannot be read as it is not configured.')
full_before = result['firewall']
if not full_before.get('rules'):
full_before['rules'] = create_default_rules_object()
before = restrict_firewall_config(full_before)
# Build wanted (after) state and compare
after = dict(before)
changed = False
changed |= update(before, after, module.params, 'port')
changed |= update(before, after, module.params, 'status')
changed |= update(before, after, module.params, 'whitelist_hos')
after['rules'] = create_default_rules_object()
if module.params['status'] == 'active':
for ruleset in RULES:
changed |= update_rules(before, after, module.params, ruleset)
# Update if different
construct_result = True
construct_status = None
if changed and not module.check_mode:
# https://robot.your-server.de/doc/webservice/en.html#post-firewall-server-ip
url = "{0}/firewall/{1}".format(BASE_URL, server_ip)
headers = {"Content-type": "application/x-www-form-urlencoded"}
data = dict(after)
data['whitelist_hos'] = str(data['whitelist_hos']).lower()
del data['rules']
for ruleset in RULES:
encode_rule(data, ruleset, after)
result, error = fetch_url_json(
module,
url,
method='POST',
timeout=module.params['update_timeout'],
data=urlencode(data),
headers=headers,
)
if module.params['wait_for_configured'] and not firewall_configured(result, error):
try:
result, error = fetch_url_json_with_retries(
module,
url,
check_done_callback=firewall_configured,
check_done_delay=module.params['wait_delay'],
check_done_timeout=module.params['timeout'],
skip_first=True,
)
except CheckDoneTimeoutException as e:
result, error = e.result, e.error
module.warn('Timeout while waiting for firewall to be configured.')
full_after = result['firewall']
if not full_after.get('rules'):
full_after['rules'] = create_default_rules_object()
construct_status = full_after['status']
if construct_status != 'in process':
# Only use result if configuration is done, so that diff will be ok
after = restrict_firewall_config(full_after)
construct_result = False
if construct_result:
# Construct result (used for check mode, and configuration still in process)
full_after = dict(full_before)
for k, v in after.items():
if k != 'rules':
full_after[k] = after[k]
if construct_status is not None:
# We want 'in process' here
full_after['status'] = construct_status
full_after['rules'] = dict()
for ruleset in RULES:
full_after['rules'][ruleset] = after['rules'][ruleset]
module.exit_json(
changed=changed,
diff=dict(
before=before,
after=after,
),
firewall=full_after,
)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,357 |
Fix batch of broken links in module docs
|
We're trying to fix as many broken links as possible before modules move into collections. This is the batch of broken links on some Ansible modules.
NOTE: the link checker sometimes reports an error where a link actually works. Ignore those if you find them.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com
###### BROKEN LINKS
~~https://docs.ansible.com/ansible/latest/modules/openssl_privatekey_module.html#openssl-privatekey-module~~
~~├─BROKEN─ https://en.wikipedia.org/wiki/RSA_(cryptosystem~~
~~https://docs.ansible.com/ansible/devel/modules/elb_target_info_module.html#elb-target-info-module~~
~~└─BROKEN─ https://boto3.readthedocs.io/en/latest/%20reference/services/elbv2.html#ElasticLoadBalancingv2.Client.describe_target_health~~
~~https://docs.ansible.com/ansible/devel/plugins/lookup/laps_password.html~~
~~├─BROKEN─ https://keathmilligan.net/python-ldap-and-macos/~~
~~https://docs.ansible.com/ansible/devel/modules/acme_certificate_revoke_module.html#acme-certificate-revoke-module~~
~~─BROKEN─ http://0.0.0.0:1337/modules/Section%205.3.1%20of%20RFC5280~~
https://docs.ansible.com/ansible/devel/modules/airbrake_deployment_module.html#airbrake-deployment-module
└─BROKEN─ http://help.airbrake.io/kb/api-2/deploy-tracking
https://docs.ansible.com/ansible/devel/modules/consul_module.html#consul-module
├─BROKEN─ http://0.0.0.0:1337/v1/agent/service/register
~~https://docs.ansible.com/ansible/devel/modules/hetzner_firewall_module.html#hetzner-firewall-module~~
~~─BROKEN─ http://0.0.0.0:1337/modules/the%20documentation,https:/wiki.hetzner.de/index.php/Robot_Firewall/en#Parameter~~
~~https://docs.ansible.com/ansible/devel/modules/keycloak_client_module.html#keycloak-client-module~~
~~├─BROKEN─ http://www.keycloak.org/docs-api/3.3/rest-api/~~
~~├─BROKEN─ http://www.keycloak.org/docs-api/3.3/rest-api/index.html#_resourceserverrepresentation~~
~~https://docs.ansible.com/ansible/devel/modules/keycloak_clienttemplate_module.html#keycloak-clienttemplate-module~~
~~├─BROKEN─ http://www.keycloak.org/docs-api/3.3/rest-api/~~
~~https://docs.ansible.com/ansible/devel/modules/keycloak_group_module.html#keycloak-group-module~~
~~├─BROKEN─ http://www.keycloak.org/docs-api/3.3/rest-api/~~
https://docs.ansible.com/ansible/devel/modules/meraki_network_module.html#meraki-network-module
├─BROKEN─ http://0.0.0.0:1337/modules/my.meraki.com
~~https://docs.ansible.com/ansible/devel/modules/os_ironic_module.html#os-ironic-module~~
~~├─BROKEN─ https://docs.openstack.org/ironic/latest/install/include/root-device-hints.html~~
~~https://docs.ansible.com/ansible/devel/modules/ovh_ip_failover_module.html#ovh-ip-failover-module~~
~~├─BROKEN─ https://eu.api.ovh.com/g934.first_step_with_api~~
~~https://docs.ansible.com/ansible/devel/modules/ovh_ip_loadbalancing_backend_module.html#ovh-ip-loadbalancing-backend-module~~
~~├─BROKEN─ https://eu.api.ovh.com/g934.first_step_with_api~~
~~https://docs.ansible.com/ansible/devel/modules/packet_volume_attachment_module.html#packet-volume-attachment-module~~
~~├─BROKEN─ https://www.packet.net/developers/api/volumeattachments/~~
https://docs.ansible.com/ansible/devel/modules/postgresql_info_module.html#postgresql-info-module
├─BROKEN─ https://www.postgresql.org/docs/current/catalog-pg-replication-slots.html
~~https://docs.ansible.com/ansible/devel/modules/postgresql_table_module.html#postgresql-table-module~~
~~├─BROKEN─ http://0.0.0.0:1337/modules/postgresql.org/docs/current/datatype.html~~
https://docs.ansible.com/ansible/devel/modules/win_credential_module.html#win-credential-module
├─BROKEN─ https://docs.microsoft.com/en-us/windows/desktop/api/wincred/ns-wincred-_credentiala
https://docs.ansible.com/ansible/devel/modules/win_dsc_module.html#win-dsc-module
├─BROKEN─ https://docs.microsoft.com/en-us/powershell/dsc/resources/resources
https://docs.ansible.com/ansible/devel/modules/win_inet_proxy_module.html#win-inet-proxy-module
├─BROKEN─ http://0.0.0.0:1337/modules/host
https://docs.ansible.com/ansible/devel/modules/win_webpicmd_module.html#win-webpicmd-module
└─BROKEN─ http://www.iis.net/learn/install/web-platform-installer/web-platform-installer-v4-command-line-webpicmdexe-rtw-release
~~https://docs.ansible.com/ansible/devel/modules/xenserver_guest_module.html#xenserver-guest-module~~
~~├─BROKEN─ https://raw.githubusercontent.com/xapi-project/xen-api/master/scripts/examples/python/XenAPI.py~~
~~https://docs.ansible.com/ansible/devel/modules/xenserver_guest_powerstate_module.html#xenserver-guest-powerstate-module~~
~~├─BROKEN─ https://raw.githubusercontent.com/xapi-project/xen-api/master/scripts/examples/python/XenAPI.py~~
~~https://docs.ansible.com/ansible/devel/modules/xenserver_guest_info_module.html#xenserver-guest-info-module~~
~~├─BROKEN─ https://raw.githubusercontent.com/xapi-project/xen-api/master/scripts/examples/python/XenAPI.py~~
https://docs.ansible.com/ansible/devel/modules/zabbix_map_module.html#zabbix-map-module
├─BROKEN─ https://en.wikipedia.org/wiki/DOT_(graph_description_language
|
https://github.com/ansible/ansible/issues/67357
|
https://github.com/ansible/ansible/pull/67360
|
53e043b5febd30f258a233f51b180a543300151b
|
11e75b0af256f9f09c54365282a4969a5fe0390e
| 2020-02-12T20:18:07Z |
python
| 2020-02-12T21:41:40Z |
lib/ansible/plugins/doc_fragments/docker.py
|
# -*- coding: utf-8 -*-
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
# Docker doc fragment
DOCUMENTATION = r'''
options:
docker_host:
description:
- The URL or Unix socket path used to connect to the Docker API. To connect to a remote host, provide the
TCP connection string. For example, C(tcp://192.0.2.23:2376). If TLS is used to encrypt the connection,
the module will automatically replace C(tcp) in the connection URL with C(https).
- If the value is not specified in the task, the value of environment variable C(DOCKER_HOST) will be used
instead. If the environment variable is not set, the default value will be used.
type: str
default: unix://var/run/docker.sock
aliases: [ docker_url ]
tls_hostname:
description:
- When verifying the authenticity of the Docker Host server, provide the expected name of the server.
- If the value is not specified in the task, the value of environment variable C(DOCKER_TLS_HOSTNAME) will
be used instead. If the environment variable is not set, the default value will be used.
type: str
default: localhost
api_version:
description:
- The version of the Docker API running on the Docker Host.
- Defaults to the latest version of the API supported by Docker SDK for Python and the docker daemon.
- If the value is not specified in the task, the value of environment variable C(DOCKER_API_VERSION) will be
used instead. If the environment variable is not set, the default value will be used.
type: str
default: auto
aliases: [ docker_api_version ]
timeout:
description:
- The maximum amount of time in seconds to wait on a response from the API.
- If the value is not specified in the task, the value of environment variable C(DOCKER_TIMEOUT) will be used
instead. If the environment variable is not set, the default value will be used.
type: int
default: 60
ca_cert:
description:
- Use a CA certificate when performing server verification by providing the path to a CA certificate file.
- If the value is not specified in the task and the environment variable C(DOCKER_CERT_PATH) is set,
the file C(ca.pem) from the directory specified in the environment variable C(DOCKER_CERT_PATH) will be used.
type: path
aliases: [ tls_ca_cert, cacert_path ]
client_cert:
description:
- Path to the client's TLS certificate file.
- If the value is not specified in the task and the environment variable C(DOCKER_CERT_PATH) is set,
the file C(cert.pem) from the directory specified in the environment variable C(DOCKER_CERT_PATH) will be used.
type: path
aliases: [ tls_client_cert, cert_path ]
client_key:
description:
- Path to the client's TLS key file.
- If the value is not specified in the task and the environment variable C(DOCKER_CERT_PATH) is set,
the file C(key.pem) from the directory specified in the environment variable C(DOCKER_CERT_PATH) will be used.
type: path
aliases: [ tls_client_key, key_path ]
ssl_version:
description:
- Provide a valid SSL version number. Default value determined by ssl.py module.
- If the value is not specified in the task, the value of environment variable C(DOCKER_SSL_VERSION) will be
used instead.
type: str
tls:
description:
- Secure the connection to the API by using TLS without verifying the authenticity of the Docker host
server. Note that if I(validate_certs) is set to C(yes) as well, it will take precedence.
- If the value is not specified in the task, the value of environment variable C(DOCKER_TLS) will be used
instead. If the environment variable is not set, the default value will be used.
type: bool
default: no
validate_certs:
description:
- Secure the connection to the API by using TLS and verifying the authenticity of the Docker host server.
- If the value is not specified in the task, the value of environment variable C(DOCKER_TLS_VERIFY) will be
used instead. If the environment variable is not set, the default value will be used.
type: bool
default: no
aliases: [ tls_verify ]
debug:
description:
- Debug mode
type: bool
default: no
notes:
- Connect to the Docker daemon by providing parameters with each task or by defining environment variables.
You can define C(DOCKER_HOST), C(DOCKER_TLS_HOSTNAME), C(DOCKER_API_VERSION), C(DOCKER_CERT_PATH), C(DOCKER_SSL_VERSION),
C(DOCKER_TLS), C(DOCKER_TLS_VERIFY) and C(DOCKER_TIMEOUT). If you are using docker machine, run the script shipped
with the product that sets up the environment. It will set these variables for you. See
U(https://docker-py.readthedocs.io/en/stable/machine/) for more details.
- When connecting to Docker daemon with TLS, you might need to install additional Python packages.
For the Docker SDK for Python, version 2.4 or newer, this can be done by installing C(docker[tls]) with M(pip).
- Note that the Docker SDK for Python only allows specifying the path to the Docker configuration for very few functions.
In general, it will use C($HOME/.docker/config.json) if the C(DOCKER_CONFIG) environment variable is not specified,
and use C($DOCKER_CONFIG/config.json) otherwise.
'''
# Additional, more specific stuff for minimal Docker SDK for Python version < 2.0
DOCKER_PY_1_DOCUMENTATION = r'''
options: {}
requirements:
- "Docker SDK for Python: Please note that the L(docker-py,https://pypi.org/project/docker-py/)
Python module has been superseded by L(docker,https://pypi.org/project/docker/)
(see L(here,https://github.com/docker/docker-py/issues/1310) for details).
For Python 2.6, C(docker-py) must be used. Otherwise, it is recommended to
install the C(docker) Python module. Note that both modules should *not*
be installed at the same time. Also note that when both modules are installed
and one of them is uninstalled, the other might no longer function and a
reinstall of it is required."
'''
# Additional, more specific stuff for minimal Docker SDK for Python version >= 2.0.
# Note that Docker SDK for Python >= 2.0 requires Python 2.7 or newer.
DOCKER_PY_2_DOCUMENTATION = r'''
options: {}
requirements:
- "Python >= 2.7"
- "Docker SDK for Python: Please note that the L(docker-py,https://pypi.org/project/docker-py/)
Python module has been superseded by L(docker,https://pypi.org/project/docker/)
(see L(here,https://github.com/docker/docker-py/issues/1310) for details).
This module does *not* work with docker-py."
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,355 |
Fix broken links in docker modules
|
We're trying to fix as many broken links as possible before modules move into collections. This is the batch of broken links on docker modules.
NOTE: the link checker sometimes reports an error where a link actually works. Ignore those if you find them.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com
###### BROKEN LINKS
https://docs.ansible.com/ansible/devel/modules/docker_compose_module.html#docker-compose-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_image_module.html#docker-image-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_config_module.html#docker-config-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_container_info_module.html#docker-container-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_host_info_module.html#docker-host-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_image_info_module.html#docker-image-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_login_module.html#docker-login-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_network_module.html#docker-network-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_network_info_module.html#docker-network-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_node_module.html#docker-node-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_node_info_module.html#docker-node-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_prune_module.html#docker-prune-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_secret_module.html#docker-secret-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_swarm_module.html#docker-swarm-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_swarm_info_module.html#docker-swarm-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_swarm_service_module.html#docker-swarm-service-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_swarm_service_info_module.html#docker-swarm-service-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_volume_module.html#docker-volume-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_volume_info_module.html#docker-volume-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
|
https://github.com/ansible/ansible/issues/67355
|
https://github.com/ansible/ansible/pull/67360
|
53e043b5febd30f258a233f51b180a543300151b
|
11e75b0af256f9f09c54365282a4969a5fe0390e
| 2020-02-12T20:13:45Z |
python
| 2020-02-12T21:41:40Z |
lib/ansible/modules/crypto/acme/acme_certificate_revoke.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2016 Michael Gruener <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: acme_certificate_revoke
author: "Felix Fontein (@felixfontein)"
version_added: "2.7"
short_description: Revoke certificates with the ACME protocol
description:
- "Allows to revoke certificates issued by a CA supporting the
L(ACME protocol,https://tools.ietf.org/html/rfc8555),
such as L(Let's Encrypt,https://letsencrypt.org/)."
notes:
- "Exactly one of C(account_key_src), C(account_key_content),
C(private_key_src) or C(private_key_content) must be specified."
- "Trying to revoke an already revoked certificate
should result in an unchanged status, even if the revocation reason
was different than the one specified here. Also, depending on the
server, it can happen that some other error is returned if the
certificate has already been revoked."
seealso:
- name: The Let's Encrypt documentation
description: Documentation for the Let's Encrypt Certification Authority.
Provides useful information for example on rate limits.
link: https://letsencrypt.org/docs/
- name: Automatic Certificate Management Environment (ACME)
description: The specification of the ACME protocol (RFC 8555).
link: https://tools.ietf.org/html/rfc8555
- module: acme_inspect
description: Allows to debug problems.
extends_documentation_fragment:
- acme
options:
certificate:
description:
- "Path to the certificate to revoke."
type: path
required: yes
account_key_src:
description:
- "Path to a file containing the ACME account RSA or Elliptic Curve
key."
- "RSA keys can be created with C(openssl rsa ...). Elliptic curve keys can
be created with C(openssl ecparam -genkey ...). Any other tool creating
private keys in PEM format can be used as well."
- "Mutually exclusive with C(account_key_content)."
- "Required if C(account_key_content) is not used."
type: path
account_key_content:
description:
- "Content of the ACME account RSA or Elliptic Curve key."
- "Note that exactly one of C(account_key_src), C(account_key_content),
C(private_key_src) or C(private_key_content) must be specified."
- "I(Warning): the content will be written into a temporary file, which will
be deleted by Ansible when the module completes. Since this is an
important private key — it can be used to change the account key,
or to revoke your certificates without knowing their private keys
—, this might not be acceptable."
- "In case C(cryptography) is used, the content is not written into a
temporary file. It can still happen that it is written to disk by
Ansible in the process of moving the module with its argument to
the node where it is executed."
type: str
private_key_src:
description:
- "Path to the certificate's private key."
- "Note that exactly one of C(account_key_src), C(account_key_content),
C(private_key_src) or C(private_key_content) must be specified."
type: path
private_key_content:
description:
- "Content of the certificate's private key."
- "Note that exactly one of C(account_key_src), C(account_key_content),
C(private_key_src) or C(private_key_content) must be specified."
- "I(Warning): the content will be written into a temporary file, which will
be deleted by Ansible when the module completes. Since this is an
important private key — it can be used to change the account key,
or to revoke your certificates without knowing their private keys
—, this might not be acceptable."
- "In case C(cryptography) is used, the content is not written into a
temporary file. It can still happen that it is written to disk by
Ansible in the process of moving the module with its argument to
the node where it is executed."
type: str
revoke_reason:
description:
- "One of the revocation reasonCodes defined in
L(Section 5.3.1 of RFC5280,https://tools.ietf.org/html/rfc5280#section-5.3.1)."
- "Possible values are C(0) (unspecified), C(1) (keyCompromise),
C(2) (cACompromise), C(3) (affiliationChanged), C(4) (superseded),
C(5) (cessationOfOperation), C(6) (certificateHold),
C(8) (removeFromCRL), C(9) (privilegeWithdrawn),
C(10) (aACompromise)"
type: int
'''
EXAMPLES = '''
- name: Revoke certificate with account key
acme_certificate_revoke:
account_key_src: /etc/pki/cert/private/account.key
certificate: /etc/httpd/ssl/sample.com.crt
- name: Revoke certificate with certificate's private key
acme_certificate_revoke:
private_key_src: /etc/httpd/ssl/sample.com.key
certificate: /etc/httpd/ssl/sample.com.crt
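# Illustrative sketch, not part of the original examples: passing an explicit
# revocation reason code (1 = keyCompromise). Paths are placeholder assumptions.
- name: Revoke certificate with a revocation reason
  acme_certificate_revoke:
    account_key_src: /etc/pki/cert/private/account.key
    certificate: /etc/httpd/ssl/sample.com.crt
    revoke_reason: 1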
'''
RETURN = '''
'''
from ansible.module_utils.acme import (
ModuleFailException,
ACMEAccount,
nopad_b64,
pem_to_der,
handle_standard_module_arguments,
get_default_argspec,
)
from ansible.module_utils.basic import AnsibleModule
def main():
argument_spec = get_default_argspec()
argument_spec.update(dict(
private_key_src=dict(type='path'),
private_key_content=dict(type='str', no_log=True),
certificate=dict(type='path', required=True),
revoke_reason=dict(type='int'),
))
module = AnsibleModule(
argument_spec=argument_spec,
required_one_of=(
['account_key_src', 'account_key_content', 'private_key_src', 'private_key_content'],
),
mutually_exclusive=(
['account_key_src', 'account_key_content', 'private_key_src', 'private_key_content'],
),
supports_check_mode=False,
)
handle_standard_module_arguments(module)
try:
account = ACMEAccount(module)
# Load certificate
certificate = pem_to_der(module.params.get('certificate'))
certificate = nopad_b64(certificate)
# Construct payload
payload = {
'certificate': certificate
}
if module.params.get('revoke_reason') is not None:
payload['reason'] = module.params.get('revoke_reason')
# Determine endpoint
if module.params.get('acme_version') == 1:
endpoint = account.directory['revoke-cert']
payload['resource'] = 'revoke-cert'
else:
endpoint = account.directory['revokeCert']
# Get hold of private key (if available) and make sure it comes from disk
private_key = module.params.get('private_key_src')
private_key_content = module.params.get('private_key_content')
# Revoke certificate
if private_key or private_key_content:
# Step 1: load and parse private key
error, private_key_data = account.parse_key(private_key, private_key_content)
if error:
raise ModuleFailException("error while parsing private key: %s" % error)
# Step 2: sign revocation request with private key
jws_header = {
"alg": private_key_data['alg'],
"jwk": private_key_data['jwk'],
}
result, info = account.send_signed_request(endpoint, payload, key_data=private_key_data, jws_header=jws_header)
else:
# Step 1: get hold of account URI
created, account_data = account.setup_account(allow_creation=False)
if created:
raise AssertionError('Unwanted account creation')
if account_data is None:
raise ModuleFailException(msg='Account does not exist or is deactivated.')
# Step 2: sign revocation request with account key
result, info = account.send_signed_request(endpoint, payload)
if info['status'] != 200:
already_revoked = False
# Standardized error from draft 14 on (https://tools.ietf.org/html/rfc8555#section-7.6)
if result.get('type') == 'urn:ietf:params:acme:error:alreadyRevoked':
already_revoked = True
else:
# Hack for Boulder errors
if module.params.get('acme_version') == 1:
error_type = 'urn:acme:error:malformed'
else:
error_type = 'urn:ietf:params:acme:error:malformed'
if result.get('type') == error_type and result.get('detail') == 'Certificate already revoked':
# Fallback: boulder returns this in case the certificate was already revoked.
already_revoked = True
# If we know the certificate was already revoked, we don't fail,
# but successfully terminate while indicating no change
if already_revoked:
module.exit_json(changed=False)
raise ModuleFailException('Error revoking certificate: {0} {1}'.format(info['status'], result))
module.exit_json(changed=True)
except ModuleFailException as e:
e.do_fail(module)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,355 |
Fix broken links in docker modules
|
We're trying to fix as many broken links as possible before modules move into collections. This is the batch of broken links on docker modules.
NOTE: the link checker sometimes reports an error where a link actually works. Ignore those if you find them.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com
###### BROKEN LINKS
https://docs.ansible.com/ansible/devel/modules/docker_compose_module.html#docker-compose-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_image_module.html#docker-image-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_config_module.html#docker-config-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_container_info_module.html#docker-container-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_host_info_module.html#docker-host-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_image_info_module.html#docker-image-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_login_module.html#docker-login-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_network_module.html#docker-network-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_network_info_module.html#docker-network-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_node_module.html#docker-node-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_node_info_module.html#docker-node-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_prune_module.html#docker-prune-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_secret_module.html#docker-secret-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_swarm_module.html#docker-swarm-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_swarm_info_module.html#docker-swarm-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_swarm_service_module.html#docker-swarm-service-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_swarm_service_info_module.html#docker-swarm-service-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_volume_module.html#docker-volume-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_volume_info_module.html#docker-volume-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
|
https://github.com/ansible/ansible/issues/67355
|
https://github.com/ansible/ansible/pull/67360
|
53e043b5febd30f258a233f51b180a543300151b
|
11e75b0af256f9f09c54365282a4969a5fe0390e
| 2020-02-12T20:13:45Z |
python
| 2020-02-12T21:41:40Z |
lib/ansible/modules/crypto/openssl_privatekey.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2016, Yanis Guenane <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: openssl_privatekey
version_added: "2.3"
short_description: Generate OpenSSL private keys
description:
- This module allows one to (re)generate OpenSSL private keys.
- One can generate L(RSA,https://en.wikipedia.org/wiki/RSA_%28cryptosystem%29),
L(DSA,https://en.wikipedia.org/wiki/Digital_Signature_Algorithm),
L(ECC,https://en.wikipedia.org/wiki/Elliptic-curve_cryptography) or
L(EdDSA,https://en.wikipedia.org/wiki/EdDSA) private keys.
- Keys are generated in PEM format.
- "Please note that the module regenerates private keys if they don't match
the module's options. In particular, if you provide another passphrase
(or specify none), change the keysize, etc., the private key will be
regenerated. If you are concerned that this could **overwrite your private key**,
consider using the I(backup) option."
- The module can use the cryptography Python library, or the pyOpenSSL Python
library. By default, it tries to detect which one is available. This can be
overridden with the I(select_crypto_backend) option. Please note that the
PyOpenSSL backend was deprecated in Ansible 2.9 and will be removed in Ansible 2.13.
requirements:
- Either cryptography >= 1.2.3 (older versions might work as well)
- Or pyOpenSSL
author:
- Yanis Guenane (@Spredzy)
- Felix Fontein (@felixfontein)
options:
state:
description:
- Whether the private key should exist or not, taking action if the state is different from what is stated.
type: str
default: present
choices: [ absent, present ]
size:
description:
- Size (in bits) of the TLS/SSL key to generate.
type: int
default: 4096
type:
description:
- The algorithm used to generate the TLS/SSL private key.
- Note that C(ECC), C(X25519), C(X448), C(Ed25519) and C(Ed448) require the C(cryptography) backend.
C(X25519) needs cryptography 2.5 or newer, while C(X448), C(Ed25519) and C(Ed448) require
cryptography 2.6 or newer. For C(ECC), the minimal cryptography version required depends on the
I(curve) option.
type: str
default: RSA
choices: [ DSA, ECC, Ed25519, Ed448, RSA, X25519, X448 ]
curve:
description:
- Note that not all curves are supported by all versions of C(cryptography).
- For maximal interoperability, C(secp384r1) or C(secp256r1) should be used.
- We use the curve names as defined in the
L(IANA registry for TLS,https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-8).
type: str
choices:
- secp384r1
- secp521r1
- secp224r1
- secp192r1
- secp256r1
- secp256k1
- brainpoolP256r1
- brainpoolP384r1
- brainpoolP512r1
- sect571k1
- sect409k1
- sect283k1
- sect233k1
- sect163k1
- sect571r1
- sect409r1
- sect283r1
- sect233r1
- sect163r2
version_added: "2.8"
force:
description:
- Should the key be regenerated even if it already exists.
type: bool
default: no
path:
description:
- Name of the file in which the generated TLS/SSL private key will be written. It will have 0600 mode.
type: path
required: true
passphrase:
description:
- The passphrase for the private key.
type: str
version_added: "2.4"
cipher:
description:
- The cipher to encrypt the private key. (Valid values can be found by
running `openssl list -cipher-algorithms` or `openssl list-cipher-algorithms`,
depending on your OpenSSL version.)
- When using the C(cryptography) backend, use C(auto).
type: str
version_added: "2.4"
select_crypto_backend:
description:
- Determines which crypto backend to use.
- The default choice is C(auto), which tries to use C(cryptography) if available, and falls back to C(pyopenssl).
- If set to C(pyopenssl), will try to use the L(pyOpenSSL,https://pypi.org/project/pyOpenSSL/) library.
- If set to C(cryptography), will try to use the L(cryptography,https://cryptography.io/) library.
- Please note that the C(pyopenssl) backend has been deprecated in Ansible 2.9, and will be removed in Ansible 2.13.
From that point on, only the C(cryptography) backend will be available.
type: str
default: auto
choices: [ auto, cryptography, pyopenssl ]
version_added: "2.8"
format:
description:
- Determines which format the private key is written in. By default, PKCS1 (traditional OpenSSL format)
is used for all keys which support it. Please note that not every key can be exported in any format.
- The value C(auto) selects a format based on the key format. The value C(auto_ignore) does the same,
but for existing private key files, it will not force a regenerate when its format is not the automatically
selected one for generation.
- Note that if the format for an existing private key mismatches, the key is *regenerated* by default.
To change this behavior, use the I(format_mismatch) option.
- The I(format) option is only supported by the C(cryptography) backend. The C(pyopenssl) backend will
fail if a value different from C(auto_ignore) is used.
type: str
default: auto_ignore
choices: [ pkcs1, pkcs8, raw, auto, auto_ignore ]
version_added: "2.10"
format_mismatch:
description:
- Determines behavior of the module if the format of a private key does not match the expected format, but all
other parameters are as expected.
- If set to C(regenerate) (default), generates a new private key.
- If set to C(convert), the key will be converted to the new format instead.
- Only supported by the C(cryptography) backend.
type: str
default: regenerate
choices: [ regenerate, convert ]
version_added: "2.10"
backup:
description:
- Create a backup file including a timestamp so you can get
the original private key back if you overwrote it with a new one by accident.
type: bool
default: no
version_added: "2.8"
return_content:
description:
- If set to C(yes), will return the (current or generated) private key's content as I(privatekey).
- Note that especially if the private key is not encrypted, you have to make sure that the returned
value is treated appropriately and not accidentally written to logs etc.! Use with care!
type: bool
default: no
version_added: "2.10"
extends_documentation_fragment:
- files
seealso:
- module: openssl_certificate
- module: openssl_csr
- module: openssl_dhparam
- module: openssl_pkcs12
- module: openssl_publickey
'''
EXAMPLES = r'''
- name: Generate an OpenSSL private key with the default values (4096 bits, RSA)
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
- name: Generate an OpenSSL private key with the default values (4096 bits, RSA) and a passphrase
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
passphrase: ansible
cipher: aes256
- name: Generate an OpenSSL private key with a different size (2048 bits)
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
size: 2048
- name: Force regenerate an OpenSSL private key if it already exists
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
force: yes
- name: Generate an OpenSSL private key with a different algorithm (DSA)
openssl_privatekey:
path: /etc/ssl/private/ansible.com.pem
type: DSA
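# Illustrative sketch, not part of the original examples: an elliptic curve key;
# I(curve) is required whenever I(type) is C(ECC).
- name: Generate an OpenSSL private key with elliptic curve cryptography (ECC)
  openssl_privatekey:
    path: /etc/ssl/private/ansible.com.pem
    type: ECC
    curve: secp256r1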
'''
RETURN = r'''
size:
description: Size (in bits) of the TLS/SSL private key.
returned: changed or success
type: int
sample: 4096
type:
description: Algorithm used to generate the TLS/SSL private key.
returned: changed or success
type: str
sample: RSA
curve:
description: Elliptic curve used to generate the TLS/SSL private key.
returned: changed or success, and I(type) is C(ECC)
type: str
sample: secp256r1
filename:
description: Path to the generated TLS/SSL private key file.
returned: changed or success
type: str
sample: /etc/ssl/private/ansible.com.pem
fingerprint:
description:
- The fingerprint of the public key. Fingerprint will be generated for each C(hashlib.algorithms) available.
- The PyOpenSSL backend requires PyOpenSSL >= 16.0 for meaningful output.
returned: changed or success
type: dict
sample:
md5: "84:75:71:72:8d:04:b5:6c:4d:37:6d:66:83:f5:4c:29"
sha1: "51:cc:7c:68:5d:eb:41:43:88:7e:1a:ae:c7:f8:24:72:ee:71:f6:10"
sha224: "b1:19:a6:6c:14:ac:33:1d:ed:18:50:d3:06:5c:b2:32:91:f1:f1:52:8c:cb:d5:75:e9:f5:9b:46"
sha256: "41:ab:c7:cb:d5:5f:30:60:46:99:ac:d4:00:70:cf:a1:76:4f:24:5d:10:24:57:5d:51:6e:09:97:df:2f:de:c7"
sha384: "85:39:50:4e:de:d9:19:33:40:70:ae:10:ab:59:24:19:51:c3:a2:e4:0b:1c:b1:6e:dd:b3:0c:d9:9e:6a:46:af:da:18:f8:ef:ae:2e:c0:9a:75:2c:9b:b3:0f:3a:5f:3d"
sha512: "fd:ed:5e:39:48:5f:9f:fe:7f:25:06:3f:79:08:cd:ee:a5:e7:b3:3d:13:82:87:1f:84:e1:f5:c7:28:77:53:94:86:56:38:69:f0:d9:35:22:01:1e:a6:60:...:0f:9b"
backup_file:
description: Name of backup file created.
returned: changed and if I(backup) is C(yes)
type: str
sample: /path/to/privatekey.pem.2019-03-09@11:22~
privatekey:
description:
- The (current or generated) private key's content.
- Will be Base64-encoded if the key is in raw format.
returned: if I(state) is C(present) and I(return_content) is C(yes)
type: str
version_added: "2.10"
'''
import abc
import base64
import os
import traceback
from distutils.version import LooseVersion
MINIMAL_PYOPENSSL_VERSION = '0.6'
MINIMAL_CRYPTOGRAPHY_VERSION = '1.2.3'
PYOPENSSL_IMP_ERR = None
try:
import OpenSSL
from OpenSSL import crypto
PYOPENSSL_VERSION = LooseVersion(OpenSSL.__version__)
except ImportError:
PYOPENSSL_IMP_ERR = traceback.format_exc()
PYOPENSSL_FOUND = False
else:
PYOPENSSL_FOUND = True
CRYPTOGRAPHY_IMP_ERR = None
try:
import cryptography
import cryptography.exceptions
import cryptography.hazmat.backends
import cryptography.hazmat.primitives.serialization
import cryptography.hazmat.primitives.asymmetric.rsa
import cryptography.hazmat.primitives.asymmetric.dsa
import cryptography.hazmat.primitives.asymmetric.ec
import cryptography.hazmat.primitives.asymmetric.utils
CRYPTOGRAPHY_VERSION = LooseVersion(cryptography.__version__)
except ImportError:
CRYPTOGRAPHY_IMP_ERR = traceback.format_exc()
CRYPTOGRAPHY_FOUND = False
else:
CRYPTOGRAPHY_FOUND = True
from ansible.module_utils.crypto import (
CRYPTOGRAPHY_HAS_X25519,
CRYPTOGRAPHY_HAS_X25519_FULL,
CRYPTOGRAPHY_HAS_X448,
CRYPTOGRAPHY_HAS_ED25519,
CRYPTOGRAPHY_HAS_ED448,
)
from ansible.module_utils import crypto as crypto_utils
from ansible.module_utils._text import to_native, to_bytes
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class PrivateKeyError(crypto_utils.OpenSSLObjectError):
pass
class PrivateKeyBase(crypto_utils.OpenSSLObject):
def __init__(self, module):
super(PrivateKeyBase, self).__init__(
module.params['path'],
module.params['state'],
module.params['force'],
module.check_mode
)
self.size = module.params['size']
self.passphrase = module.params['passphrase']
self.cipher = module.params['cipher']
self.privatekey = None
self.fingerprint = {}
self.format = module.params['format']
self.format_mismatch = module.params['format_mismatch']
self.privatekey_bytes = None
self.return_content = module.params['return_content']
self.backup = module.params['backup']
self.backup_file = None
if module.params['mode'] is None:
module.params['mode'] = '0600'
@abc.abstractmethod
def _generate_private_key(self):
"""(Re-)Generate private key."""
pass
@abc.abstractmethod
def _get_private_key_data(self):
"""Return bytes for self.privatekey"""
pass
@abc.abstractmethod
def _get_fingerprint(self):
pass
def generate(self, module):
"""Generate a keypair."""
if not self.check(module, perms_required=False, ignore_conversion=True) or self.force:
# Regenerate
if self.backup:
self.backup_file = module.backup_local(self.path)
self._generate_private_key()
privatekey_data = self._get_private_key_data()
if self.return_content:
self.privatekey_bytes = privatekey_data
crypto_utils.write_file(module, privatekey_data, 0o600)
self.changed = True
elif not self.check(module, perms_required=False, ignore_conversion=False):
# Convert
if self.backup:
self.backup_file = module.backup_local(self.path)
privatekey_data = self._get_private_key_data()
if self.return_content:
self.privatekey_bytes = privatekey_data
crypto_utils.write_file(module, privatekey_data, 0o600)
self.changed = True
self.fingerprint = self._get_fingerprint()
file_args = module.load_file_common_arguments(module.params)
if module.set_fs_attributes_if_different(file_args, False):
self.changed = True
def remove(self, module):
if self.backup:
self.backup_file = module.backup_local(self.path)
super(PrivateKeyBase, self).remove(module)
@abc.abstractmethod
def _check_passphrase(self):
pass
@abc.abstractmethod
def _check_size_and_type(self):
pass
@abc.abstractmethod
def _check_format(self):
pass
def check(self, module, perms_required=True, ignore_conversion=True):
"""Ensure the resource is in its desired state."""
state_and_perms = super(PrivateKeyBase, self).check(module, perms_required)
if not state_and_perms or not self._check_passphrase():
return False
if not self._check_size_and_type():
return False
if not self._check_format():
if not ignore_conversion or self.format_mismatch != 'convert':
return False
return True
def dump(self):
"""Serialize the object into a dictionary."""
result = {
'size': self.size,
'filename': self.path,
'changed': self.changed,
'fingerprint': self.fingerprint,
}
if self.backup_file:
result['backup_file'] = self.backup_file
if self.return_content:
if self.privatekey_bytes is None:
self.privatekey_bytes = crypto_utils.load_file_if_exists(self.path, ignore_errors=True)
if self.privatekey_bytes:
if crypto_utils.identify_private_key_format(self.privatekey_bytes) == 'raw':
result['privatekey'] = base64.b64encode(self.privatekey_bytes)
else:
result['privatekey'] = self.privatekey_bytes.decode('utf-8')
else:
result['privatekey'] = None
return result
# Implementation using pyOpenSSL
class PrivateKeyPyOpenSSL(PrivateKeyBase):
def __init__(self, module):
super(PrivateKeyPyOpenSSL, self).__init__(module)
if module.params['type'] == 'RSA':
self.type = crypto.TYPE_RSA
elif module.params['type'] == 'DSA':
self.type = crypto.TYPE_DSA
else:
module.fail_json(msg="PyOpenSSL backend only supports RSA and DSA keys.")
if self.format != 'auto_ignore':
module.fail_json(msg="PyOpenSSL backend only supports auto_ignore format.")
def _generate_private_key(self):
"""(Re-)Generate private key."""
self.privatekey = crypto.PKey()
try:
self.privatekey.generate_key(self.type, self.size)
except (TypeError, ValueError) as exc:
raise PrivateKeyError(exc)
def _get_private_key_data(self):
"""Return bytes for self.privatekey"""
if self.cipher and self.passphrase:
return crypto.dump_privatekey(crypto.FILETYPE_PEM, self.privatekey,
self.cipher, to_bytes(self.passphrase))
else:
return crypto.dump_privatekey(crypto.FILETYPE_PEM, self.privatekey)
def _get_fingerprint(self):
return crypto_utils.get_fingerprint(self.path, self.passphrase)
def _check_passphrase(self):
try:
crypto_utils.load_privatekey(self.path, self.passphrase)
return True
except Exception as dummy:
return False
def _check_size_and_type(self):
def _check_size(privatekey):
return self.size == privatekey.bits()
def _check_type(privatekey):
return self.type == privatekey.type()
try:
privatekey = crypto_utils.load_privatekey(self.path, self.passphrase)
except crypto_utils.OpenSSLBadPassphraseError as exc:
raise PrivateKeyError(exc)
return _check_size(privatekey) and _check_type(privatekey)
def _check_format(self):
# Not supported by this backend
return True
def dump(self):
"""Serialize the object into a dictionary."""
result = super(PrivateKeyPyOpenSSL, self).dump()
if self.type == crypto.TYPE_RSA:
result['type'] = 'RSA'
else:
result['type'] = 'DSA'
return result
# Implementation using cryptography
class PrivateKeyCryptography(PrivateKeyBase):
def _get_ec_class(self, ectype):
ecclass = cryptography.hazmat.primitives.asymmetric.ec.__dict__.get(ectype)
if ecclass is None:
self.module.fail_json(msg='Your cryptography version does not support {0}'.format(ectype))
return ecclass
def _add_curve(self, name, ectype, deprecated=False):
def create(size):
ecclass = self._get_ec_class(ectype)
return ecclass()
def verify(privatekey):
ecclass = self._get_ec_class(ectype)
return isinstance(privatekey.private_numbers().public_numbers.curve, ecclass)
self.curves[name] = {
'create': create,
'verify': verify,
'deprecated': deprecated,
}
def __init__(self, module):
super(PrivateKeyCryptography, self).__init__(module)
self.curves = dict()
self._add_curve('secp384r1', 'SECP384R1')
self._add_curve('secp521r1', 'SECP521R1')
self._add_curve('secp224r1', 'SECP224R1')
self._add_curve('secp192r1', 'SECP192R1')
self._add_curve('secp256r1', 'SECP256R1')
self._add_curve('secp256k1', 'SECP256K1')
self._add_curve('brainpoolP256r1', 'BrainpoolP256R1', deprecated=True)
self._add_curve('brainpoolP384r1', 'BrainpoolP384R1', deprecated=True)
self._add_curve('brainpoolP512r1', 'BrainpoolP512R1', deprecated=True)
self._add_curve('sect571k1', 'SECT571K1', deprecated=True)
self._add_curve('sect409k1', 'SECT409K1', deprecated=True)
self._add_curve('sect283k1', 'SECT283K1', deprecated=True)
self._add_curve('sect233k1', 'SECT233K1', deprecated=True)
self._add_curve('sect163k1', 'SECT163K1', deprecated=True)
self._add_curve('sect571r1', 'SECT571R1', deprecated=True)
self._add_curve('sect409r1', 'SECT409R1', deprecated=True)
self._add_curve('sect283r1', 'SECT283R1', deprecated=True)
self._add_curve('sect233r1', 'SECT233R1', deprecated=True)
self._add_curve('sect163r2', 'SECT163R2', deprecated=True)
self.module = module
self.cryptography_backend = cryptography.hazmat.backends.default_backend()
self.type = module.params['type']
self.curve = module.params['curve']
if not CRYPTOGRAPHY_HAS_X25519 and self.type == 'X25519':
self.module.fail_json(msg='Your cryptography version does not support X25519')
if not CRYPTOGRAPHY_HAS_X25519_FULL and self.type == 'X25519':
self.module.fail_json(msg='Your cryptography version does not support X25519 serialization')
if not CRYPTOGRAPHY_HAS_X448 and self.type == 'X448':
self.module.fail_json(msg='Your cryptography version does not support X448')
if not CRYPTOGRAPHY_HAS_ED25519 and self.type == 'Ed25519':
self.module.fail_json(msg='Your cryptography version does not support Ed25519')
if not CRYPTOGRAPHY_HAS_ED448 and self.type == 'Ed448':
self.module.fail_json(msg='Your cryptography version does not support Ed448')
def _get_wanted_format(self):
if self.format not in ('auto', 'auto_ignore'):
return self.format
if self.type in ('X25519', 'X448', 'Ed25519', 'Ed448'):
return 'pkcs8'
else:
return 'pkcs1'
def _generate_private_key(self):
"""(Re-)Generate private key."""
try:
if self.type == 'RSA':
self.privatekey = cryptography.hazmat.primitives.asymmetric.rsa.generate_private_key(
public_exponent=65537, # OpenSSL always uses this
key_size=self.size,
backend=self.cryptography_backend
)
if self.type == 'DSA':
self.privatekey = cryptography.hazmat.primitives.asymmetric.dsa.generate_private_key(
key_size=self.size,
backend=self.cryptography_backend
)
if CRYPTOGRAPHY_HAS_X25519_FULL and self.type == 'X25519':
self.privatekey = cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey.generate()
if CRYPTOGRAPHY_HAS_X448 and self.type == 'X448':
self.privatekey = cryptography.hazmat.primitives.asymmetric.x448.X448PrivateKey.generate()
if CRYPTOGRAPHY_HAS_ED25519 and self.type == 'Ed25519':
self.privatekey = cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PrivateKey.generate()
if CRYPTOGRAPHY_HAS_ED448 and self.type == 'Ed448':
self.privatekey = cryptography.hazmat.primitives.asymmetric.ed448.Ed448PrivateKey.generate()
if self.type == 'ECC' and self.curve in self.curves:
if self.curves[self.curve]['deprecated']:
self.module.warn('Elliptic curves of type {0} should not be used for new keys!'.format(self.curve))
self.privatekey = cryptography.hazmat.primitives.asymmetric.ec.generate_private_key(
curve=self.curves[self.curve]['create'](self.size),
backend=self.cryptography_backend
)
except cryptography.exceptions.UnsupportedAlgorithm as dummy:
self.module.fail_json(msg='Cryptography backend does not support the algorithm required for {0}'.format(self.type))
def _get_private_key_data(self):
"""Return bytes for self.privatekey"""
# Select export format and encoding
try:
export_format = self._get_wanted_format()
export_encoding = cryptography.hazmat.primitives.serialization.Encoding.PEM
if export_format == 'pkcs1':
# "TraditionalOpenSSL" format is PKCS1
export_format = cryptography.hazmat.primitives.serialization.PrivateFormat.TraditionalOpenSSL
elif export_format == 'pkcs8':
export_format = cryptography.hazmat.primitives.serialization.PrivateFormat.PKCS8
elif export_format == 'raw':
export_format = cryptography.hazmat.primitives.serialization.PrivateFormat.Raw
export_encoding = cryptography.hazmat.primitives.serialization.Encoding.Raw
except AttributeError:
self.module.fail_json(msg='Cryptography backend does not support the selected output format "{0}"'.format(self.format))
# Select key encryption
encryption_algorithm = cryptography.hazmat.primitives.serialization.NoEncryption()
if self.cipher and self.passphrase:
if self.cipher == 'auto':
encryption_algorithm = cryptography.hazmat.primitives.serialization.BestAvailableEncryption(to_bytes(self.passphrase))
else:
self.module.fail_json(msg='Cryptography backend can only use "auto" for cipher option.')
# Serialize key
try:
return self.privatekey.private_bytes(
encoding=export_encoding,
format=export_format,
encryption_algorithm=encryption_algorithm
)
except ValueError as dummy:
self.module.fail_json(
msg='Cryptography backend cannot serialize the private key in the required format "{0}"'.format(self.format)
)
except Exception as dummy:
self.module.fail_json(
msg='Error while serializing the private key in the required format "{0}"'.format(self.format),
exception=traceback.format_exc()
)
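    # Load the existing key from disk. Raw (binary) keys are distinguished by their length
    # (56 bytes: X448, 57 bytes: Ed448, 32 bytes: X25519 or Ed25519); anything else is
    # treated as PEM and loaded with the passphrase, if one was given.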
def _load_privatekey(self):
try:
# Read bytes
with open(self.path, 'rb') as f:
data = f.read()
# Interpret bytes depending on format.
format = crypto_utils.identify_private_key_format(data)
if format == 'raw':
if len(data) == 56 and CRYPTOGRAPHY_HAS_X448:
return cryptography.hazmat.primitives.asymmetric.x448.X448PrivateKey.from_private_bytes(data)
if len(data) == 57 and CRYPTOGRAPHY_HAS_ED448:
return cryptography.hazmat.primitives.asymmetric.ed448.Ed448PrivateKey.from_private_bytes(data)
if len(data) == 32:
if CRYPTOGRAPHY_HAS_X25519 and (self.type == 'X25519' or not CRYPTOGRAPHY_HAS_ED25519):
return cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey.from_private_bytes(data)
if CRYPTOGRAPHY_HAS_ED25519 and (self.type == 'Ed25519' or not CRYPTOGRAPHY_HAS_X25519):
return cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PrivateKey.from_private_bytes(data)
if CRYPTOGRAPHY_HAS_X25519 and CRYPTOGRAPHY_HAS_ED25519:
try:
return cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey.from_private_bytes(data)
except Exception:
return cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PrivateKey.from_private_bytes(data)
raise PrivateKeyError('Cannot load raw key')
else:
return cryptography.hazmat.primitives.serialization.load_pem_private_key(
data,
None if self.passphrase is None else to_bytes(self.passphrase),
backend=self.cryptography_backend
)
except Exception as e:
raise PrivateKeyError(e)
def _get_fingerprint(self):
# Get bytes of public key
private_key = self._load_privatekey()
public_key = private_key.public_key()
public_key_bytes = public_key.public_bytes(
cryptography.hazmat.primitives.serialization.Encoding.DER,
cryptography.hazmat.primitives.serialization.PublicFormat.SubjectPublicKeyInfo
)
# Get fingerprints of public_key_bytes
return crypto_utils.get_fingerprint_of_bytes(public_key_bytes)
def _check_passphrase(self):
try:
with open(self.path, 'rb') as f:
data = f.read()
format = crypto_utils.identify_private_key_format(data)
if format == 'raw':
# Raw keys cannot be encrypted
return self.passphrase is None
else:
return cryptography.hazmat.primitives.serialization.load_pem_private_key(
data,
None if self.passphrase is None else to_bytes(self.passphrase),
backend=self.cryptography_backend
)
except Exception as dummy:
return False
def _check_size_and_type(self):
privatekey = self._load_privatekey()
self.privatekey = privatekey
if isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.rsa.RSAPrivateKey):
return self.type == 'RSA' and self.size == privatekey.key_size
if isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.dsa.DSAPrivateKey):
return self.type == 'DSA' and self.size == privatekey.key_size
if CRYPTOGRAPHY_HAS_X25519 and isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey):
return self.type == 'X25519'
if CRYPTOGRAPHY_HAS_X448 and isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.x448.X448PrivateKey):
return self.type == 'X448'
if CRYPTOGRAPHY_HAS_ED25519 and isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PrivateKey):
return self.type == 'Ed25519'
if CRYPTOGRAPHY_HAS_ED448 and isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.ed448.Ed448PrivateKey):
return self.type == 'Ed448'
if isinstance(privatekey, cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePrivateKey):
if self.type != 'ECC':
return False
if self.curve not in self.curves:
return False
return self.curves[self.curve]['verify'](privatekey)
return False
def _check_format(self):
if self.format == 'auto_ignore':
return True
try:
with open(self.path, 'rb') as f:
content = f.read()
format = crypto_utils.identify_private_key_format(content)
return format == self._get_wanted_format()
except Exception as dummy:
return False
def dump(self):
"""Serialize the object into a dictionary."""
result = super(PrivateKeyCryptography, self).dump()
result['type'] = self.type
if self.type == 'ECC':
result['curve'] = self.curve
return result
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['present', 'absent']),
size=dict(type='int', default=4096),
type=dict(type='str', default='RSA', choices=[
'DSA', 'ECC', 'Ed25519', 'Ed448', 'RSA', 'X25519', 'X448'
]),
curve=dict(type='str', choices=[
'secp384r1', 'secp521r1', 'secp224r1', 'secp192r1', 'secp256r1',
'secp256k1', 'brainpoolP256r1', 'brainpoolP384r1', 'brainpoolP512r1',
'sect571k1', 'sect409k1', 'sect283k1', 'sect233k1', 'sect163k1',
'sect571r1', 'sect409r1', 'sect283r1', 'sect233r1', 'sect163r2',
]),
force=dict(type='bool', default=False),
path=dict(type='path', required=True),
passphrase=dict(type='str', no_log=True),
cipher=dict(type='str'),
backup=dict(type='bool', default=False),
format=dict(type='str', default='auto_ignore', choices=['pkcs1', 'pkcs8', 'raw', 'auto', 'auto_ignore']),
format_mismatch=dict(type='str', default='regenerate', choices=['regenerate', 'convert']),
select_crypto_backend=dict(type='str', choices=['auto', 'pyopenssl', 'cryptography'], default='auto'),
return_content=dict(type='bool', default=False),
),
supports_check_mode=True,
add_file_common_args=True,
required_together=[
['cipher', 'passphrase']
],
required_if=[
['type', 'ECC', ['curve']],
],
)
base_dir = os.path.dirname(module.params['path']) or '.'
if not os.path.isdir(base_dir):
module.fail_json(
name=base_dir,
msg='The directory %s does not exist or the file is not a directory' % base_dir
)
backend = module.params['select_crypto_backend']
if backend == 'auto':
        # Detect what is possible
can_use_cryptography = CRYPTOGRAPHY_FOUND and CRYPTOGRAPHY_VERSION >= LooseVersion(MINIMAL_CRYPTOGRAPHY_VERSION)
can_use_pyopenssl = PYOPENSSL_FOUND and PYOPENSSL_VERSION >= LooseVersion(MINIMAL_PYOPENSSL_VERSION)
# Decision
if module.params['cipher'] and module.params['passphrase'] and module.params['cipher'] != 'auto':
# First try pyOpenSSL, then cryptography
if can_use_pyopenssl:
backend = 'pyopenssl'
elif can_use_cryptography:
backend = 'cryptography'
else:
# First try cryptography, then pyOpenSSL
if can_use_cryptography:
backend = 'cryptography'
elif can_use_pyopenssl:
backend = 'pyopenssl'
# Success?
if backend == 'auto':
module.fail_json(msg=("Can't detect any of the required Python libraries "
"cryptography (>= {0}) or PyOpenSSL (>= {1})").format(
MINIMAL_CRYPTOGRAPHY_VERSION,
MINIMAL_PYOPENSSL_VERSION))
try:
if backend == 'pyopenssl':
if not PYOPENSSL_FOUND:
module.fail_json(msg=missing_required_lib('pyOpenSSL >= {0}'.format(MINIMAL_PYOPENSSL_VERSION)),
exception=PYOPENSSL_IMP_ERR)
module.deprecate('The module is using the PyOpenSSL backend. This backend has been deprecated', version='2.13')
private_key = PrivateKeyPyOpenSSL(module)
elif backend == 'cryptography':
if not CRYPTOGRAPHY_FOUND:
module.fail_json(msg=missing_required_lib('cryptography >= {0}'.format(MINIMAL_CRYPTOGRAPHY_VERSION)),
exception=CRYPTOGRAPHY_IMP_ERR)
private_key = PrivateKeyCryptography(module)
if private_key.state == 'present':
if module.check_mode:
result = private_key.dump()
result['changed'] = module.params['force'] or not private_key.check(module)
module.exit_json(**result)
private_key.generate(module)
else:
if module.check_mode:
result = private_key.dump()
result['changed'] = os.path.exists(module.params['path'])
module.exit_json(**result)
private_key.remove(module)
result = private_key.dump()
module.exit_json(**result)
except crypto_utils.OpenSSLObjectError as exc:
module.fail_json(msg=to_native(exc))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,355 |
Fix broken links in docker modules
|
We're trying to fix as many broken links as possible before modules move into collections. This is the batch of broken links on docker modules.
NOTE: the link checker sometimes reports an error where a link actually works. Ignore those if you find them.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com
###### BROKEN LINKS
https://docs.ansible.com/ansible/devel/modules/docker_compose_module.html#docker-compose-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_image_module.html#docker-image-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_config_module.html#docker-config-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_container_info_module.html#docker-container-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_host_info_module.html#docker-host-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_image_info_module.html#docker-image-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_login_module.html#docker-login-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_network_module.html#docker-network-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_network_info_module.html#docker-network-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_node_module.html#docker-node-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_node_info_module.html#docker-node-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_prune_module.html#docker-prune-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_secret_module.html#docker-secret-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_swarm_module.html#docker-swarm-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_swarm_info_module.html#docker-swarm-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_swarm_service_module.html#docker-swarm-service-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_swarm_service_info_module.html#docker-swarm-service-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_volume_module.html#docker-volume-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_volume_info_module.html#docker-volume-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
|
https://github.com/ansible/ansible/issues/67355
|
https://github.com/ansible/ansible/pull/67360
|
53e043b5febd30f258a233f51b180a543300151b
|
11e75b0af256f9f09c54365282a4969a5fe0390e
| 2020-02-12T20:13:45Z |
python
| 2020-02-12T21:41:40Z |
lib/ansible/modules/net_tools/hetzner_firewall.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2019 Felix Fontein <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: hetzner_firewall
version_added: "2.10"
short_description: Manage Hetzner's dedicated server firewall
author:
- Felix Fontein (@felixfontein)
description:
- Manage Hetzner's dedicated server firewall.
  - Note that the idempotency check for TCP flags simply compares strings and doesn't
    try to interpret the rules. This might change in the future.
seealso:
- name: Firewall documentation
description: Hetzner's documentation on the stateless firewall for dedicated servers
link: https://wiki.hetzner.de/index.php/Robot_Firewall/en
- module: hetzner_firewall_info
description: Retrieve information on firewall configuration.
extends_documentation_fragment:
- hetzner
options:
server_ip:
description: The server's main IP address.
required: yes
type: str
port:
description:
- Switch port of firewall.
type: str
choices: [ main, kvm ]
default: main
state:
description:
- Status of the firewall.
- Firewall is active if state is C(present), and disabled if state is C(absent).
type: str
default: present
choices: [ present, absent ]
whitelist_hos:
description:
- Whether Hetzner services have access.
type: bool
rules:
description:
- Firewall rules.
type: dict
suboptions:
input:
description:
- Input firewall rules.
type: list
elements: dict
suboptions:
name:
description:
- Name of the firewall rule.
type: str
ip_version:
description:
- Internet protocol version.
- Note that currently, only IPv4 is supported by Hetzner.
required: yes
type: str
choices: [ ipv4, ipv6 ]
dst_ip:
description:
- Destination IP address or subnet address.
- CIDR notation.
type: str
dst_port:
description:
- Destination port or port range.
type: str
src_ip:
description:
- Source IP address or subnet address.
- CIDR notation.
type: str
src_port:
description:
- Source port or port range.
type: str
protocol:
description:
- Protocol above IP layer
type: str
tcp_flags:
description:
- TCP flags or logical combination of flags.
- Flags supported by Hetzner are C(syn), C(fin), C(rst), C(psh) and C(urg).
- They can be combined with C(|) (logical or) and C(&) (logical and).
              - See L(the documentation,https://wiki.hetzner.de/index.php/Robot_Firewall/en#Parameter)
                for more information.
type: str
action:
description:
- Action if rule matches.
required: yes
type: str
choices: [ accept, discard ]
update_timeout:
description:
- Timeout to use when configuring the firewall.
- Note that the API call returns before the firewall has been
successfully set up.
type: int
default: 30
wait_for_configured:
description:
- Whether to wait until the firewall has been successfully configured before
determining what to do, and before returning from the module.
- The API returns status C(in progress) when the firewall is currently
being configured. If this happens, the module will try again until
the status changes to C(active) or C(disabled).
- Please note that there is a request limit. If you have to do multiple
updates, it can be better to disable waiting, and regularly use
M(hetzner_firewall_info) to query status.
type: bool
default: yes
wait_delay:
description:
- Delay to wait (in seconds) before checking again whether the firewall has
been configured.
type: int
default: 10
timeout:
description:
- Timeout (in seconds) for waiting for firewall to be configured.
type: int
default: 180
'''
EXAMPLES = r'''
- name: Configure firewall for server with main IP 1.2.3.4
hetzner_firewall:
hetzner_user: foo
hetzner_password: bar
server_ip: 1.2.3.4
    state: present
whitelist_hos: yes
rules:
input:
- name: Allow everything to ports 20-23 from 4.3.2.1/24
ip_version: ipv4
src_ip: 4.3.2.1/24
dst_port: '20-23'
action: accept
- name: Allow everything to port 443
ip_version: ipv4
dst_port: '443'
action: accept
- name: Drop everything else
ip_version: ipv4
action: discard
register: result
- debug:
msg: "{{ result }}"
'''
RETURN = r'''
firewall:
description:
- The firewall configuration.
type: dict
returned: success
contains:
port:
description:
- Switch port of firewall.
- C(main) or C(kvm).
type: str
sample: main
server_ip:
description:
- Server's main IP address.
type: str
sample: 1.2.3.4
server_number:
description:
- Hetzner's internal server number.
type: int
sample: 12345
status:
description:
- Status of the firewall.
- C(active) or C(disabled).
        - Will be C(in process) if the firewall is currently being updated and
          I(wait_for_configured) is set to C(no), or if I(timeout) is set to too small a value.
type: str
sample: active
whitelist_hos:
description:
- Whether Hetzner services have access.
type: bool
sample: true
rules:
description:
- Firewall rules.
type: dict
contains:
input:
description:
- Input firewall rules.
type: list
elements: dict
contains:
name:
description:
- Name of the firewall rule.
type: str
sample: Allow HTTP access to server
ip_version:
description:
- Internet protocol version.
type: str
sample: ipv4
dst_ip:
description:
- Destination IP address or subnet address.
- CIDR notation.
type: str
sample: 1.2.3.4/32
dst_port:
description:
- Destination port or port range.
type: str
sample: "443"
src_ip:
description:
- Source IP address or subnet address.
- CIDR notation.
type: str
sample: null
src_port:
description:
- Source port or port range.
type: str
sample: null
protocol:
description:
- Protocol above IP layer
type: str
sample: tcp
tcp_flags:
description:
- TCP flags or logical combination of flags.
type: str
sample: null
action:
description:
- Action if rule matches.
- C(accept) or C(discard).
type: str
sample: accept
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.compat import ipaddress as compat_ipaddress
from ansible.module_utils.hetzner import (
HETZNER_DEFAULT_ARGUMENT_SPEC,
BASE_URL,
fetch_url_json,
fetch_url_json_with_retries,
CheckDoneTimeoutException,
)
from ansible.module_utils.six.moves.urllib.parse import urlencode
from ansible.module_utils._text import to_native, to_text
RULE_OPTION_NAMES = [
'name', 'ip_version', 'dst_ip', 'dst_port', 'src_ip', 'src_port',
'protocol', 'tcp_flags', 'action',
]
RULES = ['input']
def restrict_dict(dictionary, fields):
result = dict()
for k, v in dictionary.items():
if k in fields:
result[k] = v
return result
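# Reduce the firewall configuration returned by the API to the fields managed by this
# module, so that the current ('before') and desired ('after') states can be compared.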
def restrict_firewall_config(config):
result = restrict_dict(config, ['port', 'status', 'whitelist_hos'])
result['rules'] = dict()
for ruleset in RULES:
result['rules'][ruleset] = [
restrict_dict(rule, RULE_OPTION_NAMES)
for rule in config['rules'].get(ruleset) or []
]
return result
def update(before, after, params, name):
bv = before.get(name)
after[name] = bv
changed = False
pv = params[name]
if pv is not None:
changed = pv != bv
if changed:
after[name] = pv
return changed
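# Normalize an IP address or subnet to canonical 'address/prefix' notation, so that for
# example '1.2.3.4' and '1.2.3.4/32' compare as equal during the idempotency check.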
def normalize_ip(ip, ip_version):
if ip is None:
return ip
if '/' in ip:
ip, range = ip.split('/')
else:
ip, range = ip, ''
ip_addr = to_native(compat_ipaddress.ip_address(to_text(ip)).compressed)
if range == '':
range = '32' if ip_version.lower() == 'ipv4' else '128'
return ip_addr + '/' + range
def update_rules(before, after, params, ruleset):
before_rules = before['rules'][ruleset]
after_rules = after['rules'][ruleset]
params_rules = params['rules'][ruleset]
changed = len(before_rules) != len(params_rules)
for no, rule in enumerate(params_rules):
rule['src_ip'] = normalize_ip(rule['src_ip'], rule['ip_version'])
rule['dst_ip'] = normalize_ip(rule['dst_ip'], rule['ip_version'])
if no < len(before_rules):
before_rule = before_rules[no]
before_rule['src_ip'] = normalize_ip(before_rule['src_ip'], before_rule['ip_version'])
before_rule['dst_ip'] = normalize_ip(before_rule['dst_ip'], before_rule['ip_version'])
if before_rule != rule:
changed = True
after_rules.append(rule)
return changed
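# Flatten a ruleset into the form-encoded keys the Robot API expects,
# e.g. rules[input][0][name], skipping values that are None.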
def encode_rule(output, rulename, input):
for i, rule in enumerate(input['rules'][rulename]):
for k, v in rule.items():
if v is not None:
output['rules[{0}][{1}][{2}]'.format(rulename, i, k)] = v
def create_default_rules_object():
rules = dict()
for ruleset in RULES:
rules[ruleset] = []
return rules
def firewall_configured(result, error):
return result['firewall']['status'] != 'in process'
def main():
argument_spec = dict(
server_ip=dict(type='str', required=True),
port=dict(type='str', default='main', choices=['main', 'kvm']),
state=dict(type='str', default='present', choices=['present', 'absent']),
whitelist_hos=dict(type='bool'),
rules=dict(type='dict', options=dict(
input=dict(type='list', elements='dict', options=dict(
name=dict(type='str'),
ip_version=dict(type='str', required=True, choices=['ipv4', 'ipv6']),
dst_ip=dict(type='str'),
dst_port=dict(type='str'),
src_ip=dict(type='str'),
src_port=dict(type='str'),
protocol=dict(type='str'),
tcp_flags=dict(type='str'),
action=dict(type='str', required=True, choices=['accept', 'discard']),
)),
)),
update_timeout=dict(type='int', default=30),
wait_for_configured=dict(type='bool', default=True),
wait_delay=dict(type='int', default=10),
timeout=dict(type='int', default=180),
)
argument_spec.update(HETZNER_DEFAULT_ARGUMENT_SPEC)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
# Sanitize input
module.params['status'] = 'active' if (module.params['state'] == 'present') else 'disabled'
if module.params['rules'] is None:
module.params['rules'] = {}
if module.params['rules'].get('input') is None:
module.params['rules']['input'] = []
server_ip = module.params['server_ip']
# https://robot.your-server.de/doc/webservice/en.html#get-firewall-server-ip
url = "{0}/firewall/{1}".format(BASE_URL, server_ip)
if module.params['wait_for_configured']:
try:
result, error = fetch_url_json_with_retries(
module,
url,
check_done_callback=firewall_configured,
check_done_delay=module.params['wait_delay'],
check_done_timeout=module.params['timeout'],
)
except CheckDoneTimeoutException as dummy:
module.fail_json(msg='Timeout while waiting for firewall to be configured.')
else:
result, error = fetch_url_json(module, url)
if not firewall_configured(result, error):
module.fail_json(msg='Firewall configuration cannot be read as it is not configured.')
full_before = result['firewall']
if not full_before.get('rules'):
full_before['rules'] = create_default_rules_object()
before = restrict_firewall_config(full_before)
# Build wanted (after) state and compare
after = dict(before)
changed = False
changed |= update(before, after, module.params, 'port')
changed |= update(before, after, module.params, 'status')
changed |= update(before, after, module.params, 'whitelist_hos')
after['rules'] = create_default_rules_object()
if module.params['status'] == 'active':
for ruleset in RULES:
changed |= update_rules(before, after, module.params, ruleset)
# Update if different
construct_result = True
construct_status = None
if changed and not module.check_mode:
# https://robot.your-server.de/doc/webservice/en.html#post-firewall-server-ip
url = "{0}/firewall/{1}".format(BASE_URL, server_ip)
headers = {"Content-type": "application/x-www-form-urlencoded"}
data = dict(after)
data['whitelist_hos'] = str(data['whitelist_hos']).lower()
del data['rules']
for ruleset in RULES:
encode_rule(data, ruleset, after)
result, error = fetch_url_json(
module,
url,
method='POST',
timeout=module.params['update_timeout'],
data=urlencode(data),
headers=headers,
)
if module.params['wait_for_configured'] and not firewall_configured(result, error):
try:
result, error = fetch_url_json_with_retries(
module,
url,
check_done_callback=firewall_configured,
check_done_delay=module.params['wait_delay'],
check_done_timeout=module.params['timeout'],
skip_first=True,
)
except CheckDoneTimeoutException as e:
result, error = e.result, e.error
module.warn('Timeout while waiting for firewall to be configured.')
full_after = result['firewall']
if not full_after.get('rules'):
full_after['rules'] = create_default_rules_object()
construct_status = full_after['status']
if construct_status != 'in process':
# Only use result if configuration is done, so that diff will be ok
after = restrict_firewall_config(full_after)
construct_result = False
if construct_result:
# Construct result (used for check mode, and configuration still in process)
full_after = dict(full_before)
for k, v in after.items():
if k != 'rules':
full_after[k] = after[k]
if construct_status is not None:
# We want 'in process' here
full_after['status'] = construct_status
full_after['rules'] = dict()
for ruleset in RULES:
full_after['rules'][ruleset] = after['rules'][ruleset]
module.exit_json(
changed=changed,
diff=dict(
before=before,
after=after,
),
firewall=full_after,
)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,355 |
Fix broken links in docker modules
|
We're trying to fix as many broken links as possible before modules move into collections. This is the batch of broken links on docker modules.
NOTE: the link checker sometimes reports an error where a link actually works. Ignore those if you find them.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
docs.ansible.com
###### BROKEN LINKS
https://docs.ansible.com/ansible/devel/modules/docker_compose_module.html#docker-compose-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_image_module.html#docker-image-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_config_module.html#docker-config-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_container_info_module.html#docker-container-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_host_info_module.html#docker-host-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_image_info_module.html#docker-image-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_login_module.html#docker-login-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_network_module.html#docker-network-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_network_info_module.html#docker-network-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_node_module.html#docker-node-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_node_info_module.html#docker-node-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_prune_module.html#docker-prune-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_secret_module.html#docker-secret-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_swarm_module.html#docker-swarm-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_swarm_info_module.html#docker-swarm-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_swarm_service_module.html#docker-swarm-service-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_swarm_service_info_module.html#docker-swarm-service-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_volume_module.html#docker-volume-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
https://docs.ansible.com/ansible/devel/modules/docker_volume_info_module.html#docker-volume-info-module
├─BROKEN─ https://docker-py.readthedocs.io/en/stable/machine/
|
https://github.com/ansible/ansible/issues/67355
|
https://github.com/ansible/ansible/pull/67360
|
53e043b5febd30f258a233f51b180a543300151b
|
11e75b0af256f9f09c54365282a4969a5fe0390e
| 2020-02-12T20:13:45Z |
python
| 2020-02-12T21:41:40Z |
lib/ansible/plugins/doc_fragments/docker.py
|
# -*- coding: utf-8 -*-
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
# Docker doc fragment
DOCUMENTATION = r'''
options:
docker_host:
description:
- The URL or Unix socket path used to connect to the Docker API. To connect to a remote host, provide the
TCP connection string. For example, C(tcp://192.0.2.23:2376). If TLS is used to encrypt the connection,
the module will automatically replace C(tcp) in the connection URL with C(https).
- If the value is not specified in the task, the value of environment variable C(DOCKER_HOST) will be used
instead. If the environment variable is not set, the default value will be used.
type: str
default: unix://var/run/docker.sock
aliases: [ docker_url ]
tls_hostname:
description:
- When verifying the authenticity of the Docker Host server, provide the expected name of the server.
- If the value is not specified in the task, the value of environment variable C(DOCKER_TLS_HOSTNAME) will
be used instead. If the environment variable is not set, the default value will be used.
type: str
default: localhost
api_version:
description:
- The version of the Docker API running on the Docker Host.
- Defaults to the latest version of the API supported by Docker SDK for Python and the docker daemon.
- If the value is not specified in the task, the value of environment variable C(DOCKER_API_VERSION) will be
used instead. If the environment variable is not set, the default value will be used.
type: str
default: auto
aliases: [ docker_api_version ]
timeout:
description:
- The maximum amount of time in seconds to wait on a response from the API.
- If the value is not specified in the task, the value of environment variable C(DOCKER_TIMEOUT) will be used
instead. If the environment variable is not set, the default value will be used.
type: int
default: 60
ca_cert:
description:
- Use a CA certificate when performing server verification by providing the path to a CA certificate file.
- If the value is not specified in the task and the environment variable C(DOCKER_CERT_PATH) is set,
the file C(ca.pem) from the directory specified in the environment variable C(DOCKER_CERT_PATH) will be used.
type: path
aliases: [ tls_ca_cert, cacert_path ]
client_cert:
description:
- Path to the client's TLS certificate file.
- If the value is not specified in the task and the environment variable C(DOCKER_CERT_PATH) is set,
the file C(cert.pem) from the directory specified in the environment variable C(DOCKER_CERT_PATH) will be used.
type: path
aliases: [ tls_client_cert, cert_path ]
client_key:
description:
- Path to the client's TLS key file.
- If the value is not specified in the task and the environment variable C(DOCKER_CERT_PATH) is set,
the file C(key.pem) from the directory specified in the environment variable C(DOCKER_CERT_PATH) will be used.
type: path
aliases: [ tls_client_key, key_path ]
ssl_version:
description:
- Provide a valid SSL version number. Default value determined by ssl.py module.
- If the value is not specified in the task, the value of environment variable C(DOCKER_SSL_VERSION) will be
used instead.
type: str
tls:
description:
- Secure the connection to the API by using TLS without verifying the authenticity of the Docker host
server. Note that if I(validate_certs) is set to C(yes) as well, it will take precedence.
- If the value is not specified in the task, the value of environment variable C(DOCKER_TLS) will be used
instead. If the environment variable is not set, the default value will be used.
type: bool
default: no
validate_certs:
description:
- Secure the connection to the API by using TLS and verifying the authenticity of the Docker host server.
- If the value is not specified in the task, the value of environment variable C(DOCKER_TLS_VERIFY) will be
used instead. If the environment variable is not set, the default value will be used.
type: bool
default: no
aliases: [ tls_verify ]
debug:
description:
- Debug mode
type: bool
default: no
notes:
- Connect to the Docker daemon by providing parameters with each task or by defining environment variables.
You can define C(DOCKER_HOST), C(DOCKER_TLS_HOSTNAME), C(DOCKER_API_VERSION), C(DOCKER_CERT_PATH), C(DOCKER_SSL_VERSION),
C(DOCKER_TLS), C(DOCKER_TLS_VERIFY) and C(DOCKER_TIMEOUT). If you are using docker machine, run the script shipped
with the product that sets up the environment. It will set these variables for you. See
U(https://docker-py.readthedocs.io/en/stable/machine/) for more details.
- When connecting to Docker daemon with TLS, you might need to install additional Python packages.
For the Docker SDK for Python, version 2.4 or newer, this can be done by installing C(docker[tls]) with M(pip).
- Note that the Docker SDK for Python only allows to specify the path to the Docker configuration for very few functions.
In general, it will use C($HOME/.docker/config.json) if the C(DOCKER_CONFIG) environment variable is not specified,
and use C($DOCKER_CONFIG/config.json) otherwise.
'''
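    # For reference, the environment-variable based setup described in the notes above
    # might look like this in a shell (all values below are purely illustrative):
    #   export DOCKER_HOST=tcp://192.0.2.23:2376
    #   export DOCKER_TLS_VERIFY=1
    #   export DOCKER_CERT_PATH=$HOME/.docker/certs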
# Additional, more specific stuff for minimal Docker SDK for Python version < 2.0
DOCKER_PY_1_DOCUMENTATION = r'''
options: {}
requirements:
- "Docker SDK for Python: Please note that the L(docker-py,https://pypi.org/project/docker-py/)
Python module has been superseded by L(docker,https://pypi.org/project/docker/)
(see L(here,https://github.com/docker/docker-py/issues/1310) for details).
For Python 2.6, C(docker-py) must be used. Otherwise, it is recommended to
install the C(docker) Python module. Note that both modules should *not*
be installed at the same time. Also note that when both modules are installed
and one of them is uninstalled, the other might no longer function and a
reinstall of it is required."
'''
# Additional, more specific stuff for minimal Docker SDK for Python version >= 2.0.
# Note that Docker SDK for Python >= 2.0 requires Python 2.7 or newer.
DOCKER_PY_2_DOCUMENTATION = r'''
options: {}
requirements:
- "Python >= 2.7"
- "Docker SDK for Python: Please note that the L(docker-py,https://pypi.org/project/docker-py/)
Python module has been superseded by L(docker,https://pypi.org/project/docker/)
(see L(here,https://github.com/docker/docker-py/issues/1310) for details).
This module does *not* work with docker-py."
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,758 |
Conditionals documentation should explicitly define what operators are supported
|
##### SUMMARY
The current documentation page for Conditionals and the 'when' clause only implies what the supported comparison operators are. For clarity, the supported operators should be explicitly documented.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
user_guide/playbooks_conditionals.html
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/61758
|
https://github.com/ansible/ansible/pull/61814
|
6cc326dcf19942a7567964f9855389ef34984939
|
b625b430f1f27d350bd6347c4692361566344e03
| 2019-09-04T09:27:43Z |
python
| 2020-02-14T15:53:14Z |
docs/docsite/rst/user_guide/playbooks_conditionals.rst
|
.. _playbooks_conditionals:
Conditionals
============
.. contents:: Topics
Often the result of a play may depend on the value of a variable, fact (something learned about the remote system), or previous task result.
In some cases, the values of variables may depend on other variables.
Additional groups can be created to manage hosts based on whether the hosts match other criteria. This topic covers how conditionals are used in playbooks.
.. note:: There are many options to control execution flow in Ansible. More examples of supported conditionals can be located here: http://jinja.pocoo.org/docs/dev/templates/#comparisons.
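For quick reference, ``when`` accepts the standard Jinja2 comparison operators ``==``, ``!=``, ``>``, ``>=``, ``<`` and ``<=``, combined with the logical operators ``and``, ``or`` and ``not``. A minimal sketch (the ``testing`` variable is made up for illustration)::
    tasks:
      - debug:
          msg: "major version is at least 7 and this is not a test run"
        when: ansible_facts['distribution_major_version']|int >= 7 and not testing|default(false)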
.. _the_when_statement:
The When Statement
``````````````````
Sometimes you will want to skip a particular step on a particular host.
This could be something as simple as not installing a certain package if the operating system is a particular version,
or it could be something like performing some cleanup steps if a filesystem is getting full.
This is easy to do in Ansible with the `when` clause, which contains a raw Jinja2 expression without double curly braces (see :ref:`group_by_module`).
It's actually pretty simple::
tasks:
- name: "shut down Debian flavored systems"
command: /sbin/shutdown -t now
when: ansible_facts['os_family'] == "Debian"
# note that all variables can be used directly in conditionals without double curly braces
You can also use parentheses to group conditions::
tasks:
- name: "shut down CentOS 6 and Debian 7 systems"
command: /sbin/shutdown -t now
when: (ansible_facts['distribution'] == "CentOS" and ansible_facts['distribution_major_version'] == "6") or
(ansible_facts['distribution'] == "Debian" and ansible_facts['distribution_major_version'] == "7")
Multiple conditions that all need to be true (a logical 'and') can also be specified as a list::
tasks:
- name: "shut down CentOS 6 systems"
command: /sbin/shutdown -t now
when:
- ansible_facts['distribution'] == "CentOS"
- ansible_facts['distribution_major_version'] == "6"
A number of Jinja2 "tests" and "filters" can also be used in when statements, some of which are unique
and provided by Ansible. Suppose we want to ignore the error of one statement and then
decide to do something conditionally based on success or failure::
tasks:
- command: /bin/false
register: result
ignore_errors: True
- command: /bin/something
when: result is failed
# In older versions of ansible use ``success``, now both are valid but succeeded uses the correct tense.
- command: /bin/something_else
when: result is succeeded
- command: /bin/still/something_else
when: result is skipped
.. note:: both `success` and `succeeded` work (`fail`/`failed`, etc).
.. warning:: You might expect the registered variable of a skipped task to be undefined, and try to use `defined` or `default` to check for that. **This is incorrect**! Even when a task fails or is skipped, the variable is still registered, with a failed or skipped status. See :ref:`registered_variables`.
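For example, the registered variable of a skipped task is still defined, so check its status rather than its existence (a minimal sketch)::
    tasks:
      - command: /bin/true
        register: result
        when: false
      - debug:
          msg: "the previous task was skipped"
        when: result is skipped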
To see what facts are available on a particular system, you can do the following in a playbook::
- debug: var=ansible_facts
Tip: Sometimes you'll get back a variable that's a string and you'll want to do a numeric comparison on it. You can do this like so::
tasks:
- shell: echo "only on Red Hat 6, derivatives, and later"
when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release']|int >= 6
.. note:: the above example requires the lsb_release package on the target host in order to return the 'lsb major_release' fact.
Variables defined in the playbooks or inventory can also be used. Just make sure to apply the `|bool` filter to non-boolean variables (for example, string variables with content like 'yes', 'on', '1', 'true'). An example may be the execution of a task based on a variable's boolean value::
vars:
epic: true
monumental: "yes"
Then a conditional execution might look like::
tasks:
- shell: echo "This certainly is epic!"
when: epic or monumental|bool
or::
tasks:
- shell: echo "This certainly isn't epic!"
when: not epic
If a required variable has not been set, you can skip or fail using Jinja2's `defined` test. For example::
tasks:
- shell: echo "I've got '{{ foo }}' and am not afraid to use it!"
when: foo is defined
- fail: msg="Bailing out. this play requires 'bar'"
when: bar is undefined
This is especially useful in combination with the conditional import of vars files (see below).
As the examples show, you don't need to use `{{ }}` to use variables inside conditionals, as these are already implied.
.. _loops_and_conditionals:
Loops and Conditionals
``````````````````````
When combining `when` with loops (see :ref:`playbooks_loops`), be aware that the `when` statement is processed separately for each item. This is by design::
tasks:
- command: echo {{ item }}
loop: [ 0, 2, 4, 6, 8, 10 ]
when: item > 5
If you need to skip the whole task depending on the loop variable being defined, use the `|default` filter to provide an empty iterator::
- command: echo {{ item }}
loop: "{{ mylist|default([]) }}"
when: item > 5
If using a dict in a loop::
- command: echo {{ item.key }}
loop: "{{ query('dict', mydict|default({})) }}"
when: item.value > 5
.. _loading_in_custom_facts:
Loading in Custom Facts
```````````````````````
It's also easy to provide your own facts if you want, which is covered in :ref:`developing_modules`. To run them, just
make a call to your own custom fact gathering module at the top of your list of tasks, and variables returned
there will be accessible to future tasks::
tasks:
- name: gather site specific fact data
action: site_facts
- command: /usr/bin/thingy
when: my_custom_fact_just_retrieved_from_the_remote_system == '1234'
.. _when_roles_and_includes:
Applying 'when' to roles, imports, and includes
```````````````````````````````````````````````
Note that if you have several tasks that all share the same conditional statement, you can affix the conditional
to a task include statement as below. All the tasks get evaluated, but the conditional is applied to each and every task::
- import_tasks: tasks/sometasks.yml
when: "'reticulating splines' in output"
.. note:: In versions prior to 2.0 this worked with task includes but not playbook includes. 2.0 allows it to work with both.
Or with a role::
- hosts: webservers
roles:
- role: debian_stock_config
when: ansible_facts['os_family'] == 'Debian'
You will note a lot of 'skipped' output by default in Ansible when using this approach on systems that don't match the criteria.
In many cases the :ref:`group_by module <group_by_module>` can be a more streamlined way to accomplish the same thing; see
:ref:`os_variance`.
When a conditional is used with ``include_*`` tasks instead of imports, it is applied `only` to the include task itself and not
to any other tasks within the included file(s). A common situation where this distinction is important is as follows::
# We wish to include a file to define a variable when it is not
# already defined
# main.yml
- import_tasks: other_tasks.yml # note "import"
when: x is not defined
# other_tasks.yml
- set_fact:
x: foo
- debug:
var: x
This expands at include time to the equivalent of::
- set_fact:
x: foo
when: x is not defined
- debug:
var: x
when: x is not defined
Thus if ``x`` is initially undefined, the ``debug`` task will be skipped. By using ``include_tasks`` instead of ``import_tasks``,
both tasks from ``other_tasks.yml`` will be executed as expected.
For more information on the differences between ``include`` v ``import`` see :ref:`playbooks_reuse`.
.. _conditional_imports:
Conditional Imports
```````````````````
.. note:: This is an advanced topic that is infrequently used.
Sometimes you will want to do certain things differently in a playbook based on certain criteria.
Having one playbook that works on multiple platforms and OS versions is a good example.
As an example, the name of the Apache package may be different between CentOS and Debian,
but it is easily handled with a minimum of syntax in an Ansible Playbook::
---
- hosts: all
remote_user: root
vars_files:
- "vars/common.yml"
- [ "vars/{{ ansible_facts['os_family'] }}.yml", "vars/os_defaults.yml" ]
tasks:
- name: make sure apache is started
service: name={{ apache }} state=started
.. note::
The variable "ansible_facts['os_family']" is being interpolated into
the list of filenames being defined for vars_files.
As a reminder, the various YAML files contain just keys and values::
---
# for vars/RedHat.yml
apache: httpd
somethingelse: 42
How does this work? For Red Hat operating systems ('CentOS', for example), the first file Ansible tries to import
is 'vars/RedHat.yml'. If that file does not exist, Ansible attempts to load 'vars/os_defaults.yml'. If no files in
the list were found, an error is raised.
On Debian, Ansible first looks for 'vars/Debian.yml' instead of 'vars/RedHat.yml', before
falling back on 'vars/os_defaults.yml'.
Ansible's approach to configuration -- separating variables from tasks, keeping your playbooks
from turning into arbitrary code with nested conditionals -- results in more streamlined and auditable configuration rules because there are fewer decision points to track.
Selecting Files And Templates Based On Variables
````````````````````````````````````````````````
.. note:: This is an advanced topic that is infrequently used. You can probably skip this section.
Sometimes a configuration file you want to copy, or a template you will use, may depend on a variable.
The following construct selects the first available file appropriate for the variables of a given host, which is often much cleaner than putting a lot of if conditionals in a template.
The following example shows how to template out a configuration file that differs between, say, CentOS and Debian::
- name: template a file
template:
src: "{{ item }}"
dest: /etc/myapp/foo.conf
loop: "{{ query('first_found', { 'files': myfiles, 'paths': mypaths}) }}"
vars:
myfiles:
- "{{ansible_facts['distribution']}}.conf"
- default.conf
mypaths: ['search_location_one/somedir/', '/opt/other_location/somedir/']
Register Variables
``````````````````
Often in a playbook it may be useful to store the result of a given command in a variable and access
it later. Used this way, the command module can often eliminate the need to write site-specific facts; for
instance, you could test for the existence of a particular program.
.. note:: Registration happens even when a task is skipped due to its conditional. This way you can query the variable with ``is skipped`` to know whether the task was attempted or not.
The 'register' keyword decides what variable to save a result in. The resulting variables can be used in templates, action lines, or *when* statements. It looks like this (in an obviously trivial example)::
- name: test play
hosts: all
tasks:
- shell: cat /etc/motd
register: motd_contents
- shell: echo "motd contains the word hi"
when: motd_contents.stdout.find('hi') != -1
As shown previously, the registered variable's string contents are accessible with the 'stdout' value.
The registered result can be used in the loop of a task if it is converted into
a list (or already is a list) as shown below. "stdout_lines" is already available on the object;
alternatively, you could call "home_dirs.stdout.split()" yourself and split on other
fields::
- name: registered variable usage as a loop list
hosts: all
tasks:
- name: retrieve the list of home directories
command: ls /home
register: home_dirs
- name: add home dirs to the backup spooler
file:
path: /mnt/bkspool/{{ item }}
src: /home/{{ item }}
state: link
loop: "{{ home_dirs.stdout_lines }}"
# same as loop: "{{ home_dirs.stdout.split() }}"
You may check the registered variable's string contents for emptiness::
- name: check registered variable for emptiness
hosts: all
tasks:
- name: list contents of directory
command: ls mydir
register: contents
- name: check contents for emptiness
debug:
msg: "Directory is empty"
when: contents.stdout == ""
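As the earlier note mentions, registration happens even for skipped tasks, so you can test for that explicitly. A minimal illustration (the command and variable names are made up)::
    - name: probe for an optional program
      command: /usr/bin/somecommand --version
      when: probe_enabled | default(false) | bool
      register: probe_result
    - name: report that the probe was skipped
      debug:
        msg: "the probe task was not attempted"
      when: probe_result is skipped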
Commonly Used Facts
```````````````````
The following facts are frequently used in conditionals -- see above for examples.
.. _ansible_distribution:
ansible_facts['distribution']
-----------------------------
Possible values (sample, not complete list)::
Alpine
Altlinux
Amazon
Archlinux
ClearLinux
Coreos
CentOS
Debian
Fedora
Gentoo
Mandriva
NA
OpenWrt
OracleLinux
RedHat
Slackware
SMGL
SUSE
Ubuntu
VMwareESX
.. See `OSDIST_LIST`
.. _ansible_distribution_major_version:
ansible_facts['distribution_major_version']
-------------------------------------------
This will be the major version of the operating system. For example, the value will be `16` for Ubuntu 16.04.
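For example, a conditional might combine it with the distribution fact (the task shown is only illustrative)::
    - name: run only on Ubuntu 16 or later
      debug:
        msg: "This host runs a modern Ubuntu"
      when:
        - ansible_facts['distribution'] == 'Ubuntu'
        - ansible_facts['distribution_major_version'] | int >= 16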
.. _ansible_os_family:
ansible_facts['os_family']
--------------------------
Possible values (sample, not complete list)::
AIX
Alpine
Altlinux
Archlinux
Darwin
Debian
FreeBSD
Gentoo
HP-UX
Mandrake
RedHat
SGML
Slackware
Solaris
Suse
Windows
.. Ansible checks `OS_FAMILY_MAP`; if there's no match, it returns the value of `platform.system()`.
.. seealso::
:ref:`working_with_playbooks`
An introduction to playbooks
:ref:`playbooks_reuse_roles`
Playbook organization by roles
:ref:`playbooks_best_practices`
Best practices in playbooks
:ref:`playbooks_variables`
All about variables
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,765 |
VMware: Colon support within VMWare Tags vmware_tag_manager
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
VMware supports using colons in tags; however, the vmware_tag_manager module does not, since colons are used to delineate categories. It would be nice if there were some way to support using colons in tag names.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_tag_manager
##### ADDITIONAL INFORMATION
There should be some way to escape a colon in a tag_name so that we can continue to support categories but intentionally override that behaviour when we want a tag that contains a literal colon. Two tag examples are below: one with a category, one with an escaped colon. Feel free to use whatever escape sequence is most supportable.
``` - name: Add tags to a virtual machine
vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
validate_certs: no
tag_names:
- 'Category:Tag'
- 'TagWith\:Colon'
object_name: 'VMGuest'
object_type: VirtualMachine
state: add
delegate_to: localhost
```
Including the authors:
@Akasurde
@GBrawl
|
https://github.com/ansible/ansible/issues/65765
|
https://github.com/ansible/ansible/pull/66150
|
33d5c68887a063340355bf1a5c24ac2d66e6992b
|
7000c51c0691e176815cd0558c0b79f1f36d8c60
| 2019-12-12T12:38:32Z |
python
| 2020-02-14T21:07:10Z |
changelogs/fragments/65765-vmware_tag_manager.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,765 |
VMware: Colon support within VMWare Tags vmware_tag_manager
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
VMware supports using colons in tags; however, the vmware_tag_manager module does not, since colons are used to delineate categories. It would be nice if there were some way to support using colons in tag names.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_tag_manager
##### ADDITIONAL INFORMATION
There should be some way to escape a colon in a tag_name so that we can continue to support categories but intentionally override that behaviour when we want a tag that contains a literal colon. Two tag examples are below: one with a category, one with an escaped colon. Feel free to use whatever escape sequence is most supportable.
``` - name: Add tags to a virtual machine
vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
validate_certs: no
tag_names:
- 'Category:Tag'
- 'TagWith\:Colon'
object_name: 'VMGuest'
object_type: VirtualMachine
state: add
delegate_to: localhost
```
Including the authors:
@Akasurde
@GBrawl
|
https://github.com/ansible/ansible/issues/65765
|
https://github.com/ansible/ansible/pull/66150
|
33d5c68887a063340355bf1a5c24ac2d66e6992b
|
7000c51c0691e176815cd0558c0b79f1f36d8c60
| 2019-12-12T12:38:32Z |
python
| 2020-02-14T21:07:10Z |
lib/ansible/modules/cloud/vmware/vmware_tag_manager.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, Abhijeet Kasurde <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: vmware_tag_manager
short_description: Manage association of VMware tags with VMware objects
description:
- This module can be used to assign / remove VMware tags from the given VMware objects.
- Tag feature is introduced in vSphere 6 version, so this module is not supported in the earlier versions of vSphere.
- All variables and VMware object names are case sensitive.
version_added: 2.8
author:
- Abhijeet Kasurde (@Akasurde)
- Frederic Van Reet (@GBrawl)
notes:
- Tested on vSphere 6.5
requirements:
- python >= 2.6
- PyVmomi
- vSphere Automation SDK
options:
tag_names:
description:
- List of tag(s) to be managed.
- You can also specify category name by specifying colon separated value. For example, "category_name:tag_name".
- You can skip category name if you have unique tag names.
required: True
type: list
state:
description:
- If C(state) is set to C(add) or C(present) will add the tags to the existing tag list of the given object.
- If C(state) is set to C(remove) or C(absent) will remove the tags from the existing tag list of the given object.
- If C(state) is set to C(set) will replace the tags of the given objects with the user defined list of tags.
default: add
choices: [ present, absent, add, remove, set ]
type: str
object_type:
description:
- Type of object to work with.
required: True
choices: [ VirtualMachine, Datacenter, ClusterComputeResource, HostSystem, DistributedVirtualSwitch, DistributedVirtualPortgroup ]
type: str
object_name:
description:
- Name of the object to work with.
- For DistributedVirtualPortgroups the format should be "switch_name:portgroup_name"
required: True
type: str
extends_documentation_fragment: vmware_rest_client.documentation
'''
EXAMPLES = r'''
- name: Add tags to a virtual machine
vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
validate_certs: no
tag_names:
- Sample_Tag_0002
- Category_0001:Sample_Tag_0003
object_name: Fedora_VM
object_type: VirtualMachine
state: add
delegate_to: localhost
- name: Remove a tag from a virtual machine
vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
validate_certs: no
tag_names:
- Sample_Tag_0002
object_name: Fedora_VM
object_type: VirtualMachine
state: remove
delegate_to: localhost
- name: Add tags to a distributed virtual switch
vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
validate_certs: no
tag_names:
- Sample_Tag_0003
object_name: Switch_0001
object_type: DistributedVirtualSwitch
state: add
delegate_to: localhost
- name: Add tags to a distributed virtual portgroup
vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
validate_certs: no
tag_names:
- Sample_Tag_0004
object_name: Switch_0001:Portgroup_0001
object_type: DistributedVirtualPortgroup
state: add
delegate_to: localhost
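# Added for illustration only: 'set' replaces the object's existing tag list
- name: Replace all tags on a virtual machine
  vmware_tag_manager:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_username }}'
    password: '{{ vcenter_password }}'
    validate_certs: no
    tag_names:
      - Sample_Tag_0005
    object_name: Fedora_VM
    object_type: VirtualMachine
    state: set
  delegate_to: localhost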
'''
RETURN = r'''
tag_status:
description: metadata about tags related to object configuration
returned: on success
type: list
sample: {
"current_tags": [
"backup",
"security"
],
"desired_tags": [
"security"
],
"previous_tags": [
"backup",
"security"
]
}
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware_rest_client import VmwareRestClient
from ansible.module_utils.vmware import (PyVmomi, find_dvs_by_name, find_dvspg_by_name)
try:
from com.vmware.vapi.std_client import DynamicID
from com.vmware.vapi.std.errors_client import Error
except ImportError:
pass
class VmwareTagManager(VmwareRestClient):
def __init__(self, module):
"""
Constructor
"""
super(VmwareTagManager, self).__init__(module)
self.pyv = PyVmomi(module=module)
self.object_type = self.params.get('object_type')
self.object_name = self.params.get('object_name')
self.managed_object = None
if self.object_type == 'VirtualMachine':
self.managed_object = self.pyv.get_vm_or_template(self.object_name)
if self.object_type == 'Datacenter':
self.managed_object = self.pyv.find_datacenter_by_name(self.object_name)
if self.object_type == 'ClusterComputeResource':
self.managed_object = self.pyv.find_cluster_by_name(self.object_name)
if self.object_type == 'HostSystem':
self.managed_object = self.pyv.find_hostsystem_by_name(self.object_name)
if self.object_type == 'DistributedVirtualSwitch':
self.managed_object = find_dvs_by_name(self.pyv.content, self.object_name)
self.object_type = 'VmwareDistributedVirtualSwitch'
if self.object_type == 'DistributedVirtualPortgroup':
dvs_name, pg_name = self.object_name.split(":", 1)
dv_switch = find_dvs_by_name(self.pyv.content, dvs_name)
if dv_switch is None:
self.module.fail_json(msg="A distributed virtual switch with name %s does not exist" % dvs_name)
self.managed_object = find_dvspg_by_name(dv_switch, pg_name)
if self.managed_object is None:
self.module.fail_json(msg="Failed to find the managed object for %s with type %s" % (self.object_name, self.object_type))
if not hasattr(self.managed_object, '_moId'):
self.module.fail_json(msg="Unable to find managed object id for %s managed object" % self.object_name)
self.dynamic_managed_object = DynamicID(type=self.object_type, id=self.managed_object._moId)
self.tag_service = self.api_client.tagging.Tag
self.category_service = self.api_client.tagging.Category
self.tag_association_svc = self.api_client.tagging.TagAssociation
self.tag_names = self.params.get('tag_names')
def ensure_state(self):
"""
Manage the internal state of tags
"""
results = dict(
changed=False,
tag_status=dict(),
)
changed = False
action = self.params.get('state')
available_tag_obj = self.get_tags_for_object(tag_service=self.tag_service,
tag_assoc_svc=self.tag_association_svc,
dobj=self.dynamic_managed_object)
_temp_prev_tags = ["%s:%s" % (tag['category_name'], tag['name']) for tag in self.get_tags_for_dynamic_obj(self.dynamic_managed_object)]
results['tag_status']['previous_tags'] = _temp_prev_tags
results['tag_status']['desired_tags'] = self.tag_names
# Check if category and tag combination exists as per user request
removed_tags_for_set = False
for tag in self.tag_names:
category_obj, category_name, tag_name = None, None, None
if ":" in tag:
# User specified category
category_name, tag_name = tag.split(":", 1)
category_obj = self.search_svc_object_by_name(self.category_service, category_name)
if not category_obj:
self.module.fail_json(msg="Unable to find the category %s" % category_name)
else:
# User specified only tag
tag_name = tag
if category_name:
tag_obj = self.get_tag_by_category(tag_name=tag_name, category_name=category_name)
else:
tag_obj = self.get_tag_by_name(tag_name=tag_name)
if not tag_obj:
self.module.fail_json(msg="Unable to find the tag %s" % tag_name)
if action in ('add', 'present'):
if tag_obj not in available_tag_obj:
# Tag is not already applied
try:
self.tag_association_svc.attach(tag_id=tag_obj.id, object_id=self.dynamic_managed_object)
changed = True
except Error as error:
self.module.fail_json(msg="%s" % self.get_error_message(error))
elif action == 'set':
# Remove all tags first
try:
if not removed_tags_for_set:
for av_tag in available_tag_obj:
self.tag_association_svc.detach(tag_id=av_tag.id, object_id=self.dynamic_managed_object)
removed_tags_for_set = True
self.tag_association_svc.attach(tag_id=tag_obj.id, object_id=self.dynamic_managed_object)
changed = True
except Error as error:
self.module.fail_json(msg="%s" % self.get_error_message(error))
elif action in ('remove', 'absent'):
if tag_obj in available_tag_obj:
try:
self.tag_association_svc.detach(tag_id=tag_obj.id, object_id=self.dynamic_managed_object)
changed = True
except Error as error:
self.module.fail_json(msg="%s" % self.get_error_message(error))
_temp_curr_tags = ["%s:%s" % (tag['category_name'], tag['name']) for tag in self.get_tags_for_dynamic_obj(self.dynamic_managed_object)]
results['tag_status']['current_tags'] = _temp_curr_tags
results['changed'] = changed
self.module.exit_json(**results)
def main():
argument_spec = VmwareRestClient.vmware_client_argument_spec()
argument_spec.update(
tag_names=dict(type='list', required=True),
state=dict(type='str', choices=['absent', 'add', 'present', 'remove', 'set'], default='add'),
object_name=dict(type='str', required=True),
object_type=dict(type='str', required=True, choices=['VirtualMachine', 'Datacenter', 'ClusterComputeResource',
'HostSystem', 'DistributedVirtualSwitch',
'DistributedVirtualPortgroup']),
)
module = AnsibleModule(argument_spec=argument_spec)
vmware_tag_manager = VmwareTagManager(module)
vmware_tag_manager.ensure_state()
if __name__ == '__main__':
main()
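The issue above asks for a way to escape colons in tag names, which the split on the first ':' in ensure_state does not currently allow. A minimal standalone sketch of how such escaping could be parsed -- this helper is hypothetical and not part of the module:
```python
import re

def split_tag_name(tag):
    """Split 'Category:Tag' style input on the first unescaped colon.

    Returns (category_name, tag_name); category_name is None when the input
    contains no unescaped colon. A backslash-escaped colon is kept literal.
    """
    parts = [p.replace('\\:', ':') for p in re.split(r'(?<!\\):', tag, maxsplit=1)]
    if len(parts) == 2:
        return parts[0], parts[1]
    return None, parts[0]

# 'Category:Tag'    -> ('Category', 'Tag')
# 'TagWith\\:Colon' -> (None, 'TagWith:Colon')
```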
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,765 |
VMware: Colon support within VMWare Tags vmware_tag_manager
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
VMware supports using colons in tags; however, the vmware_tag_manager module does not, since colons are used to delineate categories. It would be nice if there were some way to support using colons in tag names.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_tag_manager
##### ADDITIONAL INFORMATION
There should be some way to escape a colon in a tag_name so that we can continue to support categories but intentionally override that behaviour when we want a tag that contains a literal colon. Two tag examples are below: one with a category, one with an escaped colon. Feel free to use whatever escape sequence is most supportable.
``` - name: Add tags to a virtual machine
vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
validate_certs: no
tag_names:
- 'Category:Tag'
- 'TagWith\:Colon'
object_name: 'VMGuest'
object_type: VirtualMachine
state: add
delegate_to: localhost
```
Including the authors:
@Akasurde
@GBrawl
|
https://github.com/ansible/ansible/issues/65765
|
https://github.com/ansible/ansible/pull/66150
|
33d5c68887a063340355bf1a5c24ac2d66e6992b
|
7000c51c0691e176815cd0558c0b79f1f36d8c60
| 2019-12-12T12:38:32Z |
python
| 2020-02-14T21:07:10Z |
test/integration/targets/vmware_tag_manager/aliases
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,765 |
VMware: Colon support within VMWare Tags vmware_tag_manager
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
VMware supports using colons in tags; however, the vmware_tag_manager module does not, since colons are used to delineate categories. It would be nice if there were some way to support using colons in tag names.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_tag_manager
##### ADDITIONAL INFORMATION
There should be some way to escape a colon in a tag_name so that we can continue to support categories but intentionally override that behaviour when we want a tag that contains a literal colon. Two tag examples are below: one with a category, one with an escaped colon. Feel free to use whatever escape sequence is most supportable.
``` - name: Add tags to a virtual machine
vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
validate_certs: no
tag_names:
- 'Category:Tag'
- 'TagWith\:Colon'
object_name: 'VMGuest'
object_type: VirtualMachine
state: add
delegate_to: localhost
```
Including the authors:
@Akasurde
@GBrawl
|
https://github.com/ansible/ansible/issues/65765
|
https://github.com/ansible/ansible/pull/66150
|
33d5c68887a063340355bf1a5c24ac2d66e6992b
|
7000c51c0691e176815cd0558c0b79f1f36d8c60
| 2019-12-12T12:38:32Z |
python
| 2020-02-14T21:07:10Z |
test/integration/targets/vmware_tag_manager/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,765 |
VMware: Colon support within VMWare Tags vmware_tag_manager
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
VMware supports using colons in tags; however, the vmware_tag_manager module does not, since colons are used to delineate categories. It would be nice if there were some way to support using colons in tag names.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_tag_manager
##### ADDITIONAL INFORMATION
There should be some way to escape a colon in a tag_name so that we can continue to support categories but intentionally override that behaviour when we want a tag that contains a literal colon. Two tag examples are below: one with a category, one with an escaped colon. Feel free to use whatever escape sequence is most supportable.
``` - name: Add tags to a virtual machine
vmware_tag_manager:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
validate_certs: no
tag_names:
- 'Category:Tag'
- 'TagWith\:Colon'
object_name: 'VMGuest'
object_type: VirtualMachine
state: add
delegate_to: localhost
```
Including the authors:
@Akasurde
@GBrawl
|
https://github.com/ansible/ansible/issues/65765
|
https://github.com/ansible/ansible/pull/66150
|
33d5c68887a063340355bf1a5c24ac2d66e6992b
|
7000c51c0691e176815cd0558c0b79f1f36d8c60
| 2019-12-12T12:38:32Z |
python
| 2020-02-14T21:07:10Z |
test/integration/targets/vmware_tag_manager/tasks/tag_manager_dict.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,764 |
"Could not match supplied host pattern" warning printed for non-empty group before any plays
|
##### SUMMARY
`[WARNING]: Could not match supplied host pattern, ignoring: <group_name>` is printed prior to the first play for a non-empty group when using a combination of group_by and import_role/import_tasks/include_tasks. Using include_role however works correctly (no warning is displayed).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-playbook
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.4
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/vrevelas/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vrevelas/ansible-warning-bug/venv/lib/python3.6/site-packages/ansible
executable location = /home/vrevelas/ansible-warning-bug/venv/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
```
##### OS / ENVIRONMENT
Ubuntu 18.04 / Python 3.6.9
Note that this behaviour is not reproducible when using the same version of Ansible (2.9.4) and Python 2.7 as opposed to Python 3.
##### STEPS TO REPRODUCE
test.yml
```yaml
---
- hosts: localhost
tasks:
- name: Group
group_by:
key: test_{{ inventory_hostname }}
- hosts: test_localhost
tasks:
- name: Print
import_tasks: test-tasks.yml
# the below also trigger the warning - but note that it is not issued when include_role is used:
# include_tasks: test-tasks.yml
# import_role:
# name: test
```
test-tasks.yml
```yaml
- name: test
debug:
msg: hello
```
inventory
```
localhost
```
##### EXPECTED RESULTS
No warning should be printed at the beginning of the output. Replacing `import_tasks` with an `include_role` produces the expected result (no warning).
##### ACTUAL RESULTS
A false-positive warning is printed at the beginning of the output.
Note that the same version of Ansible (2.9.4) installed and run under Python 2.7.17 does not print the false positive warning.
```
ansible-playbook -i inventory test.yml
[WARNING]: Could not match supplied host pattern, ignoring: test_localhost
PLAY [localhost] ****************************************************************************************
TASK [Gathering Facts] **********************************************************************************
ok: [localhost]
TASK [Group] ********************************************************************************************
ok: [localhost]
PLAY [test_localhost] ***********************************************************************************
TASK [Gathering Facts] **********************************************************************************
ok: [localhost]
TASK [test] *********************************************************************************************
ok: [localhost] => {
"msg": "hello"
}
PLAY RECAP **********************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/66764
|
https://github.com/ansible/ansible/pull/67432
|
c45d193af4ddac6938ac1bab59deca492b5f739b
|
9b28f1f5d931b727f2a06270314f2c2a8a5494bb
| 2020-01-24T14:48:10Z |
python
| 2020-02-14T21:50:52Z |
changelogs/fragments/66764-host-pattern-warning.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,764 |
"Could not match supplied host pattern" warning printed for non-empty group before any plays
|
##### SUMMARY
`[WARNING]: Could not match supplied host pattern, ignoring: <group_name>` is printed prior to the first play for a non-empty group when using a combination of group_by and import_role/import_tasks/include_tasks. Using include_role however works correctly (no warning is displayed).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-playbook
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.4
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/vrevelas/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vrevelas/ansible-warning-bug/venv/lib/python3.6/site-packages/ansible
executable location = /home/vrevelas/ansible-warning-bug/venv/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
```
##### OS / ENVIRONMENT
Ubuntu 18.04 / Python 3.6.9
Note that this behaviour is not reproducible when using the same version of Ansible (2.9.4) and Python 2.7 as opposed to Python 3.
##### STEPS TO REPRODUCE
test.yml
```yaml
---
- hosts: localhost
tasks:
- name: Group
group_by:
key: test_{{ inventory_hostname }}
- hosts: test_localhost
tasks:
- name: Print
import_tasks: test-tasks.yml
# the below also trigger the warning - but note that it is not issued when include_role is used:
# include_tasks: test-tasks.yml
# import_role:
# name: test
```
test-tasks.yml
```yaml
- name: test
debug:
msg: hello
```
inventory
```
localhost
```
##### EXPECTED RESULTS
No warning should be printed at the beginning of the output. Replacing `import_tasks` with an `include_role` produces the expected result (no warning).
##### ACTUAL RESULTS
A false-positive warning is printed at the beginning of the output.
Note that the same version of Ansible (2.9.4) installed and run under Python 2.7.17 does not print the false positive warning.
```
ansible-playbook -i inventory test.yml
[WARNING]: Could not match supplied host pattern, ignoring: test_localhost
PLAY [localhost] ****************************************************************************************
TASK [Gathering Facts] **********************************************************************************
ok: [localhost]
TASK [Group] ********************************************************************************************
ok: [localhost]
PLAY [test_localhost] ***********************************************************************************
TASK [Gathering Facts] **********************************************************************************
ok: [localhost]
TASK [test] *********************************************************************************************
ok: [localhost] => {
"msg": "hello"
}
PLAY RECAP **********************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/66764
|
https://github.com/ansible/ansible/pull/67432
|
c45d193af4ddac6938ac1bab59deca492b5f739b
|
9b28f1f5d931b727f2a06270314f2c2a8a5494bb
| 2020-01-24T14:48:10Z |
python
| 2020-02-14T21:50:52Z |
lib/ansible/playbook/play.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleParserError, AnsibleAssertionError
from ansible.module_utils._text import to_native
from ansible.module_utils.six import string_types
from ansible.playbook.attribute import FieldAttribute
from ansible.playbook.base import Base
from ansible.playbook.block import Block
from ansible.playbook.collectionsearch import CollectionSearch
from ansible.playbook.helpers import load_list_of_blocks, load_list_of_roles
from ansible.playbook.role import Role
from ansible.playbook.taggable import Taggable
from ansible.vars.manager import preprocess_vars
from ansible.utils.display import Display
display = Display()
__all__ = ['Play']
class Play(Base, Taggable, CollectionSearch):
"""
A play is a language feature that represents a list of roles and/or
task/handler blocks to execute on a given set of hosts.
Usage:
Play.load(datastructure) -> Play
Play.something(...)
"""
# =================================================================================
_hosts = FieldAttribute(isa='list', required=True, listof=string_types, always_post_validate=True)
# Facts
_gather_facts = FieldAttribute(isa='bool', default=None, always_post_validate=True)
_gather_subset = FieldAttribute(isa='list', default=(lambda: C.DEFAULT_GATHER_SUBSET), listof=string_types, always_post_validate=True)
_gather_timeout = FieldAttribute(isa='int', default=C.DEFAULT_GATHER_TIMEOUT, always_post_validate=True)
_fact_path = FieldAttribute(isa='string', default=C.DEFAULT_FACT_PATH)
# Variable Attributes
_vars_files = FieldAttribute(isa='list', default=list, priority=99)
_vars_prompt = FieldAttribute(isa='list', default=list, always_post_validate=False)
# Role Attributes
_roles = FieldAttribute(isa='list', default=list, priority=90)
# Block (Task) Lists Attributes
_handlers = FieldAttribute(isa='list', default=list)
_pre_tasks = FieldAttribute(isa='list', default=list)
_post_tasks = FieldAttribute(isa='list', default=list)
_tasks = FieldAttribute(isa='list', default=list)
# Flag/Setting Attributes
_force_handlers = FieldAttribute(isa='bool', default=context.cliargs_deferred_get('force_handlers'), always_post_validate=True)
_max_fail_percentage = FieldAttribute(isa='percent', always_post_validate=True)
_serial = FieldAttribute(isa='list', default=list, always_post_validate=True)
_strategy = FieldAttribute(isa='string', default=C.DEFAULT_STRATEGY, always_post_validate=True)
_order = FieldAttribute(isa='string', always_post_validate=True)
# =================================================================================
def __init__(self):
super(Play, self).__init__()
self._included_conditional = None
self._included_path = None
self._removed_hosts = []
self.ROLE_CACHE = {}
self.only_tags = set(context.CLIARGS.get('tags', [])) or frozenset(('all',))
self.skip_tags = set(context.CLIARGS.get('skip_tags', []))
def __repr__(self):
return self.get_name()
def get_name(self):
''' return the name of the Play '''
return self.name
@staticmethod
def load(data, variable_manager=None, loader=None, vars=None):
if ('name' not in data or data['name'] is None) and 'hosts' in data:
if data['hosts'] is None or all(host is None for host in data['hosts']):
raise AnsibleParserError("Hosts list cannot be empty - please check your playbook")
if isinstance(data['hosts'], list):
data['name'] = ','.join(data['hosts'])
else:
data['name'] = data['hosts']
p = Play()
if vars:
p.vars = vars.copy()
return p.load_data(data, variable_manager=variable_manager, loader=loader)
def preprocess_data(self, ds):
'''
Adjusts play datastructure to cleanup old/legacy items
'''
if not isinstance(ds, dict):
raise AnsibleAssertionError('while preprocessing data (%s), ds should be a dict but was a %s' % (ds, type(ds)))
# The use of 'user' in the Play datastructure was deprecated to
# line up with the same change for Tasks, due to the fact that
# 'user' conflicted with the user module.
if 'user' in ds:
# this should never happen, but error out with a helpful message
# to the user if it does...
if 'remote_user' in ds:
raise AnsibleParserError("both 'user' and 'remote_user' are set for %s. "
"The use of 'user' is deprecated, and should be removed" % self.get_name(), obj=ds)
ds['remote_user'] = ds['user']
del ds['user']
return super(Play, self).preprocess_data(ds)
def _load_tasks(self, attr, ds):
'''
Loads a list of blocks from a list which may be mixed tasks/blocks.
Bare tasks outside of a block are given an implicit block.
'''
try:
return load_list_of_blocks(ds=ds, play=self, variable_manager=self._variable_manager, loader=self._loader)
except AssertionError as e:
raise AnsibleParserError("A malformed block was encountered while loading tasks: %s" % to_native(e), obj=self._ds, orig_exc=e)
def _load_pre_tasks(self, attr, ds):
'''
Loads a list of blocks from a list which may be mixed tasks/blocks.
Bare tasks outside of a block are given an implicit block.
'''
try:
return load_list_of_blocks(ds=ds, play=self, variable_manager=self._variable_manager, loader=self._loader)
except AssertionError as e:
raise AnsibleParserError("A malformed block was encountered while loading pre_tasks", obj=self._ds, orig_exc=e)
def _load_post_tasks(self, attr, ds):
'''
Loads a list of blocks from a list which may be mixed tasks/blocks.
Bare tasks outside of a block are given an implicit block.
'''
try:
return load_list_of_blocks(ds=ds, play=self, variable_manager=self._variable_manager, loader=self._loader)
except AssertionError as e:
raise AnsibleParserError("A malformed block was encountered while loading post_tasks", obj=self._ds, orig_exc=e)
def _load_handlers(self, attr, ds):
'''
Loads a list of blocks from a list which may be mixed handlers/blocks.
Bare handlers outside of a block are given an implicit block.
'''
try:
return self._extend_value(
self.handlers,
load_list_of_blocks(ds=ds, play=self, use_handlers=True, variable_manager=self._variable_manager, loader=self._loader),
prepend=True
)
except AssertionError as e:
raise AnsibleParserError("A malformed block was encountered while loading handlers", obj=self._ds, orig_exc=e)
def _load_roles(self, attr, ds):
'''
Loads and returns a list of RoleInclude objects from the datastructure
list of role definitions and creates the Role from those objects
'''
if ds is None:
ds = []
try:
role_includes = load_list_of_roles(ds, play=self, variable_manager=self._variable_manager,
loader=self._loader, collection_search_list=self.collections)
except AssertionError as e:
raise AnsibleParserError("A malformed role declaration was encountered.", obj=self._ds, orig_exc=e)
roles = []
for ri in role_includes:
roles.append(Role.load(ri, play=self))
self.roles[:0] = roles
return self.roles
def _load_vars_prompt(self, attr, ds):
new_ds = preprocess_vars(ds)
vars_prompts = []
if new_ds is not None:
for prompt_data in new_ds:
if 'name' not in prompt_data:
raise AnsibleParserError("Invalid vars_prompt data structure, missing 'name' key", obj=ds)
for key in prompt_data:
if key not in ('name', 'prompt', 'default', 'private', 'confirm', 'encrypt', 'salt_size', 'salt', 'unsafe'):
raise AnsibleParserError("Invalid vars_prompt data structure, found unsupported key '%s'" % key, obj=ds)
vars_prompts.append(prompt_data)
return vars_prompts
def _compile_roles(self):
'''
Handles the role compilation step, returning a flat list of tasks
with the lowest level dependencies first. For example, if a role R
has a dependency D1, which also has a dependency D2, the tasks from
D2 are merged first, followed by D1, and lastly by the tasks from
the parent role R last. This is done for all roles in the Play.
'''
block_list = []
if len(self.roles) > 0:
for r in self.roles:
# Don't insert tasks from ``import/include_role``, preventing
# duplicate execution at the wrong time
if r.from_include:
continue
block_list.extend(r.compile(play=self))
return block_list
def compile_roles_handlers(self):
'''
Handles the role handler compilation step, returning a flat list of Handlers
This is done for all roles in the Play.
'''
block_list = []
if len(self.roles) > 0:
for r in self.roles:
if r.from_include:
continue
block_list.extend(r.get_handler_blocks(play=self))
return block_list
def compile(self):
'''
Compiles and returns the task list for this play, compiled from the
roles (which are themselves compiled recursively) and/or the list of
tasks specified in the play.
'''
# create a block containing a single flush handlers meta
# task, so we can be sure to run handlers at certain points
# of the playbook execution
flush_block = Block.load(
data={'meta': 'flush_handlers'},
play=self,
variable_manager=self._variable_manager,
loader=self._loader
)
block_list = []
block_list.extend(self.pre_tasks)
block_list.append(flush_block)
block_list.extend(self._compile_roles())
block_list.extend(self.tasks)
block_list.append(flush_block)
block_list.extend(self.post_tasks)
block_list.append(flush_block)
return block_list
def get_vars(self):
return self.vars.copy()
def get_vars_files(self):
if self.vars_files is None:
return []
elif not isinstance(self.vars_files, list):
return [self.vars_files]
return self.vars_files
def get_handlers(self):
return self.handlers[:]
def get_roles(self):
return self.roles[:]
def get_tasks(self):
tasklist = []
for task in self.pre_tasks + self.tasks + self.post_tasks:
if isinstance(task, Block):
tasklist.append(task.block + task.rescue + task.always)
else:
tasklist.append(task)
return tasklist
def serialize(self):
data = super(Play, self).serialize()
roles = []
for role in self.get_roles():
roles.append(role.serialize())
data['roles'] = roles
data['included_path'] = self._included_path
return data
def deserialize(self, data):
super(Play, self).deserialize(data)
self._included_path = data.get('included_path', None)
if 'roles' in data:
role_data = data.get('roles', [])
roles = []
for role in role_data:
r = Role()
r.deserialize(role)
roles.append(r)
setattr(self, 'roles', roles)
del data['roles']
def copy(self):
new_me = super(Play, self).copy()
new_me.ROLE_CACHE = self.ROLE_CACHE.copy()
new_me._included_conditional = self._included_conditional
new_me._included_path = self._included_path
return new_me
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,764 |
"Could not match supplied host pattern" warning printed for non-empty group before any plays
|
##### SUMMARY
`[WARNING]: Could not match supplied host pattern, ignoring: <group_name>` is printed prior to the first play for a non-empty group when using a combination of group_by and import_role/import_tasks/include_tasks. Using include_role however works correctly (no warning is displayed).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-playbook
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.4
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/vrevelas/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vrevelas/ansible-warning-bug/venv/lib/python3.6/site-packages/ansible
executable location = /home/vrevelas/ansible-warning-bug/venv/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
```
##### OS / ENVIRONMENT
Ubuntu 18.04 / Python 3.6.9
Note that this behaviour is not reproducible when using the same version of Ansible (2.9.4) and Python 2.7 as opposed to Python 3.
##### STEPS TO REPRODUCE
test.yml
```yaml
---
- hosts: localhost
tasks:
- name: Group
group_by:
key: test_{{ inventory_hostname }}
- hosts: test_localhost
tasks:
- name: Print
import_tasks: test-tasks.yml
# the below also trigger the warning - but note that it is not issued when include_role is used:
# include_tasks: test-tasks.yml
# import_role:
# name: test
```
test-tasks.yml
```yaml
- name: test
debug:
msg: hello
```
inventory
```
localhost
```
##### EXPECTED RESULTS
No warning should be printed at the beginning of the output. Replacing `import_tasks` with an `include_role` produces the expected result (no warning).
##### ACTUAL RESULTS
A false-positive warning is printed at the beginning of the output.
Note that the same version of Ansible (2.9.4) installed and run under Python 2.7.17 does not print the false positive warning.
```
ansible-playbook -i inventory test.yml
[WARNING]: Could not match supplied host pattern, ignoring: test_localhost
PLAY [localhost] ****************************************************************************************
TASK [Gathering Facts] **********************************************************************************
ok: [localhost]
TASK [Group] ********************************************************************************************
ok: [localhost]
PLAY [test_localhost] ***********************************************************************************
TASK [Gathering Facts] **********************************************************************************
ok: [localhost]
TASK [test] *********************************************************************************************
ok: [localhost] => {
"msg": "hello"
}
PLAY RECAP **********************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/66764
|
https://github.com/ansible/ansible/pull/67432
|
c45d193af4ddac6938ac1bab59deca492b5f739b
|
9b28f1f5d931b727f2a06270314f2c2a8a5494bb
| 2020-01-24T14:48:10Z |
python
| 2020-02-14T21:50:52Z |
test/integration/targets/include_import/empty_group_warning/playbook.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,764 |
"Could not match supplied host pattern" warning printed for non-empty group before any plays
|
##### SUMMARY
`[WARNING]: Could not match supplied host pattern, ignoring: <group_name>` is printed prior to the first play for a non-empty group when using a combination of group_by and import_role/import_tasks/include_tasks. Using include_role however works correctly (no warning is displayed).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-playbook
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.4
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/vrevelas/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vrevelas/ansible-warning-bug/venv/lib/python3.6/site-packages/ansible
executable location = /home/vrevelas/ansible-warning-bug/venv/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
```
##### OS / ENVIRONMENT
Ubuntu 18.04 / Python 3.6.9
Note that this behaviour is not reproducible when using the same version of Ansible (2.9.4) and Python 2.7 as opposed to Python 3.
##### STEPS TO REPRODUCE
test.yml
```yaml
---
- hosts: localhost
tasks:
- name: Group
group_by:
key: test_{{ inventory_hostname }}
- hosts: test_localhost
tasks:
- name: Print
import_tasks: test-tasks.yml
# the below also trigger the warning - but note that it is not issued when include_role is used:
# include_tasks: test-tasks.yml
# import_role:
# name: test
```
test-tasks.yml
```yaml
- name: test
debug:
msg: hello
```
inventory
```
localhost
```
##### EXPECTED RESULTS
No warning should be printed at the beginning of the output. Replacing `import_tasks` with an `include_role` produces the expected result (no warning).
##### ACTUAL RESULTS
A false-positive warning is printed at the beginning of the output.
Note that the same version of Ansible (2.9.4) installed and run under Python 2.7.17 does not print the false positive warning.
```
ansible-playbook -i inventory test.yml
[WARNING]: Could not match supplied host pattern, ignoring: test_localhost
PLAY [localhost] ****************************************************************************************
TASK [Gathering Facts] **********************************************************************************
ok: [localhost]
TASK [Group] ********************************************************************************************
ok: [localhost]
PLAY [test_localhost] ***********************************************************************************
TASK [Gathering Facts] **********************************************************************************
ok: [localhost]
TASK [test] *********************************************************************************************
ok: [localhost] => {
"msg": "hello"
}
PLAY RECAP **********************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/66764
|
https://github.com/ansible/ansible/pull/67432
|
c45d193af4ddac6938ac1bab59deca492b5f739b
|
9b28f1f5d931b727f2a06270314f2c2a8a5494bb
| 2020-01-24T14:48:10Z |
python
| 2020-02-14T21:50:52Z |
test/integration/targets/include_import/empty_group_warning/tasks.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,764 |
"Could not match supplied host pattern" warning printed for non-empty group before any plays
|
##### SUMMARY
`[WARNING]: Could not match supplied host pattern, ignoring: <group_name>` is printed prior to the first play for a non-empty group when using a combination of group_by and import_role/import_tasks/include_tasks. Using include_role however works correctly (no warning is displayed).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-playbook
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.4
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/vrevelas/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vrevelas/ansible-warning-bug/venv/lib/python3.6/site-packages/ansible
executable location = /home/vrevelas/ansible-warning-bug/venv/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
```
##### OS / ENVIRONMENT
Ubuntu 18.04 / Python 3.6.9
Note that this behaviour is not reproducible when using the same version of Ansible (2.9.4) and Python 2.7 as opposed to Python 3.
##### STEPS TO REPRODUCE
test.yml
```yaml
---
- hosts: localhost
tasks:
- name: Group
group_by:
key: test_{{ inventory_hostname }}
- hosts: test_localhost
tasks:
- name: Print
import_tasks: test-tasks.yml
# the below also trigger the warning - but note that it is not issued when include_role is used:
# include_tasks: test-tasks.yml
# import_role:
# name: test
```
test-tasks.yml
```yaml
- name: test
debug:
msg: hello
```
inventory
```
localhost
```
##### EXPECTED RESULTS
No warning should be printed at the beginning of the output. Replacing `import_tasks` with an `include_role` produces the expected result (no warning).
##### ACTUAL RESULTS
A false-positive warning is printed at the beginning of the output.
Note that the same version of Ansible (2.9.4) installed and run under Python 2.7.17 does not print the false positive warning.
```
ansible-playbook -i inventory test.yml
[WARNING]: Could not match supplied host pattern, ignoring: test_localhost
PLAY [localhost] ****************************************************************************************
TASK [Gathering Facts] **********************************************************************************
ok: [localhost]
TASK [Group] ********************************************************************************************
ok: [localhost]
PLAY [test_localhost] ***********************************************************************************
TASK [Gathering Facts] **********************************************************************************
ok: [localhost]
TASK [test] *********************************************************************************************
ok: [localhost] => {
"msg": "hello"
}
PLAY RECAP **********************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/66764
|
https://github.com/ansible/ansible/pull/67432
|
c45d193af4ddac6938ac1bab59deca492b5f739b
|
9b28f1f5d931b727f2a06270314f2c2a8a5494bb
| 2020-01-24T14:48:10Z |
python
| 2020-02-14T21:50:52Z |
test/integration/targets/include_import/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_ROLES_PATH=./roles
function gen_task_files() {
for i in $(seq -f '%03g' 1 39); do
echo -e "- name: Hello Message\n debug:\n msg: Task file ${i}" > "tasks/hello/tasks-file-${i}.yml"
done
}
## Adhoc
ansible -m include_role -a name=role1 localhost
## Import (static)
# Playbook
ANSIBLE_STRATEGY='linear' ansible-playbook playbook/test_import_playbook.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook playbook/test_import_playbook.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook playbook/test_import_playbook_tags.yml -i inventory "$@" --tags canary1,canary22,validate --skip-tags skipme
# Tasks
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_import_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_import_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_import_tasks_tags.yml -i inventory "$@" --tags tasks1,canary1,validate
# Role
ANSIBLE_STRATEGY='linear' ansible-playbook role/test_import_role.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook role/test_import_role.yml -i inventory "$@"
## Include (dynamic)
# Tasks
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_tasks_tags.yml -i inventory "$@" --tags tasks1,canary1,validate
# Role
ANSIBLE_STRATEGY='linear' ansible-playbook role/test_include_role.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook role/test_include_role.yml -i inventory "$@"
## Max Recursion Depth
# https://github.com/ansible/ansible/issues/23609
ANSIBLE_STRATEGY='linear' ansible-playbook test_role_recursion.yml -i inventory "$@"
## Nested tasks
# https://github.com/ansible/ansible/issues/34782
ANSIBLE_STRATEGY='linear' ansible-playbook test_nested_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_nested_tasks.yml -i inventory "$@"
## Tons of top level include_tasks
# https://github.com/ansible/ansible/issues/36053
# Fixed by https://github.com/ansible/ansible/pull/36075
gen_task_files
ANSIBLE_STRATEGY='linear' ansible-playbook test_copious_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_copious_include_tasks.yml -i inventory "$@"
rm -f tasks/hello/*.yml
# Included tasks should inherit attrs from non-dynamic blocks in parent chain
# https://github.com/ansible/ansible/pull/38827
ANSIBLE_STRATEGY='linear' ansible-playbook test_grandparent_inheritance.yml -i inventory "$@"
# undefined_var
ANSIBLE_STRATEGY='linear' ansible-playbook undefined_var/playbook.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook undefined_var/playbook.yml -i inventory "$@"
# include_ + apply (explicit inheritance)
ANSIBLE_STRATEGY='linear' ansible-playbook apply/include_apply.yml -i inventory "$@" --tags foo
set +e
OUT=$(ANSIBLE_STRATEGY='linear' ansible-playbook apply/import_apply.yml -i inventory "$@" --tags foo 2>&1 | grep 'ERROR! Invalid options for import_tasks: apply')
set -e
if [[ -z "$OUT" ]]; then
echo "apply on import_tasks did not cause error"
exit 1
fi
# Test that duplicate items in loop are not deduped
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_include_dupe_loop.yml -i inventory "$@" | tee test_include_dupe_loop.out
test "$(grep -c '"item=foo"' test_include_dupe_loop.out)" = 3
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_dupe_loop.yml -i inventory "$@" | tee test_include_dupe_loop.out
test "$(grep -c '"item=foo"' test_include_dupe_loop.out)" = 3
ansible-playbook public_exposure/playbook.yml -i inventory "$@"
ansible-playbook public_exposure/no_bleeding.yml -i inventory "$@"
ansible-playbook public_exposure/no_overwrite_roles.yml -i inventory "$@"
# https://github.com/ansible/ansible/pull/48068
ANSIBLE_HOST_PATTERN_MISMATCH=warning ansible-playbook run_once/playbook.yml "$@"
# https://github.com/ansible/ansible/issues/48936
ansible-playbook -v handler_addressing/playbook.yml 2>&1 | tee test_handler_addressing.out
test "$(grep -E -c 'include handler task|ERROR! The requested handler '"'"'do_import'"'"' was not found' test_handler_addressing.out)" = 2
# https://github.com/ansible/ansible/issues/49969
ansible-playbook -v parent_templating/playbook.yml 2>&1 | tee test_parent_templating.out
test "$(grep -E -c 'Templating the path of the parent include_tasks failed.' test_parent_templating.out)" = 0
# https://github.com/ansible/ansible/issues/54618
ansible-playbook test_loop_var_bleed.yaml "$@"
# https://github.com/ansible/ansible/issues/56580
ansible-playbook valid_include_keywords/playbook.yml "$@"
# https://github.com/ansible/ansible/issues/64902
ansible-playbook tasks/test_allow_single_role_dup.yml 2>&1 | tee test_allow_single_role_dup.out
test "$(grep -c 'ok=3' test_allow_single_role_dup.out)" = 1
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,943 |
Redfish_command "IndicatorLed*" command execute failed on Lenovo blade server.
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
-When execute "IndicatorLed*" command on blade servers, it will show error message "Key IndicatorLED not found". That is because blade servers have two chassis. /redfish/v1/chassis/1 is for the node itself , and it has the key IndicatorLED. /redfish/v1/chassis/2 is for the chassis the node located in and it doesn't have key IndicatorLED. Ansible will execute set IndicatorLed command on every chassis it found but only return the results of the last chassis, which cause this fail.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
/lib/ansible/modules/remote_management/redfish/redfish_command.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = None
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /root/.local/lib/python2.7/site-packages/ansible
executable location = /opt/python/python27/bin/ansible
python version = 2.7.6 (default, Feb 8 2015, 07:53:59) [GCC 4.8.3 20140911 (Red Hat 4.8.3-9)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
/opt/python/python27/lib/python2.7/site-packages/cryptography/hazmat/primitives/constant_time.py:26: CryptographyDeprecationWarning: Support for your Python version is deprecated. The next version of cryptography will remove support. Please upgrade to a release (2.7.7+) that supports hmac.compare_digest as soon as possible.
utils.PersistentlyDeprecated2018,
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Redhat 7
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
/opt/python/python27/bin/ansible localhost -m redfish_command -a "category=Chassis command= IndicatorLedOn baseuri=10.245.23.209 username=USERID password= "
```
The remote server is a Lenovo blade server SN550 located in an enclosure. Executing the IndicatorLedOn command fails because the server has two chassis members: chassis/1 is the node and chassis/2 is the enclosure. chassis/2 does not have the IndicatorLED property, but Ansible executes the IndicatorLedOn command on all members it finds.
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
It should report that the command executed successfully on chassis/1 and that chassis/2 does not have the IndicatorLED property.
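Something along these lines (purely illustrative, not the module's actual return format) would make the per-chassis outcome visible:
```python
# Illustrative only: a per-chassis result shape, not what the module returns today
expected = {
    'ret': True,
    'changed': True,
    'entries': [
        {'chassis_uri': '/redfish/v1/Chassis/1', 'ret': True},
        {'chassis_uri': '/redfish/v1/Chassis/2', 'ret': False,
         'msg': 'Key IndicatorLED not found'},
    ],
}
```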
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
It returns a misleading error message: "Key IndicatorLED not found".
<!--- Paste verbatim command output between quotes -->
```paste below
"Key IndicatorLED not found".
```
|
https://github.com/ansible/ansible/issues/65943
|
https://github.com/ansible/ansible/pull/66044
|
936bd83614d3db3ee36c92e8a8d2168269ec3bc8
|
fe2a8cb1450edd568f238d9fe71c49e067e84395
| 2019-12-18T10:04:05Z |
python
| 2020-02-15T12:49:18Z |
lib/ansible/module_utils/redfish_utils.py
|
# Copyright (c) 2017-2018 Dell EMC Inc.
# GNU General Public License v3.0+ (see LICENSE or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import json
from ansible.module_utils.urls import open_url
from ansible.module_utils._text import to_text
from ansible.module_utils.six.moves import http_client
from ansible.module_utils.six.moves.urllib.error import URLError, HTTPError
GET_HEADERS = {'accept': 'application/json', 'OData-Version': '4.0'}
POST_HEADERS = {'content-type': 'application/json', 'accept': 'application/json',
'OData-Version': '4.0'}
PATCH_HEADERS = {'content-type': 'application/json', 'accept': 'application/json',
'OData-Version': '4.0'}
DELETE_HEADERS = {'accept': 'application/json', 'OData-Version': '4.0'}
DEPRECATE_MSG = 'Issuing a data modification command without specifying the '\
'ID of the target %(resource)s resource when there is more '\
'than one %(resource)s will use the first one in the '\
'collection. Use the `resource_id` option to specify the '\
'target %(resource)s ID'
class RedfishUtils(object):
def __init__(self, creds, root_uri, timeout, module, resource_id=None,
data_modification=False):
self.root_uri = root_uri
self.creds = creds
self.timeout = timeout
self.module = module
self.service_root = '/redfish/v1/'
self.resource_id = resource_id
self.data_modification = data_modification
self._init_session()
# The following functions are to send GET/POST/PATCH/DELETE requests
def get_request(self, uri):
try:
resp = open_url(uri, method="GET", headers=GET_HEADERS,
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
follow_redirects='all',
use_proxy=False, timeout=self.timeout)
data = json.loads(resp.read())
headers = dict((k.lower(), v) for (k, v) in resp.info().items())
except HTTPError as e:
msg = self._get_extended_message(e)
return {'ret': False,
'msg': "HTTP Error %s on GET request to '%s', extended message: '%s'"
% (e.code, uri, msg),
'status': e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error on GET request to '%s': '%s'"
% (uri, e.reason)}
# Almost all errors should be caught above, but just in case
except Exception as e:
return {'ret': False,
'msg': "Failed GET request to '%s': '%s'" % (uri, to_text(e))}
return {'ret': True, 'data': data, 'headers': headers}
def post_request(self, uri, pyld):
try:
resp = open_url(uri, data=json.dumps(pyld),
headers=POST_HEADERS, method="POST",
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
follow_redirects='all',
use_proxy=False, timeout=self.timeout)
except HTTPError as e:
msg = self._get_extended_message(e)
return {'ret': False,
'msg': "HTTP Error %s on POST request to '%s', extended message: '%s'"
% (e.code, uri, msg),
'status': e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error on POST request to '%s': '%s'"
% (uri, e.reason)}
# Almost all errors should be caught above, but just in case
except Exception as e:
return {'ret': False,
'msg': "Failed POST request to '%s': '%s'" % (uri, to_text(e))}
return {'ret': True, 'resp': resp}
def patch_request(self, uri, pyld):
headers = PATCH_HEADERS
r = self.get_request(uri)
if r['ret']:
# Get etag from etag header or @odata.etag property
etag = r['headers'].get('etag')
if not etag:
etag = r['data'].get('@odata.etag')
if etag:
# Make copy of headers and add If-Match header
headers = dict(headers)
headers['If-Match'] = etag
try:
resp = open_url(uri, data=json.dumps(pyld),
headers=headers, method="PATCH",
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
follow_redirects='all',
use_proxy=False, timeout=self.timeout)
except HTTPError as e:
msg = self._get_extended_message(e)
return {'ret': False,
'msg': "HTTP Error %s on PATCH request to '%s', extended message: '%s'"
% (e.code, uri, msg),
'status': e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error on PATCH request to '%s': '%s'"
% (uri, e.reason)}
# Almost all errors should be caught above, but just in case
except Exception as e:
return {'ret': False,
'msg': "Failed PATCH request to '%s': '%s'" % (uri, to_text(e))}
return {'ret': True, 'resp': resp}
def delete_request(self, uri, pyld=None):
try:
data = json.dumps(pyld) if pyld else None
resp = open_url(uri, data=data,
headers=DELETE_HEADERS, method="DELETE",
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
follow_redirects='all',
use_proxy=False, timeout=self.timeout)
except HTTPError as e:
msg = self._get_extended_message(e)
return {'ret': False,
'msg': "HTTP Error %s on DELETE request to '%s', extended message: '%s'"
% (e.code, uri, msg),
'status': e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error on DELETE request to '%s': '%s'"
% (uri, e.reason)}
# Almost all errors should be caught above, but just in case
except Exception as e:
return {'ret': False,
'msg': "Failed DELETE request to '%s': '%s'" % (uri, to_text(e))}
return {'ret': True, 'resp': resp}
@staticmethod
def _get_extended_message(error):
"""
Get Redfish ExtendedInfo message from response payload if present
:param error: an HTTPError exception
:type error: HTTPError
:return: the ExtendedInfo message if present, else standard HTTP error
"""
msg = http_client.responses.get(error.code, '')
if error.code >= 400:
try:
body = error.read().decode('utf-8')
data = json.loads(body)
ext_info = data['error']['@Message.ExtendedInfo']
msg = ext_info[0]['Message']
except Exception:
pass
return msg
def _init_session(self):
pass
def _find_accountservice_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'AccountService' not in data:
return {'ret': False, 'msg': "AccountService resource not found"}
else:
account_service = data["AccountService"]["@odata.id"]
response = self.get_request(self.root_uri + account_service)
if response['ret'] is False:
return response
data = response['data']
accounts = data['Accounts']['@odata.id']
if accounts[-1:] == '/':
accounts = accounts[:-1]
self.accounts_uri = accounts
return {'ret': True}
def _find_sessionservice_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'SessionService' not in data:
return {'ret': False, 'msg': "SessionService resource not found"}
else:
session_service = data["SessionService"]["@odata.id"]
response = self.get_request(self.root_uri + session_service)
if response['ret'] is False:
return response
data = response['data']
sessions = data['Sessions']['@odata.id']
if sessions[-1:] == '/':
sessions = sessions[:-1]
self.sessions_uri = sessions
return {'ret': True}
def _get_resource_uri_by_id(self, uris, id_prop):
for uri in uris:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
continue
data = response['data']
if id_prop == data.get('Id'):
return uri
return None
def _find_systems_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'Systems' not in data:
return {'ret': False, 'msg': "Systems resource not found"}
response = self.get_request(self.root_uri + data['Systems']['@odata.id'])
if response['ret'] is False:
return response
self.systems_uris = [
i['@odata.id'] for i in response['data'].get('Members', [])]
if not self.systems_uris:
return {
'ret': False,
'msg': "ComputerSystem's Members array is either empty or missing"}
self.systems_uri = self.systems_uris[0]
if self.data_modification:
if self.resource_id:
self.systems_uri = self._get_resource_uri_by_id(self.systems_uris,
self.resource_id)
if not self.systems_uri:
return {
'ret': False,
'msg': "System resource %s not found" % self.resource_id}
elif len(self.systems_uris) > 1:
self.module.deprecate(DEPRECATE_MSG % {'resource': 'System'},
version='2.14')
return {'ret': True}
def _find_updateservice_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'UpdateService' not in data:
return {'ret': False, 'msg': "UpdateService resource not found"}
else:
update = data["UpdateService"]["@odata.id"]
self.update_uri = update
response = self.get_request(self.root_uri + update)
if response['ret'] is False:
return response
data = response['data']
self.firmware_uri = self.software_uri = None
if 'FirmwareInventory' in data:
self.firmware_uri = data['FirmwareInventory'][u'@odata.id']
if 'SoftwareInventory' in data:
self.software_uri = data['SoftwareInventory'][u'@odata.id']
return {'ret': True}
def _find_chassis_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'Chassis' not in data:
return {'ret': False, 'msg': "Chassis resource not found"}
chassis = data["Chassis"]["@odata.id"]
response = self.get_request(self.root_uri + chassis)
if response['ret'] is False:
return response
self.chassis_uris = [
i['@odata.id'] for i in response['data'].get('Members', [])]
if not self.chassis_uris:
return {'ret': False,
'msg': "Chassis Members array is either empty or missing"}
self.chassis_uri = self.chassis_uris[0]
if self.data_modification:
if self.resource_id:
self.chassis_uri = self._get_resource_uri_by_id(self.chassis_uris,
self.resource_id)
if not self.chassis_uri:
return {
'ret': False,
'msg': "Chassis resource %s not found" % self.resource_id}
elif len(self.chassis_uris) > 1:
self.module.deprecate(DEPRECATE_MSG % {'resource': 'Chassis'},
version='2.14')
return {'ret': True}
def _find_managers_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'Managers' not in data:
return {'ret': False, 'msg': "Manager resource not found"}
manager = data["Managers"]["@odata.id"]
response = self.get_request(self.root_uri + manager)
if response['ret'] is False:
return response
self.manager_uris = [
i['@odata.id'] for i in response['data'].get('Members', [])]
if not self.manager_uris:
return {'ret': False,
'msg': "Managers Members array is either empty or missing"}
self.manager_uri = self.manager_uris[0]
if self.data_modification:
if self.resource_id:
self.manager_uri = self._get_resource_uri_by_id(self.manager_uris,
self.resource_id)
if not self.manager_uri:
return {
'ret': False,
'msg': "Manager resource %s not found" % self.resource_id}
elif len(self.manager_uris) > 1:
self.module.deprecate(DEPRECATE_MSG % {'resource': 'Manager'},
version='2.14')
return {'ret': True}
def get_logs(self):
log_svcs_uri_list = []
list_of_logs = []
properties = ['Severity', 'Created', 'EntryType', 'OemRecordFormat',
'Message', 'MessageId', 'MessageArgs']
# Find LogService
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'LogServices' not in data:
return {'ret': False, 'msg': "LogServices resource not found"}
# Find all entries in LogServices
logs_uri = data["LogServices"]["@odata.id"]
response = self.get_request(self.root_uri + logs_uri)
if response['ret'] is False:
return response
data = response['data']
for log_svcs_entry in data.get('Members', []):
response = self.get_request(self.root_uri + log_svcs_entry[u'@odata.id'])
if response['ret'] is False:
return response
_data = response['data']
if 'Entries' in _data:
log_svcs_uri_list.append(_data['Entries'][u'@odata.id'])
# For each entry in LogServices, get log name and all log entries
for log_svcs_uri in log_svcs_uri_list:
logs = {}
list_of_log_entries = []
response = self.get_request(self.root_uri + log_svcs_uri)
if response['ret'] is False:
return response
data = response['data']
logs['Description'] = data.get('Description',
'Collection of log entries')
# Get all log entries for each type of log found
for logEntry in data.get('Members', []):
entry = {}
for prop in properties:
if prop in logEntry:
entry[prop] = logEntry.get(prop)
if entry:
list_of_log_entries.append(entry)
log_name = log_svcs_uri.split('/')[-1]
logs[log_name] = list_of_log_entries
list_of_logs.append(logs)
# list_of_logs[logs{list_of_log_entries[entry{}]}]
return {'ret': True, 'entries': list_of_logs}
def clear_logs(self):
# Find LogService
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'LogServices' not in data:
return {'ret': False, 'msg': "LogServices resource not found"}
# Find all entries in LogServices
logs_uri = data["LogServices"]["@odata.id"]
response = self.get_request(self.root_uri + logs_uri)
if response['ret'] is False:
return response
data = response['data']
for log_svcs_entry in data[u'Members']:
response = self.get_request(self.root_uri + log_svcs_entry["@odata.id"])
if response['ret'] is False:
return response
_data = response['data']
# Check to make sure option is available, otherwise error is ugly
if "Actions" in _data:
if "#LogService.ClearLog" in _data[u"Actions"]:
self.post_request(self.root_uri + _data[u"Actions"]["#LogService.ClearLog"]["target"], {})
if response['ret'] is False:
return response
return {'ret': True}
def aggregate(self, func, uri_list, uri_name):
ret = True
entries = []
for uri in uri_list:
inventory = func(uri)
ret = inventory.pop('ret') and ret
if 'entries' in inventory:
entries.append(({uri_name: uri},
inventory['entries']))
return dict(ret=ret, entries=entries)
def aggregate_chassis(self, func):
return self.aggregate(func, self.chassis_uris, 'chassis_uri')
def aggregate_managers(self, func):
return self.aggregate(func, self.manager_uris, 'manager_uri')
def aggregate_systems(self, func):
return self.aggregate(func, self.systems_uris, 'system_uri')
def get_storage_controller_inventory(self, systems_uri):
result = {}
controller_list = []
controller_results = []
# Get these entries, but does not fail if not found
properties = ['CacheSummary', 'FirmwareVersion', 'Identifiers',
'Location', 'Manufacturer', 'Model', 'Name',
'PartNumber', 'SerialNumber', 'SpeedGbps', 'Status']
key = "StorageControllers"
# Find Storage service
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
if 'Storage' not in data:
return {'ret': False, 'msg': "Storage resource not found"}
# Get a list of all storage controllers and build respective URIs
storage_uri = data['Storage']["@odata.id"]
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
# Loop through Members and their StorageControllers
# and gather properties from each StorageController
if data[u'Members']:
for storage_member in data[u'Members']:
storage_member_uri = storage_member[u'@odata.id']
response = self.get_request(self.root_uri + storage_member_uri)
data = response['data']
if key in data:
controller_list = data[key]
for controller in controller_list:
controller_result = {}
for property in properties:
if property in controller:
controller_result[property] = controller[property]
controller_results.append(controller_result)
result['entries'] = controller_results
return result
else:
return {'ret': False, 'msg': "Storage resource not found"}
def get_multi_storage_controller_inventory(self):
return self.aggregate_systems(self.get_storage_controller_inventory)
def get_disk_inventory(self, systems_uri):
result = {'entries': []}
controller_list = []
# Get these entries, but does not fail if not found
properties = ['BlockSizeBytes', 'CapableSpeedGbs', 'CapacityBytes',
'EncryptionAbility', 'EncryptionStatus',
'FailurePredicted', 'HotspareType', 'Id', 'Identifiers',
'Manufacturer', 'MediaType', 'Model', 'Name',
'PartNumber', 'PhysicalLocation', 'Protocol', 'Revision',
'RotationSpeedRPM', 'SerialNumber', 'Status']
# Find Storage service
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
if 'SimpleStorage' not in data and 'Storage' not in data:
return {'ret': False, 'msg': "SimpleStorage and Storage resource \
not found"}
if 'Storage' in data:
# Get a list of all storage controllers and build respective URIs
storage_uri = data[u'Storage'][u'@odata.id']
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if data[u'Members']:
for controller in data[u'Members']:
controller_list.append(controller[u'@odata.id'])
for c in controller_list:
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
controller_name = 'Controller 1'
if 'StorageControllers' in data:
sc = data['StorageControllers']
if sc:
if 'Name' in sc[0]:
controller_name = sc[0]['Name']
else:
sc_id = sc[0].get('Id', '1')
controller_name = 'Controller %s' % sc_id
drive_results = []
if 'Drives' in data:
for device in data[u'Drives']:
disk_uri = self.root_uri + device[u'@odata.id']
response = self.get_request(disk_uri)
data = response['data']
drive_result = {}
for property in properties:
if property in data:
if data[property] is not None:
drive_result[property] = data[property]
drive_results.append(drive_result)
drives = {'Controller': controller_name,
'Drives': drive_results}
result["entries"].append(drives)
if 'SimpleStorage' in data:
# Get a list of all storage controllers and build respective URIs
storage_uri = data["SimpleStorage"]["@odata.id"]
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for controller in data[u'Members']:
controller_list.append(controller[u'@odata.id'])
for c in controller_list:
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
if 'Name' in data:
controller_name = data['Name']
else:
sc_id = data.get('Id', '1')
controller_name = 'Controller %s' % sc_id
drive_results = []
for device in data[u'Devices']:
drive_result = {}
for property in properties:
if property in device:
drive_result[property] = device[property]
drive_results.append(drive_result)
drives = {'Controller': controller_name,
'Drives': drive_results}
result["entries"].append(drives)
return result
def get_multi_disk_inventory(self):
return self.aggregate_systems(self.get_disk_inventory)
def get_volume_inventory(self, systems_uri):
result = {'entries': []}
controller_list = []
volume_list = []
# Get these entries, but does not fail if not found
properties = ['Id', 'Name', 'RAIDType', 'VolumeType', 'BlockSizeBytes',
'Capacity', 'CapacityBytes', 'CapacitySources',
'Encrypted', 'EncryptionTypes', 'Identifiers',
'Operations', 'OptimumIOSizeBytes', 'AccessCapabilities',
'AllocatedPools', 'Status']
# Find Storage service
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
if 'SimpleStorage' not in data and 'Storage' not in data:
return {'ret': False, 'msg': "SimpleStorage and Storage resource \
not found"}
if 'Storage' in data:
# Get a list of all storage controllers and build respective URIs
storage_uri = data[u'Storage'][u'@odata.id']
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if data.get('Members'):
for controller in data[u'Members']:
controller_list.append(controller[u'@odata.id'])
for c in controller_list:
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
controller_name = 'Controller 1'
if 'StorageControllers' in data:
sc = data['StorageControllers']
if sc:
if 'Name' in sc[0]:
controller_name = sc[0]['Name']
else:
sc_id = sc[0].get('Id', '1')
controller_name = 'Controller %s' % sc_id
volume_results = []
if 'Volumes' in data:
# Get a list of all volumes and build respective URIs
volumes_uri = data[u'Volumes'][u'@odata.id']
response = self.get_request(self.root_uri + volumes_uri)
data = response['data']
if data.get('Members'):
for volume in data[u'Members']:
volume_list.append(volume[u'@odata.id'])
for v in volume_list:
uri = self.root_uri + v
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
volume_result = {}
for property in properties:
if property in data:
if data[property] is not None:
volume_result[property] = data[property]
# Get related Drives Id
drive_id_list = []
if 'Links' in data:
if 'Drives' in data[u'Links']:
for link in data[u'Links'][u'Drives']:
drive_id_link = link[u'@odata.id']
drive_id = drive_id_link.split("/")[-1]
drive_id_list.append({'Id': drive_id})
volume_result['Linked_drives'] = drive_id_list
volume_results.append(volume_result)
volumes = {'Controller': controller_name,
'Volumes': volume_results}
result["entries"].append(volumes)
else:
return {'ret': False, 'msg': "Storage resource not found"}
return result
def get_multi_volume_inventory(self):
return self.aggregate_systems(self.get_volume_inventory)
def restart_manager_gracefully(self):
result = {}
key = "Actions"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
action_uri = data[key]["#Manager.Reset"]["target"]
payload = {'ResetType': 'GracefulRestart'}
response = self.post_request(self.root_uri + action_uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def manage_indicator_led(self, command):
result = {}
key = 'IndicatorLED'
payloads = {'IndicatorLedOn': 'Lit', 'IndicatorLedOff': 'Off', "IndicatorLedBlink": 'Blinking'}
result = {}
for chassis_uri in self.chassis_uris:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
if command in payloads.keys():
payload = {'IndicatorLED': payloads[command]}
response = self.patch_request(self.root_uri + chassis_uri, payload)
if response['ret'] is False:
return response
else:
return {'ret': False, 'msg': 'Invalid command'}
return result
def _map_reset_type(self, reset_type, allowable_values):
equiv_types = {
'On': 'ForceOn',
'ForceOn': 'On',
'ForceOff': 'GracefulShutdown',
'GracefulShutdown': 'ForceOff',
'GracefulRestart': 'ForceRestart',
'ForceRestart': 'GracefulRestart'
}
if reset_type in allowable_values:
return reset_type
if reset_type not in equiv_types:
return reset_type
mapped_type = equiv_types[reset_type]
if mapped_type in allowable_values:
return mapped_type
return reset_type
def manage_system_power(self, command):
key = "Actions"
reset_type_values = ['On', 'ForceOff', 'GracefulShutdown',
'GracefulRestart', 'ForceRestart', 'Nmi',
'ForceOn', 'PushPowerButton', 'PowerCycle']
# command should be PowerOn, PowerForceOff, etc.
if not command.startswith('Power'):
return {'ret': False, 'msg': 'Invalid Command (%s)' % command}
reset_type = command[5:]
# map Reboot to a ResetType that does a reboot
if reset_type == 'Reboot':
reset_type = 'GracefulRestart'
if reset_type not in reset_type_values:
return {'ret': False, 'msg': 'Invalid Command (%s)' % command}
# read the system resource and get the current power state
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
data = response['data']
power_state = data.get('PowerState')
# if power is already in target state, nothing to do
if power_state == "On" and reset_type in ['On', 'ForceOn']:
return {'ret': True, 'changed': False}
if power_state == "Off" and reset_type in ['GracefulShutdown', 'ForceOff']:
return {'ret': True, 'changed': False}
# get the #ComputerSystem.Reset Action and target URI
if key not in data or '#ComputerSystem.Reset' not in data[key]:
return {'ret': False, 'msg': 'Action #ComputerSystem.Reset not found'}
reset_action = data[key]['#ComputerSystem.Reset']
if 'target' not in reset_action:
return {'ret': False,
'msg': 'target URI missing from Action #ComputerSystem.Reset'}
action_uri = reset_action['target']
# get AllowableValues from ActionInfo
allowable_values = None
if '@Redfish.ActionInfo' in reset_action:
action_info_uri = reset_action.get('@Redfish.ActionInfo')
response = self.get_request(self.root_uri + action_info_uri)
if response['ret'] is True:
data = response['data']
if 'Parameters' in data:
params = data['Parameters']
for param in params:
if param.get('Name') == 'ResetType':
allowable_values = param.get('AllowableValues')
break
# fallback to @Redfish.AllowableValues annotation
if allowable_values is None:
allowable_values = reset_action.get('[email protected]', [])
# map ResetType to an allowable value if needed
if reset_type not in allowable_values:
reset_type = self._map_reset_type(reset_type, allowable_values)
# define payload
payload = {'ResetType': reset_type}
# POST to Action URI
response = self.post_request(self.root_uri + action_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True}
def _find_account_uri(self, username=None, acct_id=None):
if not any((username, acct_id)):
return {'ret': False, 'msg':
'Must provide either account_id or account_username'}
response = self.get_request(self.root_uri + self.accounts_uri)
if response['ret'] is False:
return response
data = response['data']
uris = [a.get('@odata.id') for a in data.get('Members', []) if
a.get('@odata.id')]
for uri in uris:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
continue
data = response['data']
headers = response['headers']
if username:
if username == data.get('UserName'):
return {'ret': True, 'data': data,
'headers': headers, 'uri': uri}
if acct_id:
if acct_id == data.get('Id'):
return {'ret': True, 'data': data,
'headers': headers, 'uri': uri}
return {'ret': False, 'no_match': True, 'msg':
'No account with the given account_id or account_username found'}
def _find_empty_account_slot(self):
response = self.get_request(self.root_uri + self.accounts_uri)
if response['ret'] is False:
return response
data = response['data']
uris = [a.get('@odata.id') for a in data.get('Members', []) if
a.get('@odata.id')]
if uris:
# first slot may be reserved, so move to end of list
uris += [uris.pop(0)]
for uri in uris:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
continue
data = response['data']
headers = response['headers']
if data.get('UserName') == "" and not data.get('Enabled', True):
return {'ret': True, 'data': data,
'headers': headers, 'uri': uri}
return {'ret': False, 'no_match': True, 'msg':
'No empty account slot found'}
def list_users(self):
result = {}
# listing all users has always been slower than other operations, why?
user_list = []
users_results = []
# Get these entries, but does not fail if not found
properties = ['Id', 'Name', 'UserName', 'RoleId', 'Locked', 'Enabled']
response = self.get_request(self.root_uri + self.accounts_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for users in data.get('Members', []):
user_list.append(users[u'@odata.id']) # user_list[] are URIs
# for each user, get details
for uri in user_list:
user = {}
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
user[property] = data[property]
users_results.append(user)
result["entries"] = users_results
return result
def add_user_via_patch(self, user):
if user.get('account_id'):
# If Id slot specified, use it
response = self._find_account_uri(acct_id=user.get('account_id'))
else:
# Otherwise find first empty slot
response = self._find_empty_account_slot()
if not response['ret']:
return response
uri = response['uri']
payload = {}
if user.get('account_username'):
payload['UserName'] = user.get('account_username')
if user.get('account_password'):
payload['Password'] = user.get('account_password')
if user.get('account_roleid'):
payload['RoleId'] = user.get('account_roleid')
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def add_user(self, user):
if not user.get('account_username'):
return {'ret': False, 'msg':
'Must provide account_username for AddUser command'}
response = self._find_account_uri(username=user.get('account_username'))
if response['ret']:
# account_username already exists, nothing to do
return {'ret': True, 'changed': False}
response = self.get_request(self.root_uri + self.accounts_uri)
if not response['ret']:
return response
headers = response['headers']
if 'allow' in headers:
methods = [m.strip() for m in headers.get('allow').split(',')]
if 'POST' not in methods:
# if Allow header present and POST not listed, add via PATCH
return self.add_user_via_patch(user)
payload = {}
if user.get('account_username'):
payload['UserName'] = user.get('account_username')
if user.get('account_password'):
payload['Password'] = user.get('account_password')
if user.get('account_roleid'):
payload['RoleId'] = user.get('account_roleid')
response = self.post_request(self.root_uri + self.accounts_uri, payload)
if not response['ret']:
if response.get('status') == 405:
# if POST returned a 405, try to add via PATCH
return self.add_user_via_patch(user)
else:
return response
return {'ret': True}
def enable_user(self, user):
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
data = response['data']
if data.get('Enabled', True):
# account already enabled, nothing to do
return {'ret': True, 'changed': False}
payload = {'Enabled': True}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def delete_user_via_patch(self, user, uri=None, data=None):
if not uri:
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
data = response['data']
if data and data.get('UserName') == '' and not data.get('Enabled', False):
# account UserName already cleared, nothing to do
return {'ret': True, 'changed': False}
payload = {'UserName': ''}
if data.get('Enabled', False):
payload['Enabled'] = False
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def delete_user(self, user):
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
if response.get('no_match'):
# account does not exist, nothing to do
return {'ret': True, 'changed': False}
else:
# some error encountered
return response
uri = response['uri']
headers = response['headers']
data = response['data']
if 'allow' in headers:
methods = [m.strip() for m in headers.get('allow').split(',')]
if 'DELETE' not in methods:
# if Allow header present and DELETE not listed, del via PATCH
return self.delete_user_via_patch(user, uri=uri, data=data)
response = self.delete_request(self.root_uri + uri)
if not response['ret']:
if response.get('status') == 405:
# if DELETE returned a 405, try to delete via PATCH
return self.delete_user_via_patch(user, uri=uri, data=data)
else:
return response
return {'ret': True}
def disable_user(self, user):
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
data = response['data']
if not data.get('Enabled'):
# account already disabled, nothing to do
return {'ret': True, 'changed': False}
payload = {'Enabled': False}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def update_user_role(self, user):
if not user.get('account_roleid'):
return {'ret': False, 'msg':
'Must provide account_roleid for UpdateUserRole command'}
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
data = response['data']
if data.get('RoleId') == user.get('account_roleid'):
# account already has RoleId , nothing to do
return {'ret': True, 'changed': False}
payload = {'RoleId': user.get('account_roleid')}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def update_user_password(self, user):
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
payload = {'Password': user['account_password']}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def update_user_name(self, user):
if not user.get('account_updatename'):
return {'ret': False, 'msg':
'Must provide account_updatename for UpdateUserName command'}
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
payload = {'UserName': user['account_updatename']}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def update_accountservice_properties(self, user):
if user.get('account_properties') is None:
return {'ret': False, 'msg':
'Must provide account_properties for UpdateAccountServiceProperties command'}
account_properties = user.get('account_properties')
# Find AccountService
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'AccountService' not in data:
return {'ret': False, 'msg': "AccountService resource not found"}
accountservice_uri = data["AccountService"]["@odata.id"]
# Check support or not
response = self.get_request(self.root_uri + accountservice_uri)
if response['ret'] is False:
return response
data = response['data']
for property_name in account_properties.keys():
if property_name not in data:
return {'ret': False, 'msg':
'property %s not supported' % property_name}
# if properties is already matched, nothing to do
need_change = False
for property_name in account_properties.keys():
if account_properties[property_name] != data[property_name]:
need_change = True
break
if not need_change:
return {'ret': True, 'changed': False, 'msg': "AccountService properties already set"}
payload = account_properties
response = self.patch_request(self.root_uri + accountservice_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified AccountService properties"}
def get_sessions(self):
result = {}
# listing all users has always been slower than other operations, why?
session_list = []
sessions_results = []
# Get these entries, but does not fail if not found
properties = ['Description', 'Id', 'Name', 'UserName']
response = self.get_request(self.root_uri + self.sessions_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for sessions in data[u'Members']:
session_list.append(sessions[u'@odata.id']) # session_list[] are URIs
# for each session, get details
for uri in session_list:
session = {}
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
session[property] = data[property]
sessions_results.append(session)
result["entries"] = sessions_results
return result
def get_firmware_update_capabilities(self):
result = {}
response = self.get_request(self.root_uri + self.update_uri)
if response['ret'] is False:
return response
result['ret'] = True
result['entries'] = {}
data = response['data']
if "Actions" in data:
actions = data['Actions']
if len(actions) > 0:
for key in actions.keys():
action = actions.get(key)
if 'title' in action:
title = action['title']
else:
title = key
result['entries'][title] = action.get('[email protected]',
["Key [email protected] not found"])
else:
return {'ret': "False", 'msg': "Actions list is empty."}
else:
return {'ret': "False", 'msg': "Key Actions not found."}
return result
def _software_inventory(self, uri):
result = {}
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
result['entries'] = []
for member in data[u'Members']:
uri = self.root_uri + member[u'@odata.id']
# Get details for each software or firmware member
response = self.get_request(uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
software = {}
# Get these standard properties if present
for key in ['Name', 'Id', 'Status', 'Version', 'Updateable',
'SoftwareId', 'LowestSupportedVersion', 'Manufacturer',
'ReleaseDate']:
if key in data:
software[key] = data.get(key)
result['entries'].append(software)
return result
def get_firmware_inventory(self):
if self.firmware_uri is None:
return {'ret': False, 'msg': 'No FirmwareInventory resource found'}
else:
return self._software_inventory(self.firmware_uri)
def get_software_inventory(self):
if self.software_uri is None:
return {'ret': False, 'msg': 'No SoftwareInventory resource found'}
else:
return self._software_inventory(self.software_uri)
def get_bios_attributes(self, systems_uri):
result = {}
bios_attributes = {}
key = "Bios"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
bios_uri = data[key]["@odata.id"]
response = self.get_request(self.root_uri + bios_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for attribute in data[u'Attributes'].items():
bios_attributes[attribute[0]] = attribute[1]
result["entries"] = bios_attributes
return result
def get_multi_bios_attributes(self):
return self.aggregate_systems(self.get_bios_attributes)
def _get_boot_options_dict(self, boot):
# Get these entries from BootOption, if present
properties = ['DisplayName', 'BootOptionReference']
# Retrieve BootOptions if present
if 'BootOptions' in boot and '@odata.id' in boot['BootOptions']:
boot_options_uri = boot['BootOptions']["@odata.id"]
# Get BootOptions resource
response = self.get_request(self.root_uri + boot_options_uri)
if response['ret'] is False:
return {}
data = response['data']
# Retrieve Members array
if 'Members' not in data:
return {}
members = data['Members']
else:
members = []
# Build dict of BootOptions keyed by BootOptionReference
boot_options_dict = {}
for member in members:
if '@odata.id' not in member:
return {}
boot_option_uri = member['@odata.id']
response = self.get_request(self.root_uri + boot_option_uri)
if response['ret'] is False:
return {}
data = response['data']
if 'BootOptionReference' not in data:
return {}
boot_option_ref = data['BootOptionReference']
# fetch the props to display for this boot device
boot_props = {}
for prop in properties:
if prop in data:
boot_props[prop] = data[prop]
boot_options_dict[boot_option_ref] = boot_props
return boot_options_dict
def get_boot_order(self, systems_uri):
result = {}
# Retrieve System resource
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
# Confirm needed Boot properties are present
if 'Boot' not in data or 'BootOrder' not in data['Boot']:
return {'ret': False, 'msg': "Key BootOrder not found"}
boot = data['Boot']
boot_order = boot['BootOrder']
boot_options_dict = self._get_boot_options_dict(boot)
# Build boot device list
boot_device_list = []
for ref in boot_order:
boot_device_list.append(
boot_options_dict.get(ref, {'BootOptionReference': ref}))
result["entries"] = boot_device_list
return result
def get_multi_boot_order(self):
return self.aggregate_systems(self.get_boot_order)
def get_boot_override(self, systems_uri):
result = {}
properties = ["BootSourceOverrideEnabled", "BootSourceOverrideTarget",
"BootSourceOverrideMode", "UefiTargetBootSourceOverride", "[email protected]"]
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if 'Boot' not in data:
return {'ret': False, 'msg': "Key Boot not found"}
boot = data['Boot']
boot_overrides = {}
if "BootSourceOverrideEnabled" in boot:
if boot["BootSourceOverrideEnabled"] is not False:
for property in properties:
if property in boot:
if boot[property] is not None:
boot_overrides[property] = boot[property]
else:
return {'ret': False, 'msg': "No boot override is enabled."}
result['entries'] = boot_overrides
return result
def get_multi_boot_override(self):
return self.aggregate_systems(self.get_boot_override)
def set_bios_default_settings(self):
result = {}
key = "Bios"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
bios_uri = data[key]["@odata.id"]
# Extract proper URI
response = self.get_request(self.root_uri + bios_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
reset_bios_settings_uri = data["Actions"]["#Bios.ResetBios"]["target"]
response = self.post_request(self.root_uri + reset_bios_settings_uri, {})
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Set BIOS to default settings"}
def set_one_time_boot_device(self, bootdevice, uefi_target, boot_next):
result = {}
key = "Boot"
if not bootdevice:
return {'ret': False,
'msg': "bootdevice option required for SetOneTimeBoot"}
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
boot = data[key]
annotation = '[email protected]'
if annotation in boot:
allowable_values = boot[annotation]
if isinstance(allowable_values, list) and bootdevice not in allowable_values:
return {'ret': False,
'msg': "Boot device %s not in list of allowable values (%s)" %
(bootdevice, allowable_values)}
# read existing values
enabled = boot.get('BootSourceOverrideEnabled')
target = boot.get('BootSourceOverrideTarget')
cur_uefi_target = boot.get('UefiTargetBootSourceOverride')
cur_boot_next = boot.get('BootNext')
if bootdevice == 'UefiTarget':
if not uefi_target:
return {'ret': False,
'msg': "uefi_target option required to SetOneTimeBoot for UefiTarget"}
if enabled == 'Once' and target == bootdevice and uefi_target == cur_uefi_target:
# If properties are already set, no changes needed
return {'ret': True, 'changed': False}
payload = {
'Boot': {
'BootSourceOverrideEnabled': 'Once',
'BootSourceOverrideTarget': bootdevice,
'UefiTargetBootSourceOverride': uefi_target
}
}
elif bootdevice == 'UefiBootNext':
if not boot_next:
return {'ret': False,
'msg': "boot_next option required to SetOneTimeBoot for UefiBootNext"}
if enabled == 'Once' and target == bootdevice and boot_next == cur_boot_next:
# If properties are already set, no changes needed
return {'ret': True, 'changed': False}
payload = {
'Boot': {
'BootSourceOverrideEnabled': 'Once',
'BootSourceOverrideTarget': bootdevice,
'BootNext': boot_next
}
}
else:
if enabled == 'Once' and target == bootdevice:
# If properties are already set, no changes needed
return {'ret': True, 'changed': False}
payload = {
'Boot': {
'BootSourceOverrideEnabled': 'Once',
'BootSourceOverrideTarget': bootdevice
}
}
response = self.patch_request(self.root_uri + self.systems_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True}
def set_bios_attributes(self, attributes):
result = {}
key = "Bios"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
bios_uri = data[key]["@odata.id"]
# Extract proper URI
response = self.get_request(self.root_uri + bios_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
# Make a copy of the attributes dict
attrs_to_patch = dict(attributes)
# Check the attributes
for attr in attributes:
if attr not in data[u'Attributes']:
return {'ret': False, 'msg': "BIOS attribute %s not found" % attr}
# If already set to requested value, remove it from PATCH payload
if data[u'Attributes'][attr] == attributes[attr]:
del attrs_to_patch[attr]
# Return success w/ changed=False if no attrs need to be changed
if not attrs_to_patch:
return {'ret': True, 'changed': False,
'msg': "BIOS attributes already set"}
# Get the SettingsObject URI
set_bios_attr_uri = data["@Redfish.Settings"]["SettingsObject"]["@odata.id"]
# Construct payload and issue PATCH command
payload = {"Attributes": attrs_to_patch}
response = self.patch_request(self.root_uri + set_bios_attr_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified BIOS attribute"}
def set_boot_order(self, boot_list):
if not boot_list:
return {'ret': False,
'msg': "boot_order list required for SetBootOrder command"}
systems_uri = self.systems_uri
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
# Confirm needed Boot properties are present
if 'Boot' not in data or 'BootOrder' not in data['Boot']:
return {'ret': False, 'msg': "Key BootOrder not found"}
boot = data['Boot']
boot_order = boot['BootOrder']
boot_options_dict = self._get_boot_options_dict(boot)
# validate boot_list against BootOptionReferences if available
if boot_options_dict:
boot_option_references = boot_options_dict.keys()
for ref in boot_list:
if ref not in boot_option_references:
return {'ret': False,
'msg': "BootOptionReference %s not found in BootOptions" % ref}
# If requested BootOrder is already set, nothing to do
if boot_order == boot_list:
return {'ret': True, 'changed': False,
'msg': "BootOrder already set to %s" % boot_list}
payload = {
'Boot': {
'BootOrder': boot_list
}
}
response = self.patch_request(self.root_uri + systems_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "BootOrder set"}
def set_default_boot_order(self):
systems_uri = self.systems_uri
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
# get the #ComputerSystem.SetDefaultBootOrder Action and target URI
action = '#ComputerSystem.SetDefaultBootOrder'
if 'Actions' not in data or action not in data['Actions']:
return {'ret': False, 'msg': 'Action %s not found' % action}
if 'target' not in data['Actions'][action]:
return {'ret': False,
'msg': 'target URI missing from Action %s' % action}
action_uri = data['Actions'][action]['target']
# POST to Action URI
payload = {}
response = self.post_request(self.root_uri + action_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True,
'msg': "BootOrder set to default"}
def get_chassis_inventory(self):
result = {}
chassis_results = []
# Get these entries, but does not fail if not found
properties = ['ChassisType', 'PartNumber', 'AssetTag',
'Manufacturer', 'IndicatorLED', 'SerialNumber', 'Model']
# Go through list
for chassis_uri in self.chassis_uris:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
chassis_result = {}
for property in properties:
if property in data:
chassis_result[property] = data[property]
chassis_results.append(chassis_result)
result["entries"] = chassis_results
return result
def get_fan_inventory(self):
result = {}
fan_results = []
key = "Thermal"
# Get these entries, but does not fail if not found
properties = ['FanName', 'Reading', 'ReadingUnits', 'Status']
# Go through list
for chassis_uri in self.chassis_uris:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key in data:
# match: found an entry for "Thermal" information = fans
thermal_uri = data[key]["@odata.id"]
response = self.get_request(self.root_uri + thermal_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for device in data[u'Fans']:
fan = {}
for property in properties:
if property in device:
fan[property] = device[property]
fan_results.append(fan)
result["entries"] = fan_results
return result
def get_chassis_power(self):
result = {}
key = "Power"
# Get these entries, but does not fail if not found
properties = ['Name', 'PowerAllocatedWatts',
'PowerAvailableWatts', 'PowerCapacityWatts',
'PowerConsumedWatts', 'PowerMetrics',
'PowerRequestedWatts', 'RelatedItem', 'Status']
chassis_power_results = []
# Go through list
for chassis_uri in self.chassis_uris:
chassis_power_result = {}
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key in data:
response = self.get_request(self.root_uri + data[key]['@odata.id'])
data = response['data']
if 'PowerControl' in data:
if len(data['PowerControl']) > 0:
data = data['PowerControl'][0]
for property in properties:
if property in data:
chassis_power_result[property] = data[property]
else:
return {'ret': False, 'msg': 'Key PowerControl not found.'}
chassis_power_results.append(chassis_power_result)
else:
return {'ret': False, 'msg': 'Key Power not found.'}
result['entries'] = chassis_power_results
return result
def get_chassis_thermals(self):
result = {}
sensors = []
key = "Thermal"
# Get these entries, but does not fail if not found
properties = ['Name', 'PhysicalContext', 'UpperThresholdCritical',
'UpperThresholdFatal', 'UpperThresholdNonCritical',
'LowerThresholdCritical', 'LowerThresholdFatal',
'LowerThresholdNonCritical', 'MaxReadingRangeTemp',
'MinReadingRangeTemp', 'ReadingCelsius', 'RelatedItem',
'SensorNumber']
# Go through list
for chassis_uri in self.chassis_uris:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key in data:
thermal_uri = data[key]["@odata.id"]
response = self.get_request(self.root_uri + thermal_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if "Temperatures" in data:
for sensor in data[u'Temperatures']:
sensor_result = {}
for property in properties:
if property in sensor:
if sensor[property] is not None:
sensor_result[property] = sensor[property]
sensors.append(sensor_result)
if sensors is None:
return {'ret': False, 'msg': 'Key Temperatures was not found.'}
result['entries'] = sensors
return result
def get_cpu_inventory(self, systems_uri):
result = {}
cpu_list = []
cpu_results = []
key = "Processors"
# Get these entries, but does not fail if not found
properties = ['Id', 'Manufacturer', 'Model', 'MaxSpeedMHz', 'TotalCores',
'TotalThreads', 'Status']
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
processors_uri = data[key]["@odata.id"]
# Get a list of all CPUs and build respective URIs
response = self.get_request(self.root_uri + processors_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for cpu in data[u'Members']:
cpu_list.append(cpu[u'@odata.id'])
for c in cpu_list:
cpu = {}
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
cpu[property] = data[property]
cpu_results.append(cpu)
result["entries"] = cpu_results
return result
def get_multi_cpu_inventory(self):
return self.aggregate_systems(self.get_cpu_inventory)
def get_memory_inventory(self, systems_uri):
result = {}
memory_list = []
memory_results = []
key = "Memory"
# Get these entries, but does not fail if not found
properties = ['SerialNumber', 'MemoryDeviceType', 'PartNuber',
'MemoryLocation', 'RankCount', 'CapacityMiB', 'OperatingMemoryModes', 'Status', 'Manufacturer', 'Name']
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
memory_uri = data[key]["@odata.id"]
# Get a list of all DIMMs and build respective URIs
response = self.get_request(self.root_uri + memory_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for dimm in data[u'Members']:
memory_list.append(dimm[u'@odata.id'])
for m in memory_list:
dimm = {}
uri = self.root_uri + m
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
if "Status" in data:
if "State" in data["Status"]:
if data["Status"]["State"] == "Absent":
continue
else:
continue
for property in properties:
if property in data:
dimm[property] = data[property]
memory_results.append(dimm)
result["entries"] = memory_results
return result
def get_multi_memory_inventory(self):
return self.aggregate_systems(self.get_memory_inventory)
def get_nic_inventory(self, resource_uri):
result = {}
nic_list = []
nic_results = []
key = "EthernetInterfaces"
# Get these entries, but does not fail if not found
properties = ['Description', 'FQDN', 'IPv4Addresses', 'IPv6Addresses',
'NameServers', 'MACAddress', 'PermanentMACAddress',
'SpeedMbps', 'MTUSize', 'AutoNeg', 'Status']
response = self.get_request(self.root_uri + resource_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
ethernetinterfaces_uri = data[key]["@odata.id"]
# Get a list of all network controllers and build respective URIs
response = self.get_request(self.root_uri + ethernetinterfaces_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for nic in data[u'Members']:
nic_list.append(nic[u'@odata.id'])
for n in nic_list:
nic = {}
uri = self.root_uri + n
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
nic[property] = data[property]
nic_results.append(nic)
result["entries"] = nic_results
return result
def get_multi_nic_inventory(self, resource_type):
ret = True
entries = []
# Given resource_type, use the proper URI
if resource_type == 'Systems':
resource_uris = self.systems_uris
elif resource_type == 'Manager':
resource_uris = self.manager_uris
for resource_uri in resource_uris:
inventory = self.get_nic_inventory(resource_uri)
ret = inventory.pop('ret') and ret
if 'entries' in inventory:
entries.append(({'resource_uri': resource_uri},
inventory['entries']))
return dict(ret=ret, entries=entries)
def get_virtualmedia(self, resource_uri):
result = {}
virtualmedia_list = []
virtualmedia_results = []
key = "VirtualMedia"
# Get these entries, but do not fail if not found
properties = ['Description', 'ConnectedVia', 'Id', 'MediaTypes',
'Image', 'ImageName', 'Name', 'WriteProtected',
'TransferMethod', 'TransferProtocolType']
response = self.get_request(self.root_uri + resource_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
virtualmedia_uri = data[key]["@odata.id"]
# Get a list of all virtual media and build respective URIs
response = self.get_request(self.root_uri + virtualmedia_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for virtualmedia in data[u'Members']:
virtualmedia_list.append(virtualmedia[u'@odata.id'])
for n in virtualmedia_list:
virtualmedia = {}
uri = self.root_uri + n
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
virtualmedia[property] = data[property]
virtualmedia_results.append(virtualmedia)
result["entries"] = virtualmedia_results
return result
def get_multi_virtualmedia(self):
ret = True
entries = []
resource_uris = self.manager_uris
for resource_uri in resource_uris:
virtualmedia = self.get_virtualmedia(resource_uri)
ret = virtualmedia.pop('ret') and ret
if 'entries' in virtualmedia:
entries.append(({'resource_uri': resource_uri},
virtualmedia['entries']))
return dict(ret=ret, entries=entries)
def get_psu_inventory(self):
result = {}
psu_list = []
psu_results = []
key = "PowerSupplies"
# Get these entries, but do not fail if not found
properties = ['Name', 'Model', 'SerialNumber', 'PartNumber', 'Manufacturer',
'FirmwareVersion', 'PowerCapacityWatts', 'PowerSupplyType',
'Status']
# Get a list of all Chassis and build URIs, then get all PowerSupplies
# from each Power entry in the Chassis
chassis_uri_list = self.chassis_uris
for chassis_uri in chassis_uri_list:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if 'Power' in data:
power_uri = data[u'Power'][u'@odata.id']
else:
continue
response = self.get_request(self.root_uri + power_uri)
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
psu_list = data[key]
for psu in psu_list:
psu_not_present = False
psu_data = {}
for property in properties:
if property in psu:
if psu[property] is not None:
if property == 'Status':
if 'State' in psu[property]:
if psu[property]['State'] == 'Absent':
psu_not_present = True
psu_data[property] = psu[property]
if psu_not_present:
continue
psu_results.append(psu_data)
result["entries"] = psu_results
if not result["entries"]:
return {'ret': False, 'msg': "No PowerSupply objects found"}
return result
def get_multi_psu_inventory(self):
return self.aggregate_systems(self.get_psu_inventory)
def get_system_inventory(self, systems_uri):
result = {}
inventory = {}
# Get these entries, but do not fail if not found
properties = ['Status', 'HostName', 'PowerState', 'Model', 'Manufacturer',
'PartNumber', 'SystemType', 'AssetTag', 'ServiceTag',
'SerialNumber', 'SKU', 'BiosVersion', 'MemorySummary',
'ProcessorSummary', 'TrustedModules']
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for property in properties:
if property in data:
inventory[property] = data[property]
result["entries"] = inventory
return result
def get_multi_system_inventory(self):
return self.aggregate_systems(self.get_system_inventory)
def get_network_protocols(self):
result = {}
service_result = {}
# Find NetworkProtocol
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'NetworkProtocol' not in data:
return {'ret': False, 'msg': "NetworkProtocol resource not found"}
networkprotocol_uri = data["NetworkProtocol"]["@odata.id"]
response = self.get_request(self.root_uri + networkprotocol_uri)
if response['ret'] is False:
return response
data = response['data']
protocol_services = ['SNMP', 'VirtualMedia', 'Telnet', 'SSDP', 'IPMI', 'SSH',
'KVMIP', 'NTP', 'HTTP', 'HTTPS', 'DHCP', 'DHCPv6', 'RDP',
'RFB']
for protocol_service in protocol_services:
if protocol_service in data.keys():
service_result[protocol_service] = data[protocol_service]
result['ret'] = True
result["entries"] = service_result
return result
def set_network_protocols(self, manager_services):
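# manager_services is expected to map service names to property dicts, e.g. (illustrative):
#   {'SNMP': {'ProtocolEnabled': True, 'Port': 161}, 'SSH': {'ProtocolEnabled': True}}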
# Check input data validity
protocol_services = ['SNMP', 'VirtualMedia', 'Telnet', 'SSDP', 'IPMI', 'SSH',
'KVMIP', 'NTP', 'HTTP', 'HTTPS', 'DHCP', 'DHCPv6', 'RDP',
'RFB']
protocol_state_onlist = ['true', 'True', True, 'on', 1]
protocol_state_offlist = ['false', 'False', False, 'off', 0]
payload = {}
for service_name in manager_services.keys():
if service_name not in protocol_services:
return {'ret': False, 'msg': "Service name %s is invalid" % service_name}
payload[service_name] = {}
for service_property in manager_services[service_name].keys():
value = manager_services[service_name][service_property]
if service_property in ['ProtocolEnabled', 'protocolenabled']:
if value in protocol_state_onlist:
payload[service_name]['ProtocolEnabled'] = True
elif value in protocol_state_offlist:
payload[service_name]['ProtocolEnabled'] = False
else:
return {'ret': False, 'msg': "Value of property %s is invalid" % service_property}
elif service_property in ['port', 'Port']:
if isinstance(value, int):
payload[service_name]['Port'] = value
elif isinstance(value, str) and value.isdigit():
payload[service_name]['Port'] = int(value)
else:
return {'ret': False, 'msg': "Value of property %s is invalid" % service_property}
else:
payload[service_name][service_property] = value
# Find NetworkProtocol
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'NetworkProtocol' not in data:
return {'ret': False, 'msg': "NetworkProtocol resource not found"}
networkprotocol_uri = data["NetworkProtocol"]["@odata.id"]
# Check service property support or not
response = self.get_request(self.root_uri + networkprotocol_uri)
if response['ret'] is False:
return response
data = response['data']
for service_name in payload.keys():
if service_name not in data:
return {'ret': False, 'msg': "%s service not supported" % service_name}
for service_property in payload[service_name].keys():
if service_property not in data[service_name]:
return {'ret': False, 'msg': "%s property for %s service not supported" % (service_property, service_name)}
# if the protocol is already set, nothing to do
need_change = False
for service_name in payload.keys():
for service_property in payload[service_name].keys():
value = payload[service_name][service_property]
if value != data[service_name][service_property]:
need_change = True
break
if not need_change:
return {'ret': True, 'changed': False, 'msg': "Manager NetworkProtocol services already set"}
response = self.patch_request(self.root_uri + networkprotocol_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified Manager NetworkProtocol services"}
@staticmethod
def to_singular(resource_name):
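# Illustrative: 'PowerSupplies' -> 'PowerSupply', 'Fans' -> 'Fan', 'Memory' -> 'Memory'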
if resource_name.endswith('ies'):
resource_name = resource_name[:-3] + 'y'
elif resource_name.endswith('s'):
resource_name = resource_name[:-1]
return resource_name
def get_health_resource(self, subsystem, uri, health, expanded):
status = 'Status'
if expanded:
d = expanded
else:
r = self.get_request(self.root_uri + uri)
if r.get('ret'):
d = r.get('data')
else:
return
if 'Members' in d: # collections case
for m in d.get('Members'):
u = m.get('@odata.id')
r = self.get_request(self.root_uri + u)
if r.get('ret'):
p = r.get('data')
if p:
e = {self.to_singular(subsystem.lower()) + '_uri': u,
status: p.get(status,
"Status not available")}
health[subsystem].append(e)
else: # non-collections case
e = {self.to_singular(subsystem.lower()) + '_uri': uri,
status: d.get(status,
"Status not available")}
health[subsystem].append(e)
def get_health_subsystem(self, subsystem, data, health):
if subsystem in data:
sub = data.get(subsystem)
if isinstance(sub, list):
for r in sub:
if '@odata.id' in r:
uri = r.get('@odata.id')
expanded = None
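# A '#' fragment in the URI plus extra keys means the member was returned inline
# (already expanded), so it can be used directly without another GET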
if '#' in uri and len(r) > 1:
expanded = r
self.get_health_resource(subsystem, uri, health, expanded)
elif isinstance(sub, dict):
if '@odata.id' in sub:
uri = sub.get('@odata.id')
self.get_health_resource(subsystem, uri, health, None)
elif 'Members' in data:
for m in data.get('Members'):
u = m.get('@odata.id')
r = self.get_request(self.root_uri + u)
if r.get('ret'):
d = r.get('data')
self.get_health_subsystem(subsystem, d, health)
def get_health_report(self, category, uri, subsystems):
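# Collects the top-level Status of one System/Chassis/Manager resource plus the Status of each
# requested subsystem; dotted names such as 'Thermal.Fans' or 'Links.PCIeDevices' are resolved
# relative to the top-level resource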
result = {}
health = {}
status = 'Status'
# Get health status of top level resource
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
health[category] = {status: data.get(status, "Status not available")}
# Get health status of subsystems
for sub in subsystems:
d = None
if sub.startswith('Links.'): # ex: Links.PCIeDevices
sub = sub[len('Links.'):]
d = data.get('Links', {})
elif '.' in sub: # ex: Thermal.Fans
p, sub = sub.split('.')
u = data.get(p, {}).get('@odata.id')
if u:
r = self.get_request(self.root_uri + u)
if r['ret']:
d = r['data']
if not d:
continue
else: # ex: Memory
d = data
health[sub] = []
self.get_health_subsystem(sub, d, health)
if not health[sub]:
del health[sub]
result["entries"] = health
return result
def get_system_health_report(self, systems_uri):
subsystems = ['Processors', 'Memory', 'SimpleStorage', 'Storage',
'EthernetInterfaces', 'NetworkInterfaces.NetworkPorts',
'NetworkInterfaces.NetworkDeviceFunctions']
return self.get_health_report('System', systems_uri, subsystems)
def get_multi_system_health_report(self):
return self.aggregate_systems(self.get_system_health_report)
def get_chassis_health_report(self, chassis_uri):
subsystems = ['Power.PowerSupplies', 'Thermal.Fans',
'Links.PCIeDevices']
return self.get_health_report('Chassis', chassis_uri, subsystems)
def get_multi_chassis_health_report(self):
return self.aggregate_chassis(self.get_chassis_health_report)
def get_manager_health_report(self, manager_uri):
subsystems = []
return self.get_health_report('Manager', manager_uri, subsystems)
def get_multi_manager_health_report(self):
return self.aggregate_managers(self.get_manager_health_report)
def set_manager_nic(self, nic_addr, nic_config):
# Get EthernetInterface collection
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'EthernetInterfaces' not in data:
return {'ret': False, 'msg': "EthernetInterfaces resource not found"}
ethernetinterfaces_uri = data["EthernetInterfaces"]["@odata.id"]
response = self.get_request(self.root_uri + ethernetinterfaces_uri)
if response['ret'] is False:
return response
data = response['data']
uris = [a.get('@odata.id') for a in data.get('Members', []) if
a.get('@odata.id')]
# Find target EthernetInterface
target_ethernet_uri = None
target_ethernet_current_setting = None
if nic_addr == 'null':
# Find root_uri matched EthernetInterface when nic_addr is not specified
nic_addr = (self.root_uri).split('/')[-1]
nic_addr = nic_addr.split(':')[0] # split port if existing
for uri in uris:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
if '"' + nic_addr + '"' in str(data) or "'" + nic_addr + "'" in str(data):
target_ethernet_uri = uri
target_ethernet_current_setting = data
break
if target_ethernet_uri is None:
return {'ret': False, 'msg': "No matched EthernetInterface found under Manager"}
# Convert input to payload and check validity
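# nic_config maps EthernetInterface properties to desired values, e.g. (illustrative):
#   {'HostName': 'bmc01', 'DHCPv4': {'DHCPEnabled': True}}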
payload = {}
for property in nic_config.keys():
value = nic_config[property]
if property not in target_ethernet_current_setting:
return {'ret': False, 'msg': "Property %s in nic_config is invalid" % property}
if isinstance(value, dict):
if isinstance(target_ethernet_current_setting[property], dict):
payload[property] = value
elif isinstance(target_ethernet_current_setting[property], list):
payload[property] = list()
payload[property].append(value)
else:
return {'ret': False, 'msg': "Value of property %s in nic_config is invalid" % property}
else:
payload[property] = value
# If no need change, nothing to do. If error detected, report it
need_change = False
for property in payload.keys():
set_value = payload[property]
cur_value = target_ethernet_current_setting[property]
# type is simple(not dict/list)
if not isinstance(set_value, dict) and not isinstance(set_value, list):
if set_value != cur_value:
need_change = True
# type is dict
if isinstance(set_value, dict):
for subprop in payload[property].keys():
if subprop not in target_ethernet_current_setting[property]:
return {'ret': False, 'msg': "Sub-property %s in nic_config is invalid" % subprop}
sub_set_value = payload[property][subprop]
sub_cur_value = target_ethernet_current_setting[property][subprop]
if sub_set_value != sub_cur_value:
need_change = True
# type is list
if isinstance(set_value, list):
for i in range(len(set_value)):
for subprop in payload[property][i].keys():
if subprop not in target_ethernet_current_setting[property][i]:
return {'ret': False, 'msg': "Sub-property %s in nic_config is invalid" % subprop}
sub_set_value = payload[property][i][subprop]
sub_cur_value = target_ethernet_current_setting[property][i][subprop]
if sub_set_value != sub_cur_value:
need_change = True
if not need_change:
return {'ret': True, 'changed': False, 'msg': "Manager NIC already set"}
response = self.patch_request(self.root_uri + target_ethernet_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified Manager NIC"}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,716 |
ec2_asg: Add MaxInstanceLifetime support
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Add support for `MaxInstanceLifetime` option to AWS ec2_asg module.
AWS feature introduction link: https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-ec2-auto-scaling-supports-max-instance-lifetime/
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ec2_asg
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Usage example:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- ec2_asg:
name: myasg
region: us-east-1
launch_config_name: my_new_lc
min_size: 1
max_size: 5
desired_capacity: 3
max_instance_lifetime: 604800 # seconds
```
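A minimal sketch (not the actual module implementation) of how the option could be passed straight through to the AWS API; the `client` object and values below are purely illustrative:
```python
import boto3

client = boto3.client('autoscaling', region_name='us-east-1')
client.update_auto_scaling_group(
    AutoScalingGroupName='myasg',
    MaxInstanceLifetime=604800,  # seconds
)
```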
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/66716
|
https://github.com/ansible/ansible/pull/66863
|
d2f4d305ee4175cc0315a705824b168b3096e06a
|
f98874e4f98837e4b9868780b19cf6614b00282a
| 2020-01-23T12:18:38Z |
python
| 2020-02-15T12:56:39Z |
changelogs/fragments/66863-ec2_asg-max_instance_lifetime-and-honor-wait-on-replace.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,716 |
ec2_asg: Add MaxInstanceLifetime support
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Add support for `MaxInstanceLifetime` option to AWS ec2_asg module.
AWS feature introduction link: https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-ec2-auto-scaling-supports-max-instance-lifetime/
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ec2_asg
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Usage example:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- ec2_asg:
name: myasg
region: us-east-1
launch_config_name: my_new_lc
min_size: 1
max_size: 5
desired_capacity: 3
max_instance_lifetime: 604800 # seconds
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/66716
|
https://github.com/ansible/ansible/pull/66863
|
d2f4d305ee4175cc0315a705824b168b3096e06a
|
f98874e4f98837e4b9868780b19cf6614b00282a
| 2020-01-23T12:18:38Z |
python
| 2020-02-15T12:56:39Z |
lib/ansible/modules/cloud/amazon/ec2_asg.py
|
#!/usr/bin/python
# This file is part of Ansible
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'community'}
DOCUMENTATION = """
---
module: ec2_asg
short_description: Create or delete AWS AutoScaling Groups (ASGs)
description:
- Can create or delete AWS AutoScaling Groups.
- Can be used with the M(ec2_lc) module to manage Launch Configurations.
version_added: "1.6"
author: "Gareth Rushgrove (@garethr)"
requirements: [ "boto3", "botocore" ]
options:
state:
description:
- Register or deregister the instance.
choices: ['present', 'absent']
default: present
type: str
name:
description:
- Unique name for group to be created or deleted.
required: true
type: str
load_balancers:
description:
- List of ELB names to use for the group. Use for classic load balancers.
type: list
elements: str
target_group_arns:
description:
- List of target group ARNs to use for the group. Use for application load balancers.
version_added: "2.4"
type: list
elements: str
availability_zones:
description:
- List of availability zone names in which to create the group.
- Defaults to all the availability zones in the region if I(vpc_zone_identifier) is not set.
type: list
elements: str
launch_config_name:
description:
- Name of the Launch configuration to use for the group. See the M(ec2_lc) module for managing these.
- If unspecified then the current group value will be used. One of I(launch_config_name) or I(launch_template) must be provided.
type: str
launch_template:
description:
- Dictionary describing the Launch Template to use.
suboptions:
version:
description:
- The version number of the launch template to use.
- Defaults to latest version if not provided.
type: str
launch_template_name:
description:
- The name of the launch template. Only one of I(launch_template_name) or I(launch_template_id) is required.
type: str
launch_template_id:
description:
- The id of the launch template. Only one of I(launch_template_name) or I(launch_template_id) is required.
type: str
type: dict
version_added: "2.8"
min_size:
description:
- Minimum number of instances in group, if unspecified then the current group value will be used.
type: int
max_size:
description:
- Maximum number of instances in group, if unspecified then the current group value will be used.
type: int
mixed_instances_policy:
description:
- A mixed instance policy to use for the ASG.
- Only used when the ASG is configured to use a Launch Template (I(launch_template)).
- 'See also U(https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-autoscaling-autoscalinggroup-mixedinstancespolicy.html)'
required: false
version_added: "2.10"
suboptions:
instance_types:
description:
- A list of instance_types.
type: list
elements: str
type: dict
placement_group:
description:
- Physical location of your cluster placement group created in Amazon EC2.
version_added: "2.3"
type: str
desired_capacity:
description:
- Desired number of instances in group, if unspecified then the current group value will be used.
type: int
replace_all_instances:
description:
- In a rolling fashion, replace all instances that used the old launch configuration with one from the new launch configuration.
It increases the ASG size by I(replace_batch_size), waits for the new instances to be up and running.
After that, it terminates a batch of old instances, waits for the replacements, and repeats, until all old instances are replaced.
Once that's done the ASG size is reduced back to the expected size.
version_added: "1.8"
default: false
type: bool
replace_batch_size:
description:
- Number of instances you'd like to replace at a time. Used with I(replace_all_instances).
required: false
version_added: "1.8"
default: 1
type: int
replace_instances:
description:
- List of I(instance_ids) belonging to the named AutoScalingGroup that you would like to terminate and be replaced with instances
matching the current launch configuration.
version_added: "1.8"
type: list
elements: str
lc_check:
description:
- Check to make sure instances that are being replaced with I(replace_instances) do not already have the current I(launch_config).
version_added: "1.8"
default: true
type: bool
lt_check:
description:
- Check to make sure instances that are being replaced with I(replace_instances) do not already have the current
I(launch_template) or I(launch_template) I(version).
version_added: "2.8"
default: true
type: bool
vpc_zone_identifier:
description:
- List of VPC subnets to use.
type: list
elements: str
tags:
description:
- A list of tags to add to the Auto Scale Group.
- Optional key is I(propagate_at_launch), which defaults to true.
- When I(propagate_at_launch) is true the tags will be propagated to the Instances created.
version_added: "1.7"
type: list
elements: dict
health_check_period:
description:
- Length of time in seconds after a new EC2 instance comes into service that Auto Scaling starts checking its health.
required: false
default: 300
version_added: "1.7"
type: int
health_check_type:
description:
- The service you want the health status from, Amazon EC2 or Elastic Load Balancer.
required: false
default: EC2
version_added: "1.7"
choices: ['EC2', 'ELB']
type: str
default_cooldown:
description:
- The number of seconds after a scaling activity completes before another can begin.
default: 300
version_added: "2.0"
type: int
wait_timeout:
description:
- How long to wait for instances to become viable when replaced. If you experience the error "Waited too long for ELB instances to be healthy",
try increasing this value.
default: 300
type: int
version_added: "1.8"
wait_for_instances:
description:
- Wait for the ASG instances to be in a ready state before exiting. If instances are behind an ELB, it will wait until the ELB determines all
instances have a lifecycle_state of "InService" and a health_status of "Healthy".
version_added: "1.9"
default: true
type: bool
termination_policies:
description:
- An ordered list of criteria used for selecting instances to be removed from the Auto Scaling group when reducing capacity.
- Using I(termination_policies=Default) when modifying an existing AutoScalingGroup will result in the existing policy being retained
instead of being changed to C(Default).
- 'Valid values include: C(Default), C(OldestInstance), C(NewestInstance), C(OldestLaunchConfiguration), C(ClosestToNextInstanceHour)'
- 'Full documentation of valid values can be found in the AWS documentation:'
- 'U(https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-termination.html#custom-termination-policy)'
default: Default
version_added: "2.0"
type: list
elements: str
notification_topic:
description:
- A SNS topic ARN to send auto scaling notifications to.
version_added: "2.2"
type: str
notification_types:
description:
- A list of auto scaling events to trigger notifications on.
default:
- 'autoscaling:EC2_INSTANCE_LAUNCH'
- 'autoscaling:EC2_INSTANCE_LAUNCH_ERROR'
- 'autoscaling:EC2_INSTANCE_TERMINATE'
- 'autoscaling:EC2_INSTANCE_TERMINATE_ERROR'
required: false
version_added: "2.2"
type: list
elements: str
suspend_processes:
description:
- A list of scaling processes to suspend.
- 'Valid values include:'
- C(Launch), C(Terminate), C(HealthCheck), C(ReplaceUnhealthy), C(AZRebalance), C(AlarmNotification), C(ScheduledActions), C(AddToLoadBalancer)
- 'Full documentation of valid values can be found in the AWS documentation:'
- 'U(https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html)'
default: []
version_added: "2.3"
type: list
elements: str
metrics_collection:
description:
- Enable ASG metrics collection.
type: bool
default: false
version_added: "2.6"
metrics_granularity:
description:
- When I(metrics_collection=true) this will determine the granularity of metrics collected by CloudWatch.
default: "1Minute"
version_added: "2.6"
type: str
metrics_list:
description:
- List of autoscaling metrics to collect when I(metrics_collection=true).
default:
- 'GroupMinSize'
- 'GroupMaxSize'
- 'GroupDesiredCapacity'
- 'GroupInServiceInstances'
- 'GroupPendingInstances'
- 'GroupStandbyInstances'
- 'GroupTerminatingInstances'
- 'GroupTotalInstances'
version_added: "2.6"
type: list
elements: str
extends_documentation_fragment:
- aws
- ec2
"""
EXAMPLES = '''
# Basic configuration with Launch Configuration
- ec2_asg:
name: special
load_balancers: [ 'lb1', 'lb2' ]
availability_zones: [ 'eu-west-1a', 'eu-west-1b' ]
launch_config_name: 'lc-1'
min_size: 1
max_size: 10
desired_capacity: 5
vpc_zone_identifier: [ 'subnet-abcd1234', 'subnet-1a2b3c4d' ]
tags:
- environment: production
propagate_at_launch: no
# Rolling ASG Updates
# Below is an example of how to assign a new launch config to an ASG and terminate old instances.
#
# All instances in "myasg" that do not have the launch configuration named "my_new_lc" will be terminated in
# a rolling fashion with instances using the current launch configuration, "my_new_lc".
#
# This could also be considered a rolling deploy of a pre-baked AMI.
#
# If this is a newly created group, the instances will not be replaced since all instances
# will have the current launch configuration.
- name: create launch config
ec2_lc:
name: my_new_lc
image_id: ami-lkajsf
key_name: mykey
region: us-east-1
security_groups: sg-23423
instance_type: m1.small
assign_public_ip: yes
- ec2_asg:
name: myasg
launch_config_name: my_new_lc
health_check_period: 60
health_check_type: ELB
replace_all_instances: yes
min_size: 5
max_size: 5
desired_capacity: 5
region: us-east-1
# To only replace a couple of instances instead of all of them, supply a list
# to "replace_instances":
- ec2_asg:
name: myasg
launch_config_name: my_new_lc
health_check_period: 60
health_check_type: ELB
replace_instances:
- i-b345231
- i-24c2931
min_size: 5
max_size: 5
desired_capacity: 5
region: us-east-1
# Basic Configuration with Launch Template
- ec2_asg:
name: special
load_balancers: [ 'lb1', 'lb2' ]
availability_zones: [ 'eu-west-1a', 'eu-west-1b' ]
launch_template:
version: '1'
launch_template_name: 'lt-example'
launch_template_id: 'lt-123456'
min_size: 1
max_size: 10
desired_capacity: 5
vpc_zone_identifier: [ 'subnet-abcd1234', 'subnet-1a2b3c4d' ]
tags:
- environment: production
propagate_at_launch: no
# Basic Configuration with Launch Template using mixed instance policy
- ec2_asg:
name: special
load_balancers: [ 'lb1', 'lb2' ]
availability_zones: [ 'eu-west-1a', 'eu-west-1b' ]
launch_template:
version: '1'
launch_template_name: 'lt-example'
launch_template_id: 'lt-123456'
mixed_instances_policy:
instance_types:
- t3a.large
- t3.large
- t2.large
min_size: 1
max_size: 10
desired_capacity: 5
vpc_zone_identifier: [ 'subnet-abcd1234', 'subnet-1a2b3c4d' ]
tags:
- environment: production
propagate_at_launch: no
'''
RETURN = '''
---
auto_scaling_group_name:
description: The unique name of the auto scaling group
returned: success
type: str
sample: "myasg"
auto_scaling_group_arn:
description: The unique ARN of the autoscaling group
returned: success
type: str
sample: "arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:6a09ad6d-eeee-1234-b987-ee123ced01ad:autoScalingGroupName/myasg"
availability_zones:
description: The availability zones for the auto scaling group
returned: success
type: list
sample: [
"us-east-1d"
]
created_time:
description: Timestamp of create time of the auto scaling group
returned: success
type: str
sample: "2017-11-08T14:41:48.272000+00:00"
default_cooldown:
description: The default cooldown time in seconds.
returned: success
type: int
sample: 300
desired_capacity:
description: The number of EC2 instances that should be running in this group.
returned: success
type: int
sample: 3
healthcheck_period:
description: Length of time in seconds after a new EC2 instance comes into service that Auto Scaling starts checking its health.
returned: success
type: int
sample: 30
healthcheck_type:
description: The service you want the health status from, one of "EC2" or "ELB".
returned: success
type: str
sample: "ELB"
healthy_instances:
description: Number of instances in a healthy state
returned: success
type: int
sample: 5
in_service_instances:
description: Number of instances in service
returned: success
type: int
sample: 3
instance_facts:
description: Dictionary of EC2 instances and their status as it relates to the ASG.
returned: success
type: dict
sample: {
"i-0123456789012": {
"health_status": "Healthy",
"launch_config_name": "public-webapp-production-1",
"lifecycle_state": "InService"
}
}
instances:
description: List of instance IDs in the ASG
returned: success
type: list
sample: [
"i-0123456789012"
]
launch_config_name:
description: >
Name of launch configuration associated with the ASG. Same as launch_configuration_name,
provided for compatibility with ec2_asg module.
returned: success
type: str
sample: "public-webapp-production-1"
load_balancers:
description: List of load balancers names attached to the ASG.
returned: success
type: list
sample: ["elb-webapp-prod"]
max_size:
description: Maximum size of group
returned: success
type: int
sample: 3
min_size:
description: Minimum size of group
returned: success
type: int
sample: 1
mixed_instance_policy:
description: Returns the list of instance types if a mixed instance policy is set.
returned: success
type: list
sample: ["t3.micro", "t3a.micro"]
pending_instances:
description: Number of instances in pending state
returned: success
type: int
sample: 1
tags:
description: List of tags for the ASG, and whether or not each tag propagates to instances at launch.
returned: success
type: list
sample: [
{
"key": "Name",
"value": "public-webapp-production-1",
"resource_id": "public-webapp-production-1",
"resource_type": "auto-scaling-group",
"propagate_at_launch": "true"
},
{
"key": "env",
"value": "production",
"resource_id": "public-webapp-production-1",
"resource_type": "auto-scaling-group",
"propagate_at_launch": "true"
}
]
target_group_arns:
description: List of ARNs of the target groups that the ASG populates
returned: success
type: list
sample: [
"arn:aws:elasticloadbalancing:ap-southeast-2:123456789012:targetgroup/target-group-host-hello/1a2b3c4d5e6f1a2b",
"arn:aws:elasticloadbalancing:ap-southeast-2:123456789012:targetgroup/target-group-path-world/abcd1234abcd1234"
]
target_group_names:
description: List of names of the target groups that the ASG populates
returned: success
type: list
sample: [
"target-group-host-hello",
"target-group-path-world"
]
termination_policies:
description: A list of termination policies for the group.
returned: success
type: str
sample: ["Default"]
unhealthy_instances:
description: Number of instances in an unhealthy state
returned: success
type: int
sample: 0
viable_instances:
description: Number of instances in a viable state
returned: success
type: int
sample: 1
vpc_zone_identifier:
description: VPC zone ID / subnet id for the auto scaling group
returned: success
type: str
sample: "subnet-a31ef45f"
metrics_collection:
description: List of enabled AutoScalingGroup metrics
returned: success
type: list
sample: [
{
"Granularity": "1Minute",
"Metric": "GroupInServiceInstances"
}
]
'''
import time
import traceback
from ansible.module_utils._text import to_native
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ec2 import boto3_conn, ec2_argument_spec, HAS_BOTO3, camel_dict_to_snake_dict, get_aws_connection_info, AWSRetry
try:
import botocore
except ImportError:
pass # will be detected by imported HAS_BOTO3
from ansible.module_utils.aws.core import AnsibleAWSModule
ASG_ATTRIBUTES = ('AvailabilityZones', 'DefaultCooldown', 'DesiredCapacity',
'HealthCheckGracePeriod', 'HealthCheckType', 'LaunchConfigurationName',
'LoadBalancerNames', 'MaxSize', 'MinSize', 'AutoScalingGroupName', 'PlacementGroup',
'TerminationPolicies', 'VPCZoneIdentifier')
INSTANCE_ATTRIBUTES = ('instance_id', 'health_status', 'lifecycle_state', 'launch_config_name')
backoff_params = dict(tries=10, delay=3, backoff=1.5)
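# Shared retry settings for the AWSRetry.backoff decorators below: up to 10 attempts,
# starting at a 3 second delay and growing by a factor of 1.5 each retry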
@AWSRetry.backoff(**backoff_params)
def describe_autoscaling_groups(connection, group_name):
pg = connection.get_paginator('describe_auto_scaling_groups')
return pg.paginate(AutoScalingGroupNames=[group_name]).build_full_result().get('AutoScalingGroups', [])
@AWSRetry.backoff(**backoff_params)
def deregister_lb_instances(connection, lb_name, instance_id):
connection.deregister_instances_from_load_balancer(LoadBalancerName=lb_name, Instances=[dict(InstanceId=instance_id)])
@AWSRetry.backoff(**backoff_params)
def describe_instance_health(connection, lb_name, instances):
params = dict(LoadBalancerName=lb_name)
if instances:
params.update(Instances=instances)
return connection.describe_instance_health(**params)
@AWSRetry.backoff(**backoff_params)
def describe_target_health(connection, target_group_arn, instances):
return connection.describe_target_health(TargetGroupArn=target_group_arn, Targets=instances)
@AWSRetry.backoff(**backoff_params)
def suspend_asg_processes(connection, asg_name, processes):
connection.suspend_processes(AutoScalingGroupName=asg_name, ScalingProcesses=processes)
@AWSRetry.backoff(**backoff_params)
def resume_asg_processes(connection, asg_name, processes):
connection.resume_processes(AutoScalingGroupName=asg_name, ScalingProcesses=processes)
@AWSRetry.backoff(**backoff_params)
def describe_launch_configurations(connection, launch_config_name):
pg = connection.get_paginator('describe_launch_configurations')
return pg.paginate(LaunchConfigurationNames=[launch_config_name]).build_full_result()
@AWSRetry.backoff(**backoff_params)
def describe_launch_templates(connection, launch_template):
if launch_template['launch_template_id'] is not None:
try:
lt = connection.describe_launch_templates(LaunchTemplateIds=[launch_template['launch_template_id']])
return lt
except (botocore.exceptions.ClientError) as e:
module.fail_json(msg="No launch template found matching: %s" % launch_template)
else:
try:
lt = connection.describe_launch_templates(LaunchTemplateNames=[launch_template['launch_template_name']])
return lt
except (botocore.exceptions.ClientError) as e:
module.fail_json(msg="No launch template found matching: %s" % launch_template)
@AWSRetry.backoff(**backoff_params)
def create_asg(connection, **params):
connection.create_auto_scaling_group(**params)
@AWSRetry.backoff(**backoff_params)
def put_notification_config(connection, asg_name, topic_arn, notification_types):
connection.put_notification_configuration(
AutoScalingGroupName=asg_name,
TopicARN=topic_arn,
NotificationTypes=notification_types
)
@AWSRetry.backoff(**backoff_params)
def del_notification_config(connection, asg_name, topic_arn):
connection.delete_notification_configuration(
AutoScalingGroupName=asg_name,
TopicARN=topic_arn
)
@AWSRetry.backoff(**backoff_params)
def attach_load_balancers(connection, asg_name, load_balancers):
connection.attach_load_balancers(AutoScalingGroupName=asg_name, LoadBalancerNames=load_balancers)
@AWSRetry.backoff(**backoff_params)
def detach_load_balancers(connection, asg_name, load_balancers):
connection.detach_load_balancers(AutoScalingGroupName=asg_name, LoadBalancerNames=load_balancers)
@AWSRetry.backoff(**backoff_params)
def attach_lb_target_groups(connection, asg_name, target_group_arns):
connection.attach_load_balancer_target_groups(AutoScalingGroupName=asg_name, TargetGroupARNs=target_group_arns)
@AWSRetry.backoff(**backoff_params)
def detach_lb_target_groups(connection, asg_name, target_group_arns):
connection.detach_load_balancer_target_groups(AutoScalingGroupName=asg_name, TargetGroupARNs=target_group_arns)
@AWSRetry.backoff(**backoff_params)
def update_asg(connection, **params):
connection.update_auto_scaling_group(**params)
@AWSRetry.backoff(catch_extra_error_codes=['ScalingActivityInProgress'], **backoff_params)
def delete_asg(connection, asg_name, force_delete):
connection.delete_auto_scaling_group(AutoScalingGroupName=asg_name, ForceDelete=force_delete)
@AWSRetry.backoff(**backoff_params)
def terminate_asg_instance(connection, instance_id, decrement_capacity):
connection.terminate_instance_in_auto_scaling_group(InstanceId=instance_id,
ShouldDecrementDesiredCapacity=decrement_capacity)
def enforce_required_arguments_for_create():
''' As many arguments are not required for autoscale group deletion
they cannot be mandatory arguments for the module, so we enforce
them here '''
missing_args = []
if module.params.get('launch_config_name') is None and module.params.get('launch_template') is None:
module.fail_json(msg="Missing either launch_config_name or launch_template for autoscaling group create")
for arg in ('min_size', 'max_size'):
if module.params[arg] is None:
missing_args.append(arg)
if missing_args:
module.fail_json(msg="Missing required arguments for autoscaling group create: %s" % ",".join(missing_args))
def get_properties(autoscaling_group):
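# Flattens a boto3 DescribeAutoScalingGroups entry into the snake_case facts returned by this module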
properties = dict()
properties['healthy_instances'] = 0
properties['in_service_instances'] = 0
properties['unhealthy_instances'] = 0
properties['pending_instances'] = 0
properties['viable_instances'] = 0
properties['terminating_instances'] = 0
instance_facts = dict()
autoscaling_group_instances = autoscaling_group.get('Instances')
if autoscaling_group_instances:
properties['instances'] = [i['InstanceId'] for i in autoscaling_group_instances]
for i in autoscaling_group_instances:
if i.get('LaunchConfigurationName'):
instance_facts[i['InstanceId']] = {'health_status': i['HealthStatus'],
'lifecycle_state': i['LifecycleState'],
'launch_config_name': i['LaunchConfigurationName']}
elif i.get('LaunchTemplate'):
instance_facts[i['InstanceId']] = {'health_status': i['HealthStatus'],
'lifecycle_state': i['LifecycleState'],
'launch_template': i['LaunchTemplate']}
else:
instance_facts[i['InstanceId']] = {'health_status': i['HealthStatus'],
'lifecycle_state': i['LifecycleState']}
if i['HealthStatus'] == 'Healthy' and i['LifecycleState'] == 'InService':
properties['viable_instances'] += 1
if i['HealthStatus'] == 'Healthy':
properties['healthy_instances'] += 1
else:
properties['unhealthy_instances'] += 1
if i['LifecycleState'] == 'InService':
properties['in_service_instances'] += 1
if i['LifecycleState'] == 'Terminating':
properties['terminating_instances'] += 1
if i['LifecycleState'] == 'Pending':
properties['pending_instances'] += 1
else:
properties['instances'] = []
properties['auto_scaling_group_name'] = autoscaling_group.get('AutoScalingGroupName')
properties['auto_scaling_group_arn'] = autoscaling_group.get('AutoScalingGroupARN')
properties['availability_zones'] = autoscaling_group.get('AvailabilityZones')
properties['created_time'] = autoscaling_group.get('CreatedTime')
properties['instance_facts'] = instance_facts
properties['load_balancers'] = autoscaling_group.get('LoadBalancerNames')
if autoscaling_group.get('LaunchConfigurationName'):
properties['launch_config_name'] = autoscaling_group.get('LaunchConfigurationName')
else:
properties['launch_template'] = autoscaling_group.get('LaunchTemplate')
properties['tags'] = autoscaling_group.get('Tags')
properties['min_size'] = autoscaling_group.get('MinSize')
properties['max_size'] = autoscaling_group.get('MaxSize')
properties['desired_capacity'] = autoscaling_group.get('DesiredCapacity')
properties['default_cooldown'] = autoscaling_group.get('DefaultCooldown')
properties['healthcheck_grace_period'] = autoscaling_group.get('HealthCheckGracePeriod')
properties['healthcheck_type'] = autoscaling_group.get('HealthCheckType')
properties['termination_policies'] = autoscaling_group.get('TerminationPolicies')
properties['target_group_arns'] = autoscaling_group.get('TargetGroupARNs')
properties['vpc_zone_identifier'] = autoscaling_group.get('VPCZoneIdentifier')
raw_mixed_instance_object = autoscaling_group.get('MixedInstancesPolicy')
if raw_mixed_instance_object:
properties['mixed_instances_policy'] = [x['InstanceType'] for x in raw_mixed_instance_object.get('LaunchTemplate').get('Overrides')]
metrics = autoscaling_group.get('EnabledMetrics')
if metrics:
metrics.sort(key=lambda x: x["Metric"])
properties['metrics_collection'] = metrics
if properties['target_group_arns']:
region, ec2_url, aws_connect_params = get_aws_connection_info(module, boto3=True)
elbv2_connection = boto3_conn(module,
conn_type='client',
resource='elbv2',
region=region,
endpoint=ec2_url,
**aws_connect_params)
tg_paginator = elbv2_connection.get_paginator('describe_target_groups')
tg_result = tg_paginator.paginate(TargetGroupArns=properties['target_group_arns']).build_full_result()
target_groups = tg_result['TargetGroups']
else:
target_groups = []
properties['target_group_names'] = [tg['TargetGroupName'] for tg in target_groups]
return properties
def get_launch_object(connection, ec2_connection):
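# Builds the LaunchConfigurationName / LaunchTemplate / MixedInstancesPolicy part of the
# create/update payload from the module parameters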
launch_object = dict()
launch_config_name = module.params.get('launch_config_name')
launch_template = module.params.get('launch_template')
mixed_instances_policy = module.params.get('mixed_instances_policy')
if launch_config_name is None and launch_template is None:
return launch_object
elif launch_config_name:
try:
launch_configs = describe_launch_configurations(connection, launch_config_name)
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json(msg="Failed to describe launch configurations",
exception=traceback.format_exc())
if len(launch_configs['LaunchConfigurations']) == 0:
module.fail_json(msg="No launch config found with name %s" % launch_config_name)
launch_object = {"LaunchConfigurationName": launch_configs['LaunchConfigurations'][0]['LaunchConfigurationName']}
return launch_object
elif launch_template:
lt = describe_launch_templates(ec2_connection, launch_template)['LaunchTemplates'][0]
if launch_template['version'] is not None:
launch_object = {"LaunchTemplate": {"LaunchTemplateId": lt['LaunchTemplateId'], "Version": launch_template['version']}}
else:
launch_object = {"LaunchTemplate": {"LaunchTemplateId": lt['LaunchTemplateId'], "Version": str(lt['LatestVersionNumber'])}}
if mixed_instances_policy:
instance_types = mixed_instances_policy.get('instance_types', [])
policy = {
'LaunchTemplate': {
'LaunchTemplateSpecification': launch_object['LaunchTemplate']
}
}
if instance_types:
policy['LaunchTemplate']['Overrides'] = []
for instance_type in instance_types:
instance_type_dict = {'InstanceType': instance_type}
policy['LaunchTemplate']['Overrides'].append(instance_type_dict)
launch_object['MixedInstancesPolicy'] = policy
return launch_object
def elb_dreg(asg_connection, group_name, instance_id):
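# De-registers an instance from the group's classic ELBs (only when health_check_type is ELB)
# and waits until none of them still report it as InService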
region, ec2_url, aws_connect_params = get_aws_connection_info(module, boto3=True)
as_group = describe_autoscaling_groups(asg_connection, group_name)[0]
wait_timeout = module.params.get('wait_timeout')
count = 1
if as_group['LoadBalancerNames'] and as_group['HealthCheckType'] == 'ELB':
elb_connection = boto3_conn(module,
conn_type='client',
resource='elb',
region=region,
endpoint=ec2_url,
**aws_connect_params)
else:
return
for lb in as_group['LoadBalancerNames']:
deregister_lb_instances(elb_connection, lb, instance_id)
module.debug("De-registering %s from ELB %s" % (instance_id, lb))
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time() and count > 0:
count = 0
for lb in as_group['LoadBalancerNames']:
lb_instances = describe_instance_health(elb_connection, lb, [])
for i in lb_instances['InstanceStates']:
if i['InstanceId'] == instance_id and i['State'] == "InService":
count += 1
module.debug("%s: %s, %s" % (i['InstanceId'], i['State'], i['Description']))
time.sleep(10)
if wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="Waited too long for instance to deregister. {0}".format(time.asctime()))
def elb_healthy(asg_connection, elb_connection, group_name):
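# Returns how many of the ASG's InService/Healthy instances the attached classic ELBs also
# report as InService (None if an InvalidInstance race is hit)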
healthy_instances = set()
as_group = describe_autoscaling_groups(asg_connection, group_name)[0]
props = get_properties(as_group)
# get healthy, inservice instances from ASG
instances = []
for instance, settings in props['instance_facts'].items():
if settings['lifecycle_state'] == 'InService' and settings['health_status'] == 'Healthy':
instances.append(dict(InstanceId=instance))
module.debug("ASG considers the following instances InService and Healthy: %s" % instances)
module.debug("ELB instance status:")
lb_instances = list()
for lb in as_group.get('LoadBalancerNames'):
# we catch a race condition that sometimes happens if the instance exists in the ASG
# but has not yet shown up in the ELB
try:
lb_instances = describe_instance_health(elb_connection, lb, instances)
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == 'InvalidInstance':
return None
module.fail_json(msg="Failed to get load balancer.",
exception=traceback.format_exc(), **camel_dict_to_snake_dict(e.response))
except botocore.exceptions.BotoCoreError as e:
module.fail_json(msg="Failed to get load balancer.",
exception=traceback.format_exc())
for i in lb_instances.get('InstanceStates'):
if i['State'] == "InService":
healthy_instances.add(i['InstanceId'])
module.debug("ELB Health State %s: %s" % (i['InstanceId'], i['State']))
return len(healthy_instances)
def tg_healthy(asg_connection, elbv2_connection, group_name):
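# Target-group counterpart of elb_healthy: counts instances the target groups report as 'healthy'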
healthy_instances = set()
as_group = describe_autoscaling_groups(asg_connection, group_name)[0]
props = get_properties(as_group)
# get healthy, inservice instances from ASG
instances = []
for instance, settings in props['instance_facts'].items():
if settings['lifecycle_state'] == 'InService' and settings['health_status'] == 'Healthy':
instances.append(dict(Id=instance))
module.debug("ASG considers the following instances InService and Healthy: %s" % instances)
module.debug("Target Group instance status:")
tg_instances = list()
for tg in as_group.get('TargetGroupARNs'):
# we catch a race condition that sometimes happens if the instance exists in the ASG
# but has not yet shown up in the ELB
try:
tg_instances = describe_target_health(elbv2_connection, tg, instances)
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == 'InvalidInstance':
return None
module.fail_json(msg="Failed to get target group.",
exception=traceback.format_exc(), **camel_dict_to_snake_dict(e.response))
except botocore.exceptions.BotoCoreError as e:
module.fail_json(msg="Failed to get target group.",
exception=traceback.format_exc())
for i in tg_instances.get('TargetHealthDescriptions'):
if i['TargetHealth']['State'] == "healthy":
healthy_instances.add(i['Target']['Id'])
module.debug("Target Group Health State %s: %s" % (i['Target']['Id'], i['TargetHealth']['State']))
return len(healthy_instances)
def wait_for_elb(asg_connection, group_name):
region, ec2_url, aws_connect_params = get_aws_connection_info(module, boto3=True)
wait_timeout = module.params.get('wait_timeout')
# if the health_check_type is ELB, we want to query the ELBs directly for instance
# status as to avoid health_check_grace period that is awarded to ASG instances
as_group = describe_autoscaling_groups(asg_connection, group_name)[0]
if as_group.get('LoadBalancerNames') and as_group.get('HealthCheckType') == 'ELB':
module.debug("Waiting for ELB to consider instances healthy.")
elb_connection = boto3_conn(module,
conn_type='client',
resource='elb',
region=region,
endpoint=ec2_url,
**aws_connect_params)
wait_timeout = time.time() + wait_timeout
healthy_instances = elb_healthy(asg_connection, elb_connection, group_name)
while healthy_instances < as_group.get('MinSize') and wait_timeout > time.time():
healthy_instances = elb_healthy(asg_connection, elb_connection, group_name)
module.debug("ELB thinks %s instances are healthy." % healthy_instances)
time.sleep(10)
if wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="Waited too long for ELB instances to be healthy. %s" % time.asctime())
module.debug("Waiting complete. ELB thinks %s instances are healthy." % healthy_instances)
def wait_for_target_group(asg_connection, group_name):
region, ec2_url, aws_connect_params = get_aws_connection_info(module, boto3=True)
wait_timeout = module.params.get('wait_timeout')
# if the health_check_type is ELB, we want to query the ELBs directly for instance
# status as to avoid health_check_grace period that is awarded to ASG instances
as_group = describe_autoscaling_groups(asg_connection, group_name)[0]
if as_group.get('TargetGroupARNs') and as_group.get('HealthCheckType') == 'ELB':
module.debug("Waiting for Target Group to consider instances healthy.")
elbv2_connection = boto3_conn(module,
conn_type='client',
resource='elbv2',
region=region,
endpoint=ec2_url,
**aws_connect_params)
wait_timeout = time.time() + wait_timeout
healthy_instances = tg_healthy(asg_connection, elbv2_connection, group_name)
while healthy_instances < as_group.get('MinSize') and wait_timeout > time.time():
healthy_instances = tg_healthy(asg_connection, elbv2_connection, group_name)
module.debug("Target Group thinks %s instances are healthy." % healthy_instances)
time.sleep(10)
if wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="Waited too long for ELB instances to be healthy. %s" % time.asctime())
module.debug("Waiting complete. Target Group thinks %s instances are healthy." % healthy_instances)
def suspend_processes(ec2_connection, as_group):
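# Reconciles the ASG's suspended scaling processes with the suspend_processes parameter;
# returns True if anything was resumed or suspended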
suspend_processes = set(module.params.get('suspend_processes'))
try:
suspended_processes = set([p['ProcessName'] for p in as_group['SuspendedProcesses']])
except AttributeError:
# New ASG being created, no suspended_processes defined yet
suspended_processes = set()
if suspend_processes == suspended_processes:
return False
resume_processes = list(suspended_processes - suspend_processes)
if resume_processes:
resume_asg_processes(ec2_connection, module.params.get('name'), resume_processes)
if suspend_processes:
suspend_asg_processes(ec2_connection, module.params.get('name'), list(suspend_processes))
return True
def create_autoscaling_group(connection):
group_name = module.params.get('name')
load_balancers = module.params['load_balancers']
target_group_arns = module.params['target_group_arns']
availability_zones = module.params['availability_zones']
launch_config_name = module.params.get('launch_config_name')
launch_template = module.params.get('launch_template')
mixed_instances_policy = module.params.get('mixed_instances_policy')
min_size = module.params['min_size']
max_size = module.params['max_size']
placement_group = module.params.get('placement_group')
desired_capacity = module.params.get('desired_capacity')
vpc_zone_identifier = module.params.get('vpc_zone_identifier')
set_tags = module.params.get('tags')
health_check_period = module.params.get('health_check_period')
health_check_type = module.params.get('health_check_type')
default_cooldown = module.params.get('default_cooldown')
wait_for_instances = module.params.get('wait_for_instances')
wait_timeout = module.params.get('wait_timeout')
termination_policies = module.params.get('termination_policies')
notification_topic = module.params.get('notification_topic')
notification_types = module.params.get('notification_types')
metrics_collection = module.params.get('metrics_collection')
metrics_granularity = module.params.get('metrics_granularity')
metrics_list = module.params.get('metrics_list')
try:
as_groups = describe_autoscaling_groups(connection, group_name)
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json(msg="Failed to describe auto scaling groups.",
exception=traceback.format_exc())
region, ec2_url, aws_connect_params = get_aws_connection_info(module, boto3=True)
ec2_connection = boto3_conn(module,
conn_type='client',
resource='ec2',
region=region,
endpoint=ec2_url,
**aws_connect_params)
if vpc_zone_identifier:
vpc_zone_identifier = ','.join(vpc_zone_identifier)
asg_tags = []
for tag in set_tags:
for k, v in tag.items():
if k != 'propagate_at_launch':
asg_tags.append(dict(Key=k,
Value=to_native(v),
PropagateAtLaunch=bool(tag.get('propagate_at_launch', True)),
ResourceType='auto-scaling-group',
ResourceId=group_name))
if not as_groups:
if not vpc_zone_identifier and not availability_zones:
availability_zones = module.params['availability_zones'] = [zone['ZoneName'] for
zone in ec2_connection.describe_availability_zones()['AvailabilityZones']]
enforce_required_arguments_for_create()
if desired_capacity is None:
desired_capacity = min_size
ag = dict(
AutoScalingGroupName=group_name,
MinSize=min_size,
MaxSize=max_size,
DesiredCapacity=desired_capacity,
Tags=asg_tags,
HealthCheckGracePeriod=health_check_period,
HealthCheckType=health_check_type,
DefaultCooldown=default_cooldown,
TerminationPolicies=termination_policies)
if vpc_zone_identifier:
ag['VPCZoneIdentifier'] = vpc_zone_identifier
if availability_zones:
ag['AvailabilityZones'] = availability_zones
if placement_group:
ag['PlacementGroup'] = placement_group
if load_balancers:
ag['LoadBalancerNames'] = load_balancers
if target_group_arns:
ag['TargetGroupARNs'] = target_group_arns
launch_object = get_launch_object(connection, ec2_connection)
if 'LaunchConfigurationName' in launch_object:
ag['LaunchConfigurationName'] = launch_object['LaunchConfigurationName']
elif 'LaunchTemplate' in launch_object:
if 'MixedInstancesPolicy' in launch_object:
ag['MixedInstancesPolicy'] = launch_object['MixedInstancesPolicy']
else:
ag['LaunchTemplate'] = launch_object['LaunchTemplate']
else:
module.fail_json(msg="Missing LaunchConfigurationName or LaunchTemplate",
exception=traceback.format_exc())
try:
create_asg(connection, **ag)
if metrics_collection:
connection.enable_metrics_collection(AutoScalingGroupName=group_name, Granularity=metrics_granularity, Metrics=metrics_list)
all_ag = describe_autoscaling_groups(connection, group_name)
if len(all_ag) == 0:
module.fail_json(msg="No auto scaling group found with the name %s" % group_name)
as_group = all_ag[0]
suspend_processes(connection, as_group)
if wait_for_instances:
wait_for_new_inst(connection, group_name, wait_timeout, desired_capacity, 'viable_instances')
if load_balancers:
wait_for_elb(connection, group_name)
# Wait for target group health if target group(s) defined
if target_group_arns:
wait_for_target_group(connection, group_name)
if notification_topic:
put_notification_config(connection, group_name, notification_topic, notification_types)
as_group = describe_autoscaling_groups(connection, group_name)[0]
asg_properties = get_properties(as_group)
changed = True
return changed, asg_properties
except botocore.exceptions.ClientError as e:
module.fail_json(msg="Failed to create Autoscaling Group.",
exception=traceback.format_exc(), **camel_dict_to_snake_dict(e.response))
except botocore.exceptions.BotoCoreError as e:
module.fail_json(msg="Failed to create Autoscaling Group.",
exception=traceback.format_exc())
else:
as_group = as_groups[0]
initial_asg_properties = get_properties(as_group)
changed = False
if suspend_processes(connection, as_group):
changed = True
# process tag changes
if len(set_tags) > 0:
have_tags = as_group.get('Tags')
want_tags = asg_tags
if have_tags:
have_tags.sort(key=lambda x: x["Key"])
if want_tags:
want_tags.sort(key=lambda x: x["Key"])
dead_tags = []
have_tag_keyvals = [x['Key'] for x in have_tags]
want_tag_keyvals = [x['Key'] for x in want_tags]
for dead_tag in set(have_tag_keyvals).difference(want_tag_keyvals):
changed = True
dead_tags.append(dict(ResourceId=as_group['AutoScalingGroupName'],
ResourceType='auto-scaling-group', Key=dead_tag))
have_tags = [have_tag for have_tag in have_tags if have_tag['Key'] != dead_tag]
if dead_tags:
connection.delete_tags(Tags=dead_tags)
zipped = zip(have_tags, want_tags)
if len(have_tags) != len(want_tags) or not all(x == y for x, y in zipped):
changed = True
connection.create_or_update_tags(Tags=asg_tags)
# Handle load balancer attachments/detachments
# Attach load balancers if they are specified but none currently exist
if load_balancers and not as_group['LoadBalancerNames']:
changed = True
try:
attach_load_balancers(connection, group_name, load_balancers)
except botocore.exceptions.ClientError as e:
module.fail_json(msg="Failed to update Autoscaling Group.",
exception=traceback.format_exc(), **camel_dict_to_snake_dict(e.response))
except botocore.exceptions.BotoCoreError as e:
module.fail_json(msg="Failed to update Autoscaling Group.",
exception=traceback.format_exc())
# Update load balancers if they are specified and one or more already exists
elif as_group['LoadBalancerNames']:
change_load_balancers = load_balancers is not None
# Get differences
if not load_balancers:
load_balancers = list()
wanted_elbs = set(load_balancers)
has_elbs = set(as_group['LoadBalancerNames'])
# check if all requested are already existing
if has_elbs - wanted_elbs and change_load_balancers:
# if wanted contains less than existing, then we need to delete some
elbs_to_detach = has_elbs.difference(wanted_elbs)
if elbs_to_detach:
changed = True
try:
detach_load_balancers(connection, group_name, list(elbs_to_detach))
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json(msg="Failed to detach load balancers %s: %s." % (elbs_to_detach, to_native(e)),
exception=traceback.format_exc())
if wanted_elbs - has_elbs:
# if has contains less than wanted, then we need to add some
elbs_to_attach = wanted_elbs.difference(has_elbs)
if elbs_to_attach:
changed = True
try:
attach_load_balancers(connection, group_name, list(elbs_to_attach))
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json(msg="Failed to attach load balancers %s: %s." % (elbs_to_attach, to_native(e)),
exception=traceback.format_exc())
# Handle target group attachments/detachments
# Attach target groups if they are specified but none currently exist
if target_group_arns and not as_group['TargetGroupARNs']:
changed = True
try:
attach_lb_target_groups(connection, group_name, target_group_arns)
except botocore.exceptions.ClientError as e:
module.fail_json(msg="Failed to update Autoscaling Group.",
exception=traceback.format_exc(), **camel_dict_to_snake_dict(e.response))
except botocore.exceptions.BotoCoreError as e:
module.fail_json(msg="Failed to update Autoscaling Group.",
exception=traceback.format_exc())
# Update target groups if they are specified and one or more already exists
elif target_group_arns is not None and as_group['TargetGroupARNs']:
# Get differences
wanted_tgs = set(target_group_arns)
has_tgs = set(as_group['TargetGroupARNs'])
# check if all requested are already existing
            if has_tgs.difference(wanted_tgs):
# if wanted contains less than existing, then we need to delete some
tgs_to_detach = has_tgs.difference(wanted_tgs)
if tgs_to_detach:
changed = True
try:
detach_lb_target_groups(connection, group_name, list(tgs_to_detach))
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json(msg="Failed to detach load balancer target groups %s: %s" % (tgs_to_detach, to_native(e)),
exception=traceback.format_exc())
            if wanted_tgs.difference(has_tgs):
# if has contains less than wanted, then we need to add some
tgs_to_attach = wanted_tgs.difference(has_tgs)
if tgs_to_attach:
changed = True
try:
attach_lb_target_groups(connection, group_name, list(tgs_to_attach))
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json(msg="Failed to attach load balancer target groups %s: %s" % (tgs_to_attach, to_native(e)),
exception=traceback.format_exc())
# check for attributes that aren't required for updating an existing ASG
# check if min_size/max_size/desired capacity have been specified and if not use ASG values
if min_size is None:
min_size = as_group['MinSize']
if max_size is None:
max_size = as_group['MaxSize']
if desired_capacity is None:
desired_capacity = as_group['DesiredCapacity']
ag = dict(
AutoScalingGroupName=group_name,
MinSize=min_size,
MaxSize=max_size,
DesiredCapacity=desired_capacity,
HealthCheckGracePeriod=health_check_period,
HealthCheckType=health_check_type,
DefaultCooldown=default_cooldown,
TerminationPolicies=termination_policies)
# Get the launch object (config or template) if one is provided in args or use the existing one attached to ASG if not.
launch_object = get_launch_object(connection, ec2_connection)
if 'LaunchConfigurationName' in launch_object:
ag['LaunchConfigurationName'] = launch_object['LaunchConfigurationName']
elif 'LaunchTemplate' in launch_object:
if 'MixedInstancesPolicy' in launch_object:
ag['MixedInstancesPolicy'] = launch_object['MixedInstancesPolicy']
else:
ag['LaunchTemplate'] = launch_object['LaunchTemplate']
else:
try:
ag['LaunchConfigurationName'] = as_group['LaunchConfigurationName']
except Exception:
launch_template = as_group['LaunchTemplate']
# Prefer LaunchTemplateId over Name as it's more specific. Only one can be used for update_asg.
ag['LaunchTemplate'] = {"LaunchTemplateId": launch_template['LaunchTemplateId'], "Version": launch_template['Version']}
if availability_zones:
ag['AvailabilityZones'] = availability_zones
if vpc_zone_identifier:
ag['VPCZoneIdentifier'] = vpc_zone_identifier
try:
update_asg(connection, **ag)
if metrics_collection:
connection.enable_metrics_collection(AutoScalingGroupName=group_name, Granularity=metrics_granularity, Metrics=metrics_list)
else:
connection.disable_metrics_collection(AutoScalingGroupName=group_name, Metrics=metrics_list)
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json(msg="Failed to update autoscaling group: %s" % to_native(e),
exception=traceback.format_exc())
if notification_topic:
try:
put_notification_config(connection, group_name, notification_topic, notification_types)
except botocore.exceptions.ClientError as e:
module.fail_json(msg="Failed to update Autoscaling Group notifications.",
exception=traceback.format_exc(), **camel_dict_to_snake_dict(e.response))
except botocore.exceptions.BotoCoreError as e:
module.fail_json(msg="Failed to update Autoscaling Group notifications.",
exception=traceback.format_exc())
if wait_for_instances:
wait_for_new_inst(connection, group_name, wait_timeout, desired_capacity, 'viable_instances')
            # Wait for ELB health if ELB(s) defined
if load_balancers:
module.debug('\tWAITING FOR ELB HEALTH')
wait_for_elb(connection, group_name)
            # Wait for target group health if target group(s) defined
if target_group_arns:
module.debug('\tWAITING FOR TG HEALTH')
wait_for_target_group(connection, group_name)
try:
as_group = describe_autoscaling_groups(connection, group_name)[0]
asg_properties = get_properties(as_group)
if asg_properties != initial_asg_properties:
changed = True
except botocore.exceptions.ClientError as e:
module.fail_json(msg="Failed to read existing Autoscaling Groups.",
exception=traceback.format_exc(), **camel_dict_to_snake_dict(e.response))
except botocore.exceptions.BotoCoreError as e:
module.fail_json(msg="Failed to read existing Autoscaling Groups.",
exception=traceback.format_exc())
return changed, asg_properties
def delete_autoscaling_group(connection):
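    """Delete the ASG named in the module parameters, optionally scaling it to zero and draining its instances first; return True if a group was removed."""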
group_name = module.params.get('name')
notification_topic = module.params.get('notification_topic')
wait_for_instances = module.params.get('wait_for_instances')
wait_timeout = module.params.get('wait_timeout')
if notification_topic:
del_notification_config(connection, group_name, notification_topic)
groups = describe_autoscaling_groups(connection, group_name)
if groups:
wait_timeout = time.time() + wait_timeout
if not wait_for_instances:
delete_asg(connection, group_name, force_delete=True)
else:
updated_params = dict(AutoScalingGroupName=group_name, MinSize=0, MaxSize=0, DesiredCapacity=0)
update_asg(connection, **updated_params)
instances = True
while instances and wait_for_instances and wait_timeout >= time.time():
tmp_groups = describe_autoscaling_groups(connection, group_name)
if tmp_groups:
tmp_group = tmp_groups[0]
if not tmp_group.get('Instances'):
instances = False
time.sleep(10)
if wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="Waited too long for old instances to terminate. %s" % time.asctime())
delete_asg(connection, group_name, force_delete=False)
while describe_autoscaling_groups(connection, group_name) and wait_timeout >= time.time():
time.sleep(5)
if wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="Waited too long for ASG to delete. %s" % time.asctime())
return True
return False
def get_chunks(l, n):
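    """Yield successive n-sized chunks from the list l."""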
for i in range(0, len(l), n):
yield l[i:i + n]
def update_size(connection, group, max_size, min_size, dc):
module.debug("setting ASG sizes")
module.debug("minimum size: %s, desired_capacity: %s, max size: %s" % (min_size, dc, max_size))
updated_group = dict()
updated_group['AutoScalingGroupName'] = group['AutoScalingGroupName']
updated_group['MinSize'] = min_size
updated_group['MaxSize'] = max_size
updated_group['DesiredCapacity'] = dc
update_asg(connection, **updated_group)
def replace(connection):
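    """Perform a rolling replacement of instances that do not match the current launch configuration or launch template, in batches of replace_batch_size; return (changed, asg_properties)."""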
batch_size = module.params.get('replace_batch_size')
wait_timeout = module.params.get('wait_timeout')
group_name = module.params.get('name')
max_size = module.params.get('max_size')
min_size = module.params.get('min_size')
desired_capacity = module.params.get('desired_capacity')
launch_config_name = module.params.get('launch_config_name')
# Required to maintain the default value being set to 'true'
if launch_config_name:
lc_check = module.params.get('lc_check')
else:
lc_check = False
# Mirror above behaviour for Launch Templates
launch_template = module.params.get('launch_template')
if launch_template:
lt_check = module.params.get('lt_check')
else:
lt_check = False
replace_instances = module.params.get('replace_instances')
replace_all_instances = module.params.get('replace_all_instances')
as_group = describe_autoscaling_groups(connection, group_name)[0]
if desired_capacity is None:
desired_capacity = as_group['DesiredCapacity']
wait_for_new_inst(connection, group_name, wait_timeout, as_group['MinSize'], 'viable_instances')
props = get_properties(as_group)
instances = props['instances']
if replace_all_instances:
# If replacing all instances, then set replace_instances to current set
# This allows replace_instances and replace_all_instances to behave same
replace_instances = instances
if replace_instances:
instances = replace_instances
# check to see if instances are replaceable if checking launch configs
if launch_config_name:
new_instances, old_instances = get_instances_by_launch_config(props, lc_check, instances)
elif launch_template:
new_instances, old_instances = get_instances_by_launch_template(props, lt_check, instances)
num_new_inst_needed = desired_capacity - len(new_instances)
if lc_check or lt_check:
if num_new_inst_needed == 0 and old_instances:
module.debug("No new instances needed, but old instances are present. Removing old instances")
terminate_batch(connection, old_instances, instances, True)
as_group = describe_autoscaling_groups(connection, group_name)[0]
props = get_properties(as_group)
changed = True
return(changed, props)
# we don't want to spin up extra instances if not necessary
if num_new_inst_needed < batch_size:
module.debug("Overriding batch size to %s" % num_new_inst_needed)
batch_size = num_new_inst_needed
if not old_instances:
changed = False
return(changed, props)
# check if min_size/max_size/desired capacity have been specified and if not use ASG values
if min_size is None:
min_size = as_group['MinSize']
if max_size is None:
max_size = as_group['MaxSize']
# set temporary settings and wait for them to be reached
# This should get overwritten if the number of instances left is less than the batch size.
as_group = describe_autoscaling_groups(connection, group_name)[0]
update_size(connection, as_group, max_size + batch_size, min_size + batch_size, desired_capacity + batch_size)
wait_for_new_inst(connection, group_name, wait_timeout, as_group['MinSize'] + batch_size, 'viable_instances')
wait_for_elb(connection, group_name)
wait_for_target_group(connection, group_name)
as_group = describe_autoscaling_groups(connection, group_name)[0]
props = get_properties(as_group)
instances = props['instances']
if replace_instances:
instances = replace_instances
module.debug("beginning main loop")
for i in get_chunks(instances, batch_size):
# break out of this loop if we have enough new instances
break_early, desired_size, term_instances = terminate_batch(connection, i, instances, False)
wait_for_term_inst(connection, term_instances)
wait_for_new_inst(connection, group_name, wait_timeout, desired_size, 'viable_instances')
wait_for_elb(connection, group_name)
wait_for_target_group(connection, group_name)
as_group = describe_autoscaling_groups(connection, group_name)[0]
if break_early:
module.debug("breaking loop")
break
update_size(connection, as_group, max_size, min_size, desired_capacity)
as_group = describe_autoscaling_groups(connection, group_name)[0]
asg_properties = get_properties(as_group)
module.debug("Rolling update complete.")
changed = True
return(changed, asg_properties)
def get_instances_by_launch_config(props, lc_check, initial_instances):
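    """Split the group's instances into (new_instances, old_instances) based on whether each uses the current launch configuration, or, when lc_check is false, whether it existed before the replace started."""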
new_instances = []
old_instances = []
# old instances are those that have the old launch config
if lc_check:
for i in props['instances']:
# Check if migrating from launch_template to launch_config first
if 'launch_template' in props['instance_facts'][i]:
old_instances.append(i)
elif props['instance_facts'][i].get('launch_config_name') == props['launch_config_name']:
new_instances.append(i)
else:
old_instances.append(i)
else:
module.debug("Comparing initial instances with current: %s" % initial_instances)
for i in props['instances']:
if i not in initial_instances:
new_instances.append(i)
else:
old_instances.append(i)
module.debug("New instances: %s, %s" % (len(new_instances), new_instances))
module.debug("Old instances: %s, %s" % (len(old_instances), old_instances))
return new_instances, old_instances
def get_instances_by_launch_template(props, lt_check, initial_instances):
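    """Split the group's instances into (new_instances, old_instances) based on whether each uses the current launch template/version, or, when lt_check is false, whether it existed before the replace started."""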
new_instances = []
old_instances = []
# old instances are those that have the old launch template or version of the same launch template
if lt_check:
for i in props['instances']:
# Check if migrating from launch_config_name to launch_template_name first
if 'launch_config_name' in props['instance_facts'][i]:
old_instances.append(i)
elif props['instance_facts'][i].get('launch_template') == props['launch_template']:
new_instances.append(i)
else:
old_instances.append(i)
else:
module.debug("Comparing initial instances with current: %s" % initial_instances)
for i in props['instances']:
if i not in initial_instances:
new_instances.append(i)
else:
old_instances.append(i)
module.debug("New instances: %s, %s" % (len(new_instances), new_instances))
module.debug("Old instances: %s, %s" % (len(old_instances), old_instances))
return new_instances, old_instances
def list_purgeable_instances(props, lc_check, lt_check, replace_instances, initial_instances):
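    """Return the subset of replace_instances that are actually in the group and still run a non-current launch configuration or launch template."""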
instances_to_terminate = []
instances = (inst_id for inst_id in replace_instances if inst_id in props['instances'])
# check to make sure instances given are actually in the given ASG
# and they have a non-current launch config
if module.params.get('launch_config_name'):
if lc_check:
for i in instances:
if 'launch_template' in props['instance_facts'][i]:
instances_to_terminate.append(i)
elif props['instance_facts'][i]['launch_config_name'] != props['launch_config_name']:
instances_to_terminate.append(i)
else:
for i in instances:
if i in initial_instances:
instances_to_terminate.append(i)
elif module.params.get('launch_template'):
if lt_check:
for i in instances:
if 'launch_config_name' in props['instance_facts'][i]:
instances_to_terminate.append(i)
elif props['instance_facts'][i]['launch_template'] != props['launch_template']:
instances_to_terminate.append(i)
else:
for i in instances:
if i in initial_instances:
instances_to_terminate.append(i)
return instances_to_terminate
def terminate_batch(connection, replace_instances, initial_instances, leftovers=False):
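    """Deregister a batch of replaceable instances from any ELB and terminate them; return (break_loop, desired_size, instances_to_terminate)."""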
batch_size = module.params.get('replace_batch_size')
min_size = module.params.get('min_size')
desired_capacity = module.params.get('desired_capacity')
group_name = module.params.get('name')
lc_check = module.params.get('lc_check')
lt_check = module.params.get('lt_check')
decrement_capacity = False
break_loop = False
as_group = describe_autoscaling_groups(connection, group_name)[0]
if desired_capacity is None:
desired_capacity = as_group['DesiredCapacity']
props = get_properties(as_group)
desired_size = as_group['MinSize']
if module.params.get('launch_config_name'):
new_instances, old_instances = get_instances_by_launch_config(props, lc_check, initial_instances)
else:
new_instances, old_instances = get_instances_by_launch_template(props, lt_check, initial_instances)
num_new_inst_needed = desired_capacity - len(new_instances)
# check to make sure instances given are actually in the given ASG
# and they have a non-current launch config
instances_to_terminate = list_purgeable_instances(props, lc_check, lt_check, replace_instances, initial_instances)
module.debug("new instances needed: %s" % num_new_inst_needed)
module.debug("new instances: %s" % new_instances)
module.debug("old instances: %s" % old_instances)
module.debug("batch instances: %s" % ",".join(instances_to_terminate))
if num_new_inst_needed == 0:
decrement_capacity = True
if as_group['MinSize'] != min_size:
if min_size is None:
min_size = as_group['MinSize']
updated_params = dict(AutoScalingGroupName=as_group['AutoScalingGroupName'], MinSize=min_size)
update_asg(connection, **updated_params)
module.debug("Updating minimum size back to original of %s" % min_size)
        # if there are some leftover old instances, but we are already at capacity with new ones
        # we don't want to decrement capacity
if leftovers:
decrement_capacity = False
break_loop = True
instances_to_terminate = old_instances
desired_size = min_size
module.debug("No new instances needed")
if num_new_inst_needed < batch_size and num_new_inst_needed != 0:
instances_to_terminate = instances_to_terminate[:num_new_inst_needed]
decrement_capacity = False
break_loop = False
module.debug("%s new instances needed" % num_new_inst_needed)
module.debug("decrementing capacity: %s" % decrement_capacity)
for instance_id in instances_to_terminate:
elb_dreg(connection, group_name, instance_id)
module.debug("terminating instance: %s" % instance_id)
terminate_asg_instance(connection, instance_id, decrement_capacity)
# we wait to make sure the machines we marked as Unhealthy are
# no longer in the list
return break_loop, desired_size, instances_to_terminate
def wait_for_term_inst(connection, term_instances):
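    """Poll the group until none of the given instances are still terminating or marked Unhealthy, failing the module if wait_timeout is exceeded."""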
wait_timeout = module.params.get('wait_timeout')
group_name = module.params.get('name')
as_group = describe_autoscaling_groups(connection, group_name)[0]
count = 1
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time() and count > 0:
module.debug("waiting for instances to terminate")
count = 0
as_group = describe_autoscaling_groups(connection, group_name)[0]
props = get_properties(as_group)
instance_facts = props['instance_facts']
instances = (i for i in instance_facts if i in term_instances)
for i in instances:
lifecycle = instance_facts[i]['lifecycle_state']
health = instance_facts[i]['health_status']
module.debug("Instance %s has state of %s,%s" % (i, lifecycle, health))
if lifecycle.startswith('Terminating') or health == 'Unhealthy':
count += 1
time.sleep(10)
if wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="Waited too long for old instances to terminate. %s" % time.asctime())
def wait_for_new_inst(connection, group_name, wait_timeout, desired_size, prop):
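    """Poll the group until the given property (e.g. viable_instances) reaches desired_size, failing the module if wait_timeout is exceeded; return the group properties."""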
# make sure we have the latest stats after that last loop.
as_group = describe_autoscaling_groups(connection, group_name)[0]
props = get_properties(as_group)
module.debug("Waiting for %s = %s, currently %s" % (prop, desired_size, props[prop]))
# now we make sure that we have enough instances in a viable state
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time() and desired_size > props[prop]:
module.debug("Waiting for %s = %s, currently %s" % (prop, desired_size, props[prop]))
time.sleep(10)
as_group = describe_autoscaling_groups(connection, group_name)[0]
props = get_properties(as_group)
if wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="Waited too long for new instances to become viable. %s" % time.asctime())
module.debug("Reached %s: %s" % (prop, desired_size))
return props
def asg_exists(connection):
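    """Return True if an Auto Scaling Group with the configured name exists."""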
group_name = module.params.get('name')
as_group = describe_autoscaling_groups(connection, group_name)
return bool(len(as_group))
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
name=dict(required=True, type='str'),
load_balancers=dict(type='list'),
target_group_arns=dict(type='list'),
availability_zones=dict(type='list'),
launch_config_name=dict(type='str'),
launch_template=dict(type='dict',
default=None,
options=dict(
version=dict(type='str'),
launch_template_name=dict(type='str'),
launch_template_id=dict(type='str'),
),
),
mixed_instances_policy=dict(type='dict',
default=None,
options=dict(
instance_types=dict(type='list', elements='str'),
)),
min_size=dict(type='int'),
max_size=dict(type='int'),
placement_group=dict(type='str'),
desired_capacity=dict(type='int'),
vpc_zone_identifier=dict(type='list'),
replace_batch_size=dict(type='int', default=1),
replace_all_instances=dict(type='bool', default=False),
replace_instances=dict(type='list', default=[]),
lc_check=dict(type='bool', default=True),
lt_check=dict(type='bool', default=True),
wait_timeout=dict(type='int', default=300),
state=dict(default='present', choices=['present', 'absent']),
tags=dict(type='list', default=[]),
health_check_period=dict(type='int', default=300),
health_check_type=dict(default='EC2', choices=['EC2', 'ELB']),
default_cooldown=dict(type='int', default=300),
wait_for_instances=dict(type='bool', default=True),
termination_policies=dict(type='list', default='Default'),
notification_topic=dict(type='str', default=None),
notification_types=dict(type='list', default=[
'autoscaling:EC2_INSTANCE_LAUNCH',
'autoscaling:EC2_INSTANCE_LAUNCH_ERROR',
'autoscaling:EC2_INSTANCE_TERMINATE',
'autoscaling:EC2_INSTANCE_TERMINATE_ERROR'
]),
suspend_processes=dict(type='list', default=[]),
metrics_collection=dict(type='bool', default=False),
metrics_granularity=dict(type='str', default='1Minute'),
metrics_list=dict(type='list', default=[
'GroupMinSize',
'GroupMaxSize',
'GroupDesiredCapacity',
'GroupInServiceInstances',
'GroupPendingInstances',
'GroupStandbyInstances',
'GroupTerminatingInstances',
'GroupTotalInstances'
])
),
)
global module
module = AnsibleAWSModule(
argument_spec=argument_spec,
mutually_exclusive=[
['replace_all_instances', 'replace_instances'],
['launch_config_name', 'launch_template']]
)
if not HAS_BOTO3:
module.fail_json(msg='boto3 required for this module')
    if module.params.get('mixed_instances_policy') and not module.botocore_at_least('1.12.45'):
        module.fail_json(msg="mixed_instances_policy is only supported with botocore >= 1.12.45")
state = module.params.get('state')
replace_instances = module.params.get('replace_instances')
replace_all_instances = module.params.get('replace_all_instances')
region, ec2_url, aws_connect_params = get_aws_connection_info(module, boto3=True)
connection = boto3_conn(module,
conn_type='client',
resource='autoscaling',
region=region,
endpoint=ec2_url,
**aws_connect_params)
changed = create_changed = replace_changed = False
exists = asg_exists(connection)
if state == 'present':
create_changed, asg_properties = create_autoscaling_group(connection)
elif state == 'absent':
changed = delete_autoscaling_group(connection)
module.exit_json(changed=changed)
# Only replace instances if asg existed at start of call
if exists and (replace_all_instances or replace_instances) and (module.params.get('launch_config_name') or module.params.get('launch_template')):
replace_changed, asg_properties = replace(connection)
if create_changed or replace_changed:
changed = True
module.exit_json(changed=changed, **asg_properties)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,716 |
ec2_asg: Add MaxInstanceLifetime support
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Add support for the `MaxInstanceLifetime` option in the AWS ec2_asg module.
AWS feature introduction link: https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-ec2-auto-scaling-supports-max-instance-lifetime/
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ec2_asg
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Usage example:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- ec2_asg:
name: myasg
region: us-east-1
launch_config_name: my_new_lc
min_size: 1
max_size: 5
desired_capacity: 3
max_instance_lifetime: 604800 # seconds
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
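For illustration only, here is a minimal sketch (assumed helper name `build_asg_request`, not a merged implementation) of how the option could be threaded into the boto3 `create_auto_scaling_group`/`update_auto_scaling_group` request, which exposes this setting as the `MaxInstanceLifetime` field:
```python
# Hypothetical sketch: the helper name and surrounding plumbing are assumptions;
# only MaxInstanceLifetime itself is the AWS AutoScaling API field.
def build_asg_request(group_name, min_size, max_size, desired_capacity,
                      max_instance_lifetime=None):
    params = dict(
        AutoScalingGroupName=group_name,
        MinSize=min_size,
        MaxSize=max_size,
        DesiredCapacity=desired_capacity,
    )
    if max_instance_lifetime:
        # Value is in seconds, e.g. 604800 == 7 days as in the playbook above.
        params['MaxInstanceLifetime'] = max_instance_lifetime
    return params


print(build_asg_request('myasg', 1, 5, 3, max_instance_lifetime=604800))
```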
|
https://github.com/ansible/ansible/issues/66716
|
https://github.com/ansible/ansible/pull/66863
|
d2f4d305ee4175cc0315a705824b168b3096e06a
|
f98874e4f98837e4b9868780b19cf6614b00282a
| 2020-01-23T12:18:38Z |
python
| 2020-02-15T12:56:39Z |
test/integration/targets/ec2_asg/tasks/main.yml
|
---
# tasks file for test_ec2_asg
- name: Test incomplete credentials with ec2_asg
block:
# ============================================================
- name: test invalid profile
ec2_asg:
name: "{{ resource_prefix }}-asg"
region: "{{ aws_region }}"
profile: notavalidprofile
ignore_errors: yes
register: result
- name:
assert:
that:
- "'The config profile (notavalidprofile) could not be found' in result.msg"
- name: test partial credentials
ec2_asg:
name: "{{ resource_prefix }}-asg"
region: "{{ aws_region }}"
aws_access_key: "{{ aws_access_key }}"
ignore_errors: yes
register: result
- name:
assert:
that:
- "'Partial credentials found in explicit, missing: aws_secret_access_key' in result.msg"
- name: test without specifying region
ec2_asg:
name: "{{ resource_prefix }}-asg"
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
security_token: "{{ security_token | default(omit) }}"
ignore_errors: yes
register: result
- name:
assert:
that:
- result.msg == 'The ec2_asg module requires a region and none was found in configuration, environment variables or module parameters'
# ============================================================
- name: Test incomplete arguments with ec2_asg
block:
# ============================================================
- name: test without specifying required module options
ec2_asg:
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
security_token: "{{ security_token | default(omit) }}"
ignore_errors: yes
register: result
- name: assert name is a required module option
assert:
that:
- "result.msg == 'missing required arguments: name'"
- name: Run ec2_asg integration tests.
module_defaults:
group/aws:
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
security_token: "{{ security_token | default(omit) }}"
region: "{{ aws_region }}"
block:
# ============================================================
- name: Find AMI to use
ec2_ami_info:
owners: 'amazon'
filters:
name: '{{ ec2_ami_name }}'
register: ec2_amis
- set_fact:
ec2_ami_image: '{{ ec2_amis.images[0].image_id }}'
- name: load balancer name has to be less than 32 characters
# the 8 digit identifier at the end of resource_prefix helps determine during which test something
# was created
set_fact:
load_balancer_name: "{{ item }}-lb"
with_items: "{{ resource_prefix | regex_findall('.{8}$') }}"
# Set up the testing dependencies: VPC, subnet, security group, and two launch configurations
- name: Create VPC for use in testing
ec2_vpc_net:
name: "{{ resource_prefix }}-vpc"
cidr_block: 10.55.77.0/24
tenancy: default
register: testing_vpc
- name: Create internet gateway for use in testing
ec2_vpc_igw:
vpc_id: "{{ testing_vpc.vpc.id }}"
state: present
register: igw
- name: Create subnet for use in testing
ec2_vpc_subnet:
state: present
vpc_id: "{{ testing_vpc.vpc.id }}"
cidr: 10.55.77.0/24
az: "{{ aws_region }}a"
resource_tags:
Name: "{{ resource_prefix }}-subnet"
register: testing_subnet
- name: create routing rules
ec2_vpc_route_table:
vpc_id: "{{ testing_vpc.vpc.id }}"
tags:
created: "{{ resource_prefix }}-route"
routes:
- dest: 0.0.0.0/0
gateway_id: "{{ igw.gateway_id }}"
subnets:
- "{{ testing_subnet.subnet.id }}"
- name: create a security group with the vpc created in the ec2_setup
ec2_group:
name: "{{ resource_prefix }}-sg"
description: a security group for ansible tests
vpc_id: "{{ testing_vpc.vpc.id }}"
rules:
- proto: tcp
from_port: 22
to_port: 22
cidr_ip: 0.0.0.0/0
- proto: tcp
from_port: 80
to_port: 80
cidr_ip: 0.0.0.0/0
register: sg
- name: ensure launch configs exist
ec2_lc:
name: "{{ item }}"
assign_public_ip: true
image_id: "{{ ec2_ami_image }}"
user_data: |
#cloud-config
package_upgrade: true
package_update: true
packages:
- httpd
runcmd:
- "service httpd start"
security_groups: "{{ sg.group_id }}"
instance_type: t3.micro
with_items:
- "{{ resource_prefix }}-lc"
- "{{ resource_prefix }}-lc-2"
# ============================================================
- name: launch asg and wait for instances to be deemed healthy (no ELB)
ec2_asg:
name: "{{ resource_prefix }}-asg"
launch_config_name: "{{ resource_prefix }}-lc"
desired_capacity: 1
min_size: 1
max_size: 1
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
state: present
wait_for_instances: yes
register: output
- assert:
that:
- "output.viable_instances == 1"
- name: Tag asg
ec2_asg:
name: "{{ resource_prefix }}-asg"
tags:
- tag_a: 'value 1'
propagate_at_launch: no
- tag_b: 'value 2'
propagate_at_launch: yes
register: output
- assert:
that:
- "output.tags | length == 2"
- output is changed
- name: Re-Tag asg (different order)
ec2_asg:
name: "{{ resource_prefix }}-asg"
tags:
- tag_b: 'value 2'
propagate_at_launch: yes
- tag_a: 'value 1'
propagate_at_launch: no
register: output
- assert:
that:
- "output.tags | length == 2"
- output is not changed
- name: Re-Tag asg new tags
ec2_asg:
name: "{{ resource_prefix }}-asg"
tags:
- tag_c: 'value 3'
propagate_at_launch: no
register: output
- assert:
that:
- "output.tags | length == 1"
- output is changed
- name: Re-Tag asg update propagate_at_launch
ec2_asg:
name: "{{ resource_prefix }}-asg"
tags:
- tag_c: 'value 3'
propagate_at_launch: yes
register: output
- assert:
that:
- "output.tags | length == 1"
- output is changed
- name: Enable metrics collection
ec2_asg:
name: "{{ resource_prefix }}-asg"
metrics_collection: yes
register: output
- assert:
that:
- output is changed
- name: Enable metrics collection (check idempotency)
ec2_asg:
name: "{{ resource_prefix }}-asg"
metrics_collection: yes
register: output
- assert:
that:
- output is not changed
- name: Disable metrics collection
ec2_asg:
name: "{{ resource_prefix }}-asg"
metrics_collection: no
register: output
- assert:
that:
- output is changed
- name: Disable metrics collection (check idempotency)
ec2_asg:
name: "{{ resource_prefix }}-asg"
metrics_collection: no
register: output
- assert:
that:
- output is not changed
# - name: pause for a bit to make sure that the group can't be trivially deleted
# pause: seconds=30
- name: kill asg
ec2_asg:
name: "{{ resource_prefix }}-asg"
state: absent
wait_timeout: 800
async: 400
# ============================================================
- name: launch asg and do not wait for instances to be deemed healthy (no ELB)
ec2_asg:
name: "{{ resource_prefix }}-asg"
launch_config_name: "{{ resource_prefix }}-lc"
desired_capacity: 1
min_size: 1
max_size: 1
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
wait_for_instances: no
state: present
register: output
- assert:
that:
- "output.viable_instances == 0"
- name: kill asg
ec2_asg:
name: "{{ resource_prefix }}-asg"
state: absent
wait_timeout: 800
async: 400
# ============================================================
- name: create asg with asg metrics enabled
ec2_asg:
name: "{{ resource_prefix }}-asg"
metrics_collection: true
launch_config_name: "{{ resource_prefix }}-lc"
desired_capacity: 0
min_size: 0
max_size: 0
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
state: present
register: output
- assert:
that:
- "'Group' in output.metrics_collection.0.Metric"
- name: kill asg
ec2_asg:
name: "{{ resource_prefix }}-asg"
state: absent
wait_timeout: 800
async: 400
# ============================================================
- name: launch load balancer
ec2_elb_lb:
name: "{{ load_balancer_name }}"
state: present
security_group_ids:
- "{{ sg.group_id }}"
subnets: "{{ testing_subnet.subnet.id }}"
connection_draining_timeout: 60
listeners:
- protocol: http
load_balancer_port: 80
instance_port: 80
health_check:
ping_protocol: tcp
ping_port: 80
ping_path: "/"
response_timeout: 5
interval: 10
unhealthy_threshold: 4
healthy_threshold: 2
register: load_balancer
- name: launch asg and wait for instances to be deemed healthy (ELB)
ec2_asg:
name: "{{ resource_prefix }}-asg"
launch_config_name: "{{ resource_prefix }}-lc"
health_check_type: ELB
desired_capacity: 1
min_size: 1
max_size: 1
health_check_period: 300
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
load_balancers: "{{ load_balancer_name }}"
wait_for_instances: yes
wait_timeout: 900
state: present
register: output
- assert:
that:
- "output.viable_instances == 1"
# ============================================================
# grow scaling group to 3
- name: add 2 more instances wait for instances to be deemed healthy (ELB)
ec2_asg:
name: "{{ resource_prefix }}-asg"
launch_config_name: "{{ resource_prefix }}-lc"
health_check_type: ELB
desired_capacity: 3
min_size: 3
max_size: 5
health_check_period: 600
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
load_balancers: "{{ load_balancer_name }}"
wait_for_instances: yes
wait_timeout: 1200
state: present
register: output
- assert:
that:
- "output.viable_instances == 3"
# ============================================================
# # perform rolling replace with different launch configuration
- name: perform rolling update to new AMI
ec2_asg:
name: "{{ resource_prefix }}-asg"
launch_config_name: "{{ resource_prefix }}-lc-2"
health_check_type: ELB
desired_capacity: 3
min_size: 1
max_size: 5
health_check_period: 900
load_balancers: "{{ load_balancer_name }}"
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
wait_for_instances: yes
replace_all_instances: yes
wait_timeout: 1800
state: present
register: output
# ensure that all instances have new launch config
- assert:
that:
- "item.value.launch_config_name == '{{ resource_prefix }}-lc-2'"
with_dict: "{{ output.instance_facts }}"
# assert they are all healthy and that the rolling update resulted in the appropriate number of instances
- assert:
that:
- "output.viable_instances == 3"
# ============================================================
# perform rolling replace with the original launch configuration
- name: perform rolling update to new AMI while removing the load balancer
ec2_asg:
name: "{{ resource_prefix }}-asg"
launch_config_name: "{{ resource_prefix }}-lc"
health_check_type: EC2
desired_capacity: 3
min_size: 1
max_size: 5
health_check_period: 900
load_balancers: []
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
wait_for_instances: yes
replace_all_instances: yes
wait_timeout: 1800
state: present
register: output
# ensure that all instances have new launch config
- assert:
that:
- "item.value.launch_config_name == '{{ resource_prefix }}-lc'"
with_dict: "{{ output.instance_facts }}"
# assert they are all healthy and that the rolling update resulted in the appropriate number of instances
# there should be the same number of instances as there were before the rolling update was performed
- assert:
that:
- "output.viable_instances == 3"
# ============================================================
# perform rolling replace with new launch configuration and lc_check:false
# Note - this is done async so we can query asg_facts during
    # the execution. Issues #28087 and #35993 result in the correct
# end result, but spin up extraneous instances during execution.
- name: "perform rolling update to new AMI with lc_check: false"
ec2_asg:
name: "{{ resource_prefix }}-asg"
launch_config_name: "{{ resource_prefix }}-lc-2"
health_check_type: EC2
desired_capacity: 3
min_size: 1
max_size: 5
health_check_period: 900
load_balancers: []
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
wait_for_instances: yes
replace_all_instances: yes
replace_batch_size: 3
lc_check: false
wait_timeout: 1800
state: present
async: 1800
poll: 0
register: asg_job
- name: get ec2_asg facts for 3 minutes
ec2_asg_info:
name: "{{ resource_prefix }}-asg"
register: output
loop_control:
pause: 15
with_sequence: count=12
- set_fact:
inst_id_json_query: 'results[*].results[*].instances[*].instance_id'
# Since we started with 3 servers and replace all of them.
# We should see 6 servers total.
- assert:
that:
- "lookup('flattened',output|json_query(inst_id_json_query)).split(',')|unique|length == 6"
- name: Ensure ec2_asg task completes
async_status: jid="{{ asg_job.ansible_job_id }}"
register: status
until: status is finished
retries: 200
delay: 15
# ============================================================
- name: kill asg
ec2_asg:
name: "{{ resource_prefix }}-asg"
state: absent
wait_timeout: 800
async: 400
# Create new asg with replace_all_instances and lc_check:false
# Note - this is done async so we can query asg_facts during
    # the execution. Issue #28087 results in the correct
    # end result, but spins up extraneous instances during execution.
- name: "new asg with lc_check: false"
ec2_asg:
name: "{{ resource_prefix }}-asg"
launch_config_name: "{{ resource_prefix }}-lc"
health_check_type: EC2
desired_capacity: 3
min_size: 1
max_size: 5
health_check_period: 900
load_balancers: []
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
wait_for_instances: yes
replace_all_instances: yes
replace_batch_size: 3
lc_check: false
wait_timeout: 1800
state: present
async: 1800
poll: 0
register: asg_job
# Collect ec2_asg_info for 3 minutes
- name: get ec2_asg information
ec2_asg_info:
name: "{{ resource_prefix }}-asg"
register: output
loop_control:
pause: 15
with_sequence: count=12
- set_fact:
inst_id_json_query: 'results[*].results[*].instances[*].instance_id'
# Get all instance_ids we saw and assert we saw number expected
# Should only see 3 (don't replace instances we just created)
- assert:
that:
- "lookup('flattened',output|json_query(inst_id_json_query)).split(',')|unique|length == 3"
- name: Ensure ec2_asg task completes
async_status: jid="{{ asg_job.ansible_job_id }}"
register: status
until: status is finished
retries: 200
delay: 15
# we need a launch template, otherwise we cannot test the mixed instance policy
- name: create launch template for autoscaling group to test its mixed instance policy
ec2_launch_template:
template_name: "{{ resource_prefix }}-lt"
image_id: "{{ ec2_ami_image }}"
instance_type: t3.micro
credit_specification:
cpu_credits: standard
network_interfaces:
- associate_public_ip_address: yes
delete_on_termination: yes
device_index: 0
groups:
- "{{ sg.group_id }}"
- name: update autoscaling group with mixed-instance policy
ec2_asg:
name: "{{ resource_prefix }}-asg"
launch_template:
launch_template_name: "{{ resource_prefix }}-lt"
desired_capacity: 1
min_size: 1
max_size: 1
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
state: present
mixed_instances_policy:
instance_types:
- t3.micro
- t3a.micro
wait_for_instances: yes
register: output
- assert:
that:
- "output.mixed_instances_policy | length == 2"
- "output.mixed_instances_policy[0] == 't3.micro'"
- "output.mixed_instances_policy[1] == 't3a.micro'"
# ============================================================
always:
- name: kill asg
ec2_asg:
name: "{{ resource_prefix }}-asg"
state: absent
register: removed
until: removed is not failed
ignore_errors: yes
retries: 10
# Remove the testing dependencies
- name: remove the load balancer
ec2_elb_lb:
name: "{{ load_balancer_name }}"
state: absent
security_group_ids:
- "{{ sg.group_id }}"
subnets: "{{ testing_subnet.subnet.id }}"
wait: yes
connection_draining_timeout: 60
listeners:
- protocol: http
load_balancer_port: 80
instance_port: 80
health_check:
ping_protocol: tcp
ping_port: 80
ping_path: "/"
response_timeout: 5
interval: 10
unhealthy_threshold: 4
healthy_threshold: 2
register: removed
until: removed is not failed
ignore_errors: yes
retries: 10
- name: remove launch configs
ec2_lc:
name: "{{ resource_prefix }}-lc"
state: absent
register: removed
until: removed is not failed
ignore_errors: yes
retries: 10
with_items:
- "{{ resource_prefix }}-lc"
- "{{ resource_prefix }}-lc-2"
- name: delete launch template
ec2_launch_template:
name: "{{ resource_prefix }}-lt"
state: absent
register: del_lt
retries: 10
until: del_lt is not failed
ignore_errors: true
- name: remove the security group
ec2_group:
name: "{{ resource_prefix }}-sg"
description: a security group for ansible tests
vpc_id: "{{ testing_vpc.vpc.id }}"
state: absent
register: removed
until: removed is not failed
ignore_errors: yes
retries: 10
- name: remove routing rules
ec2_vpc_route_table:
state: absent
vpc_id: "{{ testing_vpc.vpc.id }}"
tags:
created: "{{ resource_prefix }}-route"
routes:
- dest: 0.0.0.0/0
gateway_id: "{{ igw.gateway_id }}"
subnets:
- "{{ testing_subnet.subnet.id }}"
register: removed
until: removed is not failed
ignore_errors: yes
retries: 10
- name: remove internet gateway
ec2_vpc_igw:
vpc_id: "{{ testing_vpc.vpc.id }}"
state: absent
register: removed
until: removed is not failed
ignore_errors: yes
retries: 10
- name: remove the subnet
ec2_vpc_subnet:
state: absent
vpc_id: "{{ testing_vpc.vpc.id }}"
cidr: 10.55.77.0/24
register: removed
until: removed is not failed
ignore_errors: yes
retries: 10
- name: remove the VPC
ec2_vpc_net:
name: "{{ resource_prefix }}-vpc"
cidr_block: 10.55.77.0/24
state: absent
register: removed
until: removed is not failed
ignore_errors: yes
retries: 10
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,936 |
AWS ec2_asg with replace_all_instances:yes will wait for instances to start/terminate even when wait_for_instances:no
|
##### SUMMARY
When using the `ec2_asg` module with `replace_all_instances: yes`, then `wait_for_instances` is effectively ignored.
I would expect that if `wait_for_instances: no` then it would not wait for instances to complete starting/termination.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_asg
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.2
config file = None
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/.local/lib/python3.7/site-packages/ansible
executable location = /home/user/.local/bin/ansible
python version = 3.7.5 (default, Nov 20 2019, 09:21:52) [GCC 9.2.1 20191008]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
(empty)
```
##### OS / ENVIRONMENT
OS: Ubuntu 19.10
Ansible installed via pip3
##### STEPS TO REPRODUCE
```
- name: Update ASG
ec2_asg:
name: "my-asg"
launch_config_name: "my-launchconfig"
min_size: 1
max_size: 1
replace_batch_size: 1
replace_all_instances: yes
wait_for_instances: no
```
This will not return successfully until new instance(s) have started, and any existing instance(s) have completed termination.
If your timeout is not set long enough, then you'll get a 'Waited too long for old instances to terminate' response.
##### EXPECTED RESULTS
I would expect that it would issue commands to terminate old instances and start new instances, without waiting.
For those who have to replace in smaller batches than the current `desired_capacity`, then they would need to set `wait_for_instances: yes`. Perhaps a warning might be in order if `replace_batch_size` is less than `desired_capacity`
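As a rough illustration of the expected behaviour (hypothetical helper, not the actual module code), the rolling-replace path could gate each of its polling loops on the flag:
```python
# Hypothetical sketch: skip the wait helpers entirely when the user set
# wait_for_instances: no; the helper name and its call sites are assumptions.
def maybe_wait(params, wait_fn, *args, **kwargs):
    """Run the given wait helper only when the user asked to wait."""
    if params.get('wait_for_instances', True):
        return wait_fn(*args, **kwargs)
    return None


maybe_wait({'wait_for_instances': False}, lambda: print('polling...'))  # no-op when waiting is disabled
```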
##### ACTUAL RESULTS
Playbook output when it takes too long to execute:
```
TASK [test : Update ASG] ***********************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Waited too long for old instances to terminate. Wed Dec 18 12:51:55 2019"}
```
|
https://github.com/ansible/ansible/issues/65936
|
https://github.com/ansible/ansible/pull/66863
|
d2f4d305ee4175cc0315a705824b168b3096e06a
|
f98874e4f98837e4b9868780b19cf6614b00282a
| 2019-12-18T02:30:02Z |
python
| 2020-02-15T12:56:39Z |
changelogs/fragments/66863-ec2_asg-max_instance_lifetime-and-honor-wait-on-replace.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,936 |
AWS ec2_asg with replace_all_instances:yes will wait for instances to start/terminate even when wait_for_instances:no
|
##### SUMMARY
When using the `ec2_asg` module with `replace_all_instances: yes`, then `wait_for_instances` is effectively ignored.
I would expect that if `wait_for_instances: no` then it would not wait for instances to complete starting/termination.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_asg
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.2
config file = None
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/.local/lib/python3.7/site-packages/ansible
executable location = /home/user/.local/bin/ansible
python version = 3.7.5 (default, Nov 20 2019, 09:21:52) [GCC 9.2.1 20191008]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
(empty)
```
##### OS / ENVIRONMENT
OS: Ubuntu 19.10
Ansible installed via pip3
##### STEPS TO REPRODUCE
```
- name: Update ASG
ec2_asg:
name: "my-asg"
launch_config_name: "my-launchconfig"
min_size: 1
max_size: 1
replace_batch_size: 1
replace_all_instances: yes
wait_for_instances: no
```
This will not return successfully until new instance(s) have started, and any existing instance(s) have completed termination.
If your timeout is not set long enough, then you'll get a 'Waited too long for old instances to terminate' response.
##### EXPECTED RESULTS
I would expect that it would issue commands to terminate old instances and start new instances, without waiting.
For those who have to replace in smaller batches than the current `desired_capacity`, then they would need to set `wait_for_instances: yes`. Perhaps a warning might be in order if `replace_batch_size` is less than `desired_capacity`
##### ACTUAL RESULTS
Playbook output when it takes too long to execute:
```
TASK [test : Update ASG] ***********************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Waited too long for old instances to terminate. Wed Dec 18 12:51:55 2019"}
```
|
https://github.com/ansible/ansible/issues/65936
|
https://github.com/ansible/ansible/pull/66863
|
d2f4d305ee4175cc0315a705824b168b3096e06a
|
f98874e4f98837e4b9868780b19cf6614b00282a
| 2019-12-18T02:30:02Z |
python
| 2020-02-15T12:56:39Z |
lib/ansible/modules/cloud/amazon/ec2_asg.py
|
#!/usr/bin/python
# This file is part of Ansible
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'community'}
DOCUMENTATION = """
---
module: ec2_asg
short_description: Create or delete AWS AutoScaling Groups (ASGs)
description:
- Can create or delete AWS AutoScaling Groups.
- Can be used with the M(ec2_lc) module to manage Launch Configurations.
version_added: "1.6"
author: "Gareth Rushgrove (@garethr)"
requirements: [ "boto3", "botocore" ]
options:
state:
description:
- Register or deregister the instance.
choices: ['present', 'absent']
default: present
type: str
name:
description:
- Unique name for group to be created or deleted.
required: true
type: str
load_balancers:
description:
- List of ELB names to use for the group. Use for classic load balancers.
type: list
elements: str
target_group_arns:
description:
- List of target group ARNs to use for the group. Use for application load balancers.
version_added: "2.4"
type: list
elements: str
availability_zones:
description:
- List of availability zone names in which to create the group.
- Defaults to all the availability zones in the region if I(vpc_zone_identifier) is not set.
type: list
elements: str
launch_config_name:
description:
- Name of the Launch configuration to use for the group. See the M(ec2_lc) module for managing these.
- If unspecified then the current group value will be used. One of I(launch_config_name) or I(launch_template) must be provided.
type: str
launch_template:
description:
- Dictionary describing the Launch Template to use
suboptions:
version:
description:
- The version number of the launch template to use.
- Defaults to latest version if not provided.
type: str
launch_template_name:
description:
- The name of the launch template. Only one of I(launch_template_name) or I(launch_template_id) is required.
type: str
launch_template_id:
description:
- The id of the launch template. Only one of I(launch_template_name) or I(launch_template_id) is required.
type: str
type: dict
version_added: "2.8"
min_size:
description:
- Minimum number of instances in group, if unspecified then the current group value will be used.
type: int
max_size:
description:
- Maximum number of instances in group, if unspecified then the current group value will be used.
type: int
mixed_instances_policy:
description:
- A mixed instance policy to use for the ASG.
- Only used when the ASG is configured to use a Launch Template (I(launch_template)).
- 'See also U(https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-autoscaling-autoscalinggroup-mixedinstancespolicy.html)'
required: false
version_added: "2.10"
suboptions:
instance_types:
description:
- A list of instance_types.
type: list
elements: str
type: dict
placement_group:
description:
- Physical location of your cluster placement group created in Amazon EC2.
version_added: "2.3"
type: str
desired_capacity:
description:
- Desired number of instances in group, if unspecified then the current group value will be used.
type: int
replace_all_instances:
description:
- In a rolling fashion, replace all instances that used the old launch configuration with one from the new launch configuration.
It increases the ASG size by I(replace_batch_size), waits for the new instances to be up and running.
After that, it terminates a batch of old instances, waits for the replacements, and repeats, until all old instances are replaced.
Once that's done the ASG size is reduced back to the expected size.
version_added: "1.8"
default: false
type: bool
replace_batch_size:
description:
- Number of instances you'd like to replace at a time. Used with I(replace_all_instances).
required: false
version_added: "1.8"
default: 1
type: int
replace_instances:
description:
- List of I(instance_ids) belonging to the named AutoScalingGroup that you would like to terminate and be replaced with instances
matching the current launch configuration.
version_added: "1.8"
type: list
elements: str
lc_check:
description:
- Check to make sure instances that are being replaced with I(replace_instances) do not already have the current I(launch_config).
version_added: "1.8"
default: true
type: bool
lt_check:
description:
- Check to make sure instances that are being replaced with I(replace_instances) do not already have the current
        I(launch_template) or I(launch_template) I(version).
version_added: "2.8"
default: true
type: bool
vpc_zone_identifier:
description:
- List of VPC subnets to use
type: list
elements: str
tags:
description:
- A list of tags to add to the Auto Scale Group.
- Optional key is I(propagate_at_launch), which defaults to true.
- When I(propagate_at_launch) is true the tags will be propagated to the Instances created.
version_added: "1.7"
type: list
elements: dict
health_check_period:
description:
- Length of time in seconds after a new EC2 instance comes into service that Auto Scaling starts checking its health.
required: false
default: 300
version_added: "1.7"
type: int
health_check_type:
description:
- The service you want the health status from, Amazon EC2 or Elastic Load Balancer.
required: false
default: EC2
version_added: "1.7"
choices: ['EC2', 'ELB']
type: str
default_cooldown:
description:
- The number of seconds after a scaling activity completes before another can begin.
default: 300
version_added: "2.0"
type: int
wait_timeout:
description:
- How long to wait for instances to become viable when replaced. If you experience the error "Waited too long for ELB instances to be healthy",
try increasing this value.
default: 300
type: int
version_added: "1.8"
wait_for_instances:
description:
- Wait for the ASG instances to be in a ready state before exiting. If instances are behind an ELB, it will wait until the ELB determines all
instances have a lifecycle_state of "InService" and a health_status of "Healthy".
version_added: "1.9"
default: true
type: bool
termination_policies:
description:
- An ordered list of criteria used for selecting instances to be removed from the Auto Scaling group when reducing capacity.
- Using I(termination_policies=Default) when modifying an existing AutoScalingGroup will result in the existing policy being retained
instead of changed to C(Default).
- 'Valid values include: C(Default), C(OldestInstance), C(NewestInstance), C(OldestLaunchConfiguration), C(ClosestToNextInstanceHour)'
- 'Full documentation of valid values can be found in the AWS documentation:'
- 'U(https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-termination.html#custom-termination-policy)'
default: Default
version_added: "2.0"
type: list
elements: str
notification_topic:
description:
- A SNS topic ARN to send auto scaling notifications to.
version_added: "2.2"
type: str
notification_types:
description:
- A list of auto scaling events to trigger notifications on.
default:
- 'autoscaling:EC2_INSTANCE_LAUNCH'
- 'autoscaling:EC2_INSTANCE_LAUNCH_ERROR'
- 'autoscaling:EC2_INSTANCE_TERMINATE'
- 'autoscaling:EC2_INSTANCE_TERMINATE_ERROR'
required: false
version_added: "2.2"
type: list
elements: str
suspend_processes:
description:
- A list of scaling processes to suspend.
- 'Valid values include:'
- C(Launch), C(Terminate), C(HealthCheck), C(ReplaceUnhealthy), C(AZRebalance), C(AlarmNotification), C(ScheduledActions), C(AddToLoadBalancer)
- 'Full documentation of valid values can be found in the AWS documentation:'
- 'U(https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html)'
default: []
version_added: "2.3"
type: list
elements: str
metrics_collection:
description:
- Enable ASG metrics collection.
type: bool
default: false
version_added: "2.6"
metrics_granularity:
description:
- When I(metrics_collection=true) this will determine the granularity of metrics collected by CloudWatch.
default: "1Minute"
version_added: "2.6"
type: str
metrics_list:
description:
- List of autoscaling metrics to collect when I(metrics_collection=true).
default:
- 'GroupMinSize'
- 'GroupMaxSize'
- 'GroupDesiredCapacity'
- 'GroupInServiceInstances'
- 'GroupPendingInstances'
- 'GroupStandbyInstances'
- 'GroupTerminatingInstances'
- 'GroupTotalInstances'
version_added: "2.6"
type: list
elements: str
extends_documentation_fragment:
- aws
- ec2
"""
EXAMPLES = '''
# Basic configuration with Launch Configuration
- ec2_asg:
name: special
load_balancers: [ 'lb1', 'lb2' ]
availability_zones: [ 'eu-west-1a', 'eu-west-1b' ]
launch_config_name: 'lc-1'
min_size: 1
max_size: 10
desired_capacity: 5
vpc_zone_identifier: [ 'subnet-abcd1234', 'subnet-1a2b3c4d' ]
tags:
- environment: production
propagate_at_launch: no
# Rolling ASG Updates
# Below is an example of how to assign a new launch config to an ASG and terminate old instances.
#
# All instances in "myasg" that do not have the launch configuration named "my_new_lc" will be terminated in
# a rolling fashion with instances using the current launch configuration, "my_new_lc".
#
# This could also be considered a rolling deploy of a pre-baked AMI.
#
# If this is a newly created group, the instances will not be replaced since all instances
# will have the current launch configuration.
- name: create launch config
ec2_lc:
name: my_new_lc
image_id: ami-lkajsf
key_name: mykey
region: us-east-1
security_groups: sg-23423
instance_type: m1.small
assign_public_ip: yes
- ec2_asg:
name: myasg
launch_config_name: my_new_lc
health_check_period: 60
health_check_type: ELB
replace_all_instances: yes
min_size: 5
max_size: 5
desired_capacity: 5
region: us-east-1
# To only replace a couple of instances instead of all of them, supply a list
# to "replace_instances":
- ec2_asg:
name: myasg
launch_config_name: my_new_lc
health_check_period: 60
health_check_type: ELB
replace_instances:
- i-b345231
- i-24c2931
min_size: 5
max_size: 5
desired_capacity: 5
region: us-east-1
# Basic Configuration with Launch Template
- ec2_asg:
name: special
load_balancers: [ 'lb1', 'lb2' ]
availability_zones: [ 'eu-west-1a', 'eu-west-1b' ]
launch_template:
version: '1'
launch_template_name: 'lt-example'
launch_template_id: 'lt-123456'
min_size: 1
max_size: 10
desired_capacity: 5
vpc_zone_identifier: [ 'subnet-abcd1234', 'subnet-1a2b3c4d' ]
tags:
- environment: production
propagate_at_launch: no
# Basic Configuration with Launch Template using mixed instance policy
- ec2_asg:
name: special
load_balancers: [ 'lb1', 'lb2' ]
availability_zones: [ 'eu-west-1a', 'eu-west-1b' ]
launch_template:
version: '1'
launch_template_name: 'lt-example'
launch_template_id: 'lt-123456'
mixed_instances_policy:
instance_types:
- t3a.large
- t3.large
- t2.large
min_size: 1
max_size: 10
desired_capacity: 5
vpc_zone_identifier: [ 'subnet-abcd1234', 'subnet-1a2b3c4d' ]
tags:
- environment: production
propagate_at_launch: no
'''
RETURN = '''
---
auto_scaling_group_name:
description: The unique name of the auto scaling group
returned: success
type: str
sample: "myasg"
auto_scaling_group_arn:
description: The unique ARN of the autoscaling group
returned: success
type: str
sample: "arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:6a09ad6d-eeee-1234-b987-ee123ced01ad:autoScalingGroupName/myasg"
availability_zones:
description: The availability zones for the auto scaling group
returned: success
type: list
sample: [
"us-east-1d"
]
created_time:
description: Timestamp of create time of the auto scaling group
returned: success
type: str
sample: "2017-11-08T14:41:48.272000+00:00"
default_cooldown:
description: The default cooldown time in seconds.
returned: success
type: int
sample: 300
desired_capacity:
description: The number of EC2 instances that should be running in this group.
returned: success
type: int
sample: 3
healthcheck_grace_period:
description: Length of time in seconds after a new EC2 instance comes into service that Auto Scaling starts checking its health.
returned: success
type: int
sample: 30
healthcheck_type:
description: The service you want the health status from, one of "EC2" or "ELB".
returned: success
type: str
sample: "ELB"
healthy_instances:
description: Number of instances in a healthy state
returned: success
type: int
sample: 5
in_service_instances:
description: Number of instances in service
returned: success
type: int
sample: 3
instance_facts:
description: Dictionary of EC2 instances and their status as it relates to the ASG.
returned: success
type: dict
sample: {
"i-0123456789012": {
"health_status": "Healthy",
"launch_config_name": "public-webapp-production-1",
"lifecycle_state": "InService"
}
}
instances:
description: list of instance IDs in the ASG
returned: success
type: list
sample: [
"i-0123456789012"
]
launch_config_name:
description: >
Name of launch configuration associated with the ASG. Same as launch_configuration_name,
provided for compatibility with ec2_asg module.
returned: success
type: str
sample: "public-webapp-production-1"
load_balancers:
description: List of load balancer names attached to the ASG.
returned: success
type: list
sample: ["elb-webapp-prod"]
max_size:
description: Maximum size of group
returned: success
type: int
sample: 3
min_size:
description: Minimum size of group
returned: success
type: int
sample: 1
mixed_instances_policy:
description: Returns the list of instance types if a mixed instance policy is set.
returned: success
type: list
sample: ["t3.micro", "t3a.micro"]
pending_instances:
description: Number of instances in pending state
returned: success
type: int
sample: 1
tags:
description: List of tags for the ASG, and whether or not each tag propagates to instances at launch.
returned: success
type: list
sample: [
{
"key": "Name",
"value": "public-webapp-production-1",
"resource_id": "public-webapp-production-1",
"resource_type": "auto-scaling-group",
"propagate_at_launch": "true"
},
{
"key": "env",
"value": "production",
"resource_id": "public-webapp-production-1",
"resource_type": "auto-scaling-group",
"propagate_at_launch": "true"
}
]
target_group_arns:
description: List of ARNs of the target groups that the ASG populates
returned: success
type: list
sample: [
"arn:aws:elasticloadbalancing:ap-southeast-2:123456789012:targetgroup/target-group-host-hello/1a2b3c4d5e6f1a2b",
"arn:aws:elasticloadbalancing:ap-southeast-2:123456789012:targetgroup/target-group-path-world/abcd1234abcd1234"
]
target_group_names:
description: List of names of the target groups that the ASG populates
returned: success
type: list
sample: [
"target-group-host-hello",
"target-group-path-world"
]
termination_policies:
description: A list of termination policies for the group.
returned: success
type: list
sample: ["Default"]
unhealthy_instances:
description: Number of instances in an unhealthy state
returned: success
type: int
sample: 0
viable_instances:
description: Number of instances in a viable state
returned: success
type: int
sample: 1
vpc_zone_identifier:
description: VPC zone ID / subnet id for the auto scaling group
returned: success
type: str
sample: "subnet-a31ef45f"
metrics_collection:
description: List of enabled AutoScalingGroup metrics
returned: success
type: list
sample: [
{
"Granularity": "1Minute",
"Metric": "GroupInServiceInstances"
}
]
'''
import time
import traceback
from ansible.module_utils._text import to_native
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ec2 import boto3_conn, ec2_argument_spec, HAS_BOTO3, camel_dict_to_snake_dict, get_aws_connection_info, AWSRetry
try:
import botocore
except ImportError:
pass # will be detected by imported HAS_BOTO3
from ansible.module_utils.aws.core import AnsibleAWSModule
ASG_ATTRIBUTES = ('AvailabilityZones', 'DefaultCooldown', 'DesiredCapacity',
'HealthCheckGracePeriod', 'HealthCheckType', 'LaunchConfigurationName',
'LoadBalancerNames', 'MaxSize', 'MinSize', 'AutoScalingGroupName', 'PlacementGroup',
'TerminationPolicies', 'VPCZoneIdentifier')
INSTANCE_ATTRIBUTES = ('instance_id', 'health_status', 'lifecycle_state', 'launch_config_name')
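# Each AWS API helper below is wrapped with AWSRetry.backoff using these parameters, so transient
# throttling or rate-limit errors are retried with exponential backoff instead of failing the module.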
backoff_params = dict(tries=10, delay=3, backoff=1.5)
@AWSRetry.backoff(**backoff_params)
def describe_autoscaling_groups(connection, group_name):
pg = connection.get_paginator('describe_auto_scaling_groups')
return pg.paginate(AutoScalingGroupNames=[group_name]).build_full_result().get('AutoScalingGroups', [])
@AWSRetry.backoff(**backoff_params)
def deregister_lb_instances(connection, lb_name, instance_id):
connection.deregister_instances_from_load_balancer(LoadBalancerName=lb_name, Instances=[dict(InstanceId=instance_id)])
@AWSRetry.backoff(**backoff_params)
def describe_instance_health(connection, lb_name, instances):
params = dict(LoadBalancerName=lb_name)
if instances:
params.update(Instances=instances)
return connection.describe_instance_health(**params)
@AWSRetry.backoff(**backoff_params)
def describe_target_health(connection, target_group_arn, instances):
return connection.describe_target_health(TargetGroupArn=target_group_arn, Targets=instances)
@AWSRetry.backoff(**backoff_params)
def suspend_asg_processes(connection, asg_name, processes):
connection.suspend_processes(AutoScalingGroupName=asg_name, ScalingProcesses=processes)
@AWSRetry.backoff(**backoff_params)
def resume_asg_processes(connection, asg_name, processes):
connection.resume_processes(AutoScalingGroupName=asg_name, ScalingProcesses=processes)
@AWSRetry.backoff(**backoff_params)
def describe_launch_configurations(connection, launch_config_name):
pg = connection.get_paginator('describe_launch_configurations')
return pg.paginate(LaunchConfigurationNames=[launch_config_name]).build_full_result()
@AWSRetry.backoff(**backoff_params)
def describe_launch_templates(connection, launch_template):
if launch_template['launch_template_id'] is not None:
try:
lt = connection.describe_launch_templates(LaunchTemplateIds=[launch_template['launch_template_id']])
return lt
except (botocore.exceptions.ClientError) as e:
module.fail_json(msg="No launch template found matching: %s" % launch_template)
else:
try:
lt = connection.describe_launch_templates(LaunchTemplateNames=[launch_template['launch_template_name']])
return lt
except (botocore.exceptions.ClientError) as e:
module.fail_json(msg="No launch template found matching: %s" % launch_template)
@AWSRetry.backoff(**backoff_params)
def create_asg(connection, **params):
connection.create_auto_scaling_group(**params)
@AWSRetry.backoff(**backoff_params)
def put_notification_config(connection, asg_name, topic_arn, notification_types):
connection.put_notification_configuration(
AutoScalingGroupName=asg_name,
TopicARN=topic_arn,
NotificationTypes=notification_types
)
@AWSRetry.backoff(**backoff_params)
def del_notification_config(connection, asg_name, topic_arn):
connection.delete_notification_configuration(
AutoScalingGroupName=asg_name,
TopicARN=topic_arn
)
@AWSRetry.backoff(**backoff_params)
def attach_load_balancers(connection, asg_name, load_balancers):
connection.attach_load_balancers(AutoScalingGroupName=asg_name, LoadBalancerNames=load_balancers)
@AWSRetry.backoff(**backoff_params)
def detach_load_balancers(connection, asg_name, load_balancers):
connection.detach_load_balancers(AutoScalingGroupName=asg_name, LoadBalancerNames=load_balancers)
@AWSRetry.backoff(**backoff_params)
def attach_lb_target_groups(connection, asg_name, target_group_arns):
connection.attach_load_balancer_target_groups(AutoScalingGroupName=asg_name, TargetGroupARNs=target_group_arns)
@AWSRetry.backoff(**backoff_params)
def detach_lb_target_groups(connection, asg_name, target_group_arns):
connection.detach_load_balancer_target_groups(AutoScalingGroupName=asg_name, TargetGroupARNs=target_group_arns)
@AWSRetry.backoff(**backoff_params)
def update_asg(connection, **params):
connection.update_auto_scaling_group(**params)
@AWSRetry.backoff(catch_extra_error_codes=['ScalingActivityInProgress'], **backoff_params)
def delete_asg(connection, asg_name, force_delete):
connection.delete_auto_scaling_group(AutoScalingGroupName=asg_name, ForceDelete=force_delete)
@AWSRetry.backoff(**backoff_params)
def terminate_asg_instance(connection, instance_id, decrement_capacity):
connection.terminate_instance_in_auto_scaling_group(InstanceId=instance_id,
ShouldDecrementDesiredCapacity=decrement_capacity)
def enforce_required_arguments_for_create():
''' Since many arguments are not required for autoscaling group deletion,
they cannot be mandatory arguments for the module, so we enforce
them here for group creation '''
missing_args = []
if module.params.get('launch_config_name') is None and module.params.get('launch_template') is None:
module.fail_json(msg="Missing either launch_config_name or launch_template for autoscaling group create")
for arg in ('min_size', 'max_size'):
if module.params[arg] is None:
missing_args.append(arg)
if missing_args:
module.fail_json(msg="Missing required arguments for autoscaling group create: %s" % ",".join(missing_args))
def get_properties(autoscaling_group):
properties = dict()
properties['healthy_instances'] = 0
properties['in_service_instances'] = 0
properties['unhealthy_instances'] = 0
properties['pending_instances'] = 0
properties['viable_instances'] = 0
properties['terminating_instances'] = 0
instance_facts = dict()
autoscaling_group_instances = autoscaling_group.get('Instances')
if autoscaling_group_instances:
properties['instances'] = [i['InstanceId'] for i in autoscaling_group_instances]
for i in autoscaling_group_instances:
if i.get('LaunchConfigurationName'):
instance_facts[i['InstanceId']] = {'health_status': i['HealthStatus'],
'lifecycle_state': i['LifecycleState'],
'launch_config_name': i['LaunchConfigurationName']}
elif i.get('LaunchTemplate'):
instance_facts[i['InstanceId']] = {'health_status': i['HealthStatus'],
'lifecycle_state': i['LifecycleState'],
'launch_template': i['LaunchTemplate']}
else:
instance_facts[i['InstanceId']] = {'health_status': i['HealthStatus'],
'lifecycle_state': i['LifecycleState']}
if i['HealthStatus'] == 'Healthy' and i['LifecycleState'] == 'InService':
properties['viable_instances'] += 1
if i['HealthStatus'] == 'Healthy':
properties['healthy_instances'] += 1
else:
properties['unhealthy_instances'] += 1
if i['LifecycleState'] == 'InService':
properties['in_service_instances'] += 1
if i['LifecycleState'] == 'Terminating':
properties['terminating_instances'] += 1
if i['LifecycleState'] == 'Pending':
properties['pending_instances'] += 1
else:
properties['instances'] = []
properties['auto_scaling_group_name'] = autoscaling_group.get('AutoScalingGroupName')
properties['auto_scaling_group_arn'] = autoscaling_group.get('AutoScalingGroupARN')
properties['availability_zones'] = autoscaling_group.get('AvailabilityZones')
properties['created_time'] = autoscaling_group.get('CreatedTime')
properties['instance_facts'] = instance_facts
properties['load_balancers'] = autoscaling_group.get('LoadBalancerNames')
if autoscaling_group.get('LaunchConfigurationName'):
properties['launch_config_name'] = autoscaling_group.get('LaunchConfigurationName')
else:
properties['launch_template'] = autoscaling_group.get('LaunchTemplate')
properties['tags'] = autoscaling_group.get('Tags')
properties['min_size'] = autoscaling_group.get('MinSize')
properties['max_size'] = autoscaling_group.get('MaxSize')
properties['desired_capacity'] = autoscaling_group.get('DesiredCapacity')
properties['default_cooldown'] = autoscaling_group.get('DefaultCooldown')
properties['healthcheck_grace_period'] = autoscaling_group.get('HealthCheckGracePeriod')
properties['healthcheck_type'] = autoscaling_group.get('HealthCheckType')
properties['default_cooldown'] = autoscaling_group.get('DefaultCooldown')
properties['termination_policies'] = autoscaling_group.get('TerminationPolicies')
properties['target_group_arns'] = autoscaling_group.get('TargetGroupARNs')
properties['vpc_zone_identifier'] = autoscaling_group.get('VPCZoneIdentifier')
raw_mixed_instance_object = autoscaling_group.get('MixedInstancesPolicy')
if raw_mixed_instance_object:
properties['mixed_instances_policy'] = [x['InstanceType'] for x in raw_mixed_instance_object.get('LaunchTemplate').get('Overrides')]
metrics = autoscaling_group.get('EnabledMetrics')
if metrics:
metrics.sort(key=lambda x: x["Metric"])
properties['metrics_collection'] = metrics
if properties['target_group_arns']:
region, ec2_url, aws_connect_params = get_aws_connection_info(module, boto3=True)
elbv2_connection = boto3_conn(module,
conn_type='client',
resource='elbv2',
region=region,
endpoint=ec2_url,
**aws_connect_params)
tg_paginator = elbv2_connection.get_paginator('describe_target_groups')
tg_result = tg_paginator.paginate(TargetGroupArns=properties['target_group_arns']).build_full_result()
target_groups = tg_result['TargetGroups']
else:
target_groups = []
properties['target_group_names'] = [tg['TargetGroupName'] for tg in target_groups]
return properties
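# Build the launch parameters for create/update calls: either a LaunchConfigurationName, or a
# LaunchTemplate specification (optionally wrapped in a MixedInstancesPolicy with instance type overrides).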
def get_launch_object(connection, ec2_connection):
launch_object = dict()
launch_config_name = module.params.get('launch_config_name')
launch_template = module.params.get('launch_template')
mixed_instances_policy = module.params.get('mixed_instances_policy')
if launch_config_name is None and launch_template is None:
return launch_object
elif launch_config_name:
try:
launch_configs = describe_launch_configurations(connection, launch_config_name)
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json(msg="Failed to describe launch configurations",
exception=traceback.format_exc())
if len(launch_configs['LaunchConfigurations']) == 0:
module.fail_json(msg="No launch config found with name %s" % launch_config_name)
launch_object = {"LaunchConfigurationName": launch_configs['LaunchConfigurations'][0]['LaunchConfigurationName']}
return launch_object
elif launch_template:
lt = describe_launch_templates(ec2_connection, launch_template)['LaunchTemplates'][0]
if launch_template['version'] is not None:
launch_object = {"LaunchTemplate": {"LaunchTemplateId": lt['LaunchTemplateId'], "Version": launch_template['version']}}
else:
launch_object = {"LaunchTemplate": {"LaunchTemplateId": lt['LaunchTemplateId'], "Version": str(lt['LatestVersionNumber'])}}
if mixed_instances_policy:
instance_types = mixed_instances_policy.get('instance_types', [])
policy = {
'LaunchTemplate': {
'LaunchTemplateSpecification': launch_object['LaunchTemplate']
}
}
if instance_types:
policy['LaunchTemplate']['Overrides'] = []
for instance_type in instance_types:
instance_type_dict = {'InstanceType': instance_type}
policy['LaunchTemplate']['Overrides'].append(instance_type_dict)
launch_object['MixedInstancesPolicy'] = policy
return launch_object
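# Deregister an instance from every classic ELB attached to the group (when health_check_type is ELB)
# and wait until the load balancers stop reporting it as InService.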
def elb_dreg(asg_connection, group_name, instance_id):
region, ec2_url, aws_connect_params = get_aws_connection_info(module, boto3=True)
as_group = describe_autoscaling_groups(asg_connection, group_name)[0]
wait_timeout = module.params.get('wait_timeout')
count = 1
if as_group['LoadBalancerNames'] and as_group['HealthCheckType'] == 'ELB':
elb_connection = boto3_conn(module,
conn_type='client',
resource='elb',
region=region,
endpoint=ec2_url,
**aws_connect_params)
else:
return
for lb in as_group['LoadBalancerNames']:
deregister_lb_instances(elb_connection, lb, instance_id)
module.debug("De-registering %s from ELB %s" % (instance_id, lb))
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time() and count > 0:
count = 0
for lb in as_group['LoadBalancerNames']:
lb_instances = describe_instance_health(elb_connection, lb, [])
for i in lb_instances['InstanceStates']:
if i['InstanceId'] == instance_id and i['State'] == "InService":
count += 1
module.debug("%s: %s, %s" % (i['InstanceId'], i['State'], i['Description']))
time.sleep(10)
if wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="Waited too long for instance to deregister. {0}".format(time.asctime()))
def elb_healthy(asg_connection, elb_connection, group_name):
healthy_instances = set()
as_group = describe_autoscaling_groups(asg_connection, group_name)[0]
props = get_properties(as_group)
# get healthy, inservice instances from ASG
instances = []
for instance, settings in props['instance_facts'].items():
if settings['lifecycle_state'] == 'InService' and settings['health_status'] == 'Healthy':
instances.append(dict(InstanceId=instance))
module.debug("ASG considers the following instances InService and Healthy: %s" % instances)
module.debug("ELB instance status:")
lb_instances = list()
for lb in as_group.get('LoadBalancerNames'):
# we catch a race condition that sometimes happens if the instance exists in the ASG
# but has not yet shown up in the ELB
try:
lb_instances = describe_instance_health(elb_connection, lb, instances)
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == 'InvalidInstance':
return None
module.fail_json(msg="Failed to get load balancer.",
exception=traceback.format_exc(), **camel_dict_to_snake_dict(e.response))
except botocore.exceptions.BotoCoreError as e:
module.fail_json(msg="Failed to get load balancer.",
exception=traceback.format_exc())
for i in lb_instances.get('InstanceStates'):
if i['State'] == "InService":
healthy_instances.add(i['InstanceId'])
module.debug("ELB Health State %s: %s" % (i['InstanceId'], i['State']))
return len(healthy_instances)
def tg_healthy(asg_connection, elbv2_connection, group_name):
healthy_instances = set()
as_group = describe_autoscaling_groups(asg_connection, group_name)[0]
props = get_properties(as_group)
# get healthy, inservice instances from ASG
instances = []
for instance, settings in props['instance_facts'].items():
if settings['lifecycle_state'] == 'InService' and settings['health_status'] == 'Healthy':
instances.append(dict(Id=instance))
module.debug("ASG considers the following instances InService and Healthy: %s" % instances)
module.debug("Target Group instance status:")
tg_instances = list()
for tg in as_group.get('TargetGroupARNs'):
# we catch a race condition that sometimes happens if the instance exists in the ASG
# but has not yet shown up in the ELB
try:
tg_instances = describe_target_health(elbv2_connection, tg, instances)
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == 'InvalidInstance':
return None
module.fail_json(msg="Failed to get target group.",
exception=traceback.format_exc(), **camel_dict_to_snake_dict(e.response))
except botocore.exceptions.BotoCoreError as e:
module.fail_json(msg="Failed to get target group.",
exception=traceback.format_exc())
for i in tg_instances.get('TargetHealthDescriptions'):
if i['TargetHealth']['State'] == "healthy":
healthy_instances.add(i['Target']['Id'])
module.debug("Target Group Health State %s: %s" % (i['Target']['Id'], i['TargetHealth']['State']))
return len(healthy_instances)
def wait_for_elb(asg_connection, group_name):
region, ec2_url, aws_connect_params = get_aws_connection_info(module, boto3=True)
wait_timeout = module.params.get('wait_timeout')
# if the health_check_type is ELB, we want to query the ELBs directly for instance
# status as to avoid health_check_grace period that is awarded to ASG instances
as_group = describe_autoscaling_groups(asg_connection, group_name)[0]
if as_group.get('LoadBalancerNames') and as_group.get('HealthCheckType') == 'ELB':
module.debug("Waiting for ELB to consider instances healthy.")
elb_connection = boto3_conn(module,
conn_type='client',
resource='elb',
region=region,
endpoint=ec2_url,
**aws_connect_params)
wait_timeout = time.time() + wait_timeout
healthy_instances = elb_healthy(asg_connection, elb_connection, group_name)
while healthy_instances < as_group.get('MinSize') and wait_timeout > time.time():
healthy_instances = elb_healthy(asg_connection, elb_connection, group_name)
module.debug("ELB thinks %s instances are healthy." % healthy_instances)
time.sleep(10)
if wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="Waited too long for ELB instances to be healthy. %s" % time.asctime())
module.debug("Waiting complete. ELB thinks %s instances are healthy." % healthy_instances)
def wait_for_target_group(asg_connection, group_name):
region, ec2_url, aws_connect_params = get_aws_connection_info(module, boto3=True)
wait_timeout = module.params.get('wait_timeout')
# if the health_check_type is ELB, we want to query the ELBs directly for instance
# status as to avoid health_check_grace period that is awarded to ASG instances
as_group = describe_autoscaling_groups(asg_connection, group_name)[0]
if as_group.get('TargetGroupARNs') and as_group.get('HealthCheckType') == 'ELB':
module.debug("Waiting for Target Group to consider instances healthy.")
elbv2_connection = boto3_conn(module,
conn_type='client',
resource='elbv2',
region=region,
endpoint=ec2_url,
**aws_connect_params)
wait_timeout = time.time() + wait_timeout
healthy_instances = tg_healthy(asg_connection, elbv2_connection, group_name)
while healthy_instances < as_group.get('MinSize') and wait_timeout > time.time():
healthy_instances = tg_healthy(asg_connection, elbv2_connection, group_name)
module.debug("Target Group thinks %s instances are healthy." % healthy_instances)
time.sleep(10)
if wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="Waited too long for ELB instances to be healthy. %s" % time.asctime())
module.debug("Waiting complete. Target Group thinks %s instances are healthy." % healthy_instances)
def suspend_processes(ec2_connection, as_group):
suspend_processes = set(module.params.get('suspend_processes'))
try:
suspended_processes = set([p['ProcessName'] for p in as_group['SuspendedProcesses']])
except AttributeError:
# New ASG being created, no suspended_processes defined yet
suspended_processes = set()
if suspend_processes == suspended_processes:
return False
resume_processes = list(suspended_processes - suspend_processes)
if resume_processes:
resume_asg_processes(ec2_connection, module.params.get('name'), resume_processes)
if suspend_processes:
suspend_asg_processes(ec2_connection, module.params.get('name'), list(suspend_processes))
return True
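# Create the ASG when it does not exist yet; otherwise reconcile tags, load balancers, target groups,
# sizes, launch configuration/template and metrics collection against the requested state.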
def create_autoscaling_group(connection):
group_name = module.params.get('name')
load_balancers = module.params['load_balancers']
target_group_arns = module.params['target_group_arns']
availability_zones = module.params['availability_zones']
launch_config_name = module.params.get('launch_config_name')
launch_template = module.params.get('launch_template')
mixed_instances_policy = module.params.get('mixed_instances_policy')
min_size = module.params['min_size']
max_size = module.params['max_size']
placement_group = module.params.get('placement_group')
desired_capacity = module.params.get('desired_capacity')
vpc_zone_identifier = module.params.get('vpc_zone_identifier')
set_tags = module.params.get('tags')
health_check_period = module.params.get('health_check_period')
health_check_type = module.params.get('health_check_type')
default_cooldown = module.params.get('default_cooldown')
wait_for_instances = module.params.get('wait_for_instances')
wait_timeout = module.params.get('wait_timeout')
termination_policies = module.params.get('termination_policies')
notification_topic = module.params.get('notification_topic')
notification_types = module.params.get('notification_types')
metrics_collection = module.params.get('metrics_collection')
metrics_granularity = module.params.get('metrics_granularity')
metrics_list = module.params.get('metrics_list')
try:
as_groups = describe_autoscaling_groups(connection, group_name)
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json(msg="Failed to describe auto scaling groups.",
exception=traceback.format_exc())
region, ec2_url, aws_connect_params = get_aws_connection_info(module, boto3=True)
ec2_connection = boto3_conn(module,
conn_type='client',
resource='ec2',
region=region,
endpoint=ec2_url,
**aws_connect_params)
if vpc_zone_identifier:
vpc_zone_identifier = ','.join(vpc_zone_identifier)
asg_tags = []
for tag in set_tags:
for k, v in tag.items():
if k != 'propagate_at_launch':
asg_tags.append(dict(Key=k,
Value=to_native(v),
PropagateAtLaunch=bool(tag.get('propagate_at_launch', True)),
ResourceType='auto-scaling-group',
ResourceId=group_name))
if not as_groups:
if not vpc_zone_identifier and not availability_zones:
availability_zones = module.params['availability_zones'] = [zone['ZoneName'] for
zone in ec2_connection.describe_availability_zones()['AvailabilityZones']]
enforce_required_arguments_for_create()
if desired_capacity is None:
desired_capacity = min_size
ag = dict(
AutoScalingGroupName=group_name,
MinSize=min_size,
MaxSize=max_size,
DesiredCapacity=desired_capacity,
Tags=asg_tags,
HealthCheckGracePeriod=health_check_period,
HealthCheckType=health_check_type,
DefaultCooldown=default_cooldown,
TerminationPolicies=termination_policies)
if vpc_zone_identifier:
ag['VPCZoneIdentifier'] = vpc_zone_identifier
if availability_zones:
ag['AvailabilityZones'] = availability_zones
if placement_group:
ag['PlacementGroup'] = placement_group
if load_balancers:
ag['LoadBalancerNames'] = load_balancers
if target_group_arns:
ag['TargetGroupARNs'] = target_group_arns
launch_object = get_launch_object(connection, ec2_connection)
if 'LaunchConfigurationName' in launch_object:
ag['LaunchConfigurationName'] = launch_object['LaunchConfigurationName']
elif 'LaunchTemplate' in launch_object:
if 'MixedInstancesPolicy' in launch_object:
ag['MixedInstancesPolicy'] = launch_object['MixedInstancesPolicy']
else:
ag['LaunchTemplate'] = launch_object['LaunchTemplate']
else:
module.fail_json(msg="Missing LaunchConfigurationName or LaunchTemplate",
exception=traceback.format_exc())
try:
create_asg(connection, **ag)
if metrics_collection:
connection.enable_metrics_collection(AutoScalingGroupName=group_name, Granularity=metrics_granularity, Metrics=metrics_list)
all_ag = describe_autoscaling_groups(connection, group_name)
if len(all_ag) == 0:
module.fail_json(msg="No auto scaling group found with the name %s" % group_name)
as_group = all_ag[0]
suspend_processes(connection, as_group)
if wait_for_instances:
wait_for_new_inst(connection, group_name, wait_timeout, desired_capacity, 'viable_instances')
if load_balancers:
wait_for_elb(connection, group_name)
# Wait for target group health if target group(s) defined
if target_group_arns:
wait_for_target_group(connection, group_name)
if notification_topic:
put_notification_config(connection, group_name, notification_topic, notification_types)
as_group = describe_autoscaling_groups(connection, group_name)[0]
asg_properties = get_properties(as_group)
changed = True
return changed, asg_properties
except botocore.exceptions.ClientError as e:
module.fail_json(msg="Failed to create Autoscaling Group.",
exception=traceback.format_exc(), **camel_dict_to_snake_dict(e.response))
except botocore.exceptions.BotoCoreError as e:
module.fail_json(msg="Failed to create Autoscaling Group.",
exception=traceback.format_exc())
else:
as_group = as_groups[0]
initial_asg_properties = get_properties(as_group)
changed = False
if suspend_processes(connection, as_group):
changed = True
# process tag changes
if len(set_tags) > 0:
have_tags = as_group.get('Tags')
want_tags = asg_tags
if have_tags:
have_tags.sort(key=lambda x: x["Key"])
if want_tags:
want_tags.sort(key=lambda x: x["Key"])
dead_tags = []
have_tag_keyvals = [x['Key'] for x in have_tags]
want_tag_keyvals = [x['Key'] for x in want_tags]
for dead_tag in set(have_tag_keyvals).difference(want_tag_keyvals):
changed = True
dead_tags.append(dict(ResourceId=as_group['AutoScalingGroupName'],
ResourceType='auto-scaling-group', Key=dead_tag))
have_tags = [have_tag for have_tag in have_tags if have_tag['Key'] != dead_tag]
if dead_tags:
connection.delete_tags(Tags=dead_tags)
zipped = zip(have_tags, want_tags)
if len(have_tags) != len(want_tags) or not all(x == y for x, y in zipped):
changed = True
connection.create_or_update_tags(Tags=asg_tags)
# Handle load balancer attachments/detachments
# Attach load balancers if they are specified but none currently exist
if load_balancers and not as_group['LoadBalancerNames']:
changed = True
try:
attach_load_balancers(connection, group_name, load_balancers)
except botocore.exceptions.ClientError as e:
module.fail_json(msg="Failed to update Autoscaling Group.",
exception=traceback.format_exc(), **camel_dict_to_snake_dict(e.response))
except botocore.exceptions.BotoCoreError as e:
module.fail_json(msg="Failed to update Autoscaling Group.",
exception=traceback.format_exc())
# Update load balancers if they are specified and one or more already exists
elif as_group['LoadBalancerNames']:
change_load_balancers = load_balancers is not None
# Get differences
if not load_balancers:
load_balancers = list()
wanted_elbs = set(load_balancers)
has_elbs = set(as_group['LoadBalancerNames'])
# check if all requested are already existing
if has_elbs - wanted_elbs and change_load_balancers:
# if wanted contains less than existing, then we need to delete some
elbs_to_detach = has_elbs.difference(wanted_elbs)
if elbs_to_detach:
changed = True
try:
detach_load_balancers(connection, group_name, list(elbs_to_detach))
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json(msg="Failed to detach load balancers %s: %s." % (elbs_to_detach, to_native(e)),
exception=traceback.format_exc())
if wanted_elbs - has_elbs:
# if has contains less than wanted, then we need to add some
elbs_to_attach = wanted_elbs.difference(has_elbs)
if elbs_to_attach:
changed = True
try:
attach_load_balancers(connection, group_name, list(elbs_to_attach))
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json(msg="Failed to attach load balancers %s: %s." % (elbs_to_attach, to_native(e)),
exception=traceback.format_exc())
# Handle target group attachments/detachments
# Attach target groups if they are specified but none currently exist
if target_group_arns and not as_group['TargetGroupARNs']:
changed = True
try:
attach_lb_target_groups(connection, group_name, target_group_arns)
except botocore.exceptions.ClientError as e:
module.fail_json(msg="Failed to update Autoscaling Group.",
exception=traceback.format_exc(), **camel_dict_to_snake_dict(e.response))
except botocore.exceptions.BotoCoreError as e:
module.fail_json(msg="Failed to update Autoscaling Group.",
exception=traceback.format_exc())
# Update target groups if they are specified and one or more already exists
elif target_group_arns is not None and as_group['TargetGroupARNs']:
# Get differences
wanted_tgs = set(target_group_arns)
has_tgs = set(as_group['TargetGroupARNs'])
# check if all requested are already existing
if has_tgs.issuperset(wanted_tgs):
# if wanted contains less than existing, then we need to delete some
tgs_to_detach = has_tgs.difference(wanted_tgs)
if tgs_to_detach:
changed = True
try:
detach_lb_target_groups(connection, group_name, list(tgs_to_detach))
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json(msg="Failed to detach load balancer target groups %s: %s" % (tgs_to_detach, to_native(e)),
exception=traceback.format_exc())
if wanted_tgs.issuperset(has_tgs):
# if has contains less than wanted, then we need to add some
tgs_to_attach = wanted_tgs.difference(has_tgs)
if tgs_to_attach:
changed = True
try:
attach_lb_target_groups(connection, group_name, list(tgs_to_attach))
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json(msg="Failed to attach load balancer target groups %s: %s" % (tgs_to_attach, to_native(e)),
exception=traceback.format_exc())
# check for attributes that aren't required for updating an existing ASG
# check if min_size/max_size/desired capacity have been specified and if not use ASG values
if min_size is None:
min_size = as_group['MinSize']
if max_size is None:
max_size = as_group['MaxSize']
if desired_capacity is None:
desired_capacity = as_group['DesiredCapacity']
ag = dict(
AutoScalingGroupName=group_name,
MinSize=min_size,
MaxSize=max_size,
DesiredCapacity=desired_capacity,
HealthCheckGracePeriod=health_check_period,
HealthCheckType=health_check_type,
DefaultCooldown=default_cooldown,
TerminationPolicies=termination_policies)
# Get the launch object (config or template) if one is provided in args or use the existing one attached to ASG if not.
launch_object = get_launch_object(connection, ec2_connection)
if 'LaunchConfigurationName' in launch_object:
ag['LaunchConfigurationName'] = launch_object['LaunchConfigurationName']
elif 'LaunchTemplate' in launch_object:
if 'MixedInstancesPolicy' in launch_object:
ag['MixedInstancesPolicy'] = launch_object['MixedInstancesPolicy']
else:
ag['LaunchTemplate'] = launch_object['LaunchTemplate']
else:
try:
ag['LaunchConfigurationName'] = as_group['LaunchConfigurationName']
except Exception:
launch_template = as_group['LaunchTemplate']
# Prefer LaunchTemplateId over Name as it's more specific. Only one can be used for update_asg.
ag['LaunchTemplate'] = {"LaunchTemplateId": launch_template['LaunchTemplateId'], "Version": launch_template['Version']}
if availability_zones:
ag['AvailabilityZones'] = availability_zones
if vpc_zone_identifier:
ag['VPCZoneIdentifier'] = vpc_zone_identifier
try:
update_asg(connection, **ag)
if metrics_collection:
connection.enable_metrics_collection(AutoScalingGroupName=group_name, Granularity=metrics_granularity, Metrics=metrics_list)
else:
connection.disable_metrics_collection(AutoScalingGroupName=group_name, Metrics=metrics_list)
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json(msg="Failed to update autoscaling group: %s" % to_native(e),
exception=traceback.format_exc())
if notification_topic:
try:
put_notification_config(connection, group_name, notification_topic, notification_types)
except botocore.exceptions.ClientError as e:
module.fail_json(msg="Failed to update Autoscaling Group notifications.",
exception=traceback.format_exc(), **camel_dict_to_snake_dict(e.response))
except botocore.exceptions.BotoCoreError as e:
module.fail_json(msg="Failed to update Autoscaling Group notifications.",
exception=traceback.format_exc())
if wait_for_instances:
wait_for_new_inst(connection, group_name, wait_timeout, desired_capacity, 'viable_instances')
# Wait for ELB health if ELB(s) defined
if load_balancers:
module.debug('\tWAITING FOR ELB HEALTH')
wait_for_elb(connection, group_name)
# Wait for target group health if target group(s) defined
if target_group_arns:
module.debug('\tWAITING FOR TG HEALTH')
wait_for_target_group(connection, group_name)
try:
as_group = describe_autoscaling_groups(connection, group_name)[0]
asg_properties = get_properties(as_group)
if asg_properties != initial_asg_properties:
changed = True
except botocore.exceptions.ClientError as e:
module.fail_json(msg="Failed to read existing Autoscaling Groups.",
exception=traceback.format_exc(), **camel_dict_to_snake_dict(e.response))
except botocore.exceptions.BotoCoreError as e:
module.fail_json(msg="Failed to read existing Autoscaling Groups.",
exception=traceback.format_exc())
return changed, asg_properties
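# Delete the ASG, either forcefully or by first scaling it to zero and waiting for its instances
# to terminate, depending on wait_for_instances.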
def delete_autoscaling_group(connection):
group_name = module.params.get('name')
notification_topic = module.params.get('notification_topic')
wait_for_instances = module.params.get('wait_for_instances')
wait_timeout = module.params.get('wait_timeout')
if notification_topic:
del_notification_config(connection, group_name, notification_topic)
groups = describe_autoscaling_groups(connection, group_name)
if groups:
wait_timeout = time.time() + wait_timeout
if not wait_for_instances:
delete_asg(connection, group_name, force_delete=True)
else:
updated_params = dict(AutoScalingGroupName=group_name, MinSize=0, MaxSize=0, DesiredCapacity=0)
update_asg(connection, **updated_params)
instances = True
while instances and wait_for_instances and wait_timeout >= time.time():
tmp_groups = describe_autoscaling_groups(connection, group_name)
if tmp_groups:
tmp_group = tmp_groups[0]
if not tmp_group.get('Instances'):
instances = False
time.sleep(10)
if wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="Waited too long for old instances to terminate. %s" % time.asctime())
delete_asg(connection, group_name, force_delete=False)
while describe_autoscaling_groups(connection, group_name) and wait_timeout >= time.time():
time.sleep(5)
if wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="Waited too long for ASG to delete. %s" % time.asctime())
return True
return False
def get_chunks(l, n):
for i in range(0, len(l), n):
yield l[i:i + n]
def update_size(connection, group, max_size, min_size, dc):
module.debug("setting ASG sizes")
module.debug("minimum size: %s, desired_capacity: %s, max size: %s" % (min_size, dc, max_size))
updated_group = dict()
updated_group['AutoScalingGroupName'] = group['AutoScalingGroupName']
updated_group['MinSize'] = min_size
updated_group['MaxSize'] = max_size
updated_group['DesiredCapacity'] = dc
update_asg(connection, **updated_group)
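# Rolling replacement: temporarily grow the group by replace_batch_size, terminate old instances in
# batches, wait for healthy replacements, then restore the original min/max/desired sizes.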
def replace(connection):
batch_size = module.params.get('replace_batch_size')
wait_timeout = module.params.get('wait_timeout')
group_name = module.params.get('name')
max_size = module.params.get('max_size')
min_size = module.params.get('min_size')
desired_capacity = module.params.get('desired_capacity')
launch_config_name = module.params.get('launch_config_name')
# Required to maintain the default value being set to 'true'
if launch_config_name:
lc_check = module.params.get('lc_check')
else:
lc_check = False
# Mirror above behaviour for Launch Templates
launch_template = module.params.get('launch_template')
if launch_template:
lt_check = module.params.get('lt_check')
else:
lt_check = False
replace_instances = module.params.get('replace_instances')
replace_all_instances = module.params.get('replace_all_instances')
as_group = describe_autoscaling_groups(connection, group_name)[0]
if desired_capacity is None:
desired_capacity = as_group['DesiredCapacity']
wait_for_new_inst(connection, group_name, wait_timeout, as_group['MinSize'], 'viable_instances')
props = get_properties(as_group)
instances = props['instances']
if replace_all_instances:
# If replacing all instances, then set replace_instances to current set
# This allows replace_instances and replace_all_instances to behave same
replace_instances = instances
if replace_instances:
instances = replace_instances
# check to see if instances are replaceable if checking launch configs
if launch_config_name:
new_instances, old_instances = get_instances_by_launch_config(props, lc_check, instances)
elif launch_template:
new_instances, old_instances = get_instances_by_launch_template(props, lt_check, instances)
num_new_inst_needed = desired_capacity - len(new_instances)
if lc_check or lt_check:
if num_new_inst_needed == 0 and old_instances:
module.debug("No new instances needed, but old instances are present. Removing old instances")
terminate_batch(connection, old_instances, instances, True)
as_group = describe_autoscaling_groups(connection, group_name)[0]
props = get_properties(as_group)
changed = True
return(changed, props)
# we don't want to spin up extra instances if not necessary
if num_new_inst_needed < batch_size:
module.debug("Overriding batch size to %s" % num_new_inst_needed)
batch_size = num_new_inst_needed
if not old_instances:
changed = False
return(changed, props)
# check if min_size/max_size/desired capacity have been specified and if not use ASG values
if min_size is None:
min_size = as_group['MinSize']
if max_size is None:
max_size = as_group['MaxSize']
# set temporary settings and wait for them to be reached
# This should get overwritten if the number of instances left is less than the batch size.
as_group = describe_autoscaling_groups(connection, group_name)[0]
update_size(connection, as_group, max_size + batch_size, min_size + batch_size, desired_capacity + batch_size)
wait_for_new_inst(connection, group_name, wait_timeout, as_group['MinSize'] + batch_size, 'viable_instances')
wait_for_elb(connection, group_name)
wait_for_target_group(connection, group_name)
as_group = describe_autoscaling_groups(connection, group_name)[0]
props = get_properties(as_group)
instances = props['instances']
if replace_instances:
instances = replace_instances
module.debug("beginning main loop")
for i in get_chunks(instances, batch_size):
# break out of this loop if we have enough new instances
break_early, desired_size, term_instances = terminate_batch(connection, i, instances, False)
wait_for_term_inst(connection, term_instances)
wait_for_new_inst(connection, group_name, wait_timeout, desired_size, 'viable_instances')
wait_for_elb(connection, group_name)
wait_for_target_group(connection, group_name)
as_group = describe_autoscaling_groups(connection, group_name)[0]
if break_early:
module.debug("breaking loop")
break
update_size(connection, as_group, max_size, min_size, desired_capacity)
as_group = describe_autoscaling_groups(connection, group_name)[0]
asg_properties = get_properties(as_group)
module.debug("Rolling update complete.")
changed = True
return(changed, asg_properties)
def get_instances_by_launch_config(props, lc_check, initial_instances):
new_instances = []
old_instances = []
# old instances are those that have the old launch config
if lc_check:
for i in props['instances']:
# Check if migrating from launch_template to launch_config first
if 'launch_template' in props['instance_facts'][i]:
old_instances.append(i)
elif props['instance_facts'][i].get('launch_config_name') == props['launch_config_name']:
new_instances.append(i)
else:
old_instances.append(i)
else:
module.debug("Comparing initial instances with current: %s" % initial_instances)
for i in props['instances']:
if i not in initial_instances:
new_instances.append(i)
else:
old_instances.append(i)
module.debug("New instances: %s, %s" % (len(new_instances), new_instances))
module.debug("Old instances: %s, %s" % (len(old_instances), old_instances))
return new_instances, old_instances
def get_instances_by_launch_template(props, lt_check, initial_instances):
new_instances = []
old_instances = []
# old instances are those that have the old launch template or version of the same launch template
if lt_check:
for i in props['instances']:
# Check if migrating from launch_config_name to launch_template_name first
if 'launch_config_name' in props['instance_facts'][i]:
old_instances.append(i)
elif props['instance_facts'][i].get('launch_template') == props['launch_template']:
new_instances.append(i)
else:
old_instances.append(i)
else:
module.debug("Comparing initial instances with current: %s" % initial_instances)
for i in props['instances']:
if i not in initial_instances:
new_instances.append(i)
else:
old_instances.append(i)
module.debug("New instances: %s, %s" % (len(new_instances), new_instances))
module.debug("Old instances: %s, %s" % (len(old_instances), old_instances))
return new_instances, old_instances
def list_purgeable_instances(props, lc_check, lt_check, replace_instances, initial_instances):
instances_to_terminate = []
instances = (inst_id for inst_id in replace_instances if inst_id in props['instances'])
# check to make sure instances given are actually in the given ASG
# and they have a non-current launch config
if module.params.get('launch_config_name'):
if lc_check:
for i in instances:
if 'launch_template' in props['instance_facts'][i]:
instances_to_terminate.append(i)
elif props['instance_facts'][i]['launch_config_name'] != props['launch_config_name']:
instances_to_terminate.append(i)
else:
for i in instances:
if i in initial_instances:
instances_to_terminate.append(i)
elif module.params.get('launch_template'):
if lt_check:
for i in instances:
if 'launch_config_name' in props['instance_facts'][i]:
instances_to_terminate.append(i)
elif props['instance_facts'][i]['launch_template'] != props['launch_template']:
instances_to_terminate.append(i)
else:
for i in instances:
if i in initial_instances:
instances_to_terminate.append(i)
return instances_to_terminate
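# Terminate one batch of replaceable instances, deregistering each from its ELBs first and deciding
# whether desired capacity should be decremented along with each termination.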
def terminate_batch(connection, replace_instances, initial_instances, leftovers=False):
batch_size = module.params.get('replace_batch_size')
min_size = module.params.get('min_size')
desired_capacity = module.params.get('desired_capacity')
group_name = module.params.get('name')
lc_check = module.params.get('lc_check')
lt_check = module.params.get('lt_check')
decrement_capacity = False
break_loop = False
as_group = describe_autoscaling_groups(connection, group_name)[0]
if desired_capacity is None:
desired_capacity = as_group['DesiredCapacity']
props = get_properties(as_group)
desired_size = as_group['MinSize']
if module.params.get('launch_config_name'):
new_instances, old_instances = get_instances_by_launch_config(props, lc_check, initial_instances)
else:
new_instances, old_instances = get_instances_by_launch_template(props, lt_check, initial_instances)
num_new_inst_needed = desired_capacity - len(new_instances)
# check to make sure instances given are actually in the given ASG
# and they have a non-current launch config
instances_to_terminate = list_purgeable_instances(props, lc_check, lt_check, replace_instances, initial_instances)
module.debug("new instances needed: %s" % num_new_inst_needed)
module.debug("new instances: %s" % new_instances)
module.debug("old instances: %s" % old_instances)
module.debug("batch instances: %s" % ",".join(instances_to_terminate))
if num_new_inst_needed == 0:
decrement_capacity = True
if as_group['MinSize'] != min_size:
if min_size is None:
min_size = as_group['MinSize']
updated_params = dict(AutoScalingGroupName=as_group['AutoScalingGroupName'], MinSize=min_size)
update_asg(connection, **updated_params)
module.debug("Updating minimum size back to original of %s" % min_size)
# if there are some leftover old instances, but we are already at capacity with new ones
# we don't want to decrement capacity
if leftovers:
decrement_capacity = False
break_loop = True
instances_to_terminate = old_instances
desired_size = min_size
module.debug("No new instances needed")
if num_new_inst_needed < batch_size and num_new_inst_needed != 0:
instances_to_terminate = instances_to_terminate[:num_new_inst_needed]
decrement_capacity = False
break_loop = False
module.debug("%s new instances needed" % num_new_inst_needed)
module.debug("decrementing capacity: %s" % decrement_capacity)
for instance_id in instances_to_terminate:
elb_dreg(connection, group_name, instance_id)
module.debug("terminating instance: %s" % instance_id)
terminate_asg_instance(connection, instance_id, decrement_capacity)
# we wait to make sure the machines we marked as Unhealthy are
# no longer in the list
return break_loop, desired_size, instances_to_terminate
def wait_for_term_inst(connection, term_instances):
wait_timeout = module.params.get('wait_timeout')
group_name = module.params.get('name')
as_group = describe_autoscaling_groups(connection, group_name)[0]
count = 1
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time() and count > 0:
module.debug("waiting for instances to terminate")
count = 0
as_group = describe_autoscaling_groups(connection, group_name)[0]
props = get_properties(as_group)
instance_facts = props['instance_facts']
instances = (i for i in instance_facts if i in term_instances)
for i in instances:
lifecycle = instance_facts[i]['lifecycle_state']
health = instance_facts[i]['health_status']
module.debug("Instance %s has state of %s,%s" % (i, lifecycle, health))
if lifecycle.startswith('Terminating') or health == 'Unhealthy':
count += 1
time.sleep(10)
if wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="Waited too long for old instances to terminate. %s" % time.asctime())
def wait_for_new_inst(connection, group_name, wait_timeout, desired_size, prop):
# make sure we have the latest stats after that last loop.
as_group = describe_autoscaling_groups(connection, group_name)[0]
props = get_properties(as_group)
module.debug("Waiting for %s = %s, currently %s" % (prop, desired_size, props[prop]))
# now we make sure that we have enough instances in a viable state
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time() and desired_size > props[prop]:
module.debug("Waiting for %s = %s, currently %s" % (prop, desired_size, props[prop]))
time.sleep(10)
as_group = describe_autoscaling_groups(connection, group_name)[0]
props = get_properties(as_group)
if wait_timeout <= time.time():
# waiting took too long
module.fail_json(msg="Waited too long for new instances to become viable. %s" % time.asctime())
module.debug("Reached %s: %s" % (prop, desired_size))
return props
def asg_exists(connection):
group_name = module.params.get('name')
as_group = describe_autoscaling_groups(connection, group_name)
return bool(len(as_group))
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
name=dict(required=True, type='str'),
load_balancers=dict(type='list'),
target_group_arns=dict(type='list'),
availability_zones=dict(type='list'),
launch_config_name=dict(type='str'),
launch_template=dict(type='dict',
default=None,
options=dict(
version=dict(type='str'),
launch_template_name=dict(type='str'),
launch_template_id=dict(type='str'),
),
),
mixed_instances_policy=dict(type='dict',
default=None,
options=dict(
instance_types=dict(type='list', elements='str'),
)),
min_size=dict(type='int'),
max_size=dict(type='int'),
placement_group=dict(type='str'),
desired_capacity=dict(type='int'),
vpc_zone_identifier=dict(type='list'),
replace_batch_size=dict(type='int', default=1),
replace_all_instances=dict(type='bool', default=False),
replace_instances=dict(type='list', default=[]),
lc_check=dict(type='bool', default=True),
lt_check=dict(type='bool', default=True),
wait_timeout=dict(type='int', default=300),
state=dict(default='present', choices=['present', 'absent']),
tags=dict(type='list', default=[]),
health_check_period=dict(type='int', default=300),
health_check_type=dict(default='EC2', choices=['EC2', 'ELB']),
default_cooldown=dict(type='int', default=300),
wait_for_instances=dict(type='bool', default=True),
termination_policies=dict(type='list', default='Default'),
notification_topic=dict(type='str', default=None),
notification_types=dict(type='list', default=[
'autoscaling:EC2_INSTANCE_LAUNCH',
'autoscaling:EC2_INSTANCE_LAUNCH_ERROR',
'autoscaling:EC2_INSTANCE_TERMINATE',
'autoscaling:EC2_INSTANCE_TERMINATE_ERROR'
]),
suspend_processes=dict(type='list', default=[]),
metrics_collection=dict(type='bool', default=False),
metrics_granularity=dict(type='str', default='1Minute'),
metrics_list=dict(type='list', default=[
'GroupMinSize',
'GroupMaxSize',
'GroupDesiredCapacity',
'GroupInServiceInstances',
'GroupPendingInstances',
'GroupStandbyInstances',
'GroupTerminatingInstances',
'GroupTotalInstances'
])
),
)
global module
module = AnsibleAWSModule(
argument_spec=argument_spec,
mutually_exclusive=[
['replace_all_instances', 'replace_instances'],
['launch_config_name', 'launch_template']]
)
if not HAS_BOTO3:
module.fail_json(msg='boto3 required for this module')
if module.params.get('mixed_instances_policy') and not module.botocore_at_least('1.12.45'):
module.fail_json(msg="mixed_instances_policy is only supported with botocore >= 1.12.45")
state = module.params.get('state')
replace_instances = module.params.get('replace_instances')
replace_all_instances = module.params.get('replace_all_instances')
region, ec2_url, aws_connect_params = get_aws_connection_info(module, boto3=True)
connection = boto3_conn(module,
conn_type='client',
resource='autoscaling',
region=region,
endpoint=ec2_url,
**aws_connect_params)
changed = create_changed = replace_changed = False
exists = asg_exists(connection)
if state == 'present':
create_changed, asg_properties = create_autoscaling_group(connection)
elif state == 'absent':
changed = delete_autoscaling_group(connection)
module.exit_json(changed=changed)
# Only replace instances if asg existed at start of call
if exists and (replace_all_instances or replace_instances) and (module.params.get('launch_config_name') or module.params.get('launch_template')):
replace_changed, asg_properties = replace(connection)
if create_changed or replace_changed:
changed = True
module.exit_json(changed=changed, **asg_properties)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,936 |
AWS ec2_asg with replace_all_instances:yes will wait for instances to start/terminate even when wait_for_instances:no
|
##### SUMMARY
When using the `ec2_asg` module with `replace_all_instances: yes`, the `wait_for_instances` option is effectively ignored.
I would expect that if `wait_for_instances: no` then it would not wait for instances to complete starting/termination.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_asg
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.2
config file = None
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/.local/lib/python3.7/site-packages/ansible
executable location = /home/user/.local/bin/ansible
python version = 3.7.5 (default, Nov 20 2019, 09:21:52) [GCC 9.2.1 20191008]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
(empty)
```
##### OS / ENVIRONMENT
OS: Ubuntu 19.10
Ansible installed via pip3
##### STEPS TO REPRODUCE
```
- name: Update ASG
ec2_asg:
name: "my-asg"
launch_config_name: "my-launchconfig"
min_size: 1
max_size: 1
replace_batch_size: 1
replace_all_instances: yes
wait_for_instances: no
```
This will not return successfully until new instance(s) have started, and any existing instance(s) have completed termination.
If your timeout is not set long enough, then you'll get a 'Waited too long for old instances to terminate' response.
##### EXPECTED RESULTS
I would expect that it would issue commands to terminate old instances and start new instances, without waiting.
For those who have to replace in smaller batches than the current `desired_capacity`, they would need to set `wait_for_instances: yes`. Perhaps a warning would be in order if `replace_batch_size` is less than `desired_capacity`.
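A minimal sketch of the suggested guard, assuming it would be called from the module's `main()` once the parameters are parsed; the helper name and warning text are hypothetical, while the option names are the real `ec2_asg` options:
```python
def warn_if_partial_replacement(module):
    # Hypothetical helper, not part of the current module: warn when the task asks
    # for a full replacement without waiting, but the batch size cannot cover the
    # whole group, so the rolling replacement cannot finish in a single pass.
    params = module.params
    if (params.get('replace_all_instances')
            and not params.get('wait_for_instances')
            and (params.get('replace_batch_size') or 0) < (params.get('desired_capacity') or 0)):
        module.warn('replace_batch_size is smaller than desired_capacity and '
                    'wait_for_instances is disabled; the rolling replacement will '
                    'not be driven to completion by this task.')
```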
##### ACTUAL RESULTS
Playbook output when it takes too long to execute:
```
TASK [test : Update ASG] ***********************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Waited too long for old instances to terminate. Wed Dec 18 12:51:55 2019"}
```
|
https://github.com/ansible/ansible/issues/65936
|
https://github.com/ansible/ansible/pull/66863
|
d2f4d305ee4175cc0315a705824b168b3096e06a
|
f98874e4f98837e4b9868780b19cf6614b00282a
| 2019-12-18T02:30:02Z |
python
| 2020-02-15T12:56:39Z |
test/integration/targets/ec2_asg/tasks/main.yml
|
---
# tasks file for test_ec2_asg
- name: Test incomplete credentials with ec2_asg
block:
# ============================================================
- name: test invalid profile
ec2_asg:
name: "{{ resource_prefix }}-asg"
region: "{{ aws_region }}"
profile: notavalidprofile
ignore_errors: yes
register: result
- name:
assert:
that:
- "'The config profile (notavalidprofile) could not be found' in result.msg"
- name: test partial credentials
ec2_asg:
name: "{{ resource_prefix }}-asg"
region: "{{ aws_region }}"
aws_access_key: "{{ aws_access_key }}"
ignore_errors: yes
register: result
- name:
assert:
that:
- "'Partial credentials found in explicit, missing: aws_secret_access_key' in result.msg"
- name: test without specifying region
ec2_asg:
name: "{{ resource_prefix }}-asg"
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
security_token: "{{ security_token | default(omit) }}"
ignore_errors: yes
register: result
- name:
assert:
that:
- result.msg == 'The ec2_asg module requires a region and none was found in configuration, environment variables or module parameters'
# ============================================================
- name: Test incomplete arguments with ec2_asg
block:
# ============================================================
- name: test without specifying required module options
ec2_asg:
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
security_token: "{{ security_token | default(omit) }}"
ignore_errors: yes
register: result
- name: assert name is a required module option
assert:
that:
- "result.msg == 'missing required arguments: name'"
- name: Run ec2_asg integration tests.
module_defaults:
group/aws:
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
security_token: "{{ security_token | default(omit) }}"
region: "{{ aws_region }}"
block:
# ============================================================
- name: Find AMI to use
ec2_ami_info:
owners: 'amazon'
filters:
name: '{{ ec2_ami_name }}'
register: ec2_amis
- set_fact:
ec2_ami_image: '{{ ec2_amis.images[0].image_id }}'
- name: load balancer name has to be less than 32 characters
# the 8 digit identifier at the end of resource_prefix helps determine during which test something
# was created
set_fact:
load_balancer_name: "{{ item }}-lb"
with_items: "{{ resource_prefix | regex_findall('.{8}$') }}"
# Set up the testing dependencies: VPC, subnet, security group, and two launch configurations
- name: Create VPC for use in testing
ec2_vpc_net:
name: "{{ resource_prefix }}-vpc"
cidr_block: 10.55.77.0/24
tenancy: default
register: testing_vpc
- name: Create internet gateway for use in testing
ec2_vpc_igw:
vpc_id: "{{ testing_vpc.vpc.id }}"
state: present
register: igw
- name: Create subnet for use in testing
ec2_vpc_subnet:
state: present
vpc_id: "{{ testing_vpc.vpc.id }}"
cidr: 10.55.77.0/24
az: "{{ aws_region }}a"
resource_tags:
Name: "{{ resource_prefix }}-subnet"
register: testing_subnet
- name: create routing rules
ec2_vpc_route_table:
vpc_id: "{{ testing_vpc.vpc.id }}"
tags:
created: "{{ resource_prefix }}-route"
routes:
- dest: 0.0.0.0/0
gateway_id: "{{ igw.gateway_id }}"
subnets:
- "{{ testing_subnet.subnet.id }}"
- name: create a security group with the vpc created in the ec2_setup
ec2_group:
name: "{{ resource_prefix }}-sg"
description: a security group for ansible tests
vpc_id: "{{ testing_vpc.vpc.id }}"
rules:
- proto: tcp
from_port: 22
to_port: 22
cidr_ip: 0.0.0.0/0
- proto: tcp
from_port: 80
to_port: 80
cidr_ip: 0.0.0.0/0
register: sg
- name: ensure launch configs exist
ec2_lc:
name: "{{ item }}"
assign_public_ip: true
image_id: "{{ ec2_ami_image }}"
user_data: |
#cloud-config
package_upgrade: true
package_update: true
packages:
- httpd
runcmd:
- "service httpd start"
security_groups: "{{ sg.group_id }}"
instance_type: t3.micro
with_items:
- "{{ resource_prefix }}-lc"
- "{{ resource_prefix }}-lc-2"
# ============================================================
- name: launch asg and wait for instances to be deemed healthy (no ELB)
ec2_asg:
name: "{{ resource_prefix }}-asg"
launch_config_name: "{{ resource_prefix }}-lc"
desired_capacity: 1
min_size: 1
max_size: 1
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
state: present
wait_for_instances: yes
register: output
- assert:
that:
- "output.viable_instances == 1"
- name: Tag asg
ec2_asg:
name: "{{ resource_prefix }}-asg"
tags:
- tag_a: 'value 1'
propagate_at_launch: no
- tag_b: 'value 2'
propagate_at_launch: yes
register: output
- assert:
that:
- "output.tags | length == 2"
- output is changed
- name: Re-Tag asg (different order)
ec2_asg:
name: "{{ resource_prefix }}-asg"
tags:
- tag_b: 'value 2'
propagate_at_launch: yes
- tag_a: 'value 1'
propagate_at_launch: no
register: output
- assert:
that:
- "output.tags | length == 2"
- output is not changed
- name: Re-Tag asg new tags
ec2_asg:
name: "{{ resource_prefix }}-asg"
tags:
- tag_c: 'value 3'
propagate_at_launch: no
register: output
- assert:
that:
- "output.tags | length == 1"
- output is changed
- name: Re-Tag asg update propagate_at_launch
ec2_asg:
name: "{{ resource_prefix }}-asg"
tags:
- tag_c: 'value 3'
propagate_at_launch: yes
register: output
- assert:
that:
- "output.tags | length == 1"
- output is changed
- name: Enable metrics collection
ec2_asg:
name: "{{ resource_prefix }}-asg"
metrics_collection: yes
register: output
- assert:
that:
- output is changed
- name: Enable metrics collection (check idempotency)
ec2_asg:
name: "{{ resource_prefix }}-asg"
metrics_collection: yes
register: output
- assert:
that:
- output is not changed
- name: Disable metrics collection
ec2_asg:
name: "{{ resource_prefix }}-asg"
metrics_collection: no
register: output
- assert:
that:
- output is changed
- name: Disable metrics collection (check idempotency)
ec2_asg:
name: "{{ resource_prefix }}-asg"
metrics_collection: no
register: output
- assert:
that:
- output is not changed
# - name: pause for a bit to make sure that the group can't be trivially deleted
# pause: seconds=30
- name: kill asg
ec2_asg:
name: "{{ resource_prefix }}-asg"
state: absent
wait_timeout: 800
async: 400
# ============================================================
- name: launch asg and do not wait for instances to be deemed healthy (no ELB)
ec2_asg:
name: "{{ resource_prefix }}-asg"
launch_config_name: "{{ resource_prefix }}-lc"
desired_capacity: 1
min_size: 1
max_size: 1
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
wait_for_instances: no
state: present
register: output
- assert:
that:
- "output.viable_instances == 0"
- name: kill asg
ec2_asg:
name: "{{ resource_prefix }}-asg"
state: absent
wait_timeout: 800
async: 400
# ============================================================
- name: create asg with asg metrics enabled
ec2_asg:
name: "{{ resource_prefix }}-asg"
metrics_collection: true
launch_config_name: "{{ resource_prefix }}-lc"
desired_capacity: 0
min_size: 0
max_size: 0
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
state: present
register: output
- assert:
that:
- "'Group' in output.metrics_collection.0.Metric"
- name: kill asg
ec2_asg:
name: "{{ resource_prefix }}-asg"
state: absent
wait_timeout: 800
async: 400
# ============================================================
- name: launch load balancer
ec2_elb_lb:
name: "{{ load_balancer_name }}"
state: present
security_group_ids:
- "{{ sg.group_id }}"
subnets: "{{ testing_subnet.subnet.id }}"
connection_draining_timeout: 60
listeners:
- protocol: http
load_balancer_port: 80
instance_port: 80
health_check:
ping_protocol: tcp
ping_port: 80
ping_path: "/"
response_timeout: 5
interval: 10
unhealthy_threshold: 4
healthy_threshold: 2
register: load_balancer
- name: launch asg and wait for instances to be deemed healthy (ELB)
ec2_asg:
name: "{{ resource_prefix }}-asg"
launch_config_name: "{{ resource_prefix }}-lc"
health_check_type: ELB
desired_capacity: 1
min_size: 1
max_size: 1
health_check_period: 300
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
load_balancers: "{{ load_balancer_name }}"
wait_for_instances: yes
wait_timeout: 900
state: present
register: output
- assert:
that:
- "output.viable_instances == 1"
# ============================================================
# grow scaling group to 3
- name: add 2 more instances wait for instances to be deemed healthy (ELB)
ec2_asg:
name: "{{ resource_prefix }}-asg"
launch_config_name: "{{ resource_prefix }}-lc"
health_check_type: ELB
desired_capacity: 3
min_size: 3
max_size: 5
health_check_period: 600
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
load_balancers: "{{ load_balancer_name }}"
wait_for_instances: yes
wait_timeout: 1200
state: present
register: output
- assert:
that:
- "output.viable_instances == 3"
# ============================================================
# # perform rolling replace with different launch configuration
- name: perform rolling update to new AMI
ec2_asg:
name: "{{ resource_prefix }}-asg"
launch_config_name: "{{ resource_prefix }}-lc-2"
health_check_type: ELB
desired_capacity: 3
min_size: 1
max_size: 5
health_check_period: 900
load_balancers: "{{ load_balancer_name }}"
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
wait_for_instances: yes
replace_all_instances: yes
wait_timeout: 1800
state: present
register: output
# ensure that all instances have new launch config
- assert:
that:
- "item.value.launch_config_name == '{{ resource_prefix }}-lc-2'"
with_dict: "{{ output.instance_facts }}"
# assert they are all healthy and that the rolling update resulted in the appropriate number of instances
- assert:
that:
- "output.viable_instances == 3"
# ============================================================
# perform rolling replace with the original launch configuration
- name: perform rolling update to new AMI while removing the load balancer
ec2_asg:
name: "{{ resource_prefix }}-asg"
launch_config_name: "{{ resource_prefix }}-lc"
health_check_type: EC2
desired_capacity: 3
min_size: 1
max_size: 5
health_check_period: 900
load_balancers: []
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
wait_for_instances: yes
replace_all_instances: yes
wait_timeout: 1800
state: present
register: output
# ensure that all instances have new launch config
- assert:
that:
- "item.value.launch_config_name == '{{ resource_prefix }}-lc'"
with_dict: "{{ output.instance_facts }}"
# assert they are all healthy and that the rolling update resulted in the appropriate number of instances
# there should be the same number of instances as there were before the rolling update was performed
- assert:
that:
- "output.viable_instances == 3"
# ============================================================
# perform rolling replace with new launch configuration and lc_check:false
# Note - this is done async so we can query asg_facts during
# the execution. Issues #28087 and #35993 result in correct
# end result, but spin up extraneous instances during execution.
- name: "perform rolling update to new AMI with lc_check: false"
ec2_asg:
name: "{{ resource_prefix }}-asg"
launch_config_name: "{{ resource_prefix }}-lc-2"
health_check_type: EC2
desired_capacity: 3
min_size: 1
max_size: 5
health_check_period: 900
load_balancers: []
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
wait_for_instances: yes
replace_all_instances: yes
replace_batch_size: 3
lc_check: false
wait_timeout: 1800
state: present
async: 1800
poll: 0
register: asg_job
- name: get ec2_asg facts for 3 minutes
ec2_asg_info:
name: "{{ resource_prefix }}-asg"
register: output
loop_control:
pause: 15
with_sequence: count=12
- set_fact:
inst_id_json_query: 'results[*].results[*].instances[*].instance_id'
# Since we started with 3 servers and replace all of them.
# We should see 6 servers total.
- assert:
that:
- "lookup('flattened',output|json_query(inst_id_json_query)).split(',')|unique|length == 6"
- name: Ensure ec2_asg task completes
async_status: jid="{{ asg_job.ansible_job_id }}"
register: status
until: status is finished
retries: 200
delay: 15
# ============================================================
- name: kill asg
ec2_asg:
name: "{{ resource_prefix }}-asg"
state: absent
wait_timeout: 800
async: 400
# Create new asg with replace_all_instances and lc_check:false
# Note - this is done async so we can query asg_facts during
# the execution. Issues #28087 results in correct
# end result, but spin up extraneous instances during execution.
- name: "new asg with lc_check: false"
ec2_asg:
name: "{{ resource_prefix }}-asg"
launch_config_name: "{{ resource_prefix }}-lc"
health_check_type: EC2
desired_capacity: 3
min_size: 1
max_size: 5
health_check_period: 900
load_balancers: []
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
wait_for_instances: yes
replace_all_instances: yes
replace_batch_size: 3
lc_check: false
wait_timeout: 1800
state: present
async: 1800
poll: 0
register: asg_job
# Collect ec2_asg_info for 3 minutes
- name: get ec2_asg information
ec2_asg_info:
name: "{{ resource_prefix }}-asg"
register: output
loop_control:
pause: 15
with_sequence: count=12
- set_fact:
inst_id_json_query: 'results[*].results[*].instances[*].instance_id'
# Get all instance_ids we saw and assert we saw number expected
# Should only see 3 (don't replace instances we just created)
- assert:
that:
- "lookup('flattened',output|json_query(inst_id_json_query)).split(',')|unique|length == 3"
- name: Ensure ec2_asg task completes
async_status: jid="{{ asg_job.ansible_job_id }}"
register: status
until: status is finished
retries: 200
delay: 15
# we need a launch template, otherwise we cannot test the mixed instance policy
- name: create launch template for autoscaling group to test its mixed instance policy
ec2_launch_template:
template_name: "{{ resource_prefix }}-lt"
image_id: "{{ ec2_ami_image }}"
instance_type: t3.micro
credit_specification:
cpu_credits: standard
network_interfaces:
- associate_public_ip_address: yes
delete_on_termination: yes
device_index: 0
groups:
- "{{ sg.group_id }}"
- name: update autoscaling group with mixed-instance policy
ec2_asg:
name: "{{ resource_prefix }}-asg"
launch_template:
launch_template_name: "{{ resource_prefix }}-lt"
desired_capacity: 1
min_size: 1
max_size: 1
vpc_zone_identifier: "{{ testing_subnet.subnet.id }}"
state: present
mixed_instances_policy:
instance_types:
- t3.micro
- t3a.micro
wait_for_instances: yes
register: output
- assert:
that:
- "output.mixed_instances_policy | length == 2"
- "output.mixed_instances_policy[0] == 't3.micro'"
- "output.mixed_instances_policy[1] == 't3a.micro'"
# ============================================================
always:
- name: kill asg
ec2_asg:
name: "{{ resource_prefix }}-asg"
state: absent
register: removed
until: removed is not failed
ignore_errors: yes
retries: 10
# Remove the testing dependencies
- name: remove the load balancer
ec2_elb_lb:
name: "{{ load_balancer_name }}"
state: absent
security_group_ids:
- "{{ sg.group_id }}"
subnets: "{{ testing_subnet.subnet.id }}"
wait: yes
connection_draining_timeout: 60
listeners:
- protocol: http
load_balancer_port: 80
instance_port: 80
health_check:
ping_protocol: tcp
ping_port: 80
ping_path: "/"
response_timeout: 5
interval: 10
unhealthy_threshold: 4
healthy_threshold: 2
register: removed
until: removed is not failed
ignore_errors: yes
retries: 10
- name: remove launch configs
ec2_lc:
name: "{{ resource_prefix }}-lc"
state: absent
register: removed
until: removed is not failed
ignore_errors: yes
retries: 10
with_items:
- "{{ resource_prefix }}-lc"
- "{{ resource_prefix }}-lc-2"
- name: delete launch template
ec2_launch_template:
name: "{{ resource_prefix }}-lt"
state: absent
register: del_lt
retries: 10
until: del_lt is not failed
ignore_errors: true
- name: remove the security group
ec2_group:
name: "{{ resource_prefix }}-sg"
description: a security group for ansible tests
vpc_id: "{{ testing_vpc.vpc.id }}"
state: absent
register: removed
until: removed is not failed
ignore_errors: yes
retries: 10
- name: remove routing rules
ec2_vpc_route_table:
state: absent
vpc_id: "{{ testing_vpc.vpc.id }}"
tags:
created: "{{ resource_prefix }}-route"
routes:
- dest: 0.0.0.0/0
gateway_id: "{{ igw.gateway_id }}"
subnets:
- "{{ testing_subnet.subnet.id }}"
register: removed
until: removed is not failed
ignore_errors: yes
retries: 10
- name: remove internet gateway
ec2_vpc_igw:
vpc_id: "{{ testing_vpc.vpc.id }}"
state: absent
register: removed
until: removed is not failed
ignore_errors: yes
retries: 10
- name: remove the subnet
ec2_vpc_subnet:
state: absent
vpc_id: "{{ testing_vpc.vpc.id }}"
cidr: 10.55.77.0/24
register: removed
until: removed is not failed
ignore_errors: yes
retries: 10
- name: remove the VPC
ec2_vpc_net:
name: "{{ resource_prefix }}-vpc"
cidr_block: 10.55.77.0/24
state: absent
register: removed
until: removed is not failed
ignore_errors: yes
retries: 10
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,599 |
redfish_command - Manager - ClearSessions
|
##### SUMMARY
This feature would implement a ClearSessions command for the Sessions category of redfish_command, to clear all active sessions.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
redfish_command.py
redfish_utils.py
##### ADDITIONAL INFORMATION
This command would help users clear all active sessions.
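As a rough illustration, a helper along these lines could be added to `RedfishUtils`, reusing the existing `sessions_uri`, `get_request()` and `delete_request()` methods shown in `redfish_utils.py` below; the method name and return payload are assumptions, not necessarily the merged implementation:
```python
def clear_sessions(self):
    # Enumerate the members of the Sessions collection.
    response = self.get_request(self.root_uri + self.sessions_uri)
    if response['ret'] is False:
        return response
    data = response['data']
    # Issue a DELETE for every session resource that is currently active.
    for session in data.get('Members', []):
        response = self.delete_request(self.root_uri + session['@odata.id'])
        if response['ret'] is False:
            return response
    return {'ret': True, 'changed': True, 'msg': "Cleared all sessions"}
```
The redfish_command module would then expose this through a Sessions category command; the exact option surface is left to the implementing pull request.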
|
https://github.com/ansible/ansible/issues/65599
|
https://github.com/ansible/ansible/pull/65600
|
f21ee7f685e8f58e58de2400b134ba5c7a2536b1
|
435bd91d2e406a227b20ce5f42c858253e0a97c3
| 2019-12-06T08:43:45Z |
python
| 2020-02-15T13:00:55Z |
lib/ansible/module_utils/redfish_utils.py
|
# Copyright (c) 2017-2018 Dell EMC Inc.
# GNU General Public License v3.0+ (see LICENSE or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import json
from ansible.module_utils.urls import open_url
from ansible.module_utils._text import to_text
from ansible.module_utils.six.moves import http_client
from ansible.module_utils.six.moves.urllib.error import URLError, HTTPError
GET_HEADERS = {'accept': 'application/json', 'OData-Version': '4.0'}
POST_HEADERS = {'content-type': 'application/json', 'accept': 'application/json',
'OData-Version': '4.0'}
PATCH_HEADERS = {'content-type': 'application/json', 'accept': 'application/json',
'OData-Version': '4.0'}
DELETE_HEADERS = {'accept': 'application/json', 'OData-Version': '4.0'}
DEPRECATE_MSG = 'Issuing a data modification command without specifying the '\
'ID of the target %(resource)s resource when there is more '\
'than one %(resource)s will use the first one in the '\
'collection. Use the `resource_id` option to specify the '\
'target %(resource)s ID'
class RedfishUtils(object):
def __init__(self, creds, root_uri, timeout, module, resource_id=None,
data_modification=False):
self.root_uri = root_uri
self.creds = creds
self.timeout = timeout
self.module = module
self.service_root = '/redfish/v1/'
self.resource_id = resource_id
self.data_modification = data_modification
self._init_session()
# The following functions are to send GET/POST/PATCH/DELETE requests
def get_request(self, uri):
try:
resp = open_url(uri, method="GET", headers=GET_HEADERS,
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
follow_redirects='all',
use_proxy=True, timeout=self.timeout)
data = json.loads(resp.read())
headers = dict((k.lower(), v) for (k, v) in resp.info().items())
except HTTPError as e:
msg = self._get_extended_message(e)
return {'ret': False,
'msg': "HTTP Error %s on GET request to '%s', extended message: '%s'"
% (e.code, uri, msg),
'status': e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error on GET request to '%s': '%s'"
% (uri, e.reason)}
# Almost all errors should be caught above, but just in case
except Exception as e:
return {'ret': False,
'msg': "Failed GET request to '%s': '%s'" % (uri, to_text(e))}
return {'ret': True, 'data': data, 'headers': headers}
def post_request(self, uri, pyld):
try:
resp = open_url(uri, data=json.dumps(pyld),
headers=POST_HEADERS, method="POST",
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
follow_redirects='all',
use_proxy=True, timeout=self.timeout)
except HTTPError as e:
msg = self._get_extended_message(e)
return {'ret': False,
'msg': "HTTP Error %s on POST request to '%s', extended message: '%s'"
% (e.code, uri, msg),
'status': e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error on POST request to '%s': '%s'"
% (uri, e.reason)}
# Almost all errors should be caught above, but just in case
except Exception as e:
return {'ret': False,
'msg': "Failed POST request to '%s': '%s'" % (uri, to_text(e))}
return {'ret': True, 'resp': resp}
def patch_request(self, uri, pyld):
headers = PATCH_HEADERS
r = self.get_request(uri)
if r['ret']:
# Get etag from etag header or @odata.etag property
etag = r['headers'].get('etag')
if not etag:
etag = r['data'].get('@odata.etag')
if etag:
# Make copy of headers and add If-Match header
headers = dict(headers)
headers['If-Match'] = etag
try:
resp = open_url(uri, data=json.dumps(pyld),
headers=headers, method="PATCH",
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
follow_redirects='all',
use_proxy=True, timeout=self.timeout)
except HTTPError as e:
msg = self._get_extended_message(e)
return {'ret': False,
'msg': "HTTP Error %s on PATCH request to '%s', extended message: '%s'"
% (e.code, uri, msg),
'status': e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error on PATCH request to '%s': '%s'"
% (uri, e.reason)}
# Almost all errors should be caught above, but just in case
except Exception as e:
return {'ret': False,
'msg': "Failed PATCH request to '%s': '%s'" % (uri, to_text(e))}
return {'ret': True, 'resp': resp}
def delete_request(self, uri, pyld=None):
try:
data = json.dumps(pyld) if pyld else None
resp = open_url(uri, data=data,
headers=DELETE_HEADERS, method="DELETE",
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
follow_redirects='all',
use_proxy=True, timeout=self.timeout)
except HTTPError as e:
msg = self._get_extended_message(e)
return {'ret': False,
'msg': "HTTP Error %s on DELETE request to '%s', extended message: '%s'"
% (e.code, uri, msg),
'status': e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error on DELETE request to '%s': '%s'"
% (uri, e.reason)}
# Almost all errors should be caught above, but just in case
except Exception as e:
return {'ret': False,
'msg': "Failed DELETE request to '%s': '%s'" % (uri, to_text(e))}
return {'ret': True, 'resp': resp}
@staticmethod
def _get_extended_message(error):
"""
Get Redfish ExtendedInfo message from response payload if present
:param error: an HTTPError exception
:type error: HTTPError
:return: the ExtendedInfo message if present, else standard HTTP error
"""
msg = http_client.responses.get(error.code, '')
if error.code >= 400:
try:
body = error.read().decode('utf-8')
data = json.loads(body)
ext_info = data['error']['@Message.ExtendedInfo']
msg = ext_info[0]['Message']
except Exception:
pass
return msg
def _init_session(self):
pass
def _find_accountservice_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'AccountService' not in data:
return {'ret': False, 'msg': "AccountService resource not found"}
else:
account_service = data["AccountService"]["@odata.id"]
response = self.get_request(self.root_uri + account_service)
if response['ret'] is False:
return response
data = response['data']
accounts = data['Accounts']['@odata.id']
if accounts[-1:] == '/':
accounts = accounts[:-1]
self.accounts_uri = accounts
return {'ret': True}
def _find_sessionservice_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'SessionService' not in data:
return {'ret': False, 'msg': "SessionService resource not found"}
else:
session_service = data["SessionService"]["@odata.id"]
response = self.get_request(self.root_uri + session_service)
if response['ret'] is False:
return response
data = response['data']
sessions = data['Sessions']['@odata.id']
if sessions[-1:] == '/':
sessions = sessions[:-1]
self.sessions_uri = sessions
return {'ret': True}
def _get_resource_uri_by_id(self, uris, id_prop):
for uri in uris:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
continue
data = response['data']
if id_prop == data.get('Id'):
return uri
return None
def _find_systems_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'Systems' not in data:
return {'ret': False, 'msg': "Systems resource not found"}
response = self.get_request(self.root_uri + data['Systems']['@odata.id'])
if response['ret'] is False:
return response
self.systems_uris = [
i['@odata.id'] for i in response['data'].get('Members', [])]
if not self.systems_uris:
return {
'ret': False,
'msg': "ComputerSystem's Members array is either empty or missing"}
self.systems_uri = self.systems_uris[0]
if self.data_modification:
if self.resource_id:
self.systems_uri = self._get_resource_uri_by_id(self.systems_uris,
self.resource_id)
if not self.systems_uri:
return {
'ret': False,
'msg': "System resource %s not found" % self.resource_id}
elif len(self.systems_uris) > 1:
self.module.deprecate(DEPRECATE_MSG % {'resource': 'System'},
version='2.14')
return {'ret': True}
def _find_updateservice_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'UpdateService' not in data:
return {'ret': False, 'msg': "UpdateService resource not found"}
else:
update = data["UpdateService"]["@odata.id"]
self.update_uri = update
response = self.get_request(self.root_uri + update)
if response['ret'] is False:
return response
data = response['data']
self.firmware_uri = self.software_uri = None
if 'FirmwareInventory' in data:
self.firmware_uri = data['FirmwareInventory'][u'@odata.id']
if 'SoftwareInventory' in data:
self.software_uri = data['SoftwareInventory'][u'@odata.id']
return {'ret': True}
def _find_chassis_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'Chassis' not in data:
return {'ret': False, 'msg': "Chassis resource not found"}
chassis = data["Chassis"]["@odata.id"]
response = self.get_request(self.root_uri + chassis)
if response['ret'] is False:
return response
self.chassis_uris = [
i['@odata.id'] for i in response['data'].get('Members', [])]
if not self.chassis_uris:
return {'ret': False,
'msg': "Chassis Members array is either empty or missing"}
self.chassis_uri = self.chassis_uris[0]
if self.data_modification:
if self.resource_id:
self.chassis_uri = self._get_resource_uri_by_id(self.chassis_uris,
self.resource_id)
if not self.chassis_uri:
return {
'ret': False,
'msg': "Chassis resource %s not found" % self.resource_id}
elif len(self.chassis_uris) > 1:
self.module.deprecate(DEPRECATE_MSG % {'resource': 'Chassis'},
version='2.14')
return {'ret': True}
def _find_managers_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'Managers' not in data:
return {'ret': False, 'msg': "Manager resource not found"}
manager = data["Managers"]["@odata.id"]
response = self.get_request(self.root_uri + manager)
if response['ret'] is False:
return response
self.manager_uris = [
i['@odata.id'] for i in response['data'].get('Members', [])]
if not self.manager_uris:
return {'ret': False,
'msg': "Managers Members array is either empty or missing"}
self.manager_uri = self.manager_uris[0]
if self.data_modification:
if self.resource_id:
self.manager_uri = self._get_resource_uri_by_id(self.manager_uris,
self.resource_id)
if not self.manager_uri:
return {
'ret': False,
'msg': "Manager resource %s not found" % self.resource_id}
elif len(self.manager_uris) > 1:
self.module.deprecate(DEPRECATE_MSG % {'resource': 'Manager'},
version='2.14')
return {'ret': True}
def get_logs(self):
log_svcs_uri_list = []
list_of_logs = []
properties = ['Severity', 'Created', 'EntryType', 'OemRecordFormat',
'Message', 'MessageId', 'MessageArgs']
# Find LogService
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'LogServices' not in data:
return {'ret': False, 'msg': "LogServices resource not found"}
# Find all entries in LogServices
logs_uri = data["LogServices"]["@odata.id"]
response = self.get_request(self.root_uri + logs_uri)
if response['ret'] is False:
return response
data = response['data']
for log_svcs_entry in data.get('Members', []):
response = self.get_request(self.root_uri + log_svcs_entry[u'@odata.id'])
if response['ret'] is False:
return response
_data = response['data']
if 'Entries' in _data:
log_svcs_uri_list.append(_data['Entries'][u'@odata.id'])
# For each entry in LogServices, get log name and all log entries
for log_svcs_uri in log_svcs_uri_list:
logs = {}
list_of_log_entries = []
response = self.get_request(self.root_uri + log_svcs_uri)
if response['ret'] is False:
return response
data = response['data']
logs['Description'] = data.get('Description',
'Collection of log entries')
# Get all log entries for each type of log found
for logEntry in data.get('Members', []):
entry = {}
for prop in properties:
if prop in logEntry:
entry[prop] = logEntry.get(prop)
if entry:
list_of_log_entries.append(entry)
log_name = log_svcs_uri.split('/')[-1]
logs[log_name] = list_of_log_entries
list_of_logs.append(logs)
# list_of_logs[logs{list_of_log_entries[entry{}]}]
return {'ret': True, 'entries': list_of_logs}
def clear_logs(self):
# Find LogService
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'LogServices' not in data:
return {'ret': False, 'msg': "LogServices resource not found"}
# Find all entries in LogServices
logs_uri = data["LogServices"]["@odata.id"]
response = self.get_request(self.root_uri + logs_uri)
if response['ret'] is False:
return response
data = response['data']
for log_svcs_entry in data[u'Members']:
response = self.get_request(self.root_uri + log_svcs_entry["@odata.id"])
if response['ret'] is False:
return response
_data = response['data']
# Check to make sure option is available, otherwise error is ugly
if "Actions" in _data:
if "#LogService.ClearLog" in _data[u"Actions"]:
                    response = self.post_request(self.root_uri + _data[u"Actions"]["#LogService.ClearLog"]["target"], {})
if response['ret'] is False:
return response
return {'ret': True}
def aggregate(self, func, uri_list, uri_name):
ret = True
entries = []
for uri in uri_list:
inventory = func(uri)
ret = inventory.pop('ret') and ret
if 'entries' in inventory:
entries.append(({uri_name: uri},
inventory['entries']))
return dict(ret=ret, entries=entries)
def aggregate_chassis(self, func):
return self.aggregate(func, self.chassis_uris, 'chassis_uri')
def aggregate_managers(self, func):
return self.aggregate(func, self.manager_uris, 'manager_uri')
def aggregate_systems(self, func):
return self.aggregate(func, self.systems_uris, 'system_uri')
def get_storage_controller_inventory(self, systems_uri):
result = {}
controller_list = []
controller_results = []
# Get these entries, but does not fail if not found
properties = ['CacheSummary', 'FirmwareVersion', 'Identifiers',
'Location', 'Manufacturer', 'Model', 'Name',
'PartNumber', 'SerialNumber', 'SpeedGbps', 'Status']
key = "StorageControllers"
# Find Storage service
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
if 'Storage' not in data:
return {'ret': False, 'msg': "Storage resource not found"}
# Get a list of all storage controllers and build respective URIs
storage_uri = data['Storage']["@odata.id"]
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
# Loop through Members and their StorageControllers
# and gather properties from each StorageController
if data[u'Members']:
for storage_member in data[u'Members']:
storage_member_uri = storage_member[u'@odata.id']
response = self.get_request(self.root_uri + storage_member_uri)
data = response['data']
if key in data:
controller_list = data[key]
for controller in controller_list:
controller_result = {}
for property in properties:
if property in controller:
controller_result[property] = controller[property]
controller_results.append(controller_result)
result['entries'] = controller_results
return result
else:
return {'ret': False, 'msg': "Storage resource not found"}
def get_multi_storage_controller_inventory(self):
return self.aggregate_systems(self.get_storage_controller_inventory)
def get_disk_inventory(self, systems_uri):
result = {'entries': []}
controller_list = []
# Get these entries, but does not fail if not found
properties = ['BlockSizeBytes', 'CapableSpeedGbs', 'CapacityBytes',
'EncryptionAbility', 'EncryptionStatus',
'FailurePredicted', 'HotspareType', 'Id', 'Identifiers',
'Manufacturer', 'MediaType', 'Model', 'Name',
'PartNumber', 'PhysicalLocation', 'Protocol', 'Revision',
'RotationSpeedRPM', 'SerialNumber', 'Status']
# Find Storage service
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
if 'SimpleStorage' not in data and 'Storage' not in data:
return {'ret': False, 'msg': "SimpleStorage and Storage resource \
not found"}
if 'Storage' in data:
# Get a list of all storage controllers and build respective URIs
storage_uri = data[u'Storage'][u'@odata.id']
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if data[u'Members']:
for controller in data[u'Members']:
controller_list.append(controller[u'@odata.id'])
for c in controller_list:
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
controller_name = 'Controller 1'
if 'StorageControllers' in data:
sc = data['StorageControllers']
if sc:
if 'Name' in sc[0]:
controller_name = sc[0]['Name']
else:
sc_id = sc[0].get('Id', '1')
controller_name = 'Controller %s' % sc_id
drive_results = []
if 'Drives' in data:
for device in data[u'Drives']:
disk_uri = self.root_uri + device[u'@odata.id']
response = self.get_request(disk_uri)
data = response['data']
drive_result = {}
for property in properties:
if property in data:
if data[property] is not None:
drive_result[property] = data[property]
drive_results.append(drive_result)
drives = {'Controller': controller_name,
'Drives': drive_results}
result["entries"].append(drives)
if 'SimpleStorage' in data:
# Get a list of all storage controllers and build respective URIs
storage_uri = data["SimpleStorage"]["@odata.id"]
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for controller in data[u'Members']:
controller_list.append(controller[u'@odata.id'])
for c in controller_list:
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
if 'Name' in data:
controller_name = data['Name']
else:
sc_id = data.get('Id', '1')
controller_name = 'Controller %s' % sc_id
drive_results = []
for device in data[u'Devices']:
drive_result = {}
for property in properties:
if property in device:
drive_result[property] = device[property]
drive_results.append(drive_result)
drives = {'Controller': controller_name,
'Drives': drive_results}
result["entries"].append(drives)
return result
def get_multi_disk_inventory(self):
return self.aggregate_systems(self.get_disk_inventory)
def get_volume_inventory(self, systems_uri):
result = {'entries': []}
controller_list = []
volume_list = []
# Get these entries, but does not fail if not found
properties = ['Id', 'Name', 'RAIDType', 'VolumeType', 'BlockSizeBytes',
'Capacity', 'CapacityBytes', 'CapacitySources',
'Encrypted', 'EncryptionTypes', 'Identifiers',
'Operations', 'OptimumIOSizeBytes', 'AccessCapabilities',
'AllocatedPools', 'Status']
# Find Storage service
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
if 'SimpleStorage' not in data and 'Storage' not in data:
return {'ret': False, 'msg': "SimpleStorage and Storage resource \
not found"}
if 'Storage' in data:
# Get a list of all storage controllers and build respective URIs
storage_uri = data[u'Storage'][u'@odata.id']
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if data.get('Members'):
for controller in data[u'Members']:
controller_list.append(controller[u'@odata.id'])
for c in controller_list:
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
controller_name = 'Controller 1'
if 'StorageControllers' in data:
sc = data['StorageControllers']
if sc:
if 'Name' in sc[0]:
controller_name = sc[0]['Name']
else:
sc_id = sc[0].get('Id', '1')
controller_name = 'Controller %s' % sc_id
volume_results = []
if 'Volumes' in data:
# Get a list of all volumes and build respective URIs
volumes_uri = data[u'Volumes'][u'@odata.id']
response = self.get_request(self.root_uri + volumes_uri)
data = response['data']
if data.get('Members'):
for volume in data[u'Members']:
volume_list.append(volume[u'@odata.id'])
for v in volume_list:
uri = self.root_uri + v
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
volume_result = {}
for property in properties:
if property in data:
if data[property] is not None:
volume_result[property] = data[property]
# Get related Drives Id
drive_id_list = []
if 'Links' in data:
if 'Drives' in data[u'Links']:
for link in data[u'Links'][u'Drives']:
drive_id_link = link[u'@odata.id']
drive_id = drive_id_link.split("/")[-1]
drive_id_list.append({'Id': drive_id})
volume_result['Linked_drives'] = drive_id_list
volume_results.append(volume_result)
volumes = {'Controller': controller_name,
'Volumes': volume_results}
result["entries"].append(volumes)
else:
return {'ret': False, 'msg': "Storage resource not found"}
return result
def get_multi_volume_inventory(self):
return self.aggregate_systems(self.get_volume_inventory)
def restart_manager_gracefully(self):
result = {}
key = "Actions"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
action_uri = data[key]["#Manager.Reset"]["target"]
payload = {'ResetType': 'GracefulRestart'}
response = self.post_request(self.root_uri + action_uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def manage_indicator_led(self, command):
result = {}
key = 'IndicatorLED'
payloads = {'IndicatorLedOn': 'Lit', 'IndicatorLedOff': 'Off', "IndicatorLedBlink": 'Blinking'}
result = {}
response = self.get_request(self.root_uri + self.chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
if command in payloads.keys():
payload = {'IndicatorLED': payloads[command]}
response = self.patch_request(self.root_uri + self.chassis_uri, payload)
if response['ret'] is False:
return response
else:
return {'ret': False, 'msg': 'Invalid command'}
return result
def _map_reset_type(self, reset_type, allowable_values):
equiv_types = {
'On': 'ForceOn',
'ForceOn': 'On',
'ForceOff': 'GracefulShutdown',
'GracefulShutdown': 'ForceOff',
'GracefulRestart': 'ForceRestart',
'ForceRestart': 'GracefulRestart'
}
if reset_type in allowable_values:
return reset_type
if reset_type not in equiv_types:
return reset_type
mapped_type = equiv_types[reset_type]
if mapped_type in allowable_values:
return mapped_type
return reset_type
def manage_system_power(self, command):
key = "Actions"
reset_type_values = ['On', 'ForceOff', 'GracefulShutdown',
'GracefulRestart', 'ForceRestart', 'Nmi',
'ForceOn', 'PushPowerButton', 'PowerCycle']
# command should be PowerOn, PowerForceOff, etc.
if not command.startswith('Power'):
return {'ret': False, 'msg': 'Invalid Command (%s)' % command}
reset_type = command[5:]
# map Reboot to a ResetType that does a reboot
if reset_type == 'Reboot':
reset_type = 'GracefulRestart'
if reset_type not in reset_type_values:
return {'ret': False, 'msg': 'Invalid Command (%s)' % command}
# read the system resource and get the current power state
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
data = response['data']
power_state = data.get('PowerState')
# if power is already in target state, nothing to do
if power_state == "On" and reset_type in ['On', 'ForceOn']:
return {'ret': True, 'changed': False}
if power_state == "Off" and reset_type in ['GracefulShutdown', 'ForceOff']:
return {'ret': True, 'changed': False}
# get the #ComputerSystem.Reset Action and target URI
if key not in data or '#ComputerSystem.Reset' not in data[key]:
return {'ret': False, 'msg': 'Action #ComputerSystem.Reset not found'}
reset_action = data[key]['#ComputerSystem.Reset']
if 'target' not in reset_action:
return {'ret': False,
'msg': 'target URI missing from Action #ComputerSystem.Reset'}
action_uri = reset_action['target']
# get AllowableValues from ActionInfo
allowable_values = None
if '@Redfish.ActionInfo' in reset_action:
action_info_uri = reset_action.get('@Redfish.ActionInfo')
response = self.get_request(self.root_uri + action_info_uri)
if response['ret'] is True:
data = response['data']
if 'Parameters' in data:
params = data['Parameters']
for param in params:
if param.get('Name') == 'ResetType':
allowable_values = param.get('AllowableValues')
break
# fallback to @Redfish.AllowableValues annotation
if allowable_values is None:
            allowable_values = reset_action.get('ResetType@Redfish.AllowableValues', [])
# map ResetType to an allowable value if needed
if reset_type not in allowable_values:
reset_type = self._map_reset_type(reset_type, allowable_values)
# define payload
payload = {'ResetType': reset_type}
# POST to Action URI
response = self.post_request(self.root_uri + action_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True}
def _find_account_uri(self, username=None, acct_id=None):
if not any((username, acct_id)):
return {'ret': False, 'msg':
'Must provide either account_id or account_username'}
response = self.get_request(self.root_uri + self.accounts_uri)
if response['ret'] is False:
return response
data = response['data']
uris = [a.get('@odata.id') for a in data.get('Members', []) if
a.get('@odata.id')]
for uri in uris:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
continue
data = response['data']
headers = response['headers']
if username:
if username == data.get('UserName'):
return {'ret': True, 'data': data,
'headers': headers, 'uri': uri}
if acct_id:
if acct_id == data.get('Id'):
return {'ret': True, 'data': data,
'headers': headers, 'uri': uri}
return {'ret': False, 'no_match': True, 'msg':
'No account with the given account_id or account_username found'}
def _find_empty_account_slot(self):
response = self.get_request(self.root_uri + self.accounts_uri)
if response['ret'] is False:
return response
data = response['data']
uris = [a.get('@odata.id') for a in data.get('Members', []) if
a.get('@odata.id')]
if uris:
# first slot may be reserved, so move to end of list
uris += [uris.pop(0)]
for uri in uris:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
continue
data = response['data']
headers = response['headers']
if data.get('UserName') == "" and not data.get('Enabled', True):
return {'ret': True, 'data': data,
'headers': headers, 'uri': uri}
return {'ret': False, 'no_match': True, 'msg':
'No empty account slot found'}
def list_users(self):
result = {}
# listing all users has always been slower than other operations, why?
user_list = []
users_results = []
# Get these entries, but does not fail if not found
properties = ['Id', 'Name', 'UserName', 'RoleId', 'Locked', 'Enabled']
response = self.get_request(self.root_uri + self.accounts_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for users in data.get('Members', []):
user_list.append(users[u'@odata.id']) # user_list[] are URIs
# for each user, get details
for uri in user_list:
user = {}
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
user[property] = data[property]
users_results.append(user)
result["entries"] = users_results
return result
def add_user_via_patch(self, user):
if user.get('account_id'):
# If Id slot specified, use it
response = self._find_account_uri(acct_id=user.get('account_id'))
else:
# Otherwise find first empty slot
response = self._find_empty_account_slot()
if not response['ret']:
return response
uri = response['uri']
payload = {}
if user.get('account_username'):
payload['UserName'] = user.get('account_username')
if user.get('account_password'):
payload['Password'] = user.get('account_password')
if user.get('account_roleid'):
payload['RoleId'] = user.get('account_roleid')
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def add_user(self, user):
if not user.get('account_username'):
return {'ret': False, 'msg':
'Must provide account_username for AddUser command'}
response = self._find_account_uri(username=user.get('account_username'))
if response['ret']:
# account_username already exists, nothing to do
return {'ret': True, 'changed': False}
response = self.get_request(self.root_uri + self.accounts_uri)
if not response['ret']:
return response
headers = response['headers']
if 'allow' in headers:
methods = [m.strip() for m in headers.get('allow').split(',')]
if 'POST' not in methods:
# if Allow header present and POST not listed, add via PATCH
return self.add_user_via_patch(user)
payload = {}
if user.get('account_username'):
payload['UserName'] = user.get('account_username')
if user.get('account_password'):
payload['Password'] = user.get('account_password')
if user.get('account_roleid'):
payload['RoleId'] = user.get('account_roleid')
response = self.post_request(self.root_uri + self.accounts_uri, payload)
if not response['ret']:
if response.get('status') == 405:
# if POST returned a 405, try to add via PATCH
return self.add_user_via_patch(user)
else:
return response
return {'ret': True}
def enable_user(self, user):
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
data = response['data']
if data.get('Enabled', True):
# account already enabled, nothing to do
return {'ret': True, 'changed': False}
payload = {'Enabled': True}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def delete_user_via_patch(self, user, uri=None, data=None):
if not uri:
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
data = response['data']
if data and data.get('UserName') == '' and not data.get('Enabled', False):
# account UserName already cleared, nothing to do
return {'ret': True, 'changed': False}
payload = {'UserName': ''}
if data.get('Enabled', False):
payload['Enabled'] = False
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def delete_user(self, user):
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
if response.get('no_match'):
# account does not exist, nothing to do
return {'ret': True, 'changed': False}
else:
# some error encountered
return response
uri = response['uri']
headers = response['headers']
data = response['data']
if 'allow' in headers:
methods = [m.strip() for m in headers.get('allow').split(',')]
if 'DELETE' not in methods:
# if Allow header present and DELETE not listed, del via PATCH
return self.delete_user_via_patch(user, uri=uri, data=data)
response = self.delete_request(self.root_uri + uri)
if not response['ret']:
if response.get('status') == 405:
# if DELETE returned a 405, try to delete via PATCH
return self.delete_user_via_patch(user, uri=uri, data=data)
else:
return response
return {'ret': True}
def disable_user(self, user):
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
data = response['data']
if not data.get('Enabled'):
# account already disabled, nothing to do
return {'ret': True, 'changed': False}
payload = {'Enabled': False}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def update_user_role(self, user):
if not user.get('account_roleid'):
return {'ret': False, 'msg':
'Must provide account_roleid for UpdateUserRole command'}
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
data = response['data']
if data.get('RoleId') == user.get('account_roleid'):
# account already has RoleId , nothing to do
return {'ret': True, 'changed': False}
payload = {'RoleId': user.get('account_roleid')}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def update_user_password(self, user):
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
payload = {'Password': user['account_password']}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def update_user_name(self, user):
if not user.get('account_updatename'):
return {'ret': False, 'msg':
'Must provide account_updatename for UpdateUserName command'}
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
payload = {'UserName': user['account_updatename']}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def update_accountservice_properties(self, user):
if user.get('account_properties') is None:
return {'ret': False, 'msg':
'Must provide account_properties for UpdateAccountServiceProperties command'}
account_properties = user.get('account_properties')
# Find AccountService
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'AccountService' not in data:
return {'ret': False, 'msg': "AccountService resource not found"}
accountservice_uri = data["AccountService"]["@odata.id"]
# Check support or not
response = self.get_request(self.root_uri + accountservice_uri)
if response['ret'] is False:
return response
data = response['data']
for property_name in account_properties.keys():
if property_name not in data:
return {'ret': False, 'msg':
'property %s not supported' % property_name}
# if properties is already matched, nothing to do
need_change = False
for property_name in account_properties.keys():
if account_properties[property_name] != data[property_name]:
need_change = True
break
if not need_change:
return {'ret': True, 'changed': False, 'msg': "AccountService properties already set"}
payload = account_properties
response = self.patch_request(self.root_uri + accountservice_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified AccountService properties"}
def get_sessions(self):
result = {}
        # listing all sessions has always been slower than other operations, why?
session_list = []
sessions_results = []
# Get these entries, but does not fail if not found
properties = ['Description', 'Id', 'Name', 'UserName']
response = self.get_request(self.root_uri + self.sessions_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for sessions in data[u'Members']:
session_list.append(sessions[u'@odata.id']) # session_list[] are URIs
# for each session, get details
for uri in session_list:
session = {}
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
session[property] = data[property]
sessions_results.append(session)
result["entries"] = sessions_results
return result
def get_firmware_update_capabilities(self):
result = {}
response = self.get_request(self.root_uri + self.update_uri)
if response['ret'] is False:
return response
result['ret'] = True
result['entries'] = {}
data = response['data']
if "Actions" in data:
actions = data['Actions']
if len(actions) > 0:
for key in actions.keys():
action = actions.get(key)
if 'title' in action:
title = action['title']
else:
title = key
                    result['entries'][title] = action.get('TransferProtocol@Redfish.AllowableValues',
                                                          ["Key TransferProtocol@Redfish.AllowableValues not found"])
else:
return {'ret': "False", 'msg': "Actions list is empty."}
else:
return {'ret': "False", 'msg': "Key Actions not found."}
return result
def _software_inventory(self, uri):
result = {}
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
result['entries'] = []
for member in data[u'Members']:
uri = self.root_uri + member[u'@odata.id']
# Get details for each software or firmware member
response = self.get_request(uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
software = {}
# Get these standard properties if present
for key in ['Name', 'Id', 'Status', 'Version', 'Updateable',
'SoftwareId', 'LowestSupportedVersion', 'Manufacturer',
'ReleaseDate']:
if key in data:
software[key] = data.get(key)
result['entries'].append(software)
return result
def get_firmware_inventory(self):
if self.firmware_uri is None:
return {'ret': False, 'msg': 'No FirmwareInventory resource found'}
else:
return self._software_inventory(self.firmware_uri)
def get_software_inventory(self):
if self.software_uri is None:
return {'ret': False, 'msg': 'No SoftwareInventory resource found'}
else:
return self._software_inventory(self.software_uri)
def get_bios_attributes(self, systems_uri):
result = {}
bios_attributes = {}
key = "Bios"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
bios_uri = data[key]["@odata.id"]
response = self.get_request(self.root_uri + bios_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for attribute in data[u'Attributes'].items():
bios_attributes[attribute[0]] = attribute[1]
result["entries"] = bios_attributes
return result
def get_multi_bios_attributes(self):
return self.aggregate_systems(self.get_bios_attributes)
def _get_boot_options_dict(self, boot):
# Get these entries from BootOption, if present
properties = ['DisplayName', 'BootOptionReference']
# Retrieve BootOptions if present
if 'BootOptions' in boot and '@odata.id' in boot['BootOptions']:
boot_options_uri = boot['BootOptions']["@odata.id"]
# Get BootOptions resource
response = self.get_request(self.root_uri + boot_options_uri)
if response['ret'] is False:
return {}
data = response['data']
# Retrieve Members array
if 'Members' not in data:
return {}
members = data['Members']
else:
members = []
# Build dict of BootOptions keyed by BootOptionReference
boot_options_dict = {}
for member in members:
if '@odata.id' not in member:
return {}
boot_option_uri = member['@odata.id']
response = self.get_request(self.root_uri + boot_option_uri)
if response['ret'] is False:
return {}
data = response['data']
if 'BootOptionReference' not in data:
return {}
boot_option_ref = data['BootOptionReference']
# fetch the props to display for this boot device
boot_props = {}
for prop in properties:
if prop in data:
boot_props[prop] = data[prop]
boot_options_dict[boot_option_ref] = boot_props
return boot_options_dict
def get_boot_order(self, systems_uri):
result = {}
# Retrieve System resource
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
# Confirm needed Boot properties are present
if 'Boot' not in data or 'BootOrder' not in data['Boot']:
return {'ret': False, 'msg': "Key BootOrder not found"}
boot = data['Boot']
boot_order = boot['BootOrder']
boot_options_dict = self._get_boot_options_dict(boot)
# Build boot device list
boot_device_list = []
for ref in boot_order:
boot_device_list.append(
boot_options_dict.get(ref, {'BootOptionReference': ref}))
result["entries"] = boot_device_list
return result
def get_multi_boot_order(self):
return self.aggregate_systems(self.get_boot_order)
def get_boot_override(self, systems_uri):
result = {}
properties = ["BootSourceOverrideEnabled", "BootSourceOverrideTarget",
"BootSourceOverrideMode", "UefiTargetBootSourceOverride", "[email protected]"]
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if 'Boot' not in data:
return {'ret': False, 'msg': "Key Boot not found"}
boot = data['Boot']
boot_overrides = {}
if "BootSourceOverrideEnabled" in boot:
if boot["BootSourceOverrideEnabled"] is not False:
for property in properties:
if property in boot:
if boot[property] is not None:
boot_overrides[property] = boot[property]
else:
return {'ret': False, 'msg': "No boot override is enabled."}
result['entries'] = boot_overrides
return result
def get_multi_boot_override(self):
return self.aggregate_systems(self.get_boot_override)
def set_bios_default_settings(self):
result = {}
key = "Bios"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
bios_uri = data[key]["@odata.id"]
# Extract proper URI
response = self.get_request(self.root_uri + bios_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
reset_bios_settings_uri = data["Actions"]["#Bios.ResetBios"]["target"]
response = self.post_request(self.root_uri + reset_bios_settings_uri, {})
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Set BIOS to default settings"}
def set_one_time_boot_device(self, bootdevice, uefi_target, boot_next):
result = {}
key = "Boot"
if not bootdevice:
return {'ret': False,
'msg': "bootdevice option required for SetOneTimeBoot"}
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
boot = data[key]
annotation = 'BootSourceOverrideTarget@Redfish.AllowableValues'
if annotation in boot:
allowable_values = boot[annotation]
if isinstance(allowable_values, list) and bootdevice not in allowable_values:
return {'ret': False,
'msg': "Boot device %s not in list of allowable values (%s)" %
(bootdevice, allowable_values)}
# read existing values
enabled = boot.get('BootSourceOverrideEnabled')
target = boot.get('BootSourceOverrideTarget')
cur_uefi_target = boot.get('UefiTargetBootSourceOverride')
cur_boot_next = boot.get('BootNext')
if bootdevice == 'UefiTarget':
if not uefi_target:
return {'ret': False,
'msg': "uefi_target option required to SetOneTimeBoot for UefiTarget"}
if enabled == 'Once' and target == bootdevice and uefi_target == cur_uefi_target:
# If properties are already set, no changes needed
return {'ret': True, 'changed': False}
payload = {
'Boot': {
'BootSourceOverrideEnabled': 'Once',
'BootSourceOverrideTarget': bootdevice,
'UefiTargetBootSourceOverride': uefi_target
}
}
elif bootdevice == 'UefiBootNext':
if not boot_next:
return {'ret': False,
'msg': "boot_next option required to SetOneTimeBoot for UefiBootNext"}
if enabled == 'Once' and target == bootdevice and boot_next == cur_boot_next:
# If properties are already set, no changes needed
return {'ret': True, 'changed': False}
payload = {
'Boot': {
'BootSourceOverrideEnabled': 'Once',
'BootSourceOverrideTarget': bootdevice,
'BootNext': boot_next
}
}
else:
if enabled == 'Once' and target == bootdevice:
# If properties are already set, no changes needed
return {'ret': True, 'changed': False}
payload = {
'Boot': {
'BootSourceOverrideEnabled': 'Once',
'BootSourceOverrideTarget': bootdevice
}
}
response = self.patch_request(self.root_uri + self.systems_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True}
def set_bios_attributes(self, attributes):
result = {}
key = "Bios"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
bios_uri = data[key]["@odata.id"]
# Extract proper URI
response = self.get_request(self.root_uri + bios_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
# Make a copy of the attributes dict
attrs_to_patch = dict(attributes)
# Check the attributes
for attr in attributes:
if attr not in data[u'Attributes']:
return {'ret': False, 'msg': "BIOS attribute %s not found" % attr}
# If already set to requested value, remove it from PATCH payload
if data[u'Attributes'][attr] == attributes[attr]:
del attrs_to_patch[attr]
# Return success w/ changed=False if no attrs need to be changed
if not attrs_to_patch:
return {'ret': True, 'changed': False,
'msg': "BIOS attributes already set"}
# Get the SettingsObject URI
set_bios_attr_uri = data["@Redfish.Settings"]["SettingsObject"]["@odata.id"]
# Construct payload and issue PATCH command
payload = {"Attributes": attrs_to_patch}
response = self.patch_request(self.root_uri + set_bios_attr_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified BIOS attribute"}
def set_boot_order(self, boot_list):
if not boot_list:
return {'ret': False,
'msg': "boot_order list required for SetBootOrder command"}
systems_uri = self.systems_uri
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
# Confirm needed Boot properties are present
if 'Boot' not in data or 'BootOrder' not in data['Boot']:
return {'ret': False, 'msg': "Key BootOrder not found"}
boot = data['Boot']
boot_order = boot['BootOrder']
boot_options_dict = self._get_boot_options_dict(boot)
# validate boot_list against BootOptionReferences if available
if boot_options_dict:
boot_option_references = boot_options_dict.keys()
for ref in boot_list:
if ref not in boot_option_references:
return {'ret': False,
'msg': "BootOptionReference %s not found in BootOptions" % ref}
# If requested BootOrder is already set, nothing to do
if boot_order == boot_list:
return {'ret': True, 'changed': False,
'msg': "BootOrder already set to %s" % boot_list}
payload = {
'Boot': {
'BootOrder': boot_list
}
}
response = self.patch_request(self.root_uri + systems_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "BootOrder set"}
def set_default_boot_order(self):
systems_uri = self.systems_uri
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
# get the #ComputerSystem.SetDefaultBootOrder Action and target URI
action = '#ComputerSystem.SetDefaultBootOrder'
if 'Actions' not in data or action not in data['Actions']:
return {'ret': False, 'msg': 'Action %s not found' % action}
if 'target' not in data['Actions'][action]:
return {'ret': False,
'msg': 'target URI missing from Action %s' % action}
action_uri = data['Actions'][action]['target']
# POST to Action URI
payload = {}
response = self.post_request(self.root_uri + action_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True,
'msg': "BootOrder set to default"}
def get_chassis_inventory(self):
result = {}
chassis_results = []
# Get these entries, but do not fail if they are not found
properties = ['ChassisType', 'PartNumber', 'AssetTag',
'Manufacturer', 'IndicatorLED', 'SerialNumber', 'Model']
# Go through list
for chassis_uri in self.chassis_uris:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
chassis_result = {}
for property in properties:
if property in data:
chassis_result[property] = data[property]
chassis_results.append(chassis_result)
result["entries"] = chassis_results
return result
def get_fan_inventory(self):
result = {}
fan_results = []
key = "Thermal"
# Get these entries, but do not fail if they are not found
properties = ['FanName', 'Reading', 'ReadingUnits', 'Status']
# Go through list
for chassis_uri in self.chassis_uris:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key in data:
# match: found an entry for "Thermal" information = fans
thermal_uri = data[key]["@odata.id"]
response = self.get_request(self.root_uri + thermal_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for device in data[u'Fans']:
fan = {}
for property in properties:
if property in device:
fan[property] = device[property]
fan_results.append(fan)
result["entries"] = fan_results
return result
def get_chassis_power(self):
result = {}
key = "Power"
# Get these entries, but do not fail if they are not found
properties = ['Name', 'PowerAllocatedWatts',
'PowerAvailableWatts', 'PowerCapacityWatts',
'PowerConsumedWatts', 'PowerMetrics',
'PowerRequestedWatts', 'RelatedItem', 'Status']
chassis_power_results = []
# Go through list
for chassis_uri in self.chassis_uris:
chassis_power_result = {}
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key in data:
response = self.get_request(self.root_uri + data[key]['@odata.id'])
data = response['data']
if 'PowerControl' in data:
if len(data['PowerControl']) > 0:
data = data['PowerControl'][0]
for property in properties:
if property in data:
chassis_power_result[property] = data[property]
else:
return {'ret': False, 'msg': 'Key PowerControl not found.'}
chassis_power_results.append(chassis_power_result)
else:
return {'ret': False, 'msg': 'Key Power not found.'}
result['entries'] = chassis_power_results
return result
def get_chassis_thermals(self):
result = {}
sensors = []
key = "Thermal"
# Get these entries, but do not fail if they are not found
properties = ['Name', 'PhysicalContext', 'UpperThresholdCritical',
'UpperThresholdFatal', 'UpperThresholdNonCritical',
'LowerThresholdCritical', 'LowerThresholdFatal',
'LowerThresholdNonCritical', 'MaxReadingRangeTemp',
'MinReadingRangeTemp', 'ReadingCelsius', 'RelatedItem',
'SensorNumber']
# Go through list
for chassis_uri in self.chassis_uris:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key in data:
thermal_uri = data[key]["@odata.id"]
response = self.get_request(self.root_uri + thermal_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if "Temperatures" in data:
for sensor in data[u'Temperatures']:
sensor_result = {}
for property in properties:
if property in sensor:
if sensor[property] is not None:
sensor_result[property] = sensor[property]
sensors.append(sensor_result)
if sensors is None:
return {'ret': False, 'msg': 'Key Temperatures was not found.'}
result['entries'] = sensors
return result
def get_cpu_inventory(self, systems_uri):
result = {}
cpu_list = []
cpu_results = []
key = "Processors"
# Get these entries, but do not fail if they are not found
properties = ['Id', 'Manufacturer', 'Model', 'MaxSpeedMHz', 'TotalCores',
'TotalThreads', 'Status']
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
processors_uri = data[key]["@odata.id"]
# Get a list of all CPUs and build respective URIs
response = self.get_request(self.root_uri + processors_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for cpu in data[u'Members']:
cpu_list.append(cpu[u'@odata.id'])
for c in cpu_list:
cpu = {}
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
cpu[property] = data[property]
cpu_results.append(cpu)
result["entries"] = cpu_results
return result
def get_multi_cpu_inventory(self):
return self.aggregate_systems(self.get_cpu_inventory)
def get_memory_inventory(self, systems_uri):
result = {}
memory_list = []
memory_results = []
key = "Memory"
# Get these entries, but do not fail if they are not found
properties = ['SerialNumber', 'MemoryDeviceType', 'PartNumber',
'MemoryLocation', 'RankCount', 'CapacityMiB', 'OperatingMemoryModes', 'Status', 'Manufacturer', 'Name']
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
memory_uri = data[key]["@odata.id"]
# Get a list of all DIMMs and build respective URIs
response = self.get_request(self.root_uri + memory_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for dimm in data[u'Members']:
memory_list.append(dimm[u'@odata.id'])
for m in memory_list:
dimm = {}
uri = self.root_uri + m
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
if "Status" in data:
if "State" in data["Status"]:
if data["Status"]["State"] == "Absent":
continue
else:
continue
for property in properties:
if property in data:
dimm[property] = data[property]
memory_results.append(dimm)
result["entries"] = memory_results
return result
def get_multi_memory_inventory(self):
return self.aggregate_systems(self.get_memory_inventory)
def get_nic_inventory(self, resource_uri):
result = {}
nic_list = []
nic_results = []
key = "EthernetInterfaces"
# Get these entries, but do not fail if they are not found
properties = ['Description', 'FQDN', 'IPv4Addresses', 'IPv6Addresses',
'NameServers', 'MACAddress', 'PermanentMACAddress',
'SpeedMbps', 'MTUSize', 'AutoNeg', 'Status']
response = self.get_request(self.root_uri + resource_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
ethernetinterfaces_uri = data[key]["@odata.id"]
# Get a list of all network controllers and build respective URIs
response = self.get_request(self.root_uri + ethernetinterfaces_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for nic in data[u'Members']:
nic_list.append(nic[u'@odata.id'])
for n in nic_list:
nic = {}
uri = self.root_uri + n
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
nic[property] = data[property]
nic_results.append(nic)
result["entries"] = nic_results
return result
def get_multi_nic_inventory(self, resource_type):
ret = True
entries = []
# Given resource_type, use the proper URI
if resource_type == 'Systems':
resource_uris = self.systems_uris
elif resource_type == 'Manager':
resource_uris = self.manager_uris
for resource_uri in resource_uris:
inventory = self.get_nic_inventory(resource_uri)
ret = inventory.pop('ret') and ret
if 'entries' in inventory:
entries.append(({'resource_uri': resource_uri},
inventory['entries']))
return dict(ret=ret, entries=entries)
def get_virtualmedia(self, resource_uri):
result = {}
virtualmedia_list = []
virtualmedia_results = []
key = "VirtualMedia"
# Get these entries, but do not fail if they are not found
properties = ['Description', 'ConnectedVia', 'Id', 'MediaTypes',
'Image', 'ImageName', 'Name', 'WriteProtected',
'TransferMethod', 'TransferProtocolType']
response = self.get_request(self.root_uri + resource_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
virtualmedia_uri = data[key]["@odata.id"]
# Get a list of all virtual media and build respective URIs
response = self.get_request(self.root_uri + virtualmedia_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for virtualmedia in data[u'Members']:
virtualmedia_list.append(virtualmedia[u'@odata.id'])
for n in virtualmedia_list:
virtualmedia = {}
uri = self.root_uri + n
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
virtualmedia[property] = data[property]
virtualmedia_results.append(virtualmedia)
result["entries"] = virtualmedia_results
return result
def get_multi_virtualmedia(self):
ret = True
entries = []
resource_uris = self.manager_uris
for resource_uri in resource_uris:
virtualmedia = self.get_virtualmedia(resource_uri)
ret = virtualmedia.pop('ret') and ret
if 'entries' in virtualmedia:
entries.append(({'resource_uri': resource_uri},
virtualmedia['entries']))
return dict(ret=ret, entries=entries)
def get_psu_inventory(self):
result = {}
psu_list = []
psu_results = []
key = "PowerSupplies"
# Get these entries, but do not fail if they are not found
properties = ['Name', 'Model', 'SerialNumber', 'PartNumber', 'Manufacturer',
'FirmwareVersion', 'PowerCapacityWatts', 'PowerSupplyType',
'Status']
# Get a list of all Chassis and build URIs, then get all PowerSupplies
# from each Power entry in the Chassis
chassis_uri_list = self.chassis_uris
for chassis_uri in chassis_uri_list:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if 'Power' in data:
power_uri = data[u'Power'][u'@odata.id']
else:
continue
response = self.get_request(self.root_uri + power_uri)
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
psu_list = data[key]
for psu in psu_list:
psu_not_present = False
psu_data = {}
for property in properties:
if property in psu:
if psu[property] is not None:
if property == 'Status':
if 'State' in psu[property]:
if psu[property]['State'] == 'Absent':
psu_not_present = True
psu_data[property] = psu[property]
if psu_not_present:
continue
psu_results.append(psu_data)
result["entries"] = psu_results
if not result["entries"]:
return {'ret': False, 'msg': "No PowerSupply objects found"}
return result
def get_multi_psu_inventory(self):
return self.aggregate_systems(self.get_psu_inventory)
def get_system_inventory(self, systems_uri):
result = {}
inventory = {}
# Get these entries, but do not fail if they are not found
properties = ['Status', 'HostName', 'PowerState', 'Model', 'Manufacturer',
'PartNumber', 'SystemType', 'AssetTag', 'ServiceTag',
'SerialNumber', 'SKU', 'BiosVersion', 'MemorySummary',
'ProcessorSummary', 'TrustedModules']
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for property in properties:
if property in data:
inventory[property] = data[property]
result["entries"] = inventory
return result
def get_multi_system_inventory(self):
return self.aggregate_systems(self.get_system_inventory)
def get_network_protocols(self):
result = {}
service_result = {}
# Find NetworkProtocol
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'NetworkProtocol' not in data:
return {'ret': False, 'msg': "NetworkProtocol resource not found"}
networkprotocol_uri = data["NetworkProtocol"]["@odata.id"]
response = self.get_request(self.root_uri + networkprotocol_uri)
if response['ret'] is False:
return response
data = response['data']
protocol_services = ['SNMP', 'VirtualMedia', 'Telnet', 'SSDP', 'IPMI', 'SSH',
'KVMIP', 'NTP', 'HTTP', 'HTTPS', 'DHCP', 'DHCPv6', 'RDP',
'RFB']
for protocol_service in protocol_services:
if protocol_service in data.keys():
service_result[protocol_service] = data[protocol_service]
result['ret'] = True
result["entries"] = service_result
return result
def set_network_protocols(self, manager_services):
# Check input data validity
protocol_services = ['SNMP', 'VirtualMedia', 'Telnet', 'SSDP', 'IPMI', 'SSH',
'KVMIP', 'NTP', 'HTTP', 'HTTPS', 'DHCP', 'DHCPv6', 'RDP',
'RFB']
protocol_state_onlist = ['true', 'True', True, 'on', 1]
protocol_state_offlist = ['false', 'False', False, 'off', 0]
payload = {}
for service_name in manager_services.keys():
if service_name not in protocol_services:
return {'ret': False, 'msg': "Service name %s is invalid" % service_name}
payload[service_name] = {}
for service_property in manager_services[service_name].keys():
value = manager_services[service_name][service_property]
if service_property in ['ProtocolEnabled', 'protocolenabled']:
if value in protocol_state_onlist:
payload[service_name]['ProtocolEnabled'] = True
elif value in protocol_state_offlist:
payload[service_name]['ProtocolEnabled'] = False
else:
return {'ret': False, 'msg': "Value of property %s is invalid" % service_property}
elif service_property in ['port', 'Port']:
if isinstance(value, int):
payload[service_name]['Port'] = value
elif isinstance(value, str) and value.isdigit():
payload[service_name]['Port'] = int(value)
else:
return {'ret': False, 'msg': "Value of property %s is invalid" % service_property}
else:
payload[service_name][service_property] = value
# Find NetworkProtocol
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'NetworkProtocol' not in data:
return {'ret': False, 'msg': "NetworkProtocol resource not found"}
networkprotocol_uri = data["NetworkProtocol"]["@odata.id"]
# Check whether each service property is supported
response = self.get_request(self.root_uri + networkprotocol_uri)
if response['ret'] is False:
return response
data = response['data']
for service_name in payload.keys():
if service_name not in data:
return {'ret': False, 'msg': "%s service not supported" % service_name}
for service_property in payload[service_name].keys():
if service_property not in data[service_name]:
return {'ret': False, 'msg': "%s property for %s service not supported" % (service_property, service_name)}
# if the protocol is already set, nothing to do
need_change = False
for service_name in payload.keys():
for service_property in payload[service_name].keys():
value = payload[service_name][service_property]
if value != data[service_name][service_property]:
need_change = True
break
if not need_change:
return {'ret': True, 'changed': False, 'msg': "Manager NetworkProtocol services already set"}
response = self.patch_request(self.root_uri + networkprotocol_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified Manager NetworkProtocol services"}
@staticmethod
def to_singular(resource_name):
if resource_name.endswith('ies'):
resource_name = resource_name[:-3] + 'y'
elif resource_name.endswith('s'):
resource_name = resource_name[:-1]
return resource_name
def get_health_resource(self, subsystem, uri, health, expanded):
status = 'Status'
if expanded:
d = expanded
else:
r = self.get_request(self.root_uri + uri)
if r.get('ret'):
d = r.get('data')
else:
return
if 'Members' in d: # collections case
for m in d.get('Members'):
u = m.get('@odata.id')
r = self.get_request(self.root_uri + u)
if r.get('ret'):
p = r.get('data')
if p:
e = {self.to_singular(subsystem.lower()) + '_uri': u,
status: p.get(status,
"Status not available")}
health[subsystem].append(e)
else: # non-collections case
e = {self.to_singular(subsystem.lower()) + '_uri': uri,
status: d.get(status,
"Status not available")}
health[subsystem].append(e)
def get_health_subsystem(self, subsystem, data, health):
if subsystem in data:
sub = data.get(subsystem)
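# The subsystem entry may be a list of resource links or a single link dict; list entries
# whose URI contains '#' and that carry extra properties are treated as already-expanded
# resources and are not fetched again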
if isinstance(sub, list):
for r in sub:
if '@odata.id' in r:
uri = r.get('@odata.id')
expanded = None
if '#' in uri and len(r) > 1:
expanded = r
self.get_health_resource(subsystem, uri, health, expanded)
elif isinstance(sub, dict):
if '@odata.id' in sub:
uri = sub.get('@odata.id')
self.get_health_resource(subsystem, uri, health, None)
elif 'Members' in data:
for m in data.get('Members'):
u = m.get('@odata.id')
r = self.get_request(self.root_uri + u)
if r.get('ret'):
d = r.get('data')
self.get_health_subsystem(subsystem, d, health)
def get_health_report(self, category, uri, subsystems):
result = {}
health = {}
status = 'Status'
# Get health status of top level resource
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
health[category] = {status: data.get(status, "Status not available")}
# Get health status of subsystems
for sub in subsystems:
d = None
if sub.startswith('Links.'): # ex: Links.PCIeDevices
sub = sub[len('Links.'):]
d = data.get('Links', {})
elif '.' in sub: # ex: Thermal.Fans
p, sub = sub.split('.')
u = data.get(p, {}).get('@odata.id')
if u:
r = self.get_request(self.root_uri + u)
if r['ret']:
d = r['data']
if not d:
continue
else: # ex: Memory
d = data
health[sub] = []
self.get_health_subsystem(sub, d, health)
if not health[sub]:
del health[sub]
result["entries"] = health
return result
def get_system_health_report(self, systems_uri):
subsystems = ['Processors', 'Memory', 'SimpleStorage', 'Storage',
'EthernetInterfaces', 'NetworkInterfaces.NetworkPorts',
'NetworkInterfaces.NetworkDeviceFunctions']
return self.get_health_report('System', systems_uri, subsystems)
def get_multi_system_health_report(self):
return self.aggregate_systems(self.get_system_health_report)
def get_chassis_health_report(self, chassis_uri):
subsystems = ['Power.PowerSupplies', 'Thermal.Fans',
'Links.PCIeDevices']
return self.get_health_report('Chassis', chassis_uri, subsystems)
def get_multi_chassis_health_report(self):
return self.aggregate_chassis(self.get_chassis_health_report)
def get_manager_health_report(self, manager_uri):
subsystems = []
return self.get_health_report('Manager', manager_uri, subsystems)
def get_multi_manager_health_report(self):
return self.aggregate_managers(self.get_manager_health_report)
def set_manager_nic(self, nic_addr, nic_config):
# Get EthernetInterface collection
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'EthernetInterfaces' not in data:
return {'ret': False, 'msg': "EthernetInterfaces resource not found"}
ethernetinterfaces_uri = data["EthernetInterfaces"]["@odata.id"]
response = self.get_request(self.root_uri + ethernetinterfaces_uri)
if response['ret'] is False:
return response
data = response['data']
uris = [a.get('@odata.id') for a in data.get('Members', []) if
a.get('@odata.id')]
# Find target EthernetInterface
target_ethernet_uri = None
target_ethernet_current_setting = None
if nic_addr == 'null':
# Find root_uri matched EthernetInterface when nic_addr is not specified
nic_addr = (self.root_uri).split('/')[-1]
nic_addr = nic_addr.split(':')[0] # split port if existing
for uri in uris:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
if '"' + nic_addr + '"' in str(data) or "'" + nic_addr + "'" in str(data):
target_ethernet_uri = uri
target_ethernet_current_setting = data
break
if target_ethernet_uri is None:
return {'ret': False, 'msg': "No matched EthernetInterface found under Manager"}
# Convert input to payload and check validity
payload = {}
for property in nic_config.keys():
value = nic_config[property]
if property not in target_ethernet_current_setting:
return {'ret': False, 'msg': "Property %s in nic_config is invalid" % property}
if isinstance(value, dict):
if isinstance(target_ethernet_current_setting[property], dict):
payload[property] = value
elif isinstance(target_ethernet_current_setting[property], list):
payload[property] = list()
payload[property].append(value)
else:
return {'ret': False, 'msg': "Value of property %s in nic_config is invalid" % property}
else:
payload[property] = value
# If no need change, nothing to do. If error detected, report it
need_change = False
for property in payload.keys():
set_value = payload[property]
cur_value = target_ethernet_current_setting[property]
# type is simple(not dict/list)
if not isinstance(set_value, dict) and not isinstance(set_value, list):
if set_value != cur_value:
need_change = True
# type is dict
if isinstance(set_value, dict):
for subprop in payload[property].keys():
if subprop not in target_ethernet_current_setting[property]:
return {'ret': False, 'msg': "Sub-property %s in nic_config is invalid" % subprop}
sub_set_value = payload[property][subprop]
sub_cur_value = target_ethernet_current_setting[property][subprop]
if sub_set_value != sub_cur_value:
need_change = True
# type is list
if isinstance(set_value, list):
for i in range(len(set_value)):
for subprop in payload[property][i].keys():
if subprop not in target_ethernet_current_setting[property][i]:
return {'ret': False, 'msg': "Sub-property %s in nic_config is invalid" % subprop}
sub_set_value = payload[property][i][subprop]
sub_cur_value = target_ethernet_current_setting[property][i][subprop]
if sub_set_value != sub_cur_value:
need_change = True
if not need_change:
return {'ret': True, 'changed': False, 'msg': "Manager NIC already set"}
response = self.patch_request(self.root_uri + target_ethernet_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified Manager NIC"}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,599 |
redfish_command - Manager - ClearSessions
|
##### SUMMARY
This feature would implement a ClearSessions command for the Sessions category of redfish_command, to clear all active sessions.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
redfish_command.py
redfish_utils.py
##### ADDITIONAL INFORMATION
This command would help users clear all active sessions; a rough sketch of one possible implementation is included below.
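A minimal sketch of how this could look in redfish_utils.py, assuming a `delete_request` helper analogous to the existing `get_request`/`post_request`/`patch_request` helpers; the method name and structure are illustrative only, not the merged implementation:
```python
def clear_sessions(self):
    # Enumerate the members of the Sessions collection
    response = self.get_request(self.root_uri + self.sessions_uri)
    if response['ret'] is False:
        return response
    data = response['data']
    # DELETE each session resource; note that on some services this may also
    # end the session used by the current request
    for member in data.get(u'Members', []):
        session_uri = member.get(u'@odata.id')
        if not session_uri:
            continue
        response = self.delete_request(self.root_uri + session_uri)
        if response['ret'] is False:
            return response
    return {'ret': True, 'changed': True,
            'msg': "Cleared all sessions successfully"}
```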
|
https://github.com/ansible/ansible/issues/65599
|
https://github.com/ansible/ansible/pull/65600
|
f21ee7f685e8f58e58de2400b134ba5c7a2536b1
|
435bd91d2e406a227b20ce5f42c858253e0a97c3
| 2019-12-06T08:43:45Z |
python
| 2020-02-15T13:00:55Z |
lib/ansible/modules/remote_management/redfish/redfish_command.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2017-2018 Dell EMC Inc.
# GNU General Public License v3.0+ (see LICENSE or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'metadata_version': '1.1'}
DOCUMENTATION = '''
---
module: redfish_command
version_added: "2.7"
short_description: Manages Out-Of-Band controllers using Redfish APIs
description:
- Builds Redfish URIs locally and sends them to remote OOB controllers to
perform an action.
- Manages OOB controller, for example reboot and log management.
- Manages OOB controller users, for example add, remove, update.
- Manages system power, for example on, off, graceful and forced reboot.
options:
category:
required: true
description:
- Category to execute on OOB controller
type: str
command:
required: true
description:
- List of commands to execute on OOB controller
type: list
baseuri:
required: true
description:
- Base URI of OOB controller
type: str
username:
required: true
description:
- Username for authentication with OOB controller
type: str
version_added: "2.8"
password:
required: true
description:
- Password for authentication with OOB controller
type: str
id:
required: false
aliases: [ account_id ]
description:
- ID of account to delete/modify
type: str
version_added: "2.8"
new_username:
required: false
aliases: [ account_username ]
description:
- Username of account to add/delete/modify
type: str
version_added: "2.8"
new_password:
required: false
aliases: [ account_password ]
description:
- New password of account to add/modify
type: str
version_added: "2.8"
roleid:
required: false
aliases: [ account_roleid ]
description:
- Role of account to add/modify
type: str
version_added: "2.8"
bootdevice:
required: false
description:
- bootdevice when setting boot configuration
type: str
timeout:
description:
- Timeout in seconds for URL requests to OOB controller
default: 10
type: int
version_added: '2.8'
uefi_target:
required: false
description:
- UEFI target when bootdevice is "UefiTarget"
type: str
version_added: "2.9"
boot_next:
required: false
description:
- BootNext target when bootdevice is "UefiBootNext"
type: str
version_added: "2.9"
update_username:
required: false
aliases: [ account_updatename ]
description:
- new update user name for account_username
type: str
version_added: "2.10"
account_properties:
required: false
description:
- properties of account service to update
type: dict
version_added: "2.10"
resource_id:
required: false
description:
- The ID of the System, Manager or Chassis to modify
type: str
version_added: "2.10"
author: "Jose Delarosa (@jose-delarosa)"
'''
EXAMPLES = '''
- name: Restart system power gracefully
redfish_command:
category: Systems
command: PowerGracefulRestart
resource_id: 437XR1138R2
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Set one-time boot device to {{ bootdevice }}
redfish_command:
category: Systems
command: SetOneTimeBoot
resource_id: 437XR1138R2
bootdevice: "{{ bootdevice }}"
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Set one-time boot device to UefiTarget of "/0x31/0x33/0x01/0x01"
redfish_command:
category: Systems
command: SetOneTimeBoot
resource_id: 437XR1138R2
bootdevice: "UefiTarget"
uefi_target: "/0x31/0x33/0x01/0x01"
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Set one-time boot device to BootNext target of "Boot0001"
redfish_command:
category: Systems
command: SetOneTimeBoot
resource_id: 437XR1138R2
bootdevice: "UefiBootNext"
boot_next: "Boot0001"
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Set chassis indicator LED to blink
redfish_command:
category: Chassis
command: IndicatorLedBlink
resource_id: 1U
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Add user
redfish_command:
category: Accounts
command: AddUser
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
new_username: "{{ new_username }}"
new_password: "{{ new_password }}"
roleid: "{{ roleid }}"
- name: Add user using new option aliases
redfish_command:
category: Accounts
command: AddUser
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
account_username: "{{ account_username }}"
account_password: "{{ account_password }}"
account_roleid: "{{ account_roleid }}"
- name: Delete user
redfish_command:
category: Accounts
command: DeleteUser
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
account_username: "{{ account_username }}"
- name: Disable user
redfish_command:
category: Accounts
command: DisableUser
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
account_username: "{{ account_username }}"
- name: Enable user
redfish_command:
category: Accounts
command: EnableUser
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
account_username: "{{ account_username }}"
- name: Add and enable user
redfish_command:
category: Accounts
command: AddUser,EnableUser
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
new_username: "{{ new_username }}"
new_password: "{{ new_password }}"
roleid: "{{ roleid }}"
- name: Update user password
redfish_command:
category: Accounts
command: UpdateUserPassword
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
account_username: "{{ account_username }}"
account_password: "{{ account_password }}"
- name: Update user role
redfish_command:
category: Accounts
command: UpdateUserRole
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
account_username: "{{ account_username }}"
roleid: "{{ roleid }}"
- name: Update user name
redfish_command:
category: Accounts
command: UpdateUserName
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
account_username: "{{ account_username }}"
account_updatename: "{{ account_updatename }}"
- name: Update user name
redfish_command:
category: Accounts
command: UpdateUserName
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
account_username: "{{ account_username }}"
update_username: "{{ update_username }}"
- name: Update AccountService properties
redfish_command:
category: Accounts
command: UpdateAccountServiceProperties
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
account_properties:
AccountLockoutThreshold: 5
AccountLockoutDuration: 600
- name: Clear Manager Logs with a timeout of 20 seconds
redfish_command:
category: Manager
command: ClearLogs
resource_id: BMC
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
timeout: 20
'''
RETURN = '''
msg:
description: Message with action result or error description
returned: always
type: str
sample: "Action was successful"
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.redfish_utils import RedfishUtils
from ansible.module_utils._text import to_native
# More will be added as module features are expanded
CATEGORY_COMMANDS_ALL = {
"Systems": ["PowerOn", "PowerForceOff", "PowerForceRestart", "PowerGracefulRestart",
"PowerGracefulShutdown", "PowerReboot", "SetOneTimeBoot"],
"Chassis": ["IndicatorLedOn", "IndicatorLedOff", "IndicatorLedBlink"],
"Accounts": ["AddUser", "EnableUser", "DeleteUser", "DisableUser",
"UpdateUserRole", "UpdateUserPassword", "UpdateUserName",
"UpdateAccountServiceProperties"],
"Manager": ["GracefulRestart", "ClearLogs"],
}
def main():
result = {}
module = AnsibleModule(
argument_spec=dict(
category=dict(required=True),
command=dict(required=True, type='list'),
baseuri=dict(required=True),
username=dict(required=True),
password=dict(required=True, no_log=True),
id=dict(aliases=["account_id"]),
new_username=dict(aliases=["account_username"]),
new_password=dict(aliases=["account_password"], no_log=True),
roleid=dict(aliases=["account_roleid"]),
update_username=dict(type='str', aliases=["account_updatename"]),
account_properties=dict(type='dict', default={}),
bootdevice=dict(),
timeout=dict(type='int', default=10),
uefi_target=dict(),
boot_next=dict(),
resource_id=dict()
),
supports_check_mode=False
)
category = module.params['category']
command_list = module.params['command']
# admin credentials used for authentication
creds = {'user': module.params['username'],
'pswd': module.params['password']}
# user to add/modify/delete
user = {'account_id': module.params['id'],
'account_username': module.params['new_username'],
'account_password': module.params['new_password'],
'account_roleid': module.params['roleid'],
'account_updatename': module.params['update_username'],
'account_properties': module.params['account_properties']}
# timeout
timeout = module.params['timeout']
# System, Manager or Chassis ID to modify
resource_id = module.params['resource_id']
# Build root URI
root_uri = "https://" + module.params['baseuri']
rf_utils = RedfishUtils(creds, root_uri, timeout, module,
resource_id=resource_id, data_modification=True)
# Check that Category is valid
if category not in CATEGORY_COMMANDS_ALL:
module.fail_json(msg=to_native("Invalid Category '%s'. Valid Categories = %s" % (category, CATEGORY_COMMANDS_ALL.keys())))
# Check that all commands are valid
for cmd in command_list:
# Fail if even one command given is invalid
if cmd not in CATEGORY_COMMANDS_ALL[category]:
module.fail_json(msg=to_native("Invalid Command '%s'. Valid Commands = %s" % (cmd, CATEGORY_COMMANDS_ALL[category])))
# Organize by Categories / Commands
if category == "Accounts":
ACCOUNTS_COMMANDS = {
"AddUser": rf_utils.add_user,
"EnableUser": rf_utils.enable_user,
"DeleteUser": rf_utils.delete_user,
"DisableUser": rf_utils.disable_user,
"UpdateUserRole": rf_utils.update_user_role,
"UpdateUserPassword": rf_utils.update_user_password,
"UpdateUserName": rf_utils.update_user_name,
"UpdateAccountServiceProperties": rf_utils.update_accountservice_properties
}
# execute only if we find an Account service resource
result = rf_utils._find_accountservice_resource()
if result['ret'] is False:
module.fail_json(msg=to_native(result['msg']))
for command in command_list:
result = ACCOUNTS_COMMANDS[command](user)
elif category == "Systems":
# execute only if we find a System resource
result = rf_utils._find_systems_resource()
if result['ret'] is False:
module.fail_json(msg=to_native(result['msg']))
for command in command_list:
if "Power" in command:
result = rf_utils.manage_system_power(command)
elif command == "SetOneTimeBoot":
result = rf_utils.set_one_time_boot_device(
module.params['bootdevice'],
module.params['uefi_target'],
module.params['boot_next'])
elif category == "Chassis":
result = rf_utils._find_chassis_resource()
if result['ret'] is False:
module.fail_json(msg=to_native(result['msg']))
led_commands = ["IndicatorLedOn", "IndicatorLedOff", "IndicatorLedBlink"]
# Check if more than one led_command is present
num_led_commands = sum([command in led_commands for command in command_list])
if num_led_commands > 1:
result = {'ret': False, 'msg': "Only one IndicatorLed command should be sent at a time."}
else:
for command in command_list:
if command in led_commands:
result = rf_utils.manage_indicator_led(command)
elif category == "Manager":
MANAGER_COMMANDS = {
"GracefulRestart": rf_utils.restart_manager_gracefully,
"ClearLogs": rf_utils.clear_logs
}
# execute only if we find a Manager service resource
result = rf_utils._find_managers_resource()
if result['ret'] is False:
module.fail_json(msg=to_native(result['msg']))
for command in command_list:
result = MANAGER_COMMANDS[command]()
# Return data back or fail with proper message
if result['ret'] is True:
del result['ret']
changed = result.get('changed', True)
module.exit_json(changed=changed, msg='Action was successful')
else:
module.fail_json(msg=to_native(result['msg']))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,377 |
postgresql_set converts value to uppercase if "mb" or "gb" or "tb" is in the string
|
##### SUMMARY
In postgresql_set.py:303 the value to be set is converted to uppercase if it contains "mb", "gb" or "tb".
For example, an archive_command will fail if its case is not preserved.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
postgresql_set.py:303
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
python version = 2.7.13 (default, Sep 26 2018, 18:42:22) [GCC 6.3.0 20170516]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
debian stretch
##### STEPS TO REPRODUCE
```yaml
postgresql_set:
name: 'archive_command'
value: 'test ! -f /mnt/postgres/mb/%f && cp %p /mnt/postgres/mb/%f'
```
##### EXPECTED RESULTS
grep archive_command postgresql.auto.conf:
```
archive_command = 'test ! -f /mnt/postgres/mb/%f && cp %p /mnt/postgres/mb/%f'
```
##### ACTUAL RESULTS
grep archive_command postgresql.auto.conf:
```
archive_command = 'TEST ! -F /MNT/POSTGRES/MB/%F && CP %P /MNT/POSTGRES/MB/%F'
```
```paste below
```
|
https://github.com/ansible/ansible/issues/67377
|
https://github.com/ansible/ansible/pull/67418
|
a4f5c2e9934a178e90b26ccd911de12851a4999e
|
59bcc9f739d40c35ec1f471dbd7f30934bccfd94
| 2020-02-13T09:49:51Z |
python
| 2020-02-15T13:03:53Z |
changelogs/fragments/67418-postgresql_set_converts_value_to_uppercase.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,377 |
postgresql_set converts value to uppercase if "mb" or "gb" or "tb" is in the string
|
##### SUMMARY
In postgresql_set.py:303 the value to be set is converted to uppercase if it contains "mb", "gb" or "tb".
For example, an archive_command will fail if its case is not preserved.
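The conversion can be reproduced outside Ansible with the module's own unit list and matching loop (POSSIBLE_SIZE_UNITS in postgresql_set.py); the sample value is the archive_command from the steps to reproduce below:
```python
# Mirrors the unit-matching loop from postgresql_set.py
POSSIBLE_SIZE_UNITS = ("mb", "gb", "tb")

value = 'test ! -f /mnt/postgres/mb/%f && cp %p /mnt/postgres/mb/%f'
for unit in POSSIBLE_SIZE_UNITS:
    if unit in value:
        value = value.upper()

print(value)
# TEST ! -F /MNT/POSTGRES/MB/%F && CP %P /MNT/POSTGRES/MB/%F
```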
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
postgresql_set.py:303
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
python version = 2.7.13 (default, Sep 26 2018, 18:42:22) [GCC 6.3.0 20170516]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
debian stretch
##### STEPS TO REPRODUCE
```yaml
postgresql_set:
name: 'archive_command'
value: 'test ! -f /mnt/postgres/mb/%f && cp %p /mnt/postgres/mb/%f'
```
##### EXPECTED RESULTS
grep archive_command postgresql.auto.conf:
```
archive_command = 'test ! -f /mnt/postgres/mb/%f && cp %p /mnt/postgres/mb/%f'
```
##### ACTUAL RESULTS
grep archive_command postgresql.auto.conf:
```
archive_command = 'TEST ! -F /MNT/POSTGRES/MB/%F && CP %P /MNT/POSTGRES/MB/%F'
```
```paste below
```
|
https://github.com/ansible/ansible/issues/67377
|
https://github.com/ansible/ansible/pull/67418
|
a4f5c2e9934a178e90b26ccd911de12851a4999e
|
59bcc9f739d40c35ec1f471dbd7f30934bccfd94
| 2020-02-13T09:49:51Z |
python
| 2020-02-15T13:03:53Z |
lib/ansible/modules/database/postgresql/postgresql_set.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Andrew Klychkov (@Andersson007) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: postgresql_set
short_description: Change a PostgreSQL server configuration parameter
description:
- Allows changing a PostgreSQL server configuration parameter.
- The module uses the ALTER SYSTEM command and applies changes by reloading the server configuration.
- ALTER SYSTEM is used for changing server configuration parameters across the entire database cluster.
- It can be more convenient and safe than the traditional method of manually editing the postgresql.conf file.
- ALTER SYSTEM writes the given parameter setting to the $PGDATA/postgresql.auto.conf file,
which is read in addition to postgresql.conf.
- The module allows resetting a parameter to its boot_val (cluster initial value) with I(reset=yes), or removing the parameter
string from postgresql.auto.conf and reloading the configuration with I(value=default) (settings with postmaster context require a restart).
- After a change, the previous and the new parameter value and other information are available in the Ansible output
through the returned values and the M(debug) module.
version_added: '2.8'
options:
name:
description:
- Name of PostgreSQL server parameter.
type: str
required: true
value:
description:
- Parameter value to set.
- To remove the parameter string from postgresql.auto.conf and
reload the server configuration, pass I(value=default).
With I(value=default) the task always reports a changed state.
type: str
reset:
description:
- Restore parameter to initial state (boot_val). Mutually exclusive with I(value).
type: bool
default: false
session_role:
description:
- Switch to session_role after connecting. The specified session_role must
be a role that the current login_user is a member of.
- Permissions checking for SQL commands is carried out as though
the session_role were the one that had logged in originally.
type: str
db:
description:
- Name of database to connect.
type: str
aliases:
- login_db
notes:
- Supported version of PostgreSQL is 9.4 and later.
- Pay attention that changing a setting with 'postmaster' context can report changed as true
even when nothing effectively changes, because the same value may be presented in
several different forms, for example 1024MB or 1GB, while the pg_settings
system view may store it as a number of 8kB pages (131072 pages * 8 kB = 1 GB).
The final check cannot compare the parameter value because the server was
not restarted and the value in pg_settings is not updated yet.
- For some parameters restart of PostgreSQL server is required.
See official documentation U(https://www.postgresql.org/docs/current/view-pg-settings.html).
seealso:
- module: postgresql_info
- name: PostgreSQL server configuration
description: General information about PostgreSQL server configuration.
link: https://www.postgresql.org/docs/current/runtime-config.html
- name: PostgreSQL view pg_settings reference
description: Complete reference of the pg_settings view documentation.
link: https://www.postgresql.org/docs/current/view-pg-settings.html
- name: PostgreSQL ALTER SYSTEM command reference
description: Complete reference of the ALTER SYSTEM command documentation.
link: https://www.postgresql.org/docs/current/sql-altersystem.html
author:
- Andrew Klychkov (@Andersson007)
extends_documentation_fragment: postgres
'''
EXAMPLES = r'''
- name: Restore wal_keep_segments parameter to initial state
postgresql_set:
name: wal_keep_segments
reset: yes
# Set work_mem parameter to 32MB, show what has been changed and whether a restart is required
# (output example: "msg": "work_mem 4MB >> 64MB restart_req: False")
- name: Set work mem parameter
postgresql_set:
name: work_mem
value: 32mb
register: set
- debug:
msg: "{{ set.name }} {{ set.prev_val_pretty }} >> {{ set.value_pretty }} restart_req: {{ set.restart_required }}"
when: set.changed
# Note that a restart of the PostgreSQL server is required for some parameters.
# In this situation you see the same parameter in prev_val_pretty and value_pretty, but 'changed=True'
# (if you passed a value different from the current server setting).
- name: Set log_min_duration_statement parameter to 1 second
postgresql_set:
name: log_min_duration_statement
value: 1s
- name: Set wal_log_hints parameter to default value (remove parameter from postgresql.auto.conf)
postgresql_set:
name: wal_log_hints
value: default
'''
RETURN = r'''
name:
description: Name of PostgreSQL server parameter.
returned: always
type: str
sample: 'shared_buffers'
restart_required:
description: Information about parameter current state.
returned: always
type: bool
sample: true
prev_val_pretty:
description: Information about previous state of the parameter.
returned: always
type: str
sample: '4MB'
value_pretty:
description: Information about current state of the parameter.
returned: always
type: str
sample: '64MB'
value:
description:
- Dictionary that contains the current parameter value (at the time of playbook finish).
- Pay attention that for real change some parameters restart of PostgreSQL server is required.
- Returns the current value in the check mode.
returned: always
type: dict
sample: { "value": 67108864, "unit": "b" }
context:
description:
- PostgreSQL setting context.
returned: always
type: str
sample: user
'''
try:
from psycopg2.extras import DictCursor
except Exception:
# psycopg2 is checked by connect_to_db()
# from ansible.module_utils.postgres
pass
from copy import deepcopy
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.postgres import (
connect_to_db,
get_conn_params,
postgres_common_argument_spec,
)
from ansible.module_utils._text import to_native
PG_REQ_VER = 90400
# To allow setting values like 1mb instead of 1MB, etc.:
POSSIBLE_SIZE_UNITS = ("mb", "gb", "tb")
# ===========================================
# PostgreSQL module specific support methods.
#
def param_get(cursor, module, name):
query = ("SELECT name, setting, unit, context, boot_val "
"FROM pg_settings WHERE name = %(name)s")
try:
cursor.execute(query, {'name': name})
info = cursor.fetchall()
cursor.execute("SHOW %s" % name)
val = cursor.fetchone()
except Exception as e:
module.fail_json(msg="Unable to get %s value due to : %s" % (name, to_native(e)))
raw_val = info[0][1]
unit = info[0][2]
context = info[0][3]
boot_val = info[0][4]
if val[0] == 'True':
val[0] = 'on'
elif val[0] == 'False':
val[0] = 'off'
if unit == 'kB':
if int(raw_val) > 0:
raw_val = int(raw_val) * 1024
if int(boot_val) > 0:
boot_val = int(boot_val) * 1024
unit = 'b'
elif unit == 'MB':
if int(raw_val) > 0:
raw_val = int(raw_val) * 1024 * 1024
if int(boot_val) > 0:
boot_val = int(boot_val) * 1024 * 1024
unit = 'b'
return (val[0], raw_val, unit, boot_val, context)
def pretty_to_bytes(pretty_val):
# The function returns a value in bytes
# if the value contains 'B', 'kB', 'MB', 'GB', 'TB'.
# Otherwise it returns the passed argument.
val_in_bytes = None
if 'kB' in pretty_val:
num_part = int(''.join(d for d in pretty_val if d.isdigit()))
val_in_bytes = num_part * 1024
elif 'MB' in pretty_val.upper():
num_part = int(''.join(d for d in pretty_val if d.isdigit()))
val_in_bytes = num_part * 1024 * 1024
elif 'GB' in pretty_val.upper():
num_part = int(''.join(d for d in pretty_val if d.isdigit()))
val_in_bytes = num_part * 1024 * 1024 * 1024
elif 'TB' in pretty_val.upper():
num_part = int(''.join(d for d in pretty_val if d.isdigit()))
val_in_bytes = num_part * 1024 * 1024 * 1024 * 1024
elif 'B' in pretty_val.upper():
num_part = int(''.join(d for d in pretty_val if d.isdigit()))
val_in_bytes = num_part
else:
return pretty_val
return val_in_bytes
def param_set(cursor, module, name, value, context):
try:
if str(value).lower() == 'default':
query = "ALTER SYSTEM SET %s = DEFAULT" % name
else:
query = "ALTER SYSTEM SET %s = '%s'" % (name, value)
cursor.execute(query)
if context != 'postmaster':
cursor.execute("SELECT pg_reload_conf()")
except Exception as e:
module.fail_json(msg="Unable to get %s value due to : %s" % (name, to_native(e)))
return True
# ===========================================
# Module execution.
#
def main():
argument_spec = postgres_common_argument_spec()
argument_spec.update(
name=dict(type='str', required=True),
db=dict(type='str', aliases=['login_db']),
value=dict(type='str'),
reset=dict(type='bool'),
session_role=dict(type='str'),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
name = module.params["name"]
value = module.params["value"]
reset = module.params["reset"]
    # Allow passing values like 1mb instead of 1MB, etc.:
if value:
for unit in POSSIBLE_SIZE_UNITS:
if unit in value:
value = value.upper()
if value and reset:
module.fail_json(msg="%s: value and reset params are mutually exclusive" % name)
if not value and not reset:
module.fail_json(msg="%s: at least one of value or reset param must be specified" % name)
conn_params = get_conn_params(module, module.params, warn_db_default=False)
db_connection = connect_to_db(module, conn_params, autocommit=True)
cursor = db_connection.cursor(cursor_factory=DictCursor)
kw = {}
# Check server version (needs 9.4 or later):
ver = db_connection.server_version
if ver < PG_REQ_VER:
module.warn("PostgreSQL is %s version but %s or later is required" % (ver, PG_REQ_VER))
kw = dict(
changed=False,
restart_required=False,
value_pretty="",
prev_val_pretty="",
value={"value": "", "unit": ""},
)
kw['name'] = name
db_connection.close()
module.exit_json(**kw)
# Set default returned values:
restart_required = False
changed = False
kw['name'] = name
kw['restart_required'] = False
# Get info about param state:
res = param_get(cursor, module, name)
current_value = res[0]
raw_val = res[1]
unit = res[2]
boot_val = res[3]
context = res[4]
if value == 'True':
value = 'on'
elif value == 'False':
value = 'off'
kw['prev_val_pretty'] = current_value
kw['value_pretty'] = deepcopy(kw['prev_val_pretty'])
kw['context'] = context
# Do job
if context == "internal":
module.fail_json(msg="%s: cannot be changed (internal context). See "
"https://www.postgresql.org/docs/current/runtime-config-preset.html" % name)
if context == "postmaster":
restart_required = True
# If check_mode, just compare and exit:
if module.check_mode:
if pretty_to_bytes(value) == pretty_to_bytes(current_value):
kw['changed'] = False
else:
kw['value_pretty'] = value
kw['changed'] = True
        # In any case, return the current raw value in check_mode:
kw['value'] = dict(
value=raw_val,
unit=unit,
)
kw['restart_required'] = restart_required
module.exit_json(**kw)
# Set param:
if value and value != current_value:
changed = param_set(cursor, module, name, value, context)
kw['value_pretty'] = value
# Reset param:
elif reset:
if raw_val == boot_val:
# nothing to change, exit:
kw['value'] = dict(
value=raw_val,
unit=unit,
)
module.exit_json(**kw)
changed = param_set(cursor, module, name, boot_val, context)
if restart_required:
module.warn("Restart of PostgreSQL is required for setting %s" % name)
cursor.close()
db_connection.close()
# Reconnect and recheck current value:
if context in ('sighup', 'superuser-backend', 'backend', 'superuser', 'user'):
db_connection = connect_to_db(module, conn_params, autocommit=True)
cursor = db_connection.cursor(cursor_factory=DictCursor)
res = param_get(cursor, module, name)
# f_ means 'final'
f_value = res[0]
f_raw_val = res[1]
if raw_val == f_raw_val:
changed = False
else:
changed = True
kw['value_pretty'] = f_value
kw['value'] = dict(
value=f_raw_val,
unit=unit,
)
cursor.close()
db_connection.close()
kw['changed'] = changed
kw['restart_required'] = restart_required
module.exit_json(**kw)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,377 |
postgresql_set converts value to uppercase if "mb" or "gb" or "tb" is in the string
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
In postgresql_set.py:303 the value to be set is converted to uppercase if it contains "mb", "gb" or "tb".
For example, an archive command will fail if its case is not preserved, as sketched below.
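A minimal standalone reproduction of that conversion (it just mirrors the loop in the module's main(); the paths are only illustrative):

```python
# Reproduces the size-unit uppercasing outside the module.
POSSIBLE_SIZE_UNITS = ("mb", "gb", "tb")

value = 'test ! -f /mnt/postgres/mb/%f && cp %p /mnt/postgres/mb/%f'
for unit in POSSIBLE_SIZE_UNITS:
    if unit in value:
        # "mb" matches the /mb/ path component, so the whole command gets uppercased
        value = value.upper()

print(value)  # TEST ! -F /MNT/POSTGRES/MB/%F && CP %P /MNT/POSTGRES/MB/%F
```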
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
postgresql_set.py:303
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
python version = 2.7.13 (default, Sep 26 2018, 18:42:22) [GCC 6.3.0 20170516]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
debian stretch
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
postgresql_set:
name: 'archive_command'
value: 'test ! -f /mnt/postgres/mb/%f && cp %p /mnt/postgres/mb/%f'
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
grep archive_command postgresql.auto.conf:
```
archive_command = 'test ! -f /mnt/postgres/mb/%f && cp %p /mnt/postgres/mb/%f'
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
grep archive_command postgresql.auto.conf:
```
archive_command = 'TEST ! -F /MNT/POSTGRES/MB/%F && CP %P /MNT/POSTGRES/MB/%F'
```
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/67377
|
https://github.com/ansible/ansible/pull/67418
|
a4f5c2e9934a178e90b26ccd911de12851a4999e
|
59bcc9f739d40c35ec1f471dbd7f30934bccfd94
| 2020-02-13T09:49:51Z |
python
| 2020-02-15T13:03:53Z |
test/integration/targets/postgresql_set/tasks/postgresql_set_initial.yml
|
# Test code for the postgresql_set module
# Copyright: (c) 2019, Andrew Klychkov (@Andersson007) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#
# Note: assertions differ for Ubuntu 16.04 and FreeBSD because they do not behave
# correctly for these tests; there are some oddities specific to Shippable CI.
# However, everything was checked manually (including Ubuntu 16.04 and FreeBSD)
# and worked as expected.
- vars:
task_parameters: &task_parameters
become_user: '{{ pg_user }}'
become: yes
pg_parameters: &pg_parameters
login_user: '{{ pg_user }}'
login_db: postgres
block:
- name: postgresql_set - preparation to the next step
<<: *task_parameters
become_user: "{{ pg_user }}"
become: yes
postgresql_set:
<<: *pg_parameters
name: work_mem
reset: yes
#####################
# Testing check_mode:
- name: postgresql_set - get work_mem initial value
<<: *task_parameters
postgresql_query:
<<: *pg_parameters
query: SHOW work_mem
register: before
- name: postgresql_set - set work_mem (restart is not required), check_mode
<<: *task_parameters
postgresql_set:
<<: *pg_parameters
name: work_mem
value: 12MB
register: set_wm
check_mode: yes
- assert:
that:
- set_wm.name == 'work_mem'
- set_wm.changed == true
- set_wm.prev_val_pretty == before.query_result[0].work_mem
- set_wm.value_pretty == '12MB'
- set_wm.restart_required == false
- name: postgresql_set - get work_mem value to check, must be the same as initial
<<: *task_parameters
postgresql_query:
<<: *pg_parameters
query: SHOW work_mem
register: after
- assert:
that:
- before.query_result[0].work_mem == after.query_result[0].work_mem
######
#
- name: postgresql_set - set work_mem (restart is not required)
<<: *task_parameters
postgresql_set:
<<: *pg_parameters
name: work_mem
value: 12MB
register: set_wm
- assert:
that:
- set_wm.name == 'work_mem'
- set_wm.changed == true
- set_wm.value_pretty == '12MB'
- set_wm.value_pretty != set_wm.prev_val_pretty
- set_wm.restart_required == false
- set_wm.value.value == 12582912
- set_wm.value.unit == 'b'
when:
- ansible_distribution != "Ubuntu"
- ansible_distribution_major_version != '16'
- ansible_distribution != "FreeBSD"
- assert:
that:
- set_wm.name == 'work_mem'
- set_wm.changed == true
- set_wm.restart_required == false
when:
- ansible_distribution == "Ubuntu"
- ansible_distribution_major_version == '16'
- name: postgresql_set - reset work_mem (restart is not required)
<<: *task_parameters
postgresql_set:
<<: *pg_parameters
name: work_mem
reset: yes
register: reset_wm
- assert:
that:
- reset_wm.name == 'work_mem'
- reset_wm.changed == true
- reset_wm.value_pretty != reset_wm.prev_val_pretty
- reset_wm.restart_required == false
- reset_wm.value.value != '12582912'
when:
- ansible_distribution != "Ubuntu"
- ansible_distribution_major_version != '16'
- ansible_distribution != "FreeBSD"
- assert:
that:
- reset_wm.name == 'work_mem'
- reset_wm.changed == true
- reset_wm.restart_required == false
when:
- ansible_distribution == "Ubuntu"
- ansible_distribution_major_version == '16'
- name: postgresql_set - reset work_mem again to check that nothing changed (restart is not required)
<<: *task_parameters
postgresql_set:
<<: *pg_parameters
name: work_mem
reset: yes
register: reset_wm2
- assert:
that:
- reset_wm2.name == 'work_mem'
- reset_wm2.changed == false
- reset_wm2.value_pretty == reset_wm2.prev_val_pretty
- reset_wm2.restart_required == false
when:
- ansible_distribution != "Ubuntu"
- ansible_distribution_major_version != '16'
- assert:
that:
- reset_wm2.name == 'work_mem'
- reset_wm2.changed == false
- reset_wm2.restart_required == false
when:
- ansible_distribution == "Ubuntu"
- ansible_distribution_major_version == '16'
- name: postgresql_set - preparation to the next step
<<: *task_parameters
postgresql_set:
<<: *pg_parameters
name: work_mem
value: 14MB
- name: postgresql_set - set work_mem to initial state (restart is not required)
<<: *task_parameters
postgresql_set:
<<: *pg_parameters
name: work_mem
value: default
register: def_wm
- assert:
that:
- def_wm.name == 'work_mem'
- def_wm.changed == true
- def_wm.value_pretty != def_wm.prev_val_pretty
- def_wm.restart_required == false
- def_wm.value.value != '14680064'
when:
- ansible_distribution != "Ubuntu"
- ansible_distribution_major_version != '16'
- ansible_distribution != 'FreeBSD'
- assert:
that:
- def_wm.name == 'work_mem'
- def_wm.changed == true
- def_wm.restart_required == false
when:
- ansible_distribution == "Ubuntu"
- ansible_distribution_major_version == '16'
- ansible_distribution != 'FreeBSD'
- name: postgresql_set - set shared_buffers (restart is required)
<<: *task_parameters
postgresql_set:
<<: *pg_parameters
name: shared_buffers
value: 111MB
register: set_shb
- assert:
that:
- set_shb.name == 'shared_buffers'
- set_shb.changed == true
- set_shb.restart_required == true
# We don't check value.unit because it is none
- name: postgresql_set - set autovacuum (enabled by default, restart is not required)
<<: *task_parameters
postgresql_set:
<<: *pg_parameters
name: autovacuum
value: off
register: set_aut
- assert:
that:
- set_aut.name == 'autovacuum'
- set_aut.changed == true
- set_aut.restart_required == false
- set_aut.value.value == 'off'
# Test check_mode, step 1. At the previous test we set autovacuum = 'off'
- name: postgresql - try to change autovacuum again in check_mode
<<: *task_parameters
postgresql_set:
<<: *pg_parameters
name: autovacuum
value: on
register: set_aut
check_mode: yes
- assert:
that:
- set_aut.name == 'autovacuum'
- set_aut.changed == true
- set_aut.restart_required == false
- set_aut.value.value == 'off'
# Test check_mode, step 2
- name: postgresql - check that autovacuum wasn't actually changed after change in check_mode
<<: *task_parameters
postgresql_set:
<<: *pg_parameters
name: autovacuum
value: off
register: set_aut
check_mode: yes
- assert:
that:
- set_aut.name == 'autovacuum'
- set_aut.changed == false
- set_aut.restart_required == false
- set_aut.value.value == 'off'
# Additional check by SQL query:
- name: postgresql_set - get autovacuum value to check, must be off
<<: *task_parameters
postgresql_query:
<<: *pg_parameters
query: SHOW autovacuum
register: result
- assert:
that:
- result.query_result[0].autovacuum == 'off'
# Test check_mode, step 3. It is different from
# the prev test - it runs without check_mode: yes.
# Before the check_mode tests autovacuum was off
- name: postgresql - check that autovacuum wasn't actually changed after change in check_mode
<<: *task_parameters
postgresql_set:
<<: *pg_parameters
name: autovacuum
value: off
register: set_aut
- assert:
that:
- set_aut.name == 'autovacuum'
- set_aut.changed == false
- set_aut.restart_required == false
- set_aut.value.value == 'off'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,267 |
open_iscsi: string field conversion warning for the default port argument
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The default `port` value of the `open_iscsi` module triggers an int-to-string type conversion warning, because the option is declared as `type: str` while its default is the integer `3260`.
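A minimal sketch of the two obvious ways to make the default match the declared type (this is only an illustration, not necessarily what the linked PR does):

```python
# Option specs written as plain dicts, outside AnsibleModule, just to show the two variants.
port_as_str = dict(type='str', default='3260')  # keep type str and quote the default
port_as_int = dict(type='int', default=3260)    # or declare the port as an int

print(port_as_str)
print(port_as_int)
```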
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
- open_iscsi
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.3
config file = /Users/pascal/Documents/02_business/01_confirm/confirm-git/infrastructure/ansible.cfg
configured module search path = [u'/Users/pascal/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/pascal/Documents/02_business/01_confirm/confirm-git/infrastructure/.venv/lib/python2.7/site-packages/ansible
executable location = /Users/pascal/Documents/02_business/01_confirm/confirm-git/infrastructure/.venv/bin/ansible
python version = 2.7.16 (default, Nov 9 2019, 05:55:08) [GCC 4.2.1 Compatible Apple LLVM 11.0.0 (clang-1100.0.32.4) (-macos10.15-objc-s
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/Users/pascal/Documents/02_business/01_confirm/confirm-git/infrastructure/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/Users/pascal/Documents/02_business/01_confirm/confirm-git/infrastructure/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s
ANSIBLE_SSH_CONTROL_PATH(/Users/pascal/Documents/02_business/01_confirm/confirm-git/infrastructure/ansible.cfg) = %(directory)s/ansible-ssh-%%h-%%p-%%r
DEFAULT_BECOME(/Users/pascal/Documents/02_business/01_confirm/confirm-git/infrastructure/ansible.cfg) = True
DEFAULT_FORKS(/Users/pascal/Documents/02_business/01_confirm/confirm-git/infrastructure/ansible.cfg) = 20
DEFAULT_HOST_LIST(/Users/pascal/Documents/02_business/01_confirm/confirm-git/infrastructure/ansible.cfg) = [u'/Users/pascal/Documents/02_business/01_confirm/confirm-git/infrastructure/hosts']
DEFAULT_MANAGED_STR(/Users/pascal/Documents/02_business/01_confirm/confirm-git/infrastructure/ansible.cfg) = !!! WARNING: This file is managed by Ansible, DON'T EDIT it manually !!!
DEFAULT_ROLES_PATH(/Users/pascal/Documents/02_business/01_confirm/confirm-git/infrastructure/ansible.cfg) = [u'/Users/pascal/Documents/02_business/01_confirm/confirm-git/infrastructure/roles']
INTERPRETER_PYTHON(/Users/pascal/Documents/02_business/01_confirm/confirm-git/infrastructure/ansible.cfg) = /usr/bin/python3
RETRY_FILES_ENABLED(/Users/pascal/Documents/02_business/01_confirm/confirm-git/infrastructure/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
╰─ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: perform a discovery and show available target nodes
open_iscsi:
show_nodes: yes
discover: yes
portal: '{{ iscsi_server }}'
tags:
- iscsi
- config
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```paste below
TASK [iscsi : perform a discovery and show available target nodes] *********************************************************************************************************************************************************************************************************************************************************
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [iscsi : perform a discovery and show available target nodes] *********************************************************************************************************************************************************************************************************************************************************
[WARNING]: The value 3260 (type int) in a string field was converted to '3260' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.
```
|
https://github.com/ansible/ansible/issues/67267
|
https://github.com/ansible/ansible/pull/67270
|
59bcc9f739d40c35ec1f471dbd7f30934bccfd94
|
24ce97a49b0a98bec777f77d34241bb51c12e648
| 2020-02-10T12:21:32Z |
python
| 2020-02-15T13:06:16Z |
lib/ansible/modules/system/open_iscsi.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2013, Serge van Ginderachter <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: open_iscsi
author:
- Serge van Ginderachter (@srvg)
version_added: "1.4"
short_description: Manage iSCSI targets with Open-iSCSI
description:
- Discover targets on a given portal, (dis)connect targets, mark targets for
  manual or automatic startup, and return device nodes of connected targets.
requirements:
- open_iscsi library and tools (iscsiadm)
options:
portal:
description:
- The IP address of the iSCSI target.
type: str
aliases: [ ip ]
port:
description:
- The port on which the iSCSI target process listens.
type: str
default: 3260
target:
description:
- The iSCSI target name.
type: str
aliases: [ name, targetname ]
login:
description:
- Whether the target node should be connected.
type: bool
aliases: [ state ]
node_auth:
description:
- The value for C(discovery.sendtargets.auth.authmethod).
type: str
default: CHAP
node_user:
description:
- The value for C(discovery.sendtargets.auth.username).
type: str
node_pass:
description:
- The value for C(discovery.sendtargets.auth.password).
type: str
auto_node_startup:
description:
- Whether the target node should be automatically connected at startup.
type: bool
aliases: [ automatic ]
discover:
description:
- Whether the list of target nodes on the portal should be
(re)discovered and added to the persistent iSCSI database.
    - Keep in mind that C(iscsiadm) discovery resets configuration, like C(node.startup)
      to manual; hence, when combined with C(auto_node_startup=yes), this will always return
      a changed state.
type: bool
show_nodes:
description:
- Whether the list of nodes in the persistent iSCSI database should be returned by the module.
type: bool
'''
EXAMPLES = r'''
- name: Perform a discovery on 10.1.2.3 and show available target nodes
open_iscsi:
show_nodes: yes
discover: yes
portal: 10.1.2.3
# NOTE: Only works if exactly one target is exported to the initiator
- name: Discover targets on portal and login to the one available
open_iscsi:
portal: '{{ iscsi_target }}'
login: yes
discover: yes
- name: Connect to the named target, after updating the local persistent database (cache)
open_iscsi:
login: yes
target: iqn.1986-03.com.sun:02:f8c1f9e0-c3ec-ec84-c9c9-8bfb0cd5de3d
- name: Disconnect from the cached named target
open_iscsi:
login: no
target: iqn.1986-03.com.sun:02:f8c1f9e0-c3ec-ec84-c9c9-8bfb0cd5de3d
'''
import glob
import os
import time
from ansible.module_utils.basic import AnsibleModule
ISCSIADM = 'iscsiadm'
def compare_nodelists(l1, l2):
l1.sort()
l2.sort()
return l1 == l2
def iscsi_get_cached_nodes(module, portal=None):
cmd = '%s --mode node' % iscsiadm_cmd
(rc, out, err) = module.run_command(cmd)
if rc == 0:
lines = out.splitlines()
nodes = []
for line in lines:
# line format is "ip:port,target_portal_group_tag targetname"
parts = line.split()
if len(parts) > 2:
module.fail_json(msg='error parsing output', cmd=cmd)
target = parts[1]
parts = parts[0].split(':')
target_portal = parts[0]
if portal is None or portal == target_portal:
nodes.append(target)
    # older versions of iscsiadm don't have nice return codes
# for newer versions see iscsiadm(8); also usr/iscsiadm.c for details
# err can contain [N|n]o records...
elif rc == 21 or (rc == 255 and "o records found" in err):
nodes = []
else:
module.fail_json(cmd=cmd, rc=rc, msg=err)
return nodes
def iscsi_discover(module, portal, port):
cmd = '%s --mode discovery --type sendtargets --portal %s:%s' % (iscsiadm_cmd, portal, port)
(rc, out, err) = module.run_command(cmd)
if rc > 0:
module.fail_json(cmd=cmd, rc=rc, msg=err)
def target_loggedon(module, target):
cmd = '%s --mode session' % iscsiadm_cmd
(rc, out, err) = module.run_command(cmd)
if rc == 0:
return target in out
elif rc == 21:
return False
else:
module.fail_json(cmd=cmd, rc=rc, msg=err)
def target_login(module, target, portal=None, port=None):
node_auth = module.params['node_auth']
node_user = module.params['node_user']
node_pass = module.params['node_pass']
if node_user:
params = [('node.session.auth.authmethod', node_auth),
('node.session.auth.username', node_user),
('node.session.auth.password', node_pass)]
for (name, value) in params:
cmd = '%s --mode node --targetname %s --op=update --name %s --value %s' % (iscsiadm_cmd, target, name, value)
(rc, out, err) = module.run_command(cmd)
if rc > 0:
module.fail_json(cmd=cmd, rc=rc, msg=err)
cmd = '%s --mode node --targetname %s --login' % (iscsiadm_cmd, target)
if portal is not None and port is not None:
cmd += ' --portal %s:%s' % (portal, port)
(rc, out, err) = module.run_command(cmd)
if rc > 0:
module.fail_json(cmd=cmd, rc=rc, msg=err)
def target_logout(module, target):
cmd = '%s --mode node --targetname %s --logout' % (iscsiadm_cmd, target)
(rc, out, err) = module.run_command(cmd)
if rc > 0:
module.fail_json(cmd=cmd, rc=rc, msg=err)
def target_device_node(module, target):
    # if anyone knows a better way to find out which device nodes get created for
    # a given target...
devices = glob.glob('/dev/disk/by-path/*%s*' % target)
devdisks = []
for dev in devices:
# exclude partitions
if "-part" not in dev:
devdisk = os.path.realpath(dev)
# only add once (multi-path?)
if devdisk not in devdisks:
devdisks.append(devdisk)
return devdisks
def target_isauto(module, target):
cmd = '%s --mode node --targetname %s' % (iscsiadm_cmd, target)
(rc, out, err) = module.run_command(cmd)
if rc == 0:
lines = out.splitlines()
for line in lines:
if 'node.startup' in line:
return 'automatic' in line
return False
else:
module.fail_json(cmd=cmd, rc=rc, msg=err)
def target_setauto(module, target):
cmd = '%s --mode node --targetname %s --op=update --name node.startup --value automatic' % (iscsiadm_cmd, target)
(rc, out, err) = module.run_command(cmd)
if rc > 0:
module.fail_json(cmd=cmd, rc=rc, msg=err)
def target_setmanual(module, target):
cmd = '%s --mode node --targetname %s --op=update --name node.startup --value manual' % (iscsiadm_cmd, target)
(rc, out, err) = module.run_command(cmd)
if rc > 0:
module.fail_json(cmd=cmd, rc=rc, msg=err)
def main():
# load ansible module object
module = AnsibleModule(
argument_spec=dict(
# target
portal=dict(type='str', aliases=['ip']),
port=dict(type='str', default=3260),
target=dict(type='str', aliases=['name', 'targetname']),
node_auth=dict(type='str', default='CHAP'),
node_user=dict(type='str'),
node_pass=dict(type='str', no_log=True),
# actions
login=dict(type='bool', aliases=['state']),
auto_node_startup=dict(type='bool', aliases=['automatic']),
discover=dict(type='bool', default=False),
show_nodes=dict(type='bool', default=False),
),
required_together=[['discover_user', 'discover_pass'],
['node_user', 'node_pass']],
supports_check_mode=True,
)
global iscsiadm_cmd
iscsiadm_cmd = module.get_bin_path('iscsiadm', required=True)
# parameters
portal = module.params['portal']
target = module.params['target']
port = module.params['port']
login = module.params['login']
automatic = module.params['auto_node_startup']
discover = module.params['discover']
show_nodes = module.params['show_nodes']
check = module.check_mode
cached = iscsi_get_cached_nodes(module, portal)
# return json dict
result = {}
result['changed'] = False
if discover:
if portal is None:
module.fail_json(msg="Need to specify at least the portal (ip) to discover")
elif check:
nodes = cached
else:
iscsi_discover(module, portal, port)
nodes = iscsi_get_cached_nodes(module, portal)
if not compare_nodelists(cached, nodes):
result['changed'] |= True
result['cache_updated'] = True
else:
nodes = cached
if login is not None or automatic is not None:
if target is None:
if len(nodes) > 1:
module.fail_json(msg="Need to specify a target")
else:
target = nodes[0]
else:
# check given target is in cache
check_target = False
for node in nodes:
if node == target:
check_target = True
break
if not check_target:
module.fail_json(msg="Specified target not found")
if show_nodes:
result['nodes'] = nodes
if login is not None:
loggedon = target_loggedon(module, target)
if (login and loggedon) or (not login and not loggedon):
result['changed'] |= False
if login:
result['devicenodes'] = target_device_node(module, target)
elif not check:
if login:
target_login(module, target, portal, port)
# give udev some time
time.sleep(1)
result['devicenodes'] = target_device_node(module, target)
else:
target_logout(module, target)
result['changed'] |= True
result['connection_changed'] = True
else:
result['changed'] |= True
result['connection_changed'] = True
if automatic is not None:
isauto = target_isauto(module, target)
if (automatic and isauto) or (not automatic and not isauto):
result['changed'] |= False
result['automatic_changed'] = False
elif not check:
if automatic:
target_setauto(module, target)
else:
target_setmanual(module, target)
result['changed'] |= True
result['automatic_changed'] = True
else:
result['changed'] |= True
result['automatic_changed'] = True
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,537 |
SyntaxWarning over comparison of literals using is in cobbler module.
|
##### SUMMARY
A SyntaxWarning is emitted for using `is` to compare against a literal. The fix is simple and is a good beginner issue; see the sketch after the snippet below.
```
contrib/inventory/cobbler.py:218: SyntaxWarning: "is not" with a literal. Did you mean "!="?
if this_dns_name is not None and this_dns_name is not "":
```
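A minimal sketch of the value comparison the warning is asking for (the variable value here is only illustrative):

```python
# "is not" tests object identity and only appears to work for interned literals;
# "!=" tests equality, which is what the check in cobbler.py actually means.
this_dns_name = ""  # e.g. taken from ivalue.get('dns_name', None) in update_cache()

if this_dns_name is not None and this_dns_name != "":
    print("usable dns_name:", this_dns_name)
else:
    print("empty or missing dns_name, host skipped")
```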
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
cobbler
##### ANSIBLE VERSION
devel branch
|
https://github.com/ansible/ansible/issues/66537
|
https://github.com/ansible/ansible/pull/66543
|
717b7fee9f369d5b588c1e82cd2c80300c4e8caf
|
c2ad25020b5fa2f334d61581fb936d3fa58c3280
| 2020-01-16T16:39:30Z |
python
| 2020-02-15T13:19:17Z |
contrib/inventory/cobbler.py
|
#!/usr/bin/env python
"""
Cobbler external inventory script
=================================
Ansible has a feature where instead of reading from /etc/ansible/hosts
as a text file, it can query external programs to obtain the list
of hosts, groups the hosts are in, and even variables to assign to each host.
To use this, copy this file over /etc/ansible/hosts and chmod +x the file.
This, more or less, allows you to keep one central database containing
info about all of your managed instances.
This script is an example of sourcing that data from Cobbler
(https://cobbler.github.io). With cobbler each --mgmt-class in cobbler
will correspond to a group in Ansible, and --ks-meta variables will be
passed down for use in templates or even in argument lines.
NOTE: The cobbler system names will not be used. Make sure a
cobbler --dns-name is set for each cobbler system. If a system
appears with two DNS names we do not add it twice because we don't want
ansible talking to it twice. The first one found will be used. If no
--dns-name is set the system will NOT be visible to ansible. We do
not add cobbler system names because there is no requirement in cobbler
that those correspond to addresses.
Tested with Cobbler 2.0.11.
Changelog:
- 2015-06-21 dmccue: Modified to support run-once _meta retrieval, results in
higher performance at ansible startup. Groups are determined by owner rather than
default mgmt_classes. DNS name determined from hostname. cobbler values are written
to a 'cobbler' fact namespace
- 2013-09-01 pgehres: Refactored implementation to make use of caching and to
limit the number of connections to external cobbler server for performance.
Added use of cobbler.ini file to configure settings. Tested with Cobbler 2.4.0
"""
# (c) 2012, Michael DeHaan <[email protected]>
#
# This file is part of Ansible,
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <https://www.gnu.org/licenses/>.
######################################################################
import argparse
import os
import re
from time import time
import xmlrpclib
import json
from ansible.module_utils.six import iteritems
from ansible.module_utils.six.moves import configparser as ConfigParser
# NOTE -- this file assumes Ansible is being accessed FROM the cobbler
# server, so it does not attempt to login with a username and password.
# this will be addressed in a future version of this script.
orderby_keyname = 'owners' # alternatively 'mgmt_classes'
class CobblerInventory(object):
def __init__(self):
""" Main execution path """
self.conn = None
self.inventory = dict() # A list of groups and the hosts in that group
self.cache = dict() # Details about hosts in the inventory
self.ignore_settings = False # used to only look at env vars for settings.
# Read env vars, read settings, and parse CLI arguments
self.parse_env_vars()
self.read_settings()
self.parse_cli_args()
# Cache
if self.args.refresh_cache:
self.update_cache()
elif not self.is_cache_valid():
self.update_cache()
else:
self.load_inventory_from_cache()
self.load_cache_from_cache()
data_to_print = ""
# Data to print
if self.args.host:
data_to_print += self.get_host_info()
else:
self.inventory['_meta'] = {'hostvars': {}}
for hostname in self.cache:
self.inventory['_meta']['hostvars'][hostname] = {'cobbler': self.cache[hostname]}
data_to_print += self.json_format_dict(self.inventory, True)
print(data_to_print)
def _connect(self):
if not self.conn:
self.conn = xmlrpclib.Server(self.cobbler_host, allow_none=True)
self.token = None
if self.cobbler_username is not None:
self.token = self.conn.login(self.cobbler_username, self.cobbler_password)
def is_cache_valid(self):
""" Determines if the cache files have expired, or if it is still valid """
if os.path.isfile(self.cache_path_cache):
mod_time = os.path.getmtime(self.cache_path_cache)
current_time = time()
if (mod_time + self.cache_max_age) > current_time:
if os.path.isfile(self.cache_path_inventory):
return True
return False
def read_settings(self):
""" Reads the settings from the cobbler.ini file """
if(self.ignore_settings):
return
config = ConfigParser.SafeConfigParser()
config.read(os.path.dirname(os.path.realpath(__file__)) + '/cobbler.ini')
self.cobbler_host = config.get('cobbler', 'host')
self.cobbler_username = None
self.cobbler_password = None
if config.has_option('cobbler', 'username'):
self.cobbler_username = config.get('cobbler', 'username')
if config.has_option('cobbler', 'password'):
self.cobbler_password = config.get('cobbler', 'password')
# Cache related
cache_path = config.get('cobbler', 'cache_path')
self.cache_path_cache = cache_path + "/ansible-cobbler.cache"
self.cache_path_inventory = cache_path + "/ansible-cobbler.index"
self.cache_max_age = config.getint('cobbler', 'cache_max_age')
def parse_env_vars(self):
""" Reads the settings from the environment """
# Env. Vars:
# COBBLER_host
# COBBLER_username
# COBBLER_password
# COBBLER_cache_path
# COBBLER_cache_max_age
# COBBLER_ignore_settings
self.cobbler_host = os.getenv('COBBLER_host', None)
self.cobbler_username = os.getenv('COBBLER_username', None)
self.cobbler_password = os.getenv('COBBLER_password', None)
# Cache related
cache_path = os.getenv('COBBLER_cache_path', None)
if(cache_path is not None):
self.cache_path_cache = cache_path + "/ansible-cobbler.cache"
self.cache_path_inventory = cache_path + "/ansible-cobbler.index"
self.cache_max_age = int(os.getenv('COBBLER_cache_max_age', "30"))
# ignore_settings is used to ignore the settings file, for use in Ansible
# Tower (or AWX inventory scripts and not throw python exceptions.)
if(os.getenv('COBBLER_ignore_settings', False) == "True"):
self.ignore_settings = True
def parse_cli_args(self):
""" Command line argument processing """
parser = argparse.ArgumentParser(description='Produce an Ansible Inventory file based on Cobbler')
parser.add_argument('--list', action='store_true', default=True, help='List instances (default: True)')
parser.add_argument('--host', action='store', help='Get all the variables about a specific instance')
parser.add_argument('--refresh-cache', action='store_true', default=False,
help='Force refresh of cache by making API requests to cobbler (default: False - use cache files)')
self.args = parser.parse_args()
def update_cache(self):
""" Make calls to cobbler and save the output in a cache """
self._connect()
self.groups = dict()
self.hosts = dict()
if self.token is not None:
data = self.conn.get_systems(self.token)
else:
data = self.conn.get_systems()
for host in data:
# Get the FQDN for the host and add it to the right groups
dns_name = host['hostname'] # None
ksmeta = None
interfaces = host['interfaces']
# hostname is often empty for non-static IP hosts
if dns_name == '':
for (iname, ivalue) in iteritems(interfaces):
if ivalue['management'] or not ivalue['static']:
this_dns_name = ivalue.get('dns_name', None)
if this_dns_name is not None and this_dns_name is not "":
dns_name = this_dns_name
if dns_name == '' or dns_name is None:
continue
status = host['status']
profile = host['profile']
classes = host[orderby_keyname]
if status not in self.inventory:
self.inventory[status] = []
self.inventory[status].append(dns_name)
if profile not in self.inventory:
self.inventory[profile] = []
self.inventory[profile].append(dns_name)
for cls in classes:
if cls not in self.inventory:
self.inventory[cls] = []
self.inventory[cls].append(dns_name)
# Since we already have all of the data for the host, update the host details as well
# The old way was ksmeta only -- provide backwards compatibility
self.cache[dns_name] = host
if "ks_meta" in host:
for key, value in iteritems(host["ks_meta"]):
self.cache[dns_name][key] = value
self.write_to_cache(self.cache, self.cache_path_cache)
self.write_to_cache(self.inventory, self.cache_path_inventory)
def get_host_info(self):
""" Get variables about a specific host """
if not self.cache or len(self.cache) == 0:
# Need to load index from cache
self.load_cache_from_cache()
if self.args.host not in self.cache:
# try updating the cache
self.update_cache()
if self.args.host not in self.cache:
# host might not exist anymore
return self.json_format_dict({}, True)
return self.json_format_dict(self.cache[self.args.host], True)
def push(self, my_dict, key, element):
""" Pushed an element onto an array that may not have been defined in the dict """
if key in my_dict:
my_dict[key].append(element)
else:
my_dict[key] = [element]
def load_inventory_from_cache(self):
""" Reads the index from the cache file sets self.index """
cache = open(self.cache_path_inventory, 'r')
json_inventory = cache.read()
self.inventory = json.loads(json_inventory)
def load_cache_from_cache(self):
""" Reads the cache from the cache file sets self.cache """
cache = open(self.cache_path_cache, 'r')
json_cache = cache.read()
self.cache = json.loads(json_cache)
def write_to_cache(self, data, filename):
""" Writes data in JSON format to a file """
json_data = self.json_format_dict(data, True)
cache = open(filename, 'w')
cache.write(json_data)
cache.close()
def to_safe(self, word):
""" Converts 'bad' characters in a string to underscores so they can be used as Ansible groups """
return re.sub(r"[^A-Za-z0-9\-]", "_", word)
def json_format_dict(self, data, pretty=False):
""" Converts a dict to a JSON object and dumps it as a formatted string """
if pretty:
return json.dumps(data, sort_keys=True, indent=2)
else:
return json.dumps(data)
CobblerInventory()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,353 |
Close MongoDB client connection in mongodb_user when done
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Thanks for the great software :)
When setting up a MongoDB database server, one will generally want to create a root user and then create a second, more limited app user. Here is my attempt at this:
```yaml
- name: Add MongoDB admin user
mongodb_user:
database: "admin"
name: "{{ lookup('aws_ssm', '/app-name/test/database/mongodb/admin-user/username', region=region) }}"
password: "{{ lookup('aws_ssm', '/app-name/test/database/mongodb/admin-user/password', decrypt=true, region=region) }}"
roles: "root"
- name: Copy config file
copy:
src: mongod.conf
dest: /etc/mongod.conf
owner: root
group: root
mode: 0644
- name: Create /etc/security/limits.d/mongod.conf
copy:
src: security-mongod.conf
dest: /etc/security/limits.d/mongod.conf
owner: root
group: root
mode: 0644
- name: Restart MongoDB
service:
name: mongod
state: restarted
- name: Add MongoDB app user
mongodb_user:
login_database: "admin"
login_user: "{{ lookup('aws_ssm', '/app-name/test/database/mongodb/admin-user/username', region=region) }}"
login_password: "{{ lookup('aws_ssm', '/app-name/test/database/mongodb/admin-user/password', decrypt=true, region=region) }}"
name: "{{ lookup('aws_ssm', '/app-name/test/database/mongodb/app-user/username', region=region) }}"
password: "{{ lookup('aws_ssm', '/app-name/test/database/mongodb/app-user/password', decrypt=true, region=region) }}"
roles:
- db: "{{ lookup('aws_ssm', '/app-name/test/database/mongodb/name', region=region) }}"
role: "readWrite"
- name: Reboot
reboot:
```
Creating the admin user succeeds, but once I enable authorization, restart MongoDB, and attempt user creation, I receive the following error every time I run the playbook:
```
TASK [mongodb : Add MongoDB admin user] ****************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: pymongo.errors.OperationFailure: command createUser requires authentication
fatal: [3.222.242.183]: FAILED! => {"changed": false, "msg": "Unable to add or update user: command createUser requires authentication"}
to retry, use: --limit @/root/repo/modules/database/provision.retry
```
However, if I comment out admin user creation
```yaml
# - name: Add MongoDB admin user
# mongodb_user:
# database: "admin"
# name: "{{ lookup('aws_ssm', '/app-name/test/database/mongodb/admin-user/username', region=region) }}"
# password: "{{ lookup('aws_ssm', '/app-name/test/database/mongodb/admin-user/password', decrypt=true, region=region) }}"
# roles: "root"
```
and run the playbook again, app user creation succeeds.
Further, I was able to work around this issue by rebooting the server prior to app user creation:
```yaml
- name: Add MongoDB admin user
mongodb_user:
login_database: "admin"
login_user: "{{ lookup('aws_ssm', '/app-name/test/database/mongodb/admin-user/username', region=region) }}"
login_password: "{{ lookup('aws_ssm', '/app-name/test/database/mongodb/admin-user/password', decrypt=true, region=region) }}"
database: "admin"
name: "{{ lookup('aws_ssm', '/app-name/test/database/mongodb/admin-user/username', region=region) }}"
password: "{{ lookup('aws_ssm', '/app-name/test/database/mongodb/admin-user/password', decrypt=true, region=region) }}"
update_password: "on_create"
roles: "root"
- name: Copy config file
copy:
src: mongod.conf
dest: /etc/mongod.conf
owner: root
group: root
mode: 0644
- name: Create /etc/security/limits.d/mongod.conf
copy:
src: security-mongod.conf
dest: /etc/security/limits.d/mongod.conf
owner: root
group: root
mode: 0644
# Reboot before adding the app user to MongoDB or authentication will fail probably due to connection pooling
# with pymongo which is a problem because authorization was enabled after adding the admin account.
- name: Reboot
reboot:
- name: Add MongoDB app user
mongodb_user:
login_database: "admin"
login_user: "{{ lookup('aws_ssm', '/app-name/test/database/mongodb/admin-user/username', region=region) }}"
login_password: "{{ lookup('aws_ssm', '/app-name/test/database/mongodb/admin-user/password', decrypt=true, region=region) }}"
database: "admin"
name: "{{ lookup('aws_ssm', '/app-name/test/database/mongodb/app-user/username', region=region) }}"
password: "{{ lookup('aws_ssm', '/app-name/test/database/mongodb/app-user/password', decrypt=true, region=region) }}"
update_password: "on_create"
roles:
- db: "{{ lookup('aws_ssm', '/app-name/test/database/mongodb/name', region=region) }}"
role: "readWrite"
```
I believe the new connection details related to creating the app user are ignored. Please see [this discussion](https://github.com/MongoEngine/mongoengine/issues/2010#issuecomment-487317501) for details. If this is true, `client.close()` should be called at [the end of mongodb_user](https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/database/mongodb/mongodb_user.py#L438), as is done [at the end of postgresql_table](https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/database/postgresql/postgresql_table.py#L580).
Closing the connection would likely help because once authorization is turned on, authentication is required, so no connection reuse or pooling would occur.
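A minimal sketch of the explicit close being suggested (assumes pymongo is installed and a server is reachable on localhost; the ping command is only a stand-in for the module's user-management calls):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
try:
    client.admin.command("ping")  # placeholder for create/update/remove user logic
finally:
    client.close()  # the explicit close this report suggests adding to mongodb_user
```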
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
mongodb_user
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
2.8.2
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
DEFAULT_HOST_LIST(/Users/chad/development/projects/app-name/infrastructure/modules/database/ansible.cfg) = ['/Users/chad/development/projects/app-name/infrastructure/modules/database/hosts']
DEFAULT_TRANSPORT(/Users/chad/development/projects/app-name/infrastructure/modules/database/ansible.cfg) = ssh
HOST_KEY_CHECKING(/Users/chad/development/projects/app-name/infrastructure/modules/database/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
MacOS Mojave 10.14 (18A391)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Described above.
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Creating a second user account should not fail.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Creating the second user account failed.
<!--- Paste verbatim command output between quotes -->
|
https://github.com/ansible/ansible/issues/59353
|
https://github.com/ansible/ansible/pull/65665
|
c9b38bd74e4562b01b3a4b67b3b5b61d486faeaa
|
25181e1b7021430f495f249dc3b9adf37ae3afd3
| 2019-07-21T17:38:23Z |
python
| 2020-02-15T14:10:27Z |
lib/ansible/modules/database/mongodb/mongodb_user.py
|
#!/usr/bin/python
# (c) 2012, Elliott Foster <[email protected]>
# Sponsored by Four Kitchens http://fourkitchens.com.
# (c) 2014, Epic Games, Inc.
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: mongodb_user
short_description: Adds or removes a user from a MongoDB database.
description:
- Adds or removes a user from a MongoDB database.
version_added: "1.1"
options:
login_user:
description:
- The username used to authenticate with
login_password:
description:
- The password used to authenticate with
login_host:
description:
- The host running the database
default: localhost
login_port:
description:
- The port to connect to
default: 27017
login_database:
version_added: "2.0"
description:
- The database where login credentials are stored
replica_set:
version_added: "1.6"
description:
- Replica set to connect to (automatically connects to primary for writes)
database:
description:
- The name of the database to add/remove the user from
required: true
name:
description:
- The name of the user to add or remove
required: true
aliases: [ 'user' ]
password:
description:
- The password to use for the user
ssl:
version_added: "1.8"
description:
- Whether to use an SSL connection when connecting to the database
type: bool
ssl_cert_reqs:
version_added: "2.2"
description:
- Specifies whether a certificate is required from the other side of the connection, and whether it will be validated if provided.
default: "CERT_REQUIRED"
choices: ["CERT_REQUIRED", "CERT_OPTIONAL", "CERT_NONE"]
roles:
version_added: "1.3"
description:
- >
The database user roles valid values could either be one or more of the following strings:
'read', 'readWrite', 'dbAdmin', 'userAdmin', 'clusterAdmin', 'readAnyDatabase', 'readWriteAnyDatabase', 'userAdminAnyDatabase',
'dbAdminAnyDatabase'
- "Or the following dictionary '{ db: DATABASE_NAME, role: ROLE_NAME }'."
- "This param requires pymongo 2.5+. If it is a string, mongodb 2.4+ is also required. If it is a dictionary, mongo 2.6+ is required."
state:
description:
- The database user state
default: present
choices: [ "present", "absent" ]
update_password:
default: always
choices: ['always', 'on_create']
version_added: "2.1"
description:
- C(always) will update passwords if they differ. C(on_create) will only set the password for newly created users.
notes:
- Requires the pymongo Python package on the remote host, version 2.4.2+. This
can be installed using pip or the OS package manager. @see http://api.mongodb.org/python/current/installation.html
requirements: [ "pymongo" ]
author:
- "Elliott Foster (@elliotttf)"
- "Julien Thebault (@Lujeni)"
'''
EXAMPLES = '''
# Create 'burgers' database user with name 'bob' and password '12345'.
- mongodb_user:
database: burgers
name: bob
password: 12345
state: present
# Create a database user via SSL (MongoDB must be compiled with the SSL option and configured properly)
- mongodb_user:
database: burgers
name: bob
password: 12345
state: present
ssl: True
# Delete 'burgers' database user with name 'bob'.
- mongodb_user:
database: burgers
name: bob
state: absent
# Define more users with various specific roles (if not defined, no roles is assigned, and the user will be added via pre mongo 2.2 style)
- mongodb_user:
database: burgers
name: ben
password: 12345
roles: read
state: present
- mongodb_user:
database: burgers
name: jim
password: 12345
roles: readWrite,dbAdmin,userAdmin
state: present
- mongodb_user:
database: burgers
name: joe
password: 12345
roles: readWriteAnyDatabase
state: present
# add a user to database in a replica set, the primary server is automatically discovered and written to
- mongodb_user:
database: burgers
name: bob
replica_set: belcher
password: 12345
roles: readWriteAnyDatabase
state: present
# add a user 'oplog_reader' with read only access to the 'local' database on the replica_set 'belcher'. This is useful for oplog access (MONGO_OPLOG_URL).
# please notice the credentials must be added to the 'admin' database because the 'local' database is not synchronized and can't receive user credentials
# To login with such user, the connection string should be MONGO_OPLOG_URL="mongodb://oplog_reader:oplog_reader_password@server1,server2/local?authSource=admin"
# This syntax requires mongodb 2.6+ and pymongo 2.5+
- mongodb_user:
login_user: root
login_password: root_password
database: admin
user: oplog_reader
password: oplog_reader_password
state: present
replica_set: belcher
roles:
- db: local
role: read
'''
RETURN = '''
user:
description: The name of the user to add or remove.
returned: success
type: str
'''
import os
import ssl as ssl_lib
import traceback
from distutils.version import LooseVersion
from operator import itemgetter
try:
from pymongo.errors import ConnectionFailure
from pymongo.errors import OperationFailure
from pymongo import version as PyMongoVersion
from pymongo import MongoClient
except ImportError:
try: # for older PyMongo 2.2
from pymongo import Connection as MongoClient
except ImportError:
pymongo_found = False
else:
pymongo_found = True
else:
pymongo_found = True
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.six import binary_type, text_type
from ansible.module_utils.six.moves import configparser
from ansible.module_utils._text import to_native
# =========================================
# MongoDB module specific support methods.
#
def check_compatibility(module, client):
"""Check the compatibility between the driver and the database.
See: https://docs.mongodb.com/ecosystem/drivers/driver-compatibility-reference/#python-driver-compatibility
Args:
module: Ansible module.
client (cursor): Mongodb cursor on admin database.
"""
loose_srv_version = LooseVersion(client.server_info()['version'])
loose_driver_version = LooseVersion(PyMongoVersion)
if loose_srv_version >= LooseVersion('3.2') and loose_driver_version < LooseVersion('3.2'):
module.fail_json(msg=' (Note: you must use pymongo 3.2+ with MongoDB >= 3.2)')
elif loose_srv_version >= LooseVersion('3.0') and loose_driver_version <= LooseVersion('2.8'):
module.fail_json(msg=' (Note: you must use pymongo 2.8+ with MongoDB 3.0)')
elif loose_srv_version >= LooseVersion('2.6') and loose_driver_version <= LooseVersion('2.7'):
module.fail_json(msg=' (Note: you must use pymongo 2.7+ with MongoDB 2.6)')
elif LooseVersion(PyMongoVersion) <= LooseVersion('2.5'):
module.fail_json(msg=' (Note: you must be on mongodb 2.4+ and pymongo 2.5+ to use the roles param)')
def user_find(client, user, db_name):
"""Check if the user exists.
Args:
client (cursor): Mongodb cursor on admin database.
user (str): User to check.
db_name (str): User's database.
Returns:
dict: when user exists, False otherwise.
"""
for mongo_user in client["admin"].system.users.find():
if mongo_user['user'] == user:
# NOTE: there is no 'db' field in mongo 2.4.
if 'db' not in mongo_user:
return mongo_user
if mongo_user["db"] == db_name:
return mongo_user
return False
def user_add(module, client, db_name, user, password, roles):
# pymongo's user_add is a _create_or_update_user so we won't know if it was changed or updated
# without reproducing a lot of the logic in database.py of pymongo
db = client[db_name]
if roles is None:
db.add_user(user, password, False)
else:
db.add_user(user, password, None, roles=roles)
def user_remove(module, client, db_name, user):
exists = user_find(client, user, db_name)
if exists:
if module.check_mode:
module.exit_json(changed=True, user=user)
db = client[db_name]
db.remove_user(user)
else:
module.exit_json(changed=False, user=user)
def load_mongocnf():
config = configparser.RawConfigParser()
mongocnf = os.path.expanduser('~/.mongodb.cnf')
try:
config.readfp(open(mongocnf))
creds = dict(
user=config.get('client', 'user'),
password=config.get('client', 'pass')
)
except (configparser.NoOptionError, IOError):
return False
return creds
def check_if_roles_changed(uinfo, roles, db_name):
# We must be aware of users which can read the oplog on a replicaset
# Such users must have access to the local DB, but since this DB does not store users credentials
# and is not synchronized among replica sets, the user must be stored on the admin db
# Therefore their structure is the following :
# {
# "_id" : "admin.oplog_reader",
# "user" : "oplog_reader",
# "db" : "admin", # <-- admin DB
# "roles" : [
# {
# "role" : "read",
# "db" : "local" # <-- local DB
# }
# ]
# }
def make_sure_roles_are_a_list_of_dict(roles, db_name):
output = list()
for role in roles:
if isinstance(role, (binary_type, text_type)):
new_role = {"role": role, "db": db_name}
output.append(new_role)
else:
output.append(role)
return output
roles_as_list_of_dict = make_sure_roles_are_a_list_of_dict(roles, db_name)
uinfo_roles = uinfo.get('roles', [])
if sorted(roles_as_list_of_dict, key=itemgetter('db')) == sorted(uinfo_roles, key=itemgetter('db')):
return False
return True
# =========================================
# Module execution.
#
def main():
module = AnsibleModule(
argument_spec=dict(
login_user=dict(default=None),
login_password=dict(default=None, no_log=True),
login_host=dict(default='localhost'),
login_port=dict(default='27017'),
login_database=dict(default=None),
replica_set=dict(default=None),
database=dict(required=True, aliases=['db']),
name=dict(required=True, aliases=['user']),
password=dict(aliases=['pass'], no_log=True),
ssl=dict(default=False, type='bool'),
roles=dict(default=None, type='list'),
state=dict(default='present', choices=['absent', 'present']),
update_password=dict(default="always", choices=["always", "on_create"]),
ssl_cert_reqs=dict(default='CERT_REQUIRED', choices=['CERT_NONE', 'CERT_OPTIONAL', 'CERT_REQUIRED']),
),
supports_check_mode=True
)
if not pymongo_found:
module.fail_json(msg=missing_required_lib('pymongo'))
login_user = module.params['login_user']
login_password = module.params['login_password']
login_host = module.params['login_host']
login_port = module.params['login_port']
login_database = module.params['login_database']
replica_set = module.params['replica_set']
db_name = module.params['database']
user = module.params['name']
password = module.params['password']
ssl = module.params['ssl']
roles = module.params['roles'] or []
state = module.params['state']
update_password = module.params['update_password']
try:
connection_params = {
"host": login_host,
"port": int(login_port),
}
if replica_set:
connection_params["replicaset"] = replica_set
if ssl:
connection_params["ssl"] = ssl
connection_params["ssl_cert_reqs"] = getattr(ssl_lib, module.params['ssl_cert_reqs'])
client = MongoClient(**connection_params)
# NOTE: this check must be done ASAP.
# We don't need to be authenticated (this ability was lost in PyMongo 3.6)
if LooseVersion(PyMongoVersion) <= LooseVersion('3.5'):
check_compatibility(module, client)
if login_user is None and login_password is None:
mongocnf_creds = load_mongocnf()
if mongocnf_creds is not False:
login_user = mongocnf_creds['user']
login_password = mongocnf_creds['password']
elif login_password is None or login_user is None:
module.fail_json(msg='when supplying login arguments, both login_user and login_password must be provided')
if login_user is not None and login_password is not None:
client.admin.authenticate(login_user, login_password, source=login_database)
elif LooseVersion(PyMongoVersion) >= LooseVersion('3.0'):
if db_name != "admin":
module.fail_json(msg='The localhost login exception only allows the first admin account to be created')
# else: this has to be the first admin user added
except Exception as e:
module.fail_json(msg='unable to connect to database: %s' % to_native(e), exception=traceback.format_exc())
if state == 'present':
if password is None and update_password == 'always':
module.fail_json(msg='password parameter required when adding a user unless update_password is set to on_create')
try:
if update_password != 'always':
uinfo = user_find(client, user, db_name)
if uinfo:
password = None
if not check_if_roles_changed(uinfo, roles, db_name):
module.exit_json(changed=False, user=user)
if module.check_mode:
module.exit_json(changed=True, user=user)
user_add(module, client, db_name, user, password, roles)
except Exception as e:
module.fail_json(msg='Unable to add or update user: %s' % to_native(e), exception=traceback.format_exc())
# Here we can check password change if mongo provides a query for that: https://jira.mongodb.org/browse/SERVER-22848
# newuinfo = user_find(client, user, db_name)
# if uinfo['role'] == newuinfo['role'] and CheckPasswordHere:
# module.exit_json(changed=False, user=user)
elif state == 'absent':
try:
user_remove(module, client, db_name, user)
except Exception as e:
module.fail_json(msg='Unable to remove user: %s' % to_native(e), exception=traceback.format_exc())
module.exit_json(changed=True, user=user)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,347 |
docker_login does not create config.json file with correct permissions
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The docker_login module saves config.json with 644 permissions instead of 600 (the default used by the docker login command)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
docker_login
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/gael/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 18.04 server
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Log into private registry and force re-authorization
docker_login:
registry: "{{ registry_host }}"
username: "{{ registry_user }}"
password: "{{ vault_registry_password }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
ll /root/.docker/config.json
-rw------- 1 root root 181 Jan 10 10:53 /root/.docker/config.json
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
ll /root/.docker/config.json
-rw-r--r-- 1 root root 181 Jan 10 10:53 /root/.docker/config.json
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/67347
|
https://github.com/ansible/ansible/pull/67353
|
25181e1b7021430f495f249dc3b9adf37ae3afd3
|
55cb8c53887c081f645cf9853ace4f94f56d99a9
| 2020-02-12T17:58:05Z |
python
| 2020-02-15T14:38:58Z |
changelogs/fragments/67353-docker_login-permissions.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,347 |
docker_login does not create config.json file with correct permissions
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The docker_login module saves config.json with 644 permissions instead of 600 (the default used by the docker login command)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
docker_login
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/gael/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 18.04 server
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Log into private registry and force re-authorization
docker_login:
registry: "{{ registry_host }}"
username: "{{ registry_user }}"
password: "{{ vault_registry_password }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
ll /root/.docker/config.json
-rw------- 1 root root 181 Jan 10 10:53 /root/.docker/config.json
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
ll /root/.docker/config.json
-rw-r--r-- 1 root root 181 Jan 10 10:53 /root/.docker/config.json
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/67347
|
https://github.com/ansible/ansible/pull/67353
|
25181e1b7021430f495f249dc3b9adf37ae3afd3
|
55cb8c53887c081f645cf9853ace4f94f56d99a9
| 2020-02-12T17:58:05Z |
python
| 2020-02-15T14:38:58Z |
lib/ansible/modules/cloud/docker/docker_login.py
|
#!/usr/bin/python
#
# (c) 2016 Olaf Kilian <[email protected]>
# Chris Houseknecht, <[email protected]>
# James Tanner, <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: docker_login
short_description: Log into a Docker registry.
version_added: "2.0"
description:
- Provides functionality similar to the "docker login" command.
- Authenticate with a Docker registry and add the credentials to your local Docker config file or to the credential
store associated with the registry. Adding the credentials to the config file or the credential
store allows future connections to the registry using tools such as Ansible's Docker modules, the Docker CLI
and Docker SDK for Python without needing to provide credentials.
- Running in check mode will perform the authentication without updating the config file.
options:
registry_url:
description:
- The registry URL.
type: str
default: "https://index.docker.io/v1/"
aliases:
- registry
- url
username:
description:
- The username for the registry account.
- Required when I(state) is C(present).
type: str
password:
description:
- The plaintext password for the registry account.
- Required when I(state) is C(present).
type: str
email:
description:
- Does nothing, do not use.
- Will be removed in Ansible 2.14.
type: str
reauthorize:
description:
- Refresh existing authentication found in the configuration file.
type: bool
default: no
aliases:
- reauth
config_path:
description:
- Custom path to the Docker CLI configuration file.
type: path
default: ~/.docker/config.json
aliases:
- dockercfg_path
state:
version_added: '2.3'
description:
- This controls the current state of the user. C(present) will log a user in, C(absent) will log them out.
- To logout you only need the registry server, which defaults to DockerHub.
- Before 2.1 you could ONLY log in.
- Docker does not support 'logout' with a custom config file.
type: str
default: 'present'
choices: ['present', 'absent']
extends_documentation_fragment:
- docker
- docker.docker_py_1_documentation
requirements:
- "L(Docker SDK for Python,https://docker-py.readthedocs.io/en/stable/) >= 1.8.0 (use L(docker-py,https://pypi.org/project/docker-py/) for Python 2.6)"
- "L(Python bindings for docker credentials store API) >= 0.2.1
(use L(docker-pycreds,https://pypi.org/project/docker-pycreds/) when using Docker SDK for Python < 4.0.0)"
- "Docker API >= 1.20"
author:
- Olaf Kilian (@olsaki) <[email protected]>
- Chris Houseknecht (@chouseknecht)
'''
EXAMPLES = '''
- name: Log into DockerHub
docker_login:
username: docker
password: rekcod
- name: Log into private registry and force re-authorization
docker_login:
registry: your.private.registry.io
username: yourself
password: secrets3
reauthorize: yes
- name: Log into DockerHub using a custom config file
docker_login:
username: docker
password: rekcod
config_path: /tmp/.mydockercfg
- name: Log out of DockerHub
docker_login:
state: absent
'''
RETURN = '''
login_results:
description: Results from the login.
returned: when state='present'
type: dict
sample: {
"serveraddress": "localhost:5000",
"username": "testuser"
}
'''
import base64
import json
import os
import re
import traceback
from ansible.module_utils._text import to_bytes, to_text
try:
from docker.errors import DockerException
from docker import auth
# Earlier versions of docker/docker-py put decode_auth
# in docker.auth.auth instead of docker.auth
if hasattr(auth, 'decode_auth'):
from docker.auth import decode_auth
else:
from docker.auth.auth import decode_auth
except ImportError:
# missing Docker SDK for Python handled in ansible.module_utils.docker.common
pass
from ansible.module_utils.docker.common import (
AnsibleDockerClient,
HAS_DOCKER_PY,
DEFAULT_DOCKER_REGISTRY,
DockerBaseClass,
EMAIL_REGEX,
RequestException,
)
NEEDS_DOCKER_PYCREDS = False
# Early versions of docker/docker-py rely on docker-pycreds for
# the credential store api.
if HAS_DOCKER_PY:
try:
from docker.credentials.errors import StoreError, CredentialsNotFound
from docker.credentials import Store
except ImportError:
try:
from dockerpycreds.errors import StoreError, CredentialsNotFound
from dockerpycreds.store import Store
except ImportError as exc:
HAS_DOCKER_ERROR = str(exc)
NEEDS_DOCKER_PYCREDS = True
if NEEDS_DOCKER_PYCREDS:
# docker-pycreds missing, so we need to create some placeholder classes
# to allow instantiation.
class StoreError(Exception):
pass
class CredentialsNotFound(Exception):
pass
class DockerFileStore(object):
'''
A custom credential store class that implements only the functionality we need to
update the docker config file when no credential helper is provided.
'''
program = "<legacy config>"
def __init__(self, config_path):
self._config_path = config_path
# Make sure we have a minimal config if none is available.
self._config = dict(
auths=dict()
)
try:
# Attempt to read the existing config.
with open(self._config_path, "r") as f:
config = json.load(f)
except (ValueError, IOError):
# No config found or an invalid config found so we'll ignore it.
config = dict()
# Update our internal config with whatever was loaded.
self._config.update(config)
@property
def config_path(self):
'''
Return the config path configured in this DockerFileStore instance.
'''
return self._config_path
def get(self, server):
'''
Retrieve credentials for `server` if there are any in the config file.
Otherwise raise a `StoreError`
'''
server_creds = self._config['auths'].get(server)
if not server_creds:
raise CredentialsNotFound('No matching credentials')
(username, password) = decode_auth(server_creds['auth'])
return dict(
Username=username,
Secret=password
)
def _write(self):
'''
Write config back out to disk.
'''
# Make sure directory exists
dir = os.path.dirname(self._config_path)
if not os.path.exists(dir):
os.makedirs(dir)
# Write config
with open(self._config_path, "w") as f:
json.dump(self._config, f, indent=4, sort_keys=True)
def store(self, server, username, password):
'''
Add credentials for `server` to the current configuration.
'''
b64auth = base64.b64encode(
to_bytes(username) + b':' + to_bytes(password)
)
auth = to_text(b64auth)
# build up the auth structure
new_auth = dict(
auths=dict()
)
new_auth['auths'][server] = dict(
auth=auth
)
self._config.update(new_auth)
self._write()
def erase(self, server):
'''
Remove credentials for the given server from the configuration.
'''
self._config['auths'].pop(server)
self._write()
class LoginManager(DockerBaseClass):
def __init__(self, client, results):
super(LoginManager, self).__init__()
self.client = client
self.results = results
parameters = self.client.module.params
self.check_mode = self.client.check_mode
self.registry_url = parameters.get('registry_url')
self.username = parameters.get('username')
self.password = parameters.get('password')
self.email = parameters.get('email')
self.reauthorize = parameters.get('reauthorize')
self.config_path = parameters.get('config_path')
self.state = parameters.get('state')
def run(self):
'''
Do the actual work of this task here. This allows instantiation for partial
testing.
'''
if self.state == 'present':
self.login()
else:
self.logout()
def fail(self, msg):
self.client.fail(msg)
def login(self):
'''
Log into the registry with provided username/password. On success update the config
file with the new authorization.
:return: None
'''
if self.email and not re.match(EMAIL_REGEX, self.email):
self.fail("Parameter error: the email address appears to be incorrect. Expecting it to match "
"/%s/" % (EMAIL_REGEX))
self.results['actions'].append("Logged into %s" % (self.registry_url))
self.log("Log into %s with username %s" % (self.registry_url, self.username))
try:
response = self.client.login(
self.username,
password=self.password,
email=self.email,
registry=self.registry_url,
reauth=self.reauthorize,
dockercfg_path=self.config_path
)
except Exception as exc:
self.fail("Logging into %s for user %s failed - %s" % (self.registry_url, self.username, str(exc)))
# If user is already logged in, then response contains password for user
if 'password' in response:
# This returns the correct password if the user is logged in and a wrong password is given.
# So if it returns a different password than the one we passed, and the user didn't request to
# reauthorize, still do it.
if not self.reauthorize and response['password'] != self.password:
try:
response = self.client.login(
self.username,
password=self.password,
email=self.email,
registry=self.registry_url,
reauth=True,
dockercfg_path=self.config_path
)
except Exception as exc:
self.fail("Logging into %s for user %s failed - %s" % (self.registry_url, self.username, str(exc)))
response.pop('password', None)
self.results['login_result'] = response
self.update_credentials()
def logout(self):
'''
Log out of the registry. On success update the config file.
:return: None
'''
# Get the configuration store.
store = self.get_credential_store_instance(self.registry_url, self.config_path)
try:
current = store.get(self.registry_url)
except CredentialsNotFound:
# get raises an exception on not found.
self.log("Credentials for %s not present, doing nothing." % (self.registry_url))
self.results['changed'] = False
return
if not self.check_mode:
store.erase(self.registry_url)
self.results['changed'] = True
def update_credentials(self):
'''
If the authorization is not stored attempt to store authorization values via
the appropriate credential helper or to the config file.
:return: None
'''
# Check to see if credentials already exist.
store = self.get_credential_store_instance(self.registry_url, self.config_path)
try:
current = store.get(self.registry_url)
except CredentialsNotFound:
# get raises an exception on not found.
current = dict(
Username='',
Secret=''
)
if current['Username'] != self.username or current['Secret'] != self.password or self.reauthorize:
if not self.check_mode:
store.store(self.registry_url, self.username, self.password)
self.log("Writing credentials to configured helper %s for %s" % (store.program, self.registry_url))
self.results['actions'].append("Wrote credentials to configured helper %s for %s" % (
store.program, self.registry_url))
self.results['changed'] = True
def get_credential_store_instance(self, registry, dockercfg_path):
'''
Return an instance of docker.credentials.Store used by the given registry.
:return: A Store or None
:rtype: Union[docker.credentials.Store, NoneType]
'''
# Older versions of docker-py don't have this feature.
try:
credstore_env = self.client.credstore_env
except AttributeError:
credstore_env = None
config = auth.load_config(config_path=dockercfg_path)
if hasattr(auth, 'get_credential_store'):
store_name = auth.get_credential_store(config, registry)
elif 'credsStore' in config:
store_name = config['credsStore']
else:
store_name = None
# Make sure that there is a credential helper before trying to instantiate a
# Store object.
if store_name:
self.log("Found credential store %s" % store_name)
return Store(store_name, environment=credstore_env)
return DockerFileStore(dockercfg_path)
def main():
argument_spec = dict(
registry_url=dict(type='str', default=DEFAULT_DOCKER_REGISTRY, aliases=['registry', 'url']),
username=dict(type='str'),
password=dict(type='str', no_log=True),
email=dict(type='str', removed_in_version='2.14'),
reauthorize=dict(type='bool', default=False, aliases=['reauth']),
state=dict(type='str', default='present', choices=['present', 'absent']),
config_path=dict(type='path', default='~/.docker/config.json', aliases=['dockercfg_path']),
)
required_if = [
('state', 'present', ['username', 'password']),
]
client = AnsibleDockerClient(
argument_spec=argument_spec,
supports_check_mode=True,
required_if=required_if,
min_docker_api_version='1.20',
)
try:
results = dict(
changed=False,
actions=[],
login_result={}
)
manager = LoginManager(client, results)
manager.run()
if 'actions' in results:
del results['actions']
client.module.exit_json(**results)
except DockerException as e:
client.fail('An unexpected docker error occurred: {0}'.format(e), exception=traceback.format_exc())
except RequestException as e:
client.fail('An unexpected requests error occurred when docker-py tried to talk to the docker daemon: {0}'.format(e), exception=traceback.format_exc())
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,347 |
docker_login does not create config.json file with correct permissions
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The docker_login module saves config.json with 644 permissions instead of 600 (the default used by the docker login command)
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
docker_login
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/gael/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 18.04 server
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Log into private registry and force re-authorization
docker_login:
registry: "{{ registry_host }}"
username: "{{ registry_user }}"
password: "{{ vault_registry_password }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
ll /root/.docker/config.json
-rw------- 1 root root 181 Jan 10 10:53 /root/.docker/config.json
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
ll /root/.docker/config.json
-rw-r--r-- 1 root root 181 Jan 10 10:53 /root/.docker/config.json
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/67347
|
https://github.com/ansible/ansible/pull/67353
|
25181e1b7021430f495f249dc3b9adf37ae3afd3
|
55cb8c53887c081f645cf9853ace4f94f56d99a9
| 2020-02-12T17:58:05Z |
python
| 2020-02-15T14:38:58Z |
test/integration/targets/docker_login/tasks/tests/docker_login.yml
|
---
- name: Log in with wrong password (check mode)
docker_login:
registry_url: "{{ registry_frontend_address }}"
username: testuser
password: "1234"
state: present
register: login_failed_check
ignore_errors: yes
check_mode: yes
- name: Log in with wrong password
docker_login:
registry_url: "{{ registry_frontend_address }}"
username: testuser
password: "1234"
state: present
register: login_failed
ignore_errors: yes
- name: Make sure that login failed
assert:
that:
- login_failed_check is failed
- "('login attempt to http://' ~ registry_frontend_address ~ '/v2/ failed') in login_failed_check.msg"
- login_failed is failed
- "('login attempt to http://' ~ registry_frontend_address ~ '/v2/ failed') in login_failed.msg"
- name: Log in (check mode)
docker_login:
registry_url: "{{ registry_frontend_address }}"
username: testuser
password: hunter2
state: present
register: login_1
check_mode: yes
- name: Log in
docker_login:
registry_url: "{{ registry_frontend_address }}"
username: testuser
password: hunter2
state: present
register: login_2
- name: Log in (idempotent)
docker_login:
registry_url: "{{ registry_frontend_address }}"
username: testuser
password: hunter2
state: present
register: login_3
- name: Log in (idempotent, check mode)
docker_login:
registry_url: "{{ registry_frontend_address }}"
username: testuser
password: hunter2
state: present
register: login_4
check_mode: yes
- name: Make sure that login worked
assert:
that:
- login_1 is changed
- login_2 is changed
- login_3 is not changed
- login_4 is not changed
- name: Log in again with wrong password (check mode)
docker_login:
registry_url: "{{ registry_frontend_address }}"
username: testuser
password: "1234"
state: present
register: login_failed_check
ignore_errors: yes
check_mode: yes
- name: Log in again with wrong password
docker_login:
registry_url: "{{ registry_frontend_address }}"
username: testuser
password: "1234"
state: present
register: login_failed
ignore_errors: yes
- name: Make sure that login failed again
assert:
that:
- login_failed_check is failed
- "('login attempt to http://' ~ registry_frontend_address ~ '/v2/ failed') in login_failed_check.msg"
- login_failed is failed
- "('login attempt to http://' ~ registry_frontend_address ~ '/v2/ failed') in login_failed.msg"
- name: Log out (check mode)
docker_login:
registry_url: "{{ registry_frontend_address }}"
state: absent
register: logout_1
check_mode: yes
- name: Log out
docker_login:
registry_url: "{{ registry_frontend_address }}"
state: absent
register: logout_2
- name: Log out (idempotent)
docker_login:
registry_url: "{{ registry_frontend_address }}"
state: absent
register: logout_3
- name: Log out (idempotent, check mode)
docker_login:
registry_url: "{{ registry_frontend_address }}"
state: absent
register: logout_4
check_mode: yes
- name: Make sure that login worked
assert:
that:
- logout_1 is changed
- logout_2 is changed
- logout_3 is not changed
- logout_4 is not changed
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,689 |
sefcontext: fatal failure for socket file type
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The sefcontext module encounters an error when trying to run with parameter
`ftype` set to `"s"` when the fcontext is already present.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
sefcontext module
lib/ansible/modules/system/sefcontext.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
...
python version = 3.7.5 (default, Oct 17 2019, 12:16:48) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Fedora 31 WS. Also broken for CentOS 8.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run the given reproducer two or more times on the same target.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "reproducer"
sefcontext:
ftype: "s"
path: "/foo/bar"
setype: "var_t"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
- if no fcontext record exists, create record
- if fcontext record exists, ensure the type and context are set correctly
- first run: adds record (`changed: ...`)
- second run: fails (`ok: ...`)
```
TASK [reproducer : reproducer] *********************************************************************************
changed: [localhost]
...
TASK [reproducer : reproducer] *********************************************************************************
ok: [localhost]
```
##### ACTUAL RESULTS
- if no fcontext rule exists, create rule (this works fine)
- an existing rule does not get recognised and the module tries to create it again
- first run: adds record (`changed: ...`)
- second run: fails (`fatal: ...`)
```
TASK [reproducer : reproducer] *********************************************************************************
changed: [localhost]
...
TASK [reproducer : reproducer] *********************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "ValueError: File context for /foo/bar already defined\n"}
```
## Proposed fix
When running `semanage fcontext -l` one can see the socket file type is called `socket` and not `socket file`. Also, if you look at the records built at [this position](https://github.com/ansible/ansible/blob/ddd786eedfb416ee3fc0dbec3a2b65d58440d026/lib/ansible/modules/system/sefcontext.py#L163), some tuples with `(something, "socket")` show up, but none with `(something, "socket file")`.
The following patch seems to fix the mentioned issue.
```diff
diff --git a/lib/ansible/modules/system/sefcontext.py b/lib/ansible/modules/system/sefcontext.py
index dfe846e7f2..33e3fd2e40 100644
--- a/lib/ansible/modules/system/sefcontext.py
+++ b/lib/ansible/modules/system/sefcontext.py
@@ -148,7 +148,7 @@ option_to_file_type_str = dict(
f='regular file',
l='symbolic link',
p='named pipe',
- s='socket file',
+ s='socket',
)
```
---
CC: @dagwieers
|
https://github.com/ansible/ansible/issues/65689
|
https://github.com/ansible/ansible/pull/65690
|
9541377a20e61e293f0ea87a09de4ff32b470919
|
fe6848baddaf5a5e872e91b428cdec3f9b1bc1cb
| 2019-12-10T13:13:56Z |
python
| 2020-02-15T14:51:06Z |
lib/ansible/modules/system/sefcontext.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2016, Dag Wieers (@dagwieers) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: sefcontext
short_description: Manages SELinux file context mapping definitions
description:
- Manages SELinux file context mapping definitions.
- Similar to the C(semanage fcontext) command.
version_added: '2.2'
options:
target:
description:
- Target path (expression).
type: str
required: yes
aliases: [ path ]
ftype:
description:
- The file type that should have SELinux contexts applied.
- "The following file type options are available:"
- C(a) for all files,
- C(b) for block devices,
- C(c) for character devices,
- C(d) for directories,
- C(f) for regular files,
- C(l) for symbolic links,
- C(p) for named pipes,
- C(s) for socket files.
type: str
choices: [ a, b, c, d, f, l, p, s ]
default: a
setype:
description:
- SELinux type for the specified target.
type: str
required: yes
seuser:
description:
- SELinux user for the specified target.
type: str
selevel:
description:
- SELinux range for the specified target.
type: str
aliases: [ serange ]
state:
description:
- Whether the SELinux file context must be C(absent) or C(present).
type: str
choices: [ absent, present ]
default: present
reload:
description:
- Reload SELinux policy after commit.
- Note that this does not apply SELinux file contexts to existing files.
type: bool
default: yes
ignore_selinux_state:
description:
- Useful for scenarios (chrooted environment) that you can't get the real SELinux state.
type: bool
default: no
version_added: '2.8'
notes:
- The changes are persistent across reboots.
- The M(sefcontext) module does not modify existing files to the new
SELinux context(s), so it is advisable to first create the SELinux
file contexts before creating files, or run C(restorecon) manually
for the existing files that require the new SELinux file contexts.
- Not applying SELinux fcontexts to existing files is a deliberate
decision as it would be unclear what the reported changes would entail,
and there's no guarantee that applying the SELinux fcontext does
not pick up other unrelated prior changes.
requirements:
- libselinux-python
- policycoreutils-python
author:
- Dag Wieers (@dagwieers)
'''
EXAMPLES = r'''
- name: Allow apache to modify files in /srv/git_repos
sefcontext:
target: '/srv/git_repos(/.*)?'
setype: httpd_git_rw_content_t
state: present
- name: Apply new SELinux file context to filesystem
command: restorecon -irv /srv/git_repos
'''
RETURN = r'''
# Default return values
'''
import traceback
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
SELINUX_IMP_ERR = None
try:
import selinux
HAVE_SELINUX = True
except ImportError:
SELINUX_IMP_ERR = traceback.format_exc()
HAVE_SELINUX = False
SEOBJECT_IMP_ERR = None
try:
import seobject
HAVE_SEOBJECT = True
except ImportError:
SEOBJECT_IMP_ERR = traceback.format_exc()
HAVE_SEOBJECT = False
# Add missing entries (backward compatible)
if HAVE_SEOBJECT:
seobject.file_types.update(
a=seobject.SEMANAGE_FCONTEXT_ALL,
b=seobject.SEMANAGE_FCONTEXT_BLOCK,
c=seobject.SEMANAGE_FCONTEXT_CHAR,
d=seobject.SEMANAGE_FCONTEXT_DIR,
f=seobject.SEMANAGE_FCONTEXT_REG,
l=seobject.SEMANAGE_FCONTEXT_LINK,
p=seobject.SEMANAGE_FCONTEXT_PIPE,
s=seobject.SEMANAGE_FCONTEXT_SOCK,
)
# Make backward compatible
option_to_file_type_str = dict(
a='all files',
b='block device',
c='character device',
d='directory',
f='regular file',
l='symbolic link',
p='named pipe',
s='socket file',
)
def get_runtime_status(ignore_selinux_state=False):
return True if ignore_selinux_state is True else selinux.is_selinux_enabled()
def semanage_fcontext_exists(sefcontext, target, ftype):
''' Get the SELinux file context mapping definition from policy. Return None if it does not exist. '''
# Beware that records comprise of a string representation of the file_type
record = (target, option_to_file_type_str[ftype])
records = sefcontext.get_all()
try:
return records[record]
except KeyError:
return None
def semanage_fcontext_modify(module, result, target, ftype, setype, do_reload, serange, seuser, sestore=''):
''' Add or modify SELinux file context mapping definition to the policy. '''
changed = False
prepared_diff = ''
try:
sefcontext = seobject.fcontextRecords(sestore)
sefcontext.set_reload(do_reload)
exists = semanage_fcontext_exists(sefcontext, target, ftype)
if exists:
# Modify existing entry
orig_seuser, orig_serole, orig_setype, orig_serange = exists
if seuser is None:
seuser = orig_seuser
if serange is None:
serange = orig_serange
if setype != orig_setype or seuser != orig_seuser or serange != orig_serange:
if not module.check_mode:
sefcontext.modify(target, setype, ftype, serange, seuser)
changed = True
if module._diff:
prepared_diff += '# Change to semanage file context mappings\n'
prepared_diff += '-%s %s %s:%s:%s:%s\n' % (target, ftype, orig_seuser, orig_serole, orig_setype, orig_serange)
prepared_diff += '+%s %s %s:%s:%s:%s\n' % (target, ftype, seuser, orig_serole, setype, serange)
else:
# Add missing entry
if seuser is None:
seuser = 'system_u'
if serange is None:
serange = 's0'
if not module.check_mode:
sefcontext.add(target, setype, ftype, serange, seuser)
changed = True
if module._diff:
prepared_diff += '# Addition to semanage file context mappings\n'
prepared_diff += '+%s %s %s:%s:%s:%s\n' % (target, ftype, seuser, 'object_r', setype, serange)
except Exception as e:
module.fail_json(msg="%s: %s\n" % (e.__class__.__name__, to_native(e)))
if module._diff and prepared_diff:
result['diff'] = dict(prepared=prepared_diff)
module.exit_json(changed=changed, seuser=seuser, serange=serange, **result)
def semanage_fcontext_delete(module, result, target, ftype, do_reload, sestore=''):
''' Delete SELinux file context mapping definition from the policy. '''
changed = False
prepared_diff = ''
try:
sefcontext = seobject.fcontextRecords(sestore)
sefcontext.set_reload(do_reload)
exists = semanage_fcontext_exists(sefcontext, target, ftype)
if exists:
# Remove existing entry
orig_seuser, orig_serole, orig_setype, orig_serange = exists
if not module.check_mode:
sefcontext.delete(target, ftype)
changed = True
if module._diff:
prepared_diff += '# Deletion to semanage file context mappings\n'
prepared_diff += '-%s %s %s:%s:%s:%s\n' % (target, ftype, exists[0], exists[1], exists[2], exists[3])
except Exception as e:
module.fail_json(msg="%s: %s\n" % (e.__class__.__name__, to_native(e)))
if module._diff and prepared_diff:
result['diff'] = dict(prepared=prepared_diff)
module.exit_json(changed=changed, **result)
def main():
module = AnsibleModule(
argument_spec=dict(
ignore_selinux_state=dict(type='bool', default=False),
target=dict(type='str', required=True, aliases=['path']),
ftype=dict(type='str', default='a', choices=option_to_file_type_str.keys()),
setype=dict(type='str', required=True),
seuser=dict(type='str'),
selevel=dict(type='str', aliases=['serange']),
state=dict(type='str', default='present', choices=['absent', 'present']),
reload=dict(type='bool', default=True),
),
supports_check_mode=True,
)
if not HAVE_SELINUX:
module.fail_json(msg=missing_required_lib("libselinux-python"), exception=SELINUX_IMP_ERR)
if not HAVE_SEOBJECT:
module.fail_json(msg=missing_required_lib("policycoreutils-python"), exception=SEOBJECT_IMP_ERR)
ignore_selinux_state = module.params['ignore_selinux_state']
if not get_runtime_status(ignore_selinux_state):
module.fail_json(msg="SELinux is disabled on this host.")
target = module.params['target']
ftype = module.params['ftype']
setype = module.params['setype']
seuser = module.params['seuser']
serange = module.params['selevel']
state = module.params['state']
do_reload = module.params['reload']
result = dict(target=target, ftype=ftype, setype=setype, state=state)
if state == 'present':
semanage_fcontext_modify(module, result, target, ftype, setype, do_reload, serange, seuser)
elif state == 'absent':
semanage_fcontext_delete(module, result, target, ftype, do_reload)
else:
module.fail_json(msg='Invalid value of argument "state": {0}'.format(state))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,242 |
uptimerobot: fails with stacktrace
|
##### SUMMARY
The uptimerobot module fails with a stacktrace when decoding the upstream response.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
The uptimerobot module.
##### ANSIBLE VERSION
```
ansible 2.8.1
config file = /var/lib/jenkins/workspace/Subtask_Openstack_Playbook/ansible.cfg
configured module search path = ['/var/lib/jenkins/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.5/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.5.2 (default, Oct 8 2019, 13:06:37) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
```
ANSIBLE_NOCOWS(/var/lib/jenkins/workspace/Subtask_Openstack_Playbook/ansible.cfg) = True
ANSIBLE_PIPELINING(/var/lib/jenkins/workspace/Subtask_Openstack_Playbook/ansible.cfg) = True
CONDITIONAL_BARE_VARS(/var/lib/jenkins/workspace/Subtask_Openstack_Playbook/ansible.cfg) = False
DEFAULT_FORKS(/var/lib/jenkins/workspace/Subtask_Openstack_Playbook/ansible.cfg) = 10
DEFAULT_HOST_LIST(/var/lib/jenkins/workspace/Subtask_Openstack_Playbook/ansible.cfg) = ['/var/lib/jenkins/workspace/Subtask_Openstack_Playbook/<redacted>/inventory.ini']
DEFAULT_ROLES_PATH(/var/lib/jenkins/workspace/Subtask_Openstack_Playbook/ansible.cfg) = ['/var/lib/jenkins/workspace/Subtask_Openstack_Playbook/roles']
DEFAULT_VAULT_PASSWORD_FILE(/var/lib/jenkins/workspace/Subtask_Openstack_Playbook/ansible.cfg) = /var/lib/jenkins/.vault
MAX_FILE_SIZE_FOR_DIFF(/var/lib/jenkins/workspace/Subtask_Openstack_Playbook/ansible.cfg) = 10485760
```
##### OS / ENVIRONMENT
Not relevant.
##### STEPS TO REPRODUCE
Run something like the following task.
```yaml
- name: Monitor - {{ desired_state }}
uptimerobot:
monitorid: "{{ monitor_id | string }}"
apikey: "{{ uptimerobot_api_key }}"
state: "{{ desired_state }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
The playbook runs without stacktraces and changes the uptimerobot state to `{{ desired_state }}`.
##### ACTUAL RESULTS
```
FAILED - RETRYING: Monitor - paused (291 retries left).Result was: {
"attempts": 10,
"changed": true,
"module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 114, in <module>\n File \"<stdin>\", line 106, in _ansiballz_main\n File \"<stdin>\", line 49, in invoke_module\n File \"/usr/lib/python3.5/imp.py\", line 234, in load_module\n return load_sourc
e(name, filename, file)\n File \"/usr/lib/python3.5/imp.py\", line 170, in load_source\n module = _exec(spec, sys.modules[name])\n File \"<frozen importlib._bootstrap>\", line 626, in _exec\n File \"<frozen importlib._bootstrap_external>\", line 665, in exec_module\n File \"<
frozen importlib._bootstrap>\", line 222, in _call_with_frames_removed\n File \"/tmp/ansible_uptimerobot_payload_gp2xmpdx/__main__.py\", line 151, in <module>\n File \"/tmp/ansible_uptimerobot_payload_gp2xmpdx/__main__.py\", line 131, in main\n File \"/tmp/ansible_uptimerobot_pay
load_gp2xmpdx/__main__.py\", line 83, in checkID\n File \"/usr/lib/python3.5/json/__init__.py\", line 312, in loads\n s.__class__.__name__))\nTypeError: the JSON object must be str, not 'bytes'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1,
"retries": 301
}
```
|
https://github.com/ansible/ansible/issues/66242
|
https://github.com/ansible/ansible/pull/66244
|
11483921f24bdabe6c3d898a7a7b689cfc7006bc
|
eaf879a7a7e7022939906cbeff5818638985cdf3
| 2020-01-07T15:36:57Z |
python
| 2020-02-15T16:10:23Z |
lib/ansible/modules/monitoring/uptimerobot.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
module: uptimerobot
short_description: Pause and start Uptime Robot monitoring
description:
- This module will let you start and pause Uptime Robot Monitoring
author: "Nate Kingsley (@nate-kingsley)"
version_added: "1.9"
requirements:
- Valid Uptime Robot API Key
options:
state:
description:
- Define whether or not the monitor should be running or paused.
required: true
choices: [ "started", "paused" ]
monitorid:
description:
- ID of the monitor to check.
required: true
apikey:
description:
- Uptime Robot API key.
required: true
notes:
- Support for adding and removing monitors and alert contacts has not yet been implemented.
'''
EXAMPLES = '''
# Pause the monitor with an ID of 12345.
- uptimerobot:
monitorid: 12345
apikey: 12345-1234512345
state: paused
# Start the monitor with an ID of 12345.
- uptimerobot:
monitorid: 12345
apikey: 12345-1234512345
state: started
'''
import json
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves.urllib.parse import urlencode
from ansible.module_utils.urls import fetch_url
API_BASE = "https://api.uptimerobot.com/"
API_ACTIONS = dict(
status='getMonitors?',
editMonitor='editMonitor?'
)
API_FORMAT = 'json'
API_NOJSONCALLBACK = 1
CHANGED_STATE = False
SUPPORTS_CHECK_MODE = False
def checkID(module, params):
data = urlencode(params)
full_uri = API_BASE + API_ACTIONS['status'] + data
req, info = fetch_url(module, full_uri)
result = req.read()
jsonresult = json.loads(result)
req.close()
return jsonresult
def startMonitor(module, params):
params['monitorStatus'] = 1
data = urlencode(params)
full_uri = API_BASE + API_ACTIONS['editMonitor'] + data
req, info = fetch_url(module, full_uri)
result = req.read()
jsonresult = json.loads(result)
req.close()
return jsonresult['stat']
def pauseMonitor(module, params):
params['monitorStatus'] = 0
data = urlencode(params)
full_uri = API_BASE + API_ACTIONS['editMonitor'] + data
req, info = fetch_url(module, full_uri)
result = req.read()
jsonresult = json.loads(result)
req.close()
return jsonresult['stat']
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(required=True, choices=['started', 'paused']),
apikey=dict(required=True, no_log=True),
monitorid=dict(required=True)
),
supports_check_mode=SUPPORTS_CHECK_MODE
)
params = dict(
apiKey=module.params['apikey'],
monitors=module.params['monitorid'],
monitorID=module.params['monitorid'],
format=API_FORMAT,
noJsonCallback=API_NOJSONCALLBACK
)
check_result = checkID(module, params)
if check_result['stat'] != "ok":
module.fail_json(
msg="failed",
result=check_result['message']
)
if module.params['state'] == 'started':
monitor_result = startMonitor(module, params)
else:
monitor_result = pauseMonitor(module, params)
module.exit_json(
msg="success",
result=monitor_result
)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,450 |
win_dns_client should properly report status when setting DHCP
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The `win_dns_client` module always shows "changed" when setting the adapter to use DHCP because the PowerShell cmdlets and WMI classes don't have a reliable way to determine that DHCP assigned nameservers are already in use.
A comment in the module suggests checking the registry.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
win_dns_client
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
As is, it's not possible to know whether a change was actually made (or would be made in check mode).
<!--- Paste example playbooks or commands between quotes below -->
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/66450
|
https://github.com/ansible/ansible/pull/66451
|
e3d5dc0ed065de989318e8e1ccc10b574c57056b
|
be26f4916f6bfa07be119b91aadb5fa209f0bf13
| 2020-01-14T00:52:53Z |
python
| 2020-02-17T05:35:54Z |
changelogs/fragments/66451-win_dns_client-dhcp-support.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,450 |
win_dns_client should properly report status when setting DHCP
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The `win_dns_client` module always shows "changed" when setting the adapter to use DHCP because the PowerShell cmdlets and WMI classes don't have a reliable way to determine that DHCP assigned nameservers are already in use.
A comment in the module suggests checking the registry.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
win_dns_client
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
As is, it's not possible to know whether a change was actually made (or would be made in check mode).
<!--- Paste example playbooks or commands between quotes below -->
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/66450
|
https://github.com/ansible/ansible/pull/66451
|
e3d5dc0ed065de989318e8e1ccc10b574c57056b
|
be26f4916f6bfa07be119b91aadb5fa209f0bf13
| 2020-01-14T00:52:53Z |
python
| 2020-02-17T05:35:54Z |
lib/ansible/modules/windows/win_dns_client.ps1
|
#!powershell
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#Requires -Module Ansible.ModuleUtils.Legacy
# FUTURE: check statically-set values via registry so we can determine the difference between DHCP-sourced values and static values? (prevent spurious changed
# notifications on DHCP-sourced values)
Set-StrictMode -Version 2
$ErrorActionPreference = "Stop"
$ConfirmPreference = "None"
Set-Variable -Visibility Public -Option ReadOnly,AllScope,Constant -Name "AddressFamilies" -Value @(
[System.Net.Sockets.AddressFamily]::InterNetworkV6,
[System.Net.Sockets.AddressFamily]::InterNetwork
)
$result = @{changed=$false}
$params = Parse-Args -arguments $args -supports_check_mode $true
Set-Variable -Visibility Public -Option ReadOnly,AllScope,Constant -Name "log_path" -Value (
Get-AnsibleParam $params "log_path"
)
$adapter_names = Get-AnsibleParam $params "adapter_names" -Default "*"
$dns_servers = Get-AnsibleParam $params "dns_servers" -aliases "ipv4_addresses","ip_addresses","addresses" -FailIfEmpty $result
$check_mode = Get-AnsibleParam $params "_ansible_check_mode" -Default $false
Function Write-DebugLog {
Param(
[string]$msg
)
$DebugPreference = "Continue"
$ErrorActionPreference = "Continue"
$date_str = Get-Date -Format u
$msg = "$date_str $msg"
Write-Debug $msg
if($log_path) {
Add-Content $log_path $msg
}
}
Function Get-NetAdapterInfo {
[CmdletBinding()]
Param (
[Parameter(ValueFromPipeline=$true)]
[String]$Name = "*"
)
Process {
if (Get-Command -Name Get-NetAdapter -ErrorAction SilentlyContinue) {
$adapter_info = Get-NetAdapter @PSBoundParameters | Select-Object -Property Name, InterfaceIndex
} else {
# Older hosts 2008/2008R2 don't have Get-NetAdapter, fallback to deprecated Win32_NetworkAdapter
$cim_params = @{
ClassName = "Win32_NetworkAdapter"
Property = "InterfaceIndex", "NetConnectionID"
}
if ($Name.Contains("*")) {
$cim_params.Filter = "NetConnectionID LIKE '$($Name.Replace("*", "%"))'"
} else {
$cim_params.Filter = "NetConnectionID = '$Name'"
}
$adapter_info = Get-CimInstance @cim_params | Select-Object -Property @(
@{Name="Name"; Expression={$_.NetConnectionID}},
@{Name="InterfaceIndex"; Expression={$_.InterfaceIndex}}
)
}
# Need to filter out the adapters that are not IPEnabled; while we are at it, also get the DNS config.
$net_info = $adapter_info | ForEach-Object -Process {
$cim_params = @{
ClassName = "Win32_NetworkAdapterConfiguration"
Filter = "InterfaceIndex = $($_.InterfaceIndex)"
Property = "DNSServerSearchOrder", "IPEnabled"
}
$adapter_config = Get-CimInstance @cim_params
if ($adapter_config.IPEnabled -eq $false) {
return
}
if (Get-Command -Name Get-DnsClientServerAddress -ErrorAction SilentlyContinue) {
$dns_servers = Get-DnsClientServerAddress -InterfaceIndex $_.InterfaceIndex | Select-Object -Property @(
"AddressFamily",
"ServerAddresses"
)
} else {
$dns_servers = @(
[PSCustomObject]@{
AddressFamily = [System.Net.Sockets.AddressFamily]::InterNetwork
ServerAddresses = $adapter_config.DNSServerSearchOrder
},
[PSCustomObject]@{
AddressFamily = [System.Net.Sockets.AddressFamily]::InterNetworkV6
ServerAddresses = @() # WMI does not support IPv6 so we just keep it blank.
}
)
}
[PSCustomObject]@{
Name = $_.Name
InterfaceIndex = $_.InterfaceIndex
DNSServers = $dns_servers
}
}
if (@($net_info).Count -eq 0 -and -not $Name.Contains("*")) {
throw "Get-NetAdapterInfo: Failed to find network adapter(s) that are IP enabled with the name '$Name'"
}
$net_info
}
}
# minimal impl of Set-DnsClientServerAddress for 2008/2008R2
Function Set-DnsClientServerAddressLegacy {
Param(
[int]$InterfaceIndex,
[Array]$ServerAddresses=@(),
[switch]$ResetServerAddresses
)
$cim_params = @{
ClassName = "Win32_NetworkAdapterConfiguration"
Filter = "InterfaceIndex = $InterfaceIndex"
KeyOnly = $true
}
$adapter_config = Get-CimInstance @cim_params
If($ResetServerAddresses) {
$arguments = @{}
}
Else {
$arguments = @{ DNSServerSearchOrder = [string[]]$ServerAddresses }
}
$res = Invoke-CimMethod -InputObject $adapter_config -MethodName SetDNSServerSearchOrder -Arguments $arguments
If($res.ReturnValue -ne 0) {
throw "Set-DnsClientServerAddressLegacy: Error calling SetDNSServerSearchOrder, code $($res.ReturnValue)"
}
}
If(-not $(Get-Command Set-DnsClientServerAddress -ErrorAction SilentlyContinue)) {
New-Alias Set-DnsClientServerAddress Set-DnsClientServerAddressLegacy
}
Function Test-DnsClientMatch {
Param(
[PSCustomObject]$AdapterInfo,
[System.Net.IPAddress[]] $dns_servers
)
Write-DebugLog ("Getting DNS config for adapter {0}" -f $AdapterInfo.Name)
$current_dns = [System.Net.IPAddress[]]($AdapterInfo.DNSServers.ServerAddresses)
Write-DebugLog ("Current DNS settings: {0}" -f ([string[]]$current_dns -join ", "))
if(($null -eq $current_dns) -and ($null -eq $dns_servers)) {
Write-DebugLog "No DNS servers are currently configured and none are specified in the playbook."
return $true
} elseif ($null -eq $current_dns) {
Write-DebugLog "There are currently no dns servers specified, but they should be present."
return $false
} elseif ($null -eq $dns_servers) {
Write-DebugLog "There are currently dns servers specified, but they should be absent."
return $false
}
foreach($address in $current_dns) {
if($address -notin $dns_servers) {
Write-DebugLog "A currently configured DNS server is not in the list specified in the playbook."
return $false
}
}
foreach($address in $dns_servers) {
if($address -notin $current_dns) {
Write-DebugLog "A DNS server specified in the playbook is not currently configured."
return $false
}
}
Write-DebugLog ("Current DNS settings match ({0})." -f ([string[]]$dns_servers -join ", "))
return $true
}
Function Assert-IPAddress {
Param([string] $address)
$addrout = $null
return [System.Net.IPAddress]::TryParse($address, [ref] $addrout)
}
Function Set-DnsClientAddresses
{
Param(
[PSCustomObject]$AdapterInfo,
[System.Net.IPAddress[]] $dns_servers
)
Write-DebugLog ("Setting DNS addresses for adapter {0} to ({1})" -f $AdapterInfo.Name, ([string[]]$dns_servers -join ", "))
If ($dns_servers) {
Set-DnsClientServerAddress -InterfaceIndex $AdapterInfo.InterfaceIndex -ServerAddresses $dns_servers
} Else {
Set-DnsClientServerAddress -InterfaceIndex $AdapterInfo.InterfaceIndex -ResetServerAddress
}
}
if($dns_servers -is [string]) {
if($dns_servers.Length -gt 0) {
$dns_servers = @($dns_servers)
} else {
$dns_servers = @()
}
}
# Using object equals here, to check for exact match (without implicit type conversion)
if([System.Object]::Equals($adapter_names, "*")) {
$adapters = Get-NetAdapterInfo
} else {
$adapters = $adapter_names | Get-NetAdapterInfo
}
Try {
Write-DebugLog ("Validating IP addresses ({0})" -f ($dns_servers -join ", "))
$invalid_addresses = @($dns_servers | Where-Object { -not (Assert-IPAddress $_) })
if($invalid_addresses.Count -gt 0) {
throw "Invalid IP address(es): ({0})" -f ($invalid_addresses -join ", ")
}
foreach($adapter_info in $adapters) {
Write-DebugLog ("Validating adapter name {0}" -f $adapter_info.Name)
if(-not (Test-DnsClientMatch $adapter_info $dns_servers)) {
$result.changed = $true
if(-not $check_mode) {
Set-DnsClientAddresses $adapter_info $dns_servers
} else {
Write-DebugLog "Check mode, skipping"
}
}
}
Exit-Json $result
}
Catch {
$excep = $_
Write-DebugLog "Exception: $($excep | out-string)"
Throw
}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,450 |
win_dns_client should properly report status when setting DHCP
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The `win_dns_client` module always shows "changed" when setting the adapter to use DHCP, because the PowerShell cmdlets and WMI classes don't have a reliable way to determine whether DHCP-assigned nameservers are already in use.
A comment in the module suggests checking the registry.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
win_dns_client
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
As is, it's not possible to know whether a change was actually made (or would be made in check mode).
<!--- Paste example playbooks or commands between quotes below -->
<!--- HINT: You can also paste gist.github.com links for larger files -->
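A rough sketch of the registry-based check hinted at above (the interface GUID is a placeholder, and the assumption that static servers live in `NameServer` while DHCP-assigned ones live in `DhcpNameServer` is illustrative, not a tested implementation):
```yaml
# Sketch only: read the statically configured DNS value for one interface GUID.
# An empty NameServer alongside a populated DhcpNameServer would suggest the
# current servers come from DHCP rather than a static setting.
- name: read static DNS servers for one interface (illustrative)
  win_reg_stat:
    path: HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{00000000-0000-0000-0000-000000000000}
    name: NameServer
  register: static_dns
```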
|
https://github.com/ansible/ansible/issues/66450
|
https://github.com/ansible/ansible/pull/66451
|
e3d5dc0ed065de989318e8e1ccc10b574c57056b
|
be26f4916f6bfa07be119b91aadb5fa209f0bf13
| 2020-01-14T00:52:53Z |
python
| 2020-02-17T05:35:54Z |
lib/ansible/modules/windows/win_dns_client.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Red Hat, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: win_dns_client
version_added: "2.3"
short_description: Configures DNS lookup on Windows hosts
description:
- The C(win_dns_client) module configures the DNS client on Windows network adapters.
options:
adapter_names:
description:
- Adapter name or list of adapter names for which to manage DNS settings ('*' is supported as a wildcard value).
- The adapter name used is the connection caption in the Network Control Panel or the InterfaceAlias of C(Get-DnsClientServerAddress).
type: list
required: yes
dns_servers:
description:
- Single or ordered list of DNS servers (IPv4 and IPv6 addresses) to configure for lookup. An empty list will configure the adapter to use the
DHCP-assigned values on connections where DHCP is enabled, or disable DNS lookup on statically-configured connections.
- IPv6 DNS servers can only be set on Windows Server 2012 or newer, older hosts can only set IPv4 addresses.
- Before Ansible 2.10, use C(ipv4_addresses) instead.
type: list
required: yes
aliases: [ "ipv4_addresses", "ip_addresses", "addresses" ]
notes:
- When setting an empty list of DNS server addresses on an adapter with DHCP enabled, a change will always be registered, since it is not possible to
detect the difference between a DHCP-sourced server value and one that is statically set.
author:
- Matt Davis (@nitzmahone)
'''
EXAMPLES = r'''
- name: Set a single address on the adapter named Ethernet
win_dns_client:
adapter_names: Ethernet
dns_servers: 192.168.34.5
- name: Set multiple lookup addresses on all visible adapters (usually physical adapters that are in the Up state), with debug logging to a file
win_dns_client:
adapter_names: '*'
dns_servers:
- 192.168.34.5
- 192.168.34.6
log_path: C:\dns_log.txt
- name: Set IPv6 DNS servers on the adapter named Ethernet
win_dns_client:
adapter_names: Ethernet
dns_servers:
- '2001:db8::2'
- '2001:db8::3'
- name: Configure all adapters whose names begin with Ethernet to use DHCP-assigned DNS values
win_dns_client:
adapter_names: 'Ethernet*'
dns_servers: []
'''
RETURN = r'''
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 66,450 |
win_dns_client should properly report status when setting DHCP
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The `win_dns_client` module always shows "changed" when setting the adapter to use DHCP, because the PowerShell cmdlets and WMI classes don't have a reliable way to determine whether DHCP-assigned nameservers are already in use.
A comment in the module suggests checking the registry.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
win_dns_client
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
As is, it's not possible to know whether a change was actually made (or would be made in check mode).
<!--- Paste example playbooks or commands between quotes below -->
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/66450
|
https://github.com/ansible/ansible/pull/66451
|
e3d5dc0ed065de989318e8e1ccc10b574c57056b
|
be26f4916f6bfa07be119b91aadb5fa209f0bf13
| 2020-01-14T00:52:53Z |
python
| 2020-02-17T05:35:54Z |
test/integration/targets/win_dns_client/tasks/main.yml
|
---
- set_fact:
get_ip_script: |
$adapter = Get-CimInstance -ClassName Win32_NetworkAdapter -Filter "NetConnectionID='{{ network_adapter_name }}'"
$config = Get-CimInstance -ClassName Win32_NetworkAdapterConfiguration -Filter "Index=$($adapter.DeviceID)"
$ips = $config.DNSServerSearchOrder
if ($ips) {
$config.DNSServerSearchOrder[0]
$config.DNSServerSearchOrder[1]
}
- name: set a single IPv4 address (check mode)
win_dns_client:
adapter_names: '{{ network_adapter_name }}'
ipv4_addresses: 192.168.34.5
register: set_single_check
check_mode: yes
- name: get result of set a single IPv4 address (check mode)
win_shell: '{{ get_ip_script }}'
changed_when: no
register: set_single_actual_check
- name: assert set a single IPv4 address (check mode)
assert:
that:
- set_single_check is changed
- set_single_actual_check.stdout_lines == []
- name: set a single IPv4 address
win_dns_client:
adapter_names: '{{ network_adapter_name }}'
ipv4_addresses: 192.168.34.5
register: set_single
- name: get result of set a single IPv4 address
win_shell: '{{ get_ip_script }}'
changed_when: no
register: set_single_actual
- name: assert set a single IPv4 address
assert:
that:
- set_single is changed
- set_single_actual.stdout_lines == ["192.168.34.5"]
- name: set a single IPv4 address (idempotent)
win_dns_client:
adapter_names: '{{ network_adapter_name }}'
ipv4_addresses: 192.168.34.5
register: set_single_again
- name: assert set a single IPv4 address (idempotent)
assert:
that:
- not set_single_again is changed
- name: change IPv4 address to another value (check mode)
win_dns_client:
adapter_names: '{{ network_adapter_name }}'
ipv4_addresses: 192.168.34.6
register: change_single_check
check_mode: yes
- name: get result of change IPv4 address to another value (check mode)
win_shell: '{{ get_ip_script }}'
changed_when: no
register: check_single_actual_check
- name: assert change IPv4 address to another value (check mode)
assert:
that:
- change_single_check is changed
- check_single_actual_check.stdout_lines == ["192.168.34.5"]
- name: change IPv4 address to another value
win_dns_client:
adapter_names: '{{ network_adapter_name }}'
ipv4_addresses: 192.168.34.6
register: change_single
- name: get result of change IPv4 address to another value
win_shell: '{{ get_ip_script }}'
changed_when: no
register: check_single_actual
- name: assert change IPv4 address to another value
assert:
that:
- change_single is changed
- check_single_actual.stdout_lines == ["192.168.34.6"]
- name: set multiple IPv4 addresses (check mode)
win_dns_client:
adapter_names: '{{ network_adapter_name }}'
ipv4_addresses:
- 192.168.34.7
- 192.168.34.8
register: set_multiple_check
check_mode: yes
- name: get result of set multiple IPv4 addresses (check mode)
win_shell: '{{ get_ip_script }}'
changed_when: no
register: set_multiple_actual_check
- name: assert set multiple IPv4 addresses (check mode)
assert:
that:
- set_multiple_check is changed
- set_multiple_actual_check.stdout_lines == ["192.168.34.6"]
- name: set multiple IPv4 addresses
win_dns_client:
adapter_names: '{{ network_adapter_name }}'
ipv4_addresses:
- 192.168.34.7
- 192.168.34.8
register: set_multiple
- name: get result of set multiple IPv4 addresses
win_shell: '{{ get_ip_script }}'
changed_when: no
register: set_multiple_actual
- name: assert set multiple IPv4 addresses
assert:
that:
- set_multiple is changed
- set_multiple_actual.stdout_lines == ["192.168.34.7", "192.168.34.8"]
- name: set multiple IPv4 addresses (idempotent)
win_dns_client:
adapter_names: '{{ network_adapter_name }}'
ipv4_addresses:
- 192.168.34.7
- 192.168.34.8
register: set_multiple_again
- name: assert set multiple IPv4 addresses (idempotent)
assert:
that:
- not set_multiple_again is changed
- name: reset IPv4 DNS back to DHCP (check mode)
win_dns_client:
adapter_names: '{{ network_adapter_name }}'
ipv4_addresses: []
register: set_dhcp_check
check_mode: yes
- name: get result of reset IPv4 DNS back to DHCP (check mode)
win_shell: '{{ get_ip_script }}'
changed_when: no
register: set_dhcp_actual_check
- name: assert reset IPv4 DNS back to DHCP (check mode)
assert:
that:
- set_dhcp_check is changed
- set_dhcp_actual_check.stdout_lines == ["192.168.34.7", "192.168.34.8"]
- name: reset IPv4 DNS back to DHCP
win_dns_client:
adapter_names: '{{ network_adapter_name }}'
ipv4_addresses: []
register: set_dhcp
- name: get result of reset IPv4 DNS back to DHCP
win_shell: '{{ get_ip_script }}'
changed_when: no
register: set_dhcp_actual
- name: assert reset IPv4 DNS back to DHCP
assert:
that:
- set_dhcp is changed
- set_dhcp_actual.stdout_lines == []
# Legacy WMI does not support setting IPv6 addresses so we can only test this on newer hosts that have the new cmdlets
- name: check if server supports IPv6
win_shell: if (Get-Command -Name Get-NetAdapter -ErrorAction SilentlyContinue) { $true } else { $false }
changed_when: no
register: new_os
- name: run IPv6 tests
when: new_os.stdout | trim | bool
block:
- name: set IPv6 DNS address
win_dns_client:
adapter_names: '{{ network_adapter_name }}'
dns_servers:
- 2001:db8::1
- 2001:db8::2
register: set_ipv6
- name: get result of set IPv6 DNS address
win_shell: (Get-DnsClientServerAddress -InterfaceAlias '{{ network_adapter_name }}' -AddressFamily IPv6).ServerAddresses
changed_when: no
register: set_ipv6_actual
- name: assert set IPv6 DNS address
assert:
that:
- set_ipv6 is changed
- set_ipv6_actual.stdout_lines == ['2001:db8::1', '2001:db8::2']
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,483 |
hcloud dynamic inventory network filtering
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
@LKaemmerling As discussed in https://github.com/ansible/ansible/issues/59133#issuecomment-515133800, you said there would be network filtering in the hcloud dynamic inventory plugin (https://docs.ansible.com/ansible/latest/plugins/inventory/hcloud.html) in Ansible 2.9. But this functionality still doesn't appear. Could it be added? There are many cases where you only want to run ansible commands against servers in some networks only.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
hcloud dynamic inventory
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/64483
|
https://github.com/ansible/ansible/pull/67453
|
e76630c4e4039ccf4b1ff1d0846962038bd25a41
|
62cc120dced87b1bf75cde8578ef17457b218c7c
| 2019-11-06T06:34:51Z |
python
| 2020-02-17T13:39:37Z |
lib/ansible/plugins/inventory/hcloud.py
|
# Copyright (c) 2019 Hetzner Cloud GmbH <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = r"""
name: hcloud
plugin_type: inventory
author:
- Lukas Kaemmerling (@lkaemmerling)
short_description: Ansible dynamic inventory plugin for the Hetzner Cloud.
version_added: "2.8"
requirements:
- python >= 2.7
- hcloud-python >= 1.0.0
description:
- Reads inventories from the Hetzner Cloud API.
- Uses a YAML configuration file that ends with hcloud.(yml|yaml).
extends_documentation_fragment:
- constructed
options:
plugin:
description: marks this as an instance of the "hcloud" plugin
required: true
choices: ["hcloud"]
token:
description: The Hetzner Cloud API Token.
required: true
env:
- name: HCLOUD_TOKEN
connect_with:
description: Connect to the server using the value from this field.
default: public_ipv4
type: str
choices:
- public_ipv4
- hostname
- ipv4_dns_ptr
locations:
description: Populate inventory with instances in this location.
default: []
type: list
required: false
types:
description: Populate inventory with instances with this type.
default: []
type: list
required: false
images:
description: Populate inventory with instances with this image name, only available for system images.
default: []
type: list
required: false
label_selector:
description: Populate inventory with instances with this label.
default: ""
type: str
required: false
"""
EXAMPLES = r"""
# Minimal example. `HCLOUD_TOKEN` is exposed in environment.
plugin: hcloud
# Example with locations, types, groups and token
plugin: hcloud
token: foobar
locations:
- nbg1
types:
- cx11
# Group by a location with prefix e.g. "hcloud_location_nbg1"
# and image_os_flavor without prefix and separator e.g. "ubuntu"
# and status with prefix e.g. "server_status_running"
plugin: hcloud
keyed_groups:
- key: location
prefix: hcloud_location
- key: image_os_flavor
separator: ""
- key: status
prefix: server_status
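# Example with a label selector (illustrative sketch; assumes servers are labeled env=prod)
plugin: hcloud
label_selector: env=prod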
"""
import os
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_native
from ansible.plugins.inventory import BaseInventoryPlugin, Constructable
from ansible.release import __version__
try:
from hcloud import hcloud
except ImportError:
raise AnsibleError("The Hetzner Cloud dynamic inventory plugin requires hcloud-python.")
class InventoryModule(BaseInventoryPlugin, Constructable):
NAME = "hcloud"
def _configure_hcloud_client(self):
self.api_token = self.get_option("token")
if self.api_token is None:
raise AnsibleError(
"Please specify a token, via the option token or via environment variable HCLOUD_TOKEN")
self.endpoint = os.getenv("HCLOUD_ENDPOINT") or "https://api.hetzner.cloud/v1"
self.client = hcloud.Client(token=self.api_token,
api_endpoint=self.endpoint,
application_name="ansible-inventory",
application_version=__version__)
def _test_hcloud_token(self):
try:
# We test the API token against the locations API, because it is the API with the smallest result
# and it is not controllable by the customer.
self.client.locations.get_all()
except hcloud.APIException:
raise AnsibleError("Invalid Hetzner Cloud API Token.")
def _get_servers(self):
if len(self.get_option("label_selector")) > 0:
self.servers = self.client.servers.get_all(label_selector=self.get_option("label_selector"))
else:
self.servers = self.client.servers.get_all()
def _filter_servers(self):
if self.get_option("locations"):
tmp = []
for server in self.servers:
if server.datacenter.location.name in self.get_option("locations"):
tmp.append(server)
self.servers = tmp
if self.get_option("types"):
tmp = []
for server in self.servers:
if server.server_type.name in self.get_option("types"):
tmp.append(server)
self.servers = tmp
if self.get_option("images"):
tmp = []
for server in self.servers:
if server.image is not None and server.image.os_flavor in self.get_option("images"):
tmp.append(server)
self.servers = tmp
def _set_server_attributes(self, server):
self.inventory.set_variable(server.name, "id", to_native(server.id))
self.inventory.set_variable(server.name, "name", to_native(server.name))
self.inventory.set_variable(server.name, "status", to_native(server.status))
self.inventory.set_variable(server.name, "type", to_native(server.server_type.name))
# Network
self.inventory.set_variable(server.name, "ipv4", to_native(server.public_net.ipv4.ip))
self.inventory.set_variable(server.name, "ipv6_network", to_native(server.public_net.ipv6.network))
self.inventory.set_variable(server.name, "ipv6_network_mask", to_native(server.public_net.ipv6.network_mask))
if self.get_option("connect_with") == "public_ipv4":
self.inventory.set_variable(server.name, "ansible_host", to_native(server.public_net.ipv4.ip))
elif self.get_option("connect_with") == "hostname":
self.inventory.set_variable(server.name, "ansible_host", to_native(server.name))
elif self.get_option("connect_with") == "ipv4_dns_ptr":
self.inventory.set_variable(server.name, "ansible_host", to_native(server.public_net.ipv4.dns_ptr))
# Server Type
if server.image is not None and server.image.name is not None:
self.inventory.set_variable(server.name, "server_type", to_native(server.image.name))
else:
self.inventory.set_variable(server.name, "server_type", to_native("No Image name found."))
# Datacenter
self.inventory.set_variable(server.name, "datacenter", to_native(server.datacenter.name))
self.inventory.set_variable(server.name, "location", to_native(server.datacenter.location.name))
# Image
if server.image is not None:
self.inventory.set_variable(server.name, "image_id", to_native(server.image.id))
self.inventory.set_variable(server.name, "image_os_flavor", to_native(server.image.os_flavor))
if server.image.name is not None:
self.inventory.set_variable(server.name, "image_name", to_native(server.image.name))
else:
self.inventory.set_variable(server.name, "image_name", to_native(server.image.description))
else:
self.inventory.set_variable(server.name, "image_id", to_native("No Image ID found"))
self.inventory.set_variable(server.name, "image_name", to_native("No Image Name found"))
self.inventory.set_variable(server.name, "image_os_flavor", to_native("No Image OS Flavor found"))
# Labels
self.inventory.set_variable(server.name, "labels", dict(server.labels))
def verify_file(self, path):
"""Return the possibly of a file being consumable by this plugin."""
return (
super(InventoryModule, self).verify_file(path) and
path.endswith((self.NAME + ".yaml", self.NAME + ".yml"))
)
def parse(self, inventory, loader, path, cache=True):
super(InventoryModule, self).parse(inventory, loader, path, cache)
self._read_config_data(path)
self._configure_hcloud_client()
self._test_hcloud_token()
self._get_servers()
self._filter_servers()
# Add a top group 'hcloud'
self.inventory.add_group(group="hcloud")
for server in self.servers:
self.inventory.add_host(server.name, group="hcloud")
self._set_server_attributes(server)
# Use constructed if applicable
strict = self.get_option('strict')
# Composed variables
self._set_composite_vars(self.get_option('compose'), self.inventory.get_host(server.name).get_vars(), server.name, strict=strict)
# Complex groups based on jinja2 conditionals, hosts that meet the conditional are added to group
self._add_host_to_composed_groups(self.get_option('groups'), {}, server.name, strict=strict)
# Create groups based on variable values and add the corresponding hosts to it
self._add_host_to_keyed_groups(self.get_option('keyed_groups'), {}, server.name, strict=strict)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,478 |
b64encode(encoding='utf-16-le') or b64decode(encoding='utf-16-le') broken
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
`"{{ 'Test' | b64encode(encoding='utf-16-le') | b64decode(encoding='utf-16-le') }}"` doesn't return `'Test'`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/plugins/filter/core.py
lib/ansible/module_utils/_text.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.4
config file = /mnt/e/ansible_win/ansible.cfg
configured module search path = [u'/home/calbertsen/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/calbertsen/.local/lib/python2.7/site-packages/ansible
executable location = /home/calbertsen/.local/bin/ansible
python version = 2.7.17 (default, Nov 7 2019, 10:07:09) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Local execution on a Ubuntu host will do
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Just execute the following playbook
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
tasks:
- debug:
msg: "{{ 'Test' | b64encode(encoding='utf-16-le') | b64decode(encoding='utf-16-le') }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Ansible to output
```paste below
PLAY [localhost] ****************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************
ok: [localhost]
TASK [debug] ********************************************************************************************************
ok: [localhost] => {
"msg": "Test"
}
PLAY RECAP **********************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Ansible writes:
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [localhost] ****************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************
ok: [localhost]
TASK [debug] ********************************************************************************************************
ok: [localhost] => {
"msg": "敔瑳"
}
PLAY RECAP **********************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0
```
|
https://github.com/ansible/ansible/issues/67478
|
https://github.com/ansible/ansible/pull/67488
|
36ed3321fd29ff578885e9c800288adda316dcb6
|
423a900791d2cd2494a3407e4cfba62623e758cf
| 2020-02-17T15:31:51Z |
python
| 2020-02-17T19:31:03Z |
docs/docsite/rst/user_guide/playbooks_filters.rst
|
.. _playbooks_filters:
*******
Filters
*******
Filters let you transform data inside template expressions. This page documents mainly Ansible-specific filters, but you can use any of the standard filters shipped with Jinja2 - see the list of :ref:`builtin filters <jinja2:builtin-filters>` in the official Jinja2 template documentation. You can also use :ref:`Python methods <jinja2:python-methods>` to manipulate variables. A few useful filters are typically added with each new Ansible release. The development documentation shows
how to create custom Ansible filters as plugins, though we generally welcome new filters into the core code so everyone can use them.
Templating happens on the Ansible controller, **not** on the target host, so filters execute on the controller and manipulate data locally.
.. contents::
:local:
Handling undefined variables
============================
Filters can help you manage missing or undefined variables by providing defaults or making some variables optional. If you configure Ansible to ignore most undefined variables, you can mark some variables as requiring values with the ``mandatory`` filter.
.. _defaulting_undefined_variables:
Providing default values
------------------------
You can provide default values for variables directly in your templates using the Jinja2 'default' filter. This is often a better approach than failing if a variable is not defined::
{{ some_variable | default(5) }}
In the above example, if the variable 'some_variable' is not defined, Ansible uses the default value 5, rather than raising an "undefined variable" error and failing. If you are working within a role, you can also add a ``defaults/main.yml`` to define the default values for variables in your role.
Beginning in version 2.8, attempting to access an attribute of an Undefined value in Jinja will return another Undefined value, rather than throwing an error immediately. This means that you can now simply use
a default with a value in a nested data structure (for example, :code:`{{ foo.bar.baz | default('DEFAULT') }}`) when you do not know if the intermediate values are defined.
If you want to use the default value when variables evaluate to false or an empty string you have to set the second parameter to ``true``::
{{ lookup('env', 'MY_USER') | default('admin', true) }}
.. _omitting_undefined_variables:
Making variables optional
-------------------------
In some cases, you want to make a variable optional. For example, if you want to use a system default for some items and control the value for others. To make a variable optional, set the default value to the special variable ``omit``::
- name: touch files with an optional mode
file:
dest: "{{ item.path }}"
state: touch
mode: "{{ item.mode | default(omit) }}"
loop:
- path: /tmp/foo
- path: /tmp/bar
- path: /tmp/baz
mode: "0444"
In this example, the default mode for the files ``/tmp/foo`` and ``/tmp/bar`` is determined by the umask of the system. Ansible does not send a value for ``mode``. Only the third file, ``/tmp/baz``, receives the `mode=0444` option.
.. note:: If you are "chaining" additional filters after the ``default(omit)`` filter, you should instead do something like this:
``"{{ foo | default(None) | some_filter or omit }}"``. In this example, the default ``None`` (Python null) value will cause the
later filters to fail, which will trigger the ``or omit`` portion of the logic. Using ``omit`` in this manner is very specific to
the later filters you're chaining though, so be prepared for some trial and error if you do this.
.. _forcing_variables_to_be_defined:
Defining mandatory values
-------------------------
If you configure Ansible to ignore undefined variables, you may want to define some values as mandatory. By default, Ansible fails if a variable in your playbook or command is undefined. You can configure Ansible to allow undefined variables by setting :ref:`DEFAULT_UNDEFINED_VAR_BEHAVIOR` to ``false``. In that case, you may want to require some variables to be defined. You can do this with::
{{ variable | mandatory }}
The variable value will be used as is, but the template evaluation will raise an error if it is undefined.
Defining different values for true/false/null
=============================================
You can create a test, then define one value to use when the test returns true and another when the test returns false (new in version 1.9)::
{{ (name == "John") | ternary('Mr','Ms') }}
In addition, you can define one value to use when the test returns true, one value on false, and a third value on null (new in version 2.8)::
{{ enabled | ternary('no shutdown', 'shutdown', omit) }}
Manipulating data types
=======================
Sometimes a variables file or registered variable contains a dictionary when your playbook needs a list. Sometimes you have a list when your template needs a dictionary. These filters help you transform these data types.
.. _dict_filter:
Transforming dictionaries into lists
------------------------------------
.. versionadded:: 2.6
To turn a dictionary into a list of items, suitable for looping, use `dict2items`::
{{ dict | dict2items }}
Which turns::
tags:
Application: payment
Environment: dev
into::
- key: Application
value: payment
- key: Environment
value: dev
.. versionadded:: 2.8
``dict2items`` accepts 2 keyword arguments, ``key_name`` and ``value_name`` that allow configuration of the names of the keys to use for the transformation::
{{ files | dict2items(key_name='file', value_name='path') }}
Which turns::
files:
users: /etc/passwd
groups: /etc/group
into::
- file: users
path: /etc/passwd
- file: groups
path: /etc/group
Transforming lists into dictionaries
------------------------------------
.. versionadded:: 2.7
This filter turns a list of dicts with 2 keys, into a dict, mapping the values of those keys into ``key: value`` pairs::
{{ tags | items2dict }}
Which turns::
tags:
- key: Application
value: payment
- key: Environment
value: dev
into::
Application: payment
Environment: dev
This is the reverse of the ``dict2items`` filter.
``items2dict`` accepts 2 keyword arguments, ``key_name`` and ``value_name`` that allow configuration of the names of the keys to use for the transformation::
{{ tags | items2dict(key_name='key', value_name='value') }}
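For example, applying the filter with those arguments to the list produced by the ``files`` example above (held in a hypothetical ``file_list`` variable) reverses the transformation::
{{ file_list | items2dict(key_name='file', value_name='path') }}
Which turns::
- file: users
path: /etc/passwd
- file: groups
path: /etc/group
back into::
users: /etc/passwd
groups: /etc/group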
Discovering the data type
-------------------------
.. versionadded:: 2.3
If you are unsure of the underlying Python type of a variable, you can use the ``type_debug`` filter to display it. This is useful in debugging when you need a particular type of variable::
{{ myvar | type_debug }}
Forcing the data type
---------------------
You can cast values as certain types. For example, if you expect the input "True" from a :ref:`vars_prompt <playbooks_prompts>` and you want Ansible to recognize it as a Boolean value instead of a string::
- debug:
msg: test
when: some_string_value | bool
.. versionadded:: 1.6
.. _filters_for_formatting_data:
Controlling data formats: YAML and JSON
=======================================
The following filters will take a data structure in a template and manipulate it or switch it from or to JSON or YAML format. These are occasionally useful for debugging::
{{ some_variable | to_json }}
{{ some_variable | to_yaml }}
For human readable output, you can use::
{{ some_variable | to_nice_json }}
{{ some_variable | to_nice_yaml }}
You can change the indentation of either format::
{{ some_variable | to_nice_json(indent=2) }}
{{ some_variable | to_nice_yaml(indent=8) }}
The ``to_yaml`` and ``to_nice_yaml`` filters use the `PyYAML library`_ which has a default 80 character string length limit. That causes an unexpected line break after the 80th character (if there is a space after the 80th character).
To avoid this behavior and generate long lines, use the ``width`` option. You must use a hardcoded number to define the width, instead of a construction like ``float("inf")``, because the filter does not support proxying Python functions. For example::
{{ some_variable | to_yaml(indent=8, width=1337) }}
{{ some_variable | to_nice_yaml(indent=8, width=1337) }}
The filter does support passing through other YAML parameters. For a full list, see the `PyYAML documentation`_.
If you are reading in some already formatted data::
{{ some_variable | from_json }}
{{ some_variable | from_yaml }}
for example::
tasks:
- shell: cat /some/path/to/file.json
register: result
- set_fact:
myvar: "{{ result.stdout | from_json }}"
.. versionadded:: 2.7
To parse multi-document YAML strings, the ``from_yaml_all`` filter is provided.
The ``from_yaml_all`` filter will return a generator of parsed YAML documents.
for example::
tasks:
- shell: cat /some/path/to/multidoc-file.yaml
register: result
- debug:
msg: '{{ item }}'
loop: '{{ result.stdout | from_yaml_all | list }}'
Combining and selecting data
============================
These filters let you manipulate data from multiple sources and types and manage large data structures, giving you precise control over complex data.
.. _zip_filter:
Combining items from multiple lists: zip and zip_longest
--------------------------------------------------------
.. versionadded:: 2.3
To get a list combining the elements of other lists use ``zip``::
- name: give me list combo of two lists
debug:
msg: "{{ [1,2,3,4,5] | zip(['a','b','c','d','e','f']) | list }}"
- name: give me shortest combo of two lists
debug:
msg: "{{ [1,2,3] | zip(['a','b','c','d','e','f']) | list }}"
To always exhaust all lists, use ``zip_longest``::
- name: give me longest combo of three lists , fill with X
debug:
msg: "{{ [1,2,3] | zip_longest(['a','b','c','d','e','f'], [21, 22, 23], fillvalue='X') | list }}"
Similarly to the output of the ``items2dict`` filter mentioned above, these filters can be used to construct a ``dict``::
{{ dict(keys_list | zip(values_list)) }}
Which turns::
keys_list:
- one
- two
values_list:
- apple
- orange
into::
one: apple
two: orange
Combining objects and subelements
---------------------------------
.. versionadded:: 2.7
The ``subelements`` filter produces a product of an object and the subelement values of that object, similar to the ``subelements`` lookup. This lets you specify individual subelements to use in a template. For example, this expression::
{{ users | subelements('groups', skip_missing=True) }}
turns this data::
users:
- name: alice
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
groups:
- wheel
- docker
- name: bob
authorized:
- /tmp/bob/id_rsa.pub
groups:
- docker
Into this data::
-
- name: alice
groups:
- wheel
- docker
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
- wheel
-
- name: alice
groups:
- wheel
- docker
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
- docker
-
- name: bob
authorized:
- /tmp/bob/id_rsa.pub
groups:
- docker
- docker
You can use the transformed data with ``loop`` to iterate over the same subelement for multiple objects::
- name: Set authorized ssh key, extracting just that data from 'users'
authorized_key:
user: "{{ item.0.name }}"
key: "{{ lookup('file', item.1) }}"
loop: "{{ users | subelements('authorized') }}"
.. _combine_filter:
Combining hashes/dictionaries
-----------------------------
.. versionadded:: 2.0
The ``combine`` filter allows hashes to be merged.
For example, the following would override keys in one hash::
{{ {'a':1, 'b':2} | combine({'b':3}) }}
The resulting hash would be::
{'a':1, 'b':3}
The filter can also take multiple arguments to merge::
{{ a | combine(b, c, d) }}
{{ [a, b, c, d] | combine }}
In this case, keys in ``d`` would override those in ``c``, which would
override those in ``b``, and so on.
The filter also accepts two optional parameters: ``recursive`` and ``list_merge``.
recursive
Is a boolean, default to ``False``.
Should the ``combine`` recursively merge nested hashes.
Note: It does **not** depend on the value of the ``hash_behaviour`` setting in ``ansible.cfg``.
list_merge
Is a string, its possible values are ``replace`` (default), ``keep``, ``append``, ``prepend``, ``append_rp`` or ``prepend_rp``.
It modifies the behaviour of ``combine`` when the hashes to merge contain arrays/lists.
.. code-block:: yaml
default:
a:
x: default
y: default
b: default
c: default
patch:
a:
y: patch
z: patch
b: patch
If ``recursive=False`` (the default), nested hashes aren't merged::
{{ default | combine(patch) }}
This would result in::
a:
y: patch
z: patch
b: patch
c: default
If ``recursive=True``, ``combine`` recurses into nested hashes and merges their keys::
{{ default | combine(patch, recursive=True) }}
This would result in::
a:
x: default
y: patch
z: patch
b: patch
c: default
If ``list_merge='replace'`` (the default), arrays from the right hash will "replace" the ones in the left hash::
default:
a:
- default
patch:
a:
- patch
.. code-block:: jinja
{{ default | combine(patch) }}
This would result in::
a:
- patch
If ``list_merge='keep'``, arrays from the left hash will be kept::
{{ default | combine(patch, list_merge='keep') }}
This would result in::
a:
- default
If ``list_merge='append'``, arrays from the right hash will be appended to the ones in the left hash::
{{ default | combine(patch, list_merge='append') }}
This would result in::
a:
- default
- patch
If ``list_merge='prepend'``, arrays from the right hash will be prepended to the ones in the left hash::
{{ default | combine(patch, list_merge='prepend') }}
This would result in::
a:
- patch
- default
If ``list_merge='append_rp'``, arrays from the right hash will be appended to the ones in the left hash.
Elements of arrays in the left hash that are also in the corresponding array of the right hash will be removed ("rp" stands for "remove present").
Duplicate elements that aren't in both hashes are kept::
default:
a:
- 1
- 1
- 2
- 3
patch:
a:
- 3
- 4
- 5
- 5
.. code-block:: jinja
{{ default | combine(patch, list_merge='append_rp') }}
This would result in::
a:
- 1
- 1
- 2
- 3
- 4
- 5
- 5
If ``list_merge='prepend_rp'``, the behavior is similar to the one for ``append_rp``, but elements of arrays in the right hash are prepended::
{{ default | combine(patch, list_merge='prepend_rp') }}
This would result in::
a:
- 3
- 4
- 5
- 5
- 1
- 1
- 2
``recursive`` and ``list_merge`` can be used together::
default:
a:
a':
x: default_value
y: default_value
list:
- default_value
b:
- 1
- 1
- 2
- 3
patch:
a:
a':
y: patch_value
z: patch_value
list:
- patch_value
b:
- 3
- 4
- 4
- key: value
.. code-block:: jinja
{{ default | combine(patch, recursive=True, list_merge='append_rp') }}
This would result in::
a:
a':
x: default_value
y: patch_value
z: patch_value
list:
- default_value
- patch_value
b:
- 1
- 1
- 2
- 3
- 4
- 4
- key: value
.. _extract_filter:
Selecting values from arrays or hashtables
-------------------------------------------
.. versionadded:: 2.1
The `extract` filter is used to map from a list of indices to a list of
values from a container (hash or array)::
{{ [0,2] | map('extract', ['x','y','z']) | list }}
{{ ['x','y'] | map('extract', {'x': 42, 'y': 31}) | list }}
The results of the above expressions would be::
['x', 'z']
[42, 31]
The filter can take another argument::
{{ groups['x'] | map('extract', hostvars, 'ec2_ip_address') | list }}
This takes the list of hosts in group 'x', looks them up in `hostvars`,
and then looks up the `ec2_ip_address` of the result. The final result
is a list of IP addresses for the hosts in group 'x'.
The third argument to the filter can also be a list, for a recursive
lookup inside the container::
{{ ['a'] | map('extract', b, ['x','y']) | list }}
This would return a list containing the value of `b['a']['x']['y']`.
Combining lists
---------------
This set of filters returns a list of combined lists.
permutations
^^^^^^^^^^^^
To get permutations of a list::
- name: give me largest permutations (order matters)
debug:
msg: "{{ [1,2,3,4,5] | permutations | list }}"
- name: give me permutations of sets of three
debug:
msg: "{{ [1,2,3,4,5] | permutations(3) | list }}"
combinations
^^^^^^^^^^^^
Combinations always require a set size::
- name: give me combinations for sets of two
debug:
msg: "{{ [1,2,3,4,5] | combinations(2) | list }}"
Also see the :ref:`zip_filter`
products
^^^^^^^^
The product filter returns the `cartesian product <https://docs.python.org/3/library/itertools.html#itertools.product>`_ of the input iterables.
This is roughly equivalent to nested for-loops in a generator expression.
For example::
- name: generate multiple hostnames
debug:
msg: "{{ ['foo', 'bar'] | product(['com']) | map('join', '.') | join(',') }}"
This would result in::
{ "msg": "foo.com,bar.com" }
.. _json_query_filter:
Selecting JSON data: JSON queries
---------------------------------
Sometimes you end up with a complex data structure in JSON format and you need to extract only a small set of data within it. The **json_query** filter lets you query a complex JSON structure and iterate over it using a loop structure.
.. note:: This filter is built upon **jmespath**, and you can use the same syntax. For examples, see `jmespath examples <http://jmespath.org/examples.html>`_.
Consider this data structure::
{
"domain_definition": {
"domain": {
"cluster": [
{
"name": "cluster1"
},
{
"name": "cluster2"
}
],
"server": [
{
"name": "server11",
"cluster": "cluster1",
"port": "8080"
},
{
"name": "server12",
"cluster": "cluster1",
"port": "8090"
},
{
"name": "server21",
"cluster": "cluster2",
"port": "9080"
},
{
"name": "server22",
"cluster": "cluster2",
"port": "9090"
}
],
"library": [
{
"name": "lib1",
"target": "cluster1"
},
{
"name": "lib2",
"target": "cluster2"
}
]
}
}
}
To extract all clusters from this structure, you can use the following query::
- name: "Display all cluster names"
debug:
var: item
loop: "{{ domain_definition | json_query('domain.cluster[*].name') }}"
Same thing for all server names::
- name: "Display all server names"
debug:
var: item
loop: "{{ domain_definition | json_query('domain.server[*].name') }}"
This example shows ports from cluster1::
- name: "Display all ports from cluster1"
debug:
var: item
loop: "{{ domain_definition | json_query(server_name_cluster1_query) }}"
vars:
server_name_cluster1_query: "domain.server[?cluster=='cluster1'].port"
.. note:: You can use a variable to make the query more readable.
Or, alternatively print out the ports in a comma separated string::
- name: "Display all ports from cluster1 as a string"
debug:
msg: "{{ domain_definition | json_query('domain.server[?cluster==`cluster1`].port') | join(', ') }}"
.. note:: Here, quoting literals using backticks avoids escaping quotes and maintains readability.
Or, using YAML `single quote escaping <https://yaml.org/spec/current.html#id2534365>`_::
- name: "Display all ports from cluster1"
debug:
var: item
loop: "{{ domain_definition | json_query('domain.server[?cluster==''cluster1''].port') }}"
.. note:: Escaping single quotes within single quotes in YAML is done by doubling the single quote.
In this example, we get a hash map with all ports and names of a cluster::
- name: "Display all server ports and names from cluster1"
debug:
var: item
loop: "{{ domain_definition | json_query(server_name_cluster1_query) }}"
vars:
server_name_cluster1_query: "domain.server[?cluster=='cluster2'].{name: name, port: port}"
Randomizing data
================
When you need a randomly generated value, use one of these filters.
.. _random_mac_filter:
Random MAC addresses
--------------------
.. versionadded:: 2.6
This filter can be used to generate a random MAC address from a string prefix.
To get a random MAC address from a string prefix starting with '52:54:00'::
"{{ '52:54:00' | random_mac }}"
# => '52:54:00:ef:1c:03'
Note that if anything is wrong with the prefix string, the filter will issue an error.
.. versionadded:: 2.9
As of Ansible version 2.9, you can also initialize the random number generator from a seed. This way, you can create random-but-idempotent MAC addresses::
"{{ '52:54:00' | random_mac(seed=inventory_hostname) }}"
.. _random_filter:
Random items or numbers
-----------------------
This filter can be used similarly to the default Jinja2 random filter (returning a random item from a sequence of
items), but can also generate a random number based on a range.
To get a random item from a list::
"{{ ['a','b','c'] | random }}"
# => 'c'
To get a random number between 0 and a specified number::
"{{ 60 | random }} * * * * root /script/from/cron"
# => '21 * * * * root /script/from/cron'
Get a random number from 0 to 100 but in steps of 10::
{{ 101 | random(step=10) }}
# => 70
Get a random number from 1 to 100 but in steps of 10::
{{ 101 | random(1, 10) }}
# => 31
{{ 101 | random(start=1, step=10) }}
# => 51
It's also possible to initialize the random number generator from a seed. This way, you can create random-but-idempotent numbers::
"{{ 60 | random(seed=inventory_hostname) }} * * * * root /script/from/cron"
Shuffling a list
----------------
This filter will randomize an existing list, giving a different order every invocation.
To get a random list from an existing list::
{{ ['a','b','c'] | shuffle }}
# => ['c','a','b']
{{ ['a','b','c'] | shuffle }}
# => ['b','c','a']
It's also possible to shuffle a list idempotently. All you need is a seed::
{{ ['a','b','c'] | shuffle(seed=inventory_hostname) }}
# => ['b','a','c']
The shuffle filter returns a list whenever possible. If you use it with a non-'listable' item, the filter does nothing.
.. _list_filters:
List filters
============
These filters all operate on list variables.
To get the minimum value from list of numbers::
{{ list1 | min }}
To get the maximum value from a list of numbers::
{{ [3, 4, 2] | max }}
.. versionadded:: 2.5
Flatten a list (same thing the `flatten` lookup does)::
{{ [3, [4, 2] ] | flatten }}
Flatten only the first level of a list (akin to the `items` lookup)::
{{ [3, [4, [2]] ] | flatten(levels=1) }}
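For example, the two expressions above would produce::
{{ [3, [4, 2] ] | flatten }}
# => [3, 4, 2]
{{ [3, [4, [2]] ] | flatten(levels=1) }}
# => [3, 4, [2]]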
.. _set_theory_filters:
Set theory filters
==================
These functions return a unique set from sets or lists.
.. versionadded:: 1.4
To get a unique set from a list::
{{ list1 | unique }}
To get a union of two lists::
{{ list1 | union(list2) }}
To get the intersection of 2 lists (unique list of all items in both)::
{{ list1 | intersect(list2) }}
To get the difference of 2 lists (items in 1 that don't exist in 2)::
{{ list1 | difference(list2) }}
To get the symmetric difference of 2 lists (items exclusive to each list)::
{{ list1 | symmetric_difference(list2) }}
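As a concrete illustration, assuming two hypothetical lists ``[1, 2, 5, 1, 3, 4, 10]`` and ``[1, 2, 3, 4, 5, 11, 99]``, these filters would return::
{{ [1, 2, 5, 1, 3, 4, 10] | unique }}
# => [1, 2, 5, 3, 4, 10]
{{ [1, 2, 5, 1, 3, 4, 10] | union([1, 2, 3, 4, 5, 11, 99]) }}
# => [1, 2, 5, 3, 4, 10, 11, 99]
{{ [1, 2, 5, 1, 3, 4, 10] | intersect([1, 2, 3, 4, 5, 11, 99]) }}
# => [1, 2, 5, 3, 4]
{{ [1, 2, 5, 1, 3, 4, 10] | difference([1, 2, 3, 4, 5, 11, 99]) }}
# => [10]
{{ [1, 2, 5, 1, 3, 4, 10] | symmetric_difference([1, 2, 3, 4, 5, 11, 99]) }}
# => [10, 11, 99]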
.. _math_stuff:
Math filters
============
.. versionadded:: 1.9
Get the logarithm (default is e)::
{{ myvar | log }}
Get the base 10 logarithm::
{{ myvar | log(10) }}
Give me the power of 2! (or 5)::
{{ myvar | pow(2) }}
{{ myvar | pow(5) }}
Square root, or the 5th::
{{ myvar | root }}
{{ myvar | root(5) }}
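For instance, with a few sample numbers::
{{ 8 | log(2) }}
# => 3.0
{{ 2 | pow(5) }}
# => 32.0
{{ 9 | root }}
# => 3.0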
Note that Jinja2 already provides some math functions, such as abs() and round().
Network filters
===============
These filters help you with common network tasks.
.. _ipaddr_filter:
IP address filters
------------------
.. versionadded:: 1.9
To test if a string is a valid IP address::
{{ myvar | ipaddr }}
You can also require a specific IP protocol version::
{{ myvar | ipv4 }}
{{ myvar | ipv6 }}
IP address filter can also be used to extract specific information from an IP
address. For example, to get the IP address itself from a CIDR, you can use::
{{ '192.0.2.1/24' | ipaddr('address') }}
More information about ``ipaddr`` filter and complete usage guide can be found
in :ref:`playbooks_filters_ipaddr`.
.. _network_filters:
Network CLI filters
-------------------
.. versionadded:: 2.4
To convert the output of a network device CLI command into structured JSON
output, use the ``parse_cli`` filter::
{{ output | parse_cli('path/to/spec') }}
The ``parse_cli`` filter will load the spec file and pass the command output
through it, returning JSON output. The YAML spec file defines how to parse the CLI output.
The spec file should be valid formatted YAML. It defines how to parse the CLI
output and return JSON data. Below is an example of a valid spec file that
will parse the output from the ``show vlan`` command.
.. code-block:: yaml
---
vars:
vlan:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
enabled: "{{ item.state != 'act/lshut' }}"
state: "{{ item.state }}"
keys:
vlans:
value: "{{ vlan }}"
items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)"
state_static:
value: present
The spec file above will return a JSON data structure that is a list of hashes
with the parsed VLAN information.
The same command could be parsed into a hash by using the key and values
directives. Here is an example of how to parse the output into a hash
value using the same ``show vlan`` command.
.. code-block:: yaml
---
vars:
vlan:
key: "{{ item.vlan_id }}"
values:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
enabled: "{{ item.state != 'act/lshut' }}"
state: "{{ item.state }}"
keys:
vlans:
value: "{{ vlan }}"
items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)"
state_static:
value: present
Another common use case for parsing CLI commands is to break a large command
output into blocks that can be parsed individually. This is done with the
``start_block`` and ``end_block`` directives, which mark where each block begins and ends.
.. code-block:: yaml
---
vars:
interface:
name: "{{ item[0].match[0] }}"
state: "{{ item[1].state }}"
mode: "{{ item[2].match[0] }}"
keys:
interfaces:
value: "{{ interface }}"
start_block: "^Ethernet.*$"
end_block: "^$"
items:
- "^(?P<name>Ethernet\\d\\/\\d*)"
- "admin state is (?P<state>.+),"
- "Port mode is (.+)"
The example above will parse the output of ``show interface`` into a list of
hashes.
The network filters also support parsing the output of a CLI command using the
TextFSM library. To parse the CLI output with TextFSM use the following
filter::
{{ output.stdout[0] | parse_cli_textfsm('path/to/fsm') }}
Use of the TextFSM filter requires the TextFSM library to be installed.
Network XML filters
-------------------
.. versionadded:: 2.5
To convert the XML output of a network device command into structured JSON
output, use the ``parse_xml`` filter::
{{ output | parse_xml('path/to/spec') }}
The ``parse_xml`` filter will load the spec file and pass the command output
through it, returning JSON output.
The spec file should be valid formatted YAML. It defines how to parse the XML
output and return JSON data.
Below is an example of a valid spec file that
will parse the output from the ``show vlan | display xml`` command.
.. code-block:: yaml
---
vars:
vlan:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
desc: "{{ item.desc }}"
enabled: "{{ item.state.get('inactive') != 'inactive' }}"
state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}"
keys:
vlans:
value: "{{ vlan }}"
top: configuration/vlans/vlan
items:
vlan_id: vlan-id
name: name
desc: description
state: ".[@inactive='inactive']"
The spec file above will return a JSON data structure that is a list of hashes
with the parsed VLAN information.
The same command could be parsed into a hash by using the key and values
directives. Here is an example of how to parse the output into a hash
value using the same ``show vlan | display xml`` command.
.. code-block:: yaml
---
vars:
vlan:
key: "{{ item.vlan_id }}"
values:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
desc: "{{ item.desc }}"
enabled: "{{ item.state.get('inactive') != 'inactive' }}"
state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}"
keys:
vlans:
value: "{{ vlan }}"
top: configuration/vlans/vlan
items:
vlan_id: vlan-id
name: name
desc: description
state: ".[@inactive='inactive']"
The value of ``top`` is the XPath relative to the XML root node.
In the example XML output given below, the value of ``top`` is ``configuration/vlans/vlan``,
which is an XPath expression relative to the root node (<rpc-reply>).
``configuration`` in the value of ``top`` is the outermost container node, and ``vlan``
is the innermost container node.
``items`` is a dictionary of key-value pairs that map user-defined names to XPath expressions
that select elements. Each XPath expression is relative to the value of ``top``.
For example, ``vlan_id`` in the spec file is a user-defined name, and its value ``vlan-id`` is an
XPath expression relative to the value of ``top``.
Attributes of XML tags can be extracted using XPath expressions. The value of ``state`` in the spec
is an XPath expression used to get the attributes of the ``vlan`` tag in the output XML::
<rpc-reply>
<configuration>
<vlans>
<vlan inactive="inactive">
<name>vlan-1</name>
<vlan-id>200</vlan-id>
<description>This is vlan-1</description>
</vlan>
</vlans>
</configuration>
</rpc-reply>
.. note:: For more information on supported XPath expressions, see `<https://docs.python.org/2/library/xml.etree.elementtree.html#xpath-support>`_.
Network VLAN filters
--------------------
.. versionadded:: 2.8
Use the ``vlan_parser`` filter to manipulate an unsorted list of VLAN integers into a
sorted string list of integers according to IOS-like VLAN list rules. This list has the following properties:
* Vlans are listed in ascending order.
* Three or more consecutive VLANs are listed with a dash.
* The first line of the list can be first_line_len characters long.
* Subsequent list lines can be other_line_len characters.
To sort a VLAN list::
{{ [3003, 3004, 3005, 100, 1688, 3002, 3999] | vlan_parser }}
This example renders the following sorted list::
['100,1688,3002-3005,3999']
Another example Jinja template::
{% set parsed_vlans = vlans | vlan_parser %}
switchport trunk allowed vlan {{ parsed_vlans[0] }}
{% for i in range (1, parsed_vlans | count) %}
switchport trunk allowed vlan add {{ parsed_vlans[i] }}
{% endfor %}
This allows for dynamic generation of VLAN lists on a Cisco IOS tagged interface. You can store an exhaustive raw list of the exact VLANs required for an interface and then compare that to the parsed IOS output that would actually be generated for the configuration.
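With the single-element sorted list from the first example, that template would render along these lines (the ``for`` loop produces no additional ``add`` lines)::
switchport trunk allowed vlan 100,1688,3002-3005,3999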
.. _hash_filters:
Encryption filters
==================
.. versionadded:: 1.9
To get the sha1 hash of a string::
{{ 'test1' | hash('sha1') }}
To get the md5 hash of a string::
{{ 'test1' | hash('md5') }}
Get a string checksum::
{{ 'test2' | checksum }}
Other hashes (platform dependent)::
{{ 'test2' | hash('blowfish') }}
To get a sha512 password hash (random salt)::
{{ 'passwordsaresecret' | password_hash('sha512') }}
To get a sha256 password hash with a specific salt::
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt') }}
An idempotent method to generate unique hashes per system is to use a salt that is consistent between runs::
{{ 'secretpassword' | password_hash('sha512', 65534 | random(seed=inventory_hostname) | string) }}
The hash types available depend on the control machine running Ansible:
'hash' depends on hashlib, and 'password_hash' depends on passlib (https://passlib.readthedocs.io/en/stable/lib/passlib.hash.html).
.. versionadded:: 2.7
Some hash types allow providing a rounds parameter::
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt', rounds=10000) }}
.. _other_useful_filters:
Text filters
============
These filters work with strings and text.
.. _comment_filter:
Adding comments to files
------------------------
The `comment` filter lets you turn text in a template into comments in a file, with a variety of comment styles. By default Ansible uses ``#`` to start a comment line and adds a blank comment line above and below your comment text. For example the following::
{{ "Plain style (default)" | comment }}
produces this output:
.. code-block:: text
#
# Plain style (default)
#
Ansible offers styles for comments in C (``//...``), C block
(``/*...*/``), Erlang (``%...``) and XML (``<!--...-->``)::
{{ "C style" | comment('c') }}
{{ "C block style" | comment('cblock') }}
{{ "Erlang style" | comment('erlang') }}
{{ "XML style" | comment('xml') }}
You can define a custom comment character. This filter::
{{ "My Special Case" | comment(decoration="! ") }}
produces:
.. code-block:: text
!
! My Special Case
!
You can fully customize the comment style::
{{ "Custom style" | comment('plain', prefix='#######\n#', postfix='#\n#######\n ###\n #') }}
That creates the following output:
.. code-block:: text
#######
#
# Custom style
#
#######
###
#
The filter can also be applied to any Ansible variable. For example, to
make the output of the ``ansible_managed`` variable more readable, we can
change the definition in the ``ansible.cfg`` file to this:
.. code-block:: jinja
[defaults]
ansible_managed = This file is managed by Ansible.%n
template: {file}
date: %Y-%m-%d %H:%M:%S
user: {uid}
host: {host}
and then use the variable with the `comment` filter::
{{ ansible_managed | comment }}
which produces this output:
.. code-block:: sh
#
# This file is managed by Ansible.
#
# template: /home/ansible/env/dev/ansible_managed/roles/role1/templates/test.j2
# date: 2015-09-10 11:02:58
# user: ansible
# host: myhost
#
Splitting URLs
--------------
.. versionadded:: 2.4
The ``urlsplit`` filter extracts the fragment, hostname, netloc, password, path, port, query, scheme, and username from a URL. With no arguments, it returns a dictionary of all the fields::
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('hostname') }}
# => 'www.acme.com'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('netloc') }}
# => 'user:[email protected]:9000'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('username') }}
# => 'user'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('password') }}
# => 'password'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('path') }}
# => '/dir/index.html'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('port') }}
# => '9000'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('scheme') }}
# => 'http'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('query') }}
# => 'query=term'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('fragment') }}
# => 'fragment'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit }}
# =>
# {
# "fragment": "fragment",
# "hostname": "www.acme.com",
# "netloc": "user:[email protected]:9000",
# "password": "password",
# "path": "/dir/index.html",
# "port": 9000,
# "query": "query=term",
# "scheme": "http",
# "username": "user"
# }
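The filter exposes the same fields as standard URL parsing, so you can sanity-check a URL outside of a template with Python's standard library (a sketch for illustration only):

.. code-block:: python

    from urllib.parse import urlsplit

    parts = urlsplit("http://user:[email protected]:9000/dir/index.html?query=term#fragment")
    print(parts.hostname, parts.port, parts.path, parts.query)
    # => www.acme.com 9000 /dir/index.html query=term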
Searching strings with regular expressions
------------------------------------------
To search a string with a regex, use the "regex_search" filter::
# search for "foo" in "foobar"
{{ 'foobar' | regex_search('(foo)') }}
# will return empty if it cannot find a match
{{ 'ansible' | regex_search('(foobar)') }}
# case insensitive search in multiline mode
{{ 'foo\nBAR' | regex_search("^bar", multiline=True, ignorecase=True) }}
To search for all occurrences of regex matches, use the "regex_findall" filter::
# Return a list of all IPv4 addresses in the string
{{ 'Some DNS servers are 8.8.8.8 and 8.8.4.4' | regex_findall('\\b(?:[0-9]{1,3}\\.){3}[0-9]{1,3}\\b') }}
To replace text in a string with regex, use the "regex_replace" filter::
# convert "ansible" to "able"
{{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }}
# convert "foobar" to "bar"
{{ 'foobar' | regex_replace('^f.*o(.*)$', '\\1') }}
# convert "localhost:80" to "localhost, 80" using named groups
{{ 'localhost:80' | regex_replace('^(?P<host>.+):(?P<port>\\d+)$', '\\g<host>, \\g<port>') }}
# convert "localhost:80" to "localhost"
{{ 'localhost:80' | regex_replace(':80') }}
.. note:: If you want to match the whole string and you are using ``*``, make sure to always wrap your regular expression in the start/end anchors.
For example ``^(.*)$`` will always match only one result, while ``(.*)`` on some Python versions will match the whole string and an empty string at the
end, which means it will make two replacements::
# add "https://" prefix to each item in a list
GOOD:
{{ hosts | map('regex_replace', '^(.*)$', 'https://\\1') | list }}
{{ hosts | map('regex_replace', '(.+)', 'https://\\1') | list }}
{{ hosts | map('regex_replace', '^', 'https://') | list }}
BAD:
{{ hosts | map('regex_replace', '(.*)', 'https://\\1') | list }}
# append ':80' to each item in a list
GOOD:
{{ hosts | map('regex_replace', '^(.*)$', '\\1:80') | list }}
{{ hosts | map('regex_replace', '(.+)', '\\1:80') | list }}
{{ hosts | map('regex_replace', '$', ':80') | list }}
BAD:
{{ hosts | map('regex_replace', '(.*)', '\\1:80') | list }}
.. note:: Prior to Ansible 2.0, if the "regex_replace" filter was used with variables inside YAML arguments (as opposed to simpler 'key=value' arguments),
then you needed to escape backreferences (e.g. ``\\1``) with 4 backslashes (``\\\\``) instead of 2 (``\\``).
.. versionadded:: 2.0
To escape special characters within a standard Python regex, use the "regex_escape" filter (using the default re_type='python' option)::
# convert '^f.*o(.*)$' to '\^f\.\*o\(\.\*\)\$'
{{ '^f.*o(.*)$' | regex_escape() }}
.. versionadded:: 2.8
To escape special characters within a POSIX basic regex, use the "regex_escape" filter with the re_type='posix_basic' option::
# convert '^f.*o(.*)$' to '\^f\.\*o(\.\*)\$'
{{ '^f.*o(.*)$' | regex_escape('posix_basic') }}
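These filters are based on Python regular expressions, so a pattern can be prototyped with the ``re`` module before it goes into a template. A rough sketch of the examples above (illustrative, not the filters' own code):

.. code-block:: python

    import re

    re.search(r'(foobar)', 'ansible')        # => None; the filter returns an empty result here
    re.findall(r'\b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b',
               'Some DNS servers are 8.8.8.8 and 8.8.4.4')
    # => ['8.8.8.8', '8.8.4.4']
    re.sub(r'^(?P<host>.+):(?P<port>\d+)$', r'\g<host>, \g<port>', 'localhost:80')
    # => 'localhost, 80'
    re.escape('^f.*o(.*)$')                  # produces \^f\.\*o\(\.\*\)\$ as in the example above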
Working with filenames and pathnames
------------------------------------
To get the last name of a file path, like 'foo.txt' out of '/etc/asdf/foo.txt'::
{{ path | basename }}
To get the last name of a windows style file path (new in version 2.0)::
{{ path | win_basename }}
To separate the windows drive letter from the rest of a file path (new in version 2.0)::
{{ path | win_splitdrive }}
To get only the windows drive letter::
{{ path | win_splitdrive | first }}
To get the rest of the path without the drive letter::
{{ path | win_splitdrive | last }}
To get the directory from a path::
{{ path | dirname }}
To get the directory from a windows path (new in version 2.0)::
{{ path | win_dirname }}
To expand a path containing a tilde (`~`) character (new in version 1.5)::
{{ path | expanduser }}
To expand a path containing environment variables::
{{ path | expandvars }}
.. note:: `expandvars` expands local variables; using it on remote paths can lead to errors.
.. versionadded:: 2.6
To get the real path of a link (new in version 1.8)::
{{ path | realpath }}
To get the relative path of a link, from a start point (new in version 1.7)::
{{ path | relpath('/etc') }}
To get the root and extension of a path or filename (new in version 2.0)::
# with path == 'nginx.conf' the return would be ('nginx', '.conf')
{{ path | splitext }}
To join one or more path components::
{{ ('/etc', path, 'subdir', file) | path_join }}
.. versionadded:: 2.10
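Most of these filters mirror functions from Python's ``os.path`` module, which makes it easy to check what a given expression will return (a sketch, for illustration):

.. code-block:: python

    import os.path

    os.path.basename('/etc/asdf/foo.txt')   # => 'foo.txt'
    os.path.dirname('/etc/asdf/foo.txt')    # => '/etc/asdf'
    os.path.splitext('nginx.conf')          # => ('nginx', '.conf')
    os.path.expanduser('~/playbooks')       # result depends on the local environment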
String filters
==============
To add quotes for shell usage::
- shell: echo {{ string_value | quote }}
To concatenate a list into a string::
{{ list | join(" ") }}
To work with Base64 encoded strings::
{{ encoded | b64decode }}
{{ decoded | b64encode }}
As of version 2.6, you can define the type of encoding to use; the default is ``utf-8``::
{{ encoded | b64decode(encoding='utf-16-le') }}
{{ decoded | b64encode(encoding='utf-16-le') }}
.. versionadded:: 2.6
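For reference, the equivalent operations with Python's ``base64`` module look like this (a sketch for illustration only):

.. code-block:: python

    import base64

    encoded = base64.b64encode('decoded'.encode('utf-16-le')).decode('ascii')
    original = base64.b64decode(encoded).decode('utf-16-le')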
UUID filters
============
To create a namespaced UUIDv5::
{{ string | to_uuid(namespace='11111111-2222-3333-4444-555555555555') }}
.. versionadded:: 2.10
To create a namespaced UUIDv5 using the default Ansible namespace '361E6D51-FAEC-444A-9079-341386DA8E2E'::
{{ string | to_uuid }}
.. versionadded:: 1.9
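A namespaced UUIDv5 is deterministic for a given namespace and name, so the same value can be computed with Python's ``uuid`` module (a sketch; the input string below is only an example):

.. code-block:: python

    import uuid

    namespace = uuid.UUID('361E6D51-FAEC-444A-9079-341386DA8E2E')
    print(uuid.uuid5(namespace, 'www.example.com'))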
To make use of one attribute from each item in a list of complex variables, use the :func:`Jinja2 map filter <jinja2:map>`::
# get a comma-separated list of the mount points (e.g. "/,/mnt/stuff") on a host
{{ ansible_mounts | map(attribute='mount') | join(',') }}
Date and time filters
=====================
To get a date object from a string use the `to_datetime` filter::
# Get the total number of seconds between two dates. The default date format is %Y-%m-%d %H:%M:%S, but you can pass your own format
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).total_seconds() }}
# Get remaining seconds after delta has been calculated. NOTE: This does NOT convert years, days, hours, etc to seconds. For that, use total_seconds()
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2016-08-14 18:00:00" | to_datetime)).seconds }}
# This expression evaluates to "12" and not "132". Delta is 2 hours, 12 seconds
# Get the number of days between two dates. This returns only the number of days and discards remaining hours, minutes, and seconds
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).days }}
.. versionadded:: 2.4
To format a date using a string (like with the shell date command), use the "strftime" filter::
# Display year-month-day
{{ '%Y-%m-%d' | strftime }}
# Display hour:min:sec
{{ '%H:%M:%S' | strftime }}
# Use ansible_date_time.epoch fact
{{ '%Y-%m-%d %H:%M:%S' | strftime(ansible_date_time.epoch) }}
# Use arbitrary epoch value
{{ '%Y-%m-%d' | strftime(0) }} # => 1970-01-01
{{ '%Y-%m-%d' | strftime(1441357287) }} # => 2015-09-04
.. note:: To get all string possibilities, check https://docs.python.org/2/library/time.html#time.strftime
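The format codes are the ones accepted by Python's ``time.strftime``, so you can experiment outside of a template (a sketch; results for an epoch value depend on the local timezone):

.. code-block:: python

    import time

    print(time.strftime('%Y-%m-%d %H:%M:%S'))                     # current local time
    print(time.strftime('%Y-%m-%d', time.localtime(1441357287)))  # 2015-09-04 in most timezones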
Kubernetes filters
==================
Use the "k8s_config_resource_name" filter to obtain the name of a Kubernetes ConfigMap or Secret,
including its hash::
{{ configmap_resource_definition | k8s_config_resource_name }}
This can then be used to reference hashes in Pod specifications::
my_secret:
kind: Secret
name: my_secret_name
deployment_resource:
kind: Deployment
spec:
template:
spec:
containers:
- envFrom:
- secretRef:
name: {{ my_secret | k8s_config_resource_name }}
.. versionadded:: 2.8
.. _PyYAML library: https://pyyaml.org/
.. _PyYAML documentation: https://pyyaml.org/wiki/PyYAMLDocumentation
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`playbooks_conditionals`
Conditional statements in playbooks
:ref:`playbooks_variables`
All about variables
:ref:`playbooks_loops`
Looping in playbooks
:ref:`playbooks_reuse_roles`
Playbook organization by roles
:ref:`playbooks_best_practices`
Best practices in playbooks
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,365 |
ansible-galaxy list is not listing certain valid roles
|
##### SUMMARY
I am trying to see what roles Ansible will pick up from a given roles directory, and it seems like it's only picking up roles that I've downloaded from Galaxy, not any other roles (like custom ones I've created via `ansible-galaxy init`).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
```
ansible 2.9.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/Users/jgeerling/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.17 (default, Feb 9 2020, 19:49:15) [GCC 4.2.1 Compatible Apple LLVM 11.0.0 (clang-1100.0.33.17)]
```
##### CONFIGURATION
```
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH(/etc/ansible/ansible.cfg) = /tmp/ansible-ssh-%%h-%%p-%%r
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 20
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = [u'/etc/ansible/hosts']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/Users/jgeerling/Dropbox/VMs/roles']
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
macOS Catalina, Ansible installed via `pip3`
##### STEPS TO REPRODUCE
```
$ mkdir testing-roles && cd testing-roles
$ ANSIBLE_ROLES_PATH=$(pwd) ansible-galaxy list
# /Users/jgeerling/Downloads/testing-roles
(no roles listed)
$ ansible-galaxy init testing
- Role testing was created successfully
$ ANSIBLE_ROLES_PATH=$(pwd) ansible-galaxy list
# /Users/jgeerling/Downloads/testing-roles
(still no roles listed)
```
##### EXPECTED RESULTS
I would expect the new `testing` role would be listed.
##### ACTUAL RESULTS
The new `testing` role is not listed.
##### ADDITIONAL INFO
After failing the above scenario, I installed a role from Galaxy, and it _was_ listed:
```
$ ansible-galaxy install -p ./ geerlingguy.php
- downloading role 'php', owned by geerlingguy
- downloading role from https://github.com/geerlingguy/ansible-role-php/archive/3.7.0.tar.gz
- extracting geerlingguy.php to /Users/jgeerling/Downloads/testing-roles/geerlingguy.php
- geerlingguy.php (3.7.0) was installed successfully
$ ANSIBLE_ROLES_PATH=$(pwd) ansible-galaxy list
# /Users/jgeerling/Downloads/testing-roles
- geerlingguy.php, 3.7.0
```
|
https://github.com/ansible/ansible/issues/67365
|
https://github.com/ansible/ansible/pull/67391
|
343de73f2d3d49068e8bddc53f94e94d71e567b9
|
c64202a49563fefb35bd8de59bceb0b3b2fa5fa1
| 2020-02-12T22:45:30Z |
python
| 2020-02-17T21:16:14Z |
changelogs/fragments/67365-role-list-role-name-in-path.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,365 |
ansible-galaxy list is not listing certain valid roles
|
##### SUMMARY
I am trying to see what roles Ansible will pick up from a given roles directory, and it seems like it's only picking up roles that I've downloaded from Galaxy, not any other roles (like custom ones I've created via `ansible-galaxy init`).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
```
ansible 2.9.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/Users/jgeerling/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.17 (default, Feb 9 2020, 19:49:15) [GCC 4.2.1 Compatible Apple LLVM 11.0.0 (clang-1100.0.33.17)]
```
##### CONFIGURATION
```
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH(/etc/ansible/ansible.cfg) = /tmp/ansible-ssh-%%h-%%p-%%r
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 20
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = [u'/etc/ansible/hosts']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/Users/jgeerling/Dropbox/VMs/roles']
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
macOS Catalina, Ansible installed via `pip3`
##### STEPS TO REPRODUCE
```
$ mkdir testing-roles && cd testing-roles
$ ANSIBLE_ROLES_PATH=$(pwd) ansible-galaxy list
# /Users/jgeerling/Downloads/testing-roles
(no roles listed)
$ ansible-galaxy init testing
- Role testing was created successfully
$ ANSIBLE_ROLES_PATH=$(pwd) ansible-galaxy list
# /Users/jgeerling/Downloads/testing-roles
(still no roles listed)
```
##### EXPECTED RESULTS
I would expect the new `testing` role would be listed.
##### ACTUAL RESULTS
The new `testing` role is not listed.
##### ADDITIONAL INFO
After failing the above scenario, I installed a role from Galaxy, and it _was_ listed:
```
$ ansible-galaxy install -p ./ geerlingguy.php
- downloading role 'php', owned by geerlingguy
- downloading role from https://github.com/geerlingguy/ansible-role-php/archive/3.7.0.tar.gz
- extracting geerlingguy.php to /Users/jgeerling/Downloads/testing-roles/geerlingguy.php
- geerlingguy.php (3.7.0) was installed successfully
$ ANSIBLE_ROLES_PATH=$(pwd) ansible-galaxy list
# /Users/jgeerling/Downloads/testing-roles
- geerlingguy.php, 3.7.0
```
|
https://github.com/ansible/ansible/issues/67365
|
https://github.com/ansible/ansible/pull/67391
|
343de73f2d3d49068e8bddc53f94e94d71e567b9
|
c64202a49563fefb35bd8de59bceb0b3b2fa5fa1
| 2020-02-12T22:45:30Z |
python
| 2020-02-17T21:16:14Z |
lib/ansible/galaxy/role.py
|
########################################################################
#
# (C) 2015, Brian Coca <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
########################################################################
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import errno
import datetime
import os
import tarfile
import tempfile
import yaml
from distutils.version import LooseVersion
from shutil import rmtree
from ansible import context
from ansible.errors import AnsibleError
from ansible.galaxy.user_agent import user_agent
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.urls import open_url
from ansible.playbook.role.requirement import RoleRequirement
from ansible.utils.display import Display
display = Display()
class GalaxyRole(object):
SUPPORTED_SCMS = set(['git', 'hg'])
META_MAIN = (os.path.join('meta', 'main.yml'), os.path.join('meta', 'main.yaml'))
META_INSTALL = os.path.join('meta', '.galaxy_install_info')
ROLE_DIRS = ('defaults', 'files', 'handlers', 'meta', 'tasks', 'templates', 'vars', 'tests')
def __init__(self, galaxy, api, name, src=None, version=None, scm=None, path=None):
self._metadata = None
self._install_info = None
self._validate_certs = not context.CLIARGS['ignore_certs']
display.debug('Validate TLS certificates: %s' % self._validate_certs)
self.galaxy = galaxy
self.api = api
self.name = name
self.version = version
self.src = src or name
self.scm = scm
if path is not None:
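            # Note: ``in`` is a substring test on the whole path string, so a
            # roles path that merely contains the role name (for example a role
            # named 'testing' under '.../testing-roles') is treated as already
            # being the role's own directory and the name is never appended,
            # which is the behaviour described in issue #67365.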
if self.name not in path:
path = os.path.join(path, self.name)
self.path = path
else:
# use the first path by default
self.path = os.path.join(galaxy.roles_paths[0], self.name)
# create list of possible paths
self.paths = [x for x in galaxy.roles_paths]
self.paths = [os.path.join(x, self.name) for x in self.paths]
def __repr__(self):
"""
Returns "rolename (version)" if version is not null
Returns "rolename" otherwise
"""
if self.version:
return "%s (%s)" % (self.name, self.version)
else:
return self.name
def __eq__(self, other):
return self.name == other.name
@property
def metadata(self):
"""
Returns role metadata
"""
if self._metadata is None:
for meta_main in self.META_MAIN:
meta_path = os.path.join(self.path, meta_main)
if os.path.isfile(meta_path):
try:
f = open(meta_path, 'r')
self._metadata = yaml.safe_load(f)
except Exception:
display.vvvvv("Unable to load metadata for %s" % self.name)
return False
finally:
f.close()
return self._metadata
@property
def install_info(self):
"""
Returns role install info
"""
if self._install_info is None:
info_path = os.path.join(self.path, self.META_INSTALL)
if os.path.isfile(info_path):
try:
f = open(info_path, 'r')
self._install_info = yaml.safe_load(f)
except Exception:
display.vvvvv("Unable to load Galaxy install info for %s" % self.name)
return False
finally:
f.close()
return self._install_info
def _write_galaxy_install_info(self):
"""
Writes a YAML-formatted file to the role's meta/ directory
(named .galaxy_install_info) which contains some information
we can use later for commands like 'list' and 'info'.
"""
info = dict(
version=self.version,
install_date=datetime.datetime.utcnow().strftime("%c"),
)
if not os.path.exists(os.path.join(self.path, 'meta')):
os.makedirs(os.path.join(self.path, 'meta'))
info_path = os.path.join(self.path, self.META_INSTALL)
with open(info_path, 'w+') as f:
try:
self._install_info = yaml.safe_dump(info, f)
except Exception:
return False
return True
def remove(self):
"""
Removes the specified role from the roles path.
There is a sanity check to make sure there's a meta/main.yml file at this
path so the user doesn't blow away random directories.
"""
if self.metadata:
try:
rmtree(self.path)
return True
except Exception:
pass
return False
def fetch(self, role_data):
"""
Downloads the archived role to a temp location based on role data
"""
if role_data:
# first grab the file and save it to a temp location
if "github_user" in role_data and "github_repo" in role_data:
archive_url = 'https://github.com/%s/%s/archive/%s.tar.gz' % (role_data["github_user"], role_data["github_repo"], self.version)
else:
archive_url = self.src
display.display("- downloading role from %s" % archive_url)
try:
url_file = open_url(archive_url, validate_certs=self._validate_certs, http_agent=user_agent())
temp_file = tempfile.NamedTemporaryFile(delete=False)
data = url_file.read()
while data:
temp_file.write(data)
data = url_file.read()
temp_file.close()
return temp_file.name
except Exception as e:
display.error(u"failed to download the file: %s" % to_text(e))
return False
def install(self):
if self.scm:
# create tar file from scm url
tmp_file = RoleRequirement.scm_archive_role(keep_scm_meta=context.CLIARGS['keep_scm_meta'], **self.spec)
elif self.src:
if os.path.isfile(self.src):
tmp_file = self.src
elif '://' in self.src:
role_data = self.src
tmp_file = self.fetch(role_data)
else:
role_data = self.api.lookup_role_by_name(self.src)
if not role_data:
raise AnsibleError("- sorry, %s was not found on %s." % (self.src, self.api.api_server))
if role_data.get('role_type') == 'APP':
# Container Role
display.warning("%s is a Container App role, and should only be installed using Ansible "
"Container" % self.name)
role_versions = self.api.fetch_role_related('versions', role_data['id'])
if not self.version:
# convert the version names to LooseVersion objects
# and sort them to get the latest version. If there
# are no versions in the list, we'll grab the head
# of the master branch
if len(role_versions) > 0:
loose_versions = [LooseVersion(a.get('name', None)) for a in role_versions]
try:
loose_versions.sort()
except TypeError:
raise AnsibleError(
'Unable to compare role versions (%s) to determine the most recent version due to incompatible version formats. '
'Please contact the role author to resolve versioning conflicts, or specify an explicit role version to '
'install.' % ', '.join([v.vstring for v in loose_versions])
)
self.version = to_text(loose_versions[-1])
elif role_data.get('github_branch', None):
self.version = role_data['github_branch']
else:
self.version = 'master'
elif self.version != 'master':
if role_versions and to_text(self.version) not in [a.get('name', None) for a in role_versions]:
raise AnsibleError("- the specified version (%s) of %s was not found in the list of available versions (%s)." % (self.version,
self.name,
role_versions))
# check if there's a source link for our role_version
for role_version in role_versions:
if role_version['name'] == self.version and 'source' in role_version:
self.src = role_version['source']
tmp_file = self.fetch(role_data)
else:
raise AnsibleError("No valid role data found")
if tmp_file:
display.debug("installing from %s" % tmp_file)
if not tarfile.is_tarfile(tmp_file):
raise AnsibleError("the downloaded file does not appear to be a valid tar archive.")
else:
role_tar_file = tarfile.open(tmp_file, "r")
# verify the role's meta file
meta_file = None
members = role_tar_file.getmembers()
# next find the metadata file
for member in members:
for meta_main in self.META_MAIN:
if meta_main in member.name:
# Look for parent of meta/main.yml
# Due to possibility of sub roles each containing meta/main.yml
# look for shortest length parent
meta_parent_dir = os.path.dirname(os.path.dirname(member.name))
if not meta_file:
archive_parent_dir = meta_parent_dir
meta_file = member
else:
if len(meta_parent_dir) < len(archive_parent_dir):
archive_parent_dir = meta_parent_dir
meta_file = member
if not meta_file:
raise AnsibleError("this role does not appear to have a meta/main.yml file.")
else:
try:
self._metadata = yaml.safe_load(role_tar_file.extractfile(meta_file))
except Exception:
raise AnsibleError("this role does not appear to have a valid meta/main.yml file.")
# we strip off any higher-level directories for all of the files contained within
# the tar file here. The default is 'github_repo-target'. Gerrit instances, on the other
# hand, do not have a parent directory at all.
installed = False
while not installed:
display.display("- extracting %s to %s" % (self.name, self.path))
try:
if os.path.exists(self.path):
if not os.path.isdir(self.path):
raise AnsibleError("the specified roles path exists and is not a directory.")
elif not context.CLIARGS.get("force", False):
raise AnsibleError("the specified role %s appears to already exist. Use --force to replace it." % self.name)
else:
# using --force, remove the old path
if not self.remove():
raise AnsibleError("%s doesn't appear to contain a role.\n please remove this directory manually if you really "
"want to put the role here." % self.path)
else:
os.makedirs(self.path)
# now we do the actual extraction to the path
for member in members:
# we only extract files, and remove any relative path
# bits that might be in the file for security purposes
# and drop any containing directory, as mentioned above
if member.isreg() or member.issym():
parts = member.name.replace(archive_parent_dir, "", 1).split(os.sep)
final_parts = []
for part in parts:
if part != '..' and '~' not in part and '$' not in part:
final_parts.append(part)
member.name = os.path.join(*final_parts)
role_tar_file.extract(member, self.path)
# write out the install info file for later use
self._write_galaxy_install_info()
installed = True
except OSError as e:
error = True
if e.errno == errno.EACCES and len(self.paths) > 1:
current = self.paths.index(self.path)
if len(self.paths) > current:
self.path = self.paths[current + 1]
error = False
if error:
raise AnsibleError("Could not update files in %s: %s" % (self.path, to_native(e)))
# return the parsed yaml metadata
display.display("- %s was installed successfully" % str(self))
if not (self.src and os.path.isfile(self.src)):
try:
os.unlink(tmp_file)
except (OSError, IOError) as e:
display.warning(u"Unable to remove tmp file (%s): %s" % (tmp_file, to_text(e)))
return True
return False
@property
def spec(self):
"""
Returns role spec info
{
'scm': 'git',
'src': 'http://git.example.com/repos/repo.git',
'version': 'v1.0',
'name': 'repo'
}
"""
return dict(scm=self.scm, src=self.src, version=self.version, name=self.name)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,365 |
ansible-galaxy list is not listing certain valid roles
|
##### SUMMARY
I am trying to see what roles Ansible will pick up from a given roles directory, and it seems like it's only picking up roles that I've downloaded from Galaxy, not any other roles (like custom ones I've created via `ansible-galaxy init`).
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-galaxy
##### ANSIBLE VERSION
```
ansible 2.9.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/Users/jgeerling/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.17 (default, Feb 9 2020, 19:49:15) [GCC 4.2.1 Compatible Apple LLVM 11.0.0 (clang-1100.0.33.17)]
```
##### CONFIGURATION
```
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_CONTROL_PATH(/etc/ansible/ansible.cfg) = /tmp/ansible-ssh-%%h-%%p-%%r
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 20
DEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = [u'/etc/ansible/hosts']
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/Users/jgeerling/Dropbox/VMs/roles']
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
macOS Catalina, Ansible installed via `pip3`
##### STEPS TO REPRODUCE
```
$ mkdir testing-roles && cd testing-roles
$ ANSIBLE_ROLES_PATH=$(pwd) ansible-galaxy list
# /Users/jgeerling/Downloads/testing-roles
(no roles listed)
$ ansible-galaxy init testing
- Role testing was created successfully
$ ANSIBLE_ROLES_PATH=$(pwd) ansible-galaxy list
# /Users/jgeerling/Downloads/testing-roles
(still no roles listed)
```
##### EXPECTED RESULTS
I would expect the new `testing` role would be listed.
##### ACTUAL RESULTS
The new `testing` role is not listed.
##### ADDITIONAL INFO
After failing the above scenario, I installed a role from Galaxy, and it _was_ listed:
```
$ ansible-galaxy install -p ./ geerlingguy.php
- downloading role 'php', owned by geerlingguy
- downloading role from https://github.com/geerlingguy/ansible-role-php/archive/3.7.0.tar.gz
- extracting geerlingguy.php to /Users/jgeerling/Downloads/testing-roles/geerlingguy.php
- geerlingguy.php (3.7.0) was installed successfully
$ ANSIBLE_ROLES_PATH=$(pwd) ansible-galaxy list
# /Users/jgeerling/Downloads/testing-roles
- geerlingguy.php, 3.7.0
```
|
https://github.com/ansible/ansible/issues/67365
|
https://github.com/ansible/ansible/pull/67391
|
343de73f2d3d49068e8bddc53f94e94d71e567b9
|
c64202a49563fefb35bd8de59bceb0b3b2fa5fa1
| 2020-02-12T22:45:30Z |
python
| 2020-02-17T21:16:14Z |
test/integration/targets/ansible-galaxy/runme.sh
|
#!/usr/bin/env bash
set -eux -o pipefail
ansible-playbook setup.yml "$@"
trap 'ansible-playbook ${ANSIBLE_PLAYBOOK_DIR}/cleanup.yml' EXIT
# Very simple version test
ansible-galaxy --version
# Need a relative custom roles path for testing various scenarios of -p
galaxy_relative_rolespath="my/custom/roles/path"
# Prep the local git repo with a role and make a tar archive so we can test
# different things
galaxy_local_test_role="test-role"
galaxy_local_test_role_dir=$(mktemp -d)
galaxy_local_test_role_git_repo="${galaxy_local_test_role_dir}/${galaxy_local_test_role}"
galaxy_local_test_role_tar="${galaxy_local_test_role_dir}/${galaxy_local_test_role}.tar"
pushd "${galaxy_local_test_role_dir}"
ansible-galaxy init "${galaxy_local_test_role}"
pushd "${galaxy_local_test_role}"
git init .
# Prep git, because it doesn't work inside a docker container without it
git config user.email "[email protected]"
git config user.name "Ansible Tester"
git add .
git commit -m "local testing ansible galaxy role"
git archive \
--format=tar \
--prefix="${galaxy_local_test_role}/" \
master > "${galaxy_local_test_role_tar}"
popd # "${galaxy_local_test_role}"
popd # "${galaxy_local_test_role_dir}"
# Status message function (f_ to designate that it's a function)
f_ansible_galaxy_status()
{
printf "\n\n\n### Testing ansible-galaxy: %s\n" "${@}"
}
# Galaxy install test case
#
# Install local git repo
f_ansible_galaxy_status "install of local git repo"
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
ansible-galaxy install git+file:///"${galaxy_local_test_role_git_repo}" "$@"
# Test that the role was installed to the expected directory
[[ -d "${HOME}/.ansible/roles/${galaxy_local_test_role}" ]]
popd # ${galaxy_testdir}
rm -fr "${galaxy_testdir}"
# Galaxy install test case
#
# Install local git repo and ensure that if a role_path is passed, it is in fact used
f_ansible_galaxy_status "install of local git repo with -p \$role_path"
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
mkdir -p "${galaxy_relative_rolespath}"
ansible-galaxy install git+file:///"${galaxy_local_test_role_git_repo}" -p "${galaxy_relative_rolespath}" "$@"
# Test that the role was installed to the expected directory
[[ -d "${galaxy_relative_rolespath}/${galaxy_local_test_role}" ]]
popd # ${galaxy_testdir}
rm -fr "${galaxy_testdir}"
# Galaxy install test case
#
# Ensure that if both a role_file and role_path is provided, they are both
# honored
#
# Protect against regression (GitHub Issue #35217)
# https://github.com/ansible/ansible/issues/35217
f_ansible_galaxy_status \
"install of local git repo and local tarball with -p \$role_path and -r \$role_file" \
"Protect against regression (Issue #35217)"
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
git clone "${galaxy_local_test_role_git_repo}" "${galaxy_local_test_role}"
ansible-galaxy init roles-path-bug "$@"
pushd roles-path-bug
cat <<EOF > ansible.cfg
[defaults]
roles_path = ../:../../:../roles:roles/
EOF
cat <<EOF > requirements.yml
---
- src: ${galaxy_local_test_role_tar}
name: ${galaxy_local_test_role}
EOF
ansible-galaxy install -r requirements.yml -p roles/ "$@"
popd # roles-path-bug
# Test that the role was installed to the expected directory
[[ -d "${galaxy_testdir}/roles-path-bug/roles/${galaxy_local_test_role}" ]]
popd # ${galaxy_testdir}
rm -fr "${galaxy_testdir}"
# Galaxy role list test case
#
# Basic tests to ensure listing roles works
f_ansible_galaxy_status \
"role list"
ansible-galaxy role list | tee out.txt
ansible-galaxy role list test-role | tee -a out.txt
[[ $(grep -c '^- test-role' out.txt ) -eq 2 ]]
#################################
# ansible-galaxy collection tests
#################################
f_ansible_galaxy_status \
"collection init tests to make sure the relative dir logic works"
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
ansible-galaxy collection init ansible_test.my_collection "$@"
# Test that the collection skeleton was created in the expected directory
for galaxy_collection_dir in "docs" "plugins" "roles"
do
[[ -d "${galaxy_testdir}/ansible_test/my_collection/${galaxy_collection_dir}" ]]
done
popd # ${galaxy_testdir}
rm -fr "${galaxy_testdir}"
f_ansible_galaxy_status \
"collection init tests to make sure the --init-path logic works"
galaxy_testdir=$(mktemp -d)
pushd "${galaxy_testdir}"
ansible-galaxy collection init ansible_test.my_collection --init-path "${galaxy_testdir}/test" "$@"
# Test that the collection skeleton was created in the expected directory
for galaxy_collection_dir in "docs" "plugins" "roles"
do
[[ -d "${galaxy_testdir}/test/ansible_test/my_collection/${galaxy_collection_dir}" ]]
done
popd # ${galaxy_testdir}
f_ansible_galaxy_status \
"collection build test creating artifact in current directory"
pushd "${galaxy_testdir}/test/ansible_test/my_collection"
ansible-galaxy collection build "$@"
[[ -f "${galaxy_testdir}/test/ansible_test/my_collection/ansible_test-my_collection-1.0.0.tar.gz" ]]
popd # ${galaxy_testdir}/ansible_test/my_collection
f_ansible_galaxy_status \
"collection build test to make sure we can specify a relative path"
pushd "${galaxy_testdir}"
ansible-galaxy collection build "test/ansible_test/my_collection" "$@"
[[ -f "${galaxy_testdir}/ansible_test-my_collection-1.0.0.tar.gz" ]]
# Make sure --force works
ansible-galaxy collection build "test/ansible_test/my_collection" --force "$@"
[[ -f "${galaxy_testdir}/ansible_test-my_collection-1.0.0.tar.gz" ]]
f_ansible_galaxy_status \
"collection install from local tarball test"
ansible-galaxy collection install "ansible_test-my_collection-1.0.0.tar.gz" -p ./install "$@" | tee out.txt
[[ -f "${galaxy_testdir}/install/ansible_collections/ansible_test/my_collection/MANIFEST.json" ]]
grep "Installing 'ansible_test.my_collection:1.0.0' to .*" out.txt
f_ansible_galaxy_status \
"collection install with existing collection and without --force"
ansible-galaxy collection install "ansible_test-my_collection-1.0.0.tar.gz" -p ./install "$@" | tee out.txt
[[ -f "${galaxy_testdir}/install/ansible_collections/ansible_test/my_collection/MANIFEST.json" ]]
grep "Skipping 'ansible_test.my_collection' as it is already installed" out.txt
f_ansible_galaxy_status \
"collection install with existing collection and with --force"
ansible-galaxy collection install "ansible_test-my_collection-1.0.0.tar.gz" -p ./install --force "$@" | tee out.txt
[[ -f "${galaxy_testdir}/install/ansible_collections/ansible_test/my_collection/MANIFEST.json" ]]
grep "Installing 'ansible_test.my_collection:1.0.0' to .*" out.txt
f_ansible_galaxy_status \
"ansible-galaxy with a sever list with an undefined URL"
ANSIBLE_GALAXY_SERVER_LIST=undefined ansible-galaxy collection install "ansible_test-my_collection-1.0.0.tar.gz" -p ./install --force "$@" 2>&1 | tee out.txt || echo "expected failure"
grep "No setting was provided for required configuration plugin_type: galaxy_server plugin: undefined setting: url" out.txt
f_ansible_galaxy_status \
"ansible-galaxy with an empty server list"
ANSIBLE_GALAXY_SERVER_LIST='' ansible-galaxy collection install "ansible_test-my_collection-1.0.0.tar.gz" -p ./install --force "$@" | tee out.txt
[[ -f "${galaxy_testdir}/install/ansible_collections/ansible_test/my_collection/MANIFEST.json" ]]
grep "Installing 'ansible_test.my_collection:1.0.0' to .*" out.txt
## ansible-galaxy collection list tests
# Create more collections and put them in various places
f_ansible_galaxy_status \
"setting up for collection list tests"
rm -rf ansible_test/* install/*
NAMES=(zoo museum airport)
for n in "${NAMES[@]}"; do
ansible-galaxy collection init "ansible_test.$n"
ansible-galaxy collection build "ansible_test/$n"
done
ansible-galaxy collection install ansible_test-zoo-1.0.0.tar.gz
ansible-galaxy collection install ansible_test-museum-1.0.0.tar.gz -p ./install
ansible-galaxy collection install ansible_test-airport-1.0.0.tar.gz -p ./local
# Change the collection version and install to another location
sed -i -e 's#^version:.*#version: 2.5.0#' ansible_test/zoo/galaxy.yml
ansible-galaxy collection build ansible_test/zoo
ansible-galaxy collection install ansible_test-zoo-2.5.0.tar.gz -p ./local
export ANSIBLE_COLLECTIONS_PATHS=~/.ansible/collections:${galaxy_testdir}/local
f_ansible_galaxy_status \
"collection list all collections"
ansible-galaxy collection list -p ./install | tee out.txt
[[ $(grep -c ansible_test out.txt) -eq 4 ]]
f_ansible_galaxy_status \
"collection list specific collection"
ansible-galaxy collection list -p ./install ansible_test.airport | tee out.txt
[[ $(grep -c 'ansible_test\.airport' out.txt) -eq 1 ]]
f_ansible_galaxy_status \
"collection list specific collection found in multiple places"
ansible-galaxy collection list -p ./install ansible_test.zoo | tee out.txt
[[ $(grep -c 'ansible_test\.zoo' out.txt) -eq 2 ]]
f_ansible_galaxy_status \
"collection list all with duplicate paths"
ansible-galaxy collection list -p ~/.ansible/collections | tee out.txt
[[ $(grep -c '# /root/.ansible/collections/ansible_collections' out.txt) -eq 1 ]]
f_ansible_galaxy_status \
"collection list invalid collection name"
ansible-galaxy collection list -p ./install dirty.wraughten.name "$@" 2>&1 | tee out.txt || echo "expected failure"
grep 'ERROR! Invalid collection name' out.txt
f_ansible_galaxy_status \
"collection list path not found"
ansible-galaxy collection list -p ./nope "$@" 2>&1 | tee out.txt || echo "expected failure"
grep '\[WARNING\]: - the configured path' out.txt
f_ansible_galaxy_status \
"collection list missing ansible_collections dir inside path"
mkdir emptydir
ansible-galaxy collection list -p ./emptydir "$@"
rmdir emptydir
unset ANSIBLE_COLLECTIONS_PATHS
## end ansible-galaxy collection list
popd # ${galaxy_testdir}
rm -fr "${galaxy_testdir}"
rm -fr "${galaxy_local_test_role_dir}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,925 |
Add Redfish commands to perform updates
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Add new commands to the Redfish remote_management modules to perform firmware update requests.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
redfish_config.py
redfish_utils.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Operators need a way to update firmware on Redfish-managed systems. We would like an extension that can perform SimpleUpdate actions with a specified firmware image. At the very minimum we can point to a remote HTTP/TFTP/SCP/etc. server, but if possible, it might be good to spin up a lightweight HTTP server to service the request in case one isn't provisioned.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Pull style update
redfish_config:
category: Update
command: PullUpdate
image: http://my.file.repo/someimage.bin
targets:
- /redfish/v1/Systems/1
- /redfish/v1/Systems/2
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/63925
|
https://github.com/ansible/ansible/pull/65074
|
c64202a49563fefb35bd8de59bceb0b3b2fa5fa1
|
b5b23efdcc8fc9f0feef00f247013ac131525bf8
| 2019-10-24T19:30:11Z |
python
| 2020-02-17T21:19:47Z |
lib/ansible/module_utils/redfish_utils.py
|
# Copyright (c) 2017-2018 Dell EMC Inc.
# GNU General Public License v3.0+ (see LICENSE or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import json
from ansible.module_utils.urls import open_url
from ansible.module_utils._text import to_text
from ansible.module_utils.six.moves import http_client
from ansible.module_utils.six.moves.urllib.error import URLError, HTTPError
GET_HEADERS = {'accept': 'application/json', 'OData-Version': '4.0'}
POST_HEADERS = {'content-type': 'application/json', 'accept': 'application/json',
'OData-Version': '4.0'}
PATCH_HEADERS = {'content-type': 'application/json', 'accept': 'application/json',
'OData-Version': '4.0'}
DELETE_HEADERS = {'accept': 'application/json', 'OData-Version': '4.0'}
DEPRECATE_MSG = 'Issuing a data modification command without specifying the '\
'ID of the target %(resource)s resource when there is more '\
'than one %(resource)s will use the first one in the '\
'collection. Use the `resource_id` option to specify the '\
'target %(resource)s ID'
class RedfishUtils(object):
def __init__(self, creds, root_uri, timeout, module, resource_id=None,
data_modification=False):
self.root_uri = root_uri
self.creds = creds
self.timeout = timeout
self.module = module
self.service_root = '/redfish/v1/'
self.resource_id = resource_id
self.data_modification = data_modification
self._init_session()
# The following functions are to send GET/POST/PATCH/DELETE requests
def get_request(self, uri):
try:
resp = open_url(uri, method="GET", headers=GET_HEADERS,
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
follow_redirects='all',
use_proxy=True, timeout=self.timeout)
data = json.loads(resp.read())
headers = dict((k.lower(), v) for (k, v) in resp.info().items())
except HTTPError as e:
msg = self._get_extended_message(e)
return {'ret': False,
'msg': "HTTP Error %s on GET request to '%s', extended message: '%s'"
% (e.code, uri, msg),
'status': e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error on GET request to '%s': '%s'"
% (uri, e.reason)}
# Almost all errors should be caught above, but just in case
except Exception as e:
return {'ret': False,
'msg': "Failed GET request to '%s': '%s'" % (uri, to_text(e))}
return {'ret': True, 'data': data, 'headers': headers}
def post_request(self, uri, pyld):
try:
resp = open_url(uri, data=json.dumps(pyld),
headers=POST_HEADERS, method="POST",
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
follow_redirects='all',
use_proxy=True, timeout=self.timeout)
except HTTPError as e:
msg = self._get_extended_message(e)
return {'ret': False,
'msg': "HTTP Error %s on POST request to '%s', extended message: '%s'"
% (e.code, uri, msg),
'status': e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error on POST request to '%s': '%s'"
% (uri, e.reason)}
# Almost all errors should be caught above, but just in case
except Exception as e:
return {'ret': False,
'msg': "Failed POST request to '%s': '%s'" % (uri, to_text(e))}
return {'ret': True, 'resp': resp}
def patch_request(self, uri, pyld):
headers = PATCH_HEADERS
r = self.get_request(uri)
if r['ret']:
# Get etag from etag header or @odata.etag property
etag = r['headers'].get('etag')
if not etag:
etag = r['data'].get('@odata.etag')
if etag:
# Make copy of headers and add If-Match header
headers = dict(headers)
headers['If-Match'] = etag
try:
resp = open_url(uri, data=json.dumps(pyld),
headers=headers, method="PATCH",
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
follow_redirects='all',
use_proxy=True, timeout=self.timeout)
except HTTPError as e:
msg = self._get_extended_message(e)
return {'ret': False,
'msg': "HTTP Error %s on PATCH request to '%s', extended message: '%s'"
% (e.code, uri, msg),
'status': e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error on PATCH request to '%s': '%s'"
% (uri, e.reason)}
# Almost all errors should be caught above, but just in case
except Exception as e:
return {'ret': False,
'msg': "Failed PATCH request to '%s': '%s'" % (uri, to_text(e))}
return {'ret': True, 'resp': resp}
def delete_request(self, uri, pyld=None):
try:
data = json.dumps(pyld) if pyld else None
resp = open_url(uri, data=data,
headers=DELETE_HEADERS, method="DELETE",
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
follow_redirects='all',
use_proxy=True, timeout=self.timeout)
except HTTPError as e:
msg = self._get_extended_message(e)
return {'ret': False,
'msg': "HTTP Error %s on DELETE request to '%s', extended message: '%s'"
% (e.code, uri, msg),
'status': e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error on DELETE request to '%s': '%s'"
% (uri, e.reason)}
# Almost all errors should be caught above, but just in case
except Exception as e:
return {'ret': False,
'msg': "Failed DELETE request to '%s': '%s'" % (uri, to_text(e))}
return {'ret': True, 'resp': resp}
@staticmethod
def _get_extended_message(error):
"""
Get Redfish ExtendedInfo message from response payload if present
:param error: an HTTPError exception
:type error: HTTPError
:return: the ExtendedInfo message if present, else standard HTTP error
"""
msg = http_client.responses.get(error.code, '')
if error.code >= 400:
try:
body = error.read().decode('utf-8')
data = json.loads(body)
ext_info = data['error']['@Message.ExtendedInfo']
msg = ext_info[0]['Message']
except Exception:
pass
return msg
def _init_session(self):
pass
def _find_accountservice_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'AccountService' not in data:
return {'ret': False, 'msg': "AccountService resource not found"}
else:
account_service = data["AccountService"]["@odata.id"]
response = self.get_request(self.root_uri + account_service)
if response['ret'] is False:
return response
data = response['data']
accounts = data['Accounts']['@odata.id']
if accounts[-1:] == '/':
accounts = accounts[:-1]
self.accounts_uri = accounts
return {'ret': True}
def _find_sessionservice_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'SessionService' not in data:
return {'ret': False, 'msg': "SessionService resource not found"}
else:
session_service = data["SessionService"]["@odata.id"]
response = self.get_request(self.root_uri + session_service)
if response['ret'] is False:
return response
data = response['data']
sessions = data['Sessions']['@odata.id']
if sessions[-1:] == '/':
sessions = sessions[:-1]
self.sessions_uri = sessions
return {'ret': True}
def _get_resource_uri_by_id(self, uris, id_prop):
for uri in uris:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
continue
data = response['data']
if id_prop == data.get('Id'):
return uri
return None
def _find_systems_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'Systems' not in data:
return {'ret': False, 'msg': "Systems resource not found"}
response = self.get_request(self.root_uri + data['Systems']['@odata.id'])
if response['ret'] is False:
return response
self.systems_uris = [
i['@odata.id'] for i in response['data'].get('Members', [])]
if not self.systems_uris:
return {
'ret': False,
'msg': "ComputerSystem's Members array is either empty or missing"}
self.systems_uri = self.systems_uris[0]
if self.data_modification:
if self.resource_id:
self.systems_uri = self._get_resource_uri_by_id(self.systems_uris,
self.resource_id)
if not self.systems_uri:
return {
'ret': False,
'msg': "System resource %s not found" % self.resource_id}
elif len(self.systems_uris) > 1:
self.module.deprecate(DEPRECATE_MSG % {'resource': 'System'},
version='2.14')
return {'ret': True}
def _find_updateservice_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'UpdateService' not in data:
return {'ret': False, 'msg': "UpdateService resource not found"}
else:
update = data["UpdateService"]["@odata.id"]
self.update_uri = update
response = self.get_request(self.root_uri + update)
if response['ret'] is False:
return response
data = response['data']
self.firmware_uri = self.software_uri = None
if 'FirmwareInventory' in data:
self.firmware_uri = data['FirmwareInventory'][u'@odata.id']
if 'SoftwareInventory' in data:
self.software_uri = data['SoftwareInventory'][u'@odata.id']
return {'ret': True}
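    # The feature requested in this issue could be exposed as a command handler
    # along the following lines. This is only an illustrative sketch under
    # assumed names -- the method signature and return shape are not taken from
    # the merged change. The payload keys (ImageURI, Targets, TransferProtocol)
    # come from the Redfish UpdateService SimpleUpdate action.
    def simple_update(self, image_uri, targets=None, transfer_protocol=None):
        # _find_updateservice_resource() must already have set self.update_uri
        response = self.get_request(self.root_uri + self.update_uri)
        if response['ret'] is False:
            return response
        data = response['data']
        actions = data.get('Actions', {})
        if '#UpdateService.SimpleUpdate' not in actions:
            return {'ret': False, 'msg': "SimpleUpdate action not found"}
        action_uri = actions['#UpdateService.SimpleUpdate']['target']
        payload = {'ImageURI': image_uri}
        if targets:
            payload['Targets'] = targets
        if transfer_protocol:
            payload['TransferProtocol'] = transfer_protocol
        response = self.post_request(self.root_uri + action_uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True, 'changed': True, 'msg': "SimpleUpdate requested"}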
def _find_chassis_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'Chassis' not in data:
return {'ret': False, 'msg': "Chassis resource not found"}
chassis = data["Chassis"]["@odata.id"]
response = self.get_request(self.root_uri + chassis)
if response['ret'] is False:
return response
self.chassis_uris = [
i['@odata.id'] for i in response['data'].get('Members', [])]
if not self.chassis_uris:
return {'ret': False,
'msg': "Chassis Members array is either empty or missing"}
self.chassis_uri = self.chassis_uris[0]
if self.data_modification:
if self.resource_id:
self.chassis_uri = self._get_resource_uri_by_id(self.chassis_uris,
self.resource_id)
if not self.chassis_uri:
return {
'ret': False,
'msg': "Chassis resource %s not found" % self.resource_id}
elif len(self.chassis_uris) > 1:
self.module.deprecate(DEPRECATE_MSG % {'resource': 'Chassis'},
version='2.14')
return {'ret': True}
def _find_managers_resource(self):
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'Managers' not in data:
return {'ret': False, 'msg': "Manager resource not found"}
manager = data["Managers"]["@odata.id"]
response = self.get_request(self.root_uri + manager)
if response['ret'] is False:
return response
self.manager_uris = [
i['@odata.id'] for i in response['data'].get('Members', [])]
if not self.manager_uris:
return {'ret': False,
'msg': "Managers Members array is either empty or missing"}
self.manager_uri = self.manager_uris[0]
if self.data_modification:
if self.resource_id:
self.manager_uri = self._get_resource_uri_by_id(self.manager_uris,
self.resource_id)
if not self.manager_uri:
return {
'ret': False,
'msg': "Manager resource %s not found" % self.resource_id}
elif len(self.manager_uris) > 1:
self.module.deprecate(DEPRECATE_MSG % {'resource': 'Manager'},
version='2.14')
return {'ret': True}
def get_logs(self):
log_svcs_uri_list = []
list_of_logs = []
properties = ['Severity', 'Created', 'EntryType', 'OemRecordFormat',
'Message', 'MessageId', 'MessageArgs']
# Find LogService
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'LogServices' not in data:
return {'ret': False, 'msg': "LogServices resource not found"}
# Find all entries in LogServices
logs_uri = data["LogServices"]["@odata.id"]
response = self.get_request(self.root_uri + logs_uri)
if response['ret'] is False:
return response
data = response['data']
for log_svcs_entry in data.get('Members', []):
response = self.get_request(self.root_uri + log_svcs_entry[u'@odata.id'])
if response['ret'] is False:
return response
_data = response['data']
if 'Entries' in _data:
log_svcs_uri_list.append(_data['Entries'][u'@odata.id'])
# For each entry in LogServices, get log name and all log entries
for log_svcs_uri in log_svcs_uri_list:
logs = {}
list_of_log_entries = []
response = self.get_request(self.root_uri + log_svcs_uri)
if response['ret'] is False:
return response
data = response['data']
logs['Description'] = data.get('Description',
'Collection of log entries')
# Get all log entries for each type of log found
for logEntry in data.get('Members', []):
entry = {}
for prop in properties:
if prop in logEntry:
entry[prop] = logEntry.get(prop)
if entry:
list_of_log_entries.append(entry)
log_name = log_svcs_uri.split('/')[-1]
logs[log_name] = list_of_log_entries
list_of_logs.append(logs)
# list_of_logs[logs{list_of_log_entries[entry{}]}]
return {'ret': True, 'entries': list_of_logs}
def clear_logs(self):
# Find LogService
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'LogServices' not in data:
return {'ret': False, 'msg': "LogServices resource not found"}
# Find all entries in LogServices
logs_uri = data["LogServices"]["@odata.id"]
response = self.get_request(self.root_uri + logs_uri)
if response['ret'] is False:
return response
data = response['data']
for log_svcs_entry in data[u'Members']:
response = self.get_request(self.root_uri + log_svcs_entry["@odata.id"])
if response['ret'] is False:
return response
_data = response['data']
# Check to make sure option is available, otherwise error is ugly
if "Actions" in _data:
if "#LogService.ClearLog" in _data[u"Actions"]:
self.post_request(self.root_uri + _data[u"Actions"]["#LogService.ClearLog"]["target"], {})
if response['ret'] is False:
return response
return {'ret': True}
def aggregate(self, func, uri_list, uri_name):
ret = True
entries = []
for uri in uri_list:
inventory = func(uri)
ret = inventory.pop('ret') and ret
if 'entries' in inventory:
entries.append(({uri_name: uri},
inventory['entries']))
return dict(ret=ret, entries=entries)
def aggregate_chassis(self, func):
return self.aggregate(func, self.chassis_uris, 'chassis_uri')
def aggregate_managers(self, func):
return self.aggregate(func, self.manager_uris, 'manager_uri')
def aggregate_systems(self, func):
return self.aggregate(func, self.systems_uris, 'system_uri')
def get_storage_controller_inventory(self, systems_uri):
result = {}
controller_list = []
controller_results = []
# Get these entries, but do not fail if not found
properties = ['CacheSummary', 'FirmwareVersion', 'Identifiers',
'Location', 'Manufacturer', 'Model', 'Name',
'PartNumber', 'SerialNumber', 'SpeedGbps', 'Status']
key = "StorageControllers"
# Find Storage service
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
if 'Storage' not in data:
return {'ret': False, 'msg': "Storage resource not found"}
# Get a list of all storage controllers and build respective URIs
storage_uri = data['Storage']["@odata.id"]
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
# Loop through Members and their StorageControllers
# and gather properties from each StorageController
if data[u'Members']:
for storage_member in data[u'Members']:
storage_member_uri = storage_member[u'@odata.id']
response = self.get_request(self.root_uri + storage_member_uri)
data = response['data']
if key in data:
controller_list = data[key]
for controller in controller_list:
controller_result = {}
for property in properties:
if property in controller:
controller_result[property] = controller[property]
controller_results.append(controller_result)
result['entries'] = controller_results
return result
else:
return {'ret': False, 'msg': "Storage resource not found"}
def get_multi_storage_controller_inventory(self):
return self.aggregate_systems(self.get_storage_controller_inventory)
def get_disk_inventory(self, systems_uri):
result = {'entries': []}
controller_list = []
# Get these entries, but do not fail if not found
properties = ['BlockSizeBytes', 'CapableSpeedGbs', 'CapacityBytes',
'EncryptionAbility', 'EncryptionStatus',
'FailurePredicted', 'HotspareType', 'Id', 'Identifiers',
'Manufacturer', 'MediaType', 'Model', 'Name',
'PartNumber', 'PhysicalLocation', 'Protocol', 'Revision',
'RotationSpeedRPM', 'SerialNumber', 'Status']
# Find Storage service
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
if 'SimpleStorage' not in data and 'Storage' not in data:
return {'ret': False, 'msg': "SimpleStorage and Storage resource \
not found"}
if 'Storage' in data:
# Get a list of all storage controllers and build respective URIs
storage_uri = data[u'Storage'][u'@odata.id']
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if data[u'Members']:
for controller in data[u'Members']:
controller_list.append(controller[u'@odata.id'])
for c in controller_list:
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
controller_name = 'Controller 1'
if 'StorageControllers' in data:
sc = data['StorageControllers']
if sc:
if 'Name' in sc[0]:
controller_name = sc[0]['Name']
else:
sc_id = sc[0].get('Id', '1')
controller_name = 'Controller %s' % sc_id
drive_results = []
if 'Drives' in data:
for device in data[u'Drives']:
disk_uri = self.root_uri + device[u'@odata.id']
response = self.get_request(disk_uri)
data = response['data']
drive_result = {}
for property in properties:
if property in data:
if data[property] is not None:
drive_result[property] = data[property]
drive_results.append(drive_result)
drives = {'Controller': controller_name,
'Drives': drive_results}
result["entries"].append(drives)
if 'SimpleStorage' in data:
# Get a list of all storage controllers and build respective URIs
storage_uri = data["SimpleStorage"]["@odata.id"]
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for controller in data[u'Members']:
controller_list.append(controller[u'@odata.id'])
for c in controller_list:
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
if 'Name' in data:
controller_name = data['Name']
else:
sc_id = data.get('Id', '1')
controller_name = 'Controller %s' % sc_id
drive_results = []
for device in data[u'Devices']:
drive_result = {}
for property in properties:
if property in device:
drive_result[property] = device[property]
drive_results.append(drive_result)
drives = {'Controller': controller_name,
'Drives': drive_results}
result["entries"].append(drives)
return result
def get_multi_disk_inventory(self):
return self.aggregate_systems(self.get_disk_inventory)
def get_volume_inventory(self, systems_uri):
result = {'entries': []}
controller_list = []
volume_list = []
        # Get these entries, but do not fail if not found
properties = ['Id', 'Name', 'RAIDType', 'VolumeType', 'BlockSizeBytes',
'Capacity', 'CapacityBytes', 'CapacitySources',
'Encrypted', 'EncryptionTypes', 'Identifiers',
'Operations', 'OptimumIOSizeBytes', 'AccessCapabilities',
'AllocatedPools', 'Status']
# Find Storage service
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
if 'SimpleStorage' not in data and 'Storage' not in data:
return {'ret': False, 'msg': "SimpleStorage and Storage resource \
not found"}
if 'Storage' in data:
# Get a list of all storage controllers and build respective URIs
storage_uri = data[u'Storage'][u'@odata.id']
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if data.get('Members'):
for controller in data[u'Members']:
controller_list.append(controller[u'@odata.id'])
for c in controller_list:
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
controller_name = 'Controller 1'
if 'StorageControllers' in data:
sc = data['StorageControllers']
if sc:
if 'Name' in sc[0]:
controller_name = sc[0]['Name']
else:
sc_id = sc[0].get('Id', '1')
controller_name = 'Controller %s' % sc_id
volume_results = []
if 'Volumes' in data:
# Get a list of all volumes and build respective URIs
volumes_uri = data[u'Volumes'][u'@odata.id']
response = self.get_request(self.root_uri + volumes_uri)
data = response['data']
if data.get('Members'):
for volume in data[u'Members']:
volume_list.append(volume[u'@odata.id'])
for v in volume_list:
uri = self.root_uri + v
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
volume_result = {}
for property in properties:
if property in data:
if data[property] is not None:
volume_result[property] = data[property]
# Get related Drives Id
drive_id_list = []
if 'Links' in data:
if 'Drives' in data[u'Links']:
for link in data[u'Links'][u'Drives']:
drive_id_link = link[u'@odata.id']
drive_id = drive_id_link.split("/")[-1]
drive_id_list.append({'Id': drive_id})
volume_result['Linked_drives'] = drive_id_list
volume_results.append(volume_result)
volumes = {'Controller': controller_name,
'Volumes': volume_results}
result["entries"].append(volumes)
else:
return {'ret': False, 'msg': "Storage resource not found"}
return result
def get_multi_volume_inventory(self):
return self.aggregate_systems(self.get_volume_inventory)
def restart_manager_gracefully(self):
result = {}
key = "Actions"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
action_uri = data[key]["#Manager.Reset"]["target"]
payload = {'ResetType': 'GracefulRestart'}
response = self.post_request(self.root_uri + action_uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def manage_indicator_led(self, command):
result = {}
key = 'IndicatorLED'
payloads = {'IndicatorLedOn': 'Lit', 'IndicatorLedOff': 'Off', "IndicatorLedBlink": 'Blinking'}
response = self.get_request(self.root_uri + self.chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
if command in payloads.keys():
payload = {'IndicatorLED': payloads[command]}
response = self.patch_request(self.root_uri + self.chassis_uri, payload)
if response['ret'] is False:
return response
else:
return {'ret': False, 'msg': 'Invalid command'}
return result
def _map_reset_type(self, reset_type, allowable_values):
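        # If the requested ResetType is not advertised by the service, try a
        # semantically close equivalent (e.g. GracefulRestart <-> ForceRestart)
        # before giving up and returning the original value.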
equiv_types = {
'On': 'ForceOn',
'ForceOn': 'On',
'ForceOff': 'GracefulShutdown',
'GracefulShutdown': 'ForceOff',
'GracefulRestart': 'ForceRestart',
'ForceRestart': 'GracefulRestart'
}
if reset_type in allowable_values:
return reset_type
if reset_type not in equiv_types:
return reset_type
mapped_type = equiv_types[reset_type]
if mapped_type in allowable_values:
return mapped_type
return reset_type
def manage_system_power(self, command):
key = "Actions"
reset_type_values = ['On', 'ForceOff', 'GracefulShutdown',
'GracefulRestart', 'ForceRestart', 'Nmi',
'ForceOn', 'PushPowerButton', 'PowerCycle']
# command should be PowerOn, PowerForceOff, etc.
if not command.startswith('Power'):
return {'ret': False, 'msg': 'Invalid Command (%s)' % command}
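        # e.g. command 'PowerGracefulRestart' yields reset_type 'GracefulRestart'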
reset_type = command[5:]
# map Reboot to a ResetType that does a reboot
if reset_type == 'Reboot':
reset_type = 'GracefulRestart'
if reset_type not in reset_type_values:
return {'ret': False, 'msg': 'Invalid Command (%s)' % command}
# read the system resource and get the current power state
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
data = response['data']
power_state = data.get('PowerState')
# if power is already in target state, nothing to do
if power_state == "On" and reset_type in ['On', 'ForceOn']:
return {'ret': True, 'changed': False}
if power_state == "Off" and reset_type in ['GracefulShutdown', 'ForceOff']:
return {'ret': True, 'changed': False}
# get the #ComputerSystem.Reset Action and target URI
if key not in data or '#ComputerSystem.Reset' not in data[key]:
return {'ret': False, 'msg': 'Action #ComputerSystem.Reset not found'}
reset_action = data[key]['#ComputerSystem.Reset']
if 'target' not in reset_action:
return {'ret': False,
'msg': 'target URI missing from Action #ComputerSystem.Reset'}
action_uri = reset_action['target']
# get AllowableValues from ActionInfo
allowable_values = None
if '@Redfish.ActionInfo' in reset_action:
action_info_uri = reset_action.get('@Redfish.ActionInfo')
response = self.get_request(self.root_uri + action_info_uri)
if response['ret'] is True:
data = response['data']
if 'Parameters' in data:
params = data['Parameters']
for param in params:
if param.get('Name') == 'ResetType':
allowable_values = param.get('AllowableValues')
break
# fallback to @Redfish.AllowableValues annotation
if allowable_values is None:
            allowable_values = reset_action.get('ResetType@Redfish.AllowableValues', [])
# map ResetType to an allowable value if needed
if reset_type not in allowable_values:
reset_type = self._map_reset_type(reset_type, allowable_values)
# define payload
payload = {'ResetType': reset_type}
# POST to Action URI
response = self.post_request(self.root_uri + action_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True}
def _find_account_uri(self, username=None, acct_id=None):
if not any((username, acct_id)):
return {'ret': False, 'msg':
'Must provide either account_id or account_username'}
response = self.get_request(self.root_uri + self.accounts_uri)
if response['ret'] is False:
return response
data = response['data']
uris = [a.get('@odata.id') for a in data.get('Members', []) if
a.get('@odata.id')]
for uri in uris:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
continue
data = response['data']
headers = response['headers']
if username:
if username == data.get('UserName'):
return {'ret': True, 'data': data,
'headers': headers, 'uri': uri}
if acct_id:
if acct_id == data.get('Id'):
return {'ret': True, 'data': data,
'headers': headers, 'uri': uri}
return {'ret': False, 'no_match': True, 'msg':
'No account with the given account_id or account_username found'}
def _find_empty_account_slot(self):
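        # Some services expose a fixed set of pre-created account slots; an
        # "empty" slot is one whose UserName is blank and which is disabled.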
response = self.get_request(self.root_uri + self.accounts_uri)
if response['ret'] is False:
return response
data = response['data']
uris = [a.get('@odata.id') for a in data.get('Members', []) if
a.get('@odata.id')]
if uris:
# first slot may be reserved, so move to end of list
uris += [uris.pop(0)]
for uri in uris:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
continue
data = response['data']
headers = response['headers']
if data.get('UserName') == "" and not data.get('Enabled', True):
return {'ret': True, 'data': data,
'headers': headers, 'uri': uri}
return {'ret': False, 'no_match': True, 'msg':
'No empty account slot found'}
def list_users(self):
result = {}
        # Note: listing all users is typically slower than other Redfish operations
user_list = []
users_results = []
        # Get these entries, but do not fail if not found
properties = ['Id', 'Name', 'UserName', 'RoleId', 'Locked', 'Enabled']
response = self.get_request(self.root_uri + self.accounts_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for users in data.get('Members', []):
user_list.append(users[u'@odata.id']) # user_list[] are URIs
# for each user, get details
for uri in user_list:
user = {}
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
user[property] = data[property]
users_results.append(user)
result["entries"] = users_results
return result
def add_user_via_patch(self, user):
if user.get('account_id'):
# If Id slot specified, use it
response = self._find_account_uri(acct_id=user.get('account_id'))
else:
# Otherwise find first empty slot
response = self._find_empty_account_slot()
if not response['ret']:
return response
uri = response['uri']
payload = {}
if user.get('account_username'):
payload['UserName'] = user.get('account_username')
if user.get('account_password'):
payload['Password'] = user.get('account_password')
if user.get('account_roleid'):
payload['RoleId'] = user.get('account_roleid')
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def add_user(self, user):
if not user.get('account_username'):
return {'ret': False, 'msg':
'Must provide account_username for AddUser command'}
response = self._find_account_uri(username=user.get('account_username'))
if response['ret']:
# account_username already exists, nothing to do
return {'ret': True, 'changed': False}
response = self.get_request(self.root_uri + self.accounts_uri)
if not response['ret']:
return response
headers = response['headers']
if 'allow' in headers:
methods = [m.strip() for m in headers.get('allow').split(',')]
if 'POST' not in methods:
# if Allow header present and POST not listed, add via PATCH
return self.add_user_via_patch(user)
payload = {}
if user.get('account_username'):
payload['UserName'] = user.get('account_username')
if user.get('account_password'):
payload['Password'] = user.get('account_password')
if user.get('account_roleid'):
payload['RoleId'] = user.get('account_roleid')
response = self.post_request(self.root_uri + self.accounts_uri, payload)
if not response['ret']:
if response.get('status') == 405:
# if POST returned a 405, try to add via PATCH
return self.add_user_via_patch(user)
else:
return response
return {'ret': True}
def enable_user(self, user):
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
data = response['data']
if data.get('Enabled', True):
# account already enabled, nothing to do
return {'ret': True, 'changed': False}
payload = {'Enabled': True}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def delete_user_via_patch(self, user, uri=None, data=None):
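        # Fallback for services that do not allow DELETE on account resources:
        # clearing UserName and disabling the account via PATCH frees the slot.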
if not uri:
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
data = response['data']
if data and data.get('UserName') == '' and not data.get('Enabled', False):
# account UserName already cleared, nothing to do
return {'ret': True, 'changed': False}
payload = {'UserName': ''}
if data.get('Enabled', False):
payload['Enabled'] = False
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def delete_user(self, user):
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
if response.get('no_match'):
# account does not exist, nothing to do
return {'ret': True, 'changed': False}
else:
# some error encountered
return response
uri = response['uri']
headers = response['headers']
data = response['data']
if 'allow' in headers:
methods = [m.strip() for m in headers.get('allow').split(',')]
if 'DELETE' not in methods:
# if Allow header present and DELETE not listed, del via PATCH
return self.delete_user_via_patch(user, uri=uri, data=data)
response = self.delete_request(self.root_uri + uri)
if not response['ret']:
if response.get('status') == 405:
# if DELETE returned a 405, try to delete via PATCH
return self.delete_user_via_patch(user, uri=uri, data=data)
else:
return response
return {'ret': True}
def disable_user(self, user):
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
data = response['data']
if not data.get('Enabled'):
# account already disabled, nothing to do
return {'ret': True, 'changed': False}
payload = {'Enabled': False}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def update_user_role(self, user):
if not user.get('account_roleid'):
return {'ret': False, 'msg':
'Must provide account_roleid for UpdateUserRole command'}
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
data = response['data']
if data.get('RoleId') == user.get('account_roleid'):
            # account already has this RoleId, nothing to do
return {'ret': True, 'changed': False}
payload = {'RoleId': user.get('account_roleid')}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def update_user_password(self, user):
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
payload = {'Password': user['account_password']}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def update_user_name(self, user):
if not user.get('account_updatename'):
return {'ret': False, 'msg':
'Must provide account_updatename for UpdateUserName command'}
response = self._find_account_uri(username=user.get('account_username'),
acct_id=user.get('account_id'))
if not response['ret']:
return response
uri = response['uri']
payload = {'UserName': user['account_updatename']}
response = self.patch_request(self.root_uri + uri, payload)
if response['ret'] is False:
return response
return {'ret': True}
def update_accountservice_properties(self, user):
if user.get('account_properties') is None:
return {'ret': False, 'msg':
'Must provide account_properties for UpdateAccountServiceProperties command'}
account_properties = user.get('account_properties')
# Find AccountService
response = self.get_request(self.root_uri + self.service_root)
if response['ret'] is False:
return response
data = response['data']
if 'AccountService' not in data:
return {'ret': False, 'msg': "AccountService resource not found"}
accountservice_uri = data["AccountService"]["@odata.id"]
        # Check whether the requested properties are supported
response = self.get_request(self.root_uri + accountservice_uri)
if response['ret'] is False:
return response
data = response['data']
for property_name in account_properties.keys():
if property_name not in data:
return {'ret': False, 'msg':
'property %s not supported' % property_name}
        # if the properties already match, nothing to do
need_change = False
for property_name in account_properties.keys():
if account_properties[property_name] != data[property_name]:
need_change = True
break
if not need_change:
return {'ret': True, 'changed': False, 'msg': "AccountService properties already set"}
payload = account_properties
response = self.patch_request(self.root_uri + accountservice_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified AccountService properties"}
def get_sessions(self):
result = {}
        # Note: listing all sessions is typically slower than other Redfish operations
session_list = []
sessions_results = []
        # Get these entries, but do not fail if not found
properties = ['Description', 'Id', 'Name', 'UserName']
response = self.get_request(self.root_uri + self.sessions_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for sessions in data[u'Members']:
session_list.append(sessions[u'@odata.id']) # session_list[] are URIs
# for each session, get details
for uri in session_list:
session = {}
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
session[property] = data[property]
sessions_results.append(session)
result["entries"] = sessions_results
return result
def clear_sessions(self):
response = self.get_request(self.root_uri + self.sessions_uri)
if response['ret'] is False:
return response
data = response['data']
# if no active sessions, return as success
        if data['Members@odata.count'] == 0:
return {'ret': True, 'changed': False, 'msg': "There is no active sessions"}
# loop to delete every active session
for session in data[u'Members']:
response = self.delete_request(self.root_uri + session[u'@odata.id'])
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Clear all sessions successfully"}
def get_firmware_update_capabilities(self):
result = {}
response = self.get_request(self.root_uri + self.update_uri)
if response['ret'] is False:
return response
result['ret'] = True
result['entries'] = {}
data = response['data']
if "Actions" in data:
actions = data['Actions']
if len(actions) > 0:
for key in actions.keys():
action = actions.get(key)
if 'title' in action:
title = action['title']
else:
title = key
                    result['entries'][title] = action.get('TransferProtocol@Redfish.AllowableValues',
                                                          ["Key TransferProtocol@Redfish.AllowableValues not found"])
else:
return {'ret': "False", 'msg': "Actions list is empty."}
else:
return {'ret': "False", 'msg': "Key Actions not found."}
return result
def _software_inventory(self, uri):
result = {}
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
result['entries'] = []
for member in data[u'Members']:
uri = self.root_uri + member[u'@odata.id']
# Get details for each software or firmware member
response = self.get_request(uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
software = {}
# Get these standard properties if present
for key in ['Name', 'Id', 'Status', 'Version', 'Updateable',
'SoftwareId', 'LowestSupportedVersion', 'Manufacturer',
'ReleaseDate']:
if key in data:
software[key] = data.get(key)
result['entries'].append(software)
return result
def get_firmware_inventory(self):
if self.firmware_uri is None:
return {'ret': False, 'msg': 'No FirmwareInventory resource found'}
else:
return self._software_inventory(self.firmware_uri)
def get_software_inventory(self):
if self.software_uri is None:
return {'ret': False, 'msg': 'No SoftwareInventory resource found'}
else:
return self._software_inventory(self.software_uri)
def get_bios_attributes(self, systems_uri):
result = {}
bios_attributes = {}
key = "Bios"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
bios_uri = data[key]["@odata.id"]
response = self.get_request(self.root_uri + bios_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for attribute in data[u'Attributes'].items():
bios_attributes[attribute[0]] = attribute[1]
result["entries"] = bios_attributes
return result
def get_multi_bios_attributes(self):
return self.aggregate_systems(self.get_bios_attributes)
def _get_boot_options_dict(self, boot):
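        # Build a dict of BootOption details keyed by BootOptionReference so
        # entries in BootOrder can be resolved to human-readable boot devices.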
# Get these entries from BootOption, if present
properties = ['DisplayName', 'BootOptionReference']
# Retrieve BootOptions if present
if 'BootOptions' in boot and '@odata.id' in boot['BootOptions']:
boot_options_uri = boot['BootOptions']["@odata.id"]
# Get BootOptions resource
response = self.get_request(self.root_uri + boot_options_uri)
if response['ret'] is False:
return {}
data = response['data']
# Retrieve Members array
if 'Members' not in data:
return {}
members = data['Members']
else:
members = []
# Build dict of BootOptions keyed by BootOptionReference
boot_options_dict = {}
for member in members:
if '@odata.id' not in member:
return {}
boot_option_uri = member['@odata.id']
response = self.get_request(self.root_uri + boot_option_uri)
if response['ret'] is False:
return {}
data = response['data']
if 'BootOptionReference' not in data:
return {}
boot_option_ref = data['BootOptionReference']
# fetch the props to display for this boot device
boot_props = {}
for prop in properties:
if prop in data:
boot_props[prop] = data[prop]
boot_options_dict[boot_option_ref] = boot_props
return boot_options_dict
def get_boot_order(self, systems_uri):
result = {}
# Retrieve System resource
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
# Confirm needed Boot properties are present
if 'Boot' not in data or 'BootOrder' not in data['Boot']:
return {'ret': False, 'msg': "Key BootOrder not found"}
boot = data['Boot']
boot_order = boot['BootOrder']
boot_options_dict = self._get_boot_options_dict(boot)
# Build boot device list
boot_device_list = []
for ref in boot_order:
boot_device_list.append(
boot_options_dict.get(ref, {'BootOptionReference': ref}))
result["entries"] = boot_device_list
return result
def get_multi_boot_order(self):
return self.aggregate_systems(self.get_boot_order)
def get_boot_override(self, systems_uri):
result = {}
properties = ["BootSourceOverrideEnabled", "BootSourceOverrideTarget",
"BootSourceOverrideMode", "UefiTargetBootSourceOverride", "[email protected]"]
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if 'Boot' not in data:
return {'ret': False, 'msg': "Key Boot not found"}
boot = data['Boot']
boot_overrides = {}
if "BootSourceOverrideEnabled" in boot:
if boot["BootSourceOverrideEnabled"] is not False:
for property in properties:
if property in boot:
if boot[property] is not None:
boot_overrides[property] = boot[property]
else:
return {'ret': False, 'msg': "No boot override is enabled."}
result['entries'] = boot_overrides
return result
def get_multi_boot_override(self):
return self.aggregate_systems(self.get_boot_override)
def set_bios_default_settings(self):
result = {}
key = "Bios"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
bios_uri = data[key]["@odata.id"]
# Extract proper URI
response = self.get_request(self.root_uri + bios_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
reset_bios_settings_uri = data["Actions"]["#Bios.ResetBios"]["target"]
response = self.post_request(self.root_uri + reset_bios_settings_uri, {})
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Set BIOS to default settings"}
def set_one_time_boot_device(self, bootdevice, uefi_target, boot_next):
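        # Three payload shapes are possible: UefiTarget (requires uefi_target),
        # UefiBootNext (requires boot_next) and a plain boot source such as Pxe.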
result = {}
key = "Boot"
if not bootdevice:
return {'ret': False,
'msg': "bootdevice option required for SetOneTimeBoot"}
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
boot = data[key]
        annotation = 'BootSourceOverrideTarget@Redfish.AllowableValues'
if annotation in boot:
allowable_values = boot[annotation]
if isinstance(allowable_values, list) and bootdevice not in allowable_values:
return {'ret': False,
'msg': "Boot device %s not in list of allowable values (%s)" %
(bootdevice, allowable_values)}
# read existing values
enabled = boot.get('BootSourceOverrideEnabled')
target = boot.get('BootSourceOverrideTarget')
cur_uefi_target = boot.get('UefiTargetBootSourceOverride')
cur_boot_next = boot.get('BootNext')
if bootdevice == 'UefiTarget':
if not uefi_target:
return {'ret': False,
'msg': "uefi_target option required to SetOneTimeBoot for UefiTarget"}
if enabled == 'Once' and target == bootdevice and uefi_target == cur_uefi_target:
# If properties are already set, no changes needed
return {'ret': True, 'changed': False}
payload = {
'Boot': {
'BootSourceOverrideEnabled': 'Once',
'BootSourceOverrideTarget': bootdevice,
'UefiTargetBootSourceOverride': uefi_target
}
}
elif bootdevice == 'UefiBootNext':
if not boot_next:
return {'ret': False,
'msg': "boot_next option required to SetOneTimeBoot for UefiBootNext"}
if enabled == 'Once' and target == bootdevice and boot_next == cur_boot_next:
# If properties are already set, no changes needed
return {'ret': True, 'changed': False}
payload = {
'Boot': {
'BootSourceOverrideEnabled': 'Once',
'BootSourceOverrideTarget': bootdevice,
'BootNext': boot_next
}
}
else:
if enabled == 'Once' and target == bootdevice:
# If properties are already set, no changes needed
return {'ret': True, 'changed': False}
payload = {
'Boot': {
'BootSourceOverrideEnabled': 'Once',
'BootSourceOverrideTarget': bootdevice
}
}
response = self.patch_request(self.root_uri + self.systems_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True}
def set_bios_attributes(self, attributes):
result = {}
key = "Bios"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
bios_uri = data[key]["@odata.id"]
# Extract proper URI
response = self.get_request(self.root_uri + bios_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
# Make a copy of the attributes dict
attrs_to_patch = dict(attributes)
# Check the attributes
for attr in attributes:
if attr not in data[u'Attributes']:
return {'ret': False, 'msg': "BIOS attribute %s not found" % attr}
# If already set to requested value, remove it from PATCH payload
if data[u'Attributes'][attr] == attributes[attr]:
del attrs_to_patch[attr]
# Return success w/ changed=False if no attrs need to be changed
if not attrs_to_patch:
return {'ret': True, 'changed': False,
'msg': "BIOS attributes already set"}
# Get the SettingsObject URI
set_bios_attr_uri = data["@Redfish.Settings"]["SettingsObject"]["@odata.id"]
# Construct payload and issue PATCH command
payload = {"Attributes": attrs_to_patch}
response = self.patch_request(self.root_uri + set_bios_attr_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified BIOS attribute"}
def set_boot_order(self, boot_list):
if not boot_list:
return {'ret': False,
'msg': "boot_order list required for SetBootOrder command"}
systems_uri = self.systems_uri
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
# Confirm needed Boot properties are present
if 'Boot' not in data or 'BootOrder' not in data['Boot']:
return {'ret': False, 'msg': "Key BootOrder not found"}
boot = data['Boot']
boot_order = boot['BootOrder']
boot_options_dict = self._get_boot_options_dict(boot)
# validate boot_list against BootOptionReferences if available
if boot_options_dict:
boot_option_references = boot_options_dict.keys()
for ref in boot_list:
if ref not in boot_option_references:
return {'ret': False,
'msg': "BootOptionReference %s not found in BootOptions" % ref}
# If requested BootOrder is already set, nothing to do
if boot_order == boot_list:
return {'ret': True, 'changed': False,
'msg': "BootOrder already set to %s" % boot_list}
payload = {
'Boot': {
'BootOrder': boot_list
}
}
response = self.patch_request(self.root_uri + systems_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "BootOrder set"}
def set_default_boot_order(self):
systems_uri = self.systems_uri
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
data = response['data']
# get the #ComputerSystem.SetDefaultBootOrder Action and target URI
action = '#ComputerSystem.SetDefaultBootOrder'
if 'Actions' not in data or action not in data['Actions']:
return {'ret': False, 'msg': 'Action %s not found' % action}
if 'target' not in data['Actions'][action]:
return {'ret': False,
'msg': 'target URI missing from Action %s' % action}
action_uri = data['Actions'][action]['target']
# POST to Action URI
payload = {}
response = self.post_request(self.root_uri + action_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True,
'msg': "BootOrder set to default"}
def get_chassis_inventory(self):
result = {}
chassis_results = []
        # Get these entries, but do not fail if not found
properties = ['ChassisType', 'PartNumber', 'AssetTag',
'Manufacturer', 'IndicatorLED', 'SerialNumber', 'Model']
# Go through list
for chassis_uri in self.chassis_uris:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
chassis_result = {}
for property in properties:
if property in data:
chassis_result[property] = data[property]
chassis_results.append(chassis_result)
result["entries"] = chassis_results
return result
def get_fan_inventory(self):
result = {}
fan_results = []
key = "Thermal"
        # Get these entries, but do not fail if not found
properties = ['FanName', 'Reading', 'ReadingUnits', 'Status']
# Go through list
for chassis_uri in self.chassis_uris:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key in data:
# match: found an entry for "Thermal" information = fans
thermal_uri = data[key]["@odata.id"]
response = self.get_request(self.root_uri + thermal_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for device in data[u'Fans']:
fan = {}
for property in properties:
if property in device:
fan[property] = device[property]
fan_results.append(fan)
result["entries"] = fan_results
return result
def get_chassis_power(self):
result = {}
key = "Power"
        # Get these entries, but do not fail if not found
properties = ['Name', 'PowerAllocatedWatts',
'PowerAvailableWatts', 'PowerCapacityWatts',
'PowerConsumedWatts', 'PowerMetrics',
'PowerRequestedWatts', 'RelatedItem', 'Status']
chassis_power_results = []
# Go through list
for chassis_uri in self.chassis_uris:
chassis_power_result = {}
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key in data:
response = self.get_request(self.root_uri + data[key]['@odata.id'])
data = response['data']
if 'PowerControl' in data:
if len(data['PowerControl']) > 0:
data = data['PowerControl'][0]
for property in properties:
if property in data:
chassis_power_result[property] = data[property]
else:
return {'ret': False, 'msg': 'Key PowerControl not found.'}
chassis_power_results.append(chassis_power_result)
else:
return {'ret': False, 'msg': 'Key Power not found.'}
result['entries'] = chassis_power_results
return result
def get_chassis_thermals(self):
result = {}
sensors = []
key = "Thermal"
        # Get these entries, but do not fail if not found
properties = ['Name', 'PhysicalContext', 'UpperThresholdCritical',
'UpperThresholdFatal', 'UpperThresholdNonCritical',
'LowerThresholdCritical', 'LowerThresholdFatal',
'LowerThresholdNonCritical', 'MaxReadingRangeTemp',
'MinReadingRangeTemp', 'ReadingCelsius', 'RelatedItem',
'SensorNumber']
# Go through list
for chassis_uri in self.chassis_uris:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key in data:
thermal_uri = data[key]["@odata.id"]
response = self.get_request(self.root_uri + thermal_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if "Temperatures" in data:
for sensor in data[u'Temperatures']:
sensor_result = {}
for property in properties:
if property in sensor:
if sensor[property] is not None:
sensor_result[property] = sensor[property]
sensors.append(sensor_result)
        if not sensors:
            return {'ret': False, 'msg': 'Key Temperatures was not found.'}
result['entries'] = sensors
return result
def get_cpu_inventory(self, systems_uri):
result = {}
cpu_list = []
cpu_results = []
key = "Processors"
        # Get these entries, but do not fail if not found
properties = ['Id', 'Manufacturer', 'Model', 'MaxSpeedMHz', 'TotalCores',
'TotalThreads', 'Status']
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
processors_uri = data[key]["@odata.id"]
# Get a list of all CPUs and build respective URIs
response = self.get_request(self.root_uri + processors_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for cpu in data[u'Members']:
cpu_list.append(cpu[u'@odata.id'])
for c in cpu_list:
cpu = {}
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
cpu[property] = data[property]
cpu_results.append(cpu)
result["entries"] = cpu_results
return result
def get_multi_cpu_inventory(self):
return self.aggregate_systems(self.get_cpu_inventory)
def get_memory_inventory(self, systems_uri):
result = {}
memory_list = []
memory_results = []
key = "Memory"
        # Get these entries, but do not fail if not found
        properties = ['SerialNumber', 'MemoryDeviceType', 'PartNumber',
'MemoryLocation', 'RankCount', 'CapacityMiB', 'OperatingMemoryModes', 'Status', 'Manufacturer', 'Name']
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
memory_uri = data[key]["@odata.id"]
# Get a list of all DIMMs and build respective URIs
response = self.get_request(self.root_uri + memory_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for dimm in data[u'Members']:
memory_list.append(dimm[u'@odata.id'])
for m in memory_list:
dimm = {}
uri = self.root_uri + m
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
if "Status" in data:
if "State" in data["Status"]:
if data["Status"]["State"] == "Absent":
continue
else:
continue
for property in properties:
if property in data:
dimm[property] = data[property]
memory_results.append(dimm)
result["entries"] = memory_results
return result
def get_multi_memory_inventory(self):
return self.aggregate_systems(self.get_memory_inventory)
def get_nic_inventory(self, resource_uri):
result = {}
nic_list = []
nic_results = []
key = "EthernetInterfaces"
        # Get these entries, but do not fail if not found
properties = ['Description', 'FQDN', 'IPv4Addresses', 'IPv6Addresses',
'NameServers', 'MACAddress', 'PermanentMACAddress',
'SpeedMbps', 'MTUSize', 'AutoNeg', 'Status']
response = self.get_request(self.root_uri + resource_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
ethernetinterfaces_uri = data[key]["@odata.id"]
# Get a list of all network controllers and build respective URIs
response = self.get_request(self.root_uri + ethernetinterfaces_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for nic in data[u'Members']:
nic_list.append(nic[u'@odata.id'])
for n in nic_list:
nic = {}
uri = self.root_uri + n
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
nic[property] = data[property]
nic_results.append(nic)
result["entries"] = nic_results
return result
def get_multi_nic_inventory(self, resource_type):
ret = True
entries = []
# Given resource_type, use the proper URI
if resource_type == 'Systems':
resource_uris = self.systems_uris
elif resource_type == 'Manager':
resource_uris = self.manager_uris
for resource_uri in resource_uris:
inventory = self.get_nic_inventory(resource_uri)
ret = inventory.pop('ret') and ret
if 'entries' in inventory:
entries.append(({'resource_uri': resource_uri},
inventory['entries']))
return dict(ret=ret, entries=entries)
def get_virtualmedia(self, resource_uri):
result = {}
virtualmedia_list = []
virtualmedia_results = []
key = "VirtualMedia"
        # Get these entries, but do not fail if not found
properties = ['Description', 'ConnectedVia', 'Id', 'MediaTypes',
'Image', 'ImageName', 'Name', 'WriteProtected',
'TransferMethod', 'TransferProtocolType']
response = self.get_request(self.root_uri + resource_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
virtualmedia_uri = data[key]["@odata.id"]
# Get a list of all virtual media and build respective URIs
response = self.get_request(self.root_uri + virtualmedia_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for virtualmedia in data[u'Members']:
virtualmedia_list.append(virtualmedia[u'@odata.id'])
for n in virtualmedia_list:
virtualmedia = {}
uri = self.root_uri + n
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
for property in properties:
if property in data:
virtualmedia[property] = data[property]
virtualmedia_results.append(virtualmedia)
result["entries"] = virtualmedia_results
return result
def get_multi_virtualmedia(self):
ret = True
entries = []
resource_uris = self.manager_uris
for resource_uri in resource_uris:
virtualmedia = self.get_virtualmedia(resource_uri)
ret = virtualmedia.pop('ret') and ret
if 'entries' in virtualmedia:
entries.append(({'resource_uri': resource_uri},
virtualmedia['entries']))
return dict(ret=ret, entries=entries)
def get_psu_inventory(self):
result = {}
psu_list = []
psu_results = []
key = "PowerSupplies"
        # Get these entries, but do not fail if not found
properties = ['Name', 'Model', 'SerialNumber', 'PartNumber', 'Manufacturer',
'FirmwareVersion', 'PowerCapacityWatts', 'PowerSupplyType',
'Status']
# Get a list of all Chassis and build URIs, then get all PowerSupplies
# from each Power entry in the Chassis
chassis_uri_list = self.chassis_uris
for chassis_uri in chassis_uri_list:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if 'Power' in data:
power_uri = data[u'Power'][u'@odata.id']
else:
continue
response = self.get_request(self.root_uri + power_uri)
data = response['data']
if key not in data:
return {'ret': False, 'msg': "Key %s not found" % key}
psu_list = data[key]
for psu in psu_list:
psu_not_present = False
psu_data = {}
for property in properties:
if property in psu:
if psu[property] is not None:
if property == 'Status':
if 'State' in psu[property]:
if psu[property]['State'] == 'Absent':
psu_not_present = True
psu_data[property] = psu[property]
if psu_not_present:
continue
psu_results.append(psu_data)
result["entries"] = psu_results
if not result["entries"]:
return {'ret': False, 'msg': "No PowerSupply objects found"}
return result
def get_multi_psu_inventory(self):
return self.aggregate_systems(self.get_psu_inventory)
def get_system_inventory(self, systems_uri):
result = {}
inventory = {}
        # Get these entries, but do not fail if not found
properties = ['Status', 'HostName', 'PowerState', 'Model', 'Manufacturer',
'PartNumber', 'SystemType', 'AssetTag', 'ServiceTag',
'SerialNumber', 'SKU', 'BiosVersion', 'MemorySummary',
'ProcessorSummary', 'TrustedModules']
response = self.get_request(self.root_uri + systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for property in properties:
if property in data:
inventory[property] = data[property]
result["entries"] = inventory
return result
def get_multi_system_inventory(self):
return self.aggregate_systems(self.get_system_inventory)
def get_network_protocols(self):
result = {}
service_result = {}
# Find NetworkProtocol
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'NetworkProtocol' not in data:
return {'ret': False, 'msg': "NetworkProtocol resource not found"}
networkprotocol_uri = data["NetworkProtocol"]["@odata.id"]
response = self.get_request(self.root_uri + networkprotocol_uri)
if response['ret'] is False:
return response
data = response['data']
protocol_services = ['SNMP', 'VirtualMedia', 'Telnet', 'SSDP', 'IPMI', 'SSH',
'KVMIP', 'NTP', 'HTTP', 'HTTPS', 'DHCP', 'DHCPv6', 'RDP',
'RFB']
for protocol_service in protocol_services:
if protocol_service in data.keys():
service_result[protocol_service] = data[protocol_service]
result['ret'] = True
result["entries"] = service_result
return result
def set_network_protocols(self, manager_services):
# Check input data validity
protocol_services = ['SNMP', 'VirtualMedia', 'Telnet', 'SSDP', 'IPMI', 'SSH',
'KVMIP', 'NTP', 'HTTP', 'HTTPS', 'DHCP', 'DHCPv6', 'RDP',
'RFB']
protocol_state_onlist = ['true', 'True', True, 'on', 1]
protocol_state_offlist = ['false', 'False', False, 'off', 0]
payload = {}
for service_name in manager_services.keys():
if service_name not in protocol_services:
return {'ret': False, 'msg': "Service name %s is invalid" % service_name}
payload[service_name] = {}
for service_property in manager_services[service_name].keys():
value = manager_services[service_name][service_property]
if service_property in ['ProtocolEnabled', 'protocolenabled']:
if value in protocol_state_onlist:
payload[service_name]['ProtocolEnabled'] = True
elif value in protocol_state_offlist:
payload[service_name]['ProtocolEnabled'] = False
else:
return {'ret': False, 'msg': "Value of property %s is invalid" % service_property}
elif service_property in ['port', 'Port']:
if isinstance(value, int):
payload[service_name]['Port'] = value
elif isinstance(value, str) and value.isdigit():
payload[service_name]['Port'] = int(value)
else:
return {'ret': False, 'msg': "Value of property %s is invalid" % service_property}
else:
payload[service_name][service_property] = value
# Find NetworkProtocol
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'NetworkProtocol' not in data:
return {'ret': False, 'msg': "NetworkProtocol resource not found"}
networkprotocol_uri = data["NetworkProtocol"]["@odata.id"]
        # Check whether each requested service property is supported
response = self.get_request(self.root_uri + networkprotocol_uri)
if response['ret'] is False:
return response
data = response['data']
for service_name in payload.keys():
if service_name not in data:
return {'ret': False, 'msg': "%s service not supported" % service_name}
for service_property in payload[service_name].keys():
if service_property not in data[service_name]:
return {'ret': False, 'msg': "%s property for %s service not supported" % (service_property, service_name)}
# if the protocol is already set, nothing to do
need_change = False
for service_name in payload.keys():
for service_property in payload[service_name].keys():
value = payload[service_name][service_property]
if value != data[service_name][service_property]:
need_change = True
break
if not need_change:
return {'ret': True, 'changed': False, 'msg': "Manager NetworkProtocol services already set"}
response = self.patch_request(self.root_uri + networkprotocol_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified Manager NetworkProtocol services"}
@staticmethod
def to_singular(resource_name):
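        # Crude singularisation used to build the '<subsystem>_uri' keys in
        # health reports, e.g. 'PowerSupplies' -> 'PowerSupply', 'Fans' -> 'Fan'.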
if resource_name.endswith('ies'):
resource_name = resource_name[:-3] + 'y'
elif resource_name.endswith('s'):
resource_name = resource_name[:-1]
return resource_name
def get_health_resource(self, subsystem, uri, health, expanded):
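        # 'expanded' carries a resource that was already inlined by the parent
        # (e.g. via $expand); otherwise the resource is fetched from 'uri'.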
status = 'Status'
if expanded:
d = expanded
else:
r = self.get_request(self.root_uri + uri)
if r.get('ret'):
d = r.get('data')
else:
return
if 'Members' in d: # collections case
for m in d.get('Members'):
u = m.get('@odata.id')
r = self.get_request(self.root_uri + u)
if r.get('ret'):
p = r.get('data')
if p:
e = {self.to_singular(subsystem.lower()) + '_uri': u,
status: p.get(status,
"Status not available")}
health[subsystem].append(e)
else: # non-collections case
e = {self.to_singular(subsystem.lower()) + '_uri': uri,
status: d.get(status,
"Status not available")}
health[subsystem].append(e)
def get_health_subsystem(self, subsystem, data, health):
if subsystem in data:
sub = data.get(subsystem)
if isinstance(sub, list):
for r in sub:
if '@odata.id' in r:
uri = r.get('@odata.id')
expanded = None
if '#' in uri and len(r) > 1:
expanded = r
self.get_health_resource(subsystem, uri, health, expanded)
elif isinstance(sub, dict):
if '@odata.id' in sub:
uri = sub.get('@odata.id')
self.get_health_resource(subsystem, uri, health, None)
elif 'Members' in data:
for m in data.get('Members'):
u = m.get('@odata.id')
r = self.get_request(self.root_uri + u)
if r.get('ret'):
d = r.get('data')
self.get_health_subsystem(subsystem, d, health)
def get_health_report(self, category, uri, subsystems):
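        # 'subsystems' entries may be plain ('Memory'), dotted ('Thermal.Fans'
        # = property Fans inside the linked Thermal resource) or prefixed with
        # 'Links.' to look the subsystem up inside the resource's Links object.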
result = {}
health = {}
status = 'Status'
# Get health status of top level resource
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
health[category] = {status: data.get(status, "Status not available")}
# Get health status of subsystems
for sub in subsystems:
d = None
if sub.startswith('Links.'): # ex: Links.PCIeDevices
sub = sub[len('Links.'):]
d = data.get('Links', {})
elif '.' in sub: # ex: Thermal.Fans
p, sub = sub.split('.')
u = data.get(p, {}).get('@odata.id')
if u:
r = self.get_request(self.root_uri + u)
if r['ret']:
d = r['data']
if not d:
continue
else: # ex: Memory
d = data
health[sub] = []
self.get_health_subsystem(sub, d, health)
if not health[sub]:
del health[sub]
result["entries"] = health
return result
def get_system_health_report(self, systems_uri):
subsystems = ['Processors', 'Memory', 'SimpleStorage', 'Storage',
'EthernetInterfaces', 'NetworkInterfaces.NetworkPorts',
'NetworkInterfaces.NetworkDeviceFunctions']
return self.get_health_report('System', systems_uri, subsystems)
def get_multi_system_health_report(self):
return self.aggregate_systems(self.get_system_health_report)
def get_chassis_health_report(self, chassis_uri):
subsystems = ['Power.PowerSupplies', 'Thermal.Fans',
'Links.PCIeDevices']
return self.get_health_report('Chassis', chassis_uri, subsystems)
def get_multi_chassis_health_report(self):
return self.aggregate_chassis(self.get_chassis_health_report)
def get_manager_health_report(self, manager_uri):
subsystems = []
return self.get_health_report('Manager', manager_uri, subsystems)
def get_multi_manager_health_report(self):
return self.aggregate_managers(self.get_manager_health_report)
def set_manager_nic(self, nic_addr, nic_config):
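        # Locate the manager EthernetInterface matching nic_addr (or the one
        # matching the connection address when nic_addr is 'null'), then PATCH
        # it only if the requested settings differ from the current ones.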
# Get EthernetInterface collection
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'EthernetInterfaces' not in data:
return {'ret': False, 'msg': "EthernetInterfaces resource not found"}
ethernetinterfaces_uri = data["EthernetInterfaces"]["@odata.id"]
response = self.get_request(self.root_uri + ethernetinterfaces_uri)
if response['ret'] is False:
return response
data = response['data']
uris = [a.get('@odata.id') for a in data.get('Members', []) if
a.get('@odata.id')]
# Find target EthernetInterface
target_ethernet_uri = None
target_ethernet_current_setting = None
if nic_addr == 'null':
            # When nic_addr is not specified, fall back to the host address in root_uri
            nic_addr = (self.root_uri).split('/')[-1]
            nic_addr = nic_addr.split(':')[0]  # strip the port if present
for uri in uris:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
if '"' + nic_addr + '"' in str(data) or "'" + nic_addr + "'" in str(data):
target_ethernet_uri = uri
target_ethernet_current_setting = data
break
if target_ethernet_uri is None:
return {'ret': False, 'msg': "No matched EthernetInterface found under Manager"}
# Convert input to payload and check validity
payload = {}
for property in nic_config.keys():
value = nic_config[property]
if property not in target_ethernet_current_setting:
return {'ret': False, 'msg': "Property %s in nic_config is invalid" % property}
if isinstance(value, dict):
if isinstance(target_ethernet_current_setting[property], dict):
payload[property] = value
elif isinstance(target_ethernet_current_setting[property], list):
payload[property] = list()
payload[property].append(value)
else:
return {'ret': False, 'msg': "Value of property %s in nic_config is invalid" % property}
else:
payload[property] = value
        # If no change is needed there is nothing to do; report invalid sub-properties as errors
need_change = False
for property in payload.keys():
set_value = payload[property]
cur_value = target_ethernet_current_setting[property]
# type is simple(not dict/list)
if not isinstance(set_value, dict) and not isinstance(set_value, list):
if set_value != cur_value:
need_change = True
# type is dict
if isinstance(set_value, dict):
for subprop in payload[property].keys():
if subprop not in target_ethernet_current_setting[property]:
return {'ret': False, 'msg': "Sub-property %s in nic_config is invalid" % subprop}
sub_set_value = payload[property][subprop]
sub_cur_value = target_ethernet_current_setting[property][subprop]
if sub_set_value != sub_cur_value:
need_change = True
# type is list
if isinstance(set_value, list):
for i in range(len(set_value)):
for subprop in payload[property][i].keys():
if subprop not in target_ethernet_current_setting[property][i]:
return {'ret': False, 'msg': "Sub-property %s in nic_config is invalid" % subprop}
sub_set_value = payload[property][i][subprop]
sub_cur_value = target_ethernet_current_setting[property][i][subprop]
if sub_set_value != sub_cur_value:
need_change = True
if not need_change:
return {'ret': True, 'changed': False, 'msg': "Manager NIC already set"}
response = self.patch_request(self.root_uri + target_ethernet_uri, payload)
if response['ret'] is False:
return response
return {'ret': True, 'changed': True, 'msg': "Modified Manager NIC"}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,925 |
Add Redfish commands to perform updates
|
##### SUMMARY
Add new commands to the Redfish remote_management modules to perform firmware update requests.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
redfish_config.py
redfish_utils.py
##### ADDITIONAL INFORMATION
Operators need a way to update firmware on Redfish-managed systems. The modules should be extended to perform SimpleUpdate actions with a specified firmware image. At a minimum the image can be pulled from a remote HTTP/TFTP/SCP/etc. server; if possible, it would also be useful to spin up a lightweight HTTP server to serve the image when no repository is already provisioned. A rough sketch of the underlying Redfish request is shown after the example playbook below.
```yaml
- name: Pull style update
redfish_config:
category: Update
command: PullUpdate
image: http://my.file.repo/someimage.bin
targets:
- /redfish/v1/Systems/1
- /redfish/v1/Systems/2
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
```
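A minimal sketch of the underlying Redfish request, for illustration only: the BMC address, credentials and image URI below are placeholders, and a robust client should discover the action target from the UpdateService resource rather than hard-coding it.
```python
# Illustrative sketch: issue a Redfish SimpleUpdate action directly with `requests`.
# ImageURI, TransferProtocol and Targets are standard SimpleUpdate parameters;
# everything host-specific here is a placeholder.
import requests
BASE = "https://bmc.example.com"   # placeholder BMC address
AUTH = ("admin", "password")       # placeholder credentials
payload = {
    "ImageURI": "http://my.file.repo/someimage.bin",
    "TransferProtocol": "HTTP",
    "Targets": ["/redfish/v1/Systems/1", "/redfish/v1/Systems/2"],
}
resp = requests.post(
    BASE + "/redfish/v1/UpdateService/Actions/UpdateService.SimpleUpdate",
    json=payload,
    auth=AUTH,
    verify=False,  # lab only; verify certificates in production
)
resp.raise_for_status()
# Most services return 202 plus a Location header pointing at the update Task.
print(resp.status_code, resp.headers.get("Location"))
```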
|
https://github.com/ansible/ansible/issues/63925
|
https://github.com/ansible/ansible/pull/65074
|
c64202a49563fefb35bd8de59bceb0b3b2fa5fa1
|
b5b23efdcc8fc9f0feef00f247013ac131525bf8
| 2019-10-24T19:30:11Z |
python
| 2020-02-17T21:19:47Z |
lib/ansible/modules/remote_management/redfish/redfish_command.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2017-2018 Dell EMC Inc.
# GNU General Public License v3.0+ (see LICENSE or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'metadata_version': '1.1'}
DOCUMENTATION = '''
---
module: redfish_command
version_added: "2.7"
short_description: Manages Out-Of-Band controllers using Redfish APIs
description:
- Builds Redfish URIs locally and sends them to remote OOB controllers to
perform an action.
  - Manages OOB controller operations, e.g. reboot, log management.
  - Manages OOB controller users, e.g. add, remove, update.
  - Manages system power, e.g. on, off, graceful and forced reboot.
options:
category:
required: true
description:
- Category to execute on OOB controller
type: str
command:
required: true
description:
- List of commands to execute on OOB controller
type: list
baseuri:
required: true
description:
- Base URI of OOB controller
type: str
username:
required: true
description:
- Username for authentication with OOB controller
type: str
version_added: "2.8"
password:
required: true
description:
- Password for authentication with OOB controller
type: str
id:
required: false
aliases: [ account_id ]
description:
- ID of account to delete/modify
type: str
version_added: "2.8"
new_username:
required: false
aliases: [ account_username ]
description:
- Username of account to add/delete/modify
type: str
version_added: "2.8"
new_password:
required: false
aliases: [ account_password ]
description:
- New password of account to add/modify
type: str
version_added: "2.8"
roleid:
required: false
aliases: [ account_roleid ]
description:
- Role of account to add/modify
type: str
version_added: "2.8"
bootdevice:
required: false
description:
- bootdevice when setting boot configuration
type: str
timeout:
description:
- Timeout in seconds for URL requests to OOB controller
default: 10
type: int
version_added: '2.8'
uefi_target:
required: false
description:
- UEFI target when bootdevice is "UefiTarget"
type: str
version_added: "2.9"
boot_next:
required: false
description:
- BootNext target when bootdevice is "UefiBootNext"
type: str
version_added: "2.9"
update_username:
required: false
aliases: [ account_updatename ]
description:
- new update user name for account_username
type: str
version_added: "2.10"
account_properties:
required: false
description:
- properties of account service to update
type: dict
version_added: "2.10"
resource_id:
required: false
description:
- The ID of the System, Manager or Chassis to modify
type: str
version_added: "2.10"
author: "Jose Delarosa (@jose-delarosa)"
'''
EXAMPLES = '''
- name: Restart system power gracefully
redfish_command:
category: Systems
command: PowerGracefulRestart
resource_id: 437XR1138R2
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Set one-time boot device to {{ bootdevice }}
redfish_command:
category: Systems
command: SetOneTimeBoot
resource_id: 437XR1138R2
bootdevice: "{{ bootdevice }}"
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Set one-time boot device to UefiTarget of "/0x31/0x33/0x01/0x01"
redfish_command:
category: Systems
command: SetOneTimeBoot
resource_id: 437XR1138R2
bootdevice: "UefiTarget"
uefi_target: "/0x31/0x33/0x01/0x01"
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Set one-time boot device to BootNext target of "Boot0001"
redfish_command:
category: Systems
command: SetOneTimeBoot
resource_id: 437XR1138R2
bootdevice: "UefiBootNext"
boot_next: "Boot0001"
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Set chassis indicator LED to blink
redfish_command:
category: Chassis
command: IndicatorLedBlink
resource_id: 1U
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Add user
redfish_command:
category: Accounts
command: AddUser
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
new_username: "{{ new_username }}"
new_password: "{{ new_password }}"
roleid: "{{ roleid }}"
- name: Add user using new option aliases
redfish_command:
category: Accounts
command: AddUser
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
account_username: "{{ account_username }}"
account_password: "{{ account_password }}"
account_roleid: "{{ account_roleid }}"
- name: Delete user
redfish_command:
category: Accounts
command: DeleteUser
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
account_username: "{{ account_username }}"
- name: Disable user
redfish_command:
category: Accounts
command: DisableUser
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
account_username: "{{ account_username }}"
- name: Enable user
redfish_command:
category: Accounts
command: EnableUser
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
account_username: "{{ account_username }}"
- name: Add and enable user
redfish_command:
category: Accounts
command: AddUser,EnableUser
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
new_username: "{{ new_username }}"
new_password: "{{ new_password }}"
roleid: "{{ roleid }}"
- name: Update user password
redfish_command:
category: Accounts
command: UpdateUserPassword
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
account_username: "{{ account_username }}"
account_password: "{{ account_password }}"
- name: Update user role
redfish_command:
category: Accounts
command: UpdateUserRole
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
account_username: "{{ account_username }}"
roleid: "{{ roleid }}"
- name: Update user name
redfish_command:
category: Accounts
command: UpdateUserName
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
account_username: "{{ account_username }}"
account_updatename: "{{ account_updatename }}"
- name: Update user name
redfish_command:
category: Accounts
command: UpdateUserName
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
account_username: "{{ account_username }}"
update_username: "{{ update_username }}"
- name: Update AccountService properties
redfish_command:
category: Accounts
command: UpdateAccountServiceProperties
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
account_properties:
AccountLockoutThreshold: 5
AccountLockoutDuration: 600
- name: Clear Manager Logs with a timeout of 20 seconds
redfish_command:
category: Manager
command: ClearLogs
resource_id: BMC
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
timeout: 20
- name: Clear Sessions
redfish_command:
category: Sessions
command: ClearSessions
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
'''
RETURN = '''
msg:
description: Message with action result or error description
returned: always
type: str
sample: "Action was successful"
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.redfish_utils import RedfishUtils
from ansible.module_utils._text import to_native
# More will be added as module features are expanded
CATEGORY_COMMANDS_ALL = {
"Systems": ["PowerOn", "PowerForceOff", "PowerForceRestart", "PowerGracefulRestart",
"PowerGracefulShutdown", "PowerReboot", "SetOneTimeBoot"],
"Chassis": ["IndicatorLedOn", "IndicatorLedOff", "IndicatorLedBlink"],
"Accounts": ["AddUser", "EnableUser", "DeleteUser", "DisableUser",
"UpdateUserRole", "UpdateUserPassword", "UpdateUserName",
"UpdateAccountServiceProperties"],
"Sessions": ["ClearSessions"],
"Manager": ["GracefulRestart", "ClearLogs"],
}
def main():
result = {}
module = AnsibleModule(
argument_spec=dict(
category=dict(required=True),
command=dict(required=True, type='list'),
baseuri=dict(required=True),
username=dict(required=True),
password=dict(required=True, no_log=True),
id=dict(aliases=["account_id"]),
new_username=dict(aliases=["account_username"]),
new_password=dict(aliases=["account_password"], no_log=True),
roleid=dict(aliases=["account_roleid"]),
update_username=dict(type='str', aliases=["account_updatename"]),
account_properties=dict(type='dict', default={}),
bootdevice=dict(),
timeout=dict(type='int', default=10),
uefi_target=dict(),
boot_next=dict(),
resource_id=dict()
),
supports_check_mode=False
)
category = module.params['category']
command_list = module.params['command']
# admin credentials used for authentication
creds = {'user': module.params['username'],
'pswd': module.params['password']}
# user to add/modify/delete
user = {'account_id': module.params['id'],
'account_username': module.params['new_username'],
'account_password': module.params['new_password'],
'account_roleid': module.params['roleid'],
'account_updatename': module.params['update_username'],
'account_properties': module.params['account_properties']}
# timeout
timeout = module.params['timeout']
# System, Manager or Chassis ID to modify
resource_id = module.params['resource_id']
# Build root URI
root_uri = "https://" + module.params['baseuri']
rf_utils = RedfishUtils(creds, root_uri, timeout, module,
resource_id=resource_id, data_modification=True)
# Check that Category is valid
if category not in CATEGORY_COMMANDS_ALL:
module.fail_json(msg=to_native("Invalid Category '%s'. Valid Categories = %s" % (category, CATEGORY_COMMANDS_ALL.keys())))
# Check that all commands are valid
for cmd in command_list:
# Fail if even one command given is invalid
if cmd not in CATEGORY_COMMANDS_ALL[category]:
module.fail_json(msg=to_native("Invalid Command '%s'. Valid Commands = %s" % (cmd, CATEGORY_COMMANDS_ALL[category])))
# Organize by Categories / Commands
if category == "Accounts":
ACCOUNTS_COMMANDS = {
"AddUser": rf_utils.add_user,
"EnableUser": rf_utils.enable_user,
"DeleteUser": rf_utils.delete_user,
"DisableUser": rf_utils.disable_user,
"UpdateUserRole": rf_utils.update_user_role,
"UpdateUserPassword": rf_utils.update_user_password,
"UpdateUserName": rf_utils.update_user_name,
"UpdateAccountServiceProperties": rf_utils.update_accountservice_properties
}
# execute only if we find an Account service resource
result = rf_utils._find_accountservice_resource()
if result['ret'] is False:
module.fail_json(msg=to_native(result['msg']))
for command in command_list:
result = ACCOUNTS_COMMANDS[command](user)
elif category == "Systems":
# execute only if we find a System resource
result = rf_utils._find_systems_resource()
if result['ret'] is False:
module.fail_json(msg=to_native(result['msg']))
for command in command_list:
if "Power" in command:
result = rf_utils.manage_system_power(command)
elif command == "SetOneTimeBoot":
result = rf_utils.set_one_time_boot_device(
module.params['bootdevice'],
module.params['uefi_target'],
module.params['boot_next'])
elif category == "Chassis":
result = rf_utils._find_chassis_resource()
if result['ret'] is False:
module.fail_json(msg=to_native(result['msg']))
led_commands = ["IndicatorLedOn", "IndicatorLedOff", "IndicatorLedBlink"]
# Check if more than one led_command is present
num_led_commands = sum([command in led_commands for command in command_list])
if num_led_commands > 1:
result = {'ret': False, 'msg': "Only one IndicatorLed command should be sent at a time."}
else:
for command in command_list:
if command in led_commands:
result = rf_utils.manage_indicator_led(command)
elif category == "Sessions":
# execute only if we find SessionService resources
resource = rf_utils._find_sessionservice_resource()
if resource['ret'] is False:
module.fail_json(msg=resource['msg'])
for command in command_list:
if command == "ClearSessions":
result = rf_utils.clear_sessions()
elif category == "Manager":
MANAGER_COMMANDS = {
"GracefulRestart": rf_utils.restart_manager_gracefully,
"ClearLogs": rf_utils.clear_logs
}
# execute only if we find a Manager service resource
result = rf_utils._find_managers_resource()
if result['ret'] is False:
module.fail_json(msg=to_native(result['msg']))
for command in command_list:
result = MANAGER_COMMANDS[command]()
# Return data back or fail with proper message
if result['ret'] is True:
del result['ret']
changed = result.get('changed', True)
module.exit_json(changed=changed, msg='Action was successful')
else:
module.fail_json(msg=to_native(result['msg']))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,829 |
win_domain_group_membership
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
win_domain_group_membership does not support cross-domain membership.
This module needs extra parameters so that a server can be specified for both the group and the members. At the moment the users and groups must be in the same domain for the module to work. We have a multi-domain forest and need the ability to add users from one domain to groups in another, for example:
$user = Get-ADUser fredb -Server fabricam.com
$group = Get-ADGroup "Box Permanent user group" -Server contoso.com
Add-ADGroupMember -Identity $Group -Members $User -Server contoso.com
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_domain_group_membership
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 12:19:05) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target Active Directory. Host Windows Server 2012 R2
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
-
hosts: all
ignore_errors: true
vars:
basic_user_group:
- "group1"
- "group2"
userId:
- "fredb"
tasks:
- name: Set fact basic_user_group
set_fact:
box_groups: "{{ basic_user_group }}"
when: user_profile == "basic"
- name: Add Username to GroupName
win_domain_group_membership:
name: "{{ basic_user_group }}"
domain_server: "contoso.com"
members:
- "{{ userId }}"
state: present
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The full traceback is:
Cannot find an object with identity: 'System.Object[]' under: 'DC=xxx,DC=xxxx,DC=net'.
At line:51 char:19
+ $members_before = Get-AdGroupMember -Identity $name @extra_args
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (System.Object[]:ADGroup) [Get-ADGroupMember], ADIdentityNotFoundException
+ FullyQualifiedErrorId : ActiveDirectoryCmdlet:Microsoft.ActiveDirectory.Management.ADIdentityNotFoundException,Microsoft.ActiveDirectory.Management.Commands.GetADGroupMember
ScriptStackTrace:
at <ScriptBlock>, <No file>: line 51
Microsoft.ActiveDirectory.Management.ADIdentityNotFoundException: Cannot find an object with identity: 'System.Object[]' under: 'DC=xxx,DC=xxxx,DC=net'.
at Microsoft.ActiveDirectory.Management.Commands.ADFactoryUtil.GetObjectFromIdentitySearcher(ADObjectSearcher searcher, ADEntity identityObj, String searchRoot, AttributeSetRequest attrs, CmdletSessionInfo cmdletSessionInfo, String[]& warningMessages)
at Microsoft.ActiveDirectory.Management.Commands.ADFactory`1.GetDirectoryObjectFromIdentity(T identityObj, String searchRoot, Boolean showDeleted)
at Microsoft.ActiveDirectory.Management.Commands.GetADGroupMember.GetADGroupMemberProcessCSRoutine()
at Microsoft.ActiveDirectory.Management.CmdletSubroutinePipeline.Invoke()
at Microsoft.ActiveDirectory.Management.Commands.ADCmdletBase`1.ProcessRecord()
fatal: [xxxxxxxxxxxxxxx.xxx.xxxxxxxxxxx.xxx]: FAILED! => {
"changed": false,
"msg": "Unhandled exception while executing module: Cannot find an object with identity: 'System.Object[]' under: 'DC=xxx,DC=xxxx,DC=net'."
}
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/59829
|
https://github.com/ansible/ansible/pull/65138
|
a60feeb3c1e5a6a6c96bee49bec98f0a5f5b7ca8
|
cbc38d2e5a8b6ca0550a38f5a6eccad59b0b12b7
| 2019-07-30T23:28:40Z |
python
| 2020-02-17T22:43:17Z |
changelogs/fragments/65138-Windows_Multidomain_support.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,829 |
win_domain_group_membership
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
win_domain_group_membership does not support cross-domain membership.
This module needs extra parameters so that a server can be specified for both the group and the members. At the moment the users and groups must be in the same domain for the module to work. We have a multi-domain forest and need the ability to add users from one domain to groups in another, for example:
$user = Get-ADUser fredb -Server fabricam.com
$group = Get-ADGroup "Box Permanent user group" -Server contoso.com
Add-ADGroupMember -Identity $Group -Members $User -Server contoso.com
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_domain_group_membership
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 12:19:05) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target Active Directory. Host Windows Server 2012 R2
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
-
hosts: all
ignore_errors: true
vars:
basic_user_group:
- "group1"
- "group2"
userId:
- "fredb"
tasks:
- name: Set fact basic_user_group
set_fact:
box_groups: "{{ basic_user_group }}"
when: user_profile == "basic"
- name: Add Username to GroupName
win_domain_group_membership:
name: "{{ basic_user_group }}"
domain_server: "contoso.com"
members:
- "{{ userId }}"
state: present
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The full traceback is:
Cannot find an object with identity: 'System.Object[]' under: 'DC=xxx,DC=xxxx,DC=net'.
At line:51 char:19
+ $members_before = Get-AdGroupMember -Identity $name @extra_args
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (System.Object[]:ADGroup) [Get-ADGroupMember], ADIdentityNotFoundException
+ FullyQualifiedErrorId : ActiveDirectoryCmdlet:Microsoft.ActiveDirectory.Management.ADIdentityNotFoundException,Microsoft.ActiveDirectory.Management.Commands.GetADGroupMember
ScriptStackTrace:
at <ScriptBlock>, <No file>: line 51
Microsoft.ActiveDirectory.Management.ADIdentityNotFoundException: Cannot find an object with identity: 'System.Object[]' under: 'DC=xxx,DC=xxxx,DC=net'.
at Microsoft.ActiveDirectory.Management.Commands.ADFactoryUtil.GetObjectFromIdentitySearcher(ADObjectSearcher searcher, ADEntity identityObj, String searchRoot, AttributeSetRequest attrs, CmdletSessionInfo cmdletSessionInfo, String[]& warningMessages)
at Microsoft.ActiveDirectory.Management.Commands.ADFactory`1.GetDirectoryObjectFromIdentity(T identityObj, String searchRoot, Boolean showDeleted)
at Microsoft.ActiveDirectory.Management.Commands.GetADGroupMember.GetADGroupMemberProcessCSRoutine()
at Microsoft.ActiveDirectory.Management.CmdletSubroutinePipeline.Invoke()
at Microsoft.ActiveDirectory.Management.Commands.ADCmdletBase`1.ProcessRecord()
fatal: [xxxxxxxxxxxxxxx.xxx.xxxxxxxxxxx.xxx]: FAILED! => {
"changed": false,
"msg": "Unhandled exception while executing module: Cannot find an object with identity: 'System.Object[]' under: 'DC=xxx,DC=xxxx,DC=net'."
}
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/59829
|
https://github.com/ansible/ansible/pull/65138
|
a60feeb3c1e5a6a6c96bee49bec98f0a5f5b7ca8
|
cbc38d2e5a8b6ca0550a38f5a6eccad59b0b12b7
| 2019-07-30T23:28:40Z |
python
| 2020-02-17T22:43:17Z |
lib/ansible/modules/windows/win_domain_group_membership.ps1
|
#!powershell
# Copyright: (c) 2019, Marius Rieder <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#Requires -Module Ansible.ModuleUtils.Legacy
try {
Import-Module ActiveDirectory
}
catch {
Fail-Json -obj @{} -message "win_domain_group_membership requires the ActiveDirectory PS module to be installed"
}
$params = Parse-Args $args -supports_check_mode $true
$check_mode = Get-AnsibleParam -obj $params -name "_ansible_check_mode" -type "bool" -default $false
$diff_mode = Get-AnsibleParam -obj $params -name "_ansible_diff" -type "bool" -default $false
# Module control parameters
$state = Get-AnsibleParam -obj $params -name "state" -type "str" -default "present" -validateset "present","absent","pure"
$domain_username = Get-AnsibleParam -obj $params -name "domain_username" -type "str"
$domain_password = Get-AnsibleParam -obj $params -name "domain_password" -type "str" -failifempty ($null -ne $domain_username)
$domain_server = Get-AnsibleParam -obj $params -name "domain_server" -type "str"
# Group Membership parameters
$name = Get-AnsibleParam -obj $params -name "name" -type "str" -failifempty $true
$members = Get-AnsibleParam -obj $params -name "members" -type "list" -failifempty $true
# Filter ADObjects by ObjectClass
$ad_object_class_filter = "(ObjectClass -eq 'user' -or ObjectClass -eq 'group' -or ObjectClass -eq 'computer' -or ObjectClass -eq 'msDS-ManagedServiceAccount')"
$extra_args = @{}
if ($null -ne $domain_username) {
$domain_password = ConvertTo-SecureString $domain_password -AsPlainText -Force
$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $domain_username, $domain_password
$extra_args.Credential = $credential
}
if ($null -ne $domain_server) {
$extra_args.Server = $domain_server
}
$result = @{
changed = $false
added = [System.Collections.Generic.List`1[String]]@()
removed = [System.Collections.Generic.List`1[String]]@()
}
if ($diff_mode) {
$result.diff = @{}
}
$members_before = Get-AdGroupMember -Identity $name @extra_args
$pure_members = [System.Collections.Generic.List`1[String]]@()
foreach ($member in $members) {
$group_member = Get-ADObject -Filter "SamAccountName -eq '$member' -and $ad_object_class_filter" -Properties objectSid, sAMAccountName @extra_args
if (!$group_member) {
Fail-Json -obj $result "Could not find domain user, group, service account or computer named $member"
}
if ($state -eq "pure") {
$pure_members.Add($group_member.objectSid)
}
$user_in_group = $false
foreach ($current_member in $members_before) {
if ($current_member.sid -eq $group_member.objectSid) {
$user_in_group = $true
break
}
}
if ($state -in @("present", "pure") -and !$user_in_group) {
Add-ADGroupMember -Identity $name -Members $group_member -WhatIf:$check_mode @extra_args
$result.added.Add($group_member.SamAccountName)
$result.changed = $true
} elseif ($state -eq "absent" -and $user_in_group) {
Remove-ADGroupMember -Identity $name -Members $group_member -WhatIf:$check_mode @extra_args -Confirm:$False
$result.removed.Add($group_member.SamAccountName)
$result.changed = $true
}
}
if ($state -eq "pure") {
# Perform removals for existing group members not defined in $members
$current_members = Get-AdGroupMember -Identity $name @extra_args
foreach ($current_member in $current_members) {
$user_to_remove = $true
foreach ($pure_member in $pure_members) {
if ($pure_member -eq $current_member.sid) {
$user_to_remove = $false
break
}
}
if ($user_to_remove) {
Remove-ADGroupMember -Identity $name -Members $current_member -WhatIf:$check_mode @extra_args -Confirm:$False
$result.removed.Add($current_member.SamAccountName)
$result.changed = $true
}
}
}
$final_members = Get-AdGroupMember -Identity $name @extra_args
if ($final_members) {
$result.members = [Array]$final_members.SamAccountName
} else {
$result.members = @()
}
if ($diff_mode -and $result.changed) {
$result.diff.before = $members_before.SamAccountName | Out-String
if (!$check_mode) {
$result.diff.after = [Array]$final_members.SamAccountName | Out-String
} else {
$after = [System.Collections.Generic.List`1[String]]$result.members
$result.removed | ForEach-Object { $after.Remove($_) > $null }
$after.AddRange($result.added)
$result.diff.after = $after | Out-String
}
}
Exit-Json -obj $result
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,829 |
win_domain_group_membership
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
win_domain_group_membership does not support cross-domain membership.
This module needs extra parameters so that a server can be specified for both the group and the members. At the moment the users and groups must be in the same domain for the module to work. We have a multi-domain forest and need the ability to add users from one domain to groups in another, for example:
$user = Get-ADUser fredb -Server fabricam.com
$group = Get-ADGroup "Box Permanent user group" -Server contoso.com
Add-ADGroupMember -Identity $Group -Members $User -Server contoso.com
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_domain_group_membership
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 12:19:05) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target Active Directory. Host Windows Server 2012 R2
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
-
hosts: all
ignore_errors: true
vars:
basic_user_group:
- "group1"
- "group2"
userId:
- "fredb"
tasks:
- name: Set fact basic_user_group
set_fact:
box_groups: "{{ basic_user_group }}"
when: user_profile == "basic"
- name: Add Username to GroupName
win_domain_group_membership:
name: "{{ basic_user_group }}"
domain_server: "contoso.com"
members:
- "{{ userId }}"
state: present
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The full traceback is:
Cannot find an object with identity: 'System.Object[]' under: 'DC=xxx,DC=xxxx,DC=net'.
At line:51 char:19
+ $members_before = Get-AdGroupMember -Identity $name @extra_args
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (System.Object[]:ADGroup) [Get-ADGroupMember], ADIdentityNotFoundException
+ FullyQualifiedErrorId : ActiveDirectoryCmdlet:Microsoft.ActiveDirectory.Management.ADIdentityNotFoundException,Microsoft.ActiveDirectory.Management.Commands.GetADGroupMember
ScriptStackTrace:
at <ScriptBlock>, <No file>: line 51
Microsoft.ActiveDirectory.Management.ADIdentityNotFoundException: Cannot find an object with identity: 'System.Object[]' under: 'DC=xxx,DC=xxxx,DC=net'.
at Microsoft.ActiveDirectory.Management.Commands.ADFactoryUtil.GetObjectFromIdentitySearcher(ADObjectSearcher searcher, ADEntity identityObj, String searchRoot, AttributeSetRequest attrs, CmdletSessionInfo cmdletSessionInfo, String[]& warningMessages)
at Microsoft.ActiveDirectory.Management.Commands.ADFactory`1.GetDirectoryObjectFromIdentity(T identityObj, String searchRoot, Boolean showDeleted)
at Microsoft.ActiveDirectory.Management.Commands.GetADGroupMember.GetADGroupMemberProcessCSRoutine()
at Microsoft.ActiveDirectory.Management.CmdletSubroutinePipeline.Invoke()
at Microsoft.ActiveDirectory.Management.Commands.ADCmdletBase`1.ProcessRecord()
fatal: [xxxxxxxxxxxxxxx.xxx.xxxxxxxxxxx.xxx]: FAILED! => {
"changed": false,
"msg": "Unhandled exception while executing module: Cannot find an object with identity: 'System.Object[]' under: 'DC=xxx,DC=xxxx,DC=net'."
}
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/59829
|
https://github.com/ansible/ansible/pull/65138
|
a60feeb3c1e5a6a6c96bee49bec98f0a5f5b7ca8
|
cbc38d2e5a8b6ca0550a38f5a6eccad59b0b12b7
| 2019-07-30T23:28:40Z |
python
| 2020-02-17T22:43:17Z |
lib/ansible/modules/windows/win_domain_group_membership.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Andrew Saraceni <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: win_domain_group_membership
version_added: "2.8"
short_description: Manage Windows domain group membership
description:
- Allows the addition and removal of domain users
and domain groups from/to a domain group.
options:
name:
description:
- Name of the domain group to manage membership on.
type: str
required: yes
members:
description:
- A list of members to ensure are present/absent from the group.
- The given names must be a SamAccountName of a user, group, service account, or computer.
- For computers, you must add "$" after the name; for example, to add "Mycomputer" to a group, use "Mycomputer$" as the member.
type: list
required: yes
state:
description:
- Desired state of the members in the group.
- When C(state) is C(pure), only the members specified will exist,
and all other existing members not specified are removed.
type: str
choices: [ absent, present, pure ]
default: present
domain_username:
description:
- The username to use when interacting with AD.
- If this is not set then the user Ansible used to log in with will be
used instead when using CredSSP or Kerberos with credential delegation.
type: str
domain_password:
description:
- The password for I(username).
type: str
domain_server:
description:
- Specifies the Active Directory Domain Services instance to connect to.
- Can be in the form of an FQDN or NetBIOS name.
- If not specified then the value is based on the domain of the computer
running PowerShell.
type: str
notes:
- This must be run on a host that has the ActiveDirectory powershell module installed.
seealso:
- module: win_domain_user
- module: win_domain_group
author:
- Marius Rieder (@jiuka)
'''
EXAMPLES = r'''
- name: Add a domain user/group to a domain group
win_domain_group_membership:
name: Foo
members:
- Bar
state: present
- name: Remove a domain user/group from a domain group
win_domain_group_membership:
name: Foo
members:
- Bar
state: absent
- name: Ensure only a domain user/group exists in a domain group
win_domain_group_membership:
name: Foo
members:
- Bar
state: pure
- name: Add a computer to a domain group
win_domain_group_membership:
name: Foo
members:
- DESKTOP$
state: present
'''
RETURN = r'''
name:
description: The name of the target domain group.
returned: always
type: str
sample: Domain-Admins
added:
description: A list of members added when C(state) is C(present) or
C(pure); this is empty if no members are added.
returned: success and C(state) is C(present) or C(pure)
type: list
sample: ["UserName", "GroupName"]
removed:
description: A list of members removed when C(state) is C(absent) or
C(pure); this is empty if no members are removed.
returned: success and C(state) is C(absent) or C(pure)
type: list
sample: ["UserName", "GroupName"]
members:
description: A list of all domain group members at completion; this is empty
if the group contains no members.
returned: success
type: list
sample: ["UserName", "GroupName"]
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,213 |
openssl_privatekey breaks in FIPS mode
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When attempting to create an openssl key on a system in FIPS mode, the module crashes with the following error:
> ValueError: error:060800A3:digital envelope routines:EVP_DigestInit_ex:disabled for fips
The module attempts to fingerprint the key with every listed hash algorithm, even though some of them are forbidden in FIPS mode; MD5 in particular is not available.
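A minimal sketch of the failure mode and one possible mitigation, assuming the fingerprint is built by hashing the public key with a fixed list of algorithms (hypothetical code, not the module's actual implementation):
```python
# Hypothetical sketch: skip any digest the crypto backend refuses to
# construct instead of letting the ValueError propagate and crash the module.
import hashlib

ALGORITHMS = ('md5', 'sha1', 'sha256', 'sha384', 'sha512')  # assumed list

def fingerprint(pubkey_bytes):
    result = {}
    for algo in ALGORITHMS:
        try:
            digest = hashlib.new(algo, pubkey_bytes)
        except ValueError:
            # On a FIPS-enabled host, constructing an MD5 digest raises
            # ValueError ("disabled for fips"); skipping it avoids the crash.
            continue
        hexed = digest.hexdigest()
        result[algo] = ':'.join(hexed[i:i + 2] for i in range(0, len(hexed), 2))
    return result

print(fingerprint(b'dummy public key bytes'))
```
Whether to silently skip unavailable digests or restrict fingerprinting to FIPS-approved ones is a design decision for the module; the sketch only shows that the crash is avoidable.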
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.4
config file = /home/chris.kiick/services-performance-lab-master/ansible.cfg
configured module search path = [u'/home/chris.kiick/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.16 (default, Dec 12 2019, 23:58:22) [GCC 7.3.1 20180712 (Red Hat 7.3.1-6)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_RETRIES(/home/chris.kiick/services-performance-lab-master/ansible.cf
DEFAULT_FORKS(/home/chris.kiick/services-performance-lab-master/ansible.cfg) = 1
DEFAULT_GATHERING(/home/chris.kiick/services-performance-lab-master/ansible.cfg)
DEFAULT_HOST_LIST(/home/chris.kiick/services-performance-lab-master/ansible.cfg)
DISPLAY_SKIPPED_HOSTS(/home/chris.kiick/services-performance-lab-master/ansible.
HOST_KEY_CHECKING(/home/chris.kiick/services-performance-lab-master/ansible.cfg)
RETRY_FILES_ENABLED(/home/chris.kiick/services-performance-lab-master/ansible.cf)
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Target system was RHEL 7 with FIPS mode enabled.
Playbook:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: host-with-FIPS-enabled
name: create SSL cert key
tasks:
- openssl_privatekey:
backup: true
path: "/tmp/foo"
state: present
become: true
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
changed: true
failed: false
File /tmp/foo exists and contains a private key in PEM format.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The module crashes with a FIPS-specific error.
<!--- Paste verbatim command output between quotes -->
```paste below
> ansible-playbook -vvv bug.yml
ansible-playbook 2.9.4
config file = /home/chris.kiick/services-performance-lab-master/ansible.cfg
configured module search path = [u'/home/chris.kiick/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.16 (default, Dec 12 2019, 23:58:22) [GCC 7.3.1 20180712 (Red Hat 7.3.1-6)]
Using /home/chris.kiick/services-performance-lab-master/ansible.cfg as config file
host_list declined parsing /home/chris.kiick/services-performance-lab-master/inventory/dynamic.py as it did not pass its verify_file() method
Parsed /home/chris.kiick/services-performance-lab-master/inventory/dynamic.py inventory source with script plugin
PLAYBOOK: bug.yml **************************************************************
1 plays in bug.yml
PLAY [create SSL cert key] *****************************************************
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'echo ~ec2-user && sleep 0'"'"''
<100.64.12.7> (0, '/home/ec2-user\n', "Warning: Permanently added '100.64.12.7' (ECDSA) to the list of known hosts.\r\nAuthorized uses only. All activity may be monitored and reported.\n")
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/ec2-user/.ansible/tmp/ansible-tmp-1581102287.88-194846480658070 `" && echo ansible-tmp-1581102287.88-194846480658070="` echo /home/ec2-user/.ansible/tmp/ansible-tmp-1581102287.88-194846480658070 `" ) && sleep 0'"'"''
<100.64.12.7> (0, 'ansible-tmp-1581102287.88-194846480658070=/home/ec2-user/.ansible/tmp/ansible-tmp-1581102287.88-194846480658070\n', '')
<prod-task1> Attempting python interpreter discovery
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'echo PLATFORM; uname; echo FOUND; command -v '"'"'"'"'"'"'"'"'/usr/bin/python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.5'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/libexec/platform-python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python3'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python'"'"'"'"'"'"'"'"'; echo ENDFOUND && sleep 0'"'"''
<100.64.12.7> (0, 'PLATFORM\nLinux\nFOUND\n/usr/bin/python\n/usr/bin/python3.6\n/usr/bin/python2.7\n/usr/libexec/platform-python\n/usr/bin/python3\n/usr/bin/python\nENDFOUND\n', '')
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<100.64.12.7> (0, '{"osrelease_content": "NAME=\\"Red Hat Enterprise Linux Server\\"\\nVERSION=\\"7.7 (Maipo)\\"\\nID=\\"rhel\\"\\nID_LIKE=\\"fedora\\"\\nVARIANT=\\"Server\\"\\nVARIANT_ID=\\"server\\"\\nVERSION_ID=\\"7.7\\"\\nPRETTY_NAME=\\"Red Hat Enterprise Linux Server 7.7 (Maipo)\\"\\nANSI_COLOR=\\"0;31\\"\\nCPE_NAME=\\"cpe:/o:redhat:enterprise_linux:7.7:GA:server\\"\\nHOME_URL=\\"https://www.redhat.com/\\"\\nBUG_REPORT_URL=\\"https://bugzilla.redhat.com/\\"\\n\\nREDHAT_BUGZILLA_PRODUCT=\\"Red Hat Enterprise Linux 7\\"\\nREDHAT_BUGZILLA_PRODUCT_VERSION=7.7\\nREDHAT_SUPPORT_PRODUCT=\\"Red Hat Enterprise Linux\\"\\nREDHAT_SUPPORT_PRODUCT_VERSION=\\"7.7\\"\\n", "platform_dist_result": ["redhat", "7.7", "Maipo"]}\n', '')
Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py
<100.64.12.7> PUT /home/chris.kiick/.ansible/tmp/ansible-local-16723ohhUk2/tmpzbPAGm TO /home/ec2-user/.ansible/tmp/ansible-tmp-1581102287.88-194846480658070/AnsiballZ_setup.py
<100.64.12.7> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f '[100.64.12.7]'
<100.64.12.7> (0, 'sftp> put /home/chris.kiick/.ansible/tmp/ansible-local-16723ohhUk2/tmpzbPAGm /home/ec2-user/.ansible/tmp/ansible-tmp-1581102287.88-194846480658070/AnsiballZ_setup.py\n', '')
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1581102287.88-194846480658070/ /home/ec2-user/.ansible/tmp/ansible-tmp-1581102287.88-194846480658070/AnsiballZ_setup.py && sleep 0'"'"''
<100.64.12.7> (0, '', '')
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f -tt 100.64.12.7 '/bin/sh -c '"'"'/usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1581102287.88-194846480658070/AnsiballZ_setup.py && sleep 0'"'"''
<100.64.12.7> (0, '\r\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"ansible_fibre_channel_wwn": [], "module_setup": true, "ansible_distribution_version": "7.7", "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LANG": "en_US.UTF-8", "TERM": "xterm-256color", "SHELL": "/bin/bash", "XDG_RUNTIME_DIR": "/run/user/1000", "SHLVL": "2", "SSH_TTY": "/dev/pts/0", "_": "/usr/bin/python", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "PWD": "/home/ec2-user", "SELINUX_LEVEL_REQUESTED": "", "PATH": "/usr/local/bin:/usr/bin", "SELINUX_ROLE_REQUESTED": "", "SELINUX_USE_CURRENT_RANGE": "", "LOGNAME": "ec2-user", "USER": "ec2-user", "HOME": "/home/ec2-user", "MAIL": "/var/mail/ec2-user", "LS_COLORS": "rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:", "XDG_SESSION_ID": "486", "SSH_CLIENT": "100.64.4.47 39200 22", "SSH_CONNECTION": "100.64.4.47 39200 100.64.12.7 22"}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_default_ipv4": {"macaddress": "06:9c:05:33:da:3a", "network": "100.64.12.0", "mtu": 9001, "broadcast": "100.64.12.15", "alias": "eth0", "netmask": "255.255.255.240", "address": "100.64.12.7", "interface": "eth0", "type": "ether", "gateway": "100.64.12.1"}, "ansible_swapfree_mb": 0, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "en_US.UTF-8", "BOOT_IMAGE": "/boot/vmlinuz-3.10.0-1062.9.1.el7.x86_64", "rd.blacklist": "nouveau", "net.ifnames": "0", "fips": "1", "crashkernel": "auto", "console": "tty0", "ro": true, "root": "UUID=1698b607-b2a7-455f-b2ee-ed7f6e17ed9f"}, "ansible_selinux": {"status": "enabled", "policyvers": 31, "type": "targeted", "mode": "enforcing", "config_mode": "enforcing"}, "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "NA", "ansible_pkg_mgr": "yum", "ansible_distribution": 
"RedHat", "ansible_iscsi_iqn": "", "ansible_all_ipv6_addresses": ["fe80::447:87ff:fe7a:b5e", "fe80::49c:5ff:fe33:da3a"], "ansible_uptime_seconds": 691103, "ansible_kernel": "3.10.0-1062.9.1.el7.x86_64", "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": true, "ansible_hostnqn": "", "ansible_user_shell": "/bin/bash", "ansible_product_serial": "NA", "ansible_form_factor": "Other", "ansible_distribution_file_parsed": true, "ansible_fips": true, "ansible_user_id": "ec2-user", "ansible_selinux_python_present": true, "ansible_kernel_version": "#1 SMP Mon Dec 2 08:31:54 EST 2019", "ansible_local": {}, "ansible_processor_vcpus": 2, "ansible_processor": ["0", "AuthenticAMD", "AMD EPYC 7571", "1", "AuthenticAMD", "AMD EPYC 7571"], "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBbDFjCVSkQFuLO6i5YjJ6zoHvgcPeJb1MhEZHtiL3st1ylLxKUWzWY6TmAWtDA26RnM4iPdpcZtRy+x/Ff20eo=", "ansible_user_gid": 1000, "ansible_system_vendor": "Amazon EC2", "ansible_swaptotal_mb": 0, "ansible_distribution_major_version": "7", "ansible_real_group_id": 1000, "ansible_lsb": {}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDFkd7ihqpFXEkX0prdeX/9AXeNHxeMwJvC9dp4ZpVqZC9qYV6spo7xPxNgSaHu0JN+NsI30UE4HL3gBTJyMKVDLwpvVQ9VfGU0zzeBAV8rOGhom9qjpP1OIy2n5FMy9J5tNyQ9WLfYXQH+jS5/JtrSdax8c1E7IFJRrZmJXV2hsIFbBKqgWN4a8xdSADGgg3C24upJbtb+VFa8RWoLsbglPYUTS7P+Zwf5cmozEFQK+zy2idD51D0Rsyk+QTujlGpsOqmE1h/tETi/ezq4JccVE+5010BIQ3uqh2vGT3ABDcWabKav9yT9LDotWzvVWmvlSil1HC1NfyRbYFnq0sLp", "ansible_user_gecos": "Cloud User", "ansible_processor_threads_per_core": 2, "ansible_eth0": {"macaddress": "06:9c:05:33:da:3a", "features": {"tx_checksum_ipv4": "on", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off [fixed]", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "off [fixed]", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "off", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", 
"rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "on", "tx_gre_segmentation": "off [fixed]"}, "pciid": "0000:00:05.0", "module": "ena", "mtu": 9001, "device": "eth0", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "100.64.12.15", "netmask": "255.255.255.240", "network": "100.64.12.0", "address": "100.64.12.7"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::49c:5ff:fe33:da3a"}], "active": true, "type": "ether", "hw_timestamp_filters": []}, "ansible_eth1": {"macaddress": "06:47:87:7a:0b:5e", "features": {"tx_checksum_ipv4": "on", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off [fixed]", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "off [fixed]", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "off", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "on", "tx_gre_segmentation": "off [fixed]"}, "pciid": "0000:00:06.0", "module": "ena", "mtu": 9001, "device": "eth1", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "10.0.0.31", "netmask": "255.255.255.224", "network": "10.0.0.0", "address": "10.0.0.30"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::447:87ff:fe7a:b5e"}], "active": true, "type": "ether", "hw_timestamp_filters": []}, "ansible_product_name": "m5a.large", "ansible_all_ipv4_addresses": ["10.0.0.30", "100.64.12.7"], "ansible_python_version": "2.7.5", "ansible_product_version": "NA", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 7569, "used": 6081, "free": 1488}, "swap": {"cached": 0, "total": 0, "free": 0, "used": 0}, "nocache": {"used": 4044, "free": 3525}}, "ansible_user_dir": "/home/ec2-user", "gather_subset": ["all"], "ansible_real_user_id": 1000, "ansible_virtualization_role": "guest", "ansible_dns": {"nameservers": ["100.64.0.5", "100.64.0.45"], "search": ["fed.sailpoint.loc"]}, "ansible_effective_group_id": 1000, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on 
[fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 7569, "ansible_device_links": {"masters": {}, "labels": {}, "ids": {"nvme0n1p1": ["nvme-Amazon_Elastic_Block_Store_vol0c7628dcf19c306f1-part1", "nvme-nvme.1d0f-766f6c3063373632386463663139633330366631-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part1"], "nvme0n1p2": ["nvme-Amazon_Elastic_Block_Store_vol0c7628dcf19c306f1-part2", "nvme-nvme.1d0f-766f6c3063373632386463663139633330366631-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part2"], "nvme0n1": ["nvme-Amazon_Elastic_Block_Store_vol0c7628dcf19c306f1", "nvme-nvme.1d0f-766f6c3063373632386463663139633330366631-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001"]}, "uuids": {"nvme0n1p2": ["1698b607-b2a7-455f-b2ee-ed7f6e17ed9f"]}}, "ansible_apparmor": {"status": "disabled"}, "ansible_proc_cmdline": {"LANG": "en_US.UTF-8", "BOOT_IMAGE": "/boot/vmlinuz-3.10.0-1062.9.1.el7.x86_64", "rd.blacklist": "nouveau", "net.ifnames": "0", "fips": "1", "crashkernel": "auto", "console": ["ttyS0,115200n8", "tty0"], "ro": true, "root": "UUID=1698b607-b2a7-455f-b2ee-ed7f6e17ed9f"}, "ansible_memfree_mb": 1488, "ansible_processor_count": 1, "ansible_hostname": "prod-task0", "ansible_interfaces": ["lo", "eth1", "eth0"], "ansible_machine_id": "ec2e9527ba63e63e1f4f148a6b533b0b", "ansible_fqdn": "prod-task0.fed.sailpoint.loc", "ansible_mounts": [{"block_used": 1003765, "uuid": "1698b607-b2a7-455f-b2ee-ed7f6e17ed9f", "size_total": 214735761408, "block_total": 52425723, "mount": "/", "block_available": 51421958, "size_available": 210624339968, "fstype": "xfs", "inode_total": 104856560, "options": 
"rw,seclabel,relatime,attr2,inode64,noquota", "device": "/dev/nvme0n1p2", "inode_used": 59777, "block_size": 4096, "inode_available": 104796783}], "ansible_nodename": "prod-task0.fed.sailpoint.loc", "ansible_distribution_file_search_string": "Red Hat", "ansible_domain": "fed.sailpoint.loc", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "kvm", "ansible_processor_cores": 1, "ansible_bios_version": "1.0", "ansible_date_time": {"weekday_number": "5", "iso8601_basic_short": "20200207T190449", "tz": "UTC", "weeknumber": "05", "hour": "19", "year": "2020", "minute": "04", "tz_offset": "+0000", "month": "02", "epoch": "1581102289", "iso8601_micro": "2020-02-07T19:04:49.229373Z", "weekday": "Friday", "time": "19:04:49", "date": "2020-02-07", "iso8601": "2020-02-07T19:04:49Z", "day": "07", "iso8601_basic": "20200207T190449229284", "second": "49"}, "ansible_distribution_release": "Maipo", "ansible_os_family": "RedHat", "ansible_effective_user_id": 1000, "ansible_system": "Linux", "ansible_devices": {"nvme0n1": {"scheduler_mode": "none", "rotational": "0", "vendor": null, "sectors": "419430400", "links": {"masters": [], "labels": [], "ids": ["nvme-Amazon_Elastic_Block_Store_vol0c7628dcf19c306f1", "nvme-nvme.1d0f-766f6c3063373632386463663139633330366631-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Amazon Elastic Block Store", "partitions": {"nvme0n1p1": {"sectorsize": 512, "uuid": null, "links": {"masters": [], "labels": [], "ids": ["nvme-Amazon_Elastic_Block_Store_vol0c7628dcf19c306f1-part1", "nvme-nvme.1d0f-766f6c3063373632386463663139633330366631-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part1"], "uuids": []}, "sectors": "2048", "start": "2048", "holders": [], "size": "1.00 MB"}, "nvme0n1p2": {"sectorsize": 512, "uuid": "1698b607-b2a7-455f-b2ee-ed7f6e17ed9f", "links": {"masters": [], "labels": [], "ids": ["nvme-Amazon_Elastic_Block_Store_vol0c7628dcf19c306f1-part2", "nvme-nvme.1d0f-766f6c3063373632386463663139633330366631-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part2"], "uuids": ["1698b607-b2a7-455f-b2ee-ed7f6e17ed9f"]}, "sectors": "419426270", "start": "4096", "holders": [], "size": "200.00 GB"}}, "holders": [], "size": "200.00 GB"}}, "ansible_user_uid": 1000, "ansible_bios_date": "10/16/2017", "ansible_system_capabilities": [""]}}\r\n', 'Shared connection to 100.64.12.7 closed.\r\n')
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'rm -f -r /home/ec2-user/.ansible/tmp/ansible-tmp-1581102287.88-194846480658070/ > /dev/null 2>&1 && sleep 0'"'"''
<100.64.12.7> (0, '', '')
TASK [Gathering Facts] *********************************************************
task path: /home/chris.kiick/services-performance-lab-master/bug.yml:4
ok: [prod-task1]
META: ran handlers
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'echo ~ec2-user && sleep 0'"'"''
<100.64.12.7> (0, '/home/ec2-user\n', '')
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836 `" && echo ansible-tmp-1581102289.37-275304619929836="` echo /home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836 `" ) && sleep 0'"'"''
<100.64.12.7> (0, 'ansible-tmp-1581102289.37-275304619929836=/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836\n', '')
Using module file /usr/lib/python2.7/site-packages/ansible/modules/crypto/openssl_privatekey.py
<100.64.12.7> PUT /home/chris.kiick/.ansible/tmp/ansible-local-16723ohhUk2/tmpugGucZ TO /home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py
<100.64.12.7> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f '[100.64.12.7]'
<100.64.12.7> (0, 'sftp> put /home/chris.kiick/.ansible/tmp/ansible-local-16723ohhUk2/tmpugGucZ /home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py\n', '')
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/ /home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py && sleep 0'"'"''
<100.64.12.7> (0, '', '')
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f -tt 100.64.12.7 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-qegpjmyptpfgtqkxxglsjhfnewsepfpj ; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<100.64.12.7> (1, 'Traceback (most recent call last):\r\n File "/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py", line 102, in <module>\r\n _ansiballz_main()\r\n File "/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File "/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py", line 40, in invoke_module\r\n runpy.run_module(mod_name=\'ansible.modules.crypto.openssl_privatekey\', init_globals=None, run_name=\'__main__\', alter_sys=True)\r\n File "/usr/lib64/python2.7/runpy.py", line 176, in run_module\r\n fname, loader, pkg_name)\r\n File "/usr/lib64/python2.7/runpy.py", line 82, in _run_module_code\r\n mod_name, mod_fname, mod_loader, pkg_name)\r\n File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code\r\n exec code in run_globals\r\n File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py", line 692, in <module>\r\n File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py", line 676, in main\r\n File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py", line 303, in generate\r\n File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py", line 545, in _get_fingerprint\r\n File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/module_utils/crypto.py", line 157, in get_fingerprint_of_bytes\r\nValueError: error:060800A3:digital envelope routines:EVP_DigestInit_ex:disabled for fips\r\n', 'Shared connection to 100.64.12.7 closed.\r\n')
<100.64.12.7> Failed to connect to the host via ssh: Shared connection to 100.64.12.7 closed.
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'rm -f -r /home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/ > /dev/null 2>&1 && sleep 0'"'"''
<100.64.12.7> (0, '', '')
TASK [openssl_privatekey] ******************************************************
task path: /home/chris.kiick/services-performance-lab-master/bug.yml:7
The full traceback is:
Traceback (most recent call last):
File "/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py", line 102, in <module>
_ansiballz_main()
File "/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.crypto.openssl_privatekey', init_globals=None, run_name='__main__', alter_sys=True)
File "/usr/lib64/python2.7/runpy.py", line 176, in run_module
fname, loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 82, in _run_module_code
mod_name, mod_fname, mod_loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py", line 692, in <module>
File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py", line 676, in main
File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py", line 303, in generate
File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py", line 545, in _get_fingerprint
File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/module_utils/crypto.py", line 157, in get_fingerprint_of_bytes
ValueError: error:060800A3:digital envelope routines:EVP_DigestInit_ex:disabled for fips
fatal: [prod-task1]: FAILED! => {
"changed": false,
"module_stderr": "Shared connection to 100.64.12.7 closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py\", line 102, in <module>\r\n _ansiballz_main()\r\n File \"/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py\", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py\", line 40, in invoke_module\r\n runpy.run_module(mod_name='ansible.modules.crypto.openssl_privatekey', init_globals=None, run_name='__main__', alter_sys=True)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 176, in run_module\r\n fname, loader, pkg_name)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 82, in _run_module_code\r\n mod_name, mod_fname, mod_loader, pkg_name)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\r\n exec code in run_globals\r\n File \"/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py\", line 692, in <module>\r\n File \"/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py\", line 676, in main\r\n File \"/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py\", line 303, in generate\r\n File \"/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py\", line 545, in _get_fingerprint\r\n File \"/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/module_utils/crypto.py\", line 157, in get_fingerprint_of_bytes\r\nValueError: error:060800A3:digital envelope routines:EVP_DigestInit_ex:disabled for fips\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
PLAY RECAP *********************************************************************
prod-task1 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/67213
|
https://github.com/ansible/ansible/pull/67515
|
9f41d0e9147590159645469e5a7e5a15a9999945
|
ca57871954fd3a0d79321d1c9b4abf1c51249b8d
| 2020-02-07T19:08:47Z |
python
| 2020-02-18T08:43:22Z |
changelogs/fragments/67515-openssl-fingerprint-fips.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 67,213 |
openssl_privatekey breaks in FIPS mode
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When attempting to create an OpenSSL private key on a system in FIPS mode, the module crashes with the error:
> ValueError: error:060800A3:digital envelope routines:EVP_DigestInit_ex:disabled for fips
The module attempts to fingerprint the key with every available hashlib algorithm, even though some of them are forbidden by FIPS; in particular, md5 does not work.
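A minimal sketch of the failure mode and one way around it, written against Python 3's standard `hashlib` only (this is an illustration of the FIPS behaviour described above, not the module's actual code or fix): under FIPS, constructing a disabled digest such as `md5` raises `ValueError`, so a fingerprint loop can skip those algorithms instead of crashing.
```python
import hashlib

def fingerprint_bytes(source):
    """Return hex fingerprints of `source` for every digest the runtime allows."""
    fingerprints = {}
    for algo in sorted(hashlib.algorithms_guaranteed):
        try:
            digest = hashlib.new(algo, source)
        except ValueError:
            # e.g. md5 on a FIPS-enabled host: skip it instead of crashing
            continue
        try:
            hexdigest = digest.hexdigest()
        except TypeError:
            # shake_* digests require an explicit output length
            hexdigest = digest.hexdigest(32)
        fingerprints[algo] = ':'.join(hexdigest[i:i + 2] for i in range(0, len(hexdigest), 2))
    return fingerprints

print(fingerprint_bytes(b'example public key bytes'))
```
The same sketch runs unchanged on non-FIPS hosts, where the `ValueError` branch is simply never taken.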
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
openssl_privatekey
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.4
config file = /home/chris.kiick/services-performance-lab-master/ansible.cfg
configured module search path = [u'/home/chris.kiick/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.16 (default, Dec 12 2019, 23:58:22) [GCC 7.3.1 20180712 (Red Hat 7.3.1-6)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_SSH_RETRIES(/home/chris.kiick/services-performance-lab-master/ansible.cf
DEFAULT_FORKS(/home/chris.kiick/services-performance-lab-master/ansible.cfg) = 1
DEFAULT_GATHERING(/home/chris.kiick/services-performance-lab-master/ansible.cfg)
DEFAULT_HOST_LIST(/home/chris.kiick/services-performance-lab-master/ansible.cfg)
DISPLAY_SKIPPED_HOSTS(/home/chris.kiick/services-performance-lab-master/ansible.
HOST_KEY_CHECKING(/home/chris.kiick/services-performance-lab-master/ansible.cfg)
RETRY_FILES_ENABLED(/home/chris.kiick/services-performance-lab-master/ansible.cf)
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Target system was RHEL 7 with FIPS mode enabled.
Playbook:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: host-with-FIPS-enabled
name: create SSL cert key
tasks:
- openssl_privatekey:
backup: true
path: "/tmp/foo"
state: present
become: true
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
changed: true
failed: false
File /tmp/foo exists and contains a private key in PEM format.
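A hypothetical way to verify that outcome (not part of the playbook), using the same `cryptography` library the module depends on; the path `/tmp/foo` comes from the playbook above, and 4096 bits is assumed to be the module's default RSA key size:
```python
# Hypothetical post-run check; /tmp/foo is the path from the playbook above.
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.serialization import load_pem_private_key

with open('/tmp/foo', 'rb') as f:
    key = load_pem_private_key(f.read(), password=None, backend=default_backend())

print(key.key_size)  # expected 4096 if the module's default RSA size was used
```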
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The module crashes with a FIPS-specific error.
<!--- Paste verbatim command output between quotes -->
```paste below
> ansible-playbook -vvv bug.yml
ansible-playbook 2.9.4
config file = /home/chris.kiick/services-performance-lab-master/ansible.cfg
configured module search path = [u'/home/chris.kiick/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.16 (default, Dec 12 2019, 23:58:22) [GCC 7.3.1 20180712 (Red Hat 7.3.1-6)]
Using /home/chris.kiick/services-performance-lab-master/ansible.cfg as config file
host_list declined parsing /home/chris.kiick/services-performance-lab-master/inventory/dynamic.py as it did not pass its verify_file() method
Parsed /home/chris.kiick/services-performance-lab-master/inventory/dynamic.py inventory source with script plugin
PLAYBOOK: bug.yml **************************************************************
1 plays in bug.yml
PLAY [create SSL cert key] *****************************************************
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'echo ~ec2-user && sleep 0'"'"''
<100.64.12.7> (0, '/home/ec2-user\n', "Warning: Permanently added '100.64.12.7' (ECDSA) to the list of known hosts.\r\nAuthorized uses only. All activity may be monitored and reported.\n")
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/ec2-user/.ansible/tmp/ansible-tmp-1581102287.88-194846480658070 `" && echo ansible-tmp-1581102287.88-194846480658070="` echo /home/ec2-user/.ansible/tmp/ansible-tmp-1581102287.88-194846480658070 `" ) && sleep 0'"'"''
<100.64.12.7> (0, 'ansible-tmp-1581102287.88-194846480658070=/home/ec2-user/.ansible/tmp/ansible-tmp-1581102287.88-194846480658070\n', '')
<prod-task1> Attempting python interpreter discovery
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'echo PLATFORM; uname; echo FOUND; command -v '"'"'"'"'"'"'"'"'/usr/bin/python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.5'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/libexec/platform-python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python3'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python'"'"'"'"'"'"'"'"'; echo ENDFOUND && sleep 0'"'"''
<100.64.12.7> (0, 'PLATFORM\nLinux\nFOUND\n/usr/bin/python\n/usr/bin/python3.6\n/usr/bin/python2.7\n/usr/libexec/platform-python\n/usr/bin/python3\n/usr/bin/python\nENDFOUND\n', '')
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<100.64.12.7> (0, '{"osrelease_content": "NAME=\\"Red Hat Enterprise Linux Server\\"\\nVERSION=\\"7.7 (Maipo)\\"\\nID=\\"rhel\\"\\nID_LIKE=\\"fedora\\"\\nVARIANT=\\"Server\\"\\nVARIANT_ID=\\"server\\"\\nVERSION_ID=\\"7.7\\"\\nPRETTY_NAME=\\"Red Hat Enterprise Linux Server 7.7 (Maipo)\\"\\nANSI_COLOR=\\"0;31\\"\\nCPE_NAME=\\"cpe:/o:redhat:enterprise_linux:7.7:GA:server\\"\\nHOME_URL=\\"https://www.redhat.com/\\"\\nBUG_REPORT_URL=\\"https://bugzilla.redhat.com/\\"\\n\\nREDHAT_BUGZILLA_PRODUCT=\\"Red Hat Enterprise Linux 7\\"\\nREDHAT_BUGZILLA_PRODUCT_VERSION=7.7\\nREDHAT_SUPPORT_PRODUCT=\\"Red Hat Enterprise Linux\\"\\nREDHAT_SUPPORT_PRODUCT_VERSION=\\"7.7\\"\\n", "platform_dist_result": ["redhat", "7.7", "Maipo"]}\n', '')
Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py
<100.64.12.7> PUT /home/chris.kiick/.ansible/tmp/ansible-local-16723ohhUk2/tmpzbPAGm TO /home/ec2-user/.ansible/tmp/ansible-tmp-1581102287.88-194846480658070/AnsiballZ_setup.py
<100.64.12.7> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f '[100.64.12.7]'
<100.64.12.7> (0, 'sftp> put /home/chris.kiick/.ansible/tmp/ansible-local-16723ohhUk2/tmpzbPAGm /home/ec2-user/.ansible/tmp/ansible-tmp-1581102287.88-194846480658070/AnsiballZ_setup.py\n', '')
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1581102287.88-194846480658070/ /home/ec2-user/.ansible/tmp/ansible-tmp-1581102287.88-194846480658070/AnsiballZ_setup.py && sleep 0'"'"''
<100.64.12.7> (0, '', '')
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f -tt 100.64.12.7 '/bin/sh -c '"'"'/usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1581102287.88-194846480658070/AnsiballZ_setup.py && sleep 0'"'"''
<100.64.12.7> (0, '\r\n{"invocation": {"module_args": {"filter": "*", "gather_subset": ["all"], "fact_path": "/etc/ansible/facts.d", "gather_timeout": 10}}, "ansible_facts": {"ansible_fibre_channel_wwn": [], "module_setup": true, "ansible_distribution_version": "7.7", "ansible_distribution_file_variety": "RedHat", "ansible_env": {"LANG": "en_US.UTF-8", "TERM": "xterm-256color", "SHELL": "/bin/bash", "XDG_RUNTIME_DIR": "/run/user/1000", "SHLVL": "2", "SSH_TTY": "/dev/pts/0", "_": "/usr/bin/python", "LESSOPEN": "||/usr/bin/lesspipe.sh %s", "PWD": "/home/ec2-user", "SELINUX_LEVEL_REQUESTED": "", "PATH": "/usr/local/bin:/usr/bin", "SELINUX_ROLE_REQUESTED": "", "SELINUX_USE_CURRENT_RANGE": "", "LOGNAME": "ec2-user", "USER": "ec2-user", "HOME": "/home/ec2-user", "MAIL": "/var/mail/ec2-user", "LS_COLORS": "rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:", "XDG_SESSION_ID": "486", "SSH_CLIENT": "100.64.4.47 39200 22", "SSH_CONNECTION": "100.64.4.47 39200 100.64.12.7 22"}, "ansible_userspace_bits": "64", "ansible_architecture": "x86_64", "ansible_default_ipv4": {"macaddress": "06:9c:05:33:da:3a", "network": "100.64.12.0", "mtu": 9001, "broadcast": "100.64.12.15", "alias": "eth0", "netmask": "255.255.255.240", "address": "100.64.12.7", "interface": "eth0", "type": "ether", "gateway": "100.64.12.1"}, "ansible_swapfree_mb": 0, "ansible_default_ipv6": {}, "ansible_cmdline": {"LANG": "en_US.UTF-8", "BOOT_IMAGE": "/boot/vmlinuz-3.10.0-1062.9.1.el7.x86_64", "rd.blacklist": "nouveau", "net.ifnames": "0", "fips": "1", "crashkernel": "auto", "console": "tty0", "ro": true, "root": "UUID=1698b607-b2a7-455f-b2ee-ed7f6e17ed9f"}, "ansible_selinux": {"status": "enabled", "policyvers": 31, "type": "targeted", "mode": "enforcing", "config_mode": "enforcing"}, "ansible_userspace_architecture": "x86_64", "ansible_product_uuid": "NA", "ansible_pkg_mgr": "yum", "ansible_distribution": 
"RedHat", "ansible_iscsi_iqn": "", "ansible_all_ipv6_addresses": ["fe80::447:87ff:fe7a:b5e", "fe80::49c:5ff:fe33:da3a"], "ansible_uptime_seconds": 691103, "ansible_kernel": "3.10.0-1062.9.1.el7.x86_64", "ansible_system_capabilities_enforced": "True", "ansible_python": {"executable": "/usr/bin/python", "version": {"micro": 5, "major": 2, "releaselevel": "final", "serial": 0, "minor": 7}, "type": "CPython", "has_sslcontext": true, "version_info": [2, 7, 5, "final", 0]}, "ansible_is_chroot": true, "ansible_hostnqn": "", "ansible_user_shell": "/bin/bash", "ansible_product_serial": "NA", "ansible_form_factor": "Other", "ansible_distribution_file_parsed": true, "ansible_fips": true, "ansible_user_id": "ec2-user", "ansible_selinux_python_present": true, "ansible_kernel_version": "#1 SMP Mon Dec 2 08:31:54 EST 2019", "ansible_local": {}, "ansible_processor_vcpus": 2, "ansible_processor": ["0", "AuthenticAMD", "AMD EPYC 7571", "1", "AuthenticAMD", "AMD EPYC 7571"], "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBbDFjCVSkQFuLO6i5YjJ6zoHvgcPeJb1MhEZHtiL3st1ylLxKUWzWY6TmAWtDA26RnM4iPdpcZtRy+x/Ff20eo=", "ansible_user_gid": 1000, "ansible_system_vendor": "Amazon EC2", "ansible_swaptotal_mb": 0, "ansible_distribution_major_version": "7", "ansible_real_group_id": 1000, "ansible_lsb": {}, "ansible_machine": "x86_64", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDFkd7ihqpFXEkX0prdeX/9AXeNHxeMwJvC9dp4ZpVqZC9qYV6spo7xPxNgSaHu0JN+NsI30UE4HL3gBTJyMKVDLwpvVQ9VfGU0zzeBAV8rOGhom9qjpP1OIy2n5FMy9J5tNyQ9WLfYXQH+jS5/JtrSdax8c1E7IFJRrZmJXV2hsIFbBKqgWN4a8xdSADGgg3C24upJbtb+VFa8RWoLsbglPYUTS7P+Zwf5cmozEFQK+zy2idD51D0Rsyk+QTujlGpsOqmE1h/tETi/ezq4JccVE+5010BIQ3uqh2vGT3ABDcWabKav9yT9LDotWzvVWmvlSil1HC1NfyRbYFnq0sLp", "ansible_user_gecos": "Cloud User", "ansible_processor_threads_per_core": 2, "ansible_eth0": {"macaddress": "06:9c:05:33:da:3a", "features": {"tx_checksum_ipv4": "on", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off [fixed]", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "off [fixed]", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "off", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", 
"rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "on", "tx_gre_segmentation": "off [fixed]"}, "pciid": "0000:00:05.0", "module": "ena", "mtu": 9001, "device": "eth0", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "100.64.12.15", "netmask": "255.255.255.240", "network": "100.64.12.0", "address": "100.64.12.7"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::49c:5ff:fe33:da3a"}], "active": true, "type": "ether", "hw_timestamp_filters": []}, "ansible_eth1": {"macaddress": "06:47:87:7a:0b:5e", "features": {"tx_checksum_ipv4": "on", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "off [fixed]", "rx_all": "off [fixed]", "highdma": "on", "rx_fcs": "off [fixed]", "tx_lockless": "off [fixed]", "tx_tcp_ecn_segmentation": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off [fixed]", "tx_checksumming": "on", "vlan_challenged": "off [fixed]", "loopback": "off [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on", "rx_checksumming": "on", "tx_tcp_segmentation": "off [fixed]", "netns_local": "off [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "off", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "on", "tx_gre_segmentation": "off [fixed]"}, "pciid": "0000:00:06.0", "module": "ena", "mtu": 9001, "device": "eth1", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "10.0.0.31", "netmask": "255.255.255.224", "network": "10.0.0.0", "address": "10.0.0.30"}, "ipv6": [{"scope": "link", "prefix": "64", "address": "fe80::447:87ff:fe7a:b5e"}], "active": true, "type": "ether", "hw_timestamp_filters": []}, "ansible_product_name": "m5a.large", "ansible_all_ipv4_addresses": ["10.0.0.30", "100.64.12.7"], "ansible_python_version": "2.7.5", "ansible_product_version": "NA", "ansible_service_mgr": "systemd", "ansible_memory_mb": {"real": {"total": 7569, "used": 6081, "free": 1488}, "swap": {"cached": 0, "total": 0, "free": 0, "used": 0}, "nocache": {"used": 4044, "free": 3525}}, "ansible_user_dir": "/home/ec2-user", "gather_subset": ["all"], "ansible_real_user_id": 1000, "ansible_virtualization_role": "guest", "ansible_dns": {"nameservers": ["100.64.0.5", "100.64.0.45"], "search": ["fed.sailpoint.loc"]}, "ansible_effective_group_id": 1000, "ansible_lo": {"features": {"tx_checksum_ipv4": "off [fixed]", "generic_receive_offload": "on", "tx_checksum_ipv6": "off [fixed]", "tx_scatter_gather_fraglist": "on 
[fixed]", "rx_all": "off [fixed]", "highdma": "on [fixed]", "rx_fcs": "off [fixed]", "tx_lockless": "on [fixed]", "tx_tcp_ecn_segmentation": "on", "rx_udp_tunnel_port_offload": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "on", "tx_checksumming": "on", "vlan_challenged": "on [fixed]", "loopback": "on [fixed]", "fcoe_mtu": "off [fixed]", "scatter_gather": "on", "tx_checksum_sctp": "on [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "tx_gso_partial": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "large_receive_offload": "off [fixed]", "tx_scatter_gather": "on [fixed]", "rx_checksumming": "on [fixed]", "tx_tcp_segmentation": "on", "netns_local": "on [fixed]", "busy_poll": "off [fixed]", "generic_segmentation_offload": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tcp_segmentation_offload": "on", "l2_fwd_offload": "off [fixed]", "rx_vlan_offload": "off [fixed]", "ntuple_filters": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_nocache_copy": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "udp_fragmentation_offload": "on", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "hw_tc_offload": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_fcoe_segmentation": "off [fixed]", "rx_vlan_filter": "off [fixed]", "tx_vlan_offload": "off [fixed]", "receive_hashing": "off [fixed]", "tx_gre_segmentation": "off [fixed]"}, "hw_timestamp_filters": [], "mtu": 65536, "device": "lo", "promisc": false, "timestamping": ["rx_software", "software"], "ipv4": {"broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0", "address": "127.0.0.1"}, "ipv6": [{"scope": "host", "prefix": "128", "address": "::1"}], "active": true, "type": "loopback"}, "ansible_memtotal_mb": 7569, "ansible_device_links": {"masters": {}, "labels": {}, "ids": {"nvme0n1p1": ["nvme-Amazon_Elastic_Block_Store_vol0c7628dcf19c306f1-part1", "nvme-nvme.1d0f-766f6c3063373632386463663139633330366631-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part1"], "nvme0n1p2": ["nvme-Amazon_Elastic_Block_Store_vol0c7628dcf19c306f1-part2", "nvme-nvme.1d0f-766f6c3063373632386463663139633330366631-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part2"], "nvme0n1": ["nvme-Amazon_Elastic_Block_Store_vol0c7628dcf19c306f1", "nvme-nvme.1d0f-766f6c3063373632386463663139633330366631-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001"]}, "uuids": {"nvme0n1p2": ["1698b607-b2a7-455f-b2ee-ed7f6e17ed9f"]}}, "ansible_apparmor": {"status": "disabled"}, "ansible_proc_cmdline": {"LANG": "en_US.UTF-8", "BOOT_IMAGE": "/boot/vmlinuz-3.10.0-1062.9.1.el7.x86_64", "rd.blacklist": "nouveau", "net.ifnames": "0", "fips": "1", "crashkernel": "auto", "console": ["ttyS0,115200n8", "tty0"], "ro": true, "root": "UUID=1698b607-b2a7-455f-b2ee-ed7f6e17ed9f"}, "ansible_memfree_mb": 1488, "ansible_processor_count": 1, "ansible_hostname": "prod-task0", "ansible_interfaces": ["lo", "eth1", "eth0"], "ansible_machine_id": "ec2e9527ba63e63e1f4f148a6b533b0b", "ansible_fqdn": "prod-task0.fed.sailpoint.loc", "ansible_mounts": [{"block_used": 1003765, "uuid": "1698b607-b2a7-455f-b2ee-ed7f6e17ed9f", "size_total": 214735761408, "block_total": 52425723, "mount": "/", "block_available": 51421958, "size_available": 210624339968, "fstype": "xfs", "inode_total": 104856560, "options": 
"rw,seclabel,relatime,attr2,inode64,noquota", "device": "/dev/nvme0n1p2", "inode_used": 59777, "block_size": 4096, "inode_available": 104796783}], "ansible_nodename": "prod-task0.fed.sailpoint.loc", "ansible_distribution_file_search_string": "Red Hat", "ansible_domain": "fed.sailpoint.loc", "ansible_distribution_file_path": "/etc/redhat-release", "ansible_virtualization_type": "kvm", "ansible_processor_cores": 1, "ansible_bios_version": "1.0", "ansible_date_time": {"weekday_number": "5", "iso8601_basic_short": "20200207T190449", "tz": "UTC", "weeknumber": "05", "hour": "19", "year": "2020", "minute": "04", "tz_offset": "+0000", "month": "02", "epoch": "1581102289", "iso8601_micro": "2020-02-07T19:04:49.229373Z", "weekday": "Friday", "time": "19:04:49", "date": "2020-02-07", "iso8601": "2020-02-07T19:04:49Z", "day": "07", "iso8601_basic": "20200207T190449229284", "second": "49"}, "ansible_distribution_release": "Maipo", "ansible_os_family": "RedHat", "ansible_effective_user_id": 1000, "ansible_system": "Linux", "ansible_devices": {"nvme0n1": {"scheduler_mode": "none", "rotational": "0", "vendor": null, "sectors": "419430400", "links": {"masters": [], "labels": [], "ids": ["nvme-Amazon_Elastic_Block_Store_vol0c7628dcf19c306f1", "nvme-nvme.1d0f-766f6c3063373632386463663139633330366631-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001"], "uuids": []}, "sas_device_handle": null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", "removable": "0", "support_discard": "0", "model": "Amazon Elastic Block Store", "partitions": {"nvme0n1p1": {"sectorsize": 512, "uuid": null, "links": {"masters": [], "labels": [], "ids": ["nvme-Amazon_Elastic_Block_Store_vol0c7628dcf19c306f1-part1", "nvme-nvme.1d0f-766f6c3063373632386463663139633330366631-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part1"], "uuids": []}, "sectors": "2048", "start": "2048", "holders": [], "size": "1.00 MB"}, "nvme0n1p2": {"sectorsize": 512, "uuid": "1698b607-b2a7-455f-b2ee-ed7f6e17ed9f", "links": {"masters": [], "labels": [], "ids": ["nvme-Amazon_Elastic_Block_Store_vol0c7628dcf19c306f1-part2", "nvme-nvme.1d0f-766f6c3063373632386463663139633330366631-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001-part2"], "uuids": ["1698b607-b2a7-455f-b2ee-ed7f6e17ed9f"]}, "sectors": "419426270", "start": "4096", "holders": [], "size": "200.00 GB"}}, "holders": [], "size": "200.00 GB"}}, "ansible_user_uid": 1000, "ansible_bios_date": "10/16/2017", "ansible_system_capabilities": [""]}}\r\n', 'Shared connection to 100.64.12.7 closed.\r\n')
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'rm -f -r /home/ec2-user/.ansible/tmp/ansible-tmp-1581102287.88-194846480658070/ > /dev/null 2>&1 && sleep 0'"'"''
<100.64.12.7> (0, '', '')
TASK [Gathering Facts] *********************************************************
task path: /home/chris.kiick/services-performance-lab-master/bug.yml:4
ok: [prod-task1]
META: ran handlers
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'echo ~ec2-user && sleep 0'"'"''
<100.64.12.7> (0, '/home/ec2-user\n', '')
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836 `" && echo ansible-tmp-1581102289.37-275304619929836="` echo /home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836 `" ) && sleep 0'"'"''
<100.64.12.7> (0, 'ansible-tmp-1581102289.37-275304619929836=/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836\n', '')
Using module file /usr/lib/python2.7/site-packages/ansible/modules/crypto/openssl_privatekey.py
<100.64.12.7> PUT /home/chris.kiick/.ansible/tmp/ansible-local-16723ohhUk2/tmpugGucZ TO /home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py
<100.64.12.7> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f '[100.64.12.7]'
<100.64.12.7> (0, 'sftp> put /home/chris.kiick/.ansible/tmp/ansible-local-16723ohhUk2/tmpugGucZ /home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py\n', '')
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/ /home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py && sleep 0'"'"''
<100.64.12.7> (0, '', '')
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f -tt 100.64.12.7 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-qegpjmyptpfgtqkxxglsjhfnewsepfpj ; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<100.64.12.7> (1, 'Traceback (most recent call last):\r\n File "/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py", line 102, in <module>\r\n _ansiballz_main()\r\n File "/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File "/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py", line 40, in invoke_module\r\n runpy.run_module(mod_name=\'ansible.modules.crypto.openssl_privatekey\', init_globals=None, run_name=\'__main__\', alter_sys=True)\r\n File "/usr/lib64/python2.7/runpy.py", line 176, in run_module\r\n fname, loader, pkg_name)\r\n File "/usr/lib64/python2.7/runpy.py", line 82, in _run_module_code\r\n mod_name, mod_fname, mod_loader, pkg_name)\r\n File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code\r\n exec code in run_globals\r\n File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py", line 692, in <module>\r\n File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py", line 676, in main\r\n File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py", line 303, in generate\r\n File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py", line 545, in _get_fingerprint\r\n File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/module_utils/crypto.py", line 157, in get_fingerprint_of_bytes\r\nValueError: error:060800A3:digital envelope routines:EVP_DigestInit_ex:disabled for fips\r\n', 'Shared connection to 100.64.12.7 closed.\r\n')
<100.64.12.7> Failed to connect to the host via ssh: Shared connection to 100.64.12.7 closed.
<100.64.12.7> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<100.64.12.7> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="iiq-key.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ec2-user"' -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ControlPath=/home/chris.kiick/.ansible/cp/3f67b8c86f 100.64.12.7 '/bin/sh -c '"'"'rm -f -r /home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/ > /dev/null 2>&1 && sleep 0'"'"''
<100.64.12.7> (0, '', '')
TASK [openssl_privatekey] ******************************************************
task path: /home/chris.kiick/services-performance-lab-master/bug.yml:7
The full traceback is:
Traceback (most recent call last):
File "/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py", line 102, in <module>
_ansiballz_main()
File "/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.crypto.openssl_privatekey', init_globals=None, run_name='__main__', alter_sys=True)
File "/usr/lib64/python2.7/runpy.py", line 176, in run_module
fname, loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 82, in _run_module_code
mod_name, mod_fname, mod_loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py", line 692, in <module>
File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py", line 676, in main
File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py", line 303, in generate
File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py", line 545, in _get_fingerprint
File "/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/module_utils/crypto.py", line 157, in get_fingerprint_of_bytes
ValueError: error:060800A3:digital envelope routines:EVP_DigestInit_ex:disabled for fips
fatal: [prod-task1]: FAILED! => {
"changed": false,
"module_stderr": "Shared connection to 100.64.12.7 closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py\", line 102, in <module>\r\n _ansiballz_main()\r\n File \"/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py\", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/ec2-user/.ansible/tmp/ansible-tmp-1581102289.37-275304619929836/AnsiballZ_openssl_privatekey.py\", line 40, in invoke_module\r\n runpy.run_module(mod_name='ansible.modules.crypto.openssl_privatekey', init_globals=None, run_name='__main__', alter_sys=True)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 176, in run_module\r\n fname, loader, pkg_name)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 82, in _run_module_code\r\n mod_name, mod_fname, mod_loader, pkg_name)\r\n File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\r\n exec code in run_globals\r\n File \"/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py\", line 692, in <module>\r\n File \"/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py\", line 676, in main\r\n File \"/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py\", line 303, in generate\r\n File \"/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/modules/crypto/openssl_privatekey.py\", line 545, in _get_fingerprint\r\n File \"/tmp/ansible_openssl_privatekey_payload_bq5DCF/ansible_openssl_privatekey_payload.zip/ansible/module_utils/crypto.py\", line 157, in get_fingerprint_of_bytes\r\nValueError: error:060800A3:digital envelope routines:EVP_DigestInit_ex:disabled for fips\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
PLAY RECAP *********************************************************************
prod-task1 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/67213
|
https://github.com/ansible/ansible/pull/67515
|
9f41d0e9147590159645469e5a7e5a15a9999945
|
ca57871954fd3a0d79321d1c9b4abf1c51249b8d
| 2020-02-07T19:08:47Z |
python
| 2020-02-18T08:43:22Z |
lib/ansible/module_utils/crypto.py
|
# -*- coding: utf-8 -*-
#
# (c) 2016, Yanis Guenane <[email protected]>
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
# ----------------------------------------------------------------------
# A clearly marked portion of this file is licensed under the BSD license
# Copyright (c) 2015, 2016 Paul Kehrer (@reaperhulk)
# Copyright (c) 2017 Fraser Tweedale (@frasertweedale)
# For more details, search for the function _obj2txt().
# ---------------------------------------------------------------------
# A clearly marked portion of this file is extracted from a project that
# is licensed under the Apache License 2.0
# Copyright (c) the OpenSSL contributors
# For more details, search for the function _OID_MAP.
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import sys
from distutils.version import LooseVersion
try:
import OpenSSL
from OpenSSL import crypto
except ImportError:
# An error will be raised in the calling class to let the end
# user know that OpenSSL couldn't be found.
pass
try:
import cryptography
from cryptography import x509
from cryptography.hazmat.backends import default_backend as cryptography_backend
from cryptography.hazmat.primitives.serialization import load_pem_private_key
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives import serialization
import ipaddress
# Older versions of cryptography (< 2.1) do not have __hash__ functions for
# general name objects (DNSName, IPAddress, ...), while providing overloaded
# equality and string representation operations. This makes it impossible to
# use them in hash-based data structures such as set or dict. Since we are
# actually doing that in openssl_certificate, and potentially in other code,
# we need to monkey-patch __hash__ for these classes to make sure our code
# works fine.
if LooseVersion(cryptography.__version__) < LooseVersion('2.1'):
# A very simply hash function which relies on the representation
# of an object to be implemented. This is the case since at least
# cryptography 1.0, see
# https://github.com/pyca/cryptography/commit/7a9abce4bff36c05d26d8d2680303a6f64a0e84f
def simple_hash(self):
return hash(repr(self))
# The hash functions for the following types were added for cryptography 2.1:
# https://github.com/pyca/cryptography/commit/fbfc36da2a4769045f2373b004ddf0aff906cf38
x509.DNSName.__hash__ = simple_hash
x509.DirectoryName.__hash__ = simple_hash
x509.GeneralName.__hash__ = simple_hash
x509.IPAddress.__hash__ = simple_hash
x509.OtherName.__hash__ = simple_hash
x509.RegisteredID.__hash__ = simple_hash
if LooseVersion(cryptography.__version__) < LooseVersion('1.2'):
# The hash functions for the following types were added for cryptography 1.2:
# https://github.com/pyca/cryptography/commit/b642deed88a8696e5f01ce6855ccf89985fc35d0
# https://github.com/pyca/cryptography/commit/d1b5681f6db2bde7a14625538bd7907b08dfb486
x509.RFC822Name.__hash__ = simple_hash
x509.UniformResourceIdentifier.__hash__ = simple_hash
# Test whether we have support for X25519, X448, Ed25519 and/or Ed448
try:
import cryptography.hazmat.primitives.asymmetric.x25519
CRYPTOGRAPHY_HAS_X25519 = True
try:
cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey.private_bytes
CRYPTOGRAPHY_HAS_X25519_FULL = True
except AttributeError:
CRYPTOGRAPHY_HAS_X25519_FULL = False
except ImportError:
CRYPTOGRAPHY_HAS_X25519 = False
CRYPTOGRAPHY_HAS_X25519_FULL = False
try:
import cryptography.hazmat.primitives.asymmetric.x448
CRYPTOGRAPHY_HAS_X448 = True
except ImportError:
CRYPTOGRAPHY_HAS_X448 = False
try:
import cryptography.hazmat.primitives.asymmetric.ed25519
CRYPTOGRAPHY_HAS_ED25519 = True
except ImportError:
CRYPTOGRAPHY_HAS_ED25519 = False
try:
import cryptography.hazmat.primitives.asymmetric.ed448
CRYPTOGRAPHY_HAS_ED448 = True
except ImportError:
CRYPTOGRAPHY_HAS_ED448 = False
except ImportError:
# Error handled in the calling module.
CRYPTOGRAPHY_HAS_X25519 = False
CRYPTOGRAPHY_HAS_X25519_FULL = False
CRYPTOGRAPHY_HAS_X448 = False
CRYPTOGRAPHY_HAS_ED25519 = False
CRYPTOGRAPHY_HAS_ED448 = False
import abc
import base64
import binascii
import datetime
import errno
import hashlib
import os
import re
import tempfile
from ansible.module_utils import six
from ansible.module_utils._text import to_bytes, to_text
class OpenSSLObjectError(Exception):
pass
class OpenSSLBadPassphraseError(OpenSSLObjectError):
pass
def get_fingerprint_of_bytes(source):
"""Generate the fingerprint of the given bytes."""
fingerprint = {}
try:
algorithms = hashlib.algorithms
except AttributeError:
try:
algorithms = hashlib.algorithms_guaranteed
except AttributeError:
return None
for algo in algorithms:
f = getattr(hashlib, algo)
h = f(source)
try:
# Certain hash functions have a hexdigest() which expects a length parameter
pubkey_digest = h.hexdigest()
except TypeError:
pubkey_digest = h.hexdigest(32)
fingerprint[algo] = ':'.join(pubkey_digest[i:i + 2] for i in range(0, len(pubkey_digest), 2))
return fingerprint
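# Illustrative sketch (comment only): the returned mapping is keyed by hash algorithm
# name, and every value is a colon-separated lower-case hex digest, e.g.
#
#     get_fingerprint_of_bytes(b'some DER data')
#     # -> {'md5': '0b:5d:...', 'sha256': '9f:86:...', ...}
#
# The digest values above are placeholders, not real hashes of that input.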
def get_fingerprint(path, passphrase=None, content=None, backend='pyopenssl'):
"""Generate the fingerprint of the public key. """
privatekey = load_privatekey(path, passphrase=passphrase, content=content, check_passphrase=False, backend=backend)
if backend == 'pyopenssl':
try:
publickey = crypto.dump_publickey(crypto.FILETYPE_ASN1, privatekey)
except AttributeError:
            # PyOpenSSL < 16.0 does not have crypto.dump_publickey(); fall back
            # to the low-level API.
try:
bio = crypto._new_mem_buf()
rc = crypto._lib.i2d_PUBKEY_bio(bio, privatekey._pkey)
if rc != 1:
crypto._raise_current_error()
publickey = crypto._bio_to_string(bio)
except AttributeError:
                # If even the low-level API is unavailable, return None instead
                # of raising an error; the caller simply gets no fingerprint.
return None
elif backend == 'cryptography':
publickey = privatekey.public_key().public_bytes(
serialization.Encoding.DER,
serialization.PublicFormat.SubjectPublicKeyInfo
)
return get_fingerprint_of_bytes(publickey)
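# Illustrative usage sketch (comment only; path and backend choice are made up):
#
#     fingerprints = get_fingerprint('/etc/ssl/private/example.pem', backend='cryptography')
#     # fingerprints['sha256'] would then hold the public key's SHA-256 fingerprint.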
def load_file_if_exists(path, module=None, ignore_errors=False):
try:
with open(path, 'rb') as f:
return f.read()
except EnvironmentError as exc:
if exc.errno == errno.ENOENT:
return None
if ignore_errors:
return None
if module is None:
raise
module.fail_json('Error while loading {0} - {1}'.format(path, str(exc)))
except Exception as exc:
if ignore_errors:
return None
if module is None:
raise
module.fail_json('Error while loading {0} - {1}'.format(path, str(exc)))
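# Illustrative usage sketch (comment only; the path is made up): a module can read an
# optional PEM file and fall back to None when the file does not exist, while still
# failing on any other I/O error:
#
#     existing_pem = load_file_if_exists('/etc/ssl/example.pem', module=module)
#     if existing_pem is None:
#         ...  # nothing on disk yet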
def load_privatekey(path, passphrase=None, check_passphrase=True, content=None, backend='pyopenssl'):
"""Load the specified OpenSSL private key.
    The key data can also be passed directly via the content parameter;
    in that case, this function will not read the key from disk.
"""
try:
if content is None:
with open(path, 'rb') as b_priv_key_fh:
priv_key_detail = b_priv_key_fh.read()
else:
priv_key_detail = content
if backend == 'pyopenssl':
            # First try: attempt to load with the real passphrase (or the empty
            # string if none was given). This works if the passphrase is correct
            # or if the key is not password-protected.
try:
result = crypto.load_privatekey(crypto.FILETYPE_PEM,
priv_key_detail,
to_bytes(passphrase or ''))
except crypto.Error as e:
if len(e.args) > 0 and len(e.args[0]) > 0:
if e.args[0][0][2] in ('bad decrypt', 'bad password read'):
# This happens in case we have the wrong passphrase.
if passphrase is not None:
raise OpenSSLBadPassphraseError('Wrong passphrase provided for private key!')
else:
raise OpenSSLBadPassphraseError('No passphrase provided, but private key is password-protected!')
raise OpenSSLObjectError('Error while deserializing key: {0}'.format(e))
if check_passphrase:
                # Next, verify that the key really is passphrase-protected: try
                # to load it again with a deliberately wrong passphrase. If that
                # succeeds, the key is not protected at all (not even by the
                # empty string that may have been tried above).
try:
crypto.load_privatekey(crypto.FILETYPE_PEM,
priv_key_detail,
to_bytes('y' if passphrase == 'x' else 'x'))
if passphrase is not None:
# Since we can load the key without an exception, the
# key isn't password-protected
raise OpenSSLBadPassphraseError('Passphrase provided, but private key is not password-protected!')
except crypto.Error as e:
if passphrase is None and len(e.args) > 0 and len(e.args[0]) > 0:
if e.args[0][0][2] in ('bad decrypt', 'bad password read'):
# The key is obviously protected by the empty string.
# Don't do this at home (if it's possible at all)...
raise OpenSSLBadPassphraseError('No passphrase provided, but private key is password-protected!')
elif backend == 'cryptography':
try:
result = load_pem_private_key(priv_key_detail,
None if passphrase is None else to_bytes(passphrase),
cryptography_backend())
except TypeError as dummy:
raise OpenSSLBadPassphraseError('Wrong or empty passphrase provided for private key')
except ValueError as dummy:
raise OpenSSLBadPassphraseError('Wrong passphrase provided for private key')
return result
except (IOError, OSError) as exc:
raise OpenSSLObjectError(exc)
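# Illustrative usage sketch (comment only; path and passphrase are made up):
#
#     key = load_privatekey('/etc/ssl/private/example.pem', passphrase='secret',
#                           backend='cryptography')
#
# With backend='pyopenssl' the result is an OpenSSL.crypto.PKey object, with
# backend='cryptography' it is a cryptography private key object. A wrong (or missing
# but required) passphrase raises OpenSSLBadPassphraseError with either backend.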
def load_certificate(path, content=None, backend='pyopenssl'):
"""Load the specified certificate."""
try:
if content is None:
with open(path, 'rb') as cert_fh:
cert_content = cert_fh.read()
else:
cert_content = content
if backend == 'pyopenssl':
return crypto.load_certificate(crypto.FILETYPE_PEM, cert_content)
elif backend == 'cryptography':
return x509.load_pem_x509_certificate(cert_content, cryptography_backend())
except (IOError, OSError) as exc:
raise OpenSSLObjectError(exc)
def load_certificate_request(path, content=None, backend='pyopenssl'):
"""Load the specified certificate signing request."""
try:
if content is None:
with open(path, 'rb') as csr_fh:
csr_content = csr_fh.read()
else:
csr_content = content
except (IOError, OSError) as exc:
raise OpenSSLObjectError(exc)
if backend == 'pyopenssl':
return crypto.load_certificate_request(crypto.FILETYPE_PEM, csr_content)
elif backend == 'cryptography':
return x509.load_pem_x509_csr(csr_content, cryptography_backend())
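# Illustrative usage sketch (comment only; the path is made up):
#
#     csr = load_certificate_request('/tmp/example.csr', backend='cryptography')
#
# Depending on the backend this returns an OpenSSL.crypto.X509Req or a
# cryptography.x509.CertificateSigningRequest object.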
def parse_name_field(input_dict):
"""Take a dict with key: value or key: list_of_values mappings and return a list of tuples"""
result = []
for key in input_dict:
if isinstance(input_dict[key], list):
for entry in input_dict[key]:
result.append((key, entry))
else:
result.append((key, input_dict[key]))
return result
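# Illustrative sketch (comment only): subject/name dictionaries coming from module
# options are flattened into (key, value) tuples, with list values expanded, e.g.
#
#     parse_name_field({'C': 'US', 'OU': ['unit1', 'unit2']})
#     # -> [('C', 'US'), ('OU', 'unit1'), ('OU', 'unit2')]
#
# (tuple order follows the dict's iteration order)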
def convert_relative_to_datetime(relative_time_string):
"""Get a datetime.datetime or None from a string in the time format described in sshd_config(5)"""
parsed_result = re.match(
r"^(?P<prefix>[+-])((?P<weeks>\d+)[wW])?((?P<days>\d+)[dD])?((?P<hours>\d+)[hH])?((?P<minutes>\d+)[mM])?((?P<seconds>\d+)[sS]?)?$",
relative_time_string)
if parsed_result is None or len(relative_time_string) == 1:
# not matched or only a single "+" or "-"
return None
offset = datetime.timedelta(0)
if parsed_result.group("weeks") is not None:
offset += datetime.timedelta(weeks=int(parsed_result.group("weeks")))
if parsed_result.group("days") is not None:
offset += datetime.timedelta(days=int(parsed_result.group("days")))
if parsed_result.group("hours") is not None:
offset += datetime.timedelta(hours=int(parsed_result.group("hours")))
if parsed_result.group("minutes") is not None:
offset += datetime.timedelta(
minutes=int(parsed_result.group("minutes")))
if parsed_result.group("seconds") is not None:
offset += datetime.timedelta(
seconds=int(parsed_result.group("seconds")))
if parsed_result.group("prefix") == "+":
return datetime.datetime.utcnow() + offset
else:
return datetime.datetime.utcnow() - offset
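# Illustrative sketch (comment only): relative timestamps use the sshd_config(5)
# style, a leading '+' or '-' followed by weeks/days/hours/minutes/seconds, e.g.
#
#     convert_relative_to_datetime('+32w1d')  # 32 weeks and 1 day from now
#     convert_relative_to_datetime('-10m')    # 10 minutes ago
#     convert_relative_to_datetime('+')       # None (a bare prefix is not valid)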
def select_message_digest(digest_string):
digest = None
if digest_string == 'sha256':
digest = hashes.SHA256()
elif digest_string == 'sha384':
digest = hashes.SHA384()
elif digest_string == 'sha512':
digest = hashes.SHA512()
elif digest_string == 'sha1':
digest = hashes.SHA1()
elif digest_string == 'md5':
digest = hashes.MD5()
return digest
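# Illustrative sketch (comment only):
#
#     select_message_digest('sha256')    # -> hashes.SHA256() instance
#     select_message_digest('sha3-256')  # -> None (name not in the list above)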
def write_file(module, content, default_mode=None, path=None):
'''
    Write the given content into the destination file as securely as possible,
    honouring the common file arguments (owner, group, mode, ...) of the module.
'''
# Find out parameters for file
file_args = module.load_file_common_arguments(module.params, path=path)
if file_args['mode'] is None:
file_args['mode'] = default_mode
    # Create a temporary file; only its name is needed here, the file is reopened below
tmp_fd, tmp_name = tempfile.mkstemp(prefix=b'.ansible_tmp')
try:
os.close(tmp_fd)
except Exception as dummy:
pass
module.add_cleanup_file(tmp_name) # if we fail, let Ansible try to remove the file
try:
try:
# Create tempfile
file = os.open(tmp_name, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
os.write(file, content)
os.close(file)
except Exception as e:
try:
os.remove(tmp_name)
except Exception as dummy:
pass
module.fail_json(msg='Error while writing result into temporary file: {0}'.format(e))
# Update destination to wanted permissions
if os.path.exists(file_args['path']):
module.set_fs_attributes_if_different(file_args, False)
# Move tempfile to final destination
module.atomic_move(tmp_name, file_args['path'])
# Try to update permissions again
module.set_fs_attributes_if_different(file_args, False)
except Exception as e:
try:
os.remove(tmp_name)
except Exception as dummy:
pass
module.fail_json(msg='Error while writing result: {0}'.format(e))
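# Illustrative usage sketch (comment only): modules typically hand the generated PEM
# bytes to write_file() and let it apply the common file arguments (owner, group,
# mode, ...), e.g.
#
#     write_file(module, pem_bytes, default_mode=0o640)
#
# where pem_bytes is assumed to already be a byte string and 0o640 is just an example
# default mode.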
@six.add_metaclass(abc.ABCMeta)
class OpenSSLObject(object):
def __init__(self, path, state, force, check_mode):
self.path = path
self.state = state
self.force = force
self.name = os.path.basename(path)
self.changed = False
self.check_mode = check_mode
def check(self, module, perms_required=True):
"""Ensure the resource is in its desired state."""
def _check_state():
return os.path.exists(self.path)
def _check_perms(module):
file_args = module.load_file_common_arguments(module.params)
return not module.set_fs_attributes_if_different(file_args, False)
if not perms_required:
return _check_state()
return _check_state() and _check_perms(module)
@abc.abstractmethod
def dump(self):
"""Serialize the object into a dictionary."""
pass
@abc.abstractmethod
def generate(self):
"""Generate the resource."""
pass
def remove(self, module):
"""Remove the resource from the filesystem."""
try:
os.remove(self.path)
self.changed = True
except OSError as exc:
if exc.errno != errno.ENOENT:
raise OpenSSLObjectError(exc)
else:
pass
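# Illustrative sketch (comment only) of how a concrete resource builds on OpenSSLObject:
# subclasses provide generate() and dump() and inherit check()/remove(). The class name
# below is made up for illustration; real subclasses may extend the signatures.
#
#     class ExampleResource(OpenSSLObject):
#         def generate(self):
#             ...  # create the artifact at self.path and set self.changed = True
#         def dump(self):
#             return {'filename': self.path, 'changed': self.changed}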
# #####################################################################################
# #####################################################################################
# This has been extracted from the OpenSSL project's objects.txt:
# https://github.com/openssl/openssl/blob/9537fe5757bb07761fa275d779bbd40bcf5530e4/crypto/objects/objects.txt
# Extracted with https://gist.github.com/felixfontein/376748017ad65ead093d56a45a5bf376
#
# In case the following data structure has any copyrightable content, note that it is licensed as follows:
# Copyright (c) the OpenSSL contributors
# Licensed under the Apache License 2.0
# https://github.com/openssl/openssl/blob/master/LICENSE
_OID_MAP = {
'0': ('itu-t', 'ITU-T', 'ccitt'),
'0.3.4401.5': ('ntt-ds', ),
'0.3.4401.5.3.1.9': ('camellia', ),
'0.3.4401.5.3.1.9.1': ('camellia-128-ecb', 'CAMELLIA-128-ECB'),
'0.3.4401.5.3.1.9.3': ('camellia-128-ofb', 'CAMELLIA-128-OFB'),
'0.3.4401.5.3.1.9.4': ('camellia-128-cfb', 'CAMELLIA-128-CFB'),
'0.3.4401.5.3.1.9.6': ('camellia-128-gcm', 'CAMELLIA-128-GCM'),
'0.3.4401.5.3.1.9.7': ('camellia-128-ccm', 'CAMELLIA-128-CCM'),
'0.3.4401.5.3.1.9.9': ('camellia-128-ctr', 'CAMELLIA-128-CTR'),
'0.3.4401.5.3.1.9.10': ('camellia-128-cmac', 'CAMELLIA-128-CMAC'),
'0.3.4401.5.3.1.9.21': ('camellia-192-ecb', 'CAMELLIA-192-ECB'),
'0.3.4401.5.3.1.9.23': ('camellia-192-ofb', 'CAMELLIA-192-OFB'),
'0.3.4401.5.3.1.9.24': ('camellia-192-cfb', 'CAMELLIA-192-CFB'),
'0.3.4401.5.3.1.9.26': ('camellia-192-gcm', 'CAMELLIA-192-GCM'),
'0.3.4401.5.3.1.9.27': ('camellia-192-ccm', 'CAMELLIA-192-CCM'),
'0.3.4401.5.3.1.9.29': ('camellia-192-ctr', 'CAMELLIA-192-CTR'),
'0.3.4401.5.3.1.9.30': ('camellia-192-cmac', 'CAMELLIA-192-CMAC'),
'0.3.4401.5.3.1.9.41': ('camellia-256-ecb', 'CAMELLIA-256-ECB'),
'0.3.4401.5.3.1.9.43': ('camellia-256-ofb', 'CAMELLIA-256-OFB'),
'0.3.4401.5.3.1.9.44': ('camellia-256-cfb', 'CAMELLIA-256-CFB'),
'0.3.4401.5.3.1.9.46': ('camellia-256-gcm', 'CAMELLIA-256-GCM'),
'0.3.4401.5.3.1.9.47': ('camellia-256-ccm', 'CAMELLIA-256-CCM'),
'0.3.4401.5.3.1.9.49': ('camellia-256-ctr', 'CAMELLIA-256-CTR'),
'0.3.4401.5.3.1.9.50': ('camellia-256-cmac', 'CAMELLIA-256-CMAC'),
'0.9': ('data', ),
'0.9.2342': ('pss', ),
'0.9.2342.19200300': ('ucl', ),
'0.9.2342.19200300.100': ('pilot', ),
'0.9.2342.19200300.100.1': ('pilotAttributeType', ),
'0.9.2342.19200300.100.1.1': ('userId', 'UID'),
'0.9.2342.19200300.100.1.2': ('textEncodedORAddress', ),
'0.9.2342.19200300.100.1.3': ('rfc822Mailbox', 'mail'),
'0.9.2342.19200300.100.1.4': ('info', ),
'0.9.2342.19200300.100.1.5': ('favouriteDrink', ),
'0.9.2342.19200300.100.1.6': ('roomNumber', ),
'0.9.2342.19200300.100.1.7': ('photo', ),
'0.9.2342.19200300.100.1.8': ('userClass', ),
'0.9.2342.19200300.100.1.9': ('host', ),
'0.9.2342.19200300.100.1.10': ('manager', ),
'0.9.2342.19200300.100.1.11': ('documentIdentifier', ),
'0.9.2342.19200300.100.1.12': ('documentTitle', ),
'0.9.2342.19200300.100.1.13': ('documentVersion', ),
'0.9.2342.19200300.100.1.14': ('documentAuthor', ),
'0.9.2342.19200300.100.1.15': ('documentLocation', ),
'0.9.2342.19200300.100.1.20': ('homeTelephoneNumber', ),
'0.9.2342.19200300.100.1.21': ('secretary', ),
'0.9.2342.19200300.100.1.22': ('otherMailbox', ),
'0.9.2342.19200300.100.1.23': ('lastModifiedTime', ),
'0.9.2342.19200300.100.1.24': ('lastModifiedBy', ),
'0.9.2342.19200300.100.1.25': ('domainComponent', 'DC'),
'0.9.2342.19200300.100.1.26': ('aRecord', ),
'0.9.2342.19200300.100.1.27': ('pilotAttributeType27', ),
'0.9.2342.19200300.100.1.28': ('mXRecord', ),
'0.9.2342.19200300.100.1.29': ('nSRecord', ),
'0.9.2342.19200300.100.1.30': ('sOARecord', ),
'0.9.2342.19200300.100.1.31': ('cNAMERecord', ),
'0.9.2342.19200300.100.1.37': ('associatedDomain', ),
'0.9.2342.19200300.100.1.38': ('associatedName', ),
'0.9.2342.19200300.100.1.39': ('homePostalAddress', ),
'0.9.2342.19200300.100.1.40': ('personalTitle', ),
'0.9.2342.19200300.100.1.41': ('mobileTelephoneNumber', ),
'0.9.2342.19200300.100.1.42': ('pagerTelephoneNumber', ),
'0.9.2342.19200300.100.1.43': ('friendlyCountryName', ),
'0.9.2342.19200300.100.1.44': ('uniqueIdentifier', 'uid'),
'0.9.2342.19200300.100.1.45': ('organizationalStatus', ),
'0.9.2342.19200300.100.1.46': ('janetMailbox', ),
'0.9.2342.19200300.100.1.47': ('mailPreferenceOption', ),
'0.9.2342.19200300.100.1.48': ('buildingName', ),
'0.9.2342.19200300.100.1.49': ('dSAQuality', ),
'0.9.2342.19200300.100.1.50': ('singleLevelQuality', ),
'0.9.2342.19200300.100.1.51': ('subtreeMinimumQuality', ),
'0.9.2342.19200300.100.1.52': ('subtreeMaximumQuality', ),
'0.9.2342.19200300.100.1.53': ('personalSignature', ),
'0.9.2342.19200300.100.1.54': ('dITRedirect', ),
'0.9.2342.19200300.100.1.55': ('audio', ),
'0.9.2342.19200300.100.1.56': ('documentPublisher', ),
'0.9.2342.19200300.100.3': ('pilotAttributeSyntax', ),
'0.9.2342.19200300.100.3.4': ('iA5StringSyntax', ),
'0.9.2342.19200300.100.3.5': ('caseIgnoreIA5StringSyntax', ),
'0.9.2342.19200300.100.4': ('pilotObjectClass', ),
'0.9.2342.19200300.100.4.3': ('pilotObject', ),
'0.9.2342.19200300.100.4.4': ('pilotPerson', ),
'0.9.2342.19200300.100.4.5': ('account', ),
'0.9.2342.19200300.100.4.6': ('document', ),
'0.9.2342.19200300.100.4.7': ('room', ),
'0.9.2342.19200300.100.4.9': ('documentSeries', ),
'0.9.2342.19200300.100.4.13': ('Domain', 'domain'),
'0.9.2342.19200300.100.4.14': ('rFC822localPart', ),
'0.9.2342.19200300.100.4.15': ('dNSDomain', ),
'0.9.2342.19200300.100.4.17': ('domainRelatedObject', ),
'0.9.2342.19200300.100.4.18': ('friendlyCountry', ),
'0.9.2342.19200300.100.4.19': ('simpleSecurityObject', ),
'0.9.2342.19200300.100.4.20': ('pilotOrganization', ),
'0.9.2342.19200300.100.4.21': ('pilotDSA', ),
'0.9.2342.19200300.100.4.22': ('qualityLabelledData', ),
'0.9.2342.19200300.100.10': ('pilotGroups', ),
'1': ('iso', 'ISO'),
'1.0.9797.3.4': ('gmac', 'GMAC'),
'1.0.10118.3.0.55': ('whirlpool', ),
'1.2': ('ISO Member Body', 'member-body'),
'1.2.156': ('ISO CN Member Body', 'ISO-CN'),
'1.2.156.10197': ('oscca', ),
'1.2.156.10197.1': ('sm-scheme', ),
'1.2.156.10197.1.104.1': ('sm4-ecb', 'SM4-ECB'),
'1.2.156.10197.1.104.2': ('sm4-cbc', 'SM4-CBC'),
'1.2.156.10197.1.104.3': ('sm4-ofb', 'SM4-OFB'),
'1.2.156.10197.1.104.4': ('sm4-cfb', 'SM4-CFB'),
'1.2.156.10197.1.104.5': ('sm4-cfb1', 'SM4-CFB1'),
'1.2.156.10197.1.104.6': ('sm4-cfb8', 'SM4-CFB8'),
'1.2.156.10197.1.104.7': ('sm4-ctr', 'SM4-CTR'),
'1.2.156.10197.1.301': ('sm2', 'SM2'),
'1.2.156.10197.1.401': ('sm3', 'SM3'),
'1.2.156.10197.1.501': ('SM2-with-SM3', 'SM2-SM3'),
'1.2.156.10197.1.504': ('sm3WithRSAEncryption', 'RSA-SM3'),
'1.2.392.200011.61.1.1.1.2': ('camellia-128-cbc', 'CAMELLIA-128-CBC'),
'1.2.392.200011.61.1.1.1.3': ('camellia-192-cbc', 'CAMELLIA-192-CBC'),
'1.2.392.200011.61.1.1.1.4': ('camellia-256-cbc', 'CAMELLIA-256-CBC'),
'1.2.392.200011.61.1.1.3.2': ('id-camellia128-wrap', ),
'1.2.392.200011.61.1.1.3.3': ('id-camellia192-wrap', ),
'1.2.392.200011.61.1.1.3.4': ('id-camellia256-wrap', ),
'1.2.410.200004': ('kisa', 'KISA'),
'1.2.410.200004.1.3': ('seed-ecb', 'SEED-ECB'),
'1.2.410.200004.1.4': ('seed-cbc', 'SEED-CBC'),
'1.2.410.200004.1.5': ('seed-cfb', 'SEED-CFB'),
'1.2.410.200004.1.6': ('seed-ofb', 'SEED-OFB'),
'1.2.410.200046.1.1': ('aria', ),
'1.2.410.200046.1.1.1': ('aria-128-ecb', 'ARIA-128-ECB'),
'1.2.410.200046.1.1.2': ('aria-128-cbc', 'ARIA-128-CBC'),
'1.2.410.200046.1.1.3': ('aria-128-cfb', 'ARIA-128-CFB'),
'1.2.410.200046.1.1.4': ('aria-128-ofb', 'ARIA-128-OFB'),
'1.2.410.200046.1.1.5': ('aria-128-ctr', 'ARIA-128-CTR'),
'1.2.410.200046.1.1.6': ('aria-192-ecb', 'ARIA-192-ECB'),
'1.2.410.200046.1.1.7': ('aria-192-cbc', 'ARIA-192-CBC'),
'1.2.410.200046.1.1.8': ('aria-192-cfb', 'ARIA-192-CFB'),
'1.2.410.200046.1.1.9': ('aria-192-ofb', 'ARIA-192-OFB'),
'1.2.410.200046.1.1.10': ('aria-192-ctr', 'ARIA-192-CTR'),
'1.2.410.200046.1.1.11': ('aria-256-ecb', 'ARIA-256-ECB'),
'1.2.410.200046.1.1.12': ('aria-256-cbc', 'ARIA-256-CBC'),
'1.2.410.200046.1.1.13': ('aria-256-cfb', 'ARIA-256-CFB'),
'1.2.410.200046.1.1.14': ('aria-256-ofb', 'ARIA-256-OFB'),
'1.2.410.200046.1.1.15': ('aria-256-ctr', 'ARIA-256-CTR'),
'1.2.410.200046.1.1.34': ('aria-128-gcm', 'ARIA-128-GCM'),
'1.2.410.200046.1.1.35': ('aria-192-gcm', 'ARIA-192-GCM'),
'1.2.410.200046.1.1.36': ('aria-256-gcm', 'ARIA-256-GCM'),
'1.2.410.200046.1.1.37': ('aria-128-ccm', 'ARIA-128-CCM'),
'1.2.410.200046.1.1.38': ('aria-192-ccm', 'ARIA-192-CCM'),
'1.2.410.200046.1.1.39': ('aria-256-ccm', 'ARIA-256-CCM'),
'1.2.643.2.2': ('cryptopro', ),
'1.2.643.2.2.3': ('GOST R 34.11-94 with GOST R 34.10-2001', 'id-GostR3411-94-with-GostR3410-2001'),
'1.2.643.2.2.4': ('GOST R 34.11-94 with GOST R 34.10-94', 'id-GostR3411-94-with-GostR3410-94'),
'1.2.643.2.2.9': ('GOST R 34.11-94', 'md_gost94'),
'1.2.643.2.2.10': ('HMAC GOST 34.11-94', 'id-HMACGostR3411-94'),
'1.2.643.2.2.14.0': ('id-Gost28147-89-None-KeyMeshing', ),
'1.2.643.2.2.14.1': ('id-Gost28147-89-CryptoPro-KeyMeshing', ),
'1.2.643.2.2.19': ('GOST R 34.10-2001', 'gost2001'),
'1.2.643.2.2.20': ('GOST R 34.10-94', 'gost94'),
'1.2.643.2.2.20.1': ('id-GostR3410-94-a', ),
'1.2.643.2.2.20.2': ('id-GostR3410-94-aBis', ),
'1.2.643.2.2.20.3': ('id-GostR3410-94-b', ),
'1.2.643.2.2.20.4': ('id-GostR3410-94-bBis', ),
'1.2.643.2.2.21': ('GOST 28147-89', 'gost89'),
'1.2.643.2.2.22': ('GOST 28147-89 MAC', 'gost-mac'),
'1.2.643.2.2.23': ('GOST R 34.11-94 PRF', 'prf-gostr3411-94'),
'1.2.643.2.2.30.0': ('id-GostR3411-94-TestParamSet', ),
'1.2.643.2.2.30.1': ('id-GostR3411-94-CryptoProParamSet', ),
'1.2.643.2.2.31.0': ('id-Gost28147-89-TestParamSet', ),
'1.2.643.2.2.31.1': ('id-Gost28147-89-CryptoPro-A-ParamSet', ),
'1.2.643.2.2.31.2': ('id-Gost28147-89-CryptoPro-B-ParamSet', ),
'1.2.643.2.2.31.3': ('id-Gost28147-89-CryptoPro-C-ParamSet', ),
'1.2.643.2.2.31.4': ('id-Gost28147-89-CryptoPro-D-ParamSet', ),
'1.2.643.2.2.31.5': ('id-Gost28147-89-CryptoPro-Oscar-1-1-ParamSet', ),
'1.2.643.2.2.31.6': ('id-Gost28147-89-CryptoPro-Oscar-1-0-ParamSet', ),
'1.2.643.2.2.31.7': ('id-Gost28147-89-CryptoPro-RIC-1-ParamSet', ),
'1.2.643.2.2.32.0': ('id-GostR3410-94-TestParamSet', ),
'1.2.643.2.2.32.2': ('id-GostR3410-94-CryptoPro-A-ParamSet', ),
'1.2.643.2.2.32.3': ('id-GostR3410-94-CryptoPro-B-ParamSet', ),
'1.2.643.2.2.32.4': ('id-GostR3410-94-CryptoPro-C-ParamSet', ),
'1.2.643.2.2.32.5': ('id-GostR3410-94-CryptoPro-D-ParamSet', ),
'1.2.643.2.2.33.1': ('id-GostR3410-94-CryptoPro-XchA-ParamSet', ),
'1.2.643.2.2.33.2': ('id-GostR3410-94-CryptoPro-XchB-ParamSet', ),
'1.2.643.2.2.33.3': ('id-GostR3410-94-CryptoPro-XchC-ParamSet', ),
'1.2.643.2.2.35.0': ('id-GostR3410-2001-TestParamSet', ),
'1.2.643.2.2.35.1': ('id-GostR3410-2001-CryptoPro-A-ParamSet', ),
'1.2.643.2.2.35.2': ('id-GostR3410-2001-CryptoPro-B-ParamSet', ),
'1.2.643.2.2.35.3': ('id-GostR3410-2001-CryptoPro-C-ParamSet', ),
'1.2.643.2.2.36.0': ('id-GostR3410-2001-CryptoPro-XchA-ParamSet', ),
'1.2.643.2.2.36.1': ('id-GostR3410-2001-CryptoPro-XchB-ParamSet', ),
'1.2.643.2.2.98': ('GOST R 34.10-2001 DH', 'id-GostR3410-2001DH'),
'1.2.643.2.2.99': ('GOST R 34.10-94 DH', 'id-GostR3410-94DH'),
'1.2.643.2.9': ('cryptocom', ),
'1.2.643.2.9.1.3.3': ('GOST R 34.11-94 with GOST R 34.10-94 Cryptocom', 'id-GostR3411-94-with-GostR3410-94-cc'),
'1.2.643.2.9.1.3.4': ('GOST R 34.11-94 with GOST R 34.10-2001 Cryptocom', 'id-GostR3411-94-with-GostR3410-2001-cc'),
'1.2.643.2.9.1.5.3': ('GOST 34.10-94 Cryptocom', 'gost94cc'),
'1.2.643.2.9.1.5.4': ('GOST 34.10-2001 Cryptocom', 'gost2001cc'),
'1.2.643.2.9.1.6.1': ('GOST 28147-89 Cryptocom ParamSet', 'id-Gost28147-89-cc'),
'1.2.643.2.9.1.8.1': ('GOST R 3410-2001 Parameter Set Cryptocom', 'id-GostR3410-2001-ParamSet-cc'),
'1.2.643.3.131.1.1': ('INN', 'INN'),
'1.2.643.7.1': ('id-tc26', ),
'1.2.643.7.1.1': ('id-tc26-algorithms', ),
'1.2.643.7.1.1.1': ('id-tc26-sign', ),
'1.2.643.7.1.1.1.1': ('GOST R 34.10-2012 with 256 bit modulus', 'gost2012_256'),
'1.2.643.7.1.1.1.2': ('GOST R 34.10-2012 with 512 bit modulus', 'gost2012_512'),
'1.2.643.7.1.1.2': ('id-tc26-digest', ),
'1.2.643.7.1.1.2.2': ('GOST R 34.11-2012 with 256 bit hash', 'md_gost12_256'),
'1.2.643.7.1.1.2.3': ('GOST R 34.11-2012 with 512 bit hash', 'md_gost12_512'),
'1.2.643.7.1.1.3': ('id-tc26-signwithdigest', ),
'1.2.643.7.1.1.3.2': ('GOST R 34.10-2012 with GOST R 34.11-2012 (256 bit)', 'id-tc26-signwithdigest-gost3410-2012-256'),
'1.2.643.7.1.1.3.3': ('GOST R 34.10-2012 with GOST R 34.11-2012 (512 bit)', 'id-tc26-signwithdigest-gost3410-2012-512'),
'1.2.643.7.1.1.4': ('id-tc26-mac', ),
'1.2.643.7.1.1.4.1': ('HMAC GOST 34.11-2012 256 bit', 'id-tc26-hmac-gost-3411-2012-256'),
'1.2.643.7.1.1.4.2': ('HMAC GOST 34.11-2012 512 bit', 'id-tc26-hmac-gost-3411-2012-512'),
'1.2.643.7.1.1.5': ('id-tc26-cipher', ),
'1.2.643.7.1.1.5.1': ('id-tc26-cipher-gostr3412-2015-magma', ),
'1.2.643.7.1.1.5.1.1': ('id-tc26-cipher-gostr3412-2015-magma-ctracpkm', ),
'1.2.643.7.1.1.5.1.2': ('id-tc26-cipher-gostr3412-2015-magma-ctracpkm-omac', ),
'1.2.643.7.1.1.5.2': ('id-tc26-cipher-gostr3412-2015-kuznyechik', ),
'1.2.643.7.1.1.5.2.1': ('id-tc26-cipher-gostr3412-2015-kuznyechik-ctracpkm', ),
'1.2.643.7.1.1.5.2.2': ('id-tc26-cipher-gostr3412-2015-kuznyechik-ctracpkm-omac', ),
'1.2.643.7.1.1.6': ('id-tc26-agreement', ),
'1.2.643.7.1.1.6.1': ('id-tc26-agreement-gost-3410-2012-256', ),
'1.2.643.7.1.1.6.2': ('id-tc26-agreement-gost-3410-2012-512', ),
'1.2.643.7.1.1.7': ('id-tc26-wrap', ),
'1.2.643.7.1.1.7.1': ('id-tc26-wrap-gostr3412-2015-magma', ),
'1.2.643.7.1.1.7.1.1': ('id-tc26-wrap-gostr3412-2015-magma-kexp15', 'id-tc26-wrap-gostr3412-2015-kuznyechik-kexp15'),
'1.2.643.7.1.1.7.2': ('id-tc26-wrap-gostr3412-2015-kuznyechik', ),
'1.2.643.7.1.2': ('id-tc26-constants', ),
'1.2.643.7.1.2.1': ('id-tc26-sign-constants', ),
'1.2.643.7.1.2.1.1': ('id-tc26-gost-3410-2012-256-constants', ),
'1.2.643.7.1.2.1.1.1': ('GOST R 34.10-2012 (256 bit) ParamSet A', 'id-tc26-gost-3410-2012-256-paramSetA'),
'1.2.643.7.1.2.1.1.2': ('GOST R 34.10-2012 (256 bit) ParamSet B', 'id-tc26-gost-3410-2012-256-paramSetB'),
'1.2.643.7.1.2.1.1.3': ('GOST R 34.10-2012 (256 bit) ParamSet C', 'id-tc26-gost-3410-2012-256-paramSetC'),
'1.2.643.7.1.2.1.1.4': ('GOST R 34.10-2012 (256 bit) ParamSet D', 'id-tc26-gost-3410-2012-256-paramSetD'),
'1.2.643.7.1.2.1.2': ('id-tc26-gost-3410-2012-512-constants', ),
'1.2.643.7.1.2.1.2.0': ('GOST R 34.10-2012 (512 bit) testing parameter set', 'id-tc26-gost-3410-2012-512-paramSetTest'),
'1.2.643.7.1.2.1.2.1': ('GOST R 34.10-2012 (512 bit) ParamSet A', 'id-tc26-gost-3410-2012-512-paramSetA'),
'1.2.643.7.1.2.1.2.2': ('GOST R 34.10-2012 (512 bit) ParamSet B', 'id-tc26-gost-3410-2012-512-paramSetB'),
'1.2.643.7.1.2.1.2.3': ('GOST R 34.10-2012 (512 bit) ParamSet C', 'id-tc26-gost-3410-2012-512-paramSetC'),
'1.2.643.7.1.2.2': ('id-tc26-digest-constants', ),
'1.2.643.7.1.2.5': ('id-tc26-cipher-constants', ),
'1.2.643.7.1.2.5.1': ('id-tc26-gost-28147-constants', ),
'1.2.643.7.1.2.5.1.1': ('GOST 28147-89 TC26 parameter set', 'id-tc26-gost-28147-param-Z'),
'1.2.643.100.1': ('OGRN', 'OGRN'),
'1.2.643.100.3': ('SNILS', 'SNILS'),
'1.2.643.100.111': ('Signing Tool of Subject', 'subjectSignTool'),
'1.2.643.100.112': ('Signing Tool of Issuer', 'issuerSignTool'),
'1.2.804': ('ISO-UA', ),
'1.2.804.2.1.1.1': ('ua-pki', ),
'1.2.804.2.1.1.1.1.1.1': ('DSTU Gost 28147-2009', 'dstu28147'),
'1.2.804.2.1.1.1.1.1.1.2': ('DSTU Gost 28147-2009 OFB mode', 'dstu28147-ofb'),
'1.2.804.2.1.1.1.1.1.1.3': ('DSTU Gost 28147-2009 CFB mode', 'dstu28147-cfb'),
'1.2.804.2.1.1.1.1.1.1.5': ('DSTU Gost 28147-2009 key wrap', 'dstu28147-wrap'),
'1.2.804.2.1.1.1.1.1.2': ('HMAC DSTU Gost 34311-95', 'hmacWithDstu34311'),
'1.2.804.2.1.1.1.1.2.1': ('DSTU Gost 34311-95', 'dstu34311'),
'1.2.804.2.1.1.1.1.3.1.1': ('DSTU 4145-2002 little endian', 'dstu4145le'),
'1.2.804.2.1.1.1.1.3.1.1.1.1': ('DSTU 4145-2002 big endian', 'dstu4145be'),
'1.2.804.2.1.1.1.1.3.1.1.2.0': ('DSTU curve 0', 'uacurve0'),
'1.2.804.2.1.1.1.1.3.1.1.2.1': ('DSTU curve 1', 'uacurve1'),
'1.2.804.2.1.1.1.1.3.1.1.2.2': ('DSTU curve 2', 'uacurve2'),
'1.2.804.2.1.1.1.1.3.1.1.2.3': ('DSTU curve 3', 'uacurve3'),
'1.2.804.2.1.1.1.1.3.1.1.2.4': ('DSTU curve 4', 'uacurve4'),
'1.2.804.2.1.1.1.1.3.1.1.2.5': ('DSTU curve 5', 'uacurve5'),
'1.2.804.2.1.1.1.1.3.1.1.2.6': ('DSTU curve 6', 'uacurve6'),
'1.2.804.2.1.1.1.1.3.1.1.2.7': ('DSTU curve 7', 'uacurve7'),
'1.2.804.2.1.1.1.1.3.1.1.2.8': ('DSTU curve 8', 'uacurve8'),
'1.2.804.2.1.1.1.1.3.1.1.2.9': ('DSTU curve 9', 'uacurve9'),
'1.2.840': ('ISO US Member Body', 'ISO-US'),
'1.2.840.10040': ('X9.57', 'X9-57'),
'1.2.840.10040.2': ('holdInstruction', ),
'1.2.840.10040.2.1': ('Hold Instruction None', 'holdInstructionNone'),
'1.2.840.10040.2.2': ('Hold Instruction Call Issuer', 'holdInstructionCallIssuer'),
'1.2.840.10040.2.3': ('Hold Instruction Reject', 'holdInstructionReject'),
'1.2.840.10040.4': ('X9.57 CM ?', 'X9cm'),
'1.2.840.10040.4.1': ('dsaEncryption', 'DSA'),
'1.2.840.10040.4.3': ('dsaWithSHA1', 'DSA-SHA1'),
'1.2.840.10045': ('ANSI X9.62', 'ansi-X9-62'),
'1.2.840.10045.1': ('id-fieldType', ),
'1.2.840.10045.1.1': ('prime-field', ),
'1.2.840.10045.1.2': ('characteristic-two-field', ),
'1.2.840.10045.1.2.3': ('id-characteristic-two-basis', ),
'1.2.840.10045.1.2.3.1': ('onBasis', ),
'1.2.840.10045.1.2.3.2': ('tpBasis', ),
'1.2.840.10045.1.2.3.3': ('ppBasis', ),
'1.2.840.10045.2': ('id-publicKeyType', ),
'1.2.840.10045.2.1': ('id-ecPublicKey', ),
'1.2.840.10045.3': ('ellipticCurve', ),
'1.2.840.10045.3.0': ('c-TwoCurve', ),
'1.2.840.10045.3.0.1': ('c2pnb163v1', ),
'1.2.840.10045.3.0.2': ('c2pnb163v2', ),
'1.2.840.10045.3.0.3': ('c2pnb163v3', ),
'1.2.840.10045.3.0.4': ('c2pnb176v1', ),
'1.2.840.10045.3.0.5': ('c2tnb191v1', ),
'1.2.840.10045.3.0.6': ('c2tnb191v2', ),
'1.2.840.10045.3.0.7': ('c2tnb191v3', ),
'1.2.840.10045.3.0.8': ('c2onb191v4', ),
'1.2.840.10045.3.0.9': ('c2onb191v5', ),
'1.2.840.10045.3.0.10': ('c2pnb208w1', ),
'1.2.840.10045.3.0.11': ('c2tnb239v1', ),
'1.2.840.10045.3.0.12': ('c2tnb239v2', ),
'1.2.840.10045.3.0.13': ('c2tnb239v3', ),
'1.2.840.10045.3.0.14': ('c2onb239v4', ),
'1.2.840.10045.3.0.15': ('c2onb239v5', ),
'1.2.840.10045.3.0.16': ('c2pnb272w1', ),
'1.2.840.10045.3.0.17': ('c2pnb304w1', ),
'1.2.840.10045.3.0.18': ('c2tnb359v1', ),
'1.2.840.10045.3.0.19': ('c2pnb368w1', ),
'1.2.840.10045.3.0.20': ('c2tnb431r1', ),
'1.2.840.10045.3.1': ('primeCurve', ),
'1.2.840.10045.3.1.1': ('prime192v1', ),
'1.2.840.10045.3.1.2': ('prime192v2', ),
'1.2.840.10045.3.1.3': ('prime192v3', ),
'1.2.840.10045.3.1.4': ('prime239v1', ),
'1.2.840.10045.3.1.5': ('prime239v2', ),
'1.2.840.10045.3.1.6': ('prime239v3', ),
'1.2.840.10045.3.1.7': ('prime256v1', ),
'1.2.840.10045.4': ('id-ecSigType', ),
'1.2.840.10045.4.1': ('ecdsa-with-SHA1', ),
'1.2.840.10045.4.2': ('ecdsa-with-Recommended', ),
'1.2.840.10045.4.3': ('ecdsa-with-Specified', ),
'1.2.840.10045.4.3.1': ('ecdsa-with-SHA224', ),
'1.2.840.10045.4.3.2': ('ecdsa-with-SHA256', ),
'1.2.840.10045.4.3.3': ('ecdsa-with-SHA384', ),
'1.2.840.10045.4.3.4': ('ecdsa-with-SHA512', ),
'1.2.840.10046.2.1': ('X9.42 DH', 'dhpublicnumber'),
'1.2.840.113533.7.66.10': ('cast5-cbc', 'CAST5-CBC'),
'1.2.840.113533.7.66.12': ('pbeWithMD5AndCast5CBC', ),
'1.2.840.113533.7.66.13': ('password based MAC', 'id-PasswordBasedMAC'),
'1.2.840.113533.7.66.30': ('Diffie-Hellman based MAC', 'id-DHBasedMac'),
'1.2.840.113549': ('RSA Data Security, Inc.', 'rsadsi'),
'1.2.840.113549.1': ('RSA Data Security, Inc. PKCS', 'pkcs'),
'1.2.840.113549.1.1': ('pkcs1', ),
'1.2.840.113549.1.1.1': ('rsaEncryption', ),
'1.2.840.113549.1.1.2': ('md2WithRSAEncryption', 'RSA-MD2'),
'1.2.840.113549.1.1.3': ('md4WithRSAEncryption', 'RSA-MD4'),
'1.2.840.113549.1.1.4': ('md5WithRSAEncryption', 'RSA-MD5'),
'1.2.840.113549.1.1.5': ('sha1WithRSAEncryption', 'RSA-SHA1'),
'1.2.840.113549.1.1.6': ('rsaOAEPEncryptionSET', ),
'1.2.840.113549.1.1.7': ('rsaesOaep', 'RSAES-OAEP'),
'1.2.840.113549.1.1.8': ('mgf1', 'MGF1'),
'1.2.840.113549.1.1.9': ('pSpecified', 'PSPECIFIED'),
'1.2.840.113549.1.1.10': ('rsassaPss', 'RSASSA-PSS'),
'1.2.840.113549.1.1.11': ('sha256WithRSAEncryption', 'RSA-SHA256'),
'1.2.840.113549.1.1.12': ('sha384WithRSAEncryption', 'RSA-SHA384'),
'1.2.840.113549.1.1.13': ('sha512WithRSAEncryption', 'RSA-SHA512'),
'1.2.840.113549.1.1.14': ('sha224WithRSAEncryption', 'RSA-SHA224'),
'1.2.840.113549.1.1.15': ('sha512-224WithRSAEncryption', 'RSA-SHA512/224'),
'1.2.840.113549.1.1.16': ('sha512-256WithRSAEncryption', 'RSA-SHA512/256'),
'1.2.840.113549.1.3': ('pkcs3', ),
'1.2.840.113549.1.3.1': ('dhKeyAgreement', ),
'1.2.840.113549.1.5': ('pkcs5', ),
'1.2.840.113549.1.5.1': ('pbeWithMD2AndDES-CBC', 'PBE-MD2-DES'),
'1.2.840.113549.1.5.3': ('pbeWithMD5AndDES-CBC', 'PBE-MD5-DES'),
'1.2.840.113549.1.5.4': ('pbeWithMD2AndRC2-CBC', 'PBE-MD2-RC2-64'),
'1.2.840.113549.1.5.6': ('pbeWithMD5AndRC2-CBC', 'PBE-MD5-RC2-64'),
'1.2.840.113549.1.5.10': ('pbeWithSHA1AndDES-CBC', 'PBE-SHA1-DES'),
'1.2.840.113549.1.5.11': ('pbeWithSHA1AndRC2-CBC', 'PBE-SHA1-RC2-64'),
'1.2.840.113549.1.5.12': ('PBKDF2', ),
'1.2.840.113549.1.5.13': ('PBES2', ),
'1.2.840.113549.1.5.14': ('PBMAC1', ),
'1.2.840.113549.1.7': ('pkcs7', ),
'1.2.840.113549.1.7.1': ('pkcs7-data', ),
'1.2.840.113549.1.7.2': ('pkcs7-signedData', ),
'1.2.840.113549.1.7.3': ('pkcs7-envelopedData', ),
'1.2.840.113549.1.7.4': ('pkcs7-signedAndEnvelopedData', ),
'1.2.840.113549.1.7.5': ('pkcs7-digestData', ),
'1.2.840.113549.1.7.6': ('pkcs7-encryptedData', ),
'1.2.840.113549.1.9': ('pkcs9', ),
'1.2.840.113549.1.9.1': ('emailAddress', ),
'1.2.840.113549.1.9.2': ('unstructuredName', ),
'1.2.840.113549.1.9.3': ('contentType', ),
'1.2.840.113549.1.9.4': ('messageDigest', ),
'1.2.840.113549.1.9.5': ('signingTime', ),
'1.2.840.113549.1.9.6': ('countersignature', ),
'1.2.840.113549.1.9.7': ('challengePassword', ),
'1.2.840.113549.1.9.8': ('unstructuredAddress', ),
'1.2.840.113549.1.9.9': ('extendedCertificateAttributes', ),
'1.2.840.113549.1.9.14': ('Extension Request', 'extReq'),
'1.2.840.113549.1.9.15': ('S/MIME Capabilities', 'SMIME-CAPS'),
'1.2.840.113549.1.9.16': ('S/MIME', 'SMIME'),
'1.2.840.113549.1.9.16.0': ('id-smime-mod', ),
'1.2.840.113549.1.9.16.0.1': ('id-smime-mod-cms', ),
'1.2.840.113549.1.9.16.0.2': ('id-smime-mod-ess', ),
'1.2.840.113549.1.9.16.0.3': ('id-smime-mod-oid', ),
'1.2.840.113549.1.9.16.0.4': ('id-smime-mod-msg-v3', ),
'1.2.840.113549.1.9.16.0.5': ('id-smime-mod-ets-eSignature-88', ),
'1.2.840.113549.1.9.16.0.6': ('id-smime-mod-ets-eSignature-97', ),
'1.2.840.113549.1.9.16.0.7': ('id-smime-mod-ets-eSigPolicy-88', ),
'1.2.840.113549.1.9.16.0.8': ('id-smime-mod-ets-eSigPolicy-97', ),
'1.2.840.113549.1.9.16.1': ('id-smime-ct', ),
'1.2.840.113549.1.9.16.1.1': ('id-smime-ct-receipt', ),
'1.2.840.113549.1.9.16.1.2': ('id-smime-ct-authData', ),
'1.2.840.113549.1.9.16.1.3': ('id-smime-ct-publishCert', ),
'1.2.840.113549.1.9.16.1.4': ('id-smime-ct-TSTInfo', ),
'1.2.840.113549.1.9.16.1.5': ('id-smime-ct-TDTInfo', ),
'1.2.840.113549.1.9.16.1.6': ('id-smime-ct-contentInfo', ),
'1.2.840.113549.1.9.16.1.7': ('id-smime-ct-DVCSRequestData', ),
'1.2.840.113549.1.9.16.1.8': ('id-smime-ct-DVCSResponseData', ),
'1.2.840.113549.1.9.16.1.9': ('id-smime-ct-compressedData', ),
'1.2.840.113549.1.9.16.1.19': ('id-smime-ct-contentCollection', ),
'1.2.840.113549.1.9.16.1.23': ('id-smime-ct-authEnvelopedData', ),
'1.2.840.113549.1.9.16.1.27': ('id-ct-asciiTextWithCRLF', ),
'1.2.840.113549.1.9.16.1.28': ('id-ct-xml', ),
'1.2.840.113549.1.9.16.2': ('id-smime-aa', ),
'1.2.840.113549.1.9.16.2.1': ('id-smime-aa-receiptRequest', ),
'1.2.840.113549.1.9.16.2.2': ('id-smime-aa-securityLabel', ),
'1.2.840.113549.1.9.16.2.3': ('id-smime-aa-mlExpandHistory', ),
'1.2.840.113549.1.9.16.2.4': ('id-smime-aa-contentHint', ),
'1.2.840.113549.1.9.16.2.5': ('id-smime-aa-msgSigDigest', ),
'1.2.840.113549.1.9.16.2.6': ('id-smime-aa-encapContentType', ),
'1.2.840.113549.1.9.16.2.7': ('id-smime-aa-contentIdentifier', ),
'1.2.840.113549.1.9.16.2.8': ('id-smime-aa-macValue', ),
'1.2.840.113549.1.9.16.2.9': ('id-smime-aa-equivalentLabels', ),
'1.2.840.113549.1.9.16.2.10': ('id-smime-aa-contentReference', ),
'1.2.840.113549.1.9.16.2.11': ('id-smime-aa-encrypKeyPref', ),
'1.2.840.113549.1.9.16.2.12': ('id-smime-aa-signingCertificate', ),
'1.2.840.113549.1.9.16.2.13': ('id-smime-aa-smimeEncryptCerts', ),
'1.2.840.113549.1.9.16.2.14': ('id-smime-aa-timeStampToken', ),
'1.2.840.113549.1.9.16.2.15': ('id-smime-aa-ets-sigPolicyId', ),
'1.2.840.113549.1.9.16.2.16': ('id-smime-aa-ets-commitmentType', ),
'1.2.840.113549.1.9.16.2.17': ('id-smime-aa-ets-signerLocation', ),
'1.2.840.113549.1.9.16.2.18': ('id-smime-aa-ets-signerAttr', ),
'1.2.840.113549.1.9.16.2.19': ('id-smime-aa-ets-otherSigCert', ),
'1.2.840.113549.1.9.16.2.20': ('id-smime-aa-ets-contentTimestamp', ),
'1.2.840.113549.1.9.16.2.21': ('id-smime-aa-ets-CertificateRefs', ),
'1.2.840.113549.1.9.16.2.22': ('id-smime-aa-ets-RevocationRefs', ),
'1.2.840.113549.1.9.16.2.23': ('id-smime-aa-ets-certValues', ),
'1.2.840.113549.1.9.16.2.24': ('id-smime-aa-ets-revocationValues', ),
'1.2.840.113549.1.9.16.2.25': ('id-smime-aa-ets-escTimeStamp', ),
'1.2.840.113549.1.9.16.2.26': ('id-smime-aa-ets-certCRLTimestamp', ),
'1.2.840.113549.1.9.16.2.27': ('id-smime-aa-ets-archiveTimeStamp', ),
'1.2.840.113549.1.9.16.2.28': ('id-smime-aa-signatureType', ),
'1.2.840.113549.1.9.16.2.29': ('id-smime-aa-dvcs-dvc', ),
'1.2.840.113549.1.9.16.2.47': ('id-smime-aa-signingCertificateV2', ),
'1.2.840.113549.1.9.16.3': ('id-smime-alg', ),
'1.2.840.113549.1.9.16.3.1': ('id-smime-alg-ESDHwith3DES', ),
'1.2.840.113549.1.9.16.3.2': ('id-smime-alg-ESDHwithRC2', ),
'1.2.840.113549.1.9.16.3.3': ('id-smime-alg-3DESwrap', ),
'1.2.840.113549.1.9.16.3.4': ('id-smime-alg-RC2wrap', ),
'1.2.840.113549.1.9.16.3.5': ('id-smime-alg-ESDH', ),
'1.2.840.113549.1.9.16.3.6': ('id-smime-alg-CMS3DESwrap', ),
'1.2.840.113549.1.9.16.3.7': ('id-smime-alg-CMSRC2wrap', ),
'1.2.840.113549.1.9.16.3.8': ('zlib compression', 'ZLIB'),
'1.2.840.113549.1.9.16.3.9': ('id-alg-PWRI-KEK', ),
'1.2.840.113549.1.9.16.4': ('id-smime-cd', ),
'1.2.840.113549.1.9.16.4.1': ('id-smime-cd-ldap', ),
'1.2.840.113549.1.9.16.5': ('id-smime-spq', ),
'1.2.840.113549.1.9.16.5.1': ('id-smime-spq-ets-sqt-uri', ),
'1.2.840.113549.1.9.16.5.2': ('id-smime-spq-ets-sqt-unotice', ),
'1.2.840.113549.1.9.16.6': ('id-smime-cti', ),
'1.2.840.113549.1.9.16.6.1': ('id-smime-cti-ets-proofOfOrigin', ),
'1.2.840.113549.1.9.16.6.2': ('id-smime-cti-ets-proofOfReceipt', ),
'1.2.840.113549.1.9.16.6.3': ('id-smime-cti-ets-proofOfDelivery', ),
'1.2.840.113549.1.9.16.6.4': ('id-smime-cti-ets-proofOfSender', ),
'1.2.840.113549.1.9.16.6.5': ('id-smime-cti-ets-proofOfApproval', ),
'1.2.840.113549.1.9.16.6.6': ('id-smime-cti-ets-proofOfCreation', ),
'1.2.840.113549.1.9.20': ('friendlyName', ),
'1.2.840.113549.1.9.21': ('localKeyID', ),
'1.2.840.113549.1.9.22': ('certTypes', ),
'1.2.840.113549.1.9.22.1': ('x509Certificate', ),
'1.2.840.113549.1.9.22.2': ('sdsiCertificate', ),
'1.2.840.113549.1.9.23': ('crlTypes', ),
'1.2.840.113549.1.9.23.1': ('x509Crl', ),
'1.2.840.113549.1.12': ('pkcs12', ),
'1.2.840.113549.1.12.1': ('pkcs12-pbeids', ),
'1.2.840.113549.1.12.1.1': ('pbeWithSHA1And128BitRC4', 'PBE-SHA1-RC4-128'),
'1.2.840.113549.1.12.1.2': ('pbeWithSHA1And40BitRC4', 'PBE-SHA1-RC4-40'),
'1.2.840.113549.1.12.1.3': ('pbeWithSHA1And3-KeyTripleDES-CBC', 'PBE-SHA1-3DES'),
'1.2.840.113549.1.12.1.4': ('pbeWithSHA1And2-KeyTripleDES-CBC', 'PBE-SHA1-2DES'),
'1.2.840.113549.1.12.1.5': ('pbeWithSHA1And128BitRC2-CBC', 'PBE-SHA1-RC2-128'),
'1.2.840.113549.1.12.1.6': ('pbeWithSHA1And40BitRC2-CBC', 'PBE-SHA1-RC2-40'),
'1.2.840.113549.1.12.10': ('pkcs12-Version1', ),
'1.2.840.113549.1.12.10.1': ('pkcs12-BagIds', ),
'1.2.840.113549.1.12.10.1.1': ('keyBag', ),
'1.2.840.113549.1.12.10.1.2': ('pkcs8ShroudedKeyBag', ),
'1.2.840.113549.1.12.10.1.3': ('certBag', ),
'1.2.840.113549.1.12.10.1.4': ('crlBag', ),
'1.2.840.113549.1.12.10.1.5': ('secretBag', ),
'1.2.840.113549.1.12.10.1.6': ('safeContentsBag', ),
'1.2.840.113549.2.2': ('md2', 'MD2'),
'1.2.840.113549.2.4': ('md4', 'MD4'),
'1.2.840.113549.2.5': ('md5', 'MD5'),
'1.2.840.113549.2.6': ('hmacWithMD5', ),
'1.2.840.113549.2.7': ('hmacWithSHA1', ),
'1.2.840.113549.2.8': ('hmacWithSHA224', ),
'1.2.840.113549.2.9': ('hmacWithSHA256', ),
'1.2.840.113549.2.10': ('hmacWithSHA384', ),
'1.2.840.113549.2.11': ('hmacWithSHA512', ),
'1.2.840.113549.2.12': ('hmacWithSHA512-224', ),
'1.2.840.113549.2.13': ('hmacWithSHA512-256', ),
'1.2.840.113549.3.2': ('rc2-cbc', 'RC2-CBC'),
'1.2.840.113549.3.4': ('rc4', 'RC4'),
'1.2.840.113549.3.7': ('des-ede3-cbc', 'DES-EDE3-CBC'),
'1.2.840.113549.3.8': ('rc5-cbc', 'RC5-CBC'),
'1.2.840.113549.3.10': ('des-cdmf', 'DES-CDMF'),
'1.3': ('identified-organization', 'org', 'ORG'),
'1.3.6': ('dod', 'DOD'),
'1.3.6.1': ('iana', 'IANA', 'internet'),
'1.3.6.1.1': ('Directory', 'directory'),
'1.3.6.1.2': ('Management', 'mgmt'),
'1.3.6.1.3': ('Experimental', 'experimental'),
'1.3.6.1.4': ('Private', 'private'),
'1.3.6.1.4.1': ('Enterprises', 'enterprises'),
'1.3.6.1.4.1.188.7.1.1.2': ('idea-cbc', 'IDEA-CBC'),
'1.3.6.1.4.1.311.2.1.14': ('Microsoft Extension Request', 'msExtReq'),
'1.3.6.1.4.1.311.2.1.21': ('Microsoft Individual Code Signing', 'msCodeInd'),
'1.3.6.1.4.1.311.2.1.22': ('Microsoft Commercial Code Signing', 'msCodeCom'),
'1.3.6.1.4.1.311.10.3.1': ('Microsoft Trust List Signing', 'msCTLSign'),
'1.3.6.1.4.1.311.10.3.3': ('Microsoft Server Gated Crypto', 'msSGC'),
'1.3.6.1.4.1.311.10.3.4': ('Microsoft Encrypted File System', 'msEFS'),
'1.3.6.1.4.1.311.17.1': ('Microsoft CSP Name', 'CSPName'),
'1.3.6.1.4.1.311.17.2': ('Microsoft Local Key set', 'LocalKeySet'),
'1.3.6.1.4.1.311.20.2.2': ('Microsoft Smartcardlogin', 'msSmartcardLogin'),
'1.3.6.1.4.1.311.20.2.3': ('Microsoft Universal Principal Name', 'msUPN'),
'1.3.6.1.4.1.311.60.2.1.1': ('jurisdictionLocalityName', 'jurisdictionL'),
'1.3.6.1.4.1.311.60.2.1.2': ('jurisdictionStateOrProvinceName', 'jurisdictionST'),
'1.3.6.1.4.1.311.60.2.1.3': ('jurisdictionCountryName', 'jurisdictionC'),
'1.3.6.1.4.1.1466.344': ('dcObject', 'dcobject'),
'1.3.6.1.4.1.1722.12.2.1.16': ('blake2b512', 'BLAKE2b512'),
'1.3.6.1.4.1.1722.12.2.2.8': ('blake2s256', 'BLAKE2s256'),
'1.3.6.1.4.1.3029.1.2': ('bf-cbc', 'BF-CBC'),
'1.3.6.1.4.1.11129.2.4.2': ('CT Precertificate SCTs', 'ct_precert_scts'),
'1.3.6.1.4.1.11129.2.4.3': ('CT Precertificate Poison', 'ct_precert_poison'),
'1.3.6.1.4.1.11129.2.4.4': ('CT Precertificate Signer', 'ct_precert_signer'),
'1.3.6.1.4.1.11129.2.4.5': ('CT Certificate SCTs', 'ct_cert_scts'),
'1.3.6.1.4.1.11591.4.11': ('scrypt', 'id-scrypt'),
'1.3.6.1.5': ('Security', 'security'),
'1.3.6.1.5.2.3': ('id-pkinit', ),
'1.3.6.1.5.2.3.4': ('PKINIT Client Auth', 'pkInitClientAuth'),
'1.3.6.1.5.2.3.5': ('Signing KDC Response', 'pkInitKDC'),
'1.3.6.1.5.5.7': ('PKIX', ),
'1.3.6.1.5.5.7.0': ('id-pkix-mod', ),
'1.3.6.1.5.5.7.0.1': ('id-pkix1-explicit-88', ),
'1.3.6.1.5.5.7.0.2': ('id-pkix1-implicit-88', ),
'1.3.6.1.5.5.7.0.3': ('id-pkix1-explicit-93', ),
'1.3.6.1.5.5.7.0.4': ('id-pkix1-implicit-93', ),
'1.3.6.1.5.5.7.0.5': ('id-mod-crmf', ),
'1.3.6.1.5.5.7.0.6': ('id-mod-cmc', ),
'1.3.6.1.5.5.7.0.7': ('id-mod-kea-profile-88', ),
'1.3.6.1.5.5.7.0.8': ('id-mod-kea-profile-93', ),
'1.3.6.1.5.5.7.0.9': ('id-mod-cmp', ),
'1.3.6.1.5.5.7.0.10': ('id-mod-qualified-cert-88', ),
'1.3.6.1.5.5.7.0.11': ('id-mod-qualified-cert-93', ),
'1.3.6.1.5.5.7.0.12': ('id-mod-attribute-cert', ),
'1.3.6.1.5.5.7.0.13': ('id-mod-timestamp-protocol', ),
'1.3.6.1.5.5.7.0.14': ('id-mod-ocsp', ),
'1.3.6.1.5.5.7.0.15': ('id-mod-dvcs', ),
'1.3.6.1.5.5.7.0.16': ('id-mod-cmp2000', ),
'1.3.6.1.5.5.7.1': ('id-pe', ),
'1.3.6.1.5.5.7.1.1': ('Authority Information Access', 'authorityInfoAccess'),
'1.3.6.1.5.5.7.1.2': ('Biometric Info', 'biometricInfo'),
'1.3.6.1.5.5.7.1.3': ('qcStatements', ),
'1.3.6.1.5.5.7.1.4': ('ac-auditEntity', ),
'1.3.6.1.5.5.7.1.5': ('ac-targeting', ),
'1.3.6.1.5.5.7.1.6': ('aaControls', ),
'1.3.6.1.5.5.7.1.7': ('sbgp-ipAddrBlock', ),
'1.3.6.1.5.5.7.1.8': ('sbgp-autonomousSysNum', ),
'1.3.6.1.5.5.7.1.9': ('sbgp-routerIdentifier', ),
'1.3.6.1.5.5.7.1.10': ('ac-proxying', ),
'1.3.6.1.5.5.7.1.11': ('Subject Information Access', 'subjectInfoAccess'),
'1.3.6.1.5.5.7.1.14': ('Proxy Certificate Information', 'proxyCertInfo'),
'1.3.6.1.5.5.7.1.24': ('TLS Feature', 'tlsfeature'),
'1.3.6.1.5.5.7.2': ('id-qt', ),
'1.3.6.1.5.5.7.2.1': ('Policy Qualifier CPS', 'id-qt-cps'),
'1.3.6.1.5.5.7.2.2': ('Policy Qualifier User Notice', 'id-qt-unotice'),
'1.3.6.1.5.5.7.2.3': ('textNotice', ),
'1.3.6.1.5.5.7.3': ('id-kp', ),
'1.3.6.1.5.5.7.3.1': ('TLS Web Server Authentication', 'serverAuth'),
'1.3.6.1.5.5.7.3.2': ('TLS Web Client Authentication', 'clientAuth'),
'1.3.6.1.5.5.7.3.3': ('Code Signing', 'codeSigning'),
'1.3.6.1.5.5.7.3.4': ('E-mail Protection', 'emailProtection'),
'1.3.6.1.5.5.7.3.5': ('IPSec End System', 'ipsecEndSystem'),
'1.3.6.1.5.5.7.3.6': ('IPSec Tunnel', 'ipsecTunnel'),
'1.3.6.1.5.5.7.3.7': ('IPSec User', 'ipsecUser'),
'1.3.6.1.5.5.7.3.8': ('Time Stamping', 'timeStamping'),
'1.3.6.1.5.5.7.3.9': ('OCSP Signing', 'OCSPSigning'),
'1.3.6.1.5.5.7.3.10': ('dvcs', 'DVCS'),
'1.3.6.1.5.5.7.3.17': ('ipsec Internet Key Exchange', 'ipsecIKE'),
'1.3.6.1.5.5.7.3.18': ('Ctrl/provision WAP Access', 'capwapAC'),
'1.3.6.1.5.5.7.3.19': ('Ctrl/Provision WAP Termination', 'capwapWTP'),
'1.3.6.1.5.5.7.3.21': ('SSH Client', 'secureShellClient'),
'1.3.6.1.5.5.7.3.22': ('SSH Server', 'secureShellServer'),
'1.3.6.1.5.5.7.3.23': ('Send Router', 'sendRouter'),
'1.3.6.1.5.5.7.3.24': ('Send Proxied Router', 'sendProxiedRouter'),
'1.3.6.1.5.5.7.3.25': ('Send Owner', 'sendOwner'),
'1.3.6.1.5.5.7.3.26': ('Send Proxied Owner', 'sendProxiedOwner'),
'1.3.6.1.5.5.7.3.27': ('CMC Certificate Authority', 'cmcCA'),
'1.3.6.1.5.5.7.3.28': ('CMC Registration Authority', 'cmcRA'),
'1.3.6.1.5.5.7.4': ('id-it', ),
'1.3.6.1.5.5.7.4.1': ('id-it-caProtEncCert', ),
'1.3.6.1.5.5.7.4.2': ('id-it-signKeyPairTypes', ),
'1.3.6.1.5.5.7.4.3': ('id-it-encKeyPairTypes', ),
'1.3.6.1.5.5.7.4.4': ('id-it-preferredSymmAlg', ),
'1.3.6.1.5.5.7.4.5': ('id-it-caKeyUpdateInfo', ),
'1.3.6.1.5.5.7.4.6': ('id-it-currentCRL', ),
'1.3.6.1.5.5.7.4.7': ('id-it-unsupportedOIDs', ),
'1.3.6.1.5.5.7.4.8': ('id-it-subscriptionRequest', ),
'1.3.6.1.5.5.7.4.9': ('id-it-subscriptionResponse', ),
'1.3.6.1.5.5.7.4.10': ('id-it-keyPairParamReq', ),
'1.3.6.1.5.5.7.4.11': ('id-it-keyPairParamRep', ),
'1.3.6.1.5.5.7.4.12': ('id-it-revPassphrase', ),
'1.3.6.1.5.5.7.4.13': ('id-it-implicitConfirm', ),
'1.3.6.1.5.5.7.4.14': ('id-it-confirmWaitTime', ),
'1.3.6.1.5.5.7.4.15': ('id-it-origPKIMessage', ),
'1.3.6.1.5.5.7.4.16': ('id-it-suppLangTags', ),
'1.3.6.1.5.5.7.5': ('id-pkip', ),
'1.3.6.1.5.5.7.5.1': ('id-regCtrl', ),
'1.3.6.1.5.5.7.5.1.1': ('id-regCtrl-regToken', ),
'1.3.6.1.5.5.7.5.1.2': ('id-regCtrl-authenticator', ),
'1.3.6.1.5.5.7.5.1.3': ('id-regCtrl-pkiPublicationInfo', ),
'1.3.6.1.5.5.7.5.1.4': ('id-regCtrl-pkiArchiveOptions', ),
'1.3.6.1.5.5.7.5.1.5': ('id-regCtrl-oldCertID', ),
'1.3.6.1.5.5.7.5.1.6': ('id-regCtrl-protocolEncrKey', ),
'1.3.6.1.5.5.7.5.2': ('id-regInfo', ),
'1.3.6.1.5.5.7.5.2.1': ('id-regInfo-utf8Pairs', ),
'1.3.6.1.5.5.7.5.2.2': ('id-regInfo-certReq', ),
'1.3.6.1.5.5.7.6': ('id-alg', ),
'1.3.6.1.5.5.7.6.1': ('id-alg-des40', ),
'1.3.6.1.5.5.7.6.2': ('id-alg-noSignature', ),
'1.3.6.1.5.5.7.6.3': ('id-alg-dh-sig-hmac-sha1', ),
'1.3.6.1.5.5.7.6.4': ('id-alg-dh-pop', ),
'1.3.6.1.5.5.7.7': ('id-cmc', ),
'1.3.6.1.5.5.7.7.1': ('id-cmc-statusInfo', ),
'1.3.6.1.5.5.7.7.2': ('id-cmc-identification', ),
'1.3.6.1.5.5.7.7.3': ('id-cmc-identityProof', ),
'1.3.6.1.5.5.7.7.4': ('id-cmc-dataReturn', ),
'1.3.6.1.5.5.7.7.5': ('id-cmc-transactionId', ),
'1.3.6.1.5.5.7.7.6': ('id-cmc-senderNonce', ),
'1.3.6.1.5.5.7.7.7': ('id-cmc-recipientNonce', ),
'1.3.6.1.5.5.7.7.8': ('id-cmc-addExtensions', ),
'1.3.6.1.5.5.7.7.9': ('id-cmc-encryptedPOP', ),
'1.3.6.1.5.5.7.7.10': ('id-cmc-decryptedPOP', ),
'1.3.6.1.5.5.7.7.11': ('id-cmc-lraPOPWitness', ),
'1.3.6.1.5.5.7.7.15': ('id-cmc-getCert', ),
'1.3.6.1.5.5.7.7.16': ('id-cmc-getCRL', ),
'1.3.6.1.5.5.7.7.17': ('id-cmc-revokeRequest', ),
'1.3.6.1.5.5.7.7.18': ('id-cmc-regInfo', ),
'1.3.6.1.5.5.7.7.19': ('id-cmc-responseInfo', ),
'1.3.6.1.5.5.7.7.21': ('id-cmc-queryPending', ),
'1.3.6.1.5.5.7.7.22': ('id-cmc-popLinkRandom', ),
'1.3.6.1.5.5.7.7.23': ('id-cmc-popLinkWitness', ),
'1.3.6.1.5.5.7.7.24': ('id-cmc-confirmCertAcceptance', ),
'1.3.6.1.5.5.7.8': ('id-on', ),
'1.3.6.1.5.5.7.8.1': ('id-on-personalData', ),
'1.3.6.1.5.5.7.8.3': ('Permanent Identifier', 'id-on-permanentIdentifier'),
'1.3.6.1.5.5.7.9': ('id-pda', ),
'1.3.6.1.5.5.7.9.1': ('id-pda-dateOfBirth', ),
'1.3.6.1.5.5.7.9.2': ('id-pda-placeOfBirth', ),
'1.3.6.1.5.5.7.9.3': ('id-pda-gender', ),
'1.3.6.1.5.5.7.9.4': ('id-pda-countryOfCitizenship', ),
'1.3.6.1.5.5.7.9.5': ('id-pda-countryOfResidence', ),
'1.3.6.1.5.5.7.10': ('id-aca', ),
'1.3.6.1.5.5.7.10.1': ('id-aca-authenticationInfo', ),
'1.3.6.1.5.5.7.10.2': ('id-aca-accessIdentity', ),
'1.3.6.1.5.5.7.10.3': ('id-aca-chargingIdentity', ),
'1.3.6.1.5.5.7.10.4': ('id-aca-group', ),
'1.3.6.1.5.5.7.10.5': ('id-aca-role', ),
'1.3.6.1.5.5.7.10.6': ('id-aca-encAttrs', ),
'1.3.6.1.5.5.7.11': ('id-qcs', ),
'1.3.6.1.5.5.7.11.1': ('id-qcs-pkixQCSyntax-v1', ),
'1.3.6.1.5.5.7.12': ('id-cct', ),
'1.3.6.1.5.5.7.12.1': ('id-cct-crs', ),
'1.3.6.1.5.5.7.12.2': ('id-cct-PKIData', ),
'1.3.6.1.5.5.7.12.3': ('id-cct-PKIResponse', ),
'1.3.6.1.5.5.7.21': ('id-ppl', ),
'1.3.6.1.5.5.7.21.0': ('Any language', 'id-ppl-anyLanguage'),
'1.3.6.1.5.5.7.21.1': ('Inherit all', 'id-ppl-inheritAll'),
'1.3.6.1.5.5.7.21.2': ('Independent', 'id-ppl-independent'),
'1.3.6.1.5.5.7.48': ('id-ad', ),
'1.3.6.1.5.5.7.48.1': ('OCSP', 'OCSP', 'id-pkix-OCSP'),
'1.3.6.1.5.5.7.48.1.1': ('Basic OCSP Response', 'basicOCSPResponse'),
'1.3.6.1.5.5.7.48.1.2': ('OCSP Nonce', 'Nonce'),
'1.3.6.1.5.5.7.48.1.3': ('OCSP CRL ID', 'CrlID'),
'1.3.6.1.5.5.7.48.1.4': ('Acceptable OCSP Responses', 'acceptableResponses'),
'1.3.6.1.5.5.7.48.1.5': ('OCSP No Check', 'noCheck'),
'1.3.6.1.5.5.7.48.1.6': ('OCSP Archive Cutoff', 'archiveCutoff'),
'1.3.6.1.5.5.7.48.1.7': ('OCSP Service Locator', 'serviceLocator'),
'1.3.6.1.5.5.7.48.1.8': ('Extended OCSP Status', 'extendedStatus'),
'1.3.6.1.5.5.7.48.1.9': ('valid', ),
'1.3.6.1.5.5.7.48.1.10': ('path', ),
'1.3.6.1.5.5.7.48.1.11': ('Trust Root', 'trustRoot'),
'1.3.6.1.5.5.7.48.2': ('CA Issuers', 'caIssuers'),
'1.3.6.1.5.5.7.48.3': ('AD Time Stamping', 'ad_timestamping'),
'1.3.6.1.5.5.7.48.4': ('ad dvcs', 'AD_DVCS'),
'1.3.6.1.5.5.7.48.5': ('CA Repository', 'caRepository'),
'1.3.6.1.5.5.8.1.1': ('hmac-md5', 'HMAC-MD5'),
'1.3.6.1.5.5.8.1.2': ('hmac-sha1', 'HMAC-SHA1'),
'1.3.6.1.6': ('SNMPv2', 'snmpv2'),
'1.3.6.1.7': ('Mail', ),
'1.3.6.1.7.1': ('MIME MHS', 'mime-mhs'),
'1.3.6.1.7.1.1': ('mime-mhs-headings', 'mime-mhs-headings'),
'1.3.6.1.7.1.1.1': ('id-hex-partial-message', 'id-hex-partial-message'),
'1.3.6.1.7.1.1.2': ('id-hex-multipart-message', 'id-hex-multipart-message'),
'1.3.6.1.7.1.2': ('mime-mhs-bodies', 'mime-mhs-bodies'),
'1.3.14.3.2': ('algorithm', 'algorithm'),
'1.3.14.3.2.3': ('md5WithRSA', 'RSA-NP-MD5'),
'1.3.14.3.2.6': ('des-ecb', 'DES-ECB'),
'1.3.14.3.2.7': ('des-cbc', 'DES-CBC'),
'1.3.14.3.2.8': ('des-ofb', 'DES-OFB'),
'1.3.14.3.2.9': ('des-cfb', 'DES-CFB'),
'1.3.14.3.2.11': ('rsaSignature', ),
'1.3.14.3.2.12': ('dsaEncryption-old', 'DSA-old'),
'1.3.14.3.2.13': ('dsaWithSHA', 'DSA-SHA'),
'1.3.14.3.2.15': ('shaWithRSAEncryption', 'RSA-SHA'),
'1.3.14.3.2.17': ('des-ede', 'DES-EDE'),
'1.3.14.3.2.18': ('sha', 'SHA'),
'1.3.14.3.2.26': ('sha1', 'SHA1'),
'1.3.14.3.2.27': ('dsaWithSHA1-old', 'DSA-SHA1-old'),
'1.3.14.3.2.29': ('sha1WithRSA', 'RSA-SHA1-2'),
'1.3.36.3.2.1': ('ripemd160', 'RIPEMD160'),
'1.3.36.3.3.1.2': ('ripemd160WithRSA', 'RSA-RIPEMD160'),
'1.3.36.3.3.2.8.1.1.1': ('brainpoolP160r1', ),
'1.3.36.3.3.2.8.1.1.2': ('brainpoolP160t1', ),
'1.3.36.3.3.2.8.1.1.3': ('brainpoolP192r1', ),
'1.3.36.3.3.2.8.1.1.4': ('brainpoolP192t1', ),
'1.3.36.3.3.2.8.1.1.5': ('brainpoolP224r1', ),
'1.3.36.3.3.2.8.1.1.6': ('brainpoolP224t1', ),
'1.3.36.3.3.2.8.1.1.7': ('brainpoolP256r1', ),
'1.3.36.3.3.2.8.1.1.8': ('brainpoolP256t1', ),
'1.3.36.3.3.2.8.1.1.9': ('brainpoolP320r1', ),
'1.3.36.3.3.2.8.1.1.10': ('brainpoolP320t1', ),
'1.3.36.3.3.2.8.1.1.11': ('brainpoolP384r1', ),
'1.3.36.3.3.2.8.1.1.12': ('brainpoolP384t1', ),
'1.3.36.3.3.2.8.1.1.13': ('brainpoolP512r1', ),
'1.3.36.3.3.2.8.1.1.14': ('brainpoolP512t1', ),
'1.3.36.8.3.3': ('Professional Information or basis for Admission', 'x509ExtAdmission'),
'1.3.101.1.4.1': ('Strong Extranet ID', 'SXNetID'),
'1.3.101.110': ('X25519', ),
'1.3.101.111': ('X448', ),
'1.3.101.112': ('ED25519', ),
'1.3.101.113': ('ED448', ),
'1.3.111': ('ieee', ),
'1.3.111.2.1619': ('IEEE Security in Storage Working Group', 'ieee-siswg'),
'1.3.111.2.1619.0.1.1': ('aes-128-xts', 'AES-128-XTS'),
'1.3.111.2.1619.0.1.2': ('aes-256-xts', 'AES-256-XTS'),
'1.3.132': ('certicom-arc', ),
'1.3.132.0': ('secg_ellipticCurve', ),
'1.3.132.0.1': ('sect163k1', ),
'1.3.132.0.2': ('sect163r1', ),
'1.3.132.0.3': ('sect239k1', ),
'1.3.132.0.4': ('sect113r1', ),
'1.3.132.0.5': ('sect113r2', ),
'1.3.132.0.6': ('secp112r1', ),
'1.3.132.0.7': ('secp112r2', ),
'1.3.132.0.8': ('secp160r1', ),
'1.3.132.0.9': ('secp160k1', ),
'1.3.132.0.10': ('secp256k1', ),
'1.3.132.0.15': ('sect163r2', ),
'1.3.132.0.16': ('sect283k1', ),
'1.3.132.0.17': ('sect283r1', ),
'1.3.132.0.22': ('sect131r1', ),
'1.3.132.0.23': ('sect131r2', ),
'1.3.132.0.24': ('sect193r1', ),
'1.3.132.0.25': ('sect193r2', ),
'1.3.132.0.26': ('sect233k1', ),
'1.3.132.0.27': ('sect233r1', ),
'1.3.132.0.28': ('secp128r1', ),
'1.3.132.0.29': ('secp128r2', ),
'1.3.132.0.30': ('secp160r2', ),
'1.3.132.0.31': ('secp192k1', ),
'1.3.132.0.32': ('secp224k1', ),
'1.3.132.0.33': ('secp224r1', ),
'1.3.132.0.34': ('secp384r1', ),
'1.3.132.0.35': ('secp521r1', ),
'1.3.132.0.36': ('sect409k1', ),
'1.3.132.0.37': ('sect409r1', ),
'1.3.132.0.38': ('sect571k1', ),
'1.3.132.0.39': ('sect571r1', ),
'1.3.132.1': ('secg-scheme', ),
'1.3.132.1.11.0': ('dhSinglePass-stdDH-sha224kdf-scheme', ),
'1.3.132.1.11.1': ('dhSinglePass-stdDH-sha256kdf-scheme', ),
'1.3.132.1.11.2': ('dhSinglePass-stdDH-sha384kdf-scheme', ),
'1.3.132.1.11.3': ('dhSinglePass-stdDH-sha512kdf-scheme', ),
'1.3.132.1.14.0': ('dhSinglePass-cofactorDH-sha224kdf-scheme', ),
'1.3.132.1.14.1': ('dhSinglePass-cofactorDH-sha256kdf-scheme', ),
'1.3.132.1.14.2': ('dhSinglePass-cofactorDH-sha384kdf-scheme', ),
'1.3.132.1.14.3': ('dhSinglePass-cofactorDH-sha512kdf-scheme', ),
'1.3.133.16.840.63.0': ('x9-63-scheme', ),
'1.3.133.16.840.63.0.2': ('dhSinglePass-stdDH-sha1kdf-scheme', ),
'1.3.133.16.840.63.0.3': ('dhSinglePass-cofactorDH-sha1kdf-scheme', ),
'2': ('joint-iso-itu-t', 'JOINT-ISO-ITU-T', 'joint-iso-ccitt'),
'2.5': ('directory services (X.500)', 'X500'),
'2.5.1.5': ('Selected Attribute Types', 'selected-attribute-types'),
'2.5.1.5.55': ('clearance', ),
'2.5.4': ('X509', ),
'2.5.4.3': ('commonName', 'CN'),
'2.5.4.4': ('surname', 'SN'),
'2.5.4.5': ('serialNumber', ),
'2.5.4.6': ('countryName', 'C'),
'2.5.4.7': ('localityName', 'L'),
'2.5.4.8': ('stateOrProvinceName', 'ST'),
'2.5.4.9': ('streetAddress', 'street'),
'2.5.4.10': ('organizationName', 'O'),
'2.5.4.11': ('organizationalUnitName', 'OU'),
'2.5.4.12': ('title', 'title'),
'2.5.4.13': ('description', ),
'2.5.4.14': ('searchGuide', ),
'2.5.4.15': ('businessCategory', ),
'2.5.4.16': ('postalAddress', ),
'2.5.4.17': ('postalCode', ),
'2.5.4.18': ('postOfficeBox', ),
'2.5.4.19': ('physicalDeliveryOfficeName', ),
'2.5.4.20': ('telephoneNumber', ),
'2.5.4.21': ('telexNumber', ),
'2.5.4.22': ('teletexTerminalIdentifier', ),
'2.5.4.23': ('facsimileTelephoneNumber', ),
'2.5.4.24': ('x121Address', ),
'2.5.4.25': ('internationaliSDNNumber', ),
'2.5.4.26': ('registeredAddress', ),
'2.5.4.27': ('destinationIndicator', ),
'2.5.4.28': ('preferredDeliveryMethod', ),
'2.5.4.29': ('presentationAddress', ),
'2.5.4.30': ('supportedApplicationContext', ),
'2.5.4.31': ('member', ),
'2.5.4.32': ('owner', ),
'2.5.4.33': ('roleOccupant', ),
'2.5.4.34': ('seeAlso', ),
'2.5.4.35': ('userPassword', ),
'2.5.4.36': ('userCertificate', ),
'2.5.4.37': ('cACertificate', ),
'2.5.4.38': ('authorityRevocationList', ),
'2.5.4.39': ('certificateRevocationList', ),
'2.5.4.40': ('crossCertificatePair', ),
'2.5.4.41': ('name', 'name'),
'2.5.4.42': ('givenName', 'GN'),
'2.5.4.43': ('initials', 'initials'),
'2.5.4.44': ('generationQualifier', ),
'2.5.4.45': ('x500UniqueIdentifier', ),
'2.5.4.46': ('dnQualifier', 'dnQualifier'),
'2.5.4.47': ('enhancedSearchGuide', ),
'2.5.4.48': ('protocolInformation', ),
'2.5.4.49': ('distinguishedName', ),
'2.5.4.50': ('uniqueMember', ),
'2.5.4.51': ('houseIdentifier', ),
'2.5.4.52': ('supportedAlgorithms', ),
'2.5.4.53': ('deltaRevocationList', ),
'2.5.4.54': ('dmdName', ),
'2.5.4.65': ('pseudonym', ),
'2.5.4.72': ('role', 'role'),
'2.5.4.97': ('organizationIdentifier', ),
'2.5.4.98': ('countryCode3c', 'c3'),
'2.5.4.99': ('countryCode3n', 'n3'),
'2.5.4.100': ('dnsName', ),
'2.5.8': ('directory services - algorithms', 'X500algorithms'),
'2.5.8.1.1': ('rsa', 'RSA'),
'2.5.8.3.100': ('mdc2WithRSA', 'RSA-MDC2'),
'2.5.8.3.101': ('mdc2', 'MDC2'),
'2.5.29': ('id-ce', ),
'2.5.29.9': ('X509v3 Subject Directory Attributes', 'subjectDirectoryAttributes'),
'2.5.29.14': ('X509v3 Subject Key Identifier', 'subjectKeyIdentifier'),
'2.5.29.15': ('X509v3 Key Usage', 'keyUsage'),
'2.5.29.16': ('X509v3 Private Key Usage Period', 'privateKeyUsagePeriod'),
'2.5.29.17': ('X509v3 Subject Alternative Name', 'subjectAltName'),
'2.5.29.18': ('X509v3 Issuer Alternative Name', 'issuerAltName'),
'2.5.29.19': ('X509v3 Basic Constraints', 'basicConstraints'),
'2.5.29.20': ('X509v3 CRL Number', 'crlNumber'),
'2.5.29.21': ('X509v3 CRL Reason Code', 'CRLReason'),
'2.5.29.23': ('Hold Instruction Code', 'holdInstructionCode'),
'2.5.29.24': ('Invalidity Date', 'invalidityDate'),
'2.5.29.27': ('X509v3 Delta CRL Indicator', 'deltaCRL'),
'2.5.29.28': ('X509v3 Issuing Distribution Point', 'issuingDistributionPoint'),
'2.5.29.29': ('X509v3 Certificate Issuer', 'certificateIssuer'),
'2.5.29.30': ('X509v3 Name Constraints', 'nameConstraints'),
'2.5.29.31': ('X509v3 CRL Distribution Points', 'crlDistributionPoints'),
'2.5.29.32': ('X509v3 Certificate Policies', 'certificatePolicies'),
'2.5.29.32.0': ('X509v3 Any Policy', 'anyPolicy'),
'2.5.29.33': ('X509v3 Policy Mappings', 'policyMappings'),
'2.5.29.35': ('X509v3 Authority Key Identifier', 'authorityKeyIdentifier'),
'2.5.29.36': ('X509v3 Policy Constraints', 'policyConstraints'),
'2.5.29.37': ('X509v3 Extended Key Usage', 'extendedKeyUsage'),
'2.5.29.37.0': ('Any Extended Key Usage', 'anyExtendedKeyUsage'),
'2.5.29.46': ('X509v3 Freshest CRL', 'freshestCRL'),
'2.5.29.54': ('X509v3 Inhibit Any Policy', 'inhibitAnyPolicy'),
'2.5.29.55': ('X509v3 AC Targeting', 'targetInformation'),
'2.5.29.56': ('X509v3 No Revocation Available', 'noRevAvail'),
'2.16.840.1.101.3': ('csor', ),
'2.16.840.1.101.3.4': ('nistAlgorithms', ),
'2.16.840.1.101.3.4.1': ('aes', ),
'2.16.840.1.101.3.4.1.1': ('aes-128-ecb', 'AES-128-ECB'),
'2.16.840.1.101.3.4.1.2': ('aes-128-cbc', 'AES-128-CBC'),
'2.16.840.1.101.3.4.1.3': ('aes-128-ofb', 'AES-128-OFB'),
'2.16.840.1.101.3.4.1.4': ('aes-128-cfb', 'AES-128-CFB'),
'2.16.840.1.101.3.4.1.5': ('id-aes128-wrap', ),
'2.16.840.1.101.3.4.1.6': ('aes-128-gcm', 'id-aes128-GCM'),
'2.16.840.1.101.3.4.1.7': ('aes-128-ccm', 'id-aes128-CCM'),
'2.16.840.1.101.3.4.1.8': ('id-aes128-wrap-pad', ),
'2.16.840.1.101.3.4.1.21': ('aes-192-ecb', 'AES-192-ECB'),
'2.16.840.1.101.3.4.1.22': ('aes-192-cbc', 'AES-192-CBC'),
'2.16.840.1.101.3.4.1.23': ('aes-192-ofb', 'AES-192-OFB'),
'2.16.840.1.101.3.4.1.24': ('aes-192-cfb', 'AES-192-CFB'),
'2.16.840.1.101.3.4.1.25': ('id-aes192-wrap', ),
'2.16.840.1.101.3.4.1.26': ('aes-192-gcm', 'id-aes192-GCM'),
'2.16.840.1.101.3.4.1.27': ('aes-192-ccm', 'id-aes192-CCM'),
'2.16.840.1.101.3.4.1.28': ('id-aes192-wrap-pad', ),
'2.16.840.1.101.3.4.1.41': ('aes-256-ecb', 'AES-256-ECB'),
'2.16.840.1.101.3.4.1.42': ('aes-256-cbc', 'AES-256-CBC'),
'2.16.840.1.101.3.4.1.43': ('aes-256-ofb', 'AES-256-OFB'),
'2.16.840.1.101.3.4.1.44': ('aes-256-cfb', 'AES-256-CFB'),
'2.16.840.1.101.3.4.1.45': ('id-aes256-wrap', ),
'2.16.840.1.101.3.4.1.46': ('aes-256-gcm', 'id-aes256-GCM'),
'2.16.840.1.101.3.4.1.47': ('aes-256-ccm', 'id-aes256-CCM'),
'2.16.840.1.101.3.4.1.48': ('id-aes256-wrap-pad', ),
'2.16.840.1.101.3.4.2': ('nist_hashalgs', ),
'2.16.840.1.101.3.4.2.1': ('sha256', 'SHA256'),
'2.16.840.1.101.3.4.2.2': ('sha384', 'SHA384'),
'2.16.840.1.101.3.4.2.3': ('sha512', 'SHA512'),
'2.16.840.1.101.3.4.2.4': ('sha224', 'SHA224'),
'2.16.840.1.101.3.4.2.5': ('sha512-224', 'SHA512-224'),
'2.16.840.1.101.3.4.2.6': ('sha512-256', 'SHA512-256'),
'2.16.840.1.101.3.4.2.7': ('sha3-224', 'SHA3-224'),
'2.16.840.1.101.3.4.2.8': ('sha3-256', 'SHA3-256'),
'2.16.840.1.101.3.4.2.9': ('sha3-384', 'SHA3-384'),
'2.16.840.1.101.3.4.2.10': ('sha3-512', 'SHA3-512'),
'2.16.840.1.101.3.4.2.11': ('shake128', 'SHAKE128'),
'2.16.840.1.101.3.4.2.12': ('shake256', 'SHAKE256'),
'2.16.840.1.101.3.4.2.13': ('hmac-sha3-224', 'id-hmacWithSHA3-224'),
'2.16.840.1.101.3.4.2.14': ('hmac-sha3-256', 'id-hmacWithSHA3-256'),
'2.16.840.1.101.3.4.2.15': ('hmac-sha3-384', 'id-hmacWithSHA3-384'),
'2.16.840.1.101.3.4.2.16': ('hmac-sha3-512', 'id-hmacWithSHA3-512'),
'2.16.840.1.101.3.4.3': ('dsa_with_sha2', 'sigAlgs'),
'2.16.840.1.101.3.4.3.1': ('dsa_with_SHA224', ),
'2.16.840.1.101.3.4.3.2': ('dsa_with_SHA256', ),
'2.16.840.1.101.3.4.3.3': ('dsa_with_SHA384', 'id-dsa-with-sha384'),
'2.16.840.1.101.3.4.3.4': ('dsa_with_SHA512', 'id-dsa-with-sha512'),
'2.16.840.1.101.3.4.3.5': ('dsa_with_SHA3-224', 'id-dsa-with-sha3-224'),
'2.16.840.1.101.3.4.3.6': ('dsa_with_SHA3-256', 'id-dsa-with-sha3-256'),
'2.16.840.1.101.3.4.3.7': ('dsa_with_SHA3-384', 'id-dsa-with-sha3-384'),
'2.16.840.1.101.3.4.3.8': ('dsa_with_SHA3-512', 'id-dsa-with-sha3-512'),
'2.16.840.1.101.3.4.3.9': ('ecdsa_with_SHA3-224', 'id-ecdsa-with-sha3-224'),
'2.16.840.1.101.3.4.3.10': ('ecdsa_with_SHA3-256', 'id-ecdsa-with-sha3-256'),
'2.16.840.1.101.3.4.3.11': ('ecdsa_with_SHA3-384', 'id-ecdsa-with-sha3-384'),
'2.16.840.1.101.3.4.3.12': ('ecdsa_with_SHA3-512', 'id-ecdsa-with-sha3-512'),
'2.16.840.1.101.3.4.3.13': ('RSA-SHA3-224', 'id-rsassa-pkcs1-v1_5-with-sha3-224'),
'2.16.840.1.101.3.4.3.14': ('RSA-SHA3-256', 'id-rsassa-pkcs1-v1_5-with-sha3-256'),
'2.16.840.1.101.3.4.3.15': ('RSA-SHA3-384', 'id-rsassa-pkcs1-v1_5-with-sha3-384'),
'2.16.840.1.101.3.4.3.16': ('RSA-SHA3-512', 'id-rsassa-pkcs1-v1_5-with-sha3-512'),
'2.16.840.1.113730': ('Netscape Communications Corp.', 'Netscape'),
'2.16.840.1.113730.1': ('Netscape Certificate Extension', 'nsCertExt'),
'2.16.840.1.113730.1.1': ('Netscape Cert Type', 'nsCertType'),
'2.16.840.1.113730.1.2': ('Netscape Base Url', 'nsBaseUrl'),
'2.16.840.1.113730.1.3': ('Netscape Revocation Url', 'nsRevocationUrl'),
'2.16.840.1.113730.1.4': ('Netscape CA Revocation Url', 'nsCaRevocationUrl'),
'2.16.840.1.113730.1.7': ('Netscape Renewal Url', 'nsRenewalUrl'),
'2.16.840.1.113730.1.8': ('Netscape CA Policy Url', 'nsCaPolicyUrl'),
'2.16.840.1.113730.1.12': ('Netscape SSL Server Name', 'nsSslServerName'),
'2.16.840.1.113730.1.13': ('Netscape Comment', 'nsComment'),
'2.16.840.1.113730.2': ('Netscape Data Type', 'nsDataType'),
'2.16.840.1.113730.2.5': ('Netscape Certificate Sequence', 'nsCertSequence'),
'2.16.840.1.113730.4.1': ('Netscape Server Gated Crypto', 'nsSGC'),
'2.23': ('International Organizations', 'international-organizations'),
'2.23.42': ('Secure Electronic Transactions', 'id-set'),
'2.23.42.0': ('content types', 'set-ctype'),
'2.23.42.0.0': ('setct-PANData', ),
'2.23.42.0.1': ('setct-PANToken', ),
'2.23.42.0.2': ('setct-PANOnly', ),
'2.23.42.0.3': ('setct-OIData', ),
'2.23.42.0.4': ('setct-PI', ),
'2.23.42.0.5': ('setct-PIData', ),
'2.23.42.0.6': ('setct-PIDataUnsigned', ),
'2.23.42.0.7': ('setct-HODInput', ),
'2.23.42.0.8': ('setct-AuthResBaggage', ),
'2.23.42.0.9': ('setct-AuthRevReqBaggage', ),
'2.23.42.0.10': ('setct-AuthRevResBaggage', ),
'2.23.42.0.11': ('setct-CapTokenSeq', ),
'2.23.42.0.12': ('setct-PInitResData', ),
'2.23.42.0.13': ('setct-PI-TBS', ),
'2.23.42.0.14': ('setct-PResData', ),
'2.23.42.0.16': ('setct-AuthReqTBS', ),
'2.23.42.0.17': ('setct-AuthResTBS', ),
'2.23.42.0.18': ('setct-AuthResTBSX', ),
'2.23.42.0.19': ('setct-AuthTokenTBS', ),
'2.23.42.0.20': ('setct-CapTokenData', ),
'2.23.42.0.21': ('setct-CapTokenTBS', ),
'2.23.42.0.22': ('setct-AcqCardCodeMsg', ),
'2.23.42.0.23': ('setct-AuthRevReqTBS', ),
'2.23.42.0.24': ('setct-AuthRevResData', ),
'2.23.42.0.25': ('setct-AuthRevResTBS', ),
'2.23.42.0.26': ('setct-CapReqTBS', ),
'2.23.42.0.27': ('setct-CapReqTBSX', ),
'2.23.42.0.28': ('setct-CapResData', ),
'2.23.42.0.29': ('setct-CapRevReqTBS', ),
'2.23.42.0.30': ('setct-CapRevReqTBSX', ),
'2.23.42.0.31': ('setct-CapRevResData', ),
'2.23.42.0.32': ('setct-CredReqTBS', ),
'2.23.42.0.33': ('setct-CredReqTBSX', ),
'2.23.42.0.34': ('setct-CredResData', ),
'2.23.42.0.35': ('setct-CredRevReqTBS', ),
'2.23.42.0.36': ('setct-CredRevReqTBSX', ),
'2.23.42.0.37': ('setct-CredRevResData', ),
'2.23.42.0.38': ('setct-PCertReqData', ),
'2.23.42.0.39': ('setct-PCertResTBS', ),
'2.23.42.0.40': ('setct-BatchAdminReqData', ),
'2.23.42.0.41': ('setct-BatchAdminResData', ),
'2.23.42.0.42': ('setct-CardCInitResTBS', ),
'2.23.42.0.43': ('setct-MeAqCInitResTBS', ),
'2.23.42.0.44': ('setct-RegFormResTBS', ),
'2.23.42.0.45': ('setct-CertReqData', ),
'2.23.42.0.46': ('setct-CertReqTBS', ),
'2.23.42.0.47': ('setct-CertResData', ),
'2.23.42.0.48': ('setct-CertInqReqTBS', ),
'2.23.42.0.49': ('setct-ErrorTBS', ),
'2.23.42.0.50': ('setct-PIDualSignedTBE', ),
'2.23.42.0.51': ('setct-PIUnsignedTBE', ),
'2.23.42.0.52': ('setct-AuthReqTBE', ),
'2.23.42.0.53': ('setct-AuthResTBE', ),
'2.23.42.0.54': ('setct-AuthResTBEX', ),
'2.23.42.0.55': ('setct-AuthTokenTBE', ),
'2.23.42.0.56': ('setct-CapTokenTBE', ),
'2.23.42.0.57': ('setct-CapTokenTBEX', ),
'2.23.42.0.58': ('setct-AcqCardCodeMsgTBE', ),
'2.23.42.0.59': ('setct-AuthRevReqTBE', ),
'2.23.42.0.60': ('setct-AuthRevResTBE', ),
'2.23.42.0.61': ('setct-AuthRevResTBEB', ),
'2.23.42.0.62': ('setct-CapReqTBE', ),
'2.23.42.0.63': ('setct-CapReqTBEX', ),
'2.23.42.0.64': ('setct-CapResTBE', ),
'2.23.42.0.65': ('setct-CapRevReqTBE', ),
'2.23.42.0.66': ('setct-CapRevReqTBEX', ),
'2.23.42.0.67': ('setct-CapRevResTBE', ),
'2.23.42.0.68': ('setct-CredReqTBE', ),
'2.23.42.0.69': ('setct-CredReqTBEX', ),
'2.23.42.0.70': ('setct-CredResTBE', ),
'2.23.42.0.71': ('setct-CredRevReqTBE', ),
'2.23.42.0.72': ('setct-CredRevReqTBEX', ),
'2.23.42.0.73': ('setct-CredRevResTBE', ),
'2.23.42.0.74': ('setct-BatchAdminReqTBE', ),
'2.23.42.0.75': ('setct-BatchAdminResTBE', ),
'2.23.42.0.76': ('setct-RegFormReqTBE', ),
'2.23.42.0.77': ('setct-CertReqTBE', ),
'2.23.42.0.78': ('setct-CertReqTBEX', ),
'2.23.42.0.79': ('setct-CertResTBE', ),
'2.23.42.0.80': ('setct-CRLNotificationTBS', ),
'2.23.42.0.81': ('setct-CRLNotificationResTBS', ),
'2.23.42.0.82': ('setct-BCIDistributionTBS', ),
'2.23.42.1': ('message extensions', 'set-msgExt'),
'2.23.42.1.1': ('generic cryptogram', 'setext-genCrypt'),
'2.23.42.1.3': ('merchant initiated auth', 'setext-miAuth'),
'2.23.42.1.4': ('setext-pinSecure', ),
'2.23.42.1.5': ('setext-pinAny', ),
'2.23.42.1.7': ('setext-track2', ),
'2.23.42.1.8': ('additional verification', 'setext-cv'),
'2.23.42.3': ('set-attr', ),
'2.23.42.3.0': ('setAttr-Cert', ),
'2.23.42.3.0.0': ('set-rootKeyThumb', ),
'2.23.42.3.0.1': ('set-addPolicy', ),
'2.23.42.3.1': ('payment gateway capabilities', 'setAttr-PGWYcap'),
'2.23.42.3.2': ('setAttr-TokenType', ),
'2.23.42.3.2.1': ('setAttr-Token-EMV', ),
'2.23.42.3.2.2': ('setAttr-Token-B0Prime', ),
'2.23.42.3.3': ('issuer capabilities', 'setAttr-IssCap'),
'2.23.42.3.3.3': ('setAttr-IssCap-CVM', ),
'2.23.42.3.3.3.1': ('generate cryptogram', 'setAttr-GenCryptgrm'),
'2.23.42.3.3.4': ('setAttr-IssCap-T2', ),
'2.23.42.3.3.4.1': ('encrypted track 2', 'setAttr-T2Enc'),
'2.23.42.3.3.4.2': ('cleartext track 2', 'setAttr-T2cleartxt'),
'2.23.42.3.3.5': ('setAttr-IssCap-Sig', ),
'2.23.42.3.3.5.1': ('ICC or token signature', 'setAttr-TokICCsig'),
'2.23.42.3.3.5.2': ('secure device signature', 'setAttr-SecDevSig'),
'2.23.42.5': ('set-policy', ),
'2.23.42.5.0': ('set-policy-root', ),
'2.23.42.7': ('certificate extensions', 'set-certExt'),
'2.23.42.7.0': ('setCext-hashedRoot', ),
'2.23.42.7.1': ('setCext-certType', ),
'2.23.42.7.2': ('setCext-merchData', ),
'2.23.42.7.3': ('setCext-cCertRequired', ),
'2.23.42.7.4': ('setCext-tunneling', ),
'2.23.42.7.5': ('setCext-setExt', ),
'2.23.42.7.6': ('setCext-setQualf', ),
'2.23.42.7.7': ('setCext-PGWYcapabilities', ),
'2.23.42.7.8': ('setCext-TokenIdentifier', ),
'2.23.42.7.9': ('setCext-Track2Data', ),
'2.23.42.7.10': ('setCext-TokenType', ),
'2.23.42.7.11': ('setCext-IssuerCapabilities', ),
'2.23.42.8': ('set-brand', ),
'2.23.42.8.1': ('set-brand-IATA-ATA', ),
'2.23.42.8.4': ('set-brand-Visa', ),
'2.23.42.8.5': ('set-brand-MasterCard', ),
'2.23.42.8.30': ('set-brand-Diners', ),
'2.23.42.8.34': ('set-brand-AmericanExpress', ),
'2.23.42.8.35': ('set-brand-JCB', ),
'2.23.42.8.6011': ('set-brand-Novus', ),
'2.23.43': ('wap', ),
'2.23.43.1': ('wap-wsg', ),
'2.23.43.1.4': ('wap-wsg-idm-ecid', ),
'2.23.43.1.4.1': ('wap-wsg-idm-ecid-wtls1', ),
'2.23.43.1.4.3': ('wap-wsg-idm-ecid-wtls3', ),
'2.23.43.1.4.4': ('wap-wsg-idm-ecid-wtls4', ),
'2.23.43.1.4.5': ('wap-wsg-idm-ecid-wtls5', ),
'2.23.43.1.4.6': ('wap-wsg-idm-ecid-wtls6', ),
'2.23.43.1.4.7': ('wap-wsg-idm-ecid-wtls7', ),
'2.23.43.1.4.8': ('wap-wsg-idm-ecid-wtls8', ),
'2.23.43.1.4.9': ('wap-wsg-idm-ecid-wtls9', ),
'2.23.43.1.4.10': ('wap-wsg-idm-ecid-wtls10', ),
'2.23.43.1.4.11': ('wap-wsg-idm-ecid-wtls11', ),
'2.23.43.1.4.12': ('wap-wsg-idm-ecid-wtls12', ),
}
# #####################################################################################
# #####################################################################################
_OID_LOOKUP = dict()
_NORMALIZE_NAMES = dict()
_NORMALIZE_NAMES_SHORT = dict()
for dotted, names in _OID_MAP.items():
for name in names:
if name in _NORMALIZE_NAMES and _OID_LOOKUP[name] != dotted:
raise AssertionError(
'Name collision during setup: "{0}" for OIDs {1} and {2}'
.format(name, dotted, _OID_LOOKUP[name])
)
_NORMALIZE_NAMES[name] = names[0]
_NORMALIZE_NAMES_SHORT[name] = names[-1]
_OID_LOOKUP[name] = dotted
for alias, original in [('userID', 'userId')]:
if alias in _NORMALIZE_NAMES:
raise AssertionError(
'Name collision during adding aliases: "{0}" (alias for "{1}") is already mapped to OID {2}'
.format(alias, original, _OID_LOOKUP[alias])
)
_NORMALIZE_NAMES[alias] = original
_NORMALIZE_NAMES_SHORT[alias] = _NORMALIZE_NAMES_SHORT[original]
_OID_LOOKUP[alias] = _OID_LOOKUP[original]
def pyopenssl_normalize_name(name, short=False):
nid = OpenSSL._util.lib.OBJ_txt2nid(to_bytes(name))
if nid != 0:
b_name = OpenSSL._util.lib.OBJ_nid2ln(nid)
name = to_text(OpenSSL._util.ffi.string(b_name))
if short:
return _NORMALIZE_NAMES_SHORT.get(name, name)
else:
return _NORMALIZE_NAMES.get(name, name)
# #####################################################################################
# #####################################################################################
# # This excerpt is dual licensed under the terms of the Apache License, Version
# # 2.0, and the BSD License. See the LICENSE file at
# # https://github.com/pyca/cryptography/blob/master/LICENSE for complete details.
# #
# # Adapted from cryptography's hazmat/backends/openssl/decode_asn1.py
# #
# # Copyright (c) 2015, 2016 Paul Kehrer (@reaperhulk)
# # Copyright (c) 2017 Fraser Tweedale (@frasertweedale)
# #
# # Relevant commits from cryptography project (https://github.com/pyca/cryptography):
# # pyca/cryptography@719d536dd691e84e208534798f2eb4f82aaa2e07
# # pyca/cryptography@5ab6d6a5c05572bd1c75f05baf264a2d0001894a
# # pyca/cryptography@2e776e20eb60378e0af9b7439000d0e80da7c7e3
# # pyca/cryptography@fb309ed24647d1be9e319b61b1f2aa8ebb87b90b
# # pyca/cryptography@2917e460993c475c72d7146c50dc3bbc2414280d
# # pyca/cryptography@3057f91ea9a05fb593825006d87a391286a4d828
# # pyca/cryptography@d607dd7e5bc5c08854ec0c9baff70ba4a35be36f
def _obj2txt(openssl_lib, openssl_ffi, obj):
# Set to 80 on the recommendation of
# https://www.openssl.org/docs/crypto/OBJ_nid2ln.html#return_values
#
# But OIDs longer than this occur in real life (e.g. Active
# Directory makes some very long OIDs). So we need to detect
# and properly handle the case where the default buffer is not
# big enough.
#
buf_len = 80
buf = openssl_ffi.new("char[]", buf_len)
# 'res' is the number of bytes that *would* be written if the
# buffer is large enough. If 'res' > buf_len - 1, we need to
# alloc a big-enough buffer and go again.
res = openssl_lib.OBJ_obj2txt(buf, buf_len, obj, 1)
if res > buf_len - 1: # account for terminating null byte
buf_len = res + 1
buf = openssl_ffi.new("char[]", buf_len)
res = openssl_lib.OBJ_obj2txt(buf, buf_len, obj, 1)
return openssl_ffi.buffer(buf, res)[:].decode()
# #####################################################################################
# #####################################################################################
def cryptography_get_extensions_from_cert(cert):
# Since cryptography won't give us the DER value for an extension
# (that is only stored for unrecognized extensions), we have to re-do
    # the extension parsing ourselves.
result = dict()
backend = cert._backend
x509_obj = cert._x509
for i in range(backend._lib.X509_get_ext_count(x509_obj)):
ext = backend._lib.X509_get_ext(x509_obj, i)
if ext == backend._ffi.NULL:
continue
crit = backend._lib.X509_EXTENSION_get_critical(ext)
data = backend._lib.X509_EXTENSION_get_data(ext)
backend.openssl_assert(data != backend._ffi.NULL)
der = backend._ffi.buffer(data.data, data.length)[:]
entry = dict(
critical=(crit == 1),
value=base64.b64encode(der),
)
oid = _obj2txt(backend._lib, backend._ffi, backend._lib.X509_EXTENSION_get_object(ext))
result[oid] = entry
return result
def cryptography_get_extensions_from_csr(csr):
# Since cryptography won't give us the DER value for an extension
# (that is only stored for unrecognized extensions), we have to re-do
    # the extension parsing ourselves.
result = dict()
backend = csr._backend
extensions = backend._lib.X509_REQ_get_extensions(csr._x509_req)
extensions = backend._ffi.gc(
extensions,
lambda ext: backend._lib.sk_X509_EXTENSION_pop_free(
ext,
backend._ffi.addressof(backend._lib._original_lib, "X509_EXTENSION_free")
)
)
for i in range(backend._lib.sk_X509_EXTENSION_num(extensions)):
ext = backend._lib.sk_X509_EXTENSION_value(extensions, i)
if ext == backend._ffi.NULL:
continue
crit = backend._lib.X509_EXTENSION_get_critical(ext)
data = backend._lib.X509_EXTENSION_get_data(ext)
backend.openssl_assert(data != backend._ffi.NULL)
der = backend._ffi.buffer(data.data, data.length)[:]
entry = dict(
critical=(crit == 1),
value=base64.b64encode(der),
)
oid = _obj2txt(backend._lib, backend._ffi, backend._lib.X509_EXTENSION_get_object(ext))
result[oid] = entry
return result
def pyopenssl_get_extensions_from_cert(cert):
# While pyOpenSSL allows us to get an extension's DER value, it won't
# give us the dotted string for an OID. So we have to do some magic to
# get hold of it.
result = dict()
ext_count = cert.get_extension_count()
for i in range(0, ext_count):
ext = cert.get_extension(i)
entry = dict(
critical=bool(ext.get_critical()),
value=base64.b64encode(ext.get_data()),
)
oid = _obj2txt(
OpenSSL._util.lib,
OpenSSL._util.ffi,
OpenSSL._util.lib.X509_EXTENSION_get_object(ext._extension)
)
# This could also be done a bit simpler:
#
# oid = _obj2txt(OpenSSL._util.lib, OpenSSL._util.ffi, OpenSSL._util.lib.OBJ_nid2obj(ext._nid))
#
# Unfortunately this gives the wrong result in case the linked OpenSSL
# doesn't know the OID. That's why we have to get the OID dotted string
# similarly to how cryptography does it.
result[oid] = entry
return result
def pyopenssl_get_extensions_from_csr(csr):
# While pyOpenSSL allows us to get an extension's DER value, it won't
# give us the dotted string for an OID. So we have to do some magic to
# get hold of it.
result = dict()
for ext in csr.get_extensions():
entry = dict(
critical=bool(ext.get_critical()),
value=base64.b64encode(ext.get_data()),
)
oid = _obj2txt(
OpenSSL._util.lib,
OpenSSL._util.ffi,
OpenSSL._util.lib.X509_EXTENSION_get_object(ext._extension)
)
# This could also be done a bit simpler:
#
# oid = _obj2txt(OpenSSL._util.lib, OpenSSL._util.ffi, OpenSSL._util.lib.OBJ_nid2obj(ext._nid))
#
# Unfortunately this gives the wrong result in case the linked OpenSSL
# doesn't know the OID. That's why we have to get the OID dotted string
# similarly to how cryptography does it.
result[oid] = entry
return result
def cryptography_name_to_oid(name):
dotted = _OID_LOOKUP.get(name)
if dotted is None:
raise OpenSSLObjectError('Cannot find OID for "{0}"'.format(name))
return x509.oid.ObjectIdentifier(dotted)
def cryptography_oid_to_name(oid, short=False):
dotted_string = oid.dotted_string
names = _OID_MAP.get(dotted_string)
name = names[0] if names else oid._name
if short:
return _NORMALIZE_NAMES_SHORT.get(name, name)
else:
return _NORMALIZE_NAMES.get(name, name)
def cryptography_get_name(name):
'''
Given a name string, returns a cryptography x509.Name object.
Raises an OpenSSLObjectError if the name is unknown or cannot be parsed.
'''
try:
if name.startswith('DNS:'):
return x509.DNSName(to_text(name[4:]))
if name.startswith('IP:'):
return x509.IPAddress(ipaddress.ip_address(to_text(name[3:])))
if name.startswith('email:'):
return x509.RFC822Name(to_text(name[6:]))
if name.startswith('URI:'):
return x509.UniformResourceIdentifier(to_text(name[4:]))
except Exception as e:
raise OpenSSLObjectError('Cannot parse Subject Alternative Name "{0}": {1}'.format(name, e))
if ':' not in name:
raise OpenSSLObjectError('Cannot parse Subject Alternative Name "{0}" (forgot "DNS:" prefix?)'.format(name))
raise OpenSSLObjectError('Cannot parse Subject Alternative Name "{0}" (potentially unsupported by cryptography backend)'.format(name))
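# A hedged usage sketch for cryptography_get_name (illustrative values, not part of the original module):
#   cryptography_get_name('DNS:ansible.com')  -> x509.DNSName(u'ansible.com')
#   cryptography_get_name('IP:192.0.2.1')     -> x509.IPAddress(IPv4Address(u'192.0.2.1'))
#   cryptography_get_name('ansible.com')      -> raises OpenSSLObjectError hinting at the missing 'DNS:' prefix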
def _get_hex(bytesstr):
if bytesstr is None:
return bytesstr
data = binascii.hexlify(bytesstr)
data = to_text(b':'.join(data[i:i + 2] for i in range(0, len(data), 2)))
return data
def cryptography_decode_name(name):
'''
Given a cryptography x509.Name object, returns a string.
Raises an OpenSSLObjectError if the name is not supported.
'''
if isinstance(name, x509.DNSName):
return 'DNS:{0}'.format(name.value)
if isinstance(name, x509.IPAddress):
return 'IP:{0}'.format(name.value.compressed)
if isinstance(name, x509.RFC822Name):
return 'email:{0}'.format(name.value)
if isinstance(name, x509.UniformResourceIdentifier):
return 'URI:{0}'.format(name.value)
if isinstance(name, x509.DirectoryName):
# FIXME: test
return 'DirName:' + ''.join(['/{0}:{1}'.format(attribute.oid._name, attribute.value) for attribute in name.value])
if isinstance(name, x509.RegisteredID):
# FIXME: test
return 'RegisteredID:{0}'.format(name.value)
if isinstance(name, x509.OtherName):
# FIXME: test
return '{0}:{1}'.format(name.type_id.dotted_string, _get_hex(name.value))
raise OpenSSLObjectError('Cannot decode name "{0}"'.format(name))
def _cryptography_get_keyusage(usage):
'''
Given a key usage identifier string, returns the parameter name used by cryptography's x509.KeyUsage().
Raises an OpenSSLObjectError if the identifier is unknown.
'''
if usage in ('Digital Signature', 'digitalSignature'):
return 'digital_signature'
if usage in ('Non Repudiation', 'nonRepudiation'):
return 'content_commitment'
if usage in ('Key Encipherment', 'keyEncipherment'):
return 'key_encipherment'
if usage in ('Data Encipherment', 'dataEncipherment'):
return 'data_encipherment'
if usage in ('Key Agreement', 'keyAgreement'):
return 'key_agreement'
if usage in ('Certificate Sign', 'keyCertSign'):
return 'key_cert_sign'
if usage in ('CRL Sign', 'cRLSign'):
return 'crl_sign'
if usage in ('Encipher Only', 'encipherOnly'):
return 'encipher_only'
if usage in ('Decipher Only', 'decipherOnly'):
return 'decipher_only'
raise OpenSSLObjectError('Unknown key usage "{0}"'.format(usage))
def cryptography_parse_key_usage_params(usages):
'''
Given a list of key usage identifier strings, returns the parameters for cryptography's x509.KeyUsage().
Raises an OpenSSLObjectError if an identifier is unknown.
'''
params = dict(
digital_signature=False,
content_commitment=False,
key_encipherment=False,
data_encipherment=False,
key_agreement=False,
key_cert_sign=False,
crl_sign=False,
encipher_only=False,
decipher_only=False,
)
for usage in usages:
params[_cryptography_get_keyusage(usage)] = True
return params
def cryptography_get_basic_constraints(constraints):
'''
Given a list of constraints, returns a tuple (ca, path_length).
Raises an OpenSSLObjectError if a constraint is unknown or cannot be parsed.
'''
ca = False
path_length = None
if constraints:
for constraint in constraints:
if constraint.startswith('CA:'):
if constraint == 'CA:TRUE':
ca = True
elif constraint == 'CA:FALSE':
ca = False
else:
raise OpenSSLObjectError('Unknown basic constraint value "{0}" for CA'.format(constraint[3:]))
elif constraint.startswith('pathlen:'):
v = constraint[len('pathlen:'):]
try:
path_length = int(v)
except Exception as e:
raise OpenSSLObjectError('Cannot parse path length constraint "{0}" ({1})'.format(v, e))
else:
raise OpenSSLObjectError('Unknown basic constraint "{0}"'.format(constraint))
return ca, path_length
def binary_exp_mod(f, e, m):
'''Computes f^e mod m in O(log e) multiplications modulo m.'''
# Compute len_e = floor(log_2(e))
len_e = -1
x = e
while x > 0:
x >>= 1
len_e += 1
# Compute f**e mod m
result = 1
for k in range(len_e, -1, -1):
result = (result * result) % m
if ((e >> k) & 1) != 0:
result = (result * f) % m
return result
def simple_gcd(a, b):
'''Compute GCD of its two inputs.'''
while b != 0:
a, b = b, a % b
return a
def quick_is_not_prime(n):
'''Does some quick checks to see if we can poke a hole into the primality of n.
A result of `False` does **not** mean that the number is prime; it just means
that we couldn't detect quickly whether it is not prime.
'''
if n <= 2:
return True
# The constant in the next line is the product of all primes < 200
if simple_gcd(n, 7799922041683461553249199106329813876687996789903550945093032474868511536164700810) > 1:
return True
# TODO: maybe do some iterations of Miller-Rabin to increase confidence
# (https://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test)
return False
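# Hedged sanity checks for the number-theory helpers above (example values only, not part of the original module):
#   binary_exp_mod(2, 10, 1000) == 24        # 2**10 = 1024 and 1024 % 1000 == 24
#   simple_gcd(12, 18) == 6
#   quick_is_not_prime(91) is True           # 91 == 7 * 13, so the gcd with the prime product exceeds 1
#   quick_is_not_prime(2**61 - 1) is False   # a Mersenne prime; the quick check cannot rule it out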
python_version = (sys.version_info[0], sys.version_info[1])
if python_version >= (2, 7) or python_version >= (3, 1):
# Ansible still supports Python 2.6 on remote nodes
def count_bits(no):
no = abs(no)
if no == 0:
return 0
return no.bit_length()
else:
# Slow, but works
def count_bits(no):
no = abs(no)
count = 0
while no > 0:
no >>= 1
count += 1
return count
PEM_START = '-----BEGIN '
PEM_END = '-----'
PKCS8_PRIVATEKEY_NAMES = ('PRIVATE KEY', 'ENCRYPTED PRIVATE KEY')
PKCS1_PRIVATEKEY_SUFFIX = ' PRIVATE KEY'
def identify_private_key_format(content):
'''Given the contents of a private key file, identifies its format.'''
# See https://github.com/openssl/openssl/blob/master/crypto/pem/pem_pkey.c#L40-L85
# (PEM_read_bio_PrivateKey)
# and https://github.com/openssl/openssl/blob/master/include/openssl/pem.h#L46-L47
# (PEM_STRING_PKCS8, PEM_STRING_PKCS8INF)
try:
lines = content.decode('utf-8').splitlines(False)
if lines[0].startswith(PEM_START) and lines[0].endswith(PEM_END) and len(lines[0]) > len(PEM_START) + len(PEM_END):
name = lines[0][len(PEM_START):-len(PEM_END)]
if name in PKCS8_PRIVATEKEY_NAMES:
return 'pkcs8'
if len(name) > len(PKCS1_PRIVATEKEY_SUFFIX) and name.endswith(PKCS1_PRIVATEKEY_SUFFIX):
return 'pkcs1'
return 'unknown-pem'
except UnicodeDecodeError:
pass
return 'raw'
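# An assumed illustration of how identify_private_key_format classifies common inputs
# (only the first PEM header line matters to the detection):
#   b'-----BEGIN RSA PRIVATE KEY-----\n...'        -> 'pkcs1'
#   b'-----BEGIN ENCRYPTED PRIVATE KEY-----\n...'  -> 'pkcs8'
#   b'-----BEGIN CERTIFICATE-----\n...'            -> 'unknown-pem'
#   binary DER content that is not valid UTF-8     -> 'raw'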
def cryptography_key_needs_digest_for_signing(key):
'''Tests whether the given private key requires a digest algorithm for signing.
Ed25519 and Ed448 keys do not; they need None to be passed as the digest algorithm.
'''
if CRYPTOGRAPHY_HAS_ED25519 and isinstance(key, cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PrivateKey):
return False
if CRYPTOGRAPHY_HAS_ED448 and isinstance(key, cryptography.hazmat.primitives.asymmetric.ed448.Ed448PrivateKey):
return False
return True
def cryptography_compare_public_keys(key1, key2):
'''Tests whether two public keys are the same.
Needs special logic for Ed25519 and Ed448 keys, since they do not have public_numbers().
'''
if CRYPTOGRAPHY_HAS_ED25519:
a = isinstance(key1, cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PublicKey)
b = isinstance(key2, cryptography.hazmat.primitives.asymmetric.ed25519.Ed25519PublicKey)
if a or b:
if not a or not b:
return False
a = key1.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)
b = key2.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)
return a == b
if CRYPTOGRAPHY_HAS_ED448:
a = isinstance(key1, cryptography.hazmat.primitives.asymmetric.ed448.Ed448PublicKey)
b = isinstance(key2, cryptography.hazmat.primitives.asymmetric.ed448.Ed448PublicKey)
if a or b:
if not a or not b:
return False
a = key1.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)
b = key2.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)
return a == b
return key1.public_numbers() == key2.public_numbers()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,617 |
grafana_dashboard: add dashboard report "send() got multiple values for keyword argument 'MESSAGE'"
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Add dashboard report "send() got multiple values for keyword argument 'MESSAGE'"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
grafana_dashboard
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
config file = /home/namnh/workspace/ansible-setup/ansible.cfg
configured module search path = ['/home/namnh/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/namnh/.venvs/ansible/lib/python3.6/site-packages/ansible
executable location = /home/namnh/.venvs/ansible/bin/ansible
python version = 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST(/home/namnh/workspace/ansible-setup/ansible.cfg) = ['/home/namnh/workspace/ansible-setup/inventory']
DEFAULT_STDOUT_CALLBACK(/home/namnh/workspace/ansible-setup/ansible.cfg) = yaml
INTERPRETER_PYTHON(/home/namnh/workspace/ansible-setup/ansible.cfg) = /usr/bin/python3
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
tasks:
- name: Import Grafana telegraf dashboard
ignore_errors: yes
become: false
grafana_dashboard:
grafana_url: "{{ grafana_url }}"
grafana_user: "{{ grafana_admin_user }}"
grafana_password: "{{ grafana_admin_password }}"
message: telegraf
overwrite: yes
state: present
path: "{{ playbook_dir }}/configs/telegraf-system.json"
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
New datasource is added
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
Traceback (most recent call last):
File "/home/namnh/.ansible/tmp/ansible-tmp-1564113599.2774515-158744002373584/AnsiballZ_grafana_dashboard.py", line 114, in <module>
_ansiballz_main()
File "/home/namnh/.ansible/tmp/ansible-tmp-1564113599.2774515-158744002373584/AnsiballZ_grafana_dashboard.py", line 106, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/namnh/.ansible/tmp/ansible-tmp-1564113599.2774515-158744002373584/AnsiballZ_grafana_dashboard.py", line 49, in invoke_module
imp.load_module('__main__', mod, module, MOD_DESC)
File "/usr/lib/python3.6/imp.py", line 235, in load_module
return load_source(name, filename, file)
File "/usr/lib/python3.6/imp.py", line 170, in load_source
module = _exec(spec, sys.modules[name])
File "<frozen importlib._bootstrap>", line 618, in _exec
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/__main__.py", line 451, in <module>
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/__main__.py", line 408, in main
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/ansible_grafana_dashboard_payload.zip/ansible/module_utils/basic.py", line 691, in __init__
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/ansible_grafana_dashboard_payload.zip/ansible/module_utils/basic.py", line 1946, in _log_invocation
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/ansible_grafana_dashboard_payload.zip/ansible/module_utils/basic.py", line 1904, in log
TypeError: send() got multiple values for keyword argument 'MESSAGE'
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
```
|
https://github.com/ansible/ansible/issues/59617
|
https://github.com/ansible/ansible/pull/60051
|
00bed0eb1c2ed22a7b56078b9e7911756182ac92
|
b6753b46a987a319ff062a8adcdcd4e0000353ed
| 2019-07-26T04:36:45Z |
python
| 2020-02-18T12:00:16Z |
changelogs/fragments/39295-grafana_dashboard.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,617 |
grafana_dashboard: add dashboard report "send() got multiple values for keyword argument 'MESSAGE'"
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Add dashboard report "send() got multiple values for keyword argument 'MESSAGE'"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
grafana_dashboard
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
config file = /home/namnh/workspace/ansible-setup/ansible.cfg
configured module search path = ['/home/namnh/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/namnh/.venvs/ansible/lib/python3.6/site-packages/ansible
executable location = /home/namnh/.venvs/ansible/bin/ansible
python version = 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST(/home/namnh/workspace/ansible-setup/ansible.cfg) = ['/home/namnh/workspace/ansible-setup/inventory']
DEFAULT_STDOUT_CALLBACK(/home/namnh/workspace/ansible-setup/ansible.cfg) = yaml
INTERPRETER_PYTHON(/home/namnh/workspace/ansible-setup/ansible.cfg) = /usr/bin/python3
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
tasks:
- name: Import Grafana telegraf dashboard
ignore_errors: yes
become: false
grafana_dashboard:
grafana_url: "{{ grafana_url }}"
grafana_user: "{{ grafana_admin_user }}"
grafana_password: "{{ grafana_admin_password }}"
message: telegraf
overwrite: yes
state: present
path: "{{ playbook_dir }}/configs/telegraf-system.json"
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
New datasource is added
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
Traceback (most recent call last):
File "/home/namnh/.ansible/tmp/ansible-tmp-1564113599.2774515-158744002373584/AnsiballZ_grafana_dashboard.py", line 114, in <module>
_ansiballz_main()
File "/home/namnh/.ansible/tmp/ansible-tmp-1564113599.2774515-158744002373584/AnsiballZ_grafana_dashboard.py", line 106, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/namnh/.ansible/tmp/ansible-tmp-1564113599.2774515-158744002373584/AnsiballZ_grafana_dashboard.py", line 49, in invoke_module
imp.load_module('__main__', mod, module, MOD_DESC)
File "/usr/lib/python3.6/imp.py", line 235, in load_module
return load_source(name, filename, file)
File "/usr/lib/python3.6/imp.py", line 170, in load_source
module = _exec(spec, sys.modules[name])
File "<frozen importlib._bootstrap>", line 618, in _exec
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/__main__.py", line 451, in <module>
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/__main__.py", line 408, in main
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/ansible_grafana_dashboard_payload.zip/ansible/module_utils/basic.py", line 691, in __init__
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/ansible_grafana_dashboard_payload.zip/ansible/module_utils/basic.py", line 1946, in _log_invocation
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/ansible_grafana_dashboard_payload.zip/ansible/module_utils/basic.py", line 1904, in log
TypeError: send() got multiple values for keyword argument 'MESSAGE'
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
```
|
https://github.com/ansible/ansible/issues/59617
|
https://github.com/ansible/ansible/pull/60051
|
00bed0eb1c2ed22a7b56078b9e7911756182ac92
|
b6753b46a987a319ff062a8adcdcd4e0000353ed
| 2019-07-26T04:36:45Z |
python
| 2020-02-18T12:00:16Z |
docs/docsite/rst/dev_guide/developing_modules_best_practices.rst
|
.. _developing_modules_best_practices:
.. _module_dev_conventions:
*******************************
Conventions, tips, and pitfalls
*******************************
.. contents:: Topics
:local:
As you design and develop modules, follow these basic conventions and tips for clean, usable code:
Scoping your module(s)
======================
Especially if you want to contribute your module(s) back to Ansible Core, make sure each module includes enough logic and functionality, but not too much. If you're finding these guidelines tricky, consider :ref:`whether you really need to write a module <module_dev_should_you>` at all.
* Each module should have a concise and well-defined functionality. Basically, follow the UNIX philosophy of doing one thing well.
* Do not add ``get``, ``list`` or ``info`` state options to an existing module - create a new ``_info`` or ``_facts`` module.
* Modules should not require that a user know all the underlying options of an API/tool to be used. For instance, if the legal values for a required module parameter cannot be documented, the module does not belong in Ansible Core.
* Modules should encompass much of the logic for interacting with a resource. A lightweight wrapper around a complex API forces users to offload too much logic into their playbooks. If you want to connect Ansible to a complex API, :ref:`create multiple modules <developing_modules_in_groups>` that interact with smaller individual pieces of the API.
* Avoid creating a module that does the work of other modules; this leads to code duplication and divergence, and makes things less uniform, unpredictable and harder to maintain. Modules should be the building blocks. If you are asking 'how can I have a module execute other modules' ... you want to write a role.
Designing module interfaces
===========================
* If your module is addressing an object, the parameter for that object should be called ``name`` whenever possible, or accept ``name`` as an alias.
* Modules accepting boolean status should accept ``yes``, ``no``, ``true``, ``false``, or anything else a user is likely to throw at them. The AnsibleModule common code supports this with ``type='bool'`` (a brief sketch follows this list).
* Avoid ``action``/``command``, they are imperative and not declarative, there are other ways to express the same thing.
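As a small, non-authoritative sketch (the option names are invented for illustration), an argument spec that follows these conventions could look like:
.. code-block:: python
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type='str', required=True, aliases=['unit_name']),
            enabled=dict(type='bool', default=True),
            state=dict(type='str', default='present', choices=['present', 'absent']),
        ),
        supports_check_mode=True,
    )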
General guidelines & tips
=========================
* Each module should be self-contained in one file, so it can be auto-transferred by Ansible.
* Module name MUST use underscores instead of hyphens or spaces as a word separator. Using hyphens and spaces will prevent Ansible from importing your module.
* Always use the ``hacking/test-module.py`` script when developing modules - it will warn you about common pitfalls.
* If you have a local module that returns facts specific to your installations, a good name for this module is ``site_facts``.
* Eliminate or minimize dependencies. If your module has dependencies, document them at the top of the module file and raise JSON error messages when dependency import fails.
* Don't write to files directly; use a temporary file and then use the ``atomic_move`` function from ``ansible.module_utils.basic`` to move the updated temporary file into place. This prevents data corruption and ensures that the correct context for the file is kept.
* Avoid creating caches. Ansible is designed without a central server or authority, so you cannot guarantee it will not run with different permissions, options or locations. If you need a central authority, have it on top of Ansible (for example, using bastion/cm/ci server or tower); do not try to build it into modules.
* If you package your module(s) in an RPM, install the modules on the control machine in ``/usr/share/ansible``. Packaging modules in RPMs is optional.
Functions and Methods
=====================
* Each function should be concise and should describe a meaningful amount of work.
* "Don't repeat yourself" is generally a good philosophy.
* Function names should use underscores: ``my_function_name``.
* Each function's name should describe what it does.
* Each function should have a docstring.
* If your code is too nested, that's usually a sign the loop body could benefit from being a function. Parts of our existing code are not the best examples of this at times.
Python tips
===========
* When fetching URLs, use ``fetch_url`` or ``open_url`` from ``ansible.module_utils.urls``. Do not use ``urllib2``, which does not natively verify TLS certificates and so is insecure for https (a brief ``fetch_url`` sketch appears at the end of this section).
* Include a ``main`` function that wraps the normal execution.
* Call your ``main`` function from a conditional so you can import it into unit tests - for example:
.. code-block:: python
if __name__ == '__main__':
main()
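For instance, a minimal, hedged sketch of fetching a URL with ``fetch_url`` (the URL and status handling are placeholders, not a prescribed pattern):
.. code-block:: python
    from ansible.module_utils.urls import fetch_url
    response, info = fetch_url(module, 'https://example.com/api', method='GET')
    if info['status'] != 200:
        module.fail_json(msg='Request to %s failed: %s' % ('https://example.com/api', info['msg']))
    content = response.read()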
.. _shared_code:
Importing and using shared code
===============================
* Use shared code whenever possible - don't reinvent the wheel. Ansible offers the ``AnsibleModule`` common Python code, plus :ref:`utilities <developing_module_utilities>` for many common use cases and patterns. You can also create documentation fragments for docs that apply to multiple modules.
* Import ``ansible.module_utils`` code in the same place as you import other libraries.
* Do NOT use wildcards (*) for importing other python modules; instead, list the function(s) you are importing (for example, ``from some.other_python_module.basic import otherFunction``).
* Import custom packages in ``try``/``except``, capture any import errors, and handle them with ``fail_json()`` in ``main()``. For example:
.. code-block:: python
import traceback
    from ansible.module_utils.basic import missing_required_lib
LIB_IMP_ERR = None
try:
import foo
HAS_LIB = True
    except ImportError:
HAS_LIB = False
LIB_IMP_ERR = traceback.format_exc()
Then in ``main()``, just after the argspec, do
.. code-block:: python
if not HAS_LIB:
module.fail_json(msg=missing_required_lib("foo"),
exception=LIB_IMP_ERR)
And document the dependency in the ``requirements`` section of your module's :ref:`documentation_block`.
.. _module_failures:
Handling module failures
========================
When your module fails, help users understand what went wrong. If you are using the ``AnsibleModule`` common Python code, the ``failed`` element will be included for you automatically when you call ``fail_json``. For polite module failure behavior:
* Include a key of ``failed`` along with a string explanation in ``msg``. If you don't do this, Ansible will use standard return codes: 0=success and non-zero=failure.
* Don't raise a traceback (stacktrace). Ansible can deal with stacktraces and automatically converts anything unparseable into a failed result, but raising a stacktrace on module failure is not user-friendly.
* Do not use ``sys.exit()``. Use ``fail_json()`` from the module object (see the sketch below).
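As a brief, illustrative sketch (the command, message text, and variables are examples, not a required pattern):
.. code-block:: python
    rc, out, err = module.run_command(cmd)
    if rc != 0:
        module.fail_json(msg='Command failed with return code %d: %s' % (rc, err), rc=rc, stderr=err)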
Handling exceptions (bugs) gracefully
=====================================
* Validate upfront--fail fast and return useful and clear error messages.
* Use defensive programming--use a simple design for your module, handle errors gracefully, and avoid direct stacktraces.
* Fail predictably--if we must fail, do it in a way that is the most expected. Either mimic the underlying tool or the general way the system works.
* Give out a useful message on what you were doing and add exception messages to that.
* Avoid catchall exceptions; they are not very useful unless the underlying API gives very good error messages pertaining to the attempted action.
.. _module_output:
Creating correct and informative module output
==============================================
Modules must output valid JSON only. Follow these guidelines for creating correct, useful module output:
* Make your top-level return type a hash (dictionary).
* Nest complex return values within the top-level hash.
* Incorporate any lists or simple scalar values within the top-level return hash.
* Do not send module output to standard error, because the system will merge standard out with standard error and prevent the JSON from parsing.
* Capture standard error and return it as a variable in the JSON on standard out. This is how the command module is implemented.
* Never do ``print("some status message")`` in a module, because it will not produce valid JSON output.
* Always return useful data, even when there is no change.
* Be consistent about returns (some modules are too random), unless it is detrimental to the state/action.
* Make returns reusable--most of the time you don't want to read it, but you do want to process it and re-purpose it.
* Return diff if in diff mode. This is not required for all modules, as it won't make sense for certain ones, but please include it when applicable.
* Enable your return values to be serialized as JSON with Python's standard `JSON encoder and decoder <https://docs.python.org/3/library/json.html>`_ library. Basic python types (strings, int, dicts, lists, etc) are serializable.
* Do not return an object via exit_json(). Instead, convert the fields you need from the object into the fields of a dictionary and return the dictionary.
* Results from many hosts will be aggregated at once, so your module should return only relevant output. Returning the entire contents of a log file is generally bad form.
If a module returns stderr or otherwise fails to produce valid JSON, the actual output will still be shown in Ansible, but the command will not succeed.
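For example, a return structure that follows these guidelines might look like the following sketch (the field names and variables are illustrative only):
.. code-block:: python
    result = dict(
        changed=True,
        instance=dict(id=instance_id, state='running', tags=tags),
        warnings=[],
    )
    module.exit_json(**result)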
.. _module_conventions:
Following Ansible conventions
=============================
Ansible conventions offer a predictable user interface across all modules, playbooks, and roles. To follow Ansible conventions in your module development:
* Use consistent names across modules (yes, we have many legacy deviations - don't make the problem worse!).
* Use consistent parameters (arguments) within your module(s).
* Normalize parameters with other modules - if Ansible and the API your module connects to use different names for the same parameter, add aliases to your parameters so the user can choose which names to use in tasks and playbooks.
* Return facts from ``*_facts`` modules in the ``ansible_facts`` field of the :ref:`result dictionary<common_return_values>` so other modules can access them.
* Implement ``check_mode`` in all ``*_info`` and ``*_facts`` modules. Playbooks which conditionalize based on fact information will only conditionalize correctly in ``check_mode`` if the facts are returned in ``check_mode``. Usually you can add ``supports_check_mode=True`` when instantiating ``AnsibleModule``.
* Use module-specific environment variables. For example, if you use the helpers in ``module_utils.api`` for basic authentication with ``module_utils.urls.fetch_url()`` and you fall back on environment variables for default values, use a module-specific environment variable like :code:`API_<MODULENAME>_USERNAME` to avoid conflict between modules.
* Keep module options simple and focused - if you're loading a lot of choices/states on an existing option, consider adding a new, simple option instead.
* Keep options small when possible. Passing a large data structure to an option might save us a few tasks, but it adds a complex requirement that we cannot easily validate before passing on to the module.
* If you want to pass complex data to an option, write an expert module that allows this, along with several smaller modules that provide a more 'atomic' operation against the underlying APIs and services. Complex operations require complex data. Let the user choose whether to reflect that complexity in tasks and plays or in vars files.
* Implement declarative operations (not CRUD) so the user can ignore existing state and focus on final state. For example, use ``started/stopped``, ``present/absent``.
* Strive for a consistent final state (aka idempotency). If running your module twice in a row against the same system would result in two different states, see if you can redesign or rewrite to achieve consistent final state. If you can't, document the behavior and the reasons for it.
* Provide consistent return values within the standard Ansible return structure, even if NA/None are used for keys normally returned under other options.
* Follow additional guidelines that apply to families of modules if applicable. For example, AWS modules should follow the :ref:`Amazon development checklist <AWS_module_development>`.
Module Security
===============
* Avoid passing user input from the shell.
* Always check return codes.
* You must always use ``module.run_command``, not ``subprocess`` or ``Popen`` or ``os.system``.
* Avoid using the shell unless absolutely necessary.
* If you must use the shell, you must pass ``use_unsafe_shell=True`` to ``module.run_command``.
* If any variables in your module can come from user input with ``use_unsafe_shell=True``, you must wrap them with ``pipes.quote(x)`` (see the sketch after this list).
* When fetching URLs, use ``fetch_url`` or ``open_url`` from ``ansible.module_utils.urls``. Do not use ``urllib2``, which does not natively verify TLS certificates and so is insecure for https.
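A minimal sketch that follows these rules, assuming ``path`` holds user-supplied input (all names here are illustrative):
.. code-block:: python
    import pipes
    rc, out, err = module.run_command('ls -l %s' % pipes.quote(path), use_unsafe_shell=True)
    if rc != 0:
        module.fail_json(msg='Listing failed: %s' % err, rc=rc)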
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,617 |
grafana_dashboard: add dashboard report "send() got multiple values for keyword argument 'MESSAGE'"
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Add dashboard report "send() got multiple values for keyword argument 'MESSAGE'"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
grafana_dashboard
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
config file = /home/namnh/workspace/ansible-setup/ansible.cfg
configured module search path = ['/home/namnh/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/namnh/.venvs/ansible/lib/python3.6/site-packages/ansible
executable location = /home/namnh/.venvs/ansible/bin/ansible
python version = 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST(/home/namnh/workspace/ansible-setup/ansible.cfg) = ['/home/namnh/workspace/ansible-setup/inventory']
DEFAULT_STDOUT_CALLBACK(/home/namnh/workspace/ansible-setup/ansible.cfg) = yaml
INTERPRETER_PYTHON(/home/namnh/workspace/ansible-setup/ansible.cfg) = /usr/bin/python3
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
tasks:
- name: Import Grafana telegraf dashboard
ignore_errors: yes
become: false
grafana_dashboard:
grafana_url: "{{ grafana_url }}"
grafana_user: "{{ grafana_admin_user }}"
grafana_password: "{{ grafana_admin_password }}"
message: telegraf
overwrite: yes
state: present
path: "{{ playbook_dir }}/configs/telegraf-system.json"
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
New datasource is added
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
Traceback (most recent call last):
File "/home/namnh/.ansible/tmp/ansible-tmp-1564113599.2774515-158744002373584/AnsiballZ_grafana_dashboard.py", line 114, in <module>
_ansiballz_main()
File "/home/namnh/.ansible/tmp/ansible-tmp-1564113599.2774515-158744002373584/AnsiballZ_grafana_dashboard.py", line 106, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/namnh/.ansible/tmp/ansible-tmp-1564113599.2774515-158744002373584/AnsiballZ_grafana_dashboard.py", line 49, in invoke_module
imp.load_module('__main__', mod, module, MOD_DESC)
File "/usr/lib/python3.6/imp.py", line 235, in load_module
return load_source(name, filename, file)
File "/usr/lib/python3.6/imp.py", line 170, in load_source
module = _exec(spec, sys.modules[name])
File "<frozen importlib._bootstrap>", line 618, in _exec
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/__main__.py", line 451, in <module>
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/__main__.py", line 408, in main
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/ansible_grafana_dashboard_payload.zip/ansible/module_utils/basic.py", line 691, in __init__
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/ansible_grafana_dashboard_payload.zip/ansible/module_utils/basic.py", line 1946, in _log_invocation
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/ansible_grafana_dashboard_payload.zip/ansible/module_utils/basic.py", line 1904, in log
TypeError: send() got multiple values for keyword argument 'MESSAGE'
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
```
|
https://github.com/ansible/ansible/issues/59617
|
https://github.com/ansible/ansible/pull/60051
|
00bed0eb1c2ed22a7b56078b9e7911756182ac92
|
b6753b46a987a319ff062a8adcdcd4e0000353ed
| 2019-07-26T04:36:45Z |
python
| 2020-02-18T12:00:16Z |
docs/docsite/rst/dev_guide/testing_validate-modules.rst
|
:orphan:
.. _testing_validate-modules:
****************
validate-modules
****************
.. contents:: Topics
Python program to help test or validate Ansible modules.
``validate-modules`` is one of the ``ansible-test`` Sanity Tests, see :ref:`testing_sanity` for more information.
Originally developed by Matt Martz (@sivel)
Usage
=====
.. code:: shell
cd /path/to/ansible/source
source hacking/env-setup
ansible-test sanity --test validate-modules
Help
====
.. code:: shell
usage: validate-modules [-h] [-w] [--exclude EXCLUDE] [--arg-spec]
[--base-branch BASE_BRANCH] [--format {json,plain}]
[--output OUTPUT]
modules [modules ...]
positional arguments:
modules Path to module or module directory
optional arguments:
-h, --help show this help message and exit
-w, --warnings Show warnings
--exclude EXCLUDE RegEx exclusion pattern
--arg-spec Analyze module argument spec
--base-branch BASE_BRANCH
Used in determining if new options were added
--format {json,plain}
Output format. Default: "plain"
--output OUTPUT Output location, use "-" for stdout. Default "-"
Extending validate-modules
==========================
The ``validate-modules`` tool has a `schema.py <https://github.com/ansible/ansible/blob/devel/test/lib/ansible_test/_data/sanity/validate-modules/validate_modules/schema.py>`_ that is used to validate the YAML blocks, such as ``DOCUMENTATION`` and ``RETURN``.
Codes
=====
============================================================ ================== ==================== =========================================================================================
**Error Code** **Type** **Level** **Sample Message**
------------------------------------------------------------ ------------------ -------------------- -----------------------------------------------------------------------------------------
ansible-module-not-initialized Syntax Error Execution of the module did not result in initialization of AnsibleModule
deprecation-mismatch Documentation Error Module marked as deprecated or removed in at least one of the filename, its metadata, or in DOCUMENTATION (setting DOCUMENTATION.deprecated for deprecation or removing all Documentation for removed) but not in all three places.
doc-choices-do-not-match-spec Documentation Error Value for "choices" from the argument_spec does not match the documentation
doc-choices-incompatible-type Documentation Error Choices value from the documentation is not compatible with type defined in the argument_spec
doc-default-does-not-match-spec Documentation Error Value for "default" from the argument_spec does not match the documentation
doc-default-incompatible-type Documentation Error Default value from the documentation is not compatible with type defined in the argument_spec
doc-elements-invalid Documentation Error Documentation specifies elements for argument, when "type" is not ``list``.
doc-elements-mismatch Documentation Error Argument_spec defines elements different than documentation does
doc-missing-type Documentation Error Documentation doesn't specify a type but argument in ``argument_spec`` use default type (``str``)
doc-required-mismatch Documentation Error argument in argument_spec is required but documentation says it is not, or vice versa
doc-type-does-not-match-spec Documentation Error Argument_spec defines type different than documentation does
documentation-error Documentation Error Unknown ``DOCUMENTATION`` error
documentation-syntax-error Documentation Error Invalid ``DOCUMENTATION`` schema
illegal-future-imports Imports Error Only the following ``from __future__`` imports are allowed: ``absolute_import``, ``division``, and ``print_function``.
import-before-documentation Imports Error Import found before documentation variables. All imports must appear below ``DOCUMENTATION``/``EXAMPLES``/``RETURN``/``ANSIBLE_METADATA``
import-error Documentation Error ``Exception`` attempting to import module for ``argument_spec`` introspection
import-placement Locations Warning Imports should be directly below ``DOCUMENTATION``/``EXAMPLES``/``RETURN``/``ANSIBLE_METADATA`` for legacy modules
imports-improper-location Imports Error Imports should be directly below ``DOCUMENTATION``/``EXAMPLES``/``RETURN``/``ANSIBLE_METADATA``
incompatible-choices Documentation Error Choices value from the argument_spec is not compatible with type defined in the argument_spec
incompatible-default-type Documentation Error Default value from the argument_spec is not compatible with type defined in the argument_spec
invalid-argument-spec Documentation Error Argument in argument_spec must be a dictionary/hash when used
invalid-argument-spec-options Documentation Error Suboptions in argument_spec are invalid
invalid-documentation Documentation Error ``DOCUMENTATION`` is not valid YAML
invalid-documentation-options Documentation Error ``DOCUMENTATION.options`` must be a dictionary/hash when used
invalid-examples Documentation Error ``EXAMPLES`` is not valid YAML
invalid-extension Naming Error Official Ansible modules must have a ``.py`` extension for python modules or a ``.ps1`` for powershell modules
invalid-metadata-status Documentation Error ``ANSIBLE_METADATA.status`` of deprecated or removed can't include other statuses
invalid-metadata-type Documentation Error ``ANSIBLE_METADATA`` was not provided as a dict, YAML not supported, Invalid ``ANSIBLE_METADATA`` schema
invalid-module-schema Documentation Error ``AnsibleModule`` schema validation error
invalid-requires-extension Naming Error Module ``#AnsibleRequires -CSharpUtil`` should not end in .cs, Module ``#Requires`` should not end in .psm1
last-line-main-call Syntax Error Call to ``main()`` not the last line (or ``removed_module()`` in the case of deprecated & docs only modules)
metadata-changed Documentation Error ``ANSIBLE_METADATA`` cannot be changed in a point release for a stable branch
missing-doc-fragment Documentation Error ``DOCUMENTATION`` fragment missing
missing-existing-doc-fragment Documentation Warning Pre-existing ``DOCUMENTATION`` fragment missing
missing-documentation Documentation Error No ``DOCUMENTATION`` provided
missing-examples Documentation Error No ``EXAMPLES`` provided
missing-gplv3-license Documentation Error GPLv3 license header not found
missing-if-name-main Syntax Error Next to last line is not ``if __name__ == "__main__":``
missing-main-call Syntax Error Did not find a call to ``main()`` (or ``removed_module()`` in the case of deprecated & docs only modules)
missing-metadata Documentation Error No ``ANSIBLE_METADATA`` provided
missing-module-utils-basic-import Imports Warning Did not find ``ansible.module_utils.basic`` import
missing-module-utils-import-csharp-requirements Imports Error No ``Ansible.ModuleUtils`` or C# Ansible util requirements/imports found
missing-powershell-interpreter Syntax Error Interpreter line is not ``#!powershell``
missing-python-doc Naming Error Missing python documentation file
missing-python-interpreter Syntax Error Interpreter line is not ``#!/usr/bin/python``
missing-return Documentation Error No ``RETURN`` documentation provided
missing-return-legacy Documentation Warning No ``RETURN`` documentation provided for legacy module
missing-suboption-docs Documentation Error Argument in argument_spec has sub-options but documentation does not define sub-options
module-incorrect-version-added Documentation Error Module level ``version_added`` is incorrect
module-invalid-version-added Documentation Error Module level ``version_added`` is not a valid version number
module-utils-specific-import Imports Error ``module_utils`` imports should import specific components, not ``*``
multiple-utils-per-requires Imports Error ``Ansible.ModuleUtils`` requirements do not support multiple modules per statement
multiple-csharp-utils-per-requires Imports Error Ansible C# util requirements do not support multiple utils per statement
no-default-for-required-parameter Documentation Error Option is marked as required but specifies a default. Arguments with a default should not be marked as required
nonexistent-parameter-documented Documentation Error Argument is listed in DOCUMENTATION.options, but not accepted by the module
option-incorrect-version-added Documentation Error ``version_added`` for new option is incorrect
option-invalid-version-added Documentation Error ``version_added`` for new option is not a valid version number
parameter-invalid Documentation Error Argument in argument_spec is not a valid python identifier
parameter-invalid-elements Documentation Error Value for "elements" is valid only when value of "type" is ``list``
implied-parameter-type-mismatch Documentation Error Argument_spec implies ``type="str"`` but documentation defines it as different data type
parameter-type-not-in-doc Documentation Error Type value is defined in ``argument_spec`` but documentation doesn't specify a type
parameter-alias-repeated Parameters Error argument in argument_spec has at least one alias specified multiple times in aliases
parameter-alias-self Parameters Error argument in argument_spec is specified as its own alias
parameter-documented-multiple-times Documentation Error argument in argument_spec with aliases is documented multiple times
parameter-list-no-elements Parameters Error argument in argument_spec "type" is specified as ``list`` without defining "elements"
parameter-state-invalid-choice Parameters Error Argument ``state`` includes ``get``, ``list`` or ``info`` as a choice. Functionality should be in an ``_info`` or (if further conditions apply) ``_facts`` module.
python-syntax-error Syntax Error Python ``SyntaxError`` while parsing module
return-syntax-error Documentation Error ``RETURN`` is not valid YAML, ``RETURN`` fragments missing or invalid
subdirectory-missing-init Naming Error Ansible module subdirectories must contain an ``__init__.py``
try-except-missing-has Imports Warning Try/Except ``HAS_`` expression missing
undocumented-parameter Documentation Error Argument is listed in the argument_spec, but not documented in the module
unidiomatic-typecheck Syntax Error Type comparison using ``type()`` found. Use ``isinstance()`` instead
unknown-doc-fragment Documentation Warning Unknown pre-existing ``DOCUMENTATION`` error
use-boto3 Imports Error ``boto`` import found, new modules should use ``boto3``
use-fail-json-not-sys-exit Imports Error ``sys.exit()`` call found. Should be ``exit_json``/``fail_json``
use-module-utils-urls Imports Error ``requests`` import found, should use ``ansible.module_utils.urls`` instead
use-run-command-not-os-call Imports Error ``os.call`` used instead of ``module.run_command``
use-run-command-not-popen Imports Error ``subprocess.Popen`` used instead of ``module.run_command``
use-short-gplv3-license Documentation Error GPLv3 license header should be the :ref:`short form <copyright>` for new modules
============================================================ ================== ==================== =========================================================================================
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,617 |
grafana_dashboard: add dashboard report "send() got multiple values for keyword argument 'MESSAGE'"
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Adding a dashboard reports "send() got multiple values for keyword argument 'MESSAGE'"
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
grafana_dashboard
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
config file = /home/namnh/workspace/ansible-setup/ansible.cfg
configured module search path = ['/home/namnh/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/namnh/.venvs/ansible/lib/python3.6/site-packages/ansible
executable location = /home/namnh/.venvs/ansible/bin/ansible
python version = 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST(/home/namnh/workspace/ansible-setup/ansible.cfg) = ['/home/namnh/workspace/ansible-setup/inventory']
DEFAULT_STDOUT_CALLBACK(/home/namnh/workspace/ansible-setup/ansible.cfg) = yaml
INTERPRETER_PYTHON(/home/namnh/workspace/ansible-setup/ansible.cfg) = /usr/bin/python3
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
tasks:
- name: Import Grafana telegraf dashboard
ignore_errors: yes
become: false
grafana_dashboard:
grafana_url: "{{ grafana_url }}"
grafana_user: "{{ grafana_admin_user }}"
grafana_password: "{{ grafana_admin_password }}"
message: telegraf
overwrite: yes
state: present
path: "{{ playbook_dir }}/configs/telegraf-system.json"
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
New dashboard is added
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
Traceback (most recent call last):
File "/home/namnh/.ansible/tmp/ansible-tmp-1564113599.2774515-158744002373584/AnsiballZ_grafana_dashboard.py", line 114, in <module>
_ansiballz_main()
File "/home/namnh/.ansible/tmp/ansible-tmp-1564113599.2774515-158744002373584/AnsiballZ_grafana_dashboard.py", line 106, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/namnh/.ansible/tmp/ansible-tmp-1564113599.2774515-158744002373584/AnsiballZ_grafana_dashboard.py", line 49, in invoke_module
imp.load_module('__main__', mod, module, MOD_DESC)
File "/usr/lib/python3.6/imp.py", line 235, in load_module
return load_source(name, filename, file)
File "/usr/lib/python3.6/imp.py", line 170, in load_source
module = _exec(spec, sys.modules[name])
File "<frozen importlib._bootstrap>", line 618, in _exec
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/__main__.py", line 451, in <module>
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/__main__.py", line 408, in main
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/ansible_grafana_dashboard_payload.zip/ansible/module_utils/basic.py", line 691, in __init__
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/ansible_grafana_dashboard_payload.zip/ansible/module_utils/basic.py", line 1946, in _log_invocation
File "/tmp/ansible_grafana_dashboard_payload_apewcyne/ansible_grafana_dashboard_payload.zip/ansible/module_utils/basic.py", line 1904, in log
TypeError: send() got multiple values for keyword argument 'MESSAGE'
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
```
|
https://github.com/ansible/ansible/issues/59617
|
https://github.com/ansible/ansible/pull/60051
|
00bed0eb1c2ed22a7b56078b9e7911756182ac92
|
b6753b46a987a319ff062a8adcdcd4e0000353ed
| 2019-07-26T04:36:45Z |
python
| 2020-02-18T12:00:16Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
* Windows Server 2008 and 2008 R2 will no longer be supported or tested in the next Ansible release, see :ref:`windows_faq_server2008`.
* The :ref:`win_stat <win_stat_module>` module has removed the deprecated ``get_md5`` option and ``md5`` return value.
* The :ref:`win_psexec <win_psexec_module>` module has removed the deprecated ``extra_opts`` option.
Modules
=======
Modules removed
---------------
The following modules no longer exist:
* letsencrypt use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ldap_attr use :ref:`ldap_attrs <ldap_attrs_module>` instead.
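A minimal sketch of the switch, assuming an OpenLDAP ``cn=config`` entry whose ``olcSuffix`` attribute should be managed (the DN and values are hypothetical): ``ldap_attrs`` takes an ``attributes`` dictionary where ``ldap_attr`` took ``name`` and ``values``.

.. code:: yaml

   # Before: ldap_attr (deprecated, one attribute per task)
   - name: Set the directory suffix
     ldap_attr:
       dn: olcDatabase={1}mdb,cn=config
       name: olcSuffix
       values: dc=example,dc=com
       state: exact

   # After: ldap_attrs (one or more attributes per task)
   - name: Set the directory suffix
     ldap_attrs:
       dn: olcDatabase={1}mdb,cn=config
       attributes:
         olcSuffix: dc=example,dc=com
       state: exact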
The following functionality will be removed in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`openssl_csr <openssl_csr_module>` module's option ``version`` no longer supports values other than ``1`` (the only currently standardized CSR version).
* :ref:`docker_container <docker_container_module>`: the ``trust_image_content`` option will be removed. It has always been ignored by the module.
* :ref:`iam_managed_policy <iam_managed_policy_module>`: the ``fail_on_delete`` option will be removed. It has always been ignored by the module.
* :ref:`s3_lifecycle <s3_lifecycle_module>`: the ``requester_pays`` option will be removed. It has always been ignored by the module.
* :ref:`s3_sync <s3_sync_module>`: the ``retries`` option will be removed. It has always been ignored by the module.
* The return values ``err`` and ``out`` of :ref:`docker_stack <docker_stack_module>` have been deprecated. Use ``stdout`` and ``stderr`` from now on instead.
* :ref:`cloudformation <cloudformation_module>`: the ``template_format`` option will be removed. It has been ignored by the module since Ansible 2.3.
* :ref:`data_pipeline <data_pipeline_module>`: the ``version`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_eip <ec2_eip_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.3.
* :ref:`ec2_key <ec2_key_module>`: the ``wait`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_key <ec2_key_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_lc <ec2_lc_module>`: the ``associate_public_ip_address`` option will be removed. It has always been ignored by the module.
* :ref:`iam_policy <iam_policy_module>`: the ``policy_document`` option will be removed. To maintain the existing behavior use the ``policy_json`` option and read the file with the ``lookup`` plugin.
* :ref:`redfish_config <redfish_config_module>`: the ``bios_attribute_name`` and ``bios_attribute_value`` options will be removed. To maintain the existing behavior use the ``bios_attributes`` option instead.
* :ref:`clc_aa_policy <clc_aa_policy_module>`: the ``wait`` parameter will be removed. It has always been ignored by the module.
* :ref:`redfish_config <redfish_config_module>`, :ref:`redfish_command <redfish_command_module>`: the behavior of selecting the first System, Manager, or Chassis resource to modify when multiple are present will be removed. Use the new ``resource_id`` option to specify the target resource to modify.
* :ref:`win_domain_controller <win_domain_controller_module>`: the ``log_path`` option will be removed. This was undocumented and only related to debugging information for module development.
* :ref:`win_package <win_package_module>`: the ``username`` and ``password`` options will be removed. The same functionality can be done by using ``become: yes`` and ``become_flags: logon_type=new_credentials logon_flags=netcredentials_only`` on the task.
* :ref:`win_package <win_package_module>`: the ``ensure`` alias for the ``state`` option will be removed. Please use ``state`` instead of ``ensure``.
* :ref:`win_package <win_package_module>`: the ``productid`` alias for the ``product_id`` option will be removed. Please use ``product_id`` instead of ``productid``.
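For the :ref:`win_package <win_package_module>` changes above, moving to the canonical option names is usually a rename only; a minimal sketch with a hypothetical installer path and product ID:

.. code:: yaml

   - name: Install a package using the canonical option names
     win_package:
       path: C:\temp\installer.msi                              # hypothetical path
       product_id: '{00000000-0000-0000-0000-000000000000}'     # was 'productid'
       state: present                                           # was 'ensure'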
The following functionality will change in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`docker_container <docker_container_module>` module has a new option, ``container_default_behavior``, whose default value will change from ``compatibility`` to ``no_defaults``. Set it to an explicit value to avoid deprecation warnings (see the sketch after this list).
* The :ref:`docker_container <docker_container_module>` module's ``network_mode`` option will be set by default to the name of the first network in ``networks`` if at least one network is given and ``networks_cli_compatible`` is ``true`` (will be default from Ansible 2.12 on). Set to an explicit value to avoid deprecation warnings if you specify networks and set ``networks_cli_compatible`` to ``true``. The current default (not specifying it) is equivalent to the value ``default``.
* :ref:`ec2 <ec2_module>`: the ``group`` and ``group_id`` options will become mutually exclusive. Currently ``group_id`` is ignored if you pass both.
* :ref:`iam_policy <iam_policy_module>`: the default value for the ``skip_duplicates`` option will change from ``true`` to ``false``. To maintain the existing behavior explicitly set it to ``true``.
* :ref:`iam_role <iam_role_module>`: the default value of the ``purge_policies`` option (also known as ``purge_policy``) will change from ``true`` to ``false``.
* :ref:`elb_network_lb <elb_network_lb_module>`: the default behaviour for the ``state`` option will change from ``absent`` to ``present``. To maintain the existing behavior explicitly set state to ``absent``.
* :ref:`vmware_tag_info <vmware_tag_info_module>`: the module will not return ``tag_facts`` since it does not return multiple tags with the same name and different category id. To maintain the existing behavior use ``tag_info`` which is a list of tag metadata.
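A minimal sketch of the ``container_default_behavior`` change referenced above, assuming the new ``no_defaults`` behaviour is acceptable for the container (the image and name are placeholders):

.. code:: yaml

   - name: Start a container and opt in to the future default behaviour
     docker_container:
       name: web
       image: nginx:alpine
       state: started
       container_default_behavior: no_defaults   # silences the deprecation warning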
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ``vmware_dns_config`` use :ref:`vmware_host_dns <vmware_host_dns_module>` instead.
Noteworthy module changes
-------------------------
* The ``datacenter`` option has been removed from :ref:`vmware_guest_find <vmware_guest_find_module>`
* The options ``ip_address`` and ``subnet_mask`` have been removed from :ref:`vmware_vmkernel <vmware_vmkernel_module>`; use the suboptions ``ip_address`` and ``subnet_mask`` of the ``network`` option instead.
* Ansible modules created with ``add_file_common_args=True`` added a number of undocumented arguments which were mostly there to ease implementing certain action plugins. The undocumented arguments ``src``, ``follow``, ``force``, ``content``, ``backup``, ``remote_src``, ``regexp``, ``delimiter``, and ``directory_mode`` are now no longer added. Modules relying on these options to be added need to specify them by themselves.
* The ``AWSRetry`` decorator no longer catches ``NotFound`` exceptions by default. ``NotFound`` exceptions need to be explicitly added using ``catch_extra_error_codes``. Some AWS modules may see an increase in transient failures due to AWS's eventual consistency model.
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in the :ref:`pacman <pacman_module>` module has been removed; use ``extra_args=--recursive`` instead.
* :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module no longer requires the VM name, which was a required parameter for releases prior to Ansible 2.10.
* :ref:`zabbix_action <zabbix_action_module>` no longer requires ``esc_period`` and ``event_source`` arguments when ``state=absent``.
* :ref:`zabbix_proxy <zabbix_proxy_module>` deprecates ``interface`` sub-options ``type`` and ``main`` when proxy type is set to passive via ``status=passive``. Make sure these suboptions are removed from your playbook as they were never supported by Zabbix in the first place.
* :ref:`gitlab_user <gitlab_user_module>` no longer requires ``name``, ``email`` and ``password`` arguments when ``state=absent``.
* :ref:`win_pester <win_pester_module>` no longer runs every ``*.ps1`` file in the specified directory, as that could execute potentially unknown scripts. It now follows Pester's own default of only running files that match ``*.tests.ps1``.
* :ref:`win_find <win_find_module>` has been refactored to better match the behaviour of the ``find`` module. Here is what has changed:
* When the directory specified by ``paths`` does not exist or is a file, it will no longer fail and will just warn the user
* Junction points are no longer reported as ``islnk``; use ``isjunction`` to properly report these files. This behaviour matches the :ref:`win_stat <win_stat_module>` module
* Directories no longer return a ``size``; this matches the ``stat`` and ``find`` behaviour and avoids the difficulties in correctly reporting the size of a directory
* :ref:`docker_container <docker_container_module>` no longer passes information on non-anonymous volumes or binds as ``Volumes`` to the Docker daemon. This increases compatibility with the ``docker`` CLI program. Note that if you specify ``volumes: strict`` in ``comparisons``, this could cause existing containers created with docker_container from Ansible 2.9 or earlier to restart.
* :ref:`docker_container <docker_container_module>`'s support for port ranges was adjusted to be more compatible with the ``docker`` command line utility: a one-port container range combined with a multiple-port host range will no longer result in only the first host port being used, but in the whole range being passed to Docker so that a free port in that range will be used.
* :ref:`purefb_fs <purefb_fs_module>` no longer supports the deprecated ``nfs`` option. This has been superseded by ``nfsv3``.
* :ref:`nxos_igmp_interface <nxos_igmp_interface_module>` no longer supports the deprecated ``oif_prefix`` and ``oif_source`` options. These have been superseded by ``oif_ps``.
* :ref:`aws_s3 <aws_s3_module>` can now delete versioned buckets even when they are not empty - set mode to delete to delete a versioned bucket and everything in it.
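As a sketch of the new :ref:`aws_s3 <aws_s3_module>` behaviour, deleting a versioned bucket together with all object versions now only needs ``mode: delete`` (the bucket name below is a placeholder):

.. code:: yaml

   - name: Delete a versioned bucket and everything in it
     aws_s3:
       bucket: my-versioned-bucket   # hypothetical bucket name
       mode: delete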
Plugins
=======
Lookup plugin names case-sensitivity
------------------------------------
* Prior to Ansible ``2.10`` lookup plugin names passed in as an argument to the ``lookup()`` function were treated as case-insensitive as opposed to lookups invoked via ``with_<lookup_name>``. ``2.10`` brings consistency to ``lookup()`` and ``with_`` to be both case-sensitive.
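A minimal sketch of the difference: ``with_<lookup_name>`` was always case-sensitive, and ``lookup()`` now matches that behaviour, so mixed-case plugin names have to be lowercased.

.. code:: yaml

   - name: Works in 2.10 - the plugin name is lowercase
     debug:
       msg: "{{ lookup('file', '/etc/hostname') }}"

   - name: Worked before 2.10, fails in 2.10 - lookup() is no longer case-insensitive
     debug:
       msg: "{{ lookup('File', '/etc/hostname') }}"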
Noteworthy plugin changes
-------------------------
* The ``hashi_vault`` lookup plugin now returns only the latest version when using the KV v2 secrets engine. Previously, it returned all versions of the secret, which required additional steps to extract and filter the desired version (see the sketch after this list).
* Some undocumented arguments from ``FILE_COMMON_ARGUMENTS`` have been removed; plugins using these, in particular action plugins, need to be adjusted. The undocumented arguments which were removed are ``src``, ``follow``, ``force``, ``content``, ``backup``, ``remote_src``, ``regexp``, ``delimiter``, and ``directory_mode``.
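A hedged sketch of the ``hashi_vault`` change referenced above, assuming a KV v2 engine mounted at ``secret/`` with a secret at ``hello`` (the URL, token, and path are placeholders and depend on your Vault setup); the lookup now returns data from the latest version only:

.. code:: yaml

   - name: Read the latest version of a KV v2 secret
     debug:
       msg: "{{ lookup('hashi_vault', 'secret=secret/data/hello:value token=s.placeholder url=https://vault.example.com:8200') }}"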
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|