status: stringclasses (1 value)
repo_name: stringclasses (31 values)
repo_url: stringclasses (31 values)
issue_id: int64 (1 to 104k)
title: stringlengths (4 to 369)
body: stringlengths (0 to 254k, nullable ⌀)
issue_url: stringlengths (37 to 56)
pull_url: stringlengths (37 to 54)
before_fix_sha: stringlengths (40 to 40)
after_fix_sha: stringlengths (40 to 40)
report_datetime: timestamp[us, tz=UTC]
language: stringclasses (5 values)
commit_datetime: timestamp[us, tz=UTC]
updated_file: stringlengths (4 to 188)
file_content: stringlengths (0 to 5.12M)
closed
ansible/ansible
https://github.com/ansible/ansible
64,770
gitlab modules : user/password method is deprecated
##### SUMMARY The `python-gitlab` library released a new version, [v1.13.0](https://github.com/python-gitlab/python-gitlab/releases/tag/v1.13.0). User/password authentication has been removed completely, causing all Ansible GitLab modules to fail. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME gitlab_group gitlab_deploy_key gitlab_hook gitlab_project gitlab_project_variable gitlab_runner gitlab_user ##### ANSIBLE VERSION All versions
https://github.com/ansible/ansible/issues/64770
https://github.com/ansible/ansible/pull/64989
bc479fcafc3e73cb660577c6a4ef1480277bbd4f
4e6fa59ec1b5f31332b5f19f6e51607ada6345dd
2019-11-13T09:59:06Z
python
2019-11-19T10:00:34Z
lib/ansible/modules/source_control/gitlab/gitlab_hook.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright: (c) 2019, Guillaume Martinez ([email protected]) # Copyright: (c) 2018, Marcus Watkins <[email protected]> # Based on code: # Copyright: (c) 2013, Phillip Gentry <[email protected]> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} DOCUMENTATION = ''' --- module: gitlab_hook short_description: Manages GitLab project hooks. description: - Adds, updates and removes project hook version_added: "2.6" author: - Marcus Watkins (@marwatk) - Guillaume Martinez (@Lunik) requirements: - python >= 2.7 - python-gitlab python module extends_documentation_fragment: - auth_basic options: api_token: description: - GitLab token for logging in. version_added: "2.8" type: str project: description: - Id or Full path of the project in the form of group/name. required: true type: str hook_url: description: - The url that you want GitLab to post to, this is used as the primary key for updates and deletion. required: true type: str state: description: - When C(present) the hook will be updated to match the input or created if it doesn't exist. - When C(absent) hook will be deleted if it exists. required: true default: present type: str choices: [ "present", "absent" ] push_events: description: - Trigger hook on push events. type: bool default: yes issues_events: description: - Trigger hook on issues events. type: bool default: no merge_requests_events: description: - Trigger hook on merge requests events. type: bool default: no tag_push_events: description: - Trigger hook on tag push events. type: bool default: no note_events: description: - Trigger hook on note events or when someone adds a comment. type: bool default: no job_events: description: - Trigger hook on job events. 
type: bool default: no pipeline_events: description: - Trigger hook on pipeline events. type: bool default: no wiki_page_events: description: - Trigger hook on wiki events. type: bool default: no hook_validate_certs: description: - Whether GitLab will do SSL verification when triggering the hook. type: bool default: no aliases: [ enable_ssl_verification ] token: description: - Secret token to validate hook messages at the receiver. - If this is present it will always result in a change as it cannot be retrieved from GitLab. - Will show up in the X-GitLab-Token HTTP request header. required: false type: str ''' EXAMPLES = ''' - name: "Adding a project hook" gitlab_hook: api_url: https://gitlab.example.com/ api_token: "{{ access_token }}" project: "my_group/my_project" hook_url: "https://my-ci-server.example.com/gitlab-hook" state: present push_events: yes tag_push_events: yes hook_validate_certs: no token: "my-super-secret-token-that-my-ci-server-will-check" - name: "Delete the previous hook" gitlab_hook: api_url: https://gitlab.example.com/ api_token: "{{ access_token }}" project: "my_group/my_project" hook_url: "https://my-ci-server.example.com/gitlab-hook" state: absent - name: "Delete a hook by numeric project id" gitlab_hook: api_url: https://gitlab.example.com/ api_token: "{{ access_token }}" project: 10 hook_url: "https://my-ci-server.example.com/gitlab-hook" state: absent ''' RETURN = ''' msg: description: Success or failure message returned: always type: str sample: "Success" result: description: json parsed response from the server returned: always type: dict error: description: the error message returned by the GitLab API returned: failed type: str sample: "400: path is already in use" hook: description: API object returned: always type: dict ''' import re import traceback GITLAB_IMP_ERR = None try: import gitlab HAS_GITLAB_PACKAGE = True except Exception: GITLAB_IMP_ERR = traceback.format_exc() HAS_GITLAB_PACKAGE = False from ansible.module_utils.api 
import basic_auth_argument_spec from ansible.module_utils.basic import AnsibleModule, missing_required_lib from ansible.module_utils._text import to_native from ansible.module_utils.gitlab import findProject class GitLabHook(object): def __init__(self, module, gitlab_instance): self._module = module self._gitlab = gitlab_instance self.hookObject = None ''' @param project Project Object @param hook_url Url to call on event @param options Attributes of the hook ''' def createOrUpdateHook(self, project, hook_url, options): changed = False # Because we have already called existsHook in main() if self.hookObject is None: hook = self.createHook(project, { 'url': hook_url, 'push_events': options['push_events'], 'issues_events': options['issues_events'], 'merge_requests_events': options['merge_requests_events'], 'tag_push_events': options['tag_push_events'], 'note_events': options['note_events'], 'job_events': options['job_events'], 'pipeline_events': options['pipeline_events'], 'wiki_page_events': options['wiki_page_events'], 'enable_ssl_verification': options['enable_ssl_verification'], 'token': options['token']}) changed = True else: changed, hook = self.updateHook(self.hookObject, { 'push_events': options['push_events'], 'issues_events': options['issues_events'], 'merge_requests_events': options['merge_requests_events'], 'tag_push_events': options['tag_push_events'], 'note_events': options['note_events'], 'job_events': options['job_events'], 'pipeline_events': options['pipeline_events'], 'wiki_page_events': options['wiki_page_events'], 'enable_ssl_verification': options['enable_ssl_verification'], 'token': options['token']}) self.hookObject = hook if changed: if self._module.check_mode: self._module.exit_json(changed=True, msg="Successfully created or updated the hook %s" % hook_url) try: hook.save() except Exception as e: self._module.fail_json(msg="Failed to update hook: %s " % e) return True else: return False ''' @param 
project Project Object @param arguments Attributes of the hook ''' def createHook(self, project, arguments): if self._module.check_mode: return True hook = project.hooks.create(arguments) return hook ''' @param hook Hook Object @param arguments Attributes of the hook ''' def updateHook(self, hook, arguments): changed = False for arg_key, arg_value in arguments.items(): if arguments[arg_key] is not None: if getattr(hook, arg_key) != arguments[arg_key]: setattr(hook, arg_key, arguments[arg_key]) changed = True return (changed, hook) ''' @param project Project object @param hook_url Url to call on event ''' def findHook(self, project, hook_url): hooks = project.hooks.list() for hook in hooks: if (hook.url == hook_url): return hook ''' @param project Project object @param hook_url Url to call on event ''' def existsHook(self, project, hook_url): # When project exists, object will be stored in self.projectObject. hook = self.findHook(project, hook_url) if hook: self.hookObject = hook return True return False def deleteHook(self): if self._module.check_mode: return True return self.hookObject.delete() def main(): argument_spec = basic_auth_argument_spec() argument_spec.update(dict( api_token=dict(type='str', no_log=True), state=dict(type='str', default="present", choices=["absent", "present"]), project=dict(type='str', required=True), hook_url=dict(type='str', required=True), push_events=dict(type='bool', default=True), issues_events=dict(type='bool', default=False), merge_requests_events=dict(type='bool', default=False), tag_push_events=dict(type='bool', default=False), note_events=dict(type='bool', default=False), job_events=dict(type='bool', default=False), pipeline_events=dict(type='bool', default=False), wiki_page_events=dict(type='bool', default=False), hook_validate_certs=dict(type='bool', default=False, aliases=['enable_ssl_verification']), token=dict(type='str', no_log=True), )) module = AnsibleModule( argument_spec=argument_spec, mutually_exclusive=[ 
['api_username', 'api_token'], ['api_password', 'api_token'] ], required_together=[ ['api_username', 'api_password'] ], required_one_of=[ ['api_username', 'api_token'] ], supports_check_mode=True, ) gitlab_url = re.sub('/api.*', '', module.params['api_url']) validate_certs = module.params['validate_certs'] gitlab_user = module.params['api_username'] gitlab_password = module.params['api_password'] gitlab_token = module.params['api_token'] state = module.params['state'] project_identifier = module.params['project'] hook_url = module.params['hook_url'] push_events = module.params['push_events'] issues_events = module.params['issues_events'] merge_requests_events = module.params['merge_requests_events'] tag_push_events = module.params['tag_push_events'] note_events = module.params['note_events'] job_events = module.params['job_events'] pipeline_events = module.params['pipeline_events'] wiki_page_events = module.params['wiki_page_events'] enable_ssl_verification = module.params['hook_validate_certs'] hook_token = module.params['token'] if not HAS_GITLAB_PACKAGE: module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR) try: gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password, private_token=gitlab_token, api_version=4) gitlab_instance.auth() except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e: module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e)) except (gitlab.exceptions.GitlabHttpError) as e: module.fail_json(msg="Failed to connect to GitLab server: %s. \ GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." 
% to_native(e)) gitlab_hook = GitLabHook(module, gitlab_instance) project = findProject(gitlab_instance, project_identifier) if project is None: module.fail_json(msg="Failed to create hook: project %s doesn't exist" % project_identifier) hook_exists = gitlab_hook.existsHook(project, hook_url) if state == 'absent': if hook_exists: gitlab_hook.deleteHook() module.exit_json(changed=True, msg="Successfully deleted hook %s" % hook_url) else: module.exit_json(changed=False, msg="Hook deleted or does not exist") if state == 'present': if gitlab_hook.createOrUpdateHook(project, hook_url, { "push_events": push_events, "issues_events": issues_events, "merge_requests_events": merge_requests_events, "tag_push_events": tag_push_events, "note_events": note_events, "job_events": job_events, "pipeline_events": pipeline_events, "wiki_page_events": wiki_page_events, "enable_ssl_verification": enable_ssl_verification, "token": hook_token}): module.exit_json(changed=True, msg="Successfully created or updated the hook %s" % hook_url, hook=gitlab_hook.hookObject._attrs) else: module.exit_json(changed=False, msg="No need to update the hook %s" % hook_url, hook=gitlab_hook.hookObject._attrs) if __name__ == '__main__': main()
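The row above records the gitlab_hook module failing because `gitlab.Gitlab()` is still handed `email`/`password`, which python-gitlab v1.13.0 no longer accepts. A minimal sketch of the credential selection the fix moves toward; `gitlab_auth_kwargs` is a hypothetical helper name, not part of python-gitlab or Ansible:

```python
def gitlab_auth_kwargs(url, token=None, username=None, password=None, validate_certs=True):
    """Build keyword arguments for gitlab.Gitlab().

    python-gitlab >= 1.13.0 removed user/password authentication, so a
    private or personal access token is the only supported credential.
    """
    if token is None:
        raise ValueError(
            "python-gitlab 1.13.0 removed user/password authentication; "
            "an api_token is required"
        )
    # username/password are accepted only to mirror the old module
    # interface; they are intentionally ignored here.
    return {
        "url": url,
        "private_token": token,
        "ssl_verify": validate_certs,
        "api_version": 4,
    }
```

The sketch fails fast with an actionable message instead of letting python-gitlab raise a `TypeError` deep inside the constructor, which is roughly what users of the affected modules saw.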
closed
ansible/ansible
https://github.com/ansible/ansible
64,770
gitlab modules : user/password method is deprecated
##### SUMMARY The `python-gitlab` library released a new version, [v1.13.0](https://github.com/python-gitlab/python-gitlab/releases/tag/v1.13.0). User/password authentication has been removed completely, causing all Ansible GitLab modules to fail. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME gitlab_group gitlab_deploy_key gitlab_hook gitlab_project gitlab_project_variable gitlab_runner gitlab_user ##### ANSIBLE VERSION All versions
https://github.com/ansible/ansible/issues/64770
https://github.com/ansible/ansible/pull/64989
bc479fcafc3e73cb660577c6a4ef1480277bbd4f
4e6fa59ec1b5f31332b5f19f6e51607ada6345dd
2019-11-13T09:59:06Z
python
2019-11-19T10:00:34Z
lib/ansible/modules/source_control/gitlab/gitlab_project.py
# -*- coding: utf-8 -*- # Copyright: (c) 2019, Guillaume Martinez ([email protected]) # Copyright: (c) 2015, Werner Dijkerman ([email protected]) # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} DOCUMENTATION = ''' --- module: gitlab_project short_description: Creates/updates/deletes GitLab Projects description: - When the project does not exist in GitLab, it will be created. - When the project does exist and state=absent, the project will be deleted. - When changes are made to the project, the project will be updated. version_added: "2.1" author: - Werner Dijkerman (@dj-wasabi) - Guillaume Martinez (@Lunik) requirements: - python >= 2.7 - python-gitlab python module extends_documentation_fragment: - auth_basic options: api_token: description: - GitLab token for logging in. type: str group: description: - Id or the full path of the group to which this project belongs. type: str name: description: - The name of the project required: true type: str path: description: - The path of the project you want to create, this will be server_url/<group>/path. - If not supplied, name will be used. type: str description: description: - A description for the project. type: str issues_enabled: description: - Whether you want to create issues or not. - Possible values are true and false. type: bool default: yes merge_requests_enabled: description: - If merge requests can be made or not. - Possible values are true and false. type: bool default: yes wiki_enabled: description: - If a wiki for this project should be available or not. - Possible values are true and false. type: bool default: yes snippets_enabled: description: - If creating snippets should be available or not. - Possible values are true and false. 
type: bool default: yes visibility: description: - Private. Project access must be granted explicitly for each user. - Internal. The project can be cloned by any logged in user. - Public. The project can be cloned without any authentication. default: private type: str choices: ["private", "internal", "public"] aliases: - visibility_level import_url: description: - Git repository which will be imported into gitlab. - GitLab server needs read access to this git repository. required: false type: str state: description: - create or delete project. - Possible values are present and absent. default: present type: str choices: ["present", "absent"] ''' EXAMPLES = ''' - name: Delete GitLab Project gitlab_project: api_url: https://gitlab.example.com/ api_token: "{{ access_token }}" validate_certs: False name: my_first_project state: absent delegate_to: localhost - name: Create GitLab Project in group Ansible gitlab_project: api_url: https://gitlab.example.com/ validate_certs: True api_username: dj-wasabi api_password: "MySecretPassword" name: my_first_project group: ansible issues_enabled: False wiki_enabled: True snippets_enabled: True import_url: http://git.example.com/example/lab.git state: present delegate_to: localhost ''' RETURN = ''' msg: description: Success or failure message returned: always type: str sample: "Success" result: description: json parsed response from the server returned: always type: dict error: description: the error message returned by the GitLab API returned: failed type: str sample: "400: path is already in use" project: description: API object returned: always type: dict ''' import traceback GITLAB_IMP_ERR = None try: import gitlab HAS_GITLAB_PACKAGE = True except Exception: GITLAB_IMP_ERR = traceback.format_exc() HAS_GITLAB_PACKAGE = False from ansible.module_utils.api import basic_auth_argument_spec from ansible.module_utils.basic import AnsibleModule, missing_required_lib from ansible.module_utils._text import to_native from 
ansible.module_utils.gitlab import findGroup, findProject class GitLabProject(object): def __init__(self, module, gitlab_instance): self._module = module self._gitlab = gitlab_instance self.projectObject = None ''' @param project_name Name of the project @param namespace Namespace Object (User or Group) @param options Options of the project ''' def createOrUpdateProject(self, project_name, namespace, options): changed = False # Because we have already call userExists in main() if self.projectObject is None: project = self.createProject(namespace, { 'name': project_name, 'path': options['path'], 'description': options['description'], 'issues_enabled': options['issues_enabled'], 'merge_requests_enabled': options['merge_requests_enabled'], 'wiki_enabled': options['wiki_enabled'], 'snippets_enabled': options['snippets_enabled'], 'visibility': options['visibility'], 'import_url': options['import_url']}) changed = True else: changed, project = self.updateProject(self.projectObject, { 'name': project_name, 'description': options['description'], 'issues_enabled': options['issues_enabled'], 'merge_requests_enabled': options['merge_requests_enabled'], 'wiki_enabled': options['wiki_enabled'], 'snippets_enabled': options['snippets_enabled'], 'visibility': options['visibility']}) self.projectObject = project if changed: if self._module.check_mode: self._module.exit_json(changed=True, msg="Successfully created or updated the project %s" % project_name) try: project.save() except Exception as e: self._module.fail_json(msg="Failed update project: %s " % e) return True else: return False ''' @param namespace Namespace Object (User or Group) @param arguments Attributes of the project ''' def createProject(self, namespace, arguments): if self._module.check_mode: return True arguments['namespace_id'] = namespace.id try: project = self._gitlab.projects.create(arguments) except (gitlab.exceptions.GitlabCreateError) as e: self._module.fail_json(msg="Failed to create project: %s " % 
to_native(e)) return project ''' @param project Project Object @param arguments Attributes of the project ''' def updateProject(self, project, arguments): changed = False for arg_key, arg_value in arguments.items(): if arguments[arg_key] is not None: if getattr(project, arg_key) != arguments[arg_key]: setattr(project, arg_key, arguments[arg_key]) changed = True return (changed, project) def deleteProject(self): if self._module.check_mode: return True project = self.projectObject return project.delete() ''' @param namespace User/Group object @param name Name of the project ''' def existsProject(self, namespace, path): # When project exists, object will be stored in self.projectObject. project = findProject(self._gitlab, namespace.full_path + '/' + path) if project: self.projectObject = project return True return False def main(): argument_spec = basic_auth_argument_spec() argument_spec.update(dict( api_token=dict(type='str', no_log=True), group=dict(type='str'), name=dict(type='str', required=True), path=dict(type='str'), description=dict(type='str'), issues_enabled=dict(type='bool', default=True), merge_requests_enabled=dict(type='bool', default=True), wiki_enabled=dict(type='bool', default=True), snippets_enabled=dict(default=True, type='bool'), visibility=dict(type='str', default="private", choices=["internal", "private", "public"], aliases=["visibility_level"]), import_url=dict(type='str'), state=dict(type='str', default="present", choices=["absent", "present"]), )) module = AnsibleModule( argument_spec=argument_spec, mutually_exclusive=[ ['api_username', 'api_token'], ['api_password', 'api_token'], ], required_together=[ ['api_username', 'api_password'], ], required_one_of=[ ['api_username', 'api_token'] ], supports_check_mode=True, ) gitlab_url = module.params['api_url'] validate_certs = module.params['validate_certs'] gitlab_user = module.params['api_username'] gitlab_password = module.params['api_password'] gitlab_token = module.params['api_token'] 
group_identifier = module.params['group'] project_name = module.params['name'] project_path = module.params['path'] project_description = module.params['description'] issues_enabled = module.params['issues_enabled'] merge_requests_enabled = module.params['merge_requests_enabled'] wiki_enabled = module.params['wiki_enabled'] snippets_enabled = module.params['snippets_enabled'] visibility = module.params['visibility'] import_url = module.params['import_url'] state = module.params['state'] if not HAS_GITLAB_PACKAGE: module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR) try: gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password, private_token=gitlab_token, api_version=4) gitlab_instance.auth() except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e: module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e)) except (gitlab.exceptions.GitlabHttpError) as e: module.fail_json(msg="Failed to connect to GitLab server: %s. \ GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e)) # Set project_path to project_name if it is empty. 
if project_path is None: project_path = project_name.replace(" ", "_") gitlab_project = GitLabProject(module, gitlab_instance) if group_identifier: group = findGroup(gitlab_instance, group_identifier) if group is None: module.fail_json(msg="Failed to create project: group %s doesn't exist" % group_identifier) namespace = gitlab_instance.namespaces.get(group.id) project_exists = gitlab_project.existsProject(namespace, project_path) else: user = gitlab_instance.users.list(username=gitlab_instance.user.username)[0] namespace = gitlab_instance.namespaces.get(user.id) project_exists = gitlab_project.existsProject(namespace, project_path) if state == 'absent': if project_exists: gitlab_project.deleteProject() module.exit_json(changed=True, msg="Successfully deleted project %s" % project_name) else: module.exit_json(changed=False, msg="Project deleted or does not exist") if state == 'present': if gitlab_project.createOrUpdateProject(project_name, namespace, { "path": project_path, "description": project_description, "issues_enabled": issues_enabled, "merge_requests_enabled": merge_requests_enabled, "wiki_enabled": wiki_enabled, "snippets_enabled": snippets_enabled, "visibility": visibility, "import_url": import_url}): module.exit_json(changed=True, msg="Successfully created or updated the project %s" % project_name, project=gitlab_project.projectObject._attrs) else: module.exit_json(changed=False, msg="No need to update the project %s" % project_name, project=gitlab_project.projectObject._attrs) if __name__ == '__main__': main()
closed
ansible/ansible
https://github.com/ansible/ansible
64,770
gitlab modules : user/password method is deprecated
##### SUMMARY The `python-gitlab` library released a new version, [v1.13.0](https://github.com/python-gitlab/python-gitlab/releases/tag/v1.13.0). User/password authentication has been removed completely, causing all Ansible GitLab modules to fail. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME gitlab_group gitlab_deploy_key gitlab_hook gitlab_project gitlab_project_variable gitlab_runner gitlab_user ##### ANSIBLE VERSION All versions
https://github.com/ansible/ansible/issues/64770
https://github.com/ansible/ansible/pull/64989
bc479fcafc3e73cb660577c6a4ef1480277bbd4f
4e6fa59ec1b5f31332b5f19f6e51607ada6345dd
2019-11-13T09:59:06Z
python
2019-11-19T10:00:34Z
lib/ansible/modules/source_control/gitlab/gitlab_project_variable.py
# -*- coding: utf-8 -*- # Copyright: (c) 2019, Markus Bergholz ([email protected]) # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} DOCUMENTATION = ''' module: gitlab_project_variable short_description: Creates/updates/deletes GitLab Projects Variables description: - When a project variable does not exist, it will be created. - When a project variable does exist, its value will be updated when the values are different. - Variables which exist in the GitLab project but are not mentioned in the playbook stay untouched when I(purge) is C(false) and are deleted when I(purge) is C(true). version_added: "2.9" author: - "Markus Bergholz (@markuman)" requirements: - python >= 2.7 - python-gitlab python module extends_documentation_fragment: - auth_basic options: state: description: - Create or delete project variable. - Possible values are present and absent. default: present type: str choices: ["present", "absent"] api_token: description: - GitLab access token with API permissions. required: true type: str project: description: - The path and name of the project. required: true type: str purge: description: - When set to true, all variables which are not mentioned in the task will be deleted. default: false type: bool vars: description: - A list of key value pairs. 
default: {} type: dict ''' EXAMPLES = ''' - name: Set or update some CI/CD variables gitlab_project_variable: api_url: https://gitlab.com api_token: secret_access_token project: markuman/dotfiles purge: false vars: ACCESS_KEY_ID: abc123 SECRET_ACCESS_KEY: 321cba - name: Delete one variable gitlab_project_variable: api_url: https://gitlab.com api_token: secret_access_token project: markuman/dotfiles state: absent vars: ACCESS_KEY_ID: abc123 ''' RETURN = ''' project_variable: description: Four lists of the variablenames which were added, updated, removed or exist. returned: always type: dict contains: added: description: A list of variables which were created. returned: always type: list sample: "['ACCESS_KEY_ID', 'SECRET_ACCESS_KEY']" untouched: description: A list of variables which exist. returned: always type: list sample: "['ACCESS_KEY_ID', 'SECRET_ACCESS_KEY']" removed: description: A list of variables which were deleted. returned: always type: list sample: "['ACCESS_KEY_ID', 'SECRET_ACCESS_KEY']" updated: description: A list of variables whose values were changed. 
returned: always type: list sample: "['ACCESS_KEY_ID', 'SECRET_ACCESS_KEY']" ''' import traceback from ansible.module_utils.basic import AnsibleModule, missing_required_lib from ansible.module_utils._text import to_native from ansible.module_utils.api import basic_auth_argument_spec GITLAB_IMP_ERR = None try: import gitlab HAS_GITLAB_PACKAGE = True except Exception: GITLAB_IMP_ERR = traceback.format_exc() HAS_GITLAB_PACKAGE = False class GitlabProjectVariables(object): def __init__(self, module, gitlab_instance): self.repo = gitlab_instance self.project = self.get_project(module.params['project']) self._module = module def get_project(self, project_name): return self.repo.projects.get(project_name) def list_all_project_variables(self): return self.project.variables.list() def create_variable(self, key, value): if self._module.check_mode: return return self.project.variables.create({"key": key, "value": value}) def update_variable(self, var, value): if var.value == value: return False if self._module.check_mode: return True var.value = value var.save() return True def delete_variable(self, key): if self._module.check_mode: return return self.project.variables.delete(key) def native_python_main(this_gitlab, purge, var_list, state): change = False return_value = dict(added=list(), updated=list(), removed=list(), untouched=list()) gitlab_keys = this_gitlab.list_all_project_variables() existing_variables = [x.get_id() for x in gitlab_keys] for key in var_list: if key in existing_variables: index = existing_variables.index(key) existing_variables[index] = None if state == 'present': single_change = this_gitlab.update_variable( gitlab_keys[index], var_list[key]) change = single_change or change if single_change: return_value['updated'].append(key) else: return_value['untouched'].append(key) elif state == 'absent': this_gitlab.delete_variable(key) change = True return_value['removed'].append(key) elif key not in existing_variables and state == 'present': 
this_gitlab.create_variable(key, var_list[key]) change = True return_value['added'].append(key) existing_variables = list(filter(None, existing_variables)) if purge: for item in existing_variables: this_gitlab.delete_variable(item) change = True return_value['removed'].append(item) else: return_value['untouched'].extend(existing_variables) return change, return_value def main(): argument_spec = basic_auth_argument_spec() argument_spec.update( api_token=dict(type='str', required=True, no_log=True), project=dict(type='str', required=True), purge=dict(type='bool', required=False, default=False), vars=dict(type='dict', required=False, default=dict()), state=dict(type='str', default="present", choices=["absent", "present"]) ) module = AnsibleModule( argument_spec=argument_spec, mutually_exclusive=[ ['api_username', 'api_token'], ['api_password', 'api_token'], ], required_together=[ ['api_username', 'api_password'], ], required_one_of=[ ['api_username', 'api_token'] ], supports_check_mode=True ) api_url = module.params['api_url'] gitlab_token = module.params['api_token'] purge = module.params['purge'] var_list = module.params['vars'] state = module.params['state'] if not HAS_GITLAB_PACKAGE: module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR) try: gitlab_instance = gitlab.Gitlab(url=api_url, private_token=gitlab_token) gitlab_instance.auth() except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e: module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e)) except (gitlab.exceptions.GitlabHttpError) as e: module.fail_json(msg="Failed to connect to GitLab server: %s. 
\ GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2" % to_native(e)) this_gitlab = GitlabProjectVariables(module=module, gitlab_instance=gitlab_instance) change, return_value = native_python_main(this_gitlab, purge, var_list, state) module.exit_json(changed=change, project_variable=return_value) if __name__ == '__main__': main()
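The gitlab_project_variable module above declares api_url, api_token, project, vars, purge and state in its argument_spec. A hedged playbook sketch exercising those parameters (the server URL, token variable, project path and variable values are placeholders for illustration, not taken from the source):

```yaml
# Hypothetical task using the gitlab_project_variable module shown above.
# All concrete values here are illustrative only.
- name: Ensure CI variables exist on a project
  gitlab_project_variable:
    api_url: https://gitlab.example.com/
    api_token: "{{ access_token }}"
    project: my_group/my_project
    state: present
    purge: false          # keep existing variables not listed under vars
    vars:
      ACCESS_KEY_ID: abc123
      SECRET_ACCESS_KEY: def456
```

With purge set to true, the module's native_python_main would instead delete any project variables not named under vars, as the leftover-handling branch above shows.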
closed
ansible/ansible
https://github.com/ansible/ansible
64,770
gitlab modules: user/password method is deprecated
##### SUMMARY The `python-gitlab` library released a new version [v1.13.0](https://github.com/python-gitlab/python-gitlab/releases/tag/v1.13.0). User/password authentication has been removed completely, causing all Ansible GitLab modules to fail... ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME gitlab_group gitlab_deploy_key gitlab_hook gitlab_project gitlab_project_variable gitlab_runner gitlab_user ##### ANSIBLE VERSION All versions
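The breaking change comes down to how credentials reach the API: python-gitlab >= 1.13.0 no longer builds a session from email/password, so callers must supply a token instead. A minimal stdlib-only sketch of the two token header styles the GitLab API accepts (the token values and the helper function are made up for illustration):

```python
# GitLab accepts either a private/personal access token via the
# PRIVATE-TOKEN header, or an OAuth token via an Authorization header.
# Email/password session auth is gone in python-gitlab >= 1.13.0.
def auth_headers(private_token=None, oauth_token=None):
    """Build request headers for one of the two supported token styles."""
    if private_token is not None:
        return {"PRIVATE-TOKEN": private_token}
    if oauth_token is not None:
        return {"Authorization": "Bearer %s" % oauth_token}
    raise ValueError("a token is required; user/password auth was removed")

print(auth_headers(private_token="abc123"))
# → {'PRIVATE-TOKEN': 'abc123'}
```

This is why the affected modules' `gitlab.Gitlab(..., email=..., password=...)` calls started failing: the keyword arguments still existed in older releases but the session endpoint behind them was dropped.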
https://github.com/ansible/ansible/issues/64770
https://github.com/ansible/ansible/pull/64989
bc479fcafc3e73cb660577c6a4ef1480277bbd4f
4e6fa59ec1b5f31332b5f19f6e51607ada6345dd
2019-11-13T09:59:06Z
python
2019-11-19T10:00:34Z
lib/ansible/modules/source_control/gitlab/gitlab_runner.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright: (c) 2019, Guillaume Martinez ([email protected]) # Copyright: (c) 2018, Samy Coenen <[email protected]> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} DOCUMENTATION = ''' --- module: gitlab_runner short_description: Create, modify and delete GitLab Runners. description: - Register, update and delete runners with the GitLab API. - All operations are performed using the GitLab API v4. - For details, consult the full API documentation at U(https://docs.gitlab.com/ee/api/runners.html). - A valid private API token is required for all operations. You can create as many tokens as you like using the GitLab web interface at U(https://$GITLAB_URL/profile/personal_access_tokens). - A valid registration token is required for registering a new runner. To create shared runners, you need to ask your administrator to give you this token. It can be found at U(https://$GITLAB_URL/admin/runners/). notes: - To create a new runner at least the C(api_token), C(description) and C(api_url) options are required. - Runners need to have unique descriptions. version_added: 2.8 author: - Samy Coenen (@SamyCoenen) - Guillaume Martinez (@Lunik) requirements: - python >= 2.7 - python-gitlab >= 1.5.0 extends_documentation_fragment: - auth_basic options: api_token: description: - Your private token to interact with the GitLab API. required: True type: str description: description: - The unique name of the runner. required: True type: str aliases: - name state: description: - Make sure that the runner with the same name exists with the same configuration or delete the runner with the same name. 
required: False default: present choices: ["present", "absent"] type: str registration_token: description: - The registration token is used to register new runners. required: True type: str active: description: - Define if the runners is immediately active after creation. required: False default: yes type: bool locked: description: - Determines if the runner is locked or not. required: False default: False type: bool access_level: description: - Determines if a runner can pick up jobs from protected branches. required: False default: ref_protected choices: ["ref_protected", "not_protected"] type: str maximum_timeout: description: - The maximum timeout that a runner has to pick up a specific job. required: False default: 3600 type: int run_untagged: description: - Run untagged jobs or not. required: False default: yes type: bool tag_list: description: The tags that apply to the runner. required: False default: [] type: list ''' EXAMPLES = ''' - name: "Register runner" gitlab_runner: api_url: https://gitlab.example.com/ api_token: "{{ access_token }}" registration_token: 4gfdsg345 description: Docker Machine t1 state: present active: True tag_list: ['docker'] run_untagged: False locked: False - name: "Delete runner" gitlab_runner: api_url: https://gitlab.example.com/ api_token: "{{ access_token }}" description: Docker Machine t1 state: absent ''' RETURN = ''' msg: description: Success or failure message returned: always type: str sample: "Success" result: description: json parsed response from the server returned: always type: dict error: description: the error message returned by the GitLab API returned: failed type: str sample: "400: path is already in use" runner: description: API object returned: always type: dict ''' import traceback GITLAB_IMP_ERR = None try: import gitlab HAS_GITLAB_PACKAGE = True except Exception: GITLAB_IMP_ERR = traceback.format_exc() HAS_GITLAB_PACKAGE = False from ansible.module_utils.api import basic_auth_argument_spec from 
ansible.module_utils.basic import AnsibleModule, missing_required_lib from ansible.module_utils._text import to_native try: cmp except NameError: def cmp(a, b): return (a > b) - (a < b) class GitLabRunner(object): def __init__(self, module, gitlab_instance): self._module = module self._gitlab = gitlab_instance self.runnerObject = None def createOrUpdateRunner(self, description, options): changed = False # Because we have already call userExists in main() if self.runnerObject is None: runner = self.createRunner({ 'description': description, 'active': options['active'], 'token': options['registration_token'], 'locked': options['locked'], 'run_untagged': options['run_untagged'], 'maximum_timeout': options['maximum_timeout'], 'tag_list': options['tag_list']}) changed = True else: changed, runner = self.updateRunner(self.runnerObject, { 'active': options['active'], 'locked': options['locked'], 'run_untagged': options['run_untagged'], 'maximum_timeout': options['maximum_timeout'], 'access_level': options['access_level'], 'tag_list': options['tag_list']}) self.runnerObject = runner if changed: if self._module.check_mode: self._module.exit_json(changed=True, msg="Successfully created or updated the runner %s" % description) try: runner.save() except Exception as e: self._module.fail_json(msg="Failed to update runner: %s " % to_native(e)) return True else: return False ''' @param arguments Attributes of the runner ''' def createRunner(self, arguments): if self._module.check_mode: return True try: runner = self._gitlab.runners.create(arguments) except (gitlab.exceptions.GitlabCreateError) as e: self._module.fail_json(msg="Failed to create runner: %s " % to_native(e)) return runner ''' @param runner Runner object @param arguments Attributes of the runner ''' def updateRunner(self, runner, arguments): changed = False for arg_key, arg_value in arguments.items(): if arguments[arg_key] is not None: if isinstance(arguments[arg_key], list): list1 = getattr(runner, arg_key) 
list1.sort() list2 = arguments[arg_key] list2.sort() if cmp(list1, list2): setattr(runner, arg_key, arguments[arg_key]) changed = True else: if getattr(runner, arg_key) != arguments[arg_key]: setattr(runner, arg_key, arguments[arg_key]) changed = True return (changed, runner) ''' @param description Description of the runner ''' def findRunner(self, description): runners = self._gitlab.runners.list(as_list=False) for runner in runners: if (runner.description == description): return self._gitlab.runners.get(runner.id) ''' @param description Description of the runner ''' def existsRunner(self, description): # When runner exists, object will be stored in self.runnerObject. runner = self.findRunner(description) if runner: self.runnerObject = runner return True return False def deleteRunner(self): if self._module.check_mode: return True runner = self.runnerObject return runner.delete() def main(): argument_spec = basic_auth_argument_spec() argument_spec.update(dict( api_token=dict(type='str', no_log=True), description=dict(type='str', required=True, aliases=["name"]), active=dict(type='bool', default=True), tag_list=dict(type='list', default=[]), run_untagged=dict(type='bool', default=True), locked=dict(type='bool', default=False), access_level=dict(type='str', default='ref_protected', choices=["not_protected", "ref_protected"]), maximum_timeout=dict(type='int', default=3600), registration_token=dict(type='str', required=True), state=dict(type='str', default="present", choices=["absent", "present"]), )) module = AnsibleModule( argument_spec=argument_spec, mutually_exclusive=[ ['api_username', 'api_token'], ['api_password', 'api_token'], ], required_together=[ ['api_username', 'api_password'], ], required_one_of=[ ['api_username', 'api_token'], ], supports_check_mode=True, ) gitlab_url = module.params['api_url'] validate_certs = module.params['validate_certs'] gitlab_user = module.params['api_username'] gitlab_password = module.params['api_password'] gitlab_token = 
module.params['api_token'] state = module.params['state'] runner_description = module.params['description'] runner_active = module.params['active'] tag_list = module.params['tag_list'] run_untagged = module.params['run_untagged'] runner_locked = module.params['locked'] access_level = module.params['access_level'] maximum_timeout = module.params['maximum_timeout'] registration_token = module.params['registration_token'] if not HAS_GITLAB_PACKAGE: module.fail_json(msg=missing_required_lib("python-gitlab"), exception=GITLAB_IMP_ERR) try: gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password, private_token=gitlab_token, api_version=4) gitlab_instance.auth() except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e: module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e)) except (gitlab.exceptions.GitlabHttpError) as e: module.fail_json(msg="Failed to connect to GitLab server: %s. 
\ GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2" % to_native(e)) gitlab_runner = GitLabRunner(module, gitlab_instance) runner_exists = gitlab_runner.existsRunner(runner_description) if state == 'absent': if runner_exists: gitlab_runner.deleteRunner() module.exit_json(changed=True, msg="Successfully deleted runner %s" % runner_description) else: module.exit_json(changed=False, msg="Runner deleted or does not exists") if state == 'present': if gitlab_runner.createOrUpdateRunner(runner_description, { "active": runner_active, "tag_list": tag_list, "run_untagged": run_untagged, "locked": runner_locked, "access_level": access_level, "maximum_timeout": maximum_timeout, "registration_token": registration_token}): module.exit_json(changed=True, runner=gitlab_runner.runnerObject._attrs, msg="Successfully created or updated the runner %s" % runner_description) else: module.exit_json(changed=False, runner=gitlab_runner.runnerObject._attrs, msg="No need to update the runner %s" % runner_description) if __name__ == '__main__': main()
closed
ansible/ansible
https://github.com/ansible/ansible
64,770
gitlab modules: user/password method is deprecated
##### SUMMARY The `python-gitlab` library released a new version [v1.13.0](https://github.com/python-gitlab/python-gitlab/releases/tag/v1.13.0). User/password authentication has been removed completely, causing all Ansible GitLab modules to fail... ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME gitlab_group gitlab_deploy_key gitlab_hook gitlab_project gitlab_project_variable gitlab_runner gitlab_user ##### ANSIBLE VERSION All versions
https://github.com/ansible/ansible/issues/64770
https://github.com/ansible/ansible/pull/64989
bc479fcafc3e73cb660577c6a4ef1480277bbd4f
4e6fa59ec1b5f31332b5f19f6e51607ada6345dd
2019-11-13T09:59:06Z
python
2019-11-19T10:00:34Z
lib/ansible/modules/source_control/gitlab/gitlab_user.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright: (c) 2019, Guillaume Martinez ([email protected]) # Copyright: (c) 2015, Werner Dijkerman ([email protected]) # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} DOCUMENTATION = ''' --- module: gitlab_user short_description: Creates/updates/deletes GitLab Users description: - When the user does not exist in GitLab, it will be created. - When the user does exists and state=absent, the user will be deleted. - When changes are made to user, the user will be updated. notes: - From Ansible 2.10 and onwards, name, email and password are optional while deleting the user. version_added: "2.1" author: - Werner Dijkerman (@dj-wasabi) - Guillaume Martinez (@Lunik) requirements: - python >= 2.7 - python-gitlab python module - administrator rights on the GitLab server extends_documentation_fragment: - auth_basic options: api_token: description: - GitLab token for logging in. type: str name: description: - Name of the user you want to create. - Required only if C(state) is set to C(present). type: str username: description: - The username of the user. required: true type: str password: description: - The password of the user. - GitLab server enforces minimum password length to 8, set this value with 8 or more characters. - Required only if C(state) is set to C(present). type: str email: description: - The email that belongs to the user. - Required only if C(state) is set to C(present). type: str sshkey_name: description: - The name of the sshkey type: str sshkey_file: description: - The ssh key itself. type: str group: description: - Id or Full path of parent group in the form of group/name. - Add user as an member to this group. type: str access_level: description: - The access level to the group. 
One of the following can be used. - guest - reporter - developer - master (alias for maintainer) - maintainer - owner default: guest type: str choices: ["guest", "reporter", "developer", "master", "maintainer", "owner"] state: description: - create or delete group. - Possible values are present and absent. default: present type: str choices: ["present", "absent"] confirm: description: - Require confirmation. type: bool default: yes version_added: "2.4" isadmin: description: - Grant admin privileges to the user. type: bool default: no version_added: "2.8" external: description: - Define external parameter for this user. type: bool default: no version_added: "2.8" ''' EXAMPLES = ''' - name: "Delete GitLab User" gitlab_user: api_url: https://gitlab.example.com/ api_token: "{{ access_token }}" validate_certs: False username: myusername state: absent delegate_to: localhost - name: "Create GitLab User" gitlab_user: api_url: https://gitlab.example.com/ validate_certs: True api_username: dj-wasabi api_password: "MySecretPassword" name: My Name username: myusername password: mysecretpassword email: [email protected] sshkey_name: MySSH sshkey_file: ssh-rsa AAAAB3NzaC1yc... 
state: present group: super_group/mon_group access_level: owner delegate_to: localhost ''' RETURN = ''' msg: description: Success or failure message returned: always type: str sample: "Success" result: description: json parsed response from the server returned: always type: dict error: description: the error message returned by the GitLab API returned: failed type: str sample: "400: path is already in use" user: description: API object returned: always type: dict ''' import traceback GITLAB_IMP_ERR = None try: import gitlab HAS_GITLAB_PACKAGE = True except Exception: GITLAB_IMP_ERR = traceback.format_exc() HAS_GITLAB_PACKAGE = False from ansible.module_utils.api import basic_auth_argument_spec from ansible.module_utils.basic import AnsibleModule, missing_required_lib from ansible.module_utils._text import to_native from ansible.module_utils.gitlab import findGroup class GitLabUser(object): def __init__(self, module, gitlab_instance): self._module = module self._gitlab = gitlab_instance self.userObject = None self.ACCESS_LEVEL = { 'guest': gitlab.GUEST_ACCESS, 'reporter': gitlab.REPORTER_ACCESS, 'developer': gitlab.DEVELOPER_ACCESS, 'master': gitlab.MAINTAINER_ACCESS, 'maintainer': gitlab.MAINTAINER_ACCESS, 'owner': gitlab.OWNER_ACCESS} ''' @param username Username of the user @param options User options ''' def createOrUpdateUser(self, username, options): changed = False # Because we have already call userExists in main() if self.userObject is None: user = self.createUser({ 'name': options['name'], 'username': username, 'password': options['password'], 'email': options['email'], 'skip_confirmation': not options['confirm'], 'admin': options['isadmin'], 'external': options['external']}) changed = True else: changed, user = self.updateUser(self.userObject, { 'name': options['name'], 'email': options['email'], 'is_admin': options['isadmin'], 'external': options['external']}) # Assign ssh keys if options['sshkey_name'] and options['sshkey_file']: key_changed = 
self.addSshKeyToUser(user, { 'name': options['sshkey_name'], 'file': options['sshkey_file']}) changed = changed or key_changed # Assign group if options['group_path']: group_changed = self.assignUserToGroup(user, options['group_path'], options['access_level']) changed = changed or group_changed self.userObject = user if changed: if self._module.check_mode: self._module.exit_json(changed=True, msg="Successfully created or updated the user %s" % username) try: user.save() except Exception as e: self._module.fail_json(msg="Failed to update user: %s " % to_native(e)) return True else: return False ''' @param group User object ''' def getUserId(self, user): if user is not None: return user.id return None ''' @param user User object @param sshkey_name Name of the ssh key ''' def sshKeyExists(self, user, sshkey_name): keyList = map(lambda k: k.title, user.keys.list()) return sshkey_name in keyList ''' @param user User object @param sshkey Dict containing sshkey infos {"name": "", "file": ""} ''' def addSshKeyToUser(self, user, sshkey): if not self.sshKeyExists(user, sshkey['name']): if self._module.check_mode: return True try: user.keys.create({ 'title': sshkey['name'], 'key': sshkey['file']}) except gitlab.exceptions.GitlabCreateError as e: self._module.fail_json(msg="Failed to assign sshkey to user: %s" % to_native(e)) return True return False ''' @param group Group object @param user_id Id of the user to find ''' def findMember(self, group, user_id): try: member = group.members.get(user_id) except gitlab.exceptions.GitlabGetError: return None return member ''' @param group Group object @param user_id Id of the user to check ''' def memberExists(self, group, user_id): member = self.findMember(group, user_id) return member is not None ''' @param group Group object @param user_id Id of the user to check @param access_level GitLab access_level to check ''' def memberAsGoodAccessLevel(self, group, user_id, access_level): member = self.findMember(group, user_id) return 
member.access_level == access_level ''' @param user User object @param group_path Complete path of the Group including parent group path. <parent_path>/<group_path> @param access_level GitLab access_level to assign ''' def assignUserToGroup(self, user, group_identifier, access_level): group = findGroup(self._gitlab, group_identifier) if self._module.check_mode: return True if group is None: return False if self.memberExists(group, self.getUserId(user)): member = self.findMember(group, self.getUserId(user)) if not self.memberAsGoodAccessLevel(group, member.id, self.ACCESS_LEVEL[access_level]): member.access_level = self.ACCESS_LEVEL[access_level] member.save() return True else: try: group.members.create({ 'user_id': self.getUserId(user), 'access_level': self.ACCESS_LEVEL[access_level]}) except gitlab.exceptions.GitlabCreateError as e: self._module.fail_json(msg="Failed to assign user to group: %s" % to_native(e)) return True return False ''' @param user User object @param arguments User attributes ''' def updateUser(self, user, arguments): changed = False for arg_key, arg_value in arguments.items(): if arguments[arg_key] is not None: if getattr(user, arg_key) != arguments[arg_key]: setattr(user, arg_key, arguments[arg_key]) changed = True return (changed, user) ''' @param arguments User attributes ''' def createUser(self, arguments): if self._module.check_mode: return True try: user = self._gitlab.users.create(arguments) except (gitlab.exceptions.GitlabCreateError) as e: self._module.fail_json(msg="Failed to create user: %s " % to_native(e)) return user ''' @param username Username of the user ''' def findUser(self, username): users = self._gitlab.users.list(search=username) for user in users: if (user.username == username): return user ''' @param username Username of the user ''' def existsUser(self, username): # When user exists, object will be stored in self.userObject. 
user = self.findUser(username) if user: self.userObject = user return True return False def deleteUser(self): if self._module.check_mode: return True user = self.userObject return user.delete() def main(): argument_spec = basic_auth_argument_spec() argument_spec.update(dict( api_token=dict(type='str', no_log=True), name=dict(type='str'), state=dict(type='str', default="present", choices=["absent", "present"]), username=dict(type='str', required=True), password=dict(type='str', no_log=True), email=dict(type='str'), sshkey_name=dict(type='str'), sshkey_file=dict(type='str'), group=dict(type='str'), access_level=dict(type='str', default="guest", choices=["developer", "guest", "maintainer", "master", "owner", "reporter"]), confirm=dict(type='bool', default=True), isadmin=dict(type='bool', default=False), external=dict(type='bool', default=False), )) module = AnsibleModule( argument_spec=argument_spec, mutually_exclusive=[ ['api_username', 'api_token'], ['api_password', 'api_token'], ], required_together=[ ['api_username', 'api_password'], ], required_one_of=[ ['api_username', 'api_token'] ], supports_check_mode=True, required_if=( ('state', 'present', ['name', 'email', 'password']), ) ) gitlab_url = module.params['api_url'] validate_certs = module.params['validate_certs'] gitlab_user = module.params['api_username'] gitlab_password = module.params['api_password'] gitlab_token = module.params['api_token'] user_name = module.params['name'] state = module.params['state'] user_username = module.params['username'].lower() user_password = module.params['password'] user_email = module.params['email'] user_sshkey_name = module.params['sshkey_name'] user_sshkey_file = module.params['sshkey_file'] group_path = module.params['group'] access_level = module.params['access_level'] confirm = module.params['confirm'] user_isadmin = module.params['isadmin'] user_external = module.params['external'] if not HAS_GITLAB_PACKAGE: module.fail_json(msg=missing_required_lib("python-gitlab"), 
exception=GITLAB_IMP_ERR) try: gitlab_instance = gitlab.Gitlab(url=gitlab_url, ssl_verify=validate_certs, email=gitlab_user, password=gitlab_password, private_token=gitlab_token, api_version=4) gitlab_instance.auth() except (gitlab.exceptions.GitlabAuthenticationError, gitlab.exceptions.GitlabGetError) as e: module.fail_json(msg="Failed to connect to GitLab server: %s" % to_native(e)) except (gitlab.exceptions.GitlabHttpError) as e: module.fail_json(msg="Failed to connect to GitLab server: %s. \ GitLab remove Session API now that private tokens are removed from user API endpoints since version 10.2." % to_native(e)) gitlab_user = GitLabUser(module, gitlab_instance) user_exists = gitlab_user.existsUser(user_username) if state == 'absent': if user_exists: gitlab_user.deleteUser() module.exit_json(changed=True, msg="Successfully deleted user %s" % user_username) else: module.exit_json(changed=False, msg="User deleted or does not exists") if state == 'present': if gitlab_user.createOrUpdateUser(user_username, { "name": user_name, "password": user_password, "email": user_email, "sshkey_name": user_sshkey_name, "sshkey_file": user_sshkey_file, "group_path": group_path, "access_level": access_level, "confirm": confirm, "isadmin": user_isadmin, "external": user_external}): module.exit_json(changed=True, msg="Successfully created or updated the user %s" % user_username, user=gitlab_user.userObject._attrs) else: module.exit_json(changed=False, msg="No need to update the user %s" % user_username, user=gitlab_user.userObject._attrs) if __name__ == '__main__': main()
closed
ansible/ansible
https://github.com/ansible/ansible
64,569
Provided ansible.cfg contains outdated, non-working sample configs
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY Some of the examples in the provided ansible.cfg are incomplete or outdated. For example: the `sudo_flags` in the defaults section doesn't do anything, while `become_flags` does work, it only works when in the `privilege_escalation` section, but an example is missing from that section. I suspect there are other things missing/incorrect in the config file that could be updated to better reflect sane defaults. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME `ansible.cfg` ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.8.6 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/awheeler/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Apr 9 2019, 14:30:50) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ANSIBLE_SSH_CONTROL_PATH(/home/awheeler/it-ansible-repo/ansible.cfg) = %(directory)s/%%h-%%r DEFAULT_BECOME_METHOD(/home/awheeler/it-ansible-repo/ansible.cfg) = sudo DEFAULT_CALLBACK_WHITELIST(/home/awheeler/it-ansible-repo/ansible.cfg) = [u'profile_tasks', u'dense'] DEFAULT_FORKS(/home/awheeler/it-ansible-repo/ansible.cfg) = 5 DEFAULT_HOST_LIST(/home/awheeler/it-ansible-repo/ansible.cfg) = [u'/home/awheeler/it-ansible-repo/hosts'] DEFAULT_MANAGED_STR(/home/awheeler/it-ansible-repo/ansible.cfg) = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host} DEFAULT_MODULE_PATH(/home/awheeler/it-ansible-repo/ansible.cfg) = 
[u'/usr/share/ansible', u'/etc/ansible/modules'] DEFAULT_POLL_INTERVAL(/home/awheeler/it-ansible-repo/ansible.cfg) = 15 DEFAULT_ROLES_PATH(/home/awheeler/it-ansible-repo/ansible.cfg) = [u'/home/awheeler/it-ansible-repo/roles', u'/home/awheeler/it-ansible-repo/librarian_roles', u'/etc/ansible/librarian_roles'] HOST_KEY_CHECKING(/home/awheeler/it-ansible-repo/ansible.cfg) = False RETRY_FILES_ENABLED(/home/awheeler/it-ansible-repo/ansible.cfg) = False RETRY_FILES_SAVE_PATH(/home/awheeler/it-ansible-repo/ansible.cfg) = /home/awheeler/.ansible-retry TRANSFORM_INVALID_GROUP_CHARS(/home/awheeler/it-ansible-repo/ansible.cfg) = never ``` ##### OS / ENVIRONMENT CentOS 6/7 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> Uncomment `sudo_flags` in the provided ansible.cfg, and change the `sudo_flags` to include -E, to preserve environment. <!--- Paste example playbooks or commands between quotes below --> ``` ansible -m shell localhost -a 'env' ansible -m shell localhost -a 'env' -b ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS The output env should be pretty much the same between the two commands ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> Because the `sudo_flags` are ignored, most of the env variables are missing from the second command.
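The report above notes that `become_flags` is only honoured inside the `[privilege_escalation]` section, while `sudo_flags` under `[defaults]` is silently ignored. A hedged sketch of that section (the `-E` flag is the reporter's environment-preserving example; the remaining values are common sudo-style settings chosen for illustration, not copied from the shipped file):

```ini
# Put become settings in [privilege_escalation], not [defaults];
# sudo_flags under [defaults] has no effect, as described above.
[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_flags = -H -S -n -E   ; -E preserves the caller's environment
```

With a section like this in place, `ansible -m shell localhost -a 'env' -b` should show roughly the same environment as the unprivileged run, matching the reporter's expected result.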
https://github.com/ansible/ansible/issues/64569
https://github.com/ansible/ansible/pull/64855
b6f0f14dd3d950719f9d435cf4a958aef30e7d22
1588ad77e22b70a11b4d955bae6df442302f9b7e
2019-11-07T17:03:01Z
python
2019-11-19T15:36:35Z
examples/ansible.cfg
# config file for ansible -- https://ansible.com/
# ===============================================

# nearly all parameters can be overridden in ansible-playbook
# or with command line flags. ansible will read ANSIBLE_CONFIG,
# ansible.cfg in the current working directory, .ansible.cfg in
# the home directory or /etc/ansible/ansible.cfg, whichever it
# finds first

[defaults]

# some basic default values...

#inventory = /etc/ansible/hosts
#library = /usr/share/my_modules/
#module_utils = /usr/share/my_module_utils/
#remote_tmp = ~/.ansible/tmp
#local_tmp = ~/.ansible/tmp
#plugin_filters_cfg = /etc/ansible/plugin_filters.yml
#forks = 5
#poll_interval = 15
#sudo_user = root
#ask_sudo_pass = True
#ask_pass = True
#transport = smart
#remote_port = 22
#module_lang = C
#module_set_locale = False

# plays will gather facts by default, which contain information about
# the remote system.
#
# smart - gather by default, but don't regather if already gathered
# implicit - gather by default, turn off with gather_facts: False
# explicit - do not gather by default, must say gather_facts: True
#gathering = implicit

# This only affects the gathering done by a play's gather_facts directive,
# by default gathering retrieves all facts subsets
# all - gather all subsets
# network - gather min and network facts
# hardware - gather hardware facts (longest facts to retrieve)
# virtual - gather min and virtual facts
# facter - import facts from facter
# ohai - import facts from ohai
# You can combine them using comma (ex: network,virtual)
# You can negate them using ! (ex: !hardware,!facter,!ohai)
# A minimal set of facts is always gathered.
#gather_subset = all

# some hardware related facts are collected
# with a maximum timeout of 10 seconds. This
# option lets you increase or decrease that
# timeout to something more suitable for the
# environment.
# gather_timeout = 10

# Ansible facts are available inside the ansible_facts.* dictionary
# namespace. This setting maintains the behaviour which was the default prior
# to 2.5, duplicating these variables into the main namespace, each with a
# prefix of 'ansible_'.
# This variable is set to True by default for backwards compatibility. It
# will be changed to a default of 'False' in a future release.
# ansible_facts.
# inject_facts_as_vars = True

# additional paths to search for roles in, colon separated
#roles_path = /etc/ansible/roles

# uncomment this to disable SSH key host checking
#host_key_checking = False

# change the default callback, you can only have one 'stdout' type enabled at a time.
#stdout_callback = skippy


## Ansible ships with some plugins that require whitelisting,
## this is done to avoid running all of a type by default.
## These setting lists those that you want enabled for your system.
## Custom plugins should not need this unless plugin author specifies it.

# enable callback plugins, they can output to stdout but cannot be 'stdout' type.
#callback_whitelist = timer, mail

# Determine whether includes in tasks and handlers are "static" by
# default. As of 2.0, includes are dynamic by default. Setting these
# values to True will make includes behave more like they did in the
# 1.x versions.
#task_includes_static = False
#handler_includes_static = False

# Controls if a missing handler for a notification event is an error or a warning
#error_on_missing_handler = True

# change this for alternative sudo implementations
#sudo_exe = sudo

# What flags to pass to sudo
# WARNING: leaving out the defaults might create unexpected behaviours
#sudo_flags = -H -S -n

# SSH timeout
#timeout = 10

# default user to use for playbooks if user is not specified
# (/usr/bin/ansible will use current user as default)
#remote_user = root

# logging is off by default unless this path is defined
# if so defined, consider logrotate
#log_path = /var/log/ansible.log

# default module name for /usr/bin/ansible
#module_name = command

# use this shell for commands executed under sudo
# you may need to change this to bin/bash in rare instances
# if sudo is constrained
#executable = /bin/sh

# if inventory variables overlap, does the higher precedence one win
# or are hash values merged together? The default is 'replace' but
# this can also be set to 'merge'.
#hash_behaviour = replace

# by default, variables from roles will be visible in the global variable
# scope. To prevent this, the following option can be enabled, and only
# tasks and handlers within the role will see the variables there
#private_role_vars = yes

# list any Jinja2 extensions to enable here:
#jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n

# if set, always use this private key file for authentication, same as
# if passing --private-key to ansible or ansible-playbook
#private_key_file = /path/to/file

# If set, configures the path to the Vault password file as an alternative to
# specifying --vault-password-file on the command line.
#vault_password_file = /path/to/vault_password_file

# format of string {{ ansible_managed }} available within Jinja2
# templates indicates to users editing templates files will be replaced.
# replacing {file}, {host} and {uid} and strftime codes with proper values.
#ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
# {file}, {host}, {uid}, and the timestamp can all interfere with idempotence
# in some situations so the default is a static string:
#ansible_managed = Ansible managed

# by default, ansible-playbook will display "Skipping [host]" if it determines a task
# should not be run on a host. Set this to "False" if you don't want to see these "Skipping"
# messages. NOTE: the task header will still be shown regardless of whether or not the
# task is skipped.
#display_skipped_hosts = True

# by default, if a task in a playbook does not include a name: field then
# ansible-playbook will construct a header that includes the task's action but
# not the task's args. This is a security feature because ansible cannot know
# if the *module* considers an argument to be no_log at the time that the
# header is printed. If your environment doesn't have a problem securing
# stdout from ansible-playbook (or you have manually specified no_log in your
# playbook on all of the tasks where you have secret information) then you can
# safely set this to True to get more informative messages.
#display_args_to_stdout = False

# by default (as of 1.3), Ansible will raise errors when attempting to dereference
# Jinja2 variables that are not set in templates or action lines. Uncomment this line
# to revert the behavior to pre-1.3.
#error_on_undefined_vars = False

# by default (as of 1.6), Ansible may display warnings based on the configuration of the
# system running ansible itself. This may include warnings about 3rd party packages or
# other conditions that should be resolved if possible.
# to disable these warnings, set the following value to False:
#system_warnings = True

# by default (as of 1.4), Ansible may display deprecation warnings for language
# features that should no longer be used and will be removed in future versions.
# to disable these warnings, set the following value to False:
#deprecation_warnings = True

# (as of 1.8), Ansible can optionally warn when usage of the shell and
# command module appear to be simplified by using a default Ansible module
# instead. These warnings can be silenced by adjusting the following
# setting or adding warn=yes or warn=no to the end of the command line
# parameter string. This will for example suggest using the git module
# instead of shelling out to the git command.
# command_warnings = False


# set plugin path directories here, separate with colons
#action_plugins = /usr/share/ansible/plugins/action
#become_plugins = /usr/share/ansible/plugins/become
#cache_plugins = /usr/share/ansible/plugins/cache
#callback_plugins = /usr/share/ansible/plugins/callback
#connection_plugins = /usr/share/ansible/plugins/connection
#lookup_plugins = /usr/share/ansible/plugins/lookup
#inventory_plugins = /usr/share/ansible/plugins/inventory
#vars_plugins = /usr/share/ansible/plugins/vars
#filter_plugins = /usr/share/ansible/plugins/filter
#test_plugins = /usr/share/ansible/plugins/test
#terminal_plugins = /usr/share/ansible/plugins/terminal
#strategy_plugins = /usr/share/ansible/plugins/strategy

# by default, ansible will use the 'linear' strategy but you may want to try
# another one
#strategy = free

# by default callbacks are not loaded for /bin/ansible, enable this if you
# want, for example, a notification or logging callback to also apply to
# /bin/ansible runs
#bin_ansible_callbacks = False

# don't like cows? that's unfortunate.
# set to 1 if you don't want cowsay support or export ANSIBLE_NOCOWS=1
#nocows = 1

# set which cowsay stencil you'd like to use by default. When set to 'random',
# a random stencil will be selected for each task. The selection will be filtered
# against the `cow_whitelist` option below.
#cow_selection = default
#cow_selection = random

# when using the 'random' option for cowsay, stencils will be restricted to this list.
# it should be formatted as a comma-separated list with no spaces between names.
# NOTE: line continuations here are for formatting purposes only, as the INI parser
# in python does not support them.
#cow_whitelist=bud-frogs,bunny,cheese,daemon,default,dragon,elephant-in-snake,elephant,eyes,\
#              hellokitty,kitty,luke-koala,meow,milk,moofasa,moose,ren,sheep,small,stegosaurus,\
#              stimpy,supermilker,three-eyes,turkey,turtle,tux,udder,vader-koala,vader,www

# don't like colors either?
# set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1
#nocolor = 1

# if set to a persistent type (not 'memory', for example 'redis') fact values
# from previous runs in Ansible will be stored. This may be useful when
# wanting to use, for example, IP information from one group of servers
# without having to talk to them in the same playbook run to get their
# current IP information.
#fact_caching = memory

#This option tells Ansible where to cache facts. The value is plugin dependent.
#For the jsonfile plugin, it should be a path to a local directory.
#For the redis plugin, the value is a host:port:database triplet: fact_caching_connection = localhost:6379:0
#fact_caching_connection=/tmp

# retry files
# When a playbook fails a .retry file can be created that will be placed in ~/
# You can enable this feature by setting retry_files_enabled to True
# and you can change the location of the files by setting retry_files_save_path
#retry_files_enabled = False
#retry_files_save_path = ~/.ansible-retry

# squash actions
# Ansible can optimise actions that call modules with list parameters
# when looping. Instead of calling the module once per with_ item, the
# module is called once with all items at once. Currently this only works
# under limited circumstances, and only with parameters named 'name'.
#squash_actions = apk,apt,dnf,homebrew,pacman,pkgng,yum,zypper

# prevents logging of task data, off by default
#no_log = False

# prevents logging of tasks, but only on the targets, data is still logged on the master/controller
#no_target_syslog = False

# controls whether Ansible will raise an error or warning if a task has no
# choice but to create world readable temporary files to execute a module on
# the remote machine. This option is False by default for security. Users may
# turn this on to have behaviour more like Ansible prior to 2.1.x. See
# https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user
# for more secure ways to fix this than enabling this option.
#allow_world_readable_tmpfiles = False

# controls the compression level of variables sent to
# worker processes. At the default of 0, no compression
# is used. This value must be an integer from 0 to 9.
#var_compression_level = 9

# controls what compression method is used for new-style ansible modules when
# they are sent to the remote system. The compression types depend on having
# support compiled into both the controller's python and the client's python.
# The names should match with the python Zipfile compression types:
# * ZIP_STORED (no compression. available everywhere)
# * ZIP_DEFLATED (uses zlib, the default)
# These values may be set per host via the ansible_module_compression inventory
# variable
#module_compression = 'ZIP_DEFLATED'

# This controls the cutoff point (in bytes) on --diff for files
# set to 0 for unlimited (RAM may suffer!).
#max_diff_size = 1048576

# This controls how ansible handles multiple --tags and --skip-tags arguments
# on the CLI. If this is True then multiple arguments are merged together. If
# it is False, then the last specified argument is used and the others are ignored.
# This option will be removed in 2.8.
#merge_multiple_cli_flags = True

# Controls showing custom stats at the end, off by default
#show_custom_stats = True

# Controls which files to ignore when using a directory as inventory with
# possibly multiple sources (both static and dynamic)
#inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo

# This family of modules use an alternative execution path optimized for network appliances
# only update this setting if you know how this works, otherwise it can break module execution
#network_group_modules=eos, nxos, ios, iosxr, junos, vyos

# When enabled, this option allows lookups (via variables like {{lookup('foo')}} or when used as
# a loop with `with_foo`) to return data that is not marked "unsafe". This means the data may contain
# jinja2 templating language which will be run through the templating engine.
# ENABLING THIS COULD BE A SECURITY RISK
#allow_unsafe_lookups = False

# set default errors for all plays
#any_errors_fatal = False

[inventory]
# enable inventory plugins, default: 'host_list', 'script', 'auto', 'yaml', 'ini', 'toml'
#enable_plugins = host_list, virtualbox, yaml, constructed

# ignore these extensions when parsing a directory as inventory source
#ignore_extensions = .pyc, .pyo, .swp, .bak, ~, .rpm, .md, .txt, ~, .orig, .ini, .cfg, .retry

# ignore files matching these patterns when parsing a directory as inventory source
#ignore_patterns=

# If 'true' unparsed inventory sources become fatal errors, they are warnings otherwise.
#unparsed_is_failed=False

[privilege_escalation]
#become=True
#become_method=sudo
#become_user=root
#become_ask_pass=False

[paramiko_connection]

# uncomment this line to cause the paramiko connection plugin to not record new host
# keys encountered. Increases performance on new host additions. Setting works independently of the
# host key checking setting above.
#record_host_keys=False

# by default, Ansible requests a pseudo-terminal for commands executed under sudo. Uncomment this
# line to disable this behaviour.
#pty=False

# paramiko will default to looking for SSH keys initially when trying to
# authenticate to remote devices. This is a problem for some network devices
# that close the connection after a key failure. Uncomment this line to
# disable the Paramiko look for keys function
#look_for_keys = False

# When using persistent connections with Paramiko, the connection runs in a
# background process. If the host doesn't already have a valid SSH key, by
# default Ansible will prompt to add the host key. This will cause connections
# running in background processes to fail. Uncomment this line to have
# Paramiko automatically add host keys.
#host_key_auto_add = True

[ssh_connection]

# ssh arguments to use
# Leaving off ControlPersist will result in poor performance, so use
# paramiko on older platforms rather than removing it, -C controls compression use
#ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s

# The base directory for the ControlPath sockets.
# This is the "%(directory)s" in the control_path option
#
# Example:
# control_path_dir = /tmp/.ansible/cp
#control_path_dir = ~/.ansible/cp

# The path to use for the ControlPath sockets. This defaults to a hashed string of the hostname,
# port and username (empty string in the config). The hash mitigates a common problem users
# found with long hostnames and the conventional %(directory)s/ansible-ssh-%%h-%%p-%%r format.
# In those cases, a "too long for Unix domain socket" ssh error would occur.
#
# Example:
# control_path = %(directory)s/%%h-%%r
#control_path =

# Enabling pipelining reduces the number of SSH operations required to
# execute a module on the remote server. This can result in a significant
# performance improvement when enabled, however when using "sudo:" you must
# first disable 'requiretty' in /etc/sudoers
#
# By default, this option is disabled to preserve compatibility with
# sudoers configurations that have requiretty (the default on many distros).
#
#pipelining = False

# Control the mechanism for transferring files (old)
# * smart = try sftp and then try scp [default]
# * True = use scp only
# * False = use sftp only
#scp_if_ssh = smart

# Control the mechanism for transferring files (new)
# If set, this will override the scp_if_ssh option
# * sftp = use sftp to transfer files
# * scp = use scp to transfer files
# * piped = use 'dd' over SSH to transfer files
# * smart = try sftp, scp, and piped, in that order [default]
#transfer_method = smart

# if False, sftp will not use batch mode to transfer files. This may cause some
# types of file transfer failures impossible to catch however, and should
# only be disabled if your sftp version has problems with batch mode
#sftp_batch_mode = False

# The -tt argument is passed to ssh when pipelining is not enabled because sudo
# requires a tty by default.
#usetty = True

# Number of times to retry an SSH connection to a host, in case of UNREACHABLE.
# For each retry attempt, there is an exponential backoff,
# so after the first attempt there is 1s wait, then 2s, 4s etc. up to 30s (max).
#retries = 3

[persistent_connection]

# Configures the persistent connection timeout value in seconds. This value is
# how long the persistent connection will remain idle before it is destroyed.
# If the connection doesn't receive a request before the timeout value
# expires, the connection is shutdown. The default value is 30 seconds.
#connect_timeout = 30

# The command timeout value defines the amount of time to wait for a command
# or RPC call before timing out. The value for the command timeout must
# be less than the value of the persistent connection idle timeout (connect_timeout)
# The default value is 30 seconds.
#command_timeout = 30

[accelerate]
#accelerate_port = 5099
#accelerate_timeout = 30
#accelerate_connect_timeout = 5.0

# The daemon timeout is measured in minutes. This time is measured
# from the last activity to the accelerate daemon.
#accelerate_daemon_timeout = 30

# If set to yes, accelerate_multi_key will allow multiple
# private keys to be uploaded to it, though each user must
# have access to the system via SSH to add a new key. The default
# is "no".
#accelerate_multi_key = yes

[selinux]
# file systems that require special treatment when dealing with security context
# the default behaviour that copies the existing context or uses the user default
# needs to be changed to use the file system dependent context.
#special_context_filesystems=nfs,vboxsf,fuse,ramfs,9p,vfat

# Set this to yes to allow libvirt_lxc connections to work without SELinux.
#libvirt_lxc_noseclabel = yes

[colors]
#highlight = white
#verbose = blue
#warn = bright purple
#error = red
#debug = dark gray
#deprecate = purple
#skip = cyan
#unreachable = red
#ok = green
#changed = yellow
#diff_add = green
#diff_remove = red
#diff_lines = cyan

[diff]
# Always print diff when running ( same as always running with -D/--diff )
# always = no

# Set how many context lines to show in diff
# context = 3
closed
ansible/ansible
https://github.com/ansible/ansible
63,003
win_firewall module has higher requirements than needed
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
<!--- Explain the problem briefly below -->
The requirements in the `win_firewall` module are higher than they need to be. Currently, it checks whether Windows Management Framework 5 or higher is installed on the remote host by checking [if the powershell version is 5.0 or higher](https://github.com/ansible/ansible/blob/95a117090880906b757ba0643cd277b499e34048/lib/ansible/modules/windows/win_firewall.ps1#L23-L25), but several releases like Windows Server 2012 R2 have access to the powershell cmdlets used by [win_firewall.ps1](https://github.com/ansible/ansible/blob/95a117090880906b757ba0643cd277b499e34048/lib/ansible/modules/windows/win_firewall.ps1) (mainly [Get-NetFirewallProfile](https://docs.microsoft.com/en-us/powershell/module/netsecurity/get-netfirewallprofile?view=winserver2012r2-ps) and [Set-NetFirewallProfile](https://docs.microsoft.com/en-us/powershell/module/netsecurity/set-netfirewallprofile?view=win10-ps)) despite having a lower powershell version.

It may be better to check whether `Get-NetFirewallProfile` and `Set-NetFirewallProfile` are defined cmdlets rather than checking the current powershell version, especially since upgrading powershell is not always an available option.
##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
`win_firewall`

##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
  config file = /home/user/ansible/ansible.cfg
  configured module search path = [u'/home/user/ansible/module_library']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.9 (default, Jun 21 2019, 00:38:53) [GCC 4.9.2]
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
Nothing changed
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
* Core : Debian GNU/Linux 8 (jessie)
* Remote : Windows Server 2012 R2 Standard

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
By running the following task on core,
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Enable Windows firewall service
  win_firewall:
    state: enabled
    profiles:
      - Domain
      - Private
      - Public
```
<!--- HINT: You can paste gist.github.com links for larger files -->

However, remote has the following PS version :
```
PS C:\Users\Administrator> $PSVersionTable.PSVersion

Major  Minor  Build  Revision
-----  -----  -----  --------
4      0      -1     -1
```

And `Get-NetFirewallProfile` and `Set-NetFirewallProfile` are defined.

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Enabled firewall
```paste below
ok: [windows.domain.net]
```
(this can be obtained by bypassing the powershell version check)

##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [windows.domain.net]: FAILED!
=> {
    "changed": false,
    "profiles": [
        "Domain",
        "Private",
        "Public"
    ],
    "state": "enabled"
}

MSG:

win_firewall requires Windows Management Framework 5 or higher.
```

##### ADDITIONAL INFORMATION
It may be related to issue [#34411](https://github.com/ansible/ansible/issues/34411), but from what I read, it is relatively different, as their issue was that the used cmdlets weren't found.
https://github.com/ansible/ansible/issues/63003
https://github.com/ansible/ansible/pull/64998
f5133bec22947ee89a812663d8b2e6d4078c8901
96a422a6fc7d993cc17c895a54ae361c4458cb53
2019-10-01T12:27:15Z
python
2019-11-20T01:00:56Z
changelogs/fragments/win_firewall-Change-req-check-from-wmf-version-to-cmdlets-presence.yml
closed
ansible/ansible
https://github.com/ansible/ansible
63,003
win_firewall module has higher requirements than needed
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
<!--- Explain the problem briefly below -->
The requirements in the `win_firewall` module are higher than they need to be. Currently, it checks whether Windows Management Framework 5 or higher is installed on the remote host by checking [if the powershell version is 5.0 or higher](https://github.com/ansible/ansible/blob/95a117090880906b757ba0643cd277b499e34048/lib/ansible/modules/windows/win_firewall.ps1#L23-L25), but several releases like Windows Server 2012 R2 have access to the powershell cmdlets used by [win_firewall.ps1](https://github.com/ansible/ansible/blob/95a117090880906b757ba0643cd277b499e34048/lib/ansible/modules/windows/win_firewall.ps1) (mainly [Get-NetFirewallProfile](https://docs.microsoft.com/en-us/powershell/module/netsecurity/get-netfirewallprofile?view=winserver2012r2-ps) and [Set-NetFirewallProfile](https://docs.microsoft.com/en-us/powershell/module/netsecurity/set-netfirewallprofile?view=win10-ps)) despite having a lower powershell version.

It may be better to check whether `Get-NetFirewallProfile` and `Set-NetFirewallProfile` are defined cmdlets rather than checking the current powershell version, especially since upgrading powershell is not always an available option.
##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
`win_firewall`

##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.3
  config file = /home/user/ansible/ansible.cfg
  configured module search path = [u'/home/user/ansible/module_library']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.9 (default, Jun 21 2019, 00:38:53) [GCC 4.9.2]
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
Nothing changed
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
* Core : Debian GNU/Linux 8 (jessie)
* Remote : Windows Server 2012 R2 Standard

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
By running the following task on core,
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Enable Windows firewall service
  win_firewall:
    state: enabled
    profiles:
      - Domain
      - Private
      - Public
```
<!--- HINT: You can paste gist.github.com links for larger files -->

However, remote has the following PS version :
```
PS C:\Users\Administrator> $PSVersionTable.PSVersion

Major  Minor  Build  Revision
-----  -----  -----  --------
4      0      -1     -1
```

And `Get-NetFirewallProfile` and `Set-NetFirewallProfile` are defined.

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Enabled firewall
```paste below
ok: [windows.domain.net]
```
(this can be obtained by bypassing the powershell version check)

##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [windows.domain.net]: FAILED!
=> {
    "changed": false,
    "profiles": [
        "Domain",
        "Private",
        "Public"
    ],
    "state": "enabled"
}

MSG:

win_firewall requires Windows Management Framework 5 or higher.
```

##### ADDITIONAL INFORMATION
It may be related to issue [#34411](https://github.com/ansible/ansible/issues/34411), but from what I read, it is relatively different, as their issue was that the used cmdlets weren't found.
https://github.com/ansible/ansible/issues/63003
https://github.com/ansible/ansible/pull/64998
f5133bec22947ee89a812663d8b2e6d4078c8901
96a422a6fc7d993cc17c895a54ae361c4458cb53
2019-10-01T12:27:15Z
python
2019-11-20T01:00:56Z
lib/ansible/modules/windows/win_firewall.ps1
#!powershell

# Copyright: (c) 2017, Michael Eaton <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

#Requires -Module Ansible.ModuleUtils.Legacy

$ErrorActionPreference = "Stop"
$firewall_profiles = @('Domain', 'Private', 'Public')

$params = Parse-Args $args -supports_check_mode $true
$check_mode = Get-AnsibleParam -obj $params -name "_ansible_check_mode" -type "bool" -default $false

$profiles = Get-AnsibleParam -obj $params -name "profiles" -type "list" -default @("Domain", "Private", "Public")
$state = Get-AnsibleParam -obj $params -name "state" -type "str" -failifempty $true -validateset 'disabled','enabled'

$result = @{
    changed = $false
    profiles = $profiles
    state = $state
}

if ($PSVersionTable.PSVersion -lt [Version]"5.0") {
    Fail-Json $result "win_firewall requires Windows Management Framework 5 or higher."
}

Try {
    ForEach ($profile in $firewall_profiles) {
        $currentstate = (Get-NetFirewallProfile -Name $profile).Enabled
        $result.$profile = @{
            enabled = ($currentstate -eq 1)
            considered = ($profiles -contains $profile)
            currentstate = $currentstate
        }

        if ($profiles -notcontains $profile) {
            continue
        }

        if ($state -eq 'enabled') {
            if ($currentstate -eq $false) {
                Set-NetFirewallProfile -name $profile -Enabled true -WhatIf:$check_mode
                $result.changed = $true
                $result.$profile.enabled = $true
            }
        } else {
            if ($currentstate -eq $true) {
                Set-NetFirewallProfile -name $profile -Enabled false -WhatIf:$check_mode
                $result.changed = $true
                $result.$profile.enabled = $false
            }
        }
    }
} Catch {
    Fail-Json $result "an error occurred when attempting to change firewall status for profile $profile $($_.Exception.Message)"
}

Exit-Json $result
closed
ansible/ansible
https://github.com/ansible/ansible
65,043
Can't pass parameter with value `false` to cp_mgmt modules
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
<!--- Explain the problem briefly below -->
Can't pass a parameter with the value `false` to cp_mgmt modules.

For example, the following task won't send the parameter `add_default_rule`:
```
- name: Create access layer
  check_point.mgmt.cp_mgmt_access_layer:
    name: "access layer 3"
    add_default_rule: false
```

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
check_point

##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->

##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
https://github.com/ansible/ansible/issues/65043
https://github.com/ansible/ansible/pull/65040
bc92170242ed2dc456e284b796dccc81e6ff18ac
b1e666766447e1eab9d986f19503d19fe1c21ae6
2019-11-19T09:44:42Z
python
2019-11-20T06:39:40Z
lib/ansible/module_utils/network/checkpoint/checkpoint.py
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# (c) 2018 Red Hat Inc.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
#    * Redistributions of source code must retain the above copyright
#      notice, this list of conditions and the following disclaimer.
#    * Redistributions in binary form must reproduce the above copyright notice,
#      this list of conditions and the following disclaimer in the documentation
#      and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
from __future__ import (absolute_import, division, print_function)

import time

from ansible.module_utils.connection import Connection

checkpoint_argument_spec_for_objects = dict(
    auto_publish_session=dict(type='bool'),
    wait_for_task=dict(type='bool', default=True),
    state=dict(type='str', choices=['present', 'absent'], default='present'),
    version=dict(type='str')
)

checkpoint_argument_spec_for_facts = dict(
    version=dict(type='str')
)

checkpoint_argument_spec_for_commands = dict(
    wait_for_task=dict(type='bool', default=True),
    version=dict(type='str')
)

delete_params = ['name', 'uid', 'layer', 'exception-group-name', 'layer', 'rule-name']


# send the request to checkpoint
def send_request(connection, version, url, payload=None):
    code, response = connection.send_request('/web_api/' + version + url, payload)

    return code, response


# get the payload from the user parameters
def is_checkpoint_param(parameter):
    if parameter == 'auto_publish_session' or \
            parameter == 'state' or \
            parameter == 'wait_for_task' or \
            parameter == 'version':
        return False
    return True


# build the payload from the parameters which has value (not None), and they are parameter of checkpoint API as well
def get_payload_from_parameters(params):
    payload = {}
    for parameter in params:
        parameter_value = params[parameter]
        if parameter_value and is_checkpoint_param(parameter):
            if isinstance(parameter_value, dict):
                payload[parameter.replace("_", "-")] = get_payload_from_parameters(parameter_value)
            elif isinstance(parameter_value, list) and len(parameter_value) != 0 and isinstance(parameter_value[0], dict):
                payload_list = []
                for element_dict in parameter_value:
                    payload_list.append(get_payload_from_parameters(element_dict))
                payload[parameter.replace("_", "-")] = payload_list
            else:
                payload[parameter.replace("_", "-")] = parameter_value
    return payload


# wait for task
def wait_for_task(module, version, connection, task_id):
    task_id_payload = {'task-id': task_id}
    task_complete = False
    current_iteration = 0
    max_num_iterations = 300

    # As long as there is a task in progress
    while not task_complete and current_iteration < max_num_iterations:
        current_iteration += 1
        # Check the status of the task
        code, response = send_request(connection, version, 'show-task', task_id_payload)

        attempts_counter = 0
        while code != 200:
            if attempts_counter < 5:
                attempts_counter += 1
                time.sleep(2)
                code, response = send_request(connection, version, 'show-task', task_id_payload)
            else:
                response['message'] = "ERROR: Failed to handle asynchronous tasks as synchronous, tasks result is" \
                                      " undefined.\n" + response['message']
                module.fail_json(msg=response)

        # Count the number of tasks that are not in-progress
        completed_tasks = 0
        for task in response['tasks']:
            if task['status'] == 'failed':
                module.fail_json(msg='Task {0} with task id {1} failed. Look at the logs for more details'
                                     .format(task['task-name'], task['task-id']))
            if task['status'] == 'in progress':
                break
            completed_tasks += 1

        # Are we done? check if all tasks are completed
        if completed_tasks == len(response["tasks"]):
            task_complete = True
        else:
            time.sleep(2)  # Wait for two seconds
    if not task_complete:
        module.fail_json(msg="ERROR: Timeout.\nTask-id: {0}.".format(task_id_payload['task-id']))


# handle publish command, and wait for it to end if the user asked so
def handle_publish(module, connection, version):
    if module.params['auto_publish_session']:
        publish_code, publish_response = send_request(connection, version, 'publish')
        if publish_code != 200:
            module.fail_json(msg=publish_response)
        if module.params['wait_for_task']:
            wait_for_task(module, version, connection, publish_response['task-id'])


# handle a command
def api_command(module, command):
    payload = get_payload_from_parameters(module.params)
    connection = Connection(module._socket_path)
    # if user insert a specific version, we add it to the url
    version = ('v' + module.params['version'] + '/') if module.params.get('version') else ''

    code, response = send_request(connection, version, command, payload)
    result = {'changed': True}

    if code == 200:
        if module.params['wait_for_task']:
            if 'task-id' in response:
                wait_for_task(module, version, connection, response['task-id'])
            elif 'tasks' in response:
                for task_id in response['tasks']:
                    wait_for_task(module, version, connection, task_id)

        result[command] = response
    else:
        module.fail_json(msg='Checkpoint device returned error {0} with message {1}'.format(code, response))
    return result


# handle api call facts
def api_call_facts(module, api_call_object, api_call_object_plural_version):
    payload = get_payload_from_parameters(module.params)
    connection = Connection(module._socket_path)
    # if user insert a specific version, we add it to the url
    version = ('v' + module.params['version'] + '/') if module.params['version'] else ''

    # if there is neither name nor uid, the API command will be in plural version (e.g. show-hosts instead of show-host)
    if payload.get("name") is None and payload.get("uid") is None:
        api_call_object = api_call_object_plural_version

    code, response = send_request(connection, version, 'show-' + api_call_object, payload)
    if code != 200:
        module.fail_json(msg='Checkpoint device returned error {0} with message {1}'.format(code, response))

    result = {api_call_object: response}
    return result


# handle api call
def api_call(module, api_call_object):
    payload = get_payload_from_parameters(module.params)
    connection = Connection(module._socket_path)
    result = {'changed': False}
    if module.check_mode:
        return result
    # if user insert a specific version, we add it to the url
    version = ('v' + module.params['version'] + '/') if module.params.get('version') else ''

    payload_for_equals = {'type': api_call_object, 'params': payload}
    equals_code, equals_response = send_request(connection, version, 'equals', payload_for_equals)
    result['checkpoint_session_uid'] = connection.get_session_uid()

    # if code is 400 (bad request) or 500 (internal error) - fail
    if equals_code == 400 or equals_code == 500:
        module.fail_json(msg=equals_response)
if equals_code == 404 and equals_response['code'] == 'generic_err_command_not_found':
        module.fail_json(msg='Relevant hotfix is not installed on Check Point server. See sk114661 on Check Point Support Center.')

    if module.params['state'] == 'present':
        if equals_code == 200:
            if not equals_response['equals']:
                code, response = send_request(connection, version, 'set-' + api_call_object, payload)
                if code != 200:
                    module.fail_json(msg=response)
                handle_publish(module, connection, version)
                result['changed'] = True
                result[api_call_object] = response
            else:
                # objects are equal and there is no need for a set request
                pass
        elif equals_code == 404:
            code, response = send_request(connection, version, 'add-' + api_call_object, payload)
            if code != 200:
                module.fail_json(msg=response)
            handle_publish(module, connection, version)
            result['changed'] = True
            result[api_call_object] = response
    elif module.params['state'] == 'absent':
        if equals_code == 200:
            payload_for_delete = get_copy_payload_with_some_params(payload, delete_params)
            code, response = send_request(connection, version, 'delete-' + api_call_object, payload_for_delete)
            if code != 200:
                module.fail_json(msg=response)
            handle_publish(module, connection, version)
            result['changed'] = True
        elif equals_code == 404:
            # no need to delete because the object does not exist
            pass

    return result


# get the position in integer format
def get_number_from_position(payload, connection, version):
    if 'position' in payload:
        position = payload['position']
    else:
        return None

    # This code is relevant if we decide to support 'top' and 'bottom' in position
    # position_number = None
    #
    # if position is not int, convert it to int.
There are several cases: "top"
    # if position == 'top':
    #     position_number = 1
    # elif position == 'bottom':
    #     payload_for_show_access_rulebase = {'name': payload['layer'], 'limit': 0}
    #     code, response = send_request(connection, version, 'show-access-rulebase', payload_for_show_access_rulebase)
    #     position_number = response['total']
    # elif isinstance(position, str):
    #     # here position is a number in format str (e.g. "5" and not 5)
    #     position_number = int(position)
    # else:
    #     # here position is supposed to be int
    #     position_number = position
    #
    # return position_number

    return int(position)


# check whether the position param (if the user inserted it) is equal between the object and the user input
def is_equals_with_position_param(payload, connection, version, api_call_object):
    position_number = get_number_from_position(payload, connection, version)

    # if there is no position param, then it's equal by vacuous truth
    if position_number is None:
        return True

    payload_for_show_access_rulebase = {'name': payload['layer'], 'offset': position_number - 1, 'limit': 1}
    rulebase_command = 'show-' + api_call_object.split('-')[0] + '-rulebase'

    # if it's threat-exception, we change the payload and the command a little
    if api_call_object == 'threat-exception':
        payload_for_show_access_rulebase['rule-name'] = payload['rule-name']
        rulebase_command = 'show-threat-rule-exception-rulebase'

    code, response = send_request(connection, version, rulebase_command, payload_for_show_access_rulebase)

    # if true, it means there is no rule in the position that the user inserted, so we return False, and when we try to set
    # the rule, the API server will throw a relevant error
    if response['total'] < position_number:
        return False

    rule = response['rulebase'][0]
    while 'rulebase' in rule:
        rule = rule['rulebase'][0]

    # if the names of the existing rule and the user input rule are equal, it means that their positions are equal, so we
    # return True.
and there is no way that there is another rule with this name because otherwise the 'equals' command would fail
    if rule['name'] == payload['name']:
        return True
    else:
        return False


# get a copy of the payload without some of the params
def get_copy_payload_without_some_params(payload, params_to_remove):
    copy_payload = dict(payload)
    for param in params_to_remove:
        if param in copy_payload:
            del copy_payload[param]
    return copy_payload


# get a copy of the payload with only some of the params
def get_copy_payload_with_some_params(payload, params_to_insert):
    copy_payload = {}
    for param in params_to_insert:
        if param in payload:
            copy_payload[param] = payload[param]
    return copy_payload


# check equality with all the params, including action and position
def is_equals_with_all_params(payload, connection, version, api_call_object, is_access_rule):
    if is_access_rule and 'action' in payload:
        payload_for_show = get_copy_payload_with_some_params(payload, ['name', 'uid', 'layer'])
        code, response = send_request(connection, version, 'show-' + api_call_object, payload_for_show)
        exist_action = response['action']['name']
        if exist_action != payload['action']:
            return False

    if not is_equals_with_position_param(payload, connection, version, api_call_object):
        return False

    return True


# handle api call for rule
def api_call_for_rule(module, api_call_object):
    is_access_rule = 'access' in api_call_object
    payload = get_payload_from_parameters(module.params)
    connection = Connection(module._socket_path)
    result = {'changed': False}
    if module.check_mode:
        return result
    # if the user inserted a specific version, we add it to the url
    version = ('v' + module.params['version'] + '/') if module.params.get('version') else ''
    if is_access_rule:
        copy_payload_without_some_params = get_copy_payload_without_some_params(payload, ['action', 'position'])
    else:
        copy_payload_without_some_params = get_copy_payload_without_some_params(payload, ['position'])
    payload_for_equals = {'type': api_call_object, 'params':
copy_payload_without_some_params}
    equals_code, equals_response = send_request(connection, version, 'equals', payload_for_equals)
    result['checkpoint_session_uid'] = connection.get_session_uid()
    # if code is 400 (bad request) or 500 (internal error) - fail
    if equals_code == 400 or equals_code == 500:
        module.fail_json(msg=equals_response)
    if equals_code == 404 and equals_response['code'] == 'generic_err_command_not_found':
        module.fail_json(msg='Relevant hotfix is not installed on Check Point server. See sk114661 on Check Point Support Center.')
    if module.params['state'] == 'present':
        if equals_code == 200:
            if equals_response['equals']:
                if not is_equals_with_all_params(payload, connection, version, api_call_object, is_access_rule):
                    equals_response['equals'] = False
            if not equals_response['equals']:
                # if the user inserted the param 'position' and we need to use the 'set' command, change the param name to 'new-position'
                if 'position' in payload:
                    payload['new-position'] = payload['position']
                    del payload['position']
                code, response = send_request(connection, version, 'set-' + api_call_object, payload)
                if code != 200:
                    module.fail_json(msg=response)
                handle_publish(module, connection, version)
                result['changed'] = True
                result[api_call_object] = response
            else:
                # objects are equal and there is no need for a set request
                pass
        elif equals_code == 404:
            code, response = send_request(connection, version, 'add-' + api_call_object, payload)
            if code != 200:
                module.fail_json(msg=response)
            handle_publish(module, connection, version)
            result['changed'] = True
            result[api_call_object] = response
    elif module.params['state'] == 'absent':
        if equals_code == 200:
            payload_for_delete = get_copy_payload_with_some_params(payload, delete_params)
            code, response = send_request(connection, version, 'delete-' + api_call_object, payload_for_delete)
            if code != 200:
                module.fail_json(msg=response)
            handle_publish(module, connection, version)
            result['changed'] = True
        elif equals_code == 404:
            # no need to delete because the object
does not exist
            pass

    return result


# handle api call facts for rule
def api_call_facts_for_rule(module, api_call_object, api_call_object_plural_version):
    payload = get_payload_from_parameters(module.params)
    connection = Connection(module._socket_path)
    # if the user inserted a specific version, we add it to the url
    version = ('v' + module.params['version'] + '/') if module.params['version'] else ''

    # if there is no layer, the API command will be in plural version (e.g. show-hosts instead of show-host)
    if payload.get("layer") is None:
        api_call_object = api_call_object_plural_version

    code, response = send_request(connection, version, 'show-' + api_call_object, payload)
    if code != 200:
        module.fail_json(msg='Checkpoint device returned error {0} with message {1}'.format(code, response))
    result = {api_call_object: response}
    return result


# The code from here till EOF will be deprecated when Rikis' modules will be deprecated
checkpoint_argument_spec = dict(auto_publish_session=dict(type='bool', default=True),
                                policy_package=dict(type='str', default='standard'),
                                auto_install_policy=dict(type='bool', default=True),
                                targets=dict(type='list')
                                )


def publish(connection, uid=None):
    payload = None

    if uid:
        payload = {'uid': uid}

    connection.send_request('/web_api/publish', payload)


def discard(connection, uid=None):
    payload = None

    if uid:
        payload = {'uid': uid}

    connection.send_request('/web_api/discard', payload)


def install_policy(connection, policy_package, targets):
    payload = {'policy-package': policy_package,
               'targets': targets}

    connection.send_request('/web_api/install-policy', payload)
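# Standalone sketch (not part of the module above): it re-implements the
# underscore-to-hyphen payload conversion that get_payload_from_parameters
# performs, to show how Ansible parameter names become Check Point API keys.
# Note that the truthiness check (`if not value`) also drops explicit `false`
# values, which is the behavior reported in issue 65043 below. All names here
# (to_api_payload, NON_API_PARAMS) are illustrative, not part of the module.

```python
# Ansible-only control params that must never reach the Check Point API
NON_API_PARAMS = ('auto_publish_session', 'state', 'wait_for_task', 'version')


def to_api_payload(params):
    """Convert Ansible-style params (underscores) to API keys (hyphens)."""
    payload = {}
    for name, value in params.items():
        if not value or name in NON_API_PARAMS:
            continue  # skips empty values AND explicit False (issue 65043)
        key = name.replace('_', '-')
        if isinstance(value, dict):
            payload[key] = to_api_payload(value)  # recurse into nested dicts
        elif isinstance(value, list) and value and isinstance(value[0], dict):
            payload[key] = [to_api_payload(item) for item in value]
        else:
            payload[key] = value
    return payload


params = {'name': 'host1', 'ip_address': '1.2.3.4', 'state': 'present',
          'nat_settings': {'auto_rule': True}, 'add_default_rule': False}
print(to_api_payload(params))
# {'name': 'host1', 'ip-address': '1.2.3.4', 'nat-settings': {'auto-rule': True}}
```

# In the printed payload, 'state' is filtered out as a control param and
# 'add_default_rule' is silently lost because False is falsy.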
closed
ansible/ansible
https://github.com/ansible/ansible
65,043
Can't pass parameter with value `false` to cp_mgmt modules
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> Can't pass parameter with value `false` to cp_mgmt modules for example the following task, won't sent the parameter `add_default_rule`: ``` - name: Create access layer check_point.mgmt.cp_mgmt_access_layer: name: "access layer 3" add_default_rule: false ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> check_point ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below ```
https://github.com/ansible/ansible/issues/65043
https://github.com/ansible/ansible/pull/65040
bc92170242ed2dc456e284b796dccc81e6ff18ac
b1e666766447e1eab9d986f19503d19fe1c21ae6
2019-11-19T09:44:42Z
python
2019-11-20T06:39:40Z
lib/ansible/plugins/doc_fragments/checkpoint_commands.py
# -*- coding: utf-8 -*-

# Copyright: (c) 2019, Or Soffer <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)


class ModuleDocFragment(object):

    # Standard files documentation fragment
    DOCUMENTATION = r'''
options:
  wait_for_task:
    description:
      - Wait for the task to end, such as a publish task.
    type: bool
    default: True
  version:
    description:
      - Version of checkpoint. If not given, the latest version is taken.
    type: str
'''
closed
ansible/ansible
https://github.com/ansible/ansible
65,043
Can't pass parameter with value `false` to cp_mgmt modules
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> Can't pass parameter with value `false` to cp_mgmt modules for example the following task, won't sent the parameter `add_default_rule`: ``` - name: Create access layer check_point.mgmt.cp_mgmt_access_layer: name: "access layer 3" add_default_rule: false ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> check_point ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below ```
https://github.com/ansible/ansible/issues/65043
https://github.com/ansible/ansible/pull/65040
bc92170242ed2dc456e284b796dccc81e6ff18ac
b1e666766447e1eab9d986f19503d19fe1c21ae6
2019-11-19T09:44:42Z
python
2019-11-20T06:39:40Z
lib/ansible/plugins/doc_fragments/checkpoint_objects.py
# -*- coding: utf-8 -*-

# Copyright: (c) 2019, Or Soffer <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)


class ModuleDocFragment(object):

    # Standard files documentation fragment
    DOCUMENTATION = r'''
options:
  state:
    description:
      - State of the access rule (present or absent). Defaults to present.
    type: str
    default: present
    choices:
      - 'present'
      - 'absent'
  auto_publish_session:
    description:
      - Publish the current session if changes have been performed after the task completes.
    type: bool
  wait_for_task:
    description:
      - Wait for the task to end, such as a publish task.
    type: bool
    default: True
  version:
    description:
      - Version of checkpoint. If not given, the latest version is taken.
    type: str
'''
closed
ansible/ansible
https://github.com/ansible/ansible
65,043
Can't pass parameter with value `false` to cp_mgmt modules
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> Can't pass parameter with value `false` to cp_mgmt modules for example the following task, won't sent the parameter `add_default_rule`: ``` - name: Create access layer check_point.mgmt.cp_mgmt_access_layer: name: "access layer 3" add_default_rule: false ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> check_point ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below ```
https://github.com/ansible/ansible/issues/65043
https://github.com/ansible/ansible/pull/65040
bc92170242ed2dc456e284b796dccc81e6ff18ac
b1e666766447e1eab9d986f19503d19fe1c21ae6
2019-11-19T09:44:42Z
python
2019-11-20T06:39:40Z
test/sanity/ignore.txt
contrib/inventory/abiquo.py future-import-boilerplate contrib/inventory/abiquo.py metaclass-boilerplate contrib/inventory/apache-libcloud.py future-import-boilerplate contrib/inventory/apache-libcloud.py metaclass-boilerplate contrib/inventory/apstra_aos.py future-import-boilerplate contrib/inventory/apstra_aos.py metaclass-boilerplate contrib/inventory/azure_rm.py future-import-boilerplate contrib/inventory/azure_rm.py metaclass-boilerplate contrib/inventory/brook.py future-import-boilerplate contrib/inventory/brook.py metaclass-boilerplate contrib/inventory/cloudforms.py future-import-boilerplate contrib/inventory/cloudforms.py metaclass-boilerplate contrib/inventory/cobbler.py future-import-boilerplate contrib/inventory/cobbler.py metaclass-boilerplate contrib/inventory/collins.py future-import-boilerplate contrib/inventory/collins.py metaclass-boilerplate contrib/inventory/consul_io.py future-import-boilerplate contrib/inventory/consul_io.py metaclass-boilerplate contrib/inventory/digital_ocean.py future-import-boilerplate contrib/inventory/digital_ocean.py metaclass-boilerplate contrib/inventory/ec2.py future-import-boilerplate contrib/inventory/ec2.py metaclass-boilerplate contrib/inventory/fleet.py future-import-boilerplate contrib/inventory/fleet.py metaclass-boilerplate contrib/inventory/foreman.py future-import-boilerplate contrib/inventory/foreman.py metaclass-boilerplate contrib/inventory/freeipa.py future-import-boilerplate contrib/inventory/freeipa.py metaclass-boilerplate contrib/inventory/gce.py future-import-boilerplate contrib/inventory/gce.py metaclass-boilerplate contrib/inventory/gce.py pylint:blacklisted-name contrib/inventory/infoblox.py future-import-boilerplate contrib/inventory/infoblox.py metaclass-boilerplate contrib/inventory/jail.py future-import-boilerplate contrib/inventory/jail.py metaclass-boilerplate contrib/inventory/landscape.py future-import-boilerplate contrib/inventory/landscape.py metaclass-boilerplate 
contrib/inventory/libvirt_lxc.py future-import-boilerplate contrib/inventory/libvirt_lxc.py metaclass-boilerplate contrib/inventory/linode.py future-import-boilerplate contrib/inventory/linode.py metaclass-boilerplate contrib/inventory/lxc_inventory.py future-import-boilerplate contrib/inventory/lxc_inventory.py metaclass-boilerplate contrib/inventory/lxd.py future-import-boilerplate contrib/inventory/lxd.py metaclass-boilerplate contrib/inventory/mdt_dynamic_inventory.py future-import-boilerplate contrib/inventory/mdt_dynamic_inventory.py metaclass-boilerplate contrib/inventory/nagios_livestatus.py future-import-boilerplate contrib/inventory/nagios_livestatus.py metaclass-boilerplate contrib/inventory/nagios_ndo.py future-import-boilerplate contrib/inventory/nagios_ndo.py metaclass-boilerplate contrib/inventory/nsot.py future-import-boilerplate contrib/inventory/nsot.py metaclass-boilerplate contrib/inventory/openshift.py future-import-boilerplate contrib/inventory/openshift.py metaclass-boilerplate contrib/inventory/openstack_inventory.py future-import-boilerplate contrib/inventory/openstack_inventory.py metaclass-boilerplate contrib/inventory/openvz.py future-import-boilerplate contrib/inventory/openvz.py metaclass-boilerplate contrib/inventory/ovirt.py future-import-boilerplate contrib/inventory/ovirt.py metaclass-boilerplate contrib/inventory/ovirt4.py future-import-boilerplate contrib/inventory/ovirt4.py metaclass-boilerplate contrib/inventory/packet_net.py future-import-boilerplate contrib/inventory/packet_net.py metaclass-boilerplate contrib/inventory/proxmox.py future-import-boilerplate contrib/inventory/proxmox.py metaclass-boilerplate contrib/inventory/rackhd.py future-import-boilerplate contrib/inventory/rackhd.py metaclass-boilerplate contrib/inventory/rax.py future-import-boilerplate contrib/inventory/rax.py metaclass-boilerplate contrib/inventory/rudder.py future-import-boilerplate contrib/inventory/rudder.py metaclass-boilerplate 
contrib/inventory/scaleway.py future-import-boilerplate contrib/inventory/scaleway.py metaclass-boilerplate contrib/inventory/serf.py future-import-boilerplate contrib/inventory/serf.py metaclass-boilerplate contrib/inventory/softlayer.py future-import-boilerplate contrib/inventory/softlayer.py metaclass-boilerplate contrib/inventory/spacewalk.py future-import-boilerplate contrib/inventory/spacewalk.py metaclass-boilerplate contrib/inventory/ssh_config.py future-import-boilerplate contrib/inventory/ssh_config.py metaclass-boilerplate contrib/inventory/stacki.py future-import-boilerplate contrib/inventory/stacki.py metaclass-boilerplate contrib/inventory/vagrant.py future-import-boilerplate contrib/inventory/vagrant.py metaclass-boilerplate contrib/inventory/vbox.py future-import-boilerplate contrib/inventory/vbox.py metaclass-boilerplate contrib/inventory/vmware.py future-import-boilerplate contrib/inventory/vmware.py metaclass-boilerplate contrib/inventory/vmware_inventory.py future-import-boilerplate contrib/inventory/vmware_inventory.py metaclass-boilerplate contrib/inventory/zabbix.py future-import-boilerplate contrib/inventory/zabbix.py metaclass-boilerplate contrib/inventory/zone.py future-import-boilerplate contrib/inventory/zone.py metaclass-boilerplate contrib/vault/azure_vault.py future-import-boilerplate contrib/vault/azure_vault.py metaclass-boilerplate contrib/vault/vault-keyring-client.py future-import-boilerplate contrib/vault/vault-keyring-client.py metaclass-boilerplate contrib/vault/vault-keyring.py future-import-boilerplate contrib/vault/vault-keyring.py metaclass-boilerplate docs/bin/find-plugin-refs.py future-import-boilerplate docs/bin/find-plugin-refs.py metaclass-boilerplate docs/docsite/_extensions/pygments_lexer.py future-import-boilerplate docs/docsite/_extensions/pygments_lexer.py metaclass-boilerplate docs/docsite/_themes/sphinx_rtd_theme/__init__.py future-import-boilerplate docs/docsite/_themes/sphinx_rtd_theme/__init__.py 
metaclass-boilerplate docs/docsite/rst/conf.py future-import-boilerplate docs/docsite/rst/conf.py metaclass-boilerplate docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst no-smart-quotes examples/scripts/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath examples/scripts/upgrade_to_ps3.ps1 pslint:PSCustomUseLiteralPath examples/scripts/upgrade_to_ps3.ps1 pslint:PSUseApprovedVerbs examples/scripts/uptime.py future-import-boilerplate examples/scripts/uptime.py metaclass-boilerplate hacking/build-ansible.py shebang # only run by release engineers, Python 3.6+ required hacking/build_library/build_ansible/announce.py compile-2.6!skip # release process only, 3.6+ required hacking/build_library/build_ansible/announce.py compile-2.7!skip # release process only, 3.6+ required hacking/build_library/build_ansible/announce.py compile-3.5!skip # release process only, 3.6+ required hacking/build_library/build_ansible/command_plugins/dump_config.py compile-2.6!skip # docs build only, 2.7+ required hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-2.6!skip # docs build only, 2.7+ required hacking/build_library/build_ansible/command_plugins/generate_man.py compile-2.6!skip # docs build only, 2.7+ required hacking/build_library/build_ansible/command_plugins/plugin_formatter.py compile-2.6!skip # docs build only, 2.7+ required hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.6!skip # release process only, 3.6+ required hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.7!skip # release process only, 3.6+ required hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-3.5!skip # release process only, 3.6+ required hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-2.6!skip # release process only, 3.6+ required hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-2.7!skip # release process only, 
3.6+ required
hacking/build_library/build_ansible/command_plugins/release_announcement.py compile-3.5!skip # release process only, 3.6+ required
hacking/fix_test_syntax.py future-import-boilerplate
hacking/fix_test_syntax.py metaclass-boilerplate
hacking/get_library.py future-import-boilerplate
hacking/get_library.py metaclass-boilerplate
hacking/report.py future-import-boilerplate
hacking/report.py metaclass-boilerplate
hacking/return_skeleton_generator.py future-import-boilerplate
hacking/return_skeleton_generator.py metaclass-boilerplate
hacking/test-module.py future-import-boilerplate
hacking/test-module.py metaclass-boilerplate
hacking/tests/gen_distribution_version_testcase.py future-import-boilerplate
hacking/tests/gen_distribution_version_testcase.py metaclass-boilerplate
lib/ansible/cli/console.py pylint:blacklisted-name
lib/ansible/cli/scripts/ansible_cli_stub.py shebang
lib/ansible/cli/scripts/ansible_connection_cli_stub.py shebang
lib/ansible/compat/selectors/_selectors2.py future-import-boilerplate # ignore bundled
lib/ansible/compat/selectors/_selectors2.py metaclass-boilerplate # ignore bundled
lib/ansible/compat/selectors/_selectors2.py pylint:blacklisted-name
lib/ansible/config/base.yml no-unwanted-files
lib/ansible/config/module_defaults.yml no-unwanted-files
lib/ansible/executor/playbook_executor.py pylint:blacklisted-name
lib/ansible/executor/powershell/async_watchdog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/async_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/exec_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/task_queue_manager.py pylint:blacklisted-name
lib/ansible/module_utils/_text.py future-import-boilerplate
lib/ansible/module_utils/_text.py metaclass-boilerplate
lib/ansible/module_utils/alicloud_ecs.py future-import-boilerplate
lib/ansible/module_utils/alicloud_ecs.py metaclass-boilerplate
lib/ansible/module_utils/ansible_tower.py future-import-boilerplate
lib/ansible/module_utils/ansible_tower.py metaclass-boilerplate
lib/ansible/module_utils/api.py future-import-boilerplate
lib/ansible/module_utils/api.py metaclass-boilerplate
lib/ansible/module_utils/azure_rm_common.py future-import-boilerplate
lib/ansible/module_utils/azure_rm_common.py metaclass-boilerplate
lib/ansible/module_utils/azure_rm_common_ext.py future-import-boilerplate
lib/ansible/module_utils/azure_rm_common_ext.py metaclass-boilerplate
lib/ansible/module_utils/azure_rm_common_rest.py future-import-boilerplate
lib/ansible/module_utils/azure_rm_common_rest.py metaclass-boilerplate
lib/ansible/module_utils/basic.py metaclass-boilerplate
lib/ansible/module_utils/cloud.py future-import-boilerplate
lib/ansible/module_utils/cloud.py metaclass-boilerplate
lib/ansible/module_utils/common/network.py future-import-boilerplate
lib/ansible/module_utils/common/network.py metaclass-boilerplate
lib/ansible/module_utils/compat/ipaddress.py future-import-boilerplate
lib/ansible/module_utils/compat/ipaddress.py metaclass-boilerplate
lib/ansible/module_utils/compat/ipaddress.py no-assert
lib/ansible/module_utils/compat/ipaddress.py no-unicode-literals
lib/ansible/module_utils/connection.py future-import-boilerplate
lib/ansible/module_utils/connection.py metaclass-boilerplate
lib/ansible/module_utils/database.py future-import-boilerplate
lib/ansible/module_utils/database.py metaclass-boilerplate
lib/ansible/module_utils/digital_ocean.py future-import-boilerplate
lib/ansible/module_utils/digital_ocean.py metaclass-boilerplate
lib/ansible/module_utils/dimensiondata.py future-import-boilerplate
lib/ansible/module_utils/dimensiondata.py metaclass-boilerplate
lib/ansible/module_utils/distro/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/distro/_distro.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py no-assert
lib/ansible/module_utils/distro/_distro.py pep8!skip # bundled code we don't want to modify
lib/ansible/module_utils/f5_utils.py future-import-boilerplate
lib/ansible/module_utils/f5_utils.py metaclass-boilerplate
lib/ansible/module_utils/facts/__init__.py empty-init # breaks namespacing, deprecate and eventually remove
lib/ansible/module_utils/facts/network/linux.py pylint:blacklisted-name
lib/ansible/module_utils/facts/sysctl.py future-import-boilerplate
lib/ansible/module_utils/facts/sysctl.py metaclass-boilerplate
lib/ansible/module_utils/facts/utils.py future-import-boilerplate
lib/ansible/module_utils/facts/utils.py metaclass-boilerplate
lib/ansible/module_utils/firewalld.py future-import-boilerplate
lib/ansible/module_utils/firewalld.py metaclass-boilerplate
lib/ansible/module_utils/gcdns.py future-import-boilerplate
lib/ansible/module_utils/gcdns.py metaclass-boilerplate
lib/ansible/module_utils/gce.py future-import-boilerplate
lib/ansible/module_utils/gce.py metaclass-boilerplate
lib/ansible/module_utils/gcp.py future-import-boilerplate
lib/ansible/module_utils/gcp.py metaclass-boilerplate
lib/ansible/module_utils/gcp_utils.py future-import-boilerplate
lib/ansible/module_utils/gcp_utils.py metaclass-boilerplate
lib/ansible/module_utils/gitlab.py future-import-boilerplate
lib/ansible/module_utils/gitlab.py metaclass-boilerplate
lib/ansible/module_utils/hwc_utils.py future-import-boilerplate
lib/ansible/module_utils/hwc_utils.py metaclass-boilerplate
lib/ansible/module_utils/infinibox.py future-import-boilerplate
lib/ansible/module_utils/infinibox.py metaclass-boilerplate
lib/ansible/module_utils/ipa.py future-import-boilerplate
lib/ansible/module_utils/ipa.py metaclass-boilerplate
lib/ansible/module_utils/ismount.py future-import-boilerplate
lib/ansible/module_utils/ismount.py metaclass-boilerplate
lib/ansible/module_utils/json_utils.py future-import-boilerplate
lib/ansible/module_utils/json_utils.py metaclass-boilerplate
lib/ansible/module_utils/k8s/common.py metaclass-boilerplate
lib/ansible/module_utils/k8s/raw.py metaclass-boilerplate
lib/ansible/module_utils/k8s/scale.py metaclass-boilerplate
lib/ansible/module_utils/known_hosts.py future-import-boilerplate
lib/ansible/module_utils/known_hosts.py metaclass-boilerplate
lib/ansible/module_utils/kubevirt.py future-import-boilerplate
lib/ansible/module_utils/kubevirt.py metaclass-boilerplate
lib/ansible/module_utils/linode.py future-import-boilerplate
lib/ansible/module_utils/linode.py metaclass-boilerplate
lib/ansible/module_utils/lxd.py future-import-boilerplate
lib/ansible/module_utils/lxd.py metaclass-boilerplate
lib/ansible/module_utils/manageiq.py future-import-boilerplate
lib/ansible/module_utils/manageiq.py metaclass-boilerplate
lib/ansible/module_utils/memset.py future-import-boilerplate
lib/ansible/module_utils/memset.py metaclass-boilerplate
lib/ansible/module_utils/mysql.py future-import-boilerplate
lib/ansible/module_utils/mysql.py metaclass-boilerplate
lib/ansible/module_utils/net_tools/netbox/netbox_utils.py future-import-boilerplate
lib/ansible/module_utils/net_tools/nios/api.py future-import-boilerplate
lib/ansible/module_utils/net_tools/nios/api.py metaclass-boilerplate
lib/ansible/module_utils/netapp.py future-import-boilerplate
lib/ansible/module_utils/netapp.py metaclass-boilerplate
lib/ansible/module_utils/netapp_elementsw_module.py future-import-boilerplate
lib/ansible/module_utils/netapp_elementsw_module.py metaclass-boilerplate
lib/ansible/module_utils/netapp_module.py future-import-boilerplate
lib/ansible/module_utils/netapp_module.py metaclass-boilerplate
lib/ansible/module_utils/network/a10/a10.py future-import-boilerplate
lib/ansible/module_utils/network/a10/a10.py metaclass-boilerplate
lib/ansible/module_utils/network/aireos/aireos.py future-import-boilerplate
lib/ansible/module_utils/network/aireos/aireos.py metaclass-boilerplate
lib/ansible/module_utils/network/aos/aos.py future-import-boilerplate
lib/ansible/module_utils/network/aos/aos.py metaclass-boilerplate
lib/ansible/module_utils/network/aruba/aruba.py future-import-boilerplate
lib/ansible/module_utils/network/aruba/aruba.py metaclass-boilerplate
lib/ansible/module_utils/network/asa/asa.py future-import-boilerplate
lib/ansible/module_utils/network/asa/asa.py metaclass-boilerplate
lib/ansible/module_utils/network/avi/ansible_utils.py future-import-boilerplate
lib/ansible/module_utils/network/avi/ansible_utils.py metaclass-boilerplate
lib/ansible/module_utils/network/avi/avi.py future-import-boilerplate
lib/ansible/module_utils/network/avi/avi.py metaclass-boilerplate
lib/ansible/module_utils/network/avi/avi_api.py future-import-boilerplate
lib/ansible/module_utils/network/avi/avi_api.py metaclass-boilerplate
lib/ansible/module_utils/network/bigswitch/bigswitch.py future-import-boilerplate
lib/ansible/module_utils/network/bigswitch/bigswitch.py metaclass-boilerplate
lib/ansible/module_utils/network/checkpoint/checkpoint.py metaclass-boilerplate
lib/ansible/module_utils/network/cloudengine/ce.py future-import-boilerplate
lib/ansible/module_utils/network/cloudengine/ce.py metaclass-boilerplate
lib/ansible/module_utils/network/cnos/cnos.py future-import-boilerplate
lib/ansible/module_utils/network/cnos/cnos.py metaclass-boilerplate
lib/ansible/module_utils/network/cnos/cnos_devicerules.py future-import-boilerplate
lib/ansible/module_utils/network/cnos/cnos_devicerules.py metaclass-boilerplate
lib/ansible/module_utils/network/cnos/cnos_errorcodes.py future-import-boilerplate
lib/ansible/module_utils/network/cnos/cnos_errorcodes.py metaclass-boilerplate
lib/ansible/module_utils/network/common/cfg/base.py future-import-boilerplate
lib/ansible/module_utils/network/common/cfg/base.py metaclass-boilerplate
lib/ansible/module_utils/network/common/config.py future-import-boilerplate
lib/ansible/module_utils/network/common/config.py metaclass-boilerplate
lib/ansible/module_utils/network/common/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/common/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/common/netconf.py future-import-boilerplate
lib/ansible/module_utils/network/common/netconf.py metaclass-boilerplate
lib/ansible/module_utils/network/common/network.py future-import-boilerplate
lib/ansible/module_utils/network/common/network.py metaclass-boilerplate
lib/ansible/module_utils/network/common/parsing.py future-import-boilerplate
lib/ansible/module_utils/network/common/parsing.py metaclass-boilerplate
lib/ansible/module_utils/network/common/utils.py future-import-boilerplate
lib/ansible/module_utils/network/common/utils.py metaclass-boilerplate
lib/ansible/module_utils/network/dellos10/dellos10.py future-import-boilerplate
lib/ansible/module_utils/network/dellos10/dellos10.py metaclass-boilerplate
lib/ansible/module_utils/network/dellos6/dellos6.py future-import-boilerplate
lib/ansible/module_utils/network/dellos6/dellos6.py metaclass-boilerplate
lib/ansible/module_utils/network/dellos9/dellos9.py future-import-boilerplate
lib/ansible/module_utils/network/dellos9/dellos9.py metaclass-boilerplate
lib/ansible/module_utils/network/edgeos/edgeos.py future-import-boilerplate
lib/ansible/module_utils/network/edgeos/edgeos.py metaclass-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch.py future-import-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch.py metaclass-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch_interface.py future-import-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch_interface.py metaclass-boilerplate
lib/ansible/module_utils/network/edgeswitch/edgeswitch_interface.py pylint:duplicate-string-formatting-argument
lib/ansible/module_utils/network/enos/enos.py future-import-boilerplate
lib/ansible/module_utils/network/enos/enos.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/eos.py future-import-boilerplate
lib/ansible/module_utils/network/eos/eos.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/address_family.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/address_family.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/neighbors.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/neighbors.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/process.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/cli/config/bgp/process.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/module.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/module.py metaclass-boilerplate
lib/ansible/module_utils/network/eos/providers/providers.py future-import-boilerplate
lib/ansible/module_utils/network/eos/providers/providers.py metaclass-boilerplate
lib/ansible/module_utils/network/exos/exos.py future-import-boilerplate
lib/ansible/module_utils/network/exos/exos.py metaclass-boilerplate
lib/ansible/module_utils/network/fortimanager/common.py future-import-boilerplate
lib/ansible/module_utils/network/fortimanager/common.py metaclass-boilerplate
lib/ansible/module_utils/network/fortimanager/fortimanager.py future-import-boilerplate
lib/ansible/module_utils/network/fortimanager/fortimanager.py metaclass-boilerplate
lib/ansible/module_utils/network/fortios/fortios.py future-import-boilerplate
lib/ansible/module_utils/network/fortios/fortios.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/frr.py future-import-boilerplate
lib/ansible/module_utils/network/frr/frr.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/base.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/base.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/address_family.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/address_family.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/neighbors.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/neighbors.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/process.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/cli/config/bgp/process.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/module.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/module.py metaclass-boilerplate
lib/ansible/module_utils/network/frr/providers/providers.py future-import-boilerplate
lib/ansible/module_utils/network/frr/providers/providers.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/common.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/common.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/configuration.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/configuration.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/device.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/device.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/fdm_swagger_client.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/fdm_swagger_client.py metaclass-boilerplate
lib/ansible/module_utils/network/ftd/operation.py future-import-boilerplate
lib/ansible/module_utils/network/ftd/operation.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/ios.py future-import-boilerplate
lib/ansible/module_utils/network/ios/ios.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/base.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/base.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/address_family.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/address_family.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/neighbors.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/neighbors.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/process.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/cli/config/bgp/process.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/module.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/module.py metaclass-boilerplate
lib/ansible/module_utils/network/ios/providers/providers.py future-import-boilerplate
lib/ansible/module_utils/network/ios/providers/providers.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/iosxr.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/iosxr.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/address_family.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/address_family.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/neighbors.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/neighbors.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/process.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/cli/config/bgp/process.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/module.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/module.py metaclass-boilerplate
lib/ansible/module_utils/network/iosxr/providers/providers.py future-import-boilerplate
lib/ansible/module_utils/network/iosxr/providers/providers.py metaclass-boilerplate
lib/ansible/module_utils/network/junos/argspec/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/junos/argspec/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/junos/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/junos/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/junos/facts/legacy/base.py future-import-boilerplate
lib/ansible/module_utils/network/junos/facts/legacy/base.py metaclass-boilerplate
lib/ansible/module_utils/network/junos/junos.py future-import-boilerplate
lib/ansible/module_utils/network/junos/junos.py metaclass-boilerplate
lib/ansible/module_utils/network/meraki/meraki.py future-import-boilerplate
lib/ansible/module_utils/network/meraki/meraki.py metaclass-boilerplate
lib/ansible/module_utils/network/netconf/netconf.py future-import-boilerplate
lib/ansible/module_utils/network/netconf/netconf.py metaclass-boilerplate
lib/ansible/module_utils/network/netscaler/netscaler.py future-import-boilerplate
lib/ansible/module_utils/network/netscaler/netscaler.py metaclass-boilerplate
lib/ansible/module_utils/network/nos/nos.py future-import-boilerplate
lib/ansible/module_utils/network/nos/nos.py metaclass-boilerplate
lib/ansible/module_utils/network/nso/nso.py future-import-boilerplate
lib/ansible/module_utils/network/nso/nso.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/argspec/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/argspec/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/facts/facts.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/facts/facts.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/facts/legacy/base.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/facts/legacy/base.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/nxos.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/nxos.py metaclass-boilerplate
lib/ansible/module_utils/network/nxos/utils/utils.py future-import-boilerplate
lib/ansible/module_utils/network/nxos/utils/utils.py metaclass-boilerplate
lib/ansible/module_utils/network/onyx/onyx.py future-import-boilerplate
lib/ansible/module_utils/network/onyx/onyx.py metaclass-boilerplate
lib/ansible/module_utils/network/ordnance/ordnance.py future-import-boilerplate
lib/ansible/module_utils/network/ordnance/ordnance.py metaclass-boilerplate
lib/ansible/module_utils/network/restconf/restconf.py future-import-boilerplate
lib/ansible/module_utils/network/restconf/restconf.py metaclass-boilerplate
lib/ansible/module_utils/network/routeros/routeros.py future-import-boilerplate
lib/ansible/module_utils/network/routeros/routeros.py metaclass-boilerplate
lib/ansible/module_utils/network/skydive/api.py future-import-boilerplate
lib/ansible/module_utils/network/skydive/api.py metaclass-boilerplate
lib/ansible/module_utils/network/slxos/slxos.py future-import-boilerplate
lib/ansible/module_utils/network/slxos/slxos.py metaclass-boilerplate
lib/ansible/module_utils/network/sros/sros.py future-import-boilerplate
lib/ansible/module_utils/network/sros/sros.py metaclass-boilerplate
lib/ansible/module_utils/network/voss/voss.py future-import-boilerplate
lib/ansible/module_utils/network/voss/voss.py metaclass-boilerplate
lib/ansible/module_utils/network/vyos/vyos.py future-import-boilerplate
lib/ansible/module_utils/network/vyos/vyos.py metaclass-boilerplate
lib/ansible/module_utils/oneandone.py future-import-boilerplate
lib/ansible/module_utils/oneandone.py metaclass-boilerplate
lib/ansible/module_utils/oneview.py metaclass-boilerplate
lib/ansible/module_utils/opennebula.py future-import-boilerplate
lib/ansible/module_utils/opennebula.py metaclass-boilerplate
lib/ansible/module_utils/openstack.py future-import-boilerplate
lib/ansible/module_utils/openstack.py metaclass-boilerplate
lib/ansible/module_utils/oracle/oci_utils.py future-import-boilerplate
lib/ansible/module_utils/oracle/oci_utils.py metaclass-boilerplate
lib/ansible/module_utils/ovirt.py future-import-boilerplate
lib/ansible/module_utils/ovirt.py metaclass-boilerplate
lib/ansible/module_utils/parsing/convert_bool.py future-import-boilerplate
lib/ansible/module_utils/parsing/convert_bool.py metaclass-boilerplate
lib/ansible/module_utils/postgres.py future-import-boilerplate
lib/ansible/module_utils/postgres.py metaclass-boilerplate
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.ArgvParser.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSProvideCommentHelp # need to agree on best format for comment location
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSProvideCommentHelp
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.LinkUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/pure.py future-import-boilerplate
lib/ansible/module_utils/pure.py metaclass-boilerplate
lib/ansible/module_utils/pycompat24.py future-import-boilerplate
lib/ansible/module_utils/pycompat24.py metaclass-boilerplate
lib/ansible/module_utils/pycompat24.py no-get-exception
lib/ansible/module_utils/rax.py future-import-boilerplate
lib/ansible/module_utils/rax.py metaclass-boilerplate
lib/ansible/module_utils/redhat.py future-import-boilerplate
lib/ansible/module_utils/redhat.py metaclass-boilerplate
lib/ansible/module_utils/remote_management/dellemc/dellemc_idrac.py future-import-boilerplate
lib/ansible/module_utils/remote_management/intersight.py future-import-boilerplate
lib/ansible/module_utils/remote_management/intersight.py metaclass-boilerplate
lib/ansible/module_utils/remote_management/lxca/common.py future-import-boilerplate
lib/ansible/module_utils/remote_management/lxca/common.py metaclass-boilerplate
lib/ansible/module_utils/remote_management/ucs.py future-import-boilerplate
lib/ansible/module_utils/remote_management/ucs.py metaclass-boilerplate
lib/ansible/module_utils/scaleway.py future-import-boilerplate
lib/ansible/module_utils/scaleway.py metaclass-boilerplate
lib/ansible/module_utils/service.py future-import-boilerplate
lib/ansible/module_utils/service.py metaclass-boilerplate
lib/ansible/module_utils/six/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/six/__init__.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py no-basestring
lib/ansible/module_utils/six/__init__.py no-dict-iteritems
lib/ansible/module_utils/six/__init__.py no-dict-iterkeys
lib/ansible/module_utils/six/__init__.py no-dict-itervalues
lib/ansible/module_utils/six/__init__.py replace-urlopen
lib/ansible/module_utils/splitter.py future-import-boilerplate
lib/ansible/module_utils/splitter.py metaclass-boilerplate
lib/ansible/module_utils/storage/hpe3par/hpe3par.py future-import-boilerplate
lib/ansible/module_utils/storage/hpe3par/hpe3par.py metaclass-boilerplate
lib/ansible/module_utils/univention_umc.py future-import-boilerplate
lib/ansible/module_utils/univention_umc.py metaclass-boilerplate
lib/ansible/module_utils/urls.py future-import-boilerplate
lib/ansible/module_utils/urls.py metaclass-boilerplate
lib/ansible/module_utils/urls.py pylint:blacklisted-name
lib/ansible/module_utils/urls.py replace-urlopen
lib/ansible/module_utils/vca.py future-import-boilerplate
lib/ansible/module_utils/vca.py metaclass-boilerplate
lib/ansible/module_utils/vexata.py future-import-boilerplate
lib/ansible/module_utils/vexata.py metaclass-boilerplate
lib/ansible/module_utils/yumdnf.py future-import-boilerplate
lib/ansible/module_utils/yumdnf.py metaclass-boilerplate
lib/ansible/modules/cloud/alicloud/ali_instance.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/alicloud/ali_instance.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/alicloud/ali_instance_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/alicloud/ali_instance_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/atomic/atomic_container.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/atomic/atomic_container.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/atomic/atomic_container.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/atomic/atomic_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/atomic/atomic_image.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_acs.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_aks.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_aks_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_aksversion_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_appgateway.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_appgateway.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_applicationsecuritygroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_applicationsecuritygroup_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_appserviceplan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_appserviceplan_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_autoscale.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_autoscale.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_autoscale_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_availabilityset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_availabilityset_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_azurefirewall.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/azure/azure_rm_azurefirewall.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_azurefirewall.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_batchaccount.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_cdnendpoint.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_cdnendpoint_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_cdnendpoint_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_cdnprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_cdnprofile_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_containerinstance.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_containerinstance.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_containerinstance_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_containerregistry.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_containerregistry_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_cosmosdbaccount_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_deployment.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_deployment.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_deployment_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_deployment_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlab.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlab_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabarmtemplate_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabartifact_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabartifactsource.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlabartifactsource_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabcustomimage.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlabcustomimage_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabenvironment.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlabenvironment_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabpolicy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlabpolicy_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabschedule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlabschedule_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualmachine.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualmachine.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualmachine.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualmachine_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_devtestlabvirtualnetwork.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_dnsrecordset.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/azure/azure_rm_dnsrecordset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_dnsrecordset_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_dnsrecordset_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_dnszone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_dnszone_info.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_dnszone_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_dnszone_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_functionapp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_functionapp_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_galleryimage_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_galleryimageversion.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_galleryimageversion.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_galleryimageversion.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_hdinsightcluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_hdinsightcluster_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_hdinsightcluster_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_image.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_image_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_image_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_keyvault.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_keyvault.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_keyvault_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_keyvaultkey.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_keyvaultkey_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_keyvaultsecret.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_loadbalancer_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_loganalyticsworkspace.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_loganalyticsworkspace_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_manageddisk.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_manageddisk_info.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_manageddisk_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_mariadbconfiguration.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mariadbconfiguration_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_mariadbdatabase.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mariadbfirewallrule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mariadbserver.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mysqlconfiguration.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mysqlconfiguration_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_mysqldatabase.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mysqlfirewallrule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_mysqlserver.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_networkinterface_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_networkinterface_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_postgresqlconfiguration.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_postgresqlconfiguration_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_postgresqldatabase.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_postgresqlfirewallrule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_postgresqlserver.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_publicipaddress.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_publicipaddress_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_rediscache.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_rediscache.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_rediscache.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_rediscache_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_rediscachefirewallrule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_resource.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_resource_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_resourcegroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_resourcegroup_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_roleassignment.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_roleassignment_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_roledefinition.py validate-modules:invalid-argument-spec
lib/ansible/modules/cloud/azure/azure_rm_roledefinition.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/azure/azure_rm_roledefinition.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_roledefinition_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_roledefinition_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_routetable.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_routetable_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_securitygroup.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_securitygroup_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_servicebus.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_servicebus_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_servicebusqueue.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_servicebussaspolicy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_servicebustopic.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_servicebustopicsubscription.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_sqldatabase.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_sqldatabase_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_sqlfirewallrule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_sqlfirewallrule_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_sqlserver.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_sqlserver_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_storageaccount.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/azure/azure_rm_storageaccount.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_storageaccount_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_storageaccount_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/azure/azure_rm_storageblob.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_subnet.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_subnet_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerendpoint.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerendpoint_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/azure/azure_rm_trafficmanagerprofile_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachine.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachine_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachineextension.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachineextension_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachineimage_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescaleset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescaleset_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetextension.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetextension_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetinstance.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescalesetinstance_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualnetwork.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualnetwork_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkgateway.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkpeering.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_virtualnetworkpeering_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_webapp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_webapp_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/azure/azure_rm_webappslot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_aa_policy.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_aa_policy.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/cloud/centurylink/clc_alert_policy.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_alert_policy.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/centurylink/clc_alert_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_blueprint_package.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_blueprint_package.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/cloud/centurylink/clc_blueprint_package.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/centurylink/clc_firewall_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_loadbalancer.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_loadbalancer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_modify_server.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_modify_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_publicip.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_publicip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_server.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/centurylink/clc_server_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/centurylink/clc_server_snapshot.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/cloud/centurylink/clc_server_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/_digital_ocean.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/_digital_ocean.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/_digital_ocean.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/digital_ocean/digital_ocean_block_storage.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_block_storage.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_certificate.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_certificate.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_certificate_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_domain.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_domain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_domain_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_droplet.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_droplet.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_firewall_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_floating_ip.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/digital_ocean/digital_ocean_floating_ip.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_floating_ip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_floating_ip.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/digital_ocean/digital_ocean_image_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_load_balancer_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_snapshot_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_sshkey.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/digital_ocean/digital_ocean_sshkey.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_sshkey.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_sshkey.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/digital_ocean/digital_ocean_tag.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/digital_ocean/digital_ocean_tag.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_tag_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/digital_ocean/digital_ocean_volume_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/dimensiondata/dimensiondata_network.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/dimensiondata/dimensiondata_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/dimensiondata/dimensiondata_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/dimensiondata/dimensiondata_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/dimensiondata/dimensiondata_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/dimensiondata/dimensiondata_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/docker/docker_container.py use-argspec-type-path  # uses colon-separated paths, can't use type=path
lib/ansible/modules/cloud/google/_gcdns_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcdns_zone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gce.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/_gce.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/_gce.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/_gce.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcp_backend_service.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcp_backend_service.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcp_forwarding_rule.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/_gcp_healthcheck.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/_gcp_healthcheck.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcp_target_proxy.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcp_url_map.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/_gcspanner.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/_gcspanner.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gc_storage.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gce_eip.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_eip.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_eip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_eip.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gce_img.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_img.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_img.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_instance_template.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_instance_template.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_labels.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gce_lb.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_lb.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gce_lb.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_lb.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/google/gce_lb.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_mig.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_mig.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_mig.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_mig.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gce_net.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_net.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gce_net.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_net.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/google/gce_net.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_pd.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_pd.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gce_pd.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_pd.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_pd.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gce_snapshot.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_snapshot.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/gce_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gce_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gce_tag.py pylint:blacklisted-name
lib/ansible/modules/cloud/google/gce_tag.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gcp_bigquery_table.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/gcpubsub.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/google/gcpubsub.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/google/gcpubsub.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/google/gcpubsub_info.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/hcloud/hcloud_network_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/heroku/heroku_collaborator.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/kubevirt/kubevirt_cdi_upload.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/kubevirt/kubevirt_preset.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/kubevirt/kubevirt_preset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/kubevirt/kubevirt_pvc.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/kubevirt/kubevirt_rs.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/kubevirt/kubevirt_template.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/kubevirt/kubevirt_vm.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/kubevirt/kubevirt_vm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/linode/linode.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/linode/linode.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/linode/linode.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/linode/linode_v4.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/lxc/lxc_container.py pylint:blacklisted-name
lib/ansible/modules/cloud/lxc/lxc_container.py use-argspec-type-path
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/lxc/lxc_container.py validate-modules:use-run-command-not-popen
lib/ansible/modules/cloud/lxd/lxd_container.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/lxd/lxd_container.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/lxd/lxd_container.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/lxd/lxd_container.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/lxd/lxd_profile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/lxd/lxd_profile.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/lxd/lxd_profile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/memset/memset_dns_reload.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/memset/memset_memstore_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/memset/memset_server_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/memset/memset_zone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/memset/memset_zone_domain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/memset/memset_zone_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/cloud_init_data_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/helm.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/helm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/ovirt.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/misc/ovirt.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/ovirt.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/misc/proxmox.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/proxmox.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/proxmox_kvm.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/misc/proxmox_kvm.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/proxmox_kvm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/proxmox_kvm.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/misc/proxmox_template.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/proxmox_template.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/misc/proxmox_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/terraform.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/misc/terraform.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/terraform.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/terraform.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/misc/virt.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/misc/virt.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/misc/virt.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/misc/virt_net.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/misc/virt_pool.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/misc/virt_pool.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/oneandone/oneandone_firewall_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/oneandone/oneandone_load_balancer.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/oneandone/oneandone_load_balancer.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/oneandone/oneandone_load_balancer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/oneandone/oneandone_monitoring_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/oneandone/oneandone_private_network.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/oneandone/oneandone_private_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/oneandone/oneandone_private_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/oneandone/oneandone_public_ip.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/oneandone/oneandone_public_ip.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/oneandone/oneandone_public_ip.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/oneandone/oneandone_public_ip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/oneandone/oneandone_server.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/oneandone/oneandone_server.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/oneandone/oneandone_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/online/_online_server_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/online/_online_user_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/online/online_server_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/online/online_user_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/opennebula/one_host.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/opennebula/one_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/opennebula/one_image.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/opennebula/one_image_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/opennebula/one_service.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/opennebula/one_vm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_auth.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_client_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_coe_cluster.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_coe_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_coe_cluster_template.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_coe_cluster_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_flavor_info.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/openstack/os_flavor_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_flavor_info.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/cloud/openstack/os_flavor_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_floating_ip.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_image.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/openstack/os_image.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/openstack/os_image.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_image.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_image_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_ironic.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/openstack/os_ironic_inspect.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_ironic_node.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/openstack/os_keypair.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_keystone_domain.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_keystone_domain_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_keystone_domain_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_keystone_endpoint.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_keystone_endpoint.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_keystone_endpoint.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/openstack/os_keystone_role.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_keystone_service.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_listener.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_listener.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_loadbalancer.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_loadbalancer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_member.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_member.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_networks_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_networks_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/openstack/os_nova_flavor.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/openstack/os_nova_flavor.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_nova_host_aggregate.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_nova_host_aggregate.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_object.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_pool.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_port.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_port.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_port_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_port_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_project.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_project_access.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_project_access.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_project_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_project_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:return-syntax-error lib/ansible/modules/cloud/openstack/os_quota.py validate-modules:undocumented-parameter lib/ansible/modules/cloud/openstack/os_recordset.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_recordset.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/cloud/openstack/os_router.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_router.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_security_group.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_security_group_rule.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_security_group_rule.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_server.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/openstack/os_server.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_server.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_server.py validate-modules:undocumented-parameter lib/ansible/modules/cloud/openstack/os_server_action.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/openstack/os_server_action.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_server_group.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_server_group.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_server_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_server_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_server_metadata.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_server_metadata.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_server_volume.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_stack.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/openstack/os_stack.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_stack.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_subnet.py 
validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/cloud/openstack/os_subnet.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_subnet.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_subnets_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_subnets_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_user.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_user.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_user_group.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_user_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_user_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_user_role.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_volume.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_volume.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/openstack/os_volume.py validate-modules:undocumented-parameter lib/ansible/modules/cloud/openstack/os_volume_snapshot.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_zone.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/cloud/openstack/os_zone.py validate-modules:doc-missing-type lib/ansible/modules/cloud/openstack/os_zone.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/oracle/oci_vcn.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovh/ovh_ip_failover.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovh/ovh_ip_failover.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovh/ovh_ip_loadbalancing_backend.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovh/ovh_ip_loadbalancing_backend.py 
validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_affinity_group.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_affinity_group.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_affinity_group.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py validate-modules:no-default-for-required-parameter lib/ansible/modules/cloud/ovirt/ovirt_affinity_label.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_affinity_label_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_affinity_label_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_affinity_label_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_auth.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_auth.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_auth.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/ovirt/ovirt_auth.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_auth.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_auth.py validate-modules:undocumented-parameter lib/ansible/modules/cloud/ovirt/ovirt_cluster.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_cluster.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:no-default-for-required-parameter lib/ansible/modules/cloud/ovirt/ovirt_cluster.py 
validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_cluster.py validate-modules:undocumented-parameter lib/ansible/modules/cloud/ovirt/ovirt_cluster_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_cluster_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_cluster_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_datacenter.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_datacenter.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_datacenter.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_datacenter.py validate-modules:no-default-for-required-parameter lib/ansible/modules/cloud/ovirt/ovirt_datacenter_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_datacenter_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_datacenter_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_disk.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_disk.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_disk.py validate-modules:undocumented-parameter lib/ansible/modules/cloud/ovirt/ovirt_disk_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_disk_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_disk_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py metaclass-boilerplate 
lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:no-default-for-required-parameter lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_external_provider.py validate-modules:undocumented-parameter lib/ansible/modules/cloud/ovirt/ovirt_external_provider_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_external_provider_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_external_provider_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_external_provider_info.py validate-modules:no-default-for-required-parameter lib/ansible/modules/cloud/ovirt/ovirt_external_provider_info.py validate-modules:undocumented-parameter lib/ansible/modules/cloud/ovirt/ovirt_group.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_group.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_group.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_group_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_group_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_group_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_host.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_host.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_host.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_host.py validate-modules:implied-parameter-type-mismatch lib/ansible/modules/cloud/ovirt/ovirt_host.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_host_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_host_info.py 
metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_host_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_host_network.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_host_network.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_host_network.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_host_network.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py validate-modules:no-default-for-required-parameter lib/ansible/modules/cloud/ovirt/ovirt_host_pm.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_host_storage_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_host_storage_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_host_storage_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_host_storage_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_instance_type.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_instance_type.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_instance_type.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_job.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_job.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_job.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_mac_pool.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_mac_pool.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_mac_pool.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_mac_pool.py 
validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_network.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_network.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_network.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_network.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_network_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_network_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_network_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_nic.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_nic.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_nic.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_nic.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_nic_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_nic_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_nic_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_permission.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_permission.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_permission.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_permission_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_permission_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_permission_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_quota.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_quota.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_quota.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_quota.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/cloud/ovirt/ovirt_quota_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_quota_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_quota_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_role.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_role.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_role.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_role.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_scheduling_policy_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_scheduling_policy_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_scheduling_policy_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_snapshot.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_snapshot.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_snapshot.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_snapshot.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_snapshot_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_snapshot_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_snapshot_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_storage_connection.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_storage_connection.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_storage_connection.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_storage_connection.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_storage_domain.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_storage_domain.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_storage_domain.py 
validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_storage_domain.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_storage_domain_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_storage_domain_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_storage_domain_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_storage_template_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_storage_template_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_storage_template_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_storage_template_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_storage_vm_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_storage_vm_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_storage_vm_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_storage_vm_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_tag.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_tag.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_tag.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_tag.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_tag_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_tag_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_tag_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_template.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_template.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_template.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_template.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/cloud/ovirt/ovirt_template_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_template_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_template_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_user.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_user.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_user.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_user_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_user_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_user_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_vm.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_vm.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_vm.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_vm.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_vm_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_vm_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_vm_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_vm_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_vmpool.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_vmpool.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_vmpool.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_vmpool.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/ovirt/ovirt_vmpool_info.py future-import-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_vmpool_info.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_vmpool_info.py validate-modules:doc-missing-type lib/ansible/modules/cloud/ovirt/ovirt_vnic_profile.py future-import-boilerplate 
lib/ansible/modules/cloud/ovirt/ovirt_vnic_profile.py metaclass-boilerplate lib/ansible/modules/cloud/ovirt/ovirt_vnic_profile.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/packet/packet_device.py validate-modules:doc-missing-type lib/ansible/modules/cloud/packet/packet_device.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/packet/packet_sshkey.py validate-modules:doc-missing-type lib/ansible/modules/cloud/packet/packet_sshkey.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/packet/packet_sshkey.py validate-modules:undocumented-parameter lib/ansible/modules/cloud/podman/podman_image.py validate-modules:doc-type-does-not-match-spec lib/ansible/modules/cloud/podman/podman_image.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/podman/podman_image.py validate-modules:undocumented-parameter lib/ansible/modules/cloud/podman/podman_image_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:doc-missing-type lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/profitbricks/profitbricks.py validate-modules:undocumented-parameter lib/ansible/modules/cloud/profitbricks/profitbricks_datacenter.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/cloud/profitbricks/profitbricks_datacenter.py validate-modules:doc-missing-type lib/ansible/modules/cloud/profitbricks/profitbricks_datacenter.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/profitbricks/profitbricks_nic.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/cloud/profitbricks/profitbricks_nic.py 
validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/profitbricks/profitbricks_nic.py validate-modules:doc-missing-type lib/ansible/modules/cloud/profitbricks/profitbricks_nic.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:doc-missing-type lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/profitbricks/profitbricks_volume.py validate-modules:undocumented-parameter lib/ansible/modules/cloud/profitbricks/profitbricks_volume_attachments.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/cloud/profitbricks/profitbricks_volume_attachments.py validate-modules:doc-missing-type lib/ansible/modules/cloud/profitbricks/profitbricks_volume_attachments.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/pubnub/pubnub_blocks.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/pubnub/pubnub_blocks.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax.py use-argspec-type-path # fix needed lib/ansible/modules/cloud/rackspace/rax.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax.py validate-modules:undocumented-parameter lib/ansible/modules/cloud/rackspace/rax_cbs.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax_cbs.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_cbs.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_cbs_attachments.py validate-modules:doc-default-does-not-match-spec 
lib/ansible/modules/cloud/rackspace/rax_cbs_attachments.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_cbs_attachments.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_cdb.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/cloud/rackspace/rax_cdb.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax_cdb.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_cdb.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_cdb_database.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax_cdb_database.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_cdb_database.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_cdb_user.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax_cdb_user.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_cdb_user.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_clb.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax_clb.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_clb.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_clb_nodes.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax_clb_nodes.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_clb_nodes.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_clb_nodes.py validate-modules:undocumented-parameter lib/ansible/modules/cloud/rackspace/rax_clb_ssl.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax_clb_ssl.py validate-modules:doc-missing-type 
lib/ansible/modules/cloud/rackspace/rax_clb_ssl.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_dns.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax_dns.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_dns.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_dns_record.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax_dns_record.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_dns_record.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_facts.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax_facts.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_facts.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_files.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_files_objects.py use-argspec-type-path lib/ansible/modules/cloud/rackspace/rax_files_objects.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax_files_objects.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_files_objects.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/cloud/rackspace/rax_files_objects.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_identity.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/cloud/rackspace/rax_identity.py validate-modules:doc-default-does-not-match-spec 
lib/ansible/modules/cloud/rackspace/rax_identity.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_identity.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_keypair.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax_keypair.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_keypair.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_meta.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax_meta.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_meta.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_mon_alarm.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax_mon_alarm.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_mon_alarm.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_mon_check.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/cloud/rackspace/rax_mon_check.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax_mon_check.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_mon_check.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_mon_entity.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax_mon_entity.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_mon_entity.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/cloud/rackspace/rax_mon_notification.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/cloud/rackspace/rax_mon_notification.py validate-modules:doc-missing-type lib/ansible/modules/cloud/rackspace/rax_mon_notification.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/cloud/rackspace/rax_mon_notification_plan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_mon_notification_plan.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_mon_notification_plan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_network.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_queue.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_queue.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_queue.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_scaling_group.py use-argspec-type-path  # fix needed
lib/ansible/modules/cloud/rackspace/rax_scaling_group.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_scaling_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_scaling_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/rackspace/rax_scaling_policy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/rackspace/rax_scaling_policy.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/rackspace/rax_scaling_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/scaleway/_scaleway_image_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/_scaleway_image_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/_scaleway_ip_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/_scaleway_ip_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/_scaleway_organization_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/_scaleway_security_group_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/_scaleway_security_group_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/_scaleway_server_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/_scaleway_server_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/_scaleway_snapshot_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/_scaleway_snapshot_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/_scaleway_volume_facts.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/_scaleway_volume_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_compute.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_compute.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/scaleway/scaleway_image_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_image_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_ip.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_ip_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_ip_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_lb.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_lb.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/scaleway/scaleway_organization_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_security_group_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_security_group_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_security_group_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/scaleway/scaleway_server_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_server_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_snapshot_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_snapshot_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/scaleway/scaleway_sshkey.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_user_data.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_user_data.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/scaleway/scaleway_volume.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/scaleway/scaleway_volume_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/scaleway/scaleway_volume_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/smartos/imgadm.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/smartos/imgadm.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/cloud/smartos/smartos_image_info.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/smartos/vmadm.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/softlayer/sl_vm.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/softlayer/sl_vm.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/softlayer/sl_vm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/spotinst/spotinst_aws_elastigroup.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/univention/udm_dns_record.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/univention/udm_dns_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/univention/udm_dns_zone.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/univention/udm_dns_zone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/univention/udm_dns_zone.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/univention/udm_group.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/univention/udm_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/univention/udm_share.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/univention/udm_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/univention/udm_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/univention/udm_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vca_fw.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vca_nat.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vca_vapp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vca_vapp.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vmware/vca_vapp.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_deploy_ovf.py use-argspec-type-path
lib/ansible/modules/cloud/vmware/vmware_deploy_ovf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_dvs_host.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_dvs_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_dvs_host.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_dvs_portgroup.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_dvswitch.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_dvswitch.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_dvswitch.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_dvswitch_nioc.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_dvswitch_uplink_pg.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_guest.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_guest.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_guest_custom_attributes.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_guest_custom_attributes.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_guest_custom_attributes.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_guest_file_operation.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_guest_file_operation.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_guest_file_operation.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_guest_file_operation.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_portgroup.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_portgroup.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_portgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_portgroup.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_vcenter_settings.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_vcenter_settings.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_vcenter_settings.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_vcenter_settings.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_vcenter_statistics.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_vmkernel.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vmware_vspan_session.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/vmware/vmware_vspan_session.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vmware/vmware_vspan_session.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vmware/vsphere_copy.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vmware/vsphere_copy.py validate-modules:undocumented-parameter
lib/ansible/modules/cloud/vultr/_vultr_block_storage_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_dns_domain_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_firewall_group_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_network_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_os_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_region_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_server_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_ssh_key_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_startup_script_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/_vultr_user_facts.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_block_storage.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vultr/vultr_block_storage.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vultr/vultr_dns_domain.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vultr/vultr_dns_domain_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_dns_record.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vultr/vultr_dns_record.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vultr/vultr_firewall_group.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vultr/vultr_firewall_group_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_firewall_rule.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vultr/vultr_firewall_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/vultr/vultr_network.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/vultr/vultr_network_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_region_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_server_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/vultr/vultr_startup_script_info.py validate-modules:return-syntax-error
lib/ansible/modules/cloud/webfaction/webfaction_app.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/webfaction/webfaction_db.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/webfaction/webfaction_domain.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/webfaction/webfaction_domain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/webfaction/webfaction_mailbox.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/webfaction/webfaction_site.py validate-modules:doc-missing-type
lib/ansible/modules/cloud/webfaction/webfaction_site.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:missing-suboption-docs
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/cloud/xenserver/xenserver_guest.py validate-modules:undocumented-parameter
lib/ansible/modules/clustering/consul/consul.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/consul/consul.py validate-modules:undocumented-parameter
lib/ansible/modules/clustering/consul/consul_acl.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/consul/consul_kv.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/etcd3.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/clustering/etcd3.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/k8s/k8s.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/clustering/k8s/k8s.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/k8s/k8s.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/k8s/k8s.py validate-modules:return-syntax-error
lib/ansible/modules/clustering/k8s/k8s_auth.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/k8s/k8s_auth.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/k8s/k8s_info.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/k8s/k8s_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/k8s/k8s_scale.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/k8s/k8s_scale.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/k8s/k8s_scale.py validate-modules:return-syntax-error
lib/ansible/modules/clustering/k8s/k8s_service.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/k8s/k8s_service.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/k8s/k8s_service.py validate-modules:return-syntax-error
lib/ansible/modules/clustering/pacemaker_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/clustering/znode.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/clustering/znode.py validate-modules:doc-missing-type
lib/ansible/modules/clustering/znode.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/commands/command.py validate-modules:doc-missing-type
lib/ansible/modules/commands/command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/commands/command.py validate-modules:undocumented-parameter
lib/ansible/modules/commands/expect.py validate-modules:doc-missing-type
lib/ansible/modules/crypto/acme/acme_account_info.py validate-modules:return-syntax-error
lib/ansible/modules/database/influxdb/influxdb_database.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/database/influxdb/influxdb_database.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/influxdb/influxdb_query.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/database/influxdb/influxdb_query.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/influxdb/influxdb_retention_policy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/database/influxdb/influxdb_retention_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/influxdb/influxdb_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/database/influxdb/influxdb_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/influxdb/influxdb_write.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/database/influxdb/influxdb_write.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/misc/elasticsearch_plugin.py validate-modules:doc-missing-type
lib/ansible/modules/database/misc/elasticsearch_plugin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/misc/kibana_plugin.py validate-modules:doc-missing-type
lib/ansible/modules/database/misc/kibana_plugin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/misc/redis.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/misc/riak.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/database/misc/riak.py validate-modules:doc-missing-type
lib/ansible/modules/database/misc/riak.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/mongodb/mongodb_parameter.py use-argspec-type-path
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:doc-missing-type
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/database/mongodb/mongodb_parameter.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/mongodb/mongodb_replicaset.py use-argspec-type-path
lib/ansible/modules/database/mongodb/mongodb_shard.py use-argspec-type-path
lib/ansible/modules/database/mongodb/mongodb_shard.py validate-modules:doc-missing-type
lib/ansible/modules/database/mongodb/mongodb_shard.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/mongodb/mongodb_user.py use-argspec-type-path
lib/ansible/modules/database/mongodb/mongodb_user.py validate-modules:doc-missing-type
lib/ansible/modules/database/mongodb/mongodb_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/mongodb/mongodb_user.py validate-modules:undocumented-parameter
lib/ansible/modules/database/mssql/mssql_db.py validate-modules:doc-missing-type
lib/ansible/modules/database/mysql/mysql_db.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/mysql/mysql_db.py validate-modules:use-run-command-not-popen
lib/ansible/modules/database/mysql/mysql_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/mysql/mysql_user.py validate-modules:undocumented-parameter
lib/ansible/modules/database/postgresql/postgresql_db.py use-argspec-type-path
lib/ansible/modules/database/postgresql/postgresql_db.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/postgresql/postgresql_db.py validate-modules:use-run-command-not-popen
lib/ansible/modules/database/postgresql/postgresql_ext.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/postgresql/postgresql_pg_hba.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/postgresql/postgresql_schema.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/postgresql/postgresql_tablespace.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/postgresql/postgresql_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/database/postgresql/postgresql_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_backend_servers.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_backend_servers.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_backend_servers.py validate-modules:undocumented-parameter
lib/ansible/modules/database/proxysql/proxysql_global_variables.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_global_variables.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_global_variables.py validate-modules:undocumented-parameter
lib/ansible/modules/database/proxysql/proxysql_manage_config.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_manage_config.py validate-modules:undocumented-parameter
lib/ansible/modules/database/proxysql/proxysql_mysql_users.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_mysql_users.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_mysql_users.py validate-modules:undocumented-parameter
lib/ansible/modules/database/proxysql/proxysql_query_rules.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_query_rules.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_query_rules.py validate-modules:undocumented-parameter
lib/ansible/modules/database/proxysql/proxysql_replication_hostgroups.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_replication_hostgroups.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_replication_hostgroups.py validate-modules:undocumented-parameter
lib/ansible/modules/database/proxysql/proxysql_scheduler.py validate-modules:doc-missing-type
lib/ansible/modules/database/proxysql/proxysql_scheduler.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/database/proxysql/proxysql_scheduler.py validate-modules:undocumented-parameter
lib/ansible/modules/database/vertica/vertica_configuration.py validate-modules:doc-missing-type
lib/ansible/modules/database/vertica/vertica_info.py validate-modules:doc-missing-type
lib/ansible/modules/database/vertica/vertica_role.py validate-modules:doc-missing-type
lib/ansible/modules/database/vertica/vertica_role.py validate-modules:undocumented-parameter
lib/ansible/modules/database/vertica/vertica_schema.py validate-modules:doc-missing-type
lib/ansible/modules/database/vertica/vertica_schema.py validate-modules:undocumented-parameter
lib/ansible/modules/database/vertica/vertica_user.py validate-modules:doc-missing-type
lib/ansible/modules/database/vertica/vertica_user.py validate-modules:undocumented-parameter
lib/ansible/modules/files/acl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/files/archive.py use-argspec-type-path  # fix needed
lib/ansible/modules/files/assemble.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/files/blockinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/files/blockinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/files/copy.py pylint:blacklisted-name
lib/ansible/modules/files/copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/files/copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/files/copy.py validate-modules:undocumented-parameter
lib/ansible/modules/files/file.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/files/file.py validate-modules:undocumented-parameter
lib/ansible/modules/files/find.py use-argspec-type-path  # fix needed
lib/ansible/modules/files/find.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/files/iso_extract.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/files/lineinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/files/lineinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/files/lineinfile.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/files/patch.py pylint:blacklisted-name
lib/ansible/modules/files/replace.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/files/stat.py validate-modules:parameter-invalid
lib/ansible/modules/files/stat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/files/stat.py validate-modules:undocumented-parameter
lib/ansible/modules/files/synchronize.py pylint:blacklisted-name
lib/ansible/modules/files/synchronize.py use-argspec-type-path
lib/ansible/modules/files/synchronize.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/files/synchronize.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/files/synchronize.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/files/synchronize.py validate-modules:undocumented-parameter
lib/ansible/modules/files/unarchive.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/identity/cyberark/cyberark_authentication.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/ipa/ipa_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/ipa/ipa_dnsrecord.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/ipa/ipa_dnszone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/ipa/ipa_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/ipa/ipa_hbacrule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/ipa/ipa_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/ipa/ipa_hostgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/ipa/ipa_role.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/ipa/ipa_service.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/ipa/ipa_subca.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/ipa/ipa_sudocmd.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/ipa/ipa_sudocmdgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/ipa/ipa_sudorule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/ipa/ipa_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/ipa/ipa_vault.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/keycloak/keycloak_client.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/identity/keycloak/keycloak_client.py validate-modules:doc-missing-type
lib/ansible/modules/identity/keycloak/keycloak_client.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/keycloak/keycloak_clienttemplate.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/identity/keycloak/keycloak_clienttemplate.py validate-modules:doc-missing-type
lib/ansible/modules/identity/keycloak/keycloak_clienttemplate.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/identity/opendj/opendj_backendprop.py validate-modules:doc-missing-type
lib/ansible/modules/identity/opendj/opendj_backendprop.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_binding.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/messaging/rabbitmq/rabbitmq_binding.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_exchange.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/messaging/rabbitmq/rabbitmq_exchange.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/messaging/rabbitmq/rabbitmq_exchange.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_global_parameter.py validate-modules:doc-missing-type
lib/ansible/modules/messaging/rabbitmq/rabbitmq_global_parameter.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_parameter.py validate-modules:doc-missing-type
lib/ansible/modules/messaging/rabbitmq/rabbitmq_plugin.py validate-modules:doc-missing-type
lib/ansible/modules/messaging/rabbitmq/rabbitmq_policy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/messaging/rabbitmq/rabbitmq_policy.py validate-modules:doc-missing-type
lib/ansible/modules/messaging/rabbitmq/rabbitmq_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/messaging/rabbitmq/rabbitmq_queue.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/messaging/rabbitmq/rabbitmq_queue.py validate-modules:doc-default-incompatible-type lib/ansible/modules/messaging/rabbitmq/rabbitmq_queue.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/messaging/rabbitmq/rabbitmq_user.py validate-modules:doc-missing-type lib/ansible/modules/messaging/rabbitmq/rabbitmq_user.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/messaging/rabbitmq/rabbitmq_vhost.py validate-modules:doc-missing-type lib/ansible/modules/messaging/rabbitmq/rabbitmq_vhost_limits.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/airbrake_deployment.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/monitoring/airbrake_deployment.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/bigpanda.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/monitoring/bigpanda.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/bigpanda.py validate-modules:undocumented-parameter lib/ansible/modules/monitoring/circonus_annotation.py validate-modules:doc-default-incompatible-type lib/ansible/modules/monitoring/circonus_annotation.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/circonus_annotation.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/datadog/datadog_event.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/monitoring/datadog/datadog_event.py validate-modules:doc-default-incompatible-type lib/ansible/modules/monitoring/datadog/datadog_event.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/datadog/datadog_event.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/datadog/datadog_monitor.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/monitoring/datadog/datadog_monitor.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/grafana/grafana_dashboard.py 
validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/grafana/grafana_dashboard.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/grafana/grafana_datasource.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/monitoring/grafana/grafana_datasource.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/grafana/grafana_datasource.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/grafana/grafana_plugin.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/grafana/grafana_plugin.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/honeybadger_deployment.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/icinga2_feature.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/icinga2_host.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/monitoring/icinga2_host.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/icinga2_host.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/icinga2_host.py validate-modules:undocumented-parameter lib/ansible/modules/monitoring/librato_annotation.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/librato_annotation.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/logentries.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/monitoring/logentries.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/logentries.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/logentries.py validate-modules:undocumented-parameter lib/ansible/modules/monitoring/logicmonitor.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/monitoring/logicmonitor.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/logicmonitor.py validate-modules:no-default-for-required-parameter 
lib/ansible/modules/monitoring/logicmonitor.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/logicmonitor_facts.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/monitoring/logicmonitor_facts.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/logicmonitor_facts.py validate-modules:no-default-for-required-parameter lib/ansible/modules/monitoring/logstash_plugin.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/logstash_plugin.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/monit.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/monit.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/nagios.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/monitoring/nagios.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/nagios.py validate-modules:no-default-for-required-parameter lib/ansible/modules/monitoring/newrelic_deployment.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/pagerduty.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/monitoring/pagerduty.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/pagerduty.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/pagerduty_alert.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/pingdom.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/monitoring/pingdom.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/rollbar_deployment.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/sensu/sensu_check.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/monitoring/sensu/sensu_check.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/sensu/sensu_client.py validate-modules:doc-default-does-not-match-spec 
lib/ansible/modules/monitoring/sensu/sensu_client.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/sensu/sensu_handler.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/monitoring/sensu/sensu_handler.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/sensu/sensu_silence.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/sensu/sensu_silence.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/sensu/sensu_subscription.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/spectrum_device.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/spectrum_device.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/stackdriver.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/statusio_maintenance.py pylint:blacklisted-name lib/ansible/modules/monitoring/statusio_maintenance.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/statusio_maintenance.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/uptimerobot.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:doc-default-incompatible-type lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:missing-suboption-docs lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/zabbix/zabbix_action.py validate-modules:undocumented-parameter lib/ansible/modules/monitoring/zabbix/zabbix_group.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/zabbix/zabbix_group.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/monitoring/zabbix/zabbix_group_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/zabbix/zabbix_host.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/zabbix/zabbix_host.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/zabbix/zabbix_host_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/zabbix/zabbix_hostmacro.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/zabbix/zabbix_hostmacro.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/zabbix/zabbix_maintenance.py pylint:blacklisted-name lib/ansible/modules/monitoring/zabbix/zabbix_maintenance.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/zabbix/zabbix_maintenance.py validate-modules:no-default-for-required-parameter lib/ansible/modules/monitoring/zabbix/zabbix_maintenance.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/zabbix/zabbix_map.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/zabbix/zabbix_map.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/zabbix/zabbix_proxy.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/zabbix/zabbix_proxy.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/monitoring/zabbix/zabbix_template.py validate-modules:doc-missing-type lib/ansible/modules/monitoring/zabbix/zabbix_template.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/basics/get_url.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/basics/uri.py pylint:blacklisted-name lib/ansible/modules/net_tools/basics/uri.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/cloudflare_dns.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/dnsmadeeasy.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/dnsmadeeasy.py 
validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/ip_netns.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/ipinfoio_facts.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/ipinfoio_facts.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/ldap/ldap_entry.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/ldap/ldap_entry.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/ldap/ldap_passwd.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/netbox/netbox_device.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/netbox/netbox_device.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/netbox/netbox_interface.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/netbox/netbox_ip_address.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/netbox/netbox_ip_address.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/netbox/netbox_prefix.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/netbox/netbox_prefix.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/netbox/netbox_site.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/netcup_dns.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/netcup_dns.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/nios/nios_a_record.py validate-modules:undocumented-parameter lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:doc-default-does-not-match-spec 
lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/nios/nios_aaaa_record.py validate-modules:undocumented-parameter lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/nios/nios_cname_record.py validate-modules:undocumented-parameter lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/nios/nios_dns_view.py validate-modules:undocumented-parameter lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/nios/nios_fixed_address.py validate-modules:undocumented-parameter lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/nios/nios_host_record.py validate-modules:undocumented-parameter lib/ansible/modules/net_tools/nios/nios_member.py 
validate-modules:doc-default-does-not-match-spec lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/nios/nios_member.py validate-modules:undocumented-parameter lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/nios/nios_mx_record.py validate-modules:undocumented-parameter lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/nios/nios_naptr_record.py validate-modules:undocumented-parameter lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/nios/nios_network.py validate-modules:undocumented-parameter lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/nios/nios_network_view.py validate-modules:undocumented-parameter lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/net_tools/nios/nios_nsgroup.py 
validate-modules:doc-default-does-not-match-spec lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:missing-suboption-docs lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/nios/nios_nsgroup.py validate-modules:undocumented-parameter lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/nios/nios_ptr_record.py validate-modules:undocumented-parameter lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/nios/nios_srv_record.py validate-modules:undocumented-parameter lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/nios/nios_txt_record.py validate-modules:undocumented-parameter lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:doc-missing-type lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/net_tools/nios/nios_zone.py validate-modules:undocumented-parameter lib/ansible/modules/net_tools/nmcli.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/net_tools/nsupdate.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/a10/a10_server.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/a10/a10_server_axapi3.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/a10/a10_server_axapi3.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/a10/a10_service_group.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/a10/a10_virtual_server.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/a10/a10_virtual_server.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/aireos/aireos_command.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/aireos/aireos_command.py validate-modules:doc-missing-type lib/ansible/modules/network/aireos/aireos_command.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/aireos/aireos_config.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/aireos/aireos_config.py validate-modules:doc-missing-type lib/ansible/modules/network/aireos/aireos_config.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/aos/_aos_asn_pool.py future-import-boilerplate lib/ansible/modules/network/aos/_aos_asn_pool.py metaclass-boilerplate lib/ansible/modules/network/aos/_aos_blueprint.py future-import-boilerplate lib/ansible/modules/network/aos/_aos_blueprint.py metaclass-boilerplate lib/ansible/modules/network/aos/_aos_blueprint_param.py future-import-boilerplate lib/ansible/modules/network/aos/_aos_blueprint_param.py metaclass-boilerplate lib/ansible/modules/network/aos/_aos_blueprint_virtnet.py future-import-boilerplate lib/ansible/modules/network/aos/_aos_blueprint_virtnet.py metaclass-boilerplate lib/ansible/modules/network/aos/_aos_device.py future-import-boilerplate lib/ansible/modules/network/aos/_aos_device.py metaclass-boilerplate 
lib/ansible/modules/network/aos/_aos_external_router.py future-import-boilerplate lib/ansible/modules/network/aos/_aos_external_router.py metaclass-boilerplate lib/ansible/modules/network/aos/_aos_ip_pool.py future-import-boilerplate lib/ansible/modules/network/aos/_aos_ip_pool.py metaclass-boilerplate lib/ansible/modules/network/aos/_aos_logical_device.py future-import-boilerplate lib/ansible/modules/network/aos/_aos_logical_device.py metaclass-boilerplate lib/ansible/modules/network/aos/_aos_logical_device_map.py future-import-boilerplate lib/ansible/modules/network/aos/_aos_logical_device_map.py metaclass-boilerplate lib/ansible/modules/network/aos/_aos_login.py future-import-boilerplate lib/ansible/modules/network/aos/_aos_login.py metaclass-boilerplate lib/ansible/modules/network/aos/_aos_rack_type.py future-import-boilerplate lib/ansible/modules/network/aos/_aos_rack_type.py metaclass-boilerplate lib/ansible/modules/network/aos/_aos_template.py future-import-boilerplate lib/ansible/modules/network/aos/_aos_template.py metaclass-boilerplate lib/ansible/modules/network/aruba/aruba_command.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/aruba/aruba_command.py validate-modules:doc-missing-type lib/ansible/modules/network/aruba/aruba_command.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/aruba/aruba_config.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/aruba/aruba_config.py validate-modules:doc-missing-type lib/ansible/modules/network/aruba/aruba_config.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/asa/asa_acl.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/asa/asa_acl.py validate-modules:doc-missing-type lib/ansible/modules/network/asa/asa_acl.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/asa/asa_acl.py validate-modules:undocumented-parameter 
lib/ansible/modules/network/asa/asa_command.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/asa/asa_command.py validate-modules:doc-missing-type lib/ansible/modules/network/asa/asa_command.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/asa/asa_command.py validate-modules:undocumented-parameter lib/ansible/modules/network/asa/asa_config.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/asa/asa_config.py validate-modules:doc-missing-type lib/ansible/modules/network/asa/asa_config.py validate-modules:implied-parameter-type-mismatch lib/ansible/modules/network/asa/asa_config.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/asa/asa_config.py validate-modules:undocumented-parameter lib/ansible/modules/network/asa/asa_og.py validate-modules:doc-missing-type lib/ansible/modules/network/asa/asa_og.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_actiongroupconfig.py future-import-boilerplate lib/ansible/modules/network/avi/avi_actiongroupconfig.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_actiongroupconfig.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_actiongroupconfig.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_alertconfig.py future-import-boilerplate lib/ansible/modules/network/avi/avi_alertconfig.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_alertconfig.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_alertconfig.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_alertemailconfig.py future-import-boilerplate lib/ansible/modules/network/avi/avi_alertemailconfig.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_alertemailconfig.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_alertemailconfig.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/network/avi/avi_alertscriptconfig.py future-import-boilerplate lib/ansible/modules/network/avi/avi_alertscriptconfig.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_alertscriptconfig.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_alertscriptconfig.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_alertsyslogconfig.py future-import-boilerplate lib/ansible/modules/network/avi/avi_alertsyslogconfig.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_alertsyslogconfig.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_alertsyslogconfig.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_analyticsprofile.py future-import-boilerplate lib/ansible/modules/network/avi/avi_analyticsprofile.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_analyticsprofile.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_analyticsprofile.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_api_session.py future-import-boilerplate lib/ansible/modules/network/avi/avi_api_session.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_api_session.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_api_session.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_api_version.py future-import-boilerplate lib/ansible/modules/network/avi/avi_api_version.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_api_version.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_api_version.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_applicationpersistenceprofile.py future-import-boilerplate lib/ansible/modules/network/avi/avi_applicationpersistenceprofile.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_applicationpersistenceprofile.py 
validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_applicationpersistenceprofile.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_applicationprofile.py future-import-boilerplate lib/ansible/modules/network/avi/avi_applicationprofile.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_applicationprofile.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_applicationprofile.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_authprofile.py future-import-boilerplate lib/ansible/modules/network/avi/avi_authprofile.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_authprofile.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_authprofile.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_autoscalelaunchconfig.py future-import-boilerplate lib/ansible/modules/network/avi/avi_autoscalelaunchconfig.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_autoscalelaunchconfig.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_autoscalelaunchconfig.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_backup.py future-import-boilerplate lib/ansible/modules/network/avi/avi_backup.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_backup.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_backup.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_backupconfiguration.py future-import-boilerplate lib/ansible/modules/network/avi/avi_backupconfiguration.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_backupconfiguration.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_backupconfiguration.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_certificatemanagementprofile.py future-import-boilerplate 
lib/ansible/modules/network/avi/avi_certificatemanagementprofile.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_certificatemanagementprofile.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_certificatemanagementprofile.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_cloud.py future-import-boilerplate lib/ansible/modules/network/avi/avi_cloud.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_cloud.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_cloud.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_cloudconnectoruser.py future-import-boilerplate lib/ansible/modules/network/avi/avi_cloudconnectoruser.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_cloudconnectoruser.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_cloudconnectoruser.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_cloudproperties.py future-import-boilerplate lib/ansible/modules/network/avi/avi_cloudproperties.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_cloudproperties.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_cloudproperties.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_cluster.py future-import-boilerplate lib/ansible/modules/network/avi/avi_cluster.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_cluster.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_cluster.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/avi/avi_clusterclouddetails.py future-import-boilerplate lib/ansible/modules/network/avi/avi_clusterclouddetails.py metaclass-boilerplate lib/ansible/modules/network/avi/avi_clusterclouddetails.py validate-modules:doc-missing-type lib/ansible/modules/network/avi/avi_clusterclouddetails.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/network/avi/avi_controllerproperties.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_controllerproperties.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_controllerproperties.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_controllerproperties.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_customipamdnsprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_customipamdnsprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_customipamdnsprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_customipamdnsprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_dnspolicy.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_dnspolicy.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_dnspolicy.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_dnspolicy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_errorpagebody.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_errorpagebody.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_errorpagebody.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_errorpagebody.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_errorpageprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_errorpageprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_errorpageprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_errorpageprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_gslb.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_gslb.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_gslb.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_gslb.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_gslbgeodbprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_gslbgeodbprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_gslbgeodbprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_gslbgeodbprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_gslbservice.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_gslbservice.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_gslbservice.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_gslbservice.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_gslbservice_patch_member.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_gslbservice_patch_member.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_gslbservice_patch_member.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_gslbservice_patch_member.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_hardwaresecuritymodulegroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_hardwaresecuritymodulegroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_hardwaresecuritymodulegroup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_hardwaresecuritymodulegroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_healthmonitor.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_healthmonitor.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_healthmonitor.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_healthmonitor.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_httppolicyset.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_httppolicyset.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_httppolicyset.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_httppolicyset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_ipaddrgroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_ipaddrgroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_ipaddrgroup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_ipaddrgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_ipamdnsproviderprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_ipamdnsproviderprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_ipamdnsproviderprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_ipamdnsproviderprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_l4policyset.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_l4policyset.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_l4policyset.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_l4policyset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_microservicegroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_microservicegroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_microservicegroup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_microservicegroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_network.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_network.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_network.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_networkprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_networkprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_networkprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_networkprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_networksecuritypolicy.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_networksecuritypolicy.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_networksecuritypolicy.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_networksecuritypolicy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_pkiprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_pkiprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_pkiprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_pkiprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_pool.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_pool.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_pool.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_poolgroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_poolgroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_poolgroup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_poolgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_poolgroupdeploymentpolicy.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_poolgroupdeploymentpolicy.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_poolgroupdeploymentpolicy.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_poolgroupdeploymentpolicy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_prioritylabels.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_prioritylabels.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_prioritylabels.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_prioritylabels.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_role.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_role.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_role.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_role.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_scheduler.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_scheduler.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_scheduler.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_scheduler.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_seproperties.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_seproperties.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_seproperties.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_seproperties.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_serverautoscalepolicy.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_serverautoscalepolicy.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_serverautoscalepolicy.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_serverautoscalepolicy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_serviceengine.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_serviceengine.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_serviceengine.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_serviceengine.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_serviceenginegroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_serviceenginegroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_serviceenginegroup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_serviceenginegroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_snmptrapprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_snmptrapprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_snmptrapprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_snmptrapprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_sslkeyandcertificate.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_sslkeyandcertificate.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_sslkeyandcertificate.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_sslkeyandcertificate.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_sslprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_sslprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_sslprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_sslprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_stringgroup.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_stringgroup.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_stringgroup.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_stringgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_systemconfiguration.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_systemconfiguration.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_systemconfiguration.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_systemconfiguration.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_tenant.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_tenant.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_tenant.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_tenant.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_trafficcloneprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_trafficcloneprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_trafficcloneprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_trafficcloneprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_useraccount.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_useraccount.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_useraccount.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_useraccount.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_useraccountprofile.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_useraccountprofile.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_useraccountprofile.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_useraccountprofile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_virtualservice.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_virtualservice.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_virtualservice.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_virtualservice.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_vrfcontext.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_vrfcontext.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_vrfcontext.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_vrfcontext.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_vsdatascriptset.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_vsdatascriptset.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_vsdatascriptset.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_vsdatascriptset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_vsvip.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_vsvip.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_vsvip.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_vsvip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/avi/avi_webhook.py future-import-boilerplate
lib/ansible/modules/network/avi/avi_webhook.py metaclass-boilerplate
lib/ansible/modules/network/avi/avi_webhook.py validate-modules:doc-missing-type
lib/ansible/modules/network/avi/avi_webhook.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/bigswitch/bcf_switch.py validate-modules:doc-missing-type
lib/ansible/modules/network/bigswitch/bcf_switch.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/bigswitch/bigmon_chain.py validate-modules:doc-missing-type
lib/ansible/modules/network/bigswitch/bigmon_chain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/bigswitch/bigmon_policy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/bigswitch/bigmon_policy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/bigswitch/bigmon_policy.py validate-modules:doc-missing-type
lib/ansible/modules/network/bigswitch/bigmon_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/check_point/checkpoint_object_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cli/cli_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cli/cli_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/cli/cli_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_aaa_server.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_aaa_server.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_aaa_server.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_aaa_server_host.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_acl.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_acl.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_acl_advance.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl_advance.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_acl_advance.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_acl_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_acl_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bfd_global.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_global.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bfd_global.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bfd_session.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_session.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bfd_session.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bfd_view.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_view.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bfd_view.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/cloudengine/ce_bfd_view.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bfd_view.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bfd_view.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bgp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bgp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bgp_af.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_af.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bgp_af.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_bgp_neighbor_af.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_command.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_command.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_command.py pylint:blacklisted-name
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_config.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_config.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_dldp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_dldp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_dldp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py pylint:blacklisted-name
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_dldp_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_eth_trunk.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_evpn_bd_vni.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_evpn_bgp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_evpn_bgp_rr.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_evpn_global.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_global.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_evpn_global.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_facts.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_facts.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_file_copy.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_file_copy.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_file_copy.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_info_center_debug.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_info_center_global.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_global.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_info_center_global.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_info_center_log.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_log.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_info_center_log.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_info_center_trap.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_interface_ospf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_ip_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_ip_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_ip_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_link_status.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_link_status.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_link_status.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_mlag_config.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_mlag_config.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_mlag_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py future-import-boilerplate
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py metaclass-boilerplate
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py pylint:blacklisted-name
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cloudengine/ce_mlag_interface.py
validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_mtu.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_mtu.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_mtu.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_netconf.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_netconf.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_netconf.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_netstream_aging.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_netstream_aging.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:doc-missing-type 
lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_netstream_aging.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_netstream_export.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_netstream_export.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_netstream_export.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_netstream_global.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_netstream_global.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_netstream_global.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_netstream_template.py 
future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_netstream_template.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_netstream_template.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_ntp.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_ntp.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_ntp.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_ntp_auth.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_ntp_auth.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_ntp_auth.py 
validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_ntp_auth.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_ospf.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_ospf.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_ospf.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_ospf_vrf.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_reboot.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_reboot.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_reboot.py 
validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_reboot.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_rollback.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_rollback.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_rollback.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_sflow.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_sflow.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_sflow.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_snmp_community.py future-import-boilerplate 
lib/ansible/modules/network/cloudengine/ce_snmp_community.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_snmp_community.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_snmp_contact.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_snmp_contact.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_snmp_contact.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_snmp_location.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_snmp_location.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:doc-missing-type 
lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_snmp_location.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_snmp_target_host.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_snmp_traps.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_snmp_traps.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_snmp_traps.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_snmp_user.py future-import-boilerplate 
lib/ansible/modules/network/cloudengine/ce_snmp_user.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_snmp_user.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_startup.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_startup.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_startup.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_static_route.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_static_route.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:missing-suboption-docs 
lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_static_route.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_stp.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_stp.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_stp.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_switchport.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_switchport.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_switchport.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_vlan.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_vlan.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_vlan.py 
validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_vlan.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_vrf.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_vrf.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_vrf.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_vrf_af.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_vrf_af.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_vrf_af.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_vrf_interface.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_vrf_interface.py 
metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_vrf_interface.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_vrrp.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_vrrp.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_vrrp.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/network/cloudengine/ce_vxlan_arp.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_vxlan_gateway.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_vxlan_global.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_vxlan_global.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_vxlan_global.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py 
validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_vxlan_tunnel.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py future-import-boilerplate lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py metaclass-boilerplate lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:doc-missing-type lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:missing-suboption-docs lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/cloudengine/ce_vxlan_vap.py validate-modules:undocumented-parameter lib/ansible/modules/network/cloudvision/cv_server_provision.py pylint:blacklisted-name lib/ansible/modules/network/cloudvision/cv_server_provision.py validate-modules:doc-missing-type lib/ansible/modules/network/cnos/cnos_backup.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/cnos/cnos_backup.py validate-modules:doc-missing-type lib/ansible/modules/network/cnos/cnos_backup.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/network/cnos/cnos_backup.py validate-modules:undocumented-parameter lib/ansible/modules/network/cnos/cnos_banner.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/cnos/cnos_banner.py validate-modules:doc-missing-type lib/ansible/modules/network/cnos/cnos_banner.py 
validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_banner.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_bgp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_conditional_command.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_conditional_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_conditional_template.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_conditional_template.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_factory.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_facts.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/cnos/cnos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_image.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_image.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_l2_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_lldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_logging.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_reload.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_rollback.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_rollback.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_rollback.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/cnos/cnos_rollback.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_save.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_showrun.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_static_route.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_system.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_template.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_template.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_vlag.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_vlag.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:doc-missing-type
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/cnos/cnos_vrf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/cumulus/nclu.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos10/dellos10_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos10/dellos10_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos10/dellos10_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos6/dellos6_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos6/dellos6_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos6/dellos6_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos9/dellos9_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos9/dellos9_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/dellos9/dellos9_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/edgeos/edgeos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/edgeos/edgeos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/edgeos/edgeos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/edgeos/edgeos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/edgeos/edgeos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/edgeswitch/edgeswitch_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/edgeswitch/edgeswitch_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/enos/enos_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/enos/enos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/enos/enos_command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/enos/enos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/enos/enos_command.py validate-modules:undocumented-parameter
lib/ansible/modules/network/enos/enos_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/enos/enos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/enos/enos_config.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/enos/enos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/enos/enos_config.py validate-modules:undocumented-parameter
lib/ansible/modules/network/enos/enos_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/enos/enos_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/enos/enos_facts.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/enos/enos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/enos/enos_facts.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/_eos_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/_eos_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/_eos_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/_eos_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/_eos_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/_eos_l2_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/_eos_l2_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/_eos_l2_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/_eos_l2_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/_eos_l2_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/_eos_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/_eos_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/_eos_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/_eos_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/_eos_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/_eos_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/_eos_vlan.py future-import-boilerplate
lib/ansible/modules/network/eos/_eos_vlan.py metaclass-boilerplate
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/_eos_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/eos_banner.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_banner.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_banner.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_bgp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/eos_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_bgp.py validate-modules:doc-type-does-not-match-spec
lib/ansible/modules/network/eos/eos_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_command.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_command.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_config.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_config.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_eapi.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_eapi.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_eapi.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/eos/eos_eapi.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_eapi.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_lldp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/eos_lldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_logging.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_logging.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_logging.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/eos_logging.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_logging.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/eos_logging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_logging.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_static_route.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/eos_system.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_system.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_system.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_user.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_user.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/eos_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/eos_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/eos/eos_vrf.py future-import-boilerplate
lib/ansible/modules/network/eos/eos_vrf.py metaclass-boilerplate
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:doc-missing-type
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/eos/eos_vrf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/exos/exos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/exos/exos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/exos/exos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/exos/exos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/_bigip_asm_policy.py validate-modules:doc-missing-type
lib/ansible/modules/network/f5/_bigip_asm_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/_bigip_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/_bigip_gtm_facts.py validate-modules:doc-missing-type
lib/ansible/modules/network/f5/_bigip_gtm_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/bigip_device_info.py validate-modules:return-syntax-error
lib/ansible/modules/network/f5/bigip_firewall_address_list.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/f5/bigip_firewall_log_profile_network.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:doc-missing-type
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/bigip_gtm_pool_member.py validate-modules:undocumented-parameter
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:doc-missing-type
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/bigip_pool.py validate-modules:undocumented-parameter
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:doc-missing-type
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/f5/bigip_pool_member.py validate-modules:undocumented-parameter
lib/ansible/modules/network/f5/bigiq_device_info.py validate-modules:return-syntax-error
lib/ansible/modules/network/fortimanager/fmgr_device.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_device_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_device_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_device_provision_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwobj_address.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwobj_ippool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwobj_ippool6.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwobj_service.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwobj_vip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwpol_ipv4.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_fwpol_package.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_ha.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_provisioning.py validate-modules:doc-missing-type
lib/ansible/modules/network/fortimanager/fmgr_provisioning.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_query.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_script.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/fortimanager/fmgr_script.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_appctrl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_av.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_dns.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_ips.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_profile_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_proxy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_spam.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_ssl_ssh.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_voip.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_waf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_wanopt.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortimanager/fmgr_secprof_web.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortios/fortios_address.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/fortios/fortios_address.py validate-modules:doc-missing-type
lib/ansible/modules/network/fortios/fortios_antivirus_quarantine.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortios/fortios_firewall_DoS_policy.py validate-modules:parameter-invalid
lib/ansible/modules/network/fortios/fortios_firewall_DoS_policy6.py validate-modules:parameter-invalid
lib/ansible/modules/network/fortios/fortios_firewall_policy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_firewall_sniffer.py validate-modules:parameter-invalid
lib/ansible/modules/network/fortios/fortios_ipv4_policy.py validate-modules:doc-missing-type
lib/ansible/modules/network/fortios/fortios_ipv4_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortios/fortios_report_chart.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_switch_controller_lldp_profile.py validate-modules:parameter-invalid
lib/ansible/modules/network/fortios/fortios_switch_controller_managed_switch.py validate-modules:parameter-invalid
lib/ansible/modules/network/fortios/fortios_system_dhcp_server.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_system_global.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_voip_profile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_manualkey.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_vpn_ipsec_manualkey_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:doc-choices-incompatible-type
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:parameter-invalid
lib/ansible/modules/network/fortios/fortios_webfilter.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/fortios/fortios_wireless_controller_setting.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_wireless_controller_wtp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/fortios/fortios_wireless_controller_wtp_profile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/frr/frr_bgp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/frr/frr_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/illumos/dladm_etherstub.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_etherstub.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/dladm_iptun.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_iptun.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/dladm_iptun.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/illumos/dladm_linkprop.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_linkprop.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/dladm_linkprop.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/network/illumos/dladm_linkprop.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/illumos/dladm_vlan.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_vlan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/illumos/dladm_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/dladm_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/illumos/dladm_vnic.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/dladm_vnic.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/illumos/dladm_vnic.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/flowadm.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/flowadm.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/illumos/flowadm.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/ipadm_addr.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_addr.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/ipadm_addr.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/illumos/ipadm_addrprop.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_addrprop.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/ipadm_addrprop.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/network/illumos/ipadm_if.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_if.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/ipadm_ifprop.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_ifprop.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/illumos/ipadm_ifprop.py validate-modules:doc-missing-type
lib/ansible/modules/network/illumos/ipadm_ifprop.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/network/illumos/ipadm_prop.py pylint:blacklisted-name
lib/ansible/modules/network/illumos/ipadm_prop.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/illumos/ipadm_prop.py validate-modules:doc-missing-type
lib/ansible/modules/network/ingate/ig_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/ingate/ig_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ingate/ig_config.py validate-modules:return-syntax-error
lib/ansible/modules/network/ingate/ig_unit_information.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/_ios_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/_ios_l2_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/_ios_l3_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/_ios_vlan.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/ios_banner.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_banner.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_banner.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_banner.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_bgp.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/ios/ios_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_command.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_command.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_config.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_config.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_facts.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_facts.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_linkagg.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/ios_lldp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/ios_lldp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_lldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_logging.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_logging.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_logging.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/ios_logging.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_logging.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_logging.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/ios_logging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_logging.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/ios_ntp.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_ntp.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_ntp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_ntp.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_ping.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_ping.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_static_route.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_static_route.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_static_route.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/ios_system.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_system.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_system.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_system.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_user.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_user.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/ios/ios_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/ios/ios_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/ios/ios_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/ios/ios_vrf.py future-import-boilerplate
lib/ansible/modules/network/ios/ios_vrf.py metaclass-boilerplate
lib/ansible/modules/network/ios/ios_vrf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/ios/ios_vrf.py validate-modules:doc-missing-type
lib/ansible/modules/network/ios/ios_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/_iosxr_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_banner.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/iosxr/iosxr_bgp.py validate-modules:undocumented-parameter
lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/iosxr/iosxr_command.py
validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:doc-missing-type lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/iosxr/iosxr_command.py validate-modules:undocumented-parameter lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:doc-missing-type lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/iosxr/iosxr_config.py validate-modules:undocumented-parameter lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/iosxr/iosxr_facts.py validate-modules:undocumented-parameter lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:doc-missing-type lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:missing-suboption-docs lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/iosxr/iosxr_logging.py validate-modules:undocumented-parameter lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:doc-missing-type 
lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/iosxr/iosxr_netconf.py validate-modules:undocumented-parameter lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:doc-missing-type lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/iosxr/iosxr_system.py validate-modules:undocumented-parameter lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:doc-missing-type lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:missing-suboption-docs lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/iosxr/iosxr_user.py validate-modules:undocumented-parameter lib/ansible/modules/network/ironware/ironware_command.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/ironware/ironware_command.py validate-modules:doc-missing-type lib/ansible/modules/network/ironware/ironware_command.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/network/ironware/ironware_command.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/ironware/ironware_config.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/ironware/ironware_config.py validate-modules:doc-missing-type lib/ansible/modules/network/ironware/ironware_config.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/network/ironware/ironware_config.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/network/ironware/ironware_facts.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/ironware/ironware_facts.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/network/ironware/ironware_facts.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/itential/iap_token.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/_junos_interface.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/_junos_interface.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/_junos_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/_junos_interface.py validate-modules:missing-suboption-docs lib/ansible/modules/network/junos/_junos_interface.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/_junos_interface.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:missing-suboption-docs lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/_junos_l2_interface.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:missing-suboption-docs 
lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/_junos_l3_interface.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:missing-suboption-docs lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/_junos_linkagg.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/_junos_lldp.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/_junos_lldp.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/_junos_lldp.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/_junos_lldp.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/_junos_lldp.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/_junos_lldp_interface.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/_junos_lldp_interface.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/_junos_lldp_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/_junos_lldp_interface.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:missing-suboption-docs 
lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/_junos_vlan.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/junos_banner.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/junos_banner.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/junos_banner.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/junos_banner.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/junos_command.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/junos_command.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/junos_command.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/junos_command.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/junos_command.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/junos_config.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/junos_config.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/junos_config.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/junos_config.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/junos_config.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/junos_facts.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/junos_facts.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/junos_facts.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/junos_facts.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/junos_facts.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/junos_interfaces.py 
validate-modules:doc-type-does-not-match-spec lib/ansible/modules/network/junos/junos_lag_interfaces.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/junos_logging.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/junos_logging.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/junos_logging.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/junos_logging.py validate-modules:missing-suboption-docs lib/ansible/modules/network/junos/junos_logging.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/junos_logging.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/junos_netconf.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/junos_netconf.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/junos_netconf.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/junos_netconf.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/junos_netconf.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/junos_package.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/junos_package.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/junos_package.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/junos_package.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/junos_package.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/junos_ping.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/junos_ping.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/junos_ping.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/junos_ping.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/network/junos/junos_ping.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/junos_rpc.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/junos_rpc.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/junos_rpc.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/junos_rpc.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/junos_rpc.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/junos_scp.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/junos_scp.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/junos_scp.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/junos_scp.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/junos_scp.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/junos_static_route.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/junos_static_route.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/junos_static_route.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/junos_static_route.py validate-modules:missing-suboption-docs lib/ansible/modules/network/junos/junos_static_route.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/junos_static_route.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/junos_system.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/junos_system.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/junos_system.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/junos_system.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/junos_system.py 
validate-modules:undocumented-parameter lib/ansible/modules/network/junos/junos_user.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/junos_user.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/junos_user.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/junos_user.py validate-modules:missing-suboption-docs lib/ansible/modules/network/junos/junos_user.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/junos_user.py validate-modules:undocumented-parameter lib/ansible/modules/network/junos/junos_vrf.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/junos/junos_vrf.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/junos/junos_vrf.py validate-modules:doc-missing-type lib/ansible/modules/network/junos/junos_vrf.py validate-modules:missing-suboption-docs lib/ansible/modules/network/junos/junos_vrf.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/junos/junos_vrf.py validate-modules:undocumented-parameter lib/ansible/modules/network/meraki/meraki_admin.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/meraki/meraki_config_template.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/meraki/meraki_malware.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/meraki/meraki_mr_l3_firewall.py validate-modules:doc-type-does-not-match-spec lib/ansible/modules/network/meraki/meraki_mx_l3_firewall.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/meraki/meraki_mx_l7_firewall.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/network/meraki/meraki_mx_l7_firewall.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/meraki/meraki_nat.py validate-modules:doc-type-does-not-match-spec lib/ansible/modules/network/meraki/meraki_nat.py 
validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/meraki/meraki_network.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/meraki/meraki_organization.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/meraki/meraki_snmp.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/meraki/meraki_ssid.py validate-modules:doc-type-does-not-match-spec lib/ansible/modules/network/meraki/meraki_switchport.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/meraki/meraki_syslog.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/meraki/meraki_syslog.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/meraki/meraki_vlan.py validate-modules:missing-suboption-docs lib/ansible/modules/network/meraki/meraki_vlan.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/meraki/meraki_vlan.py validate-modules:undocumented-parameter lib/ansible/modules/network/netact/netact_cm_command.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/netact/netact_cm_command.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netconf/netconf_config.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/netconf/netconf_config.py validate-modules:doc-missing-type lib/ansible/modules/network/netconf/netconf_config.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netconf/netconf_get.py validate-modules:doc-missing-type lib/ansible/modules/network/netconf/netconf_get.py validate-modules:return-syntax-error lib/ansible/modules/network/netconf/netconf_rpc.py validate-modules:doc-missing-type lib/ansible/modules/network/netconf/netconf_rpc.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netconf/netconf_rpc.py validate-modules:return-syntax-error lib/ansible/modules/network/netscaler/netscaler_cs_action.py 
validate-modules:nonexistent-parameter-documented lib/ansible/modules/network/netscaler/netscaler_cs_action.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netscaler/netscaler_cs_policy.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netscaler/netscaler_cs_vserver.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/netscaler/netscaler_cs_vserver.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/network/netscaler/netscaler_cs_vserver.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netscaler/netscaler_cs_vserver.py validate-modules:undocumented-parameter lib/ansible/modules/network/netscaler/netscaler_gslb_service.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netscaler/netscaler_gslb_site.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netscaler/netscaler_gslb_vserver.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netscaler/netscaler_gslb_vserver.py validate-modules:undocumented-parameter lib/ansible/modules/network/netscaler/netscaler_lb_monitor.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/netscaler/netscaler_lb_monitor.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/network/netscaler/netscaler_lb_monitor.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netscaler/netscaler_lb_vserver.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/network/netscaler/netscaler_lb_vserver.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netscaler/netscaler_nitro_request.py validate-modules:doc-missing-type lib/ansible/modules/network/netscaler/netscaler_nitro_request.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netscaler/netscaler_save_config.py validate-modules:doc-missing-type 
lib/ansible/modules/network/netscaler/netscaler_save_config.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netscaler/netscaler_server.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/netscaler/netscaler_server.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netscaler/netscaler_service.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/network/netscaler/netscaler_service.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netscaler/netscaler_servicegroup.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netscaler/netscaler_ssl_certkey.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/_pn_cluster.py future-import-boilerplate lib/ansible/modules/network/netvisor/_pn_cluster.py metaclass-boilerplate lib/ansible/modules/network/netvisor/_pn_cluster.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/_pn_ospf.py future-import-boilerplate lib/ansible/modules/network/netvisor/_pn_ospf.py metaclass-boilerplate lib/ansible/modules/network/netvisor/_pn_ospf.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/_pn_ospfarea.py future-import-boilerplate lib/ansible/modules/network/netvisor/_pn_ospfarea.py metaclass-boilerplate lib/ansible/modules/network/netvisor/_pn_ospfarea.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/_pn_show.py future-import-boilerplate lib/ansible/modules/network/netvisor/_pn_show.py metaclass-boilerplate lib/ansible/modules/network/netvisor/_pn_show.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/_pn_trunk.py future-import-boilerplate lib/ansible/modules/network/netvisor/_pn_trunk.py metaclass-boilerplate lib/ansible/modules/network/netvisor/_pn_trunk.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/network/netvisor/_pn_vlag.py future-import-boilerplate lib/ansible/modules/network/netvisor/_pn_vlag.py metaclass-boilerplate lib/ansible/modules/network/netvisor/_pn_vlag.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/_pn_vlan.py future-import-boilerplate lib/ansible/modules/network/netvisor/_pn_vlan.py metaclass-boilerplate lib/ansible/modules/network/netvisor/_pn_vlan.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/_pn_vrouter.py future-import-boilerplate lib/ansible/modules/network/netvisor/_pn_vrouter.py metaclass-boilerplate lib/ansible/modules/network/netvisor/_pn_vrouter.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/_pn_vrouterbgp.py future-import-boilerplate lib/ansible/modules/network/netvisor/_pn_vrouterbgp.py metaclass-boilerplate lib/ansible/modules/network/netvisor/_pn_vrouterbgp.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/_pn_vrouterif.py future-import-boilerplate lib/ansible/modules/network/netvisor/_pn_vrouterif.py metaclass-boilerplate lib/ansible/modules/network/netvisor/_pn_vrouterif.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/_pn_vrouterlbif.py future-import-boilerplate lib/ansible/modules/network/netvisor/_pn_vrouterlbif.py metaclass-boilerplate lib/ansible/modules/network/netvisor/_pn_vrouterlbif.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/pn_access_list.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/pn_access_list_ip.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/pn_cpu_class.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/pn_dscp_map.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/pn_fabric_local.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/network/netvisor/pn_igmp_snooping.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/pn_port_config.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/pn_snmp_community.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/pn_switch_setup.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/netvisor/pn_vrouter_bgp.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/nos/nos_command.py validate-modules:doc-missing-type lib/ansible/modules/network/nos/nos_command.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/nos/nos_config.py validate-modules:doc-missing-type lib/ansible/modules/network/nos/nos_config.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/nos/nos_facts.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/nso/nso_action.py validate-modules:doc-missing-type lib/ansible/modules/network/nso/nso_action.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/nso/nso_config.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/nso/nso_config.py validate-modules:return-syntax-error lib/ansible/modules/network/nso/nso_query.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/nso/nso_show.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/nso/nso_verify.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/nuage/nuage_vspk.py validate-modules:missing-suboption-docs lib/ansible/modules/network/nuage/nuage_vspk.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/nuage/nuage_vspk.py validate-modules:undocumented-parameter lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:doc-default-does-not-match-spec 
lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:doc-default-incompatible-type lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:missing-suboption-docs lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/nxos/_nxos_interface.py validate-modules:undocumented-parameter lib/ansible/modules/network/nxos/_nxos_ip_interface.py future-import-boilerplate lib/ansible/modules/network/nxos/_nxos_ip_interface.py metaclass-boilerplate lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:doc-default-incompatible-type lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:missing-suboption-docs lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/nxos/_nxos_l2_interface.py validate-modules:undocumented-parameter lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:doc-default-incompatible-type lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:missing-suboption-docs lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/nxos/_nxos_l3_interface.py validate-modules:undocumented-parameter 
lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:doc-default-incompatible-type lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:doc-missing-type lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:missing-suboption-docs lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/nxos/_nxos_linkagg.py validate-modules:undocumented-parameter lib/ansible/modules/network/nxos/_nxos_mtu.py future-import-boilerplate lib/ansible/modules/network/nxos/_nxos_mtu.py metaclass-boilerplate lib/ansible/modules/network/nxos/_nxos_portchannel.py future-import-boilerplate lib/ansible/modules/network/nxos/_nxos_portchannel.py metaclass-boilerplate lib/ansible/modules/network/nxos/_nxos_switchport.py future-import-boilerplate lib/ansible/modules/network/nxos/_nxos_switchport.py metaclass-boilerplate lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:doc-default-incompatible-type lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:doc-missing-type lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:missing-suboption-docs lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/nxos/_nxos_vlan.py validate-modules:undocumented-parameter lib/ansible/modules/network/nxos/nxos_aaa_server.py future-import-boilerplate lib/ansible/modules/network/nxos/nxos_aaa_server.py metaclass-boilerplate lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:doc-choices-do-not-match-spec 
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_aaa_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_aaa_server_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_acl.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_acl.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_acl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_acl_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_acl_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_acl_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_acl_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_acl_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_acl_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_banner.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_banner.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_banner.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_banner.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_banner.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_bfd_global.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bfd_global.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bfd_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_bfd_global.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_bgp.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_bgp_af.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_af.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_bgp_af.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_bgp_neighbor.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_bgp_neighbor_af.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_config.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_config.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_evpn_global.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_evpn_global.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_evpn_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_evpn_global.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_evpn_vni.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_evpn_vni.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_evpn_vni.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_facts.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_facts.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_facts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_facts.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_feature.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_feature.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_feature.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_feature.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_feature.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_feature.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_gir.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_gir.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_gir.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_gir_profile_management.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_hsrp.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_hsrp.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_hsrp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_hsrp.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_hsrp.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_hsrp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_igmp.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_igmp.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_igmp.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_igmp_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_igmp_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_igmp_snooping.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_install_os.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_install_os.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_install_os.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_install_os.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_install_os.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_interface_ospf.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_interface_ospf.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_interface_ospf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_interface_ospf.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_interface_ospf.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_interface_ospf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_lag_interfaces.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_lldp.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_lldp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_lldp.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_lldp.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_logging.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_logging.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_logging.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_ntp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_ntp.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_ntp.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_ntp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_ntp_auth.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ntp_auth.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ntp_auth.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_ntp_auth.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_ntp_auth.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_ntp_auth.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_ntp_options.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ntp_options.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ntp_options.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_ntp_options.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_ntp_options.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_ntp_options.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_nxapi.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_nxapi.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_nxapi.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_ospf.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ospf.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ospf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_ospf.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_ospf.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_ospf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_ospf_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_overlay_global.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_overlay_global.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_overlay_global.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_overlay_global.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_overlay_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_pim.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_pim.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_pim.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_pim.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_pim.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_pim_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_pim_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_pim_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_pim_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_pim_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_pim_rp_address.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_ping.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_ping.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_ping.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_ping.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_ping.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_ping.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_reboot.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_reboot.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_reboot.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_reboot.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_rollback.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_rollback.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_rollback.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_rollback.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_rollback.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_rpm.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_rpm.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_rpm.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/nxos_smu.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_smu.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_smu.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_smu.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_smu.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snapshot.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snapshot.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snapshot.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snapshot.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_snmp_community.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_community.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_community.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snmp_community.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snmp_community.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snmp_community.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_snmp_contact.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_contact.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_contact.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snmp_contact.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snmp_contact.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snmp_contact.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_snmp_host.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_host.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_host.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snmp_host.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snmp_host.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snmp_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_snmp_location.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_location.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_location.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snmp_location.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snmp_location.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snmp_location.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_snmp_traps.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_traps.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_traps.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snmp_traps.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snmp_traps.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snmp_user.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_user.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_snmp_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_snmp_user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_snmp_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_snmp_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_static_route.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_static_route.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_static_route.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/nxos_system.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_system.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_udld.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_udld.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_udld.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_udld.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_udld.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_udld.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_udld_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_udld_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_udld_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_udld_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_udld_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_udld_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_user.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_user.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_user.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/nxos_vpc.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vpc.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vpc.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vpc.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vpc.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vpc.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vpc_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vpc_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vpc_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vpc_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vpc_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vpc_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vrf.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vrf.py validate-modules:undocumented-parameter
lib/ansible/modules/network/nxos/nxos_vrf_af.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf_af.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf_af.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vrf_af.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vrf_af.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vrf_interface.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf_interface.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vrf_interface.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vrf_interface.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vrf_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vrf_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vrrp.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vrrp.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vrrp.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vrrp.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vrrp.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vrrp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vtp_domain.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_domain.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_domain.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vtp_domain.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vtp_domain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vtp_password.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_password.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_password.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vtp_password.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vtp_password.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vtp_password.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vtp_version.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_version.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vtp_version.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vtp_version.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vtp_version.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vxlan_vtep.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py future-import-boilerplate
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py metaclass-boilerplate
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:doc-missing-type
lib/ansible/modules/network/nxos/nxos_vxlan_vtep_vni.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_bgp.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_bgp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_buffer_pool.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_buffer_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_command.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/onyx/onyx_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_config.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_config.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/onyx/onyx_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_igmp.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_igmp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_igmp_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_igmp_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_igmp_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_interface.py validate-modules:undocumented-parameter
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:doc-missing-type
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:missing-suboption-docs
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/network/onyx/onyx_l2_interface.py validate-modules:undocumented-parameter lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:missing-suboption-docs lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/onyx/onyx_l3_interface.py validate-modules:undocumented-parameter lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:doc-missing-type lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:missing-suboption-docs lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/onyx/onyx_linkagg.py validate-modules:undocumented-parameter lib/ansible/modules/network/onyx/onyx_lldp.py validate-modules:doc-missing-type lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:missing-suboption-docs lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/onyx/onyx_lldp_interface.py validate-modules:undocumented-parameter lib/ansible/modules/network/onyx/onyx_magp.py validate-modules:doc-missing-type lib/ansible/modules/network/onyx/onyx_magp.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/onyx/onyx_mlag_ipl.py validate-modules:doc-missing-type lib/ansible/modules/network/onyx/onyx_mlag_vip.py 
validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/onyx/onyx_mlag_vip.py validate-modules:doc-missing-type lib/ansible/modules/network/onyx/onyx_mlag_vip.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/onyx/onyx_ospf.py validate-modules:doc-missing-type lib/ansible/modules/network/onyx/onyx_ospf.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:missing-suboption-docs lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/onyx/onyx_pfc_interface.py validate-modules:undocumented-parameter lib/ansible/modules/network/onyx/onyx_protocol.py validate-modules:doc-missing-type lib/ansible/modules/network/onyx/onyx_ptp_global.py validate-modules:doc-missing-type lib/ansible/modules/network/onyx/onyx_ptp_global.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/onyx/onyx_ptp_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/onyx/onyx_ptp_interface.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/onyx/onyx_qos.py validate-modules:doc-missing-type lib/ansible/modules/network/onyx/onyx_qos.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/onyx/onyx_traffic_class.py validate-modules:doc-missing-type lib/ansible/modules/network/onyx/onyx_traffic_class.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:doc-missing-type lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:missing-suboption-docs lib/ansible/modules/network/onyx/onyx_vlan.py 
validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/onyx/onyx_vlan.py validate-modules:undocumented-parameter lib/ansible/modules/network/onyx/onyx_vxlan.py validate-modules:missing-suboption-docs lib/ansible/modules/network/onyx/onyx_vxlan.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/onyx/onyx_vxlan.py validate-modules:undocumented-parameter lib/ansible/modules/network/opx/opx_cps.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:doc-missing-type lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/ordnance/ordnance_config.py validate-modules:undocumented-parameter lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:doc-missing-type lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/ordnance/ordnance_facts.py validate-modules:undocumented-parameter lib/ansible/modules/network/ovs/openvswitch_bridge.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/ovs/openvswitch_bridge.py validate-modules:doc-missing-type lib/ansible/modules/network/ovs/openvswitch_bridge.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/ovs/openvswitch_db.py validate-modules:doc-missing-type lib/ansible/modules/network/ovs/openvswitch_db.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/ovs/openvswitch_port.py validate-modules:doc-missing-type lib/ansible/modules/network/ovs/openvswitch_port.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/panos/_panos_admin.py future-import-boilerplate 
lib/ansible/modules/network/panos/_panos_admin.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_admin.py validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_admpwd.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_admpwd.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_admpwd.py validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_cert_gen_ssh.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_cert_gen_ssh.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_cert_gen_ssh.py validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_check.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_check.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_check.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/panos/_panos_commit.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_commit.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_commit.py validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_commit.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/panos/_panos_dag.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_dag.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_dag.py validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_dag_tags.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_dag_tags.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_dag_tags.py validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_dag_tags.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/panos/_panos_import.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_import.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_import.py 
validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_interface.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_interface.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_lic.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_lic.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_lic.py validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_loadcfg.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_loadcfg.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_loadcfg.py validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_match_rule.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_match_rule.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_match_rule.py validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_match_rule.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/panos/_panos_mgtconfig.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_mgtconfig.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_mgtconfig.py validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_nat_policy.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_nat_policy.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_nat_rule.py validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_nat_rule.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/panos/_panos_object.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_object.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_object.py validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_object.py 
validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/panos/_panos_op.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_op.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_op.py validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_pg.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_pg.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_pg.py validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_query_rules.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_query_rules.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_query_rules.py validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_restart.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_restart.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_sag.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_sag.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_sag.py validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_sag.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/panos/_panos_security_policy.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_security_policy.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_security_rule.py validate-modules:doc-missing-type lib/ansible/modules/network/panos/_panos_security_rule.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/panos/_panos_set.py future-import-boilerplate lib/ansible/modules/network/panos/_panos_set.py metaclass-boilerplate lib/ansible/modules/network/panos/_panos_set.py validate-modules:doc-missing-type lib/ansible/modules/network/radware/vdirect_commit.py validate-modules:doc-missing-type lib/ansible/modules/network/radware/vdirect_commit.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/network/radware/vdirect_file.py validate-modules:doc-missing-type lib/ansible/modules/network/radware/vdirect_file.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/radware/vdirect_runnable.py validate-modules:doc-missing-type lib/ansible/modules/network/radware/vdirect_runnable.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/restconf/restconf_config.py validate-modules:doc-missing-type lib/ansible/modules/network/restconf/restconf_get.py validate-modules:doc-missing-type lib/ansible/modules/network/routeros/routeros_command.py validate-modules:doc-missing-type lib/ansible/modules/network/routeros/routeros_command.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/routeros/routeros_facts.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:doc-missing-type lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/skydive/skydive_capture.py validate-modules:undocumented-parameter lib/ansible/modules/network/skydive/skydive_edge.py validate-modules:doc-missing-type lib/ansible/modules/network/skydive/skydive_edge.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/network/skydive/skydive_edge.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/skydive/skydive_edge.py validate-modules:undocumented-parameter lib/ansible/modules/network/skydive/skydive_node.py validate-modules:doc-missing-type lib/ansible/modules/network/skydive/skydive_node.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/network/skydive/skydive_node.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/skydive/skydive_node.py validate-modules:undocumented-parameter 
lib/ansible/modules/network/slxos/slxos_command.py validate-modules:doc-missing-type lib/ansible/modules/network/slxos/slxos_command.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/slxos/slxos_config.py validate-modules:doc-missing-type lib/ansible/modules/network/slxos/slxos_config.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/slxos/slxos_facts.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:missing-suboption-docs lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/slxos/slxos_interface.py validate-modules:undocumented-parameter lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:missing-suboption-docs lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/slxos/slxos_l2_interface.py validate-modules:undocumented-parameter lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:missing-suboption-docs lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/slxos/slxos_l3_interface.py validate-modules:undocumented-parameter lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:doc-choices-do-not-match-spec 
lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:doc-missing-type lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:missing-suboption-docs lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/slxos/slxos_linkagg.py validate-modules:undocumented-parameter lib/ansible/modules/network/slxos/slxos_lldp.py validate-modules:doc-missing-type lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:doc-missing-type lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:missing-suboption-docs lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/slxos/slxos_vlan.py validate-modules:undocumented-parameter lib/ansible/modules/network/sros/sros_command.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/sros/sros_command.py validate-modules:doc-missing-type lib/ansible/modules/network/sros/sros_command.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/sros/sros_config.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/sros/sros_config.py validate-modules:doc-missing-type lib/ansible/modules/network/sros/sros_config.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/network/sros/sros_config.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/sros/sros_rollback.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/sros/sros_rollback.py validate-modules:doc-missing-type lib/ansible/modules/network/sros/sros_rollback.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/voss/voss_command.py validate-modules:doc-missing-type lib/ansible/modules/network/voss/voss_command.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/network/voss/voss_config.py validate-modules:doc-missing-type lib/ansible/modules/network/voss/voss_config.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/voss/voss_facts.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/vyos/_vyos_interface.py future-import-boilerplate lib/ansible/modules/network/vyos/_vyos_interface.py metaclass-boilerplate lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:missing-suboption-docs lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/vyos/_vyos_interface.py validate-modules:undocumented-parameter lib/ansible/modules/network/vyos/_vyos_l3_interface.py future-import-boilerplate lib/ansible/modules/network/vyos/_vyos_l3_interface.py metaclass-boilerplate lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:missing-suboption-docs lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/vyos/_vyos_l3_interface.py validate-modules:undocumented-parameter lib/ansible/modules/network/vyos/_vyos_linkagg.py future-import-boilerplate lib/ansible/modules/network/vyos/_vyos_linkagg.py metaclass-boilerplate lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:doc-choices-do-not-match-spec 
lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:doc-missing-type lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:missing-suboption-docs lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/vyos/_vyos_linkagg.py validate-modules:undocumented-parameter lib/ansible/modules/network/vyos/_vyos_lldp.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/vyos/_vyos_lldp_interface.py future-import-boilerplate lib/ansible/modules/network/vyos/_vyos_lldp_interface.py metaclass-boilerplate lib/ansible/modules/network/vyos/_vyos_lldp_interface.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/vyos/_vyos_lldp_interface.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/vyos/_vyos_lldp_interface.py validate-modules:doc-missing-type lib/ansible/modules/network/vyos/_vyos_lldp_interface.py validate-modules:missing-suboption-docs lib/ansible/modules/network/vyos/_vyos_lldp_interface.py validate-modules:undocumented-parameter lib/ansible/modules/network/vyos/vyos_banner.py future-import-boilerplate lib/ansible/modules/network/vyos/vyos_banner.py metaclass-boilerplate lib/ansible/modules/network/vyos/vyos_banner.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/vyos/vyos_banner.py validate-modules:doc-missing-type lib/ansible/modules/network/vyos/vyos_command.py future-import-boilerplate lib/ansible/modules/network/vyos/vyos_command.py metaclass-boilerplate lib/ansible/modules/network/vyos/vyos_command.py pylint:blacklisted-name lib/ansible/modules/network/vyos/vyos_command.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/vyos/vyos_command.py validate-modules:doc-missing-type lib/ansible/modules/network/vyos/vyos_command.py 
validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/vyos/vyos_config.py future-import-boilerplate lib/ansible/modules/network/vyos/vyos_config.py metaclass-boilerplate lib/ansible/modules/network/vyos/vyos_config.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/vyos/vyos_config.py validate-modules:doc-missing-type lib/ansible/modules/network/vyos/vyos_config.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/vyos/vyos_facts.py future-import-boilerplate lib/ansible/modules/network/vyos/vyos_facts.py metaclass-boilerplate lib/ansible/modules/network/vyos/vyos_facts.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/vyos/vyos_facts.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/vyos/vyos_logging.py future-import-boilerplate lib/ansible/modules/network/vyos/vyos_logging.py metaclass-boilerplate lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:doc-missing-type lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:missing-suboption-docs lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/vyos/vyos_logging.py validate-modules:undocumented-parameter lib/ansible/modules/network/vyos/vyos_ping.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/vyos/vyos_ping.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/vyos/vyos_static_route.py future-import-boilerplate lib/ansible/modules/network/vyos/vyos_static_route.py metaclass-boilerplate lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/vyos/vyos_static_route.py 
validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:doc-missing-type lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:missing-suboption-docs lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/vyos/vyos_static_route.py validate-modules:undocumented-parameter lib/ansible/modules/network/vyos/vyos_system.py future-import-boilerplate lib/ansible/modules/network/vyos/vyos_system.py metaclass-boilerplate lib/ansible/modules/network/vyos/vyos_system.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/vyos/vyos_system.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/vyos/vyos_user.py future-import-boilerplate lib/ansible/modules/network/vyos/vyos_user.py metaclass-boilerplate lib/ansible/modules/network/vyos/vyos_user.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/vyos/vyos_user.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/vyos/vyos_user.py validate-modules:doc-missing-type lib/ansible/modules/network/vyos/vyos_user.py validate-modules:missing-suboption-docs lib/ansible/modules/network/vyos/vyos_user.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/vyos/vyos_user.py validate-modules:undocumented-parameter lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:doc-missing-type lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:missing-suboption-docs lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/network/vyos/vyos_vlan.py validate-modules:undocumented-parameter lib/ansible/modules/notification/bearychat.py 
validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/campfire.py validate-modules:doc-missing-type lib/ansible/modules/notification/catapult.py validate-modules:doc-missing-type lib/ansible/modules/notification/catapult.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/cisco_spark.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/notification/cisco_spark.py validate-modules:doc-missing-type lib/ansible/modules/notification/cisco_spark.py validate-modules:undocumented-parameter lib/ansible/modules/notification/flowdock.py validate-modules:doc-missing-type lib/ansible/modules/notification/grove.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/hipchat.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/notification/hipchat.py validate-modules:doc-missing-type lib/ansible/modules/notification/hipchat.py validate-modules:undocumented-parameter lib/ansible/modules/notification/irc.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/notification/irc.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/notification/irc.py validate-modules:doc-missing-type lib/ansible/modules/notification/irc.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/irc.py validate-modules:undocumented-parameter lib/ansible/modules/notification/jabber.py validate-modules:doc-missing-type lib/ansible/modules/notification/jabber.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/logentries_msg.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/mail.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/notification/mail.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/mail.py validate-modules:undocumented-parameter lib/ansible/modules/notification/matrix.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/notification/mattermost.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/mqtt.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/notification/mqtt.py validate-modules:doc-missing-type lib/ansible/modules/notification/mqtt.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/nexmo.py validate-modules:doc-missing-type lib/ansible/modules/notification/nexmo.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/office_365_connector_card.py validate-modules:doc-missing-type lib/ansible/modules/notification/office_365_connector_card.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/pushbullet.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/pushbullet.py validate-modules:undocumented-parameter lib/ansible/modules/notification/pushover.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/notification/pushover.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/notification/pushover.py validate-modules:doc-missing-type lib/ansible/modules/notification/pushover.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/rabbitmq_publish.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/rocketchat.py validate-modules:no-default-for-required-parameter lib/ansible/modules/notification/rocketchat.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/say.py validate-modules:doc-missing-type lib/ansible/modules/notification/sendgrid.py validate-modules:doc-missing-type lib/ansible/modules/notification/sendgrid.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/sendgrid.py validate-modules:undocumented-parameter lib/ansible/modules/notification/slack.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/notification/slack.py 
validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/syslogger.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/telegram.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/twilio.py validate-modules:doc-missing-type lib/ansible/modules/notification/twilio.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/notification/typetalk.py validate-modules:doc-missing-type lib/ansible/modules/notification/typetalk.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/language/bower.py validate-modules:doc-missing-type lib/ansible/modules/packaging/language/bower.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/language/bundler.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/packaging/language/bundler.py validate-modules:doc-missing-type lib/ansible/modules/packaging/language/bundler.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/language/composer.py validate-modules:parameter-invalid lib/ansible/modules/packaging/language/composer.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/language/cpanm.py validate-modules:doc-missing-type lib/ansible/modules/packaging/language/cpanm.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/language/easy_install.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/packaging/language/easy_install.py validate-modules:doc-missing-type lib/ansible/modules/packaging/language/easy_install.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/language/gem.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/language/maven_artifact.py validate-modules:doc-missing-type lib/ansible/modules/packaging/language/maven_artifact.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/language/pear.py 
validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/packaging/language/pear.py validate-modules:doc-missing-type lib/ansible/modules/packaging/language/pear.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/language/pear.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/language/pip.py pylint:blacklisted-name lib/ansible/modules/packaging/language/yarn.py validate-modules:doc-missing-type lib/ansible/modules/packaging/language/yarn.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/apk.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/packaging/os/apk.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/apk.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/apt.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/packaging/os/apt.py validate-modules:parameter-invalid lib/ansible/modules/packaging/os/apt.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/apt.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/os/apt_key.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/apt_key.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/os/apt_repo.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/apt_repository.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/packaging/os/apt_repository.py validate-modules:parameter-invalid lib/ansible/modules/packaging/os/apt_repository.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/apt_repository.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/packaging/os/apt_rpm.py 
validate-modules:parameter-invalid lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/apt_rpm.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/os/dnf.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/dnf.py validate-modules:parameter-invalid lib/ansible/modules/packaging/os/dnf.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/dpkg_selections.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/flatpak.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/flatpak.py validate-modules:use-run-command-not-popen lib/ansible/modules/packaging/os/flatpak_remote.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/flatpak_remote.py validate-modules:use-run-command-not-popen lib/ansible/modules/packaging/os/homebrew.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/packaging/os/homebrew.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/homebrew.py validate-modules:parameter-invalid lib/ansible/modules/packaging/os/homebrew.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/homebrew_cask.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/packaging/os/homebrew_cask.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/homebrew_cask.py validate-modules:parameter-invalid lib/ansible/modules/packaging/os/homebrew_cask.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/homebrew_tap.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/homebrew_tap.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/layman.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/layman.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/os/macports.py 
validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/packaging/os/macports.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/macports.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/openbsd_pkg.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/packaging/os/openbsd_pkg.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/opkg.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/packaging/os/opkg.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/packaging/os/opkg.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/opkg.py validate-modules:parameter-invalid lib/ansible/modules/packaging/os/opkg.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/os/package_facts.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/packaging/os/package_facts.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/package_facts.py validate-modules:return-syntax-error lib/ansible/modules/packaging/os/pacman.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/packaging/os/pacman.py validate-modules:parameter-invalid lib/ansible/modules/packaging/os/pacman.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/pkg5.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/packaging/os/pkg5.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/pkg5_publisher.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/pkg5_publisher.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/pkgin.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/pkgin.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/pkgin.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/os/pkgng.py 
validate-modules:doc-missing-type lib/ansible/modules/packaging/os/pkgng.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/pkgng.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/os/pkgutil.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/portage.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/portage.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/portage.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/os/portinstall.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/portinstall.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/os/pulp_repo.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/packaging/os/pulp_repo.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/pulp_repo.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/os/redhat_subscription.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/redhat_subscription.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/redhat_subscription.py validate-modules:return-syntax-error lib/ansible/modules/packaging/os/rhn_channel.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/packaging/os/rhn_channel.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/rhn_channel.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/os/rhsm_release.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/rhsm_repository.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/packaging/os/rhsm_repository.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/rhsm_repository.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/rpm_key.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/packaging/os/slackpkg.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/packaging/os/slackpkg.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/packaging/os/slackpkg.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/slackpkg.py validate-modules:parameter-invalid lib/ansible/modules/packaging/os/slackpkg.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/slackpkg.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/os/snap.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/sorcery.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/sorcery.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/svr4pkg.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/swdepot.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/swdepot.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/os/swupd.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/urpmi.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/packaging/os/urpmi.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/packaging/os/urpmi.py validate-modules:parameter-invalid lib/ansible/modules/packaging/os/urpmi.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/urpmi.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/os/xbps.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/packaging/os/xbps.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/xbps.py validate-modules:parameter-invalid lib/ansible/modules/packaging/os/xbps.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/xbps.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/os/yum.py pylint:blacklisted-name 
lib/ansible/modules/packaging/os/yum.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/packaging/os/yum.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/yum.py validate-modules:parameter-invalid lib/ansible/modules/packaging/os/yum.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/yum.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/os/yum_repository.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/packaging/os/yum_repository.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/yum_repository.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/yum_repository.py validate-modules:undocumented-parameter lib/ansible/modules/packaging/os/zypper.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/packaging/os/zypper.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/zypper.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/packaging/os/zypper_repository.py validate-modules:doc-missing-type lib/ansible/modules/packaging/os/zypper_repository.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/cobbler/cobbler_sync.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/cobbler/cobbler_system.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/cpm/cpm_plugconfig.py validate-modules:doc-missing-type lib/ansible/modules/remote_management/cpm/cpm_plugconfig.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/cpm/cpm_plugcontrol.py validate-modules:doc-missing-type lib/ansible/modules/remote_management/cpm/cpm_plugcontrol.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/cpm/cpm_serial_port_config.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/cpm/cpm_serial_port_info.py 
validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/cpm/cpm_user.py validate-modules:doc-missing-type lib/ansible/modules/remote_management/cpm/cpm_user.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/dellemc/idrac_server_config_profile.py validate-modules:doc-missing-type lib/ansible/modules/remote_management/dellemc/idrac_server_config_profile.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/foreman/_foreman.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/foreman/_katello.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/hpilo/hpilo_boot.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/remote_management/hpilo/hpilo_boot.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/hpilo/hpilo_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/hpilo/hponcfg.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/imc/imc_rest.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/intersight/intersight_rest_api.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/ipmi/ipmi_boot.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/remote_management/ipmi/ipmi_boot.py validate-modules:doc-missing-type lib/ansible/modules/remote_management/ipmi/ipmi_boot.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/ipmi/ipmi_power.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/remote_management/ipmi/ipmi_power.py validate-modules:doc-missing-type lib/ansible/modules/remote_management/ipmi/ipmi_power.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/lxca/lxca_cmms.py validate-modules:doc-missing-type 
lib/ansible/modules/remote_management/lxca/lxca_nodes.py validate-modules:doc-missing-type lib/ansible/modules/remote_management/manageiq/manageiq_alert_profiles.py validate-modules:doc-missing-type lib/ansible/modules/remote_management/manageiq/manageiq_alert_profiles.py validate-modules:implied-parameter-type-mismatch lib/ansible/modules/remote_management/manageiq/manageiq_alert_profiles.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/manageiq/manageiq_alerts.py validate-modules:doc-missing-type lib/ansible/modules/remote_management/manageiq/manageiq_alerts.py validate-modules:implied-parameter-type-mismatch lib/ansible/modules/remote_management/manageiq/manageiq_alerts.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/manageiq/manageiq_group.py validate-modules:doc-missing-type lib/ansible/modules/remote_management/manageiq/manageiq_group.py validate-modules:implied-parameter-type-mismatch lib/ansible/modules/remote_management/manageiq/manageiq_group.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/manageiq/manageiq_policies.py validate-modules:implied-parameter-type-mismatch lib/ansible/modules/remote_management/manageiq/manageiq_policies.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:doc-missing-type lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:implied-parameter-type-mismatch lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/manageiq/manageiq_provider.py validate-modules:undocumented-parameter 
lib/ansible/modules/remote_management/manageiq/manageiq_tags.py validate-modules:implied-parameter-type-mismatch lib/ansible/modules/remote_management/manageiq/manageiq_tags.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/manageiq/manageiq_tenant.py validate-modules:doc-missing-type lib/ansible/modules/remote_management/manageiq/manageiq_tenant.py validate-modules:implied-parameter-type-mismatch lib/ansible/modules/remote_management/manageiq/manageiq_tenant.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/manageiq/manageiq_user.py validate-modules:doc-missing-type lib/ansible/modules/remote_management/manageiq/manageiq_user.py validate-modules:implied-parameter-type-mismatch lib/ansible/modules/remote_management/manageiq/manageiq_user.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/oneview/oneview_datacenter_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/oneview/oneview_datacenter_info.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/oneview/oneview_enclosure_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/oneview/oneview_enclosure_info.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/oneview/oneview_ethernet_network.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/oneview/oneview_ethernet_network.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/oneview/oneview_ethernet_network_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/oneview/oneview_ethernet_network_info.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/oneview/oneview_fc_network.py validate-modules:doc-missing-type lib/ansible/modules/remote_management/oneview/oneview_fc_network.py 
validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/oneview/oneview_fc_network.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/oneview/oneview_fc_network_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/oneview/oneview_fc_network_info.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/oneview/oneview_fcoe_network.py validate-modules:doc-missing-type lib/ansible/modules/remote_management/oneview/oneview_fcoe_network.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/oneview/oneview_fcoe_network.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/oneview/oneview_fcoe_network_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/oneview/oneview_fcoe_network_info.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group.py validate-modules:doc-missing-type lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/oneview/oneview_logical_interconnect_group_info.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/oneview/oneview_network_set.py validate-modules:doc-missing-type lib/ansible/modules/remote_management/oneview/oneview_network_set.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/oneview/oneview_network_set.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/oneview/oneview_network_set_info.py 
validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/oneview/oneview_network_set_info.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/oneview/oneview_san_manager.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/oneview/oneview_san_manager.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/oneview/oneview_san_manager_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/oneview/oneview_san_manager_info.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:no-default-for-required-parameter lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/stacki/stacki_host.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/remote_management/ucs/ucs_disk_group_policy.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/ucs/ucs_ip_pool.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/ucs/ucs_lan_connectivity.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/ucs/ucs_mac_pool.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/ucs/ucs_managed_objects.py 
validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/ucs/ucs_managed_objects.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/ucs/ucs_ntp_server.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/ucs/ucs_san_connectivity.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/remote_management/ucs/ucs_san_connectivity.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/ucs/ucs_san_connectivity.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/ucs/ucs_service_profile_template.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/ucs/ucs_storage_profile.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/remote_management/ucs/ucs_storage_profile.py validate-modules:doc-type-does-not-match-spec lib/ansible/modules/remote_management/ucs/ucs_storage_profile.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/ucs/ucs_timezone.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/ucs/ucs_uuid_pool.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/ucs/ucs_vhba_template.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/remote_management/ucs/ucs_vhba_template.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/ucs/ucs_vhba_template.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/ucs/ucs_vlans.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/ucs/ucs_vnic_template.py validate-modules:doc-choices-do-not-match-spec lib/ansible/modules/remote_management/ucs/ucs_vnic_template.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/ucs/ucs_vsans.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/remote_management/ucs/ucs_vsans.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/ucs/ucs_wwn_pool.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/remote_management/ucs/ucs_wwn_pool.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/remote_management/ucs/ucs_wwn_pool.py validate-modules:undocumented-parameter lib/ansible/modules/remote_management/wakeonlan.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/source_control/bzr.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/source_control/git.py pylint:blacklisted-name lib/ansible/modules/source_control/git.py use-argspec-type-path lib/ansible/modules/source_control/git.py validate-modules:doc-missing-type lib/ansible/modules/source_control/git.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/source_control/git_config.py validate-modules:doc-missing-type lib/ansible/modules/source_control/git_config.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/source_control/github/_github_hooks.py validate-modules:doc-missing-type lib/ansible/modules/source_control/github/github_deploy_key.py validate-modules:doc-missing-type lib/ansible/modules/source_control/github/github_deploy_key.py validate-modules:parameter-invalid lib/ansible/modules/source_control/github/github_deploy_key.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/source_control/github/github_issue.py validate-modules:doc-missing-type lib/ansible/modules/source_control/github/github_issue.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/source_control/github/github_key.py validate-modules:doc-missing-type lib/ansible/modules/source_control/github/github_release.py validate-modules:doc-missing-type lib/ansible/modules/source_control/github/github_release.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/source_control/github/github_webhook.py 
validate-modules:parameter-type-not-in-doc lib/ansible/modules/source_control/github/github_webhook_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/source_control/hg.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/source_control/subversion.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/source_control/subversion.py validate-modules:undocumented-parameter lib/ansible/modules/storage/emc/emc_vnx_sg_member.py validate-modules:doc-missing-type lib/ansible/modules/storage/emc/emc_vnx_sg_member.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/glusterfs/gluster_heal_info.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/glusterfs/gluster_peer.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/glusterfs/gluster_volume.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/ibm/ibm_sa_domain.py validate-modules:doc-missing-type lib/ansible/modules/storage/ibm/ibm_sa_host.py validate-modules:doc-missing-type lib/ansible/modules/storage/ibm/ibm_sa_host_ports.py validate-modules:doc-missing-type lib/ansible/modules/storage/ibm/ibm_sa_pool.py validate-modules:doc-missing-type lib/ansible/modules/storage/ibm/ibm_sa_vol.py validate-modules:doc-missing-type lib/ansible/modules/storage/ibm/ibm_sa_vol_map.py validate-modules:doc-missing-type lib/ansible/modules/storage/infinidat/infini_export.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/storage/infinidat/infini_export.py validate-modules:doc-missing-type lib/ansible/modules/storage/infinidat/infini_export.py validate-modules:nonexistent-parameter-documented lib/ansible/modules/storage/infinidat/infini_export.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/infinidat/infini_export_client.py validate-modules:doc-missing-type lib/ansible/modules/storage/infinidat/infini_export_client.py validate-modules:nonexistent-parameter-documented 
lib/ansible/modules/storage/infinidat/infini_fs.py validate-modules:doc-missing-type lib/ansible/modules/storage/infinidat/infini_host.py validate-modules:doc-missing-type lib/ansible/modules/storage/infinidat/infini_host.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/infinidat/infini_pool.py validate-modules:doc-missing-type lib/ansible/modules/storage/infinidat/infini_vol.py validate-modules:doc-missing-type lib/ansible/modules/storage/netapp/_na_cdot_aggregate.py validate-modules:doc-missing-type lib/ansible/modules/storage/netapp/_na_cdot_aggregate.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/netapp/_na_cdot_license.py validate-modules:incompatible-default-type lib/ansible/modules/storage/netapp/_na_cdot_license.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/netapp/_na_cdot_lun.py validate-modules:doc-missing-type lib/ansible/modules/storage/netapp/_na_cdot_lun.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/netapp/_na_cdot_qtree.py validate-modules:doc-missing-type lib/ansible/modules/storage/netapp/_na_cdot_qtree.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/netapp/_na_cdot_svm.py validate-modules:doc-missing-type lib/ansible/modules/storage/netapp/_na_cdot_svm.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/netapp/_na_cdot_user.py validate-modules:doc-missing-type lib/ansible/modules/storage/netapp/_na_cdot_user.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/netapp/_na_cdot_user_role.py validate-modules:doc-missing-type lib/ansible/modules/storage/netapp/_na_cdot_user_role.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/netapp/_na_cdot_volume.py validate-modules:doc-default-does-not-match-spec lib/ansible/modules/storage/netapp/_na_cdot_volume.py validate-modules:doc-missing-type lib/ansible/modules/storage/netapp/_na_cdot_volume.py 
validate-modules:no-default-for-required-parameter lib/ansible/modules/storage/netapp/_na_cdot_volume.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/netapp/_na_cdot_volume.py validate-modules:undocumented-parameter lib/ansible/modules/storage/netapp/_na_ontap_gather_facts.py validate-modules:doc-missing-type lib/ansible/modules/storage/netapp/_na_ontap_gather_facts.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/netapp/_sf_account_manager.py validate-modules:doc-missing-type lib/ansible/modules/storage/netapp/_sf_account_manager.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/netapp/_sf_check_connections.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/netapp/_sf_snapshot_schedule_manager.py validate-modules:doc-missing-type lib/ansible/modules/storage/netapp/_sf_snapshot_schedule_manager.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/netapp/_sf_volume_access_group_manager.py validate-modules:doc-missing-type lib/ansible/modules/storage/netapp/_sf_volume_access_group_manager.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/netapp/_sf_volume_manager.py validate-modules:doc-missing-type lib/ansible/modules/storage/netapp/_sf_volume_manager.py validate-modules:parameter-invalid lib/ansible/modules/storage/netapp/_sf_volume_manager.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/netapp/_sf_volume_manager.py validate-modules:undocumented-parameter lib/ansible/modules/storage/netapp/na_elementsw_access_group.py validate-modules:doc-missing-type lib/ansible/modules/storage/netapp/na_elementsw_access_group.py validate-modules:parameter-type-not-in-doc lib/ansible/modules/storage/netapp/na_elementsw_account.py validate-modules:doc-missing-type lib/ansible/modules/storage/netapp/na_elementsw_account.py validate-modules:parameter-type-not-in-doc 
lib/ansible/modules/storage/netapp/na_elementsw_admin_users.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_admin_users.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_backup.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_backup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_check_connections.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_cluster_config.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_cluster_pair.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_cluster_pair.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_cluster_snmp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_drive.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_drive.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_initiators.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_initiators.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_initiators.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/na_elementsw_ldap.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_network_interfaces.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_node.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_node.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_snapshot_restore.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_snapshot_schedule.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_snapshot_schedule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_volume.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_volume.py validate-modules:parameter-invalid
lib/ansible/modules/storage/netapp/na_elementsw_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_volume_clone.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_volume_clone.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_elementsw_volume_pair.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_elementsw_volume_pair.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_aggregate.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_aggregate.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_autosupport.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_autosupport.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain_ports.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_broadcast_domain_ports.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_cg_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_cg_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_cifs.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_cifs_acl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_cifs_server.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_cifs_server.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_cluster.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_cluster.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_cluster_ha.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_cluster_peer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_command.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_disks.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_dns.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_dns.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_export_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_export_policy_rule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_fcp.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_fcp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_firewall_policy.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_firewall_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_flexcache.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_igroup.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_igroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_igroup_initiator.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_igroup_initiator.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_interface.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_ipspace.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_ipspace.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_iscsi.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_iscsi.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_job_schedule.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_job_schedule.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_license.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_license.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_lun.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_lun.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_lun_copy.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_lun_copy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_lun_map.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_lun_map.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_motd.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_net_ifgrp.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_net_ifgrp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_net_port.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_net_port.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_net_routes.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_net_routes.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_net_subnet.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_net_subnet.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_net_vlan.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_net_vlan.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_nfs.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_nfs.py validate-modules:parameter-invalid
lib/ansible/modules/storage/netapp/na_ontap_nfs.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_node.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_ntp.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_ntp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_nvme.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_nvme_namespace.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_nvme_subsystem.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_portset.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_portset.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_qos_policy_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_qtree.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_security_key_manager.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_security_key_manager.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_service_processor_network.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_service_processor_network.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_snapmirror.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_snapshot.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_snapshot.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_snapshot_policy.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_snapshot_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_snmp.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_software_update.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_svm.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_svm.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_svm_options.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_ucadapter.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_ucadapter.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_unix_group.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_unix_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_unix_user.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_unix_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_user.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_user_role.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_user_role.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_volume.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_volume_clone.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_vscan_on_access_policy.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_vscan_on_access_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_vscan_on_demand_task.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_vscan_on_demand_task.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_vscan_scanner_pool.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/na_ontap_vscan_scanner_pool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/na_ontap_vserver_peer.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_alerts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_amg.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_amg.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_amg.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/netapp_e_amg_role.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_amg_role.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_amg_role.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/netapp_e_amg_sync.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_asup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_auditlog.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_auth.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_auth.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_facts.py validate-modules:return-syntax-error
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_flashcache.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/netapp_e_global.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_hostgroup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_iscsi_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_iscsi_target.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_ldap.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_lun_mapping.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_lun_mapping.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_mgmt_interface.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_snapshot_group.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_snapshot_group.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_snapshot_group.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_snapshot_group.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/netapp_e_snapshot_images.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_snapshot_images.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_snapshot_images.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/netapp_e_snapshot_volume.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_snapshot_volume.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_snapshot_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_storage_system.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/netapp/netapp_e_storagepool.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_storagepool.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_syslog.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_syslog.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_volume.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_volume.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/storage/netapp/netapp_e_volume.py validate-modules:doc-missing-type
lib/ansible/modules/storage/netapp/netapp_e_volume.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:implied-parameter-type-mismatch
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/netapp/netapp_e_volume_copy.py validate-modules:undocumented-parameter
lib/ansible/modules/storage/purestorage/_purefa_facts.py validate-modules:return-syntax-error
lib/ansible/modules/storage/purestorage/_purefb_facts.py validate-modules:return-syntax-error
lib/ansible/modules/storage/purestorage/purefa_dsrole.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/purestorage/purefa_info.py validate-modules:return-syntax-error
lib/ansible/modules/storage/purestorage/purefa_pgsnap.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/purestorage/purefb_fs.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/storage/purestorage/purefb_info.py validate-modules:return-syntax-error
lib/ansible/modules/storage/zfs/zfs.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/zfs/zfs_delegate_admin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/zfs/zfs_facts.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/storage/zfs/zfs_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/storage/zfs/zpool_facts.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/storage/zfs/zpool_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/alternatives.py pylint:blacklisted-name
lib/ansible/modules/system/authorized_key.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/beadm.py pylint:blacklisted-name
lib/ansible/modules/system/cronvar.py pylint:blacklisted-name
lib/ansible/modules/system/dconf.py pylint:blacklisted-name
lib/ansible/modules/system/dconf.py validate-modules:doc-missing-type
lib/ansible/modules/system/dconf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/filesystem.py pylint:blacklisted-name
lib/ansible/modules/system/filesystem.py validate-modules:doc-missing-type
lib/ansible/modules/system/gconftool2.py pylint:blacklisted-name
lib/ansible/modules/system/gconftool2.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/getent.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/hostname.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/interfaces_file.py pylint:blacklisted-name
lib/ansible/modules/system/interfaces_file.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/iptables.py pylint:blacklisted-name
lib/ansible/modules/system/java_cert.py pylint:blacklisted-name
lib/ansible/modules/system/java_keystore.py validate-modules:doc-missing-type
lib/ansible/modules/system/kernel_blacklist.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/known_hosts.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/system/known_hosts.py validate-modules:doc-missing-type
lib/ansible/modules/system/known_hosts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/locale_gen.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/lvg.py pylint:blacklisted-name
lib/ansible/modules/system/lvol.py pylint:blacklisted-name
lib/ansible/modules/system/lvol.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/mksysb.py validate-modules:doc-missing-type
lib/ansible/modules/system/modprobe.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/nosh.py validate-modules:doc-missing-type
lib/ansible/modules/system/nosh.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/nosh.py validate-modules:return-syntax-error
lib/ansible/modules/system/openwrt_init.py validate-modules:doc-missing-type
lib/ansible/modules/system/openwrt_init.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/pam_limits.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/parted.py pylint:blacklisted-name
lib/ansible/modules/system/puppet.py use-argspec-type-path
lib/ansible/modules/system/puppet.py validate-modules:parameter-invalid
lib/ansible/modules/system/puppet.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/puppet.py validate-modules:undocumented-parameter
lib/ansible/modules/system/python_requirements_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/runit.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/system/runit.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/runit.py validate-modules:undocumented-parameter
lib/ansible/modules/system/seboolean.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/selinux.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/selogin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/service.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/system/service.py validate-modules:use-run-command-not-popen
lib/ansible/modules/system/setup.py validate-modules:doc-missing-type
lib/ansible/modules/system/setup.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/sysctl.py validate-modules:doc-missing-type
lib/ansible/modules/system/sysctl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/syspatch.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/systemd.py validate-modules:parameter-invalid
lib/ansible/modules/system/systemd.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/systemd.py validate-modules:return-syntax-error
lib/ansible/modules/system/sysvinit.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/system/sysvinit.py validate-modules:return-syntax-error
lib/ansible/modules/system/timezone.py pylint:blacklisted-name
lib/ansible/modules/system/user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/system/user.py validate-modules:doc-default-incompatible-type
lib/ansible/modules/system/user.py validate-modules:use-run-command-not-popen
lib/ansible/modules/system/xfconf.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/utilities/logic/async_status.py use-argspec-type-path
lib/ansible/modules/utilities/logic/async_status.py validate-modules!skip
lib/ansible/modules/utilities/logic/async_wrapper.py ansible-doc!skip # not an actual module
lib/ansible/modules/utilities/logic/async_wrapper.py use-argspec-type-path
lib/ansible/modules/web_infrastructure/_nginx_status_facts.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/_nginx_status_facts.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_credential.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/web_infrastructure/ansible_tower/tower_credential_type.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_credential_type.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_group.py use-argspec-type-path
lib/ansible/modules/web_infrastructure/ansible_tower/tower_group.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/web_infrastructure/ansible_tower/tower_group.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_host.py use-argspec-type-path
lib/ansible/modules/web_infrastructure/ansible_tower/tower_host.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_inventory.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_inventory_source.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_inventory_source.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_cancel.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_launch.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_launch.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_launch.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_list.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_list.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_template.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_template.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_template.py validate-modules:undocumented-parameter
lib/ansible/modules/web_infrastructure/ansible_tower/tower_job_wait.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_label.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_notification.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_notification.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_organization.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_project.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_project.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_receive.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_role.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_send.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_send.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_settings.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_team.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_team.py validate-modules:undocumented-parameter
lib/ansible/modules/web_infrastructure/ansible_tower/tower_user.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_workflow_launch.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ansible_tower/tower_workflow_launch.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/ansible_tower/tower_workflow_template.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/apache2_mod_proxy.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/web_infrastructure/apache2_mod_proxy.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/web_infrastructure/apache2_mod_proxy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/apache2_module.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/apache2_module.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/deploy_helper.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/deploy_helper.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:no-default-for-required-parameter
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/django_manage.py validate-modules:undocumented-parameter
lib/ansible/modules/web_infrastructure/ejabberd_user.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/ejabberd_user.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/gunicorn.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/gunicorn.py validate-modules:undocumented-parameter
lib/ansible/modules/web_infrastructure/htpasswd.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/web_infrastructure/htpasswd.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/jenkins_job.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/jenkins_job_info.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/jenkins_plugin.py use-argspec-type-path
lib/ansible/modules/web_infrastructure/jenkins_plugin.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/jenkins_plugin.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/jenkins_plugin.py validate-modules:undocumented-parameter
lib/ansible/modules/web_infrastructure/jenkins_script.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/jira.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/jira.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/jira.py validate-modules:undocumented-parameter
lib/ansible/modules/web_infrastructure/rundeck_acl_policy.py pylint:blacklisted-name
lib/ansible/modules/web_infrastructure/rundeck_acl_policy.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/rundeck_project.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_aaa_group_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_ca_host_key_cert.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_ca_host_key_cert_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_dns_host.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_network_interface_address.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_network_interface_address_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_auth_profile.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_exception.py validate-modules:return-syntax-error
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_frontend.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_frontend_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_location.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/sophos_utm/utm_proxy_location_info.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/supervisorctl.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/supervisorctl.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/web_infrastructure/taiga_issue.py validate-modules:doc-missing-type
lib/ansible/modules/web_infrastructure/taiga_issue.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/windows/_win_msi.py future-import-boilerplate
lib/ansible/modules/windows/_win_msi.py metaclass-boilerplate
lib/ansible/modules/windows/async_status.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/setup.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_acl_inheritance.ps1 pslint:PSAvoidTrailingWhitespace
lib/ansible/modules/windows/win_audit_rule.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_certificate_store.ps1 validate-modules:parameter-type-not-in-doc
lib/ansible/modules/windows/win_chocolatey_config.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_chocolatey_facts.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_chocolatey_source.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_copy.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_credential.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_credential.ps1 validate-modules:parameter-type-not-in-doc
lib/ansible/modules/windows/win_dns_client.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_domain.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep
lib/ansible/modules/windows/win_domain.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_domain_controller.ps1 pslint:PSAvoidGlobalVars # New PR
lib/ansible/modules/windows/win_domain_controller.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_domain_controller.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_domain_membership.ps1 pslint:PSAvoidGlobalVars # New PR
lib/ansible/modules/windows/win_domain_membership.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_domain_membership.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_dotnet_ngen.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_dsc.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep
lib/ansible/modules/windows/win_dsc.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_eventlog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_feature.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_file_version.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_find.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep
lib/ansible/modules/windows/win_firewall_rule.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_hotfix.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_hotfix.ps1 pslint:PSUseApprovedVerbs
lib/ansible/modules/windows/win_http_proxy.ps1 validate-modules:parameter-type-not-in-doc
lib/ansible/modules/windows/win_iis_virtualdirectory.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_iis_webapplication.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_iis_webapppool.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_iis_webbinding.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/modules/windows/win_iis_webbinding.ps1 pslint:PSUseApprovedVerbs lib/ansible/modules/windows/win_iis_website.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_inet_proxy.ps1 validate-modules:parameter-type-not-in-doc lib/ansible/modules/windows/win_lineinfile.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_mapped_drive.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_package.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_package.ps1 pslint:PSUseApprovedVerbs lib/ansible/modules/windows/win_pagefile.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_pagefile.ps1 pslint:PSUseDeclaredVarsMoreThanAssignments # New PR - bug test_path should be testPath lib/ansible/modules/windows/win_pagefile.ps1 pslint:PSUseSupportsShouldProcess lib/ansible/modules/windows/win_product_facts.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_psexec.ps1 validate-modules:parameter-type-not-in-doc lib/ansible/modules/windows/win_rabbitmq_plugin.ps1 pslint:PSAvoidUsingInvokeExpression lib/ansible/modules/windows/win_rabbitmq_plugin.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_rds_cap.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_rds_rap.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_rds_settings.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_regedit.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_region.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep lib/ansible/modules/windows/win_region.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_regmerge.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_robocopy.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_say.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_security_policy.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_security_policy.ps1 
pslint:PSUseApprovedVerbs lib/ansible/modules/windows/win_share.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_shell.ps1 pslint:PSUseApprovedVerbs lib/ansible/modules/windows/win_shortcut.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_snmp.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_unzip.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_unzip.ps1 pslint:PSUseApprovedVerbs lib/ansible/modules/windows/win_updates.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_uri.ps1 pslint:PSAvoidUsingEmptyCatchBlock # Keep lib/ansible/modules/windows/win_user_profile.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_user_profile.ps1 validate-modules:parameter-type-not-in-doc lib/ansible/modules/windows/win_wait_for.ps1 pslint:PSCustomUseLiteralPath lib/ansible/modules/windows/win_webpicmd.ps1 pslint:PSAvoidUsingInvokeExpression lib/ansible/modules/windows/win_xml.ps1 pslint:PSCustomUseLiteralPath lib/ansible/parsing/vault/__init__.py pylint:blacklisted-name lib/ansible/playbook/base.py pylint:blacklisted-name lib/ansible/playbook/collectionsearch.py required-and-default-attributes # https://github.com/ansible/ansible/issues/61460 lib/ansible/playbook/helpers.py pylint:blacklisted-name lib/ansible/playbook/role/__init__.py pylint:blacklisted-name lib/ansible/plugins/action/aireos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local` lib/ansible/plugins/action/aruba.py action-plugin-docs # base class for deprecated network platform modules using `connection: local` lib/ansible/plugins/action/asa.py action-plugin-docs # base class for deprecated network platform modules using `connection: local` lib/ansible/plugins/action/bigip.py action-plugin-docs # undocumented action plugin to fix, existed before sanity test was added lib/ansible/plugins/action/bigiq.py action-plugin-docs # undocumented action plugin to fix, existed before 
sanity test was added lib/ansible/plugins/action/ce.py action-plugin-docs # base class for deprecated network platform modules using `connection: local` lib/ansible/plugins/action/ce_template.py action-plugin-docs # undocumented action plugin to fix, existed before sanity test was added lib/ansible/plugins/action/cnos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local` lib/ansible/plugins/action/dellos10.py action-plugin-docs # base class for deprecated network platform modules using `connection: local` lib/ansible/plugins/action/dellos6.py action-plugin-docs # base class for deprecated network platform modules using `connection: local` lib/ansible/plugins/action/dellos9.py action-plugin-docs # base class for deprecated network platform modules using `connection: local` lib/ansible/plugins/action/enos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local` lib/ansible/plugins/action/eos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local` lib/ansible/plugins/action/exos.py action-plugin-docs # undocumented action plugin to fix lib/ansible/plugins/action/ios.py action-plugin-docs # base class for deprecated network platform modules using `connection: local` lib/ansible/plugins/action/iosxr.py action-plugin-docs # base class for deprecated network platform modules using `connection: local` lib/ansible/plugins/action/ironware.py action-plugin-docs # base class for deprecated network platform modules using `connection: local` lib/ansible/plugins/action/junos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local` lib/ansible/plugins/action/net_base.py action-plugin-docs # base class for other net_* action plugins which have a matching module lib/ansible/plugins/action/netconf.py action-plugin-docs # base class for deprecated network platform modules using `connection: local` 
lib/ansible/plugins/action/network.py action-plugin-docs # base class for network action plugins lib/ansible/plugins/action/normal.py action-plugin-docs # default action plugin for modules without a dedicated action plugin lib/ansible/plugins/action/nxos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local` lib/ansible/plugins/action/slxos.py action-plugin-docs # undocumented action plugin to fix lib/ansible/plugins/action/sros.py action-plugin-docs # base class for deprecated network platform modules using `connection: local` lib/ansible/plugins/action/voss.py action-plugin-docs # undocumented action plugin to fix lib/ansible/plugins/action/vyos.py action-plugin-docs # base class for deprecated network platform modules using `connection: local` lib/ansible/plugins/cache/base.py ansible-doc!skip # not a plugin, but a stub for backwards compatibility lib/ansible/plugins/callback/hipchat.py pylint:blacklisted-name lib/ansible/plugins/connection/lxc.py pylint:blacklisted-name lib/ansible/plugins/doc_fragments/a10.py future-import-boilerplate lib/ansible/plugins/doc_fragments/a10.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/aireos.py future-import-boilerplate lib/ansible/plugins/doc_fragments/aireos.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/alicloud.py future-import-boilerplate lib/ansible/plugins/doc_fragments/alicloud.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/aruba.py future-import-boilerplate lib/ansible/plugins/doc_fragments/aruba.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/asa.py future-import-boilerplate lib/ansible/plugins/doc_fragments/asa.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/auth_basic.py future-import-boilerplate lib/ansible/plugins/doc_fragments/auth_basic.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/avi.py future-import-boilerplate lib/ansible/plugins/doc_fragments/avi.py metaclass-boilerplate 
lib/ansible/plugins/doc_fragments/aws.py future-import-boilerplate lib/ansible/plugins/doc_fragments/aws.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/aws_credentials.py future-import-boilerplate lib/ansible/plugins/doc_fragments/aws_credentials.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/aws_region.py future-import-boilerplate lib/ansible/plugins/doc_fragments/aws_region.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/azure.py future-import-boilerplate lib/ansible/plugins/doc_fragments/azure.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/azure_tags.py future-import-boilerplate lib/ansible/plugins/doc_fragments/azure_tags.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/backup.py future-import-boilerplate lib/ansible/plugins/doc_fragments/backup.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/ce.py future-import-boilerplate lib/ansible/plugins/doc_fragments/ce.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/checkpoint_commands.py future-import-boilerplate lib/ansible/plugins/doc_fragments/checkpoint_commands.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/checkpoint_objects.py future-import-boilerplate lib/ansible/plugins/doc_fragments/checkpoint_objects.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/cnos.py future-import-boilerplate lib/ansible/plugins/doc_fragments/cnos.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/constructed.py future-import-boilerplate lib/ansible/plugins/doc_fragments/constructed.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/decrypt.py future-import-boilerplate lib/ansible/plugins/doc_fragments/decrypt.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/default_callback.py future-import-boilerplate lib/ansible/plugins/doc_fragments/default_callback.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/dellos10.py future-import-boilerplate lib/ansible/plugins/doc_fragments/dellos10.py 
metaclass-boilerplate lib/ansible/plugins/doc_fragments/dellos6.py future-import-boilerplate lib/ansible/plugins/doc_fragments/dellos6.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/dellos9.py future-import-boilerplate lib/ansible/plugins/doc_fragments/dellos9.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/digital_ocean.py future-import-boilerplate lib/ansible/plugins/doc_fragments/digital_ocean.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/dimensiondata.py future-import-boilerplate lib/ansible/plugins/doc_fragments/dimensiondata.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/dimensiondata_wait.py future-import-boilerplate lib/ansible/plugins/doc_fragments/dimensiondata_wait.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/ec2.py future-import-boilerplate lib/ansible/plugins/doc_fragments/ec2.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/emc.py future-import-boilerplate lib/ansible/plugins/doc_fragments/emc.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/enos.py future-import-boilerplate lib/ansible/plugins/doc_fragments/enos.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/eos.py future-import-boilerplate lib/ansible/plugins/doc_fragments/eos.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/f5.py future-import-boilerplate lib/ansible/plugins/doc_fragments/f5.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/files.py future-import-boilerplate lib/ansible/plugins/doc_fragments/files.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/fortios.py future-import-boilerplate lib/ansible/plugins/doc_fragments/fortios.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/gcp.py future-import-boilerplate lib/ansible/plugins/doc_fragments/gcp.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/hcloud.py future-import-boilerplate lib/ansible/plugins/doc_fragments/hcloud.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/hetzner.py 
future-import-boilerplate lib/ansible/plugins/doc_fragments/hetzner.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/hpe3par.py future-import-boilerplate lib/ansible/plugins/doc_fragments/hpe3par.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/hwc.py future-import-boilerplate lib/ansible/plugins/doc_fragments/hwc.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/infinibox.py future-import-boilerplate lib/ansible/plugins/doc_fragments/infinibox.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/influxdb.py future-import-boilerplate lib/ansible/plugins/doc_fragments/influxdb.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/ingate.py future-import-boilerplate lib/ansible/plugins/doc_fragments/ingate.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/intersight.py future-import-boilerplate lib/ansible/plugins/doc_fragments/intersight.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/inventory_cache.py future-import-boilerplate lib/ansible/plugins/doc_fragments/inventory_cache.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/ios.py future-import-boilerplate lib/ansible/plugins/doc_fragments/ios.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/iosxr.py future-import-boilerplate lib/ansible/plugins/doc_fragments/iosxr.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/ipa.py future-import-boilerplate lib/ansible/plugins/doc_fragments/ipa.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/ironware.py future-import-boilerplate lib/ansible/plugins/doc_fragments/ironware.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/junos.py future-import-boilerplate lib/ansible/plugins/doc_fragments/junos.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/k8s_auth_options.py future-import-boilerplate lib/ansible/plugins/doc_fragments/k8s_auth_options.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/k8s_name_options.py future-import-boilerplate 
lib/ansible/plugins/doc_fragments/k8s_name_options.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/k8s_resource_options.py future-import-boilerplate lib/ansible/plugins/doc_fragments/k8s_resource_options.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/k8s_scale_options.py future-import-boilerplate lib/ansible/plugins/doc_fragments/k8s_scale_options.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/k8s_state_options.py future-import-boilerplate lib/ansible/plugins/doc_fragments/k8s_state_options.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/keycloak.py future-import-boilerplate lib/ansible/plugins/doc_fragments/keycloak.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/kubevirt_common_options.py future-import-boilerplate lib/ansible/plugins/doc_fragments/kubevirt_common_options.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/kubevirt_vm_options.py future-import-boilerplate lib/ansible/plugins/doc_fragments/kubevirt_vm_options.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/ldap.py future-import-boilerplate lib/ansible/plugins/doc_fragments/ldap.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/lxca_common.py future-import-boilerplate lib/ansible/plugins/doc_fragments/lxca_common.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/manageiq.py future-import-boilerplate lib/ansible/plugins/doc_fragments/manageiq.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/meraki.py future-import-boilerplate lib/ansible/plugins/doc_fragments/meraki.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/mysql.py future-import-boilerplate lib/ansible/plugins/doc_fragments/mysql.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/netapp.py future-import-boilerplate lib/ansible/plugins/doc_fragments/netapp.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/netconf.py future-import-boilerplate lib/ansible/plugins/doc_fragments/netconf.py metaclass-boilerplate 
lib/ansible/plugins/doc_fragments/netscaler.py future-import-boilerplate lib/ansible/plugins/doc_fragments/netscaler.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/network_agnostic.py future-import-boilerplate lib/ansible/plugins/doc_fragments/network_agnostic.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/nios.py future-import-boilerplate lib/ansible/plugins/doc_fragments/nios.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/nso.py future-import-boilerplate lib/ansible/plugins/doc_fragments/nso.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/nxos.py future-import-boilerplate lib/ansible/plugins/doc_fragments/nxos.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/oneview.py future-import-boilerplate lib/ansible/plugins/doc_fragments/oneview.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/online.py future-import-boilerplate lib/ansible/plugins/doc_fragments/online.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/onyx.py future-import-boilerplate lib/ansible/plugins/doc_fragments/onyx.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/opennebula.py future-import-boilerplate lib/ansible/plugins/doc_fragments/opennebula.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/openstack.py future-import-boilerplate lib/ansible/plugins/doc_fragments/openstack.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/openswitch.py future-import-boilerplate lib/ansible/plugins/doc_fragments/openswitch.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/oracle.py future-import-boilerplate lib/ansible/plugins/doc_fragments/oracle.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/oracle_creatable_resource.py future-import-boilerplate lib/ansible/plugins/doc_fragments/oracle_creatable_resource.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/oracle_display_name_option.py future-import-boilerplate lib/ansible/plugins/doc_fragments/oracle_display_name_option.py 
metaclass-boilerplate lib/ansible/plugins/doc_fragments/oracle_name_option.py future-import-boilerplate lib/ansible/plugins/doc_fragments/oracle_name_option.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/oracle_tags.py future-import-boilerplate lib/ansible/plugins/doc_fragments/oracle_tags.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/oracle_wait_options.py future-import-boilerplate lib/ansible/plugins/doc_fragments/oracle_wait_options.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/ovirt.py future-import-boilerplate lib/ansible/plugins/doc_fragments/ovirt.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/ovirt_info.py future-import-boilerplate lib/ansible/plugins/doc_fragments/ovirt_info.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/panos.py future-import-boilerplate lib/ansible/plugins/doc_fragments/panos.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/postgres.py future-import-boilerplate lib/ansible/plugins/doc_fragments/postgres.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/proxysql.py future-import-boilerplate lib/ansible/plugins/doc_fragments/proxysql.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/purestorage.py future-import-boilerplate lib/ansible/plugins/doc_fragments/purestorage.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/rabbitmq.py future-import-boilerplate lib/ansible/plugins/doc_fragments/rabbitmq.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/rackspace.py future-import-boilerplate lib/ansible/plugins/doc_fragments/rackspace.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/return_common.py future-import-boilerplate lib/ansible/plugins/doc_fragments/return_common.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/scaleway.py future-import-boilerplate lib/ansible/plugins/doc_fragments/scaleway.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/shell_common.py future-import-boilerplate 
lib/ansible/plugins/doc_fragments/shell_common.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/shell_windows.py future-import-boilerplate lib/ansible/plugins/doc_fragments/shell_windows.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/skydive.py future-import-boilerplate lib/ansible/plugins/doc_fragments/skydive.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/sros.py future-import-boilerplate lib/ansible/plugins/doc_fragments/sros.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/tower.py future-import-boilerplate lib/ansible/plugins/doc_fragments/tower.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/ucs.py future-import-boilerplate lib/ansible/plugins/doc_fragments/ucs.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/url.py future-import-boilerplate lib/ansible/plugins/doc_fragments/url.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/utm.py future-import-boilerplate lib/ansible/plugins/doc_fragments/utm.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/validate.py future-import-boilerplate lib/ansible/plugins/doc_fragments/validate.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/vca.py future-import-boilerplate lib/ansible/plugins/doc_fragments/vca.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/vexata.py future-import-boilerplate lib/ansible/plugins/doc_fragments/vexata.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/vmware.py future-import-boilerplate lib/ansible/plugins/doc_fragments/vmware.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/vmware_rest_client.py future-import-boilerplate lib/ansible/plugins/doc_fragments/vmware_rest_client.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/vultr.py future-import-boilerplate lib/ansible/plugins/doc_fragments/vultr.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/vyos.py future-import-boilerplate lib/ansible/plugins/doc_fragments/vyos.py metaclass-boilerplate 
lib/ansible/plugins/doc_fragments/xenserver.py future-import-boilerplate lib/ansible/plugins/doc_fragments/xenserver.py metaclass-boilerplate lib/ansible/plugins/doc_fragments/zabbix.py future-import-boilerplate lib/ansible/plugins/doc_fragments/zabbix.py metaclass-boilerplate lib/ansible/plugins/lookup/sequence.py pylint:blacklisted-name lib/ansible/plugins/strategy/__init__.py pylint:blacklisted-name lib/ansible/plugins/strategy/linear.py pylint:blacklisted-name lib/ansible/vars/hostvars.py pylint:blacklisted-name setup.py future-import-boilerplate setup.py metaclass-boilerplate test/integration/targets/ansible-runner/files/adhoc_example1.py future-import-boilerplate test/integration/targets/ansible-runner/files/adhoc_example1.py metaclass-boilerplate test/integration/targets/ansible-runner/files/playbook_example1.py future-import-boilerplate test/integration/targets/ansible-runner/files/playbook_example1.py metaclass-boilerplate test/integration/targets/async/library/async_test.py future-import-boilerplate test/integration/targets/async/library/async_test.py metaclass-boilerplate test/integration/targets/async_fail/library/async_test.py future-import-boilerplate test/integration/targets/async_fail/library/async_test.py metaclass-boilerplate test/integration/targets/aws_lambda/files/mini_lambda.py future-import-boilerplate test/integration/targets/aws_lambda/files/mini_lambda.py metaclass-boilerplate test/integration/targets/collections_plugin_namespace/collection_root/ansible_collections/my_ns/my_col/plugins/lookup/lookup_no_future_boilerplate.py future-import-boilerplate test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util2.py pylint:relative-beyond-top-level test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util3.py pylint:relative-beyond-top-level 
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/modules/my_module.py pylint:relative-beyond-top-level test/integration/targets/expect/files/test_command.py future-import-boilerplate test/integration/targets/expect/files/test_command.py metaclass-boilerplate test/integration/targets/get_url/files/testserver.py future-import-boilerplate test/integration/targets/get_url/files/testserver.py metaclass-boilerplate test/integration/targets/group/files/gidget.py future-import-boilerplate test/integration/targets/group/files/gidget.py metaclass-boilerplate test/integration/targets/ignore_unreachable/fake_connectors/bad_exec.py future-import-boilerplate test/integration/targets/ignore_unreachable/fake_connectors/bad_exec.py metaclass-boilerplate test/integration/targets/ignore_unreachable/fake_connectors/bad_put_file.py future-import-boilerplate test/integration/targets/ignore_unreachable/fake_connectors/bad_put_file.py metaclass-boilerplate test/integration/targets/inventory_kubevirt_conformance/inventory_diff.py future-import-boilerplate test/integration/targets/inventory_kubevirt_conformance/inventory_diff.py metaclass-boilerplate test/integration/targets/inventory_kubevirt_conformance/server.py future-import-boilerplate test/integration/targets/inventory_kubevirt_conformance/server.py metaclass-boilerplate test/integration/targets/jinja2_native_types/filter_plugins/native_plugins.py future-import-boilerplate test/integration/targets/jinja2_native_types/filter_plugins/native_plugins.py metaclass-boilerplate test/integration/targets/lambda_policy/files/mini_http_lambda.py future-import-boilerplate test/integration/targets/lambda_policy/files/mini_http_lambda.py metaclass-boilerplate test/integration/targets/lookup_properties/lookup-8859-15.ini no-smart-quotes test/integration/targets/module_precedence/lib_with_extension/ping.py future-import-boilerplate 
test/integration/targets/module_precedence/lib_with_extension/ping.py metaclass-boilerplate test/integration/targets/module_precedence/multiple_roles/bar/library/ping.py future-import-boilerplate test/integration/targets/module_precedence/multiple_roles/bar/library/ping.py metaclass-boilerplate test/integration/targets/module_precedence/multiple_roles/foo/library/ping.py future-import-boilerplate test/integration/targets/module_precedence/multiple_roles/foo/library/ping.py metaclass-boilerplate test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.py future-import-boilerplate test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.py metaclass-boilerplate test/integration/targets/module_utils/library/test.py future-import-boilerplate test/integration/targets/module_utils/library/test.py metaclass-boilerplate test/integration/targets/module_utils/library/test_env_override.py future-import-boilerplate test/integration/targets/module_utils/library/test_env_override.py metaclass-boilerplate test/integration/targets/module_utils/library/test_failure.py future-import-boilerplate test/integration/targets/module_utils/library/test_failure.py metaclass-boilerplate test/integration/targets/module_utils/library/test_override.py future-import-boilerplate test/integration/targets/module_utils/library/test_override.py metaclass-boilerplate test/integration/targets/module_utils/module_utils/bar0/foo.py pylint:blacklisted-name test/integration/targets/module_utils/module_utils/foo.py pylint:blacklisted-name test/integration/targets/module_utils/module_utils/sub/bar/__init__.py pylint:blacklisted-name test/integration/targets/module_utils/module_utils/sub/bar/bar.py pylint:blacklisted-name test/integration/targets/module_utils/module_utils/yak/zebra/foo.py pylint:blacklisted-name test/integration/targets/old_style_modules_posix/library/helloworld.sh shebang test/integration/targets/pause/test-pause.py future-import-boilerplate 
test/integration/targets/pause/test-pause.py metaclass-boilerplate test/integration/targets/pip/files/ansible_test_pip_chdir/__init__.py future-import-boilerplate test/integration/targets/pip/files/ansible_test_pip_chdir/__init__.py metaclass-boilerplate test/integration/targets/pip/files/setup.py future-import-boilerplate test/integration/targets/pip/files/setup.py metaclass-boilerplate test/integration/targets/run_modules/library/test.py future-import-boilerplate test/integration/targets/run_modules/library/test.py metaclass-boilerplate test/integration/targets/s3_bucket_notification/files/mini_lambda.py future-import-boilerplate test/integration/targets/s3_bucket_notification/files/mini_lambda.py metaclass-boilerplate test/integration/targets/script/files/no_shebang.py future-import-boilerplate test/integration/targets/script/files/no_shebang.py metaclass-boilerplate test/integration/targets/service/files/ansible_test_service.py future-import-boilerplate test/integration/targets/service/files/ansible_test_service.py metaclass-boilerplate test/integration/targets/setup_rpm_repo/files/create-repo.py future-import-boilerplate test/integration/targets/setup_rpm_repo/files/create-repo.py metaclass-boilerplate test/integration/targets/sns_topic/files/sns_topic_lambda/sns_topic_lambda.py future-import-boilerplate test/integration/targets/sns_topic/files/sns_topic_lambda/sns_topic_lambda.py metaclass-boilerplate test/integration/targets/supervisorctl/files/sendProcessStdin.py future-import-boilerplate test/integration/targets/supervisorctl/files/sendProcessStdin.py metaclass-boilerplate test/integration/targets/template/files/encoding_1252_utf-8.expected no-smart-quotes test/integration/targets/template/files/encoding_1252_windows-1252.expected no-smart-quotes test/integration/targets/template/files/foo.dos.txt line-endings test/integration/targets/template/role_filter/filter_plugins/myplugin.py future-import-boilerplate 
test/integration/targets/template/role_filter/filter_plugins/myplugin.py metaclass-boilerplate test/integration/targets/template/templates/encoding_1252.j2 no-smart-quotes test/integration/targets/test_infra/library/test.py future-import-boilerplate test/integration/targets/test_infra/library/test.py metaclass-boilerplate test/integration/targets/unicode/unicode.yml no-smart-quotes test/integration/targets/uri/files/testserver.py future-import-boilerplate test/integration/targets/uri/files/testserver.py metaclass-boilerplate test/integration/targets/var_precedence/ansible-var-precedence-check.py future-import-boilerplate test/integration/targets/var_precedence/ansible-var-precedence-check.py metaclass-boilerplate test/integration/targets/vars_prompt/test-vars_prompt.py future-import-boilerplate test/integration/targets/vars_prompt/test-vars_prompt.py metaclass-boilerplate test/integration/targets/vault/test-vault-client.py future-import-boilerplate test/integration/targets/vault/test-vault-client.py metaclass-boilerplate test/integration/targets/wait_for/files/testserver.py future-import-boilerplate test/integration/targets/wait_for/files/testserver.py metaclass-boilerplate test/integration/targets/want_json_modules_posix/library/helloworld.py future-import-boilerplate test/integration/targets/want_json_modules_posix/library/helloworld.py metaclass-boilerplate test/integration/targets/win_audit_rule/library/test_get_audit_rule.ps1 pslint:PSCustomUseLiteralPath test/integration/targets/win_chocolatey/files/tools/chocolateyUninstall.ps1 pslint:PSCustomUseLiteralPath test/integration/targets/win_chocolatey_source/library/choco_source.ps1 pslint:PSCustomUseLiteralPath test/integration/targets/win_csharp_utils/library/ansible_basic_tests.ps1 pslint:PSCustomUseLiteralPath test/integration/targets/win_csharp_utils/library/ansible_basic_tests.ps1 pslint:PSUseDeclaredVarsMoreThanAssignments # test setup requires vars to be set globally and not referenced in the same scope 
test/integration/targets/win_csharp_utils/library/ansible_become_tests.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xSetReboot/ANSIBLE_xSetReboot.psm1 pslint!skip
test/integration/targets/win_dsc/files/xTestDsc/1.0.0/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/win_dsc/files/xTestDsc/1.0.0/xTestDsc.psd1 pslint!skip
test/integration/targets/win_dsc/files/xTestDsc/1.0.1/DSCResources/ANSIBLE_xTestResource/ANSIBLE_xTestResource.psm1 pslint!skip
test/integration/targets/win_dsc/files/xTestDsc/1.0.1/xTestDsc.psd1 pslint!skip
test/integration/targets/win_exec_wrapper/library/test_fail.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_iis_webbinding/library/test_get_webbindings.ps1 pslint:PSUseApprovedVerbs
test/integration/targets/win_module_utils/library/argv_parser_test.ps1 pslint:PSUseApprovedVerbs
test/integration/targets/win_module_utils/library/backup_file_test.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_module_utils/library/command_util_test.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_module_utils/library/legacy_only_new_way_win_line_ending.ps1 line-endings
test/integration/targets/win_module_utils/library/legacy_only_old_way_win_line_ending.ps1 line-endings
test/integration/targets/win_ping/library/win_ping_syntax_error.ps1 pslint!skip
test/integration/targets/win_psmodule/files/module/template.psd1 pslint!skip
test/integration/targets/win_psmodule/files/module/template.psm1 pslint!skip
test/integration/targets/win_psmodule/files/setup_modules.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_reboot/templates/post_reboot.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_regmerge/templates/win_line_ending.j2 line-endings
test/integration/targets/win_script/files/test_script.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_creates_file.ps1 pslint:PSAvoidUsingCmdletAliases
test/integration/targets/win_script/files/test_script_removes_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_script/files/test_script_with_args.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_with_splatting.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_stat/library/test_symlink_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_template/files/foo.dos.txt line-endings
test/integration/targets/win_user_right/library/test_get_right.ps1 pslint:PSCustomUseLiteralPath
test/legacy/cleanup_gce.py future-import-boilerplate
test/legacy/cleanup_gce.py metaclass-boilerplate
test/legacy/cleanup_gce.py pylint:blacklisted-name
test/legacy/cleanup_rax.py future-import-boilerplate
test/legacy/cleanup_rax.py metaclass-boilerplate
test/legacy/consul_running.py future-import-boilerplate
test/legacy/consul_running.py metaclass-boilerplate
test/legacy/gce_credentials.py future-import-boilerplate
test/legacy/gce_credentials.py metaclass-boilerplate
test/legacy/gce_credentials.py pylint:blacklisted-name
test/legacy/setup_gce.py future-import-boilerplate
test/legacy/setup_gce.py metaclass-boilerplate
test/lib/ansible_test/_data/requirements/constraints.txt test-constraints
test/lib/ansible_test/_data/requirements/integration.cloud.azure.txt test-constraints
test/lib/ansible_test/_data/sanity/pylint/plugins/string_format.py use-compat-six
test/lib/ansible_test/_data/setup/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
test/lib/ansible_test/_data/setup/windows-httptester.ps1 pslint:PSCustomUseLiteralPath
test/units/config/manager/test_find_ini_config_file.py future-import-boilerplate
test/units/contrib/inventory/test_vmware_inventory.py future-import-boilerplate
test/units/contrib/inventory/test_vmware_inventory.py metaclass-boilerplate
test/units/contrib/inventory/test_vmware_inventory.py pylint:blacklisted-name
test/units/executor/test_play_iterator.py pylint:blacklisted-name
test/units/mock/path.py future-import-boilerplate
test/units/mock/path.py metaclass-boilerplate
test/units/mock/yaml_helper.py future-import-boilerplate
test/units/mock/yaml_helper.py metaclass-boilerplate
test/units/module_utils/aws/test_aws_module.py metaclass-boilerplate
test/units/module_utils/basic/test__symbolic_mode_to_octal.py future-import-boilerplate
test/units/module_utils/basic/test_deprecate_warn.py future-import-boilerplate
test/units/module_utils/basic/test_deprecate_warn.py metaclass-boilerplate
test/units/module_utils/basic/test_exit_json.py future-import-boilerplate
test/units/module_utils/basic/test_get_file_attributes.py future-import-boilerplate
test/units/module_utils/basic/test_heuristic_log_sanitize.py future-import-boilerplate
test/units/module_utils/basic/test_run_command.py future-import-boilerplate
test/units/module_utils/basic/test_run_command.py pylint:blacklisted-name
test/units/module_utils/basic/test_safe_eval.py future-import-boilerplate
test/units/module_utils/basic/test_tmpdir.py future-import-boilerplate
test/units/module_utils/cloud/test_backoff.py future-import-boilerplate
test/units/module_utils/cloud/test_backoff.py metaclass-boilerplate
test/units/module_utils/common/test_dict_transformations.py future-import-boilerplate
test/units/module_utils/common/test_dict_transformations.py metaclass-boilerplate
test/units/module_utils/conftest.py future-import-boilerplate
test/units/module_utils/conftest.py metaclass-boilerplate
test/units/module_utils/facts/base.py future-import-boilerplate
test/units/module_utils/facts/hardware/test_sunos_get_uptime_facts.py future-import-boilerplate
test/units/module_utils/facts/hardware/test_sunos_get_uptime_facts.py metaclass-boilerplate
test/units/module_utils/facts/network/test_generic_bsd.py future-import-boilerplate
test/units/module_utils/facts/other/test_facter.py future-import-boilerplate
test/units/module_utils/facts/other/test_ohai.py future-import-boilerplate
test/units/module_utils/facts/system/test_lsb.py future-import-boilerplate
test/units/module_utils/facts/test_ansible_collector.py future-import-boilerplate
test/units/module_utils/facts/test_collector.py future-import-boilerplate
test/units/module_utils/facts/test_collectors.py future-import-boilerplate
test/units/module_utils/facts/test_facts.py future-import-boilerplate
test/units/module_utils/facts/test_timeout.py future-import-boilerplate
test/units/module_utils/facts/test_utils.py future-import-boilerplate
test/units/module_utils/gcp/test_auth.py future-import-boilerplate
test/units/module_utils/gcp/test_auth.py metaclass-boilerplate
test/units/module_utils/gcp/test_gcp_utils.py future-import-boilerplate
test/units/module_utils/gcp/test_gcp_utils.py metaclass-boilerplate
test/units/module_utils/gcp/test_utils.py future-import-boilerplate
test/units/module_utils/gcp/test_utils.py metaclass-boilerplate
test/units/module_utils/hwc/test_dict_comparison.py future-import-boilerplate
test/units/module_utils/hwc/test_dict_comparison.py metaclass-boilerplate
test/units/module_utils/hwc/test_hwc_utils.py future-import-boilerplate
test/units/module_utils/hwc/test_hwc_utils.py metaclass-boilerplate
test/units/module_utils/json_utils/test_filter_non_json_lines.py future-import-boilerplate
test/units/module_utils/net_tools/test_netbox.py future-import-boilerplate
test/units/module_utils/net_tools/test_netbox.py metaclass-boilerplate
test/units/module_utils/network/avi/test_avi_api_utils.py future-import-boilerplate
test/units/module_utils/network/avi/test_avi_api_utils.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_common.py future-import-boilerplate
test/units/module_utils/network/ftd/test_common.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_configuration.py future-import-boilerplate
test/units/module_utils/network/ftd/test_configuration.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_device.py future-import-boilerplate
test/units/module_utils/network/ftd/test_device.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_parser.py future-import-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_parser.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_validator.py future-import-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_validator.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_with_real_data.py future-import-boilerplate
test/units/module_utils/network/ftd/test_fdm_swagger_with_real_data.py metaclass-boilerplate
test/units/module_utils/network/ftd/test_upsert_functionality.py future-import-boilerplate
test/units/module_utils/network/ftd/test_upsert_functionality.py metaclass-boilerplate
test/units/module_utils/network/nso/test_nso.py metaclass-boilerplate
test/units/module_utils/parsing/test_convert_bool.py future-import-boilerplate
test/units/module_utils/postgresql/test_postgres.py future-import-boilerplate
test/units/module_utils/postgresql/test_postgres.py metaclass-boilerplate
test/units/module_utils/remote_management/dellemc/test_ome.py future-import-boilerplate
test/units/module_utils/remote_management/dellemc/test_ome.py metaclass-boilerplate
test/units/module_utils/test_database.py future-import-boilerplate
test/units/module_utils/test_database.py metaclass-boilerplate
test/units/module_utils/test_distro.py future-import-boilerplate
test/units/module_utils/test_distro.py metaclass-boilerplate
test/units/module_utils/test_hetzner.py future-import-boilerplate
test/units/module_utils/test_hetzner.py metaclass-boilerplate
test/units/module_utils/test_kubevirt.py future-import-boilerplate
test/units/module_utils/test_kubevirt.py metaclass-boilerplate
test/units/module_utils/test_netapp.py future-import-boilerplate
test/units/module_utils/test_text.py future-import-boilerplate
test/units/module_utils/test_utm_utils.py future-import-boilerplate
test/units/module_utils/test_utm_utils.py metaclass-boilerplate
test/units/module_utils/urls/test_Request.py replace-urlopen
test/units/module_utils/urls/test_fetch_url.py replace-urlopen
test/units/module_utils/xenserver/FakeAnsibleModule.py future-import-boilerplate
test/units/module_utils/xenserver/FakeAnsibleModule.py metaclass-boilerplate
test/units/module_utils/xenserver/FakeXenAPI.py future-import-boilerplate
test/units/module_utils/xenserver/FakeXenAPI.py metaclass-boilerplate
test/units/modules/cloud/google/test_gce_tag.py future-import-boilerplate
test/units/modules/cloud/google/test_gce_tag.py metaclass-boilerplate
test/units/modules/cloud/google/test_gcp_forwarding_rule.py future-import-boilerplate
test/units/modules/cloud/google/test_gcp_forwarding_rule.py metaclass-boilerplate
test/units/modules/cloud/google/test_gcp_url_map.py future-import-boilerplate
test/units/modules/cloud/google/test_gcp_url_map.py metaclass-boilerplate
test/units/modules/cloud/kubevirt/test_kubevirt_rs.py future-import-boilerplate
test/units/modules/cloud/kubevirt/test_kubevirt_rs.py metaclass-boilerplate
test/units/modules/cloud/kubevirt/test_kubevirt_vm.py future-import-boilerplate
test/units/modules/cloud/kubevirt/test_kubevirt_vm.py metaclass-boilerplate
test/units/modules/cloud/linode/conftest.py future-import-boilerplate
test/units/modules/cloud/linode/conftest.py metaclass-boilerplate
test/units/modules/cloud/linode/test_linode.py metaclass-boilerplate
test/units/modules/cloud/linode_v4/conftest.py future-import-boilerplate
test/units/modules/cloud/linode_v4/conftest.py metaclass-boilerplate
test/units/modules/cloud/linode_v4/test_linode_v4.py metaclass-boilerplate
test/units/modules/cloud/misc/test_terraform.py future-import-boilerplate
test/units/modules/cloud/misc/test_terraform.py metaclass-boilerplate
test/units/modules/cloud/misc/virt_net/conftest.py future-import-boilerplate
test/units/modules/cloud/misc/virt_net/conftest.py metaclass-boilerplate
test/units/modules/cloud/misc/virt_net/test_virt_net.py future-import-boilerplate
test/units/modules/cloud/misc/virt_net/test_virt_net.py metaclass-boilerplate
test/units/modules/cloud/openstack/test_os_server.py future-import-boilerplate
test/units/modules/cloud/openstack/test_os_server.py metaclass-boilerplate
test/units/modules/cloud/xenserver/FakeAnsibleModule.py future-import-boilerplate
test/units/modules/cloud/xenserver/FakeAnsibleModule.py metaclass-boilerplate
test/units/modules/cloud/xenserver/FakeXenAPI.py future-import-boilerplate
test/units/modules/cloud/xenserver/FakeXenAPI.py metaclass-boilerplate
test/units/modules/conftest.py future-import-boilerplate
test/units/modules/conftest.py metaclass-boilerplate
test/units/modules/files/test_copy.py future-import-boilerplate
test/units/modules/messaging/rabbitmq/test_rabbimq_user.py future-import-boilerplate
test/units/modules/messaging/rabbitmq/test_rabbimq_user.py metaclass-boilerplate
test/units/modules/monitoring/test_circonus_annotation.py future-import-boilerplate
test/units/modules/monitoring/test_circonus_annotation.py metaclass-boilerplate
test/units/modules/monitoring/test_icinga2_feature.py future-import-boilerplate
test/units/modules/monitoring/test_icinga2_feature.py metaclass-boilerplate
test/units/modules/monitoring/test_pagerduty.py future-import-boilerplate
test/units/modules/monitoring/test_pagerduty.py metaclass-boilerplate
test/units/modules/monitoring/test_pagerduty_alert.py future-import-boilerplate
test/units/modules/monitoring/test_pagerduty_alert.py metaclass-boilerplate
test/units/modules/net_tools/test_nmcli.py future-import-boilerplate
test/units/modules/net_tools/test_nmcli.py metaclass-boilerplate
test/units/modules/network/avi/test_avi_user.py future-import-boilerplate
test/units/modules/network/avi/test_avi_user.py metaclass-boilerplate
test/units/modules/network/check_point/test_checkpoint_access_rule.py future-import-boilerplate
test/units/modules/network/check_point/test_checkpoint_access_rule.py metaclass-boilerplate
test/units/modules/network/check_point/test_checkpoint_host.py future-import-boilerplate
test/units/modules/network/check_point/test_checkpoint_host.py metaclass-boilerplate
test/units/modules/network/check_point/test_checkpoint_session.py future-import-boilerplate
test/units/modules/network/check_point/test_checkpoint_session.py metaclass-boilerplate
test/units/modules/network/check_point/test_checkpoint_task_facts.py future-import-boilerplate
test/units/modules/network/check_point/test_checkpoint_task_facts.py metaclass-boilerplate
test/units/modules/network/cloudvision/test_cv_server_provision.py future-import-boilerplate
test/units/modules/network/cloudvision/test_cv_server_provision.py metaclass-boilerplate
test/units/modules/network/cumulus/test_nclu.py future-import-boilerplate
test/units/modules/network/cumulus/test_nclu.py metaclass-boilerplate
test/units/modules/network/ftd/test_ftd_configuration.py future-import-boilerplate
test/units/modules/network/ftd/test_ftd_configuration.py metaclass-boilerplate
test/units/modules/network/ftd/test_ftd_file_download.py future-import-boilerplate
test/units/modules/network/ftd/test_ftd_file_download.py metaclass-boilerplate
test/units/modules/network/ftd/test_ftd_file_upload.py future-import-boilerplate
test/units/modules/network/ftd/test_ftd_file_upload.py metaclass-boilerplate
test/units/modules/network/ftd/test_ftd_install.py future-import-boilerplate
test/units/modules/network/ftd/test_ftd_install.py metaclass-boilerplate
test/units/modules/network/netscaler/netscaler_module.py future-import-boilerplate
test/units/modules/network/netscaler/netscaler_module.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_action.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_action.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_policy.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_policy.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_vserver.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_cs_vserver.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_service.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_service.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_site.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_site.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_vserver.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_gslb_vserver.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_lb_monitor.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_lb_monitor.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_lb_vserver.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_lb_vserver.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_module_utils.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_module_utils.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_nitro_request.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_nitro_request.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_save_config.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_save_config.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_server.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_server.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_service.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_service.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_servicegroup.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_servicegroup.py metaclass-boilerplate
test/units/modules/network/netscaler/test_netscaler_ssl_certkey.py future-import-boilerplate
test/units/modules/network/netscaler/test_netscaler_ssl_certkey.py metaclass-boilerplate
test/units/modules/network/nso/nso_module.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_action.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_config.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_query.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_show.py metaclass-boilerplate
test/units/modules/network/nso/test_nso_verify.py metaclass-boilerplate
test/units/modules/network/nuage/nuage_module.py future-import-boilerplate
test/units/modules/network/nuage/nuage_module.py metaclass-boilerplate
test/units/modules/network/nuage/test_nuage_vspk.py future-import-boilerplate
test/units/modules/network/nuage/test_nuage_vspk.py metaclass-boilerplate
test/units/modules/network/nxos/test_nxos_acl_interface.py metaclass-boilerplate
test/units/modules/network/radware/test_vdirect_commit.py future-import-boilerplate
test/units/modules/network/radware/test_vdirect_commit.py metaclass-boilerplate
test/units/modules/network/radware/test_vdirect_file.py future-import-boilerplate
test/units/modules/network/radware/test_vdirect_file.py metaclass-boilerplate
test/units/modules/network/radware/test_vdirect_runnable.py future-import-boilerplate
test/units/modules/network/radware/test_vdirect_runnable.py metaclass-boilerplate
test/units/modules/notification/test_slack.py future-import-boilerplate
test/units/modules/notification/test_slack.py metaclass-boilerplate
test/units/modules/packaging/language/test_gem.py future-import-boilerplate
test/units/modules/packaging/language/test_gem.py metaclass-boilerplate
test/units/modules/packaging/language/test_pip.py future-import-boilerplate
test/units/modules/packaging/language/test_pip.py metaclass-boilerplate
test/units/modules/packaging/os/conftest.py future-import-boilerplate
test/units/modules/packaging/os/conftest.py metaclass-boilerplate
test/units/modules/packaging/os/test_apk.py future-import-boilerplate
test/units/modules/packaging/os/test_apk.py metaclass-boilerplate
test/units/modules/packaging/os/test_apt.py future-import-boilerplate
test/units/modules/packaging/os/test_apt.py metaclass-boilerplate
test/units/modules/packaging/os/test_apt.py pylint:blacklisted-name
test/units/modules/packaging/os/test_rhn_channel.py future-import-boilerplate
test/units/modules/packaging/os/test_rhn_channel.py metaclass-boilerplate
test/units/modules/packaging/os/test_rhn_register.py future-import-boilerplate
test/units/modules/packaging/os/test_rhn_register.py metaclass-boilerplate
test/units/modules/packaging/os/test_yum.py future-import-boilerplate
test/units/modules/packaging/os/test_yum.py metaclass-boilerplate
test/units/modules/remote_management/dellemc/test_ome_device_info.py future-import-boilerplate
test/units/modules/remote_management/dellemc/test_ome_device_info.py metaclass-boilerplate
test/units/modules/remote_management/lxca/test_lxca_cmms.py future-import-boilerplate
test/units/modules/remote_management/lxca/test_lxca_cmms.py metaclass-boilerplate
test/units/modules/remote_management/lxca/test_lxca_nodes.py future-import-boilerplate
test/units/modules/remote_management/lxca/test_lxca_nodes.py metaclass-boilerplate
test/units/modules/remote_management/oneview/conftest.py future-import-boilerplate
test/units/modules/remote_management/oneview/conftest.py metaclass-boilerplate
test/units/modules/remote_management/oneview/hpe_test_utils.py future-import-boilerplate
test/units/modules/remote_management/oneview/hpe_test_utils.py metaclass-boilerplate
test/units/modules/remote_management/oneview/oneview_module_loader.py future-import-boilerplate
test/units/modules/remote_management/oneview/oneview_module_loader.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_datacenter_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_datacenter_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_enclosure_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_enclosure_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_ethernet_network.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_ethernet_network.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_ethernet_network_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_ethernet_network_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fc_network.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fc_network.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fc_network_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fc_network_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fcoe_network.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fcoe_network.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fcoe_network_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_fcoe_network_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_logical_interconnect_group.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_logical_interconnect_group.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_logical_interconnect_group_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_logical_interconnect_group_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_network_set.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_network_set.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_network_set_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_network_set_info.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_san_manager.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_san_manager.py metaclass-boilerplate
test/units/modules/remote_management/oneview/test_oneview_san_manager_info.py future-import-boilerplate
test/units/modules/remote_management/oneview/test_oneview_san_manager_info.py metaclass-boilerplate
test/units/modules/source_control/gitlab/gitlab.py future-import-boilerplate
test/units/modules/source_control/gitlab/gitlab.py metaclass-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_deploy_key.py future-import-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_deploy_key.py metaclass-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_group.py future-import-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_group.py metaclass-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_hook.py future-import-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_hook.py metaclass-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_project.py future-import-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_project.py metaclass-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_runner.py future-import-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_runner.py metaclass-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_user.py future-import-boilerplate
test/units/modules/source_control/gitlab/test_gitlab_user.py metaclass-boilerplate
test/units/modules/source_control/test_bitbucket_access_key.py future-import-boilerplate
test/units/modules/source_control/test_bitbucket_access_key.py metaclass-boilerplate
test/units/modules/source_control/test_bitbucket_pipeline_key_pair.py future-import-boilerplate
test/units/modules/source_control/test_bitbucket_pipeline_key_pair.py metaclass-boilerplate
test/units/modules/source_control/test_bitbucket_pipeline_known_host.py future-import-boilerplate
test/units/modules/source_control/test_bitbucket_pipeline_known_host.py metaclass-boilerplate
test/units/modules/source_control/test_bitbucket_pipeline_variable.py future-import-boilerplate
test/units/modules/source_control/test_bitbucket_pipeline_variable.py metaclass-boilerplate
test/units/modules/storage/hpe3par/test_ss_3par_cpg.py future-import-boilerplate
test/units/modules/storage/hpe3par/test_ss_3par_cpg.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_cluster_config.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_cluster_config.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_cluster_snmp.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_cluster_snmp.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_initiators.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_elementsw_initiators.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_aggregate.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_aggregate.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_autosupport.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_autosupport.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_broadcast_domain.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_broadcast_domain.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cifs.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cifs.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cifs_server.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cifs_server.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cluster.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cluster.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cluster_peer.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_cluster_peer.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_command.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_command.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_export_policy_rule.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_export_policy_rule.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_firewall_policy.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_firewall_policy.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_flexcache.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_flexcache.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_igroup.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_igroup.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_igroup_initiator.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_igroup_initiator.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_info.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_info.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_interface.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_interface.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_ipspace.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_ipspace.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_job_schedule.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_job_schedule.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_lun_copy.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_lun_copy.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_lun_map.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_lun_map.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_motd.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_motd.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_ifgrp.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_ifgrp.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_port.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_port.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_routes.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_routes.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_subnet.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_net_subnet.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nfs.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nfs.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme_namespace.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme_namespace.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme_subsystem.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_nvme_subsystem.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_portset.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_portset.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_qos_policy_group.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_qos_policy_group.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_quotas.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_quotas.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_security_key_manager.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_security_key_manager.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_service_processor_network.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_service_processor_network.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapmirror.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapmirror.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapshot.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapshot.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapshot_policy.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_snapshot_policy.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_software_update.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_software_update.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_svm.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_svm.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_ucadapter.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_ucadapter.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_unix_group.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_unix_group.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_unix_user.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_unix_user.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_user.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_user.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_user_role.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_user_role.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_volume.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_volume.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_volume_clone.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_volume_clone.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan.py metaclass-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_on_access_policy.py future-import-boilerplate
test/units/modules/storage/netapp/test_na_ontap_vscan_on_access_policy.py metaclass-boilerplate test/units/modules/storage/netapp/test_na_ontap_vscan_on_demand_task.py future-import-boilerplate test/units/modules/storage/netapp/test_na_ontap_vscan_on_demand_task.py metaclass-boilerplate test/units/modules/storage/netapp/test_na_ontap_vscan_scanner_pool.py future-import-boilerplate test/units/modules/storage/netapp/test_na_ontap_vscan_scanner_pool.py metaclass-boilerplate test/units/modules/storage/netapp/test_netapp.py metaclass-boilerplate test/units/modules/storage/netapp/test_netapp_e_alerts.py future-import-boilerplate test/units/modules/storage/netapp/test_netapp_e_asup.py future-import-boilerplate test/units/modules/storage/netapp/test_netapp_e_auditlog.py future-import-boilerplate test/units/modules/storage/netapp/test_netapp_e_global.py future-import-boilerplate test/units/modules/storage/netapp/test_netapp_e_host.py future-import-boilerplate test/units/modules/storage/netapp/test_netapp_e_iscsi_interface.py future-import-boilerplate test/units/modules/storage/netapp/test_netapp_e_iscsi_target.py future-import-boilerplate test/units/modules/storage/netapp/test_netapp_e_ldap.py future-import-boilerplate test/units/modules/storage/netapp/test_netapp_e_mgmt_interface.py future-import-boilerplate test/units/modules/storage/netapp/test_netapp_e_syslog.py future-import-boilerplate test/units/modules/system/interfaces_file/test_interfaces_file.py future-import-boilerplate test/units/modules/system/interfaces_file/test_interfaces_file.py metaclass-boilerplate test/units/modules/system/interfaces_file/test_interfaces_file.py pylint:blacklisted-name test/units/modules/system/test_iptables.py future-import-boilerplate test/units/modules/system/test_iptables.py metaclass-boilerplate test/units/modules/system/test_java_keystore.py future-import-boilerplate test/units/modules/system/test_java_keystore.py metaclass-boilerplate test/units/modules/system/test_known_hosts.py 
future-import-boilerplate test/units/modules/system/test_known_hosts.py metaclass-boilerplate test/units/modules/system/test_known_hosts.py pylint:ansible-bad-function test/units/modules/system/test_linux_mountinfo.py future-import-boilerplate test/units/modules/system/test_linux_mountinfo.py metaclass-boilerplate test/units/modules/system/test_pamd.py metaclass-boilerplate test/units/modules/system/test_parted.py future-import-boilerplate test/units/modules/system/test_systemd.py future-import-boilerplate test/units/modules/system/test_systemd.py metaclass-boilerplate test/units/modules/system/test_ufw.py future-import-boilerplate test/units/modules/system/test_ufw.py metaclass-boilerplate test/units/modules/utils.py future-import-boilerplate test/units/modules/utils.py metaclass-boilerplate test/units/modules/web_infrastructure/test_apache2_module.py future-import-boilerplate test/units/modules/web_infrastructure/test_apache2_module.py metaclass-boilerplate test/units/modules/web_infrastructure/test_jenkins_plugin.py future-import-boilerplate test/units/modules/web_infrastructure/test_jenkins_plugin.py metaclass-boilerplate test/units/parsing/utils/test_addresses.py future-import-boilerplate test/units/parsing/utils/test_addresses.py metaclass-boilerplate test/units/parsing/vault/test_vault.py pylint:blacklisted-name test/units/playbook/role/test_role.py pylint:blacklisted-name test/units/playbook/test_attribute.py future-import-boilerplate test/units/playbook/test_attribute.py metaclass-boilerplate test/units/playbook/test_conditional.py future-import-boilerplate test/units/playbook/test_conditional.py metaclass-boilerplate test/units/plugins/action/test_synchronize.py future-import-boilerplate test/units/plugins/action/test_synchronize.py metaclass-boilerplate test/units/plugins/httpapi/test_checkpoint.py future-import-boilerplate test/units/plugins/httpapi/test_checkpoint.py metaclass-boilerplate test/units/plugins/httpapi/test_ftd.py future-import-boilerplate 
test/units/plugins/httpapi/test_ftd.py metaclass-boilerplate test/units/plugins/inventory/test_constructed.py future-import-boilerplate test/units/plugins/inventory/test_constructed.py metaclass-boilerplate test/units/plugins/inventory/test_group.py future-import-boilerplate test/units/plugins/inventory/test_group.py metaclass-boilerplate test/units/plugins/inventory/test_host.py future-import-boilerplate test/units/plugins/inventory/test_host.py metaclass-boilerplate test/units/plugins/loader_fixtures/import_fixture.py future-import-boilerplate test/units/plugins/shell/test_cmd.py future-import-boilerplate test/units/plugins/shell/test_cmd.py metaclass-boilerplate test/units/plugins/shell/test_powershell.py future-import-boilerplate test/units/plugins/shell/test_powershell.py metaclass-boilerplate test/units/plugins/test_plugins.py pylint:blacklisted-name test/units/template/test_templar.py pylint:blacklisted-name test/units/test_constants.py future-import-boilerplate test/units/test_context.py future-import-boilerplate test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/action/my_action.py future-import-boilerplate test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/action/my_action.py metaclass-boilerplate test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_other_util.py future-import-boilerplate test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_other_util.py metaclass-boilerplate test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_util.py future-import-boilerplate test/units/utils/fixtures/collections/ansible_collections/my_namespace/my_collection/plugins/module_utils/my_util.py metaclass-boilerplate test/units/utils/kubevirt_fixtures.py future-import-boilerplate test/units/utils/kubevirt_fixtures.py 
metaclass-boilerplate test/units/utils/test_cleanup_tmp_file.py future-import-boilerplate test/units/utils/test_encrypt.py future-import-boilerplate test/units/utils/test_encrypt.py metaclass-boilerplate test/units/utils/test_helpers.py future-import-boilerplate test/units/utils/test_helpers.py metaclass-boilerplate test/units/utils/test_shlex.py future-import-boilerplate test/units/utils/test_shlex.py metaclass-boilerplate test/utils/shippable/check_matrix.py replace-urlopen test/utils/shippable/timing.py shebang
closed
ansible/ansible
https://github.com/ansible/ansible
65,043
Can't pass parameter with value `false` to cp_mgmt modules
##### SUMMARY
Can't pass a parameter with the value `false` to the cp_mgmt modules. For example, the following task won't send the parameter `add_default_rule`:
```
- name: Create access layer
  check_point.mgmt.cp_mgmt_access_layer:
    name: "access layer 3"
    add_default_rule: false
```

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
check_point

##### ANSIBLE VERSION
```paste below
```

##### CONFIGURATION
```paste below
```

##### OS / ENVIRONMENT

##### STEPS TO REPRODUCE
```yaml
```

##### EXPECTED RESULTS

##### ACTUAL RESULTS
```paste below
```
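The pitfall behind this report can be sketched with a minimal standalone example: filtering module parameters on truthiness drops an explicit `False` before it ever reaches the API. The helper names below are hypothetical; the real cp_mgmt modules build their request payload inside `module_utils`.

```python
def build_payload_buggy(params):
    # Filtering on truthiness silently drops an explicit False (and 0, "").
    return {k: v for k, v in params.items() if v}


def build_payload_fixed(params):
    # Comparing against None keeps False, so it is sent to the API.
    return {k: v for k, v in params.items() if v is not None}


params = {"name": "access layer 3", "add_default_rule": False, "tags": None}

# The buggy variant loses the explicitly-set False value.
assert "add_default_rule" not in build_payload_buggy(params)

# The fixed variant keeps False but still drops unset (None) parameters.
assert build_payload_fixed(params) == {"name": "access layer 3",
                                       "add_default_rule": False}
```

The same `is not None` idiom applies to any Ansible module that forwards optional boolean parameters to a remote API.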
https://github.com/ansible/ansible/issues/65043
https://github.com/ansible/ansible/pull/65040
bc92170242ed2dc456e284b796dccc81e6ff18ac
b1e666766447e1eab9d986f19503d19fe1c21ae6
2019-11-19T09:44:42Z
python
2019-11-20T06:39:40Z
test/units/plugins/httpapi/test_checkpoint.py
# (c) 2018 Red Hat Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

import json

from ansible.module_utils.six.moves.urllib.error import HTTPError
from units.compat import mock
from units.compat import unittest

from ansible.errors import AnsibleConnectionFailure
from ansible.module_utils.connection import ConnectionError
from ansible.module_utils.six import BytesIO, StringIO
from ansible.plugins.httpapi.checkpoint import HttpApi

EXPECTED_BASE_HEADERS = {
    'Content-Type': 'application/json'
}


class FakeCheckpointHttpApiPlugin(HttpApi):
    def __init__(self, conn):
        super(FakeCheckpointHttpApiPlugin, self).__init__(conn)
        self.hostvars = {
            'domain': None
        }

    def get_option(self, var):
        return self.hostvars[var]

    def set_option(self, var, val):
        self.hostvars[var] = val


class TestCheckpointHttpApi(unittest.TestCase):

    def setUp(self):
        self.connection_mock = mock.Mock()
        self.checkpoint_plugin = FakeCheckpointHttpApiPlugin(self.connection_mock)
        self.checkpoint_plugin._load_name = 'httpapi'

    def test_login_raises_exception_when_username_and_password_are_not_provided(self):
        with self.assertRaises(AnsibleConnectionFailure) as res:
            self.checkpoint_plugin.login(None, None)

        assert 'Username and password are required' in str(res.exception)

    def test_login_raises_exception_when_invalid_response(self):
        self.connection_mock.send.return_value = self._connection_response(
            {'NOSIDKEY': 'NOSIDVALUE'}
        )

        with self.assertRaises(ConnectionError) as res:
            self.checkpoint_plugin.login('foo', 'bar')

        assert 'Server returned response without token info during connection authentication' in str(res.exception)

    def test_send_request_should_return_error_info_when_http_error_raises(self):
        self.connection_mock.send.side_effect = HTTPError('http://testhost.com', 500, '', {},
                                                          StringIO('{"errorMessage": "ERROR"}'))

        resp = self.checkpoint_plugin.send_request('/test', None)

        assert resp == (500, {'errorMessage': 'ERROR'})

    def test_login_to_global_domain(self):
        temp_domain = self.checkpoint_plugin.hostvars['domain']
        self.checkpoint_plugin.hostvars['domain'] = 'test_domain'
        self.connection_mock.send.return_value = self._connection_response(
            {'sid': 'SID', 'uid': 'UID'}
        )

        self.checkpoint_plugin.login('USERNAME', 'PASSWORD')

        self.connection_mock.send.assert_called_once_with('/web_api/login', mock.ANY,
                                                          headers=mock.ANY, method=mock.ANY)
        self.checkpoint_plugin.hostvars['domain'] = temp_domain

    @staticmethod
    def _connection_response(response, status=200):
        response_mock = mock.Mock()
        response_mock.getcode.return_value = status
        response_text = json.dumps(response) if type(response) is dict else response
        response_data = BytesIO(response_text.encode() if response_text else ''.encode())
        return response_mock, response_data
closed
ansible/ansible
https://github.com/ansible/ansible
65,095
CLI Parsing fails in when volume return Gluster Id
##### SUMMARY
Ansible facts return an invalid result format when a gluster gfid entry is present.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
Gluster-heal

##### ANSIBLE VERSION
```paste below
2.8.2
```

##### CONFIGURATION
```paste below
```

##### OS / ENVIRONMENT
Fedora-29

##### STEPS TO REPRODUCE
1. Set up the environment for a 1*3 replica.
2. Make one of the bricks go down.
3. Create some files or run any operation on the up bricks so that entries are available for heal, e.g. create a large file or create a few VMs on the host.
4. Run the command `gluster volume heal data info`; output:
```
[root@dhcp42-43 ~]# gluster v heal data info
Brick headwig.lab.eng.blr.redhat.com:/gluster_bricks/data/data
Status: Transport endpoint is not connected
Number of entries: -

Brick fisher.lab.eng.blr.redhat.com:/gluster_bricks/data/data
/901a4da9-2b0f-4b37-8bcb-e4bb548dc1b9/dom_md/ids
/__DIRECT_IO_TEST__
Status: Connected
Number of entries: 2

Brick pinstripe.lab.eng.blr.redhat.com:/gluster_bricks/data/data
/__DIRECT_IO_TEST__
<gfid:711baa99-2d96-410a-87d0-2527c46b47f5>
Status: Connected
Number of entries: 2
```
5. Note that a gfid entry has appeared for brick pinstripe.lab.eng.blr.redhat.com:/gluster_bricks/data/data.
6. Looking at the actual result, the brick (pinstripe...) has been split into 2 objects, while the expectation is that it should come back as a single object.

##### EXPECTED RESULTS
```
{
    "ansible_facts": {
        "glusterfs": {
            "heal_info": [
                {
                    "brick": " headwig.lab.eng.blr.redhat.com:/gluster_bricks/vmstore/vmstore",
                    "no_of_entries": "-",
                    "status": "Transport endpoint is not connected"
                },
                {
                    "brick": " fisher.lab.eng.blr.redhat.com:/gluster_bricks/vmstore/vmstore",
                    "no_of_entries": "2",
                    "status": "Connected"
                },
                {
                    "brick": " pinstripe.lab.eng.blr.redhat.com:/gluster_bricks/vmstore/vmstore",
                    "no_of_entries": "2",
                    "status": "Connected"
                }
            ],
            "rebalance": "",
            "status_filter": "self-heal",
            "volume": "vmstore"
        }
    },
    "ansible_loop_var": "item",
    "attempts": 2,
    "changed": false,
    "invocation": {
        "module_args": {
            "name": "vmstore",
            "status_filter": "self-heal"
        }
    },
    "item": "vmstore"
}
```

##### ACTUAL RESULTS
```
{
    "ansible_facts": {
        "glusterfs": {
            "heal_info": [
                {
                    "brick": " headwig.lab.eng.blr.redhat.com:/gluster_bricks/vmstore/vmstore",
                    "no_of_entries": "-",
                    "status": "Transport endpoint is not connected"
                },
                {
                    "brick": " fisher.lab.eng.blr.redhat.com:/gluster_bricks/vmstore/vmstore",
                    "no_of_entries": "2",
                    "status": "Connected"
                },
                {
                    "brick": " pinstripe.lab.eng.blr.redhat.com:/gluster_bricks/vmstore/vmstore"
                },
                {
                    "no_of_entries": "2",
                    "status": "Connected"
                }
            ],
            "rebalance": "",
            "status_filter": "self-heal",
            "volume": "vmstore"
        }
    },
    "ansible_loop_var": "item",
    "attempts": 2,
    "changed": false,
    "invocation": {
        "module_args": {
            "name": "vmstore",
            "status_filter": "self-heal"
        }
    },
    "item": "vmstore"
}
```
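The splitting behaviour described in this report comes from the parser treating `<gfid:...>` entry lines differently from `/path` entry lines. A minimal standalone parser sketch (hypothetical function, not the module's actual code) that skips all entry lines and closes each brick block only at its `Number of entries:` line keeps every brick in a single dict:

```python
def parse_heal_info(raw):
    """Parse `gluster volume heal <vol> info` output into per-brick dicts."""
    bricks = []
    current = None
    for line in raw.splitlines():
        line = line.strip()
        if line.startswith("Brick "):
            current = {"brick": line[len("Brick "):]}
        elif line.startswith("Status:") and current is not None:
            current["status"] = line.split(":", 1)[1].strip()
        elif line.startswith("Number of entries:") and current is not None:
            current["no_of_entries"] = line.split(":", 1)[1].strip()
            bricks.append(current)  # this line always closes a brick block
            current = None
        # Entry lines -- both "/path" and "<gfid:...>" -- fall through here,
        # so a gfid entry can no longer split one brick into two dicts.
    return bricks


sample = """\
Brick pinstripe.lab.eng.blr.redhat.com:/gluster_bricks/data/data
/__DIRECT_IO_TEST__
<gfid:711baa99-2d96-410a-87d0-2527c46b47f5>
Status: Connected
Number of entries: 2"""

assert parse_heal_info(sample) == [
    {"brick": "pinstripe.lab.eng.blr.redhat.com:/gluster_bricks/data/data",
     "status": "Connected", "no_of_entries": "2"}
]
```

Keying on line prefixes rather than substring membership also avoids misclassifying file paths that happen to contain the words "Brick" or "Status".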
https://github.com/ansible/ansible/issues/65095
https://github.com/ansible/ansible/pull/65041
35cc26f8c0447ab1ad4427eafcc7283c4356370d
38d6421425dd5ea6d0529c523f89e92bdeb21a37
2019-11-20T08:53:31Z
python
2019-11-20T09:29:22Z
lib/ansible/modules/storage/glusterfs/gluster_heal_info.py
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright: (c) 2016, Red Hat, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function
__metaclass__ = type

ANSIBLE_METADATA = {'metadata_version': '1.1',
                    'status': ['preview'],
                    'supported_by': 'community'}

DOCUMENTATION = '''
---
module: gluster_heal_info
short_description: Gather information on self-heal or rebalance status
author: "Devyani Kota (@devyanikota)"
version_added: "2.8"
description:
  - Gather facts about either self-heal or rebalance status.
  - This module was called C(gluster_heal_facts) before Ansible 2.9, returning C(ansible_facts).
    Note that the M(gluster_heal_info) module no longer returns C(ansible_facts)!
options:
  name:
    description:
      - The volume name.
    required: true
    aliases: ['volume']
  status_filter:
    default: "self-heal"
    choices: ["self-heal", "rebalance"]
    description:
      - Determines which facts are to be returned.
      - If the C(status_filter) is C(self-heal), status of self-heal, along with the number of files still in process are returned.
      - If the C(status_filter) is C(rebalance), rebalance status is returned.
requirements:
  - GlusterFS > 3.2
'''

EXAMPLES = '''
- name: Gather self-heal facts about all gluster hosts in the cluster
  gluster_heal_info:
    name: test_volume
    status_filter: self-heal
  register: self_heal_status
- debug:
    var: self_heal_status

- name: Gather rebalance facts about all gluster hosts in the cluster
  gluster_heal_info:
    name: test_volume
    status_filter: rebalance
  register: rebalance_status
- debug:
    var: rebalance_status
'''

RETURN = '''
name:
    description: GlusterFS volume name
    returned: always
    type: str
status_filter:
    description: Whether self-heal or rebalance status is to be returned
    returned: always
    type: str
heal_info:
    description: List of files that still need healing process
    returned: On success
    type: list
rebalance_status:
    description: Status of rebalance operation
    returned: On success
    type: list
'''

import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
from distutils.version import LooseVersion

glusterbin = ''


def run_gluster(gargs, **kwargs):
    global glusterbin
    global module
    args = [glusterbin, '--mode=script']
    args.extend(gargs)
    try:
        rc, out, err = module.run_command(args, **kwargs)
        if rc != 0:
            module.fail_json(msg='error running gluster (%s) command (rc=%d): %s' %
                             (' '.join(args), rc, out or err), exception=traceback.format_exc())
    except Exception as e:
        module.fail_json(msg='error running gluster (%s) command: %s' %
                         (' '.join(args), to_native(e)), exception=traceback.format_exc())
    return out


def get_self_heal_status(name):
    out = run_gluster(['volume', 'heal', name, 'info'], environ_update=dict(LANG='C', LC_ALL='C', LC_MESSAGES='C'))
    raw_out = out.split("\n")
    heal_info = []
    # return files that still need healing.
    for line in raw_out:
        if 'Brick' in line:
            br_dict = {}
            br_dict['brick'] = line.strip().strip("Brick")
        elif 'Status' in line:
            br_dict['status'] = line.split(":")[1].strip()
        elif 'Number' in line:
            br_dict['no_of_entries'] = line.split(":")[1].strip()
        elif line.startswith('/') or '\n' in line:
            continue
        else:
            br_dict and heal_info.append(br_dict)
            br_dict = {}
    return heal_info


def get_rebalance_status(name):
    out = run_gluster(['volume', 'rebalance', name, 'status'], environ_update=dict(LANG='C', LC_ALL='C', LC_MESSAGES='C'))
    raw_out = out.split("\n")
    rebalance_status = []
    # return the files that are either still 'in progress' state or 'completed'.
    for line in raw_out:
        line = " ".join(line.split())
        line_vals = line.split(" ")
        if line_vals[0].startswith('-') or line_vals[0].startswith('Node'):
            continue
        node_dict = {}
        if len(line_vals) == 1 or len(line_vals) == 4:
            continue
        node_dict['node'] = line_vals[0]
        node_dict['rebalanced_files'] = line_vals[1]
        node_dict['failures'] = line_vals[4]
        if 'in progress' in line:
            node_dict['status'] = line_vals[5] + line_vals[6]
            rebalance_status.append(node_dict)
        elif 'completed' in line:
            node_dict['status'] = line_vals[5]
            rebalance_status.append(node_dict)
    return rebalance_status


def is_invalid_gluster_version(module, required_version):
    cmd = module.get_bin_path('gluster', True) + ' --version'
    result = module.run_command(cmd)
    ver_line = result[1].split('\n')[0]
    version = ver_line.split(' ')[1]
    # If the installed version is less than 3.2, it is an invalid version
    # return True
    return LooseVersion(version) < LooseVersion(required_version)


def main():
    global module
    global glusterbin
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type='str', required=True, aliases=['volume']),
            status_filter=dict(type='str', default='self-heal', choices=['self-heal', 'rebalance']),
        ),
    )
    is_old_facts = module._name == 'gluster_heal_facts'
    if is_old_facts:
        module.deprecate("The 'gluster_heal_facts' module has been renamed to 'gluster_heal_info', "
                         "and the renamed one no longer returns ansible_facts", version='2.13')
    glusterbin = module.get_bin_path('gluster', True)
    required_version = "3.2"
    status_filter = module.params['status_filter']
    volume_name = module.params['name']
    heal_info = ''
    rebalance_status = ''

    # Verify if required GlusterFS version is installed
    if is_invalid_gluster_version(module, required_version):
        module.fail_json(msg="GlusterFS version > %s is required" % required_version)

    try:
        if status_filter == "self-heal":
            heal_info = get_self_heal_status(volume_name)
        elif status_filter == "rebalance":
            rebalance_status = get_rebalance_status(volume_name)
    except Exception as e:
        module.fail_json(msg='Error retrieving status: %s' % e, exception=traceback.format_exc())

    facts = {}
    facts['glusterfs'] = {'volume': volume_name, 'status_filter': status_filter, 'heal_info': heal_info, 'rebalance': rebalance_status}

    if is_old_facts:
        module.exit_json(ansible_facts=facts)
    else:
        module.exit_json(**facts)


if __name__ == '__main__':
    main()
closed
ansible/ansible
https://github.com/ansible/ansible
64,968
Meraki - Integration tests shouldn't have connection tests
##### SUMMARY
Many of the Meraki modules' integration tests have connection tests. These tests should be moved into unit tests.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
meraki

##### ANSIBLE VERSION
```paste below
2.10
```

##### CONFIGURATION
```paste below
```

##### OS / ENVIRONMENT

##### STEPS TO REPRODUCE
```yaml
```

##### EXPECTED RESULTS

##### ACTUAL RESULTS
```paste below
```
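Moved into a unit test, a connection check like the integration tests' "invalid domain" task could be exercised against a mock instead of the network. The sketch below is hypothetical: `get_orgs` stands in for the module's request path, where a real unit test would patch the Meraki `module_utils` request function.

```python
from unittest import mock


def get_orgs(session):
    # Hypothetical stand-in for the module's HTTP request helper.
    return session.get("https://api.meraki.com/api/v0/organizations")


session = mock.Mock()
# Simulate an unreachable host without any real HTTP traffic.
session.get.side_effect = ConnectionError("Failed to connect to marrrraki.com")

try:
    get_orgs(session)
    raised = False
except ConnectionError as exc:
    raised = True
    message = str(exc)

# The failure surfaces locally and can be asserted on directly.
assert raised and "Failed to connect" in message
```

This keeps the slow, environment-dependent failure paths out of the integration suite, which can then focus on real API behaviour.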
https://github.com/ansible/ansible/issues/64968
https://github.com/ansible/ansible/pull/64975
502fc2087ec98e57948dd17a977ee65f1b3bf036
2cf079bc8fb0bfc21df9173f6abe1910448af1ff
2019-11-17T21:00:28Z
python
2019-11-20T14:24:12Z
test/integration/targets/meraki_config_template/tasks/main.yml
# Test code for the Meraki Organization module
# Copyright: (c) 2018, Kevin Breit (@kbreit)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
- block:
  - name: Test an API key is provided
    fail:
      msg: Please define an API key
    when: auth_key is not defined

  - name: Use an invalid domain
    meraki_config_template:
      auth_key: '{{ auth_key }}'
      host: marrrraki.com
      state: query
      org_name: '{{test_org_name}}'
      output_level: debug
    delegate_to: localhost
    register: invalid_domain
    ignore_errors: yes

  - name: Connection assertions
    assert:
      that:
        - '"Failed to connect to" in invalid_domain.msg'

  - name: Query all configuration templates
    meraki_config_template:
      auth_key: '{{auth_key}}'
      state: query
      org_name: '{{test_org_name}}'
    register: get_all

  - name: Delete non-existant configuration template
    meraki_config_template:
      auth_key: '{{auth_key}}'
      state: absent
      org_name: '{{test_org_name}}'
      config_template: FakeConfigTemplate
    register: deleted
    ignore_errors: yes

  - assert:
      that:
        - '"No configuration template named" in deleted.msg'

  - name: Create a network
    meraki_network:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{ test_org_name }}'
      net_name: '{{ test_net_name }}'
      type: appliance
    delegate_to: localhost
    register: net_info

  - set_fact:
      net_id: '{{net_info.data.id}}'

  - name: Bind a template to a network with check mode
    meraki_config_template:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{ test_org_name }}'
      net_name: '{{ test_net_name }}'
      config_template: '{{test_template_name}}'
    check_mode: yes
    register: bind_check

  - name: Bind a template to a network
    meraki_config_template:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{ test_org_name }}'
      net_name: '{{ test_net_name }}'
      config_template: '{{test_template_name}}'
    register: bind

  - assert:
      that: bind.changed == True

  - assert:
      that: bind_check is changed

  - name: Bind a template to a network when it's already bound
    meraki_config_template:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{ test_org_name }}'
      net_name: '{{ test_net_name }}'
      config_template: '{{test_template_name}}'
    register: bind_invalid
    ignore_errors: yes

  - assert:
      that:
        - bind_invalid.changed == False

  - name: Unbind a template from a network
    meraki_config_template:
      auth_key: '{{auth_key}}'
      state: absent
      org_name: '{{ test_org_name }}'
      net_name: '{{ test_net_name }}'
      config_template: '{{test_template_name}}'
    register: unbind

  - assert:
      that: unbind.changed == True

  - name: Unbind a template from a network when it's not bound
    meraki_config_template:
      auth_key: '{{auth_key}}'
      state: absent
      org_name: '{{ test_org_name }}'
      net_name: '{{ test_net_name }}'
      config_template: '{{test_template_name}}'
    register: unbind_invalid

  - assert:
      that: unbind_invalid.changed == False

  - name: Bind a template to a network via id
    meraki_config_template:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{test_org_name}}'
      net_id: '{{net_id}}'
      config_template: '{{test_template_name}}'
    register: bind_id

  - assert:
      that: bind_id.changed == True

  - name: Bind a template to a network via id for idempotency
    meraki_config_template:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{test_org_name}}'
      net_id: '{{net_id}}'
      config_template: '{{test_template_name}}'
    register: bind_id_idempotent

  - assert:
      that:
        - bind_id_idempotent.changed == False
        - bind_id_idempotent.data is defined

  - name: Unbind a template from a network via id with check mode
    meraki_config_template:
      auth_key: '{{auth_key}}'
      state: absent
      org_name: '{{test_org_name}}'
      net_id: '{{net_id}}'
      config_template: '{{test_template_name}}'
    check_mode: yes
    register: unbind_id_check

  - assert:
      that: unbind_id_check is changed

  - name: Unbind a template from a network via id
    meraki_config_template:
      auth_key: '{{auth_key}}'
      state: absent
      org_name: '{{test_org_name}}'
      net_id: '{{net_id}}'
      config_template: '{{test_template_name}}'
    register: unbind_id

  - assert:
      that: unbind_id.changed == True

  # This is disabled by default since they can't be created via API
  - name: Delete sacrificial template with check mode
    meraki_config_template:
      auth_key: '{{auth_key}}'
      state: absent
      org_name: '{{test_org_name}}'
      config_template: sacrificial_template
    check_mode: yes
    register: delete_template_check

  # This is disabled by default since they can't be created via API
  - name: Delete sacrificial template
    meraki_config_template:
      auth_key: '{{auth_key}}'
      state: absent
      org_name: '{{test_org_name}}'
      config_template: sacrificial_template
      output_level: debug
    register: delete_template

  - debug:
      var: delete_template

  always:
  - name: Delete network
    meraki_network:
      auth_key: '{{auth_key}}'
      state: absent
      org_name: '{{ test_org_name }}'
      net_name: '{{ test_net_name }}'
    delegate_to: localhost
closed
ansible/ansible
https://github.com/ansible/ansible
64,968
Meraki - Integration tests shouldn't have connection tests
##### SUMMARY
Many of the Meraki modules' integration tests have connection tests. These tests should be moved into unit tests.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
meraki

##### ANSIBLE VERSION
```paste below
2.10
```

##### CONFIGURATION
```paste below
```

##### OS / ENVIRONMENT

##### STEPS TO REPRODUCE
```yaml
```

##### EXPECTED RESULTS

##### ACTUAL RESULTS
```paste below
```
https://github.com/ansible/ansible/issues/64968
https://github.com/ansible/ansible/pull/64975
502fc2087ec98e57948dd17a977ee65f1b3bf036
2cf079bc8fb0bfc21df9173f6abe1910448af1ff
2019-11-17T21:00:28Z
python
2019-11-20T14:24:12Z
test/integration/targets/meraki_mx_l3_firewall/tasks/main.yml
# Test code for the Meraki Organization module
# Copyright: (c) 2018, Kevin Breit (@kbreit)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
- block:
  - name: Test an API key is provided
    fail:
      msg: Please define an API key
    when: auth_key is not defined

  - name: Use an invalid domain
    meraki_organization:
      auth_key: '{{ auth_key }}'
      host: marrrraki.com
      state: present
      org_name: IntTestOrg
      output_level: debug
    delegate_to: localhost
    register: invalid_domain
    ignore_errors: yes

  - name: Disable HTTP
    meraki_organization:
      auth_key: '{{ auth_key }}'
      use_https: false
      state: query
      output_level: debug
    delegate_to: localhost
    register: http
    ignore_errors: yes

  - name: Connection assertions
    assert:
      that:
        - '"Failed to connect to" in invalid_domain.msg'
        - '"http" in http.url'

  - name: Create network
    meraki_network:
      auth_key: '{{ auth_key }}'
      org_name: '{{test_org_name}}'
      net_name: TestNetAppliance
      state: present
      type: appliance
    delegate_to: localhost

  - name: Query firewall rules
    meraki_mx_l3_firewall:
      auth_key: '{{ auth_key }}'
      org_name: '{{test_org_name}}'
      net_name: TestNetAppliance
      state: query
    delegate_to: localhost
    register: query

  - assert:
      that:
        - query.data|length == 1

  - name: Set one firewall rule
    meraki_mx_l3_firewall:
      auth_key: '{{ auth_key }}'
      org_name: '{{test_org_name}}'
      net_name: TestNetAppliance
      state: present
      rules:
        - comment: Deny to documentation address
          src_port: any
          src_cidr: any
          dest_port: 80,443
          dest_cidr: 192.0.1.1/32
          protocol: tcp
          policy: deny
    delegate_to: localhost
    register: create_one

  - debug:
      var: create_one

  - assert:
      that:
        - create_one.data|length == 2
        - create_one.data.0.dest_cidr == '192.0.1.1/32'
        - create_one.data.0.protocol == 'tcp'
        - create_one.data.0.policy == 'deny'
        - create_one.changed == True
        - create_one.data is defined

  - name: Check for idempotency
    meraki_mx_l3_firewall:
      auth_key: '{{ auth_key }}'
      org_name: '{{test_org_name}}'
      net_name: TestNetAppliance
      state: present
      rules:
        - comment: Deny to documentation address
          src_port: any
          src_cidr: any
          dest_port: 80,443
          dest_cidr: 192.0.1.1/32
          protocol: tcp
          policy: deny
    delegate_to: localhost
    register: create_one_idempotent

  - debug:
      msg: '{{create_one_idempotent}}'

  - assert:
      that:
        - create_one_idempotent.changed == False
        - create_one_idempotent.data is defined

  - name: Create syslog in network
    meraki_syslog:
      auth_key: '{{ auth_key }}'
      org_name: '{{test_org_name}}'
      net_name: TestNetAppliance
      state: present
      servers:
        - host: 192.0.2.10
          port: 514
          roles:
            - Appliance event log
            - Flows
    delegate_to: localhost

  - name: Enable syslog for default rule
    meraki_mx_l3_firewall:
      auth_key: '{{ auth_key }}'
      org_name: '{{test_org_name}}'
      net_name: TestNetAppliance
      state: present
      rules:
        - comment: Deny to documentation address
          src_port: any
          src_cidr: any
          dest_port: 80,443
          dest_cidr: 192.0.1.1/32
          protocol: tcp
          policy: deny
      syslog_default_rule: yes
    delegate_to: localhost
    register: default_syslog

  - debug:
      msg: '{{default_syslog}}'

  - assert:
      that:
        - default_syslog.data is defined

  - name: Query firewall rules
    meraki_mx_l3_firewall:
      auth_key: '{{ auth_key }}'
      org_name: '{{test_org_name}}'
      net_name: TestNetAppliance
      state: query
    delegate_to: localhost
    register: query

  - debug:
      msg: '{{query.data.1}}'

  - assert:
      that:
        - query.data.1.syslog_enabled == True
        - default_syslog.changed == True

  - name: Disable syslog for default rule
    meraki_mx_l3_firewall:
      auth_key: '{{ auth_key }}'
      org_name: '{{test_org_name}}'
      net_name: TestNetAppliance
      state: present
      rules:
        - comment: Deny to documentation address
          src_port: any
          src_cidr: any
          dest_port: 80,443
          dest_cidr: 192.0.1.1/32
          protocol: tcp
          policy: deny
      syslog_default_rule: no
    delegate_to: localhost
    register: disable_syslog

  - debug:
      msg: '{{disable_syslog}}'

  - assert:
      that:
        - disable_syslog.data is defined

  - name: Query firewall rules
    meraki_mx_l3_firewall:
      auth_key: '{{ auth_key }}'
      org_name: '{{test_org_name}}'
      net_name: TestNetAppliance
      state: query
    delegate_to: localhost
    register: query

  - debug:
      msg: '{{query.data.1}}'

  - assert:
      that:
        - query.data.1.syslog_enabled == False
        - disable_syslog.changed == True

  always:
  - name: Delete all firewall rules
    meraki_mx_l3_firewall:
      auth_key: '{{ auth_key }}'
      org_name: '{{test_org_name}}'
      net_name: TestNetAppliance
      state: present
      rules: []
    delegate_to: localhost
    register: delete_all

  - name: Delete network
    meraki_network:
      auth_key: '{{ auth_key }}'
      org_name: '{{test_org_name}}'
      net_name: TestNetAppliance
      state: absent
    delegate_to: localhost
closed
ansible/ansible
https://github.com/ansible/ansible
64,968
Meraki - Integration tests shouldn't have connection tests
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
Many of the Meraki modules' integration tests have connection tests. These tests should be moved into unit tests.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
meraki

##### ANSIBLE VERSION
```paste below
2.10
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```

<!--- HINT: You can paste gist.github.com links for larger files -->

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->

##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
https://github.com/ansible/ansible/issues/64968
https://github.com/ansible/ansible/pull/64975
502fc2087ec98e57948dd17a977ee65f1b3bf036
2cf079bc8fb0bfc21df9173f6abe1910448af1ff
2019-11-17T21:00:28Z
python
2019-11-20T14:24:12Z
test/integration/targets/meraki_ssid/tasks/main.yml
# Test code for the Meraki SSID module
# Copyright: (c) 2018, Kevin Breit (@kbreit)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
- block:
  - name: Test an API key is provided
    fail:
      msg: Please define an API key
    when: auth_key is not defined

  - name: Use an invalid domain
    meraki_organization:
      auth_key: '{{ auth_key }}'
      host: marrrraki.com
      state: present
      org_name: IntTestOrg
      output_level: debug
    delegate_to: localhost
    register: invalid_domain
    ignore_errors: yes

  - name: Disable HTTP
    meraki_organization:
      auth_key: '{{ auth_key }}'
      use_https: false
      state: query
      output_level: debug
    delegate_to: localhost
    register: http
    ignore_errors: yes

  - name: Connection assertions
    assert:
      that:
        # - '"Failed to connect to" in invalid_domain.msg'
        - '"http" in http.url'

  - name: Create test network
    meraki_network:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      type: wireless
    register: test_net

  - debug:
      msg: '{{test_net}}'

  - name: Query all SSIDs
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: query
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
    delegate_to: localhost
    register: query_all

  - name: Enable and name SSID
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      name: AnsibleSSID
      enabled: true
    delegate_to: localhost
    register: enable_name_ssid

  - debug:
      msg: '{{ enable_name_ssid }}'

  - assert:
      that:
        - query_all.data | length == 15
        - query_all.data.0.name == 'TestNetSSID WiFi'
        - enable_name_ssid.data.name == 'AnsibleSSID'

  - name: Check for idempotency
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      name: AnsibleSSID
      enabled: true
    delegate_to: localhost
    register: enable_name_ssid_idempotent

  - debug:
      msg: '{{ enable_name_ssid_idempotent }}'

  - assert:
      that:
        - enable_name_ssid_idempotent.changed == False
        - enable_name_ssid_idempotent.data is defined

  - name: Query one SSIDs
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: query
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      name: AnsibleSSID
    delegate_to: localhost
    register: query_one

  - debug:
      msg: '{{query_one}}'

  - assert:
      that:
        - query_one.data.name == 'AnsibleSSID'

  - name: Query one SSID with number
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: query
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      number: 1
    delegate_to: localhost
    register: query_one_number

  - debug:
      msg: '{{query_one_number}}'

  - assert:
      that:
        - query_one_number.data.name == 'AnsibleSSID'

  - name: Disable SSID without specifying number
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      name: AnsibleSSID
      enabled: false
    delegate_to: localhost
    register: disable_ssid

  - debug:
      msg: '{{ disable_ssid.data.enabled }}'

  - assert:
      that:
        - disable_ssid.data.enabled == False

  - name: Enable SSID with number
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      number: 1
      enabled: true
    delegate_to: localhost
    register: enable_ssid_number

  - debug:
      msg: '{{ enable_ssid_number.data.enabled }}'

  - assert:
      that:
        - enable_ssid_number.data.enabled == True

  - name: Set VLAN arg spec
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      number: 1
      use_vlan_tagging: yes
      ip_assignment_mode: Bridge mode
      default_vlan_id: 1
      ap_tags_vlan_ids:
        - tags: wifi
          vlan_id: 2
    delegate_to: localhost
    register: set_vlan_arg

  - debug:
      var: set_vlan_arg

  - assert:
      that: set_vlan_arg is changed

  - name: Set VLAN arg spec
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      number: 1
      use_vlan_tagging: yes
      ip_assignment_mode: Bridge mode
      default_vlan_id: 1
      ap_tags_vlan_ids:
        - tags: wifi
          vlan_id: 2
    delegate_to: localhost
    register: set_vlan_arg_idempotent

  - debug:
      var: set_vlan_arg_idempotent

  - assert:
      that: set_vlan_arg_idempotent is not changed

  - name: Set PSK
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      name: AnsibleSSID
      auth_mode: psk
      psk: abc1234567890
      encryption_mode: wpa
    delegate_to: localhost
    register: psk

  - debug:
      msg: '{{ psk }}'

  - assert:
      that:
        - psk.data.auth_mode == 'psk'
        - psk.data.encryption_mode == 'wpa'
        - psk.data.wpa_encryption_mode == 'WPA2 only'

  - name: Set PSK with idempotency
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      name: AnsibleSSID
      auth_mode: psk
      psk: abc1234567890
      encryption_mode: wpa
    delegate_to: localhost
    register: psk_idempotent

  - debug:
      msg: '{{ psk_idempotent }}'

  - assert:
      that:
        - psk_idempotent is not changed

  - name: Enable click-through splash page
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      name: AnsibleSSID
      splash_page: Click-through splash page
    delegate_to: localhost
    register: splash_click

  - debug:
      msg: '{{ splash_click }}'

  - assert:
      that:
        - splash_click.data.splash_page == 'Click-through splash page'

  - name: Configure RADIUS servers
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      name: AnsibleSSID
      auth_mode: open-with-radius
      radius_servers:
        - host: 192.0.1.200
          port: 1234
          secret: abc98765
    delegate_to: localhost
    register: set_radius_server

  - debug:
      msg: '{{ set_radius_server }}'

  - assert:
      that:
        - set_radius_server.data.radius_servers.0.host == '192.0.1.200'

  - name: Configure RADIUS servers with idempotency
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      name: AnsibleSSID
      auth_mode: open-with-radius
      radius_servers:
        - host: 192.0.1.200
          port: 1234
          secret: abc98765
    delegate_to: localhost
    register: set_radius_server_idempotent

  - debug:
      var: set_radius_server_idempotent

  - assert:
      that:
        - set_radius_server_idempotent is not changed

  #################
  # Error testing #
  #################

  - name: Set PSK with wrong mode
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      name: AnsibleSSID
      auth_mode: open
      psk: abc1234
    delegate_to: localhost
    register: psk_invalid
    ignore_errors: yes

  - debug:
      msg: '{{ psk_invalid }}'

  - assert:
      that:
        - psk_invalid.msg == 'PSK is only allowed when auth_mode is set to psk'

  - name: Set PSK with invalid encryption mode
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      name: AnsibleSSID
      auth_mode: psk
      psk: abc1234
      encryption_mode: eap
    delegate_to: localhost
    register: psk_invalid_mode
    ignore_errors: yes

  - debug:
      msg: '{{ psk_invalid_mode }}'

  - assert:
      that:
        - psk_invalid_mode.msg == 'PSK requires encryption_mode be set to wpa'

  - name: Error for PSK and RADIUS servers
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      name: AnsibleSSID
      auth_mode: psk
      radius_servers:
        - host: 192.0.1.200
          port: 1234
          secret: abc98765
    delegate_to: localhost
    register: err_radius_server_psk
    ignore_errors: yes

  - debug:
      var: err_radius_server_psk

  - assert:
      that:
        - 'err_radius_server_psk.msg == "radius_servers requires auth_mode to be open-with-radius or 8021x-radius"'

  - name: Set VLAN arg without default VLAN error
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: present
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      number: 1
      use_vlan_tagging: yes
      ip_assignment_mode: Bridge mode
      ap_tags_vlan_ids:
        - tags: wifi
          vlan_id: 2
    delegate_to: localhost
    register: set_vlan_arg_err
    ignore_errors: yes

  - debug:
      var: set_vlan_arg_err

  - assert:
      that:
        - 'set_vlan_arg_err.msg == "default_vlan_id is required when use_vlan_tagging is True"'

  always:
  - name: Delete SSID
    meraki_ssid:
      auth_key: '{{auth_key}}'
      state: absent
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
      name: AnsibleSSID
    delegate_to: localhost
    register: delete_ssid

  - debug:
      msg: '{{ delete_ssid }}'

  - assert:
      that:
        - delete_ssid.data.name == 'Unconfigured SSID 2'

  - name: Delete test network
    meraki_network:
      auth_key: '{{auth_key}}'
      state: absent
      org_name: '{{test_org_name}}'
      net_name: TestNetSSID
    register: delete_net

  - debug:
      msg: '{{delete_net}}'
closed
ansible/ansible
https://github.com/ansible/ansible
64,968
Meraki - Integration tests shouldn't have connection tests
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
Many of the Meraki modules' integration tests have connection tests. These tests should be moved into unit tests.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
meraki

##### ANSIBLE VERSION
```paste below
2.10
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```

<!--- HINT: You can paste gist.github.com links for larger files -->

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->

##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
https://github.com/ansible/ansible/issues/64968
https://github.com/ansible/ansible/pull/64975
502fc2087ec98e57948dd17a977ee65f1b3bf036
2cf079bc8fb0bfc21df9173f6abe1910448af1ff
2019-11-17T21:00:28Z
python
2019-11-20T14:24:12Z
test/integration/targets/meraki_switchport/tasks/main.yml
# Test code for the Meraki Organization module
# Copyright: (c) 2018, Kevin Breit (@kbreit)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
- name: Test an API key is provided
  fail:
    msg: Please define an API key
  when: auth_key is not defined

- name: Use an invalid domain
  meraki_switchport:
    auth_key: '{{ auth_key }}'
    host: marrrraki.com
    state: query
    serial: Q2HP-2C6E-GTLD
    org_name: IntTestOrg
  delegate_to: localhost
  register: invaliddomain
  ignore_errors: yes

- name: Disable HTTP
  meraki_switchport:
    auth_key: '{{ auth_key }}'
    use_https: false
    state: query
    serial: Q2HP-2C6E-GTLD
    output_level: debug
  delegate_to: localhost
  register: http
  ignore_errors: yes

- name: Connection assertions
  assert:
    that:
      - '"Failed to connect to" in invaliddomain.msg'
      - '"http" in http.url'

- name: Query all switchports
  meraki_switchport:
    auth_key: '{{auth_key}}'
    state: query
    serial: Q2HP-2C6E-GTLD
  delegate_to: localhost
  register: query_all

- debug:
    msg: '{{query_all}}'

- name: Query one switchport
  meraki_switchport:
    auth_key: '{{auth_key}}'
    state: query
    serial: Q2HP-2C6E-GTLD
    number: 1
  delegate_to: localhost
  register: query_one

- debug:
    msg: '{{query_one}}'

- name: Enable switchport
  meraki_switchport:
    auth_key: '{{auth_key}}'
    state: present
    serial: Q2HP-2C6E-GTLD
    number: 7
    enabled: true
  delegate_to: localhost
  register: update_port_true

- debug:
    msg: '{{update_port_true}}'

- assert:
    that:
      - update_port_true.data.enabled == True

- name: Disable switchport
  meraki_switchport:
    auth_key: '{{auth_key}}'
    state: present
    serial: Q2HP-2C6E-GTLD
    number: 7
    enabled: false
  delegate_to: localhost
  register: update_port_false

- debug:
    msg: '{{update_port_false}}'

- assert:
    that:
      - update_port_false.data.enabled == False

- name: Name switchport
  meraki_switchport:
    auth_key: '{{auth_key}}'
    state: present
    serial: Q2HP-2C6E-GTLD
    number: 7
    name: Test Port
  delegate_to: localhost
  register: update_port_name

- debug:
    msg: '{{update_port_name}}'

- assert:
    that:
      - update_port_name.data.name == 'Test Port'

- name: Configure access port
  meraki_switchport:
    auth_key: '{{auth_key}}'
    state: present
    serial: Q2HP-2C6E-GTLD
    number: 7
    enabled: true
    name: Test Port
    tags: desktop
    type: access
    vlan: 10
  delegate_to: localhost
  register: update_access_port

- debug:
    msg: '{{update_access_port}}'

- assert:
    that:
      - update_access_port.data.vlan == 10

- name: Configure port as trunk
  meraki_switchport:
    auth_key: '{{auth_key}}'
    state: present
    serial: Q2HP-2C6E-GTLD
    number: 8
    enabled: true
    name: Test Port
    type: trunk
    vlan: 10
    allowed_vlans: 10, 100, 200
  delegate_to: localhost

- name: Convert trunk port to access
  meraki_switchport:
    auth_key: '{{auth_key}}'
    state: present
    serial: Q2HP-2C6E-GTLD
    number: 8
    enabled: true
    name: Test Port
    type: access
    vlan: 10
  delegate_to: localhost

- name: Test converted port for idempotency
  meraki_switchport:
    auth_key: '{{auth_key}}'
    state: present
    serial: Q2HP-2C6E-GTLD
    number: 8
    enabled: true
    name: Test Port
    type: access
    vlan: 10
  delegate_to: localhost
  register: convert_idempotent

- assert:
    that:
      - convert_idempotent.changed == False

- name: Configure access port with voice VLAN
  meraki_switchport:
    auth_key: '{{auth_key}}'
    state: present
    serial: Q2HP-2C6E-GTLD
    number: 7
    enabled: true
    name: Test Port
    tags: desktop
    type: access
    vlan: 10
    voice_vlan: 11
  delegate_to: localhost
  register: update_port_vvlan

- debug:
    msg: '{{update_port_vvlan}}'

- assert:
    that:
      - update_port_vvlan.data.voice_vlan == 11
      - update_port_vvlan.changed == True

- name: Check access port for idempotenty
  meraki_switchport:
    auth_key: '{{auth_key}}'
    state: present
    serial: Q2HP-2C6E-GTLD
    number: 7
    enabled: true
    name: Test Port
    tags: desktop
    type: access
    vlan: 10
    voice_vlan: 11
  delegate_to: localhost
  register: update_port_access_idempotent

- debug:
    msg: '{{update_port_access_idempotent}}'

- assert:
    that:
      - update_port_access_idempotent.changed == False
      - update_port_access_idempotent.data is defined

- name: Configure trunk port
  meraki_switchport:
    auth_key: '{{auth_key}}'
    state: present
    serial: Q2HP-2C6E-GTLD
    number: 7
    enabled: true
    name: Server port
    tags: server
    type: trunk
    allowed_vlans: all
    vlan: 8
  delegate_to: localhost
  register: update_trunk

- debug:
    msg: '{{update_trunk}}'

- assert:
    that:
      - update_trunk.data.tags == 'server'
      - update_trunk.data.type == 'trunk'
      - update_trunk.data.allowed_vlans == 'all'

- name: Configure trunk port with specific VLANs
  meraki_switchport:
    auth_key: '{{auth_key}}'
    state: present
    serial: Q2HP-2C6E-GTLD
    number: 7
    enabled: true
    name: Server port
    tags: server
    type: trunk
    vlan: 8
    allowed_vlans:
      - 10
      - 15
      - 20
  delegate_to: localhost
  register: update_trunk

- debug:
    msg: '{{update_trunk}}'

- assert:
    that:
      - update_trunk.data.tags == 'server'
      - update_trunk.data.type == 'trunk'
      - update_trunk.data.allowed_vlans == '8,10,15,20'

- name: Configure trunk port with specific VLANs and native VLAN
  meraki_switchport:
    auth_key: '{{auth_key}}'
    state: present
    serial: Q2HP-2C6E-GTLD
    number: 7
    enabled: true
    name: Server port
    tags: server
    type: trunk
    vlan: 2
    allowed_vlans:
      - 10
      - 15
      - 20
  delegate_to: localhost
  register: update_trunk

- debug:
    msg: '{{update_trunk}}'

- assert:
    that:
      - update_trunk.data.tags == 'server'
      - update_trunk.data.type == 'trunk'
      - update_trunk.data.allowed_vlans == '2,10,15,20'

- name: Check for idempotency on trunk port
  meraki_switchport:
    auth_key: '{{auth_key}}'
    state: present
    serial: Q2HP-2C6E-GTLD
    number: 7
    enabled: true
    name: Server port
    tags: server
    type: trunk
    vlan: 2
    allowed_vlans:
      - 10
      - 15
      - 20
  delegate_to: localhost
  register: update_trunk_idempotent

- debug:
    msg: '{{update_trunk_idempotent}}'

- assert:
    that:
      - update_trunk_idempotent.changed == False
      - update_trunk_idempotent.data is defined
closed
ansible/ansible
https://github.com/ansible/ansible
64,968
Meraki - Integration tests shouldn't have connection tests
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
Many of the Meraki modules' integration tests have connection tests. These tests should be moved into unit tests.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
meraki

##### ANSIBLE VERSION
```paste below
2.10
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```

<!--- HINT: You can paste gist.github.com links for larger files -->

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->

##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
https://github.com/ansible/ansible/issues/64968
https://github.com/ansible/ansible/pull/64975
502fc2087ec98e57948dd17a977ee65f1b3bf036
2cf079bc8fb0bfc21df9173f6abe1910448af1ff
2019-11-17T21:00:28Z
python
2019-11-20T14:24:12Z
test/integration/targets/meraki_syslog/tasks/main.yml
# Test code for the Meraki Organization module
# Copyright: (c) 2018, Kevin Breit (@kbreit)
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
- block:
  # - name: Test an API key is provided
  #   fail:
  #     msg: Please define an API key
  #   when: auth_key is not defined

  # - name: Use an invalid domain
  #   meraki_switchport:
  #     auth_key: '{{ auth_key }}'
  #     host: marrrraki.com
  #     state: query
  #     serial: Q2HP-2C6E-GTLD
  #     org_name: IntTestOrg
  #   delegate_to: localhost
  #   register: invaliddomain
  #   ignore_errors: yes

  # - name: Disable HTTP
  #   meraki_switchport:
  #     auth_key: '{{ auth_key }}'
  #     use_https: false
  #     state: query
  #     serial: Q2HP-2C6E-
  #     output_level: debug
  #   delegate_to: localhost
  #   register: http
  #   ignore_errors: yes

  # - name: Connection assertions
  #   assert:
  #     that:
  #       - '"Failed to connect to" in invaliddomain.msg'
  #       - '"http" in http.url'

  - set_fact:
      syslog_test_net_name: 'syslog_{{test_net_name}}'

  - name: Create network with type appliance and no timezone
    meraki_network:
      auth_key: '{{ auth_key }}'
      state: present
      org_name: '{{test_org_name}}'
      net_name: '{{test_net_name}}'
      type: appliance
    delegate_to: localhost
    register: new_net

  - set_fact:
      net_id: '{{new_net.data.id}}'

  - name: Query syslog settings
    meraki_syslog:
      auth_key: '{{auth_key}}'
      org_name: '{{test_org_name}}'
      net_name: '{{test_net_name}}'
      state: query
    delegate_to: localhost
    register: query_all

  - debug:
      msg: '{{query_all}}'

  - name: Set syslog server
    meraki_syslog:
      auth_key: '{{auth_key}}'
      org_name: '{{test_org_name}}'
      net_name: '{{test_net_name}}'
      state: present
      servers:
        - host: 192.0.1.2
          port: 514
          roles:
            - Appliance event log
    delegate_to: localhost
    register: create_server

  - debug:
      msg: '{{create_server.data}}'

  - assert:
      that:
        - create_server['data'][0]['host'] == "192.0.1.2"

  - name: Set syslog server with idempotency
    meraki_syslog:
      auth_key: '{{auth_key}}'
      org_name: '{{test_org_name}}'
      net_name: '{{test_net_name}}'
      state: present
      servers:
        - host: 192.0.1.2
          port: 514
          roles:
            - Appliance event log
    delegate_to: localhost
    register: create_server_idempotency

  - debug:
      msg: '{{create_server_idempotency}}'

  - assert:
      that:
        - create_server_idempotency.changed == False
        - create_server_idempotency.data is defined

  - name: Set multiple syslog servers
    meraki_syslog:
      auth_key: '{{auth_key}}'
      org_name: '{{test_org_name}}'
      net_id: '{{net_id}}'
      state: present
      servers:
        - host: 192.0.1.3
          port: 514
          roles:
            - Appliance event log
        - host: 192.0.1.4
          port: 514
          roles:
            - Appliance Event log
            - Flows
        - host: 192.0.1.5
          port: 514
          roles:
            - Flows
    delegate_to: localhost
    register: create_multiple_servers

  - debug:
      msg: '{{create_multiple_servers}}'

  - assert:
      that:
        - create_multiple_servers['data'][0]['host'] == "192.0.1.3"
        - create_multiple_servers['data'][1]['host'] == "192.0.1.4"
        - create_multiple_servers['data'][2]['host'] == "192.0.1.5"
        - create_multiple_servers['data'] | length == 3

  - name: Create syslog server with bad name
    meraki_syslog:
      auth_key: '{{auth_key}}'
      org_name: '{{test_org_name}}'
      net_name: '{{test_net_name}}'
      state: present
      servers:
        - host: 192.0.1.6
          port: 514
          roles:
            - Invalid role
    delegate_to: localhost
    register: invalid_role
    ignore_errors: yes

  # - debug:
  #     msg: '{{invalid_role.body.errors.0}}'

  - assert:
      that:
        - '"Please select at least one valid role" in invalid_role.body.errors.0'

  - name: Add role to existing syslog server  # Adding doesn't work, just creation
    meraki_syslog:
      auth_key: '{{auth_key}}'
      org_name: '{{test_org_name}}'
      net_name: '{{test_net_name}}'
      state: present
      servers:
        - host: 192.0.1.2
          port: 514
          roles:
            - flows
    delegate_to: localhost
    register: add_role

  - debug:
      msg: '{{add_role.data.0.roles}}'

  - assert:
      that:
        - add_role.data.0.roles.0 == 'Flows'
        # - add_role.data.0.roles | length == 2

  always:
  - name: Delete syslog test network
    meraki_network:
      auth_key: '{{ auth_key }}'
      state: absent
      org_name: '{{test_org_name}}'
      net_name: '{{test_net_name}}'
    delegate_to: localhost
    register: delete_all
    ignore_errors: yes
closed
ansible/ansible
https://github.com/ansible/ansible
64,968
Meraki - Integration tests shouldn't have connection tests
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
Many of the Meraki modules' integration tests have connection tests. These tests should be moved into unit tests.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
meraki

##### ANSIBLE VERSION
```paste below
2.10
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```

<!--- HINT: You can paste gist.github.com links for larger files -->

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->

##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
https://github.com/ansible/ansible/issues/64968
https://github.com/ansible/ansible/pull/64975
502fc2087ec98e57948dd17a977ee65f1b3bf036
2cf079bc8fb0bfc21df9173f6abe1910448af1ff
2019-11-17T21:00:28Z
python
2019-11-20T14:24:12Z
test/integration/targets/meraki_vlan/tasks/main.yml
# Test code for the Meraki VLAN module # Copyright: (c) 2018, Kevin Breit (@kbreit) # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) --- - block: - name: Test an API key is provided fail: msg: Please define an API key when: auth_key is not defined - name: Use an invalid domain meraki_vlan: auth_key: '{{ auth_key }}' host: marrrraki.com state: present org_name: IntTestOrg output_level: debug delegate_to: localhost register: invalid_domain ignore_errors: yes - name: Disable HTTPS meraki_vlan: auth_key: '{{ auth_key }}' use_https: false state: query output_level: debug delegate_to: localhost register: http ignore_errors: yes - name: Connection assertions assert: that: - '"Failed to connect to" in invalid_domain.msg' - '"http" in http.url' - name: Test play without auth_key meraki_network: state: present org_name: '{{test_org_name}}' net_name: '{{test_net_name}}' type: appliance delegate_to: localhost ignore_errors: yes register: no_key - assert: that: - '"missing required arguments" in no_key.msg' - name: Create test network meraki_network: auth_key: '{{auth_key}}' state: present org_name: '{{test_org_name}}' net_name: '{{test_net_name}}' type: appliance delegate_to: localhost - name: Enable VLANs on network meraki_network: auth_key: '{{auth_key}}' state: present org_name: '{{test_org_name}}' net_name: '{{test_net_name}}' enable_vlans: yes delegate_to: localhost - name: Create VLAN in check mode meraki_vlan: auth_key: '{{auth_key}}' state: present org_name: '{{test_org_name}}' net_name: '{{test_net_name}}' vlan_id: 2 name: TestVLAN subnet: 192.168.250.0/24 appliance_ip: 192.168.250.1 delegate_to: localhost register: create_vlan_check check_mode: yes - debug: var: create_vlan_check - assert: that: - create_vlan_check is changed - name: Create VLAN meraki_vlan: auth_key: '{{auth_key}}' state: present org_name: '{{test_org_name}}' net_name: '{{test_net_name}}' vlan_id: 2 name: TestVLAN subnet: 192.168.250.0/24 appliance_ip: 
192.168.250.1 delegate_to: localhost register: create_vlan environment: ANSIBLE_MERAKI_FORMAT: camelcase - debug: msg: '{{create_vlan}}' - assert: that: - create_vlan.data.id == 2 - create_vlan.changed == True - create_vlan.data.networkId is defined - name: Update VLAN with check mode meraki_vlan: auth_key: '{{auth_key}}' state: present org_name: '{{test_org_name}}' net_name: '{{test_net_name}}' vlan_id: 2 name: TestVLAN subnet: 192.168.250.0/24 appliance_ip: 192.168.250.2 fixed_ip_assignments: - mac: "13:37:de:ad:be:ef" ip: 192.168.250.10 name: fixed_ip reserved_ip_range: - start: 192.168.250.10 end: 192.168.250.20 comment: reserved_range dns_nameservers: opendns delegate_to: localhost register: update_vlan_check check_mode: yes - debug: var: update_vlan_check - assert: that: - update_vlan_check is changed - name: Update VLAN meraki_vlan: auth_key: '{{auth_key}}' state: present org_name: '{{test_org_name}}' net_name: '{{test_net_name}}' vlan_id: 2 name: TestVLAN subnet: 192.168.250.0/24 appliance_ip: 192.168.250.2 fixed_ip_assignments: - mac: "13:37:de:ad:be:ef" ip: 192.168.250.10 name: fixed_ip reserved_ip_range: - start: 192.168.250.10 end: 192.168.250.20 comment: reserved_range dns_nameservers: opendns delegate_to: localhost register: update_vlan - debug: msg: '{{update_vlan}}' - assert: that: - update_vlan.data.appliance_ip == '192.168.250.2' - update_vlan.changed == True - name: Update VLAN with idempotency and check mode meraki_vlan: auth_key: '{{auth_key}}' state: present org_name: '{{test_org_name}}' net_name: '{{test_net_name}}' vlan_id: 2 name: TestVLAN subnet: 192.168.250.0/24 appliance_ip: 192.168.250.2 fixed_ip_assignments: - mac: "13:37:de:ad:be:ef" ip: 192.168.250.10 name: fixed_ip reserved_ip_range: - start: 192.168.250.10 end: 192.168.250.20 comment: reserved_range dns_nameservers: opendns delegate_to: localhost register: update_vlan_idempotent_check check_mode: yes - debug: var: update_vlan_idempotent_check - assert: that: - 
update_vlan_idempotent_check is not changed - name: Update VLAN with idempotency meraki_vlan: auth_key: '{{auth_key}}' state: present org_name: '{{test_org_name}}' net_name: '{{test_net_name}}' vlan_id: 2 name: TestVLAN subnet: 192.168.250.0/24 appliance_ip: 192.168.250.2 fixed_ip_assignments: - mac: "13:37:de:ad:be:ef" ip: 192.168.250.10 name: fixed_ip reserved_ip_range: - start: 192.168.250.10 end: 192.168.250.20 comment: reserved_range dns_nameservers: opendns delegate_to: localhost register: update_vlan_idempotent - debug: msg: '{{update_vlan_idempotent}}' - assert: that: - update_vlan_idempotent.changed == False - update_vlan_idempotent.data is defined - name: Add IP assignments and reserved IP ranges meraki_vlan: auth_key: '{{auth_key}}' state: present org_name: '{{test_org_name}}' net_name: '{{test_net_name}}' vlan_id: 2 name: TestVLAN subnet: 192.168.250.0/24 appliance_ip: 192.168.250.2 fixed_ip_assignments: - mac: "13:37:de:ad:be:ef" ip: 192.168.250.10 name: fixed_ip - mac: "12:34:56:78:90:12" ip: 192.168.250.11 name: another_fixed_ip reserved_ip_range: - start: 192.168.250.10 end: 192.168.250.20 comment: reserved_range - start: 192.168.250.100 end: 192.168.250.120 comment: reserved_range_high dns_nameservers: opendns delegate_to: localhost register: update_vlan_add_ip - debug: msg: '{{update_vlan_add_ip}}' - assert: that: - update_vlan_add_ip.changed == True - update_vlan_add_ip.data.fixed_ip_assignments | length == 2 - update_vlan_add_ip.data.reserved_ip_ranges | length == 2 - name: Remove IP assignments and reserved IP ranges meraki_vlan: auth_key: '{{auth_key}}' state: present org_name: '{{test_org_name}}' net_name: '{{test_net_name}}' vlan_id: 2 name: TestVLAN subnet: 192.168.250.0/24 appliance_ip: 192.168.250.2 fixed_ip_assignments: - mac: "13:37:de:ad:be:ef" ip: 192.168.250.10 name: fixed_ip reserved_ip_range: - start: 192.168.250.10 end: 192.168.250.20 comment: reserved_range dns_nameservers: opendns delegate_to: localhost register: 
update_vlan_remove_ip - debug: msg: '{{update_vlan_remove_ip}}' - assert: that: - update_vlan_remove_ip.changed == True - update_vlan_remove_ip.data.fixed_ip_assignments | length == 1 - update_vlan_remove_ip.data.reserved_ip_ranges | length == 1 - name: Update VLAN with idempotency meraki_vlan: auth_key: '{{auth_key}}' state: present org_name: '{{test_org_name}}' net_name: '{{test_net_name}}' vlan_id: 2 name: TestVLAN subnet: 192.168.250.0/24 appliance_ip: 192.168.250.2 fixed_ip_assignments: - mac: "13:37:de:ad:be:ef" ip: 192.168.250.10 name: fixed_ip reserved_ip_range: - start: 192.168.250.10 end: 192.168.250.20 comment: reserved_range dns_nameservers: opendns delegate_to: localhost register: update_vlan_idempotent - debug: msg: '{{update_vlan_idempotent}}' - assert: that: - update_vlan_idempotent.changed == False - update_vlan_idempotent.data is defined - name: Update VLAN with list of DNS entries meraki_vlan: auth_key: '{{auth_key}}' state: present org_name: '{{test_org_name}}' net_name: '{{test_net_name}}' vlan_id: 2 name: TestVLAN subnet: 192.168.250.0/24 appliance_ip: 192.168.250.2 fixed_ip_assignments: - mac: "13:37:de:ad:be:ef" ip: 192.168.250.10 name: fixed_ip reserved_ip_range: - start: 192.168.250.10 end: 192.168.250.20 comment: reserved_range dns_nameservers: 1.1.1.1;8.8.8.8 delegate_to: localhost register: update_vlan_dns_list - debug: msg: '{{update_vlan_dns_list}}' - assert: that: - '"1.1.1.1" in update_vlan_dns_list.data.dns_nameservers' - update_vlan_dns_list.changed == True - name: Query all VLANs in network meraki_vlan: auth_key: '{{ auth_key }}' org_name: '{{test_org_name}}' net_name: '{{test_net_name}}' state: query delegate_to: localhost register: query_vlans - debug: msg: '{{query_vlans}}' - assert: that: - query_vlans.data | length >= 2 - query_vlans.data.1.id == 2 - query_vlans.changed == False - name: Query single VLAN meraki_vlan: auth_key: '{{ auth_key }}' org_name: '{{test_org_name}}' net_name: '{{test_net_name}}' vlan_id: 2 state: 
query output_level: debug delegate_to: localhost register: query_vlan - debug: msg: '{{query_vlan}}' - assert: that: - query_vlan.data.id == 2 - query_vlan.changed == False always: ############################################################################# # Tear down starts here ############################################################################# - name: Delete VLAN with check mode meraki_vlan: auth_key: '{{auth_key}}' state: absent org_name: '{{test_org_name}}' net_name: '{{test_net_name}}' vlan_id: 2 delegate_to: localhost register: delete_vlan_check check_mode: yes - assert: that: delete_vlan_check is changed - name: Delete VLAN meraki_vlan: auth_key: '{{auth_key}}' state: absent org_name: '{{test_org_name}}' net_name: '{{test_net_name}}' vlan_id: 2 delegate_to: localhost register: delete_vlan - debug: msg: '{{delete_vlan}}' - name: Delete test network meraki_network: auth_key: '{{auth_key}}' state: absent org_name: '{{test_org_name}}' net_name: '{{test_net_name}}' delegate_to: localhost
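The VLAN tasks above lean heavily on `changed` to assert idempotency: the same payload applied twice must report a change only once. A minimal sketch of how such a module can decide whether an update call is needed by diffing the proposed payload against the live config — the helper name is hypothetical, not the actual `meraki_vlan` implementation:

```python
def is_update_required(original, proposed):
    """Return True when any proposed setting differs from the live config."""
    return any(original.get(key) != value for key, value in proposed.items())


existing = {"id": 2, "name": "TestVLAN", "applianceIp": "192.168.250.1"}
payload = {"name": "TestVLAN", "applianceIp": "192.168.250.2"}

# Same payload twice: the first run updates, the second is a no-op,
# mirroring the "Update VLAN" / "Update VLAN with idempotency" pair above.
print(is_update_required(existing, payload))   # True  -> changed == True
existing.update(payload)
print(is_update_required(existing, payload))   # False -> changed == False
```

Keys absent from the payload are left alone, which is why the idempotency check-mode task above can assert `is not changed` without resending every VLAN attribute.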
closed
ansible/ansible
https://github.com/ansible/ansible
64,969
openssh_keypair module not idempotent on debian
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY `openssh_keypair` seem to report changed in cases where it shouldn't ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> openssh_keypair ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.9.1 config file = None configured module search path = ['/home/florian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/florian/.local/pipx/venvs/ansible/lib64/python3.7/site-packages/ansible executable location = /home/florian/.local/bin/ansible python version = 3.7.5 (default, Oct 17 2019, 12:16:48) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ANSIBLE_NOCOWS(env: ANSIBLE_NOCOWS) = True ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. 
--> Debian Sid in a docker container, see playbook below ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml --- - hosts: localhost gather_facts: no tasks: - docker_container: name=testing_debian state=started image=debian:sid command="sleep infinity" - add_host: name=testing_debian ansible_connection=docker - hosts: testing_debian gather_facts: no tasks: # Ensure we start from a consistent state - raw: "apt -y update && apt -y install python openssh-client" - file: path=/tmp/key state=absent - file: path=/tmp/key.pub state=absent # full reproducer here - openssh_keypair: path=/tmp/key comment=test type=ed25519 - openssh_keypair: path=/tmp/key comment=test type=ed25519 ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> I'd expect the second `openssh_keypair` task to report unchanged. ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> Both tasks show up as changed: <!--- Paste verbatim command output between quotes --> ```paste below TASK [file] ********************************************************************************************************************************************************************************************************************************** [WARNING]: Platform linux on host testing_debian is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. 
ok: [testing_debian] => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "path": "/tmp/key", "state": "absent"} TASK [file] ********************************************************************************************************************************************************************************************************************************** ok: [testing_debian] => {"changed": false, "path": "/tmp/key.pub", "state": "absent"} TASK [openssh_keypair] *********************************************************************************************************************************************************************************************************************** changed: [testing_debian] => {"changed": true, "comment": "test", "filename": "/tmp/key", "fingerprint": "SHA256:k7zoCrUKcwjfvSsKSr+zElJL5wGgCbmNsgGdD4qXp6o", "public_key": "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ5JTc62+6h0NlqM0r5pranKV3kXS7+XbzO9ZPeY2f36 test", "size": 256, "type": "ed25519"} TASK [openssh_keypair] *********************************************************************************************************************************************************************************************************************** changed: [testing_debian] => {"changed": true, "comment": "test", "filename": "/tmp/key", "fingerprint": "SHA256:k7zoCrUKcwjfvSsKSr+zElJL5wGgCbmNsgGdD4qXp6o", "public_key": "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ5JTc62+6h0NlqM0r5pranKV3kXS7+XbzO9ZPeY2f36 test", "size": 256, "type": "ed25519"} ```
https://github.com/ansible/ansible/issues/64969
https://github.com/ansible/ansible/pull/65017
509b989a9a525d0d1b2dbcf0187bfda2eabe4ce5
b36f57225665de07c31d6affac541adc12207040
2019-11-17T21:05:27Z
python
2019-11-20T20:02:26Z
changelogs/fragments/65017-openssh_keypair-idempotence.yml
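The report above boils down to comparing the stored `.pub` line against output freshly derived from the private key with `ssh-keygen -y`. Newer OpenSSH releases can append the stored key comment to that `-y` output, so a naive full-string comparison flags a spurious difference and reports `changed` on every run. A sketch of a comparison that stays idempotent by ignoring the comment field — an illustration of the failure mode, not the actual fix from the linked PR:

```python
def same_key(stored_line, derived_line):
    # Compare only the algorithm and base64 key material; the trailing
    # comment may or may not be present depending on the ssh-keygen version.
    return stored_line.split()[:2] == derived_line.split()[:2]


stored = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ5JTc62 test"

# Both forms of the derived line match the stored key material:
print(same_key(stored, "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ5JTc62"))        # True
print(same_key(stored, "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ5JTc62 test"))   # True
```

A `stored_line == derived_line` comparison would call the first pair different, which is the kind of false "changed" the reproducer's second `openssh_keypair` task shows.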
closed
ansible/ansible
https://github.com/ansible/ansible
64,969
openssh_keypair module not idempotent on debian
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY `openssh_keypair` seem to report changed in cases where it shouldn't ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> openssh_keypair ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.9.1 config file = None configured module search path = ['/home/florian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/florian/.local/pipx/venvs/ansible/lib64/python3.7/site-packages/ansible executable location = /home/florian/.local/bin/ansible python version = 3.7.5 (default, Oct 17 2019, 12:16:48) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ANSIBLE_NOCOWS(env: ANSIBLE_NOCOWS) = True ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. 
--> Debian Sid in a docker container, see playbook below ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml --- - hosts: localhost gather_facts: no tasks: - docker_container: name=testing_debian state=started image=debian:sid command="sleep infinity" - add_host: name=testing_debian ansible_connection=docker - hosts: testing_debian gather_facts: no tasks: # Ensure we start from a consistent state - raw: "apt -y update && apt -y install python openssh-client" - file: path=/tmp/key state=absent - file: path=/tmp/key.pub state=absent # full reproducer here - openssh_keypair: path=/tmp/key comment=test type=ed25519 - openssh_keypair: path=/tmp/key comment=test type=ed25519 ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> I'd expect the second `openssh_keypair` task to report unchanged. ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> Both tasks show up as changed: <!--- Paste verbatim command output between quotes --> ```paste below TASK [file] ********************************************************************************************************************************************************************************************************************************** [WARNING]: Platform linux on host testing_debian is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. 
ok: [testing_debian] => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "path": "/tmp/key", "state": "absent"} TASK [file] ********************************************************************************************************************************************************************************************************************************** ok: [testing_debian] => {"changed": false, "path": "/tmp/key.pub", "state": "absent"} TASK [openssh_keypair] *********************************************************************************************************************************************************************************************************************** changed: [testing_debian] => {"changed": true, "comment": "test", "filename": "/tmp/key", "fingerprint": "SHA256:k7zoCrUKcwjfvSsKSr+zElJL5wGgCbmNsgGdD4qXp6o", "public_key": "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ5JTc62+6h0NlqM0r5pranKV3kXS7+XbzO9ZPeY2f36 test", "size": 256, "type": "ed25519"} TASK [openssh_keypair] *********************************************************************************************************************************************************************************************************************** changed: [testing_debian] => {"changed": true, "comment": "test", "filename": "/tmp/key", "fingerprint": "SHA256:k7zoCrUKcwjfvSsKSr+zElJL5wGgCbmNsgGdD4qXp6o", "public_key": "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ5JTc62+6h0NlqM0r5pranKV3kXS7+XbzO9ZPeY2f36 test", "size": 256, "type": "ed25519"} ```
https://github.com/ansible/ansible/issues/64969
https://github.com/ansible/ansible/pull/65017
509b989a9a525d0d1b2dbcf0187bfda2eabe4ce5
b36f57225665de07c31d6affac541adc12207040
2019-11-17T21:05:27Z
python
2019-11-20T20:02:26Z
lib/ansible/modules/crypto/openssh_keypair.py
#!/usr/bin/python # -*- coding: utf-8 -*- # (c) 2018, David Kainz <[email protected]> <[email protected]> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = { 'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community' } DOCUMENTATION = ''' --- module: openssh_keypair author: "David Kainz (@lolcube)" version_added: "2.8" short_description: Generate OpenSSH private and public keys. description: - "This module allows one to (re)generate OpenSSH private and public keys. It uses ssh-keygen to generate keys. One can generate C(rsa), C(dsa), C(rsa1), C(ed25519) or C(ecdsa) private keys." requirements: - "ssh-keygen" options: state: description: - Whether the private and public keys should exist or not, taking action if the state is different from what is stated. type: str default: present choices: [ present, absent ] size: description: - "Specifies the number of bits in the private key to create. For RSA keys, the minimum size is 1024 bits and the default is 4096 bits. Generally, 2048 bits is considered sufficient. DSA keys must be exactly 1024 bits as specified by FIPS 186-2. For ECDSA keys, size determines the key length by selecting from one of three elliptic curve sizes: 256, 384 or 521 bits. Attempting to use bit lengths other than these three values for ECDSA keys will cause this module to fail. Ed25519 keys have a fixed length and the size will be ignored." type: int type: description: - "The algorithm used to generate the SSH private key. C(rsa1) is for protocol version 1. C(rsa1) is deprecated and may not be supported by every version of ssh-keygen." 
type: str default: rsa choices: ['rsa', 'dsa', 'rsa1', 'ecdsa', 'ed25519'] force: description: - Should the key be regenerated even if it already exists type: bool default: false path: description: - Name of the files containing the public and private key. The file containing the public key will have the extension C(.pub). type: path required: true comment: description: - Provides a new comment to the public key. When checking if the key is in the correct state this will be ignored. type: str version_added: "2.9" extends_documentation_fragment: files ''' EXAMPLES = ''' # Generate an OpenSSH keypair with the default values (4096 bits, rsa) - openssh_keypair: path: /tmp/id_ssh_rsa # Generate an OpenSSH rsa keypair with a different size (2048 bits) - openssh_keypair: path: /tmp/id_ssh_rsa size: 2048 # Force regenerate an OpenSSH keypair if it already exists - openssh_keypair: path: /tmp/id_ssh_rsa force: True # Generate an OpenSSH keypair with a different algorithm (dsa) - openssh_keypair: path: /tmp/id_ssh_dsa type: dsa ''' RETURN = ''' size: description: Size (in bits) of the SSH private key returned: changed or success type: int sample: 4096 type: description: Algorithm used to generate the SSH private key returned: changed or success type: str sample: rsa filename: description: Path to the generated SSH private key file returned: changed or success type: str sample: /tmp/id_ssh_rsa fingerprint: description: The fingerprint of the key. 
returned: changed or success type: str sample: SHA256:r4YCZxihVjedH2OlfjVGI6Y5xAYtdCwk8VxKyzVyYfM public_key: description: The public key of the generated SSH private key returned: changed or success type: str sample: ssh-rsa AAAAB3Nza(...omitted...)veL4E3Xcw== test_key comment: description: The comment of the generated key returned: changed or success type: str sample: test@comment ''' import os import stat import errno from ansible.module_utils.basic import AnsibleModule from ansible.module_utils._text import to_native class KeypairError(Exception): pass class Keypair(object): def __init__(self, module): self.path = module.params['path'] self.state = module.params['state'] self.force = module.params['force'] self.size = module.params['size'] self.type = module.params['type'] self.comment = module.params['comment'] self.changed = False self.check_mode = module.check_mode self.privatekey = None self.fingerprint = {} self.public_key = {} if self.type in ('rsa', 'rsa1'): self.size = 4096 if self.size is None else self.size if self.size < 1024: module.fail_json(msg=('For RSA keys, the minimum size is 1024 bits and the default is 4096 bits. ' 'Attempting to use bit lengths under 1024 will cause the module to fail.')) if self.type == 'dsa': self.size = 1024 if self.size is None else self.size if self.size != 1024: module.fail_json(msg=('DSA keys must be exactly 1024 bits as specified by FIPS 186-2.')) if self.type == 'ecdsa': self.size = 256 if self.size is None else self.size if self.size not in (256, 384, 521): module.fail_json(msg=('For ECDSA keys, size determines the key length by selecting from ' 'one of three elliptic curve sizes: 256, 384 or 521 bits. ' 'Attempting to use bit lengths other than these three values for ' 'ECDSA keys will cause this module to fail. 
')) if self.type == 'ed25519': self.size = 256 def generate(self, module): # generate a keypair if not self.isPrivateKeyValid(module, perms_required=False) or self.force: args = [ module.get_bin_path('ssh-keygen', True), '-q', '-N', '', '-b', str(self.size), '-t', self.type, '-f', self.path, ] if self.comment: args.extend(['-C', self.comment]) else: args.extend(['-C', ""]) try: if os.path.exists(self.path) and not os.access(self.path, os.W_OK): os.chmod(self.path, stat.S_IWUSR + stat.S_IRUSR) self.changed = True stdin_data = None if os.path.exists(self.path): stdin_data = 'y' module.run_command(args, data=stdin_data) proc = module.run_command([module.get_bin_path('ssh-keygen', True), '-lf', self.path]) self.fingerprint = proc[1].split() pubkey = module.run_command([module.get_bin_path('ssh-keygen', True), '-yf', self.path]) self.public_key = pubkey[1].strip('\n') except Exception as e: self.remove() module.fail_json(msg="%s" % to_native(e)) elif not self.isPublicKeyValid(module, perms_required=False): pubkey = module.run_command([module.get_bin_path('ssh-keygen', True), '-yf', self.path]) pubkey = pubkey[1].strip('\n') try: self.changed = True with open(self.path + ".pub", "w") as pubkey_f: pubkey_f.write(pubkey + '\n') os.chmod(self.path + ".pub", stat.S_IWUSR + stat.S_IRUSR + stat.S_IRGRP + stat.S_IROTH) except IOError: module.fail_json( msg='The public key is missing or does not match the private key. 
' 'Unable to regenerate the public key.') self.public_key = pubkey if self.comment: try: if os.path.exists(self.path) and not os.access(self.path, os.W_OK): os.chmod(self.path, stat.S_IWUSR + stat.S_IRUSR) args = [module.get_bin_path('ssh-keygen', True), '-q', '-o', '-c', '-C', self.comment, '-f', self.path] module.run_command(args) except IOError: module.fail_json( msg='Unable to update the comment for the public key.') file_args = module.load_file_common_arguments(module.params) if module.set_fs_attributes_if_different(file_args, False): self.changed = True file_args['path'] = file_args['path'] + '.pub' if module.set_fs_attributes_if_different(file_args, False): self.changed = True def isPrivateKeyValid(self, module, perms_required=True): # check if the key is correct def _check_state(): return os.path.exists(self.path) if _check_state(): proc = module.run_command([module.get_bin_path('ssh-keygen', True), '-lf', self.path], check_rc=False) if not proc[0] == 0: if os.path.isdir(self.path): module.fail_json(msg='%s is a directory. Please specify a path to a file.' 
% (self.path)) return False fingerprint = proc[1].split() keysize = int(fingerprint[0]) keytype = fingerprint[-1][1:-1].lower() else: return False def _check_perms(module): file_args = module.load_file_common_arguments(module.params) return not module.set_fs_attributes_if_different(file_args, False) def _check_type(): return self.type == keytype def _check_size(): return self.size == keysize self.fingerprint = fingerprint if not perms_required: return _check_state() and _check_type() and _check_size() return _check_state() and _check_perms(module) and _check_type() and _check_size() def isPublicKeyValid(self, module, perms_required=True): def _get_pubkey_content(): if os.path.exists(self.path + ".pub"): with open(self.path + ".pub", "r") as pubkey_f: present_pubkey = pubkey_f.read().strip(' \n') return present_pubkey else: return False def _parse_pubkey(): pubkey_content = _get_pubkey_content() if pubkey_content: parts = pubkey_content.split(' ', 2) return parts[0], parts[1], '' if len(parts) <= 2 else parts[2] return False def _pubkey_valid(pubkey): if pubkey_parts: current_pubkey = ' '.join([pubkey_parts[0], pubkey_parts[1]]) return current_pubkey == pubkey return False def _comment_valid(): if pubkey_parts: return pubkey_parts[2] == self.comment return False def _check_perms(module): file_args = module.load_file_common_arguments(module.params) file_args['path'] = file_args['path'] + '.pub' return not module.set_fs_attributes_if_different(file_args, False) pubkey = module.run_command([module.get_bin_path('ssh-keygen', True), '-yf', self.path]) pubkey = pubkey[1].strip('\n') pubkey_parts = _parse_pubkey() if _pubkey_valid(pubkey): self.public_key = pubkey if not self.comment: return _pubkey_valid(pubkey) if not perms_required: return _pubkey_valid(pubkey) and _comment_valid() return _pubkey_valid(pubkey) and _comment_valid() and _check_perms(module) def dump(self): # return result as a dict """Serialize the object into a dictionary.""" result = { 'changed': 
self.changed, 'size': self.size, 'type': self.type, 'filename': self.path, # On removal this has no value 'fingerprint': self.fingerprint[1] if self.fingerprint else '', 'public_key': self.public_key, 'comment': self.comment if self.comment else '', } return result def remove(self): """Remove the resource from the filesystem.""" try: os.remove(self.path) self.changed = True except OSError as exc: if exc.errno != errno.ENOENT: raise KeypairError(exc) else: pass if os.path.exists(self.path + ".pub"): try: os.remove(self.path + ".pub") self.changed = True except OSError as exc: if exc.errno != errno.ENOENT: raise KeypairError(exc) else: pass def main(): # Define Ansible Module module = AnsibleModule( argument_spec=dict( state=dict(type='str', default='present', choices=['present', 'absent']), size=dict(type='int'), type=dict(type='str', default='rsa', choices=['rsa', 'dsa', 'rsa1', 'ecdsa', 'ed25519']), force=dict(type='bool', default=False), path=dict(type='path', required=True), comment=dict(type='str'), ), supports_check_mode=True, add_file_common_args=True, ) # Check if Path exists base_dir = os.path.dirname(module.params['path']) or '.' if not os.path.isdir(base_dir): module.fail_json( name=base_dir, msg='The directory %s does not exist or the file is not a directory' % base_dir ) keypair = Keypair(module) if keypair.state == 'present': if module.check_mode: result = keypair.dump() result['changed'] = module.params['force'] or not keypair.isPrivateKeyValid(module) or not keypair.isPublicKeyValid(module) module.exit_json(**result) try: keypair.generate(module) except Exception as exc: module.fail_json(msg=to_native(exc)) else: if module.check_mode: keypair.changed = os.path.exists(module.params['path']) if keypair.changed: keypair.fingerprint = {} result = keypair.dump() module.exit_json(**result) try: keypair.remove() except Exception as exc: module.fail_json(msg=to_native(exc)) result = keypair.dump() module.exit_json(**result) if __name__ == '__main__': main()
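The type and size checks in `isPrivateKeyValid` above hinge on parsing `ssh-keygen -lf` output. Replayed standalone on the fingerprint line from the bug report (same `split()` / index arithmetic as the module source):

```python
# ssh-keygen -lf prints: "<bits> <hash> <comment> (<TYPE>)"
line = "256 SHA256:k7zoCrUKcwjfvSsKSr+zElJL5wGgCbmNsgGdD4qXp6o test (ED25519)"

fingerprint = line.split()
keysize = int(fingerprint[0])              # first field is the bit length
keytype = fingerprint[-1][1:-1].lower()    # "(ED25519)" -> "ed25519"

print(keysize, keytype)  # 256 ed25519
```

These two values are then compared against the requested `size` and `type`; a mismatch (or a non-zero `ssh-keygen` exit code) is what sends the module down the regeneration path in `generate()`.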
closed
ansible/ansible
https://github.com/ansible/ansible
64,969
openssh_keypair module not idempotent on debian
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY `openssh_keypair` seem to report changed in cases where it shouldn't ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> openssh_keypair ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.9.1 config file = None configured module search path = ['/home/florian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/florian/.local/pipx/venvs/ansible/lib64/python3.7/site-packages/ansible executable location = /home/florian/.local/bin/ansible python version = 3.7.5 (default, Oct 17 2019, 12:16:48) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ANSIBLE_NOCOWS(env: ANSIBLE_NOCOWS) = True ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. 
--> Debian Sid in a docker container, see playbook below ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml --- - hosts: localhost gather_facts: no tasks: - docker_container: name=testing_debian state=started image=debian:sid command="sleep infinity" - add_host: name=testing_debian ansible_connection=docker - hosts: testing_debian gather_facts: no tasks: # Ensure we start from a consistent state - raw: "apt -y update && apt -y install python openssh-client" - file: path=/tmp/key state=absent - file: path=/tmp/key.pub state=absent # full reproducer here - openssh_keypair: path=/tmp/key comment=test type=ed25519 - openssh_keypair: path=/tmp/key comment=test type=ed25519 ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> I'd expect the second `openssh_keypair` task to report unchanged. ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> Both tasks show up as changed: <!--- Paste verbatim command output between quotes --> ```paste below TASK [file] ********************************************************************************************************************************************************************************************************************************** [WARNING]: Platform linux on host testing_debian is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. 
ok: [testing_debian] => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "path": "/tmp/key", "state": "absent"} TASK [file] ********************************************************************************************************************************************************************************************************************************** ok: [testing_debian] => {"changed": false, "path": "/tmp/key.pub", "state": "absent"} TASK [openssh_keypair] *********************************************************************************************************************************************************************************************************************** changed: [testing_debian] => {"changed": true, "comment": "test", "filename": "/tmp/key", "fingerprint": "SHA256:k7zoCrUKcwjfvSsKSr+zElJL5wGgCbmNsgGdD4qXp6o", "public_key": "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ5JTc62+6h0NlqM0r5pranKV3kXS7+XbzO9ZPeY2f36 test", "size": 256, "type": "ed25519"} TASK [openssh_keypair] *********************************************************************************************************************************************************************************************************************** changed: [testing_debian] => {"changed": true, "comment": "test", "filename": "/tmp/key", "fingerprint": "SHA256:k7zoCrUKcwjfvSsKSr+zElJL5wGgCbmNsgGdD4qXp6o", "public_key": "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ5JTc62+6h0NlqM0r5pranKV3kXS7+XbzO9ZPeY2f36 test", "size": 256, "type": "ed25519"} ```
https://github.com/ansible/ansible/issues/64969
https://github.com/ansible/ansible/pull/65017
509b989a9a525d0d1b2dbcf0187bfda2eabe4ce5
b36f57225665de07c31d6affac541adc12207040
2019-11-17T21:05:27Z
python
2019-11-20T20:02:26Z
test/integration/targets/openssh_keypair/tasks/main.yml
- name: Generate privatekey1 - standard openssh_keypair: path: '{{ output_dir }}/privatekey1' register: privatekey1_result - name: Generate privatekey2 - size 2048 openssh_keypair: path: '{{ output_dir }}/privatekey2' size: 2048 - name: Generate privatekey3 - type dsa openssh_keypair: path: '{{ output_dir }}/privatekey3' type: dsa - name: Generate privatekey4 - standard openssh_keypair: path: '{{ output_dir }}/privatekey4' - name: Delete privatekey4 - standard openssh_keypair: state: absent path: '{{ output_dir }}/privatekey4' - name: Generate privatekey5 - standard openssh_keypair: path: '{{ output_dir }}/privatekey5' register: publickey_gen - name: Generate privatekey6 openssh_keypair: path: '{{ output_dir }}/privatekey6' type: rsa - name: Regenerate privatekey6 via force openssh_keypair: path: '{{ output_dir }}/privatekey6' type: rsa force: yes register: output_regenerated_via_force - name: Create broken key copy: dest: '{{ item }}' content: '' mode: '0700' loop: - '{{ output_dir }}/privatekeybroken' - '{{ output_dir }}/privatekeybroken.pub' - name: Regenerate broken key openssh_keypair: path: '{{ output_dir }}/privatekeybroken' type: rsa register: output_broken - name: Generate read-only private key openssh_keypair: path: '{{ output_dir }}/privatekeyreadonly' type: rsa mode: '0200' - name: Regenerate read-only private key via force openssh_keypair: path: '{{ output_dir }}/privatekeyreadonly' type: rsa force: yes register: output_read_only - name: Generate privatekey7 - standard with comment openssh_keypair: path: '{{ output_dir }}/privatekey7' comment: 'test@privatekey7' register: privatekey7_result - name: Modify privatekey7 comment openssh_keypair: path: '{{ output_dir }}/privatekey7' comment: 'test_modified@privatekey7' register: privatekey7_modified_result - import_tasks: ../tests/validate.yml
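The tasks above exercise three regeneration paths: `force: yes`, a deliberately broken key, and an otherwise valid pair that should be left alone. The decision being tested can be condensed into the predicate below — a simplification of the control flow in `generate()`, not the module code itself:

```python
def needs_regeneration(force, private_key_valid):
    # force always wins; otherwise only an invalid/broken private key
    # triggers a fresh ssh-keygen run.
    return force or not private_key_valid


print(needs_regeneration(force=True, private_key_valid=True))    # True:  "via force" tasks
print(needs_regeneration(force=False, private_key_valid=False))  # True:  broken-key task
print(needs_regeneration(force=False, private_key_valid=True))   # False: idempotent rerun
```

The third case is precisely what the Debian reproducer expected and did not get: a valid, unchanged keypair should make the predicate false and leave `changed` unset.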
closed
ansible/ansible
https://github.com/ansible/ansible
64,969
openssh_keypair module not idempotent on debian
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
`openssh_keypair` seems to report changed in cases where it shouldn't

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
openssh_keypair

##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
  config file = None
  configured module search path = ['/home/florian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/florian/.local/pipx/venvs/ansible/lib64/python3.7/site-packages/ansible
  executable location = /home/florian/.local/bin/ansible
  python version = 3.7.5 (default, Oct 17 2019, 12:16:48) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(env: ANSIBLE_NOCOWS) = True
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Debian Sid in a docker container, see playbook below

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: localhost
  gather_facts: no
  tasks:
    - docker_container: name=testing_debian state=started image=debian:sid command="sleep infinity"
    - add_host: name=testing_debian ansible_connection=docker

- hosts: testing_debian
  gather_facts: no
  tasks:
    # Ensure we start from a consistent state
    - raw: "apt -y update && apt -y install python openssh-client"
    - file: path=/tmp/key state=absent
    - file: path=/tmp/key.pub state=absent
    # full reproducer here
    - openssh_keypair: path=/tmp/key comment=test type=ed25519
    - openssh_keypair: path=/tmp/key comment=test type=ed25519
```
<!--- HINT: You can paste gist.github.com links for larger files -->

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I'd expect the second `openssh_keypair` task to report unchanged.

##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Both tasks show up as changed:
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [file] **********************************************************************************************************
[WARNING]: Platform linux on host testing_debian is using the discovered Python interpreter at /usr/bin/python, but
future installation of another Python interpreter could change this. See
https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
ok: [testing_debian] => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "path": "/tmp/key", "state": "absent"}

TASK [file] **********************************************************************************************************
ok: [testing_debian] => {"changed": false, "path": "/tmp/key.pub", "state": "absent"}

TASK [openssh_keypair] ***********************************************************************************************
changed: [testing_debian] => {"changed": true, "comment": "test", "filename": "/tmp/key", "fingerprint": "SHA256:k7zoCrUKcwjfvSsKSr+zElJL5wGgCbmNsgGdD4qXp6o", "public_key": "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ5JTc62+6h0NlqM0r5pranKV3kXS7+XbzO9ZPeY2f36 test", "size": 256, "type": "ed25519"}

TASK [openssh_keypair] ***********************************************************************************************
changed: [testing_debian] => {"changed": true, "comment": "test", "filename": "/tmp/key", "fingerprint": "SHA256:k7zoCrUKcwjfvSsKSr+zElJL5wGgCbmNsgGdD4qXp6o", "public_key": "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ5JTc62+6h0NlqM0r5pranKV3kXS7+XbzO9ZPeY2f36 test", "size": 256, "type": "ed25519"}
```
https://github.com/ansible/ansible/issues/64969
https://github.com/ansible/ansible/pull/65017
509b989a9a525d0d1b2dbcf0187bfda2eabe4ce5
b36f57225665de07c31d6affac541adc12207040
2019-11-17T21:05:27Z
python
2019-11-20T20:02:26Z
test/integration/targets/openssh_keypair/tests/validate.yml
---
- name: Log privatekey1 return values
  debug:
    var: privatekey1_result

- name: Validate privatekey1 return fingerprint
  assert:
    that:
      - privatekey1_result["fingerprint"] is string
      - privatekey1_result["fingerprint"].startswith("SHA256:")
  # only distro old enough that it still gives md5 with no prefix
  when: ansible_distribution != 'CentOS' and ansible_distribution_major_version != '6'

- name: Validate privatekey1 return public_key
  assert:
    that:
      - privatekey1_result["public_key"] is string
      - privatekey1_result["public_key"].startswith("ssh-rsa ")

- name: Validate privatekey1 return size value
  assert:
    that:
      - privatekey1_result["size"]|type_debug == 'int'
      - privatekey1_result["size"] == 4096

- name: Validate privatekey1 return key type
  assert:
    that:
      - privatekey1_result["type"] is string
      - privatekey1_result["type"] == "rsa"

- name: Validate privatekey1 (test - RSA key with size 4096 bits)
  shell: "ssh-keygen -lf {{ output_dir }}/privatekey1 | grep -o -E '^[0-9]+'"
  register: privatekey1

- name: Validate privatekey1 (assert - RSA key with size 4096 bits)
  assert:
    that:
      - privatekey1.stdout == '4096'

- name: Validate privatekey2 (test - RSA key with size 2048 bits)
  shell: "ssh-keygen -lf {{ output_dir }}/privatekey2 | grep -o -E '^[0-9]+'"
  register: privatekey2

- name: Validate privatekey2 (assert - RSA key with size 2048 bits)
  assert:
    that:
      - privatekey2.stdout == '2048'

- name: Validate privatekey3 (test - DSA key with size 1024 bits)
  shell: "ssh-keygen -lf {{ output_dir }}/privatekey3 | grep -o -E '^[0-9]+'"
  register: privatekey3

- name: Validate privatekey3 (assert - DSA key with size 4096 bits)
  assert:
    that:
      - privatekey3.stdout == '1024'

- name: Validate privatekey4 (test - Ensure key has been removed)
  stat:
    path: '{{ output_dir }}/privatekey4'
  register: privatekey4

- name: Validate privatekey4 (assert - Ensure key has been removed)
  assert:
    that:
      - privatekey4.stat.exists == False

- name: Validate privatekey5 (assert - Public key module output equal to the public key on host)
  assert:
    that:
      - "publickey_gen.public_key == lookup('file', output_dir ~ '/privatekey5.pub').strip('\n')"

- name: Verify that privatekey6 will be regenerated via force
  assert:
    that:
      - output_regenerated_via_force is changed

- name: Verify that broken key will be regenerated
  assert:
    that:
      - output_broken is changed

- name: Verify that read-only key will be regenerated
  assert:
    that:
      - output_read_only is changed

- name: Validate privatekey7 (assert - Public key remains the same after comment change)
  assert:
    that:
      - privatekey7_result.public_key == privatekey7_modified_result.public_key

- name: Validate privatekey7 comment on creation
  assert:
    that:
      - privatekey7_result.comment == 'test@privatekey7'

- name: Validate privatekey7 comment update
  assert:
    that:
      - privatekey7_modified_result.comment == 'test_modified@privatekey7'
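The idempotency bug reported above boils down to comparing the desired comment with what is already on disk instead of unconditionally rewriting the key. A minimal sketch of that comparison (plain Python, not the module's actual implementation); it assumes the standard one-line OpenSSH public-key format `'<type> <base64-key> [comment]'`:

```python
def needs_comment_update(public_key_line, desired_comment):
    # OpenSSH public keys are '<type> <base64-key> [comment]'; the
    # comment is everything after the second space-separated field.
    parts = public_key_line.strip().split(' ', 2)
    current = parts[2] if len(parts) == 3 else ''
    # Only report a change when the on-disk comment differs.
    return current != desired_comment
```

With a check like this, the second `openssh_keypair` run in the reproducer would find the comment already set to `test` and report unchanged.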
closed
ansible/ansible
https://github.com/ansible/ansible
64684
redfish_command - failure on user account deletion
### SUMMARY
Running the example playbook from https://docs.ansible.com/ansible/latest/modules/redfish_command_module.html#examples to disable and delete a certain user works as expected, but results in an error message

> The specified value is not allowed to be configured if the user name \\nor password is blank.'

Note that the user to be deleted (in the example below the user with id 5) _is different_ from the user specified in "username". The user (in the example below the user with id 5) is deleted as expected despite the error message.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
redfish_command

##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
  config file = /home/ansible/ansible_local/ansible.cfg
  configured module search path = [u'/home/ansible/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```

##### OS / ENVIRONMENT
CentOS Linux release 7.6.1810 (Core)
Dell R640, iDRAC Firmware 3.32.32.32

##### STEPS TO REPRODUCE
```yaml
- name: Disable and delete user
  hosts: dellR640
  gather_facts: False
  tasks:
    - name: Disable and delete user with id 5
      local_action:
        module: redfish_command
        category: Accounts
        command: ["DisableUser", "DeleteUser"]
        baseuri: "{{ baseuri }}"
        username: "{{ username }}"
        password: "{{ password }}"
        id: "5"
```

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->

##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
fatal: [dellR640 -> localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "account_properties": {},
            "baseuri": "dellR640idrac",
            "boot_next": null,
            "bootdevice": null,
            "category": "Accounts",
            "command": [
                "DisableUser",
                "DeleteUser"
            ],
            "id": "5",
            "new_password": null,
            "new_username": null,
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "resource_id": null,
            "roleid": null,
            "timeout": 10,
            "uefi_target": null,
            "update_username": null,
            "username": "customadminuser"
        }
    },
    "msg": "HTTP Error 400 on PATCH request to 'https://dellR640idrac/redfish/v1/Managers/iDRAC.Embedded.1/Accounts/5', extended message: 'The specified value is not allowed to be configured if the user name \\nor password is blank.'"
}
```
https://github.com/ansible/ansible/issues/64684
https://github.com/ansible/ansible/pull/64797
b36f57225665de07c31d6affac541adc12207040
f51f87a986b54329e731e3cccb16049011009cb1
2019-11-11T21:12:20Z
python
2019-11-20T20:03:19Z
changelogs/fragments/64797-fix-error-deleting-redfish-acct.yaml
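The HTTP 400 above stems from deletion being implemented as a PATCH of account properties, which some services reject when UserName/Password are blank. A hedged sketch of the request-planning logic a fix could take (the function name, the `supports_delete` flag, and the fallback behavior are illustrative, not necessarily what PR 64797 does): prefer a plain DELETE on the account resource, and only fall back to blanking `UserName` via PATCH when DELETE is unsupported:

```python
def plan_account_removal(accounts_uri, account_id, supports_delete=True):
    # Build an (HTTP method, URI, payload) triple for removing a
    # Redfish account under the given Accounts collection URI.
    uri = '%s/%s' % (accounts_uri.rstrip('/'), account_id)
    if supports_delete:
        # DELETE removes the resource without touching its properties,
        # so no username/password validation is triggered on the BMC.
        return ('DELETE', uri, None)
    # Legacy fallback: clearing UserName marks the account slot unused.
    return ('PATCH', uri, {'UserName': ''})
```

Either way the target is the account with id 5, distinct from the credentials used to authenticate the request.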
closed
ansible/ansible
https://github.com/ansible/ansible
64684
redfish_command - failure on user account deletion
### SUMMARY
Running the example playbook from https://docs.ansible.com/ansible/latest/modules/redfish_command_module.html#examples to disable and delete a certain user works as expected, but results in an error message

> The specified value is not allowed to be configured if the user name \\nor password is blank.'

Note that the user to be deleted (in the example below the user with id 5) _is different_ from the user specified in "username". The user (in the example below the user with id 5) is deleted as expected despite the error message.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
redfish_command

##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
  config file = /home/ansible/ansible_local/ansible.cfg
  configured module search path = [u'/home/ansible/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```

##### OS / ENVIRONMENT
CentOS Linux release 7.6.1810 (Core)
Dell R640, iDRAC Firmware 3.32.32.32

##### STEPS TO REPRODUCE
```yaml
- name: Disable and delete user
  hosts: dellR640
  gather_facts: False
  tasks:
    - name: Disable and delete user with id 5
      local_action:
        module: redfish_command
        category: Accounts
        command: ["DisableUser", "DeleteUser"]
        baseuri: "{{ baseuri }}"
        username: "{{ username }}"
        password: "{{ password }}"
        id: "5"
```

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->

##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
fatal: [dellR640 -> localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "account_properties": {},
            "baseuri": "dellR640idrac",
            "boot_next": null,
            "bootdevice": null,
            "category": "Accounts",
            "command": [
                "DisableUser",
                "DeleteUser"
            ],
            "id": "5",
            "new_password": null,
            "new_username": null,
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "resource_id": null,
            "roleid": null,
            "timeout": 10,
            "uefi_target": null,
            "update_username": null,
            "username": "customadminuser"
        }
    },
    "msg": "HTTP Error 400 on PATCH request to 'https://dellR640idrac/redfish/v1/Managers/iDRAC.Embedded.1/Accounts/5', extended message: 'The specified value is not allowed to be configured if the user name \\nor password is blank.'"
}
```
https://github.com/ansible/ansible/issues/64684
https://github.com/ansible/ansible/pull/64797
b36f57225665de07c31d6affac541adc12207040
f51f87a986b54329e731e3cccb16049011009cb1
2019-11-11T21:12:20Z
python
2019-11-20T20:03:19Z
lib/ansible/module_utils/redfish_utils.py
# Copyright (c) 2017-2018 Dell EMC Inc. # GNU General Public License v3.0+ (see LICENSE or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type import json from ansible.module_utils.urls import open_url from ansible.module_utils._text import to_text from ansible.module_utils.six.moves import http_client from ansible.module_utils.six.moves.urllib.error import URLError, HTTPError GET_HEADERS = {'accept': 'application/json', 'OData-Version': '4.0'} POST_HEADERS = {'content-type': 'application/json', 'accept': 'application/json', 'OData-Version': '4.0'} PATCH_HEADERS = {'content-type': 'application/json', 'accept': 'application/json', 'OData-Version': '4.0'} DELETE_HEADERS = {'accept': 'application/json', 'OData-Version': '4.0'} DEPRECATE_MSG = 'Issuing a data modification command without specifying the '\ 'ID of the target %(resource)s resource when there is more '\ 'than one %(resource)s will use the first one in the '\ 'collection. 
Use the `resource_id` option to specify the '\ 'target %(resource)s ID' class RedfishUtils(object): def __init__(self, creds, root_uri, timeout, module, resource_id=None, data_modification=False): self.root_uri = root_uri self.creds = creds self.timeout = timeout self.module = module self.service_root = '/redfish/v1/' self.resource_id = resource_id self.data_modification = data_modification self._init_session() # The following functions are to send GET/POST/PATCH/DELETE requests def get_request(self, uri): try: resp = open_url(uri, method="GET", headers=GET_HEADERS, url_username=self.creds['user'], url_password=self.creds['pswd'], force_basic_auth=True, validate_certs=False, follow_redirects='all', use_proxy=False, timeout=self.timeout) data = json.loads(resp.read()) headers = dict((k.lower(), v) for (k, v) in resp.info().items()) except HTTPError as e: msg = self._get_extended_message(e) return {'ret': False, 'msg': "HTTP Error %s on GET request to '%s', extended message: '%s'" % (e.code, uri, msg), 'status': e.code} except URLError as e: return {'ret': False, 'msg': "URL Error on GET request to '%s': '%s'" % (uri, e.reason)} # Almost all errors should be caught above, but just in case except Exception as e: return {'ret': False, 'msg': "Failed GET request to '%s': '%s'" % (uri, to_text(e))} return {'ret': True, 'data': data, 'headers': headers} def post_request(self, uri, pyld): try: resp = open_url(uri, data=json.dumps(pyld), headers=POST_HEADERS, method="POST", url_username=self.creds['user'], url_password=self.creds['pswd'], force_basic_auth=True, validate_certs=False, follow_redirects='all', use_proxy=False, timeout=self.timeout) except HTTPError as e: msg = self._get_extended_message(e) return {'ret': False, 'msg': "HTTP Error %s on POST request to '%s', extended message: '%s'" % (e.code, uri, msg), 'status': e.code} except URLError as e: return {'ret': False, 'msg': "URL Error on POST request to '%s': '%s'" % (uri, e.reason)} # Almost all errors should be 
caught above, but just in case except Exception as e: return {'ret': False, 'msg': "Failed POST request to '%s': '%s'" % (uri, to_text(e))} return {'ret': True, 'resp': resp} def patch_request(self, uri, pyld): headers = PATCH_HEADERS r = self.get_request(uri) if r['ret']: # Get etag from etag header or @odata.etag property etag = r['headers'].get('etag') if not etag: etag = r['data'].get('@odata.etag') if etag: # Make copy of headers and add If-Match header headers = dict(headers) headers['If-Match'] = etag try: resp = open_url(uri, data=json.dumps(pyld), headers=headers, method="PATCH", url_username=self.creds['user'], url_password=self.creds['pswd'], force_basic_auth=True, validate_certs=False, follow_redirects='all', use_proxy=False, timeout=self.timeout) except HTTPError as e: msg = self._get_extended_message(e) return {'ret': False, 'msg': "HTTP Error %s on PATCH request to '%s', extended message: '%s'" % (e.code, uri, msg), 'status': e.code} except URLError as e: return {'ret': False, 'msg': "URL Error on PATCH request to '%s': '%s'" % (uri, e.reason)} # Almost all errors should be caught above, but just in case except Exception as e: return {'ret': False, 'msg': "Failed PATCH request to '%s': '%s'" % (uri, to_text(e))} return {'ret': True, 'resp': resp} def delete_request(self, uri, pyld=None): try: data = json.dumps(pyld) if pyld else None resp = open_url(uri, data=data, headers=DELETE_HEADERS, method="DELETE", url_username=self.creds['user'], url_password=self.creds['pswd'], force_basic_auth=True, validate_certs=False, follow_redirects='all', use_proxy=False, timeout=self.timeout) except HTTPError as e: msg = self._get_extended_message(e) return {'ret': False, 'msg': "HTTP Error %s on DELETE request to '%s', extended message: '%s'" % (e.code, uri, msg), 'status': e.code} except URLError as e: return {'ret': False, 'msg': "URL Error on DELETE request to '%s': '%s'" % (uri, e.reason)} # Almost all errors should be caught above, but just in case except 
Exception as e: return {'ret': False, 'msg': "Failed DELETE request to '%s': '%s'" % (uri, to_text(e))} return {'ret': True, 'resp': resp} @staticmethod def _get_extended_message(error): """ Get Redfish ExtendedInfo message from response payload if present :param error: an HTTPError exception :type error: HTTPError :return: the ExtendedInfo message if present, else standard HTTP error """ msg = http_client.responses.get(error.code, '') if error.code >= 400: try: body = error.read().decode('utf-8') data = json.loads(body) ext_info = data['error']['@Message.ExtendedInfo'] msg = ext_info[0]['Message'] except Exception: pass return msg def _init_session(self): pass def _find_accountservice_resource(self): response = self.get_request(self.root_uri + self.service_root) if response['ret'] is False: return response data = response['data'] if 'AccountService' not in data: return {'ret': False, 'msg': "AccountService resource not found"} else: account_service = data["AccountService"]["@odata.id"] response = self.get_request(self.root_uri + account_service) if response['ret'] is False: return response data = response['data'] accounts = data['Accounts']['@odata.id'] if accounts[-1:] == '/': accounts = accounts[:-1] self.accounts_uri = accounts return {'ret': True} def _find_sessionservice_resource(self): response = self.get_request(self.root_uri + self.service_root) if response['ret'] is False: return response data = response['data'] if 'SessionService' not in data: return {'ret': False, 'msg': "SessionService resource not found"} else: session_service = data["SessionService"]["@odata.id"] response = self.get_request(self.root_uri + session_service) if response['ret'] is False: return response data = response['data'] sessions = data['Sessions']['@odata.id'] if sessions[-1:] == '/': sessions = sessions[:-1] self.sessions_uri = sessions return {'ret': True} def _get_resource_uri_by_id(self, uris, id_prop): for uri in uris: response = self.get_request(self.root_uri + uri) if 
response['ret'] is False: continue data = response['data'] if id_prop == data.get('Id'): return uri return None def _find_systems_resource(self): response = self.get_request(self.root_uri + self.service_root) if response['ret'] is False: return response data = response['data'] if 'Systems' not in data: return {'ret': False, 'msg': "Systems resource not found"} response = self.get_request(self.root_uri + data['Systems']['@odata.id']) if response['ret'] is False: return response self.systems_uris = [ i['@odata.id'] for i in response['data'].get('Members', [])] if not self.systems_uris: return { 'ret': False, 'msg': "ComputerSystem's Members array is either empty or missing"} self.systems_uri = self.systems_uris[0] if self.data_modification: if self.resource_id: self.systems_uri = self._get_resource_uri_by_id(self.systems_uris, self.resource_id) if not self.systems_uri: return { 'ret': False, 'msg': "System resource %s not found" % self.resource_id} elif len(self.systems_uris) > 1: self.module.deprecate(DEPRECATE_MSG % {'resource': 'System'}, version='2.13') return {'ret': True} def _find_updateservice_resource(self): response = self.get_request(self.root_uri + self.service_root) if response['ret'] is False: return response data = response['data'] if 'UpdateService' not in data: return {'ret': False, 'msg': "UpdateService resource not found"} else: update = data["UpdateService"]["@odata.id"] self.update_uri = update response = self.get_request(self.root_uri + update) if response['ret'] is False: return response data = response['data'] self.firmware_uri = self.software_uri = None if 'FirmwareInventory' in data: self.firmware_uri = data['FirmwareInventory'][u'@odata.id'] if 'SoftwareInventory' in data: self.software_uri = data['SoftwareInventory'][u'@odata.id'] return {'ret': True} def _find_chassis_resource(self): response = self.get_request(self.root_uri + self.service_root) if response['ret'] is False: return response data = response['data'] if 'Chassis' not in data: 
return {'ret': False, 'msg': "Chassis resource not found"} chassis = data["Chassis"]["@odata.id"] response = self.get_request(self.root_uri + chassis) if response['ret'] is False: return response self.chassis_uris = [ i['@odata.id'] for i in response['data'].get('Members', [])] if not self.chassis_uris: return {'ret': False, 'msg': "Chassis Members array is either empty or missing"} self.chassis_uri = self.chassis_uris[0] if self.data_modification: if self.resource_id: self.chassis_uri = self._get_resource_uri_by_id(self.chassis_uris, self.resource_id) if not self.chassis_uri: return { 'ret': False, 'msg': "Chassis resource %s not found" % self.resource_id} elif len(self.chassis_uris) > 1: self.module.deprecate(DEPRECATE_MSG % {'resource': 'Chassis'}, version='2.13') return {'ret': True} def _find_managers_resource(self): response = self.get_request(self.root_uri + self.service_root) if response['ret'] is False: return response data = response['data'] if 'Managers' not in data: return {'ret': False, 'msg': "Manager resource not found"} manager = data["Managers"]["@odata.id"] response = self.get_request(self.root_uri + manager) if response['ret'] is False: return response self.manager_uris = [ i['@odata.id'] for i in response['data'].get('Members', [])] if not self.manager_uris: return {'ret': False, 'msg': "Managers Members array is either empty or missing"} self.manager_uri = self.manager_uris[0] if self.data_modification: if self.resource_id: self.manager_uri = self._get_resource_uri_by_id(self.manager_uris, self.resource_id) if not self.manager_uri: return { 'ret': False, 'msg': "Manager resource %s not found" % self.resource_id} elif len(self.manager_uris) > 1: self.module.deprecate(DEPRECATE_MSG % {'resource': 'Manager'}, version='2.13') return {'ret': True} def get_logs(self): log_svcs_uri_list = [] list_of_logs = [] properties = ['Severity', 'Created', 'EntryType', 'OemRecordFormat', 'Message', 'MessageId', 'MessageArgs'] # Find LogService response = 
self.get_request(self.root_uri + self.manager_uri) if response['ret'] is False: return response data = response['data'] if 'LogServices' not in data: return {'ret': False, 'msg': "LogServices resource not found"} # Find all entries in LogServices logs_uri = data["LogServices"]["@odata.id"] response = self.get_request(self.root_uri + logs_uri) if response['ret'] is False: return response data = response['data'] for log_svcs_entry in data.get('Members', []): response = self.get_request(self.root_uri + log_svcs_entry[u'@odata.id']) if response['ret'] is False: return response _data = response['data'] if 'Entries' in _data: log_svcs_uri_list.append(_data['Entries'][u'@odata.id']) # For each entry in LogServices, get log name and all log entries for log_svcs_uri in log_svcs_uri_list: logs = {} list_of_log_entries = [] response = self.get_request(self.root_uri + log_svcs_uri) if response['ret'] is False: return response data = response['data'] logs['Description'] = data.get('Description', 'Collection of log entries') # Get all log entries for each type of log found for logEntry in data.get('Members', []): entry = {} for prop in properties: if prop in logEntry: entry[prop] = logEntry.get(prop) if entry: list_of_log_entries.append(entry) log_name = log_svcs_uri.split('/')[-1] logs[log_name] = list_of_log_entries list_of_logs.append(logs) # list_of_logs[logs{list_of_log_entries[entry{}]}] return {'ret': True, 'entries': list_of_logs} def clear_logs(self): # Find LogService response = self.get_request(self.root_uri + self.manager_uri) if response['ret'] is False: return response data = response['data'] if 'LogServices' not in data: return {'ret': False, 'msg': "LogServices resource not found"} # Find all entries in LogServices logs_uri = data["LogServices"]["@odata.id"] response = self.get_request(self.root_uri + logs_uri) if response['ret'] is False: return response data = response['data'] for log_svcs_entry in data[u'Members']: response = self.get_request(self.root_uri + 
log_svcs_entry["@odata.id"]) if response['ret'] is False: return response _data = response['data'] # Check to make sure option is available, otherwise error is ugly if "Actions" in _data: if "#LogService.ClearLog" in _data[u"Actions"]: self.post_request(self.root_uri + _data[u"Actions"]["#LogService.ClearLog"]["target"], {}) if response['ret'] is False: return response return {'ret': True} def aggregate(self, func, uri_list, uri_name): ret = True entries = [] for uri in uri_list: inventory = func(uri) ret = inventory.pop('ret') and ret if 'entries' in inventory: entries.append(({uri_name: uri}, inventory['entries'])) return dict(ret=ret, entries=entries) def aggregate_chassis(self, func): return self.aggregate(func, self.chassis_uris, 'chassis_uri') def aggregate_managers(self, func): return self.aggregate(func, self.manager_uris, 'manager_uri') def aggregate_systems(self, func): return self.aggregate(func, self.systems_uris, 'system_uri') def get_storage_controller_inventory(self, systems_uri): result = {} controller_list = [] controller_results = [] # Get these entries, but does not fail if not found properties = ['CacheSummary', 'FirmwareVersion', 'Identifiers', 'Location', 'Manufacturer', 'Model', 'Name', 'PartNumber', 'SerialNumber', 'SpeedGbps', 'Status'] key = "StorageControllers" # Find Storage service response = self.get_request(self.root_uri + systems_uri) if response['ret'] is False: return response data = response['data'] if 'Storage' not in data: return {'ret': False, 'msg': "Storage resource not found"} # Get a list of all storage controllers and build respective URIs storage_uri = data['Storage']["@odata.id"] response = self.get_request(self.root_uri + storage_uri) if response['ret'] is False: return response result['ret'] = True data = response['data'] # Loop through Members and their StorageControllers # and gather properties from each StorageController if data[u'Members']: for storage_member in data[u'Members']: storage_member_uri = 
storage_member[u'@odata.id'] response = self.get_request(self.root_uri + storage_member_uri) data = response['data'] if key in data: controller_list = data[key] for controller in controller_list: controller_result = {} for property in properties: if property in controller: controller_result[property] = controller[property] controller_results.append(controller_result) result['entries'] = controller_results return result else: return {'ret': False, 'msg': "Storage resource not found"} def get_multi_storage_controller_inventory(self): return self.aggregate_systems(self.get_storage_controller_inventory) def get_disk_inventory(self, systems_uri): result = {'entries': []} controller_list = [] # Get these entries, but does not fail if not found properties = ['BlockSizeBytes', 'CapableSpeedGbs', 'CapacityBytes', 'EncryptionAbility', 'EncryptionStatus', 'FailurePredicted', 'HotspareType', 'Id', 'Identifiers', 'Manufacturer', 'MediaType', 'Model', 'Name', 'PartNumber', 'PhysicalLocation', 'Protocol', 'Revision', 'RotationSpeedRPM', 'SerialNumber', 'Status'] # Find Storage service response = self.get_request(self.root_uri + systems_uri) if response['ret'] is False: return response data = response['data'] if 'SimpleStorage' not in data and 'Storage' not in data: return {'ret': False, 'msg': "SimpleStorage and Storage resource \ not found"} if 'Storage' in data: # Get a list of all storage controllers and build respective URIs storage_uri = data[u'Storage'][u'@odata.id'] response = self.get_request(self.root_uri + storage_uri) if response['ret'] is False: return response result['ret'] = True data = response['data'] if data[u'Members']: for controller in data[u'Members']: controller_list.append(controller[u'@odata.id']) for c in controller_list: uri = self.root_uri + c response = self.get_request(uri) if response['ret'] is False: return response data = response['data'] controller_name = 'Controller 1' if 'StorageControllers' in data: sc = data['StorageControllers'] if sc: if 
'Name' in sc[0]:
                        controller_name = sc[0]['Name']
                    else:
                        sc_id = sc[0].get('Id', '1')
                        controller_name = 'Controller %s' % sc_id
            drive_results = []
            if 'Drives' in data:
                for device in data[u'Drives']:
                    disk_uri = self.root_uri + device[u'@odata.id']
                    response = self.get_request(disk_uri)
                    if response['ret'] is False:
                        return response
                    data = response['data']

                    drive_result = {}
                    for property in properties:
                        if property in data:
                            if data[property] is not None:
                                drive_result[property] = data[property]
                    drive_results.append(drive_result)
            drives = {'Controller': controller_name,
                      'Drives': drive_results}
            result["entries"].append(drives)

        if 'SimpleStorage' in data:
            # Get a list of all storage controllers and build respective URIs
            storage_uri = data["SimpleStorage"]["@odata.id"]
            response = self.get_request(self.root_uri + storage_uri)
            if response['ret'] is False:
                return response
            result['ret'] = True
            data = response['data']

            for controller in data[u'Members']:
                controller_list.append(controller[u'@odata.id'])

            for c in controller_list:
                uri = self.root_uri + c
                response = self.get_request(uri)
                if response['ret'] is False:
                    return response
                data = response['data']
                if 'Name' in data:
                    controller_name = data['Name']
                else:
                    sc_id = data.get('Id', '1')
                    controller_name = 'Controller %s' % sc_id
                drive_results = []
                for device in data[u'Devices']:
                    drive_result = {}
                    for property in properties:
                        if property in device:
                            drive_result[property] = device[property]
                    drive_results.append(drive_result)
                drives = {'Controller': controller_name,
                          'Drives': drive_results}
                result["entries"].append(drives)

        return result

    def get_multi_disk_inventory(self):
        return self.aggregate_systems(self.get_disk_inventory)

    def get_volume_inventory(self, systems_uri):
        result = {'entries': []}
        controller_list = []
        volume_list = []
        # Get these entries, but do not fail if not found
        properties = ['Id', 'Name', 'RAIDType', 'VolumeType', 'BlockSizeBytes',
                      'Capacity', 'CapacityBytes', 'CapacitySources',
                      'Encrypted', 'EncryptionTypes', 'Identifiers',
                      'Operations', 'OptimumIOSizeBytes',
                      'AccessCapabilities', 'AllocatedPools', 'Status']

        # Find Storage service
        response = self.get_request(self.root_uri + systems_uri)
        if response['ret'] is False:
            return response
        data = response['data']

        if 'SimpleStorage' not in data and 'Storage' not in data:
            return {'ret': False,
                    'msg': "SimpleStorage and Storage resource not found"}

        if 'Storage' in data:
            # Get a list of all storage controllers and build respective URIs
            storage_uri = data[u'Storage'][u'@odata.id']
            response = self.get_request(self.root_uri + storage_uri)
            if response['ret'] is False:
                return response
            result['ret'] = True
            data = response['data']

            if data.get('Members'):
                for controller in data[u'Members']:
                    controller_list.append(controller[u'@odata.id'])
                for c in controller_list:
                    uri = self.root_uri + c
                    response = self.get_request(uri)
                    if response['ret'] is False:
                        return response
                    data = response['data']
                    controller_name = 'Controller 1'
                    if 'StorageControllers' in data:
                        sc = data['StorageControllers']
                        if sc:
                            if 'Name' in sc[0]:
                                controller_name = sc[0]['Name']
                            else:
                                sc_id = sc[0].get('Id', '1')
                                controller_name = 'Controller %s' % sc_id
                    volume_results = []
                    if 'Volumes' in data:
                        # Get a list of all volumes and build respective URIs
                        volumes_uri = data[u'Volumes'][u'@odata.id']
                        response = self.get_request(self.root_uri + volumes_uri)
                        if response['ret'] is False:
                            return response
                        data = response['data']
                        if data.get('Members'):
                            for volume in data[u'Members']:
                                volume_list.append(volume[u'@odata.id'])
                            for v in volume_list:
                                uri = self.root_uri + v
                                response = self.get_request(uri)
                                if response['ret'] is False:
                                    return response
                                data = response['data']

                                volume_result = {}
                                for property in properties:
                                    if property in data:
                                        if data[property] is not None:
                                            volume_result[property] = data[property]

                                # Get related Drives Id
                                drive_id_list = []
                                if 'Links' in data:
                                    if 'Drives' in data[u'Links']:
                                        for link in data[u'Links'][u'Drives']:
                                            drive_id_link = link[u'@odata.id']
                                            drive_id = drive_id_link.split("/")[-1]
                                            drive_id_list.append({'Id': drive_id})
                                        volume_result['Linked_drives'] = drive_id_list
                                volume_results.append(volume_result)
                    volumes = {'Controller': controller_name,
                               'Volumes': volume_results}
                    result["entries"].append(volumes)
        else:
            return {'ret': False, 'msg': "Storage resource not found"}

        return result

    def get_multi_volume_inventory(self):
        return self.aggregate_systems(self.get_volume_inventory)

    def restart_manager_gracefully(self):
        result = {}
        key = "Actions"

        # Search for 'key' entry and extract URI from it
        response = self.get_request(self.root_uri + self.manager_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']
        action_uri = data[key]["#Manager.Reset"]["target"]

        payload = {'ResetType': 'GracefulRestart'}
        response = self.post_request(self.root_uri + action_uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True}

    def manage_indicator_led(self, command):
        result = {}
        key = 'IndicatorLED'
        payloads = {'IndicatorLedOn': 'Lit', 'IndicatorLedOff': 'Off',
                    'IndicatorLedBlink': 'Blinking'}

        for chassis_uri in self.chassis_uris:
            response = self.get_request(self.root_uri + chassis_uri)
            if response['ret'] is False:
                return response
            result['ret'] = True
            data = response['data']
            if key not in data:
                return {'ret': False, 'msg': "Key %s not found" % key}

            if command in payloads.keys():
                payload = {'IndicatorLED': payloads[command]}
                response = self.patch_request(self.root_uri + chassis_uri, payload)
                if response['ret'] is False:
                    return response
            else:
                return {'ret': False, 'msg': 'Invalid command'}

        return result

    def _map_reset_type(self, reset_type, allowable_values):
        equiv_types = {
            'On': 'ForceOn',
            'ForceOn': 'On',
            'ForceOff': 'GracefulShutdown',
            'GracefulShutdown': 'ForceOff',
            'GracefulRestart': 'ForceRestart',
            'ForceRestart': 'GracefulRestart'
        }

        if reset_type in allowable_values:
            return reset_type
        if reset_type not in equiv_types:
            return reset_type
        mapped_type = equiv_types[reset_type]
        if mapped_type in allowable_values:
            return mapped_type
        return reset_type

    def manage_system_power(self, command):
        key = "Actions"
        reset_type_values = ['On', 'ForceOff', 'GracefulShutdown',
                             'GracefulRestart', 'ForceRestart', 'Nmi',
                             'ForceOn', 'PushPowerButton', 'PowerCycle']

        # command should be PowerOn, PowerForceOff, etc.
        if not command.startswith('Power'):
            return {'ret': False, 'msg': 'Invalid Command (%s)' % command}
        reset_type = command[5:]

        # map Reboot to a ResetType that does a reboot
        if reset_type == 'Reboot':
            reset_type = 'GracefulRestart'

        if reset_type not in reset_type_values:
            return {'ret': False, 'msg': 'Invalid Command (%s)' % command}

        # read the system resource and get the current power state
        response = self.get_request(self.root_uri + self.systems_uri)
        if response['ret'] is False:
            return response
        data = response['data']
        power_state = data.get('PowerState')

        # if power is already in target state, nothing to do
        if power_state == "On" and reset_type in ['On', 'ForceOn']:
            return {'ret': True, 'changed': False}
        if power_state == "Off" and reset_type in ['GracefulShutdown', 'ForceOff']:
            return {'ret': True, 'changed': False}

        # get the #ComputerSystem.Reset Action and target URI
        if key not in data or '#ComputerSystem.Reset' not in data[key]:
            return {'ret': False, 'msg': 'Action #ComputerSystem.Reset not found'}
        reset_action = data[key]['#ComputerSystem.Reset']
        if 'target' not in reset_action:
            return {'ret': False,
                    'msg': 'target URI missing from Action #ComputerSystem.Reset'}
        action_uri = reset_action['target']

        # get AllowableValues from ActionInfo
        allowable_values = None
        if '@Redfish.ActionInfo' in reset_action:
            action_info_uri = reset_action.get('@Redfish.ActionInfo')
            response = self.get_request(self.root_uri + action_info_uri)
            if response['ret'] is True:
                data = response['data']
                if 'Parameters' in data:
                    params = data['Parameters']
                    for param in params:
                        if param.get('Name') == 'ResetType':
                            allowable_values = param.get('AllowableValues')
                            break

        # fallback to @Redfish.AllowableValues annotation
        if allowable_values is None:
            allowable_values = reset_action.get('ResetType@Redfish.AllowableValues', [])

        # map ResetType to an allowable value if needed
        if reset_type not in allowable_values:
            reset_type = self._map_reset_type(reset_type, allowable_values)

        # define payload
        payload = {'ResetType': reset_type}

        # POST to Action URI
        response = self.post_request(self.root_uri + action_uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True, 'changed': True}

    def _find_account_uri(self, username=None, acct_id=None):
        if not any((username, acct_id)):
            return {'ret': False, 'msg':
                    'Must provide either account_id or account_username'}

        response = self.get_request(self.root_uri + self.accounts_uri)
        if response['ret'] is False:
            return response
        data = response['data']

        uris = [a.get('@odata.id') for a in data.get('Members', [])
                if a.get('@odata.id')]
        for uri in uris:
            response = self.get_request(self.root_uri + uri)
            if response['ret'] is False:
                continue
            data = response['data']
            headers = response['headers']
            if username:
                if username == data.get('UserName'):
                    return {'ret': True, 'data': data,
                            'headers': headers, 'uri': uri}
            if acct_id:
                if acct_id == data.get('Id'):
                    return {'ret': True, 'data': data,
                            'headers': headers, 'uri': uri}

        return {'ret': False, 'no_match': True, 'msg':
                'No account with the given account_id or account_username found'}

    def _find_empty_account_slot(self):
        response = self.get_request(self.root_uri + self.accounts_uri)
        if response['ret'] is False:
            return response
        data = response['data']

        uris = [a.get('@odata.id') for a in data.get('Members', [])
                if a.get('@odata.id')]
        if uris:
            # first slot may be reserved, so move to end of list
            uris += [uris.pop(0)]
        for uri in uris:
            response = self.get_request(self.root_uri + uri)
            if response['ret'] is False:
                continue
            data = response['data']
            headers = response['headers']
            if data.get('UserName') == "" and not data.get('Enabled', True):
                return {'ret': True, 'data': data,
                        'headers': headers, 'uri': uri}

        return {'ret': False, 'no_match': True, 'msg':
                'No empty account slot found'}

    def list_users(self):
        result = {}
        # listing all users has always been slower than other operations, why?
        user_list = []
        users_results = []
        # Get these entries, but do not fail if not found
        properties = ['Id', 'Name', 'UserName', 'RoleId', 'Locked', 'Enabled']

        response = self.get_request(self.root_uri + self.accounts_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        for users in data.get('Members', []):
            user_list.append(users[u'@odata.id'])   # user_list[] are URIs

        # for each user, get details
        for uri in user_list:
            user = {}
            response = self.get_request(self.root_uri + uri)
            if response['ret'] is False:
                return response
            data = response['data']

            for property in properties:
                if property in data:
                    user[property] = data[property]

            users_results.append(user)
        result["entries"] = users_results
        return result

    def add_user_via_patch(self, user):
        if user.get('account_id'):
            # If Id slot specified, use it
            response = self._find_account_uri(acct_id=user.get('account_id'))
        else:
            # Otherwise find first empty slot
            response = self._find_empty_account_slot()
        if not response['ret']:
            return response

        uri = response['uri']
        payload = {}
        if user.get('account_username'):
            payload['UserName'] = user.get('account_username')
        if user.get('account_password'):
            payload['Password'] = user.get('account_password')
        if user.get('account_roleid'):
            payload['RoleId'] = user.get('account_roleid')
        response = self.patch_request(self.root_uri + uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True}

    def add_user(self, user):
        if not user.get('account_username'):
            return {'ret': False, 'msg':
                    'Must provide account_username for AddUser command'}

        response = self._find_account_uri(username=user.get('account_username'))
        if response['ret']:
            # account_username already exists, nothing to do
            return {'ret': True, 'changed': False}

        response = self.get_request(self.root_uri + self.accounts_uri)
        if not response['ret']:
            return response
        headers = response['headers']
        if 'allow' in headers:
            methods = [m.strip() for m in headers.get('allow').split(',')]
            if 'POST' not in methods:
                # if Allow header present and POST not listed, add via PATCH
                return self.add_user_via_patch(user)

        payload = {}
        if user.get('account_username'):
            payload['UserName'] = user.get('account_username')
        if user.get('account_password'):
            payload['Password'] = user.get('account_password')
        if user.get('account_roleid'):
            payload['RoleId'] = user.get('account_roleid')
        response = self.post_request(self.root_uri + self.accounts_uri, payload)
        if not response['ret']:
            if response.get('status') == 405:
                # if POST returned a 405, try to add via PATCH
                return self.add_user_via_patch(user)
            else:
                return response
        return {'ret': True}

    def enable_user(self, user):
        response = self._find_account_uri(username=user.get('account_username'),
                                          acct_id=user.get('account_id'))
        if not response['ret']:
            return response
        uri = response['uri']
        data = response['data']

        if data.get('Enabled', True):
            # account already enabled, nothing to do
            return {'ret': True, 'changed': False}

        payload = {'Enabled': True}
        response = self.patch_request(self.root_uri + uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True}

    def delete_user_via_patch(self, user, uri=None, data=None):
        if not uri:
            response = self._find_account_uri(username=user.get('account_username'),
                                              acct_id=user.get('account_id'))
            if not response['ret']:
                return response
            uri = response['uri']
            data = response['data']

        if data and data.get('UserName') == '' and not data.get('Enabled', False):
            # account UserName already cleared, nothing to do
            return {'ret': True, 'changed': False}

        payload = {'UserName': ''}
        if 'Enabled' in data:
            payload['Enabled'] = False
        response = self.patch_request(self.root_uri + uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True}

    def delete_user(self, user):
        response = self._find_account_uri(username=user.get('account_username'),
                                          acct_id=user.get('account_id'))
        if not response['ret']:
            if response.get('no_match'):
                # account does not exist, nothing to do
                return {'ret': True, 'changed': False}
            else:
                # some error encountered
                return response

        uri = response['uri']
        headers = response['headers']
        data = response['data']

        if 'allow' in headers:
            methods = [m.strip() for m in headers.get('allow').split(',')]
            if 'DELETE' not in methods:
                # if Allow header present and DELETE not listed, del via PATCH
                return self.delete_user_via_patch(user, uri=uri, data=data)

        response = self.delete_request(self.root_uri + uri)
        if not response['ret']:
            if response.get('status') == 405:
                # if DELETE returned a 405, try to delete via PATCH
                return self.delete_user_via_patch(user, uri=uri, data=data)
            else:
                return response
        return {'ret': True}

    def disable_user(self, user):
        response = self._find_account_uri(username=user.get('account_username'),
                                          acct_id=user.get('account_id'))
        if not response['ret']:
            return response
        uri = response['uri']
        data = response['data']

        if not data.get('Enabled'):
            # account already disabled, nothing to do
            return {'ret': True, 'changed': False}

        payload = {'Enabled': False}
        response = self.patch_request(self.root_uri + uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True}

    def update_user_role(self, user):
        if not user.get('account_roleid'):
            return {'ret': False, 'msg':
                    'Must provide account_roleid for UpdateUserRole command'}

        response = self._find_account_uri(username=user.get('account_username'),
                                          acct_id=user.get('account_id'))
        if not response['ret']:
            return response
        uri = response['uri']
        data = response['data']

        if data.get('RoleId') == user.get('account_roleid'):
            # account already has RoleId, nothing to do
            return {'ret': True, 'changed': False}

        payload = {'RoleId': user.get('account_roleid')}
        response = self.patch_request(self.root_uri + uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True}

    def update_user_password(self, user):
        response = self._find_account_uri(username=user.get('account_username'),
                                          acct_id=user.get('account_id'))
        if not response['ret']:
            return response
        uri = response['uri']
        payload = {'Password': user['account_password']}
        response = self.patch_request(self.root_uri + uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True}

    def update_user_name(self, user):
        if not user.get('account_updatename'):
            return {'ret': False, 'msg':
                    'Must provide account_updatename for UpdateUserName command'}

        response = self._find_account_uri(username=user.get('account_username'),
                                          acct_id=user.get('account_id'))
        if not response['ret']:
            return response
        uri = response['uri']
        payload = {'UserName': user['account_updatename']}
        response = self.patch_request(self.root_uri + uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True}

    def update_accountservice_properties(self, user):
        if user.get('account_properties') is None:
            return {'ret': False, 'msg':
                    'Must provide account_properties for UpdateAccountServiceProperties command'}
        account_properties = user.get('account_properties')

        # Find AccountService
        response = self.get_request(self.root_uri + self.service_root)
        if response['ret'] is False:
            return response
        data = response['data']
        if 'AccountService' not in data:
            return {'ret': False, 'msg': "AccountService resource not found"}
        accountservice_uri = data["AccountService"]["@odata.id"]

        # Check whether the properties are supported
        response = self.get_request(self.root_uri + accountservice_uri)
        if response['ret'] is False:
            return response
        data = response['data']
        for property_name in account_properties.keys():
            if property_name not in data:
                return {'ret': False, 'msg':
                        'property %s not supported' % property_name}

        # if properties are already matched, nothing to do
        need_change = False
        for property_name in account_properties.keys():
            if account_properties[property_name] != data[property_name]:
                need_change = True
                break

        if not need_change:
            return {'ret': True, 'changed': False,
                    'msg': "AccountService properties already set"}

        payload = account_properties
        response = self.patch_request(self.root_uri + accountservice_uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True, 'changed': True,
                'msg': "Modified AccountService properties"}

    def get_sessions(self):
        result = {}
        # listing all sessions has always been slower than other operations, why?
        session_list = []
        sessions_results = []
        # Get these entries, but do not fail if not found
        properties = ['Description', 'Id', 'Name', 'UserName']

        response = self.get_request(self.root_uri + self.sessions_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        for sessions in data[u'Members']:
            session_list.append(sessions[u'@odata.id'])   # session_list[] are URIs

        # for each session, get details
        for uri in session_list:
            session = {}
            response = self.get_request(self.root_uri + uri)
            if response['ret'] is False:
                return response
            data = response['data']

            for property in properties:
                if property in data:
                    session[property] = data[property]

            sessions_results.append(session)
        result["entries"] = sessions_results
        return result

    def get_firmware_update_capabilities(self):
        result = {}
        response = self.get_request(self.root_uri + self.update_uri)
        if response['ret'] is False:
            return response

        result['ret'] = True
        result['entries'] = {}
        data = response['data']

        if "Actions" in data:
            actions = data['Actions']
            if len(actions) > 0:
                for key in actions.keys():
                    action = actions.get(key)
                    if 'title' in action:
                        title = action['title']
                    else:
                        title = key
                    result['entries'][title] = action.get('TransferProtocol@Redfish.AllowableValues',
                                                          ["Key TransferProtocol@Redfish.AllowableValues not found"])
            else:
                return {'ret': False, 'msg': "Actions list is empty."}
        else:
            return {'ret': False, 'msg': "Key Actions not found."}
        return result

    def _software_inventory(self, uri):
        result = {}
        response = self.get_request(self.root_uri + uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        result['entries'] = []
        for member in data[u'Members']:
            uri = self.root_uri + member[u'@odata.id']
            # Get details for each software or firmware member
            response = self.get_request(uri)
            if response['ret'] is False:
                return response
            result['ret'] = True
            data = response['data']
            software = {}
            # Get these standard properties if present
            for key in ['Name', 'Id', 'Status', 'Version', 'Updateable',
                        'SoftwareId', 'LowestSupportedVersion', 'Manufacturer',
                        'ReleaseDate']:
                if key in data:
                    software[key] = data.get(key)
            result['entries'].append(software)
        return result

    def get_firmware_inventory(self):
        if self.firmware_uri is None:
            return {'ret': False, 'msg': 'No FirmwareInventory resource found'}
        else:
            return self._software_inventory(self.firmware_uri)

    def get_software_inventory(self):
        if self.software_uri is None:
            return {'ret': False, 'msg': 'No SoftwareInventory resource found'}
        else:
            return self._software_inventory(self.software_uri)

    def get_bios_attributes(self, systems_uri):
        result = {}
        bios_attributes = {}
        key = "Bios"

        # Search for 'key' entry and extract URI from it
        response = self.get_request(self.root_uri + systems_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        if key not in data:
            return {'ret': False, 'msg': "Key %s not found" % key}

        bios_uri = data[key]["@odata.id"]

        response = self.get_request(self.root_uri + bios_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']
        for attribute in data[u'Attributes'].items():
            bios_attributes[attribute[0]] = attribute[1]
        result["entries"] = bios_attributes
        return result

    def get_multi_bios_attributes(self):
        return self.aggregate_systems(self.get_bios_attributes)

    def _get_boot_options_dict(self, boot):
        # Get these entries from BootOption, if present
        properties = ['DisplayName', 'BootOptionReference']

        # Retrieve BootOptions if present
        if 'BootOptions' in boot and '@odata.id' in boot['BootOptions']:
            boot_options_uri = boot['BootOptions']["@odata.id"]
            # Get BootOptions resource
            response = self.get_request(self.root_uri + boot_options_uri)
            if response['ret'] is False:
                return {}
            data = response['data']

            # Retrieve Members array
            if 'Members' not in data:
                return {}
            members = data['Members']
        else:
            members = []

        # Build dict of BootOptions keyed by BootOptionReference
        boot_options_dict = {}
        for member in members:
            if '@odata.id' not in member:
                return {}
            boot_option_uri = member['@odata.id']
            response = self.get_request(self.root_uri + boot_option_uri)
            if response['ret'] is False:
                return {}
            data = response['data']
            if 'BootOptionReference' not in data:
                return {}
            boot_option_ref = data['BootOptionReference']

            # fetch the props to display for this boot device
            boot_props = {}
            for prop in properties:
                if prop in data:
                    boot_props[prop] = data[prop]

            boot_options_dict[boot_option_ref] = boot_props

        return boot_options_dict

    def get_boot_order(self, systems_uri):
        result = {}

        # Retrieve System resource
        response = self.get_request(self.root_uri + systems_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        # Confirm needed Boot properties are present
        if 'Boot' not in data or 'BootOrder' not in data['Boot']:
            return {'ret': False, 'msg': "Key BootOrder not found"}

        boot = data['Boot']
        boot_order = boot['BootOrder']
        boot_options_dict = self._get_boot_options_dict(boot)

        # Build boot device list
        boot_device_list = []
        for ref in boot_order:
            boot_device_list.append(
                boot_options_dict.get(ref, {'BootOptionReference': ref}))

        result["entries"] = boot_device_list
        return result

    def get_multi_boot_order(self):
        return self.aggregate_systems(self.get_boot_order)

    def get_boot_override(self, systems_uri):
        result = {}

        properties = ["BootSourceOverrideEnabled", "BootSourceOverrideTarget",
                      "BootSourceOverrideMode", "UefiTargetBootSourceOverride",
                      "BootSourceOverrideTarget@Redfish.AllowableValues"]

        response = self.get_request(self.root_uri + systems_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        if 'Boot' not in data:
            return {'ret': False, 'msg': "Key Boot not found"}

        boot = data['Boot']

        boot_overrides = {}
        if "BootSourceOverrideEnabled" in boot:
            if boot["BootSourceOverrideEnabled"] is not False:
                for property in properties:
                    if property in boot:
                        if boot[property] is not None:
                            boot_overrides[property] = boot[property]
        else:
            return {'ret': False, 'msg': "No boot override is enabled."}

        result['entries'] = boot_overrides
        return result

    def get_multi_boot_override(self):
        return self.aggregate_systems(self.get_boot_override)

    def set_bios_default_settings(self):
        result = {}
        key = "Bios"

        # Search for 'key' entry and extract URI from it
        response = self.get_request(self.root_uri + self.systems_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        if key not in data:
            return {'ret': False, 'msg': "Key %s not found" % key}

        bios_uri = data[key]["@odata.id"]

        # Extract proper URI
        response = self.get_request(self.root_uri + bios_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']
        reset_bios_settings_uri = data["Actions"]["#Bios.ResetBios"]["target"]

        response = self.post_request(self.root_uri + reset_bios_settings_uri, {})
        if response['ret'] is False:
            return response
        return {'ret': True, 'changed': True,
                'msg': "Set BIOS to default settings"}

    def set_one_time_boot_device(self, bootdevice, uefi_target, boot_next):
        result = {}
        key = "Boot"

        if not bootdevice:
            return {'ret': False,
                    'msg': "bootdevice option required for SetOneTimeBoot"}

        # Search for 'key' entry and extract URI from it
        response = self.get_request(self.root_uri + self.systems_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        if key not in data:
            return {'ret': False, 'msg': "Key %s not found" % key}

        boot = data[key]

        annotation = 'BootSourceOverrideTarget@Redfish.AllowableValues'
        if annotation in boot:
            allowable_values = boot[annotation]
            if isinstance(allowable_values, list) and bootdevice not in allowable_values:
                return {'ret': False,
                        'msg': "Boot device %s not in list of allowable values (%s)" %
                               (bootdevice, allowable_values)}

        # read existing values
        enabled = boot.get('BootSourceOverrideEnabled')
        target = boot.get('BootSourceOverrideTarget')
        cur_uefi_target = boot.get('UefiTargetBootSourceOverride')
        cur_boot_next = boot.get('BootNext')

        if bootdevice == 'UefiTarget':
            if not uefi_target:
                return {'ret': False,
                        'msg': "uefi_target option required to SetOneTimeBoot for UefiTarget"}
            if enabled == 'Once' and target == bootdevice and uefi_target == cur_uefi_target:
                # If properties are already set, no changes needed
                return {'ret': True, 'changed': False}
            payload = {
                'Boot': {
                    'BootSourceOverrideEnabled': 'Once',
                    'BootSourceOverrideTarget': bootdevice,
                    'UefiTargetBootSourceOverride': uefi_target
                }
            }
        elif bootdevice == 'UefiBootNext':
            if not boot_next:
                return {'ret': False,
                        'msg': "boot_next option required to SetOneTimeBoot for UefiBootNext"}
            if enabled == 'Once' and target == bootdevice and boot_next == cur_boot_next:
                # If properties are already set, no changes needed
                return {'ret': True, 'changed': False}
            payload = {
                'Boot': {
                    'BootSourceOverrideEnabled': 'Once',
                    'BootSourceOverrideTarget': bootdevice,
                    'BootNext': boot_next
                }
            }
        else:
            if enabled == 'Once' and target == bootdevice:
                # If properties are already set, no changes needed
                return {'ret': True, 'changed': False}
            payload = {
                'Boot': {
                    'BootSourceOverrideEnabled': 'Once',
                    'BootSourceOverrideTarget': bootdevice
                }
            }

        response = self.patch_request(self.root_uri + self.systems_uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True, 'changed': True}

    def set_bios_attributes(self, attributes):
        result = {}
        key = "Bios"

        # Search for 'key' entry and extract URI from it
        response = self.get_request(self.root_uri + self.systems_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        if key not in data:
            return {'ret': False, 'msg': "Key %s not found" % key}

        bios_uri = data[key]["@odata.id"]

        # Extract proper URI
        response = self.get_request(self.root_uri + bios_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        # Make a copy of the attributes dict
        attrs_to_patch = dict(attributes)

        # Check the attributes
        for attr in attributes:
            if attr not in data[u'Attributes']:
                return {'ret': False, 'msg': "BIOS attribute %s not found" % attr}
            # If already set to requested value, remove it from PATCH payload
            if data[u'Attributes'][attr] == attributes[attr]:
                del attrs_to_patch[attr]

        # Return success w/ changed=False if no attrs need to be changed
        if not attrs_to_patch:
            return {'ret': True, 'changed': False,
                    'msg': "BIOS attributes already set"}

        # Get the SettingsObject URI
        set_bios_attr_uri = data["@Redfish.Settings"]["SettingsObject"]["@odata.id"]

        # Construct payload and issue PATCH command
        payload = {"Attributes": attrs_to_patch}
        response = self.patch_request(self.root_uri + set_bios_attr_uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True, 'changed': True, 'msg': "Modified BIOS attribute"}

    def set_boot_order(self, boot_list):
        if not boot_list:
            return {'ret': False,
                    'msg': "boot_order list required for SetBootOrder command"}

        systems_uri = self.systems_uri
        response = self.get_request(self.root_uri + systems_uri)
        if response['ret'] is False:
            return response
        data = response['data']

        # Confirm needed Boot properties are present
        if 'Boot' not in data or 'BootOrder' not in data['Boot']:
            return {'ret': False, 'msg': "Key BootOrder not found"}

        boot = data['Boot']
        boot_order = boot['BootOrder']
        boot_options_dict = self._get_boot_options_dict(boot)

        # validate boot_list against BootOptionReferences if available
        if boot_options_dict:
            boot_option_references = boot_options_dict.keys()
            for ref in boot_list:
                if ref not in boot_option_references:
                    return {'ret': False,
                            'msg': "BootOptionReference %s not found in BootOptions" % ref}

        # If requested BootOrder is already set, nothing to do
        if boot_order == boot_list:
            return {'ret': True, 'changed': False,
                    'msg': "BootOrder already set to %s" % boot_list}

        payload = {
            'Boot': {
                'BootOrder': boot_list
            }
        }
        response = self.patch_request(self.root_uri + systems_uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True, 'changed': True, 'msg': "BootOrder set"}

    def set_default_boot_order(self):
        systems_uri = self.systems_uri
        response = self.get_request(self.root_uri + systems_uri)
        if response['ret'] is False:
            return response
        data = response['data']

        # get the #ComputerSystem.SetDefaultBootOrder Action and target URI
        action = '#ComputerSystem.SetDefaultBootOrder'
        if 'Actions' not in data or action not in data['Actions']:
            return {'ret': False, 'msg': 'Action %s not found' % action}
        if 'target' not in data['Actions'][action]:
            return {'ret': False,
                    'msg': 'target URI missing from Action %s' % action}
        action_uri = data['Actions'][action]['target']

        # POST to Action URI
        payload = {}
        response = self.post_request(self.root_uri + action_uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True, 'changed': True,
                'msg': "BootOrder set to default"}

    def get_chassis_inventory(self):
        result = {}
        chassis_results = []

        # Get these entries, but do not fail if not found
        properties = ['ChassisType', 'PartNumber', 'AssetTag',
                      'Manufacturer', 'IndicatorLED', 'SerialNumber', 'Model']

        # Go through list
        for chassis_uri in self.chassis_uris:
            response = self.get_request(self.root_uri + chassis_uri)
            if response['ret'] is False:
                return response
            result['ret'] = True
            data = response['data']
            chassis_result = {}
            for property in properties:
                if property in data:
                    chassis_result[property] = data[property]
            chassis_results.append(chassis_result)

        result["entries"] = chassis_results
        return result

    def get_fan_inventory(self):
        result = {}
        fan_results = []
        key = "Thermal"
        # Get these entries, but do not fail if not found
        properties = ['FanName', 'Reading', 'ReadingUnits', 'Status']

        # Go through list
        for chassis_uri in self.chassis_uris:
            response = self.get_request(self.root_uri + chassis_uri)
            if response['ret'] is False:
                return response
            result['ret'] = True
            data = response['data']
            if key in data:
                # match: found an entry for "Thermal" information = fans
                thermal_uri = data[key]["@odata.id"]
                response = self.get_request(self.root_uri + thermal_uri)
                if response['ret'] is False:
                    return response
                result['ret'] = True
                data = response['data']

                for device in data[u'Fans']:
                    fan = {}
                    for property in properties:
                        if property in device:
                            fan[property] = device[property]
                    fan_results.append(fan)
        result["entries"] = fan_results
        return result

    def get_chassis_power(self):
        result = {}
        key = "Power"

        # Get these entries, but do not fail if not found
        properties = ['Name', 'PowerAllocatedWatts',
                      'PowerAvailableWatts', 'PowerCapacityWatts',
                      'PowerConsumedWatts', 'PowerMetrics',
                      'PowerRequestedWatts', 'RelatedItem', 'Status']

        chassis_power_results = []
        # Go through list
        for chassis_uri in self.chassis_uris:
            chassis_power_result = {}
            response = self.get_request(self.root_uri + chassis_uri)
            if response['ret'] is False:
                return response
            result['ret'] = True
            data = response['data']
            if key in data:
                response = self.get_request(self.root_uri + data[key]['@odata.id'])
                if response['ret'] is False:
                    return response
                data = response['data']
                if 'PowerControl' in data:
                    if len(data['PowerControl']) > 0:
                        data = data['PowerControl'][0]
                        for property in properties:
                            if property in data:
                                chassis_power_result[property] = data[property]
                else:
                    return {'ret': False, 'msg': 'Key PowerControl not found.'}
                chassis_power_results.append(chassis_power_result)
            else:
                return {'ret': False, 'msg': 'Key Power not found.'}

        result['entries'] = chassis_power_results
        return result

    def get_chassis_thermals(self):
        result = {}
        sensors = []
        key = "Thermal"

        # Get these entries, but do not fail if not found
        properties = ['Name', 'PhysicalContext', 'UpperThresholdCritical',
                      'UpperThresholdFatal', 'UpperThresholdNonCritical',
                      'LowerThresholdCritical', 'LowerThresholdFatal',
                      'LowerThresholdNonCritical', 'MaxReadingRangeTemp',
                      'MinReadingRangeTemp', 'ReadingCelsius', 'RelatedItem',
                      'SensorNumber']

        # Go through list
        for chassis_uri in self.chassis_uris:
            response = self.get_request(self.root_uri + chassis_uri)
            if response['ret'] is False:
                return response
            result['ret'] = True
            data = response['data']
            if key in data:
                thermal_uri = data[key]["@odata.id"]
                response = self.get_request(self.root_uri + thermal_uri)
                if response['ret'] is False:
                    return response
                result['ret'] = True
                data = response['data']
                if "Temperatures" in data:
                    for sensor in data[u'Temperatures']:
                        sensor_result = {}
                        for property in properties:
                            if property in sensor:
                                if sensor[property] is not None:
                                    sensor_result[property] = sensor[property]
                        sensors.append(sensor_result)

        if not sensors:
            return {'ret': False, 'msg': 'Key Temperatures was not found.'}

        result['entries'] = sensors
        return result

    def get_cpu_inventory(self, systems_uri):
        result = {}
        cpu_list = []
        cpu_results = []
        key = "Processors"
        # Get these entries, but do not fail if not found
        properties = ['Id', 'Manufacturer', 'Model', 'MaxSpeedMHz',
                      'TotalCores', 'TotalThreads', 'Status']

        # Search for 'key' entry and extract URI from it
        response = self.get_request(self.root_uri + systems_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        if key not in data:
            return {'ret': False, 'msg': "Key %s not found" % key}

        processors_uri = data[key]["@odata.id"]

        # Get a list of all CPUs and build respective URIs
        response = self.get_request(self.root_uri + processors_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        for cpu in data[u'Members']:
            cpu_list.append(cpu[u'@odata.id'])

        for c in cpu_list:
            cpu = {}
            uri = self.root_uri + c
            response = self.get_request(uri)
            if response['ret'] is False:
                return response
            data = response['data']

            for property in properties:
                if property in data:
                    cpu[property] = data[property]

            cpu_results.append(cpu)
        result["entries"] = cpu_results
        return result

    def get_multi_cpu_inventory(self):
        return self.aggregate_systems(self.get_cpu_inventory)

    def get_memory_inventory(self, systems_uri):
        result = {}
        memory_list = []
        memory_results = []
        key = "Memory"
        # Get these entries, but do not fail if not found
        properties = ['SerialNumber', 'MemoryDeviceType', 'PartNumber',
                      'MemoryLocation', 'RankCount', 'CapacityMiB',
                      'OperatingMemoryModes', 'Status', 'Manufacturer', 'Name']

        # Search for 'key' entry and extract URI from it
        response = self.get_request(self.root_uri + systems_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        if key not in data:
            return {'ret': False, 'msg': "Key %s not found" % key}

        memory_uri = data[key]["@odata.id"]

        # Get a list of all DIMMs and build respective URIs
        response = self.get_request(self.root_uri + memory_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        for dimm in data[u'Members']:
            memory_list.append(dimm[u'@odata.id'])

        for m in memory_list:
            dimm = {}
            uri = self.root_uri + m
            response = self.get_request(uri)
            if response['ret'] is False:
                return response
            data = response['data']

            if "Status" in data:
                if "State" in data["Status"]:
                    if data["Status"]["State"] == "Absent":
                        continue
            else:
                continue

            for property in properties:
                if property in data:
                    dimm[property] = data[property]

            memory_results.append(dimm)
        result["entries"] = memory_results
        return result

    def get_multi_memory_inventory(self):
        return self.aggregate_systems(self.get_memory_inventory)

    def get_nic_inventory(self, resource_uri):
        result = {}
        nic_list = []
        nic_results = []
        key = "EthernetInterfaces"
        # Get these entries, but do not fail if not found
        properties = ['Description', 'FQDN', 'IPv4Addresses', 'IPv6Addresses',
                      'NameServers', 'MACAddress', 'PermanentMACAddress',
                      'SpeedMbps', 'MTUSize', 'AutoNeg', 'Status']

        response = self.get_request(self.root_uri + resource_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        if key not in data:
            return {'ret': False, 'msg': "Key %s not found" % key}

        ethernetinterfaces_uri = data[key]["@odata.id"]

        # Get a list of all network controllers and build respective URIs
        response = self.get_request(self.root_uri + ethernetinterfaces_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        for nic in data[u'Members']:
            nic_list.append(nic[u'@odata.id'])

        for n in nic_list:
            nic = {}
            uri = self.root_uri + n
            response = self.get_request(uri)
            if response['ret'] is False:
                return response
            data = response['data']

            for property in properties:
                if property in data:
                    nic[property] = data[property]

            nic_results.append(nic)
        result["entries"] = nic_results
        return result

    def get_multi_nic_inventory(self, resource_type):
        ret = True
        entries = []

        # Given resource_type, use the proper URI
        if resource_type == 'Systems':
            resource_uris = self.systems_uris
        elif resource_type == 'Manager':
            resource_uris = self.manager_uris

        for resource_uri in resource_uris:
            inventory = self.get_nic_inventory(resource_uri)
            ret = inventory.pop('ret') and ret
            if 'entries' in inventory:
                entries.append(({'resource_uri': resource_uri},
                                inventory['entries']))
        return dict(ret=ret, entries=entries)

    def get_virtualmedia(self, resource_uri):
        result = {}
        virtualmedia_list = []
        virtualmedia_results = []
        key = "VirtualMedia"
        # Get these entries, but do not fail if not found
        properties = ['Description', 'ConnectedVia', 'Id', 'MediaTypes',
                      'Image', 'ImageName', 'Name', 'WriteProtected',
                      'TransferMethod', 'TransferProtocolType']

        response = self.get_request(self.root_uri + resource_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        if key not in data:
            return {'ret': False, 'msg': "Key %s not found" % key}

        virtualmedia_uri = data[key]["@odata.id"]

        # Get a list of all virtual media and build respective URIs
        response = self.get_request(self.root_uri + virtualmedia_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
data = response['data'] for virtualmedia in data[u'Members']: virtualmedia_list.append(virtualmedia[u'@odata.id']) for n in virtualmedia_list: virtualmedia = {} uri = self.root_uri + n response = self.get_request(uri) if response['ret'] is False: return response data = response['data'] for property in properties: if property in data: virtualmedia[property] = data[property] virtualmedia_results.append(virtualmedia) result["entries"] = virtualmedia_results return result def get_multi_virtualmedia(self): ret = True entries = [] resource_uris = self.manager_uris for resource_uri in resource_uris: virtualmedia = self.get_virtualmedia(resource_uri) ret = virtualmedia.pop('ret') and ret if 'entries' in virtualmedia: entries.append(({'resource_uri': resource_uri}, virtualmedia['entries'])) return dict(ret=ret, entries=entries) def get_psu_inventory(self): result = {} psu_list = [] psu_results = [] key = "PowerSupplies" # Get these entries, but does not fail if not found properties = ['Name', 'Model', 'SerialNumber', 'PartNumber', 'Manufacturer', 'FirmwareVersion', 'PowerCapacityWatts', 'PowerSupplyType', 'Status'] # Get a list of all Chassis and build URIs, then get all PowerSupplies # from each Power entry in the Chassis chassis_uri_list = self.chassis_uris for chassis_uri in chassis_uri_list: response = self.get_request(self.root_uri + chassis_uri) if response['ret'] is False: return response result['ret'] = True data = response['data'] if 'Power' in data: power_uri = data[u'Power'][u'@odata.id'] else: continue response = self.get_request(self.root_uri + power_uri) data = response['data'] if key not in data: return {'ret': False, 'msg': "Key %s not found" % key} psu_list = data[key] for psu in psu_list: psu_not_present = False psu_data = {} for property in properties: if property in psu: if psu[property] is not None: if property == 'Status': if 'State' in psu[property]: if psu[property]['State'] == 'Absent': psu_not_present = True psu_data[property] = psu[property] if 
psu_not_present: continue psu_results.append(psu_data) result["entries"] = psu_results if not result["entries"]: return {'ret': False, 'msg': "No PowerSupply objects found"} return result def get_multi_psu_inventory(self): return self.aggregate_systems(self.get_psu_inventory) def get_system_inventory(self, systems_uri): result = {} inventory = {} # Get these entries, but does not fail if not found properties = ['Status', 'HostName', 'PowerState', 'Model', 'Manufacturer', 'PartNumber', 'SystemType', 'AssetTag', 'ServiceTag', 'SerialNumber', 'SKU', 'BiosVersion', 'MemorySummary', 'ProcessorSummary', 'TrustedModules'] response = self.get_request(self.root_uri + systems_uri) if response['ret'] is False: return response result['ret'] = True data = response['data'] for property in properties: if property in data: inventory[property] = data[property] result["entries"] = inventory return result def get_multi_system_inventory(self): return self.aggregate_systems(self.get_system_inventory) def get_network_protocols(self): result = {} service_result = {} # Find NetworkProtocol response = self.get_request(self.root_uri + self.manager_uri) if response['ret'] is False: return response data = response['data'] if 'NetworkProtocol' not in data: return {'ret': False, 'msg': "NetworkProtocol resource not found"} networkprotocol_uri = data["NetworkProtocol"]["@odata.id"] response = self.get_request(self.root_uri + networkprotocol_uri) if response['ret'] is False: return response data = response['data'] protocol_services = ['SNMP', 'VirtualMedia', 'Telnet', 'SSDP', 'IPMI', 'SSH', 'KVMIP', 'NTP', 'HTTP', 'HTTPS', 'DHCP', 'DHCPv6', 'RDP', 'RFB'] for protocol_service in protocol_services: if protocol_service in data.keys(): service_result[protocol_service] = data[protocol_service] result['ret'] = True result["entries"] = service_result return result def set_network_protocols(self, manager_services): # Check input data validity protocol_services = ['SNMP', 'VirtualMedia', 'Telnet', 'SSDP', 
'IPMI', 'SSH', 'KVMIP', 'NTP', 'HTTP', 'HTTPS', 'DHCP', 'DHCPv6', 'RDP', 'RFB'] protocol_state_onlist = ['true', 'True', True, 'on', 1] protocol_state_offlist = ['false', 'False', False, 'off', 0] payload = {} for service_name in manager_services.keys(): if service_name not in protocol_services: return {'ret': False, 'msg': "Service name %s is invalid" % service_name} payload[service_name] = {} for service_property in manager_services[service_name].keys(): value = manager_services[service_name][service_property] if service_property in ['ProtocolEnabled', 'protocolenabled']: if value in protocol_state_onlist: payload[service_name]['ProtocolEnabled'] = True elif value in protocol_state_offlist: payload[service_name]['ProtocolEnabled'] = False else: return {'ret': False, 'msg': "Value of property %s is invalid" % service_property} elif service_property in ['port', 'Port']: if isinstance(value, int): payload[service_name]['Port'] = value elif isinstance(value, str) and value.isdigit(): payload[service_name]['Port'] = int(value) else: return {'ret': False, 'msg': "Value of property %s is invalid" % service_property} else: payload[service_name][service_property] = value # Find NetworkProtocol response = self.get_request(self.root_uri + self.manager_uri) if response['ret'] is False: return response data = response['data'] if 'NetworkProtocol' not in data: return {'ret': False, 'msg': "NetworkProtocol resource not found"} networkprotocol_uri = data["NetworkProtocol"]["@odata.id"] # Check service property support or not response = self.get_request(self.root_uri + networkprotocol_uri) if response['ret'] is False: return response data = response['data'] for service_name in payload.keys(): if service_name not in data: return {'ret': False, 'msg': "%s service not supported" % service_name} for service_property in payload[service_name].keys(): if service_property not in data[service_name]: return {'ret': False, 'msg': "%s property for %s service not supported" % 
(service_property, service_name)} # if the protocol is already set, nothing to do need_change = False for service_name in payload.keys(): for service_property in payload[service_name].keys(): value = payload[service_name][service_property] if value != data[service_name][service_property]: need_change = True break if not need_change: return {'ret': True, 'changed': False, 'msg': "Manager NetworkProtocol services already set"} response = self.patch_request(self.root_uri + networkprotocol_uri, payload) if response['ret'] is False: return response return {'ret': True, 'changed': True, 'msg': "Modified Manager NetworkProtocol services"} @staticmethod def to_singular(resource_name): if resource_name.endswith('ies'): resource_name = resource_name[:-3] + 'y' elif resource_name.endswith('s'): resource_name = resource_name[:-1] return resource_name def get_health_resource(self, subsystem, uri, health, expanded): status = 'Status' if expanded: d = expanded else: r = self.get_request(self.root_uri + uri) if r.get('ret'): d = r.get('data') else: return if 'Members' in d: # collections case for m in d.get('Members'): u = m.get('@odata.id') r = self.get_request(self.root_uri + u) if r.get('ret'): p = r.get('data') if p: e = {self.to_singular(subsystem.lower()) + '_uri': u, status: p.get(status, "Status not available")} health[subsystem].append(e) else: # non-collections case e = {self.to_singular(subsystem.lower()) + '_uri': uri, status: d.get(status, "Status not available")} health[subsystem].append(e) def get_health_subsystem(self, subsystem, data, health): if subsystem in data: sub = data.get(subsystem) if isinstance(sub, list): for r in sub: if '@odata.id' in r: uri = r.get('@odata.id') expanded = None if '#' in uri and len(r) > 1: expanded = r self.get_health_resource(subsystem, uri, health, expanded) elif isinstance(sub, dict): if '@odata.id' in sub: uri = sub.get('@odata.id') self.get_health_resource(subsystem, uri, health, None) elif 'Members' in data: for m in 
data.get('Members'): u = m.get('@odata.id') r = self.get_request(self.root_uri + u) if r.get('ret'): d = r.get('data') self.get_health_subsystem(subsystem, d, health) def get_health_report(self, category, uri, subsystems): result = {} health = {} status = 'Status' # Get health status of top level resource response = self.get_request(self.root_uri + uri) if response['ret'] is False: return response result['ret'] = True data = response['data'] health[category] = {status: data.get(status, "Status not available")} # Get health status of subsystems for sub in subsystems: d = None if sub.startswith('Links.'): # ex: Links.PCIeDevices sub = sub[len('Links.'):] d = data.get('Links', {}) elif '.' in sub: # ex: Thermal.Fans p, sub = sub.split('.') u = data.get(p, {}).get('@odata.id') if u: r = self.get_request(self.root_uri + u) if r['ret']: d = r['data'] if not d: continue else: # ex: Memory d = data health[sub] = [] self.get_health_subsystem(sub, d, health) if not health[sub]: del health[sub] result["entries"] = health return result def get_system_health_report(self, systems_uri): subsystems = ['Processors', 'Memory', 'SimpleStorage', 'Storage', 'EthernetInterfaces', 'NetworkInterfaces.NetworkPorts', 'NetworkInterfaces.NetworkDeviceFunctions'] return self.get_health_report('System', systems_uri, subsystems) def get_multi_system_health_report(self): return self.aggregate_systems(self.get_system_health_report) def get_chassis_health_report(self, chassis_uri): subsystems = ['Power.PowerSupplies', 'Thermal.Fans', 'Links.PCIeDevices'] return self.get_health_report('Chassis', chassis_uri, subsystems) def get_multi_chassis_health_report(self): return self.aggregate_chassis(self.get_chassis_health_report) def get_manager_health_report(self, manager_uri): subsystems = [] return self.get_health_report('Manager', manager_uri, subsystems) def get_multi_manager_health_report(self): return self.aggregate_managers(self.get_manager_health_report)
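The dotted subsystem specs accepted by `get_health_report` above (e.g. `'Thermal.Fans'`, `'Links.PCIeDevices'`, or a bare name like `'Memory'`) are dispatched by splitting on the first `'.'`. Below is a simplified, standalone sketch of that dispatch using plain dicts in place of Redfish responses; note it deliberately skips the extra GET the real code performs to dereference a nested resource's `@odata.id`, and `resolve_subsystem` is an illustrative name, not a function in the module:

```python
def resolve_subsystem(data, spec):
    """Resolve a health-report subsystem spec against an already-fetched
    resource dict. Mirrors the three cases in get_health_report:
    'Links.X' looks under the resource's Links object, 'Parent.X'
    selects the nested resource reference, and a bare name uses the
    resource itself. Returns (container, subsystem_name)."""
    if spec.startswith('Links.'):
        # ex: Links.PCIeDevices -> search the Links object
        return data.get('Links', {}), spec[len('Links.'):]
    if '.' in spec:
        # ex: Thermal.Fans -> the real code would GET Thermal's
        # @odata.id here; this sketch just returns the reference
        parent, sub = spec.split('.', 1)
        return data.get(parent, {}), sub
    # ex: Memory -> subsystem lives directly on this resource
    return data, spec
```

Used with a chassis-like dict, `resolve_subsystem(chassis, 'Thermal.Fans')` yields the `Thermal` reference and the key `'Fans'`, which is then walked by `get_health_subsystem`.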
closed
ansible/ansible
https://github.com/ansible/ansible
64475
redfish_config - Manager - SetManagerNic
##### SUMMARY This feature would implement a SetManagerNic command for the Manager category of redfish_config, to update Manager NIC configuration. ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME redfish_config ##### ADDITIONAL INFORMATION This command would help users configure their Manager NIC.
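A command like SetManagerNic typically reduces to a check-then-PATCH against the manager's EthernetInterface resource: read the current NIC properties, build a payload containing only the settings that differ, and PATCH it (or report no change). The helper below is a hypothetical standalone sketch of that diff step using plain dicts — the function name and behavior are illustrative assumptions, not the API of the eventual module:

```python
def build_nic_patch_payload(current, desired):
    """Compare desired EthernetInterface settings against the current
    resource and return (payload, changed), where payload holds only
    the properties that need to change. Nested objects such as VLAN
    are diffed recursively."""
    payload = {}
    for prop, value in desired.items():
        if prop not in current:
            # property not exposed by this EthernetInterface resource
            raise ValueError("Property %s not supported" % prop)
        if isinstance(value, dict):
            sub_payload, sub_changed = build_nic_patch_payload(current[prop], value)
            if sub_changed:
                payload[prop] = sub_payload
        elif current[prop] != value:
            payload[prop] = value
    return payload, bool(payload)
```

With `current = {'AutoNeg': True, 'VLAN': {'VLANEnable': False, 'VLANId': 0}}` and `desired = {'VLAN': {'VLANEnable': True}}`, this yields `({'VLAN': {'VLANEnable': True}}, True)`, so an idempotent module could skip the PATCH whenever `changed` is `False`.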
https://github.com/ansible/ansible/issues/64475
https://github.com/ansible/ansible/pull/64477
f51f87a986b54329e731e3cccb16049011009cb1
a7716ae7a91411a2ccfc51f4e80bc2257f196625
2019-11-06T02:42:58Z
python
2019-11-20T20:04:24Z
lib/ansible/module_utils/redfish_utils.py
# Copyright (c) 2017-2018 Dell EMC Inc. # GNU General Public License v3.0+ (see LICENSE or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type import json from ansible.module_utils.urls import open_url from ansible.module_utils._text import to_text from ansible.module_utils.six.moves import http_client from ansible.module_utils.six.moves.urllib.error import URLError, HTTPError GET_HEADERS = {'accept': 'application/json', 'OData-Version': '4.0'} POST_HEADERS = {'content-type': 'application/json', 'accept': 'application/json', 'OData-Version': '4.0'} PATCH_HEADERS = {'content-type': 'application/json', 'accept': 'application/json', 'OData-Version': '4.0'} DELETE_HEADERS = {'accept': 'application/json', 'OData-Version': '4.0'} DEPRECATE_MSG = 'Issuing a data modification command without specifying the '\ 'ID of the target %(resource)s resource when there is more '\ 'than one %(resource)s will use the first one in the '\ 'collection. 
Use the `resource_id` option to specify the '\ 'target %(resource)s ID' class RedfishUtils(object): def __init__(self, creds, root_uri, timeout, module, resource_id=None, data_modification=False): self.root_uri = root_uri self.creds = creds self.timeout = timeout self.module = module self.service_root = '/redfish/v1/' self.resource_id = resource_id self.data_modification = data_modification self._init_session() # The following functions are to send GET/POST/PATCH/DELETE requests def get_request(self, uri): try: resp = open_url(uri, method="GET", headers=GET_HEADERS, url_username=self.creds['user'], url_password=self.creds['pswd'], force_basic_auth=True, validate_certs=False, follow_redirects='all', use_proxy=False, timeout=self.timeout) data = json.loads(resp.read()) headers = dict((k.lower(), v) for (k, v) in resp.info().items()) except HTTPError as e: msg = self._get_extended_message(e) return {'ret': False, 'msg': "HTTP Error %s on GET request to '%s', extended message: '%s'" % (e.code, uri, msg), 'status': e.code} except URLError as e: return {'ret': False, 'msg': "URL Error on GET request to '%s': '%s'" % (uri, e.reason)} # Almost all errors should be caught above, but just in case except Exception as e: return {'ret': False, 'msg': "Failed GET request to '%s': '%s'" % (uri, to_text(e))} return {'ret': True, 'data': data, 'headers': headers} def post_request(self, uri, pyld): try: resp = open_url(uri, data=json.dumps(pyld), headers=POST_HEADERS, method="POST", url_username=self.creds['user'], url_password=self.creds['pswd'], force_basic_auth=True, validate_certs=False, follow_redirects='all', use_proxy=False, timeout=self.timeout) except HTTPError as e: msg = self._get_extended_message(e) return {'ret': False, 'msg': "HTTP Error %s on POST request to '%s', extended message: '%s'" % (e.code, uri, msg), 'status': e.code} except URLError as e: return {'ret': False, 'msg': "URL Error on POST request to '%s': '%s'" % (uri, e.reason)} # Almost all errors should be 
caught above, but just in case except Exception as e: return {'ret': False, 'msg': "Failed POST request to '%s': '%s'" % (uri, to_text(e))} return {'ret': True, 'resp': resp} def patch_request(self, uri, pyld): headers = PATCH_HEADERS r = self.get_request(uri) if r['ret']: # Get etag from etag header or @odata.etag property etag = r['headers'].get('etag') if not etag: etag = r['data'].get('@odata.etag') if etag: # Make copy of headers and add If-Match header headers = dict(headers) headers['If-Match'] = etag try: resp = open_url(uri, data=json.dumps(pyld), headers=headers, method="PATCH", url_username=self.creds['user'], url_password=self.creds['pswd'], force_basic_auth=True, validate_certs=False, follow_redirects='all', use_proxy=False, timeout=self.timeout) except HTTPError as e: msg = self._get_extended_message(e) return {'ret': False, 'msg': "HTTP Error %s on PATCH request to '%s', extended message: '%s'" % (e.code, uri, msg), 'status': e.code} except URLError as e: return {'ret': False, 'msg': "URL Error on PATCH request to '%s': '%s'" % (uri, e.reason)} # Almost all errors should be caught above, but just in case except Exception as e: return {'ret': False, 'msg': "Failed PATCH request to '%s': '%s'" % (uri, to_text(e))} return {'ret': True, 'resp': resp} def delete_request(self, uri, pyld=None): try: data = json.dumps(pyld) if pyld else None resp = open_url(uri, data=data, headers=DELETE_HEADERS, method="DELETE", url_username=self.creds['user'], url_password=self.creds['pswd'], force_basic_auth=True, validate_certs=False, follow_redirects='all', use_proxy=False, timeout=self.timeout) except HTTPError as e: msg = self._get_extended_message(e) return {'ret': False, 'msg': "HTTP Error %s on DELETE request to '%s', extended message: '%s'" % (e.code, uri, msg), 'status': e.code} except URLError as e: return {'ret': False, 'msg': "URL Error on DELETE request to '%s': '%s'" % (uri, e.reason)} # Almost all errors should be caught above, but just in case except 
Exception as e: return {'ret': False, 'msg': "Failed DELETE request to '%s': '%s'" % (uri, to_text(e))} return {'ret': True, 'resp': resp} @staticmethod def _get_extended_message(error): """ Get Redfish ExtendedInfo message from response payload if present :param error: an HTTPError exception :type error: HTTPError :return: the ExtendedInfo message if present, else standard HTTP error """ msg = http_client.responses.get(error.code, '') if error.code >= 400: try: body = error.read().decode('utf-8') data = json.loads(body) ext_info = data['error']['@Message.ExtendedInfo'] msg = ext_info[0]['Message'] except Exception: pass return msg def _init_session(self): pass def _find_accountservice_resource(self): response = self.get_request(self.root_uri + self.service_root) if response['ret'] is False: return response data = response['data'] if 'AccountService' not in data: return {'ret': False, 'msg': "AccountService resource not found"} else: account_service = data["AccountService"]["@odata.id"] response = self.get_request(self.root_uri + account_service) if response['ret'] is False: return response data = response['data'] accounts = data['Accounts']['@odata.id'] if accounts[-1:] == '/': accounts = accounts[:-1] self.accounts_uri = accounts return {'ret': True} def _find_sessionservice_resource(self): response = self.get_request(self.root_uri + self.service_root) if response['ret'] is False: return response data = response['data'] if 'SessionService' not in data: return {'ret': False, 'msg': "SessionService resource not found"} else: session_service = data["SessionService"]["@odata.id"] response = self.get_request(self.root_uri + session_service) if response['ret'] is False: return response data = response['data'] sessions = data['Sessions']['@odata.id'] if sessions[-1:] == '/': sessions = sessions[:-1] self.sessions_uri = sessions return {'ret': True} def _get_resource_uri_by_id(self, uris, id_prop): for uri in uris: response = self.get_request(self.root_uri + uri) if 
response['ret'] is False: continue data = response['data'] if id_prop == data.get('Id'): return uri return None def _find_systems_resource(self): response = self.get_request(self.root_uri + self.service_root) if response['ret'] is False: return response data = response['data'] if 'Systems' not in data: return {'ret': False, 'msg': "Systems resource not found"} response = self.get_request(self.root_uri + data['Systems']['@odata.id']) if response['ret'] is False: return response self.systems_uris = [ i['@odata.id'] for i in response['data'].get('Members', [])] if not self.systems_uris: return { 'ret': False, 'msg': "ComputerSystem's Members array is either empty or missing"} self.systems_uri = self.systems_uris[0] if self.data_modification: if self.resource_id: self.systems_uri = self._get_resource_uri_by_id(self.systems_uris, self.resource_id) if not self.systems_uri: return { 'ret': False, 'msg': "System resource %s not found" % self.resource_id} elif len(self.systems_uris) > 1: self.module.deprecate(DEPRECATE_MSG % {'resource': 'System'}, version='2.13') return {'ret': True} def _find_updateservice_resource(self): response = self.get_request(self.root_uri + self.service_root) if response['ret'] is False: return response data = response['data'] if 'UpdateService' not in data: return {'ret': False, 'msg': "UpdateService resource not found"} else: update = data["UpdateService"]["@odata.id"] self.update_uri = update response = self.get_request(self.root_uri + update) if response['ret'] is False: return response data = response['data'] self.firmware_uri = self.software_uri = None if 'FirmwareInventory' in data: self.firmware_uri = data['FirmwareInventory'][u'@odata.id'] if 'SoftwareInventory' in data: self.software_uri = data['SoftwareInventory'][u'@odata.id'] return {'ret': True} def _find_chassis_resource(self): response = self.get_request(self.root_uri + self.service_root) if response['ret'] is False: return response data = response['data'] if 'Chassis' not in data: 
return {'ret': False, 'msg': "Chassis resource not found"} chassis = data["Chassis"]["@odata.id"] response = self.get_request(self.root_uri + chassis) if response['ret'] is False: return response self.chassis_uris = [ i['@odata.id'] for i in response['data'].get('Members', [])] if not self.chassis_uris: return {'ret': False, 'msg': "Chassis Members array is either empty or missing"} self.chassis_uri = self.chassis_uris[0] if self.data_modification: if self.resource_id: self.chassis_uri = self._get_resource_uri_by_id(self.chassis_uris, self.resource_id) if not self.chassis_uri: return { 'ret': False, 'msg': "Chassis resource %s not found" % self.resource_id} elif len(self.chassis_uris) > 1: self.module.deprecate(DEPRECATE_MSG % {'resource': 'Chassis'}, version='2.13') return {'ret': True} def _find_managers_resource(self): response = self.get_request(self.root_uri + self.service_root) if response['ret'] is False: return response data = response['data'] if 'Managers' not in data: return {'ret': False, 'msg': "Manager resource not found"} manager = data["Managers"]["@odata.id"] response = self.get_request(self.root_uri + manager) if response['ret'] is False: return response self.manager_uris = [ i['@odata.id'] for i in response['data'].get('Members', [])] if not self.manager_uris: return {'ret': False, 'msg': "Managers Members array is either empty or missing"} self.manager_uri = self.manager_uris[0] if self.data_modification: if self.resource_id: self.manager_uri = self._get_resource_uri_by_id(self.manager_uris, self.resource_id) if not self.manager_uri: return { 'ret': False, 'msg': "Manager resource %s not found" % self.resource_id} elif len(self.manager_uris) > 1: self.module.deprecate(DEPRECATE_MSG % {'resource': 'Manager'}, version='2.13') return {'ret': True} def get_logs(self): log_svcs_uri_list = [] list_of_logs = [] properties = ['Severity', 'Created', 'EntryType', 'OemRecordFormat', 'Message', 'MessageId', 'MessageArgs'] # Find LogService response = 
self.get_request(self.root_uri + self.manager_uri) if response['ret'] is False: return response data = response['data'] if 'LogServices' not in data: return {'ret': False, 'msg': "LogServices resource not found"} # Find all entries in LogServices logs_uri = data["LogServices"]["@odata.id"] response = self.get_request(self.root_uri + logs_uri) if response['ret'] is False: return response data = response['data'] for log_svcs_entry in data.get('Members', []): response = self.get_request(self.root_uri + log_svcs_entry[u'@odata.id']) if response['ret'] is False: return response _data = response['data'] if 'Entries' in _data: log_svcs_uri_list.append(_data['Entries'][u'@odata.id']) # For each entry in LogServices, get log name and all log entries for log_svcs_uri in log_svcs_uri_list: logs = {} list_of_log_entries = [] response = self.get_request(self.root_uri + log_svcs_uri) if response['ret'] is False: return response data = response['data'] logs['Description'] = data.get('Description', 'Collection of log entries') # Get all log entries for each type of log found for logEntry in data.get('Members', []): entry = {} for prop in properties: if prop in logEntry: entry[prop] = logEntry.get(prop) if entry: list_of_log_entries.append(entry) log_name = log_svcs_uri.split('/')[-1] logs[log_name] = list_of_log_entries list_of_logs.append(logs) # list_of_logs[logs{list_of_log_entries[entry{}]}] return {'ret': True, 'entries': list_of_logs} def clear_logs(self): # Find LogService response = self.get_request(self.root_uri + self.manager_uri) if response['ret'] is False: return response data = response['data'] if 'LogServices' not in data: return {'ret': False, 'msg': "LogServices resource not found"} # Find all entries in LogServices logs_uri = data["LogServices"]["@odata.id"] response = self.get_request(self.root_uri + logs_uri) if response['ret'] is False: return response data = response['data'] for log_svcs_entry in data[u'Members']: response = self.get_request(self.root_uri + 
log_svcs_entry["@odata.id"]) if response['ret'] is False: return response _data = response['data'] # Check to make sure option is available, otherwise error is ugly if "Actions" in _data: if "#LogService.ClearLog" in _data[u"Actions"]: response = self.post_request(self.root_uri + _data[u"Actions"]["#LogService.ClearLog"]["target"], {}) if response['ret'] is False: return response return {'ret': True} def aggregate(self, func, uri_list, uri_name): ret = True entries = [] for uri in uri_list: inventory = func(uri) ret = inventory.pop('ret') and ret if 'entries' in inventory: entries.append(({uri_name: uri}, inventory['entries'])) return dict(ret=ret, entries=entries) def aggregate_chassis(self, func): return self.aggregate(func, self.chassis_uris, 'chassis_uri') def aggregate_managers(self, func): return self.aggregate(func, self.manager_uris, 'manager_uri') def aggregate_systems(self, func): return self.aggregate(func, self.systems_uris, 'system_uri') def get_storage_controller_inventory(self, systems_uri): result = {} controller_list = [] controller_results = [] # Get these entries, but does not fail if not found properties = ['CacheSummary', 'FirmwareVersion', 'Identifiers', 'Location', 'Manufacturer', 'Model', 'Name', 'PartNumber', 'SerialNumber', 'SpeedGbps', 'Status'] key = "StorageControllers" # Find Storage service response = self.get_request(self.root_uri + systems_uri) if response['ret'] is False: return response data = response['data'] if 'Storage' not in data: return {'ret': False, 'msg': "Storage resource not found"} # Get a list of all storage controllers and build respective URIs storage_uri = data['Storage']["@odata.id"] response = self.get_request(self.root_uri + storage_uri) if response['ret'] is False: return response result['ret'] = True data = response['data'] # Loop through Members and their StorageControllers # and gather properties from each StorageController if data[u'Members']: for storage_member in data[u'Members']: storage_member_uri =
storage_member[u'@odata.id'] response = self.get_request(self.root_uri + storage_member_uri) data = response['data'] if key in data: controller_list = data[key] for controller in controller_list: controller_result = {} for property in properties: if property in controller: controller_result[property] = controller[property] controller_results.append(controller_result) result['entries'] = controller_results return result else: return {'ret': False, 'msg': "Storage resource not found"} def get_multi_storage_controller_inventory(self): return self.aggregate_systems(self.get_storage_controller_inventory) def get_disk_inventory(self, systems_uri): result = {'entries': []} controller_list = [] # Get these entries, but does not fail if not found properties = ['BlockSizeBytes', 'CapableSpeedGbs', 'CapacityBytes', 'EncryptionAbility', 'EncryptionStatus', 'FailurePredicted', 'HotspareType', 'Id', 'Identifiers', 'Manufacturer', 'MediaType', 'Model', 'Name', 'PartNumber', 'PhysicalLocation', 'Protocol', 'Revision', 'RotationSpeedRPM', 'SerialNumber', 'Status'] # Find Storage service response = self.get_request(self.root_uri + systems_uri) if response['ret'] is False: return response data = response['data'] if 'SimpleStorage' not in data and 'Storage' not in data: return {'ret': False, 'msg': "SimpleStorage and Storage resource \ not found"} if 'Storage' in data: # Get a list of all storage controllers and build respective URIs storage_uri = data[u'Storage'][u'@odata.id'] response = self.get_request(self.root_uri + storage_uri) if response['ret'] is False: return response result['ret'] = True data = response['data'] if data[u'Members']: for controller in data[u'Members']: controller_list.append(controller[u'@odata.id']) for c in controller_list: uri = self.root_uri + c response = self.get_request(uri) if response['ret'] is False: return response data = response['data'] controller_name = 'Controller 1' if 'StorageControllers' in data: sc = data['StorageControllers'] if sc: if 
'Name' in sc[0]:
                    controller_name = sc[0]['Name']
                else:
                    sc_id = sc[0].get('Id', '1')
                    controller_name = 'Controller %s' % sc_id
                drive_results = []
                if 'Drives' in data:
                    for device in data[u'Drives']:
                        disk_uri = self.root_uri + device[u'@odata.id']
                        response = self.get_request(disk_uri)
                        data = response['data']

                        drive_result = {}
                        for property in properties:
                            if property in data:
                                if data[property] is not None:
                                    drive_result[property] = data[property]
                        drive_results.append(drive_result)
                drives = {'Controller': controller_name,
                          'Drives': drive_results}
                result["entries"].append(drives)

        if 'SimpleStorage' in data:
            # Get a list of all storage controllers and build respective URIs
            storage_uri = data["SimpleStorage"]["@odata.id"]
            response = self.get_request(self.root_uri + storage_uri)
            if response['ret'] is False:
                return response
            result['ret'] = True
            data = response['data']
            for controller in data[u'Members']:
                controller_list.append(controller[u'@odata.id'])

            for c in controller_list:
                uri = self.root_uri + c
                response = self.get_request(uri)
                if response['ret'] is False:
                    return response
                data = response['data']
                if 'Name' in data:
                    controller_name = data['Name']
                else:
                    sc_id = data.get('Id', '1')
                    controller_name = 'Controller %s' % sc_id
                drive_results = []
                for device in data[u'Devices']:
                    drive_result = {}
                    for property in properties:
                        if property in device:
                            drive_result[property] = device[property]
                    drive_results.append(drive_result)
                drives = {'Controller': controller_name,
                          'Drives': drive_results}
                result["entries"].append(drives)

        return result

    def get_multi_disk_inventory(self):
        return self.aggregate_systems(self.get_disk_inventory)

    def get_volume_inventory(self, systems_uri):
        result = {'entries': []}
        controller_list = []
        volume_list = []
        # Get these entries, but do not fail if not found
        properties = ['Id', 'Name', 'RAIDType', 'VolumeType', 'BlockSizeBytes',
                      'Capacity', 'CapacityBytes', 'CapacitySources',
                      'Encrypted', 'EncryptionTypes', 'Identifiers',
                      'Operations', 'OptimumIOSizeBytes', 'AccessCapabilities',
                      'AllocatedPools', 'Status']

        # Find Storage service
        response = self.get_request(self.root_uri + systems_uri)
        if response['ret'] is False:
            return response
        data = response['data']

        if 'SimpleStorage' not in data and 'Storage' not in data:
            return {'ret': False,
                    'msg': "SimpleStorage and Storage resource not found"}

        if 'Storage' in data:
            # Get a list of all storage controllers and build respective URIs
            storage_uri = data[u'Storage'][u'@odata.id']
            response = self.get_request(self.root_uri + storage_uri)
            if response['ret'] is False:
                return response
            result['ret'] = True
            data = response['data']

            if data.get('Members'):
                for controller in data[u'Members']:
                    controller_list.append(controller[u'@odata.id'])
                for c in controller_list:
                    uri = self.root_uri + c
                    response = self.get_request(uri)
                    if response['ret'] is False:
                        return response
                    data = response['data']
                    controller_name = 'Controller 1'
                    if 'StorageControllers' in data:
                        sc = data['StorageControllers']
                        if sc:
                            if 'Name' in sc[0]:
                                controller_name = sc[0]['Name']
                            else:
                                sc_id = sc[0].get('Id', '1')
                                controller_name = 'Controller %s' % sc_id
                    volume_results = []
                    if 'Volumes' in data:
                        # Get a list of all volumes and build respective URIs
                        volumes_uri = data[u'Volumes'][u'@odata.id']
                        response = self.get_request(self.root_uri + volumes_uri)
                        data = response['data']
                        if data.get('Members'):
                            for volume in data[u'Members']:
                                volume_list.append(volume[u'@odata.id'])
                            for v in volume_list:
                                uri = self.root_uri + v
                                response = self.get_request(uri)
                                if response['ret'] is False:
                                    return response
                                data = response['data']

                                volume_result = {}
                                for property in properties:
                                    if property in data:
                                        if data[property] is not None:
                                            volume_result[property] = data[property]

                                # Get related Drives Id
                                drive_id_list = []
                                if 'Links' in data:
                                    if 'Drives' in data[u'Links']:
                                        for link in data[u'Links'][u'Drives']:
                                            drive_id_link = link[u'@odata.id']
                                            drive_id = drive_id_link.split("/")[-1]
                                            drive_id_list.append({'Id': drive_id})
                                        volume_result['Linked_drives'] = drive_id_list
                                volume_results.append(volume_result)
                    volumes = {'Controller': controller_name,
                               'Volumes': volume_results}
                    result["entries"].append(volumes)
        else:
            return {'ret': False, 'msg': "Storage resource not found"}

        return result

    def get_multi_volume_inventory(self):
        return self.aggregate_systems(self.get_volume_inventory)

    def restart_manager_gracefully(self):
        result = {}
        key = "Actions"

        # Search for 'key' entry and extract URI from it
        response = self.get_request(self.root_uri + self.manager_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']
        action_uri = data[key]["#Manager.Reset"]["target"]

        payload = {'ResetType': 'GracefulRestart'}
        response = self.post_request(self.root_uri + action_uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True}

    def manage_indicator_led(self, command):
        result = {}
        key = 'IndicatorLED'
        payloads = {'IndicatorLedOn': 'Lit', 'IndicatorLedOff': 'Off',
                    'IndicatorLedBlink': 'Blinking'}

        for chassis_uri in self.chassis_uris:
            response = self.get_request(self.root_uri + chassis_uri)
            if response['ret'] is False:
                return response
            result['ret'] = True
            data = response['data']
            if key not in data:
                return {'ret': False, 'msg': "Key %s not found" % key}

            if command in payloads.keys():
                payload = {'IndicatorLED': payloads[command]}
                response = self.patch_request(self.root_uri + chassis_uri, payload)
                if response['ret'] is False:
                    return response
            else:
                return {'ret': False, 'msg': 'Invalid command'}

        return result

    def _map_reset_type(self, reset_type, allowable_values):
        equiv_types = {
            'On': 'ForceOn',
            'ForceOn': 'On',
            'ForceOff': 'GracefulShutdown',
            'GracefulShutdown': 'ForceOff',
            'GracefulRestart': 'ForceRestart',
            'ForceRestart': 'GracefulRestart'
        }

        if reset_type in allowable_values:
            return reset_type
        if reset_type not in equiv_types:
            return reset_type
        mapped_type = equiv_types[reset_type]
        if mapped_type in allowable_values:
            return mapped_type
        return reset_type

    def manage_system_power(self, command):
        key = "Actions"
        reset_type_values = ['On', 'ForceOff', 'GracefulShutdown',
                             'GracefulRestart', 'ForceRestart', 'Nmi',
                             'ForceOn', 'PushPowerButton', 'PowerCycle']

        # command should be PowerOn, PowerForceOff, etc.
        if not command.startswith('Power'):
            return {'ret': False, 'msg': 'Invalid Command (%s)' % command}
        reset_type = command[5:]

        # map Reboot to a ResetType that does a reboot
        if reset_type == 'Reboot':
            reset_type = 'GracefulRestart'

        if reset_type not in reset_type_values:
            return {'ret': False, 'msg': 'Invalid Command (%s)' % command}

        # read the system resource and get the current power state
        response = self.get_request(self.root_uri + self.systems_uri)
        if response['ret'] is False:
            return response
        data = response['data']
        power_state = data.get('PowerState')

        # if power is already in target state, nothing to do
        if power_state == "On" and reset_type in ['On', 'ForceOn']:
            return {'ret': True, 'changed': False}
        if power_state == "Off" and reset_type in ['GracefulShutdown', 'ForceOff']:
            return {'ret': True, 'changed': False}

        # get the #ComputerSystem.Reset Action and target URI
        if key not in data or '#ComputerSystem.Reset' not in data[key]:
            return {'ret': False, 'msg': 'Action #ComputerSystem.Reset not found'}
        reset_action = data[key]['#ComputerSystem.Reset']
        if 'target' not in reset_action:
            return {'ret': False,
                    'msg': 'target URI missing from Action #ComputerSystem.Reset'}
        action_uri = reset_action['target']

        # get AllowableValues from ActionInfo
        allowable_values = None
        if '@Redfish.ActionInfo' in reset_action:
            action_info_uri = reset_action.get('@Redfish.ActionInfo')
            response = self.get_request(self.root_uri + action_info_uri)
            if response['ret'] is True:
                data = response['data']
                if 'Parameters' in data:
                    params = data['Parameters']
                    for param in params:
                        if param.get('Name') == 'ResetType':
                            allowable_values = param.get('AllowableValues')
                            break

        # fallback to @Redfish.AllowableValues annotation
        if allowable_values is None:
            allowable_values = reset_action.get('ResetType@Redfish.AllowableValues', [])

        # map ResetType to an allowable value if needed
        if reset_type not in allowable_values:
            reset_type = self._map_reset_type(reset_type, allowable_values)

        # define payload
        payload = {'ResetType': reset_type}

        # POST to Action URI
        response = self.post_request(self.root_uri + action_uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True, 'changed': True}

    def _find_account_uri(self, username=None, acct_id=None):
        if not any((username, acct_id)):
            return {'ret': False, 'msg':
                    'Must provide either account_id or account_username'}

        response = self.get_request(self.root_uri + self.accounts_uri)
        if response['ret'] is False:
            return response
        data = response['data']

        uris = [a.get('@odata.id') for a in data.get('Members', [])
                if a.get('@odata.id')]
        for uri in uris:
            response = self.get_request(self.root_uri + uri)
            if response['ret'] is False:
                continue
            data = response['data']
            headers = response['headers']
            if username:
                if username == data.get('UserName'):
                    return {'ret': True, 'data': data,
                            'headers': headers, 'uri': uri}
            if acct_id:
                if acct_id == data.get('Id'):
                    return {'ret': True, 'data': data,
                            'headers': headers, 'uri': uri}

        return {'ret': False, 'no_match': True, 'msg':
                'No account with the given account_id or account_username found'}

    def _find_empty_account_slot(self):
        response = self.get_request(self.root_uri + self.accounts_uri)
        if response['ret'] is False:
            return response
        data = response['data']

        uris = [a.get('@odata.id') for a in data.get('Members', [])
                if a.get('@odata.id')]
        if uris:
            # first slot may be reserved, so move to end of list
            uris += [uris.pop(0)]
        for uri in uris:
            response = self.get_request(self.root_uri + uri)
            if response['ret'] is False:
                continue
            data = response['data']
            headers = response['headers']
            if data.get('UserName') == "" and not data.get('Enabled', True):
                return {'ret': True, 'data': data,
                        'headers': headers, 'uri': uri}

        return {'ret': False, 'no_match': True, 'msg':
                'No empty account slot found'}

    def list_users(self):
        result = {}
        # listing all users has always been slower than other operations, why?
        user_list = []
        users_results = []
        # Get these entries, but do not fail if not found
        properties = ['Id', 'Name', 'UserName', 'RoleId', 'Locked', 'Enabled']

        response = self.get_request(self.root_uri + self.accounts_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        for users in data.get('Members', []):
            user_list.append(users[u'@odata.id'])   # user_list[] are URIs

        # for each user, get details
        for uri in user_list:
            user = {}
            response = self.get_request(self.root_uri + uri)
            if response['ret'] is False:
                return response
            data = response['data']

            for property in properties:
                if property in data:
                    user[property] = data[property]

            users_results.append(user)
        result["entries"] = users_results
        return result

    def add_user_via_patch(self, user):
        if user.get('account_id'):
            # If Id slot specified, use it
            response = self._find_account_uri(acct_id=user.get('account_id'))
        else:
            # Otherwise find first empty slot
            response = self._find_empty_account_slot()
        if not response['ret']:
            return response

        uri = response['uri']
        payload = {}
        if user.get('account_username'):
            payload['UserName'] = user.get('account_username')
        if user.get('account_password'):
            payload['Password'] = user.get('account_password')
        if user.get('account_roleid'):
            payload['RoleId'] = user.get('account_roleid')
        response = self.patch_request(self.root_uri + uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True}

    def add_user(self, user):
        if not user.get('account_username'):
            return {'ret': False, 'msg':
                    'Must provide account_username for AddUser command'}

        response = self._find_account_uri(username=user.get('account_username'))
        if response['ret']:
            # account_username already exists, nothing to do
            return {'ret': True, 'changed': False}

        response = self.get_request(self.root_uri + self.accounts_uri)
        if not response['ret']:
            return response
        headers = response['headers']

        if 'allow' in headers:
            methods = [m.strip() for m in headers.get('allow').split(',')]
            if 'POST' not in methods:
                # if Allow header present and POST not listed, add via PATCH
                return self.add_user_via_patch(user)

        payload = {}
        if user.get('account_username'):
            payload['UserName'] = user.get('account_username')
        if user.get('account_password'):
            payload['Password'] = user.get('account_password')
        if user.get('account_roleid'):
            payload['RoleId'] = user.get('account_roleid')

        response = self.post_request(self.root_uri + self.accounts_uri, payload)
        if not response['ret']:
            if response.get('status') == 405:
                # if POST returned a 405, try to add via PATCH
                return self.add_user_via_patch(user)
            else:
                return response
        return {'ret': True}

    def enable_user(self, user):
        response = self._find_account_uri(username=user.get('account_username'),
                                          acct_id=user.get('account_id'))
        if not response['ret']:
            return response
        uri = response['uri']
        data = response['data']

        if data.get('Enabled', True):
            # account already enabled, nothing to do
            return {'ret': True, 'changed': False}

        payload = {'Enabled': True}
        response = self.patch_request(self.root_uri + uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True}

    def delete_user_via_patch(self, user, uri=None, data=None):
        if not uri:
            response = self._find_account_uri(username=user.get('account_username'),
                                              acct_id=user.get('account_id'))
            if not response['ret']:
                return response
            uri = response['uri']
            data = response['data']

        if data and data.get('UserName') == '' and not data.get('Enabled', False):
            # account UserName already cleared, nothing to do
            return {'ret': True, 'changed': False}

        payload = {'UserName': ''}
        if data.get('Enabled', False):
            payload['Enabled'] = False
        response = self.patch_request(self.root_uri + uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True}

    def delete_user(self, user):
        response = self._find_account_uri(username=user.get('account_username'),
                                          acct_id=user.get('account_id'))
        if not response['ret']:
            if response.get('no_match'):
                # account does not exist, nothing to do
                return {'ret': True, 'changed': False}
            else:
                # some error encountered
                return response

        uri = response['uri']
        headers = response['headers']
        data = response['data']

        if 'allow' in headers:
            methods = [m.strip() for m in headers.get('allow').split(',')]
            if 'DELETE' not in methods:
                # if Allow header present and DELETE not listed, del via PATCH
                return self.delete_user_via_patch(user, uri=uri, data=data)

        response = self.delete_request(self.root_uri + uri)
        if not response['ret']:
            if response.get('status') == 405:
                # if DELETE returned a 405, try to delete via PATCH
                return self.delete_user_via_patch(user, uri=uri, data=data)
            else:
                return response
        return {'ret': True}

    def disable_user(self, user):
        response = self._find_account_uri(username=user.get('account_username'),
                                          acct_id=user.get('account_id'))
        if not response['ret']:
            return response
        uri = response['uri']
        data = response['data']

        if not data.get('Enabled'):
            # account already disabled, nothing to do
            return {'ret': True, 'changed': False}

        payload = {'Enabled': False}
        response = self.patch_request(self.root_uri + uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True}

    def update_user_role(self, user):
        if not user.get('account_roleid'):
            return {'ret': False, 'msg':
                    'Must provide account_roleid for UpdateUserRole command'}

        response = self._find_account_uri(username=user.get('account_username'),
                                          acct_id=user.get('account_id'))
        if not response['ret']:
            return response
        uri = response['uri']
        data = response['data']

        if data.get('RoleId') == user.get('account_roleid'):
            # account already has RoleId, nothing to do
            return {'ret': True, 'changed': False}

        payload = {'RoleId': user.get('account_roleid')}
        response = self.patch_request(self.root_uri + uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True}

    def update_user_password(self, user):
        response = self._find_account_uri(username=user.get('account_username'),
                                          acct_id=user.get('account_id'))
        if not response['ret']:
            return response
        uri = response['uri']
        payload = {'Password': user['account_password']}
        response = self.patch_request(self.root_uri + uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True}

    def update_user_name(self, user):
        if not user.get('account_updatename'):
            return {'ret': False, 'msg':
                    'Must provide account_updatename for UpdateUserName command'}

        response = self._find_account_uri(username=user.get('account_username'),
                                          acct_id=user.get('account_id'))
        if not response['ret']:
            return response
        uri = response['uri']
        payload = {'UserName': user['account_updatename']}
        response = self.patch_request(self.root_uri + uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True}

    def update_accountservice_properties(self, user):
        if user.get('account_properties') is None:
            return {'ret': False, 'msg':
                    'Must provide account_properties for UpdateAccountServiceProperties command'}
        account_properties = user.get('account_properties')

        # Find AccountService
        response = self.get_request(self.root_uri + self.service_root)
        if response['ret'] is False:
            return response
        data = response['data']
        if 'AccountService' not in data:
            return {'ret': False, 'msg': "AccountService resource not found"}
        accountservice_uri = data["AccountService"]["@odata.id"]

        # Check support or not
        response = self.get_request(self.root_uri + accountservice_uri)
        if response['ret'] is False:
            return response
        data = response['data']
        for property_name in account_properties.keys():
            if property_name not in data:
                return {'ret': False, 'msg':
                        'property %s not supported' % property_name}

        # if properties already match, nothing to do
        need_change = False
        for property_name in account_properties.keys():
            if account_properties[property_name] != data[property_name]:
                need_change = True
                break

        if not need_change:
            return {'ret': True, 'changed': False,
                    'msg': "AccountService properties already set"}

        payload = account_properties
        response = self.patch_request(self.root_uri + accountservice_uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True, 'changed': True,
                'msg': "Modified AccountService properties"}

    def get_sessions(self):
        result = {}
        # listing all sessions has always been slower than other operations, why?
        session_list = []
        sessions_results = []
        # Get these entries, but do not fail if not found
        properties = ['Description', 'Id', 'Name', 'UserName']

        response = self.get_request(self.root_uri + self.sessions_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        for sessions in data[u'Members']:
            session_list.append(sessions[u'@odata.id'])   # session_list[] are URIs

        # for each session, get details
        for uri in session_list:
            session = {}
            response = self.get_request(self.root_uri + uri)
            if response['ret'] is False:
                return response
            data = response['data']

            for property in properties:
                if property in data:
                    session[property] = data[property]

            sessions_results.append(session)
        result["entries"] = sessions_results
        return result

    def get_firmware_update_capabilities(self):
        result = {}
        response = self.get_request(self.root_uri + self.update_uri)
        if response['ret'] is False:
            return response

        result['ret'] = True
        result['entries'] = {}

        data = response['data']

        if "Actions" in data:
            actions = data['Actions']
            if len(actions) > 0:
                for key in actions.keys():
                    action = actions.get(key)
                    if 'title' in action:
                        title = action['title']
                    else:
                        title = key
                    result['entries'][title] = action.get('TransferProtocol@Redfish.AllowableValues',
                                                          ["Key TransferProtocol@Redfish.AllowableValues not found"])
            else:
                return {'ret': False, 'msg': "Actions list is empty."}
        else:
            return {'ret': False, 'msg': "Key Actions not found."}
        return result

    def _software_inventory(self, uri):
        result = {}
        response = self.get_request(self.root_uri + uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        result['entries'] = []
        for member in data[u'Members']:
            uri = self.root_uri + member[u'@odata.id']
            # Get details for each software or firmware member
            response = self.get_request(uri)
            if response['ret'] is False:
                return response
            result['ret'] = True
            data = response['data']
            software = {}
            # Get these standard properties if present
            for key in ['Name', 'Id', 'Status', 'Version', 'Updateable',
                        'SoftwareId', 'LowestSupportedVersion', 'Manufacturer',
                        'ReleaseDate']:
                if key in data:
                    software[key] = data.get(key)
            result['entries'].append(software)
        return result

    def get_firmware_inventory(self):
        if self.firmware_uri is None:
            return {'ret': False, 'msg': 'No FirmwareInventory resource found'}
        else:
            return self._software_inventory(self.firmware_uri)

    def get_software_inventory(self):
        if self.software_uri is None:
            return {'ret': False, 'msg': 'No SoftwareInventory resource found'}
        else:
            return self._software_inventory(self.software_uri)

    def get_bios_attributes(self, systems_uri):
        result = {}
        bios_attributes = {}
        key = "Bios"

        # Search for 'key' entry and extract URI from it
        response = self.get_request(self.root_uri + systems_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        if key not in data:
            return {'ret': False, 'msg': "Key %s not found" % key}

        bios_uri = data[key]["@odata.id"]

        response = self.get_request(self.root_uri + bios_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']
        for attribute in data[u'Attributes'].items():
            bios_attributes[attribute[0]] = attribute[1]
        result["entries"] = bios_attributes
        return result

    def get_multi_bios_attributes(self):
        return self.aggregate_systems(self.get_bios_attributes)

    def _get_boot_options_dict(self, boot):
        # Get these entries from BootOption, if present
        properties = ['DisplayName', 'BootOptionReference']

        # Retrieve BootOptions if present
        if 'BootOptions' in boot and '@odata.id' in boot['BootOptions']:
            boot_options_uri = boot['BootOptions']["@odata.id"]
            # Get BootOptions resource
            response = self.get_request(self.root_uri + boot_options_uri)
            if response['ret'] is False:
                return {}
            data = response['data']

            # Retrieve Members array
            if 'Members' not in data:
                return {}
            members = data['Members']
        else:
            members = []

        # Build dict of BootOptions keyed by BootOptionReference
        boot_options_dict = {}
        for member in members:
            if '@odata.id' not in member:
                return {}
            boot_option_uri = member['@odata.id']
            response = self.get_request(self.root_uri + boot_option_uri)
            if response['ret'] is False:
                return {}
            data = response['data']
            if 'BootOptionReference' not in data:
                return {}
            boot_option_ref = data['BootOptionReference']

            # fetch the props to display for this boot device
            boot_props = {}
            for prop in properties:
                if prop in data:
                    boot_props[prop] = data[prop]

            boot_options_dict[boot_option_ref] = boot_props

        return boot_options_dict

    def get_boot_order(self, systems_uri):
        result = {}

        # Retrieve System resource
        response = self.get_request(self.root_uri + systems_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        # Confirm needed Boot properties are present
        if 'Boot' not in data or 'BootOrder' not in data['Boot']:
            return {'ret': False, 'msg': "Key BootOrder not found"}

        boot = data['Boot']
        boot_order = boot['BootOrder']
        boot_options_dict = self._get_boot_options_dict(boot)

        # Build boot device list
        boot_device_list = []
        for ref in boot_order:
            boot_device_list.append(
                boot_options_dict.get(ref, {'BootOptionReference': ref}))

        result["entries"] = boot_device_list
        return result

    def get_multi_boot_order(self):
        return self.aggregate_systems(self.get_boot_order)

    def get_boot_override(self, systems_uri):
        result = {}

        properties = ["BootSourceOverrideEnabled", "BootSourceOverrideTarget",
                      "BootSourceOverrideMode", "UefiTargetBootSourceOverride",
                      "BootSourceOverrideTarget@Redfish.AllowableValues"]

        response = self.get_request(self.root_uri + systems_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        if 'Boot' not in data:
            return {'ret': False, 'msg': "Key Boot not found"}

        boot = data['Boot']

        boot_overrides = {}
        if "BootSourceOverrideEnabled" in boot:
            if boot["BootSourceOverrideEnabled"] is not False:
                for property in properties:
                    if property in boot:
                        if boot[property] is not None:
                            boot_overrides[property] = boot[property]
        else:
            return {'ret': False, 'msg': "No boot override is enabled."}

        result['entries'] = boot_overrides
        return result

    def get_multi_boot_override(self):
        return self.aggregate_systems(self.get_boot_override)

    def set_bios_default_settings(self):
        result = {}
        key = "Bios"

        # Search for 'key' entry and extract URI from it
        response = self.get_request(self.root_uri + self.systems_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        if key not in data:
            return {'ret': False, 'msg': "Key %s not found" % key}

        bios_uri = data[key]["@odata.id"]

        # Extract proper URI
        response = self.get_request(self.root_uri + bios_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']
        reset_bios_settings_uri = data["Actions"]["#Bios.ResetBios"]["target"]

        response = self.post_request(self.root_uri + reset_bios_settings_uri, {})
        if response['ret'] is False:
            return response
        return {'ret': True, 'changed': True,
                'msg': "Set BIOS to default settings"}

    def set_one_time_boot_device(self, bootdevice, uefi_target, boot_next):
        result = {}
        key = "Boot"

        if not bootdevice:
            return {'ret': False,
                    'msg': "bootdevice option required for SetOneTimeBoot"}

        # Search for 'key' entry and extract URI from it
        response = self.get_request(self.root_uri + self.systems_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        if key not in data:
            return {'ret': False, 'msg': "Key %s not found" % key}

        boot = data[key]

        annotation = 'BootSourceOverrideTarget@Redfish.AllowableValues'
        if annotation in boot:
            allowable_values = boot[annotation]
            if isinstance(allowable_values, list) and bootdevice not in allowable_values:
                return {'ret': False,
                        'msg': "Boot device %s not in list of allowable values (%s)" %
                               (bootdevice, allowable_values)}

        # read existing values
        enabled = boot.get('BootSourceOverrideEnabled')
        target = boot.get('BootSourceOverrideTarget')
        cur_uefi_target = boot.get('UefiTargetBootSourceOverride')
        cur_boot_next = boot.get('BootNext')

        if bootdevice == 'UefiTarget':
            if not uefi_target:
                return {'ret': False,
                        'msg': "uefi_target option required to SetOneTimeBoot for UefiTarget"}
            if enabled == 'Once' and target == bootdevice and uefi_target == cur_uefi_target:
                # If properties are already set, no changes needed
                return {'ret': True, 'changed': False}
            payload = {
                'Boot': {
                    'BootSourceOverrideEnabled': 'Once',
                    'BootSourceOverrideTarget': bootdevice,
                    'UefiTargetBootSourceOverride': uefi_target
                }
            }
        elif bootdevice == 'UefiBootNext':
            if not boot_next:
                return {'ret': False,
                        'msg': "boot_next option required to SetOneTimeBoot for UefiBootNext"}
            if enabled == 'Once' and target == bootdevice and boot_next == cur_boot_next:
                # If properties are already set, no changes needed
                return {'ret': True, 'changed': False}
            payload = {
                'Boot': {
                    'BootSourceOverrideEnabled': 'Once',
                    'BootSourceOverrideTarget': bootdevice,
                    'BootNext': boot_next
                }
            }
        else:
            if enabled == 'Once' and target == bootdevice:
                # If properties are already set, no changes needed
                return {'ret': True, 'changed': False}
            payload = {
                'Boot': {
                    'BootSourceOverrideEnabled': 'Once',
                    'BootSourceOverrideTarget': bootdevice
                }
            }

        response = self.patch_request(self.root_uri + self.systems_uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True, 'changed': True}

    def set_bios_attributes(self, attributes):
        result = {}
        key = "Bios"

        # Search for 'key' entry and extract URI from it
        response = self.get_request(self.root_uri + self.systems_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        if key not in data:
            return {'ret': False, 'msg': "Key %s not found" % key}

        bios_uri = data[key]["@odata.id"]

        # Extract proper URI
        response = self.get_request(self.root_uri + bios_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        # Make a copy of the attributes dict
        attrs_to_patch = dict(attributes)

        # Check the attributes
        for attr in attributes:
            if attr not in data[u'Attributes']:
                return {'ret': False,
                        'msg': "BIOS attribute %s not found" % attr}
            # If already set to requested value, remove it from PATCH payload
            if data[u'Attributes'][attr] == attributes[attr]:
                del attrs_to_patch[attr]

        # Return success w/ changed=False if no attrs need to be changed
        if not attrs_to_patch:
            return {'ret': True, 'changed': False,
                    'msg': "BIOS attributes already set"}

        # Get the SettingsObject URI
        set_bios_attr_uri = data["@Redfish.Settings"]["SettingsObject"]["@odata.id"]

        # Construct payload and issue PATCH command
        payload = {"Attributes": attrs_to_patch}
        response = self.patch_request(self.root_uri + set_bios_attr_uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True, 'changed': True,
                'msg': "Modified BIOS attribute"}

    def set_boot_order(self, boot_list):
        if not boot_list:
            return {'ret': False,
                    'msg': "boot_order list required for SetBootOrder command"}

        systems_uri = self.systems_uri
        response = self.get_request(self.root_uri + systems_uri)
        if response['ret'] is False:
            return response
        data = response['data']

        # Confirm needed Boot properties are present
        if 'Boot' not in data or 'BootOrder' not in data['Boot']:
            return {'ret': False, 'msg': "Key BootOrder not found"}

        boot = data['Boot']
        boot_order = boot['BootOrder']
        boot_options_dict = self._get_boot_options_dict(boot)

        # validate boot_list against BootOptionReferences if available
        if boot_options_dict:
            boot_option_references = boot_options_dict.keys()
            for ref in boot_list:
                if ref not in boot_option_references:
                    return {'ret': False,
                            'msg': "BootOptionReference %s not found in BootOptions" % ref}

        # If requested BootOrder is already set, nothing to do
        if boot_order == boot_list:
            return {'ret': True, 'changed': False,
                    'msg': "BootOrder already set to %s" % boot_list}

        payload = {
            'Boot': {
                'BootOrder': boot_list
            }
        }
        response = self.patch_request(self.root_uri + systems_uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True, 'changed': True, 'msg': "BootOrder set"}

    def set_default_boot_order(self):
        systems_uri = self.systems_uri
        response = self.get_request(self.root_uri + systems_uri)
        if response['ret'] is False:
            return response
        data = response['data']

        # get the #ComputerSystem.SetDefaultBootOrder Action and target URI
        action = '#ComputerSystem.SetDefaultBootOrder'
        if 'Actions' not in data or action not in data['Actions']:
            return {'ret': False, 'msg': 'Action %s not found' % action}
        if 'target' not in data['Actions'][action]:
            return {'ret': False,
                    'msg': 'target URI missing from Action %s' % action}
        action_uri = data['Actions'][action]['target']

        # POST to Action URI
        payload = {}
        response = self.post_request(self.root_uri + action_uri, payload)
        if response['ret'] is False:
            return response
        return {'ret': True, 'changed': True,
                'msg': "BootOrder set to default"}

    def get_chassis_inventory(self):
        result = {}
        chassis_results = []

        # Get these entries, but do not fail if not found
        properties = ['ChassisType', 'PartNumber', 'AssetTag',
                      'Manufacturer', 'IndicatorLED', 'SerialNumber', 'Model']

        # Go through list
        for chassis_uri in self.chassis_uris:
            response = self.get_request(self.root_uri + chassis_uri)
            if response['ret'] is False:
                return response
            result['ret'] = True
            data = response['data']
            chassis_result = {}
            for property in properties:
                if property in data:
                    chassis_result[property] = data[property]
            chassis_results.append(chassis_result)

        result["entries"] = chassis_results
        return result

    def get_fan_inventory(self):
        result = {}
        fan_results = []
        key = "Thermal"
        # Get these entries, but do not fail if not found
        properties = ['FanName', 'Reading', 'ReadingUnits', 'Status']

        # Go through list
        for chassis_uri in self.chassis_uris:
            response = self.get_request(self.root_uri + chassis_uri)
            if response['ret'] is False:
                return response
            result['ret'] = True
            data = response['data']
            if key in data:
                # match: found an entry for "Thermal" information = fans
                thermal_uri = data[key]["@odata.id"]
                response = self.get_request(self.root_uri + thermal_uri)
                if response['ret'] is False:
                    return response
                result['ret'] = True
                data = response['data']

                for device in data[u'Fans']:
                    fan = {}
                    for property in properties:
                        if property in device:
                            fan[property] = device[property]
                    fan_results.append(fan)
        result["entries"] = fan_results
        return result

    def get_chassis_power(self):
        result = {}
        key = "Power"

        # Get these entries, but do not fail if not found
        properties = ['Name', 'PowerAllocatedWatts',
                      'PowerAvailableWatts', 'PowerCapacityWatts',
                      'PowerConsumedWatts', 'PowerMetrics',
                      'PowerRequestedWatts', 'RelatedItem', 'Status']

        chassis_power_results = []
        # Go through list
        for chassis_uri in self.chassis_uris:
            chassis_power_result = {}
            response = self.get_request(self.root_uri + chassis_uri)
            if response['ret'] is False:
                return response
            result['ret'] = True
            data = response['data']
            if key in data:
                response = self.get_request(self.root_uri + data[key]['@odata.id'])
                data = response['data']
                if 'PowerControl' in data:
                    if len(data['PowerControl']) > 0:
                        data = data['PowerControl'][0]
                        for property in properties:
                            if property in data:
                                chassis_power_result[property] = data[property]
                else:
                    return {'ret': False, 'msg': 'Key PowerControl not found.'}
                chassis_power_results.append(chassis_power_result)
            else:
                return {'ret': False, 'msg': 'Key Power not found.'}

        result['entries'] = chassis_power_results
        return result

    def get_chassis_thermals(self):
        result = {}
        sensors = []
        key = "Thermal"

        # Get these entries, but do not fail if not found
        properties = ['Name', 'PhysicalContext', 'UpperThresholdCritical',
                      'UpperThresholdFatal', 'UpperThresholdNonCritical',
                      'LowerThresholdCritical', 'LowerThresholdFatal',
                      'LowerThresholdNonCritical', 'MaxReadingRangeTemp',
                      'MinReadingRangeTemp', 'ReadingCelsius', 'RelatedItem',
                      'SensorNumber']

        # Go through list
        for chassis_uri in self.chassis_uris:
            response = self.get_request(self.root_uri + chassis_uri)
            if response['ret'] is False:
                return response
            result['ret'] = True
            data = response['data']
            if key in data:
                thermal_uri = data[key]["@odata.id"]
                response = self.get_request(self.root_uri + thermal_uri)
                if response['ret'] is False:
                    return response
                result['ret'] = True
                data = response['data']
                if "Temperatures" in data:
                    for sensor in data[u'Temperatures']:
                        sensor_result = {}
                        for property in properties:
                            if property in sensor:
                                if sensor[property] is not None:
                                    sensor_result[property] = sensor[property]
                        sensors.append(sensor_result)

        if sensors is None:
            return {'ret': False, 'msg': 'Key Temperatures was not found.'}

        result['entries'] = sensors
        return result

    def get_cpu_inventory(self, systems_uri):
        result = {}
        cpu_list = []
        cpu_results = []
        key = "Processors"
        # Get these entries, but do not fail if not found
        properties = ['Id', 'Manufacturer', 'Model', 'MaxSpeedMHz',
                      'TotalCores', 'TotalThreads', 'Status']

        # Search for 'key' entry and extract URI from it
        response = self.get_request(self.root_uri + systems_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        if key not in data:
            return {'ret': False, 'msg': "Key %s not found" % key}

        processors_uri = data[key]["@odata.id"]

        # Get a list of all CPUs and build respective URIs
        response = self.get_request(self.root_uri + processors_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        for cpu in data[u'Members']:
            cpu_list.append(cpu[u'@odata.id'])

        for c in cpu_list:
            cpu = {}
            uri = self.root_uri + c
            response = self.get_request(uri)
            if response['ret'] is False:
                return response
            data = response['data']

            for property in properties:
                if property in data:
                    cpu[property] = data[property]

            cpu_results.append(cpu)
        result["entries"] = cpu_results
        return result

    def get_multi_cpu_inventory(self):
        return self.aggregate_systems(self.get_cpu_inventory)

    def get_memory_inventory(self, systems_uri):
        result = {}
        memory_list = []
        memory_results = []
        key = "Memory"
        # Get these entries, but do not fail if not found
        properties = ['SerialNumber', 'MemoryDeviceType', 'PartNumber',
                      'MemoryLocation', 'RankCount', 'CapacityMiB',
                      'OperatingMemoryModes', 'Status', 'Manufacturer', 'Name']

        # Search for 'key' entry and extract URI from it
        response = self.get_request(self.root_uri + systems_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        if key not in data:
            return {'ret': False, 'msg': "Key %s not found" % key}

        memory_uri = data[key]["@odata.id"]

        # Get a list of all DIMMs and build respective URIs
        response = self.get_request(self.root_uri + memory_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        for dimm in data[u'Members']:
            memory_list.append(dimm[u'@odata.id'])

        for m in memory_list:
            dimm = {}
            uri = self.root_uri + m
            response = self.get_request(uri)
            if response['ret'] is False:
                return response
            data = response['data']

            if "Status" in data:
                if "State" in data["Status"]:
                    if data["Status"]["State"] == "Absent":
                        continue
            else:
                continue

            for property in properties:
                if property in data:
                    dimm[property] = data[property]

            memory_results.append(dimm)
        result["entries"] = memory_results
        return result

    def get_multi_memory_inventory(self):
        return self.aggregate_systems(self.get_memory_inventory)

    def get_nic_inventory(self, resource_uri):
        result = {}
        nic_list = []
        nic_results = []
        key = "EthernetInterfaces"
        # Get these entries, but do not fail if not found
        properties = ['Description', 'FQDN', 'IPv4Addresses', 'IPv6Addresses',
                      'NameServers', 'MACAddress', 'PermanentMACAddress',
                      'SpeedMbps', 'MTUSize', 'AutoNeg', 'Status']

        response = self.get_request(self.root_uri + resource_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        if key not in data:
            return {'ret': False, 'msg': "Key %s not found" % key}

        ethernetinterfaces_uri = data[key]["@odata.id"]

        # Get a list of all network controllers and build respective URIs
        response = self.get_request(self.root_uri + ethernetinterfaces_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        for nic in data[u'Members']:
            nic_list.append(nic[u'@odata.id'])

        for n in nic_list:
            nic = {}
            uri = self.root_uri + n
            response = self.get_request(uri)
            if response['ret'] is False:
                return response
            data = response['data']

            for property in properties:
                if property in data:
                    nic[property] = data[property]

            nic_results.append(nic)
        result["entries"] = nic_results
        return result

    def get_multi_nic_inventory(self, resource_type):
        ret = True
        entries = []

        # Given resource_type, use the proper URI
        if resource_type == 'Systems':
            resource_uris = self.systems_uris
        elif resource_type == 'Manager':
            resource_uris = self.manager_uris

        for resource_uri in resource_uris:
            inventory = self.get_nic_inventory(resource_uri)
            ret = inventory.pop('ret') and ret
            if 'entries' in inventory:
                entries.append(({'resource_uri': resource_uri},
                                inventory['entries']))
        return dict(ret=ret, entries=entries)

    def get_virtualmedia(self, resource_uri):
        result = {}
        virtualmedia_list = []
        virtualmedia_results = []
        key = "VirtualMedia"
        # Get these entries, but do not fail if not found
        properties = ['Description', 'ConnectedVia', 'Id', 'MediaTypes',
                      'Image', 'ImageName', 'Name', 'WriteProtected',
                      'TransferMethod', 'TransferProtocolType']

        response = self.get_request(self.root_uri + resource_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
        data = response['data']

        if key not in data:
            return {'ret': False, 'msg': "Key %s not found" % key}

        virtualmedia_uri = data[key]["@odata.id"]

        # Get a list of all virtual media and build respective URIs
        response = self.get_request(self.root_uri + virtualmedia_uri)
        if response['ret'] is False:
            return response
        result['ret'] = True
data = response['data'] for virtualmedia in data[u'Members']: virtualmedia_list.append(virtualmedia[u'@odata.id']) for n in virtualmedia_list: virtualmedia = {} uri = self.root_uri + n response = self.get_request(uri) if response['ret'] is False: return response data = response['data'] for property in properties: if property in data: virtualmedia[property] = data[property] virtualmedia_results.append(virtualmedia) result["entries"] = virtualmedia_results return result def get_multi_virtualmedia(self): ret = True entries = [] resource_uris = self.manager_uris for resource_uri in resource_uris: virtualmedia = self.get_virtualmedia(resource_uri) ret = virtualmedia.pop('ret') and ret if 'entries' in virtualmedia: entries.append(({'resource_uri': resource_uri}, virtualmedia['entries'])) return dict(ret=ret, entries=entries) def get_psu_inventory(self): result = {} psu_list = [] psu_results = [] key = "PowerSupplies" # Get these entries, but does not fail if not found properties = ['Name', 'Model', 'SerialNumber', 'PartNumber', 'Manufacturer', 'FirmwareVersion', 'PowerCapacityWatts', 'PowerSupplyType', 'Status'] # Get a list of all Chassis and build URIs, then get all PowerSupplies # from each Power entry in the Chassis chassis_uri_list = self.chassis_uris for chassis_uri in chassis_uri_list: response = self.get_request(self.root_uri + chassis_uri) if response['ret'] is False: return response result['ret'] = True data = response['data'] if 'Power' in data: power_uri = data[u'Power'][u'@odata.id'] else: continue response = self.get_request(self.root_uri + power_uri) data = response['data'] if key not in data: return {'ret': False, 'msg': "Key %s not found" % key} psu_list = data[key] for psu in psu_list: psu_not_present = False psu_data = {} for property in properties: if property in psu: if psu[property] is not None: if property == 'Status': if 'State' in psu[property]: if psu[property]['State'] == 'Absent': psu_not_present = True psu_data[property] = psu[property] if 
psu_not_present: continue psu_results.append(psu_data) result["entries"] = psu_results if not result["entries"]: return {'ret': False, 'msg': "No PowerSupply objects found"} return result def get_multi_psu_inventory(self): return self.aggregate_systems(self.get_psu_inventory) def get_system_inventory(self, systems_uri): result = {} inventory = {} # Get these entries, but does not fail if not found properties = ['Status', 'HostName', 'PowerState', 'Model', 'Manufacturer', 'PartNumber', 'SystemType', 'AssetTag', 'ServiceTag', 'SerialNumber', 'SKU', 'BiosVersion', 'MemorySummary', 'ProcessorSummary', 'TrustedModules'] response = self.get_request(self.root_uri + systems_uri) if response['ret'] is False: return response result['ret'] = True data = response['data'] for property in properties: if property in data: inventory[property] = data[property] result["entries"] = inventory return result def get_multi_system_inventory(self): return self.aggregate_systems(self.get_system_inventory) def get_network_protocols(self): result = {} service_result = {} # Find NetworkProtocol response = self.get_request(self.root_uri + self.manager_uri) if response['ret'] is False: return response data = response['data'] if 'NetworkProtocol' not in data: return {'ret': False, 'msg': "NetworkProtocol resource not found"} networkprotocol_uri = data["NetworkProtocol"]["@odata.id"] response = self.get_request(self.root_uri + networkprotocol_uri) if response['ret'] is False: return response data = response['data'] protocol_services = ['SNMP', 'VirtualMedia', 'Telnet', 'SSDP', 'IPMI', 'SSH', 'KVMIP', 'NTP', 'HTTP', 'HTTPS', 'DHCP', 'DHCPv6', 'RDP', 'RFB'] for protocol_service in protocol_services: if protocol_service in data.keys(): service_result[protocol_service] = data[protocol_service] result['ret'] = True result["entries"] = service_result return result def set_network_protocols(self, manager_services): # Check input data validity protocol_services = ['SNMP', 'VirtualMedia', 'Telnet', 'SSDP', 
'IPMI', 'SSH', 'KVMIP', 'NTP', 'HTTP', 'HTTPS', 'DHCP', 'DHCPv6', 'RDP', 'RFB'] protocol_state_onlist = ['true', 'True', True, 'on', 1] protocol_state_offlist = ['false', 'False', False, 'off', 0] payload = {} for service_name in manager_services.keys(): if service_name not in protocol_services: return {'ret': False, 'msg': "Service name %s is invalid" % service_name} payload[service_name] = {} for service_property in manager_services[service_name].keys(): value = manager_services[service_name][service_property] if service_property in ['ProtocolEnabled', 'protocolenabled']: if value in protocol_state_onlist: payload[service_name]['ProtocolEnabled'] = True elif value in protocol_state_offlist: payload[service_name]['ProtocolEnabled'] = False else: return {'ret': False, 'msg': "Value of property %s is invalid" % service_property} elif service_property in ['port', 'Port']: if isinstance(value, int): payload[service_name]['Port'] = value elif isinstance(value, str) and value.isdigit(): payload[service_name]['Port'] = int(value) else: return {'ret': False, 'msg': "Value of property %s is invalid" % service_property} else: payload[service_name][service_property] = value # Find NetworkProtocol response = self.get_request(self.root_uri + self.manager_uri) if response['ret'] is False: return response data = response['data'] if 'NetworkProtocol' not in data: return {'ret': False, 'msg': "NetworkProtocol resource not found"} networkprotocol_uri = data["NetworkProtocol"]["@odata.id"] # Check service property support or not response = self.get_request(self.root_uri + networkprotocol_uri) if response['ret'] is False: return response data = response['data'] for service_name in payload.keys(): if service_name not in data: return {'ret': False, 'msg': "%s service not supported" % service_name} for service_property in payload[service_name].keys(): if service_property not in data[service_name]: return {'ret': False, 'msg': "%s property for %s service not supported" % 
(service_property, service_name)} # if the protocol is already set, nothing to do need_change = False for service_name in payload.keys(): for service_property in payload[service_name].keys(): value = payload[service_name][service_property] if value != data[service_name][service_property]: need_change = True break if not need_change: return {'ret': True, 'changed': False, 'msg': "Manager NetworkProtocol services already set"} response = self.patch_request(self.root_uri + networkprotocol_uri, payload) if response['ret'] is False: return response return {'ret': True, 'changed': True, 'msg': "Modified Manager NetworkProtocol services"} @staticmethod def to_singular(resource_name): if resource_name.endswith('ies'): resource_name = resource_name[:-3] + 'y' elif resource_name.endswith('s'): resource_name = resource_name[:-1] return resource_name def get_health_resource(self, subsystem, uri, health, expanded): status = 'Status' if expanded: d = expanded else: r = self.get_request(self.root_uri + uri) if r.get('ret'): d = r.get('data') else: return if 'Members' in d: # collections case for m in d.get('Members'): u = m.get('@odata.id') r = self.get_request(self.root_uri + u) if r.get('ret'): p = r.get('data') if p: e = {self.to_singular(subsystem.lower()) + '_uri': u, status: p.get(status, "Status not available")} health[subsystem].append(e) else: # non-collections case e = {self.to_singular(subsystem.lower()) + '_uri': uri, status: d.get(status, "Status not available")} health[subsystem].append(e) def get_health_subsystem(self, subsystem, data, health): if subsystem in data: sub = data.get(subsystem) if isinstance(sub, list): for r in sub: if '@odata.id' in r: uri = r.get('@odata.id') expanded = None if '#' in uri and len(r) > 1: expanded = r self.get_health_resource(subsystem, uri, health, expanded) elif isinstance(sub, dict): if '@odata.id' in sub: uri = sub.get('@odata.id') self.get_health_resource(subsystem, uri, health, None) elif 'Members' in data: for m in 
data.get('Members'): u = m.get('@odata.id') r = self.get_request(self.root_uri + u) if r.get('ret'): d = r.get('data') self.get_health_subsystem(subsystem, d, health) def get_health_report(self, category, uri, subsystems): result = {} health = {} status = 'Status' # Get health status of top level resource response = self.get_request(self.root_uri + uri) if response['ret'] is False: return response result['ret'] = True data = response['data'] health[category] = {status: data.get(status, "Status not available")} # Get health status of subsystems for sub in subsystems: d = None if sub.startswith('Links.'): # ex: Links.PCIeDevices sub = sub[len('Links.'):] d = data.get('Links', {}) elif '.' in sub: # ex: Thermal.Fans p, sub = sub.split('.') u = data.get(p, {}).get('@odata.id') if u: r = self.get_request(self.root_uri + u) if r['ret']: d = r['data'] if not d: continue else: # ex: Memory d = data health[sub] = [] self.get_health_subsystem(sub, d, health) if not health[sub]: del health[sub] result["entries"] = health return result def get_system_health_report(self, systems_uri): subsystems = ['Processors', 'Memory', 'SimpleStorage', 'Storage', 'EthernetInterfaces', 'NetworkInterfaces.NetworkPorts', 'NetworkInterfaces.NetworkDeviceFunctions'] return self.get_health_report('System', systems_uri, subsystems) def get_multi_system_health_report(self): return self.aggregate_systems(self.get_system_health_report) def get_chassis_health_report(self, chassis_uri): subsystems = ['Power.PowerSupplies', 'Thermal.Fans', 'Links.PCIeDevices'] return self.get_health_report('Chassis', chassis_uri, subsystems) def get_multi_chassis_health_report(self): return self.aggregate_chassis(self.get_chassis_health_report) def get_manager_health_report(self, manager_uri): subsystems = [] return self.get_health_report('Manager', manager_uri, subsystems) def get_multi_manager_health_report(self): return self.aggregate_managers(self.get_manager_health_report)
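The `set_network_protocols` method above first coerces loose truthy/falsy states and string ports into a clean payload before PATCHing the BMC. A minimal standalone sketch of just that normalization step (the `normalize_protocols` helper name is my own, not part of the module):

```python
def normalize_protocols(manager_services):
    """Coerce ProtocolEnabled/Port values the way set_network_protocols does."""
    on_list = ['true', 'True', True, 'on', 1]
    off_list = ['false', 'False', False, 'off', 0]
    payload = {}
    for name, props in manager_services.items():
        payload[name] = {}
        for prop, value in props.items():
            if prop in ('ProtocolEnabled', 'protocolenabled'):
                # Normalize the many accepted "on"/"off" spellings to a bool
                if value in on_list:
                    payload[name]['ProtocolEnabled'] = True
                elif value in off_list:
                    payload[name]['ProtocolEnabled'] = False
                else:
                    raise ValueError("invalid value for %s" % prop)
            elif prop in ('Port', 'port'):
                # Accept ints or digit strings for the port number
                if isinstance(value, int):
                    payload[name]['Port'] = value
                elif isinstance(value, str) and value.isdigit():
                    payload[name]['Port'] = int(value)
                else:
                    raise ValueError("invalid value for %s" % prop)
            else:
                payload[name][prop] = value
    return payload


print(normalize_protocols({'SNMP': {'ProtocolEnabled': 'on', 'Port': '161'}}))
# -> {'SNMP': {'ProtocolEnabled': True, 'Port': 161}}
```

The real method additionally verifies each service and property against the BMC's `NetworkProtocol` resource and skips the PATCH entirely when nothing would change.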
closed
ansible/ansible
https://github.com/ansible/ansible
64,475
redfish_config - Manager - SetManagerNic
##### SUMMARY This feature would implement a SetManagerNic command for the Manager category of redfish_config, to update Manager NIC configuration. ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME redfish_config ##### ADDITIONAL INFORMATION This command would help users configure their Manager NIC.
https://github.com/ansible/ansible/issues/64475
https://github.com/ansible/ansible/pull/64477
f51f87a986b54329e731e3cccb16049011009cb1
a7716ae7a91411a2ccfc51f4e80bc2257f196625
2019-11-06T02:42:58Z
python
2019-11-20T20:04:24Z
lib/ansible/modules/remote_management/redfish/redfish_config.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright (c) 2017-2018 Dell EMC Inc. # GNU General Public License v3.0+ (see LICENSE or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = {'status': ['preview'], 'supported_by': 'community', 'metadata_version': '1.1'} DOCUMENTATION = ''' --- module: redfish_config version_added: "2.7" short_description: Manages Out-Of-Band controllers using Redfish APIs description: - Builds Redfish URIs locally and sends them to remote OOB controllers to set or update a configuration attribute. - Manages BIOS configuration settings. - Manages OOB controller configuration settings. options: category: required: true description: - Category to execute on OOB controller type: str command: required: true description: - List of commands to execute on OOB controller type: list baseuri: required: true description: - Base URI of OOB controller type: str username: required: true description: - User for authentication with OOB controller type: str version_added: "2.8" password: required: true description: - Password for authentication with OOB controller type: str bios_attribute_name: required: false description: - name of BIOS attr to update (deprecated - use bios_attributes instead) default: 'null' type: str version_added: "2.8" bios_attribute_value: required: false description: - value of BIOS attr to update (deprecated - use bios_attributes instead) default: 'null' type: str version_added: "2.8" bios_attributes: required: false description: - dictionary of BIOS attributes to update default: {} type: dict version_added: "2.10" timeout: description: - Timeout in seconds for URL requests to OOB controller default: 10 type: int version_added: "2.8" boot_order: required: false description: - list of BootOptionReference strings specifying the BootOrder default: [] type: list version_added: "2.10" network_protocols: required: false description: - setting 
dict of manager services to update type: dict version_added: "2.10" resource_id: required: false description: - The ID of the System, Manager or Chassis to modify type: str version_added: "2.10" author: "Jose Delarosa (@jose-delarosa)" ''' EXAMPLES = ''' - name: Set BootMode to UEFI redfish_config: category: Systems command: SetBiosAttributes resource_id: 437XR1138R2 bios_attributes: BootMode: "Uefi" baseuri: "{{ baseuri }}" username: "{{ username }}" password: "{{ password }}" - name: Set multiple BootMode attributes redfish_config: category: Systems command: SetBiosAttributes resource_id: 437XR1138R2 bios_attributes: BootMode: "Bios" OneTimeBootMode: "Enabled" BootSeqRetry: "Enabled" baseuri: "{{ baseuri }}" username: "{{ username }}" password: "{{ password }}" - name: Enable PXE Boot for NIC1 using deprecated options redfish_config: category: Systems command: SetBiosAttributes resource_id: 437XR1138R2 bios_attribute_name: PxeDev1EnDis bios_attribute_value: Enabled baseuri: "{{ baseuri }}" username: "{{ username }}" password: "{{ password }}" - name: Set BIOS default settings with a timeout of 20 seconds redfish_config: category: Systems command: SetBiosDefaultSettings resource_id: 437XR1138R2 baseuri: "{{ baseuri }}" username: "{{ username }}" password: "{{ password }}" timeout: 20 - name: Set boot order redfish_config: category: Systems command: SetBootOrder boot_order: - Boot0002 - Boot0001 - Boot0000 - Boot0003 - Boot0004 baseuri: "{{ baseuri }}" username: "{{ username }}" password: "{{ password }}" - name: Set boot order to the default redfish_config: category: Systems command: SetDefaultBootOrder baseuri: "{{ baseuri }}" username: "{{ username }}" password: "{{ password }}" - name: Set Manager Network Protocols redfish_config: category: Manager command: SetNetworkProtocols network_protocols: SNMP: ProtocolEnabled: True Port: 161 HTTP: ProtocolEnabled: False Port: 8080 baseuri: "{{ baseuri }}" username: "{{ username }}" password: "{{ password }}" ''' RETURN 
= ''' msg: description: Message with action result or error description returned: always type: str sample: "Action was successful" ''' from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.redfish_utils import RedfishUtils from ansible.module_utils._text import to_native # More will be added as module features are expanded CATEGORY_COMMANDS_ALL = { "Systems": ["SetBiosDefaultSettings", "SetBiosAttributes", "SetBootOrder", "SetDefaultBootOrder"], "Manager": ["SetNetworkProtocols"] } def main(): result = {} module = AnsibleModule( argument_spec=dict( category=dict(required=True), command=dict(required=True, type='list'), baseuri=dict(required=True), username=dict(required=True), password=dict(required=True, no_log=True), bios_attribute_name=dict(default='null'), bios_attribute_value=dict(default='null'), bios_attributes=dict(type='dict', default={}), timeout=dict(type='int', default=10), boot_order=dict(type='list', elements='str', default=[]), network_protocols=dict( type='dict', default={} ), resource_id=dict() ), supports_check_mode=False ) category = module.params['category'] command_list = module.params['command'] # admin credentials used for authentication creds = {'user': module.params['username'], 'pswd': module.params['password']} # timeout timeout = module.params['timeout'] # BIOS attributes to update bios_attributes = module.params['bios_attributes'] if module.params['bios_attribute_name'] != 'null': bios_attributes[module.params['bios_attribute_name']] = module.params[ 'bios_attribute_value'] module.deprecate(msg='The bios_attribute_name/bios_attribute_value ' 'options are deprecated. 
Use bios_attributes instead', version='2.10') # boot order boot_order = module.params['boot_order'] # System, Manager or Chassis ID to modify resource_id = module.params['resource_id'] # Build root URI root_uri = "https://" + module.params['baseuri'] rf_utils = RedfishUtils(creds, root_uri, timeout, module, resource_id=resource_id, data_modification=True) # Check that Category is valid if category not in CATEGORY_COMMANDS_ALL: module.fail_json(msg=to_native("Invalid Category '%s'. Valid Categories = %s" % (category, CATEGORY_COMMANDS_ALL.keys()))) # Check that all commands are valid for cmd in command_list: # Fail if even one command given is invalid if cmd not in CATEGORY_COMMANDS_ALL[category]: module.fail_json(msg=to_native("Invalid Command '%s'. Valid Commands = %s" % (cmd, CATEGORY_COMMANDS_ALL[category]))) # Organize by Categories / Commands if category == "Systems": # execute only if we find a System resource result = rf_utils._find_systems_resource() if result['ret'] is False: module.fail_json(msg=to_native(result['msg'])) for command in command_list: if command == "SetBiosDefaultSettings": result = rf_utils.set_bios_default_settings() elif command == "SetBiosAttributes": result = rf_utils.set_bios_attributes(bios_attributes) elif command == "SetBootOrder": result = rf_utils.set_boot_order(boot_order) elif command == "SetDefaultBootOrder": result = rf_utils.set_default_boot_order() elif category == "Manager": # execute only if we find a Manager service resource result = rf_utils._find_managers_resource() if result['ret'] is False: module.fail_json(msg=to_native(result['msg'])) for command in command_list: if command == "SetNetworkProtocols": result = rf_utils.set_network_protocols(module.params['network_protocols']) # Return data back or fail with proper message if result['ret'] is True: module.exit_json(changed=result['changed'], msg=to_native(result['msg'])) else: module.fail_json(msg=to_native(result['msg'])) if __name__ == '__main__': main()
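The category/command gate in `main()` is a plain table lookup; pulled out on its own (the `validate` helper is illustrative, not module API — the module itself calls `fail_json` instead of returning a message), it behaves like this:

```python
# Command table copied from the module
CATEGORY_COMMANDS_ALL = {
    "Systems": ["SetBiosDefaultSettings", "SetBiosAttributes",
                "SetBootOrder", "SetDefaultBootOrder"],
    "Manager": ["SetNetworkProtocols"],
}


def validate(category, command_list):
    """Mirror the fail-fast checks main() performs before dispatching."""
    if category not in CATEGORY_COMMANDS_ALL:
        return "Invalid Category '%s'" % category
    for cmd in command_list:
        # Fail if even one command given is invalid for the category
        if cmd not in CATEGORY_COMMANDS_ALL[category]:
            return "Invalid Command '%s'" % cmd
    return None


print(validate("Manager", ["SetNetworkProtocols"]))  # -> None
print(validate("Manager", ["SetBootOrder"]))         # -> Invalid Command 'SetBootOrder'
```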
closed
ansible/ansible
https://github.com/ansible/ansible
64,394
Region us-gov-east-1 does not seem to be available for aws module boto.iam
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> attempting to use iam_policy module in govcloud and it results in the error: "Region us-gov-east-1 does not seem to be available for aws module boto.iam. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path" the same playbook works fine in us-gov-west ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> iam_policy ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` { "exception": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1572882827.2-191594499800419/AnsiballZ_iam_policy.py\", line 114, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1572882827.2-191594499800419/AnsiballZ_iam_policy.py\", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1572882827.2-191594499800419/AnsiballZ_iam_policy.py\", line 49, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_iam_policy_payload_1NDwn0/__main__.py\", line 351, in <module>\n File \"/tmp/ansible_iam_policy_payload_1NDwn0/__main__.py\", line 325, in main\n File \"/tmp/ansible_iam_policy_payload_1NDwn0/ansible_iam_policy_payload.zip/ansible/module_utils/ec2.py\", line 340, in connect_to_aws\nansible.module_utils.ec2.AnsibleAWSError: Region us-gov-east-1 does not seem to be available for aws module boto.iam. 
If the region definitely exists, you may need to upgrade boto or extend with endpoints_path\n", "_ansible_no_log": false, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1572882827.2-191594499800419/AnsiballZ_iam_policy.py\", line 114, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1572882827.2-191594499800419/AnsiballZ_iam_policy.py\", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1572882827.2-191594499800419/AnsiballZ_iam_policy.py\", line 49, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_iam_policy_payload_1NDwn0/__main__.py\", line 351, in <module>\n File \"/tmp/ansible_iam_policy_payload_1NDwn0/__main__.py\", line 325, in main\n File \"/tmp/ansible_iam_policy_payload_1NDwn0/ansible_iam_policy_payload.zip/ansible/module_utils/ec2.py\", line 340, in connect_to_aws\nansible.module_utils.ec2.AnsibleAWSError: Region us-gov-east-1 does not seem to be available for aws module boto.iam. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path\n", "changed": false, "module_stdout": "", "rc": 1, "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error" } ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```no output ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. - dont think anything that stands out or is relevant in this case -> ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> use iam_policy with the region targeting us-gov-east-1, in our case we just wanted to create a policy. 
``` ``` <!--- Paste example playbooks or commands between quotes below --> ``` - name: Create IAM Policy for the Audit S3 bucket iam_policy: iam_type: role iam_name: "some_name" region: "{{ a var that results in us-gov-east-1 }}" policy_name: some_policy_name policy_json: "{{ lookup('stuff for policy json') }}" state: present ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> i expected the same results as with gov west, which was successful ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> the error described above <!--- Paste verbatim command output between quotes --> ``` have pasted above ```
https://github.com/ansible/ansible/issues/64394
https://github.com/ansible/ansible/pull/63924
426e37ea92db037fe9367a6daa4d17622b1faf1d
f1311d3e98118e98449b77637476aa14f6653bdf
2019-11-04T16:09:20Z
python
2019-11-20T23:59:02Z
changelogs/fragments/63924-boto3.yml
closed
ansible/ansible
https://github.com/ansible/ansible
64,394
Region us-gov-east-1 does not seem to be available for aws module boto.iam
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> attempting to use iam_policy module in govcloud and it results in the error: "Region us-gov-east-1 does not seem to be available for aws module boto.iam. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path" the same playbook works fine in us-gov-west ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> iam_policy ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` { "exception": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1572882827.2-191594499800419/AnsiballZ_iam_policy.py\", line 114, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1572882827.2-191594499800419/AnsiballZ_iam_policy.py\", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1572882827.2-191594499800419/AnsiballZ_iam_policy.py\", line 49, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_iam_policy_payload_1NDwn0/__main__.py\", line 351, in <module>\n File \"/tmp/ansible_iam_policy_payload_1NDwn0/__main__.py\", line 325, in main\n File \"/tmp/ansible_iam_policy_payload_1NDwn0/ansible_iam_policy_payload.zip/ansible/module_utils/ec2.py\", line 340, in connect_to_aws\nansible.module_utils.ec2.AnsibleAWSError: Region us-gov-east-1 does not seem to be available for aws module boto.iam. 
If the region definitely exists, you may need to upgrade boto or extend with endpoints_path\n", "_ansible_no_log": false, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1572882827.2-191594499800419/AnsiballZ_iam_policy.py\", line 114, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1572882827.2-191594499800419/AnsiballZ_iam_policy.py\", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1572882827.2-191594499800419/AnsiballZ_iam_policy.py\", line 49, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_iam_policy_payload_1NDwn0/__main__.py\", line 351, in <module>\n File \"/tmp/ansible_iam_policy_payload_1NDwn0/__main__.py\", line 325, in main\n File \"/tmp/ansible_iam_policy_payload_1NDwn0/ansible_iam_policy_payload.zip/ansible/module_utils/ec2.py\", line 340, in connect_to_aws\nansible.module_utils.ec2.AnsibleAWSError: Region us-gov-east-1 does not seem to be available for aws module boto.iam. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path\n", "changed": false, "module_stdout": "", "rc": 1, "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error" } ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```no output ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. - dont think anything that stands out or is relevant in this case -> ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> use iam_policy with the region targeting us-gov-east-1, in our case we just wanted to create a policy. 
``` ``` <!--- Paste example playbooks or commands between quotes below --> ``` - name: Create IAM Policy for the Audit S3 bucket iam_policy: iam_type: role iam_name: "some_name" region: "{{ a var that results in us-gov-east-1 }}" policy_name: some_policy_name policy_json: "{{ lookup('stuff for policy json') }}" state: present ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> i expected the same results as with gov west, which was successful ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> the error described above <!--- Paste verbatim command output between quotes --> ``` have pasted above ```
https://github.com/ansible/ansible/issues/64394
https://github.com/ansible/ansible/pull/63924
426e37ea92db037fe9367a6daa4d17622b1faf1d
f1311d3e98118e98449b77637476aa14f6653bdf
2019-11-04T16:09:20Z
python
2019-11-20T23:59:02Z
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
.. _porting_2.10_guide: ************************** Ansible 2.10 Porting Guide ************************** This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10. It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible. We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make. This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`. .. contents:: Topics Playbook ======== No notable changes Command Line ============ No notable changes Deprecated ========== No notable changes Modules ======= Modules removed --------------- The following modules no longer exist: * letsencrypt use :ref:`acme_certificate <acme_certificate_module>` instead. Deprecation notices ------------------- The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly. * ldap_attr use :ref:`ldap_attrs <ldap_attrs_module>` instead. The following functionality will be removed in Ansible 2.14. Please update your playbooks accordingly. * The :ref:`openssl_csr <openssl_csr_module>` module's option ``version`` no longer supports values other than ``1`` (currently the only standardized CSR version). * :ref:`docker_container <docker_container_module>`: the ``trust_image_content`` option will be removed. It has always been ignored by the module. * :ref:`iam_managed_policy <iam_managed_policy_module>`: the ``fail_on_delete`` option will be removed. It has always been ignored by the module. * :ref:`s3_lifecycle <s3_lifecycle_module>`: the ``requester_pays`` option will be removed. It has always been ignored by the module. * :ref:`s3_sync <s3_sync_module>`: the ``retries`` option will be removed.
It has always been ignored by the module. * The return values ``err`` and ``out`` of :ref:`docker_stack <docker_stack_module>` have been deprecated. Use ``stdout`` and ``stderr`` from now on instead. * :ref:`cloudformation <cloudformation_module>`: the ``template_format`` option will be removed. It has been ignored by the module since Ansible 2.3. * :ref:`data_pipeline <data_pipeline_module>`: the ``version`` option will be removed. It has always been ignored by the module. * :ref:`ec2_eip <ec2_eip_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.3. * :ref:`ec2_key <ec2_key_module>`: the ``wait`` option will be removed. It has had no effect since Ansible 2.5. * :ref:`ec2_key <ec2_key_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.5. * :ref:`ec2_lc <ec2_lc_module>`: the ``associate_public_ip_address`` option will be removed. It has always been ignored by the module. The following functionality will change in Ansible 2.14. Please update your playbooks accordingly. * The :ref:`docker_container <docker_container_module>` module has a new option, ``container_default_behavior``, whose default value will change from ``compatibility`` to ``no_defaults``. Set to an explicit value to avoid deprecation warnings. The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly. * ``vmware_dns_config`` use :ref:`vmware_host_dns <vmware_host_dns_module>` instead. Noteworthy module changes ------------------------- * :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``. * :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``. * :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``. * :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``. * The deprecated ``recurse`` option in :ref:`pacman <pacman_module>` module has been removed, you should use ``extra_args=--recursive`` instead. * :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module does not require VM name which was a required parameter for releases prior to Ansible 2.10. * :ref:`zabbix_action <zabbix_action_module>` no longer requires ``esc_period`` and ``event_source`` arguments when ``state=absent``. * :ref:`gitlab_user <gitlab_user_module>` no longer requires ``name``, ``email`` and ``password`` arguments when ``state=absent``. * :ref:`win_pester <win_pester_module>` no longer runs all ``*.ps1`` file in the directory specified due to it executing potentially unknown scripts. It will follow the default behaviour of only running tests for files that are like ``*.tests.ps1`` which is built into Pester itself Plugins ======= No notable changes Porting custom scripts ====================== No notable changes Networking ========== No notable changes
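For example, a task that previously relied on the removed ``recurse`` option of the :ref:`pacman <pacman_module>` module can be ported with ``extra_args`` as noted above (illustrative sketch; the package name is a placeholder):

.. code-block:: yaml

    - name: Remove a package together with its no-longer-needed dependencies
      pacman:
        name: somepackage        # placeholder package name
        state: absent
        extra_args: --recursive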
closed
ansible/ansible
https://github.com/ansible/ansible
64,394
Region us-gov-east-1 does not seem to be available for aws module boto.iam
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> attempting to use iam_policy module in govcloud and it results in the error: "Region us-gov-east-1 does not seem to be available for aws module boto.iam. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path" the same playbook works fine in us-gov-west ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> iam_policy ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` { "exception": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1572882827.2-191594499800419/AnsiballZ_iam_policy.py\", line 114, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1572882827.2-191594499800419/AnsiballZ_iam_policy.py\", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1572882827.2-191594499800419/AnsiballZ_iam_policy.py\", line 49, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_iam_policy_payload_1NDwn0/__main__.py\", line 351, in <module>\n File \"/tmp/ansible_iam_policy_payload_1NDwn0/__main__.py\", line 325, in main\n File \"/tmp/ansible_iam_policy_payload_1NDwn0/ansible_iam_policy_payload.zip/ansible/module_utils/ec2.py\", line 340, in connect_to_aws\nansible.module_utils.ec2.AnsibleAWSError: Region us-gov-east-1 does not seem to be available for aws module boto.iam. 
If the region definitely exists, you may need to upgrade boto or extend with endpoints_path\n", "_ansible_no_log": false, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1572882827.2-191594499800419/AnsiballZ_iam_policy.py\", line 114, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1572882827.2-191594499800419/AnsiballZ_iam_policy.py\", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1572882827.2-191594499800419/AnsiballZ_iam_policy.py\", line 49, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_iam_policy_payload_1NDwn0/__main__.py\", line 351, in <module>\n File \"/tmp/ansible_iam_policy_payload_1NDwn0/__main__.py\", line 325, in main\n File \"/tmp/ansible_iam_policy_payload_1NDwn0/ansible_iam_policy_payload.zip/ansible/module_utils/ec2.py\", line 340, in connect_to_aws\nansible.module_utils.ec2.AnsibleAWSError: Region us-gov-east-1 does not seem to be available for aws module boto.iam. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path\n", "changed": false, "module_stdout": "", "rc": 1, "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error" } ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```no output ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. - dont think anything that stands out or is relevant in this case -> ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> use iam_policy with the region targeting us-gov-east-1, in our case we just wanted to create a policy. 
``` ``` <!--- Paste example playbooks or commands between quotes below --> ``` - name: Create IAM Policy for the Audit S3 bucket iam_policy: iam_type: role iam_name: "some_name" region: "{{ a var that results in us-gov-east-1 }}" policy_name: some_policy_name policy_json: "{{ lookup('stuff for policy json') }}" state: present ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> i expected the same results as with gov west, which was successful ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> the error described above <!--- Paste verbatim command output between quotes --> ``` have pasted above ```
https://github.com/ansible/ansible/issues/64394
https://github.com/ansible/ansible/pull/63924
426e37ea92db037fe9367a6daa4d17622b1faf1d
f1311d3e98118e98449b77637476aa14f6653bdf
2019-11-04T16:09:20Z
python
2019-11-20T23:59:02Z
lib/ansible/modules/cloud/amazon/iam_policy.py
#!/usr/bin/python
# This file is part of Ansible
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

ANSIBLE_METADATA = {'metadata_version': '1.1',
                    'status': ['stableinterface'],
                    'supported_by': 'community'}

DOCUMENTATION = '''
---
module: iam_policy
short_description: Manage inline IAM policies for users, groups, and roles
description:
  - Allows uploading or removing inline IAM policies for IAM users, groups or roles.
  - To administer managed policies please see M(iam_user), M(iam_role), M(iam_group) and M(iam_managed_policy)
version_added: "2.0"
options:
  iam_type:
    description:
      - Type of IAM resource.
    required: true
    choices: [ "user", "group", "role"]
    type: str
  iam_name:
    description:
      - Name of IAM resource you wish to target for policy actions. In other words, the user name, group name or role name.
    required: true
    type: str
  policy_name:
    description:
      - The name label for the policy to create or remove.
    required: true
    type: str
  policy_document:
    description:
      - The path to the properly json formatted policy file.
      - Mutually exclusive with I(policy_json).
    type: str
  policy_json:
    description:
      - A properly json formatted policy as string.
      - Mutually exclusive with I(policy_document).
      - See U(https://github.com/ansible/ansible/issues/7005#issuecomment-42894813) on how to use it properly.
    type: json
  state:
    description:
      - Whether to create or delete the IAM policy.
    required: true
    choices: [ "present", "absent"]
    default: present
    type: str
  skip_duplicates:
    description:
      - By default the module looks for any policies that match the document you pass in, if there is a match it will not make
        a new policy object with the same rules. You can override this by specifying false which would allow for two policy
        objects with different names but same rules.
    default: True
    type: bool
author:
  - Jonathan I. Davila (@defionscode)
extends_documentation_fragment:
  - aws
  - ec2
'''

EXAMPLES = '''
# Create a policy with the name of 'Admin' to the group 'administrators'
- name: Assign a policy called Admin to the administrators group
  iam_policy:
    iam_type: group
    iam_name: administrators
    policy_name: Admin
    state: present
    policy_document: admin_policy.json

# Advanced example, create two new groups and add a READ-ONLY policy to both
# groups.
- name: Create Two Groups, Mario and Luigi
  iam:
    iam_type: group
    name: "{{ item }}"
    state: present
  loop:
    - Mario
    - Luigi
  register: new_groups

- name: Apply READ-ONLY policy to new groups that have been recently created
  iam_policy:
    iam_type: group
    iam_name: "{{ item.created_group.group_name }}"
    policy_name: "READ-ONLY"
    policy_document: readonlypolicy.json
    state: present
  loop: "{{ new_groups.results }}"

# Create a new S3 policy with prefix per user
- name: Create S3 policy from template
  iam_policy:
    iam_type: user
    iam_name: "{{ item.user }}"
    policy_name: "s3_limited_access_{{ item.prefix }}"
    state: present
    policy_json: " {{ lookup( 'template', 's3_policy.json.j2') }} "
  loop:
    - user: s3_user
      prefix: s3_user_prefix
'''

import json

try:
    import boto
    import boto.iam
    import boto.ec2
    HAS_BOTO = True
except ImportError:
    HAS_BOTO = False

from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ec2 import connect_to_aws, ec2_argument_spec, get_aws_connection_info, boto_exception
from ansible.module_utils.six import string_types
from ansible.module_utils.six.moves import urllib


def user_action(module, iam, name, policy_name, skip, pdoc, state):
    policy_match = False
    changed = False
    try:
        current_policies = [cp for cp in iam.get_all_user_policies(name).
                            list_user_policies_result.
                            policy_names]
        matching_policies = []
        for pol in current_policies:
            '''
            urllib is needed here because boto returns url encoded strings instead
            '''
            if urllib.parse.unquote(iam.get_user_policy(name, pol).
                                    get_user_policy_result.policy_document) == pdoc:
                policy_match = True
                matching_policies.append(pol)

        if state == 'present':
            # If policy document does not already exist (either it's changed
            # or the policy is not present) or if we're not skipping dupes then
            # make the put call.  Note that the put call does a create or update.
            if not policy_match or (not skip and policy_name not in matching_policies):
                changed = True
                iam.put_user_policy(name, policy_name, pdoc)
        elif state == 'absent':
            try:
                iam.delete_user_policy(name, policy_name)
                changed = True
            except boto.exception.BotoServerError as err:
                error_msg = boto_exception(err)
                if 'cannot be found.' in error_msg:
                    changed = False
                    module.exit_json(changed=changed, msg="%s policy is already absent" % policy_name)

        updated_policies = [cp for cp in iam.get_all_user_policies(name).
                            list_user_policies_result.
                            policy_names]
    except boto.exception.BotoServerError as err:
        error_msg = boto_exception(err)
        module.fail_json(changed=changed, msg=error_msg)

    return changed, name, updated_policies


def role_action(module, iam, name, policy_name, skip, pdoc, state):
    policy_match = False
    changed = False
    try:
        current_policies = [cp for cp in iam.list_role_policies(name).
                            list_role_policies_result.
                            policy_names]
    except boto.exception.BotoServerError as e:
        if e.error_code == "NoSuchEntity":
            # Role doesn't exist so it's safe to assume the policy doesn't either
            module.exit_json(changed=False, msg="No such role, policy will be skipped.")
        else:
            module.fail_json(msg=e.message)

    try:
        matching_policies = []
        for pol in current_policies:
            if urllib.parse.unquote(iam.get_role_policy(name, pol).
                                    get_role_policy_result.policy_document) == pdoc:
                policy_match = True
                matching_policies.append(pol)

        if state == 'present':
            # If policy document does not already exist (either it's changed
            # or the policy is not present) or if we're not skipping dupes then
            # make the put call.  Note that the put call does a create or update.
            if not policy_match or (not skip and policy_name not in matching_policies):
                changed = True
                iam.put_role_policy(name, policy_name, pdoc)
        elif state == 'absent':
            try:
                iam.delete_role_policy(name, policy_name)
                changed = True
            except boto.exception.BotoServerError as err:
                error_msg = boto_exception(err)
                if 'cannot be found.' in error_msg:
                    changed = False
                    module.exit_json(changed=changed, msg="%s policy is already absent" % policy_name)
                else:
                    module.fail_json(msg=err.message)

        updated_policies = [cp for cp in iam.list_role_policies(name).
                            list_role_policies_result.
                            policy_names]
    except boto.exception.BotoServerError as err:
        error_msg = boto_exception(err)
        module.fail_json(changed=changed, msg=error_msg)

    return changed, name, updated_policies


def group_action(module, iam, name, policy_name, skip, pdoc, state):
    policy_match = False
    changed = False
    msg = ''
    try:
        current_policies = [cp for cp in iam.get_all_group_policies(name).
                            list_group_policies_result.
                            policy_names]
        matching_policies = []
        for pol in current_policies:
            if urllib.parse.unquote(iam.get_group_policy(name, pol).
                                    get_group_policy_result.policy_document) == pdoc:
                policy_match = True
                matching_policies.append(pol)
                msg = ("The policy document you specified already exists "
                       "under the name %s." % pol)
        if state == 'present':
            # If policy document does not already exist (either it's changed
            # or the policy is not present) or if we're not skipping dupes then
            # make the put call.  Note that the put call does a create or update.
            if not policy_match or (not skip and policy_name not in matching_policies):
                changed = True
                iam.put_group_policy(name, policy_name, pdoc)
        elif state == 'absent':
            try:
                iam.delete_group_policy(name, policy_name)
                changed = True
            except boto.exception.BotoServerError as err:
                error_msg = boto_exception(err)
                if 'cannot be found.' in error_msg:
                    changed = False
                    module.exit_json(changed=changed, msg="%s policy is already absent" % policy_name)

        updated_policies = [cp for cp in iam.get_all_group_policies(name).
                            list_group_policies_result.
                            policy_names]
    except boto.exception.BotoServerError as err:
        error_msg = boto_exception(err)
        module.fail_json(changed=changed, msg=error_msg)

    return changed, name, updated_policies, msg


def main():
    argument_spec = ec2_argument_spec()
    argument_spec.update(dict(
        iam_type=dict(required=True, choices=['user', 'group', 'role']),
        state=dict(default='present', choices=['present', 'absent']),
        iam_name=dict(default=None, required=False),
        policy_name=dict(required=True),
        policy_document=dict(default=None, required=False),
        policy_json=dict(type='json', default=None, required=False),
        skip_duplicates=dict(type='bool', default=True, required=False)
    ))

    module = AnsibleModule(
        argument_spec=argument_spec,
    )

    if not HAS_BOTO:
        module.fail_json(msg='boto required for this module')

    iam_type = module.params.get('iam_type').lower()
    state = module.params.get('state')
    name = module.params.get('iam_name')
    policy_name = module.params.get('policy_name')
    skip = module.params.get('skip_duplicates')
    policy_document = module.params.get('policy_document')

    if policy_document is not None and module.params.get('policy_json') is not None:
        module.fail_json(msg='Only one of "policy_document" or "policy_json" may be set')

    if policy_document is not None:
        try:
            with open(policy_document, 'r') as json_data:
                pdoc = json.dumps(json.load(json_data))
        except IOError as e:
            if e.errno == 2:
                module.fail_json(
                    msg='policy_document {0!r} does not exist'.format(policy_document))
            else:
                raise
    elif module.params.get('policy_json') is not None:
        pdoc = module.params.get('policy_json')
        # if its a string, assume it is already JSON
        if not isinstance(pdoc, string_types):
            try:
                pdoc = json.dumps(pdoc)
            except Exception as e:
                module.fail_json(msg='Failed to convert the policy into valid JSON: %s' % str(e))
    else:
        pdoc = None

    region, ec2_url, aws_connect_kwargs = get_aws_connection_info(module)

    try:
        if region:
            iam = connect_to_aws(boto.iam, region, **aws_connect_kwargs)
        else:
            iam = boto.iam.connection.IAMConnection(**aws_connect_kwargs)
    except boto.exception.NoAuthHandlerFound as e:
        module.fail_json(msg=str(e))

    changed = False

    if iam_type == 'user':
        changed, user_name, current_policies = user_action(module, iam, name,
                                                           policy_name, skip, pdoc,
                                                           state)
        module.exit_json(changed=changed, user_name=name, policies=current_policies)
    elif iam_type == 'role':
        changed, role_name, current_policies = role_action(module, iam, name,
                                                           policy_name, skip, pdoc,
                                                           state)
        module.exit_json(changed=changed, role_name=name, policies=current_policies)
    elif iam_type == 'group':
        changed, group_name, current_policies, msg = group_action(module, iam, name,
                                                                  policy_name, skip, pdoc,
                                                                  state)
        module.exit_json(changed=changed, group_name=name, policies=current_policies, msg=msg)


if __name__ == '__main__':
    main()
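The error reported in the issue above ("Region us-gov-east-1 does not seem to be available") is raised when ``connect_to_aws`` cannot find the requested region in boto's bundled endpoint data. The sketch below is not boto or Ansible code; ``KNOWN_ENDPOINTS``, ``resolve_endpoint`` and the hostname scheme are invented purely to illustrate the difference between a strict lookup against a stale endpoint table and a fallback that derives the endpoint from the region name (roughly the behavior that avoids this class of failure in newer SDKs):

```python
# Illustrative sketch only: these names and the naming scheme are
# hypothetical, not part of boto or Ansible.

KNOWN_ENDPOINTS = {
    # A stale bundled endpoint table -- note us-gov-east-1 is missing.
    'us-east-1': 'iam.amazonaws.com',
    'us-gov-west-1': 'iam.us-gov.amazonaws.com',
}


def resolve_endpoint(region, strict=True):
    """Return an IAM endpoint hostname for a region.

    strict=True mimics a lookup against the bundled table and fails for
    unknown regions; strict=False falls back to deriving a hostname
    from the region name instead of failing.
    """
    if region in KNOWN_ENDPOINTS:
        return KNOWN_ENDPOINTS[region]
    if strict:
        raise ValueError(
            'Region %s does not seem to be available' % region)
    # Fallback: construct the endpoint from the region name.
    return 'iam.%s.amazonaws.com' % region
```

The eventual fix for this class of problem was to move the module off the old boto library entirely; the sketch only shows why a region that exists in AWS can still be "unavailable" to a client with a stale endpoint table.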
test/integration/targets/iam_policy/tasks/object.yml
---
- name: 'Run integration tests for IAM (inline) Policy management on {{ iam_type }}s'
  vars:
    iam_object_key: '{{ iam_type }}_name'
  block:
  # ============================================================
  - name: 'Fetch policies from {{ iam_type }} before making changes'
    iam_policy_info:
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
    register: iam_policy_info

  - name: 'Assert empty policy list'
    assert:
      that:
      - iam_policy_info is succeeded
      - iam_policy_info.policies | length == 0
      - iam_policy_info.all_policy_names | length == 0
      - iam_policy_info.policy_names | length == 0

  - name: 'Fetch policies from non-existent {{ iam_type }}'
    iam_policy_info:
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}-junk'
    register: iam_policy_info

  - name: 'Assert not failed'
    assert:
      that:
      - iam_policy_info is succeeded

  # ============================================================
  #- name: 'Create policy using document for {{ iam_type }} (check mode)'
  #  check_mode: yes
  #  iam_policy:
  #    state: present
  #    iam_type: '{{ iam_type }}'
  #    iam_name: '{{ iam_name }}'
  #    policy_name: '{{ iam_policy_name_a }}'
  #    policy_document: '{{ tmpdir.path }}/no_access.json'
  #    skip_duplicates: yes
  #  register: result
  #- name: 'Assert policy would be added for {{ iam_type }}'
  #  assert:
  #    that:
  #    - result is changed

  - name: 'Create policy using document for {{ iam_type }}'
    iam_policy:
      state: present
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
      policy_name: '{{ iam_policy_name_a }}'
      policy_document: '{{ tmpdir.path }}/no_access.json'
      skip_duplicates: yes
    register: result
  - iam_policy_info:
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
    register: iam_policy_info
  - name: 'Assert policy was added for {{ iam_type }}'
    assert:
      that:
      - result is changed
      - result.policies | length == 1
      - iam_policy_name_a in result.policies
      - result[iam_object_key] == iam_name
      - iam_policy_name_a in iam_policy_info.policy_names
      - iam_policy_info.policy_names | length == 1
      - iam_policy_info.policies | length == 1
      - iam_policy_name_a in iam_policy_info.all_policy_names
      - iam_policy_info.all_policy_names | length == 1
      - iam_policy_info.policies[0].policy_name == iam_policy_name_a
      - '"Id" not in iam_policy_info.policies[0].policy_document'

  - name: 'Create policy using document for {{ iam_type }} (idempotency)'
    iam_policy:
      state: present
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
      policy_name: '{{ iam_policy_name_a }}'
      policy_document: '{{ tmpdir.path }}/no_access.json'
      skip_duplicates: yes
    register: result
  - iam_policy_info:
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
    register: iam_policy_info
  - name: 'Assert no change'
    assert:
      that:
      - result is not changed
      - result.policies | length == 1
      - iam_policy_name_a in result.policies
      - result[iam_object_key] == iam_name
      - iam_policy_info.policies | length == 1
      - iam_policy_info.all_policy_names | length == 1
      - iam_policy_name_a in iam_policy_info.all_policy_names
      - iam_policy_info.policies[0].policy_name == iam_policy_name_a
      - '"Id" not in iam_policy_info.policies[0].policy_document'

  # ============================================================
  #- name: 'Create policy using document for {{ iam_type }} (check mode) (skip_duplicates)'
  #  check_mode: yes
  #  iam_policy:
  #    state: present
  #    iam_type: '{{ iam_type }}'
  #    iam_name: '{{ iam_name }}'
  #    policy_name: '{{ iam_policy_name_b }}'
  #    policy_document: '{{ tmpdir.path }}/no_access.json'
  #    skip_duplicates: yes
  #  register: result
  #- iam_policy_info:
  #    iam_type: '{{ iam_type }}'
  #    iam_name: '{{ iam_name }}'
  #    policy_name: '{{ iam_policy_name_b }}'
  #  register: iam_policy_info
  #- name: 'Assert policy would be added for {{ iam_type }}'
  #  assert:
  #    that:
  #    - result is not changed
  #    - iam_policy_info.all_policy_names | length == 1
  #    - '"policies" not in iam_policy_info'
  #    - iam_policy_name_b not in iam_policy_info.all_policy_names

  - name: 'Create policy using document for {{ iam_type }} (skip_duplicates)'
    iam_policy:
      state: present
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
      policy_name: '{{ iam_policy_name_b }}'
      policy_document: '{{ tmpdir.path }}/no_access.json'
      skip_duplicates: yes
    register: result
  - iam_policy_info:
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
      policy_name: '{{ iam_policy_name_b }}'
    register: iam_policy_info
  - name: 'Assert policy was not added for {{ iam_type }} (skip_duplicates)'
    assert:
      that:
      - result is not changed
      - result.policies | length == 1
      - iam_policy_name_b not in result.policies
      - result[iam_object_key] == iam_name
      - '"policies" not in iam_policy_info'
      - '"policy_names" not in iam_policy_info'
      - iam_policy_info.all_policy_names | length == 1
      - iam_policy_name_b not in iam_policy_info.all_policy_names

  #- name: 'Create policy using document for {{ iam_type }} (check mode) (skip_duplicates = no)'
  #  check_mode: yes
  #  iam_policy:
  #    state: present
  #    iam_type: '{{ iam_type }}'
  #    iam_name: '{{ iam_name }}'
  #    policy_name: '{{ iam_policy_name_b }}'
  #    policy_document: '{{ tmpdir.path }}/no_access.json'
  #    skip_duplicates: no
  #  register: result
  #- iam_policy_info:
  #    iam_type: '{{ iam_type }}'
  #    iam_name: '{{ iam_name }}'
  #    policy_name: '{{ iam_policy_name_b }}'
  #  register: iam_policy_info
  #- name: 'Assert policy would be added for {{ iam_type }}'
  #  assert:
  #    that:
  #    - result.changed == True
  #    - '"policies" not in iam_policy_info'
  #    - iam_policy_info.all_policy_names | length == 1
  #    - iam_policy_name_a in iam_policy_info.all_policy_names
  #    - iam_policy_name_b not in iam_policy_info.all_policy_names

  - name: 'Create policy using document for {{ iam_type }} (skip_duplicates = no)'
    iam_policy:
      state: present
      skip_duplicates: no
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
      policy_name: '{{ iam_policy_name_b }}'
      policy_document: '{{ tmpdir.path }}/no_access.json'
    register: result
  - iam_policy_info:
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
      policy_name: '{{ iam_policy_name_b }}'
    register: iam_policy_info
  - name: 'Assert policy was added for {{ iam_type }}'
    assert:
      that:
      - result is changed
      - result.policies | length == 2
      - iam_policy_name_b in result.policies
      - result[iam_object_key] == iam_name
      - iam_policy_info.policies | length == 1
      - iam_policy_info.all_policy_names | length == 2
      - iam_policy_name_a in iam_policy_info.all_policy_names
      - iam_policy_name_b in iam_policy_info.all_policy_names
      - iam_policy_info.policies[0].policy_name == iam_policy_name_b
      - '"Id" not in iam_policy_info.policies[0].policy_document'

  - name: 'Create policy using document for {{ iam_type }} (idempotency) (skip_duplicates = no)'
    iam_policy:
      state: present
      skip_duplicates: no
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
      policy_name: '{{ iam_policy_name_b }}'
      policy_document: '{{ tmpdir.path }}/no_access.json'
    register: result
  - iam_policy_info:
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
      policy_name: '{{ iam_policy_name_b }}'
    register: iam_policy_info
  - name: 'Assert no change'
    assert:
      that:
      - result is not changed
      - result.policies | length == 2
      - iam_policy_name_b in result.policies
      - result[iam_object_key] == iam_name
      - iam_policy_info.policies | length == 1
      - iam_policy_name_a in iam_policy_info.all_policy_names
      - iam_policy_name_b in iam_policy_info.all_policy_names
      - iam_policy_info.all_policy_names | length == 2
      - iam_policy_info.policies[0].policy_name == iam_policy_name_b
      - '"Id" not in iam_policy_info.policies[0].policy_document'

  # ============================================================
  #- name: 'Create policy using json for {{ iam_type }} (check mode)'
  #  check_mode: yes
  #  iam_policy:
  #    state: present
  #    iam_type: '{{ iam_type }}'
  #    iam_name: '{{ iam_name }}'
  #    policy_name: '{{ iam_policy_name_c }}'
  #    policy_json: '{{ lookup("file", "{{ tmpdir.path }}/no_access_with_id.json") }}'
  #    skip_duplicates: yes
  #  register: result
  #- iam_policy_info:
  #    iam_type: '{{ iam_type }}'
  #    iam_name: '{{ iam_name }}'
  #    policy_name: '{{ iam_policy_name_c }}'
  #  register: iam_policy_info
  #- name: 'Assert policy would be added for {{ iam_type }}'
  #  assert:
  #    that:
  #    - result is changed
  #    - '"policies" not in iam_policy_info'
  #    - iam_policy_info.all_policy_names | length == 2
  #    - iam_policy_name_c not in iam_policy_info.all_policy_names
  #    - iam_policy_name_a in iam_policy_info.all_policy_names
  #    - iam_policy_name_b in iam_policy_info.all_policy_names

  - name: 'Create policy using json for {{ iam_type }}'
    iam_policy:
      state: present
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
      policy_name: '{{ iam_policy_name_c }}'
      policy_json: '{{ lookup("file", "{{ tmpdir.path }}/no_access_with_id.json") }}'
      skip_duplicates: yes
    register: result
  - iam_policy_info:
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
      policy_name: '{{ iam_policy_name_c }}'
    register: iam_policy_info
  - name: 'Assert policy was added for {{ iam_type }}'
    assert:
      that:
      - result is changed
      - result.policies | length == 3
      - iam_policy_name_c in result.policies
      - result[iam_object_key] == iam_name
      - iam_policy_info.policies | length == 1
      - iam_policy_name_a in iam_policy_info.all_policy_names
      - iam_policy_name_b in iam_policy_info.all_policy_names
      - iam_policy_name_c in iam_policy_info.all_policy_names
      - iam_policy_info.all_policy_names | length == 3
      - iam_policy_info.policies[0].policy_name == iam_policy_name_c
      - iam_policy_info.policies[0].policy_document.Id == 'MyId'

  - name: 'Create policy using json for {{ iam_type }} (idempotency)'
    iam_policy:
      state: present
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
      policy_name: '{{ iam_policy_name_c }}'
      policy_json: '{{ lookup("file", "{{ tmpdir.path }}/no_access_with_id.json") }}'
      skip_duplicates: yes
    register: result
  - iam_policy_info:
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
      policy_name: '{{ iam_policy_name_c }}'
    register: iam_policy_info
  - name: 'Assert no change'
    assert:
      that:
      - result is not changed
      - result.policies | length == 3
      - iam_policy_name_c in result.policies
      - result[iam_object_key] == iam_name
      - iam_policy_name_a in iam_policy_info.all_policy_names
      - iam_policy_name_b in iam_policy_info.all_policy_names
      - iam_policy_name_c in iam_policy_info.all_policy_names
      - iam_policy_info.all_policy_names | length == 3
      - iam_policy_info.policies[0].policy_name == iam_policy_name_c
      - iam_policy_info.policies[0].policy_document.Id == 'MyId'

  # ============================================================
  #- name: 'Create policy using json for {{ iam_type }} (check mode) (skip_duplicates)'
  #  check_mode: yes
  #  iam_policy:
  #    state: present
  #    iam_type: '{{ iam_type }}'
  #    iam_name: '{{ iam_name }}'
  #    policy_name: '{{ iam_policy_name_d }}'
  #    policy_json: '{{ lookup("file", "{{ tmpdir.path }}/no_access_with_id.json") }}'
  #    skip_duplicates: yes
  #  register: result
  #- iam_policy_info:
  #    iam_type: '{{ iam_type }}'
  #    iam_name: '{{ iam_name }}'
  #    policy_name: '{{ iam_policy_name_d }}'
  #  register: iam_policy_info
  #- name: 'Assert policy would be added for {{ iam_type }}'
  #  assert:
  #    that:
  #    - result is not changed
  #    - iam_policy_name_a in iam_policy_info.all_policy_names
  #    - iam_policy_name_b in iam_policy_info.all_policy_names
  #    - iam_policy_name_c in iam_policy_info.all_policy_names
  #    - iam_policy_name_d not in iam_policy_info.all_policy_names
  #    - iam_policy_info.all_policy_names | length == 3
  #    - '"policies" not in iam_policy_info'

  - name: 'Create policy using json for {{ iam_type }} (skip_duplicates)'
    iam_policy:
      state: present
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
      policy_name: '{{ iam_policy_name_d }}'
      policy_json: '{{ lookup("file", "{{ tmpdir.path }}/no_access_with_id.json") }}'
      skip_duplicates: yes
    register: result
  - iam_policy_info:
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
      policy_name: '{{ iam_policy_name_d }}'
    register: iam_policy_info
  - name: 'Assert policy was not added for {{ iam_type }} (skip_duplicates)'
    assert:
      that:
      - result is not changed
      - result.policies | length == 3
      - iam_policy_name_d not in result.policies
      - result[iam_object_key] == iam_name
      - iam_policy_name_a in iam_policy_info.all_policy_names
      - iam_policy_name_b in iam_policy_info.all_policy_names
      - iam_policy_name_c in iam_policy_info.all_policy_names
      - iam_policy_name_d not in iam_policy_info.all_policy_names
      - iam_policy_info.all_policy_names | length == 3
      - '"policies" not in iam_policy_info'

  #- name: 'Create policy using json for {{ iam_type }} (check mode) (skip_duplicates = no)'
  #  check_mode: yes
  #  iam_policy:
  #    state: present
  #    iam_type: '{{ iam_type }}'
  #    iam_name: '{{ iam_name }}'
  #    policy_name: '{{ iam_policy_name_d }}'
  #    policy_json: '{{ lookup("file", "{{ tmpdir.path }}/no_access_with_id.json") }}'
  #    skip_duplicates: no
  #  register: result
  #- iam_policy_info:
  #    iam_type: '{{ iam_type }}'
  #    iam_name: '{{ iam_name }}'
  #    policy_name: '{{ iam_policy_name_d }}'
  #  register: iam_policy_info
  #- name: 'Assert policy would be added for {{ iam_type }}'
  #  assert:
  #    that:
  #    - result.changed == True

  - name: 'Create policy using json for {{ iam_type }} (skip_duplicates = no)'
    iam_policy:
      state: present
      skip_duplicates: no
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
      policy_name: '{{ iam_policy_name_d }}'
      policy_json: '{{ lookup("file", "{{ tmpdir.path }}/no_access_with_id.json") }}'
    register: result
  - iam_policy_info:
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
      policy_name: '{{ iam_policy_name_d }}'
    register: iam_policy_info
  - name: 'Assert policy was added for {{ iam_type }}'
    assert:
      that:
      - result is changed
      - result.policies | length == 4
      - iam_policy_name_d in result.policies
      - result[iam_object_key] == iam_name
      - iam_policy_name_a in iam_policy_info.all_policy_names
      - iam_policy_name_b in iam_policy_info.all_policy_names
      - iam_policy_name_c in iam_policy_info.all_policy_names
      - iam_policy_name_d in iam_policy_info.all_policy_names
      - iam_policy_name_a not in iam_policy_info.policy_names
      - iam_policy_name_b not in iam_policy_info.policy_names
      - iam_policy_name_c not in iam_policy_info.policy_names
      - iam_policy_name_d in iam_policy_info.policy_names
      - iam_policy_info.policy_names | length == 1
      - iam_policy_info.all_policy_names | length == 4
      - iam_policy_info.policies[0].policy_name == iam_policy_name_d
      - iam_policy_info.policies[0].policy_document.Id == 'MyId'

  - name: 'Create policy using json for {{ iam_type }} (idempotency) (skip_duplicates = no)'
    iam_policy:
      state: present
      skip_duplicates: no
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
      policy_name: '{{ iam_policy_name_d }}'
      policy_json: '{{ lookup("file", "{{ tmpdir.path }}/no_access_with_id.json") }}'
    register: result
  - iam_policy_info:
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
      policy_name: '{{ iam_policy_name_d }}'
    register: iam_policy_info
  - name: 'Assert no change'
    assert:
      that:
      - result is not changed
      - result.policies | length == 4
      - iam_policy_name_d in result.policies
      - result[iam_object_key] == iam_name
      - iam_policy_name_a in iam_policy_info.all_policy_names
      - iam_policy_name_b in iam_policy_info.all_policy_names
      - iam_policy_name_c in iam_policy_info.all_policy_names
      - iam_policy_name_d in iam_policy_info.all_policy_names
      - iam_policy_info.all_policy_names | length == 4
      - iam_policy_info.policies[0].policy_name == iam_policy_name_d
      - iam_policy_info.policies[0].policy_document.Id == 'MyId'

  # ============================================================
  - name: 'Test fetching multiple policies from {{ iam_type }}'
    iam_policy_info:
      iam_type: '{{ iam_type }}'
      iam_name: '{{ iam_name }}'
    register: iam_policy_info
  - name: 'Assert all policies returned'
    assert:
      that:
      - iam_policy_info is succeeded
      - iam_policy_info.policies | length == 4
      - iam_policy_info.all_policy_names | length == 4
      - iam_policy_name_a in iam_policy_info.all_policy_names
      - iam_policy_name_b in iam_policy_info.all_policy_names
      - iam_policy_name_c in iam_policy_info.all_policy_names
      - iam_policy_name_d in iam_policy_info.all_policy_names
      # Quick test that the policies are the ones we expect
      - iam_policy_info.policies | json_query('[*].policy_name') | length == 4
      -
iam_policy_info.policies | json_query('[?policy_document.Id == `MyId`].policy_name') | length == 2 - iam_policy_name_c in (iam_policy_info.policies | json_query('[?policy_document.Id == `MyId`].policy_name') | list) - iam_policy_name_d in (iam_policy_info.policies | json_query('[?policy_document.Id == `MyId`].policy_name') | list) # ============================================================ #- name: 'Update policy using document for {{ iam_type }} (check mode) (skip_duplicates)' # check_mode: yes # iam_policy: # state: present # iam_type: '{{ iam_type }}' # iam_name: '{{ iam_name }}' # policy_name: '{{ iam_policy_name_a }}' # policy_document: '{{ tmpdir.path }}/no_access_with_id.json' # skip_duplicates: yes # register: result #- iam_policy_info: # iam_type: '{{ iam_type }}' # iam_name: '{{ iam_name }}' # policy_name: '{{ iam_policy_name_a }}' # register: iam_policy_info #- name: 'Assert policy would be added for {{ iam_type }}' # assert: # that: # - result is not changed # - iam_policy_info.policies[0].policy_name == iam_policy_name_a # - '"Id" not in iam_policy_info.policies[0].policy_document' - name: 'Update policy using document for {{ iam_type }} (skip_duplicates)' iam_policy: state: present iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_a }}' policy_document: '{{ tmpdir.path }}/no_access_with_id.json' skip_duplicates: yes register: result - iam_policy_info: iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_a }}' register: iam_policy_info - name: 'Assert policy was not updated for {{ iam_type }} (skip_duplicates)' assert: that: - result is not changed - result.policies | length == 4 - iam_policy_name_a in result.policies - result[iam_object_key] == iam_name - iam_policy_info.all_policy_names | length == 4 - iam_policy_info.policies[0].policy_name == iam_policy_name_a - '"Id" not in iam_policy_info.policies[0].policy_document' #- name: 'Update policy using document for {{ iam_type }} 
(check mode) (skip_duplicates = no)' # check_mode: yes # iam_policy: # state: present # iam_type: '{{ iam_type }}' # iam_name: '{{ iam_name }}' # policy_name: '{{ iam_policy_name_a }}' # policy_document: '{{ tmpdir.path }}/no_access_with_id.json' # skip_duplicates: no # register: result #- iam_policy_info: # iam_type: '{{ iam_type }}' # iam_name: '{{ iam_name }}' # policy_name: '{{ iam_policy_name_a }}' # register: iam_policy_info #- name: 'Assert policy would be updated for {{ iam_type }}' # assert: # that: # - result.changed == True # - iam_policy_info.all_policy_names | length == 4 # - iam_policy_info.policies[0].policy_name == iam_policy_name_a # - '"Id" not in iam_policy_info.policies[0].policy_document' - name: 'Update policy using document for {{ iam_type }} (skip_duplicates = no)' iam_policy: state: present skip_duplicates: no iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_a }}' policy_document: '{{ tmpdir.path }}/no_access_with_id.json' register: result - iam_policy_info: iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_a }}' register: iam_policy_info - name: 'Assert policy was updated for {{ iam_type }}' assert: that: - result is changed - result.policies | length == 4 - iam_policy_name_a in result.policies - result[iam_object_key] == iam_name - iam_policy_info.policies[0].policy_document.Id == 'MyId' - name: 'Update policy using document for {{ iam_type }} (idempotency) (skip_duplicates = no)' iam_policy: state: present skip_duplicates: no iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_a }}' policy_document: '{{ tmpdir.path }}/no_access_with_id.json' register: result - iam_policy_info: iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_a }}' register: iam_policy_info - name: 'Assert no change' assert: that: - result is not changed - result.policies | length == 4 - iam_policy_name_a in result.policies - 
result[iam_object_key] == iam_name - iam_policy_info.policies[0].policy_document.Id == 'MyId' - name: 'Delete policy A' iam_policy: state: absent iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_a }}' register: result - iam_policy_info: iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_a }}' register: iam_policy_info - name: 'Assert deleted' assert: that: - result is changed - result.policies | length == 3 - iam_policy_name_a not in result.policies - result[iam_object_key] == iam_name - '"policies" not in iam_policy_info' - iam_policy_info.all_policy_names | length == 3 - iam_policy_name_a not in iam_policy_info.all_policy_names # ============================================================ # Update C with no_access.json # Delete C # #- name: 'Update policy using json for {{ iam_type }} (check mode) (skip_duplicates)' # check_mode: yes # iam_policy: # state: present # iam_type: '{{ iam_type }}' # iam_name: '{{ iam_name }}' # policy_name: '{{ iam_policy_name_c }}' # policy_json: '{{ lookup("file", "{{ tmpdir.path }}/no_access.json") }}' # skip_duplicates: yes # register: result #- iam_policy_info: # iam_type: '{{ iam_type }}' # iam_name: '{{ iam_name }}' # policy_name: '{{ iam_policy_name_c }}' # register: iam_policy_info #- name: 'Assert policy would be added for {{ iam_type }}' # assert: # that: # - result is not changed # - iam_policy_info.policies[0].policy_document.Id == 'MyId' - name: 'Update policy using json for {{ iam_type }} (skip_duplicates)' iam_policy: state: present iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_c }}' policy_json: '{{ lookup("file", "{{ tmpdir.path }}/no_access.json") }}' skip_duplicates: yes register: result - iam_policy_info: iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_c }}' register: iam_policy_info - name: 'Assert policy was not updated for {{ iam_type }} (skip_duplicates)' 
assert: that: - result is not changed - result.policies | length == 3 - iam_policy_name_c in result.policies - result[iam_object_key] == iam_name - iam_policy_info.policies[0].policy_document.Id == 'MyId' #- name: 'Update policy using json for {{ iam_type }} (check mode) (skip_duplicates = no)' # check_mode: yes # iam_policy: # state: present # iam_type: '{{ iam_type }}' # iam_name: '{{ iam_name }}' # policy_name: '{{ iam_policy_name_c }}' # policy_json: '{{ lookup("file", "{{ tmpdir.path }}/no_access.json") }}' # skip_duplicates: no # register: result #- iam_policy_info: # iam_type: '{{ iam_type }}' # iam_name: '{{ iam_name }}' # policy_name: '{{ iam_policy_name_c }}' # register: iam_policy_info #- name: 'Assert policy would be updated for {{ iam_type }}' # assert: # that: # - result.changed == True # - iam_policy_info.policies[0].policy_document.Id == 'MyId' - name: 'Update policy using json for {{ iam_type }} (skip_duplicates = no)' iam_policy: state: present skip_duplicates: no iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_c }}' policy_json: '{{ lookup("file", "{{ tmpdir.path }}/no_access.json") }}' register: result - iam_policy_info: iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_c }}' register: iam_policy_info - name: 'Assert policy was updated for {{ iam_type }}' assert: that: - result is changed - result.policies | length == 3 - iam_policy_name_c in result.policies - result[iam_object_key] == iam_name - '"Id" not in iam_policy_info.policies[0].policy_document' - name: 'Update policy using json for {{ iam_type }} (idempotency) (skip_duplicates = no)' iam_policy: state: present skip_duplicates: no iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_c }}' policy_json: '{{ lookup("file", "{{ tmpdir.path }}/no_access.json") }}' register: result - iam_policy_info: iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ 
iam_policy_name_c }}' register: iam_policy_info - name: 'Assert no change' assert: that: - result is not changed - result.policies | length == 3 - iam_policy_name_c in result.policies - result[iam_object_key] == iam_name - '"Id" not in iam_policy_info.policies[0].policy_document' - name: 'Delete policy C' iam_policy: state: absent iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_c }}' register: result - iam_policy_info: iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_c }}' register: iam_policy_info - name: 'Assert deleted' assert: that: - result is changed - result.policies | length == 2 - iam_policy_name_c not in result.policies - result[iam_object_key] == iam_name - '"policies" not in iam_policy_info' - iam_policy_info.all_policy_names | length == 2 - iam_policy_name_c not in iam_policy_info.all_policy_names # ============================================================ #- name: 'Update policy using document for {{ iam_type }} (check mode)' # check_mode: yes # iam_policy: # state: present # iam_type: '{{ iam_type }}' # iam_name: '{{ iam_name }}' # policy_name: '{{ iam_policy_name_b }}' # policy_document: '{{ tmpdir.path }}/no_access_with_second_id.json' # register: result #- iam_policy_info: # iam_type: '{{ iam_type }}' # iam_name: '{{ iam_name }}' # policy_name: '{{ iam_policy_name_b }}' # register: iam_policy_info #- name: 'Assert policy would be updated for {{ iam_type }}' # assert: # that: # - result.changed == True # - '"Id" not in iam_policy_info.policies[0].policy_document' - name: 'Update policy using document for {{ iam_type }}' iam_policy: state: present iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_b }}' policy_document: '{{ tmpdir.path }}/no_access_with_second_id.json' register: result - iam_policy_info: iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_b }}' register: iam_policy_info - name: 
'Assert policy was updated for {{ iam_type }}' assert: that: - result is changed - result.policies | length == 2 - iam_policy_name_b in result.policies - result[iam_object_key] == iam_name - iam_policy_info.policies[0].policy_document.Id == 'MyOtherId' - name: 'Update policy using document for {{ iam_type }} (idempotency)' iam_policy: state: present iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_b }}' policy_document: '{{ tmpdir.path }}/no_access_with_second_id.json' register: result - iam_policy_info: iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_b }}' register: iam_policy_info - name: 'Assert no change' assert: that: - result is not changed - result.policies | length == 2 - iam_policy_name_b in result.policies - result[iam_object_key] == iam_name - iam_policy_info.policies[0].policy_document.Id == 'MyOtherId' - name: 'Delete policy B' iam_policy: state: absent iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_b }}' register: result - iam_policy_info: iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_b }}' register: iam_policy_info - name: 'Assert deleted' assert: that: - result is changed - result.policies | length == 1 - iam_policy_name_b not in result.policies - result[iam_object_key] == iam_name - '"policies" not in iam_policy_info' - iam_policy_info.all_policy_names | length == 1 - iam_policy_name_b not in iam_policy_info.all_policy_names # ============================================================ #- name: 'Update policy using json for {{ iam_type }} (check mode)' # check_mode: yes # iam_policy: # state: present # iam_type: '{{ iam_type }}' # iam_name: '{{ iam_name }}' # policy_name: '{{ iam_policy_name_d }}' # policy_json: '{{ lookup("file", "{{ tmpdir.path }}/no_access_with_second_id.json") }}' # register: result #- iam_policy_info: # iam_type: '{{ iam_type }}' # iam_name: '{{ iam_name }}' # 
policy_name: '{{ iam_policy_name_d }}' # register: iam_policy_info #- name: 'Assert policy would be updated for {{ iam_type }}' # assert: # that: # - result.changed == True # - iam_policy_info.policies[0].policy_document.Id == 'MyId' - name: 'Update policy using json for {{ iam_type }}' iam_policy: state: present iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_d }}' policy_json: '{{ lookup("file", "{{ tmpdir.path }}/no_access_with_second_id.json") }}' register: result - iam_policy_info: iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_d }}' register: iam_policy_info - name: 'Assert policy was updated for {{ iam_type }}' assert: that: - result is changed - result.policies | length == 1 - iam_policy_name_d in result.policies - result[iam_object_key] == iam_name - iam_policy_info.policies[0].policy_document.Id == 'MyOtherId' - name: 'Update policy using json for {{ iam_type }} (idempotency)' iam_policy: state: present skip_duplicates: no iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_d }}' policy_json: '{{ lookup("file", "{{ tmpdir.path }}/no_access_with_second_id.json") }}' register: result - iam_policy_info: iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_d }}' register: iam_policy_info - name: 'Assert no change' assert: that: - result is not changed - result.policies | length == 1 - iam_policy_name_d in result.policies - result[iam_object_key] == iam_name - iam_policy_info.policies[0].policy_document.Id == 'MyOtherId' # ============================================================ #- name: 'Delete policy D (check_mode)' # check_mode: yes # iam_policy: # state: absent # iam_type: '{{ iam_type }}' # iam_name: '{{ iam_name }}' # policy_name: '{{ iam_policy_name_d }}' # register: result #- iam_policy_info: # iam_type: '{{ iam_type }}' # iam_name: '{{ iam_name }}' # policy_name: '{{ iam_policy_name_d }}' # register: 
iam_policy_info #- name: 'Assert not deleted' # assert: # that: # - result is changed # - result.policies | length == 1 # - iam_policy_name_d in result.policies # - result[iam_object_key] == iam_name # - iam_policy_info.all_policy_names | length == 1 # - iam_policy_name_d in iam_policy_info.all_policy_names # - iam_policy_info.policies[0].policy_document.Id == 'MyOtherId' - name: 'Delete policy D' iam_policy: state: absent iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_d }}' register: result - iam_policy_info: iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_d }}' register: iam_policy_info - name: 'Assert deleted' assert: that: - result is changed - '"policies" not in iam_policy_info' - iam_policy_name_d not in result.policies - result[iam_object_key] == iam_name - '"policies" not in iam_policy_info' - iam_policy_info.all_policy_names | length == 0 - name: 'Delete policy D (test idempotency)' iam_policy: state: absent iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_d }}' register: result - iam_policy_info: iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_d }}' register: iam_policy_info - name: 'Assert deleted' assert: that: - result is not changed - '"policies" not in iam_policy_info' - iam_policy_info.all_policy_names | length == 0 always: # ============================================================ - name: 'Delete policy A for {{ iam_type }}' iam_policy: state: absent iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_a }}' ignore_errors: yes - name: 'Delete policy B for {{ iam_type }}' iam_policy: state: absent iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_b }}' ignore_errors: yes - name: 'Delete policy C for {{ iam_type }}' iam_policy: state: absent iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ 
iam_policy_name_c }}' ignore_errors: yes - name: 'Delete policy D for {{ iam_type }}' iam_policy: state: absent iam_type: '{{ iam_type }}' iam_name: '{{ iam_name }}' policy_name: '{{ iam_policy_name_d }}' ignore_errors: yes
closed
ansible/ansible
https://github.com/ansible/ansible
65,020
AWS Application Load Balancer Fails with Multiple Host Headers
##### SUMMARY
When using the `host-header` condition for a load balancer rule if there are multiple values for the host header ansible fails to apply the changes with broken json output.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
elbv2

##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
  config file = None
  configured module search path = [u'/home/mjmayer/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /home/mjmayer/repos/ansible/lib/ansible
  executable location = /home/mjmayer/repos/ansible/bin/ansible
  python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```

##### CONFIGURATION
```
```

##### OS / ENVIRONMENT
Mint 19.2 / Ubuntu 18.04

##### STEPS TO REPRODUCE
```yaml
- name: add a rule that uses the host header condition to the listener
  elb_application_lb:
    name: "{{ alb_name }}"
    subnets: "{{ alb_subnets }}"
    security_groups: "{{ sec_group.group_id }}"
    state: present
    purge_rules: no
    listeners:
      - Protocol: HTTP
        Port: 80
        DefaultActions:
          - Type: forward
            TargetGroupName: "{{ tg_name }}"
        Rules:
          - Conditions:
              - Field: host-header
                Values:
                  - 'local.mydomain.com'
            Priority: '3'
            Actions:
              - TargetGroupName: "{{ tg_name }}"
                Type: forward
    <<: *aws_connection_info
  register: alb

- assert:
    that:
      - alb.changed
      - alb.listeners[0].rules|length == 4

- name: test replacing the rule that uses the host header condition with multiple host header conditions
  elb_application_lb:
    name: "{{ alb_name }}"
    subnets: "{{ alb_subnets }}"
    security_groups: "{{ sec_group.group_id }}"
    purge_rules: no
    state: present
    listeners:
      - Protocol: HTTP
        Port: 80
        DefaultActions:
          - Type: forward
            TargetGroupName: "{{ tg_name }}"
        Rules:
          - Conditions:
              - Field: host-header
                Values:
                  - 'local.mydomain.com'
                  - 'alternate.mydomain.com'
            Priority: '3'
            Actions:
              - TargetGroupName: "{{ tg_name }}"
                Type: forward
    <<: *aws_connection_info
  register: alb
```

##### EXPECTED RESULTS
The second application of `elb_application_lb` should succeed and add the second host-header.

##### ACTUAL RESULTS
Ansible prints invalid json to the screen with the following error.
```paste below
Module invocation had junk after the JSON data
```
https://github.com/ansible/ansible/issues/65020
https://github.com/ansible/ansible/pull/65021
d1c58bc94274c4e91370333467a0868f4456993c
d52af75c68b3f31c994d8b234b9c1e2387f9a4dd
2019-11-18T21:48:27Z
python
2019-11-21T16:42:37Z
lib/ansible/module_utils/aws/elbv2.py
# Copyright (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type # Ansible imports from ansible.module_utils.ec2 import camel_dict_to_snake_dict, get_ec2_security_group_ids_from_names, \ ansible_dict_to_boto3_tag_list, boto3_tag_list_to_ansible_dict, compare_policies as compare_dicts, \ AWSRetry from ansible.module_utils.aws.elb_utils import get_elb, get_elb_listener, convert_tg_name_to_arn # Non-ansible imports try: from botocore.exceptions import BotoCoreError, ClientError except ImportError: pass import traceback from copy import deepcopy class ElasticLoadBalancerV2(object): def __init__(self, connection, module): self.connection = connection self.module = module self.changed = False self.new_load_balancer = False self.scheme = module.params.get("scheme") self.name = module.params.get("name") self.subnet_mappings = module.params.get("subnet_mappings") self.subnets = module.params.get("subnets") self.deletion_protection = module.params.get("deletion_protection") self.wait = module.params.get("wait") if module.params.get("tags") is not None: self.tags = ansible_dict_to_boto3_tag_list(module.params.get("tags")) else: self.tags = None self.purge_tags = module.params.get("purge_tags") self.elb = get_elb(connection, module, self.name) if self.elb is not None: self.elb_attributes = self.get_elb_attributes() self.elb['tags'] = self.get_elb_tags() else: self.elb_attributes = None def wait_for_status(self, elb_arn): """ Wait for load balancer to reach 'active' status :param elb_arn: The load balancer ARN :return: """ try: waiter = self.connection.get_waiter('load_balancer_available') waiter.wait(LoadBalancerArns=[elb_arn]) except (BotoCoreError, ClientError) as e: self.module.fail_json_aws(e) def get_elb_attributes(self): """ Get load balancer attributes :return: """ try: attr_list = AWSRetry.jittered_backoff()( 
self.connection.describe_load_balancer_attributes )(LoadBalancerArn=self.elb['LoadBalancerArn'])['Attributes'] elb_attributes = boto3_tag_list_to_ansible_dict(attr_list) except (BotoCoreError, ClientError) as e: self.module.fail_json_aws(e) # Replace '.' with '_' in attribute key names to make it more Ansibley return dict((k.replace('.', '_'), v) for k, v in elb_attributes.items()) def update_elb_attributes(self): """ Update the elb_attributes parameter :return: """ self.elb_attributes = self.get_elb_attributes() def get_elb_tags(self): """ Get load balancer tags :return: """ try: return AWSRetry.jittered_backoff()( self.connection.describe_tags )(ResourceArns=[self.elb['LoadBalancerArn']])['TagDescriptions'][0]['Tags'] except (BotoCoreError, ClientError) as e: self.module.fail_json_aws(e) def delete_tags(self, tags_to_delete): """ Delete elb tags :return: """ try: AWSRetry.jittered_backoff()( self.connection.remove_tags )(ResourceArns=[self.elb['LoadBalancerArn']], TagKeys=tags_to_delete) except (BotoCoreError, ClientError) as e: self.module.fail_json_aws(e) self.changed = True def modify_tags(self): """ Modify elb tags :return: """ try: AWSRetry.jittered_backoff()( self.connection.add_tags )(ResourceArns=[self.elb['LoadBalancerArn']], Tags=self.tags) except (BotoCoreError, ClientError) as e: self.module.fail_json_aws(e) self.changed = True def delete(self): """ Delete elb :return: """ try: AWSRetry.jittered_backoff()( self.connection.delete_load_balancer )(LoadBalancerArn=self.elb['LoadBalancerArn']) except (BotoCoreError, ClientError) as e: self.module.fail_json_aws(e) self.changed = True def compare_subnets(self): """ Compare user subnets with current ELB subnets :return: bool True if they match otherwise False """ subnet_mapping_id_list = [] subnet_mappings = [] # Check if we're dealing with subnets or subnet_mappings if self.subnets is not None: # Convert subnets to subnet_mappings format for comparison for subnet in self.subnets: 
subnet_mappings.append({'SubnetId': subnet}) if self.subnet_mappings is not None: # Use this directly since we're comparing as a mapping subnet_mappings = self.subnet_mappings # Build a subnet_mapping style structure of what's currently # on the load balancer for subnet in self.elb['AvailabilityZones']: this_mapping = {'SubnetId': subnet['SubnetId']} for address in subnet.get('LoadBalancerAddresses', []): if 'AllocationId' in address: this_mapping['AllocationId'] = address['AllocationId'] break subnet_mapping_id_list.append(this_mapping) return set(frozenset(mapping.items()) for mapping in subnet_mapping_id_list) == set(frozenset(mapping.items()) for mapping in subnet_mappings) def modify_subnets(self): """ Modify elb subnets to match module parameters :return: """ try: AWSRetry.jittered_backoff()( self.connection.set_subnets )(LoadBalancerArn=self.elb['LoadBalancerArn'], Subnets=self.subnets) except (BotoCoreError, ClientError) as e: self.module.fail_json_aws(e) self.changed = True def update(self): """ Update the elb from AWS :return: """ self.elb = get_elb(self.connection, self.module, self.module.params.get("name")) self.elb['tags'] = self.get_elb_tags() class ApplicationLoadBalancer(ElasticLoadBalancerV2): def __init__(self, connection, connection_ec2, module): """ :param connection: boto3 connection :param module: Ansible module """ super(ApplicationLoadBalancer, self).__init__(connection, module) self.connection_ec2 = connection_ec2 # Ansible module parameters specific to ALBs self.type = 'application' if module.params.get('security_groups') is not None: try: self.security_groups = AWSRetry.jittered_backoff()( get_ec2_security_group_ids_from_names )(module.params.get('security_groups'), self.connection_ec2, boto3=True) except ValueError as e: self.module.fail_json(msg=str(e), exception=traceback.format_exc()) except (BotoCoreError, ClientError) as e: self.module.fail_json_aws(e) else: self.security_groups = module.params.get('security_groups')
self.access_logs_enabled = module.params.get("access_logs_enabled") self.access_logs_s3_bucket = module.params.get("access_logs_s3_bucket") self.access_logs_s3_prefix = module.params.get("access_logs_s3_prefix") self.idle_timeout = module.params.get("idle_timeout") self.http2 = module.params.get("http2") if self.elb is not None and self.elb['Type'] != 'application': self.module.fail_json(msg="The load balancer type you are trying to manage is not application. Try elb_network_lb module instead.") def create_elb(self): """ Create a load balancer :return: """ # Required parameters params = dict() params['Name'] = self.name params['Type'] = self.type # Other parameters if self.subnets is not None: params['Subnets'] = self.subnets if self.subnet_mappings is not None: params['SubnetMappings'] = self.subnet_mappings if self.security_groups is not None: params['SecurityGroups'] = self.security_groups params['Scheme'] = self.scheme if self.tags: params['Tags'] = self.tags try: self.elb = AWSRetry.jittered_backoff()(self.connection.create_load_balancer)(**params)['LoadBalancers'][0] self.changed = True self.new_load_balancer = True except (BotoCoreError, ClientError) as e: self.module.fail_json_aws(e) if self.wait: self.wait_for_status(self.elb['LoadBalancerArn']) def modify_elb_attributes(self): """ Update Application ELB attributes if required :return: """ update_attributes = [] if self.access_logs_enabled is not None and str(self.access_logs_enabled).lower() != self.elb_attributes['access_logs_s3_enabled']: update_attributes.append({'Key': 'access_logs.s3.enabled', 'Value': str(self.access_logs_enabled).lower()}) if self.access_logs_s3_bucket is not None and self.access_logs_s3_bucket != self.elb_attributes['access_logs_s3_bucket']: update_attributes.append({'Key': 'access_logs.s3.bucket', 'Value': self.access_logs_s3_bucket}) if self.access_logs_s3_prefix is not None and self.access_logs_s3_prefix != self.elb_attributes['access_logs_s3_prefix']: 
            update_attributes.append({'Key': 'access_logs.s3.prefix', 'Value': self.access_logs_s3_prefix})
        if self.deletion_protection is not None and str(self.deletion_protection).lower() != self.elb_attributes['deletion_protection_enabled']:
            update_attributes.append({'Key': 'deletion_protection.enabled', 'Value': str(self.deletion_protection).lower()})
        if self.idle_timeout is not None and str(self.idle_timeout) != self.elb_attributes['idle_timeout_timeout_seconds']:
            update_attributes.append({'Key': 'idle_timeout.timeout_seconds', 'Value': str(self.idle_timeout)})
        if self.http2 is not None and str(self.http2).lower() != self.elb_attributes['routing_http2_enabled']:
            update_attributes.append({'Key': 'routing.http2.enabled', 'Value': str(self.http2).lower()})

        if update_attributes:
            try:
                AWSRetry.jittered_backoff()(
                    self.connection.modify_load_balancer_attributes
                )(LoadBalancerArn=self.elb['LoadBalancerArn'], Attributes=update_attributes)
                self.changed = True
            except (BotoCoreError, ClientError) as e:
                # Something went wrong setting attributes. If this ELB was created during this task, delete it to leave a consistent state
                if self.new_load_balancer:
                    AWSRetry.jittered_backoff()(self.connection.delete_load_balancer)(LoadBalancerArn=self.elb['LoadBalancerArn'])
                self.module.fail_json_aws(e)

    def compare_security_groups(self):
        """
        Compare user security groups with current ELB security groups

        :return: bool True if they match otherwise False
        """

        if set(self.elb['SecurityGroups']) != set(self.security_groups):
            return False
        else:
            return True

    def modify_security_groups(self):
        """
        Modify elb security groups to match module parameters

        :return:
        """

        try:
            AWSRetry.jittered_backoff()(
                self.connection.set_security_groups
            )(LoadBalancerArn=self.elb['LoadBalancerArn'], SecurityGroups=self.security_groups)
        except (BotoCoreError, ClientError) as e:
            self.module.fail_json_aws(e)

        self.changed = True


class NetworkLoadBalancer(ElasticLoadBalancerV2):

    def __init__(self, connection, connection_ec2, module):
        """
        :param connection: boto3 connection
        :param module: Ansible module
        """

        super(NetworkLoadBalancer, self).__init__(connection, module)

        self.connection_ec2 = connection_ec2

        # Ansible module parameters specific to NLBs
        self.type = 'network'
        self.cross_zone_load_balancing = module.params.get('cross_zone_load_balancing')

        if self.elb is not None and self.elb['Type'] != 'network':
            self.module.fail_json(msg="The load balancer type you are trying to manage is not network. Try elb_application_lb module instead.")

    def create_elb(self):
        """
        Create a load balancer
        :return:
        """

        # Required parameters
        params = dict()
        params['Name'] = self.name
        params['Type'] = self.type

        # Other parameters
        if self.subnets is not None:
            params['Subnets'] = self.subnets
        if self.subnet_mappings is not None:
            params['SubnetMappings'] = self.subnet_mappings
        params['Scheme'] = self.scheme
        if self.tags:
            params['Tags'] = self.tags

        try:
            self.elb = AWSRetry.jittered_backoff()(self.connection.create_load_balancer)(**params)['LoadBalancers'][0]
            self.changed = True
            self.new_load_balancer = True
        except (BotoCoreError, ClientError) as e:
            self.module.fail_json_aws(e)

        if self.wait:
            self.wait_for_status(self.elb['LoadBalancerArn'])

    def modify_elb_attributes(self):
        """
        Update Network ELB attributes if required

        :return:
        """

        update_attributes = []

        if self.cross_zone_load_balancing is not None and str(self.cross_zone_load_balancing).lower() != \
                self.elb_attributes['load_balancing_cross_zone_enabled']:
            update_attributes.append({'Key': 'load_balancing.cross_zone.enabled', 'Value': str(self.cross_zone_load_balancing).lower()})
        if self.deletion_protection is not None and str(self.deletion_protection).lower() != self.elb_attributes['deletion_protection_enabled']:
            update_attributes.append({'Key': 'deletion_protection.enabled', 'Value': str(self.deletion_protection).lower()})

        if update_attributes:
            try:
                AWSRetry.jittered_backoff()(
                    self.connection.modify_load_balancer_attributes
                )(LoadBalancerArn=self.elb['LoadBalancerArn'], Attributes=update_attributes)
                self.changed = True
            except (BotoCoreError, ClientError) as e:
                # Something went wrong setting attributes. If this ELB was created during this task, delete it to leave a consistent state
                if self.new_load_balancer:
                    AWSRetry.jittered_backoff()(self.connection.delete_load_balancer)(LoadBalancerArn=self.elb['LoadBalancerArn'])
                self.module.fail_json_aws(e)

    def modify_subnets(self):
        """
        Modify elb subnets to match module parameters (unsupported for NLB)
        :return:
        """

        self.module.fail_json(msg='Modifying subnets and elastic IPs is not supported for Network Load Balancer')


class ELBListeners(object):

    def __init__(self, connection, module, elb_arn):

        self.connection = connection
        self.module = module
        self.elb_arn = elb_arn
        listeners = module.params.get("listeners")
        if listeners is not None:
            # Remove suboption argspec defaults of None from each listener
            listeners = [dict((x, listener_dict[x]) for x in listener_dict if listener_dict[x] is not None) for listener_dict in listeners]
        self.listeners = self._ensure_listeners_default_action_has_arn(listeners)
        self.current_listeners = self._get_elb_listeners()
        self.purge_listeners = module.params.get("purge_listeners")
        self.changed = False

    def update(self):
        """
        Update the listeners for the ELB

        :return:
        """

        self.current_listeners = self._get_elb_listeners()

    def _get_elb_listeners(self):
        """
        Get ELB listeners

        :return:
        """

        try:
            listener_paginator = self.connection.get_paginator('describe_listeners')
            return (AWSRetry.jittered_backoff()(listener_paginator.paginate)(LoadBalancerArn=self.elb_arn).build_full_result())['Listeners']
        except (BotoCoreError, ClientError) as e:
            self.module.fail_json_aws(e)

    def _ensure_listeners_default_action_has_arn(self, listeners):
        """
        If a listener DefaultAction has been passed with a Target Group Name instead of ARN, lookup the ARN and
        replace the name.

        :param listeners: a list of listener dicts
        :return: the same list of dicts ensuring that each listener DefaultActions dict has TargetGroupArn key.
                 If a TargetGroupName key exists, it is removed.
        """

        if not listeners:
            listeners = []

        fixed_listeners = []
        for listener in listeners:
            fixed_actions = []
            for action in listener['DefaultActions']:
                if 'TargetGroupName' in action:
                    action['TargetGroupArn'] = convert_tg_name_to_arn(self.connection,
                                                                      self.module,
                                                                      action['TargetGroupName'])
                    del action['TargetGroupName']
                fixed_actions.append(action)
            listener['DefaultActions'] = fixed_actions
            fixed_listeners.append(listener)

        return fixed_listeners

    def compare_listeners(self):
        """

        :return:
        """

        listeners_to_modify = []
        listeners_to_delete = []
        listeners_to_add = deepcopy(self.listeners)

        # Check each current listener port to see if it's been passed to the module
        for current_listener in self.current_listeners:
            current_listener_passed_to_module = False
            for new_listener in self.listeners[:]:
                new_listener['Port'] = int(new_listener['Port'])
                if current_listener['Port'] == new_listener['Port']:
                    current_listener_passed_to_module = True
                    # Remove what we match so that what is left can be marked as 'to be added'
                    listeners_to_add.remove(new_listener)
                    modified_listener = self._compare_listener(current_listener, new_listener)
                    if modified_listener:
                        modified_listener['Port'] = current_listener['Port']
                        modified_listener['ListenerArn'] = current_listener['ListenerArn']
                        listeners_to_modify.append(modified_listener)
                    break

            # If the current listener was not matched against passed listeners and purge is True, mark for removal
            if not current_listener_passed_to_module and self.purge_listeners:
                listeners_to_delete.append(current_listener['ListenerArn'])

        return listeners_to_add, listeners_to_modify, listeners_to_delete

    def _compare_listener(self, current_listener, new_listener):
        """
        Compare two listeners.

        :param current_listener:
        :param new_listener:
        :return:
        """

        modified_listener = {}

        # Port
        if current_listener['Port'] != new_listener['Port']:
            modified_listener['Port'] = new_listener['Port']

        # Protocol
        if current_listener['Protocol'] != new_listener['Protocol']:
            modified_listener['Protocol'] = new_listener['Protocol']

        # If Protocol is HTTPS, check additional attributes
        if current_listener['Protocol'] == 'HTTPS' and new_listener['Protocol'] == 'HTTPS':
            # Cert
            if current_listener['SslPolicy'] != new_listener['SslPolicy']:
                modified_listener['SslPolicy'] = new_listener['SslPolicy']
            if current_listener['Certificates'][0]['CertificateArn'] != new_listener['Certificates'][0]['CertificateArn']:
                modified_listener['Certificates'] = []
                modified_listener['Certificates'].append({})
                modified_listener['Certificates'][0]['CertificateArn'] = new_listener['Certificates'][0]['CertificateArn']
        elif current_listener['Protocol'] != 'HTTPS' and new_listener['Protocol'] == 'HTTPS':
            modified_listener['SslPolicy'] = new_listener['SslPolicy']
            modified_listener['Certificates'] = []
            modified_listener['Certificates'].append({})
            modified_listener['Certificates'][0]['CertificateArn'] = new_listener['Certificates'][0]['CertificateArn']

        # Default action

        # Check proper rule format on current listener
        if len(current_listener['DefaultActions']) > 1:
            for action in current_listener['DefaultActions']:
                if 'Order' not in action:
                    self.module.fail_json(msg="'Order' key not found in actions. "
                                              "installed version of botocore does not support "
                                              "multiple actions, please upgrade botocore to version "
                                              "1.10.30 or higher")

        # If the lengths of the actions are the same, we'll have to verify that the
        # contents of those actions are the same
        if len(current_listener['DefaultActions']) == len(new_listener['DefaultActions']):
            # if actions have just one element, compare the contents and then update if
            # they're different
            if len(current_listener['DefaultActions']) == 1 and len(new_listener['DefaultActions']) == 1:
                if current_listener['DefaultActions'] != new_listener['DefaultActions']:
                    modified_listener['DefaultActions'] = new_listener['DefaultActions']
            # if actions have multiple elements, we'll have to order them first before comparing.
            # multiple actions will have an 'Order' key for this purpose
            else:
                current_actions_sorted = sorted(current_listener['DefaultActions'], key=lambda x: x['Order'])
                new_actions_sorted = sorted(new_listener['DefaultActions'], key=lambda x: x['Order'])

                # the AWS api won't return the client secret, so we'll have to remove it
                # or the module will always see the new and current actions as different
                # and try to apply the same config
                new_actions_sorted_no_secret = []
                for action in new_actions_sorted:
                    # the secret is currently only defined in the oidc config
                    if action['Type'] == 'authenticate-oidc':
                        action['AuthenticateOidcConfig'].pop('ClientSecret')
                        new_actions_sorted_no_secret.append(action)
                    else:
                        new_actions_sorted_no_secret.append(action)

                if current_actions_sorted != new_actions_sorted_no_secret:
                    modified_listener['DefaultActions'] = new_listener['DefaultActions']
        # If the action lengths are different, then replace with the new actions
        else:
            modified_listener['DefaultActions'] = new_listener['DefaultActions']

        if modified_listener:
            return modified_listener
        else:
            return None


class ELBListener(object):

    def __init__(self, connection, module, listener, elb_arn):
        """

        :param connection:
        :param module:
        :param listener:
        :param elb_arn:
        """

        self.connection = connection
        self.module = module
        self.listener = listener
        self.elb_arn = elb_arn

    def add(self):

        try:
            # Rules is not a valid parameter for create_listener
            if 'Rules' in self.listener:
                self.listener.pop('Rules')
            AWSRetry.jittered_backoff()(self.connection.create_listener)(LoadBalancerArn=self.elb_arn, **self.listener)
        except (BotoCoreError, ClientError) as e:
            if '"Order", must be one of: Type, TargetGroupArn' in str(e):
                self.module.fail_json(msg="installed version of botocore does not support "
                                          "multiple actions, please upgrade botocore to version "
                                          "1.10.30 or higher")
            else:
                self.module.fail_json_aws(e)

    def modify(self):

        try:
            # Rules is not a valid parameter for modify_listener
            if 'Rules' in self.listener:
                self.listener.pop('Rules')
            AWSRetry.jittered_backoff()(self.connection.modify_listener)(**self.listener)
        except (BotoCoreError, ClientError) as e:
            if '"Order", must be one of: Type, TargetGroupArn' in str(e):
                self.module.fail_json(msg="installed version of botocore does not support "
                                          "multiple actions, please upgrade botocore to version "
                                          "1.10.30 or higher")
            else:
                self.module.fail_json_aws(e)

    def delete(self):

        try:
            AWSRetry.jittered_backoff()(self.connection.delete_listener)(ListenerArn=self.listener)
        except (BotoCoreError, ClientError) as e:
            self.module.fail_json_aws(e)


class ELBListenerRules(object):

    def __init__(self, connection, module, elb_arn, listener_rules, listener_port):

        self.connection = connection
        self.module = module
        self.elb_arn = elb_arn
        self.rules = self._ensure_rules_action_has_arn(listener_rules)
        self.changed = False

        # Get listener based on port so we can use ARN
        self.current_listener = get_elb_listener(connection, module, elb_arn, listener_port)
        self.listener_arn = self.current_listener['ListenerArn']
        self.rules_to_add = deepcopy(self.rules)
        self.rules_to_modify = []
        self.rules_to_delete = []

        # If the listener exists (i.e. has an ARN) get rules for the listener
        if 'ListenerArn' in self.current_listener:
            self.current_rules = self._get_elb_listener_rules()
        else:
            self.current_rules = []

    def _ensure_rules_action_has_arn(self, rules):
        """
        If a rule Action has been passed with a Target Group Name instead of ARN, lookup the ARN and
        replace the name.

        :param rules: a list of rule dicts
        :return: the same list of dicts ensuring that each rule Actions dict has TargetGroupArn key.
                 If a TargetGroupName key exists, it is removed.
        """

        fixed_rules = []
        for rule in rules:
            fixed_actions = []
            for action in rule['Actions']:
                if 'TargetGroupName' in action:
                    action['TargetGroupArn'] = convert_tg_name_to_arn(self.connection, self.module, action['TargetGroupName'])
                    del action['TargetGroupName']
                fixed_actions.append(action)
            rule['Actions'] = fixed_actions
            fixed_rules.append(rule)

        return fixed_rules

    def _get_elb_listener_rules(self):

        try:
            return AWSRetry.jittered_backoff()(self.connection.describe_rules)(ListenerArn=self.current_listener['ListenerArn'])['Rules']
        except (BotoCoreError, ClientError) as e:
            self.module.fail_json_aws(e)

    def _compare_condition(self, current_conditions, condition):
        """

        :param current_conditions:
        :param condition:
        :return:
        """

        condition_found = False

        for current_condition in current_conditions:
            if current_condition.get('SourceIpConfig'):
                if (current_condition['Field'] == condition['Field'] and
                        current_condition['SourceIpConfig']['Values'][0] == condition['SourceIpConfig']['Values'][0]):
                    condition_found = True
                    break
            elif current_condition['Field'] == condition['Field'] and current_condition['Values'][0] == condition['Values'][0]:
                condition_found = True
                break

        return condition_found

    def _compare_rule(self, current_rule, new_rule):
        """

        :return:
        """

        modified_rule = {}

        # Priority
        if int(current_rule['Priority']) != new_rule['Priority']:
            modified_rule['Priority'] = new_rule['Priority']

        # Actions

        # Check proper rule format on current listener
        if len(current_rule['Actions']) > 1:
            for action in current_rule['Actions']:
                if 'Order' not in action:
                    self.module.fail_json(msg="'Order' key not found in actions. "
                                              "installed version of botocore does not support "
                                              "multiple actions, please upgrade botocore to version "
                                              "1.10.30 or higher")

        # If the lengths of the actions are the same, we'll have to verify that the
        # contents of those actions are the same
        if len(current_rule['Actions']) == len(new_rule['Actions']):
            # if actions have just one element, compare the contents and then update if
            # they're different
            if len(current_rule['Actions']) == 1 and len(new_rule['Actions']) == 1:
                if current_rule['Actions'] != new_rule['Actions']:
                    modified_rule['Actions'] = new_rule['Actions']
                    print("modified_rule:")
                    print(new_rule['Actions'])
            # if actions have multiple elements, we'll have to order them first before comparing.
            # multiple actions will have an 'Order' key for this purpose
            else:
                current_actions_sorted = sorted(current_rule['Actions'], key=lambda x: x['Order'])
                new_actions_sorted = sorted(new_rule['Actions'], key=lambda x: x['Order'])

                # the AWS api won't return the client secret, so we'll have to remove it
                # or the module will always see the new and current actions as different
                # and try to apply the same config
                new_actions_sorted_no_secret = []
                for action in new_actions_sorted:
                    # the secret is currently only defined in the oidc config
                    if action['Type'] == 'authenticate-oidc':
                        action['AuthenticateOidcConfig'].pop('ClientSecret')
                        new_actions_sorted_no_secret.append(action)
                    else:
                        new_actions_sorted_no_secret.append(action)

                if current_actions_sorted != new_actions_sorted_no_secret:
                    modified_rule['Actions'] = new_rule['Actions']
                    print("modified_rule:")
                    print(new_rule['Actions'])
        # If the action lengths are different, then replace with the new actions
        else:
            modified_rule['Actions'] = new_rule['Actions']
            print("modified_rule:")
            print(new_rule['Actions'])

        # Conditions
        modified_conditions = []
        for condition in new_rule['Conditions']:
            if not self._compare_condition(current_rule['Conditions'], condition):
                modified_conditions.append(condition)

        if modified_conditions:
            modified_rule['Conditions'] = modified_conditions

        return modified_rule

    def compare_rules(self):
        """

        :return:
        """

        rules_to_modify = []
        rules_to_delete = []
        rules_to_add = deepcopy(self.rules)

        for current_rule in self.current_rules:
            current_rule_passed_to_module = False
            for new_rule in self.rules[:]:
                if current_rule['Priority'] == str(new_rule['Priority']):
                    current_rule_passed_to_module = True
                    # Remove what we match so that what is left can be marked as 'to be added'
                    rules_to_add.remove(new_rule)
                    modified_rule = self._compare_rule(current_rule, new_rule)
                    if modified_rule:
                        modified_rule['Priority'] = int(current_rule['Priority'])
                        modified_rule['RuleArn'] = current_rule['RuleArn']
                        modified_rule['Actions'] = new_rule['Actions']
                        modified_rule['Conditions'] = new_rule['Conditions']
                        rules_to_modify.append(modified_rule)
                    break

            # If the current rule was not matched against passed rules, mark for removal
            if not current_rule_passed_to_module and not current_rule['IsDefault']:
                rules_to_delete.append(current_rule['RuleArn'])

        return rules_to_add, rules_to_modify, rules_to_delete


class ELBListenerRule(object):

    def __init__(self, connection, module, rule, listener_arn):

        self.connection = connection
        self.module = module
        self.rule = rule
        self.listener_arn = listener_arn
        self.changed = False

    def create(self):
        """
        Create a listener rule

        :return:
        """

        try:
            self.rule['ListenerArn'] = self.listener_arn
            self.rule['Priority'] = int(self.rule['Priority'])
            AWSRetry.jittered_backoff()(self.connection.create_rule)(**self.rule)
        except (BotoCoreError, ClientError) as e:
            if '"Order", must be one of: Type, TargetGroupArn' in str(e):
                self.module.fail_json(msg="installed version of botocore does not support "
                                          "multiple actions, please upgrade botocore to version "
                                          "1.10.30 or higher")
            else:
                self.module.fail_json_aws(e)

        self.changed = True

    def modify(self):
        """
        Modify a listener rule

        :return:
        """

        try:
            del self.rule['Priority']
            AWSRetry.jittered_backoff()(self.connection.modify_rule)(**self.rule)
        except (BotoCoreError, ClientError) as e:
            if '"Order", must be one of: Type, TargetGroupArn' in str(e):
                self.module.fail_json(msg="installed version of botocore does not support "
                                          "multiple actions, please upgrade botocore to version "
                                          "1.10.30 or higher")
            else:
                self.module.fail_json_aws(e)

        self.changed = True

    def delete(self):
        """
        Delete a listener rule

        :return:
        """

        try:
            AWSRetry.jittered_backoff()(self.connection.delete_rule)(RuleArn=self.rule['RuleArn'])
        except (BotoCoreError, ClientError) as e:
            self.module.fail_json_aws(e)

        self.changed = True
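In the `_compare_condition` helper shown in the file content above, only `Values[0]` of each condition is compared, which is why a rule that gains a second host-header value is never detected as changed. A minimal standalone sketch of that comparison (the equality check copied out of the class; the sample data reuses the host names from the issue's reproduction playbook):

```python
# Simplified copy of _compare_condition's equality check: only the first
# value of each condition is consulted.
def condition_found(current_conditions, condition):
    for current in current_conditions:
        if current['Field'] == condition['Field'] and current['Values'][0] == condition['Values'][0]:
            return True
    return False

# Condition currently on the listener rule: one host-header value.
current = [{'Field': 'host-header', 'Values': ['local.mydomain.com']}]

# Desired condition from the playbook: a second host-header value was added.
desired = {'Field': 'host-header',
           'Values': ['local.mydomain.com', 'alternate.mydomain.com']}

# Values[0] matches on both sides, so the extra value is never seen as a change.
print(condition_found(current, desired))  # True, i.e. "nothing to modify"

# Comparing the full (order-insensitive) value lists would catch it.
print(sorted(current[0]['Values']) == sorted(desired['Values']))  # False
```

Comparing the whole `Values` list rather than just its first element is the kind of change the linked pull request makes; the sketch only demonstrates why the single-element check misses the update.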
closed
ansible/ansible
https://github.com/ansible/ansible
65,020
AWS Application Load Balancer Fails with Multiple Host Headers
##### SUMMARY
When using the `host-header` condition for a load balancer rule, if there are multiple values for the host header ansible fails to apply the changes with broken json output.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
elbv2

##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
  config file = None
  configured module search path = [u'/home/mjmayer/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /home/mjmayer/repos/ansible/lib/ansible
  executable location = /home/mjmayer/repos/ansible/bin/ansible
  python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```

##### CONFIGURATION
```
```

##### OS / ENVIRONMENT
Mint 19.2 / Ubuntu 18.04

##### STEPS TO REPRODUCE
```yaml
- name: add a rule that uses the host header condition to the listener
  elb_application_lb:
    name: "{{ alb_name }}"
    subnets: "{{ alb_subnets }}"
    security_groups: "{{ sec_group.group_id }}"
    state: present
    purge_rules: no
    listeners:
      - Protocol: HTTP
        Port: 80
        DefaultActions:
          - Type: forward
            TargetGroupName: "{{ tg_name }}"
        Rules:
          - Conditions:
              - Field: host-header
                Values:
                  - 'local.mydomain.com'
            Priority: '3'
            Actions:
              - TargetGroupName: "{{ tg_name }}"
                Type: forward
    <<: *aws_connection_info
  register: alb

- assert:
    that:
      - alb.changed
      - alb.listeners[0].rules|length == 4

- name: test replacing the rule that uses the host header condition with multiple host header conditions
  elb_application_lb:
    name: "{{ alb_name }}"
    subnets: "{{ alb_subnets }}"
    security_groups: "{{ sec_group.group_id }}"
    purge_rules: no
    state: present
    listeners:
      - Protocol: HTTP
        Port: 80
        DefaultActions:
          - Type: forward
            TargetGroupName: "{{ tg_name }}"
        Rules:
          - Conditions:
              - Field: host-header
                Values:
                  - 'local.mydomain.com'
                  - 'alternate.mydomain.com'
            Priority: '3'
            Actions:
              - TargetGroupName: "{{ tg_name }}"
                Type: forward
    <<: *aws_connection_info
  register: alb
```

##### EXPECTED RESULTS
The second application of `elb_application_lb` should succeed and add the second host-header.

##### ACTUAL RESULTS
Ansible prints invalid json to the screen with the following error.
```paste below
Module invocation had junk after the JSON data
```
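The `Module invocation had junk after the JSON data` error is consistent with the stray debug `print()` calls left in `_compare_rule` in the file content above: an Ansible module reports its result as a single JSON document on stdout, so any extra lines around that document break the controller's parse. A toy illustration of the failure mode, using a plain `json.loads` consumer rather than Ansible's actual output filter, with made-up result values:

```python
import json

def parse_module_output(stdout):
    # Toy stand-in for a strict JSON consumer: the whole stream must be a
    # single JSON document, like an Ansible module's result on stdout.
    return json.loads(stdout)

# Well-formed module output parses fine.
clean = json.dumps({"changed": True, "msg": "rules updated"})
result = parse_module_output(clean)
print(result["changed"])  # True

# A debug print() interleaved with the result leaves junk in the stream.
corrupted = clean + "\nmodified_rule:\n[{'Type': 'forward'}]"
try:
    parse_module_output(corrupted)
except ValueError as exc:  # json.JSONDecodeError subclasses ValueError
    print("junk around the JSON data:", exc)
```

Removing the debug prints from the module code (or routing diagnostics through the module's own logging, never stdout) avoids this class of failure.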
https://github.com/ansible/ansible/issues/65020
https://github.com/ansible/ansible/pull/65021
d1c58bc94274c4e91370333467a0868f4456993c
d52af75c68b3f31c994d8b234b9c1e2387f9a4dd
2019-11-18T21:48:27Z
python
2019-11-21T16:42:37Z
lib/ansible/modules/cloud/amazon/elb_application_lb.py
#!/usr/bin/python # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. from __future__ import (absolute_import, division, print_function) __metaclass__ = type ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} DOCUMENTATION = ''' --- module: elb_application_lb short_description: Manage an Application load balancer description: - Manage an AWS Application Elastic Load Balancer. See U(https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/) for details. version_added: "2.4" requirements: [ boto3 ] author: "Rob White (@wimnat)" options: access_logs_enabled: description: - "Whether or not to enable access logs. When true, I(access_logs_s3_bucket) must be set." required: false type: bool access_logs_s3_bucket: description: - The name of the S3 bucket for the access logs. - Required if access logs in Amazon S3 are enabled. - The bucket must exist in the same region as the load balancer and have a bucket policy that grants Elastic Load Balancing permission to write to the bucket. required: false type: str access_logs_s3_prefix: description: - The prefix for the log location in the S3 bucket. - If you don't specify a prefix, the access logs are stored in the root of the bucket. - Cannot begin or end with a slash. 
required: false type: str deletion_protection: description: - Indicates whether deletion protection for the ELB is enabled. required: false default: no type: bool http2: description: - Indicates whether to enable HTTP2 routing. required: false default: no type: bool version_added: 2.6 idle_timeout: description: - The number of seconds to wait before an idle connection is closed. required: false type: int listeners: description: - A list of dicts containing listeners to attach to the ELB. See examples for detail of the dict required. Note that listener keys are CamelCased. required: false type: list suboptions: Port: description: The port on which the load balancer is listening. type: int Protocol: description: The protocol for connections from clients to the load balancer. type: str Certificates: description: The SSL server certificate. type: list suboptions: CertificateArn: description: The Amazon Resource Name (ARN) of the certificate. type: str SslPolicy: description: The security policy that defines which ciphers and protocols are supported. type: str DefaultActions: description: The default actions for the listener. type: list suboptions: Type: description: The type of action. type: str TargetGroupArn: description: The Amazon Resource Name (ARN) of the target group. type: str Rules: type: list description: - A list of ALB Listener Rules. - 'For the complete documentation of possible Conditions and Actions please see the boto3 documentation:' - 'https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.create_rule' suboptions: Conditions: type: list description: Conditions which must be met for the actions to be applied. Priority: type: int description: The rule priority. Actions: type: list description: Actions to apply if all of the rule's conditions are met. name: description: - The name of the load balancer. 
This name must be unique within your AWS account, can have a maximum of 32 characters, must contain only alphanumeric characters or hyphens, and must not begin or end with a hyphen. required: true type: str purge_listeners: description: - If yes, existing listeners will be purged from the ELB to match exactly what is defined by I(listeners) parameter. If the I(listeners) parameter is not set then listeners will not be modified default: yes type: bool purge_tags: description: - If yes, existing tags will be purged from the resource to match exactly what is defined by I(tags) parameter. If the I(tags) parameter is not set then tags will not be modified. required: false default: yes type: bool subnets: description: - A list of the IDs of the subnets to attach to the load balancer. You can specify only one subnet per Availability Zone. You must specify subnets from at least two Availability Zones. Required if state=present. required: false type: list security_groups: description: - A list of the names or IDs of the security groups to assign to the load balancer. Required if state=present. required: false default: [] type: list scheme: description: - Internet-facing or internal load balancer. An ELB scheme can not be modified after creation. required: false default: internet-facing choices: [ 'internet-facing', 'internal' ] type: str state: description: - Create or destroy the load balancer. default: present choices: [ 'present', 'absent' ] type: str tags: description: - A dictionary of one or more tags to assign to the load balancer. required: false type: dict wait: description: - Wait for the load balancer to have a state of 'active' before completing. A status check is performed every 15 seconds until a successful state is reached. An error is returned after 40 failed checks. default: no type: bool version_added: 2.6 wait_timeout: description: - The time in seconds to use in conjunction with I(wait). 
version_added: 2.6 type: int purge_rules: description: - When set to no, keep the existing load balancer rules in place. Will modify and add, but will not delete. default: yes type: bool version_added: 2.7 extends_documentation_fragment: - aws - ec2 notes: - Listeners are matched based on port. If a listener's port is changed then a new listener will be created. - Listener rules are matched based on priority. If a rule's priority is changed then a new rule will be created. ''' EXAMPLES = ''' # Note: These examples do not set authentication details, see the AWS Guide for details. # Create an ELB and attach a listener - elb_application_lb: name: myelb security_groups: - sg-12345678 - my-sec-group subnets: - subnet-012345678 - subnet-abcdef000 listeners: - Protocol: HTTP # Required. The protocol for connections from clients to the load balancer (HTTP or HTTPS) (case-sensitive). Port: 80 # Required. The port on which the load balancer is listening. # The security policy that defines which ciphers and protocols are supported. The default is the current predefined security policy. SslPolicy: ELBSecurityPolicy-2015-05 Certificates: # The ARN of the certificate (only one certficate ARN should be provided) - CertificateArn: arn:aws:iam::12345678987:server-certificate/test.domain.com DefaultActions: - Type: forward # Required. TargetGroupName: # Required. The name of the target group state: present # Create an ELB and attach a listener with logging enabled - elb_application_lb: access_logs_enabled: yes access_logs_s3_bucket: mybucket access_logs_s3_prefix: "logs" name: myelb security_groups: - sg-12345678 - my-sec-group subnets: - subnet-012345678 - subnet-abcdef000 listeners: - Protocol: HTTP # Required. The protocol for connections from clients to the load balancer (HTTP or HTTPS) (case-sensitive). Port: 80 # Required. The port on which the load balancer is listening. # The security policy that defines which ciphers and protocols are supported. 
The default is the current predefined security policy. SslPolicy: ELBSecurityPolicy-2015-05 Certificates: # The ARN of the certificate (only one certficate ARN should be provided) - CertificateArn: arn:aws:iam::12345678987:server-certificate/test.domain.com DefaultActions: - Type: forward # Required. TargetGroupName: # Required. The name of the target group state: present # Create an ALB with listeners and rules - elb_application_lb: name: test-alb subnets: - subnet-12345678 - subnet-87654321 security_groups: - sg-12345678 scheme: internal listeners: - Protocol: HTTPS Port: 443 DefaultActions: - Type: forward TargetGroupName: test-target-group Certificates: - CertificateArn: arn:aws:iam::12345678987:server-certificate/test.domain.com SslPolicy: ELBSecurityPolicy-2015-05 Rules: - Conditions: - Field: path-pattern Values: - '/test' Priority: '1' Actions: - TargetGroupName: test-target-group Type: forward - Conditions: - Field: path-pattern Values: - "/redirect-path/*" Priority: '2' Actions: - Type: redirect RedirectConfig: Host: "#{host}" Path: "/example/redir" # or /#{path} Port: "#{port}" Protocol: "#{protocol}" Query: "#{query}" StatusCode: "HTTP_302" # or HTTP_301 - Conditions: - Field: path-pattern Values: - "/fixed-response-path/" Priority: '3' Actions: - Type: fixed-response FixedResponseConfig: ContentType: "text/plain" MessageBody: "This is the page you're looking for" StatusCode: "200" state: present # Remove an ELB - elb_application_lb: name: myelb state: absent ''' RETURN = ''' access_logs_s3_bucket: description: The name of the S3 bucket for the access logs. returned: when state is present type: str sample: mys3bucket access_logs_s3_enabled: description: Indicates whether access logs stored in Amazon S3 are enabled. returned: when state is present type: str sample: true access_logs_s3_prefix: description: The prefix for the location in the S3 bucket. 
returned: when state is present type: str sample: my/logs availability_zones: description: The Availability Zones for the load balancer. returned: when state is present type: list sample: "[{'subnet_id': 'subnet-aabbccddff', 'zone_name': 'ap-southeast-2a'}]" canonical_hosted_zone_id: description: The ID of the Amazon Route 53 hosted zone associated with the load balancer. returned: when state is present type: str sample: ABCDEF12345678 created_time: description: The date and time the load balancer was created. returned: when state is present type: str sample: "2015-02-12T02:14:02+00:00" deletion_protection_enabled: description: Indicates whether deletion protection is enabled. returned: when state is present type: str sample: true dns_name: description: The public DNS name of the load balancer. returned: when state is present type: str sample: internal-my-elb-123456789.ap-southeast-2.elb.amazonaws.com idle_timeout_timeout_seconds: description: The idle timeout value, in seconds. returned: when state is present type: int sample: 60 ip_address_type: description: The type of IP addresses used by the subnets for the load balancer. returned: when state is present type: str sample: ipv4 listeners: description: Information about the listeners. returned: when state is present type: complex contains: listener_arn: description: The Amazon Resource Name (ARN) of the listener. returned: when state is present type: str sample: "" load_balancer_arn: description: The Amazon Resource Name (ARN) of the load balancer. returned: when state is present type: str sample: "" port: description: The port on which the load balancer is listening. returned: when state is present type: int sample: 80 protocol: description: The protocol for connections from clients to the load balancer. returned: when state is present type: str sample: HTTPS certificates: description: The SSL server certificate. 
returned: when state is present type: complex contains: certificate_arn: description: The Amazon Resource Name (ARN) of the certificate. returned: when state is present type: str sample: "" ssl_policy: description: The security policy that defines which ciphers and protocols are supported. returned: when state is present type: str sample: "" default_actions: description: The default actions for the listener. returned: when state is present type: str contains: type: description: The type of action. returned: when state is present type: str sample: "" target_group_arn: description: The Amazon Resource Name (ARN) of the target group. returned: when state is present type: str sample: "" load_balancer_arn: description: The Amazon Resource Name (ARN) of the load balancer. returned: when state is present type: str sample: arn:aws:elasticloadbalancing:ap-southeast-2:0123456789:loadbalancer/app/my-elb/001122334455 load_balancer_name: description: The name of the load balancer. returned: when state is present type: str sample: my-elb routing_http2_enabled: description: Indicates whether HTTP/2 is enabled. returned: when state is present type: str sample: true scheme: description: Internet-facing or internal load balancer. returned: when state is present type: str sample: internal security_groups: description: The IDs of the security groups for the load balancer. returned: when state is present type: list sample: ['sg-0011223344'] state: description: The state of the load balancer. returned: when state is present type: dict sample: "{'code': 'active'}" tags: description: The tags attached to the load balancer. returned: when state is present type: dict sample: "{ 'Tag': 'Example' }" type: description: The type of load balancer. returned: when state is present type: str sample: application vpc_id: description: The ID of the VPC for the load balancer. 
returned: when state is present type: str sample: vpc-0011223344 ''' from ansible.module_utils.aws.core import AnsibleAWSModule from ansible.module_utils.ec2 import boto3_conn, get_aws_connection_info, camel_dict_to_snake_dict, ec2_argument_spec, \ boto3_tag_list_to_ansible_dict, compare_aws_tags, HAS_BOTO3 from ansible.module_utils.aws.elbv2 import ApplicationLoadBalancer, ELBListeners, ELBListener, ELBListenerRules, ELBListenerRule from ansible.module_utils.aws.elb_utils import get_elb_listener_rules def create_or_update_elb(elb_obj): """Create ELB or modify main attributes. json_exit here""" if elb_obj.elb: # ELB exists so check subnets, security groups and tags match what has been passed # Subnets if not elb_obj.compare_subnets(): elb_obj.modify_subnets() # Security Groups if not elb_obj.compare_security_groups(): elb_obj.modify_security_groups() # Tags - only need to play with tags if tags parameter has been set to something if elb_obj.tags is not None: # Delete necessary tags tags_need_modify, tags_to_delete = compare_aws_tags(boto3_tag_list_to_ansible_dict(elb_obj.elb['tags']), boto3_tag_list_to_ansible_dict(elb_obj.tags), elb_obj.purge_tags) if tags_to_delete: elb_obj.delete_tags(tags_to_delete) # Add/update tags if tags_need_modify: elb_obj.modify_tags() else: # Create load balancer elb_obj.create_elb() # ELB attributes elb_obj.update_elb_attributes() elb_obj.modify_elb_attributes() # Listeners listeners_obj = ELBListeners(elb_obj.connection, elb_obj.module, elb_obj.elb['LoadBalancerArn']) listeners_to_add, listeners_to_modify, listeners_to_delete = listeners_obj.compare_listeners() # Delete listeners for listener_to_delete in listeners_to_delete: listener_obj = ELBListener(elb_obj.connection, elb_obj.module, listener_to_delete, elb_obj.elb['LoadBalancerArn']) listener_obj.delete() listeners_obj.changed = True # Add listeners for listener_to_add in listeners_to_add: listener_obj = ELBListener(elb_obj.connection, elb_obj.module, listener_to_add, 
elb_obj.elb['LoadBalancerArn']) listener_obj.add() listeners_obj.changed = True # Modify listeners for listener_to_modify in listeners_to_modify: listener_obj = ELBListener(elb_obj.connection, elb_obj.module, listener_to_modify, elb_obj.elb['LoadBalancerArn']) listener_obj.modify() listeners_obj.changed = True # If listeners changed, mark ELB as changed if listeners_obj.changed: elb_obj.changed = True # Rules of each listener for listener in listeners_obj.listeners: if 'Rules' in listener: rules_obj = ELBListenerRules(elb_obj.connection, elb_obj.module, elb_obj.elb['LoadBalancerArn'], listener['Rules'], listener['Port']) rules_to_add, rules_to_modify, rules_to_delete = rules_obj.compare_rules() # Delete rules if elb_obj.module.params['purge_rules']: for rule in rules_to_delete: rule_obj = ELBListenerRule(elb_obj.connection, elb_obj.module, {'RuleArn': rule}, rules_obj.listener_arn) rule_obj.delete() elb_obj.changed = True # Add rules for rule in rules_to_add: rule_obj = ELBListenerRule(elb_obj.connection, elb_obj.module, rule, rules_obj.listener_arn) rule_obj.create() elb_obj.changed = True # Modify rules for rule in rules_to_modify: rule_obj = ELBListenerRule(elb_obj.connection, elb_obj.module, rule, rules_obj.listener_arn) rule_obj.modify() elb_obj.changed = True # Get the ELB again elb_obj.update() # Get the ELB listeners again listeners_obj.update() # Update the ELB attributes elb_obj.update_elb_attributes() # Convert to snake_case and merge in everything we want to return to the user snaked_elb = camel_dict_to_snake_dict(elb_obj.elb) snaked_elb.update(camel_dict_to_snake_dict(elb_obj.elb_attributes)) snaked_elb['listeners'] = [] for listener in listeners_obj.current_listeners: # For each listener, get listener rules listener['rules'] = get_elb_listener_rules(elb_obj.connection, elb_obj.module, listener['ListenerArn']) snaked_elb['listeners'].append(camel_dict_to_snake_dict(listener)) # Change tags to ansible friendly dict snaked_elb['tags'] = 
boto3_tag_list_to_ansible_dict(snaked_elb['tags']) elb_obj.module.exit_json(changed=elb_obj.changed, **snaked_elb) def delete_elb(elb_obj): if elb_obj.elb: elb_obj.delete() elb_obj.module.exit_json(changed=elb_obj.changed) def main(): argument_spec = ec2_argument_spec() argument_spec.update( dict( access_logs_enabled=dict(type='bool'), access_logs_s3_bucket=dict(type='str'), access_logs_s3_prefix=dict(type='str'), deletion_protection=dict(type='bool'), http2=dict(type='bool'), idle_timeout=dict(type='int'), listeners=dict(type='list', elements='dict', options=dict( Protocol=dict(type='str', required=True), Port=dict(type='int', required=True), SslPolicy=dict(type='str'), Certificates=dict(type='list'), DefaultActions=dict(type='list', required=True), Rules=dict(type='list') ) ), name=dict(required=True, type='str'), purge_listeners=dict(default=True, type='bool'), purge_tags=dict(default=True, type='bool'), subnets=dict(type='list'), security_groups=dict(type='list'), scheme=dict(default='internet-facing', choices=['internet-facing', 'internal']), state=dict(choices=['present', 'absent'], default='present'), tags=dict(type='dict'), wait_timeout=dict(type='int'), wait=dict(default=False, type='bool'), purge_rules=dict(default=True, type='bool') ) ) module = AnsibleAWSModule(argument_spec=argument_spec, required_if=[ ('state', 'present', ['subnets', 'security_groups']) ], required_together=[ ['access_logs_enabled', 'access_logs_s3_bucket'] ] ) # Quick check of listeners parameters listeners = module.params.get("listeners") if listeners is not None: for listener in listeners: for key in listener.keys(): if key == 'Protocol' and listener[key] == 'HTTPS': if listener.get('SslPolicy') is None: module.fail_json(msg="'SslPolicy' is a required listener dict key when Protocol = HTTPS") if listener.get('Certificates') is None: module.fail_json(msg="'Certificates' is a required listener dict key when Protocol = HTTPS") connection = module.client('elbv2') connection_ec2 = 
module.client('ec2') state = module.params.get("state") elb = ApplicationLoadBalancer(connection, connection_ec2, module) if state == 'present': create_or_update_elb(elb) else: delete_elb(elb) if __name__ == '__main__': main()
closed
ansible/ansible
https://github.com/ansible/ansible
65,020
AWS Application Load Balancer Fails with Multiple Host Headers
##### SUMMARY When using the `host-header` condition for a load balancer rule if there are multiple values for the host header ansible fails to apply the changes with broken json output. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME elbv2 ##### ANSIBLE VERSION ``` ansible 2.10.0.dev0 config file = None configured module search path = [u'/home/mjmayer/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /home/mjmayer/repos/ansible/lib/ansible executable location = /home/mjmayer/repos/ansible/bin/ansible python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0] ``` ##### CONFIGURATION ``` ``` ##### OS / ENVIRONMENT Mint 19.2 / Ubuntu 18.04 ##### STEPS TO REPRODUCE ```yaml - name: add a rule that uses the host header condition to the listener elb_application_lb: name: "{{ alb_name }}" subnets: "{{ alb_subnets }}" security_groups: "{{ sec_group.group_id }}" state: present purge_rules: no listeners: - Protocol: HTTP Port: 80 DefaultActions: - Type: forward TargetGroupName: "{{ tg_name }}" Rules: - Conditions: - Field: host-header Values: - 'local.mydomain.com' Priority: '3' Actions: - TargetGroupName: "{{ tg_name }}" Type: forward <<: *aws_connection_info register: alb - assert: that: - alb.changed - alb.listeners[0].rules|length == 4 - name: test replacing the rule that uses the host header condition with multiple host header conditions elb_application_lb: name: "{{ alb_name }}" subnets: "{{ alb_subnets }}" security_groups: "{{ sec_group.group_id }}" purge_rules: no state: present listeners: - Protocol: HTTP Port: 80 DefaultActions: - Type: forward TargetGroupName: "{{ tg_name }}" Rules: - Conditions: - Field: host-header Values: - 'local.mydomain.com' - 'alternate.mydomain.com' Priority: '3' Actions: - TargetGroupName: "{{ tg_name }}" Type: forward <<: *aws_connection_info register: alb ``` ##### EXPECTED RESULTS The second application of `elb_application_lb` should succeed and add the second host-header. 
##### ACTUAL RESULTS Ansible prints invalid JSON to the screen with the following error. ```paste below Module invocation had junk after the JSON data ```
https://github.com/ansible/ansible/issues/65020
https://github.com/ansible/ansible/pull/65021
d1c58bc94274c4e91370333467a0868f4456993c
d52af75c68b3f31c994d8b234b9c1e2387f9a4dd
2019-11-18T21:48:27Z
python
2019-11-21T16:42:37Z
test/integration/targets/elb_application_lb/tasks/test_modifying_alb_listeners.yml
- block: - name: set connection information for all tasks set_fact: aws_connection_info: &aws_connection_info aws_access_key: "{{ aws_access_key }}" aws_secret_key: "{{ aws_secret_key }}" security_token: "{{ security_token }}" region: "{{ aws_region }}" no_log: yes - name: add a rule to the listener elb_application_lb: name: "{{ alb_name }}" subnets: "{{ alb_subnets }}" security_groups: "{{ sec_group.group_id }}" state: present listeners: - Protocol: HTTP Port: 80 DefaultActions: - Type: forward TargetGroupName: "{{ tg_name }}" Rules: - Conditions: - Field: path-pattern Values: - '/test' Priority: '1' Actions: - TargetGroupName: "{{ tg_name }}" Type: forward <<: *aws_connection_info register: alb - assert: that: - alb.changed - alb.listeners[0].rules|length == 2 - name: test replacing the rule with one with the same priority elb_application_lb: name: "{{ alb_name }}" subnets: "{{ alb_subnets }}" security_groups: "{{ sec_group.group_id }}" state: present purge_listeners: true listeners: - Protocol: HTTP Port: 80 DefaultActions: - Type: forward TargetGroupName: "{{ tg_name }}" Rules: - Conditions: - Field: path-pattern Values: - '/new' Priority: '1' Actions: - TargetGroupName: "{{ tg_name }}" Type: forward <<: *aws_connection_info register: alb - assert: that: - alb.changed - alb.listeners[0].rules|length == 2 - name: test the rule will not be removed without purge_listeners elb_application_lb: name: "{{ alb_name }}" subnets: "{{ alb_subnets }}" security_groups: "{{ sec_group.group_id }}" state: present listeners: - Protocol: HTTP Port: 80 DefaultActions: - Type: forward TargetGroupName: "{{ tg_name }}" <<: *aws_connection_info register: alb - assert: that: - not alb.changed - alb.listeners[0].rules|length == 2 - name: test a rule can be added and other rules will not be removed when purge_rules is no. 
elb_application_lb: name: "{{ alb_name }}" subnets: "{{ alb_subnets }}" security_groups: "{{ sec_group.group_id }}" state: present purge_rules: no listeners: - Protocol: HTTP Port: 80 DefaultActions: - Type: forward TargetGroupName: "{{ tg_name }}" Rules: - Conditions: - Field: path-pattern Values: - '/new' Priority: '2' Actions: - TargetGroupName: "{{ tg_name }}" Type: forward <<: *aws_connection_info register: alb - assert: that: - alb.changed - alb.listeners[0].rules|length == 3 - name: remove the rule elb_application_lb: name: "{{ alb_name }}" subnets: "{{ alb_subnets }}" security_groups: "{{ sec_group.group_id }}" state: present purge_listeners: true listeners: - Protocol: HTTP Port: 80 DefaultActions: - Type: forward TargetGroupName: "{{ tg_name }}" Rules: [] <<: *aws_connection_info register: alb - assert: that: - alb.changed - alb.listeners[0].rules|length == 1 - name: remove listener from ALB elb_application_lb: name: "{{ alb_name }}" subnets: "{{ alb_subnets }}" security_groups: "{{ sec_group.group_id }}" state: present listeners: [] <<: *aws_connection_info register: alb - assert: that: - alb.changed - not alb.listeners - name: add the listener to the ALB elb_application_lb: name: "{{ alb_name }}" subnets: "{{ alb_subnets }}" security_groups: "{{ sec_group.group_id }}" state: present listeners: - Protocol: HTTP Port: 80 DefaultActions: - Type: forward TargetGroupName: "{{ tg_name }}" <<: *aws_connection_info register: alb - assert: that: - alb.changed - alb.listeners|length == 1 - alb.availability_zones|length == 2
closed
ansible/ansible
https://github.com/ansible/ansible
64,503
VMware: vmware_vmotion fails to move VM from one cluster to another
##### SUMMARY When running vmware_vmotion module it fails with the error: ``` "msg": "('A specified parameter was not correct: spec.pool', None)" ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME vmware_vmotion module ##### ANSIBLE VERSION ``` ansible 2.8.6 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] ``` ##### CONFIGURATION ``` ACTION_WARNINGS(default) = True AGNOSTIC_BECOME_PROMPT(default) = True ALLOW_WORLD_READABLE_TMPFILES(default) = False ANSIBLE_CONNECTION_PATH(default) = None ANSIBLE_COW_PATH(default) = None ANSIBLE_COW_SELECTION(default) = default ANSIBLE_COW_WHITELIST(default) = ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kit ANSIBLE_FORCE_COLOR(default) = False ANSIBLE_NOCOLOR(default) = False ANSIBLE_NOCOWS(default) = False ANSIBLE_PIPELINING(default) = False ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s ANSIBLE_SSH_CONTROL_PATH(default) = None ANSIBLE_SSH_CONTROL_PATH_DIR(default) = ~/.ansible/cp ANSIBLE_SSH_EXECUTABLE(default) = ssh ANSIBLE_SSH_RETRIES(default) = 0 ANY_ERRORS_FATAL(default) = False BECOME_ALLOW_SAME_USER(default) = False BECOME_PLUGIN_PATH(default) = [u'/home/awx/.ansible/plugins/become', u'/usr/share/ansible/plugins/become'] CACHE_PLUGIN(default) = memory CACHE_PLUGIN_CONNECTION(default) = None CACHE_PLUGIN_PREFIX(default) = ansible_facts CACHE_PLUGIN_TIMEOUT(default) = 86400 COLLECTIONS_PATHS(default) = [u'/home/awx/.ansible/collections', u'/usr/share/ansible/collections'] COLOR_CHANGED(default) = yellow COLOR_CONSOLE_PROMPT(default) = white COLOR_DEBUG(default) = dark gray COLOR_DEPRECATE(default) = 
purple COLOR_DIFF_ADD(default) = green COLOR_DIFF_LINES(default) = cyan COLOR_DIFF_REMOVE(default) = red COLOR_ERROR(default) = red COLOR_HIGHLIGHT(default) = white COLOR_OK(default) = green COLOR_SKIP(default) = cyan COLOR_UNREACHABLE(default) = bright red COLOR_VERBOSE(default) = blue COLOR_WARN(default) = bright purple COMMAND_WARNINGS(default) = True CONDITIONAL_BARE_VARS(default) = True CONNECTION_FACTS_MODULES(default) = {'junos': 'junos_facts', 'eos': 'eos_facts', 'frr': 'frr_facts', 'iosxr': 'iosxr_facts', 'nxos': 'nxos_facts', 'ios': 'i DEFAULT_ACTION_PLUGIN_PATH(default) = [u'/home/awx/.ansible/plugins/action', u'/usr/share/ansible/plugins/action'] DEFAULT_ALLOW_UNSAFE_LOOKUPS(default) = False ``` ##### OS / ENVIRONMENT Running AWX 8.0.0 in a docker container ##### STEPS TO REPRODUCE Run a job template with the following playbook ```yaml --- tasks: - name: Perform vMotion of multiple Virtual Machines vmware_vmotion: vm_name: "{{ item }}" destination_host: "{{ esx_host }}" hostname: "{{ vcenter_host }}" username: "{{ vmware_admin }}" password: "{{ vault_vcenter_pass }}" validate_certs: no delegate_to: localhost loop: "{{ groups['vmotion'] }}" ``` NOTE: I did try this withOUT the loop and I get the same error. ##### EXPECTED RESULTS Expected host[s] to move to new ESX host ##### ACTUAL RESULTS ``` "msg": "('A specified parameter was not correct: spec.pool', None)" No command since running thru awx. ```
https://github.com/ansible/ansible/issues/64503
https://github.com/ansible/ansible/pull/64544
0ab21fd1ec3e397888d73bbea8f1ad40c0d6c315
d4c4c92c97e8dc2791f6a9f63ba0a3a0ce467a6b
2019-11-06T12:10:11Z
python
2019-11-22T21:42:41Z
lib/ansible/modules/cloud/vmware/vmware_vmotion.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright: (c) 2015, Bede Carroll <bc+github () bedecarroll.com> # Copyright: (c) 2018, Abhijeet Kasurde <[email protected]> # Copyright: (c) 2018, Ansible Project # # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = { 'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community' } DOCUMENTATION = r''' --- module: vmware_vmotion short_description: Move a virtual machine using vMotion, and/or its vmdks using storage vMotion. description: - Using VMware vCenter, move a virtual machine using vMotion to a different host, and/or its vmdks to another datastore using storage vMotion. version_added: 2.2 author: - Bede Carroll (@bedecarroll) - Olivier Boukili (@oboukili) notes: - Tested on vSphere 6.0 requirements: - "python >= 2.6" - pyVmomi options: vm_name: description: - Name of the VM to perform a vMotion on. - This is required parameter, if C(vm_uuid) is not set. - Version 2.6 onwards, this parameter is not a required parameter, unlike the previous versions. aliases: ['vm'] type: str vm_uuid: description: - UUID of the virtual machine to perform a vMotion operation on. - This is a required parameter, if C(vm_name) or C(moid) is not set. aliases: ['uuid'] version_added: 2.7 type: str moid: description: - Managed Object ID of the instance to manage if known, this is a unique identifier only within a single vCenter instance. - This is required if C(vm_name) or C(vm_uuid) is not supplied. version_added: '2.9' type: str use_instance_uuid: description: - Whether to use the VMware instance UUID rather than the BIOS UUID. default: no type: bool version_added: '2.8' destination_host: description: - Name of the destination host the virtual machine should be running on. - Version 2.6 onwards, this parameter is not a required parameter, unlike the previous versions. 
aliases: ['destination'] type: str destination_datastore: description: - "Name of the destination datastore the virtual machine's vmdk should be moved on." aliases: ['datastore'] version_added: 2.7 type: str extends_documentation_fragment: vmware.documentation ''' EXAMPLES = ''' - name: Perform vMotion of virtual machine vmware_vmotion: hostname: '{{ vcenter_hostname }}' username: '{{ vcenter_username }}' password: '{{ vcenter_password }}' validate_certs: no vm_name: 'vm_name_as_per_vcenter' destination_host: 'destination_host_as_per_vcenter' delegate_to: localhost - name: Perform vMotion of virtual machine vmware_vmotion: hostname: '{{ vcenter_hostname }}' username: '{{ vcenter_username }}' password: '{{ vcenter_password }}' validate_certs: no moid: vm-42 destination_host: 'destination_host_as_per_vcenter' delegate_to: localhost - name: Perform storage vMotion of of virtual machine vmware_vmotion: hostname: '{{ vcenter_hostname }}' username: '{{ vcenter_username }}' password: '{{ vcenter_password }}' validate_certs: no vm_name: 'vm_name_as_per_vcenter' destination_datastore: 'destination_datastore_as_per_vcenter' delegate_to: localhost - name: Perform storage vMotion and host vMotion of virtual machine vmware_vmotion: hostname: '{{ vcenter_hostname }}' username: '{{ vcenter_username }}' password: '{{ vcenter_password }}' validate_certs: no vm_name: 'vm_name_as_per_vcenter' destination_host: 'destination_host_as_per_vcenter' destination_datastore: 'destination_datastore_as_per_vcenter' delegate_to: localhost ''' RETURN = ''' running_host: description: List the host the virtual machine is registered to returned: changed or success type: str sample: 'host1.example.com' ''' try: from pyVmomi import vim, VmomiSupport except ImportError: pass from ansible.module_utils._text import to_native from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.vmware import (PyVmomi, find_hostsystem_by_name, find_vm_by_id, find_datastore_by_name, 
vmware_argument_spec, wait_for_task, TaskError) class VmotionManager(PyVmomi): def __init__(self, module): super(VmotionManager, self).__init__(module) self.vm = None self.vm_uuid = self.params.get('vm_uuid', None) self.use_instance_uuid = self.params.get('use_instance_uuid', False) self.vm_name = self.params.get('vm_name', None) self.moid = self.params.get('moid') or None result = dict() self.get_vm() if self.vm is None: vm_id = self.vm_uuid or self.vm_name or self.moid self.module.fail_json(msg="Failed to find the virtual machine with %s" % vm_id) # Get Destination Host System if specified by user dest_host_name = self.params.get('destination_host', None) self.host_object = None if dest_host_name is not None: self.host_object = find_hostsystem_by_name(content=self.content, hostname=dest_host_name) # Get Destination Datastore if specified by user dest_datastore = self.params.get('destination_datastore', None) self.datastore_object = None if dest_datastore is not None: self.datastore_object = find_datastore_by_name(content=self.content, datastore_name=dest_datastore) # At-least one of datastore, host system is required to migrate if self.datastore_object is None and self.host_object is None: self.module.fail_json(msg="Unable to find destination datastore" " and destination host system.") # Check if datastore is required, this check is required if destination # and source host system does not share same datastore. host_datastore_required = [] for vm_datastore in self.vm.datastore: if self.host_object and vm_datastore not in self.host_object.datastore: host_datastore_required.append(True) else: host_datastore_required.append(False) if any(host_datastore_required) and dest_datastore is None: msg = "Destination host system does not share" \ " datastore ['%s'] with source host system ['%s'] on which" \ " virtual machine is located. Please specify destination_datastore" \ " to rectify this problem." 
% ("', '".join([ds.name for ds in self.host_object.datastore]), "', '".join([ds.name for ds in self.vm.datastore])) self.module.fail_json(msg=msg) storage_vmotion_needed = True change_required = True if self.host_object and self.datastore_object: # We have both host system and datastore object if not self.datastore_object.summary.accessible: # Datastore is not accessible self.module.fail_json(msg='Destination datastore %s is' ' not accessible.' % dest_datastore) if self.datastore_object not in self.host_object.datastore: # Datastore is not associated with host system self.module.fail_json(msg="Destination datastore %s provided" " is not associated with destination" " host system %s. Please specify" " datastore value ['%s'] associated with" " the given host system." % (dest_datastore, dest_host_name, "', '".join([ds.name for ds in self.host_object.datastore]))) if self.vm.runtime.host.name == dest_host_name and dest_datastore in [ds.name for ds in self.vm.datastore]: change_required = False if self.host_object and self.datastore_object is None: if self.vm.runtime.host.name == dest_host_name: # VM is already located on same host change_required = False storage_vmotion_needed = False elif self.datastore_object and self.host_object is None: if self.datastore_object in self.vm.datastore: # VM is already located on same datastore change_required = False if not self.datastore_object.summary.accessible: # Datastore is not accessible self.module.fail_json(msg='Destination datastore %s is' ' not accessible.' 
% dest_datastore) if module.check_mode: result['running_host'] = module.params['destination_host'] result['changed'] = True module.exit_json(**result) if change_required: # Migrate VM and get Task object back task_object = self.migrate_vm() # Wait for task to complete try: wait_for_task(task_object) except TaskError as task_error: self.module.fail_json(msg=to_native(task_error)) # If task was a success the VM has moved, update running_host and complete module if task_object.info.state == vim.TaskInfo.State.success: # The storage layout is not automatically refreshed, so we trigger it to get coherent module return values if storage_vmotion_needed: self.vm.RefreshStorageInfo() result['running_host'] = module.params['destination_host'] result['changed'] = True module.exit_json(**result) else: msg = 'Unable to migrate virtual machine due to an error, please check vCenter' if task_object.info.error is not None: msg += " : %s" % task_object.info.error module.fail_json(msg=msg) else: try: host = self.vm.summary.runtime.host result['running_host'] = host.summary.config.name except vim.fault.NoPermission: result['running_host'] = 'NA' result['changed'] = False module.exit_json(**result) def migrate_vm(self): """ Migrate virtual machine and return the task. """ relocate_spec = vim.vm.RelocateSpec(host=self.host_object, datastore=self.datastore_object) task_object = self.vm.Relocate(relocate_spec) return task_object def get_vm(self): """ Find unique virtual machine either by UUID or Name. Returns: virtual machine object if found, else None. 
""" vms = [] if self.vm_uuid: if not self.use_instance_uuid: vm_obj = find_vm_by_id(self.content, vm_id=self.params['vm_uuid'], vm_id_type="uuid") elif self.use_instance_uuid: vm_obj = find_vm_by_id(self.content, vm_id=self.params['vm_uuid'], vm_id_type="instance_uuid") vms = [vm_obj] elif self.vm_name: objects = self.get_managed_objects_properties(vim_type=vim.VirtualMachine, properties=['name']) for temp_vm_object in objects: if len(temp_vm_object.propSet) != 1: continue if temp_vm_object.obj.name == self.vm_name: vms.append(temp_vm_object.obj) break elif self.moid: vm_obj = VmomiSupport.templateOf('VirtualMachine')(self.moid, self.si._stub) if vm_obj: vms.append(vm_obj) if len(vms) > 1: self.module.fail_json(msg="Multiple virtual machines with same name %s found." " Please specify vm_uuid instead of vm_name." % self.vm_name) self.vm = vms[0] def main(): argument_spec = vmware_argument_spec() argument_spec.update( dict( vm_name=dict(aliases=['vm']), vm_uuid=dict(aliases=['uuid']), moid=dict(type='str'), use_instance_uuid=dict(type='bool', default=False), destination_host=dict(aliases=['destination']), destination_datastore=dict(aliases=['datastore']) ) ) module = AnsibleModule( argument_spec=argument_spec, supports_check_mode=True, required_one_of=[ ['destination_host', 'destination_datastore'], ['vm_uuid', 'vm_name', 'moid'], ], mutually_exclusive=[ ['vm_uuid', 'vm_name', 'moid'], ], ) vmotion_manager = VmotionManager(module) if __name__ == '__main__': main()
closed
ansible/ansible
https://github.com/ansible/ansible
64,503
VMware: vmware_vmotion fails to move VM from one cluster to another
##### SUMMARY When running vmware_vmotion module it fails with the error: ``` "msg": "('A specified parameter was not correct: spec.pool', None)" ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME vmware_vmotion module ##### ANSIBLE VERSION ``` ansible 2.8.6 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] ``` ##### CONFIGURATION ``` ACTION_WARNINGS(default) = True AGNOSTIC_BECOME_PROMPT(default) = True ALLOW_WORLD_READABLE_TMPFILES(default) = False ANSIBLE_CONNECTION_PATH(default) = None ANSIBLE_COW_PATH(default) = None ANSIBLE_COW_SELECTION(default) = default ANSIBLE_COW_WHITELIST(default) = ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kit ANSIBLE_FORCE_COLOR(default) = False ANSIBLE_NOCOLOR(default) = False ANSIBLE_NOCOWS(default) = False ANSIBLE_PIPELINING(default) = False ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s ANSIBLE_SSH_CONTROL_PATH(default) = None ANSIBLE_SSH_CONTROL_PATH_DIR(default) = ~/.ansible/cp ANSIBLE_SSH_EXECUTABLE(default) = ssh ANSIBLE_SSH_RETRIES(default) = 0 ANY_ERRORS_FATAL(default) = False BECOME_ALLOW_SAME_USER(default) = False BECOME_PLUGIN_PATH(default) = [u'/home/awx/.ansible/plugins/become', u'/usr/share/ansible/plugins/become'] CACHE_PLUGIN(default) = memory CACHE_PLUGIN_CONNECTION(default) = None CACHE_PLUGIN_PREFIX(default) = ansible_facts CACHE_PLUGIN_TIMEOUT(default) = 86400 COLLECTIONS_PATHS(default) = [u'/home/awx/.ansible/collections', u'/usr/share/ansible/collections'] COLOR_CHANGED(default) = yellow COLOR_CONSOLE_PROMPT(default) = white COLOR_DEBUG(default) = dark gray COLOR_DEPRECATE(default) = 
purple COLOR_DIFF_ADD(default) = green COLOR_DIFF_LINES(default) = cyan COLOR_DIFF_REMOVE(default) = red COLOR_ERROR(default) = red COLOR_HIGHLIGHT(default) = white COLOR_OK(default) = green COLOR_SKIP(default) = cyan COLOR_UNREACHABLE(default) = bright red COLOR_VERBOSE(default) = blue COLOR_WARN(default) = bright purple COMMAND_WARNINGS(default) = True CONDITIONAL_BARE_VARS(default) = True CONNECTION_FACTS_MODULES(default) = {'junos': 'junos_facts', 'eos': 'eos_facts', 'frr': 'frr_facts', 'iosxr': 'iosxr_facts', 'nxos': 'nxos_facts', 'ios': 'i DEFAULT_ACTION_PLUGIN_PATH(default) = [u'/home/awx/.ansible/plugins/action', u'/usr/share/ansible/plugins/action'] DEFAULT_ALLOW_UNSAFE_LOOKUPS(default) = False ``` ##### OS / ENVIRONMENT Running AWX 8.0.0 in a docker container ##### STEPS TO REPRODUCE Run a job template with the following playbook ```yaml --- tasks: - name: Perform vMotion of multiple Virtual Machines vmware_vmotion: vm_name: "{{ item }}" destination_host: "{{ esx_host }}" hostname: "{{ vcenter_host }}" username: "{{ vmware_admin }}" password: "{{ vault_vcenter_pass }}" validate_certs: no delegate_to: localhost loop: "{{ groups['vmotion'] }}" ``` NOTE: I did try this withOUT the loop and I get the same error. ##### EXPECTED RESULTS Expected host[s] to move to new ESX host ##### ACTUAL RESULTS ``` "msg": "('A specified parameter was not correct: spec.pool', None)" No command since running thru awx. ```
https://github.com/ansible/ansible/issues/64503
https://github.com/ansible/ansible/pull/64544
0ab21fd1ec3e397888d73bbea8f1ad40c0d6c315
d4c4c92c97e8dc2791f6a9f63ba0a3a0ce467a6b
2019-11-06T12:10:11Z
python
2019-11-22T21:42:41Z
test/integration/targets/vmware_vmotion/aliases
closed
ansible/ansible
https://github.com/ansible/ansible
64,503
VMware: vmware_vmotion fails to move VM from one cluster to another
##### SUMMARY When running vmware_vmotion module it fails with the error: ``` "msg": "('A specified parameter was not correct: spec.pool', None)" ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME vmware_vmotion module ##### ANSIBLE VERSION ``` ansible 2.8.6 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] ``` ##### CONFIGURATION ``` ACTION_WARNINGS(default) = True AGNOSTIC_BECOME_PROMPT(default) = True ALLOW_WORLD_READABLE_TMPFILES(default) = False ANSIBLE_CONNECTION_PATH(default) = None ANSIBLE_COW_PATH(default) = None ANSIBLE_COW_SELECTION(default) = default ANSIBLE_COW_WHITELIST(default) = ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kit ANSIBLE_FORCE_COLOR(default) = False ANSIBLE_NOCOLOR(default) = False ANSIBLE_NOCOWS(default) = False ANSIBLE_PIPELINING(default) = False ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=60s ANSIBLE_SSH_CONTROL_PATH(default) = None ANSIBLE_SSH_CONTROL_PATH_DIR(default) = ~/.ansible/cp ANSIBLE_SSH_EXECUTABLE(default) = ssh ANSIBLE_SSH_RETRIES(default) = 0 ANY_ERRORS_FATAL(default) = False BECOME_ALLOW_SAME_USER(default) = False BECOME_PLUGIN_PATH(default) = [u'/home/awx/.ansible/plugins/become', u'/usr/share/ansible/plugins/become'] CACHE_PLUGIN(default) = memory CACHE_PLUGIN_CONNECTION(default) = None CACHE_PLUGIN_PREFIX(default) = ansible_facts CACHE_PLUGIN_TIMEOUT(default) = 86400 COLLECTIONS_PATHS(default) = [u'/home/awx/.ansible/collections', u'/usr/share/ansible/collections'] COLOR_CHANGED(default) = yellow COLOR_CONSOLE_PROMPT(default) = white COLOR_DEBUG(default) = dark gray COLOR_DEPRECATE(default) = 
purple COLOR_DIFF_ADD(default) = green COLOR_DIFF_LINES(default) = cyan COLOR_DIFF_REMOVE(default) = red COLOR_ERROR(default) = red COLOR_HIGHLIGHT(default) = white COLOR_OK(default) = green COLOR_SKIP(default) = cyan COLOR_UNREACHABLE(default) = bright red COLOR_VERBOSE(default) = blue COLOR_WARN(default) = bright purple COMMAND_WARNINGS(default) = True CONDITIONAL_BARE_VARS(default) = True CONNECTION_FACTS_MODULES(default) = {'junos': 'junos_facts', 'eos': 'eos_facts', 'frr': 'frr_facts', 'iosxr': 'iosxr_facts', 'nxos': 'nxos_facts', 'ios': 'i DEFAULT_ACTION_PLUGIN_PATH(default) = [u'/home/awx/.ansible/plugins/action', u'/usr/share/ansible/plugins/action'] DEFAULT_ALLOW_UNSAFE_LOOKUPS(default) = False ``` ##### OS / ENVIRONMENT Running AWX 8.0.0 in a docker container ##### STEPS TO REPRODUCE Run a job template with the following playbook ```yaml --- tasks: - name: Perform vMotion of multiple Virtual Machines vmware_vmotion: vm_name: "{{ item }}" destination_host: "{{ esx_host }}" hostname: "{{ vcenter_host }}" username: "{{ vmware_admin }}" password: "{{ vault_vcenter_pass }}" validate_certs: no delegate_to: localhost loop: "{{ groups['vmotion'] }}" ``` NOTE: I did try this withOUT the loop and I get the same error. ##### EXPECTED RESULTS Expected host[s] to move to new ESX host ##### ACTUAL RESULTS ``` "msg": "('A specified parameter was not correct: spec.pool', None)" No command since running thru awx. ```
https://github.com/ansible/ansible/issues/64503
https://github.com/ansible/ansible/pull/64544
0ab21fd1ec3e397888d73bbea8f1ad40c0d6c315
d4c4c92c97e8dc2791f6a9f63ba0a3a0ce467a6b
2019-11-06T12:10:11Z
python
2019-11-22T21:42:41Z
test/integration/targets/vmware_vmotion/tasks/main.yml
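The `spec.pool` failure in the record above is vCenter rejecting a `RelocateSpec` whose `pool` attribute was populated even though no valid destination resource pool had been resolved. The linked fix (PR 64544) amounts to only setting optional relocation fields when the caller actually supplies them, so vCenter keeps the VM's current pool otherwise. A minimal pure-Python sketch of that guard, assuming a stand-in class (`RelocateSpec` and `build_relocate_spec` here are illustrative, not the module's real pyVmomi objects):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class RelocateSpec:
    # Minimal stand-in for pyVmomi's vim.vm.RelocateSpec; the real object
    # distinguishes "attribute never set" from "set to an invalid value".
    host: Optional[str] = None
    pool: Optional[str] = None
    datastore: Optional[str] = None


def build_relocate_spec(dest_host, dest_pool=None, dest_datastore=None):
    """Populate only the fields the caller actually resolved.

    Handing RelocateVM_Task a spec whose pool was filled in with an
    unresolved value is what yields "A specified parameter was not
    correct: spec.pool"; leaving the attribute alone lets vCenter keep
    the VM's current resource pool during the vMotion.
    """
    spec = RelocateSpec(host=dest_host)
    if dest_pool is not None:
        spec.pool = dest_pool
    if dest_datastore is not None:
        spec.datastore = dest_datastore
    return spec
```

The reporter's playbook passes only `destination_host`, which is exactly the case where the pool must stay unset.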
closed
ansible/ansible
https://github.com/ansible/ansible
62,969
User module on Darwin is not idempotent
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> Running the [user module](https://docs.ansible.com/ansible/latest/modules/user_module.html) on Ansible `2.8.5` and `devel` branch to create a user on a Darwin system fails if the user already exists with an error: `Cannot update property "uid" for user "user"` The issue exists in `lib/ansible/modules/system/user.py` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> user module ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.10.0.dev0 config file = /Users/johnchen/repos/buildhost-configuration3/scripts/macos_bootstrap/ansible.cfg configured module search path = [u'/Users/johnchen/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /Users/johnchen/repos/ansible/lib/ansible executable location = /Users/johnchen/repos/ansible/bin/ansible python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below HOST_KEY_CHECKING(ansible.cfg) = False RETRY_FILES_ENABLED(ansible.cfg) = False ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> macOS 10.14.5 (18F132) Kernel Version: Darwin 18.6.0 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> Run the user task on a macOS host to create a user with any name and uid. The first run should complete successfully. 
Any subsequent runs will result in the mentioned failure. <!--- Paste example playbooks or commands between quotes below --> ```yaml - name: Add Mac User become: true user: name: "test" uid: "500" ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> Play Recap should show no changes on second run. Example: ``` ok: [127.0.0.1] => (item={u'uid': 500, u'name': u'test'}) ``` ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> The second run of the playbook fails with a `Cannot update property` error <!--- Paste verbatim command output between quotes --> ```paste below failed: [127.0.0.1] (item={u'uid': u'500', u'name': u'test'}) => {"ansible_loop_var": "item", "changed": false, "err": "", "item": {"name": "test", "uid": "500"}, "msg": "Cannot update property \"uid\" for user \"test\".", "out": "", "rc": 40} ```
https://github.com/ansible/ansible/issues/62969
https://github.com/ansible/ansible/pull/62973
d4c4c92c97e8dc2791f6a9f63ba0a3a0ce467a6b
c73288ad5387a728349fae772aa9d1769af73a13
2019-09-30T16:19:22Z
python
2019-11-22T22:05:17Z
changelogs/fragments/user-fix-value-comparison-on-macos.yaml
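The changelog fragment name above points at the root cause of the rc 40 failure: on macOS the module reads account properties back through `dscl`, which reports every value as text, while module parameters such as `uid` arrive as integers. The naive comparison `'500' != 500` therefore always flagged a change, and the resulting `dscl -change` on an already-correct account failed. A sketch of the normalised comparison, assuming a hypothetical helper name (`dscl_needs_update` is ours; the actual fix lives inside `lib/ansible/modules/system/user.py`):

```python
def dscl_needs_update(current_value, desired_value):
    """Decide whether a dscl-reported property differs from the desired one.

    dscl prints properties as text, so both sides are normalised to
    strings before comparing; otherwise '500' != 500 triggers a spurious
    `dscl -change` and breaks idempotence on an unchanged account.
    """
    if current_value is None:
        return desired_value is not None
    return str(current_value) != str(desired_value)
```

With this normalisation the second playbook run reports `ok` instead of attempting (and failing) an update.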
lib/ansible/modules/system/user.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright: (c) 2012, Stephen Fromm <[email protected]> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['stableinterface'], 'supported_by': 'core'} DOCUMENTATION = r''' module: user version_added: "0.2" short_description: Manage user accounts description: - Manage user accounts and user attributes. - For Windows targets, use the M(win_user) module instead. options: name: description: - Name of the user to create, remove or modify. type: str required: true aliases: [ user ] uid: description: - Optionally sets the I(UID) of the user. type: int comment: description: - Optionally sets the description (aka I(GECOS)) of user account. type: str hidden: description: - macOS only, optionally hide the user from the login window and system preferences. - The default will be C(yes) if the I(system) option is used. type: bool version_added: "2.6" non_unique: description: - Optionally when used with the -u option, this option allows to change the user ID to a non-unique value. type: bool default: no version_added: "1.1" seuser: description: - Optionally sets the seuser type (user_u) on selinux enabled systems. type: str version_added: "2.1" group: description: - Optionally sets the user's primary group (takes a group name). type: str groups: description: - List of groups user will be added to. When set to an empty string C(''), the user is removed from all groups except the primary group. - Before Ansible 2.3, the only input format allowed was a comma separated string. - Mutually exclusive with C(local) type: list append: description: - If C(yes), add the user to the groups specified in C(groups). - If C(no), user will only be added to the groups specified in C(groups), removing them from all other groups. 
- Mutually exclusive with C(local) type: bool default: no shell: description: - Optionally set the user's shell. - On macOS, before Ansible 2.5, the default shell for non-system users was C(/usr/bin/false). Since Ansible 2.5, the default shell for non-system users on macOS is C(/bin/bash). - On other operating systems, the default shell is determined by the underlying tool being used. See Notes for details. type: str home: description: - Optionally set the user's home directory. type: path skeleton: description: - Optionally set a home skeleton directory. - Requires C(create_home) option! type: str version_added: "2.0" password: description: - Optionally set the user's password to this crypted value. - On macOS systems, this value has to be cleartext. Beware of security issues. - To create a disabled account on Linux systems, set this to C('!') or C('*'). - To create a disabled account on OpenBSD, set this to C('*************'). - See U(https://docs.ansible.com/ansible/faq.html#how-do-i-generate-encrypted-passwords-for-the-user-module) for details on various ways to generate these password values. type: str state: description: - Whether the account should exist or not, taking action if the state is different from what is stated. type: str choices: [ absent, present ] default: present create_home: description: - Unless set to C(no), a home directory will be made for the user when the account is created or if the home directory does not exist. - Changed from C(createhome) to C(create_home) in Ansible 2.5. type: bool default: yes aliases: [ createhome ] move_home: description: - "If set to C(yes) when used with C(home: ), attempt to move the user's old home directory to the specified directory if it isn't there already and the old home exists." type: bool default: no system: description: - When creating an account C(state=present), setting this to C(yes) makes the user a system account. - This setting cannot be changed on existing users. 
type: bool default: no force: description: - This only affects C(state=absent), it forces removal of the user and associated directories on supported platforms. - The behavior is the same as C(userdel --force), check the man page for C(userdel) on your system for details and support. - When used with C(generate_ssh_key=yes) this forces an existing key to be overwritten. type: bool default: no remove: description: - This only affects C(state=absent), it attempts to remove directories associated with the user. - The behavior is the same as C(userdel --remove), check the man page for details and support. type: bool default: no login_class: description: - Optionally sets the user's login class, a feature of most BSD OSs. type: str generate_ssh_key: description: - Whether to generate a SSH key for the user in question. - This will B(not) overwrite an existing SSH key unless used with C(force=yes). type: bool default: no version_added: "0.9" ssh_key_bits: description: - Optionally specify number of bits in SSH key to create. type: int default: default set by ssh-keygen version_added: "0.9" ssh_key_type: description: - Optionally specify the type of SSH key to generate. - Available SSH key types will depend on implementation present on target host. type: str default: rsa version_added: "0.9" ssh_key_file: description: - Optionally specify the SSH key filename. - If this is a relative filename then it will be relative to the user's home directory. - This parameter defaults to I(.ssh/id_rsa). type: path version_added: "0.9" ssh_key_comment: description: - Optionally define the comment for the SSH key. type: str default: ansible-generated on $HOSTNAME version_added: "0.9" ssh_key_passphrase: description: - Set a passphrase for the SSH key. - If no passphrase is provided, the SSH key will default to having no passphrase. type: str version_added: "0.9" update_password: description: - C(always) will update passwords if they differ. 
- C(on_create) will only set the password for newly created users. type: str choices: [ always, on_create ] default: always version_added: "1.3" expires: description: - An expiry time for the user in epoch, it will be ignored on platforms that do not support this. - Currently supported on GNU/Linux, FreeBSD, and DragonFlyBSD. - Since Ansible 2.6 you can remove the expiry time specify a negative value. Currently supported on GNU/Linux and FreeBSD. type: float version_added: "1.9" password_lock: description: - Lock the password (usermod -L, pw lock, usermod -C). - BUT implementation differs on different platforms, this option does not always mean the user cannot login via other methods. - This option does not disable the user, only lock the password. Do not change the password in the same task. - Currently supported on Linux, FreeBSD, DragonFlyBSD, NetBSD, OpenBSD. type: bool version_added: "2.6" local: description: - Forces the use of "local" command alternatives on platforms that implement it. - This is useful in environments that use centralized authentification when you want to manipulate the local users (i.e. it uses C(luseradd) instead of C(useradd)). - This will check C(/etc/passwd) for an existing account before invoking commands. If the local account database exists somewhere other than C(/etc/passwd), this setting will not work properly. - This requires that the above commands as well as C(/etc/passwd) must exist on the target host, otherwise it will be a fatal error. - Mutually exclusive with C(groups) and C(append) type: bool default: no version_added: "2.4" profile: description: - Sets the profile of the user. - Does nothing when used with other platforms. - Can set multiple profiles using comma separation. - To delete all the profiles, use C(profile=''). - Currently supported on Illumos/Solaris. type: str version_added: "2.8" authorization: description: - Sets the authorization of the user. - Does nothing when used with other platforms. 
- Can set multiple authorizations using comma separation. - To delete all authorizations, use C(authorization=''). - Currently supported on Illumos/Solaris. type: str version_added: "2.8" role: description: - Sets the role of the user. - Does nothing when used with other platforms. - Can set multiple roles using comma separation. - To delete all roles, use C(role=''). - Currently supported on Illumos/Solaris. type: str version_added: "2.8" notes: - There are specific requirements per platform on user management utilities. However they generally come pre-installed with the system and Ansible will require they are present at runtime. If they are not, a descriptive error message will be shown. - On SunOS platforms, the shadow file is backed up automatically since this module edits it directly. On other platforms, the shadow file is backed up by the underlying tools used by this module. - On macOS, this module uses C(dscl) to create, modify, and delete accounts. C(dseditgroup) is used to modify group membership. Accounts are hidden from the login window by modifying C(/Library/Preferences/com.apple.loginwindow.plist). - On FreeBSD, this module uses C(pw useradd) and C(chpass) to create, C(pw usermod) and C(chpass) to modify, C(pw userdel) remove, C(pw lock) to lock, and C(pw unlock) to unlock accounts. - On all other platforms, this module uses C(useradd) to create, C(usermod) to modify, and C(userdel) to remove accounts. 
seealso: - module: authorized_key - module: group - module: win_user author: - Stephen Fromm (@sfromm) ''' EXAMPLES = r''' - name: Add the user 'johnd' with a specific uid and a primary group of 'admin' user: name: johnd comment: John Doe uid: 1040 group: admin - name: Add the user 'james' with a bash shell, appending the group 'admins' and 'developers' to the user's groups user: name: james shell: /bin/bash groups: admins,developers append: yes - name: Remove the user 'johnd' user: name: johnd state: absent remove: yes - name: Create a 2048-bit SSH key for user jsmith in ~jsmith/.ssh/id_rsa user: name: jsmith generate_ssh_key: yes ssh_key_bits: 2048 ssh_key_file: .ssh/id_rsa - name: Added a consultant whose account you want to expire user: name: james18 shell: /bin/zsh groups: developers expires: 1422403387 - name: Starting at Ansible 2.6, modify user, remove expiry time user: name: james18 expires: -1 ''' RETURN = r''' append: description: Whether or not to append the user to groups returned: When state is 'present' and the user exists type: bool sample: True comment: description: Comment section from passwd file, usually the user name returned: When user exists type: str sample: Agent Smith create_home: description: Whether or not to create the home directory returned: When user does not exist and not check mode type: bool sample: True force: description: Whether or not a user account was forcibly deleted returned: When state is 'absent' and user exists type: bool sample: False group: description: Primary user group ID returned: When user exists type: int sample: 1001 groups: description: List of groups of which the user is a member returned: When C(groups) is not empty and C(state) is 'present' type: str sample: 'chrony,apache' home: description: "Path to user's home directory" returned: When C(state) is 'present' type: str sample: '/home/asmith' move_home: description: Whether or not to move an existing home directory returned: When C(state) is 'present' and 
user exists type: bool sample: False name: description: User account name returned: always type: str sample: asmith password: description: Masked value of the password returned: When C(state) is 'present' and C(password) is not empty type: str sample: 'NOT_LOGGING_PASSWORD' remove: description: Whether or not to remove the user account returned: When C(state) is 'absent' and user exists type: bool sample: True shell: description: User login shell returned: When C(state) is 'present' type: str sample: '/bin/bash' ssh_fingerprint: description: Fingerprint of generated SSH key returned: When C(generate_ssh_key) is C(True) type: str sample: '2048 SHA256:aYNHYcyVm87Igh0IMEDMbvW0QDlRQfE0aJugp684ko8 ansible-generated on host (RSA)' ssh_key_file: description: Path to generated SSH private key file returned: When C(generate_ssh_key) is C(True) type: str sample: /home/asmith/.ssh/id_rsa ssh_public_key: description: Generated SSH public key file returned: When C(generate_ssh_key) is C(True) type: str sample: > 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC95opt4SPEC06tOYsJQJIuN23BbLMGmYo8ysVZQc4h2DZE9ugbjWWGS1/pweUGjVstgzMkBEeBCByaEf/RJKNecKRPeGd2Bw9DCj/bn5Z6rGfNENKBmo 618mUJBvdlEgea96QGjOwSB7/gmonduC7gsWDMNcOdSE3wJMTim4lddiBx4RgC9yXsJ6Tkz9BHD73MXPpT5ETnse+A3fw3IGVSjaueVnlUyUmOBf7fzmZbhlFVXf2Zi2rFTXqvbdGHKkzpw1U8eB8xFPP7y d5u1u0e6Acju/8aZ/l17IDFiLke5IzlqIMRTEbDwLNeO84YQKWTm9fODHzhYe0yvxqLiK07 ansible-generated on host' stderr: description: Standard error from running commands returned: When stderr is returned by a command that is run type: str sample: Group wheels does not exist stdout: description: Standard output from running commands returned: When standard output is returned by the command that is run type: str sample: system: description: Whether or not the account is a system account returned: When C(system) is passed to the module and the account does not exist type: bool sample: True uid: description: User ID of the user account returned: When C(UID) is passed to the module 
type: int sample: 1044 ''' import errno import grp import calendar import os import re import pty import pwd import select import shutil import socket import subprocess import time from ansible.module_utils import distro from ansible.module_utils._text import to_native, to_bytes from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.common.sys_info import get_platform_subclass try: import spwd HAVE_SPWD = True except ImportError: HAVE_SPWD = False _HASH_RE = re.compile(r'[^a-zA-Z0-9./=]') class User(object): """ This is a generic User manipulation class that is subclassed based on platform. A subclass may wish to override the following action methods:- - create_user() - remove_user() - modify_user() - ssh_key_gen() - ssh_key_fingerprint() - user_exists() All subclasses MUST define platform and distribution (which may be None). """ platform = 'Generic' distribution = None PASSWORDFILE = '/etc/passwd' SHADOWFILE = '/etc/shadow' SHADOWFILE_EXPIRE_INDEX = 7 LOGIN_DEFS = '/etc/login.defs' DATE_FORMAT = '%Y-%m-%d' def __new__(cls, *args, **kwargs): new_cls = get_platform_subclass(User) return super(cls, new_cls).__new__(new_cls) def __init__(self, module): self.module = module self.state = module.params['state'] self.name = module.params['name'] self.uid = module.params['uid'] self.hidden = module.params['hidden'] self.non_unique = module.params['non_unique'] self.seuser = module.params['seuser'] self.group = module.params['group'] self.comment = module.params['comment'] self.shell = module.params['shell'] self.password = module.params['password'] self.force = module.params['force'] self.remove = module.params['remove'] self.create_home = module.params['create_home'] self.move_home = module.params['move_home'] self.skeleton = module.params['skeleton'] self.system = module.params['system'] self.login_class = module.params['login_class'] self.append = module.params['append'] self.sshkeygen = module.params['generate_ssh_key'] self.ssh_bits = 
module.params['ssh_key_bits'] self.ssh_type = module.params['ssh_key_type'] self.ssh_comment = module.params['ssh_key_comment'] self.ssh_passphrase = module.params['ssh_key_passphrase'] self.update_password = module.params['update_password'] self.home = module.params['home'] self.expires = None self.password_lock = module.params['password_lock'] self.groups = None self.local = module.params['local'] self.profile = module.params['profile'] self.authorization = module.params['authorization'] self.role = module.params['role'] if module.params['groups'] is not None: self.groups = ','.join(module.params['groups']) if module.params['expires'] is not None: try: self.expires = time.gmtime(module.params['expires']) except Exception as e: module.fail_json(msg="Invalid value for 'expires' %s: %s" % (self.expires, to_native(e))) if module.params['ssh_key_file'] is not None: self.ssh_file = module.params['ssh_key_file'] else: self.ssh_file = os.path.join('.ssh', 'id_%s' % self.ssh_type) def check_password_encrypted(self): # Darwin needs cleartext password, so skip validation if self.module.params['password'] and self.platform != 'Darwin': maybe_invalid = False # Allow setting certain passwords in order to disable the account if self.module.params['password'] in set(['*', '!', '*************']): maybe_invalid = False else: # : for delimiter, * for disable user, ! 
for lock user # these characters are invalid in the password if any(char in self.module.params['password'] for char in ':*!'): maybe_invalid = True if '$' not in self.module.params['password']: maybe_invalid = True else: fields = self.module.params['password'].split("$") if len(fields) >= 3: # contains character outside the crypto constraint if bool(_HASH_RE.search(fields[-1])): maybe_invalid = True # md5 if fields[1] == '1' and len(fields[-1]) != 22: maybe_invalid = True # sha256 if fields[1] == '5' and len(fields[-1]) != 43: maybe_invalid = True # sha512 if fields[1] == '6' and len(fields[-1]) != 86: maybe_invalid = True else: maybe_invalid = True if maybe_invalid: self.module.warn("The input password appears not to have been hashed. " "The 'password' argument must be encrypted for this module to work properly.") def execute_command(self, cmd, use_unsafe_shell=False, data=None, obey_checkmode=True): if self.module.check_mode and obey_checkmode: self.module.debug('In check mode, would have run: "%s"' % cmd) return (0, '', '') else: # cast all args to strings ansible-modules-core/issues/4397 cmd = [str(x) for x in cmd] return self.module.run_command(cmd, use_unsafe_shell=use_unsafe_shell, data=data) def backup_shadow(self): if not self.module.check_mode and self.SHADOWFILE: return self.module.backup_local(self.SHADOWFILE) def remove_user_userdel(self): if self.local: command_name = 'luserdel' else: command_name = 'userdel' cmd = [self.module.get_bin_path(command_name, True)] if self.force: cmd.append('-f') if self.remove: cmd.append('-r') cmd.append(self.name) return self.execute_command(cmd) def create_user_useradd(self): if self.local: command_name = 'luseradd' else: command_name = 'useradd' cmd = [self.module.get_bin_path(command_name, True)] if self.uid is not None: cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.seuser is not None: cmd.append('-Z') cmd.append(self.seuser) if self.group is not None: if not 
self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) cmd.append('-g') cmd.append(self.group) elif self.group_exists(self.name): # use the -N option (no user group) if a group already # exists with the same name as the user to prevent # errors from useradd trying to create a group when # USERGROUPS_ENAB is set in /etc/login.defs. if os.path.exists('/etc/redhat-release'): dist = distro.linux_distribution(full_distribution_name=False) major_release = int(dist[1].split('.')[0]) if major_release <= 5: cmd.append('-n') else: cmd.append('-N') elif os.path.exists('/etc/SuSE-release'): # -N did not exist in useradd before SLE 11 and did not # automatically create a group dist = distro.linux_distribution(full_distribution_name=False) major_release = int(dist[1].split('.')[0]) if major_release >= 12: cmd.append('-N') else: cmd.append('-N') if self.groups is not None and not self.local and len(self.groups): groups = self.get_groups_set() cmd.append('-G') cmd.append(','.join(groups)) if self.comment is not None: cmd.append('-c') cmd.append(self.comment) if self.home is not None: # If the specified path to the user home contains parent directories that # do not exist, first create the home directory since useradd cannot # create parent directories parent = os.path.dirname(self.home) if not os.path.isdir(parent): self.create_homedir(self.home) cmd.append('-d') cmd.append(self.home) if self.shell is not None: cmd.append('-s') cmd.append(self.shell) if self.expires is not None: cmd.append('-e') if self.expires < time.gmtime(0): cmd.append('') else: cmd.append(time.strftime(self.DATE_FORMAT, self.expires)) if self.password is not None: cmd.append('-p') cmd.append(self.password) if self.create_home: if not self.local: cmd.append('-m') if self.skeleton is not None: cmd.append('-k') cmd.append(self.skeleton) else: cmd.append('-M') if self.system: cmd.append('-r') cmd.append(self.name) return self.execute_command(cmd) def 
_check_usermod_append(self): # check if this version of usermod can append groups if self.local: command_name = 'lusermod' else: command_name = 'usermod' usermod_path = self.module.get_bin_path(command_name, True) # for some reason, usermod --help cannot be used by non root # on RH/Fedora, due to lack of execute bit for others if not os.access(usermod_path, os.X_OK): return False cmd = [usermod_path, '--help'] (rc, data1, data2) = self.execute_command(cmd, obey_checkmode=False) helpout = data1 + data2 # check if --append exists lines = to_native(helpout).split('\n') for line in lines: if line.strip().startswith('-a, --append'): return True return False def modify_user_usermod(self): if self.local: command_name = 'lusermod' else: command_name = 'usermod' cmd = [self.module.get_bin_path(command_name, True)] info = self.user_info() has_append = self._check_usermod_append() if self.uid is not None and info[2] != int(self.uid): cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) ginfo = self.group_info(self.group) if info[3] != ginfo[2]: cmd.append('-g') cmd.append(self.group) if self.groups is not None: # get a list of all groups for the user, including the primary current_groups = self.user_group_membership(exclude_primary=False) groups_need_mod = False groups = [] if self.groups == '': if current_groups and not self.append: groups_need_mod = True else: groups = self.get_groups_set(remove_existing=False) group_diff = set(current_groups).symmetric_difference(groups) if group_diff: if self.append: for g in groups: if g in group_diff: if has_append: cmd.append('-a') groups_need_mod = True break else: groups_need_mod = True if groups_need_mod and not self.local: if self.append and not has_append: cmd.append('-A') cmd.append(','.join(group_diff)) else: cmd.append('-G') cmd.append(','.join(groups)) if self.comment is not 
None and info[4] != self.comment: cmd.append('-c') cmd.append(self.comment) if self.home is not None and info[5] != self.home: cmd.append('-d') cmd.append(self.home) if self.move_home: cmd.append('-m') if self.shell is not None and info[6] != self.shell: cmd.append('-s') cmd.append(self.shell) if self.expires is not None: current_expires = int(self.user_password()[1]) if self.expires < time.gmtime(0): if current_expires >= 0: cmd.append('-e') cmd.append('') else: # Convert days since Epoch to seconds since Epoch as struct_time current_expire_date = time.gmtime(current_expires * 86400) # Current expires is negative or we compare year, month, and day only if current_expires < 0 or current_expire_date[:3] != self.expires[:3]: cmd.append('-e') cmd.append(time.strftime(self.DATE_FORMAT, self.expires)) # Lock if no password or unlocked, unlock only if locked if self.password_lock and not info[1].startswith('!'): cmd.append('-L') elif self.password_lock is False and info[1].startswith('!'): # usermod will refuse to unlock a user with no password, module shows 'changed' regardless cmd.append('-U') if self.update_password == 'always' and self.password is not None and info[1] != self.password: cmd.append('-p') cmd.append(self.password) # skip if no changes to be made if len(cmd) == 1: return (None, '', '') cmd.append(self.name) return self.execute_command(cmd) def group_exists(self, group): try: # Try group as a gid first grp.getgrgid(int(group)) return True except (ValueError, KeyError): try: grp.getgrnam(group) return True except KeyError: return False def group_info(self, group): if not self.group_exists(group): return False try: # Try group as a gid first return list(grp.getgrgid(int(group))) except (ValueError, KeyError): return list(grp.getgrnam(group)) def get_groups_set(self, remove_existing=True): if self.groups is None: return None info = self.user_info() groups = set(x.strip() for x in self.groups.split(',') if x) for g in groups.copy(): if not 
self.group_exists(g): self.module.fail_json(msg="Group %s does not exist" % (g)) if info and remove_existing and self.group_info(g)[2] == info[3]: groups.remove(g) return groups def user_group_membership(self, exclude_primary=True): ''' Return a list of groups the user belongs to ''' groups = [] info = self.get_pwd_info() for group in grp.getgrall(): if self.name in group.gr_mem: # Exclude the user's primary group by default if not exclude_primary: groups.append(group[0]) else: if info[3] != group.gr_gid: groups.append(group[0]) return groups def user_exists(self): # The pwd module does not distinguish between local and directory accounts. # It's output cannot be used to determine whether or not an account exists locally. # It returns True if the account exists locally or in the directory, so instead # look in the local PASSWORD file for an existing account. if self.local: if not os.path.exists(self.PASSWORDFILE): self.module.fail_json(msg="'local: true' specified but unable to find local account file {0} to parse.".format(self.PASSWORDFILE)) exists = False name_test = '{0}:'.format(self.name) with open(self.PASSWORDFILE, 'rb') as f: reversed_lines = f.readlines()[::-1] for line in reversed_lines: if line.startswith(to_bytes(name_test)): exists = True break if not exists: self.module.warn( "'local: true' specified and user '{name}' was not found in {file}. 
" "The local user account may already exist if the local account database exists " "somewhere other than {file}.".format(file=self.PASSWORDFILE, name=self.name)) return exists else: try: if pwd.getpwnam(self.name): return True except KeyError: return False def get_pwd_info(self): if not self.user_exists(): return False return list(pwd.getpwnam(self.name)) def user_info(self): if not self.user_exists(): return False info = self.get_pwd_info() if len(info[1]) == 1 or len(info[1]) == 0: info[1] = self.user_password()[0] return info def user_password(self): passwd = '' expires = '' if HAVE_SPWD: try: passwd = spwd.getspnam(self.name)[1] expires = spwd.getspnam(self.name)[7] return passwd, expires except KeyError: return passwd, expires except OSError as e: # Python 3.6 raises PermissionError instead of KeyError # Due to absence of PermissionError in python2.7 need to check # errno if e.errno in (errno.EACCES, errno.EPERM, errno.ENOENT): return passwd, expires raise if not self.user_exists(): return passwd, expires elif self.SHADOWFILE: passwd, expires = self.parse_shadow_file() return passwd, expires def parse_shadow_file(self): passwd = '' expires = '' if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK): with open(self.SHADOWFILE, 'r') as f: for line in f: if line.startswith('%s:' % self.name): passwd = line.split(':')[1] expires = line.split(':')[self.SHADOWFILE_EXPIRE_INDEX] or -1 return passwd, expires def get_ssh_key_path(self): info = self.user_info() if os.path.isabs(self.ssh_file): ssh_key_file = self.ssh_file else: if not os.path.exists(info[5]) and not self.module.check_mode: raise Exception('User %s home directory does not exist' % self.name) ssh_key_file = os.path.join(info[5], self.ssh_file) return ssh_key_file def ssh_key_gen(self): info = self.user_info() overwrite = None try: ssh_key_file = self.get_ssh_key_path() except Exception as e: return (1, '', to_native(e)) ssh_dir = os.path.dirname(ssh_key_file) if not 
os.path.exists(ssh_dir): if self.module.check_mode: return (0, '', '') try: os.mkdir(ssh_dir, int('0700', 8)) os.chown(ssh_dir, info[2], info[3]) except OSError as e: return (1, '', 'Failed to create %s: %s' % (ssh_dir, to_native(e))) if os.path.exists(ssh_key_file): if self.force: # ssh-keygen doesn't support overwriting the key interactively, so send 'y' to confirm overwrite = 'y' else: return (None, 'Key already exists, use "force: yes" to overwrite', '') cmd = [self.module.get_bin_path('ssh-keygen', True)] cmd.append('-t') cmd.append(self.ssh_type) if self.ssh_bits > 0: cmd.append('-b') cmd.append(self.ssh_bits) cmd.append('-C') cmd.append(self.ssh_comment) cmd.append('-f') cmd.append(ssh_key_file) if self.ssh_passphrase is not None: if self.module.check_mode: self.module.debug('In check mode, would have run: "%s"' % cmd) return (0, '', '') master_in_fd, slave_in_fd = pty.openpty() master_out_fd, slave_out_fd = pty.openpty() master_err_fd, slave_err_fd = pty.openpty() env = os.environ.copy() env['LC_ALL'] = 'C' try: p = subprocess.Popen([to_bytes(c) for c in cmd], stdin=slave_in_fd, stdout=slave_out_fd, stderr=slave_err_fd, preexec_fn=os.setsid, env=env) out_buffer = b'' err_buffer = b'' while p.poll() is None: r, w, e = select.select([master_out_fd, master_err_fd], [], [], 1) first_prompt = b'Enter passphrase (empty for no passphrase):' second_prompt = b'Enter same passphrase again' prompt = first_prompt for fd in r: if fd == master_out_fd: chunk = os.read(master_out_fd, 10240) out_buffer += chunk if prompt in out_buffer: os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r') prompt = second_prompt else: chunk = os.read(master_err_fd, 10240) err_buffer += chunk if prompt in err_buffer: os.write(master_in_fd, to_bytes(self.ssh_passphrase, errors='strict') + b'\r') prompt = second_prompt if b'Overwrite (y/n)?' in out_buffer or b'Overwrite (y/n)?' 
in err_buffer: # The key was created between us checking for existence and now return (None, 'Key already exists', '') rc = p.returncode out = to_native(out_buffer) err = to_native(err_buffer) except OSError as e: return (1, '', to_native(e)) else: cmd.append('-N') cmd.append('') (rc, out, err) = self.execute_command(cmd, data=overwrite) if rc == 0 and not self.module.check_mode: # If the keys were successfully created, we should be able # to tweak ownership. os.chown(ssh_key_file, info[2], info[3]) os.chown('%s.pub' % ssh_key_file, info[2], info[3]) return (rc, out, err) def ssh_key_fingerprint(self): ssh_key_file = self.get_ssh_key_path() if not os.path.exists(ssh_key_file): return (1, 'SSH Key file %s does not exist' % ssh_key_file, '') cmd = [self.module.get_bin_path('ssh-keygen', True)] cmd.append('-l') cmd.append('-f') cmd.append(ssh_key_file) return self.execute_command(cmd, obey_checkmode=False) def get_ssh_public_key(self): ssh_public_key_file = '%s.pub' % self.get_ssh_key_path() try: with open(ssh_public_key_file, 'r') as f: ssh_public_key = f.read().strip() except IOError: return None return ssh_public_key def create_user(self): # by default we use the create_user_useradd method return self.create_user_useradd() def remove_user(self): # by default we use the remove_user_userdel method return self.remove_user_userdel() def modify_user(self): # by default we use the modify_user_usermod method return self.modify_user_usermod() def create_homedir(self, path): if not os.path.exists(path): if self.skeleton is not None: skeleton = self.skeleton else: skeleton = '/etc/skel' if os.path.exists(skeleton): try: shutil.copytree(skeleton, path, symlinks=True) except OSError as e: self.module.exit_json(failed=True, msg="%s" % to_native(e)) else: try: os.makedirs(path) except OSError as e: self.module.exit_json(failed=True, msg="%s" % to_native(e)) # get umask from /etc/login.defs and set correct home mode if os.path.exists(self.LOGIN_DEFS): with open(self.LOGIN_DEFS, 
'r') as f: for line in f: m = re.match(r'^UMASK\s+(\d+)$', line) if m: umask = int(m.group(1), 8) mode = 0o777 & ~umask try: os.chmod(path, mode) except OSError as e: self.module.exit_json(failed=True, msg="%s" % to_native(e)) def chown_homedir(self, uid, gid, path): try: os.chown(path, uid, gid) for root, dirs, files in os.walk(path): for d in dirs: os.chown(os.path.join(root, d), uid, gid) for f in files: os.chown(os.path.join(root, f), uid, gid) except OSError as e: self.module.exit_json(failed=True, msg="%s" % to_native(e)) # =========================================== class FreeBsdUser(User): """ This is a FreeBSD User manipulation class - it uses the pw command to manipulate the user database, followed by the chpass command to change the password. This overrides the following methods from the generic class:- - create_user() - remove_user() - modify_user() """ platform = 'FreeBSD' distribution = None SHADOWFILE = '/etc/master.passwd' SHADOWFILE_EXPIRE_INDEX = 6 DATE_FORMAT = '%d-%b-%Y' def remove_user(self): cmd = [ self.module.get_bin_path('pw', True), 'userdel', '-n', self.name ] if self.remove: cmd.append('-r') return self.execute_command(cmd) def create_user(self): cmd = [ self.module.get_bin_path('pw', True), 'useradd', '-n', self.name, ] if self.uid is not None: cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.comment is not None: cmd.append('-c') cmd.append(self.comment) if self.home is not None: cmd.append('-d') cmd.append(self.home) if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) cmd.append('-g') cmd.append(self.group) if self.groups is not None: groups = self.get_groups_set() cmd.append('-G') cmd.append(','.join(groups)) if self.create_home: cmd.append('-m') if self.skeleton is not None: cmd.append('-k') cmd.append(self.skeleton) if self.shell is not None: cmd.append('-s') cmd.append(self.shell) if self.login_class is not None: 
cmd.append('-L') cmd.append(self.login_class) if self.expires is not None: cmd.append('-e') if self.expires < time.gmtime(0): cmd.append('0') else: cmd.append(str(calendar.timegm(self.expires))) # system cannot be handled currently - should we error if its requested? # create the user (rc, out, err) = self.execute_command(cmd) if rc is not None and rc != 0: self.module.fail_json(name=self.name, msg=err, rc=rc) # we have to set the password in a second command if self.password is not None: cmd = [ self.module.get_bin_path('chpass', True), '-p', self.password, self.name ] return self.execute_command(cmd) return (rc, out, err) def modify_user(self): cmd = [ self.module.get_bin_path('pw', True), 'usermod', '-n', self.name ] cmd_len = len(cmd) info = self.user_info() if self.uid is not None and info[2] != int(self.uid): cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.comment is not None and info[4] != self.comment: cmd.append('-c') cmd.append(self.comment) if self.home is not None: if (info[5] != self.home and self.move_home) or (not os.path.exists(self.home) and self.create_home): cmd.append('-m') if info[5] != self.home: cmd.append('-d') cmd.append(self.home) if self.skeleton is not None: cmd.append('-k') cmd.append(self.skeleton) if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) ginfo = self.group_info(self.group) if info[3] != ginfo[2]: cmd.append('-g') cmd.append(self.group) if self.shell is not None and info[6] != self.shell: cmd.append('-s') cmd.append(self.shell) if self.login_class is not None: # find current login class user_login_class = None if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK): with open(self.SHADOWFILE, 'r') as f: for line in f: if line.startswith('%s:' % self.name): user_login_class = line.split(':')[4] # act only if login_class change if self.login_class != user_login_class: cmd.append('-L') 
cmd.append(self.login_class) if self.groups is not None: current_groups = self.user_group_membership() groups = self.get_groups_set() group_diff = set(current_groups).symmetric_difference(groups) groups_need_mod = False if group_diff: if self.append: for g in groups: if g in group_diff: groups_need_mod = True break else: groups_need_mod = True if groups_need_mod: cmd.append('-G') new_groups = groups if self.append: new_groups = groups | set(current_groups) cmd.append(','.join(new_groups)) if self.expires is not None: current_expires = int(self.user_password()[1]) # If expiration is negative or zero and the current expiration is greater than zero, disable expiration. # In OpenBSD, setting expiration to zero disables expiration. It does not expire the account. if self.expires <= time.gmtime(0): if current_expires > 0: cmd.append('-e') cmd.append('0') else: # Convert days since Epoch to seconds since Epoch as struct_time current_expire_date = time.gmtime(current_expires) # Current expires is negative or we compare year, month, and day only if current_expires <= 0 or current_expire_date[:3] != self.expires[:3]: cmd.append('-e') cmd.append(str(calendar.timegm(self.expires))) # modify the user if cmd will do anything if cmd_len != len(cmd): (rc, out, err) = self.execute_command(cmd) if rc is not None and rc != 0: self.module.fail_json(name=self.name, msg=err, rc=rc) else: (rc, out, err) = (None, '', '') # we have to set the password in a second command if self.update_password == 'always' and self.password is not None and info[1] != self.password: cmd = [ self.module.get_bin_path('chpass', True), '-p', self.password, self.name ] return self.execute_command(cmd) # we have to lock/unlock the password in a distinct command if self.password_lock and not info[1].startswith('*LOCKED*'): cmd = [ self.module.get_bin_path('pw', True), 'lock', self.name ] if self.uid is not None and info[2] != int(self.uid): cmd.append('-u') cmd.append(self.uid) return self.execute_command(cmd) 
elif self.password_lock is False and info[1].startswith('*LOCKED*'): cmd = [ self.module.get_bin_path('pw', True), 'unlock', self.name ] if self.uid is not None and info[2] != int(self.uid): cmd.append('-u') cmd.append(self.uid) return self.execute_command(cmd) return (rc, out, err) class DragonFlyBsdUser(FreeBsdUser): """ This is a DragonFlyBSD User manipulation class - it inherits the FreeBsdUser class behaviors, such as using the pw command to manipulate the user database, followed by the chpass command to change the password. """ platform = 'DragonFly' class OpenBSDUser(User): """ This is a OpenBSD User manipulation class. Main differences are that OpenBSD:- - has no concept of "system" account. - has no force delete user This overrides the following methods from the generic class:- - create_user() - remove_user() - modify_user() """ platform = 'OpenBSD' distribution = None SHADOWFILE = '/etc/master.passwd' def create_user(self): cmd = [self.module.get_bin_path('useradd', True)] if self.uid is not None: cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) cmd.append('-g') cmd.append(self.group) if self.groups is not None: groups = self.get_groups_set() cmd.append('-G') cmd.append(','.join(groups)) if self.comment is not None: cmd.append('-c') cmd.append(self.comment) if self.home is not None: cmd.append('-d') cmd.append(self.home) if self.shell is not None: cmd.append('-s') cmd.append(self.shell) if self.login_class is not None: cmd.append('-L') cmd.append(self.login_class) if self.password is not None and self.password != '*': cmd.append('-p') cmd.append(self.password) if self.create_home: cmd.append('-m') if self.skeleton is not None: cmd.append('-k') cmd.append(self.skeleton) cmd.append(self.name) return self.execute_command(cmd) def remove_user_userdel(self): cmd = [self.module.get_bin_path('userdel', 
True)] if self.remove: cmd.append('-r') cmd.append(self.name) return self.execute_command(cmd) def modify_user(self): cmd = [self.module.get_bin_path('usermod', True)] info = self.user_info() if self.uid is not None and info[2] != int(self.uid): cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) ginfo = self.group_info(self.group) if info[3] != ginfo[2]: cmd.append('-g') cmd.append(self.group) if self.groups is not None: current_groups = self.user_group_membership() groups_need_mod = False groups_option = '-S' groups = [] if self.groups == '': if current_groups and not self.append: groups_need_mod = True else: groups = self.get_groups_set() group_diff = set(current_groups).symmetric_difference(groups) if group_diff: if self.append: for g in groups: if g in group_diff: groups_option = '-G' groups_need_mod = True break else: groups_need_mod = True if groups_need_mod: cmd.append(groups_option) cmd.append(','.join(groups)) if self.comment is not None and info[4] != self.comment: cmd.append('-c') cmd.append(self.comment) if self.home is not None and info[5] != self.home: if self.move_home: cmd.append('-m') cmd.append('-d') cmd.append(self.home) if self.shell is not None and info[6] != self.shell: cmd.append('-s') cmd.append(self.shell) if self.login_class is not None: # find current login class user_login_class = None userinfo_cmd = [self.module.get_bin_path('userinfo', True), self.name] (rc, out, err) = self.execute_command(userinfo_cmd, obey_checkmode=False) for line in out.splitlines(): tokens = line.split() if tokens[0] == 'class' and len(tokens) == 2: user_login_class = tokens[1] # act only if login_class change if self.login_class != user_login_class: cmd.append('-L') cmd.append(self.login_class) if self.password_lock and not info[1].startswith('*'): cmd.append('-Z') elif self.password_lock is False and 
info[1].startswith('*'): cmd.append('-U') if self.update_password == 'always' and self.password is not None \ and self.password != '*' and info[1] != self.password: cmd.append('-p') cmd.append(self.password) # skip if no changes to be made if len(cmd) == 1: return (None, '', '') cmd.append(self.name) return self.execute_command(cmd) class NetBSDUser(User): """ This is a NetBSD User manipulation class. Main differences are that NetBSD:- - has no concept of "system" account. - has no force delete user This overrides the following methods from the generic class:- - create_user() - remove_user() - modify_user() """ platform = 'NetBSD' distribution = None SHADOWFILE = '/etc/master.passwd' def create_user(self): cmd = [self.module.get_bin_path('useradd', True)] if self.uid is not None: cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) cmd.append('-g') cmd.append(self.group) if self.groups is not None: groups = self.get_groups_set() if len(groups) > 16: self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." 
% len(groups)) cmd.append('-G') cmd.append(','.join(groups)) if self.comment is not None: cmd.append('-c') cmd.append(self.comment) if self.home is not None: cmd.append('-d') cmd.append(self.home) if self.shell is not None: cmd.append('-s') cmd.append(self.shell) if self.login_class is not None: cmd.append('-L') cmd.append(self.login_class) if self.password is not None: cmd.append('-p') cmd.append(self.password) if self.create_home: cmd.append('-m') if self.skeleton is not None: cmd.append('-k') cmd.append(self.skeleton) cmd.append(self.name) return self.execute_command(cmd) def remove_user_userdel(self): cmd = [self.module.get_bin_path('userdel', True)] if self.remove: cmd.append('-r') cmd.append(self.name) return self.execute_command(cmd) def modify_user(self): cmd = [self.module.get_bin_path('usermod', True)] info = self.user_info() if self.uid is not None and info[2] != int(self.uid): cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) ginfo = self.group_info(self.group) if info[3] != ginfo[2]: cmd.append('-g') cmd.append(self.group) if self.groups is not None: current_groups = self.user_group_membership() groups_need_mod = False groups = [] if self.groups == '': if current_groups and not self.append: groups_need_mod = True else: groups = self.get_groups_set() group_diff = set(current_groups).symmetric_difference(groups) if group_diff: if self.append: for g in groups: if g in group_diff: groups = set(current_groups).union(groups) groups_need_mod = True break else: groups_need_mod = True if groups_need_mod: if len(groups) > 16: self.module.fail_json(msg="Too many groups (%d) NetBSD allows for 16 max." 
% len(groups)) cmd.append('-G') cmd.append(','.join(groups)) if self.comment is not None and info[4] != self.comment: cmd.append('-c') cmd.append(self.comment) if self.home is not None and info[5] != self.home: if self.move_home: cmd.append('-m') cmd.append('-d') cmd.append(self.home) if self.shell is not None and info[6] != self.shell: cmd.append('-s') cmd.append(self.shell) if self.login_class is not None: cmd.append('-L') cmd.append(self.login_class) if self.update_password == 'always' and self.password is not None and info[1] != self.password: cmd.append('-p') cmd.append(self.password) if self.password_lock and not info[1].startswith('*LOCKED*'): cmd.append('-C yes') elif self.password_lock is False and info[1].startswith('*LOCKED*'): cmd.append('-C no') # skip if no changes to be made if len(cmd) == 1: return (None, '', '') cmd.append(self.name) return self.execute_command(cmd) class SunOS(User): """ This is a SunOS User manipulation class - The main difference between this class and the generic user class is that Solaris-type distros don't support the concept of a "system" account and we need to edit the /etc/shadow file manually to set a password. 
(Ugh) This overrides the following methods from the generic class:- - create_user() - remove_user() - modify_user() - user_info() """ platform = 'SunOS' distribution = None SHADOWFILE = '/etc/shadow' USER_ATTR = '/etc/user_attr' def get_password_defaults(self): # Read password aging defaults try: minweeks = '' maxweeks = '' warnweeks = '' with open("/etc/default/passwd", 'r') as f: for line in f: line = line.strip() if (line.startswith('#') or line == ''): continue m = re.match(r'^([^#]*)#(.*)$', line) if m: # The line contains a hash / comment line = m.group(1) key, value = line.split('=') if key == "MINWEEKS": minweeks = value.rstrip('\n') elif key == "MAXWEEKS": maxweeks = value.rstrip('\n') elif key == "WARNWEEKS": warnweeks = value.rstrip('\n') except Exception as err: self.module.fail_json(msg="failed to read /etc/default/passwd: %s" % to_native(err)) return (minweeks, maxweeks, warnweeks) def remove_user(self): cmd = [self.module.get_bin_path('userdel', True)] if self.remove: cmd.append('-r') cmd.append(self.name) return self.execute_command(cmd) def create_user(self): cmd = [self.module.get_bin_path('useradd', True)] if self.uid is not None: cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) cmd.append('-g') cmd.append(self.group) if self.groups is not None: groups = self.get_groups_set() cmd.append('-G') cmd.append(','.join(groups)) if self.comment is not None: cmd.append('-c') cmd.append(self.comment) if self.home is not None: cmd.append('-d') cmd.append(self.home) if self.shell is not None: cmd.append('-s') cmd.append(self.shell) if self.create_home: cmd.append('-m') if self.skeleton is not None: cmd.append('-k') cmd.append(self.skeleton) if self.profile is not None: cmd.append('-P') cmd.append(self.profile) if self.authorization is not None: cmd.append('-A') cmd.append(self.authorization) if self.role 
is not None: cmd.append('-R') cmd.append(self.role) cmd.append(self.name) (rc, out, err) = self.execute_command(cmd) if rc is not None and rc != 0: self.module.fail_json(name=self.name, msg=err, rc=rc) if not self.module.check_mode: # we have to set the password by editing the /etc/shadow file if self.password is not None: self.backup_shadow() minweeks, maxweeks, warnweeks = self.get_password_defaults() try: lines = [] with open(self.SHADOWFILE, 'rb') as f: for line in f: line = to_native(line, errors='surrogate_or_strict') fields = line.strip().split(':') if not fields[0] == self.name: lines.append(line) continue fields[1] = self.password fields[2] = str(int(time.time() // 86400)) if minweeks: try: fields[3] = str(int(minweeks) * 7) except ValueError: # mirror solaris, which allows for any value in this field, and ignores anything that is not an int. pass if maxweeks: try: fields[4] = str(int(maxweeks) * 7) except ValueError: # mirror solaris, which allows for any value in this field, and ignores anything that is not an int. pass if warnweeks: try: fields[5] = str(int(warnweeks) * 7) except ValueError: # mirror solaris, which allows for any value in this field, and ignores anything that is not an int. 
pass line = ':'.join(fields) lines.append('%s\n' % line) with open(self.SHADOWFILE, 'w+') as f: f.writelines(lines) except Exception as err: self.module.fail_json(msg="failed to update users password: %s" % to_native(err)) return (rc, out, err) def modify_user_usermod(self): cmd = [self.module.get_bin_path('usermod', True)] cmd_len = len(cmd) info = self.user_info() if self.uid is not None and info[2] != int(self.uid): cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) ginfo = self.group_info(self.group) if info[3] != ginfo[2]: cmd.append('-g') cmd.append(self.group) if self.groups is not None: current_groups = self.user_group_membership() groups = self.get_groups_set() group_diff = set(current_groups).symmetric_difference(groups) groups_need_mod = False if group_diff: if self.append: for g in groups: if g in group_diff: groups_need_mod = True break else: groups_need_mod = True if groups_need_mod: cmd.append('-G') new_groups = groups if self.append: new_groups.update(current_groups) cmd.append(','.join(new_groups)) if self.comment is not None and info[4] != self.comment: cmd.append('-c') cmd.append(self.comment) if self.home is not None and info[5] != self.home: if self.move_home: cmd.append('-m') cmd.append('-d') cmd.append(self.home) if self.shell is not None and info[6] != self.shell: cmd.append('-s') cmd.append(self.shell) if self.profile is not None and info[7] != self.profile: cmd.append('-P') cmd.append(self.profile) if self.authorization is not None and info[8] != self.authorization: cmd.append('-A') cmd.append(self.authorization) if self.role is not None and info[9] != self.role: cmd.append('-R') cmd.append(self.role) # modify the user if cmd will do anything if cmd_len != len(cmd): cmd.append(self.name) (rc, out, err) = self.execute_command(cmd) if rc is not None and rc != 0: 
self.module.fail_json(name=self.name, msg=err, rc=rc) else: (rc, out, err) = (None, '', '') # we have to set the password by editing the /etc/shadow file if self.update_password == 'always' and self.password is not None and info[1] != self.password: self.backup_shadow() (rc, out, err) = (0, '', '') if not self.module.check_mode: minweeks, maxweeks, warnweeks = self.get_password_defaults() try: lines = [] with open(self.SHADOWFILE, 'rb') as f: for line in f: line = to_native(line, errors='surrogate_or_strict') fields = line.strip().split(':') if not fields[0] == self.name: lines.append(line) continue fields[1] = self.password fields[2] = str(int(time.time() // 86400)) if minweeks: fields[3] = str(int(minweeks) * 7) if maxweeks: fields[4] = str(int(maxweeks) * 7) if warnweeks: fields[5] = str(int(warnweeks) * 7) line = ':'.join(fields) lines.append('%s\n' % line) with open(self.SHADOWFILE, 'w+') as f: f.writelines(lines) rc = 0 except Exception as err: self.module.fail_json(msg="failed to update users password: %s" % to_native(err)) return (rc, out, err) def user_info(self): info = super(SunOS, self).user_info() if info: info += self._user_attr_info() return info def _user_attr_info(self): info = [''] * 3 with open(self.USER_ATTR, 'r') as file_handler: for line in file_handler: lines = line.strip().split('::::') if lines[0] == self.name: tmp = dict(x.split('=') for x in lines[1].split(';')) info[0] = tmp.get('profiles', '') info[1] = tmp.get('auths', '') info[2] = tmp.get('roles', '') return info class DarwinUser(User): """ This is a Darwin macOS User manipulation class. 
    Main differences are that Darwin:-
      - Handles accounts in a database managed by dscl(1)
      - Has no useradd/groupadd
      - Does not create home directories
      - User password must be cleartext
      - UID must be given
      - System users must be under 500

    This overrides the following methods from the generic class:-
      - user_exists()
      - create_user()
      - remove_user()
      - modify_user()
    """
    platform = 'Darwin'
    distribution = None
    SHADOWFILE = None

    dscl_directory = '.'

    fields = [
        ('comment', 'RealName'),
        ('home', 'NFSHomeDirectory'),
        ('shell', 'UserShell'),
        ('uid', 'UniqueID'),
        ('group', 'PrimaryGroupID'),
        ('hidden', 'IsHidden'),
    ]

    def __init__(self, module):

        super(DarwinUser, self).__init__(module)

        # make the user hidden if option is set or defer to system option
        if self.hidden is None:
            if self.system:
                self.hidden = 1
        elif self.hidden:
            self.hidden = 1
        else:
            self.hidden = 0

        # add hidden to processing if set
        if self.hidden is not None:
            self.fields.append(('hidden', 'IsHidden'))

    def _get_dscl(self):
        return [self.module.get_bin_path('dscl', True), self.dscl_directory]

    def _list_user_groups(self):
        cmd = self._get_dscl()
        cmd += ['-search', '/Groups', 'GroupMembership', self.name]
        (rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
        groups = []
        for line in out.splitlines():
            if line.startswith(' ') or line.startswith(')'):
                continue
            groups.append(line.split()[0])
        return groups

    def _get_user_property(self, property):
        '''Return user PROPERTY as given by dscl(1) read or None if not found.'''
        cmd = self._get_dscl()
        cmd += ['-read', '/Users/%s' % self.name, property]
        (rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
        if rc != 0:
            return None
        # from dscl(1)
        # if property contains embedded spaces, the list will instead be
        # displayed one entry per line, starting on the line after the key.
        lines = out.splitlines()
        # sys.stderr.write('*** |%s| %s -> %s\n' % (property, out, lines))
        if len(lines) == 1:
            return lines[0].split(': ')[1]
        else:
            if len(lines) > 2:
                return '\n'.join([lines[1].strip()] + lines[2:])
            else:
                if len(lines) == 2:
                    return lines[1].strip()
                else:
                    return None

    def _get_next_uid(self, system=None):
        '''
        Return the next available uid. If system=True, then
        uid should be below 500, if possible.
        '''
        cmd = self._get_dscl()
        cmd += ['-list', '/Users', 'UniqueID']
        (rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
        if rc != 0:
            self.module.fail_json(
                msg="Unable to get the next available uid",
                rc=rc,
                out=out,
                err=err
            )

        max_uid = 0
        max_system_uid = 0
        for line in out.splitlines():
            current_uid = int(line.split(' ')[-1])
            if max_uid < current_uid:
                max_uid = current_uid
            if max_system_uid < current_uid and current_uid < 500:
                max_system_uid = current_uid

        if system and (0 < max_system_uid < 499):
            return max_system_uid + 1
        return max_uid + 1

    def _change_user_password(self):
        '''Change password for SELF.NAME against SELF.PASSWORD.

        Please note that password must be cleartext.
        '''
        # some documentation on how passwords are stored on OSX:
        # http://blog.lostpassword.com/2012/07/cracking-mac-os-x-lion-accounts-passwords/
        # http://null-byte.wonderhowto.com/how-to/hack-mac-os-x-lion-passwords-0130036/
        # http://pastebin.com/RYqxi7Ca
        # on OSX 10.8+ hash is SALTED-SHA512-PBKDF2
        # https://pythonhosted.org/passlib/lib/passlib.hash.pbkdf2_digest.html
        # https://gist.github.com/nueh/8252572
        cmd = self._get_dscl()
        if self.password:
            cmd += ['-passwd', '/Users/%s' % self.name, self.password]
        else:
            cmd += ['-create', '/Users/%s' % self.name, 'Password', '*']
        (rc, out, err) = self.execute_command(cmd)
        if rc != 0:
            self.module.fail_json(msg='Error when changing password', err=err, out=out, rc=rc)
        return (rc, out, err)

    def _make_group_numerical(self):
        '''Convert SELF.GROUP to its numerical value as a string, suitable for dscl.'''
        if self.group is None:
            self.group = 'nogroup'
        try:
            self.group = grp.getgrnam(self.group).gr_gid
        except KeyError:
            self.module.fail_json(msg='Group "%s" not found. Try to create it first using "group" module.' % self.group)
        # We need to pass a string to dscl
        self.group = str(self.group)

    def __modify_group(self, group, action):
        '''Add or remove SELF.NAME to or from GROUP depending on ACTION.
        ACTION can be 'add' or 'remove' otherwise 'remove' is assumed. '''
        if action == 'add':
            option = '-a'
        else:
            option = '-d'
        cmd = ['dseditgroup', '-o', 'edit', option, self.name, '-t', 'user', group]
        (rc, out, err) = self.execute_command(cmd)
        if rc != 0:
            self.module.fail_json(msg='Cannot %s user "%s" to group "%s".' % (action, self.name, group), err=err, out=out, rc=rc)
        return (rc, out, err)

    def _modify_group(self):
        '''Sync SELF.NAME's group membership with SELF.GROUPS:
        add missing groups and, unless SELF.APPEND is set, remove extra ones.
        '''
        rc = 0
        out = ''
        err = ''
        changed = False

        current = set(self._list_user_groups())
        if self.groups is not None:
            target = set(self.groups.split(','))
        else:
            target = set([])

        if self.append is False:
            for remove in current - target:
                (_rc, _out, _err) = self.__modify_group(remove, 'remove')
                rc += _rc
                out += _out
                err += _err
                changed = True

        for add in target - current:
            (_rc, _out, _err) = self.__modify_group(add, 'add')
            rc += _rc
            out += _out
            err += _err
            changed = True

        return (rc, out, err, changed)

    def _update_system_user(self):
        '''Hide or show user on the login window according to SELF.SYSTEM.

        Returns 0 if a change has been made, None otherwise.'''

        plist_file = '/Library/Preferences/com.apple.loginwindow.plist'

        # http://support.apple.com/kb/HT5017?viewlocale=en_US
        cmd = ['defaults', 'read', plist_file, 'HiddenUsersList']
        (rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
        # returned value is
        # (
        #   "_userA",
        #   "_UserB",
        #   userc
        # )
        hidden_users = []
        for x in out.splitlines()[1:-1]:
            try:
                x = x.split('"')[1]
            except IndexError:
                x = x.strip()
            hidden_users.append(x)

        if self.system:
            if self.name not in hidden_users:
                cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array-add', self.name]
                (rc, out, err) = self.execute_command(cmd)
                if rc != 0:
                    self.module.fail_json(msg='Cannot add user "%s" to hidden user list.' % self.name, err=err, out=out, rc=rc)
                return 0
        else:
            if self.name in hidden_users:
                del (hidden_users[hidden_users.index(self.name)])

                cmd = ['defaults', 'write', plist_file, 'HiddenUsersList', '-array'] + hidden_users
                (rc, out, err) = self.execute_command(cmd)
                if rc != 0:
                    self.module.fail_json(msg='Cannot remove user "%s" from hidden user list.' % self.name, err=err, out=out, rc=rc)
                return 0

    def user_exists(self):
        '''Check if SELF.NAME is a known user on the system.'''
        cmd = self._get_dscl()
        cmd += ['-list', '/Users/%s' % self.name]
        (rc, out, err) = self.execute_command(cmd, obey_checkmode=False)
        return rc == 0

    def remove_user(self):
        '''Delete SELF.NAME.
        If SELF.FORCE is true, remove its home directory.'''
        info = self.user_info()

        cmd = self._get_dscl()
        cmd += ['-delete', '/Users/%s' % self.name]
        (rc, out, err) = self.execute_command(cmd)

        if rc != 0:
            self.module.fail_json(msg='Cannot delete user "%s".' % self.name, err=err, out=out, rc=rc)

        if self.force:
            if os.path.exists(info[5]):
                shutil.rmtree(info[5])
                out += "Removed %s" % info[5]

        return (rc, out, err)

    def create_user(self, command_name='dscl'):
        cmd = self._get_dscl()
        cmd += ['-create', '/Users/%s' % self.name]
        (rc, out, err) = self.execute_command(cmd)
        if rc != 0:
            self.module.fail_json(msg='Cannot create user "%s".' % self.name, err=err, out=out, rc=rc)

        self._make_group_numerical()
        if self.uid is None:
            self.uid = str(self._get_next_uid(self.system))

        # Homedir is not created by default
        if self.create_home:
            if self.home is None:
                self.home = '/Users/%s' % self.name
            if not self.module.check_mode:
                if not os.path.exists(self.home):
                    os.makedirs(self.home)
                self.chown_homedir(int(self.uid), int(self.group), self.home)

        # dscl sets shell to /usr/bin/false when UserShell is not specified
        # so set the shell to /bin/bash when the user is not a system user
        if not self.system and self.shell is None:
            self.shell = '/bin/bash'

        for field in self.fields:
            if field[0] in self.__dict__ and self.__dict__[field[0]]:

                cmd = self._get_dscl()
                cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
                (rc, _out, _err) = self.execute_command(cmd)
                if rc != 0:
                    self.module.fail_json(msg='Cannot add property "%s" to user "%s".' % (field[0], self.name), err=err, out=out, rc=rc)

                out += _out
                err += _err

                if rc != 0:
                    return (rc, _out, _err)

        (rc, _out, _err) = self._change_user_password()
        out += _out
        err += _err

        self._update_system_user()
        # here we don't care about change status since it is a creation,
        # thus changed is always true.
        if self.groups:
            (rc, _out, _err, changed) = self._modify_group()
            out += _out
            err += _err

        return (rc, out, err)

    def modify_user(self):
        changed = None
        out = ''
        err = ''

        if self.group:
            self._make_group_numerical()

        for field in self.fields:
            if field[0] in self.__dict__ and self.__dict__[field[0]]:
                current = self._get_user_property(field[1])
                if current is None or current != self.__dict__[field[0]]:
                    cmd = self._get_dscl()
                    cmd += ['-create', '/Users/%s' % self.name, field[1], self.__dict__[field[0]]]
                    (rc, _out, _err) = self.execute_command(cmd)
                    if rc != 0:
                        self.module.fail_json(
                            msg='Cannot update property "%s" for user "%s".'
                                % (field[0], self.name), err=err, out=out, rc=rc)
                    changed = rc
                    out += _out
                    err += _err
        if self.update_password == 'always' and self.password is not None:
            (rc, _out, _err) = self._change_user_password()
            out += _out
            err += _err
            changed = rc

        if self.groups:
            (rc, _out, _err, _changed) = self._modify_group()
            out += _out
            err += _err

            if _changed is True:
                changed = rc

        rc = self._update_system_user()
        if rc == 0:
            changed = rc

        return (changed, out, err)


class AIX(User):
    """
    This is an AIX User manipulation class.
This overrides the following methods from the generic class:- - create_user() - remove_user() - modify_user() - parse_shadow_file() """ platform = 'AIX' distribution = None SHADOWFILE = '/etc/security/passwd' def remove_user(self): cmd = [self.module.get_bin_path('userdel', True)] if self.remove: cmd.append('-r') cmd.append(self.name) return self.execute_command(cmd) def create_user_useradd(self, command_name='useradd'): cmd = [self.module.get_bin_path(command_name, True)] if self.uid is not None: cmd.append('-u') cmd.append(self.uid) if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) cmd.append('-g') cmd.append(self.group) if self.groups is not None and len(self.groups): groups = self.get_groups_set() cmd.append('-G') cmd.append(','.join(groups)) if self.comment is not None: cmd.append('-c') cmd.append(self.comment) if self.home is not None: cmd.append('-d') cmd.append(self.home) if self.shell is not None: cmd.append('-s') cmd.append(self.shell) if self.create_home: cmd.append('-m') if self.skeleton is not None: cmd.append('-k') cmd.append(self.skeleton) cmd.append(self.name) (rc, out, err) = self.execute_command(cmd) # set password with chpasswd if self.password is not None: cmd = [] cmd.append(self.module.get_bin_path('chpasswd', True)) cmd.append('-e') cmd.append('-c') self.execute_command(cmd, data="%s:%s" % (self.name, self.password)) return (rc, out, err) def modify_user_usermod(self): cmd = [self.module.get_bin_path('usermod', True)] info = self.user_info() if self.uid is not None and info[2] != int(self.uid): cmd.append('-u') cmd.append(self.uid) if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) ginfo = self.group_info(self.group) if info[3] != ginfo[2]: cmd.append('-g') cmd.append(self.group) if self.groups is not None: current_groups = self.user_group_membership() groups_need_mod = False groups = 
[] if self.groups == '': if current_groups and not self.append: groups_need_mod = True else: groups = self.get_groups_set() group_diff = set(current_groups).symmetric_difference(groups) if group_diff: if self.append: for g in groups: if g in group_diff: groups_need_mod = True break else: groups_need_mod = True if groups_need_mod: cmd.append('-G') cmd.append(','.join(groups)) if self.comment is not None and info[4] != self.comment: cmd.append('-c') cmd.append(self.comment) if self.home is not None and info[5] != self.home: if self.move_home: cmd.append('-m') cmd.append('-d') cmd.append(self.home) if self.shell is not None and info[6] != self.shell: cmd.append('-s') cmd.append(self.shell) # skip if no changes to be made if len(cmd) == 1: (rc, out, err) = (None, '', '') else: cmd.append(self.name) (rc, out, err) = self.execute_command(cmd) # set password with chpasswd if self.update_password == 'always' and self.password is not None and info[1] != self.password: cmd = [] cmd.append(self.module.get_bin_path('chpasswd', True)) cmd.append('-e') cmd.append('-c') (rc2, out2, err2) = self.execute_command(cmd, data="%s:%s" % (self.name, self.password)) else: (rc2, out2, err2) = (None, '', '') if rc is not None: return (rc, out + out2, err + err2) else: return (rc2, out + out2, err + err2) def parse_shadow_file(self): """Example AIX shadowfile data: nobody: password = * operator1: password = {ssha512}06$xxxxxxxxxxxx.... 
lastupdate = 1549558094 test1: password = * lastupdate = 1553695126 """ b_name = to_bytes(self.name) b_passwd = b'' b_expires = b'' if os.path.exists(self.SHADOWFILE) and os.access(self.SHADOWFILE, os.R_OK): with open(self.SHADOWFILE, 'rb') as bf: b_lines = bf.readlines() b_passwd_line = b'' b_expires_line = b'' try: for index, b_line in enumerate(b_lines): # Get password and lastupdate lines which come after the username if b_line.startswith(b'%s:' % b_name): b_passwd_line = b_lines[index + 1] b_expires_line = b_lines[index + 2] break # Sanity check the lines because sometimes both are not present if b' = ' in b_passwd_line: b_passwd = b_passwd_line.split(b' = ', 1)[-1].strip() if b' = ' in b_expires_line: b_expires = b_expires_line.split(b' = ', 1)[-1].strip() except IndexError: self.module.fail_json(msg='Failed to parse shadow file %s' % self.SHADOWFILE) passwd = to_native(b_passwd) expires = to_native(b_expires) or -1 return passwd, expires class HPUX(User): """ This is a HP-UX User manipulation class. 
This overrides the following methods from the generic class:- - create_user() - remove_user() - modify_user() """ platform = 'HP-UX' distribution = None SHADOWFILE = '/etc/shadow' def create_user(self): cmd = ['/usr/sam/lbin/useradd.sam'] if self.uid is not None: cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) cmd.append('-g') cmd.append(self.group) if self.groups is not None and len(self.groups): groups = self.get_groups_set() cmd.append('-G') cmd.append(','.join(groups)) if self.comment is not None: cmd.append('-c') cmd.append(self.comment) if self.home is not None: cmd.append('-d') cmd.append(self.home) if self.shell is not None: cmd.append('-s') cmd.append(self.shell) if self.password is not None: cmd.append('-p') cmd.append(self.password) if self.create_home: cmd.append('-m') else: cmd.append('-M') if self.system: cmd.append('-r') cmd.append(self.name) return self.execute_command(cmd) def remove_user(self): cmd = ['/usr/sam/lbin/userdel.sam'] if self.force: cmd.append('-F') if self.remove: cmd.append('-r') cmd.append(self.name) return self.execute_command(cmd) def modify_user(self): cmd = ['/usr/sam/lbin/usermod.sam'] info = self.user_info() if self.uid is not None and info[2] != int(self.uid): cmd.append('-u') cmd.append(self.uid) if self.non_unique: cmd.append('-o') if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg="Group %s does not exist" % self.group) ginfo = self.group_info(self.group) if info[3] != ginfo[2]: cmd.append('-g') cmd.append(self.group) if self.groups is not None: current_groups = self.user_group_membership() groups_need_mod = False groups = [] if self.groups == '': if current_groups and not self.append: groups_need_mod = True else: groups = self.get_groups_set(remove_existing=False) group_diff = set(current_groups).symmetric_difference(groups) if 
group_diff: if self.append: for g in groups: if g in group_diff: groups_need_mod = True break else: groups_need_mod = True if groups_need_mod: cmd.append('-G') new_groups = groups if self.append: new_groups = groups | set(current_groups) cmd.append(','.join(new_groups)) if self.comment is not None and info[4] != self.comment: cmd.append('-c') cmd.append(self.comment) if self.home is not None and info[5] != self.home: cmd.append('-d') cmd.append(self.home) if self.move_home: cmd.append('-m') if self.shell is not None and info[6] != self.shell: cmd.append('-s') cmd.append(self.shell) if self.update_password == 'always' and self.password is not None and info[1] != self.password: cmd.append('-F') cmd.append('-p') cmd.append(self.password) # skip if no changes to be made if len(cmd) == 1: return (None, '', '') cmd.append(self.name) return self.execute_command(cmd) class BusyBox(User): """ This is the BusyBox class for use on systems that have adduser, deluser, and delgroup commands. It overrides the following methods: - create_user() - remove_user() - modify_user() """ def create_user(self): cmd = [self.module.get_bin_path('adduser', True)] cmd.append('-D') if self.uid is not None: cmd.append('-u') cmd.append(self.uid) if self.group is not None: if not self.group_exists(self.group): self.module.fail_json(msg='Group {0} does not exist'.format(self.group)) cmd.append('-G') cmd.append(self.group) if self.comment is not None: cmd.append('-g') cmd.append(self.comment) if self.home is not None: cmd.append('-h') cmd.append(self.home) if self.shell is not None: cmd.append('-s') cmd.append(self.shell) if not self.create_home: cmd.append('-H') if self.skeleton is not None: cmd.append('-k') cmd.append(self.skeleton) if self.system: cmd.append('-S') cmd.append(self.name) rc, out, err = self.execute_command(cmd) if rc is not None and rc != 0: self.module.fail_json(name=self.name, msg=err, rc=rc) if self.password is not None: cmd = [self.module.get_bin_path('chpasswd', True)] 
cmd.append('--encrypted') data = '{name}:{password}'.format(name=self.name, password=self.password) rc, out, err = self.execute_command(cmd, data=data) if rc is not None and rc != 0: self.module.fail_json(name=self.name, msg=err, rc=rc) # Add to additional groups if self.groups is not None and len(self.groups): groups = self.get_groups_set() add_cmd_bin = self.module.get_bin_path('adduser', True) for group in groups: cmd = [add_cmd_bin, self.name, group] rc, out, err = self.execute_command(cmd) if rc is not None and rc != 0: self.module.fail_json(name=self.name, msg=err, rc=rc) return rc, out, err def remove_user(self): cmd = [ self.module.get_bin_path('deluser', True), self.name ] if self.remove: cmd.append('--remove-home') return self.execute_command(cmd) def modify_user(self): current_groups = self.user_group_membership() groups = [] rc = None out = '' err = '' info = self.user_info() add_cmd_bin = self.module.get_bin_path('adduser', True) remove_cmd_bin = self.module.get_bin_path('delgroup', True) # Manage group membership if self.groups is not None and len(self.groups): groups = self.get_groups_set() group_diff = set(current_groups).symmetric_difference(groups) if group_diff: for g in groups: if g in group_diff: add_cmd = [add_cmd_bin, self.name, g] rc, out, err = self.execute_command(add_cmd) if rc is not None and rc != 0: self.module.fail_json(name=self.name, msg=err, rc=rc) for g in group_diff: if g not in groups and not self.append: remove_cmd = [remove_cmd_bin, self.name, g] rc, out, err = self.execute_command(remove_cmd) if rc is not None and rc != 0: self.module.fail_json(name=self.name, msg=err, rc=rc) # Manage password if self.password is not None: if info[1] != self.password: cmd = [self.module.get_bin_path('chpasswd', True)] cmd.append('--encrypted') data = '{name}:{password}'.format(name=self.name, password=self.password) rc, out, err = self.execute_command(cmd, data=data) if rc is not None and rc != 0: self.module.fail_json(name=self.name, 
msg=err, rc=rc) return rc, out, err class Alpine(BusyBox): """ This is the Alpine User manipulation class. It inherits the BusyBox class behaviors such as using adduser and deluser commands. """ platform = 'Linux' distribution = 'Alpine' def main(): ssh_defaults = dict( bits=0, type='rsa', passphrase=None, comment='ansible-generated on %s' % socket.gethostname() ) module = AnsibleModule( argument_spec=dict( state=dict(type='str', default='present', choices=['absent', 'present']), name=dict(type='str', required=True, aliases=['user']), uid=dict(type='int'), non_unique=dict(type='bool', default=False), group=dict(type='str'), groups=dict(type='list'), comment=dict(type='str'), home=dict(type='path'), shell=dict(type='str'), password=dict(type='str', no_log=True), login_class=dict(type='str'), # following options are specific to macOS hidden=dict(type='bool'), # following options are specific to selinux seuser=dict(type='str'), # following options are specific to userdel force=dict(type='bool', default=False), remove=dict(type='bool', default=False), # following options are specific to useradd create_home=dict(type='bool', default=True, aliases=['createhome']), skeleton=dict(type='str'), system=dict(type='bool', default=False), # following options are specific to usermod move_home=dict(type='bool', default=False), append=dict(type='bool', default=False), # following are specific to ssh key generation generate_ssh_key=dict(type='bool'), ssh_key_bits=dict(type='int', default=ssh_defaults['bits']), ssh_key_type=dict(type='str', default=ssh_defaults['type']), ssh_key_file=dict(type='path'), ssh_key_comment=dict(type='str', default=ssh_defaults['comment']), ssh_key_passphrase=dict(type='str', no_log=True), update_password=dict(type='str', default='always', choices=['always', 'on_create']), expires=dict(type='float'), password_lock=dict(type='bool'), local=dict(type='bool'), profile=dict(type='str'), authorization=dict(type='str'), role=dict(type='str'), ), 
supports_check_mode=True, mutually_exclusive=[ ('local', 'groups'), ('local', 'append') ] ) user = User(module) user.check_password_encrypted() module.debug('User instantiated - platform %s' % user.platform) if user.distribution: module.debug('User instantiated - distribution %s' % user.distribution) rc = None out = '' err = '' result = {} result['name'] = user.name result['state'] = user.state if user.state == 'absent': if user.user_exists(): if module.check_mode: module.exit_json(changed=True) (rc, out, err) = user.remove_user() if rc != 0: module.fail_json(name=user.name, msg=err, rc=rc) result['force'] = user.force result['remove'] = user.remove elif user.state == 'present': if not user.user_exists(): if module.check_mode: module.exit_json(changed=True) # Check to see if the provided home path contains parent directories # that do not exist. path_needs_parents = False if user.home: parent = os.path.dirname(user.home) if not os.path.isdir(parent): path_needs_parents = True (rc, out, err) = user.create_user() # If the home path had parent directories that needed to be created, # make sure file permissions are correct in the created home directory. 
if path_needs_parents: info = user.user_info() if info is not False: user.chown_homedir(info[2], info[3], user.home) if module.check_mode: result['system'] = user.name else: result['system'] = user.system result['create_home'] = user.create_home else: # modify user (note: this function is check mode aware) (rc, out, err) = user.modify_user() result['append'] = user.append result['move_home'] = user.move_home if rc is not None and rc != 0: module.fail_json(name=user.name, msg=err, rc=rc) if user.password is not None: result['password'] = 'NOT_LOGGING_PASSWORD' if rc is None: result['changed'] = False else: result['changed'] = True if out: result['stdout'] = out if err: result['stderr'] = err if user.user_exists() and user.state == 'present': info = user.user_info() if info is False: result['msg'] = "failed to look up user name: %s" % user.name result['failed'] = True result['uid'] = info[2] result['group'] = info[3] result['comment'] = info[4] result['home'] = info[5] result['shell'] = info[6] if user.groups is not None: result['groups'] = user.groups # handle missing homedirs info = user.user_info() if user.home is None: user.home = info[5] if not os.path.exists(user.home) and user.create_home: if not module.check_mode: user.create_homedir(user.home) user.chown_homedir(info[2], info[3], user.home) result['changed'] = True # deal with ssh key if user.sshkeygen: # generate ssh key (note: this function is check mode aware) (rc, out, err) = user.ssh_key_gen() if rc is not None and rc != 0: module.fail_json(name=user.name, msg=err, rc=rc) if rc == 0: result['changed'] = True (rc, out, err) = user.ssh_key_fingerprint() if rc == 0: result['ssh_fingerprint'] = out.strip() else: result['ssh_fingerprint'] = err.strip() result['ssh_key_file'] = user.get_ssh_key_path() result['ssh_public_key'] = user.get_ssh_public_key() module.exit_json(**result) # import module snippets if __name__ == '__main__': main()
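The Darwin class above reads user properties by parsing `dscl . -read` output in `_get_user_property`: a single `Property: value` line, or the property name on the first line with the value(s) on the following line(s). A standalone sketch of that parsing (the sample output strings are illustrative, not captured from a real system):

```python
# Standalone sketch of the dscl output parsing used by _get_user_property.
# dscl prints either "Property: value" on one line, or the property name on
# the first line and the value(s) on subsequent indented line(s).
def parse_dscl_property(out):
    lines = out.splitlines()
    if len(lines) == 1:
        return lines[0].split(': ')[1]
    if len(lines) == 2:
        return lines[1].strip()
    if len(lines) > 2:
        return '\n'.join([lines[1].strip()] + lines[2:])
    return None

print(parse_dscl_property("UniqueID: 500"))         # -> 500
print(parse_dscl_property("RealName:\n John Doe"))  # -> John Doe
```

Note that the parsed value always comes back as a string, which is why the module keeps its desired property values (including `uid`) as strings for comparison.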
closed
ansible/ansible
https://github.com/ansible/ansible
62,969
User module on Darwin is not idempotent
##### SUMMARY
Running the [user module](https://docs.ansible.com/ansible/latest/modules/user_module.html) on Ansible `2.8.5` and `devel` branch to create a user on a Darwin system fails if the user already exists with an error: `Cannot update property "uid" for user "user"`

The issue exists in `lib/ansible/modules/system/user.py`

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
user module

##### ANSIBLE VERSION
```paste below
ansible 2.10.0.dev0
  config file = /Users/johnchen/repos/buildhost-configuration3/scripts/macos_bootstrap/ansible.cfg
  configured module search path = [u'/Users/johnchen/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /Users/johnchen/repos/ansible/lib/ansible
  executable location = /Users/johnchen/repos/ansible/bin/ansible
  python version = 2.7.10 (default, Feb 22 2019, 21:55:15) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
```

##### CONFIGURATION
```paste below
HOST_KEY_CHECKING(ansible.cfg) = False
RETRY_FILES_ENABLED(ansible.cfg) = False
```

##### OS / ENVIRONMENT
macOS 10.14.5 (18F132)
Kernel Version: Darwin 18.6.0

##### STEPS TO REPRODUCE
Run the user task on a macOS host to create a user with any name and uid. The first run should complete successfully. Any subsequent runs will result in the mentioned failure.

```yaml
- name: Add Mac User
  become: true
  user:
    name: "test"
    uid: "500"
```

##### EXPECTED RESULTS
Play Recap should show no changes on second run. Example:
```
ok: [127.0.0.1] => (item={u'uid': 500, u'name': u'test'})
```

##### ACTUAL RESULTS
The second run of the playbook fails with a `Cannot update property` error
```paste below
failed: [127.0.0.1] (item={u'uid': u'500', u'name': u'test'}) => {"ansible_loop_var": "item", "changed": false, "err": "", "item": {"name": "test", "uid": "500"}, "msg": "Cannot update property \"uid\" for user \"test\".", "out": "", "rc": 40}
```
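The idempotence failure reported above comes down to a property comparison in `modify_user` that treats any mismatch between the desired value and the value parsed from `dscl` as a change to apply. A minimal illustrative sketch of that failure class (the function name and values here are hypothetical, not taken from the module):

```python
# Illustrative sketch: the module compares the desired property value against
# the current value parsed from dscl output; any mismatch, including a type
# or formatting difference, is treated as a change and triggers an update.
def property_needs_update(current, desired):
    return current is None or current != desired

assert not property_needs_update("500", "500")  # identical strings: idempotent
assert property_needs_update("501", "500")      # real difference: update needed
assert property_needs_update(500, "500")        # int vs str: spurious update
```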
https://github.com/ansible/ansible/issues/62969
https://github.com/ansible/ansible/pull/62973
d4c4c92c97e8dc2791f6a9f63ba0a3a0ce467a6b
c73288ad5387a728349fae772aa9d1769af73a13
2019-09-30T16:19:22Z
python
2019-11-22T22:05:17Z
test/integration/targets/user/tasks/main.yml
# Test code for the user module. # (c) 2017, James Tanner <[email protected]> # This file is part of Ansible # # Ansible is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Ansible is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Ansible. If not, see <http://www.gnu.org/licenses/>. # ## user add - name: remove the test user user: name: ansibulluser state: absent - name: try to create a user user: name: ansibulluser state: present register: user_test0_0 - name: create the user again user: name: ansibulluser state: present register: user_test0_1 - debug: var: user_test0 verbosity: 2 - name: make a list of users script: userlist.sh {{ ansible_facts.distribution }} register: user_names - debug: var: user_names verbosity: 2 - name: validate results for testcase 0 assert: that: - user_test0_0 is changed - user_test0_1 is not changed - '"ansibulluser" in user_names.stdout_lines' # test user add with password - name: add an encrypted password for user user: name: ansibulluser password: "$6$rounds=656000$TT4O7jz2M57npccl$33LF6FcUMSW11qrESXL1HX0BS.bsiT6aenFLLiVpsQh6hDtI9pJh5iY7x8J7ePkN4fP8hmElidHXaeD51pbGS." 
state: present update_password: always register: test_user_encrypt0 - name: there should not be warnings assert: that: "'warnings' not in test_user_encrypt0" - block: - name: add an plaintext password for user user: name: ansibulluser password: "plaintextpassword" state: present update_password: always register: test_user_encrypt1 - name: there should be a warning complains that the password is plaintext assert: that: "'warnings' in test_user_encrypt1" - name: add an invalid hashed password user: name: ansibulluser password: "$6$rounds=656000$tgK3gYTyRLUmhyv2$lAFrYUQwn7E6VsjPOwQwoSx30lmpiU9r/E0Al7tzKrR9mkodcMEZGe9OXD0H/clOn6qdsUnaL4zefy5fG+++++" state: present update_password: always register: test_user_encrypt2 - name: there should be a warning complains about the character set of password assert: that: "'warnings' in test_user_encrypt2" - name: change password to '!' user: name: ansibulluser password: '!' register: test_user_encrypt3 - name: change password to '*' user: name: ansibulluser password: '*' register: test_user_encrypt4 - name: change password to '*************' user: name: ansibulluser password: '*************' register: test_user_encrypt5 - name: there should be no warnings when setting the password to '!', '*' or '*************' assert: that: - "'warnings' not in test_user_encrypt3" - "'warnings' not in test_user_encrypt4" - "'warnings' not in test_user_encrypt5" when: ansible_facts.system != 'Darwin' # https://github.com/ansible/ansible/issues/42484 # Skipping macOS for now since there is a bug when changing home directory - block: - name: create user specifying home user: name: ansibulluser state: present home: "{{ user_home_prefix[ansible_facts.system] }}/ansibulluser" register: user_test3_0 - name: create user again specifying home user: name: ansibulluser state: present home: "{{ user_home_prefix[ansible_facts.system] }}/ansibulluser" register: user_test3_1 - name: change user home user: name: ansibulluser state: present home: "{{ 
user_home_prefix[ansible_facts.system] }}/ansibulluser-mod" register: user_test3_2 - name: change user home back user: name: ansibulluser state: present home: "{{ user_home_prefix[ansible_facts.system] }}/ansibulluser" register: user_test3_3 - name: validate results for testcase 3 assert: that: - user_test3_0 is not changed - user_test3_1 is not changed - user_test3_2 is changed - user_test3_3 is changed when: ansible_facts.system != 'Darwin' # https://github.com/ansible/ansible/issues/41393 # Create a new user account with a path that has parent directories that do not exist - name: Create user with home path that has parents that do not exist user: name: ansibulluser2 state: present home: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/ansibulluser2" register: create_home_with_no_parent_1 - name: Create user with home path that has parents that do not exist again user: name: ansibulluser2 state: present home: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/ansibulluser2" register: create_home_with_no_parent_2 - name: Check the created home directory stat: path: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/ansibulluser2" register: home_with_no_parent_3 - name: Ensure user with non-existing parent paths was created successfully assert: that: - create_home_with_no_parent_1 is changed - create_home_with_no_parent_1.home == user_home_prefix[ansible_facts.system] ~ '/in2deep/ansibulluser2' - create_home_with_no_parent_2 is not changed - home_with_no_parent_3.stat.uid == create_home_with_no_parent_1.uid - home_with_no_parent_3.stat.gr_name == default_user_group[ansible_facts.distribution] | default('ansibulluser2') - name: Cleanup test account user: name: ansibulluser2 home: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/ansibulluser2" state: absent remove: yes - name: Remove testing dir file: path: "{{ user_home_prefix[ansible_facts.system] }}/in2deep/" state: absent # https://github.com/ansible/ansible/issues/60307 # Make sure we can create 
a user when the home directory is missing - name: Create user with home path that does not exist user: name: ansibulluser3 state: present home: "{{ user_home_prefix[ansible_facts.system] }}/nosuchdir" createhome: no - name: Cleanup test account user: name: ansibulluser3 state: absent remove: yes ## user check - name: run existing user check tests user: name: "{{ user_names.stdout_lines | random }}" state: present create_home: no loop: "{{ range(1, 5+1) | list }}" register: user_test1 - debug: var: user_test1 verbosity: 2 - name: validate results for testcase 1 assert: that: - user_test1.results is defined - user_test1.results | length == 5 - name: validate changed results for testcase 1 assert: that: - "user_test1.results[0] is not changed" - "user_test1.results[1] is not changed" - "user_test1.results[2] is not changed" - "user_test1.results[3] is not changed" - "user_test1.results[4] is not changed" - "user_test1.results[0]['state'] == 'present'" - "user_test1.results[1]['state'] == 'present'" - "user_test1.results[2]['state'] == 'present'" - "user_test1.results[3]['state'] == 'present'" - "user_test1.results[4]['state'] == 'present'" ## user remove - name: try to delete the user user: name: ansibulluser state: absent force: true register: user_test2 - name: make a new list of users script: userlist.sh {{ ansible_facts.distribution }} register: user_names2 - debug: var: user_names2 verbosity: 2 - name: validate results for testcase 2 assert: that: - '"ansibulluser" not in user_names2.stdout_lines' ## create user without home and test fallback home dir create - block: - name: create the user user: name: ansibulluser - name: delete the user and home dir user: name: ansibulluser state: absent force: true remove: true - name: create the user without home user: name: ansibulluser create_home: no - name: create the user home dir user: name: ansibulluser register: user_create_home_fallback - name: stat home dir stat: path: '{{ user_create_home_fallback.home }}' 
register: user_create_home_fallback_dir - name: read UMASK from /etc/login.defs and return mode shell: | import re import os try: for line in open('/etc/login.defs').readlines(): m = re.match(r'^UMASK\s+(\d+)$', line) if m: umask = int(m.group(1), 8) except: umask = os.umask(0) mode = oct(0o777 & ~umask) print(str(mode).replace('o', '')) args: executable: "{{ ansible_python_interpreter }}" register: user_login_defs_umask - name: validate that user home dir is created assert: that: - user_create_home_fallback is changed - user_create_home_fallback_dir.stat.exists - user_create_home_fallback_dir.stat.isdir - user_create_home_fallback_dir.stat.pw_name == 'ansibulluser' - user_create_home_fallback_dir.stat.mode == user_login_defs_umask.stdout when: ansible_facts.system != 'Darwin' - block: - name: create non-system user on macOS to test the shell is set to /bin/bash user: name: macosuser register: macosuser_output - name: validate the shell is set to /bin/bash assert: that: - 'macosuser_output.shell == "/bin/bash"' - name: cleanup user: name: macosuser state: absent - name: create system user on macos to test the shell is set to /usr/bin/false user: name: macosuser system: yes register: macosuser_output - name: validate the shell is set to /usr/bin/false assert: that: - 'macosuser_output.shell == "/usr/bin/false"' - name: cleanup user: name: macosuser state: absent - name: create non-system user on macos and set the shell to /bin/sh user: name: macosuser shell: /bin/sh register: macosuser_output - name: validate the shell is set to /bin/sh assert: that: - 'macosuser_output.shell == "/bin/sh"' - name: cleanup user: name: macosuser state: absent when: ansible_facts.distribution == "MacOSX" ## user expires # Date is March 3, 2050 - name: Set user expiration user: name: ansibulluser state: present expires: 2529881062 register: user_test_expires1 tags: - timezone - name: Set user expiration again to ensure no change is made user: name: ansibulluser state: present expires: 
2529881062 register: user_test_expires2 tags: - timezone - name: Ensure that account with expiration was created and did not change on subsequent run assert: that: - user_test_expires1 is changed - user_test_expires2 is not changed - name: Verify expiration date for Linux block: - name: LINUX | Get expiration date for ansibulluser getent: database: shadow key: ansibulluser - name: LINUX | Ensure proper expiration date was set assert: that: - getent_shadow['ansibulluser'][6] == '29281' when: ansible_facts.os_family in ['RedHat', 'Debian', 'Suse'] - name: Verify expiration date for BSD block: - name: BSD | Get expiration date for ansibulluser shell: 'grep ansibulluser /etc/master.passwd | cut -d: -f 7' changed_when: no register: bsd_account_expiration - name: BSD | Ensure proper expiration date was set assert: that: - bsd_account_expiration.stdout == '2529881062' when: ansible_facts.os_family == 'FreeBSD' - name: Change timezone timezone: name: America/Denver register: original_timezone tags: - timezone - name: Change system timezone to make sure expiration comparison works properly block: - name: Create user with expiration again to ensure no change is made in a new timezone user: name: ansibulluser state: present expires: 2529881062 register: user_test_different_tz tags: - timezone - name: Ensure that no change was reported assert: that: - user_test_different_tz is not changed tags: - timezone always: - name: Restore original timezone - {{ original_timezone.diff.before.name }} timezone: name: "{{ original_timezone.diff.before.name }}" when: original_timezone.diff.before.name != "n/a" tags: - timezone - name: Restore original timezone when n/a file: path: /etc/sysconfig/clock state: absent when: - original_timezone.diff.before.name == "n/a" - "'/etc/sysconfig/clock' in original_timezone.msg" tags: - timezone - name: Unexpire user user: name: ansibulluser state: present expires: -1 register: user_test_expires3 - name: Verify un expiration date for Linux block: - 
name: LINUX | Get expiration date for ansibulluser getent: database: shadow key: ansibulluser - name: LINUX | Ensure proper expiration date was set assert: msg: "expiry is supposed to be empty or -1, not {{ getent_shadow['ansibulluser'][6] }}" that: - not getent_shadow['ansibulluser'][6] or getent_shadow['ansibulluser'][6] | int < 0 when: ansible_facts.os_family in ['RedHat', 'Debian', 'Suse'] - name: Verify un expiration date for Linux/BSD block: - name: Unexpire user again to check for change user: name: ansibulluser state: present expires: -1 register: user_test_expires4 - name: Ensure first expiration reported a change and second did not assert: msg: The second run of the expiration removal task reported a change when it should not that: - user_test_expires3 is changed - user_test_expires4 is not changed when: ansible_facts.os_family in ['RedHat', 'Debian', 'Suse', 'FreeBSD'] - name: Verify un expiration date for BSD block: - name: BSD | Get expiration date for ansibulluser shell: 'grep ansibulluser /etc/master.passwd | cut -d: -f 7' changed_when: no register: bsd_account_expiration - name: BSD | Ensure proper expiration date was set assert: msg: "expiry is supposed to be '0', not {{ bsd_account_expiration.stdout }}" that: - bsd_account_expiration.stdout == '0' when: ansible_facts.os_family == 'FreeBSD' # Test setting no expiration when creating a new account # https://github.com/ansible/ansible/issues/44155 - name: Remove ansibulluser user: name: ansibulluser state: absent - name: Create user account without expiration user: name: ansibulluser state: present expires: -1 register: user_test_create_no_expires_1 - name: Create user account without expiration again user: name: ansibulluser state: present expires: -1 register: user_test_create_no_expires_2 - name: Ensure changes were made appropriately assert: msg: Setting 'expires='-1 resulted in incorrect changes that: - user_test_create_no_expires_1 is changed - user_test_create_no_expires_2 is not changed - 
name: Verify un expiration date for Linux block: - name: LINUX | Get expiration date for ansibulluser getent: database: shadow key: ansibulluser - name: LINUX | Ensure proper expiration date was set assert: msg: "expiry is supposed to be empty or -1, not {{ getent_shadow['ansibulluser'][6] }}" that: - not getent_shadow['ansibulluser'][6] or getent_shadow['ansibulluser'][6] | int < 0 when: ansible_facts.os_family in ['RedHat', 'Debian', 'Suse'] - name: Verify un expiration date for BSD block: - name: BSD | Get expiration date for ansibulluser shell: 'grep ansibulluser /etc/master.passwd | cut -d: -f 7' changed_when: no register: bsd_account_expiration - name: BSD | Ensure proper expiration date was set assert: msg: "expiry is supposed to be '0', not {{ bsd_account_expiration.stdout }}" that: - bsd_account_expiration.stdout == '0' when: ansible_facts.os_family == 'FreeBSD' # Test setting epoch 0 expiration when creating a new account, then removing the expiry # https://github.com/ansible/ansible/issues/47114 - name: Remove ansibulluser user: name: ansibulluser state: absent - name: Create user account with epoch 0 expiration user: name: ansibulluser state: present expires: 0 register: user_test_expires_create0_1 - name: Create user account with epoch 0 expiration again user: name: ansibulluser state: present expires: 0 register: user_test_expires_create0_2 - name: Change the user account to remove the expiry time user: name: ansibulluser expires: -1 register: user_test_remove_expires_1 - name: Change the user account to remove the expiry time again user: name: ansibulluser expires: -1 register: user_test_remove_expires_2 - name: Verify un expiration date for Linux block: - name: LINUX | Ensure changes were made appropriately assert: msg: Creating an account with 'expires=0' then removing that expiration with 'expires=-1' resulted in incorrect changes that: - user_test_expires_create0_1 is changed - user_test_expires_create0_2 is not changed - 
user_test_remove_expires_1 is changed - user_test_remove_expires_2 is not changed - name: LINUX | Get expiration date for ansibulluser getent: database: shadow key: ansibulluser - name: LINUX | Ensure proper expiration date was set assert: msg: "expiry is supposed to be empty or -1, not {{ getent_shadow['ansibulluser'][6] }}" that: - not getent_shadow['ansibulluser'][6] or getent_shadow['ansibulluser'][6] | int < 0 when: ansible_facts.os_family in ['RedHat', 'Debian', 'Suse'] - name: Verify proper expiration behavior for BSD block: - name: BSD | Ensure changes were made appropriately assert: msg: Creating an account with 'expires=0' then removing that expiration with 'expires=-1' resulted in incorrect changes that: - user_test_expires_create0_1 is changed - user_test_expires_create0_2 is not changed - user_test_remove_expires_1 is not changed - user_test_remove_expires_2 is not changed when: ansible_facts.os_family == 'FreeBSD' # Test expiration with a very large negative number. This should have the same # result as setting -1. 
- name: Set expiration date using very long negative number user: name: ansibulluser state: present expires: -2529881062 register: user_test_expires5 - name: Ensure no change was made assert: that: - user_test_expires5 is not changed - name: Verify un expiration date for Linux block: - name: LINUX | Get expiration date for ansibulluser getent: database: shadow key: ansibulluser - name: LINUX | Ensure proper expiration date was set assert: msg: "expiry is supposed to be empty or -1, not {{ getent_shadow['ansibulluser'][6] }}" that: - not getent_shadow['ansibulluser'][6] or getent_shadow['ansibulluser'][6] | int < 0 when: ansible_facts.os_family in ['RedHat', 'Debian', 'Suse'] - name: Verify un expiration date for BSD block: - name: BSD | Get expiration date for ansibulluser shell: 'grep ansibulluser /etc/master.passwd | cut -d: -f 7' changed_when: no register: bsd_account_expiration - name: BSD | Ensure proper expiration date was set assert: msg: "expiry is supposed to be '0', not {{ bsd_account_expiration.stdout }}" that: - bsd_account_expiration.stdout == '0' when: ansible_facts.os_family == 'FreeBSD' ## shadow backup - block: - name: Create a user to test shadow file backup user: name: ansibulluser state: present register: result - name: Find shadow backup files find: path: /etc patterns: 'shadow\..*~$' use_regex: yes register: shadow_backups - name: Assert that a backup file was created assert: that: - result.bakup - shadow_backups.files | map(attribute='path') | list | length > 0 when: ansible_facts.os_family == 'Solaris' # Test creating ssh key with passphrase - name: Remove ansibulluser user: name: ansibulluser state: absent - name: Create user with ssh key user: name: ansibulluser state: present generate_ssh_key: yes ssh_key_file: "{{ output_dir }}/test_id_rsa" ssh_key_passphrase: secret_passphrase - name: Unlock ssh key command: "ssh-keygen -y -f {{ output_dir }}/test_id_rsa -P secret_passphrase" register: result - name: Check that ssh key was unlocked 
successfully assert: that: - result.rc == 0 - name: Clean ssh key file: path: "{{ output_dir }}/test_id_rsa" state: absent when: ansible_os_family == 'FreeBSD' ## password lock - block: - name: Set password for ansibulluser user: name: ansibulluser password: "$6$rounds=656000$TT4O7jz2M57npccl$33LF6FcUMSW11qrESXL1HX0BS.bsiT6aenFLLiVpsQh6hDtI9pJh5iY7x8J7ePkN4fP8hmElidHXaeD51pbGS." - name: Lock account user: name: ansibulluser password_lock: yes register: password_lock_1 - name: Lock account again user: name: ansibulluser password_lock: yes register: password_lock_2 - name: Unlock account user: name: ansibulluser password_lock: no register: password_lock_3 - name: Unlock account again user: name: ansibulluser password_lock: no register: password_lock_4 - name: Ensure task reported changes appropriately assert: msg: The password_lock tasks did not make changes appropriately that: - password_lock_1 is changed - password_lock_2 is not changed - password_lock_3 is changed - password_lock_4 is not changed - name: Lock account user: name: ansibulluser password_lock: yes - name: Verify account lock for BSD block: - name: BSD | Get account status shell: "{{ status_command[ansible_facts['system']] }}" register: account_status_locked - name: Unlock account user: name: ansibulluser password_lock: no - name: BSD | Get account status shell: "{{ status_command[ansible_facts['system']] }}" register: account_status_unlocked - name: FreeBSD | Ensure account is locked assert: that: - "'LOCKED' in account_status_locked.stdout" - "'LOCKED' not in account_status_unlocked.stdout" when: ansible_facts['system'] == 'FreeBSD' when: ansible_facts['system'] in ['FreeBSD', 'OpenBSD'] - name: Verify account lock for Linux block: - name: LINUX | Get account status getent: database: shadow key: ansibulluser - name: LINUX | Ensure account is locked assert: that: - getent_shadow['ansibulluser'][0].startswith('!') - name: Unlock account user: name: ansibulluser password_lock: no - name: LINUX | Get 
account status getent: database: shadow key: ansibulluser - name: LINUX | Ensure account is unlocked assert: that: - not getent_shadow['ansibulluser'][0].startswith('!') when: ansible_facts['system'] == 'Linux' always: - name: Unlock account user: name: ansibulluser password_lock: no when: ansible_facts['system'] in ['FreeBSD', 'OpenBSD', 'Linux'] ## Check local mode # Even if we don't have a system that is bound to a directory, it's useful # to run with local: true to exercise the code path that reads through the local # user database file. # https://github.com/ansible/ansible/issues/50947 - name: Create /etc/gshadow file: path: /etc/gshadow state: touch when: ansible_facts.os_family == 'Suse' tags: - user_test_local_mode - name: Create /etc/libuser.conf file: path: /etc/libuser.conf state: touch when: - ansible_facts.distribution == 'Ubuntu' - ansible_facts.distribution_major_version is version_compare('16', '==') tags: - user_test_local_mode - name: Ensure luseradd is present action: "{{ ansible_facts.pkg_mgr }}" args: name: libuser state: present when: ansible_facts.system in ['Linux'] tags: - user_test_local_mode - name: Create local account that already exists to check for warning user: name: root local: yes register: local_existing tags: - user_test_local_mode - name: Create local_ansibulluser user: name: local_ansibulluser state: present local: yes register: local_user_test_1 tags: - user_test_local_mode - name: Create local_ansibulluser again user: name: local_ansibulluser state: present local: yes register: local_user_test_2 tags: - user_test_local_mode - name: Remove local_ansibulluser user: name: local_ansibulluser state: absent remove: yes local: yes register: local_user_test_remove_1 tags: - user_test_local_mode - name: Remove local_ansibulluser again user: name: local_ansibulluser state: absent remove: yes local: yes register: local_user_test_remove_2 tags: - user_test_local_mode - name: Create test group group: name: testgroup tags: - 
user_test_local_mode - name: Create local_ansibulluser with groups user: name: local_ansibulluser state: present local: yes groups: testgroup register: local_user_test_3 ignore_errors: yes tags: - user_test_local_mode - name: Append groups for local_ansibulluser user: name: local_ansibulluser state: present local: yes append: yes register: local_user_test_4 ignore_errors: yes tags: - user_test_local_mode - name: Ensure local user accounts were created and removed properly assert: that: - local_user_test_1 is changed - local_user_test_2 is not changed - local_user_test_3 is failed - "local_user_test_3['msg'] is search('parameters are mutually exclusive: groups|local')" - local_user_test_4 is failed - "local_user_test_4['msg'] is search('parameters are mutually exclusive: groups|append')" - local_user_test_remove_1 is changed - local_user_test_remove_2 is not changed tags: - user_test_local_mode - name: Ensure warnings were displayed properly assert: that: - local_user_test_1['warnings'] | length > 0 - local_user_test_1['warnings'] | first is search('The local user account may already exist') - local_existing['warnings'] is not defined when: ansible_facts.system in ['Linux'] tags: - user_test_local_mode
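A quick sketch of the date arithmetic behind the ``expires`` assertions in the tasks above (my own illustration, not part of the test suite): on Linux, field 7 of ``/etc/shadow`` stores the account expiry as whole days since the Unix epoch, so the epoch-seconds value the tasks pass to ``expires`` maps directly to the day number the LINUX assertion compares against.

```python
# Illustration only: derive the /etc/shadow expiry day from the epoch
# timestamp used by the tasks above (the tests' comment: March 3, 2050).
EXPIRES_EPOCH_SECONDS = 2529881062
SECONDS_PER_DAY = 24 * 60 * 60  # 86400

# /etc/shadow field 7 holds whole days since 1970-01-01, so the module
# floor-divides the epoch seconds by the length of a day.
shadow_expiry_days = EXPIRES_EPOCH_SECONDS // SECONDS_PER_DAY
print(shadow_expiry_days)  # 29281, the value the LINUX assertion expects
```

This is why the BSD branch checks the raw value ``2529881062`` (``master.passwd`` keeps epoch seconds) while the Linux branch checks ``29281``.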
closed
ansible/ansible
https://github.com/ansible/ansible
64150
unable to read from KV secrets engine vault enterprise
##### SUMMARY
Hi, I am trying to read a secret from the following path:

secret: "{{ lookup('hashi_vault', 'namespace={{NAMESPACE}} secret={{ BRAND }}/{{ ENVIRONMENT }}/secrets/placeholder:foo url={{ VAULT_ADDR }} auth_method={{ VAULT_AUTH_METHOD }} role_id={{ VAULT_ROLE_ID }} secret_id={{ VAULT_SECRET_ID }}')}}"

but this fails with "no secret found at location". We are using Vault Enterprise, and this path is served by the KV engine:

vault kv get {{ BRAND }}/{{ ENVIRONMENT }}/secrets/placeholder:foo

##### ISSUE TYPE
- Bug Report: unable to read from the KV data store

##### COMPONENT NAME
hashi_vault

##### ANSIBLE VERSION
2.8
https://github.com/ansible/ansible/issues/64150
https://github.com/ansible/ansible/pull/64288
e74cf5e4b300aa8ce6c88a967298fc26ee471e42
8daa42bb3d30ec758105c5661967ec2c13f73a5d
2019-10-31T13:54:20Z
python
2019-11-25T05:18:49Z
changelogs/fragments/64288-fix-hashi-vault-kv-v2.yaml
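The failure reported above matches how the KV version 2 engine exposes secrets (the subject of the kv-v2 fix this record points at): reads go through ``<mount>/data/<path>``, and the stored key/value pairs come back nested one level deeper than in KV v1, so a v1-style read of the same path finds nothing. A minimal sketch of that shape — the helper names below are my own illustration, not the plugin's or hvac's API:

```python
# Hypothetical helpers showing the KV v2 request path and response nesting;
# these are not part of the hashi_vault plugin or the hvac library.

def kv2_request_path(mount_point, secret_path):
    # KV v2 reads use "<mount>/data/<path>" rather than "<mount>/<path>".
    return "{0}/data/{1}".format(mount_point, secret_path)

def kv2_unwrap(response):
    # KV v2 nests the stored key/value pairs under data.data,
    # alongside version metadata.
    return response["data"]["data"]

# Shape of a KV v2 read response, reduced to the relevant fields.
sample_response = {
    "data": {
        "data": {"foo": "bar"},
        "metadata": {"version": 1},
    }
}

print(kv2_request_path("secret", "hello"))  # secret/data/hello
print(kv2_unwrap(sample_response))          # {'foo': 'bar'}
```

Reading ``secret/hello`` directly against a v2 mount returns nothing, which is consistent with the "no secret found at location" error in the report.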
closed
ansible/ansible
https://github.com/ansible/ansible
64150
unable to read from KV secrets engine vault enterprise
##### SUMMARY
Hi, I am trying to read a secret from the following path:

secret: "{{ lookup('hashi_vault', 'namespace={{NAMESPACE}} secret={{ BRAND }}/{{ ENVIRONMENT }}/secrets/placeholder:foo url={{ VAULT_ADDR }} auth_method={{ VAULT_AUTH_METHOD }} role_id={{ VAULT_ROLE_ID }} secret_id={{ VAULT_SECRET_ID }}')}}"

but this fails with "no secret found at location". We are using Vault Enterprise, and this path is served by the KV engine:

vault kv get {{ BRAND }}/{{ ENVIRONMENT }}/secrets/placeholder:foo

##### ISSUE TYPE
- Bug Report: unable to read from the KV data store

##### COMPONENT NAME
hashi_vault

##### ANSIBLE VERSION
2.8
https://github.com/ansible/ansible/issues/64150
https://github.com/ansible/ansible/pull/64288
e74cf5e4b300aa8ce6c88a967298fc26ee471e42
8daa42bb3d30ec758105c5661967ec2c13f73a5d
2019-10-31T13:54:20Z
python
2019-11-25T05:18:49Z
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
.. _porting_2.10_guide:

**************************
Ansible 2.10 Porting Guide
**************************

This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.

It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.

We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.

This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.

.. contents:: Topics

Playbook
========

No notable changes

Command Line
============

No notable changes

Deprecated
==========

No notable changes

Modules
=======

Modules removed
---------------

The following modules no longer exist:

* letsencrypt use :ref:`acme_certificate <acme_certificate_module>` instead.

Deprecation notices
-------------------

The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.

* ldap_attr use :ref:`ldap_attrs <ldap_attrs_module>` instead.

The following functionality will be removed in Ansible 2.14. Please update your playbooks accordingly.

* The :ref:`openssl_csr <openssl_csr_module>` module's option ``version`` no longer supports values other than ``1`` (currently the only standardized CSR version).
* :ref:`docker_container <docker_container_module>`: the ``trust_image_content`` option will be removed. It has always been ignored by the module.
* :ref:`iam_managed_policy <iam_managed_policy_module>`: the ``fail_on_delete`` option will be removed. It has always been ignored by the module.
* :ref:`s3_lifecycle <s3_lifecycle_module>`: the ``requester_pays`` option will be removed. It has always been ignored by the module.
* :ref:`s3_sync <s3_sync_module>`: the ``retries`` option will be removed. 
It has always been ignored by the module.
* The return values ``err`` and ``out`` of :ref:`docker_stack <docker_stack_module>` have been deprecated. Use ``stdout`` and ``stderr`` from now on instead.
* :ref:`cloudformation <cloudformation_module>`: the ``template_format`` option will be removed. It has been ignored by the module since Ansible 2.3.
* :ref:`data_pipeline <data_pipeline_module>`: the ``version`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_eip <ec2_eip_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.3.
* :ref:`ec2_key <ec2_key_module>`: the ``wait`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_key <ec2_key_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_lc <ec2_lc_module>`: the ``associate_public_ip_address`` option will be removed. It has always been ignored by the module.
* :ref:`iam_policy <iam_policy_module>`: the ``policy_document`` option will be removed. To maintain the existing behavior use the ``policy_json`` option and read the file with the ``lookup`` plugin.

The following functionality will change in Ansible 2.14. Please update your playbooks accordingly.

* The :ref:`docker_container <docker_container_module>` module has a new option, ``container_default_behavior``, whose default value will change from ``compatibility`` to ``no_defaults``. Set to an explicit value to avoid deprecation warnings.
* The :ref:`docker_container <docker_container_module>` module's ``network_mode`` option will be set by default to the name of the first network in ``networks`` if at least one network is given and ``networks_cli_compatible`` is ``true`` (will be default from Ansible 2.12 on). Set to an explicit value to avoid deprecation warnings if you specify networks and set ``networks_cli_compatible`` to ``true``. 
The current default (not specifying it) is equivalent to the value ``default``.
* :ref:`iam_policy <iam_policy_module>`: the default value for the ``skip_duplicates`` option will change from ``true`` to ``false``. To maintain the existing behavior explicitly set it to ``true``.
* :ref:`iam_role <iam_role_module>`: the ``purge_policies`` option (also known as ``purge_policy``) default value will change from ``true`` to ``false``.

The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.

* ``vmware_dns_config`` use :ref:`vmware_host_dns <vmware_host_dns_module>` instead.

Noteworthy module changes
-------------------------

* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in the :ref:`pacman <pacman_module>` module has been removed; use ``extra_args=--recursive`` instead.
* The :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module does not require the VM name, which was a required parameter for releases prior to Ansible 2.10.
* :ref:`zabbix_action <zabbix_action_module>` no longer requires ``esc_period`` and ``event_source`` arguments when ``state=absent``.
* :ref:`gitlab_user <gitlab_user_module>` no longer requires ``name``, ``email`` and ``password`` arguments when ``state=absent``. 
* :ref:`win_pester <win_pester_module>` no longer runs all ``*.ps1`` files in the specified directory, since that could execute potentially unknown scripts. It will follow Pester's own default behaviour of only running tests for files named like ``*.tests.ps1``.

Plugins
=======

No notable changes

Porting custom scripts
======================

No notable changes

Networking
==========

No notable changes
closed
ansible/ansible
https://github.com/ansible/ansible
64150
unable to read from KV secrets engine vault enterprise
##### SUMMARY
Hi, I am trying to read a secret from the following path:

secret: "{{ lookup('hashi_vault', 'namespace={{NAMESPACE}} secret={{ BRAND }}/{{ ENVIRONMENT }}/secrets/placeholder:foo url={{ VAULT_ADDR }} auth_method={{ VAULT_AUTH_METHOD }} role_id={{ VAULT_ROLE_ID }} secret_id={{ VAULT_SECRET_ID }}')}}"

but this fails with "no secret found at location". We are using Vault Enterprise, and this path is served by the KV engine:

vault kv get {{ BRAND }}/{{ ENVIRONMENT }}/secrets/placeholder:foo

##### ISSUE TYPE
- Bug Report: unable to read from the KV data store

##### COMPONENT NAME
hashi_vault

##### ANSIBLE VERSION
2.8
https://github.com/ansible/ansible/issues/64150
https://github.com/ansible/ansible/pull/64288
e74cf5e4b300aa8ce6c88a967298fc26ee471e42
8daa42bb3d30ec758105c5661967ec2c13f73a5d
2019-10-31T13:54:20Z
python
2019-11-25T05:18:49Z
lib/ansible/plugins/lookup/hashi_vault.py
# (c) 2015, Jonathan Davila <jonathan(at)davila.io> # (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type DOCUMENTATION = """ lookup: hashi_vault author: Jonathan Davila <jdavila(at)ansible.com> version_added: "2.0" short_description: retrieve secrets from HashiCorp's vault requirements: - hvac (python library) description: - retrieve secrets from HashiCorp's vault notes: - Due to a current limitation in the HVAC library there won't necessarily be an error if a bad endpoint is specified. options: secret: description: query you are making. required: True token: description: vault token. env: - name: VAULT_TOKEN url: description: URL to vault service. env: - name: VAULT_ADDR default: 'http://127.0.0.1:8200' username: description: Authentication user name. password: description: Authentication password. role_id: description: Role id for a vault AppRole auth. env: - name: VAULT_ROLE_ID secret_id: description: Secret id for a vault AppRole auth. env: - name: VAULT_SECRET_ID auth_method: description: - Authentication method to be used. - C(userpass) is added in version 2.8. env: - name: VAULT_AUTH_METHOD choices: - userpass - ldap - approle mount_point: description: vault mount point, only required if you have a custom mount point. default: ldap ca_cert: description: path to certificate to use for authentication. aliases: [ cacert ] validate_certs: description: controls verification and validation of SSL certificates, mostly you only want to turn off with self signed ones. type: boolean default: True namespace: version_added: "2.8" description: namespace where secrets reside. requires HVAC 0.7.0+ and Vault 0.11+. 
""" EXAMPLES = """ - debug: msg: "{{ lookup('hashi_vault', 'secret=secret/hello:value token=c975b780-d1be-8016-866b-01d0f9b688a5 url=http://myvault:8200')}}" - name: Return all secrets from a path debug: msg: "{{ lookup('hashi_vault', 'secret=secret/hello token=c975b780-d1be-8016-866b-01d0f9b688a5 url=http://myvault:8200')}}" - name: Vault that requires authentication via LDAP debug: msg: "{{ lookup('hashi_vault', 'secret=secret/hello:value auth_method=ldap mount_point=ldap username=myuser password=mypas url=http://myvault:8200')}}" - name: Vault that requires authentication via username and password debug: msg: "{{ lookup('hashi_vault', 'secret=secret/hello:value auth_method=userpass username=myuser password=mypas url=http://myvault:8200')}}" - name: Using an ssl vault debug: msg: "{{ lookup('hashi_vault', 'secret=secret/hola:value token=c975b780-d1be-8016-866b-01d0f9b688a5 url=https://myvault:8200 validate_certs=False')}}" - name: using certificate auth debug: msg: "{{ lookup('hashi_vault', 'secret=secret/hi:value token=xxxx-xxx-xxx url=https://myvault:8200 validate_certs=True cacert=/cacert/path/ca.pem')}}" - name: authenticate with a Vault app role debug: msg: "{{ lookup('hashi_vault', 'secret=secret/hello:value auth_method=approle role_id=myroleid secret_id=mysecretid url=http://myvault:8200')}}" - name: Return all secrets from a path in a namespace debug: msg: "{{ lookup('hashi_vault', 'secret=secret/hello token=c975b780-d1be-8016-866b-01d0f9b688a5 url=http://myvault:8200 namespace=teama/admins')}}" # to work with kv v2 (vault api - for kv v2 - GET method requires that PATH should be "secret/data/:path") - name: Return all kv v2 secrets from a path debug: msg: "{{ lookup('hashi_vault', 'secret=secret/data/hello token=my_vault_token url=http://myvault_url:8200') }}" """ RETURN = """ _raw: description: - secrets(s) requested """ import os from ansible.errors import AnsibleError from ansible.module_utils.parsing.convert_bool import boolean from 
ansible.plugins.lookup import LookupBase HAS_HVAC = False try: import hvac HAS_HVAC = True except ImportError: HAS_HVAC = False ANSIBLE_HASHI_VAULT_ADDR = 'http://127.0.0.1:8200' if os.getenv('VAULT_ADDR') is not None: ANSIBLE_HASHI_VAULT_ADDR = os.environ['VAULT_ADDR'] class HashiVault: def __init__(self, **kwargs): self.url = kwargs.get('url', ANSIBLE_HASHI_VAULT_ADDR) self.namespace = kwargs.get('namespace', None) self.avail_auth_method = ['approle', 'userpass', 'ldap'] # split secret arg, which has format 'secret/hello:value' into secret='secret/hello' and secret_field='value' s = kwargs.get('secret') if s is None: raise AnsibleError("No secret specified for hashi_vault lookup") s_f = s.rsplit(':', 1) self.secret = s_f[0] if len(s_f) >= 2: self.secret_field = s_f[1] else: self.secret_field = '' self.verify = self.boolean_or_cacert(kwargs.get('validate_certs', True), kwargs.get('cacert', '')) # If a particular backend is asked for (and its method exists) we call it, otherwise drop through to using # token auth. This means if a particular auth backend is requested and a token is also given, then we # ignore the token and attempt authentication against the specified backend. # # to enable a new auth backend, simply add a new 'def auth_<type>' method below. # self.auth_method = kwargs.get('auth_method', os.environ.get('VAULT_AUTH_METHOD')) self.verify = self.boolean_or_cacert(kwargs.get('validate_certs', True), kwargs.get('cacert', '')) if self.auth_method and self.auth_method != 'token': try: if self.namespace is not None: self.client = hvac.Client(url=self.url, verify=self.verify, namespace=self.namespace) else: self.client = hvac.Client(url=self.url, verify=self.verify) # prefixing with auth_ to limit which methods can be accessed getattr(self, 'auth_' + self.auth_method)(**kwargs) except AttributeError: raise AnsibleError("Authentication method '%s' not supported." 
" Available options are %r" % (self.auth_method, self.avail_auth_method)) else: self.token = kwargs.get('token', os.environ.get('VAULT_TOKEN', None)) if self.token is None and os.environ.get('HOME'): token_filename = os.path.join( os.environ.get('HOME'), '.vault-token' ) if os.path.exists(token_filename): with open(token_filename) as token_file: self.token = token_file.read().strip() if self.token is None: raise AnsibleError("No Vault Token specified") if self.namespace is not None: self.client = hvac.Client(url=self.url, token=self.token, verify=self.verify, namespace=self.namespace) else: self.client = hvac.Client(url=self.url, token=self.token, verify=self.verify) if not self.client.is_authenticated(): raise AnsibleError("Invalid Hashicorp Vault Token Specified for hashi_vault lookup") def get(self): data = self.client.read(self.secret) if data is None: raise AnsibleError("The secret %s doesn't seem to exist for hashi_vault lookup" % self.secret) if self.secret_field == '': return data['data'] if self.secret_field not in data['data']: raise AnsibleError("The secret %s does not contain the field '%s'. 
for hashi_vault lookup" % (self.secret, self.secret_field)) return data['data'][self.secret_field] def check_params(self, **kwargs): username = kwargs.get('username') if username is None: raise AnsibleError("Authentication method %s requires a username" % self.auth_method) password = kwargs.get('password') if password is None: raise AnsibleError("Authentication method %s requires a password" % self.auth_method) mount_point = kwargs.get('mount_point') return username, password, mount_point def auth_userpass(self, **kwargs): username, password, mount_point = self.check_params(**kwargs) if mount_point is None: mount_point = 'userpass' self.client.auth_userpass(username, password, mount_point=mount_point) def auth_ldap(self, **kwargs): username, password, mount_point = self.check_params(**kwargs) if mount_point is None: mount_point = 'ldap' self.client.auth_ldap(username, password, mount_point=mount_point) def boolean_or_cacert(self, validate_certs, cacert): validate_certs = boolean(validate_certs, strict=False) ''' return a bool or cacert ''' if validate_certs is True: if cacert != '': return cacert else: return True else: return False def auth_approle(self, **kwargs): role_id = kwargs.get('role_id', os.environ.get('VAULT_ROLE_ID', None)) if role_id is None: raise AnsibleError("Authentication method app role requires a role_id") secret_id = kwargs.get('secret_id', os.environ.get('VAULT_SECRET_ID', None)) if secret_id is None: raise AnsibleError("Authentication method app role requires a secret_id") self.client.auth_approle(role_id, secret_id) class LookupModule(LookupBase): def run(self, terms, variables=None, **kwargs): if not HAS_HVAC: raise AnsibleError("Please pip install hvac to use the hashi_vault lookup module.") vault_args = terms[0].split() vault_dict = {} ret = [] for param in vault_args: try: key, value = param.split('=') except ValueError: raise AnsibleError("hashi_vault lookup plugin needs key=value pairs, but received %s" % terms) vault_dict[key] =
value if 'ca_cert' in vault_dict.keys(): vault_dict['cacert'] = vault_dict['ca_cert'] vault_dict.pop('ca_cert', None) vault_conn = HashiVault(**vault_dict) for term in terms: key = term.split()[0] value = vault_conn.get() ret.append(value) return ret
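The plugin's EXAMPLES above note that KV v2 secrets must be read through `secret/data/:path`, and KV v2 responses also nest the payload one level deeper than KV v1. A minimal, hypothetical helper (not part of the plugin) that normalizes both response shapes:

```python
def unwrap_kv_response(response):
    """Return the secret key/value mapping from a raw Vault read.

    KV v1: {'data': {'value': 'foo'}}
    KV v2: {'data': {'data': {'value': 'foo'}, 'metadata': {...}}}
    """
    data = response['data']
    if isinstance(data.get('data'), dict) and 'metadata' in data:
        return data['data']  # KV v2: strip the version wrapper
    return data              # KV v1: payload is already at this level

# Shapes modeled on the two engines' read responses:
v1 = {'data': {'value': 'foo'}}
v2 = {'data': {'data': {'value': 'foo'}, 'metadata': {'version': 1}}}

print(unwrap_kv_response(v1))  # {'value': 'foo'}
print(unwrap_kv_response(v2))  # {'value': 'foo'}
```

This is a sketch of the normalization idea only; the plugin itself returns `data['data']` as-is, which is why KV v2 callers pass the `secret/data/...` path explicitly.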
closed
ansible/ansible
https://github.com/ansible/ansible
64,150
unable to read from KV secrets engine vault enterprise
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> Hi, i am trying to read a secret from the following path secret: "{{ lookup('hashi_vault', 'namespace={{NAMESPACE}} secret={{ BRAND }}/{{ ENVIRONMENT }}/secrets/placeholder:foo url={{ VAULT_ADDR }} auth_method={{ VAULT_AUTH_METHOD }} role_id={{ VAULT_ROLE_ID }} secret_id={{ VAULT_SECRET_ID }}')}}" but this fails with no secret found at location. we are using enterprise and this uses the KV engine vault kv get {{ BRAND }}/{{ ENVIRONMENT }}/secrets/placeholder:foo ##### ISSUE TYPE - Bug Report unable to read from kv data store ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> hashi_vault ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below 2.8 ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below ```
https://github.com/ansible/ansible/issues/64150
https://github.com/ansible/ansible/pull/64288
e74cf5e4b300aa8ce6c88a967298fc26ee471e42
8daa42bb3d30ec758105c5661967ec2c13f73a5d
2019-10-31T13:54:20Z
python
2019-11-25T05:18:49Z
test/integration/targets/lookup_hashi_vault/lookup_hashi_vault/defaults/main.yml
--- vault_base_path: 'secret/data/testproject' vault_base_path_kv: 'secret/testproject' # required by KV 2 engine
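These defaults capture the KV v2 quirk behind the issue: `vault kv put` writes to `secret/testproject`, while the HTTP read path must be `secret/data/testproject`. A sketch of that path translation (hypothetical helper, assuming a KV v2 mount; not part of the test suite):

```python
def kv2_read_path(mount, path):
    # For KV v2, the HTTP read endpoint inserts 'data' between the
    # mount point and the secret path.
    return '%s/data/%s' % (mount.strip('/'), path.strip('/'))

print(kv2_read_path('secret', 'testproject'))  # secret/data/testproject
```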
closed
ansible/ansible
https://github.com/ansible/ansible
64,150
unable to read from KV secrets engine vault enterprise
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> Hi, i am trying to read a secret from the following path secret: "{{ lookup('hashi_vault', 'namespace={{NAMESPACE}} secret={{ BRAND }}/{{ ENVIRONMENT }}/secrets/placeholder:foo url={{ VAULT_ADDR }} auth_method={{ VAULT_AUTH_METHOD }} role_id={{ VAULT_ROLE_ID }} secret_id={{ VAULT_SECRET_ID }}')}}" but this fails with no secret found at location. we are using enterprise and this uses the KV engine vault kv get {{ BRAND }}/{{ ENVIRONMENT }}/secrets/placeholder:foo ##### ISSUE TYPE - Bug Report unable to read from kv data store ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> hashi_vault ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below 2.8 ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below ```
https://github.com/ansible/ansible/issues/64150
https://github.com/ansible/ansible/pull/64288
e74cf5e4b300aa8ce6c88a967298fc26ee471e42
8daa42bb3d30ec758105c5661967ec2c13f73a5d
2019-10-31T13:54:20Z
python
2019-11-25T05:18:49Z
test/integration/targets/lookup_hashi_vault/lookup_hashi_vault/tasks/approle_test.yml
- vars: role_id: '{{ role_id_cmd.stdout }}' secret_id: '{{ secret_id_cmd.stdout }}' block: - name: 'Fetch secrets using "hashi_vault" lookup' set_fact: secret1: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_base_path ~ '/secret1 auth_method=approle secret_id=' ~ secret_id ~ ' role_id=' ~ role_id) }}" secret2: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_base_path ~ '/secret2 auth_method=approle secret_id=' ~ secret_id ~ ' role_id=' ~ role_id) }}" - name: 'Check secret values' fail: msg: 'unexpected secret values' when: secret1['data']['value'] != 'foo1' or secret2['data']['value'] != 'foo2' - name: 'Failure expected when erroneous credentials are used' vars: secret_wrong_cred: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_base_path ~ '/secret2 auth_method=approle secret_id=toto role_id=' ~ role_id) }}" debug: msg: 'Failure is expected ({{ secret_wrong_cred }})' register: test_wrong_cred ignore_errors: true - name: 'Failure expected when unauthorized secret is read' vars: secret_unauthorized: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_base_path ~ '/secret3 auth_method=approle secret_id=' ~ secret_id ~ ' role_id=' ~ role_id) }}" debug: msg: 'Failure is expected ({{ secret_unauthorized }})' register: test_unauthorized ignore_errors: true - name: 'Failure expected when inexistent secret is read' vars: secret_inexistent: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_base_path ~ '/secret4 auth_method=approle secret_id=' ~ secret_id ~ ' role_id=' ~ role_id) }}" debug: msg: 'Failure is expected ({{ secret_inexistent }})' register: test_inexistent ignore_errors: true - name: 'Check expected failures' assert: msg: "an expected failure didn't occur" that: - test_wrong_cred is failed - test_unauthorized is failed - test_inexistent is failed
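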
closed
ansible/ansible
https://github.com/ansible/ansible
64,150
unable to read from KV secrets engine vault enterprise
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> Hi, i am trying to read a secret from the following path secret: "{{ lookup('hashi_vault', 'namespace={{NAMESPACE}} secret={{ BRAND }}/{{ ENVIRONMENT }}/secrets/placeholder:foo url={{ VAULT_ADDR }} auth_method={{ VAULT_AUTH_METHOD }} role_id={{ VAULT_ROLE_ID }} secret_id={{ VAULT_SECRET_ID }}')}}" but this fails with no secret found at location. we are using enterprise and this uses the KV engine vault kv get {{ BRAND }}/{{ ENVIRONMENT }}/secrets/placeholder:foo ##### ISSUE TYPE - Bug Report unable to read from kv data store ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> hashi_vault ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below 2.8 ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below ```
https://github.com/ansible/ansible/issues/64150
https://github.com/ansible/ansible/pull/64288
e74cf5e4b300aa8ce6c88a967298fc26ee471e42
8daa42bb3d30ec758105c5661967ec2c13f73a5d
2019-10-31T13:54:20Z
python
2019-11-25T05:18:49Z
test/integration/targets/lookup_hashi_vault/lookup_hashi_vault/tasks/main.yml
--- - name: Install Hashi Vault on controlled node and test vars: vault_version: '0.11.0' vault_uri: 'https://ansible-ci-files.s3.amazonaws.com/test/integration/targets/lookup_hashi_vault/vault_{{ vault_version }}_{{ ansible_system | lower }}_{{ vault_arch }}.zip' vault_cmd: '{{ local_temp_dir }}/vault' block: - name: Create a local temporary directory tempfile: state: directory register: tempfile_result - set_fact: local_temp_dir: '{{ tempfile_result.path }}' - when: pyopenssl_version.stdout is version('0.15', '>=') block: - name: Generate privatekey openssl_privatekey: path: '{{ local_temp_dir }}/privatekey.pem' - name: Generate CSR openssl_csr: path: '{{ local_temp_dir }}/csr.csr' privatekey_path: '{{ local_temp_dir }}/privatekey.pem' subject: commonName: localhost - name: Generate selfsigned certificate openssl_certificate: path: '{{ local_temp_dir }}/cert.pem' csr_path: '{{ local_temp_dir }}/csr.csr' privatekey_path: '{{ local_temp_dir }}/privatekey.pem' provider: selfsigned selfsigned_digest: sha256 register: selfsigned_certificate - name: 'Install unzip' package: name: unzip when: ansible_distribution != "MacOSX" # unzip already installed - assert: # Linux: x86_64, FreeBSD: amd64 that: ansible_architecture in ['i386', 'x86_64', 'amd64'] - set_fact: vault_arch: '386' when: ansible_architecture == 'i386' - set_fact: vault_arch: amd64 when: ansible_architecture in ['x86_64', 'amd64'] - name: 'Download vault binary' unarchive: src: '{{ vault_uri }}' dest: '{{ local_temp_dir }}' remote_src: true - environment: # used by vault command VAULT_DEV_ROOT_TOKEN_ID: '47542cbc-6bf8-4fba-8eda-02e0a0d29a0a' block: - name: 'Create configuration file' template: src: vault_config.hcl.j2 dest: '{{ local_temp_dir }}/vault_config.hcl' - name: 'Start vault service' environment: VAULT_ADDR: 'http://localhost:8200' block: - name: 'Start vault server (dev mode enabled)' shell: 'nohup {{ vault_cmd }} server -dev -config {{ local_temp_dir }}/vault_config.hcl </dev/null >/dev/null 2>&1 
&' - name: 'Create a test policy' shell: "echo '{{ policy }}' | {{ vault_cmd }} policy write test-policy -" vars: policy: | path "{{ vault_base_path }}/secret1" { capabilities = ["read"] } path "{{ vault_base_path }}/secret2" { capabilities = ["read", "update"] } path "{{ vault_base_path }}/secret3" { capabilities = ["deny"] } - name: 'Create secrets' command: '{{ vault_cmd }} kv put {{ vault_base_path_kv }}/secret{{ item }} value=foo{{ item }}' loop: [1, 2, 3] - name: setup approle auth import_tasks: approle_setup.yml when: ansible_distribution != 'RedHat' or ansible_distribution_major_version is version('7', '>') - name: setup token auth import_tasks: token_setup.yml - import_tasks: tests.yml vars: auth_type: approle when: ansible_distribution != 'RedHat' or ansible_distribution_major_version is version('7', '>') - import_tasks: tests.yml vars: auth_type: token always: - name: 'Kill vault process' shell: "kill $(cat {{ local_temp_dir }}/vault.pid)" ignore_errors: true always: - name: 'Delete temp dir' file: path: '{{ local_temp_dir }}' state: absent
closed
ansible/ansible
https://github.com/ansible/ansible
64,150
unable to read from KV secrets engine vault enterprise
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> Hi, i am trying to read a secret from the following path secret: "{{ lookup('hashi_vault', 'namespace={{NAMESPACE}} secret={{ BRAND }}/{{ ENVIRONMENT }}/secrets/placeholder:foo url={{ VAULT_ADDR }} auth_method={{ VAULT_AUTH_METHOD }} role_id={{ VAULT_ROLE_ID }} secret_id={{ VAULT_SECRET_ID }}')}}" but this fails with no secret found at location. we are using enterprise and this uses the KV engine vault kv get {{ BRAND }}/{{ ENVIRONMENT }}/secrets/placeholder:foo ##### ISSUE TYPE - Bug Report unable to read from kv data store ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> hashi_vault ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below 2.8 ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below ```
https://github.com/ansible/ansible/issues/64150
https://github.com/ansible/ansible/pull/64288
e74cf5e4b300aa8ce6c88a967298fc26ee471e42
8daa42bb3d30ec758105c5661967ec2c13f73a5d
2019-10-31T13:54:20Z
python
2019-11-25T05:18:49Z
test/integration/targets/lookup_hashi_vault/lookup_hashi_vault/tasks/token_test.yml
- vars: user_token: '{{ user_token_cmd.stdout }}' block: - name: 'Fetch secrets using "hashi_vault" lookup' set_fact: secret1: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_base_path ~ '/secret1 auth_method=token token=' ~ user_token) }}" secret2: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_base_path ~ '/secret2 token=' ~ user_token) }}" secret3: "{{ lookup('hashi_vault', conn_params ~ ' secret=' ~ vault_base_path ~ '/secret2 token=' ~ user_token) }}" - name: 'Check secret values' fail: msg: 'unexpected secret values' when: secret1['data']['value'] != 'foo1' or secret2['data']['value'] != 'foo2' or secret3['data']['value'] != 'foo2' - name: 'Failure expected when erroneous credentials are used' vars: secret_wrong_cred: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_base_path ~ '/secret2 auth_method=token token=wrong_token') }}" debug: msg: 'Failure is expected ({{ secret_wrong_cred }})' register: test_wrong_cred ignore_errors: true - name: 'Failure expected when unauthorized secret is read' vars: secret_unauthorized: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_base_path ~ '/secret3 token=' ~ user_token) }}" debug: msg: 'Failure is expected ({{ secret_unauthorized }})' register: test_unauthorized ignore_errors: true - name: 'Failure expected when inexistent secret is read' vars: secret_inexistent: "{{ lookup('hashi_vault', conn_params ~ 'secret=' ~ vault_base_path ~ '/secret4 token=' ~ user_token) }}" debug: msg: 'Failure is expected ({{ secret_inexistent }})' register: test_inexistent ignore_errors: true - name: 'Check expected failures' assert: msg: "an expected failure didn't occur" that: - test_wrong_cred is failed - test_unauthorized is failed - test_inexistent is failed
closed
ansible/ansible
https://github.com/ansible/ansible
65,223
LibraryError doesn't exist in the module ansible.module_utils.postgres
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY Line 556 in the current file `ansible/modules/database/postgresql/postgresql_db.py` references an exception LibraryError, that doesn't exist. This leads to the code to stop at that point instead of showing the actual error. The error message is `AttributeError: 'module' object has no attribute 'LibraryError`. This line seems to be new since 2.9 version. It is still existent in the current master code. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME postgresql ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.9.1 config file = /home/gerhard/projects/localserver/ansible.cfg configured module search path = ['/home/gerhard/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/gerhard/projects/localserver/ansible/lib/python3.7/site-packages/ansible executable location = ansible/bin/ansible python version = 3.7.5 (default, Oct 27 2019, 15:43:29) [GCC 9.2.1 20191022] ``` but the line is also in the current HEAD. 
##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ALLOW_WORLD_READABLE_TMPFILES(/home/gerhard/projects/localserver/ansible.cfg) = True DEFAULT_BECOME(/home/gerhard/projects/localserver/ansible.cfg) = True DEFAULT_BECOME_ASK_PASS(/home/gerhard/projects/localserver/ansible.cfg) = False DEFAULT_BECOME_USER(/home/gerhard/projects/localserver/ansible.cfg) = root DEFAULT_REMOTE_USER(/home/gerhard/projects/localserver/ansible.cfg) = gerhard DEFAULT_ROLES_PATH(/home/gerhard/projects/localserver/ansible.cfg) = ['/home/gerhard/projects/localserver/external/roles'] ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> Linux ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml - name: Create database postgresql_db: name: testdatabase state: present ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> The database should be created. For some reasons there seems to be a configuration error though, so access to the database doesn't seem to work. I would now expect an error message regarding authentication to the database. ##### ACTUAL RESULTS <!--- Describe what actually happened. 
If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below failed: [planet.jerri.home] (item=mayanedms) => { "ansible_loop_var": "item", "changed": false, "item": "mayanedms", "module_stderr": "Shared connection to planet.jerri.home closed.\r\n", "module_stdout": "\r\nTraceback (most recent call last):\r\n File \"/home/gerhard/.ansible/tmp/ansible-tmp-1574545223.9061017-167794219070878/AnsiballZ_postgresql_db.py\", line 102, in <module>\r\n _ansiballz_main()\r\n File \"/home/gerhard/.ansible/tmp/ansible-tmp-1574545223.9061017-167794219070878/AnsiballZ_postgresql_db.py\", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/gerhard/.ansible/tmp/ansible-tmp-1574545223.9061017-167794219070878/AnsiballZ_postgresql_db.py\", line 40, in invoke_module\r\n runpy.run_module(mod_name='ansible.modules.database.postgresql.postgresql_db', init_globals=None, run_name='__main__', alter_sys=True)\r\n File \"/usr/lib/python2.7/runpy.py\", line 188, in run_module\r\n fname, loader, pkg_name)\r\n File \"/usr/lib/python2.7/runpy.py\", line 82, in _run_module_code\r\n mod_name, mod_fname, mod_loader, pkg_name)\r\n File \"/usr/lib/python2.7/runpy.py\", line 72, in _run_code\r\n exec code in run_globals\r\n File \"/tmp/ansible_postgresql_db_payload_7L6Qx4/ansible_postgresql_db_payload.zip/ansible/modules/database/postgresql/postgresql_db.py\", line 611, in <module>\r\n File \"/tmp/ansible_postgresql_db_payload_7L6Qx4/ansible_postgresql_db_payload.zip/ansible/modules/database/postgresql/postgresql_db.py\", line 550, in main\r\nAttributeError: 'module' object has no attribute 'LibraryError'\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1 } ```
https://github.com/ansible/ansible/issues/65223
https://github.com/ansible/ansible/pull/65229
89954e6ef51c10e42b13613ee6e26983d76d82c9
5f8ec4d46e1fec49eca5c2f351141ed5da6d259e
2019-11-23T21:41:28Z
python
2019-11-25T09:42:18Z
changelogs/fragments/65223-postgresql_db-exception-added.yml
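The traceback in this record is the signature of a missing exception class: `except pgutils.LibraryError` raises AttributeError at lookup time, masking the real connection error. The fix in the linked PR defines the class in `module_utils.postgres`; a self-contained sketch of the shape of that fix (the check function and its message are hypothetical stand-ins, not the module's actual code):

```python
class LibraryError(Exception):
    """Stand-in for the class the PR adds to module_utils.postgres."""


def ensure_required_libs(has_psycopg2):
    # Hypothetical stand-in for the module's library/connection check.
    if not has_psycopg2:
        raise LibraryError('psycopg2 must be installed to use this module.')


try:
    ensure_required_libs(has_psycopg2=False)
except LibraryError as e:
    # With the class defined, the except clause works and the module
    # can fail with a clear message instead of an AttributeError.
    print('caught: %s' % e)
```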
closed
ansible/ansible
https://github.com/ansible/ansible
65,223
LibraryError doesn't exist in the module ansible.module_utils.postgres
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY Line 556 in the current file `ansible/modules/database/postgresql/postgresql_db.py` references an exception LibraryError, that doesn't exist. This leads to the code to stop at that point instead of showing the actual error. The error message is `AttributeError: 'module' object has no attribute 'LibraryError`. This line seems to be new since 2.9 version. It is still existent in the current master code. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME postgresql ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.9.1 config file = /home/gerhard/projects/localserver/ansible.cfg configured module search path = ['/home/gerhard/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/gerhard/projects/localserver/ansible/lib/python3.7/site-packages/ansible executable location = ansible/bin/ansible python version = 3.7.5 (default, Oct 27 2019, 15:43:29) [GCC 9.2.1 20191022] ``` but the line is also in the current HEAD. 
##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ALLOW_WORLD_READABLE_TMPFILES(/home/gerhard/projects/localserver/ansible.cfg) = True DEFAULT_BECOME(/home/gerhard/projects/localserver/ansible.cfg) = True DEFAULT_BECOME_ASK_PASS(/home/gerhard/projects/localserver/ansible.cfg) = False DEFAULT_BECOME_USER(/home/gerhard/projects/localserver/ansible.cfg) = root DEFAULT_REMOTE_USER(/home/gerhard/projects/localserver/ansible.cfg) = gerhard DEFAULT_ROLES_PATH(/home/gerhard/projects/localserver/ansible.cfg) = ['/home/gerhard/projects/localserver/external/roles'] ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> Linux ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ```yaml - name: Create database postgresql_db: name: testdatabase state: present ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- Describe what you expected to happen when running the steps above --> The database should be created. For some reasons there seems to be a configuration error though, so access to the database doesn't seem to work. I would now expect an error message regarding authentication to the database. ##### ACTUAL RESULTS <!--- Describe what actually happened. 
If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below failed: [planet.jerri.home] (item=mayanedms) => { "ansible_loop_var": "item", "changed": false, "item": "mayanedms", "module_stderr": "Shared connection to planet.jerri.home closed.\r\n", "module_stdout": "\r\nTraceback (most recent call last):\r\n File \"/home/gerhard/.ansible/tmp/ansible-tmp-1574545223.9061017-167794219070878/AnsiballZ_postgresql_db.py\", line 102, in <module>\r\n _ansiballz_main()\r\n File \"/home/gerhard/.ansible/tmp/ansible-tmp-1574545223.9061017-167794219070878/AnsiballZ_postgresql_db.py\", line 94, in _ansiballz_main\r\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n File \"/home/gerhard/.ansible/tmp/ansible-tmp-1574545223.9061017-167794219070878/AnsiballZ_postgresql_db.py\", line 40, in invoke_module\r\n runpy.run_module(mod_name='ansible.modules.database.postgresql.postgresql_db', init_globals=None, run_name='__main__', alter_sys=True)\r\n File \"/usr/lib/python2.7/runpy.py\", line 188, in run_module\r\n fname, loader, pkg_name)\r\n File \"/usr/lib/python2.7/runpy.py\", line 82, in _run_module_code\r\n mod_name, mod_fname, mod_loader, pkg_name)\r\n File \"/usr/lib/python2.7/runpy.py\", line 72, in _run_code\r\n exec code in run_globals\r\n File \"/tmp/ansible_postgresql_db_payload_7L6Qx4/ansible_postgresql_db_payload.zip/ansible/modules/database/postgresql/postgresql_db.py\", line 611, in <module>\r\n File \"/tmp/ansible_postgresql_db_payload_7L6Qx4/ansible_postgresql_db_payload.zip/ansible/modules/database/postgresql/postgresql_db.py\", line 550, in main\r\nAttributeError: 'module' object has no attribute 'LibraryError'\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1 } ```
https://github.com/ansible/ansible/issues/65223
https://github.com/ansible/ansible/pull/65229
89954e6ef51c10e42b13613ee6e26983d76d82c9
5f8ec4d46e1fec49eca5c2f351141ed5da6d259e
2019-11-23T21:41:28Z
python
2019-11-25T09:42:18Z
lib/ansible/modules/database/postgresql/postgresql_db.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright: Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['stableinterface'], 'supported_by': 'community'} DOCUMENTATION = r''' --- module: postgresql_db short_description: Add or remove PostgreSQL databases from a remote host. description: - Add or remove PostgreSQL databases from a remote host. version_added: '0.6' options: name: description: - Name of the database to add or remove type: str required: true aliases: [ db ] port: description: - Database port to connect (if needed) type: int default: 5432 aliases: - login_port owner: description: - Name of the role to set as owner of the database type: str template: description: - Template used to create the database type: str encoding: description: - Encoding of the database type: str lc_collate: description: - Collation order (LC_COLLATE) to use in the database. Must match collation order of template database unless C(template0) is used as template. type: str lc_ctype: description: - Character classification (LC_CTYPE) to use in the database (e.g. lower, upper, ...) Must match LC_CTYPE of template database unless C(template0) is used as template. type: str session_role: description: - Switch to session_role after connecting. The specified session_role must be a role that the current login_user is a member of. - Permissions checking for SQL commands is carried out as though the session_role were the one that had logged in originally. type: str version_added: '2.8' state: description: - The database state. - C(present) implies that the database should be created if necessary. - C(absent) implies that the database should be removed if present. - C(dump) requires a target definition to which the database will be backed up. 
(Added in Ansible 2.4) Note that in some PostgreSQL versions of pg_dump, which is an embedded PostgreSQL utility and is used by the module, returns rc 0 even when errors occurred (e.g. the connection is forbidden by pg_hba.conf, etc.), so the module returns changed=True but the dump has not actually been done. Please, be sure that your version of pg_dump returns rc 1 in this case. - C(restore) also requires a target definition from which the database will be restored. (Added in Ansible 2.4) - The format of the backup will be detected based on the target name. - Supported compression formats for dump and restore include C(.pgc), C(.bz2), C(.gz) and C(.xz) - Supported formats for dump and restore include C(.sql) and C(.tar) type: str choices: [ absent, dump, present, restore ] default: present target: description: - File to back up or restore from. - Used when I(state) is C(dump) or C(restore). type: path version_added: '2.4' target_opts: description: - Further arguments for pg_dump or pg_restore. - Used when I(state) is C(dump) or C(restore). type: str version_added: '2.4' maintenance_db: description: - The value specifies the initial database (which is also called as maintenance DB) that Ansible connects to. type: str default: postgres version_added: '2.5' conn_limit: description: - Specifies the database connection limit. type: str version_added: '2.8' tablespace: description: - The tablespace to set for the database U(https://www.postgresql.org/docs/current/sql-alterdatabase.html). - If you want to move the database back to the default tablespace, explicitly set this to pg_default. type: path version_added: '2.9' seealso: - name: CREATE DATABASE reference description: Complete reference of the CREATE DATABASE command documentation. link: https://www.postgresql.org/docs/current/sql-createdatabase.html - name: DROP DATABASE reference description: Complete reference of the DROP DATABASE command documentation. 
link: https://www.postgresql.org/docs/current/sql-dropdatabase.html - name: pg_dump reference description: Complete reference of pg_dump documentation. link: https://www.postgresql.org/docs/current/app-pgdump.html - name: pg_restore reference description: Complete reference of pg_restore documentation. link: https://www.postgresql.org/docs/current/app-pgrestore.html - module: postgresql_tablespace - module: postgresql_info - module: postgresql_ping notes: - State C(dump) and C(restore) don't require I(psycopg2) since version 2.8. author: "Ansible Core Team" extends_documentation_fragment: - postgres ''' EXAMPLES = r''' - name: Create a new database with name "acme" postgresql_db: name: acme # Note: If a template different from "template0" is specified, encoding and locale settings must match those of the template. - name: Create a new database with name "acme" and specific encoding and locale # settings. postgresql_db: name: acme encoding: UTF-8 lc_collate: de_DE.UTF-8 lc_ctype: de_DE.UTF-8 template: template0 # Note: Default limit for the number of concurrent connections to a specific database is "-1", which means "unlimited" - name: Create a new database with name "acme" which has a limit of 100 concurrent connections postgresql_db: name: acme conn_limit: "100" - name: Dump an existing database to a file postgresql_db: name: acme state: dump target: /tmp/acme.sql - name: Dump an existing database to a file (with compression) postgresql_db: name: acme state: dump target: /tmp/acme.sql.gz - name: Dump a single schema for an existing database postgresql_db: name: acme state: dump target: /tmp/acme.sql target_opts: "-n public" # Note: In the example below, if database foo exists and has another tablespace # the tablespace will be changed to foo. Access to the database will be locked # until the copying of database files is finished. 
- name: Create a new database called foo in tablespace bar postgresql_db: name: foo tablespace: bar ''' import os import subprocess import traceback try: import psycopg2 import psycopg2.extras except ImportError: HAS_PSYCOPG2 = False else: HAS_PSYCOPG2 = True import ansible.module_utils.postgres as pgutils from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.database import SQLParseError, pg_quote_identifier from ansible.module_utils.six import iteritems from ansible.module_utils.six.moves import shlex_quote from ansible.module_utils._text import to_native class NotSupportedError(Exception): pass # =========================================== # PostgreSQL module specific support methods. # def set_owner(cursor, db, owner): query = 'ALTER DATABASE %s OWNER TO "%s"' % ( pg_quote_identifier(db, 'database'), owner) cursor.execute(query) return True def set_conn_limit(cursor, db, conn_limit): query = "ALTER DATABASE %s CONNECTION LIMIT %s" % ( pg_quote_identifier(db, 'database'), conn_limit) cursor.execute(query) return True def get_encoding_id(cursor, encoding): query = "SELECT pg_char_to_encoding(%(encoding)s) AS encoding_id;" cursor.execute(query, {'encoding': encoding}) return cursor.fetchone()['encoding_id'] def get_db_info(cursor, db): query = """ SELECT rolname AS owner, pg_encoding_to_char(encoding) AS encoding, encoding AS encoding_id, datcollate AS lc_collate, datctype AS lc_ctype, pg_database.datconnlimit AS conn_limit, spcname AS tablespace FROM pg_database JOIN pg_roles ON pg_roles.oid = pg_database.datdba JOIN pg_tablespace ON pg_tablespace.oid = pg_database.dattablespace WHERE datname = %(db)s """ cursor.execute(query, {'db': db}) return cursor.fetchone() def db_exists(cursor, db): query = "SELECT * FROM pg_database WHERE datname=%(db)s" cursor.execute(query, {'db': db}) return cursor.rowcount == 1 def db_delete(cursor, db): if db_exists(cursor, db): query = "DROP DATABASE %s" % pg_quote_identifier(db, 'database') 
cursor.execute(query) return True else: return False def db_create(cursor, db, owner, template, encoding, lc_collate, lc_ctype, conn_limit, tablespace): params = dict(enc=encoding, collate=lc_collate, ctype=lc_ctype, conn_limit=conn_limit, tablespace=tablespace) if not db_exists(cursor, db): query_fragments = ['CREATE DATABASE %s' % pg_quote_identifier(db, 'database')] if owner: query_fragments.append('OWNER "%s"' % owner) if template: query_fragments.append('TEMPLATE %s' % pg_quote_identifier(template, 'database')) if encoding: query_fragments.append('ENCODING %(enc)s') if lc_collate: query_fragments.append('LC_COLLATE %(collate)s') if lc_ctype: query_fragments.append('LC_CTYPE %(ctype)s') if tablespace: query_fragments.append('TABLESPACE %s' % pg_quote_identifier(tablespace, 'tablespace')) if conn_limit: query_fragments.append("CONNECTION LIMIT %(conn_limit)s" % {"conn_limit": conn_limit}) query = ' '.join(query_fragments) cursor.execute(query, params) return True else: db_info = get_db_info(cursor, db) if (encoding and get_encoding_id(cursor, encoding) != db_info['encoding_id']): raise NotSupportedError( 'Changing database encoding is not supported. ' 'Current encoding: %s' % db_info['encoding'] ) elif lc_collate and lc_collate != db_info['lc_collate']: raise NotSupportedError( 'Changing LC_COLLATE is not supported. ' 'Current LC_COLLATE: %s' % db_info['lc_collate'] ) elif lc_ctype and lc_ctype != db_info['lc_ctype']: raise NotSupportedError( 'Changing LC_CTYPE is not supported.' 
'Current LC_CTYPE: %s' % db_info['lc_ctype'] ) else: changed = False if owner and owner != db_info['owner']: changed = set_owner(cursor, db, owner) if conn_limit and conn_limit != str(db_info['conn_limit']): changed = set_conn_limit(cursor, db, conn_limit) if tablespace and tablespace != db_info['tablespace']: changed = set_tablespace(cursor, db, tablespace) return changed def db_matches(cursor, db, owner, template, encoding, lc_collate, lc_ctype, conn_limit, tablespace): if not db_exists(cursor, db): return False else: db_info = get_db_info(cursor, db) if (encoding and get_encoding_id(cursor, encoding) != db_info['encoding_id']): return False elif lc_collate and lc_collate != db_info['lc_collate']: return False elif lc_ctype and lc_ctype != db_info['lc_ctype']: return False elif owner and owner != db_info['owner']: return False elif conn_limit and conn_limit != str(db_info['conn_limit']): return False elif tablespace and tablespace != db_info['tablespace']: return False else: return True def db_dump(module, target, target_opts="", db=None, user=None, password=None, host=None, port=None, **kw): flags = login_flags(db, host, port, user, db_prefix=False) cmd = module.get_bin_path('pg_dump', True) comp_prog_path = None if os.path.splitext(target)[-1] == '.tar': flags.append(' --format=t') elif os.path.splitext(target)[-1] == '.pgc': flags.append(' --format=c') if os.path.splitext(target)[-1] == '.gz': if module.get_bin_path('pigz'): comp_prog_path = module.get_bin_path('pigz', True) else: comp_prog_path = module.get_bin_path('gzip', True) elif os.path.splitext(target)[-1] == '.bz2': comp_prog_path = module.get_bin_path('bzip2', True) elif os.path.splitext(target)[-1] == '.xz': comp_prog_path = module.get_bin_path('xz', True) cmd += "".join(flags) if target_opts: cmd += " {0} ".format(target_opts) if comp_prog_path: # Use a fifo to be notified of an error in pg_dump # Using shell pipe has no way to return the code of the first command # in a portable way. 
fifo = os.path.join(module.tmpdir, 'pg_fifo') os.mkfifo(fifo) cmd = '{1} <{3} > {2} & {0} >{3}'.format(cmd, comp_prog_path, shlex_quote(target), fifo) else: cmd = '{0} > {1}'.format(cmd, shlex_quote(target)) return do_with_password(module, cmd, password) def db_restore(module, target, target_opts="", db=None, user=None, password=None, host=None, port=None, **kw): flags = login_flags(db, host, port, user) comp_prog_path = None cmd = module.get_bin_path('psql', True) if os.path.splitext(target)[-1] == '.sql': flags.append(' --file={0}'.format(target)) elif os.path.splitext(target)[-1] == '.tar': flags.append(' --format=Tar') cmd = module.get_bin_path('pg_restore', True) elif os.path.splitext(target)[-1] == '.pgc': flags.append(' --format=Custom') cmd = module.get_bin_path('pg_restore', True) elif os.path.splitext(target)[-1] == '.gz': comp_prog_path = module.get_bin_path('zcat', True) elif os.path.splitext(target)[-1] == '.bz2': comp_prog_path = module.get_bin_path('bzcat', True) elif os.path.splitext(target)[-1] == '.xz': comp_prog_path = module.get_bin_path('xzcat', True) cmd += "".join(flags) if target_opts: cmd += " {0} ".format(target_opts) if comp_prog_path: env = os.environ.copy() if password: env = {"PGPASSWORD": password} p1 = subprocess.Popen([comp_prog_path, target], stdout=subprocess.PIPE, stderr=subprocess.PIPE) p2 = subprocess.Popen(cmd, stdin=p1.stdout, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, env=env) (stdout2, stderr2) = p2.communicate() p1.stdout.close() p1.wait() if p1.returncode != 0: stderr1 = p1.stderr.read() return p1.returncode, '', stderr1, 'cmd: ****' else: return p2.returncode, '', stderr2, 'cmd: ****' else: cmd = '{0} < {1}'.format(cmd, shlex_quote(target)) return do_with_password(module, cmd, password) def login_flags(db, host, port, user, db_prefix=True): """ returns a list of connection argument strings each prefixed with a space and quoted where necessary to later be combined in a single shell string with 
`"".join(rv)` db_prefix determines if "--dbname" is prefixed to the db argument, since the argument was introduced in 9.3. """ flags = [] if db: if db_prefix: flags.append(' --dbname={0}'.format(shlex_quote(db))) else: flags.append(' {0}'.format(shlex_quote(db))) if host: flags.append(' --host={0}'.format(host)) if port: flags.append(' --port={0}'.format(port)) if user: flags.append(' --username={0}'.format(user)) return flags def do_with_password(module, cmd, password): env = {} if password: env = {"PGPASSWORD": password} rc, stderr, stdout = module.run_command(cmd, use_unsafe_shell=True, environ_update=env) return rc, stderr, stdout, cmd def set_tablespace(cursor, db, tablespace): query = "ALTER DATABASE %s SET TABLESPACE %s" % ( pg_quote_identifier(db, 'database'), pg_quote_identifier(tablespace, 'tablespace')) cursor.execute(query) return True # =========================================== # Module execution. # def main(): argument_spec = pgutils.postgres_common_argument_spec() argument_spec.update( db=dict(type='str', required=True, aliases=['name']), owner=dict(type='str', default=''), template=dict(type='str', default=''), encoding=dict(type='str', default=''), lc_collate=dict(type='str', default=''), lc_ctype=dict(type='str', default=''), state=dict(type='str', default='present', choices=['absent', 'dump', 'present', 'restore']), target=dict(type='path', default=''), target_opts=dict(type='str', default=''), maintenance_db=dict(type='str', default="postgres"), session_role=dict(type='str'), conn_limit=dict(type='str', default=''), tablespace=dict(type='path', default=''), ) module = AnsibleModule( argument_spec=argument_spec, supports_check_mode=True ) db = module.params["db"] owner = module.params["owner"] template = module.params["template"] encoding = module.params["encoding"] lc_collate = module.params["lc_collate"] lc_ctype = module.params["lc_ctype"] target = module.params["target"] target_opts = module.params["target_opts"] state = 
module.params["state"] changed = False maintenance_db = module.params['maintenance_db'] session_role = module.params["session_role"] conn_limit = module.params['conn_limit'] tablespace = module.params['tablespace'] raw_connection = state in ("dump", "restore") if not raw_connection: pgutils.ensure_required_libs(module) # To use defaults values, keyword arguments must be absent, so # check which values are empty and don't include in the **kw # dictionary params_map = { "login_host": "host", "login_user": "user", "login_password": "password", "port": "port", "ssl_mode": "sslmode", "ca_cert": "sslrootcert" } kw = dict((params_map[k], v) for (k, v) in iteritems(module.params) if k in params_map and v != '' and v is not None) # If a login_unix_socket is specified, incorporate it here. is_localhost = "host" not in kw or kw["host"] == "" or kw["host"] == "localhost" if is_localhost and module.params["login_unix_socket"] != "": kw["host"] = module.params["login_unix_socket"] if target == "": target = "{0}/{1}.sql".format(os.getcwd(), db) target = os.path.expanduser(target) if not raw_connection: try: db_connection = psycopg2.connect(database=maintenance_db, **kw) # Enable autocommit so we can create databases if psycopg2.__version__ >= '2.4.2': db_connection.autocommit = True else: db_connection.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT) cursor = db_connection.cursor(cursor_factory=psycopg2.extras.DictCursor) except pgutils.LibraryError as e: module.fail_json(msg="unable to connect to database: {0}".format(to_native(e)), exception=traceback.format_exc()) except TypeError as e: if 'sslrootcert' in e.args[0]: module.fail_json(msg='Postgresql server must be at least version 8.4 to support sslrootcert. 
Exception: {0}'.format(to_native(e)), exception=traceback.format_exc()) module.fail_json(msg="unable to connect to database: %s" % to_native(e), exception=traceback.format_exc()) except Exception as e: module.fail_json(msg="unable to connect to database: %s" % to_native(e), exception=traceback.format_exc()) if session_role: try: cursor.execute('SET ROLE "%s"' % session_role) except Exception as e: module.fail_json(msg="Could not switch role: %s" % to_native(e), exception=traceback.format_exc()) try: if module.check_mode: if state == "absent": changed = db_exists(cursor, db) elif state == "present": changed = not db_matches(cursor, db, owner, template, encoding, lc_collate, lc_ctype, conn_limit, tablespace) module.exit_json(changed=changed, db=db) if state == "absent": try: changed = db_delete(cursor, db) except SQLParseError as e: module.fail_json(msg=to_native(e), exception=traceback.format_exc()) elif state == "present": try: changed = db_create(cursor, db, owner, template, encoding, lc_collate, lc_ctype, conn_limit, tablespace) except SQLParseError as e: module.fail_json(msg=to_native(e), exception=traceback.format_exc()) elif state in ("dump", "restore"): method = state == "dump" and db_dump or db_restore try: rc, stdout, stderr, cmd = method(module, target, target_opts, db, **kw) if rc != 0: module.fail_json(msg=stderr, stdout=stdout, rc=rc, cmd=cmd) else: module.exit_json(changed=True, msg=stdout, stderr=stderr, rc=rc, cmd=cmd) except SQLParseError as e: module.fail_json(msg=to_native(e), exception=traceback.format_exc()) except NotSupportedError as e: module.fail_json(msg=to_native(e), exception=traceback.format_exc()) except SystemExit: # Avoid catching this on Python 2.4 raise except Exception as e: module.fail_json(msg="Database query failed: %s" % to_native(e), exception=traceback.format_exc()) module.exit_json(changed=changed, db=db) if __name__ == '__main__': main()
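The `login_flags` helper in the module above is easy to exercise on its own. This standalone sketch copies its logic verbatim (only `shlex.quote` is pulled from the stdlib) to show the flag strings the module hands to `psql`/`pg_dump`; `db_prefix=False` reflects that `pg_dump` takes the database as a positional argument:

```python
from shlex import quote as shlex_quote

def login_flags(db, host, port, user, db_prefix=True):
    """Build psql/pg_dump connection flags, mirroring the module's helper.

    Each flag is prefixed with a space so the caller can join them with
    "".join(flags); db_prefix controls --dbname= vs. a positional name.
    """
    flags = []
    if db:
        if db_prefix:
            flags.append(' --dbname={0}'.format(shlex_quote(db)))
        else:
            flags.append(' {0}'.format(shlex_quote(db)))
    if host:
        flags.append(' --host={0}'.format(host))
    if port:
        flags.append(' --port={0}'.format(port))
    if user:
        flags.append(' --username={0}'.format(user))
    return flags

# psql-style invocation (db passed via --dbname)
print(''.join(login_flags('acme', 'db1', 5432, 'admin')))
# pg_dump-style invocation (db passed positionally)
print(''.join(login_flags('acme', None, None, None, db_prefix=False)))
```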
closed
ansible/ansible
https://github.com/ansible/ansible
65,064
Regression - no longer returning vars plugin host_vars from ansible-inventory
##### SUMMARY If I have a host with a name like `foo` and have variables inside a file `host_vars/foo.yml` inside of a directory properly passed via `--playbook-dir` to the `ansible-inventory` command, vars from that file are not present in the output. This used to work. The regression was first seen the night of Nov 4, 2019 ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lib/ansible/cli/inventory.py ##### ANSIBLE VERSION ```paste below ansible --version ansible 2.10.0.dev0 config file = /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible executable location = /Users/alancoding/.virtualenvs/ansible3/bin/ansible python version = 3.6.5 (default, Apr 25 2018, 14:23:58) [GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.1)] ``` ##### CONFIGURATION ```paste below $ ansible-config dump --only-changed DEPRECATION_WARNINGS(/Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg) = False ``` ##### OS / ENVIRONMENT Mac OS ##### STEPS TO REPRODUCE https://github.com/ansible/test-playbooks/tree/master/inventories host of interest is `group_one_host_01` Inventory file: ``` $ cat inventories/inventory.ini ungrouped_host_[01:05] group_one_host_01 group_one_host_01_has_this_var=True group_two_host_01 group_two_host_01_has_this_var=True group_three_host_01 group_three_host_01_has_this_var=True [group_one] group_one_host_[01:05] group_one_and_two_host_[01:05] group_one_two_and_three_host_[01:05] [group_one:vars] is_in_group_one=True complex_var=[{"dir": "/opt/gwaf/logs", "sourcetype": "gwaf", "something_else": [1, 2, 3]}] [group_two] group_two_host_[01:05] group_one_and_two_host_[01:05] group_two_and_three_host_[01:05] group_one_two_and_three_host_[01:05] [group_two:vars] is_in_group_two=True [group_three] group_three_host_[01:05] 
group_two_and_three_host_[01:05] group_one_two_and_three_host_[01:05] [group_three:vars] is_in_group_three=True [all:vars] ansible_connection=local inventories_var=True ``` vars file: ``` $ cat inventories//host_vars/group_one_host_01 group_one_host_01_should_have_this_var: True ``` run: ``` ansible-inventory -i inventories/inventory.ini --list --export --playbook-dir=inventories/ | grep -r "group_one_host_01_should_have_this_var" ``` ##### EXPECTED RESULTS `group_one_host_01_should_have_this_var` should be in the hostvars for `group_one_host_01` ##### ACTUAL RESULTS The string `group_one_host_01_should_have_this_var` is not seen anywhere in the output.
https://github.com/ansible/ansible/issues/65064
https://github.com/ansible/ansible/pull/65073
8f1af618518b657b914f7ce24492e6f444f1c5e1
c1f280ba6e4a1e5867720e8c8426bc451ad32126
2019-11-19T15:19:54Z
python
2019-11-25T18:16:03Z
changelogs/fragments/65073-fix-inventory-cli-loading-vars-plugins.yaml
docs/docsite/rst/plugins/vars.rst
.. _vars_plugins: Vars Plugins ============ .. contents:: :local: :depth: 2 Vars plugins inject additional variable data into Ansible runs that did not come from an inventory source, playbook, or command line. Playbook constructs like 'host_vars' and 'group_vars' work using vars plugins. Vars plugins were partially implemented in Ansible 2.0 and rewritten to be fully implemented starting with Ansible 2.4. The :ref:`host_group_vars <host_group_vars_vars>` plugin shipped with Ansible enables reading variables from :ref:`host_variables` and :ref:`group_variables`. .. _enable_vars: Enabling vars plugins --------------------- You can activate a custom vars plugin by either dropping it into a ``vars_plugins`` directory adjacent to your play, inside a role, or by putting it in one of the directory sources configured in :ref:`ansible.cfg <ansible_configuration_settings>`. Starting in Ansible 2.10, vars plugins can require whitelisting rather than running by default. To enable a plugin that requires whitelisting set ``vars_plugins_enabled`` in the ``defaults`` section of :ref:`ansible.cfg <ansible_configuration_settings>` or set the ``ANSIBLE_VARS_ENABLED`` environment variable to the list of vars plugins you want to execute. By default, the :ref:`host_group_vars <host_group_vars_vars>` plugin shipped with Ansible is whitelisted. Starting in Ansible 2.10, you can use vars plugins in collections. All vars plugins in collections require whitelisting and need to use the fully qualified collection name in the format ``namespace.collection_name.vars_plugin_name``. .. code-block:: yaml [defaults] vars_plugins_enabled = host_group_vars,namespace.collection_name.vars_plugin_name .. _using_vars: Using vars plugins ------------------ By default, vars plugins are used on demand automatically after they are enabled. Starting in Ansible 2.10, vars plugins can be made to run at specific times. 
The global setting ``RUN_VARS_PLUGINS`` can be set in ``ansible.cfg`` using ``run_vars_plugins`` in the ``defaults`` section or by the ``ANSIBLE_RUN_VARS_PLUGINS`` environment variable. The default option, ``demand``, runs any enabled vars plugins relative to inventory sources whenever variables are demanded by tasks. You can use the option ``start`` to run any enabled vars plugins relative to inventory sources after importing that inventory source instead. You can also control vars plugin execution on a per-plugin basis for vars plugins that support the ``stage`` option. To run the :ref:`host_group_vars <host_group_vars_vars>` plugin after importing inventory you can add the following to :ref:`ansible.cfg <ansible_configuration_settings>`: .. code-block:: ini [vars_host_group_vars] stage = inventory .. _vars_plugin_list: Plugin Lists ------------ You can use ``ansible-doc -t vars -l`` to see the list of available plugins. Use ``ansible-doc -t vars <plugin name>`` to see plugin-specific documentation and examples. .. toctree:: :maxdepth: 1 :glob: vars/* .. seealso:: :ref:`action_plugins` Ansible Action plugins :ref:`cache_plugins` Ansible Cache plugins :ref:`callback_plugins` Ansible callback plugins :ref:`connection_plugins` Ansible connection plugins :ref:`inventory_plugins` Ansible inventory plugins :ref:`shell_plugins` Ansible Shell plugins :ref:`strategy_plugins` Ansible Strategy plugins `User Mailing List <https://groups.google.com/group/ansible-devel>`_ Have a question? Stop by the google group! `irc.freenode.net <http://irc.freenode.net>`_ #ansible IRC chat channel
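The host_vars/group_vars behaviour this documentation describes — and which the regression above broke for ``ansible-inventory`` — boils down to a file lookup next to the inventory source or playbook directory. The following is a rough standalone sketch of that lookup; the extension order is an assumption here, and the real ``host_group_vars`` plugin also handles directories of multiple files per host:

```python
import os
import tempfile

def find_vars_file(basedir, entity_name, subdir='host_vars'):
    # Try the bare name first, then common YAML/JSON extensions,
    # returning the first candidate file that exists on disk.
    for ext in ('', '.yml', '.yaml', '.json'):
        candidate = os.path.join(basedir, subdir, entity_name + ext)
        if os.path.isfile(candidate):
            return candidate
    return None

# Demo: a host_vars file named after the host is picked up.
playbook_dir = tempfile.mkdtemp()
os.makedirs(os.path.join(playbook_dir, 'host_vars'))
with open(os.path.join(playbook_dir, 'host_vars', 'group_one_host_01.yml'), 'w') as f:
    f.write('group_one_host_01_should_have_this_var: true\n')

found = find_vars_file(playbook_dir, 'group_one_host_01')
print(found)
```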
lib/ansible/cli/inventory.py
# Copyright: (c) 2017, Brian Coca <[email protected]> # Copyright: (c) 2018, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import argparse from operator import attrgetter from ansible import constants as C from ansible import context from ansible.cli import CLI from ansible.cli.arguments import option_helpers as opt_help from ansible.errors import AnsibleError, AnsibleOptionsError from ansible.inventory.host import Host from ansible.module_utils._text import to_bytes, to_native from ansible.plugins.loader import vars_loader from ansible.utils.vars import combine_vars from ansible.utils.display import Display from ansible.vars.plugins import get_vars_from_inventory_sources display = Display() INTERNAL_VARS = frozenset(['ansible_diff_mode', 'ansible_facts', 'ansible_forks', 'ansible_inventory_sources', 'ansible_limit', 'ansible_playbook_python', 'ansible_run_tags', 'ansible_skip_tags', 'ansible_verbosity', 'ansible_version', 'inventory_dir', 'inventory_file', 'inventory_hostname', 'inventory_hostname_short', 'groups', 'group_names', 'omit', 'playbook_dir', ]) class InventoryCLI(CLI): ''' used to display or dump the configured inventory as Ansible sees it ''' ARGUMENTS = {'host': 'The name of a host to match in the inventory, relevant when using --list', 'group': 'The name of a group in the inventory, relevant when using --graph', } def __init__(self, args): super(InventoryCLI, self).__init__(args) self.vm = None self.loader = None self.inventory = None def init_parser(self): super(InventoryCLI, self).init_parser( usage='usage: %prog [options] [host|group]', epilog='Show Ansible inventory information, by default it uses the inventory script JSON format') opt_help.add_inventory_options(self.parser) opt_help.add_vault_options(self.parser) opt_help.add_basedir_options(self.parser) # remove unused default options 
self.parser.add_argument('-l', '--limit', help=argparse.SUPPRESS, action=opt_help.UnrecognizedArgument, nargs='?') self.parser.add_argument('--list-hosts', help=argparse.SUPPRESS, action=opt_help.UnrecognizedArgument) self.parser.add_argument('args', metavar='host|group', nargs='?') # Actions action_group = self.parser.add_argument_group("Actions", "One of following must be used on invocation, ONLY ONE!") action_group.add_argument("--list", action="store_true", default=False, dest='list', help='Output all hosts info, works as inventory script') action_group.add_argument("--host", action="store", default=None, dest='host', help='Output specific host info, works as inventory script') action_group.add_argument("--graph", action="store_true", default=False, dest='graph', help='create inventory graph, if supplying pattern it must be a valid group name') self.parser.add_argument_group(action_group) # graph self.parser.add_argument("-y", "--yaml", action="store_true", default=False, dest='yaml', help='Use YAML format instead of default JSON, ignored for --graph') self.parser.add_argument('--toml', action='store_true', default=False, dest='toml', help='Use TOML format instead of default JSON, ignored for --graph') self.parser.add_argument("--vars", action="store_true", default=False, dest='show_vars', help='Add vars to graph display, ignored unless used with --graph') # list self.parser.add_argument("--export", action="store_true", default=C.INVENTORY_EXPORT, dest='export', help="When doing an --list, represent in a way that is optimized for export," "not as an accurate representation of how Ansible has processed it") self.parser.add_argument('--output', default=None, dest='output_file', help="When doing --list, send the inventory to a file instead of to the screen") # self.parser.add_argument("--ignore-vars-plugins", action="store_true", default=False, dest='ignore_vars_plugins', # help="When doing an --list, skip vars data from vars plugins, by default, this would 
include group_vars/ and host_vars/") def post_process_args(self, options): options = super(InventoryCLI, self).post_process_args(options) display.verbosity = options.verbosity self.validate_conflicts(options) # there can be only one! and, at least, one! used = 0 for opt in (options.list, options.host, options.graph): if opt: used += 1 if used == 0: raise AnsibleOptionsError("No action selected, at least one of --host, --graph or --list needs to be specified.") elif used > 1: raise AnsibleOptionsError("Conflicting options used, only one of --host, --graph or --list can be used at the same time.") # set host pattern to default if not supplied if options.args: options.pattern = options.args else: options.pattern = 'all' return options def run(self): super(InventoryCLI, self).run() # Initialize needed objects self.loader, self.inventory, self.vm = self._play_prereqs() results = None if context.CLIARGS['host']: hosts = self.inventory.get_hosts(context.CLIARGS['host']) if len(hosts) != 1: raise AnsibleOptionsError("You must pass a single valid host to --host parameter") myvars = self._get_host_variables(host=hosts[0]) # FIXME: should we template first? results = self.dump(myvars) elif context.CLIARGS['graph']: results = self.inventory_graph() elif context.CLIARGS['list']: top = self._get_group('all') if context.CLIARGS['yaml']: results = self.yaml_inventory(top) elif context.CLIARGS['toml']: results = self.toml_inventory(top) else: results = self.json_inventory(top) results = self.dump(results) if results: outfile = context.CLIARGS['output_file'] if outfile is None: # FIXME: pager? 
display.display(results) else: try: with open(to_bytes(outfile), 'wt') as f: f.write(results) except (OSError, IOError) as e: raise AnsibleError('Unable to write to destination file (%s): %s' % (to_native(outfile), to_native(e))) exit(0) exit(1) @staticmethod def dump(stuff): if context.CLIARGS['yaml']: import yaml from ansible.parsing.yaml.dumper import AnsibleDumper results = yaml.dump(stuff, Dumper=AnsibleDumper, default_flow_style=False) elif context.CLIARGS['toml']: from ansible.plugins.inventory.toml import toml_dumps, HAS_TOML if not HAS_TOML: raise AnsibleError( 'The python "toml" library is required when using the TOML output format' ) results = toml_dumps(stuff) else: import json from ansible.parsing.ajson import AnsibleJSONEncoder results = json.dumps(stuff, cls=AnsibleJSONEncoder, sort_keys=True, indent=4, preprocess_unsafe=True) return results def _get_group_variables(self, group): # get info from inventory source res = group.get_vars() res = combine_vars(res, get_vars_from_inventory_sources(self.loader, self.inventory._sources, [group], 'inventory')) if group.priority != 1: res['ansible_group_priority'] = group.priority return self._remove_internal(res) def _get_host_variables(self, host): if context.CLIARGS['export']: # only get vars defined directly host hostvars = host.get_vars() hostvars = combine_vars(hostvars, get_vars_from_inventory_sources(self.loader, self.inventory._sources, [host], 'inventory')) else: # get all vars flattened by host, but skip magic hostvars hostvars = self.vm.get_vars(host=host, include_hostvars=False, stage='inventory') return self._remove_internal(hostvars) def _get_group(self, gname): group = self.inventory.groups.get(gname) return group @staticmethod def _remove_internal(dump): for internal in INTERNAL_VARS: if internal in dump: del dump[internal] return dump @staticmethod def _remove_empty(dump): # remove empty keys for x in ('hosts', 'vars', 'children'): if x in dump and not dump[x]: del dump[x] @staticmethod def 
_show_vars(dump, depth): result = [] if context.CLIARGS['show_vars']: for (name, val) in sorted(dump.items()): result.append(InventoryCLI._graph_name('{%s = %s}' % (name, val), depth)) return result @staticmethod def _graph_name(name, depth=0): if depth: name = " |" * (depth) + "--%s" % name return name def _graph_group(self, group, depth=0): result = [self._graph_name('@%s:' % group.name, depth)] depth = depth + 1 for kid in sorted(group.child_groups, key=attrgetter('name')): result.extend(self._graph_group(kid, depth)) if group.name != 'all': for host in sorted(group.hosts, key=attrgetter('name')): result.append(self._graph_name(host.name, depth)) result.extend(self._show_vars(self._get_host_variables(host), depth + 1)) result.extend(self._show_vars(self._get_group_variables(group), depth)) return result def inventory_graph(self): start_at = self._get_group(context.CLIARGS['pattern']) if start_at: return '\n'.join(self._graph_group(start_at)) else: raise AnsibleOptionsError("Pattern must be valid group name when using --graph") def json_inventory(self, top): seen = set() def format_group(group): results = {} results[group.name] = {} if group.name != 'all': results[group.name]['hosts'] = [h.name for h in sorted(group.hosts, key=attrgetter('name'))] results[group.name]['children'] = [] for subgroup in sorted(group.child_groups, key=attrgetter('name')): results[group.name]['children'].append(subgroup.name) if subgroup.name not in seen: results.update(format_group(subgroup)) seen.add(subgroup.name) if context.CLIARGS['export']: results[group.name]['vars'] = self._get_group_variables(group) self._remove_empty(results[group.name]) if not results[group.name]: del results[group.name] return results results = format_group(top) # populate meta results['_meta'] = {'hostvars': {}} hosts = self.inventory.get_hosts() for host in hosts: hvars = self._get_host_variables(host) if hvars: results['_meta']['hostvars'][host.name] = hvars return results def yaml_inventory(self, top): 
seen = [] def format_group(group): results = {} # initialize group + vars results[group.name] = {} # subgroups results[group.name]['children'] = {} for subgroup in sorted(group.child_groups, key=attrgetter('name')): if subgroup.name != 'all': results[group.name]['children'].update(format_group(subgroup)) # hosts for group results[group.name]['hosts'] = {} if group.name != 'all': for h in sorted(group.hosts, key=attrgetter('name')): myvars = {} if h.name not in seen: # avoid defining host vars more than once seen.append(h.name) myvars = self._get_host_variables(host=h) results[group.name]['hosts'][h.name] = myvars if context.CLIARGS['export']: gvars = self._get_group_variables(group) if gvars: results[group.name]['vars'] = gvars self._remove_empty(results[group.name]) return results return format_group(top) def toml_inventory(self, top): seen = set() has_ungrouped = bool(next(g.hosts for g in top.child_groups if g.name == 'ungrouped')) def format_group(group): results = {} results[group.name] = {} results[group.name]['children'] = [] for subgroup in sorted(group.child_groups, key=attrgetter('name')): if subgroup.name == 'ungrouped' and not has_ungrouped: continue if group.name != 'all': results[group.name]['children'].append(subgroup.name) results.update(format_group(subgroup)) if group.name != 'all': for host in sorted(group.hosts, key=attrgetter('name')): if host.name not in seen: seen.add(host.name) host_vars = self._get_host_variables(host=host) else: host_vars = {} try: results[group.name]['hosts'][host.name] = host_vars except KeyError: results[group.name]['hosts'] = {host.name: host_vars} if context.CLIARGS['export']: results[group.name]['vars'] = self._get_group_variables(group) self._remove_empty(results[group.name]) if not results[group.name]: del results[group.name] return results results = format_group(top) return results
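The `json_inventory` method above builds its output by recursively walking the group tree, tracking already-seen subgroups so a shared child group is only emitted once. A stripped-down sketch of that recursion — the `Group` class here is a stand-in for illustration, not Ansible's real `ansible.inventory.group.Group` API:

```python
# Minimal model of the recursive group walk in json_inventory above.
# Group is a hypothetical stand-in, not Ansible's inventory API.

class Group:
    def __init__(self, name, hosts=None, children=None):
        self.name = name
        self.hosts = hosts or []
        self.child_groups = children or []

def format_group(group, seen=None):
    """Return {group_name: {'hosts': [...], 'children': [...]}} recursively."""
    if seen is None:
        seen = set()
    results = {group.name: {}}
    if group.name != 'all':
        # 'all' is implicit, so only named groups list their hosts.
        results[group.name]['hosts'] = sorted(group.hosts)
    results[group.name]['children'] = []
    for subgroup in sorted(group.child_groups, key=lambda g: g.name):
        results[group.name]['children'].append(subgroup.name)
        if subgroup.name not in seen:
            # The seen set prevents formatting a shared subgroup twice.
            seen.add(subgroup.name)
            results.update(format_group(subgroup, seen))
    return results

web = Group('webservers', hosts=['web2', 'web1'])
top = Group('all', children=[web])
print(format_group(top))
```

The real implementation additionally attaches `vars` when `--export` is set and prunes empty keys via `_remove_empty`.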
closed
ansible/ansible
https://github.com/ansible/ansible
65,064
Regression - no longer returning vars plugin host_vars from ansible-inventory
##### SUMMARY If I have a host with a name like `foo` and have variables inside a file `host_vars/foo.yml` inside of a directory properly passed via `--playbook-dir` to the `ansible-inventory` command, vars from that file are not present in the output. This used to work. The regression was first seen the night of Nov 4, 2019 ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lib/ansible/cli/inventory.py ##### ANSIBLE VERSION ```paste below ansible --version ansible 2.10.0.dev0 config file = /Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg configured module search path = ['/Users/alancoding/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /Users/alancoding/Documents/repos/ansible/lib/ansible executable location = /Users/alancoding/.virtualenvs/ansible3/bin/ansible python version = 3.6.5 (default, Apr 25 2018, 14:23:58) [GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.1)] ``` ##### CONFIGURATION ```paste below $ ansible-config dump --only-changed DEPRECATION_WARNINGS(/Users/alancoding/Documents/repos/jlaska-ansible-playbooks/ansible.cfg) = False ``` ##### OS / ENVIRONMENT Mac OS ##### STEPS TO REPRODUCE https://github.com/ansible/test-playbooks/tree/master/inventories host of interest is `group_one_host_01` Inventory file: ``` $ cat inventories/inventory.ini ungrouped_host_[01:05] group_one_host_01 group_one_host_01_has_this_var=True group_two_host_01 group_two_host_01_has_this_var=True group_three_host_01 group_three_host_01_has_this_var=True [group_one] group_one_host_[01:05] group_one_and_two_host_[01:05] group_one_two_and_three_host_[01:05] [group_one:vars] is_in_group_one=True complex_var=[{"dir": "/opt/gwaf/logs", "sourcetype": "gwaf", "something_else": [1, 2, 3]}] [group_two] group_two_host_[01:05] group_one_and_two_host_[01:05] group_two_and_three_host_[01:05] group_one_two_and_three_host_[01:05] [group_two:vars] is_in_group_two=True [group_three] group_three_host_[01:05] 
group_two_and_three_host_[01:05] group_one_two_and_three_host_[01:05] [group_three:vars] is_in_group_three=True [all:vars] ansible_connection=local inventories_var=True ``` vars file: ``` $ cat inventories//host_vars/group_one_host_01 group_one_host_01_should_have_this_var: True ``` run: ``` ansible-inventory -i inventories/inventory.ini --list --export --playbook-dir=inventories/ | grep -r "group_one_host_01_should_have_this_var" ``` ##### EXPECTED RESULTS `group_one_host_01_should_have_this_var` should be in the hostvars for `group_one_host_01` ##### ACTUAL RESULTS The string `group_one_host_01_should_have_this_var` is not seen anywhere in the output.
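For context on the expected behavior, here is a minimal model of the merge the reporter expects: vars-plugin data (the `host_vars/group_one_host_01` file loaded via `--playbook-dir`) combined with vars defined inline in the inventory source. The `combine_vars` below is a simplified stand-in for `ansible.utils.vars.combine_vars` with its default replace strategy; the variable names come from the reproduction above.

```python
# Simplified stand-in for ansible.utils.vars.combine_vars (replace strategy):
# later sources win on key conflicts, all keys from both dicts survive.

def combine_vars(a, b):
    result = dict(a)
    result.update(b)
    return result

inventory_vars = {'group_one_host_01_has_this_var': True}
host_vars_file = {'group_one_host_01_should_have_this_var': True}

merged = combine_vars(inventory_vars, host_vars_file)
# Both keys should survive the merge; the reported regression dropped
# the vars-plugin dict entirely from ansible-inventory output.
assert 'group_one_host_01_should_have_this_var' in merged
```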
https://github.com/ansible/ansible/issues/65064
https://github.com/ansible/ansible/pull/65073
8f1af618518b657b914f7ce24492e6f444f1c5e1
c1f280ba6e4a1e5867720e8c8426bc451ad32126
2019-11-19T15:19:54Z
python
2019-11-25T18:16:03Z
test/integration/targets/collections/vars_plugin_tests.sh
#!/usr/bin/env bash set -eux # Collections vars plugins must be whitelisted with FQCN because PluginLoader.all() does not search collections # Let vars plugins run for inventory by using the global setting export ANSIBLE_RUN_VARS_PLUGINS=start # Test vars plugin in a playbook-adjacent collection export ANSIBLE_VARS_ENABLED=testns.content_adj.custom_adj_vars ansible-inventory -i a.statichost.yml --list --playbook-dir=./ | tee out.txt grep '"collection": "adjacent"' out.txt grep '"adj_var": "value"' out.txt # Test vars plugin in a collection path export ANSIBLE_VARS_ENABLED=testns.testcoll.custom_vars export ANSIBLE_COLLECTIONS_PATHS=$PWD/collection_root_user:$PWD/collection_root_sys ansible-inventory -i a.statichost.yml --list --playbook-dir=./ | tee out.txt grep '"collection": "collection_root_user"' out.txt grep -v '"adj_var": "value"' out.txt # Test enabled vars plugins order reflects the order in which variables are merged export ANSIBLE_VARS_ENABLED=testns.content_adj.custom_adj_vars,testns.testcoll.custom_vars ansible-inventory -i a.statichost.yml --list --playbook-dir=./ | tee out.txt grep '"collection": "collection_root_user"' out.txt grep '"adj_var": "value"' out.txt grep -v '"collection": "adjacent"' out.txt # Test that 3rd party plugins in plugin_path do not need to require whitelisting by default # Plugins shipped with Ansible and in the custom plugin dir should be used first export ANSIBLE_VARS_PLUGINS=./custom_vars_plugins ansible-inventory -i a.statichost.yml --list --playbook-dir=./ | tee out.txt grep '"name": "v2_vars_plugin"' out.txt grep '"collection": "collection_root_user"' out.txt grep '"adj_var": "value"' out.txt grep -v '"whitelisted": true' out.txt # Test plugins in plugin paths that opt-in to require whitelisting unset ANSIBLE_VARS_ENABLED unset ANSIBLE_COLLECTIONS_PATHS ANSIBLE_VARS_ENABLED=vars_req_whitelist ansible-inventory -i a.statichost.yml --list --playbook-dir=./ | tee out.txt grep '"whitelisted": true' out.txt # Test vars plugins 
that support the stage setting don't run for inventory when stage is set to 'task' # and that the vars plugins that don't support the stage setting don't run for inventory when the global setting is 'demand' ANSIBLE_VARS_PLUGIN_STAGE=task ansible-inventory -i a.statichost.yml --list --playbook-dir=./ | tee out.txt grep -v '"v1_vars_plugin": true' out.txt grep -v '"v2_vars_plugin": true' out.txt grep -v '"vars_req_whitelist": true' out.txt grep -v '"collection": "adjacent"' out.txt grep -v '"collection": "collection_root_user"' out.txt grep -v '"adj_var": "value"' out.txt # Test vars plugins that support the stage setting run for inventory when stage is set to 'inventory' ANSIBLE_VARS_PLUGIN_STAGE=inventory ansible-inventory -i a.statichost.yml --list --playbook-dir=./ | tee out.txt grep -v '"v1_vars_plugin": true' out.txt grep -v '"vars_req_whitelist": true' out.txt grep '"v2_vars_plugin": true' out.txt grep '"name": "v2_vars_plugin"' out.txt # Test that the global setting allows v1 and v2 plugins to run after importing inventory ANSIBLE_RUN_VARS_PLUGINS=start ansible-inventory -i a.statichost.yml --list --playbook-dir=./ | tee out.txt grep -v '"vars_req_whitelist": true' out.txt grep '"v1_vars_plugin": true' out.txt grep '"v2_vars_plugin": true' out.txt grep '"name": "v2_vars_plugin"' out.txt # Test that vars plugins in collections and in the vars plugin path are available for tasks cat << EOF > "test_task_vars.yml" --- - hosts: localhost connection: local gather_facts: no tasks: - debug: msg="{{ name }}" - debug: msg="{{ collection }}" - debug: msg="{{ adj_var }}" EOF export ANSIBLE_VARS_ENABLED=testns.content_adj.custom_adj_vars ANSIBLE_VARS_PLUGIN_STAGE=task ANSIBLE_VARS_PLUGINS=./custom_vars_plugins ansible-playbook test_task_vars.yml | grep "ok=3" ANSIBLE_RUN_VARS_PLUGINS=start ANSIBLE_VARS_PLUGIN_STAGE=inventory ANSIBLE_VARS_PLUGINS=./custom_vars_plugins ansible-playbook test_task_vars.yml | grep "ok=3" ANSIBLE_RUN_VARS_PLUGINS=demand 
ANSIBLE_VARS_PLUGIN_STAGE=inventory ANSIBLE_VARS_PLUGINS=./custom_vars_plugins ansible-playbook test_task_vars.yml | grep "ok=3" ANSIBLE_VARS_PLUGINS=./custom_vars_plugins ansible-playbook test_task_vars.yml | grep "ok=3"
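A rough Python model of the gating this script asserts — inferred from the grep checks above, not lifted from Ansible's source, and the function name and shape are hypothetical:

```python
# Inferred model: plugins that support the per-plugin 'stage' option run at
# inventory time only when stage is 'inventory' (or unset); plugins without
# stage support fall back to the global ANSIBLE_RUN_VARS_PLUGINS setting
# ('start' runs them at inventory time, 'demand' defers to task time).

def runs_for_inventory(supports_stage, stage, global_setting='demand'):
    if supports_stage:
        return stage in (None, 'inventory')
    return global_setting == 'start'

# Mirrors the script: stage-aware plugins obey the stage setting...
assert runs_for_inventory(True, 'inventory')
assert not runs_for_inventory(True, 'task')
# ...while legacy plugins obey the global setting.
assert runs_for_inventory(False, None, global_setting='start')
assert not runs_for_inventory(False, None, global_setting='demand')
```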
closed
ansible/ansible
https://github.com/ansible/ansible
64,564
retry files management
##### SUMMARY There is no clear description as to how retry files are managed once a playbook is run with `--limit @/retry/file` Say I run a playbook with a lot of hosts that might be *unreachable*. The first time it runs, a retry file will be created. The second run will execute against the remaining hosts in the retry file. Q1: Will the successful hosts at the second run be "removed" from the retry file, leaving only the remaining *unreachables*? Or, to put it differently, will there be another retry file created containing only the *unreachable* hosts after the second run? Similarly for the third, fourth, n-th run. Q2: What happens to the retry file if ALL the hosts within are successful? Is it removed/updated/etc? ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME C.RETRY_FILES_ENABLED ##### ANSIBLE VERSION ``` ansible 2.8.5 config file = /path/to/config/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] ``` ##### CONFIGURATION ``` ACTION_WARNINGS(/path/to/config/ansible.cfg) = False ANSIBLE_NOCOWS(/path/to/config/ansible.cfg) = True ANSIBLE_PIPELINING(/path/to/config/ansible.cfg) = True CACHE_PLUGIN(/path/to/config/ansible.cfg) = jsonfile CACHE_PLUGIN_CONNECTION(/path/to/config/ansible.cfg) = /etc/ansible/cache CACHE_PLUGIN_TIMEOUT(/path/to/config/ansible.cfg) = 86400 COMMAND_WARNINGS(/path/to/config/ansible.cfg) = False DEFAULT_CALLBACK_WHITELIST(/path/to/config/ansible.cfg) = [u'timer', u'log_plays'] DEFAULT_FORKS(/path/to/config/ansible.cfg) = 10 DEFAULT_GATHERING(/path/to/config/ansible.cfg) = smart DEFAULT_HOST_LIST(/path/to/config/ansible.cfg) = [u'/path/to/config/hosts'] DEFAULT_LOG_PATH(/path/to/config/ansible.cfg) = /var/log/ansible.log
DEFAULT_ROLES_PATH(/path/to/config/ansible.cfg) = [u'/etc/ansible/roles', u'/usr/share/ansible/roles'] HOST_KEY_CHECKING(/path/to/config/ansible.cfg) = False RETRY_FILES_ENABLED(/path/to/config/ansible.cfg) = True ``` ##### OS / ENVIRONMENT OS: CentOS Linux release 7.7.1908 (Core) ##### ADDITIONAL INFORMATION I cannot seem to find this information anywhere! There is no official documentation, and outcomes differ in practice.
https://github.com/ansible/ansible/issues/64564
https://github.com/ansible/ansible/pull/65153
c1f280ba6e4a1e5867720e8c8426bc451ad32126
0471ed37316b4d32a12c006015e3b7c4611a86ef
2019-11-07T15:05:21Z
python
2019-11-25T19:26:46Z
docs/docsite/rst/user_guide/intro_patterns.rst
.. _intro_patterns: Patterns: targeting hosts and groups ==================================== When you execute Ansible through an ad-hoc command or by running a playbook, you must choose which managed nodes or groups you want to execute against. Patterns let you run commands and playbooks against specific hosts and/or groups in your inventory. An Ansible pattern can refer to a single host, an IP address, an inventory group, a set of groups, or all hosts in your inventory. Patterns are highly flexible - you can exclude or require subsets of hosts, use wildcards or regular expressions, and more. Ansible executes on all inventory hosts included in the pattern. .. contents:: :local: Using patterns -------------- You use a pattern almost any time you execute an ad-hoc command or a playbook. The pattern is the only element of an :ref:`ad-hoc command<intro_adhoc>` that has no flag. It is usually the second element:: ansible <pattern> -m <module_name> -a "<module options>" For example:: ansible webservers -m service -a "name=httpd state=restarted" In a playbook the pattern is the content of the ``hosts:`` line for each play: .. code-block:: yaml - name: <play_name> hosts: <pattern> For example:: - name: restart webservers hosts: webservers Since you often want to run a command or playbook against multiple hosts at once, patterns often refer to inventory groups. Both the ad-hoc command and the playbook above will execute against all machines in the ``webservers`` group. .. _common_patterns: Common patterns --------------- This table lists common patterns for targeting inventory hosts and groups. ..
table:: :class: documentation-table ====================== ================================ =================================================== Description Pattern(s) Targets ====================== ================================ =================================================== All hosts all (or \*) One host host1 Multiple hosts host1:host2 (or host1,host2) One group webservers Multiple groups webservers:dbservers all hosts in webservers plus all hosts in dbservers Excluding groups webservers:!atlanta all hosts in webservers except those in atlanta Intersection of groups webservers:&staging any hosts in webservers that are also in staging ====================== ================================ =================================================== .. note:: You can use either a comma (``,``) or a colon (``:``) to separate a list of hosts. The comma is preferred when dealing with ranges and IPv6 addresses. Once you know the basic patterns, you can combine them. This example:: webservers:dbservers:&staging:!phoenix targets all machines in the groups 'webservers' and 'dbservers' that are also in the group 'staging', except any machines in the group 'phoenix'. You can use wildcard patterns with FQDNs or IP addresses, as long as the hosts are named in your inventory by FQDN or IP address:: 192.0.\* \*.example.com \*.com You can mix wildcard patterns and groups at the same time:: one*.com:dbservers Limitations of patterns ----------------------- Patterns depend on inventory. If a host or group is not listed in your inventory, you cannot use a pattern to target it. If your pattern includes an IP address or hostname that does not appear in your inventory, you will see an error like this: .. code-block:: text [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: Could not match supplied host pattern, ignoring: *.not_in_inventory.com Your pattern must match your inventory syntax. If you define a host as an :ref:`alias<inventory_aliases>`: .. 
code-block:: yaml atlanta: host1: http_port: 80 maxRequestsPerChild: 808 host: 127.0.0.2 you must use the alias in your pattern. In the example above, you must use ``host1`` in your pattern. If you use the IP address, you will once again get the error:: [WARNING]: Could not match supplied host pattern, ignoring: 127.0.0.2 Advanced pattern options ------------------------ The common patterns described above will meet most of your needs, but Ansible offers several other ways to define the hosts and groups you want to target. Using variables in patterns ^^^^^^^^^^^^^^^^^^^^^^^^^^^ You can use variables to enable passing group specifiers via the ``-e`` argument to ansible-playbook:: webservers:!{{ excluded }}:&{{ required }} Using group position in patterns ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ You can define a host or subset of hosts by its position in a group. For example, given the following group:: [webservers] cobweb webbing weber you can use subscripts to select individual hosts or ranges within the webservers group:: webservers[0] # == cobweb webservers[-1] # == weber webservers[0:2] # == webservers[0],webservers[1] # == cobweb,webbing webservers[1:] # == webbing,weber webservers[:3] # == cobweb,webbing,weber Using regexes in patterns ^^^^^^^^^^^^^^^^^^^^^^^^^ You can specify a pattern as a regular expression by starting the pattern with ``~``:: ~(web|db).*\.example\.com Patterns and ansible-playbook flags ----------------------------------- You can change the behavior of the patterns defined in playbooks using command-line options. For example, you can run a playbook that defines ``hosts: all`` on a single host by specifying ``-i 127.0.0.2,``. This works even if the host you target is not defined in your inventory. 
You can also limit the hosts you target on a particular run with the ``--limit`` flag:: ansible-playbook site.yml --limit datacenter2 Finally, you can use ``--limit`` to read the list of hosts from a file by prefixing the file name with ``@``:: ansible-playbook site.yml --limit @retry_hosts.txt To apply your knowledge of patterns with Ansible commands and playbooks, read :ref:`intro_adhoc` and :ref:`playbooks_intro`. .. seealso:: :ref:`intro_adhoc` Examples of basic commands :ref:`working_with_playbooks` Learning the Ansible configuration management language `Mailing List <https://groups.google.com/group/ansible-project>`_ Questions? Help? Ideas? Stop by the list on Google Groups `irc.freenode.net <http://irc.freenode.net>`_ #ansible IRC chat channel
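The union (`:`), intersection (`:&`), and exclusion (`:!`) operators described above can be read as set algebra over inventory groups. A simplified standalone model in Python — not Ansible's actual pattern parser, which also handles wildcards, regexes, and subscripts:

```python
# Set-algebra reading of pattern terms: plain term -> union, '&' prefix ->
# intersection, '!' prefix -> difference, applied left to right.

def resolve_pattern(pattern, groups):
    selected = set()
    for term in pattern.split(':'):
        if term.startswith('&'):
            selected &= groups.get(term[1:], set())
        elif term.startswith('!'):
            selected -= groups.get(term[1:], set())
        else:
            selected |= groups.get(term, set())
    return selected

groups = {
    'webservers': {'web1', 'web2', 'atlanta1'},
    'dbservers': {'db1'},
    'staging': {'web1', 'db1'},
    'atlanta': {'atlanta1'},
}
# The combined example from the text: webservers:dbservers:&staging:!atlanta
hosts = resolve_pattern('webservers:dbservers:&staging:!atlanta', groups)
```

Because terms apply left to right, intersections and exclusions are conventionally written after the unions they refine.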
closed
ansible/ansible
https://github.com/ansible/ansible
64,564
retry files management
##### SUMMARY There is no clear description as to how retry files are managed once a playbook is run with `--limit @/retry/file` Say I run a playbook with a lot of hosts that might be *unreachable*. The first time it runs, a retry file will be created. The second run will execute against the remaining hosts in the retry file. Q1: Will the successful hosts at the second run be "removed" from the retry file, leaving only the remaining *unreachables*? Or, to put it differently, will there be another retry file created containing only the *unreachable* hosts after the second run? Similarly for the third, fourth, n-th run. Q2: What happens to the retry file if ALL the hosts within are successful? Is it removed/updated/etc? ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME C.RETRY_FILES_ENABLED ##### ANSIBLE VERSION ``` ansible 2.8.5 config file = /path/to/config/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /usr/bin/ansible python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] ``` ##### CONFIGURATION ``` ACTION_WARNINGS(/path/to/config/ansible.cfg) = False ANSIBLE_NOCOWS(/path/to/config/ansible.cfg) = True ANSIBLE_PIPELINING(/path/to/config/ansible.cfg) = True CACHE_PLUGIN(/path/to/config/ansible.cfg) = jsonfile CACHE_PLUGIN_CONNECTION(/path/to/config/ansible.cfg) = /etc/ansible/cache CACHE_PLUGIN_TIMEOUT(/path/to/config/ansible.cfg) = 86400 COMMAND_WARNINGS(/path/to/config/ansible.cfg) = False DEFAULT_CALLBACK_WHITELIST(/path/to/config/ansible.cfg) = [u'timer', u'log_plays'] DEFAULT_FORKS(/path/to/config/ansible.cfg) = 10 DEFAULT_GATHERING(/path/to/config/ansible.cfg) = smart DEFAULT_HOST_LIST(/path/to/config/ansible.cfg) = [u'/path/to/config/hosts'] DEFAULT_LOG_PATH(/path/to/config/ansible.cfg) = /var/log/ansible.log
DEFAULT_ROLES_PATH(/path/to/config/ansible.cfg) = [u'/etc/ansible/roles', u'/usr/share/ansible/roles'] HOST_KEY_CHECKING(/path/to/config/ansible.cfg) = False RETRY_FILES_ENABLED(/path/to/config/ansible.cfg) = True ``` ##### OS / ENVIRONMENT OS: CentOS Linux release 7.7.1908 (Core) ##### ADDITIONAL INFORMATION I cannot seem to find this information anywhere! There is no official documentation, and outcomes differ in practice.
https://github.com/ansible/ansible/issues/64564
https://github.com/ansible/ansible/pull/65153
c1f280ba6e4a1e5867720e8c8426bc451ad32126
0471ed37316b4d32a12c006015e3b7c4611a86ef
2019-11-07T15:05:21Z
python
2019-11-25T19:26:46Z
lib/ansible/config/base.yml
# Copyright (c) 2017 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) --- ALLOW_WORLD_READABLE_TMPFILES: name: Allow world-readable temporary files default: False description: - This makes the temporary files created on the machine world-readable and will issue a warning instead of failing the task. - It is useful when becoming an unprivileged user. env: [] ini: - {key: allow_world_readable_tmpfiles, section: defaults} type: boolean yaml: {key: defaults.allow_world_readable_tmpfiles} version_added: "2.1" ANSIBLE_CONNECTION_PATH: name: Path of ansible-connection script default: null description: - Specify where to look for the ansible-connection script. This location will be checked before searching $PATH. - If null, ansible will start with the same directory as the ansible script. type: path env: [{name: ANSIBLE_CONNECTION_PATH}] ini: - {key: ansible_connection_path, section: persistent_connection} yaml: {key: persistent_connection.ansible_connection_path} version_added: "2.8" ANSIBLE_COW_SELECTION: name: Cowsay filter selection default: default description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them. env: [{name: ANSIBLE_COW_SELECTION}] ini: - {key: cow_selection, section: defaults} ANSIBLE_COW_WHITELIST: name: Cowsay filter whitelist default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www'] description: White list of cowsay templates that are 'safe' to use, set to empty list if you want to enable all installed templates.
env: [{name: ANSIBLE_COW_WHITELIST}] ini: - {key: cow_whitelist, section: defaults} type: list yaml: {key: display.cowsay_whitelist} ANSIBLE_FORCE_COLOR: name: Force color output default: False description: This option forces color mode even when running without a TTY or the "nocolor" setting is True. env: [{name: ANSIBLE_FORCE_COLOR}] ini: - {key: force_color, section: defaults} type: boolean yaml: {key: display.force_color} ANSIBLE_NOCOLOR: name: Suppress color output default: False description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information. env: [{name: ANSIBLE_NOCOLOR}] ini: - {key: nocolor, section: defaults} type: boolean yaml: {key: display.nocolor} ANSIBLE_NOCOWS: name: Suppress cowsay output default: False description: If you have cowsay installed but want to avoid the 'cows' (why????), use this. env: [{name: ANSIBLE_NOCOWS}] ini: - {key: nocows, section: defaults} type: boolean yaml: {key: display.i_am_no_fun} ANSIBLE_COW_PATH: name: Set path to cowsay command default: null description: Specify a custom cowsay path or swap in your cowsay implementation of choice env: [{name: ANSIBLE_COW_PATH}] ini: - {key: cowpath, section: defaults} type: string yaml: {key: display.cowpath} ANSIBLE_PIPELINING: name: Connection pipelining default: False description: - Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server, by executing many Ansible modules without actual file transfer. - This can result in a very significant performance improvement when enabled. - "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default." - This option is disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
env: - name: ANSIBLE_PIPELINING - name: ANSIBLE_SSH_PIPELINING ini: - section: connection key: pipelining - section: ssh_connection key: pipelining type: boolean yaml: {key: plugins.connection.pipelining} ANSIBLE_SSH_ARGS: # TODO: move to ssh plugin default: -C -o ControlMaster=auto -o ControlPersist=60s description: - If set, this will override the Ansible default ssh arguments. - In particular, users may wish to raise the ControlPersist time to encourage performance. A value of 30 minutes may be appropriate. - Be aware that if `-o ControlPath` is set in ssh_args, the control path setting is not used. env: [{name: ANSIBLE_SSH_ARGS}] ini: - {key: ssh_args, section: ssh_connection} yaml: {key: ssh_connection.ssh_args} ANSIBLE_SSH_CONTROL_PATH: # TODO: move to ssh plugin default: null description: - This is the location to save ssh's ControlPath sockets, it uses ssh's variable substitution. - Since 2.3, if null, ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting. - Before 2.3 it defaulted to `control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r`. - Be aware that this setting is ignored if `-o ControlPath` is set in ssh args. env: [{name: ANSIBLE_SSH_CONTROL_PATH}] ini: - {key: control_path, section: ssh_connection} yaml: {key: ssh_connection.control_path} ANSIBLE_SSH_CONTROL_PATH_DIR: # TODO: move to ssh plugin default: ~/.ansible/cp description: - This sets the directory to use for ssh control path if the control path setting is null. - Also, provides the `%(directory)s` variable for the control path setting. env: [{name: ANSIBLE_SSH_CONTROL_PATH_DIR}] ini: - {key: control_path_dir, section: ssh_connection} yaml: {key: ssh_connection.control_path_dir} ANSIBLE_SSH_EXECUTABLE: # TODO: move to ssh plugin default: ssh description: - This defines the location of the ssh binary. It defaults to `ssh` which will use the first ssh binary available in $PATH. 
- This option is usually not required, it might be useful when access to system ssh is restricted, or when using ssh wrappers to connect to remote hosts. env: [{name: ANSIBLE_SSH_EXECUTABLE}] ini: - {key: ssh_executable, section: ssh_connection} yaml: {key: ssh_connection.ssh_executable} version_added: "2.2" ANSIBLE_SSH_RETRIES: # TODO: move to ssh plugin default: 0 description: Number of attempts to establish a connection before we give up and report the host as 'UNREACHABLE' env: [{name: ANSIBLE_SSH_RETRIES}] ini: - {key: retries, section: ssh_connection} type: integer yaml: {key: ssh_connection.retries} ANY_ERRORS_FATAL: name: Make Task failures fatal default: False description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors. env: - name: ANSIBLE_ANY_ERRORS_FATAL ini: - section: defaults key: any_errors_fatal type: boolean yaml: {key: errors.any_task_errors_fatal} version_added: "2.4" BECOME_ALLOW_SAME_USER: name: Allow becoming the same user default: False description: This setting controls if become is skipped when remote user and become user are the same, i.e. root sudo to root. env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}] ini: - {key: become_allow_same_user, section: privilege_escalation} type: boolean yaml: {key: privilege_escalation.become_allow_same_user} AGNOSTIC_BECOME_PROMPT: name: Display an agnostic become prompt default: True type: boolean description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}] ini: - {key: agnostic_become_prompt, section: privilege_escalation} yaml: {key: privilege_escalation.agnostic_become_prompt} version_added: "2.5" CACHE_PLUGIN: name: Persistent Cache plugin default: memory description: Chooses which cache plugin to use, the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}] ini: - {key: fact_caching, section: defaults} yaml: {key: facts.cache.plugin} CACHE_PLUGIN_CONNECTION: name: Cache Plugin URI default: ~ description: Defines connection or path information for the cache plugin env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}] ini: - {key: fact_caching_connection, section: defaults} yaml: {key: facts.cache.uri} CACHE_PLUGIN_PREFIX: name: Cache Plugin table prefix default: ansible_facts description: Prefix to use for cache plugin files/tables env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}] ini: - {key: fact_caching_prefix, section: defaults} yaml: {key: facts.cache.prefix} CACHE_PLUGIN_TIMEOUT: name: Cache Plugin expiration timeout default: 86400 description: Expiration timeout for the cache plugin data env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}] ini: - {key: fact_caching_timeout, section: defaults} type: integer yaml: {key: facts.cache.timeout} COLLECTIONS_PATHS: name: ordered list of root paths for loading installed Ansible collections content description: Colon separated paths in which Ansible will search for collections content. 
default: ~/.ansible/collections:/usr/share/ansible/collections type: pathspec env: - {name: ANSIBLE_COLLECTIONS_PATHS} ini: - {key: collections_paths, section: defaults} COLOR_CHANGED: name: Color for 'changed' task status default: yellow description: Defines the color to use on 'Changed' task status env: [{name: ANSIBLE_COLOR_CHANGED}] ini: - {key: changed, section: colors} yaml: {key: display.colors.changed} COLOR_CONSOLE_PROMPT: name: "Color for ansible-console's prompt task status" default: white description: Defines the default color to use for ansible-console env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}] ini: - {key: console_prompt, section: colors} version_added: "2.7" COLOR_DEBUG: name: Color for debug statements default: dark gray description: Defines the color to use when emitting debug messages env: [{name: ANSIBLE_COLOR_DEBUG}] ini: - {key: debug, section: colors} yaml: {key: display.colors.debug} COLOR_DEPRECATE: name: Color for deprecation messages default: purple description: Defines the color to use when emitting deprecation messages env: [{name: ANSIBLE_COLOR_DEPRECATE}] ini: - {key: deprecate, section: colors} yaml: {key: display.colors.deprecate} COLOR_DIFF_ADD: name: Color for diff added display default: green description: Defines the color to use when showing added lines in diffs env: [{name: ANSIBLE_COLOR_DIFF_ADD}] ini: - {key: diff_add, section: colors} yaml: {key: display.colors.diff.add} COLOR_DIFF_LINES: name: Color for diff lines display default: cyan description: Defines the color to use when showing diffs env: [{name: ANSIBLE_COLOR_DIFF_LINES}] ini: - {key: diff_lines, section: colors} COLOR_DIFF_REMOVE: name: Color for diff removed display default: red description: Defines the color to use when showing removed lines in diffs env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}] ini: - {key: diff_remove, section: colors} COLOR_ERROR: name: Color for error messages default: red description: Defines the color to use when emitting error messages env: 
  [{name: ANSIBLE_COLOR_ERROR}]
  ini:
  - {key: error, section: colors}
  yaml: {key: colors.error}
COLOR_HIGHLIGHT:
  name: Color for highlighting
  default: white
  description: Defines the color to use for highlighting
  env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
  ini:
  - {key: highlight, section: colors}
COLOR_OK:
  name: Color for 'ok' task status
  default: green
  description: Defines the color to use when showing 'OK' task status
  env: [{name: ANSIBLE_COLOR_OK}]
  ini:
  - {key: ok, section: colors}
COLOR_SKIP:
  name: Color for 'skip' task status
  default: cyan
  description: Defines the color to use when showing 'Skipped' task status
  env: [{name: ANSIBLE_COLOR_SKIP}]
  ini:
  - {key: skip, section: colors}
COLOR_UNREACHABLE:
  name: Color for 'unreachable' host state
  default: bright red
  description: Defines the color to use on 'Unreachable' status
  env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
  ini:
  - {key: unreachable, section: colors}
COLOR_VERBOSE:
  name: Color for verbose messages
  default: blue
  description: Defines the color to use when emitting verbose messages, i.e. those that show with '-v's.
  env: [{name: ANSIBLE_COLOR_VERBOSE}]
  ini:
  - {key: verbose, section: colors}
COLOR_WARN:
  name: Color for warning messages
  default: bright purple
  description: Defines the color to use when emitting warning messages
  env: [{name: ANSIBLE_COLOR_WARN}]
  ini:
  - {key: warn, section: colors}
CONDITIONAL_BARE_VARS:
  name: Allow bare variable evaluation in conditionals
  default: True
  type: boolean
  description:
    - With this setting on (True), a bare conditional like 'var' is evaluated differently than 'var.subkey'; the first
      is evaluated directly while the second goes through the Jinja2 parser. But 'false' strings in 'var' get evaluated as booleans.
    - With this setting off they both evaluate the same, but in cases in which 'var' was 'false' (a string) it won't get
      evaluated as a boolean anymore.
    - Currently this setting defaults to 'True' but will soon change to 'False' and the setting itself will
      be removed in the future.
    - Expect the default to change in version 2.10 and that this setting eventually will be deprecated after 2.12
  env: [{name: ANSIBLE_CONDITIONAL_BARE_VARS}]
  ini:
  - {key: conditional_bare_variables, section: defaults}
  version_added: "2.8"
COVERAGE_REMOTE_OUTPUT:
  name: Sets the output directory and filename prefix to generate coverage run info.
  description:
    - Sets the output directory on the remote host to generate coverage reports to.
    - Currently only used for remote coverage on PowerShell modules.
    - This is for internal use only.
  env:
  - {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
  vars:
  - {name: _ansible_coverage_remote_output}
  type: str
  version_added: '2.9'
COVERAGE_REMOTE_WHITELIST:
  name: Sets the list of paths to run coverage for.
  description:
    - A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
    - Only files that match the path glob will have their coverage collected.
    - Multiple path globs can be specified and are separated by ``:``.
    - Currently only used for remote coverage on PowerShell modules.
    - This is for internal use only.
  default: '*'
  env:
  - {name: _ANSIBLE_COVERAGE_REMOTE_WHITELIST}
  type: str
  version_added: '2.9'
ACTION_WARNINGS:
  name: Toggle action warnings
  default: True
  description:
    - By default Ansible will issue a warning when a warning is received from a task action (module or action plugin)
    - These warnings can be silenced by adjusting this setting to False.
  env: [{name: ANSIBLE_ACTION_WARNINGS}]
  ini:
  - {key: action_warnings, section: defaults}
  type: boolean
  version_added: "2.5"
COMMAND_WARNINGS:
  name: Command module warnings
  default: True
  description:
    - By default Ansible will issue a warning when the shell or command module is used and the command appears to
      be similar to an existing Ansible module.
    - These warnings can be silenced by adjusting this setting to False.
You can also control this at the task level with the module option ``warn``. env: [{name: ANSIBLE_COMMAND_WARNINGS}] ini: - {key: command_warnings, section: defaults} type: boolean version_added: "1.8" LOCALHOST_WARNING: name: Warning when using implicit inventory with only localhost default: True description: - By default Ansible will issue a warning when there are no hosts in the inventory. - These warnings can be silenced by adjusting this setting to False. env: [{name: ANSIBLE_LOCALHOST_WARNING}] ini: - {key: localhost_warning, section: defaults} type: boolean version_added: "2.6" DOC_FRAGMENT_PLUGIN_PATH: name: documentation fragment plugins path default: ~/.ansible/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins. env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}] ini: - {key: doc_fragment_plugins, section: defaults} type: pathspec DEFAULT_ACTION_PLUGIN_PATH: name: Action plugins path default: ~/.ansible/plugins/action:/usr/share/ansible/plugins/action description: Colon separated paths in which Ansible will search for Action Plugins. env: [{name: ANSIBLE_ACTION_PLUGINS}] ini: - {key: action_plugins, section: defaults} type: pathspec yaml: {key: plugins.action.path} DEFAULT_ALLOW_UNSAFE_LOOKUPS: name: Allow unsafe lookups default: False description: - "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo) to return data that is not marked 'unsafe'." - By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language, as this could represent a security risk. 
      This option is provided to allow for backwards-compatibility, however users should first consider adding
      allow_unsafe=True to any lookups which may be expected to contain data which may be run through the templating engine later.
  env: []
  ini:
  - {key: allow_unsafe_lookups, section: defaults}
  type: boolean
  version_added: "2.2.3"
DEFAULT_ASK_PASS:
  name: Ask for the login password
  default: False
  description:
    - This controls whether an Ansible playbook should prompt for a login password.
      If using SSH keys for authentication, you probably do not need to change this setting.
  env: [{name: ANSIBLE_ASK_PASS}]
  ini:
  - {key: ask_pass, section: defaults}
  type: boolean
  yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
  name: Ask for the vault password(s)
  default: False
  description:
    - This controls whether an Ansible playbook should prompt for a vault password.
  env: [{name: ANSIBLE_ASK_VAULT_PASS}]
  ini:
  - {key: ask_vault_pass, section: defaults}
  type: boolean
DEFAULT_BECOME:
  name: Enable privilege escalation (become)
  default: False
  description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
  env: [{name: ANSIBLE_BECOME}]
  ini:
  - {key: become, section: privilege_escalation}
  type: boolean
DEFAULT_BECOME_ASK_PASS:
  name: Ask for the privilege escalation (become) password
  default: False
  description: Toggle to prompt for privilege escalation password.
  env: [{name: ANSIBLE_BECOME_ASK_PASS}]
  ini:
  - {key: become_ask_pass, section: privilege_escalation}
  type: boolean
DEFAULT_BECOME_METHOD:
  name: Choose privilege escalation method
  default: 'sudo'
  description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}] ini: - {section: privilege_escalation, key: become_method} DEFAULT_BECOME_EXE: name: Choose 'become' executable default: ~ description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH' env: [{name: ANSIBLE_BECOME_EXE}] ini: - {key: become_exe, section: privilege_escalation} DEFAULT_BECOME_FLAGS: name: Set 'become' executable options default: '' description: Flags to pass to the privilege escalation executable. env: [{name: ANSIBLE_BECOME_FLAGS}] ini: - {key: become_flags, section: privilege_escalation} BECOME_PLUGIN_PATH: name: Become plugins path default: ~/.ansible/plugins/become:/usr/share/ansible/plugins/become description: Colon separated paths in which Ansible will search for Become Plugins. env: [{name: ANSIBLE_BECOME_PLUGINS}] ini: - {key: become_plugins, section: defaults} type: pathspec version_added: "2.8" DEFAULT_BECOME_USER: # FIXME: should really be blank and make -u passing optional depending on it name: Set the user you 'become' via privilege escalation default: root description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified. env: [{name: ANSIBLE_BECOME_USER}] ini: - {key: become_user, section: privilege_escalation} yaml: {key: become.user} DEFAULT_CACHE_PLUGIN_PATH: name: Cache Plugins Path default: ~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache description: Colon separated paths in which Ansible will search for Cache Plugins. 
env: [{name: ANSIBLE_CACHE_PLUGINS}] ini: - {key: cache_plugins, section: defaults} type: pathspec DEFAULT_CALLABLE_WHITELIST: name: Template 'callable' whitelist default: [] description: Whitelist of callable methods to be made available to template evaluation env: [{name: ANSIBLE_CALLABLE_WHITELIST}] ini: - {key: callable_whitelist, section: defaults} type: list DEFAULT_CALLBACK_PLUGIN_PATH: name: Callback Plugins Path default: ~/.ansible/plugins/callback:/usr/share/ansible/plugins/callback description: Colon separated paths in which Ansible will search for Callback Plugins. env: [{name: ANSIBLE_CALLBACK_PLUGINS}] ini: - {key: callback_plugins, section: defaults} type: pathspec yaml: {key: plugins.callback.path} DEFAULT_CALLBACK_WHITELIST: name: Callback Whitelist default: [] description: - "List of whitelisted callbacks, not all callbacks need whitelisting, but many of those shipped with Ansible do as we don't want them activated by default." env: [{name: ANSIBLE_CALLBACK_WHITELIST}] ini: - {key: callback_whitelist, section: defaults} type: list yaml: {key: plugins.callback.whitelist} DEFAULT_CLICONF_PLUGIN_PATH: name: Cliconf Plugins Path default: ~/.ansible/plugins/cliconf:/usr/share/ansible/plugins/cliconf description: Colon separated paths in which Ansible will search for Cliconf Plugins. env: [{name: ANSIBLE_CLICONF_PLUGINS}] ini: - {key: cliconf_plugins, section: defaults} type: pathspec DEFAULT_CONNECTION_PLUGIN_PATH: name: Connection Plugins Path default: ~/.ansible/plugins/connection:/usr/share/ansible/plugins/connection description: Colon separated paths in which Ansible will search for Connection Plugins. env: [{name: ANSIBLE_CONNECTION_PLUGINS}] ini: - {key: connection_plugins, section: defaults} type: pathspec yaml: {key: plugins.connection.path} DEFAULT_DEBUG: name: Debug mode default: False description: - "Toggles debug output in Ansible. This is *very* verbose and can hinder multiprocessing. 
      Debug output can also include secret information despite no_log settings being enabled, which means debug
      mode should not be used in production."
  env: [{name: ANSIBLE_DEBUG}]
  ini:
  - {key: debug, section: defaults}
  type: boolean
DEFAULT_EXECUTABLE:
  name: Target shell executable
  default: /bin/sh
  description:
    - "This indicates the command to use to spawn a shell under which Ansible's commands are executed on a target.
      Users may need to change this in rare instances when shell usage is constrained, but in most cases it may
      be left as is."
  env: [{name: ANSIBLE_EXECUTABLE}]
  ini:
  - {key: executable, section: defaults}
DEFAULT_FACT_PATH:
  name: local fact path
  default: ~
  description:
    - "This option allows you to globally configure a custom path for 'local_facts' for the implied M(setup) task when using fact gathering."
    - "If not set, it will fallback to the default from the M(setup) module: ``/etc/ansible/facts.d``."
    - "This does **not** affect user defined tasks that use the M(setup) module."
  env: [{name: ANSIBLE_FACT_PATH}]
  ini:
  - {key: fact_path, section: defaults}
  type: path
  yaml: {key: facts.gathering.fact_path}
DEFAULT_FILTER_PLUGIN_PATH:
  name: Jinja2 Filter Plugins Path
  default: ~/.ansible/plugins/filter:/usr/share/ansible/plugins/filter
  description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
  env: [{name: ANSIBLE_FILTER_PLUGINS}]
  ini:
  - {key: filter_plugins, section: defaults}
  type: pathspec
DEFAULT_FORCE_HANDLERS:
  name: Force handlers to run after failure
  default: False
  description:
    - This option controls if notified handlers run on a host even if a failure occurs on that host.
    - When false, the handlers will not run if a failure has occurred on a host.
    - This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}] ini: - {key: force_handlers, section: defaults} type: boolean version_added: "1.9.1" DEFAULT_FORKS: name: Number of task forks default: 5 description: Maximum number of forks Ansible will use to execute tasks on target hosts. env: [{name: ANSIBLE_FORKS}] ini: - {key: forks, section: defaults} type: integer DEFAULT_GATHERING: name: Gathering behaviour default: 'implicit' description: - This setting controls the default policy of fact gathering (facts discovered about remote systems). - "When 'implicit' (the default), the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set." - "When 'explicit' the inverse is true, facts will not be gathered unless directly requested in the play." - "The 'smart' value means each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the playbook run." - "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin." env: [{name: ANSIBLE_GATHERING}] ini: - key: gathering section: defaults version_added: "1.6" choices: ['smart', 'explicit', 'implicit'] DEFAULT_GATHER_SUBSET: name: Gather facts subset default: ['all'] description: - Set the `gather_subset` option for the M(setup) task in the implicit fact gathering. See the module documentation for specifics. - "It does **not** apply to user defined M(setup) tasks." env: [{name: ANSIBLE_GATHER_SUBSET}] ini: - key: gather_subset section: defaults version_added: "2.1" type: list DEFAULT_GATHER_TIMEOUT: name: Gather facts timeout default: 10 description: - Set the timeout in seconds for the implicit fact gathering. - "It does **not** apply to user defined M(setup) tasks." 
  env: [{name: ANSIBLE_GATHER_TIMEOUT}]
  ini:
  - {key: gather_timeout, section: defaults}
  type: integer
  yaml: {key: defaults.gather_timeout}
DEFAULT_HANDLER_INCLUDES_STATIC:
  name: Make handler M(include) static
  default: False
  description:
    - "Since 2.0 M(include) can be 'dynamic', this setting (if True) forces an include that appears in a ``handlers`` section to be 'static'."
  env: [{name: ANSIBLE_HANDLER_INCLUDES_STATIC}]
  ini:
  - {key: handler_includes_static, section: defaults}
  type: boolean
  deprecated:
    why: include itself is deprecated and this setting will not matter in the future
    version: "2.12"
    alternatives: none, as it's already built into the decision between include_tasks and import_tasks
DEFAULT_HASH_BEHAVIOUR:
  name: Hash merge behaviour
  default: replace
  type: string
  choices: ["replace", "merge"]
  description:
    - This setting controls how variables merge in Ansible. By default Ansible will override variables in specific
      precedence orders, as described in Variables. When a variable of higher precedence wins, it will replace the other value.
    - "Some users prefer that variables that are hashes (aka 'dictionaries' in Python terms) are merged. This setting is called 'merge'.
      This is not the default behavior and it does not affect variables whose values are scalars (integers, strings) or arrays.
      We generally recommend not using this setting unless you think you have an absolute need for it, and playbooks in the
      official examples repos do not use this setting"
    - In version 2.0 a ``combine`` filter was added to allow doing this for a particular variable (described in Filters).
env: [{name: ANSIBLE_HASH_BEHAVIOUR}] ini: - {key: hash_behaviour, section: defaults} deprecated: why: This feature is fragile and not portable, leading to continual confusion and misuse version: "2.13" alternatives: the ``combine`` filter explicitly DEFAULT_HOST_LIST: name: Inventory Source default: /etc/ansible/hosts description: Comma separated list of Ansible inventory sources env: - name: ANSIBLE_INVENTORY expand_relative_paths: True ini: - key: inventory section: defaults type: pathlist yaml: {key: defaults.inventory} DEFAULT_HTTPAPI_PLUGIN_PATH: name: HttpApi Plugins Path default: ~/.ansible/plugins/httpapi:/usr/share/ansible/plugins/httpapi description: Colon separated paths in which Ansible will search for HttpApi Plugins. env: [{name: ANSIBLE_HTTPAPI_PLUGINS}] ini: - {key: httpapi_plugins, section: defaults} type: pathspec DEFAULT_INTERNAL_POLL_INTERVAL: name: Internal poll interval default: 0.001 env: [] ini: - {key: internal_poll_interval, section: defaults} type: float version_added: "2.2" description: - This sets the interval (in seconds) of Ansible internal processes polling each other. Lower values improve performance with large playbooks at the expense of extra CPU load. Higher values are more suitable for Ansible usage in automation scenarios, when UI responsiveness is not required but CPU usage might be a concern. - "The default corresponds to the value hardcoded in Ansible <= 2.1" DEFAULT_INVENTORY_PLUGIN_PATH: name: Inventory Plugins Path default: ~/.ansible/plugins/inventory:/usr/share/ansible/plugins/inventory description: Colon separated paths in which Ansible will search for Inventory Plugins. env: [{name: ANSIBLE_INVENTORY_PLUGINS}] ini: - {key: inventory_plugins, section: defaults} type: pathspec DEFAULT_JINJA2_EXTENSIONS: name: Enabled Jinja2 extensions default: [] description: - This is a developer-specific feature that allows enabling additional Jinja2 extensions. - "See the Jinja2 documentation for details. 
If you do not know what these do, you probably don't need to change this setting :)" env: [{name: ANSIBLE_JINJA2_EXTENSIONS}] ini: - {key: jinja2_extensions, section: defaults} DEFAULT_JINJA2_NATIVE: name: Use Jinja2's NativeEnvironment for templating default: False description: This option preserves variable types during template operations. This requires Jinja2 >= 2.10. env: [{name: ANSIBLE_JINJA2_NATIVE}] ini: - {key: jinja2_native, section: defaults} type: boolean yaml: {key: jinja2_native} version_added: 2.7 DEFAULT_KEEP_REMOTE_FILES: name: Keep remote files default: False description: - Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote. - If this option is enabled it will disable ``ANSIBLE_PIPELINING``. env: [{name: ANSIBLE_KEEP_REMOTE_FILES}] ini: - {key: keep_remote_files, section: defaults} type: boolean DEFAULT_LIBVIRT_LXC_NOSECLABEL: # TODO: move to plugin name: No security label on Lxc default: False description: - "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh. This is necessary when running on systems which do not have SELinux." env: - name: LIBVIRT_LXC_NOSECLABEL deprecated: why: environment variables without "ANSIBLE_" prefix are deprecated version: "2.12" alternatives: the "ANSIBLE_LIBVIRT_LXC_NOSECLABEL" environment variable - name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL ini: - {key: libvirt_lxc_noseclabel, section: selinux} type: boolean version_added: "2.1" DEFAULT_LOAD_CALLBACK_PLUGINS: name: Load callbacks for adhoc default: False description: - Controls whether callback plugins are loaded when running /usr/bin/ansible. This may be used to log activity from the command line, send notifications, and so on. Callback plugins are always loaded for ``ansible-playbook``. 
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}] ini: - {key: bin_ansible_callbacks, section: defaults} type: boolean version_added: "1.8" DEFAULT_LOCAL_TMP: name: Controller temporary directory default: ~/.ansible/tmp description: Temporary directory for Ansible to use on the controller. env: [{name: ANSIBLE_LOCAL_TEMP}] ini: - {key: local_tmp, section: defaults} type: tmppath DEFAULT_LOG_PATH: name: Ansible log file path default: ~ description: File to which Ansible will log on the controller. When empty logging is disabled. env: [{name: ANSIBLE_LOG_PATH}] ini: - {key: log_path, section: defaults} type: path DEFAULT_LOG_FILTER: name: Name filters for python logger default: [] description: List of logger names to filter out of the log file env: [{name: ANSIBLE_LOG_FILTER}] ini: - {key: log_filter, section: defaults} type: list DEFAULT_LOOKUP_PLUGIN_PATH: name: Lookup Plugins Path description: Colon separated paths in which Ansible will search for Lookup Plugins. default: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup env: [{name: ANSIBLE_LOOKUP_PLUGINS}] ini: - {key: lookup_plugins, section: defaults} type: pathspec yaml: {key: defaults.lookup_plugins} DEFAULT_MANAGED_STR: name: Ansible managed default: 'Ansible managed' description: Sets the macro for the 'ansible_managed' variable available for M(template) and M(win_template) modules. This is only relevant for those two modules. env: [] ini: - {key: ansible_managed, section: defaults} yaml: {key: defaults.ansible_managed} DEFAULT_MODULE_ARGS: name: Adhoc default arguments default: '' description: - This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified. env: [{name: ANSIBLE_MODULE_ARGS}] ini: - {key: module_args, section: defaults} DEFAULT_MODULE_COMPRESSION: name: Python module compression default: ZIP_DEFLATED description: Compression scheme to use when transferring Python modules to the target. 
env: [] ini: - {key: module_compression, section: defaults} # vars: # - name: ansible_module_compression DEFAULT_MODULE_NAME: name: Default adhoc module default: command description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``." env: [] ini: - {key: module_name, section: defaults} DEFAULT_MODULE_PATH: name: Modules Path description: Colon separated paths in which Ansible will search for Modules. default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules env: [{name: ANSIBLE_LIBRARY}] ini: - {key: library, section: defaults} type: pathspec DEFAULT_MODULE_UTILS_PATH: name: Module Utils Path description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules. default: ~/.ansible/plugins/module_utils:/usr/share/ansible/plugins/module_utils env: [{name: ANSIBLE_MODULE_UTILS}] ini: - {key: module_utils, section: defaults} type: pathspec DEFAULT_NETCONF_PLUGIN_PATH: name: Netconf Plugins Path default: ~/.ansible/plugins/netconf:/usr/share/ansible/plugins/netconf description: Colon separated paths in which Ansible will search for Netconf Plugins. env: [{name: ANSIBLE_NETCONF_PLUGINS}] ini: - {key: netconf_plugins, section: defaults} type: pathspec DEFAULT_NO_LOG: name: No log default: False description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures." env: [{name: ANSIBLE_NO_LOG}] ini: - {key: no_log, section: defaults} type: boolean DEFAULT_NO_TARGET_SYSLOG: name: No syslog on target default: False description: Toggle Ansible logging to syslog on the target when it executes tasks. env: [{name: ANSIBLE_NO_TARGET_SYSLOG}] ini: - {key: no_target_syslog, section: defaults} type: boolean yaml: {key: defaults.no_target_syslog} DEFAULT_NULL_REPRESENTATION: name: Represent a null default: ~ description: What templating should return as a 'null' value. When not set it will let Jinja2 decide. 
  env: [{name: ANSIBLE_NULL_REPRESENTATION}]
  ini:
  - {key: null_representation, section: defaults}
  type: none
DEFAULT_POLL_INTERVAL:
  name: Async poll interval
  default: 15
  description:
    - For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
      this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
      The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
      providing a quick turnaround when something may have completed.
  env: [{name: ANSIBLE_POLL_INTERVAL}]
  ini:
  - {key: poll_interval, section: defaults}
  type: integer
DEFAULT_PRIVATE_KEY_FILE:
  name: Private key file
  default: ~
  description:
    - Option for connections using a certificate or key file to authenticate, rather than an agent or passwords;
      you can set the default value here to avoid re-specifying --private-key with every invocation.
  env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
  ini:
  - {key: private_key_file, section: defaults}
  type: path
DEFAULT_PRIVATE_ROLE_VARS:
  name: Private role variables
  default: False
  description:
    - Makes role variables inaccessible from other roles.
    - This was introduced as a way to reset role variables to default values if a role is used more than once
      in a playbook.
  env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
  ini:
  - {key: private_role_vars, section: defaults}
  type: boolean
  yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
  name: Remote port
  default: ~
  description: Port to use in remote connections; when blank it will use the connection plugin default.
  env: [{name: ANSIBLE_REMOTE_PORT}]
  ini:
  - {key: remote_port, section: defaults}
  type: integer
  yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
  name: Login/Remote User
  default:
  description:
    - Sets the login user for the target machines
    - "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
  env: [{name: ANSIBLE_REMOTE_USER}]
  ini:
  - {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
  name: Roles path
  default: ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
  description: Colon separated paths in which Ansible will search for Roles.
  env: [{name: ANSIBLE_ROLES_PATH}]
  expand_relative_paths: True
  ini:
  - {key: roles_path, section: defaults}
  type: pathspec
  yaml: {key: defaults.roles_path}
DEFAULT_SCP_IF_SSH:
  # TODO: move to ssh plugin
  default: smart
  description:
    - "Preferred method to use when transferring files over ssh."
    - When set to smart, Ansible will try them until one succeeds or they all fail.
    - If set to True, it will force 'scp', if False it will use 'sftp'.
  env: [{name: ANSIBLE_SCP_IF_SSH}]
  ini:
  - {key: scp_if_ssh, section: ssh_connection}
DEFAULT_SELINUX_SPECIAL_FS:
  name: Problematic file systems
  default: fuse, nfs, vboxsf, ramfs, 9p, vfat
  description:
    - "Some filesystems do not support safe operations and/or return inconsistent errors; this setting makes Ansible 'tolerate' those in the list w/o causing fatal errors."
    - Data corruption may occur and writes are not always verified when a filesystem is in the list.
  env:
  - name: ANSIBLE_SELINUX_SPECIAL_FS
    version_added: "2.9"
  ini:
  - {key: special_context_filesystems, section: selinux}
  type: list
DEFAULT_SFTP_BATCH_MODE:
  # TODO: move to ssh plugin
  default: True
  description: Toggles the use of sftp batch mode (``sftp -b``) for file transfers, which allows transfer errors to be detected and treated as fatal.
  env: [{name: ANSIBLE_SFTP_BATCH_MODE}]
  ini:
  - {key: sftp_batch_mode, section: ssh_connection}
  type: boolean
  yaml: {key: ssh_connection.sftp_batch_mode}
DEFAULT_SQUASH_ACTIONS:
  name: Squashable actions
  default: apk, apt, dnf, homebrew, openbsd_pkg, pacman, pip, pkgng, yum, zypper
  description:
    - Ansible can optimise actions that call modules that support list parameters when using ``with_`` looping.
      Instead of calling the module once for each item, the module is called once with the full list.
    - The default value for this setting is only for certain package managers, but it can be used for any module.
    - Currently, this is only supported for modules that have a name or pkg parameter, and only when the item is the
      only thing being passed to the parameter.
  env: [{name: ANSIBLE_SQUASH_ACTIONS}]
  ini:
  - {key: squash_actions, section: defaults}
  type: list
  version_added: "2.0"
  deprecated:
    why: Loop squashing is deprecated and this configuration will no longer be used
    version: "2.11"
    alternatives: a list directly with the module argument
DEFAULT_SSH_TRANSFER_METHOD:
  # TODO: move to ssh plugin
  default:
  description: 'unused?'
  # - "Preferred method to use when transferring files over ssh"
  # - Setting to smart will try them until one succeeds or they all fail
  #choices: ['sftp', 'scp', 'dd', 'smart']
  env: [{name: ANSIBLE_SSH_TRANSFER_METHOD}]
  ini:
  - {key: transfer_method, section: ssh_connection}
DEFAULT_STDOUT_CALLBACK:
  name: Main display callback plugin
  default: default
  description:
    - "Set the main callback used to display Ansible output, you can only have one at a time."
    - You can have many other callbacks, but just one can be in charge of stdout.
  env: [{name: ANSIBLE_STDOUT_CALLBACK}]
  ini:
  - {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
  name: Whether to enable the task debugger
  default: False
  description:
    - Whether or not to enable the task debugger, this previously was done as a strategy plugin.
      Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
      a task is failed on unreachable. Use the debugger keyword for more flexibility.
type: boolean env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}] ini: - {key: enable_task_debugger, section: defaults} version_added: "2.5" TASK_DEBUGGER_IGNORE_ERRORS: name: Whether a failed task with ignore_errors=True will still invoke the debugger default: True description: - This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True is specified. - True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors. type: boolean env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}] ini: - {key: task_debugger_ignore_errors, section: defaults} version_added: "2.7" DEFAULT_STRATEGY: name: Implied strategy default: 'linear' description: Set the default strategy used for plays. env: [{name: ANSIBLE_STRATEGY}] ini: - {key: strategy, section: defaults} version_added: "2.3" DEFAULT_STRATEGY_PLUGIN_PATH: name: Strategy Plugins Path description: Colon separated paths in which Ansible will search for Strategy Plugins. default: ~/.ansible/plugins/strategy:/usr/share/ansible/plugins/strategy env: [{name: ANSIBLE_STRATEGY_PLUGINS}] ini: - {key: strategy_plugins, section: defaults} type: pathspec DEFAULT_SU: default: False description: 'Toggle the use of "su" for tasks.' env: [{name: ANSIBLE_SU}] ini: - {key: su, section: defaults} type: boolean yaml: {key: defaults.su} DEFAULT_SYSLOG_FACILITY: name: syslog facility default: LOG_USER description: Syslog facility to use when Ansible logs to the remote target env: [{name: ANSIBLE_SYSLOG_FACILITY}] ini: - {key: syslog_facility, section: defaults} DEFAULT_TASK_INCLUDES_STATIC: name: Task include static default: False description: - The `include` tasks can be static or dynamic, this toggles the default expected behaviour if autodetection fails and it is not explicitly set in task. 
env: [{name: ANSIBLE_TASK_INCLUDES_STATIC}] ini: - {key: task_includes_static, section: defaults} type: boolean version_added: "2.1" deprecated: why: include itself is deprecated and this setting will not matter in the future version: "2.12" alternatives: None, as its already built into the decision between include_tasks and import_tasks DEFAULT_TERMINAL_PLUGIN_PATH: name: Terminal Plugins Path default: ~/.ansible/plugins/terminal:/usr/share/ansible/plugins/terminal description: Colon separated paths in which Ansible will search for Terminal Plugins. env: [{name: ANSIBLE_TERMINAL_PLUGINS}] ini: - {key: terminal_plugins, section: defaults} type: pathspec DEFAULT_TEST_PLUGIN_PATH: name: Jinja2 Test Plugins Path description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins. default: ~/.ansible/plugins/test:/usr/share/ansible/plugins/test env: [{name: ANSIBLE_TEST_PLUGINS}] ini: - {key: test_plugins, section: defaults} type: pathspec DEFAULT_TIMEOUT: name: Connection timeout default: 10 description: This is the default timeout for connection plugins to use. env: [{name: ANSIBLE_TIMEOUT}] ini: - {key: timeout, section: defaults} type: integer DEFAULT_TRANSPORT: name: Connection plugin default: smart description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions" env: [{name: ANSIBLE_TRANSPORT}] ini: - {key: transport, section: defaults} DEFAULT_UNDEFINED_VAR_BEHAVIOR: name: Jinja2 fail on undefined default: True version_added: "1.3" description: - When True, this causes ansible templating to fail steps that reference variable names that are likely typoed. - "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written." 
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}] ini: - {key: error_on_undefined_vars, section: defaults} type: boolean DEFAULT_VARS_PLUGIN_PATH: name: Vars Plugins Path default: ~/.ansible/plugins/vars:/usr/share/ansible/plugins/vars description: Colon separated paths in which Ansible will search for Vars Plugins. env: [{name: ANSIBLE_VARS_PLUGINS}] ini: - {key: vars_plugins, section: defaults} type: pathspec # TODO: unused? #DEFAULT_VAR_COMPRESSION_LEVEL: # default: 0 # description: 'TODO: write it' # env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}] # ini: # - {key: var_compression_level, section: defaults} # type: integer # yaml: {key: defaults.var_compression_level} DEFAULT_VAULT_ID_MATCH: name: Force vault id match default: False description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id' env: [{name: ANSIBLE_VAULT_ID_MATCH}] ini: - {key: vault_id_match, section: defaults} yaml: {key: defaults.vault_id_match} DEFAULT_VAULT_IDENTITY: name: Vault id label default: default description: 'The label to use for the default vault id label in cases where a vault id label is not provided' env: [{name: ANSIBLE_VAULT_IDENTITY}] ini: - {key: vault_identity, section: defaults} yaml: {key: defaults.vault_identity} DEFAULT_VAULT_ENCRYPT_IDENTITY: name: Vault id to use for encryption default: description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.' env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}] ini: - {key: vault_encrypt_identity, section: defaults} yaml: {key: defaults.vault_encrypt_identity} DEFAULT_VAULT_IDENTITY_LIST: name: Default vault ids default: [] description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.' 
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}] ini: - {key: vault_identity_list, section: defaults} type: list yaml: {key: defaults.vault_identity_list} DEFAULT_VAULT_PASSWORD_FILE: name: Vault password file default: ~ description: 'The vault password file to use. Equivalent to --vault-password-file or --vault-id' env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}] ini: - {key: vault_password_file, section: defaults} type: path yaml: {key: defaults.vault_password_file} DEFAULT_VERBOSITY: name: Verbosity default: 0 description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line. env: [{name: ANSIBLE_VERBOSITY}] ini: - {key: verbosity, section: defaults} type: integer DEPRECATION_WARNINGS: name: Deprecation messages default: True description: "Toggle to control the showing of deprecation warnings" env: [{name: ANSIBLE_DEPRECATION_WARNINGS}] ini: - {key: deprecation_warnings, section: defaults} type: boolean DIFF_ALWAYS: name: Show differences default: False description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``. env: [{name: ANSIBLE_DIFF_ALWAYS}] ini: - {key: always, section: diff} type: bool DIFF_CONTEXT: name: Difference context default: 3 description: How many lines of context to show when displaying the differences between files. env: [{name: ANSIBLE_DIFF_CONTEXT}] ini: - {key: context, section: diff} type: integer DISPLAY_ARGS_TO_STDOUT: name: Show task arguments default: False description: - "Normally ``ansible-playbook`` will print a header for each task that is run. These headers will contain the name: field from the task if you specified one. If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running. Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action. 
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header." - "This setting defaults to False because there is a chance that you have sensitive values in your parameters and you do not want those to be printed." - "If you set this to True you should be sure that you have secured your environment's stdout (no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values. See How do I keep secret data in my playbook? for more information." env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}] ini: - {key: display_args_to_stdout, section: defaults} type: boolean version_added: "2.1" DISPLAY_SKIPPED_HOSTS: name: Show skipped results default: True description: "Toggle to control displaying skipped task/host entries in a task in the default callback" env: - name: DISPLAY_SKIPPED_HOSTS deprecated: why: environment variables without "ANSIBLE_" prefix are deprecated version: "2.12" alternatives: the "ANSIBLE_DISPLAY_SKIPPED_HOSTS" environment variable - name: ANSIBLE_DISPLAY_SKIPPED_HOSTS ini: - {key: display_skipped_hosts, section: defaults} type: boolean DOCSITE_ROOT_URL: name: Root docsite URL default: https://docs.ansible.com/ansible/ description: Root docsite URL used to generate docs URLs in warning/error text; must be an absolute URL with valid scheme and trailing slash. ini: - {key: docsite_root_url, section: defaults} version_added: "2.8" DUPLICATE_YAML_DICT_KEY: name: Controls ansible behaviour when finding duplicate keys in YAML. default: warn description: - By default Ansible will issue a warning when a duplicate dict key is encountered in YAML. - These warnings can be silenced by adjusting this setting to 'ignore'.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}] ini: - {key: duplicate_dict_key, section: defaults} type: string choices: ['warn', 'error', 'ignore'] version_added: "2.9" ERROR_ON_MISSING_HANDLER: name: Missing handler error default: True description: "Toggle to allow missing handlers to become a warning instead of an error when notifying." env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}] ini: - {key: error_on_missing_handler, section: defaults} type: boolean CONNECTION_FACTS_MODULES: name: Map of connections to fact modules default: eos: eos_facts frr: frr_facts ios: ios_facts iosxr: iosxr_facts junos: junos_facts nxos: nxos_facts vyos: vyos_facts exos: exos_facts slxos: slxos_facts voss: voss_facts ironware: ironware_facts description: "Which modules to run during a play's fact gathering stage based on connection" env: [{name: ANSIBLE_CONNECTION_FACTS_MODULES}] ini: - {key: connection_facts_modules, section: defaults} type: dict FACTS_MODULES: name: Gather Facts Modules default: - smart description: "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type." env: [{name: ANSIBLE_FACTS_MODULES}] ini: - {key: facts_modules, section: defaults} type: list vars: - name: ansible_facts_modules GALAXY_IGNORE_CERTS: name: Galaxy validate certs default: False description: - If set to yes, ansible-galaxy will not validate TLS certificates. This can be useful for testing against a server with a self-signed certificate. env: [{name: ANSIBLE_GALAXY_IGNORE}] ini: - {key: ignore_certs, section: galaxy} type: boolean GALAXY_ROLE_SKELETON: name: Galaxy role or collection skeleton directory default: description: Role or collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``, same as ``--role-skeleton``. 
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}] ini: - {key: role_skeleton, section: galaxy} type: path GALAXY_ROLE_SKELETON_IGNORE: name: Galaxy skeleton ignore default: ["^.git$", "^.*/.git_keep$"] description: patterns of files to ignore inside a Galaxy role or collection skeleton directory env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}] ini: - {key: role_skeleton_ignore, section: galaxy} type: list # TODO: unused? #GALAXY_SCMS: # name: Galaxy SCMS # default: git, hg # description: Available galaxy source control management systems. # env: [{name: ANSIBLE_GALAXY_SCMS}] # ini: # - {key: scms, section: galaxy} # type: list GALAXY_SERVER: default: https://galaxy.ansible.com description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source." env: [{name: ANSIBLE_GALAXY_SERVER}] ini: - {key: server, section: galaxy} yaml: {key: galaxy.server} GALAXY_SERVER_LIST: description: - A list of Galaxy servers to use when installing a collection. - The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details. - 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.' - The order of servers in this list is used to as the order in which a collection is resolved. - Setting this config option will ignore the :ref:`galaxy_server` config option. 
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}] ini: - {key: server_list, section: galaxy} type: list version_added: "2.9" GALAXY_TOKEN: default: null description: "GitHub personal access token" env: [{name: ANSIBLE_GALAXY_TOKEN}] ini: - {key: token, section: galaxy} yaml: {key: galaxy.token} GALAXY_TOKEN_PATH: default: ~/.ansible/galaxy_token description: "Local path to galaxy access token file" env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}] ini: - {key: token_path, section: galaxy} type: path version_added: "2.9" GALAXY_DISPLAY_PROGRESS: default: ~ description: - Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when outputting the stdout to a file. - This config option controls whether the display wheel is shown or not. - The default is to show the display wheel if stdout has a tty. env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}] ini: - {key: display_progress, section: galaxy} type: bool version_added: "2.10" HOST_KEY_CHECKING: name: Check host keys default: True description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host' env: [{name: ANSIBLE_HOST_KEY_CHECKING}] ini: - {key: host_key_checking, section: defaults} type: boolean HOST_PATTERN_MISMATCH: name: Control host pattern mismatch behaviour default: 'warning' description: This setting changes the behaviour of mismatched host patterns, it allows you to force a fatal error, a warning or just ignore it env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}] ini: - {key: host_pattern_mismatch, section: inventory} choices: ['warning', 'error', 'ignore'] version_added: "2.8" INTERPRETER_PYTHON: name: Python interpreter path (or automatic discovery behavior) used for module execution default: auto_legacy env: [{name: ANSIBLE_PYTHON_INTERPRETER}] ini: - {key: interpreter_python, section: defaults} vars: - {name: ansible_python_interpreter} version_added: "2.8" description: - Path to the Python interpreter to be used
for module execution on remote targets, or an automatic discovery mode. Supported discovery modes are ``auto``, ``auto_silent``, and ``auto_legacy`` (the default). All discovery modes employ a lookup table to use the included system Python (on distributions known to include one), falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not available. The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters installed later may change which one is used). This warning behavior can be disabled by setting ``auto_silent``. The default value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility with older Ansible releases that always defaulted to ``/usr/bin/python``, will use that interpreter if present (and issue a warning that the default behavior will change to that of ``auto`` in a future Ansible release). INTERPRETER_PYTHON_DISTRO_MAP: name: Mapping of known included platform pythons for various Linux distros default: centos: &rhelish '6': /usr/bin/python '8': /usr/libexec/platform-python debian: '10': /usr/bin/python3 fedora: '23': /usr/bin/python3 redhat: *rhelish rhel: *rhelish ubuntu: '14': /usr/bin/python '16': /usr/bin/python3 version_added: "2.8" # FUTURE: add inventory override once we're sure it can't be abused by a rogue target # FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK: name: Ordered list of Python interpreters to check for in discovery default: - /usr/bin/python - python3.7 - python3.6 - python3.5 - python2.7 - python2.6 - /usr/libexec/platform-python - /usr/bin/python3 - python # FUTURE: add inventory override once we're sure it can't be abused by a rogue target version_added: "2.8" TRANSFORM_INVALID_GROUP_CHARS: name: Transform invalid characters in group names default: 'never' description: - Make ansible transform invalid characters in group names supplied by inventory sources. - If 'never' it will allow for the group name but warn about the issue. - When 'ignore', it does the same as 'never', without issuing a warning. - When 'always' it will replace any invalid characters with '_' (underscore) and warn the user - When 'silently', it does the same as 'always', without issuing a warning. env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}] ini: - {key: force_valid_group_names, section: defaults} type: string choices: ['always', 'never', 'ignore', 'silently'] version_added: '2.8' INVALID_TASK_ATTRIBUTE_FAILED: name: Controls whether invalid attributes for a task result in errors instead of warnings default: True description: If 'false', invalid attributes for a task will result in warnings instead of errors type: boolean env: - name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED ini: - key: invalid_task_attribute_failed section: defaults version_added: "2.7" INVENTORY_ANY_UNPARSED_IS_FAILED: name: Controls whether any unparseable inventory source is a fatal error default: False description: > If 'true', it is a fatal error when any given inventory source cannot be successfully parsed by any available inventory plugin; otherwise, this situation only attracts a warning. 
type: boolean env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}] ini: - {key: any_unparsed_is_failed, section: inventory} version_added: "2.7" INVENTORY_CACHE_ENABLED: name: Inventory caching enabled default: False description: Toggle to turn on inventory caching env: [{name: ANSIBLE_INVENTORY_CACHE}] ini: - {key: cache, section: inventory} type: bool INVENTORY_CACHE_PLUGIN: name: Inventory cache plugin description: The plugin for caching inventory. If INVENTORY_CACHE_PLUGIN is not provided CACHE_PLUGIN can be used instead. env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}] ini: - {key: cache_plugin, section: inventory} INVENTORY_CACHE_PLUGIN_CONNECTION: name: Inventory cache plugin URI to override the defaults section description: The inventory cache connection. If INVENTORY_CACHE_PLUGIN_CONNECTION is not provided CACHE_PLUGIN_CONNECTION can be used instead. env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}] ini: - {key: cache_connection, section: inventory} INVENTORY_CACHE_PLUGIN_PREFIX: name: Inventory cache plugin table prefix description: The table prefix for the cache plugin. If INVENTORY_CACHE_PLUGIN_PREFIX is not provided CACHE_PLUGIN_PREFIX can be used instead. env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}] default: ansible_facts ini: - {key: cache_prefix, section: inventory} INVENTORY_CACHE_TIMEOUT: name: Inventory cache plugin expiration timeout description: Expiration timeout for the inventory cache plugin data. If INVENTORY_CACHE_TIMEOUT is not provided CACHE_TIMEOUT can be used instead. default: 3600 env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}] ini: - {key: cache_timeout, section: inventory} INVENTORY_ENABLED: name: Active Inventory plugins default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml'] description: List of enabled inventory plugins, it also determines the order in which they are used. 
env: [{name: ANSIBLE_INVENTORY_ENABLED}] ini: - {key: enable_plugins, section: inventory} type: list INVENTORY_EXPORT: name: Set ansible-inventory into export mode default: False description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or one optimized for exporting. env: [{name: ANSIBLE_INVENTORY_EXPORT}] ini: - {key: export, section: inventory} type: bool INVENTORY_IGNORE_EXTS: name: Inventory ignore extensions default: "{{(BLACKLIST_EXTS + ( '.orig', '.ini', '.cfg', '.retry'))}}" description: List of extensions to ignore when using a directory as an inventory source env: [{name: ANSIBLE_INVENTORY_IGNORE}] ini: - {key: inventory_ignore_extensions, section: defaults} - {key: ignore_extensions, section: inventory} type: list INVENTORY_IGNORE_PATTERNS: name: Inventory ignore patterns default: [] description: List of patterns to ignore when using a directory as an inventory source env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}] ini: - {key: inventory_ignore_patterns, section: defaults} - {key: ignore_patterns, section: inventory} type: list INVENTORY_UNPARSED_IS_FAILED: name: Unparsed Inventory failure default: False description: > If 'true' it is a fatal error if every single potential inventory source fails to parse, otherwise this situation will only attract a warning. 
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}] ini: - {key: unparsed_is_failed, section: inventory} type: bool MAX_FILE_SIZE_FOR_DIFF: name: Diff maximum file size default: 104448 description: Maximum size of files to be considered for diff display env: [{name: ANSIBLE_MAX_DIFF_SIZE}] ini: - {key: max_diff_size, section: defaults} type: int NETWORK_GROUP_MODULES: name: Network module families default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos] description: 'TODO: write it' env: - name: NETWORK_GROUP_MODULES deprecated: why: environment variables without "ANSIBLE_" prefix are deprecated version: "2.12" alternatives: the "ANSIBLE_NETWORK_GROUP_MODULES" environment variable - name: ANSIBLE_NETWORK_GROUP_MODULES ini: - {key: network_group_modules, section: defaults} type: list yaml: {key: defaults.network_group_modules} INJECT_FACTS_AS_VARS: default: True description: - Facts are available inside the `ansible_facts` variable, this setting also pushes them as their own vars in the main namespace. - Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix. env: [{name: ANSIBLE_INJECT_FACT_VARS}] ini: - {key: inject_facts_as_vars, section: defaults} type: boolean version_added: "2.5" OLD_PLUGIN_CACHE_CLEARING: description: Previously Ansible would only clear some of the plugin loading caches when loading new roles, this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows returning to that behaviour. 
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}] ini: - {key: old_plugin_cache_clear, section: defaults} type: boolean default: False version_added: "2.8" PARAMIKO_HOST_KEY_AUTO_ADD: # TODO: move to plugin default: False description: 'TODO: write it' env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}] ini: - {key: host_key_auto_add, section: paramiko_connection} type: boolean PARAMIKO_LOOK_FOR_KEYS: name: look for keys default: True description: 'TODO: write it' env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}] ini: - {key: look_for_keys, section: paramiko_connection} type: boolean PERSISTENT_CONTROL_PATH_DIR: name: Persistence socket path default: ~/.ansible/pc description: Path to socket to be used by the connection persistence system. env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}] ini: - {key: control_path_dir, section: persistent_connection} type: path PERSISTENT_CONNECT_TIMEOUT: name: Persistence timeout default: 30 description: This controls how long the persistent connection will remain idle before it is destroyed. env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}] ini: - {key: connect_timeout, section: persistent_connection} type: integer PERSISTENT_CONNECT_RETRY_TIMEOUT: name: Persistence connection retry timeout default: 15 description: This controls the retry timeout for persistent connection to connect to the local domain socket. env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}] ini: - {key: connect_retry_timeout, section: persistent_connection} type: integer PERSISTENT_COMMAND_TIMEOUT: name: Persistence command timeout default: 30 description: This controls the amount of time to wait for response from remote device before timing out persistent connection. 
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}] ini: - {key: command_timeout, section: persistent_connection} type: int PLAYBOOK_DIR: name: playbook dir override for non-playbook CLIs (ala --playbook-dir) version_added: "2.9" description: - A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it. env: [{name: ANSIBLE_PLAYBOOK_DIR}] ini: [{key: playbook_dir, section: defaults}] type: path PLAYBOOK_VARS_ROOT: name: playbook vars files root default: top version_added: "2.4.1" description: - This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars - The ``top`` option follows the traditional behaviour of using the top playbook in the chain to find the root directory. - The ``bottom`` option follows the 2.4.0 behaviour of using the current playbook to find the root directory. - The ``all`` option examines from the first parent to the current playbook. env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}] ini: - {key: playbook_vars_root, section: defaults} choices: [ top, bottom, all ] PLUGIN_FILTERS_CFG: name: Config file for limiting valid plugins default: null version_added: "2.5.0" description: - "A path to configuration for filtering which plugins installed on the system are allowed to be used." - "See :ref:`plugin_filtering_config` for details of the filter file's format." - " The default is /etc/ansible/plugin_filters.yml" ini: - key: plugin_filters_cfg section: default deprecated: why: Specifying "plugin_filters_cfg" under the "default" section is deprecated version: "2.12" alternatives: the "defaults" section instead - key: plugin_filters_cfg section: defaults type: path PYTHON_MODULE_RLIMIT_NOFILE: name: Adjust maximum file descriptor soft limit during Python module execution description: - Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on Python 2.x. 
See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default value of 0 does not attempt to adjust existing system-defined limits. default: 0 env: - {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE} ini: - {key: python_module_rlimit_nofile, section: defaults} vars: - {name: ansible_python_module_rlimit_nofile} version_added: '2.8' RETRY_FILES_ENABLED: name: Retry files default: False description: This controls whether a failed Ansible playbook should create a .retry file. env: [{name: ANSIBLE_RETRY_FILES_ENABLED}] ini: - {key: retry_files_enabled, section: defaults} type: bool RETRY_FILES_SAVE_PATH: name: Retry files path default: ~ description: This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled. env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}] ini: - {key: retry_files_save_path, section: defaults} type: path RUN_VARS_PLUGINS: name: When should vars plugins run relative to inventory default: demand description: - This setting can be used to optimize vars_plugin usage depending on user's inventory size and play selection. - Setting to C(demand) will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks. - Setting to C(start) will run vars_plugins relative to inventory sources after importing that inventory source. 
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}] ini: - {key: run_vars_plugins, section: defaults} type: str choices: ['demand', 'start'] version_added: "2.10" SHOW_CUSTOM_STATS: name: Display custom stats default: False description: 'This adds the custom stats set via the set_stats plugin to the default output' env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}] ini: - {key: show_custom_stats, section: defaults} type: bool STRING_TYPE_FILTERS: name: Filters to preserve strings default: [string, to_json, to_nice_json, to_yaml, ppretty, json] description: - "This list of filters avoids 'type conversion' when templating variables" - Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example. env: [{name: ANSIBLE_STRING_TYPE_FILTERS}] ini: - {key: dont_type_filters, section: jinja2} type: list SYSTEM_WARNINGS: name: System warnings default: True description: - Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts) - These may include warnings about 3rd party packages or other conditions that should be resolved if possible. env: [{name: ANSIBLE_SYSTEM_WARNINGS}] ini: - {key: system_warnings, section: defaults} type: boolean TAGS_RUN: name: Run Tags default: [] type: list description: default list of tags to run in your plays, Skip Tags has precedence. env: [{name: ANSIBLE_RUN_TAGS}] ini: - {key: run, section: tags} version_added: "2.5" TAGS_SKIP: name: Skip Tags default: [] type: list description: default list of tags to skip in your plays, has precedence over Run Tags env: [{name: ANSIBLE_SKIP_TAGS}] ini: - {key: skip, section: tags} version_added: "2.5" USE_PERSISTENT_CONNECTIONS: name: Persistence default: False description: Toggles the use of persistence for connections. 
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}] ini: - {key: use_persistent_connections, section: defaults} type: boolean VARIABLE_PLUGINS_ENABLED: name: Vars plugin whitelist default: ['host_group_vars'] description: Whitelist for variable plugins that require it. env: [{name: ANSIBLE_VARS_ENABLED}] ini: - {key: vars_plugins_enabled, section: defaults} type: list version_added: "2.10" VARIABLE_PRECEDENCE: name: Group variable precedence default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play'] description: Allows to change the group variable precedence merge order. env: [{name: ANSIBLE_PRECEDENCE}] ini: - {key: precedence, section: defaults} type: list version_added: "2.4" YAML_FILENAME_EXTENSIONS: name: Valid YAML extensions default: [".yml", ".yaml", ".json"] description: - "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these." - 'This affects vars_files, include_vars, inventory and vars plugins among others.' env: - name: ANSIBLE_YAML_FILENAME_EXT ini: - section: defaults key: yaml_valid_extensions type: list NETCONF_SSH_CONFIG: description: This variable is used to enable bastion/jump host with netconf connection. If set to True the bastion/jump host ssh settings should be present in ~/.ssh/config file, alternatively it can be set to custom ssh configuration file path to read the bastion/jump host settings. env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}] ini: - {key: ssh_config, section: netconf_connection} yaml: {key: netconf_connection.ssh_config} default: null STRING_CONVERSION_ACTION: version_added: '2.8' description: - Action to take when a module parameter value is converted to a string (this does not affect variables). For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc. will be converted by the YAML parser unless fully quoted. 
- Valid options are 'error', 'warn', and 'ignore'. - Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12. default: 'warn' env: - name: ANSIBLE_STRING_CONVERSION_ACTION ini: - section: defaults key: string_conversion_action type: string VERBOSE_TO_STDERR: version_added: '2.8' description: - Force 'verbose' option to use stderr instead of stdout default: False env: - name: ANSIBLE_VERBOSE_TO_STDERR ini: - section: defaults key: verbose_to_stderr type: bool ...
closed
ansible/ansible
https://github.com/ansible/ansible
63,907
Zabbix: Need the _info module of zabbix_template?
##### SUMMARY This is a suggestion: the zabbix_template module has a dump option, but I think the same functionality should be implemented as an `_info` module instead. **reason** For the zabbix_user_group module, it was pointed out in review that an `_info` module should be used instead of a dump option. https://github.com/ansible/ansible/pull/58051#pullrequestreview-304208011 The development documentation also states that modules returning general information need to be `_info` modules. https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_checklist.html?highlight=_info%20_facts#contributing-to-ansible-objective-requirements I have already written a zabbix_template_info module, so if it is needed I can open a PR. ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME zabbix_template thanks
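To make the proposal concrete, here is a sketch of how the suggested module might be used in a playbook. The module name follows the issue; all parameter names, the `format` choice, and the registered return value are assumptions modeled on the existing zabbix_template module, not a finalized interface:

```yaml
# Hypothetical usage of the proposed zabbix_template_info module.
# Parameter names and return values below are illustrative assumptions only.
- name: Fetch a template dump via an _info module instead of zabbix_template's dump option
  zabbix_template_info:
    server_url: "https://zabbix.example.com"
    login_user: admin
    login_password: "{{ zabbix_password }}"
    template_name: ExampleTemplate
    format: json          # assumed to mirror zabbix_template's dump formats (json/xml)
  register: template_dump

- name: Show the dumped template
  debug:
    var: template_dump
```

An `_info` module of this shape would return data without changing state, which is exactly the behavior the development checklist asks to separate from modules like zabbix_template that create or modify resources.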
https://github.com/ansible/ansible/issues/63907
https://github.com/ansible/ansible/pull/65236
eb423ecec0a51b5f2c2ac54abf8d07bede9f3fdf
2242c385b25cc6fdcaaa53560c7f56f819bafe23
2019-10-24T14:40:39Z
python
2019-11-26T11:18:55Z
lib/ansible/modules/monitoring/zabbix/zabbix_template.py
#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2017, sookido
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function
__metaclass__ = type

ANSIBLE_METADATA = {'metadata_version': '1.1',
                    'status': ['preview'],
                    'supported_by': 'community'}

DOCUMENTATION = '''
---
module: zabbix_template
short_description: Create/update/delete/dump Zabbix template
description:
    - This module allows you to create, modify, delete and dump Zabbix templates.
    - Multiple templates can be created or modified at once if passing JSON or XML to module.
version_added: "2.5"
author:
    - "sookido (@sookido)"
    - "Logan Vig (@logan2211)"
    - "Dusan Matejka (@D3DeFi)"
requirements:
    - "python >= 2.6"
    - "zabbix-api >= 0.5.4"
options:
    template_name:
        description:
            - Name of Zabbix template.
            - Required when I(template_json) or I(template_xml) are not used.
            - Mutually exclusive with I(template_json) and I(template_xml).
        required: false
    template_json:
        description:
            - JSON dump of templates to import.
            - Multiple templates can be imported this way.
            - Mutually exclusive with I(template_name) and I(template_xml).
        required: false
        type: json
    template_xml:
        description:
            - XML dump of templates to import.
            - Multiple templates can be imported this way.
            - You are advised to pass XML structure matching the structure used by your version of Zabbix server.
            - Custom XML structure can be imported as long as it is valid, but may not yield consistent idempotent
              results on subsequent runs.
            - Mutually exclusive with I(template_name) and I(template_json).
        required: false
        version_added: '2.9'
    template_groups:
        description:
            - List of host groups to add template to when template is created.
            - Replaces the current host groups the template belongs to if the template is already present.
            - Required when creating a new template with C(state=present) and I(template_name) is used.
              Not required when updating an existing template.
        required: false
        type: list
    link_templates:
        description:
            - List of template names to be linked to the template.
            - Templates that are not specified and are linked to the existing template will be only unlinked and not
              cleared from the template.
        required: false
        type: list
    clear_templates:
        description:
            - List of template names to be unlinked and cleared from the template.
            - This option is ignored if template is being created for the first time.
        required: false
        type: list
    macros:
        description:
            - List of user macros to create for the template.
            - Macros that are not specified and are present on the existing template will be replaced.
            - See examples on how to pass macros.
        required: false
        type: list
        suboptions:
            name:
                description:
                    - Name of the macro.
                    - Must be specified in {$NAME} format.
            value:
                description:
                    - Value of the macro.
    dump_format:
        description:
            - Format to use when dumping template with C(state=dump).
        required: false
        choices: [json, xml]
        default: "json"
        version_added: '2.9'
    state:
        description:
            - Required state of the template.
            - On C(state=present) template will be created/imported or updated depending if it is already present.
            - On C(state=dump) template content will get dumped into required format specified in I(dump_format).
            - On C(state=absent) template will be deleted.
        required: false
        choices: [present, absent, dump]
        default: "present"

extends_documentation_fragment:
    - zabbix
'''

EXAMPLES = '''
---
- name: Create a new Zabbix template linked to groups, macros and templates
  local_action:
    module: zabbix_template
    server_url: http://127.0.0.1
    login_user: username
    login_password: password
    template_name: ExampleHost
    template_groups:
      - Role
      - Role2
    link_templates:
      - Example template1
      - Example template2
    macros:
      - macro: '{$EXAMPLE_MACRO1}'
        value: 30000
      - macro: '{$EXAMPLE_MACRO2}'
        value: 3
      - macro: '{$EXAMPLE_MACRO3}'
        value: 'Example'
    state: present

- name: Unlink and clear templates from the existing Zabbix template
  local_action:
    module: zabbix_template
    server_url: http://127.0.0.1
    login_user: username
    login_password: password
    template_name: ExampleHost
    clear_templates:
      - Example template3
      - Example template4
    state: present

- name: Import Zabbix templates from JSON
  local_action:
    module: zabbix_template
    server_url: http://127.0.0.1
    login_user: username
    login_password: password
    template_json: "{{ lookup('file', 'zabbix_apache2.json') }}"
    state: present

- name: Import Zabbix templates from XML
  local_action:
    module: zabbix_template
    server_url: http://127.0.0.1
    login_user: username
    login_password: password
    template_xml: "{{ lookup('file', 'zabbix_apache2.json') }}"
    state: present

- name: Import Zabbix template from Ansible dict variable
  zabbix_template:
    login_user: username
    login_password: password
    server_url: http://127.0.0.1
    template_json:
      zabbix_export:
        version: '3.2'
        templates:
          - name: Template for Testing
            description: 'Testing template import'
            template: Test Template
            groups:
              - name: Templates
            applications:
              - name: Test Application
    state: present

- name: Configure macros on the existing Zabbix template
  local_action:
    module: zabbix_template
    server_url: http://127.0.0.1
    login_user: username
    login_password: password
    template_name: Template
    macros:
      - macro: '{$TEST_MACRO}'
        value: 'Example'
    state: present

- name: Delete Zabbix template
  local_action:
    module: zabbix_template
    server_url: http://127.0.0.1
    login_user: username
    login_password: password
    template_name: Template
    state: absent

- name: Dump Zabbix template as JSON
  local_action:
    module: zabbix_template
    server_url: http://127.0.0.1
    login_user: username
    login_password: password
    template_name: Template
    state: dump
  register: template_dump

- name: Dump Zabbix template as XML
  local_action:
    module: zabbix_template
    server_url: http://127.0.0.1
    login_user: username
    login_password: password
    template_name: Template
    dump_format: xml
    state: dump
  register: template_dump
'''

RETURN = '''
---
template_json:
  description: The JSON dump of the template
  returned: when state is dump
  type: str
  sample: {
        "zabbix_export":{
            "date":"2017-11-29T16:37:24Z",
            "templates":[{
                "templates":[],
                "description":"",
                "httptests":[],
                "screens":[],
                "applications":[],
                "discovery_rules":[],
                "groups":[{"name":"Templates"}],
                "name":"Test Template",
                "items":[],
                "macros":[],
                "template":"test"
            }],
            "version":"3.2",
            "groups":[{
                "name":"Templates"
            }]
        }
    }

template_xml:
  description: dump of the template in XML representation
  returned: when state is dump and dump_format is xml
  type: str
  sample: |-
    <?xml version="1.0" ?>
    <zabbix_export>
        <version>4.2</version>
        <date>2019-07-12T13:37:26Z</date>
        <groups>
            <group>
                <name>Templates</name>
            </group>
        </groups>
        <templates>
            <template>
                <template>test</template>
                <name>Test Template</name>
                <description/>
                <groups>
                    <group>
                        <name>Templates</name>
                    </group>
                </groups>
                <applications/>
                <items/>
                <discovery_rules/>
                <httptests/>
                <macros/>
                <templates/>
                <screens/>
                <tags/>
            </template>
        </templates>
    </zabbix_export>
'''

import atexit
import json
import traceback
import xml.etree.ElementTree as ET

from distutils.version import LooseVersion

from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native

try:
    from zabbix_api import ZabbixAPI, ZabbixAPIException

    HAS_ZABBIX_API = True
except ImportError:
    ZBX_IMP_ERR = traceback.format_exc()
    HAS_ZABBIX_API = False


class Template(object):
    def __init__(self, module, zbx):
        self._module = module
        self._zapi = zbx

    # check if host group exists
    def check_host_group_exist(self, group_names):
        for group_name in group_names:
            result = self._zapi.hostgroup.get({'filter': {'name': group_name}})
            if not result:
                self._module.fail_json(msg="Hostgroup not found: %s" % group_name)
        return True

    # get group ids by group names
    def get_group_ids_by_group_names(self, group_names):
        group_ids = []
        if group_names is None or len(group_names) == 0:
            return group_ids
        if self.check_host_group_exist(group_names):
            group_list = self._zapi.hostgroup.get(
                {'output': 'extend',
                 'filter': {'name': group_names}})
            for group in group_list:
                group_id = group['groupid']
                group_ids.append({'groupid': group_id})
        return group_ids

    def get_template_ids(self, template_list):
        template_ids = []
        if template_list is None or len(template_list) == 0:
            return template_ids
        for template in template_list:
            template_list = self._zapi.template.get(
                {'output': 'extend',
                 'filter': {'host': template}})
            if len(template_list) < 1:
                continue
            else:
                template_id = template_list[0]['templateid']
                template_ids.append(template_id)
        return template_ids

    def add_template(self, template_name, group_ids, link_template_ids, macros):
        if self._module.check_mode:
            self._module.exit_json(changed=True)

        self._zapi.template.create({'host': template_name, 'groups': group_ids,
                                    'templates': link_template_ids, 'macros': macros})

    def check_template_changed(self, template_ids, template_groups, link_templates, clear_templates,
                               template_macros, template_content, template_type):
        """Compares template parameters to already existing values if any are found.

        template_json - JSON structures are compared as deep sorted dictionaries,
        template_xml - XML structures are compared as strings, but filtered and formatted first,
        If none above is used, all the other arguments are compared to their existing counterparts
        retrieved from Zabbix API."""
        changed = False
        # Compare filtered and formatted XMLs strings for any changes. It is expected that provided
        # XML has same structure as Zabbix uses (e.g. it was optimally exported via Zabbix GUI or API)
        if template_content is not None and template_type == 'xml':
            existing_template = self.dump_template(template_ids, template_type='xml')

            if self.filter_xml_template(template_content) != self.filter_xml_template(existing_template):
                changed = True

            return changed

        existing_template = self.dump_template(template_ids, template_type='json')
        # Compare JSON objects as deep sorted python dictionaries
        if template_content is not None and template_type == 'json':
            parsed_template_json = self.load_json_template(template_content)
            if self.diff_template(parsed_template_json, existing_template):
                changed = True

            return changed

        # If neither template_json or template_xml were used, user provided all parameters via module options
        if template_groups is not None:
            existing_groups = [g['name'] for g in existing_template['zabbix_export']['groups']]

            if set(template_groups) != set(existing_groups):
                changed = True

        # Check if any new templates would be linked or any existing would be unlinked
        exist_child_templates = [t['name'] for t in existing_template['zabbix_export']['templates'][0]['templates']]
        if link_templates is not None:
            if set(link_templates) != set(exist_child_templates):
                changed = True

        # Mark that there will be changes when at least one existing template will be unlinked
        if clear_templates is not None:
            for t in clear_templates:
                if t in exist_child_templates:
                    changed = True
                    break

        if template_macros is not None:
            existing_macros = existing_template['zabbix_export']['templates'][0]['macros']
            if template_macros != existing_macros:
                changed = True

        return changed

    def update_template(self, template_ids, group_ids, link_template_ids,
                        clear_template_ids, template_macros):
        template_changes = {}
        if group_ids is not None:
            template_changes.update({'groups': group_ids})

        if link_template_ids is not None:
            template_changes.update({'templates': link_template_ids})

        if clear_template_ids is not None:
            template_changes.update({'templates_clear': clear_template_ids})

        if template_macros is not None:
            template_changes.update({'macros': template_macros})

        if template_changes:
            # If we got here we know that only one template was provided via template_name
            template_changes.update({'templateid': template_ids[0]})
            self._zapi.template.update(template_changes)

    def delete_template(self, templateids):
        if self._module.check_mode:
            self._module.exit_json(changed=True)
        self._zapi.template.delete(templateids)

    def ordered_json(self, obj):
        # Deep sort json dicts for comparison
        if isinstance(obj, dict):
            return sorted((k, self.ordered_json(v)) for k, v in obj.items())
        if isinstance(obj, list):
            return sorted(self.ordered_json(x) for x in obj)
        else:
            return obj

    def dump_template(self, template_ids, template_type='json'):
        if self._module.check_mode:
            self._module.exit_json(changed=True)

        try:
            dump = self._zapi.configuration.export({'format': template_type, 'options': {'templates': template_ids}})
            if template_type == 'xml':
                return str(ET.tostring(ET.fromstring(dump.encode('utf-8')), encoding='utf-8').decode('utf-8'))
            else:
                return self.load_json_template(dump)
        except ZabbixAPIException as e:
            self._module.fail_json(msg='Unable to export template: %s' % e)

    def diff_template(self, template_json_a, template_json_b):
        # Compare 2 zabbix templates and return True if they differ.
        template_json_a = self.filter_template(template_json_a)
        template_json_b = self.filter_template(template_json_b)
        if self.ordered_json(template_json_a) == self.ordered_json(template_json_b):
            return False
        return True

    def filter_template(self, template_json):
        # Filter the template json to contain only the keys we will update
        keep_keys = set(['graphs', 'templates', 'triggers', 'value_maps'])
        unwanted_keys = set(template_json['zabbix_export']) - keep_keys
        for unwanted_key in unwanted_keys:
            del template_json['zabbix_export'][unwanted_key]

        # Versions older than 2.4 do not support description field within template
        desc_not_supported = False
        if LooseVersion(self._zapi.api_version()).version[:2] < LooseVersion('2.4').version:
            desc_not_supported = True

        # Filter empty attributes from template object to allow accurate comparison
        for template in template_json['zabbix_export']['templates']:
            for key in list(template.keys()):
                if not template[key] or (key == 'description' and desc_not_supported):
                    template.pop(key)

        return template_json

    def filter_xml_template(self, template_xml):
        """Filters out keys from XML template that may wary between exports (e.g date or version) and
        keys that are not imported via this module.

        It is advised that provided XML template exactly matches XML structure used by Zabbix"""
        # Strip last new line and convert string to ElementTree
        parsed_xml_root = self.load_xml_template(template_xml.strip())
        keep_keys = ['graphs', 'templates', 'triggers', 'value_maps']

        # Remove unwanted XML nodes
        for node in list(parsed_xml_root):
            if node.tag not in keep_keys:
                parsed_xml_root.remove(node)

        # Filter empty attributes from template objects to allow accurate comparison
        for template in list(parsed_xml_root.find('templates')):
            for element in list(template):
                if element.text is None and len(list(element)) == 0:
                    template.remove(element)

        # Filter new lines and indentation
        xml_root_text = list(line.strip() for line in ET.tostring(parsed_xml_root).split('\n'))
        return ''.join(xml_root_text)

    def load_json_template(self, template_json):
        try:
            return json.loads(template_json)
        except ValueError as e:
            self._module.fail_json(msg='Invalid JSON provided', details=to_native(e), exception=traceback.format_exc())

    def load_xml_template(self, template_xml):
        try:
            return ET.fromstring(template_xml)
        except ET.ParseError as e:
            self._module.fail_json(msg='Invalid XML provided', details=to_native(e), exception=traceback.format_exc())

    def import_template(self, template_content, template_type='json'):
        # rules schema latest version
        update_rules = {
            'applications': {
                'createMissing': True,
                'deleteMissing': True
            },
            'discoveryRules': {
                'createMissing': True,
                'updateExisting': True,
                'deleteMissing': True
            },
            'graphs': {
                'createMissing': True,
                'updateExisting': True,
                'deleteMissing': True
            },
            'httptests': {
                'createMissing': True,
                'updateExisting': True,
                'deleteMissing': True
            },
            'items': {
                'createMissing': True,
                'updateExisting': True,
                'deleteMissing': True
            },
            'templates': {
                'createMissing': True,
                'updateExisting': True
            },
            'templateLinkage': {
                'createMissing': True
            },
            'templateScreens': {
                'createMissing': True,
                'updateExisting': True,
                'deleteMissing': True
            },
            'triggers': {
                'createMissing': True,
                'updateExisting': True,
                'deleteMissing': True
            },
            'valueMaps': {
                'createMissing': True,
                'updateExisting': True
            }
        }

        try:
            # old api version support here
            api_version = self._zapi.api_version()
            # updateExisting for application removed from zabbix api after 3.2
            if LooseVersion(api_version).version[:2] <= LooseVersion('3.2').version:
                update_rules['applications']['updateExisting'] = True

            import_data = {'format': template_type, 'source': template_content, 'rules': update_rules}
            self._zapi.configuration.import_(import_data)
        except ZabbixAPIException as e:
            self._module.fail_json(msg='Unable to import template', details=to_native(e),
                                   exception=traceback.format_exc())


def main():
    module = AnsibleModule(
        argument_spec=dict(
            server_url=dict(type='str', required=True, aliases=['url']),
            login_user=dict(type='str', required=True),
            login_password=dict(type='str', required=True, no_log=True),
            http_login_user=dict(type='str', required=False, default=None),
            http_login_password=dict(type='str', required=False, default=None, no_log=True),
            validate_certs=dict(type='bool', required=False, default=True),
            template_name=dict(type='str', required=False),
            template_json=dict(type='json', required=False),
            template_xml=dict(type='str', required=False),
            template_groups=dict(type='list', required=False),
            link_templates=dict(type='list', required=False),
            clear_templates=dict(type='list', required=False),
            macros=dict(type='list', required=False),
            dump_format=dict(type='str', required=False, default='json', choices=['json', 'xml']),
            state=dict(default="present", choices=['present', 'absent', 'dump']),
            timeout=dict(type='int', default=10)
        ),
        required_one_of=[
            ['template_name', 'template_json', 'template_xml']
        ],
        mutually_exclusive=[
            ['template_name', 'template_json', 'template_xml']
        ],
        required_if=[
            ['state', 'absent', ['template_name']],
            ['state', 'dump', ['template_name']]
        ],
        supports_check_mode=True
    )

    if not HAS_ZABBIX_API:
        module.fail_json(msg=missing_required_lib('zabbix-api', url='https://pypi.org/project/zabbix-api/'),
                         exception=ZBX_IMP_ERR)

    server_url = module.params['server_url']
    login_user = module.params['login_user']
    login_password = module.params['login_password']
    http_login_user = module.params['http_login_user']
    http_login_password = module.params['http_login_password']
    validate_certs = module.params['validate_certs']
    template_name = module.params['template_name']
    template_json = module.params['template_json']
    template_xml = module.params['template_xml']
    template_groups = module.params['template_groups']
    link_templates = module.params['link_templates']
    clear_templates = module.params['clear_templates']
    template_macros = module.params['macros']
    dump_format = module.params['dump_format']
    state = module.params['state']
    timeout = module.params['timeout']

    zbx = None
    try:
        zbx = ZabbixAPI(server_url, timeout=timeout, user=http_login_user, passwd=http_login_password,
                        validate_certs=validate_certs)
        zbx.login(login_user, login_password)
        atexit.register(zbx.logout)
    except ZabbixAPIException as e:
        module.fail_json(msg="Failed to connect to Zabbix server: %s" % e)

    template = Template(module, zbx)

    # Identify template names for IDs retrieval
    # Template names are expected to reside in ['zabbix_export']['templates'][*]['template'] for both data types
    template_content, template_type = None, None
    if template_json is not None:
        template_type = 'json'
        template_content = template_json
        json_parsed = template.load_json_template(template_content)
        template_names = list(t['template'] for t in json_parsed['zabbix_export']['templates'])

    elif template_xml is not None:
        template_type = 'xml'
        template_content = template_xml
        xml_parsed = template.load_xml_template(template_content)
        template_names = list(t.find('template').text for t in list(xml_parsed.find('templates')))

    else:
        template_names = [template_name]

    template_ids = template.get_template_ids(template_names)

    if state == "absent":
        if not template_ids:
            module.exit_json(changed=False, msg="Template not found. No changed: %s" % template_name)

        template.delete_template(template_ids)
        module.exit_json(changed=True, result="Successfully deleted template %s" % template_name)

    elif state == "dump":
        if not template_ids:
            module.fail_json(msg='Template not found: %s' % template_name)

        if dump_format == 'json':
            module.exit_json(changed=False, template_json=template.dump_template(template_ids, template_type='json'))
        elif dump_format == 'xml':
            module.exit_json(changed=False, template_xml=template.dump_template(template_ids, template_type='xml'))

    elif state == "present":
        # Load all subelements for template that were provided by user
        group_ids = None
        if template_groups is not None:
            group_ids = template.get_group_ids_by_group_names(template_groups)

        link_template_ids = None
        if link_templates is not None:
            link_template_ids = template.get_template_ids(link_templates)

        clear_template_ids = None
        if clear_templates is not None:
            clear_template_ids = template.get_template_ids(clear_templates)

        if template_macros is not None:
            # Zabbix configuration.export does not differentiate python types (numbers are returned as strings)
            for macroitem in template_macros:
                for key in macroitem:
                    macroitem[key] = str(macroitem[key])

        if not template_ids:
            # Assume new templates are being added when no ID's were found
            if template_content is not None:
                template.import_template(template_content, template_type)
                module.exit_json(changed=True, result="Template import successful")

            else:
                if group_ids is None:
                    module.fail_json(msg='template_groups are required when creating a new Zabbix template')

                template.add_template(template_name, group_ids, link_template_ids, template_macros)
                module.exit_json(changed=True, result="Successfully added template: %s" % template_name)

        else:
            changed = template.check_template_changed(template_ids, template_groups, link_templates, clear_templates,
                                                      template_macros, template_content, template_type)

            if module.check_mode:
                module.exit_json(changed=changed)

            if changed:
                if template_type is not None:
                    template.import_template(template_content, template_type)
                else:
                    template.update_template(template_ids, group_ids, link_template_ids,
                                             clear_template_ids, template_macros)

            module.exit_json(changed=changed, result="Template successfully updated")


if __name__ == '__main__':
    main()
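The module's `diff_template` check above decides idempotency by deep-sorting both JSON structures before comparing them, so key order and list order do not produce spurious "changed" results. The standalone sketch below illustrates that comparison technique outside the module (the function names `ordered` and `templates_differ` are illustrative, not part of the module's API):

```python
import json


def ordered(obj):
    """Recursively turn dicts into sorted (key, value) tuples and sort
    lists, so two JSON documents compare equal regardless of ordering."""
    if isinstance(obj, dict):
        return sorted((k, ordered(v)) for k, v in obj.items())
    if isinstance(obj, list):
        return sorted(ordered(x) for x in obj)
    return obj


def templates_differ(json_a, json_b):
    """Return True if the two parsed JSON structures differ in content."""
    return ordered(json_a) != ordered(json_b)


# Same content, different key and list ordering: reported as equal.
a = json.loads('{"groups": [{"name": "Templates"}, {"name": "Role"}], "version": "3.2"}')
b = json.loads('{"version": "3.2", "groups": [{"name": "Role"}, {"name": "Templates"}]}')
print(templates_differ(a, b))  # False
```

One caveat of this approach: on Python 3, sorting a list whose elements have mixed, non-comparable types raises `TypeError`, which is fine for Zabbix exports (homogeneous lists of dicts) but worth knowing before reusing the pattern elsewhere.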
closed
ansible/ansible
https://github.com/ansible/ansible
63,907
Zabbix: Need the _info module of zabbix_template?
##### SUMMARY This is a suggestion: the zabbix_template module has a dump option, but I think this functionality should be implemented as an `_info` module instead. **Reason** The zabbix_user_group module was asked to move to an `_info` module instead of using dump: https://github.com/ansible/ansible/pull/58051#pullrequestreview-304208011 The development documentation also states that modules which return general information need to be `_info` modules: https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_checklist.html?highlight=_info%20_facts#contributing-to-ansible-objective-requirements If a zabbix_template_info module is needed, I have already created one, so I can open a PR. ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME zabbix_template thanks
https://github.com/ansible/ansible/issues/63907
https://github.com/ansible/ansible/pull/65236
eb423ecec0a51b5f2c2ac54abf8d07bede9f3fdf
2242c385b25cc6fdcaaa53560c7f56f819bafe23
2019-10-24T14:40:39Z
python
2019-11-26T11:18:55Z
test/integration/targets/zabbix_template/aliases
closed
ansible/ansible
https://github.com/ansible/ansible
63,907
Zabbix: Need the _info module of zabbix_template?
##### SUMMARY This is a suggestion: the zabbix_template module has a dump option, but I think this functionality should be implemented as an `_info` module instead. **Reason** The zabbix_user_group module was asked to move to an `_info` module instead of using dump: https://github.com/ansible/ansible/pull/58051#pullrequestreview-304208011 The development documentation also states that modules which return general information need to be `_info` modules: https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_checklist.html?highlight=_info%20_facts#contributing-to-ansible-objective-requirements If a zabbix_template_info module is needed, I have already created one, so I can open a PR. ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME zabbix_template thanks
https://github.com/ansible/ansible/issues/63907
https://github.com/ansible/ansible/pull/65236
eb423ecec0a51b5f2c2ac54abf8d07bede9f3fdf
2242c385b25cc6fdcaaa53560c7f56f819bafe23
2019-10-24T14:40:39Z
python
2019-11-26T11:18:55Z
test/integration/targets/zabbix_template/defaults/main.yml
closed
ansible/ansible
https://github.com/ansible/ansible
63,907
Zabbix: Need the _info module of zabbix_template?
##### SUMMARY This is a suggestion: the zabbix_template module has a dump option, but I think this functionality should be implemented as an `_info` module instead. **Reason** The zabbix_user_group module was asked to move to an `_info` module instead of using dump: https://github.com/ansible/ansible/pull/58051#pullrequestreview-304208011 The development documentation also states that modules which return general information need to be `_info` modules: https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_checklist.html?highlight=_info%20_facts#contributing-to-ansible-objective-requirements If a zabbix_template_info module is needed, I have already created one, so I can open a PR. ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME zabbix_template thanks
https://github.com/ansible/ansible/issues/63907
https://github.com/ansible/ansible/pull/65236
eb423ecec0a51b5f2c2ac54abf8d07bede9f3fdf
2242c385b25cc6fdcaaa53560c7f56f819bafe23
2019-10-24T14:40:39Z
python
2019-11-26T11:18:55Z
test/integration/targets/zabbix_template/meta/main.yml
closed
ansible/ansible
https://github.com/ansible/ansible
63,907
Zabbix: Need the _info module of zabbix_template?
##### SUMMARY This is a suggestion: the zabbix_template module has a dump option, but I think this functionality should be implemented as an `_info` module instead. **Reason** The zabbix_user_group module was asked to move to an `_info` module instead of using dump: https://github.com/ansible/ansible/pull/58051#pullrequestreview-304208011 The development documentation also states that modules which return general information need to be `_info` modules: https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_checklist.html?highlight=_info%20_facts#contributing-to-ansible-objective-requirements If a zabbix_template_info module is needed, I have already created one, so I can open a PR. ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME zabbix_template thanks
https://github.com/ansible/ansible/issues/63907
https://github.com/ansible/ansible/pull/65236
eb423ecec0a51b5f2c2ac54abf8d07bede9f3fdf
2242c385b25cc6fdcaaa53560c7f56f819bafe23
2019-10-24T14:40:39Z
python
2019-11-26T11:18:55Z
test/integration/targets/zabbix_template/tasks/main.yml
closed
ansible/ansible
https://github.com/ansible/ansible
64,937
ignore_unreachable documentation needs improvement
<!--- Verify first that your improvement is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below, add suggestions to wording or structure --> The `ignore_unreachable` keyword is not offered in the error handling documentation. The `ignore_unreachable` keyword description is misleading. <!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? --> ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME <!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure --> playbooks_error_handling.rst playbooks_keywords.rst.j2 keyword_desc.yml ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.10.0.dev0 config file = /etc/ansible/ansible.cfg configured module search path = ['/home/aschultz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/aschultz/dev/vendor/ansible/lib/ansible executable location = /home/aschultz/dev/vendor/ansible/bin/ansible python version = 3.7.3 (default, Apr 09 2019, 05:18:21) [GCC] ``` ##### ADDITIONAL INFORMATION <!--- Describe how this improves the documentation, e.g. before/after situation or screenshots --> With this change, I'd expect users to more easily discover `ignore_unreachable` and better understand that it ignores task errors rather than the unreachable host itself. ##### RELATED ISSUES #18075 #26227 #50946 <!--- HINT: You can paste gist.github.com links for larger files -->
https://github.com/ansible/ansible/issues/64937
https://github.com/ansible/ansible/pull/64938
2460470ae784f894d861f07dddef99fde332ff53
756ac826fe31666116d96ece2d07f9f9b4dd26af
2019-11-16T21:29:43Z
python
2019-11-26T18:35:02Z
docs/docsite/keyword_desc.yml
accelerate: "*DEPRECATED*, set to True to use accelerate connection plugin." accelerate_ipv6: "*DEPRECATED*, set to True to force accelerate plugin to use ipv6 for its connection." accelerate_port: "*DEPRECATED*, set to override default port use for accelerate connection." action: "The 'action' to execute for a task, it normally translates into a C(module) or action plugin." args: "A secondary way to add arguments into a task. Takes a dictionary in which keys map to options and values." always: List of tasks, in a block, that execute no matter if there is an error in the block or not. any_errors_fatal: Force any un-handled task errors on any host to propagate to all hosts and end the play. async: Run a task asynchronously if the C(action) supports this; value is maximum runtime in seconds. become: Boolean that controls if privilege escalation is used or not on :term:`Task` execution. become_flags: A string of flag(s) to pass to the privilege escalation program when :term:`become` is True. become_method: Which method of privilege escalation to use (such as sudo or su). become_user: "User that you 'become' after using privilege escalation. The remote/login user must have permissions to become this user." block: List of tasks in a block. changed_when: "Conditional expression that overrides the task's normal 'changed' status." check_mode: | A boolean that controls if a task is executed in 'check' mode .. seealso:: :ref:`check_mode_dry` connection: | Allows you to change the connection plugin used for tasks to execute on the target. .. seealso:: :ref:`using_connection` debugger: Enable debugging tasks based on state of the task result. See :ref:`playbook_debugger` delay: Number of seconds to delay between retries. This setting is only used in combination with :term:`until`. delegate_facts: Boolean that allows you to apply facts to a delegated host instead of inventory_hostname. delegate_to: Host to execute task instead of the target (inventory_hostname). 
Connection vars from the delegated host will also be used for the task. diff: "Toggle to make tasks return 'diff' information or not." environment: A dictionary that gets converted into environment vars to be provided for the task upon execution. This can ONLY be used with modules. This isn't supported for any other type of plugins nor Ansible itself nor its configuration, it just sets the variables for the code responsible for executing the task. fact_path: Set the fact path option for the fact gathering plugin controlled by :term:`gather_facts`. failed_when: "Conditional expression that overrides the task's normal 'failed' status." force_handlers: Will force notified handler execution for hosts even if they failed during the play. Will not trigger if the play itself fails. gather_facts: "A boolean that controls if the play will automatically run the 'setup' task to gather facts for the hosts." gather_subset: Allows you to pass subset options to the fact gathering plugin controlled by :term:`gather_facts`. gather_timeout: Allows you to set the timeout for the fact gathering plugin controlled by :term:`gather_facts`. handlers: "A section with tasks that are treated as handlers, these won't get executed normally, only when notified after each section of tasks is complete. A handler's `listen` field is not templatable." hosts: "A list of groups, hosts or host pattern that translates into a list of hosts that are the play's target." ignore_errors: Boolean that allows you to ignore task failures and continue with play. It does not affect connection errors. ignore_unreachable: Boolean that allows you to ignore unreachable hosts and continue with play. This does not affect other task errors (see :term:`ignore_errors`) but is useful for groups of volatile/ephemeral hosts. 
loop: "Takes a list for the task to iterate over, saving each list element into the ``item`` variable (configurable via loop_control)" loop_control: | Several keys here allow you to modify/set loop behaviour in a task. .. seealso:: :ref:`loop_control` max_fail_percentage: Can be used to abort the run after a given percentage of hosts in the current batch has failed. module_defaults: Specifies default parameter values for modules. name: "Identifier. Can be used for documentation, or in tasks/handlers." no_log: Boolean that controls information disclosure. notify: "List of handlers to notify when the task returns a 'changed=True' status." order: Controls the sorting of hosts as they are used for executing the play. Possible values are inventory (default), sorted, reverse_sorted, reverse_inventory and shuffle. poll: Sets the polling interval in seconds for async tasks (default 10s). port: Used to override the default port used in a connection. post_tasks: A list of tasks to execute after the :term:`tasks` section. pre_tasks: A list of tasks to execute before :term:`roles`. remote_user: User used to log into the target via the connection plugin. register: Name of variable that will contain task status and module return data. rescue: List of tasks in a :term:`block` that run if there is a task error in the main :term:`block` list. retries: "Number of retries before giving up in a :term:`until` loop. This setting is only used in combination with :term:`until`." roles: List of roles to be imported into the play. run_once: Boolean that will bypass the host loop, forcing the task to attempt to execute on the first host available and afterwards apply any results and facts to all active hosts in the same batch. serial: | Explicitly define how Ansible batches the execution of the current play on the play's target .. seealso:: :ref:`rolling_update_batch_size` strategy: Allows you to choose the strategy plugin to use for the play.
tags: Tags applied to the task or included tasks, this allows selecting subsets of tasks from the command line. tasks: Main list of tasks to execute in the play, they run after :term:`roles` and before :term:`post_tasks`. throttle: Limit number of concurrent task runs on task, block and playbook level. This is independent of the forks and serial settings, but cannot be set higher than those limits. For example, if forks is set to 10 and the throttle is set to 15, at most 10 hosts will be operated on in parallel. until: "This keyword implies a ':term:`retries` loop' that will go on until the condition supplied here is met or we hit the :term:`retries` limit." vars: Dictionary/map of variables. vars_files: List of files that contain vars to include in the play. vars_prompt: List of variables to prompt for. when: Conditional expression, determines if an iteration of a task is run or not.
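The two ``ignore_*`` keywords described above cover different failure classes: ``ignore_errors`` only covers task failures, while ``ignore_unreachable`` only covers unreachable-host errors. A minimal playbook sketch (hosts and commands here are illustrative, not taken from this file):

```yaml
# Hypothetical playbook contrasting the two keywords described above.
- hosts: all
  gather_facts: no
  tasks:
    - name: A task failure is ignored; the play continues on this host
      command: /bin/false
      ignore_errors: yes

    - name: An unreachable-host error is ignored; the play continues
      ping:
      ignore_unreachable: yes
```

Note that neither keyword covers the other's failure class, which matches the descriptions above.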
closed
ansible/ansible
https://github.com/ansible/ansible
64,937
ignore_unreachable documentation needs improvement
<!--- Verify first that your improvement is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below, add suggestions to wording or structure --> The `ignore_unreachable` keyword is not offered in the error handling documentation. The `ignore_unreachable` keyword description is misleading. <!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? --> ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME <!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure --> playbooks_error_handling.rst playbooks_keywords.rst.j2 keyword_desc.yml ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.10.0.dev0 config file = /etc/ansible/ansible.cfg configured module search path = ['/home/aschultz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /home/aschultz/dev/vendor/ansible/lib/ansible executable location = /home/aschultz/dev/vendor/ansible/bin/ansible python version = 3.7.3 (default, Apr 09 2019, 05:18:21) [GCC] ``` ##### ADDITIONAL INFORMATION <!--- Describe how this improves the documentation, e.g. before/after situation or screenshots --> With this change, I'd expect users to more easily discover `ignore_unreachable` and better understand that it ignores task errors rather than the unreachable host itself. ##### RELATED ISSUES #18075 #26227 #50946 <!--- HINT: You can paste gist.github.com links for larger files -->
https://github.com/ansible/ansible/issues/64937
https://github.com/ansible/ansible/pull/64938
2460470ae784f894d861f07dddef99fde332ff53
756ac826fe31666116d96ece2d07f9f9b4dd26af
2019-11-16T21:29:43Z
python
2019-11-26T18:35:02Z
docs/docsite/rst/user_guide/playbooks_error_handling.rst
Error Handling In Playbooks =========================== .. contents:: Topics Ansible normally has defaults that make sure to check the return codes of commands and modules and it fails fast -- forcing an error to be dealt with unless you decide otherwise. Sometimes a command that returns different than 0 isn't an error. Sometimes a command might not always need to report that it 'changed' the remote system. This section describes how to change the default behavior of Ansible for certain tasks so output and error handling behavior is as desired. .. _ignoring_failed_commands: Ignoring Failed Commands ```````````````````````` Generally playbooks will stop executing any more steps on a host that has a task fail. Sometimes, though, you want to continue on. To do so, write a task that looks like this:: - name: this will not be counted as a failure command: /bin/false ignore_errors: yes Note that the above system only governs the return value of failure of the particular task, so if you have an undefined variable used or a syntax error, it will still raise an error that users will need to address. Note that this will not prevent failures on connection or execution issues. This feature only works when the task must be able to run and return a value of 'failed'. .. _resetting_unreachable: Resetting Unreachable Hosts ``````````````````````````` .. versionadded:: 2.2 Connection failures set hosts as 'UNREACHABLE', which will remove them from the list of active hosts for the run. To recover from these issues you can use `meta: clear_host_errors` to have all currently flagged hosts reactivated, so subsequent tasks can try to use them again. .. _handlers_and_failure: Handlers and Failure ```````````````````` When a task fails on a host, handlers which were previously notified will *not* be run on that host. This can lead to cases where an unrelated failure can leave a host in an unexpected state. 
For example, a task could update a configuration file and notify a handler to restart some service. If a task later on in the same play fails, the service will not be restarted despite the configuration change. You can change this behavior with the ``--force-handlers`` command-line option, or by including ``force_handlers: True`` in a play, or ``force_handlers = True`` in ansible.cfg. When handlers are forced, they will run when notified even if a task fails on that host. (Note that certain errors could still prevent the handler from running, such as a host becoming unreachable.) .. _controlling_what_defines_failure: Controlling What Defines Failure ```````````````````````````````` Ansible lets you define what "failure" means in each task using the ``failed_when`` conditional. As with all conditionals in Ansible, lists of multiple ``failed_when`` conditions are joined with an implicit ``and``, meaning the task only fails when *all* conditions are met. If you want to trigger a failure when any of the conditions is met, you must define the conditions in a string with an explicit ``or`` operator. You may check for failure by searching for a word or phrase in the output of a command:: - name: Fail task when the command error output prints FAILED command: /usr/bin/example-command -x -y -z register: command_result failed_when: "'FAILED' in command_result.stderr" or based on the return code:: - name: Fail task when both files are identical raw: diff foo/file1 bar/file2 register: diff_cmd failed_when: diff_cmd.rc == 0 or diff_cmd.rc >= 2 In previous versions of Ansible, this can still be accomplished as follows:: - name: this command prints FAILED when it fails command: /usr/bin/example-command -x -y -z register: command_result ignore_errors: True - name: fail the play if the previous command did not succeed fail: msg: "the command failed" when: "'FAILED' in command_result.stderr" You can also combine multiple conditions for failure.
This task will fail if both conditions are true:: - name: Check if a file exists in temp and fail task if it does command: ls /tmp/this_should_not_be_here register: result failed_when: - result.rc == 0 - '"No such" not in result.stdout' If you want the task to fail when only one condition is satisfied, change the ``failed_when`` definition to:: failed_when: result.rc == 0 or "No such" not in result.stdout If you have too many conditions to fit neatly into one line, you can split it into a multi-line yaml value with ``>``:: - name: example of many failed_when conditions with OR shell: "./myBinary" register: ret failed_when: > ("No such file or directory" in ret.stdout) or (ret.stderr != '') or (ret.rc == 10) .. _override_the_changed_result: Overriding The Changed Result ````````````````````````````` When a shell/command or other module runs it will typically report "changed" status based on whether it thinks it affected machine state. Sometimes you will know, based on the return code or output that it did not make any changes, and wish to override the "changed" result such that it does not appear in report output or does not cause handlers to fire:: tasks: - shell: /usr/bin/billybass --mode="take me to the river" register: bass_result changed_when: "bass_result.rc != 2" # this will never report 'changed' status - shell: wall 'beep' changed_when: False You can also combine multiple conditions to override "changed" result:: - command: /bin/fake_command register: result ignore_errors: True changed_when: - '"ERROR" in result.stderr' - result.rc == 2 Aborting the play ````````````````` Sometimes it's desirable to abort the entire play on failure, not just skip remaining tasks for a host. The ``any_errors_fatal`` option will end the play and prevent any subsequent plays from running. When an error is encountered, all hosts in the current batch are given the opportunity to finish the fatal task and then the execution of the play stops. 
``any_errors_fatal`` can be set at the play or block level:: - hosts: somehosts any_errors_fatal: true roles: - myrole - hosts: somehosts tasks: - block: - include_tasks: mytasks.yml any_errors_fatal: true for finer-grained control ``max_fail_percentage`` can be used to abort the run after a given percentage of hosts has failed. Using blocks ```````````` Most of what you can apply to a single task (with the exception of loops) can be applied at the :ref:`playbooks_blocks` level, which also makes it much easier to set data or directives common to the tasks. Blocks also introduce the ability to handle errors in a way similar to exceptions in most programming languages. Blocks only deal with 'failed' status of a task. A bad task definition or an unreachable host are not 'rescuable' errors:: tasks: - name: Handle the error block: - debug: msg: 'I execute normally' - name: i force a failure command: /bin/false - debug: msg: 'I never execute, due to the above task failing, :-(' rescue: - debug: msg: 'I caught an error, can do stuff here to fix it, :-)' This will 'revert' the failed status of the outer ``block`` task for the run and the play will continue as if it had succeeded. See :ref:`block_error_handling` for more examples. .. seealso:: :ref:`playbooks_intro` An introduction to playbooks :ref:`playbooks_best_practices` Best practices in playbooks :ref:`playbooks_conditionals` Conditional statements in playbooks :ref:`playbooks_variables` All about variables `User Mailing List <https://groups.google.com/group/ansible-devel>`_ Have a question? Stop by the google group! `irc.freenode.net <http://irc.freenode.net>`_ #ansible IRC chat channel
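The ``max_fail_percentage`` escape hatch mentioned in the abort section above is shown without an example there; a minimal sketch (group name and command are illustrative):

```yaml
# Hypothetical sketch of max_fail_percentage from the section above:
# with serial: 10, more than 3 failed hosts in a batch aborts the play.
- hosts: webservers
  max_fail_percentage: 30
  serial: 10
  tasks:
    - name: A task that may fail on some hosts
      command: /usr/bin/example-command
```

The percentage must be exceeded, not merely equalled: with a batch of 10 hosts and ``max_fail_percentage: 30``, four failures abort the play, three do not.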
closed
ansible/ansible
https://github.com/ansible/ansible
63,721
Inventory plugin azure_rm: incomplete inventory results if no primary ipConfiguration field
##### SUMMARY When using azure_rm inventory plug-in, given Azure subscription, the dynamic inventory results in an incomplete list of VMs, compared to the one that can be obtained in portal.azure.com or via Azure CLI. The issue is mainly due to a parsing error that occurs in azure_rm.py. The code expects the 'primary' field to exist in the __nic_model['ipConfigurations'][X]['properties']__ dict. In our existing fleet of Azure VMs, some VMs do not have a primary ipConfiguration set. More precisely, that field is set to 'null', not 'True' or 'False'. Example: 1. Using Azure cli, dump the ipConfiguration of VM vm01 that has the 'primary' field set to 'true': ```bash az network nic ip-config list --subscription MySubscription -g MyRG --nic-name vm01nic [ { "applicationGatewayBackendAddressPools": null, "applicationSecurityGroups": null, "etag": "W/\"***\"", "id": "/subscriptions/***/resourceGroups/***/providers/Microsoft.Network/networkInterfaces/vm01nic/ipConfigurations/ipconfig1", "loadBalancerBackendAddressPools": null, "loadBalancerInboundNatRules": null, "name": "ipconfig1", "primary": true, "privateIpAddress": "***", "privateIpAddressVersion": "IPv4", "privateIpAllocationMethod": "Static", "provisioningState": "Succeeded", "publicIpAddress": null, "resourceGroup": "***", "subnet": { "addressPrefix": null, "etag": null, "id": "/subscriptions/***/resourceGroups/***/providers/Microsoft.Network/virtualNetworks/***/subnets/***", "ipConfigurations": null, "name": null, "networkSecurityGroup": null, "provisioningState": null, "resourceGroup": "***", "resourceNavigationLinks": null, "routeTable": null, "serviceEndpoints": null }, "type": "Microsoft.Network/networkInterfaces/ipConfigurations" } ] ``` 2. 
Using Azure cli, dump the ipConfiguration of VM vm02 that does not have a boolean value for the 'primary' field: ```bash az network nic ip-config list --subscription MySubscription -g MyRG --nic-name vm02nic [ { "applicationGatewayBackendAddressPools": null, "applicationSecurityGroups": null, "etag": "W/\"***\"", "id": "/subscriptions/***/resourceGroups/***/providers/Microsoft.Network/networkInterfaces/vm02nic/ipConfigurations/ipconfig1", "loadBalancerBackendAddressPools": null, "loadBalancerInboundNatRules": null, "name": "ipconfig1", "primary": null, "privateIpAddress": "***", "privateIpAddressVersion": "IPv4", "privateIpAllocationMethod": "Static", "provisioningState": "Succeeded", "publicIpAddress": null, "resourceGroup": "***", "subnet": { "addressPrefix": null, "etag": null, "id": "/subscriptions/***/resourceGroups/***/providers/Microsoft.Network/virtualNetworks/***/subnets/***", "ipConfigurations": null, "name": null, "networkSecurityGroup": null, "provisioningState": null, "resourceGroup": "***", "resourceNavigationLinks": null, "routeTable": null, "serviceEndpoints": null }, "type": "Microsoft.Network/networkInterfaces/ipConfigurations" } ] ``` In the second example, because of the 'null' value, that field is later removed from the __nic_model__ dict., causing a python error while fetching the inventory. 
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lib/ansible/plugins/inventory/azure_rm.py ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below ansible 2.8.5 config file = /home/***/ansible/ansible_dev.cfg configured module search path = [u'/home/***/ansible/library'] ansible python module location = /home/***/ansible/.venv/lib/python2.7/site-packages/ansible executable location = /home/***/ansible/.venv/bin/ansible python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below DEFAULT_BECOME_METHOD(/home/***/ansible/ansible_dev.cfg) = sudo DEFAULT_BECOME_USER(/home/***/ansible/ansible_dev.cfg) = root DEFAULT_GATHER_TIMEOUT(/home/***/ansible/ansible_dev.cfg) = 30 DEFAULT_HOST_LIST(/home/***/ansible/ansible_dev.cfg) = [u'/home/***/ansible/inventory/azure-dev'] DEFAULT_LOG_PATH(/home/***/ansible/ansible_dev.cfg) = /home/***/ansible.log DEFAULT_MODULE_PATH(/home/***/ansible/ansible_dev.cfg) = [u'/home/***/ansible/library'] DEFAULT_ROLES_PATH(/home/***/ansible/ansible_dev.cfg) = [u'/home/***/ansible/roles'] DEFAULT_VAULT_IDENTITY_LIST(/home/***/ansible/ansible_dev.cfg) = *** HOST_KEY_CHECKING(/home/***/ansible/ansible_dev.cfg) = False INVENTORY_ENABLED(/home/***/ansible/ansible_dev.cfg) = [u'yaml', u'azure_rm', u'script'] RETRY_FILES_ENABLED(/home/***/ansible/ansible_dev.cfg) = False ``` ##### OS / ENVIRONMENT Ansible controller on RHEL 7.4 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> 1. Use a resource group with one VM that is correctly retrieved via the azure_rm plugin. 1. 
In that resource group, create another VM with at least one NIC but with no primary ipConfiguration (I don't know whether it is still possible to do so but in our existing fleet of VMs, some do not have any NIC with a primary ipConfiguration) 1. Relaunch the dynamic inventory. ##### EXPECTED RESULTS All resource group VMs are retrieved. ##### ACTUAL RESULTS No VM shows up
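The failure described above comes down to code that assumes the ``primary`` field is a boolean, when Azure may return ``null``. A minimal, hypothetical Python sketch (not the plugin's actual implementation) of a null-tolerant way to order a NIC's ipConfigurations so a primary config, if any, comes first:

```python
# Hypothetical sketch: order ipConfigurations with any primary config first,
# tolerating the '"primary": null' case shown in the CLI output above.

def order_ip_configs(nic_model):
    """Return ipConfigurations with a primary config, if any, first.

    bool(None) is False, so a null (or missing) 'primary' flag is simply
    treated as non-primary instead of causing an error.
    """
    configs = nic_model.get('ipConfigurations', [])
    return sorted(
        configs,
        key=lambda c: bool(c.get('properties', {}).get('primary')),
        reverse=True,
    )

vm01_nic = {'ipConfigurations': [
    {'name': 'ipconfig2', 'properties': {'primary': False}},
    {'name': 'ipconfig1', 'properties': {'primary': True}},
]}
vm02_nic = {'ipConfigurations': [
    {'name': 'ipconfig1', 'properties': {'primary': None}},
]}

print([c['name'] for c in order_ip_configs(vm01_nic)])  # primary config first
print([c['name'] for c in order_ip_configs(vm02_nic)])  # no error on null
```

Because ``sorted`` is stable, configs with equal keys keep their original relative order, so a NIC with no primary config is returned unchanged.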
https://github.com/ansible/ansible/issues/63721
https://github.com/ansible/ansible/pull/63722
4c589661c2add45023f2bff9203e0c4e11efe5f6
4b240a5d740c20c4821eb944e2df37446a4b0dd2
2019-10-20T12:35:00Z
python
2019-11-27T07:53:05Z
lib/ansible/plugins/inventory/azure_rm.py
# Copyright (c) 2018 Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type DOCUMENTATION = r''' name: azure_rm plugin_type: inventory short_description: Azure Resource Manager inventory plugin extends_documentation_fragment: - azure description: - Query VM details from Azure Resource Manager - Requires a YAML configuration file whose name ends with 'azure_rm.(yml|yaml)' - By default, sets C(ansible_host) to the first public IP address found (preferring the primary NIC). If no public IPs are found, the first private IP (also preferring the primary NIC). The default may be overridden via C(hostvar_expressions); see examples. options: plugin: description: marks this as an instance of the 'azure_rm' plugin required: true choices: ['azure_rm'] include_vm_resource_groups: description: A list of resource group names to search for virtual machines. '\*' will include all resource groups in the subscription. default: ['*'] include_vmss_resource_groups: description: A list of resource group names to search for virtual machine scale sets (VMSSs). '\*' will include all resource groups in the subscription. default: [] fail_on_template_errors: description: When false, template failures during group and filter processing are silently ignored (eg, if a filter or group expression refers to an undefined host variable) choices: [True, False] default: True keyed_groups: description: Creates groups based on the value of a host variable. Requires a list of dictionaries, defining C(key) (the source dictionary-typed variable), C(prefix) (the prefix to use for the new group name), and optionally C(separator) (which defaults to C(_)) conditional_groups: description: A mapping of group names to Jinja2 expressions. When the mapped expression is true, the host is added to the named group. 
hostvar_expressions: description: A mapping of hostvar names to Jinja2 expressions. The value for each host is the result of the Jinja2 expression (which may refer to any of the host's existing variables at the time this inventory plugin runs). exclude_host_filters: description: Excludes hosts from the inventory with a list of Jinja2 conditional expressions. Each expression in the list is evaluated for each host; when the expression is true, the host is excluded from the inventory. default: [] batch_fetch: description: To improve performance, results are fetched using an unsupported batch API. Disabling C(batch_fetch) uses a much slower serial fetch, resulting in many more round-trips. Generally only useful for troubleshooting. default: true default_host_filters: description: A default set of filters that is applied in addition to the conditions in C(exclude_host_filters) to exclude powered-off and not-fully-provisioned hosts. Set this to a different value or empty list if you need to include hosts in these states. default: ['powerstate != "running"', 'provisioning_state != "succeeded"'] use_contrib_script_compatible_sanitization: description: - By default this plugin is using a general group name sanitization to create safe and usable group names for use in Ansible. This option allows you to override that, in efforts to allow migration from the old inventory script and matches the sanitization of groups when the script's ``replace_dash_in_groups`` option is set to ``False``. To replicate behavior of ``replace_dash_in_groups = True`` with constructed groups, you will need to replace hyphens with underscores via the regex_replace filter for those entries. - For this to work you should also turn off the TRANSFORM_INVALID_GROUP_CHARS setting, otherwise the core engine will just use the standard sanitization on top. 
- This is not the default as such names break certain functionality as not all characters are valid Python identifiers which group names end up being used as. type: bool default: False version_added: '2.8' plain_host_names: description: - By default this plugin will use globally unique host names. This option allows you to override that, and use the name that matches the old inventory script naming. - This is not the default, as these names are not truly unique, and can conflict with other hosts. The default behavior will add extra hashing to the end of the hostname to prevent such conflicts. type: bool default: False version_added: '2.8' ''' EXAMPLES = ''' # The following host variables are always available: # public_ipv4_addresses: all public IP addresses, with the primary IP config from the primary NIC first # public_dns_hostnames: all public DNS hostnames, with the primary IP config from the primary NIC first # private_ipv4_addresses: all private IP addresses, with the primary IP config from the primary NIC first # id: the VM's Azure resource ID, eg /subscriptions/00000000-0000-0000-1111-1111aaaabb/resourceGroups/my_rg/providers/Microsoft.Compute/virtualMachines/my_vm # location: the VM's Azure location, eg 'westus', 'eastus' # name: the VM's resource name, eg 'myvm' # os_profile: The VM OS properties, a dictionary, only system is currently available, eg 'os_profile.system not in ['linux']' # powerstate: the VM's current power state, eg: 'running', 'stopped', 'deallocated' # provisioning_state: the VM's current provisioning state, eg: 'succeeded' # tags: dictionary of the VM's defined tag values # resource_type: the VM's resource type, eg: 'Microsoft.Compute/virtualMachine', 'Microsoft.Compute/virtualMachineScaleSets/virtualMachines' # vmid: the VM's internal SMBIOS ID, eg: '36bca69d-c365-4584-8c06-a62f4a1dc5d2' # vmss: if the VM is a member of a scaleset (vmss), a dictionary including the id and name of the parent scaleset # sample 'myazuresub.azure_rm.yaml'
# required for all azure_rm inventory plugin configs plugin: azure_rm # forces this plugin to use a CLI auth session instead of the automatic auth source selection (eg, prevents the # presence of 'ANSIBLE_AZURE_RM_X' environment variables from overriding CLI auth) auth_source: cli # fetches VMs from an explicit list of resource groups instead of default all (- '*') include_vm_resource_groups: - myrg1 - myrg2 # fetches VMs from VMSSs in all resource groups (defaults to no VMSS fetch) include_vmss_resource_groups: - '*' # places a host in the named group if the associated condition evaluates to true conditional_groups: # since this will be true for every host, every host sourced from this inventory plugin config will be in the # group 'all_the_hosts' all_the_hosts: true # if the VM's "name" variable contains "dbserver", it will be placed in the 'db_hosts' group db_hosts: "'dbserver' in name" # adds variables to each host found by this inventory plugin, whose values are the result of the associated expression hostvar_expressions: my_host_var: # A statically-valued expression has to be both single and double-quoted, or use escaped quotes, since the outer # layer of quotes will be consumed by YAML. Without the second set of quotes, it interprets 'staticvalue' as a # variable instead of a string literal. some_statically_valued_var: "'staticvalue'" # overrides the default ansible_host value with a custom Jinja2 expression, in this case, the first DNS hostname, or # if none are found, the first public IP address. ansible_host: (public_dns_hostnames + public_ipv4_addresses) | first # places hosts in dynamically-created groups based on a variable value. keyed_groups: # places each host in a group named 'tag_(tag name)_(tag value)' for each tag on a VM. 
- prefix: tag key: tags # places each host in a group named 'azure_loc_(location name)', depending on the VM's location - prefix: azure_loc key: location # places host in a group named 'some_tag_X' using the value of the 'sometag' tag on a VM as X, and defaulting to the # value 'none' (eg, the group 'some_tag_none') if the 'sometag' tag is not defined for a VM. - prefix: some_tag key: tags.sometag | default('none') # excludes a host from the inventory when any of these expressions is true, can refer to any vars defined on the host exclude_host_filters: # excludes hosts in the eastus region - location in ['eastus'] # excludes hosts that are powered off - powerstate != 'running' ''' # FUTURE: do we need a set of sane default filters, separate from the user-defineable ones? # eg, powerstate==running, provisioning_state==succeeded import hashlib import json import re import uuid try: from queue import Queue, Empty except ImportError: from Queue import Queue, Empty from collections import namedtuple from ansible import release from ansible.plugins.inventory import BaseInventoryPlugin, Constructable from ansible.module_utils.six import iteritems from ansible.module_utils.azure_rm_common import AzureRMAuth from ansible.errors import AnsibleParserError, AnsibleError from ansible.module_utils.parsing.convert_bool import boolean from ansible.module_utils._text import to_native, to_bytes from itertools import chain from msrest import ServiceClient, Serializer, Deserializer from msrestazure import AzureConfiguration from msrestazure.polling.arm_polling import ARMPolling from msrestazure.tools import parse_resource_id class AzureRMRestConfiguration(AzureConfiguration): def __init__(self, credentials, subscription_id, base_url=None): if credentials is None: raise ValueError("Parameter 'credentials' must not be None.") if subscription_id is None: raise ValueError("Parameter 'subscription_id' must not be None.") if not base_url: base_url = 'https://management.azure.com' 
super(AzureRMRestConfiguration, self).__init__(base_url) self.add_user_agent('ansible-dynamic-inventory/{0}'.format(release.__version__)) self.credentials = credentials self.subscription_id = subscription_id UrlAction = namedtuple('UrlAction', ['url', 'api_version', 'handler', 'handler_args']) # FUTURE: add Cacheable support once we have a sane serialization format class InventoryModule(BaseInventoryPlugin, Constructable): NAME = 'azure_rm' def __init__(self): super(InventoryModule, self).__init__() self._serializer = Serializer() self._deserializer = Deserializer() self._hosts = [] self._filters = None # FUTURE: use API profiles with defaults self._compute_api_version = '2017-03-30' self._network_api_version = '2015-06-15' self._default_header_parameters = {'Content-Type': 'application/json; charset=utf-8'} self._request_queue = Queue() self.azure_auth = None self._batch_fetch = False def verify_file(self, path): ''' :param loader: an ansible.parsing.dataloader.DataLoader object :param path: the path to the inventory config file :return the contents of the config file ''' if super(InventoryModule, self).verify_file(path): if re.match(r'.{0,}azure_rm\.y(a)?ml$', path): return True # display.debug("azure_rm inventory filename must end with 'azure_rm.yml' or 'azure_rm.yaml'") return False def parse(self, inventory, loader, path, cache=True): super(InventoryModule, self).parse(inventory, loader, path) self._read_config_data(path) if self.get_option('use_contrib_script_compatible_sanitization'): self._sanitize_group_name = self._legacy_script_compatible_group_sanitization self._batch_fetch = self.get_option('batch_fetch') self._legacy_hostnames = self.get_option('plain_host_names') self._filters = self.get_option('exclude_host_filters') + self.get_option('default_host_filters') try: self._credential_setup() self._get_hosts() except Exception: raise def _credential_setup(self): auth_options = dict( auth_source=self.get_option('auth_source'), 
profile=self.get_option('profile'), subscription_id=self.get_option('subscription_id'), client_id=self.get_option('client_id'), secret=self.get_option('secret'), tenant=self.get_option('tenant'), ad_user=self.get_option('ad_user'), password=self.get_option('password'), cloud_environment=self.get_option('cloud_environment'), cert_validation_mode=self.get_option('cert_validation_mode'), api_profile=self.get_option('api_profile'), adfs_authority_url=self.get_option('adfs_authority_url') ) self.azure_auth = AzureRMAuth(**auth_options) self._clientconfig = AzureRMRestConfiguration(self.azure_auth.azure_credentials, self.azure_auth.subscription_id, self.azure_auth._cloud_environment.endpoints.resource_manager) self._client = ServiceClient(self._clientconfig.credentials, self._clientconfig) def _enqueue_get(self, url, api_version, handler, handler_args=None): if not handler_args: handler_args = {} self._request_queue.put_nowait(UrlAction(url=url, api_version=api_version, handler=handler, handler_args=handler_args)) def _enqueue_vm_list(self, rg='*'): if not rg or rg == '*': url = '/subscriptions/{subscriptionId}/providers/Microsoft.Compute/virtualMachines' else: url = '/subscriptions/{subscriptionId}/resourceGroups/{rg}/providers/Microsoft.Compute/virtualMachines' url = url.format(subscriptionId=self._clientconfig.subscription_id, rg=rg) self._enqueue_get(url=url, api_version=self._compute_api_version, handler=self._on_vm_page_response) def _enqueue_vmss_list(self, rg=None): if not rg or rg == '*': url = '/subscriptions/{subscriptionId}/providers/Microsoft.Compute/virtualMachineScaleSets' else: url = '/subscriptions/{subscriptionId}/resourceGroups/{rg}/providers/Microsoft.Compute/virtualMachineScaleSets' url = url.format(subscriptionId=self._clientconfig.subscription_id, rg=rg) self._enqueue_get(url=url, api_version=self._compute_api_version, handler=self._on_vmss_page_response) def _get_hosts(self): for vm_rg in self.get_option('include_vm_resource_groups'): 
self._enqueue_vm_list(vm_rg) for vmss_rg in self.get_option('include_vmss_resource_groups'): self._enqueue_vmss_list(vmss_rg) if self._batch_fetch: self._process_queue_batch() else: self._process_queue_serial() constructable_config_strict = boolean(self.get_option('fail_on_template_errors')) constructable_config_compose = self.get_option('hostvar_expressions') constructable_config_groups = self.get_option('conditional_groups') constructable_config_keyed_groups = self.get_option('keyed_groups') for h in self._hosts: inventory_hostname = self._get_hostname(h) if self._filter_host(inventory_hostname, h.hostvars): continue self.inventory.add_host(inventory_hostname) # FUTURE: configurable default IP list? can already do this via hostvar_expressions self.inventory.set_variable(inventory_hostname, "ansible_host", next(chain(h.hostvars['public_ipv4_addresses'], h.hostvars['private_ipv4_addresses']), None)) for k, v in iteritems(h.hostvars): # FUTURE: configurable hostvar prefix? Makes docs harder... 
self.inventory.set_variable(inventory_hostname, k, v) # constructable delegation self._set_composite_vars(constructable_config_compose, h.hostvars, inventory_hostname, strict=constructable_config_strict) self._add_host_to_composed_groups(constructable_config_groups, h.hostvars, inventory_hostname, strict=constructable_config_strict) self._add_host_to_keyed_groups(constructable_config_keyed_groups, h.hostvars, inventory_hostname, strict=constructable_config_strict) # FUTURE: fix underlying inventory stuff to allow us to quickly access known groupvars from reconciled host def _filter_host(self, inventory_hostname, hostvars): self.templar.available_variables = hostvars for condition in self._filters: # FUTURE: should warn/fail if conditional doesn't return True or False conditional = "{{% if {0} %}} True {{% else %}} False {{% endif %}}".format(condition) try: if boolean(self.templar.template(conditional)): return True except Exception as e: if boolean(self.get_option('fail_on_template_errors')): raise AnsibleParserError("Error evaluating filter condition '{0}' for host {1}: {2}".format(condition, inventory_hostname, to_native(e))) continue return False def _get_hostname(self, host): # FUTURE: configurable hostname sources return host.default_inventory_hostname def _process_queue_serial(self): try: while True: item = self._request_queue.get_nowait() resp = self.send_request(item.url, item.api_version) item.handler(resp, **item.handler_args) except Empty: pass def _on_vm_page_response(self, response, vmss=None): next_link = response.get('nextLink') if next_link: self._enqueue_get(url=next_link, api_version=self._compute_api_version, handler=self._on_vm_page_response) if 'value' in response: for h in response['value']: # FUTURE: add direct VM filtering by tag here (performance optimization)? 
self._hosts.append(AzureHost(h, self, vmss=vmss, legacy_name=self._legacy_hostnames)) def _on_vmss_page_response(self, response): next_link = response.get('nextLink') if next_link: self._enqueue_get(url=next_link, api_version=self._compute_api_version, handler=self._on_vmss_page_response) # FUTURE: add direct VMSS filtering by tag here (performance optimization)? for vmss in response['value']: url = '{0}/virtualMachines'.format(vmss['id']) # VMSS instances look close enough to regular VMs that we can share the handler impl... self._enqueue_get(url=url, api_version=self._compute_api_version, handler=self._on_vm_page_response, handler_args=dict(vmss=vmss)) # use the undocumented /batch endpoint to bulk-send up to 500 requests in a single round-trip # def _process_queue_batch(self): while True: batch_requests = [] batch_item_index = 0 batch_response_handlers = dict() try: while batch_item_index < 100: item = self._request_queue.get_nowait() name = str(uuid.uuid4()) query_parameters = {'api-version': item.api_version} req = self._client.get(item.url, query_parameters) batch_requests.append(dict(httpMethod="GET", url=req.url, name=name)) batch_response_handlers[name] = item batch_item_index += 1 except Empty: pass if not batch_requests: break batch_resp = self._send_batch(batch_requests) key_name = None if 'responses' in batch_resp: key_name = 'responses' elif 'value' in batch_resp: key_name = 'value' else: raise AnsibleError("didn't find expected key responses/value in batch response") for idx, r in enumerate(batch_resp[key_name]): status_code = r.get('httpStatusCode') returned_name = r['name'] result = batch_response_handlers[returned_name] if status_code != 200: # FUTURE: error-tolerant operation mode (eg, permissions) raise AnsibleError("a batched request failed with status code {0}, url {1}".format(status_code, result.url)) # FUTURE: store/handle errors from individual handlers result.handler(r['content'], **result.handler_args) def _send_batch(self, 
batched_requests): url = '/batch' query_parameters = {'api-version': '2015-11-01'} body_obj = dict(requests=batched_requests) body_content = self._serializer.body(body_obj, 'object') header = {'x-ms-client-request-id': str(uuid.uuid4())} header.update(self._default_header_parameters) request = self._client.post(url, query_parameters) initial_response = self._client.send(request, header, body_content) # FUTURE: configurable timeout? poller = ARMPolling(timeout=2) poller.initialize(client=self._client, initial_response=initial_response, deserialization_callback=lambda r: self._deserializer('object', r)) poller.run() return poller.resource() def send_request(self, url, api_version): query_parameters = {'api-version': api_version} req = self._client.get(url, query_parameters) resp = self._client.send(req, self._default_header_parameters, stream=False) resp.raise_for_status() content = resp.content return json.loads(content) @staticmethod def _legacy_script_compatible_group_sanitization(name): # note that while this mirrors what the script used to do, it has many issues with unicode and usability in python regex = re.compile(r"[^A-Za-z0-9\_\-]") return regex.sub('_', name) # VM list (all, N resource groups): VM -> InstanceView, N NICs, N PublicIPAddress) # VMSS VMs (all SS, N specific SS, N resource groups?): SS -> VM -> InstanceView, N NICs, N PublicIPAddress) class AzureHost(object): _powerstate_regex = re.compile('^PowerState/(?P<powerstate>.+)$') def __init__(self, vm_model, inventory_client, vmss=None, legacy_name=False): self._inventory_client = inventory_client self._vm_model = vm_model self._vmss = vmss self._instanceview = None self._powerstate = "unknown" self.nics = [] if legacy_name: self.default_inventory_hostname = vm_model['name'] else: # Azure often doesn't provide a globally-unique filename, so use resource name + a chunk of ID hash self.default_inventory_hostname = '{0}_{1}'.format(vm_model['name'], 
hashlib.sha1(to_bytes(vm_model['id'])).hexdigest()[0:4]) self._hostvars = {} inventory_client._enqueue_get(url="{0}/instanceView".format(vm_model['id']), api_version=self._inventory_client._compute_api_version, handler=self._on_instanceview_response) nic_refs = vm_model['properties']['networkProfile']['networkInterfaces'] for nic in nic_refs: # single-nic instances don't set primary, so figure it out... is_primary = nic.get('properties', {}).get('primary', len(nic_refs) == 1) inventory_client._enqueue_get(url=nic['id'], api_version=self._inventory_client._network_api_version, handler=self._on_nic_response, handler_args=dict(is_primary=is_primary)) @property def hostvars(self): if self._hostvars != {}: return self._hostvars system = "unknown" if 'osProfile' in self._vm_model['properties']: if 'linuxConfiguration' in self._vm_model['properties']['osProfile']: system = 'linux' if 'windowsConfiguration' in self._vm_model['properties']['osProfile']: system = 'windows' new_hostvars = dict( public_ipv4_addresses=[], public_dns_hostnames=[], private_ipv4_addresses=[], id=self._vm_model['id'], location=self._vm_model['location'], name=self._vm_model['name'], powerstate=self._powerstate, provisioning_state=self._vm_model['properties']['provisioningState'].lower(), tags=self._vm_model.get('tags', {}), resource_type=self._vm_model.get('type', "unknown"), vmid=self._vm_model['properties']['vmId'], os_profile=dict( system=system, ), vmss=dict( id=self._vmss['id'], name=self._vmss['name'], ) if self._vmss else {}, virtual_machine_size=self._vm_model['properties']['hardwareProfile']['vmSize'] if self._vm_model['properties'].get('hardwareProfile') else None, plan=self._vm_model['properties']['plan']['name'] if self._vm_model['properties'].get('plan') else None, resource_group=parse_resource_id(self._vm_model['id']).get('resource_group').lower() ) # set nic-related values from the primary NIC first for nic in sorted(self.nics, key=lambda n: n.is_primary, reverse=True): # and from 
the primary IP config per NIC first for ipc in sorted(nic._nic_model['properties']['ipConfigurations'], key=lambda i: i['properties']['primary'], reverse=True): private_ip = ipc['properties'].get('privateIPAddress') if private_ip: new_hostvars['private_ipv4_addresses'].append(private_ip) pip_id = ipc['properties'].get('publicIPAddress', {}).get('id') if pip_id: new_hostvars['public_ip_id'] = pip_id pip = nic.public_ips[pip_id] new_hostvars['public_ip_name'] = pip._pip_model['name'] new_hostvars['public_ipv4_addresses'].append(pip._pip_model['properties'].get('ipAddress', None)) pip_fqdn = pip._pip_model['properties'].get('dnsSettings', {}).get('fqdn') if pip_fqdn: new_hostvars['public_dns_hostnames'].append(pip_fqdn) new_hostvars['mac_address'] = nic._nic_model['properties'].get('macAddress') new_hostvars['network_interface'] = nic._nic_model['name'] new_hostvars['network_interface_id'] = nic._nic_model['id'] new_hostvars['security_group_id'] = nic._nic_model['properties']['networkSecurityGroup']['id'] \ if nic._nic_model['properties'].get('networkSecurityGroup') else None new_hostvars['security_group'] = parse_resource_id(new_hostvars['security_group_id'])['resource_name'] \ if nic._nic_model['properties'].get('networkSecurityGroup') else None # set image and os_disk new_hostvars['image'] = {} new_hostvars['os_disk'] = {} storageProfile = self._vm_model['properties'].get('storageProfile') if storageProfile: imageReference = storageProfile.get('imageReference') if imageReference: if imageReference.get('publisher'): new_hostvars['image'] = dict( sku=imageReference.get('sku'), publisher=imageReference.get('publisher'), version=imageReference.get('version'), offer=imageReference.get('offer') ) elif imageReference.get('id'): new_hostvars['image'] = dict( id=imageReference.get('id') ) osDisk = storageProfile.get('osDisk') new_hostvars['os_disk'] = dict( name=osDisk.get('name'), operating_system_type=osDisk.get('osType').lower() if osDisk.get('osType') else None ) 
self._hostvars = new_hostvars return self._hostvars def _on_instanceview_response(self, vm_instanceview_model): self._instanceview = vm_instanceview_model self._powerstate = next((self._powerstate_regex.match(s.get('code', '')).group('powerstate') for s in vm_instanceview_model.get('statuses', []) if self._powerstate_regex.match(s.get('code', ''))), 'unknown') def _on_nic_response(self, nic_model, is_primary=False): nic = AzureNic(nic_model=nic_model, inventory_client=self._inventory_client, is_primary=is_primary) self.nics.append(nic) class AzureNic(object): def __init__(self, nic_model, inventory_client, is_primary=False): self._nic_model = nic_model self.is_primary = is_primary self._inventory_client = inventory_client self.public_ips = {} if nic_model.get('properties', {}).get('ipConfigurations'): for ipc in nic_model['properties']['ipConfigurations']: pip = ipc['properties'].get('publicIPAddress') if pip: self._inventory_client._enqueue_get(url=pip['id'], api_version=self._inventory_client._network_api_version, handler=self._on_pip_response) def _on_pip_response(self, pip_model): self.public_ips[pip_model['id']] = AzurePip(pip_model) class AzurePip(object): def __init__(self, pip_model): self._pip_model = pip_model
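The hostname scheme used by `AzureHost.__init__` above (resource name plus a short SHA-1 chunk of the resource ID, unless legacy names are requested) can be sketched in isolation. This is a minimal sketch; the helper name below is illustrative and not part of the plugin:

```python
import hashlib

def default_inventory_hostname(vm_name, resource_id, legacy_name=False):
    # Mirrors AzureHost.__init__ above: Azure VM names are not globally unique,
    # so append the first 4 hex chars of the SHA-1 of the full resource ID to
    # keep same-named VMs in different resource groups distinct in inventory.
    if legacy_name:
        return vm_name
    suffix = hashlib.sha1(resource_id.encode('utf-8')).hexdigest()[0:4]
    return '{0}_{1}'.format(vm_name, suffix)
```

Because the suffix is derived from the immutable resource ID, the generated hostname is stable across inventory refreshes.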
closed
ansible/ansible
https://github.com/ansible/ansible
64,319
azure_rm_storageaccount_info: Unable to retrieve all storage accounts
##### SUMMARY If the resource_group module parameter is not specified, the module is supposed to fetch all storage accounts of the current subscription. Unfortunately, the module is bugged: its 'list_all' method calls 'list_by_resource_group()' rather than 'list()'. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME * azure_rm_storageaccount_facts * azure_rm_storageaccount_info ##### ANSIBLE VERSION ```paste below ansible 2.8.5 config file = /home/***/ansible/ansible_dev.cfg configured module search path = [u'/home/***/ansible/library'] ansible python module location = /home/***/ansible/.venv/lib/python2.7/site-packages/ansible executable location = /home/***/ansible/.venv/bin/ansible python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] ``` ##### CONFIGURATION ```paste below DEFAULT_BECOME_METHOD(/home/***/ansible/ansible_dev.cfg) = sudo DEFAULT_BECOME_USER(/home/***/ansible/ansible_dev.cfg) = root DEFAULT_GATHER_TIMEOUT(/home/***/ansible/ansible_dev.cfg) = 30 HOST_KEY_CHECKING(/home/***/ansible/ansible_dev.cfg) = False INVENTORY_ENABLED(/home/***/ansible/ansible_dev.cfg) = [u'yaml', u'azure_rm', u'script'] RETRY_FILES_ENABLED(/home/***/ansible/ansible_dev.cfg) = False ``` ##### OS / ENVIRONMENT * Red Hat Enterprise Linux Server release 7.4 (Maipo) ##### STEPS TO REPRODUCE Call the module with no name and no resource_group specified.
```yaml - name: Fetch all the storage accounts in the subscription azure_rm_storageaccount_facts: show_blob_cors: no show_connection_string: no subscription_id: "***" tenant: "***" client_id: "***" secret: "***" ``` ##### EXPECTED RESULTS The module succeeds in fetching all storage accounts. ##### ACTUAL RESULTS ```paste below An exception occurred during task execution. To see the full traceback, use -vvv. The error was: msrest.exceptions.ValidationError: Parameter 'resource_group_name' can not be None. ```
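The fix the issue calls for is a dispatch on whether a resource group was supplied. A minimal sketch follows; `FakeStorageAccounts` is a hypothetical stand-in mimicking the `azure-mgmt-storage` storage-accounts operations object, not the real SDK class:

```python
class FakeStorageAccounts:
    # Hypothetical stand-in for the SDK's storage_accounts operations object.
    def list(self):
        # subscription-wide listing needs no resource group
        return ['every-account-in-subscription']

    def list_by_resource_group(self, resource_group_name):
        if resource_group_name is None:
            # the msrest validation error the issue reports
            raise ValueError("Parameter 'resource_group_name' can not be None.")
        return ['accounts-in-' + resource_group_name]


def list_accounts(storage_accounts, resource_group=None):
    # With no resource group, fall back to the subscription-wide list() call
    # instead of list_by_resource_group(None), which is the reported bug.
    if resource_group:
        return storage_accounts.list_by_resource_group(resource_group)
    return storage_accounts.list()
```

Calling `list_accounts(FakeStorageAccounts())` with no resource group then returns the subscription-wide listing instead of raising the validation error.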
https://github.com/ansible/ansible/issues/64319
https://github.com/ansible/ansible/pull/64321
4b240a5d740c20c4821eb944e2df37446a4b0dd2
7c65ad11e2914bc9774abd37cdd1ac455f1c9433
2019-11-01T22:08:52Z
python
2019-11-27T07:53:51Z
lib/ansible/modules/cloud/azure/azure_rm_storageaccount_info.py
#!/usr/bin/python # # Copyright (c) 2016 Matt Davis, <[email protected]> # Chris Houseknecht, <[email protected]> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} DOCUMENTATION = ''' --- module: azure_rm_storageaccount_info version_added: "2.9" short_description: Get storage account facts description: - Get facts for one storage account or all storage accounts within a resource group. options: name: description: - Only show results for a specific account. resource_group: description: - Limit results to a resource group. Required when filtering by name. aliases: - resource_group_name tags: description: - Limit results by providing a list of tags. Format tags as 'key' or 'key:value'. show_connection_string: description: - Show the connection string for each of the storageaccount's endpoints. - For convenient usage, C(show_connection_string) will also show the access keys for each of the storageaccount's endpoints. - Note that it will cost a lot of time when list all storageaccount rather than query a single one. type: bool version_added: "2.8" show_blob_cors: description: - Show the blob CORS settings for each blob related to the storage account. - Querying all storage accounts will take a long time. 
type: bool version_added: "2.8" extends_documentation_fragment: - azure author: - Chris Houseknecht (@chouseknecht) - Matt Davis (@nitzmahone) ''' EXAMPLES = ''' - name: Get facts for one account azure_rm_storageaccount_info: resource_group: myResourceGroup name: clh0002 - name: Get facts for all accounts in a resource group azure_rm_storageaccount_info: resource_group: myResourceGroup - name: Get facts for all accounts by tags azure_rm_storageaccount_info: tags: - testing - foo:bar ''' RETURN = ''' azure_storageaccounts: description: - List of storage account dicts. returned: always type: list example: [{ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/myResourceGroups/testing/providers/Microsoft.Storage/storageAccounts/testaccount001", "location": "eastus2", "name": "testaccount001", "properties": { "accountType": "Standard_LRS", "creationTime": "2016-03-28T02:46:58.290113Z", "primaryEndpoints": { "blob": "https://testaccount001.blob.core.windows.net/", "file": "https://testaccount001.file.core.windows.net/", "queue": "https://testaccount001.queue.core.windows.net/", "table": "https://testaccount001.table.core.windows.net/" }, "primaryLocation": "eastus2", "provisioningState": "Succeeded", "statusOfPrimary": "Available" }, "tags": {}, "type": "Microsoft.Storage/storageAccounts" }] storageaccounts: description: - List of storage account dicts in resource module's parameter format. returned: always type: complex contains: id: description: - Resource ID. returned: always type: str sample: "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/t estaccount001" name: description: - Name of the storage account to update or create. returned: always type: str sample: testaccount001 location: description: - Valid Azure location. Defaults to location of the resource group. returned: always type: str sample: eastus account_type: description: - Type of storage account. 
- C(Standard_ZRS) and C(Premium_LRS) accounts cannot be changed to other account types. - Other account types cannot be changed to C(Standard_ZRS) or C(Premium_LRS). returned: always type: str sample: Standard_ZRS custom_domain: description: - User domain assigned to the storage account. - Must be a dictionary with I(name) and I(use_sub_domain) keys where I(name) is the CNAME source. returned: always type: complex contains: name: description: - CNAME source. returned: always type: str sample: testaccount use_sub_domain: description: - Whether to use sub domain. returned: always type: bool sample: true kind: description: - The kind of storage. returned: always type: str sample: Storage access_tier: description: - The access tier for this storage account. returned: always type: str sample: Hot https_only: description: - Allows https traffic only to storage service when set to C(true). returned: always type: bool sample: false provisioning_state: description: - The status of the storage account at the time the operation was called. - Possible values include C(Creating), C(ResolvingDNS), C(Succeeded). returned: always type: str sample: Succeeded secondary_location: description: - The location of the geo-replicated secondary for the storage account. - Only available if the I(account_type=Standard_GRS) or I(account_type=Standard_RAGRS). returned: always type: str sample: westus status_of_primary: description: - Status of the primary location of the storage account; either C(available) or C(unavailable). returned: always type: str sample: available status_of_secondary: description: - Status of the secondary location of the storage account; either C(available) or C(unavailable). returned: always type: str sample: available primary_location: description: - The location of the primary data center for the storage account. returned: always type: str sample: eastus primary_endpoints: description: - URLs to retrieve a public I(blob), I(queue), or I(table) object. 
- Note that C(Standard_ZRS) and C(Premium_LRS) accounts only return the blob endpoint. returned: always type: complex contains: blob: description: - The primary blob endpoint and connection string. returned: always type: complex contains: endpoint: description: - The primary blob endpoint. returned: always type: str sample: "https://testaccount001.blob.core.windows.net/" connectionstring: description: - Connectionstring of the blob endpoint. returned: always type: str sample: "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=X;AccountKey=X;BlobEndpoint=X" queue: description: - The primary queue endpoint and connection string. returned: always type: complex contains: endpoint: description: - The primary queue endpoint. returned: always type: str sample: "https://testaccount001.queue.core.windows.net/" connectionstring: description: - Connectionstring of the queue endpoint. returned: always type: str sample: "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=X;AccountKey=X;QueueEndpoint=X" table: description: - The primary table endpoint and connection string. returned: always type: complex contains: endpoint: description: - The primary table endpoint. returned: always type: str sample: "https://testaccount001.table.core.windows.net/" connectionstring: description: - Connectionstring of the table endpoint. returned: always type: str sample: "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=X;AccountKey=X;TableEndpoint=X" key: description: - The account key for the primary_endpoints returned: always type: str sample: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx secondary_endpoints: description: - The URLs to retrieve a public I(blob), I(queue), or I(table) object from the secondary location. - Only available if the SKU I(name=Standard_RAGRS). 
returned: always type: complex contains: blob: description: - The secondary blob endpoint and connection string. returned: always type: complex contains: endpoint: description: - The secondary blob endpoint. returned: always type: str sample: "https://testaccount001.blob.core.windows.net/" connectionstring: description: - Connectionstring of the blob endpoint. returned: always type: str sample: "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=X;AccountKey=X;BlobEndpoint=X" queue: description: - The secondary queue endpoint and connection string. returned: always type: complex contains: endpoint: description: - The secondary queue endpoint. returned: always type: str sample: "https://testaccount001.queue.core.windows.net/" connectionstring: description: - Connectionstring of the queue endpoint. returned: always type: str sample: "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=X;AccountKey=X;QueueEndpoint=X" table: description: - The secondary table endpoint and connection string. returned: always type: complex contains: endpoint: description: - The secondary table endpoint. returned: always type: str sample: "https://testaccount001.table.core.windows.net/" connectionstring: description: - Connectionstring of the table endpoint. returned: always type: str sample: "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=X;AccountKey=X;TableEndpoint=X" key: description: - The account key for the secondary_endpoints sample: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx tags: description: - Resource tags. 
returned: always type: dict sample: { "tag1": "abc" } ''' try: from msrestazure.azure_exceptions import CloudError except Exception: # This is handled in azure_rm_common pass from ansible.module_utils.azure_rm_common import AzureRMModuleBase from ansible.module_utils._text import to_native AZURE_OBJECT_CLASS = 'StorageAccount' class AzureRMStorageAccountInfo(AzureRMModuleBase): def __init__(self): self.module_arg_spec = dict( name=dict(type='str'), resource_group=dict(type='str', aliases=['resource_group_name']), tags=dict(type='list'), show_connection_string=dict(type='bool'), show_blob_cors=dict(type='bool') ) self.results = dict( changed=False, storageaccounts=[] ) self.name = None self.resource_group = None self.tags = None self.show_connection_string = None self.show_blob_cors = None super(AzureRMStorageAccountInfo, self).__init__(self.module_arg_spec, supports_tags=False, facts_module=True) def exec_module(self, **kwargs): is_old_facts = self.module._name == 'azure_rm_storageaccount_facts' if is_old_facts: self.module.deprecate("The 'azure_rm_storageaccount_facts' module has been renamed to 'azure_rm_storageaccount_info'", version='2.13') for key in self.module_arg_spec: setattr(self, key, kwargs[key]) if self.name and not self.resource_group: self.fail("Parameter error: resource group required when filtering by name.") results = [] if self.name: results = self.get_account() elif self.resource_group: results = self.list_resource_group() else: results = self.list_all() filtered = self.filter_tag(results) if is_old_facts: self.results['ansible_facts'] = { 'azure_storageaccounts': self.serialize(filtered), 'storageaccounts': self.format_to_dict(filtered), } self.results['storageaccounts'] = self.format_to_dict(filtered) return self.results def get_account(self): self.log('Get properties for account {0}'.format(self.name)) account = None try: account = self.storage_client.storage_accounts.get_properties(self.resource_group, self.name) return [account] except 
CloudError: pass return [] def list_resource_group(self): self.log('List items') try: response = self.storage_client.storage_accounts.list_by_resource_group(self.resource_group) except Exception as exc: self.fail("Error listing for resource group {0} - {1}".format(self.resource_group, str(exc))) return response def list_all(self): self.log('List all items') try: response = self.storage_client.storage_accounts.list() except Exception as exc: self.fail("Error listing all items - {0}".format(str(exc))) return response def filter_tag(self, raw): return [item for item in raw if self.has_tags(item.tags, self.tags)] def serialize(self, raw): return [self.serialize_obj(item, AZURE_OBJECT_CLASS) for item in raw] def format_to_dict(self, raw): return [self.account_obj_to_dict(item) for item in raw] def account_obj_to_dict(self, account_obj, blob_service_props=None): account_dict = dict( id=account_obj.id, name=account_obj.name, location=account_obj.location, access_tier=(account_obj.access_tier.value if account_obj.access_tier is not None else None), account_type=account_obj.sku.name.value, kind=account_obj.kind.value if account_obj.kind else None, provisioning_state=account_obj.provisioning_state.value, secondary_location=account_obj.secondary_location, status_of_primary=(account_obj.status_of_primary.value if account_obj.status_of_primary is not None else None), status_of_secondary=(account_obj.status_of_secondary.value if account_obj.status_of_secondary is not None else None), primary_location=account_obj.primary_location, https_only=account_obj.enable_https_traffic_only ) id_dict = self.parse_resource_to_dict(account_obj.id) account_dict['resource_group'] = id_dict.get('resource_group') account_key = self.get_connectionstring(account_dict['resource_group'], account_dict['name']) account_dict['custom_domain'] = None if account_obj.custom_domain: account_dict['custom_domain'] = dict( name=account_obj.custom_domain.name,
use_sub_domain=account_obj.custom_domain.use_sub_domain ) account_dict['primary_endpoints'] = None if account_obj.primary_endpoints: account_dict['primary_endpoints'] = dict( blob=self.format_endpoint_dict(account_dict['name'], account_key[0], account_obj.primary_endpoints.blob, 'blob'), queue=self.format_endpoint_dict(account_dict['name'], account_key[0], account_obj.primary_endpoints.queue, 'queue'), table=self.format_endpoint_dict(account_dict['name'], account_key[0], account_obj.primary_endpoints.table, 'table') ) if account_key[0]: account_dict['primary_endpoints']['key'] = '{0}'.format(account_key[0]) account_dict['secondary_endpoints'] = None if account_obj.secondary_endpoints: account_dict['secondary_endpoints'] = dict( blob=self.format_endpoint_dict(account_dict['name'], account_key[1], account_obj.primary_endpoints.blob, 'blob'), queue=self.format_endpoint_dict(account_dict['name'], account_key[1], account_obj.primary_endpoints.queue, 'queue'), table=self.format_endpoint_dict(account_dict['name'], account_key[1], account_obj.primary_endpoints.table, 'table'), ) if account_key[1]: account_dict['secondary_endpoints']['key'] = '{0}'.format(account_key[1]) account_dict['tags'] = None if account_obj.tags: account_dict['tags'] = account_obj.tags blob_service_props = self.get_blob_service_props(account_dict['resource_group'], account_dict['name']) if blob_service_props and blob_service_props.cors and blob_service_props.cors.cors_rules: account_dict['blob_cors'] = [dict( allowed_origins=to_native(x.allowed_origins), allowed_methods=to_native(x.allowed_methods), max_age_in_seconds=x.max_age_in_seconds, exposed_headers=to_native(x.exposed_headers), allowed_headers=to_native(x.allowed_headers) ) for x in blob_service_props.cors.cors_rules] return account_dict def format_endpoint_dict(self, name, key, endpoint, storagetype, protocol='https'): result = dict(endpoint=endpoint) if key: result['connectionstring'] = 
'DefaultEndpointsProtocol={0};EndpointSuffix={1};AccountName={2};AccountKey={3};{4}Endpoint={5}'.format( protocol, self._cloud_environment.suffixes.storage_endpoint, name, key, str.title(storagetype), endpoint) return result def get_blob_service_props(self, resource_group, name): if not self.show_blob_cors: return None try: blob_service_props = self.storage_client.blob_services.get_service_properties(resource_group, name) return blob_service_props except Exception: pass return None def get_connectionstring(self, resource_group, name): keys = ['', ''] if not self.show_connection_string: return keys try: cred = self.storage_client.storage_accounts.list_keys(resource_group, name) # get the following try catch from CLI try: keys = [cred.keys[0].value, cred.keys[1].value] except AttributeError: keys = [cred.key1, cred.key2] except Exception: pass return keys def main(): AzureRMStorageAccountInfo() if __name__ == '__main__': main()
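The connection strings assembled by `format_endpoint_dict` above follow a fixed field layout. A standalone sketch, under the assumptions that the helper name is illustrative and the endpoint suffix is hardcoded to the public cloud's value (the module instead reads it from `_cloud_environment`):

```python
def build_connection_string(name, key, endpoint, storagetype,
                            protocol='https', suffix='core.windows.net'):
    # Same field order the module emits: protocol, endpoint suffix, account
    # name, account key, then a per-service endpoint field whose name is the
    # title-cased service type (BlobEndpoint, QueueEndpoint, TableEndpoint).
    return ('DefaultEndpointsProtocol={0};EndpointSuffix={1};AccountName={2};'
            'AccountKey={3};{4}Endpoint={5}').format(
        protocol, suffix, name, key, str.title(storagetype), endpoint)
```

Passing `'blob'`, `'queue'`, or `'table'` as `storagetype` selects which endpoint field is emitted, matching the three endpoint dicts the module builds per account.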
closed
ansible/ansible
https://github.com/ansible/ansible
64,506
azure_rm_virtualmachine: Error parsing blob URI
##### SUMMARY When creating a VM with unmanaged disks, VHD URIs must be specified. As explained in the online [Azure VHD documentation](https://docs.microsoft.com/en-us/azure/marketplace/cloud-partner-portal/virtual-machine/cpp-deploy-json-template), the URI is created with __http__ as the scheme. When VM facts are retrieved, the URIs set at deployment time are correctly retrieved, untouched. Unfortunately, when it comes to deleting the VM, the code in the module's __extract_names_from_blob_uri__ method validates the URI using a regexp that only considers __https__ as a valid scheme. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME * azure_rm_virtualmachine ##### ANSIBLE VERSION ```paste below ansible 2.8.5 config file = /home/***/ansible/ansible_dev.cfg configured module search path = [u'/home/***/ansible/library'] ansible python module location = /home/***/ansible/.venv/lib/python2.7/site-packages/ansible executable location = /home/***/ansible/.venv/bin/ansible python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] ``` ##### CONFIGURATION ```paste below DEFAULT_BECOME_METHOD(/home/***/ansible/ansible_dev.cfg) = sudo DEFAULT_BECOME_USER(/home/***/ansible/ansible_dev.cfg) = root DEFAULT_GATHER_TIMEOUT(/home/***/ansible/ansible_dev.cfg) = 30 DEFAULT_HOST_LIST(/home/***/ansible/ansible_dev.cfg) = [u'/home/***/ansible/inventory/azure-dev'] DEFAULT_LOG_PATH(/home/***/ansible/ansible_dev.cfg) = /home/***/ansible.log DEFAULT_MODULE_PATH(/home/***/ansible/ansible_dev.cfg) = [u'/home/***/ansible/library'] DEFAULT_ROLES_PATH(/home/***/ansible/ansible_dev.cfg) = [u'/home/***/ansible/roles'] DEFAULT_VAULT_IDENTITY_LIST(/home/***/ansible/ansible_dev.cfg) = [] HOST_KEY_CHECKING(/home/***/ansible/ansible_dev.cfg) = False INVENTORY_ENABLED(/home/***/ansible/ansible_dev.cfg) = [u'yaml', u'azure_rm', u'script'] RETRY_FILES_ENABLED(/home/***/ansible/ansible_dev.cfg) = False ``` ##### OS / ENVIRONMENT Controller: RHEL 7.4 ##### STEPS TO REPRODUCE
* Deploy a new VM using the following ARM template: https://docs.microsoft.com/en-us/azure/marketplace/cloud-partner-portal/virtual-machine/cpp-deploy-json-template * Use azure_rm_virtualmachine with __state: absent__ to delete the VM ##### EXPECTED RESULTS The module runs without errors and the VHD disks are deleted. ##### ACTUAL RESULTS The module fails with the following error: ```json { "msg": "Error parsing blob URI unable to parse blob uri 'http://my-sa.blob.core.windows.net/vhds/my-vm_osdisk.vhd'", "changed": false, "_ansible_no_log": false } ```
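The scheme-tolerant regexp the issue implies can be sketched as follows. The pattern and helper here are an assumption modeled on the error message, not the module's actual implementation, and the `core.windows.net` suffix is hardcoded to the public cloud's value for illustration:

```python
import re

# 'https?' accepts both http:// and https:// VHD URIs; the original pattern
# only matched https://, rejecting VHDs deployed with an http:// URI.
BLOB_URI = re.compile(
    r'^https?://(?P<accountname>[^.]+)\.blob\.core\.windows\.net/'
    r'(?P<containername>[^/]+)/(?P<blobname>.+)$')

def extract_names_from_blob_uri(blob_uri):
    # Returns (storage account, container, blob name) or raises like the module's
    # "unable to parse blob uri" failure path.
    m = BLOB_URI.match(blob_uri)
    if not m:
        raise ValueError("unable to parse blob uri '%s'" % blob_uri)
    return m.group('accountname'), m.group('containername'), m.group('blobname')
```

With this change, the http:// URI from the error message above parses into its account, container, and blob components instead of failing.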
https://github.com/ansible/ansible/issues/64506
https://github.com/ansible/ansible/pull/64507
7c65ad11e2914bc9774abd37cdd1ac455f1c9433
8df03a6f6e5eb8696b5a8514535093c53b3f512d
2019-11-06T12:55:55Z
python
2019-11-27T07:55:06Z
lib/ansible/modules/cloud/azure/azure_rm_virtualmachine.py
#!/usr/bin/python # # Copyright (c) 2016 Matt Davis, <[email protected]> # Chris Houseknecht, <[email protected]> # Copyright (c) 2018 James E. King, III (@jeking3) <[email protected]> # # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} DOCUMENTATION = ''' --- module: azure_rm_virtualmachine version_added: "2.1" short_description: Manage Azure virtual machines description: - Manage and configure virtual machines (VMs) and associated resources on Azure. - Requires a resource group containing at least one virtual network with at least one subnet. - Supports images from the Azure Marketplace, which can be discovered with M(azure_rm_virtualmachineimage_facts). - Supports custom images since Ansible 2.5. - To use I(custom_data) on a Linux image, the image must have cloud-init enabled. If cloud-init is not enabled, I(custom_data) is ignored. options: resource_group: description: - Name of the resource group containing the VM. required: true name: description: - Name of the VM. required: true custom_data: description: - Data made available to the VM and used by C(cloud-init). - Only used on Linux images with C(cloud-init) enabled. - Consult U(https://docs.microsoft.com/en-us/azure/virtual-machines/linux/using-cloud-init#cloud-init-overview) for cloud-init ready images. - To enable cloud-init on a Linux image, follow U(https://docs.microsoft.com/en-us/azure/virtual-machines/linux/cloudinit-prepare-custom-image). version_added: "2.5" state: description: - State of the VM. - Set to C(present) to create a VM with the configuration specified by other options, or to update the configuration of an existing VM. - Set to C(absent) to remove a VM. - Does not affect power state. 
Use I(started)/I(allocated)/I(restarted) parameters to change the power state of a VM. default: present choices: - absent - present started: description: - Whether the VM is started or stopped. - Set to (true) with I(state=present) to start the VM. - Set to C(false) to stop the VM. default: true type: bool allocated: description: - Whether the VM is allocated or deallocated, only useful with I(state=present). default: True type: bool generalized: description: - Whether the VM is generalized or not. - Set to C(true) with I(state=present) to generalize the VM. - Generalizing a VM is irreversible. type: bool version_added: "2.8" restarted: description: - Set to C(true) with I(state=present) to restart a running VM. type: bool location: description: - Valid Azure location for the VM. Defaults to location of the resource group. short_hostname: description: - Name assigned internally to the host. On a Linux VM this is the name returned by the C(hostname) command. - When creating a VM, short_hostname defaults to I(name). vm_size: description: - A valid Azure VM size value. For example, C(Standard_D4). - Choices vary depending on the subscription and location. Check your subscription for available choices. - Required when creating a VM. admin_username: description: - Admin username used to access the VM after it is created. - Required when creating a VM. admin_password: description: - Password for the admin username. - Not required if the I(os_type=Linux) and SSH password authentication is disabled by setting I(ssh_password_enabled=false). ssh_password_enabled: description: - Whether to enable or disable SSH passwords. - When I(os_type=Linux), set to C(false) to disable SSH password authentication and require use of SSH keys. default: true type: bool ssh_public_keys: description: - For I(os_type=Linux) provide a list of SSH keys. - Accepts a list of dicts where each dictionary contains two keys, I(path) and I(key_data). 
- Set I(path) to the default location of the authorized_keys files. For example, I(path=/home/<admin username>/.ssh/authorized_keys). - Set I(key_data) to the actual value of the public key. image: description: - The image used to build the VM. - For custom images, the name of the image. To narrow the search to a specific resource group, a dict with the keys I(name) and I(resource_group). - For Marketplace images, a dict with the keys I(publisher), I(offer), I(sku), and I(version). - Set I(version=latest) to get the most recent version of a given image. required: true availability_set: description: - Name or ID of an existing availability set to add the VM to. The I(availability_set) should be in the same resource group as VM. version_added: "2.5" storage_account_name: description: - Name of a storage account that supports creation of VHD blobs. - If not specified for a new VM, a new storage account named <vm name>01 will be created using storage type C(Standard_LRS). aliases: - storage_account storage_container_name: description: - Name of the container to use within the storage account to store VHD blobs. - If not specified, a default container will be created. default: vhds aliases: - storage_container storage_blob_name: description: - Name of the storage blob used to hold the OS disk image of the VM. - Must end with '.vhd'. - If not specified, defaults to the VM name + '.vhd'. aliases: - storage_blob managed_disk_type: description: - Managed OS disk type. - Create OS disk with managed disk if defined. - If not defined, the OS disk will be created with virtual hard disk (VHD). choices: - Standard_LRS - StandardSSD_LRS - Premium_LRS version_added: "2.4" os_disk_name: description: - OS disk name. version_added: "2.8" os_disk_caching: description: - Type of OS disk caching. choices: - ReadOnly - ReadWrite default: ReadOnly aliases: - disk_caching os_disk_size_gb: description: - Type of OS disk size in GB. 
version_added: "2.7" os_type: description: - Base type of operating system. choices: - Windows - Linux default: Linux data_disks: description: - Describes list of data disks. - Use M(azure_rm_mangeddisk) to manage the specific disk. version_added: "2.4" suboptions: lun: description: - The logical unit number for data disk. - This value is used to identify data disks within the VM and therefore must be unique for each data disk attached to a VM. required: true version_added: "2.4" disk_size_gb: description: - The initial disk size in GB for blank data disks. - This value cannot be larger than C(1023) GB. - Size can be changed only when the virtual machine is deallocated. - Not sure when I(managed_disk_id) defined. version_added: "2.4" managed_disk_type: description: - Managed data disk type. - Only used when OS disk created with managed disk. choices: - Standard_LRS - StandardSSD_LRS - Premium_LRS version_added: "2.4" storage_account_name: description: - Name of an existing storage account that supports creation of VHD blobs. - If not specified for a new VM, a new storage account started with I(name) will be created using storage type C(Standard_LRS). - Only used when OS disk created with virtual hard disk (VHD). - Used when I(managed_disk_type) not defined. - Cannot be updated unless I(lun) updated. version_added: "2.4" storage_container_name: description: - Name of the container to use within the storage account to store VHD blobs. - If no name is specified a default container named 'vhds' will created. - Only used when OS disk created with virtual hard disk (VHD). - Used when I(managed_disk_type) not defined. - Cannot be updated unless I(lun) updated. default: vhds version_added: "2.4" storage_blob_name: description: - Name of the storage blob used to hold the OS disk image of the VM. - Must end with '.vhd'. - Default to the I(name) + timestamp + I(lun) + '.vhd'. - Only used when OS disk created with virtual hard disk (VHD). 
- Used when I(managed_disk_type) not defined. - Cannot be updated unless I(lun) updated. version_added: "2.4" caching: description: - Type of data disk caching. choices: - ReadOnly - ReadWrite default: ReadOnly version_added: "2.4" public_ip_allocation_method: description: - Allocation method for the public IP of the VM. - Used only if a network interface is not specified. - When set to C(Dynamic), the public IP address may change any time the VM is rebooted or power cycled. - The C(Disabled) choice was added in Ansible 2.6. choices: - Dynamic - Static - Disabled default: Static aliases: - public_ip_allocation open_ports: description: - List of ports to open in the security group for the VM, when a security group and network interface are created with a VM. - For Linux hosts, defaults to allowing inbound TCP connections to port 22. - For Windows hosts, defaults to opening ports 3389 and 5986. network_interface_names: description: - Network interface names to add to the VM. - Can be a string of name or resource ID of the network interface. - Can be a dict containing I(resource_group) and I(name) of the network interface. - If a network interface name is not provided when the VM is created, a default network interface will be created. - To create a new network interface, at least one Virtual Network with one Subnet must exist. type: list aliases: - network_interfaces virtual_network_resource_group: description: - The resource group to use when creating a VM with another resource group's virtual network. version_added: "2.4" virtual_network_name: description: - The virtual network to use when creating a VM. - If not specified, a new network interface will be created and assigned to the first virtual network found in the resource group. - Use with I(virtual_network_resource_group) to place the virtual network in another resource group. aliases: - virtual_network subnet_name: description: - Subnet for the VM. 
- Defaults to the first subnet found in the virtual network or the subnet of the I(network_interface_name), if provided. - If the subnet is in another resource group, specify the resource group with I(virtual_network_resource_group). aliases: - subnet remove_on_absent: description: - Associated resources to remove when removing a VM using I(state=absent). - To remove all resources related to the VM being removed, including auto-created resources, set to C(all). - To remove only resources that were automatically created while provisioning the VM being removed, set to C(all_autocreated). - To remove only specific resources, set to C(network_interfaces), C(virtual_storage) or C(public_ips). - Any other input will be ignored. type: list default: ['all'] plan: description: - Third-party billing plan for the VM. version_added: "2.5" type: dict suboptions: name: description: - Billing plan name. required: true product: description: - Product name. required: true publisher: description: - Publisher offering the plan. required: true promotion_code: description: - Optional promotion code. accept_terms: description: - Accept terms for Marketplace images that require it. - Only Azure service admin/account admin users can purchase images from the Marketplace. - Only valid when a I(plan) is specified. type: bool default: false version_added: "2.7" zones: description: - A list of Availability Zones for your VM. type: list version_added: "2.8" license_type: description: - On-premise license for the image or disk. - Only used for images that contain the Windows Server operating system. - To remove all license type settings, set to the string C(None). version_added: "2.8" choices: - Windows_Server - Windows_Client vm_identity: description: - Identity for the VM. version_added: "2.8" choices: - SystemAssigned winrm: description: - List of Windows Remote Management configurations of the VM. version_added: "2.8" suboptions: protocol: description: - The protocol of the winrm listener. 
required: true choices: - http - https source_vault: description: - The relative URL of the Key Vault containing the certificate. certificate_url: description: - The URL of a certificate that has been uploaded to Key Vault as a secret. certificate_store: description: - The certificate store on the VM to which the certificate should be added. - The specified certificate store is implicitly in the LocalMachine account. boot_diagnostics: description: - Manage boot diagnostics settings for a VM. - Boot diagnostics includes a serial console and remote console screenshots. version_added: '2.9' suboptions: enabled: description: - Flag indicating if boot diagnostics are enabled. required: true type: bool storage_account: description: - The name of an existing storage account to use for boot diagnostics. - If not specified, uses I(storage_account_name) defined one level up. - If storage account is not specified anywhere, and C(enabled) is C(true), a default storage account is created for boot diagnostics data. required: false extends_documentation_fragment: - azure - azure_tags author: - Chris Houseknecht (@chouseknecht) - Matt Davis (@nitzmahone) - Christopher Perrin (@cperrin88) - James E. 
King III (@jeking3) ''' EXAMPLES = ''' - name: Create VM with defaults azure_rm_virtualmachine: resource_group: myResourceGroup name: testvm10 admin_username: chouseknecht admin_password: <your password here> image: offer: CentOS publisher: OpenLogic sku: '7.1' version: latest - name: Create an availability set for managed disk vm azure_rm_availabilityset: name: avs-managed-disk resource_group: myResourceGroup platform_update_domain_count: 5 platform_fault_domain_count: 2 sku: Aligned - name: Create a VM with managed disk azure_rm_virtualmachine: resource_group: myResourceGroup name: vm-managed-disk admin_username: adminUser availability_set: avs-managed-disk managed_disk_type: Standard_LRS image: offer: CoreOS publisher: CoreOS sku: Stable version: latest vm_size: Standard_D4 - name: Create a VM with existing storage account and NIC azure_rm_virtualmachine: resource_group: myResourceGroup name: testvm002 vm_size: Standard_D4 storage_account: testaccount001 admin_username: adminUser ssh_public_keys: - path: /home/adminUser/.ssh/authorized_keys key_data: < insert yor ssh public key here... > network_interfaces: testvm001 image: offer: CentOS publisher: OpenLogic sku: '7.1' version: latest - name: Create a VM with OS and multiple data managed disks azure_rm_virtualmachine: resource_group: myResourceGroup name: testvm001 vm_size: Standard_D4 managed_disk_type: Standard_LRS admin_username: adminUser ssh_public_keys: - path: /home/adminUser/.ssh/authorized_keys key_data: < insert yor ssh public key here... 
> image: offer: CoreOS publisher: CoreOS sku: Stable version: latest data_disks: - lun: 0 managed_disk_id: "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Compute/disks/myDisk" - lun: 1 disk_size_gb: 128 managed_disk_type: Premium_LRS - name: Create a VM with OS and multiple data storage accounts azure_rm_virtualmachine: resource_group: myResourceGroup name: testvm001 vm_size: Standard_DS1_v2 admin_username: adminUser ssh_password_enabled: false ssh_public_keys: - path: /home/adminUser/.ssh/authorized_keys key_data: < insert yor ssh public key here... > network_interfaces: testvm001 storage_container: osdisk storage_blob: osdisk.vhd boot_diagnostics: enabled: yes image: offer: CoreOS publisher: CoreOS sku: Stable version: latest data_disks: - lun: 0 disk_size_gb: 64 storage_container_name: datadisk1 storage_blob_name: datadisk1.vhd - lun: 1 disk_size_gb: 128 storage_container_name: datadisk2 storage_blob_name: datadisk2.vhd - name: Create a VM with a custom image azure_rm_virtualmachine: resource_group: myResourceGroup name: testvm001 vm_size: Standard_DS1_v2 admin_username: adminUser admin_password: password01 image: customimage001 - name: Create a VM with a custom image from a particular resource group azure_rm_virtualmachine: resource_group: myResourceGroup name: testvm001 vm_size: Standard_DS1_v2 admin_username: adminUser admin_password: password01 image: name: customimage001 resource_group: myResourceGroup - name: Create a VM with an image id azure_rm_virtualmachine: resource_group: myResourceGroup name: testvm001 vm_size: Standard_DS1_v2 admin_username: adminUser admin_password: password01 image: id: '{{image_id}}' - name: Create VM with specified OS disk size azure_rm_virtualmachine: resource_group: myResourceGroup name: big-os-disk admin_username: chouseknecht admin_password: <your password here> os_disk_size_gb: 512 image: offer: CentOS publisher: OpenLogic sku: '7.1' version: latest - name: 
Create VM with OS and Plan, accepting the terms azure_rm_virtualmachine: resource_group: myResourceGroup name: f5-nva admin_username: chouseknecht admin_password: <your password here> image: publisher: f5-networks offer: f5-big-ip-best sku: f5-bigip-virtual-edition-200m-best-hourly version: latest plan: name: f5-bigip-virtual-edition-200m-best-hourly product: f5-big-ip-best publisher: f5-networks - name: Power Off azure_rm_virtualmachine: resource_group: myResourceGroup name: testvm002 started: no - name: Deallocate azure_rm_virtualmachine: resource_group: myResourceGroup name: testvm002 allocated: no - name: Power On azure_rm_virtualmachine: resource_group: myResourceGroup name: testvm002 - name: Restart azure_rm_virtualmachine: resource_group: myResourceGroup name: testvm002 restarted: yes - name: Create a VM with an Availability Zone azure_rm_virtualmachine: resource_group: myResourceGroup name: testvm001 vm_size: Standard_DS1_v2 admin_username: adminUser admin_password: password01 image: customimage001 zones: [1] - name: Remove a VM and all resources that were autocreated azure_rm_virtualmachine: resource_group: myResourceGroup name: testvm002 remove_on_absent: all_autocreated state: absent ''' RETURN = ''' powerstate: description: - Indicates if the state is C(running), C(stopped), C(deallocated), C(generalized). returned: always type: str sample: running deleted_vhd_uris: description: - List of deleted Virtual Hard Disk URIs. returned: 'on delete' type: list sample: ["https://testvm104519.blob.core.windows.net/vhds/testvm10.vhd"] deleted_network_interfaces: description: - List of deleted NICs. returned: 'on delete' type: list sample: ["testvm1001"] deleted_public_ips: description: - List of deleted public IP address names. returned: 'on delete' type: list sample: ["testvm1001"] azure_vm: description: - Facts about the current state of the object. Note that facts are not part of the registered output but available directly. 
returned: always type: dict sample: { "properties": { "availabilitySet": { "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Compute/availabilitySets/MYAVAILABILITYSET" }, "hardwareProfile": { "vmSize": "Standard_D1" }, "instanceView": { "disks": [ { "name": "testvm10.vhd", "statuses": [ { "code": "ProvisioningState/succeeded", "displayStatus": "Provisioning succeeded", "level": "Info", "time": "2016-03-30T07:11:16.187272Z" } ] } ], "statuses": [ { "code": "ProvisioningState/succeeded", "displayStatus": "Provisioning succeeded", "level": "Info", "time": "2016-03-30T20:33:38.946916Z" }, { "code": "PowerState/running", "displayStatus": "VM running", "level": "Info" } ], "vmAgent": { "extensionHandlers": [], "statuses": [ { "code": "ProvisioningState/succeeded", "displayStatus": "Ready", "level": "Info", "message": "GuestAgent is running and accepting new configurations.", "time": "2016-03-30T20:31:16.000Z" } ], "vmAgentVersion": "WALinuxAgent-2.0.16" } }, "networkProfile": { "networkInterfaces": [ { "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Network/networkInterfaces/testvm10_NIC01", "name": "testvm10_NIC01", "properties": { "dnsSettings": { "appliedDnsServers": [], "dnsServers": [] }, "enableIPForwarding": false, "ipConfigurations": [ { "etag": 'W/"041c8c2a-d5dd-4cd7-8465-9125cfbe2cf8"', "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Network/networkInterfaces/testvm10_NIC01/ipConfigurations/default", "name": "default", "properties": { "privateIPAddress": "10.10.0.5", "privateIPAllocationMethod": "Dynamic", "provisioningState": "Succeeded", "publicIPAddress": { "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Network/publicIPAddresses/testvm10_PIP01", "name": "testvm10_PIP01", "properties": { "idleTimeoutInMinutes": 4, 
"ipAddress": "13.92.246.197", "ipConfiguration": { "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Network/networkInterfaces/testvm10_NIC01/ipConfigurations/default" }, "provisioningState": "Succeeded", "publicIPAllocationMethod": "Static", "resourceGuid": "3447d987-ca0d-4eca-818b-5dddc0625b42" } } } } ], "macAddress": "00-0D-3A-12-AA-14", "primary": true, "provisioningState": "Succeeded", "resourceGuid": "10979e12-ccf9-42ee-9f6d-ff2cc63b3844", "virtualMachine": { "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Compute/virtualMachines/testvm10" } } } ] }, "osProfile": { "adminUsername": "chouseknecht", "computerName": "test10", "linuxConfiguration": { "disablePasswordAuthentication": false }, "secrets": [] }, "provisioningState": "Succeeded", "storageProfile": { "dataDisks": [ { "caching": "ReadWrite", "createOption": "empty", "diskSizeGB": 64, "lun": 0, "name": "datadisk1.vhd", "vhd": { "uri": "https://testvm10sa1.blob.core.windows.net/datadisk/datadisk1.vhd" } } ], "imageReference": { "offer": "CentOS", "publisher": "OpenLogic", "sku": "7.1", "version": "7.1.20160308" }, "osDisk": { "caching": "ReadOnly", "createOption": "fromImage", "name": "testvm10.vhd", "osType": "Linux", "vhd": { "uri": "https://testvm10sa1.blob.core.windows.net/vhds/testvm10.vhd" } } } }, "type": "Microsoft.Compute/virtualMachines" } ''' # NOQA import base64 import random import re try: from msrestazure.azure_exceptions import CloudError from msrestazure.tools import parse_resource_id from msrest.polling import LROPoller except ImportError: # This is handled in azure_rm_common pass from ansible.module_utils.basic import to_native, to_bytes from ansible.module_utils.azure_rm_common import AzureRMModuleBase, azure_id_to_dict, normalize_location_name, format_resource_id AZURE_OBJECT_CLASS = 'VirtualMachine' AZURE_ENUM_MODULES = ['azure.mgmt.compute.models'] def 
extract_names_from_blob_uri(blob_uri, storage_suffix): # HACK: ditch this once python SDK supports get by URI m = re.match(r'^https://(?P<accountname>[^.]+)\.blob\.{0}/' r'(?P<containername>[^/]+)/(?P<blobname>.+)$'.format(storage_suffix), blob_uri) if not m: raise Exception("unable to parse blob uri '%s'" % blob_uri) extracted_names = m.groupdict() return extracted_names class AzureRMVirtualMachine(AzureRMModuleBase): def __init__(self): self.module_arg_spec = dict( resource_group=dict(type='str', required=True), name=dict(type='str', required=True), custom_data=dict(type='str'), state=dict(choices=['present', 'absent'], default='present', type='str'), location=dict(type='str'), short_hostname=dict(type='str'), vm_size=dict(type='str'), admin_username=dict(type='str'), admin_password=dict(type='str', no_log=True), ssh_password_enabled=dict(type='bool', default=True), ssh_public_keys=dict(type='list'), image=dict(type='raw'), availability_set=dict(type='str'), storage_account_name=dict(type='str', aliases=['storage_account']), storage_container_name=dict(type='str', aliases=['storage_container'], default='vhds'), storage_blob_name=dict(type='str', aliases=['storage_blob']), os_disk_caching=dict(type='str', aliases=['disk_caching'], choices=['ReadOnly', 'ReadWrite'], default='ReadOnly'), os_disk_size_gb=dict(type='int'), managed_disk_type=dict(type='str', choices=['Standard_LRS', 'StandardSSD_LRS', 'Premium_LRS']), os_disk_name=dict(type='str'), os_type=dict(type='str', choices=['Linux', 'Windows'], default='Linux'), public_ip_allocation_method=dict(type='str', choices=['Dynamic', 'Static', 'Disabled'], default='Static', aliases=['public_ip_allocation']), open_ports=dict(type='list'), network_interface_names=dict(type='list', aliases=['network_interfaces'], elements='raw'), remove_on_absent=dict(type='list', default=['all']), virtual_network_resource_group=dict(type='str'), virtual_network_name=dict(type='str', aliases=['virtual_network']), 
subnet_name=dict(type='str', aliases=['subnet']), allocated=dict(type='bool', default=True), restarted=dict(type='bool', default=False), started=dict(type='bool', default=True), generalized=dict(type='bool', default=False), data_disks=dict(type='list'), plan=dict(type='dict'), zones=dict(type='list'), accept_terms=dict(type='bool', default=False), license_type=dict(type='str', choices=['Windows_Server', 'Windows_Client']), vm_identity=dict(type='str', choices=['SystemAssigned']), winrm=dict(type='list'), boot_diagnostics=dict(type='dict'), ) self.resource_group = None self.name = None self.custom_data = None self.state = None self.location = None self.short_hostname = None self.vm_size = None self.admin_username = None self.admin_password = None self.ssh_password_enabled = None self.ssh_public_keys = None self.image = None self.availability_set = None self.storage_account_name = None self.storage_container_name = None self.storage_blob_name = None self.os_type = None self.os_disk_caching = None self.os_disk_size_gb = None self.managed_disk_type = None self.os_disk_name = None self.network_interface_names = None self.remove_on_absent = set() self.tags = None self.force = None self.public_ip_allocation_method = None self.open_ports = None self.virtual_network_resource_group = None self.virtual_network_name = None self.subnet_name = None self.allocated = None self.restarted = None self.started = None self.generalized = None self.differences = None self.data_disks = None self.plan = None self.accept_terms = None self.zones = None self.license_type = None self.vm_identity = None self.boot_diagnostics = None self.results = dict( changed=False, actions=[], powerstate_change=None, ansible_facts=dict(azure_vm=None) ) super(AzureRMVirtualMachine, self).__init__(derived_arg_spec=self.module_arg_spec, supports_check_mode=True) @property def boot_diagnostics_present(self): return self.boot_diagnostics is not None and 'enabled' in self.boot_diagnostics def 
get_boot_diagnostics_storage_account(self, limited=False, vm_dict=None): """ Get the boot diagnostics storage account. Arguments: - limited - if true, limit the logic to the boot_diagnostics storage account this is used if initial creation of the VM has a stanza with boot_diagnostics disabled, so we only create a storage account if the user specifies a storage account name inside the boot_diagnostics schema - vm_dict - if invoked on an update, this is the current state of the vm including tags, like the default storage group tag '_own_sa_'. Normal behavior: - try the self.boot_diagnostics.storage_account field - if not there, try the self.storage_account_name field - if not there, use the default storage account If limited is True: - try the self.boot_diagnostics.storage_account field - if not there, None """ bsa = None if 'storage_account' in self.boot_diagnostics: bsa = self.get_storage_account(self.boot_diagnostics['storage_account']) elif limited: return None elif self.storage_account_name: bsa = self.get_storage_account(self.storage_account_name) else: bsa = self.create_default_storage_account(vm_dict=vm_dict) self.log("boot diagnostics storage account:") self.log(self.serialize_obj(bsa, 'StorageAccount'), pretty_print=True) return bsa def exec_module(self, **kwargs): for key in list(self.module_arg_spec.keys()) + ['tags']: setattr(self, key, kwargs[key]) # make sure options are lower case self.remove_on_absent = set([resource.lower() for resource in self.remove_on_absent]) # convert elements to ints self.zones = [int(i) for i in self.zones] if self.zones else None changed = False powerstate_change = None results = dict() vm = None network_interfaces = [] requested_storage_uri = None requested_vhd_uri = None data_disk_requested_vhd_uri = None disable_ssh_password = None vm_dict = None image_reference = None custom_image = False resource_group = self.get_resource_group(self.resource_group) if not self.location: # Set default location self.location = 
            resource_group.location
        self.location = normalize_location_name(self.location)

        if self.state == 'present':
            # Verify parameters and resolve any defaults

            if self.vm_size and not self.vm_size_is_valid():
                self.fail("Parameter error: vm_size {0} is not valid for your subscription and location.".format(
                    self.vm_size
                ))

            if self.network_interface_names:
                for nic_name in self.network_interface_names:
                    nic = self.parse_network_interface(nic_name)
                    network_interfaces.append(nic)

            if self.ssh_public_keys:
                msg = "Parameter error: expecting ssh_public_keys to be a list of type dict where " \
                    "each dict contains keys: path, key_data."
                for key in self.ssh_public_keys:
                    if not isinstance(key, dict):
                        self.fail(msg)
                    if not key.get('path') or not key.get('key_data'):
                        self.fail(msg)

            if self.image and isinstance(self.image, dict):
                if all(key in self.image for key in ('publisher', 'offer', 'sku', 'version')):
                    marketplace_image = self.get_marketplace_image_version()
                    if self.image['version'] == 'latest':
                        self.image['version'] = marketplace_image.name
                        self.log("Using image version {0}".format(self.image['version']))

                    image_reference = self.compute_models.ImageReference(
                        publisher=self.image['publisher'],
                        offer=self.image['offer'],
                        sku=self.image['sku'],
                        version=self.image['version']
                    )
                elif self.image.get('name'):
                    custom_image = True
                    image_reference = self.get_custom_image_reference(
                        self.image.get('name'),
                        self.image.get('resource_group'))
                elif self.image.get('id'):
                    try:
                        image_reference = self.compute_models.ImageReference(id=self.image['id'])
                    except Exception as exc:
                        self.fail("id Error: Cannot get image from the reference id - {0}".format(self.image['id']))
                else:
                    self.fail("parameter error: expecting image to contain [publisher, offer, sku, version], [name, resource_group] or [id]")
            elif self.image and isinstance(self.image, str):
                custom_image = True
                image_reference = self.get_custom_image_reference(self.image)
            elif self.image:
                self.fail("parameter error: expecting image to be a string or dict not {0}".format(type(self.image).__name__))

            if self.plan:
                if not self.plan.get('name') or not self.plan.get('product') or not self.plan.get('publisher'):
                    self.fail("parameter error: plan must include name, product, and publisher")

            if not self.storage_blob_name and not self.managed_disk_type:
                self.storage_blob_name = self.name + '.vhd'
            elif self.managed_disk_type:
                self.storage_blob_name = self.name

            if self.storage_account_name and not self.managed_disk_type:
                properties = self.get_storage_account(self.storage_account_name)

                requested_storage_uri = properties.primary_endpoints.blob
                requested_vhd_uri = '{0}{1}/{2}'.format(requested_storage_uri,
                                                        self.storage_container_name,
                                                        self.storage_blob_name)

            disable_ssh_password = not self.ssh_password_enabled

        try:
            self.log("Fetching virtual machine {0}".format(self.name))
            vm = self.compute_client.virtual_machines.get(self.resource_group, self.name, expand='instanceview')
            self.check_provisioning_state(vm, self.state)
            vm_dict = self.serialize_vm(vm)

            if self.state == 'present':
                differences = []
                current_nics = []
                results = vm_dict

                # Try to determine if the VM needs to be updated
                if self.network_interface_names:
                    for nic in vm_dict['properties']['networkProfile']['networkInterfaces']:
                        current_nics.append(nic['id'])

                    if set(current_nics) != set(network_interfaces):
                        self.log('CHANGED: virtual machine {0} - network interfaces are different.'.format(self.name))
                        differences.append('Network Interfaces')
                        updated_nics = [dict(id=id, primary=(i == 0))
                                        for i, id in enumerate(network_interfaces)]
                        vm_dict['properties']['networkProfile']['networkInterfaces'] = updated_nics
                        changed = True

                if self.os_disk_caching and \
                   self.os_disk_caching != vm_dict['properties']['storageProfile']['osDisk']['caching']:
                    self.log('CHANGED: virtual machine {0} - OS disk caching'.format(self.name))
                    differences.append('OS Disk caching')
                    changed = True
                    vm_dict['properties']['storageProfile']['osDisk']['caching'] = self.os_disk_caching

                if self.os_disk_name and \
                   self.os_disk_name != vm_dict['properties']['storageProfile']['osDisk']['name']:
                    self.log('CHANGED: virtual machine {0} - OS disk name'.format(self.name))
                    differences.append('OS Disk name')
                    changed = True
                    vm_dict['properties']['storageProfile']['osDisk']['name'] = self.os_disk_name

                if self.os_disk_size_gb and \
                   self.os_disk_size_gb != vm_dict['properties']['storageProfile']['osDisk'].get('diskSizeGB'):
                    self.log('CHANGED: virtual machine {0} - OS disk size '.format(self.name))
                    differences.append('OS Disk size')
                    changed = True
                    vm_dict['properties']['storageProfile']['osDisk']['diskSizeGB'] = self.os_disk_size_gb

                if self.vm_size and \
                   self.vm_size != vm_dict['properties']['hardwareProfile']['vmSize']:
                    self.log('CHANGED: virtual machine {0} - size '.format(self.name))
                    differences.append('VM size')
                    changed = True
                    vm_dict['properties']['hardwareProfile']['vmSize'] = self.vm_size

                update_tags, vm_dict['tags'] = self.update_tags(vm_dict.get('tags', dict()))
                if update_tags:
                    differences.append('Tags')
                    changed = True

                if self.short_hostname and self.short_hostname != vm_dict['properties']['osProfile']['computerName']:
                    self.log('CHANGED: virtual machine {0} - short hostname'.format(self.name))
                    differences.append('Short Hostname')
                    changed = True
                    vm_dict['properties']['osProfile']['computerName'] = self.short_hostname

                if self.started and vm_dict['powerstate'] not in ['starting', 'running'] and self.allocated:
                    self.log("CHANGED: virtual machine {0} not running and requested state 'running'".format(self.name))
                    changed = True
                    powerstate_change = 'poweron'
                elif self.state == 'present' and vm_dict['powerstate'] == 'running' and self.restarted:
                    self.log("CHANGED: virtual machine {0} {1} and requested state 'restarted'"
                             .format(self.name, vm_dict['powerstate']))
                    changed = True
                    powerstate_change = 'restarted'
                elif self.state == 'present' and not self.allocated and vm_dict['powerstate'] not in ['deallocated', 'deallocating']:
                    self.log("CHANGED: virtual machine {0} {1} and requested state 'deallocated'"
                             .format(self.name, vm_dict['powerstate']))
                    changed = True
                    powerstate_change = 'deallocated'
                elif not self.started and vm_dict['powerstate'] == 'running':
                    self.log("CHANGED: virtual machine {0} running and requested state 'stopped'".format(self.name))
                    changed = True
                    powerstate_change = 'poweroff'
                elif self.generalized and vm_dict['powerstate'] != 'generalized':
                    self.log("CHANGED: virtual machine {0} requested to be 'generalized'".format(self.name))
                    changed = True
                    powerstate_change = 'generalized'

                vm_dict['zones'] = [int(i) for i in vm_dict['zones']] if 'zones' in vm_dict and vm_dict['zones'] else None
                if self.zones != vm_dict['zones']:
                    self.log("CHANGED: virtual machine {0} zones".format(self.name))
                    differences.append('Zones')
                    changed = True

                if self.license_type is not None and vm_dict['properties'].get('licenseType') != self.license_type:
                    differences.append('License Type')
                    changed = True

                # Defaults for boot diagnostics
                if 'diagnosticsProfile' not in vm_dict['properties']:
                    vm_dict['properties']['diagnosticsProfile'] = {}
                if 'bootDiagnostics' not in vm_dict['properties']['diagnosticsProfile']:
                    vm_dict['properties']['diagnosticsProfile']['bootDiagnostics'] = {
                        'enabled': False,
                        'storageUri': None
                    }
                if self.boot_diagnostics_present:
                    current_boot_diagnostics = vm_dict['properties']['diagnosticsProfile']['bootDiagnostics']
                    boot_diagnostics_changed = False

                    if self.boot_diagnostics['enabled'] != current_boot_diagnostics['enabled']:
                        current_boot_diagnostics['enabled'] = self.boot_diagnostics['enabled']
                        boot_diagnostics_changed = True

                    boot_diagnostics_storage_account = self.get_boot_diagnostics_storage_account(
                        limited=not self.boot_diagnostics['enabled'], vm_dict=vm_dict)
                    boot_diagnostics_blob = boot_diagnostics_storage_account.primary_endpoints.blob if boot_diagnostics_storage_account else None
                    if current_boot_diagnostics['storageUri'] != boot_diagnostics_blob:
                        current_boot_diagnostics['storageUri'] = boot_diagnostics_blob
                        boot_diagnostics_changed = True

                    if boot_diagnostics_changed:
                        differences.append('Boot Diagnostics')
                        changed = True

                        # Adding boot diagnostics can create a default storage account after initial creation
                        # this means we might also need to update the _own_sa_ tag
                        own_sa = (self.tags or {}).get('_own_sa_', None)
                        cur_sa = vm_dict.get('tags', {}).get('_own_sa_', None)
                        if own_sa and own_sa != cur_sa:
                            if 'Tags' not in differences:
                                differences.append('Tags')
                            if 'tags' not in vm_dict:
                                vm_dict['tags'] = {}
                            vm_dict['tags']['_own_sa_'] = own_sa
                            changed = True

                self.differences = differences

            elif self.state == 'absent':
                self.log("CHANGED: virtual machine {0} exists and requested state is 'absent'".format(self.name))
                results = dict()
                changed = True

        except CloudError:
            self.log('Virtual machine {0} does not exist'.format(self.name))
            if self.state == 'present':
                self.log("CHANGED: virtual machine {0} does not exist but state is 'present'.".format(self.name))
                changed = True

        self.results['changed'] = changed
        self.results['ansible_facts']['azure_vm'] = results
        self.results['powerstate_change'] = powerstate_change

        if self.check_mode:
            return self.results

        if changed:
            if self.state == 'present':
                if not vm:
                    # Create the VM
                    self.log("Create virtual machine {0}".format(self.name))
                    self.results['actions'].append('Created VM {0}'.format(self.name))

                    if self.os_type == 'Linux':
                        if disable_ssh_password and not self.ssh_public_keys:
                            self.fail("Parameter error: ssh_public_keys required when disabling SSH password.")

                    if not image_reference:
                        self.fail("Parameter error: an image is required when creating a virtual machine.")

                    availability_set_resource = None
                    if self.availability_set:
                        parsed_availability_set = parse_resource_id(self.availability_set)
                        availability_set = self.get_availability_set(parsed_availability_set.get('resource_group', self.resource_group),
                                                                     parsed_availability_set.get('name'))
                        availability_set_resource = self.compute_models.SubResource(id=availability_set.id)

                        if self.zones:
                            self.fail("Parameter error: you can't use Availability Set and Availability Zones at the same time")

                    # Get defaults
                    if not self.network_interface_names:
                        default_nic = self.create_default_nic()
                        self.log("network interface:")
                        self.log(self.serialize_obj(default_nic, 'NetworkInterface'), pretty_print=True)
                        network_interfaces = [default_nic.id]

                    # os disk
                    if not self.storage_account_name and not self.managed_disk_type:
                        storage_account = self.create_default_storage_account()
                        self.log("os disk storage account:")
                        self.log(self.serialize_obj(storage_account, 'StorageAccount'), pretty_print=True)
                        requested_storage_uri = 'https://{0}.blob.{1}/'.format(
                            storage_account.name,
                            self._cloud_environment.suffixes.storage_endpoint)
                        requested_vhd_uri = '{0}{1}/{2}'.format(
                            requested_storage_uri,
                            self.storage_container_name,
                            self.storage_blob_name)

                    if not self.short_hostname:
                        self.short_hostname = self.name

                    nics = [self.compute_models.NetworkInterfaceReference(id=id, primary=(i == 0))
                            for i, id in enumerate(network_interfaces)]

                    # os disk
                    if self.managed_disk_type:
                        vhd = None
                        managed_disk = self.compute_models.ManagedDiskParameters(storage_account_type=self.managed_disk_type)
                    elif custom_image:
                        vhd = None
                        managed_disk = None
                    else:
                        vhd = self.compute_models.VirtualHardDisk(uri=requested_vhd_uri)
                        managed_disk = None

                    plan = None
                    if self.plan:
                        plan = self.compute_models.Plan(name=self.plan.get('name'), product=self.plan.get('product'),
                                                        publisher=self.plan.get('publisher'),
                                                        promotion_code=self.plan.get('promotion_code'))

                    # do this before creating vm_resource as it can modify tags
                    if self.boot_diagnostics_present and self.boot_diagnostics['enabled']:
                        boot_diag_storage_account = self.get_boot_diagnostics_storage_account()

                    os_profile = None
                    if self.admin_username:
                        os_profile = self.compute_models.OSProfile(
                            admin_username=self.admin_username,
                            computer_name=self.short_hostname,
                        )

                    vm_resource = self.compute_models.VirtualMachine(
                        location=self.location,
                        tags=self.tags,
                        os_profile=os_profile,
                        hardware_profile=self.compute_models.HardwareProfile(
                            vm_size=self.vm_size
                        ),
                        storage_profile=self.compute_models.StorageProfile(
                            os_disk=self.compute_models.OSDisk(
                                name=self.os_disk_name if self.os_disk_name else self.storage_blob_name,
                                vhd=vhd,
                                managed_disk=managed_disk,
                                create_option=self.compute_models.DiskCreateOptionTypes.from_image,
                                caching=self.os_disk_caching,
                                disk_size_gb=self.os_disk_size_gb
                            ),
                            image_reference=image_reference,
                        ),
                        network_profile=self.compute_models.NetworkProfile(
                            network_interfaces=nics
                        ),
                        availability_set=availability_set_resource,
                        plan=plan,
                        zones=self.zones,
                    )

                    if self.license_type is not None:
                        vm_resource.license_type = self.license_type

                    if self.vm_identity:
                        vm_resource.identity = self.compute_models.VirtualMachineIdentity(type=self.vm_identity)

                    if self.winrm:
                        winrm_listeners = list()
                        for winrm_listener in self.winrm:
                            winrm_listeners.append(self.compute_models.WinRMListener(
                                protocol=winrm_listener.get('protocol'),
                                certificate_url=winrm_listener.get('certificate_url')
                            ))
                            if winrm_listener.get('source_vault'):
                                if not vm_resource.os_profile.secrets:
                                    vm_resource.os_profile.secrets = list()

                                vm_resource.os_profile.secrets.append(self.compute_models.VaultSecretGroup(
                                    source_vault=self.compute_models.SubResource(
                                        id=winrm_listener.get('source_vault')
                                    ),
                                    vault_certificates=[
                                        self.compute_models.VaultCertificate(
                                            certificate_url=winrm_listener.get('certificate_url'),
                                            certificate_store=winrm_listener.get('certificate_store')
                                        ),
                                    ]
                                ))

                        winrm = self.compute_models.WinRMConfiguration(
                            listeners=winrm_listeners
                        )

                        if not vm_resource.os_profile.windows_configuration:
                            vm_resource.os_profile.windows_configuration = self.compute_models.WindowsConfiguration(
                                win_rm=winrm
                            )
                        elif not vm_resource.os_profile.windows_configuration.win_rm:
                            vm_resource.os_profile.windows_configuration.win_rm = winrm

                    if self.boot_diagnostics_present:
                        vm_resource.diagnostics_profile = self.compute_models.DiagnosticsProfile(
                            boot_diagnostics=self.compute_models.BootDiagnostics(
                                enabled=self.boot_diagnostics['enabled'],
                                storage_uri=boot_diag_storage_account.primary_endpoints.blob))

                    if self.admin_password:
                        vm_resource.os_profile.admin_password = self.admin_password

                    if self.custom_data:
                        # Azure SDK (erroneously?) wants native string type for this
                        vm_resource.os_profile.custom_data = to_native(base64.b64encode(to_bytes(self.custom_data)))

                    if self.os_type == 'Linux' and os_profile:
                        vm_resource.os_profile.linux_configuration = self.compute_models.LinuxConfiguration(
                            disable_password_authentication=disable_ssh_password
                        )
                        if self.ssh_public_keys:
                            ssh_config = self.compute_models.SshConfiguration()
                            ssh_config.public_keys = \
                                [self.compute_models.SshPublicKey(path=key['path'], key_data=key['key_data']) for key in self.ssh_public_keys]
                            vm_resource.os_profile.linux_configuration.ssh = ssh_config

                    # data disk
                    if self.data_disks:
                        data_disks = []
                        count = 0

                        for data_disk in self.data_disks:
                            if not data_disk.get('managed_disk_type'):
                                if not data_disk.get('storage_blob_name'):
                                    data_disk['storage_blob_name'] = self.name + '-data-' + str(count) + '.vhd'
                                    count += 1

                                if data_disk.get('storage_account_name'):
                                    data_disk_storage_account = self.get_storage_account(data_disk['storage_account_name'])
                                else:
                                    data_disk_storage_account = self.create_default_storage_account()
                                    self.log("data disk storage account:")
                                    self.log(self.serialize_obj(data_disk_storage_account, 'StorageAccount'), pretty_print=True)

                                if not data_disk.get('storage_container_name'):
                                    data_disk['storage_container_name'] = 'vhds'

                                data_disk_requested_vhd_uri = 'https://{0}.blob.{1}/{2}/{3}'.format(
                                    data_disk_storage_account.name,
                                    self._cloud_environment.suffixes.storage_endpoint,
                                    data_disk['storage_container_name'],
                                    data_disk['storage_blob_name']
                                )

                            if not data_disk.get('managed_disk_type'):
                                data_disk_managed_disk = None
                                disk_name = data_disk['storage_blob_name']
                                data_disk_vhd = self.compute_models.VirtualHardDisk(uri=data_disk_requested_vhd_uri)
                            else:
                                data_disk_vhd = None
                                data_disk_managed_disk = self.compute_models.ManagedDiskParameters(storage_account_type=data_disk['managed_disk_type'])
                                disk_name = self.name + "-datadisk-" + str(count)
                                count += 1

                            data_disk['caching'] = data_disk.get(
                                'caching', 'ReadOnly'
                            )

                            data_disks.append(self.compute_models.DataDisk(
                                lun=data_disk['lun'],
                                name=disk_name,
                                vhd=data_disk_vhd,
                                caching=data_disk['caching'],
                                create_option=self.compute_models.DiskCreateOptionTypes.empty,
                                disk_size_gb=data_disk['disk_size_gb'],
                                managed_disk=data_disk_managed_disk,
                            ))

                        vm_resource.storage_profile.data_disks = data_disks

                    # Before creating VM accept terms of plan if `accept_terms` is True
                    if self.accept_terms is True:
                        if not self.plan or not all([self.plan.get('name'), self.plan.get('product'), self.plan.get('publisher')]):
                            self.fail("parameter error: plan must be specified and include name, product, and publisher")
                        try:
                            plan_name = self.plan.get('name')
                            plan_product = self.plan.get('product')
                            plan_publisher = self.plan.get('publisher')
                            term = self.marketplace_client.marketplace_agreements.get(
                                publisher_id=plan_publisher, offer_id=plan_product, plan_id=plan_name)
                            term.accepted = True
                            self.marketplace_client.marketplace_agreements.create(
                                publisher_id=plan_publisher, offer_id=plan_product, plan_id=plan_name, parameters=term)
                        except Exception as exc:
                            self.fail(("Error accepting terms for virtual machine {0} with plan {1}. " +
                                       "Only service admin/account admin users can purchase images " +
                                       "from the marketplace. - {2}").format(self.name, self.plan, str(exc)))

                    self.log("Create virtual machine with parameters:")
                    self.create_or_update_vm(vm_resource, 'all_autocreated' in self.remove_on_absent)

                elif self.differences and len(self.differences) > 0:
                    # Update the VM based on detected config differences

                    self.log("Update virtual machine {0}".format(self.name))
                    self.results['actions'].append('Updated VM {0}'.format(self.name))
                    nics = [self.compute_models.NetworkInterfaceReference(id=interface['id'], primary=(i == 0))
                            for i, interface in enumerate(vm_dict['properties']['networkProfile']['networkInterfaces'])]

                    # os disk
                    if not vm_dict['properties']['storageProfile']['osDisk'].get('managedDisk'):
                        managed_disk = None
                        vhd = self.compute_models.VirtualHardDisk(uri=vm_dict['properties']['storageProfile']['osDisk'].get('vhd', {}).get('uri'))
                    else:
                        vhd = None
                        managed_disk = self.compute_models.ManagedDiskParameters(
                            storage_account_type=vm_dict['properties']['storageProfile']['osDisk']['managedDisk'].get('storageAccountType')
                        )

                    availability_set_resource = None
                    try:
                        availability_set_resource = self.compute_models.SubResource(id=vm_dict['properties']['availabilitySet'].get('id'))
                    except Exception:
                        # pass if the availability set is not set
                        pass

                    if 'imageReference' in vm_dict['properties']['storageProfile'].keys():
                        if 'id' in vm_dict['properties']['storageProfile']['imageReference'].keys():
                            image_reference = self.compute_models.ImageReference(
                                id=vm_dict['properties']['storageProfile']['imageReference']['id']
                            )
                        else:
                            image_reference = self.compute_models.ImageReference(
                                publisher=vm_dict['properties']['storageProfile']['imageReference'].get('publisher'),
                                offer=vm_dict['properties']['storageProfile']['imageReference'].get('offer'),
                                sku=vm_dict['properties']['storageProfile']['imageReference'].get('sku'),
                                version=vm_dict['properties']['storageProfile']['imageReference'].get('version')
                            )
                    else:
                        image_reference = None

                    # You can't change a vm zone
                    if vm_dict['zones'] != self.zones:
                        self.fail("You can't change the Availability Zone of a virtual machine (have: {0}, want: {1})".format(vm_dict['zones'], self.zones))

                    if 'osProfile' in vm_dict['properties']:
                        os_profile = self.compute_models.OSProfile(
                            admin_username=vm_dict['properties'].get('osProfile', {}).get('adminUsername'),
                            computer_name=vm_dict['properties'].get('osProfile', {}).get('computerName')
                        )
                    else:
                        os_profile = None

                    vm_resource = self.compute_models.VirtualMachine(
                        location=vm_dict['location'],
                        os_profile=os_profile,
                        hardware_profile=self.compute_models.HardwareProfile(
                            vm_size=vm_dict['properties']['hardwareProfile'].get('vmSize')
                        ),
                        storage_profile=self.compute_models.StorageProfile(
                            os_disk=self.compute_models.OSDisk(
                                name=vm_dict['properties']['storageProfile']['osDisk'].get('name'),
                                vhd=vhd,
                                managed_disk=managed_disk,
                                create_option=vm_dict['properties']['storageProfile']['osDisk'].get('createOption'),
                                os_type=vm_dict['properties']['storageProfile']['osDisk'].get('osType'),
                                caching=vm_dict['properties']['storageProfile']['osDisk'].get('caching'),
                                disk_size_gb=vm_dict['properties']['storageProfile']['osDisk'].get('diskSizeGB')
                            ),
                            image_reference=image_reference
                        ),
                        availability_set=availability_set_resource,
                        network_profile=self.compute_models.NetworkProfile(
                            network_interfaces=nics
                        )
                    )

                    if self.license_type is not None:
                        vm_resource.license_type = self.license_type

                    if self.boot_diagnostics is not None:
                        vm_resource.diagnostics_profile = self.compute_models.DiagnosticsProfile(
                            boot_diagnostics=self.compute_models.BootDiagnostics(
                                enabled=vm_dict['properties']['diagnosticsProfile']['bootDiagnostics']['enabled'],
                                storage_uri=vm_dict['properties']['diagnosticsProfile']['bootDiagnostics']['storageUri']))

                    if vm_dict.get('tags'):
                        vm_resource.tags = vm_dict['tags']

                    # Add custom_data, if provided
                    if vm_dict['properties'].get('osProfile', {}).get('customData'):
                        custom_data = vm_dict['properties']['osProfile']['customData']
                        # Azure SDK (erroneously?) wants native string type for this
                        vm_resource.os_profile.custom_data = to_native(base64.b64encode(to_bytes(custom_data)))

                    # Add admin password, if one provided
                    if vm_dict['properties'].get('osProfile', {}).get('adminPassword'):
                        vm_resource.os_profile.admin_password = vm_dict['properties']['osProfile']['adminPassword']

                    # Add linux configuration, if applicable
                    linux_config = vm_dict['properties'].get('osProfile', {}).get('linuxConfiguration')
                    if linux_config:
                        ssh_config = linux_config.get('ssh', None)
                        vm_resource.os_profile.linux_configuration = self.compute_models.LinuxConfiguration(
                            disable_password_authentication=linux_config.get('disablePasswordAuthentication', False)
                        )
                        if ssh_config:
                            public_keys = ssh_config.get('publicKeys')
                            if public_keys:
                                vm_resource.os_profile.linux_configuration.ssh = self.compute_models.SshConfiguration(public_keys=[])
                                for key in public_keys:
                                    vm_resource.os_profile.linux_configuration.ssh.public_keys.append(
                                        self.compute_models.SshPublicKey(path=key['path'], key_data=key['keyData'])
                                    )

                    # data disk
                    if vm_dict['properties']['storageProfile'].get('dataDisks'):
                        data_disks = []

                        for data_disk in vm_dict['properties']['storageProfile']['dataDisks']:
                            if data_disk.get('managedDisk'):
                                managed_disk_type = data_disk['managedDisk'].get('storageAccountType')
                                data_disk_managed_disk = self.compute_models.ManagedDiskParameters(storage_account_type=managed_disk_type)
                                data_disk_vhd = None
                            else:
                                data_disk_vhd = data_disk['vhd']['uri']
                                data_disk_managed_disk = None

                            data_disks.append(self.compute_models.DataDisk(
                                lun=int(data_disk['lun']),
                                name=data_disk.get('name'),
                                vhd=data_disk_vhd,
                                caching=data_disk.get('caching'),
                                create_option=data_disk.get('createOption'),
                                disk_size_gb=int(data_disk['diskSizeGB']),
                                managed_disk=data_disk_managed_disk,
                            ))
                        vm_resource.storage_profile.data_disks = data_disks

                    self.log("Update virtual machine with parameters:")
                    self.create_or_update_vm(vm_resource, False)

                # Make sure we leave the machine in requested power state
                if (powerstate_change == 'poweron' and
                        self.results['ansible_facts']['azure_vm']['powerstate'] != 'running'):
                    # Attempt to power on the machine
                    self.power_on_vm()

                elif (powerstate_change == 'poweroff' and
                        self.results['ansible_facts']['azure_vm']['powerstate'] == 'running'):
                    # Attempt to power off the machine
                    self.power_off_vm()

                elif powerstate_change == 'restarted':
                    self.restart_vm()

                elif powerstate_change == 'deallocated':
                    self.deallocate_vm()

                elif powerstate_change == 'generalized':
                    self.power_off_vm()
                    self.generalize_vm()

                self.results['ansible_facts']['azure_vm'] = self.serialize_vm(self.get_vm())

            elif self.state == 'absent':
                # delete the VM
                self.log("Delete virtual machine {0}".format(self.name))
                self.results['ansible_facts']['azure_vm'] = None
                self.delete_vm(vm)

        # until we sort out how we want to do this globally
        del self.results['actions']

        return self.results

    def get_vm(self):
        '''
        Get the VM with expanded instanceView

        :return: VirtualMachine object
        '''
        try:
            vm = self.compute_client.virtual_machines.get(self.resource_group, self.name, expand='instanceview')
            return vm
        except Exception as exc:
            self.fail("Error getting virtual machine {0} - {1}".format(self.name, str(exc)))

    def serialize_vm(self, vm):
        '''
        Convert a VirtualMachine object to dict.

        :param vm: VirtualMachine object
        :return: dict
        '''

        result = self.serialize_obj(vm, AZURE_OBJECT_CLASS, enum_modules=AZURE_ENUM_MODULES)
        result['id'] = vm.id
        result['name'] = vm.name
        result['type'] = vm.type
        result['location'] = vm.location
        result['tags'] = vm.tags

        result['powerstate'] = dict()
        if vm.instance_view:
            result['powerstate'] = next((s.code.replace('PowerState/', '')
                                         for s in vm.instance_view.statuses if s.code.startswith('PowerState')), None)
            for s in vm.instance_view.statuses:
                if s.code.lower() == "osstate/generalized":
                    result['powerstate'] = 'generalized'

        # Expand network interfaces to include config properties
        for interface in vm.network_profile.network_interfaces:
            int_dict = azure_id_to_dict(interface.id)
            nic = self.get_network_interface(int_dict['resourceGroups'], int_dict['networkInterfaces'])
            for interface_dict in result['properties']['networkProfile']['networkInterfaces']:
                if interface_dict['id'] == interface.id:
                    nic_dict = self.serialize_obj(nic, 'NetworkInterface')
                    interface_dict['name'] = int_dict['networkInterfaces']
                    interface_dict['properties'] = nic_dict['properties']

        # Expand public IPs to include config properties
        for interface in result['properties']['networkProfile']['networkInterfaces']:
            for config in interface['properties']['ipConfigurations']:
                if config['properties'].get('publicIPAddress'):
                    pipid_dict = azure_id_to_dict(config['properties']['publicIPAddress']['id'])
                    try:
                        pip = self.network_client.public_ip_addresses.get(pipid_dict['resourceGroups'],
                                                                          pipid_dict['publicIPAddresses'])
                    except Exception as exc:
                        self.fail("Error fetching public ip {0} - {1}".format(pipid_dict['publicIPAddresses'],
                                                                              str(exc)))
                    pip_dict = self.serialize_obj(pip, 'PublicIPAddress')
                    config['properties']['publicIPAddress']['name'] = pipid_dict['publicIPAddresses']
                    config['properties']['publicIPAddress']['properties'] = pip_dict['properties']

        self.log(result, pretty_print=True)
        if self.state != 'absent' and not result['powerstate']:
            self.fail("Failed to determine PowerState of virtual machine {0}".format(self.name))
        return result

    def power_off_vm(self):
        self.log("Powered off virtual machine {0}".format(self.name))
        self.results['actions'].append("Powered off virtual machine {0}".format(self.name))
        try:
            poller = self.compute_client.virtual_machines.power_off(self.resource_group, self.name)
            self.get_poller_result(poller)
        except Exception as exc:
            self.fail("Error powering off virtual machine {0} - {1}".format(self.name, str(exc)))
        return True

    def power_on_vm(self):
        self.results['actions'].append("Powered on virtual machine {0}".format(self.name))
        self.log("Power on virtual machine {0}".format(self.name))
        try:
            poller = self.compute_client.virtual_machines.start(self.resource_group, self.name)
            self.get_poller_result(poller)
        except Exception as exc:
            self.fail("Error powering on virtual machine {0} - {1}".format(self.name, str(exc)))
        return True

    def restart_vm(self):
        self.results['actions'].append("Restarted virtual machine {0}".format(self.name))
        self.log("Restart virtual machine {0}".format(self.name))
        try:
            poller = self.compute_client.virtual_machines.restart(self.resource_group, self.name)
            self.get_poller_result(poller)
        except Exception as exc:
            self.fail("Error restarting virtual machine {0} - {1}".format(self.name, str(exc)))
        return True

    def deallocate_vm(self):
        self.results['actions'].append("Deallocated virtual machine {0}".format(self.name))
        self.log("Deallocate virtual machine {0}".format(self.name))
        try:
            poller = self.compute_client.virtual_machines.deallocate(self.resource_group, self.name)
            self.get_poller_result(poller)
        except Exception as exc:
            self.fail("Error deallocating virtual machine {0} - {1}".format(self.name, str(exc)))
        return True

    def generalize_vm(self):
        self.results['actions'].append("Generalize virtual machine {0}".format(self.name))
        self.log("Generalize virtual machine {0}".format(self.name))
        try:
            response = self.compute_client.virtual_machines.generalize(self.resource_group, self.name)
            if isinstance(response, LROPoller):
                self.get_poller_result(response)
        except Exception as exc:
            self.fail("Error generalizing virtual machine {0} - {1}".format(self.name, str(exc)))
        return True

    def remove_autocreated_resources(self, tags):
        if tags:
            sa_name = tags.get('_own_sa_')
            nic_name = tags.get('_own_nic_')
            pip_name = tags.get('_own_pip_')
            nsg_name = tags.get('_own_nsg_')
            if sa_name:
                self.delete_storage_account(self.resource_group, sa_name)
            if nic_name:
                self.delete_nic(self.resource_group, nic_name)
            if pip_name:
                self.delete_pip(self.resource_group, pip_name)
            if nsg_name:
                self.delete_nsg(self.resource_group, nsg_name)

    def delete_vm(self, vm):
        vhd_uris = []
        managed_disk_ids = []
        nic_names = []
        pip_names = []

        if 'all_autocreated' not in self.remove_on_absent:
            if self.remove_on_absent.intersection(set(['all', 'virtual_storage'])):
                # store the attached vhd info so we can nuke it after the VM is gone
                if(vm.storage_profile.os_disk.managed_disk):
                    self.log('Storing managed disk ID for deletion')
                    managed_disk_ids.append(vm.storage_profile.os_disk.managed_disk.id)
                elif(vm.storage_profile.os_disk.vhd):
                    self.log('Storing VHD URI for deletion')
                    vhd_uris.append(vm.storage_profile.os_disk.vhd.uri)

                data_disks = vm.storage_profile.data_disks
                for data_disk in data_disks:
                    if data_disk is not None:
                        if(data_disk.vhd):
                            vhd_uris.append(data_disk.vhd.uri)
                        elif(data_disk.managed_disk):
                            managed_disk_ids.append(data_disk.managed_disk.id)

                # FUTURE enable diff mode, move these there...
                self.log("VHD URIs to delete: {0}".format(', '.join(vhd_uris)))
                self.results['deleted_vhd_uris'] = vhd_uris

                self.log("Managed disk IDs to delete: {0}".format(', '.join(managed_disk_ids)))
                self.results['deleted_managed_disk_ids'] = managed_disk_ids

            if self.remove_on_absent.intersection(set(['all', 'network_interfaces'])):
                # store the attached nic info so we can nuke them after the VM is gone
                self.log('Storing NIC names for deletion.')
                for interface in vm.network_profile.network_interfaces:
                    id_dict = azure_id_to_dict(interface.id)
                    nic_names.append(dict(name=id_dict['networkInterfaces'], resource_group=id_dict['resourceGroups']))
                self.log('NIC names to delete {0}'.format(str(nic_names)))
                self.results['deleted_network_interfaces'] = nic_names

                if self.remove_on_absent.intersection(set(['all', 'public_ips'])):
                    # also store each nic's attached public IPs and delete after the NIC is gone
                    for nic_dict in nic_names:
                        nic = self.get_network_interface(nic_dict['resource_group'], nic_dict['name'])
                        for ipc in nic.ip_configurations:
                            if ipc.public_ip_address:
                                pip_dict = azure_id_to_dict(ipc.public_ip_address.id)
                                pip_names.append(dict(name=pip_dict['publicIPAddresses'], resource_group=pip_dict['resourceGroups']))
                    self.log('Public IPs to delete are {0}'.format(str(pip_names)))
                    self.results['deleted_public_ips'] = pip_names

        self.log("Deleting virtual machine {0}".format(self.name))
        self.results['actions'].append("Deleted virtual machine {0}".format(self.name))
        try:
            poller = self.compute_client.virtual_machines.delete(self.resource_group, self.name)
            # wait for the poller to finish
            self.get_poller_result(poller)
        except Exception as exc:
            self.fail("Error deleting virtual machine {0} - {1}".format(self.name, str(exc)))

        # TODO: parallelize nic, vhd, and public ip deletions with begin_deleting
        # TODO: best-effort to keep deleting other linked resources if we encounter an error
        if self.remove_on_absent.intersection(set(['all', 'virtual_storage'])):
            self.log('Deleting VHDs')
            self.delete_vm_storage(vhd_uris)
            self.log('Deleting managed disks')
            self.delete_managed_disks(managed_disk_ids)

        if 'all' in self.remove_on_absent or 'all_autocreated' in self.remove_on_absent:
            self.remove_autocreated_resources(vm.tags)

        if self.remove_on_absent.intersection(set(['all', 'network_interfaces'])):
            self.log('Deleting network interfaces')
            for nic_dict in nic_names:
                self.delete_nic(nic_dict['resource_group'], nic_dict['name'])

        if self.remove_on_absent.intersection(set(['all', 'public_ips'])):
            self.log('Deleting public IPs')
            for pip_dict in pip_names:
                self.delete_pip(pip_dict['resource_group'], pip_dict['name'])

        if 'all' in self.remove_on_absent or 'all_autocreated' in self.remove_on_absent:
            self.remove_autocreated_resources(vm.tags)

        return True

    def get_network_interface(self, resource_group, name):
        try:
            nic = self.network_client.network_interfaces.get(resource_group, name)
            return nic
        except Exception as exc:
            self.fail("Error fetching network interface {0} - {1}".format(name, str(exc)))
        return True

    def delete_nic(self, resource_group, name):
        self.log("Deleting network interface {0}".format(name))
        self.results['actions'].append("Deleted network interface {0}".format(name))
        try:
            poller = self.network_client.network_interfaces.delete(resource_group, name)
        except Exception as exc:
            self.fail("Error deleting network interface {0} - {1}".format(name, str(exc)))
        self.get_poller_result(poller)
        # Delete doesn't return anything. If we get this far, assume success
        return True

    def delete_pip(self, resource_group, name):
        self.results['actions'].append("Deleted public IP {0}".format(name))
        try:
            poller = self.network_client.public_ip_addresses.delete(resource_group, name)
            self.get_poller_result(poller)
        except Exception as exc:
            self.fail("Error deleting {0} - {1}".format(name, str(exc)))
        # Delete returns nada. If we get here, assume that all is well.
        return True

    def delete_nsg(self, resource_group, name):
        self.results['actions'].append("Deleted NSG {0}".format(name))
        try:
            poller = self.network_client.network_security_groups.delete(resource_group, name)
            self.get_poller_result(poller)
        except Exception as exc:
            self.fail("Error deleting {0} - {1}".format(name, str(exc)))
        return True

    def delete_managed_disks(self, managed_disk_ids):
        for mdi in managed_disk_ids:
            try:
                poller = self.rm_client.resources.delete_by_id(mdi, '2017-03-30')
                self.get_poller_result(poller)
            except Exception as exc:
                self.fail("Error deleting managed disk {0} - {1}".format(mdi, str(exc)))
        return True

    def delete_storage_account(self, resource_group, name):
        self.log("Delete storage account {0}".format(name))
        self.results['actions'].append("Deleted storage account {0}".format(name))
        try:
            self.storage_client.storage_accounts.delete(self.resource_group, name)
        except Exception as exc:
            self.fail("Error deleting storage account {0} - {1}".format(name, str(exc)))
        return True

    def delete_vm_storage(self, vhd_uris):
        # FUTURE: figure out a cloud_env independent way to delete these
        for uri in vhd_uris:
            self.log("Extracting info from blob uri '{0}'".format(uri))
            try:
                blob_parts = extract_names_from_blob_uri(uri, self._cloud_environment.suffixes.storage_endpoint)
            except Exception as exc:
                self.fail("Error parsing blob URI {0}".format(str(exc)))
            storage_account_name = blob_parts['accountname']
            container_name = blob_parts['containername']
            blob_name = blob_parts['blobname']

            blob_client = self.get_blob_client(self.resource_group, storage_account_name)

            self.log("Delete blob {0}:{1}".format(container_name, blob_name))
            self.results['actions'].append("Deleted blob {0}:{1}".format(container_name, blob_name))
            try:
                blob_client.delete_blob(container_name, blob_name)
            except Exception as exc:
                self.fail("Error deleting blob {0}:{1} - {2}".format(container_name, blob_name, str(exc)))
        return True

    def get_marketplace_image_version(self):
        try:
            versions = self.compute_client.virtual_machine_images.list(self.location,
                                                                       self.image['publisher'],
                                                                       self.image['offer'],
                                                                       self.image['sku'])
        except Exception as exc:
            self.fail("Error fetching image {0} {1} {2} - {3}".format(self.image['publisher'],
                                                                      self.image['offer'],
                                                                      self.image['sku'],
                                                                      str(exc)))
        if versions and len(versions) > 0:
            if self.image['version'] == 'latest':
                return versions[len(versions) - 1]
            for version in versions:
                if version.name == self.image['version']:
                    return version

        self.fail("Error could not find image {0} {1} {2} {3}".format(self.image['publisher'],
                                                                      self.image['offer'],
                                                                      self.image['sku'],
                                                                      self.image['version']))
        return None

    def get_custom_image_reference(self, name, resource_group=None):
        try:
            if resource_group:
                vm_images = self.compute_client.images.list_by_resource_group(resource_group)
            else:
                vm_images = self.compute_client.images.list()
        except Exception as exc:
            self.fail("Error fetching custom images from subscription - {0}".format(str(exc)))

        for vm_image in vm_images:
            if vm_image.name == name:
                self.log("Using custom image id {0}".format(vm_image.id))
                return self.compute_models.ImageReference(id=vm_image.id)

        self.fail("Error could not find image with name {0}".format(name))
        return None

    def get_availability_set(self, resource_group, name):
        try:
            return self.compute_client.availability_sets.get(resource_group, name)
        except Exception as exc:
            self.fail("Error fetching availability set {0} - {1}".format(name, str(exc)))

    def get_storage_account(self, name):
        try:
            account = self.storage_client.storage_accounts.get_properties(self.resource_group, name)
            return account
        except Exception as exc:
            self.fail("Error fetching storage account {0} - {1}".format(name, str(exc)))

    def create_or_update_vm(self, params, remove_autocreated_on_failure):
        try:
            poller = self.compute_client.virtual_machines.create_or_update(self.resource_group, self.name, params)
            self.get_poller_result(poller)
        except Exception as exc:
            if remove_autocreated_on_failure:
                self.remove_autocreated_resources(params.tags)
            self.fail("Error creating or updating virtual machine {0} - {1}".format(self.name, str(exc)))

    def vm_size_is_valid(self):
        '''
        Validate self.vm_size against the list of virtual machine sizes available for the account and location.

        :return: boolean
        '''
        try:
            sizes = self.compute_client.virtual_machine_sizes.list(self.location)
        except Exception as exc:
            self.fail("Error retrieving available machine sizes - {0}".format(str(exc)))
        for size in sizes:
            if size.name == self.vm_size:
                return True
        return False

    def create_default_storage_account(self, vm_dict=None):
        '''
        Create (once) a default storage account <vm name>XXXX, where XXXX is a random number.
        NOTE: If <vm name>XXXX exists, use it instead of failing. Highly unlikely.
        If this method is called multiple times across executions it will return the same
        storage account created with the random name which is stored in a tag on the VM.

        vm_dict is passed in during an update, so we can obtain the _own_sa_ tag and return
        the default storage account we created in a previous invocation

        :return: storage account object
        '''
        account = None
        valid_name = False
        if self.tags is None:
            self.tags = {}

        if self.tags.get('_own_sa_', None):
            # We previously created one in the same invocation
            return self.get_storage_account(self.tags['_own_sa_'])

        if vm_dict and vm_dict.get('tags', {}).get('_own_sa_', None):
            # We previously created one in a previous invocation
            # We must be updating, like adding boot diagnostics
            return self.get_storage_account(vm_dict['tags']['_own_sa_'])

        # Attempt to find a valid storage account name
        storage_account_name_base = re.sub('[^a-zA-Z0-9]', '', self.name[:20].lower())
        for i in range(0, 5):
            rand = random.randrange(1000, 9999)
            storage_account_name = storage_account_name_base + str(rand)
            if self.check_storage_account_name(storage_account_name):
                valid_name = True
                break

        if not valid_name:
            self.fail("Failed to create a unique storage account name for {0}. Try using a different VM name."
                      .format(self.name))

        try:
            account = self.storage_client.storage_accounts.get_properties(self.resource_group, storage_account_name)
        except CloudError:
            pass

        if account:
            self.log("Storage account {0} found.".format(storage_account_name))
            self.check_provisioning_state(account)
            return account
        sku = self.storage_models.Sku(name=self.storage_models.SkuName.standard_lrs)
        sku.tier = self.storage_models.SkuTier.standard
        kind = self.storage_models.Kind.storage
        parameters = self.storage_models.StorageAccountCreateParameters(sku=sku, kind=kind, location=self.location)
        self.log("Creating storage account {0} in location {1}".format(storage_account_name, self.location))
        self.results['actions'].append("Created storage account {0}".format(storage_account_name))
        try:
            poller = self.storage_client.storage_accounts.create(self.resource_group, storage_account_name, parameters)
            self.get_poller_result(poller)
        except Exception as exc:
            self.fail("Failed to create storage account: {0} - {1}".format(storage_account_name, str(exc)))
        self.tags['_own_sa_'] = storage_account_name
        return self.get_storage_account(storage_account_name)

    def check_storage_account_name(self, name):
        self.log("Checking storage account name availability for {0}".format(name))
        try:
            response = self.storage_client.storage_accounts.check_name_availability(name)
            if response.reason == 'AccountNameInvalid':
                raise Exception("Invalid default storage account name: {0}".format(name))
        except Exception as exc:
            self.fail("Error checking storage account name availability for {0} - {1}".format(name, str(exc)))

        return response.name_available

    def create_default_nic(self):
        '''
        Create a default Network Interface <vm name>01. Requires an existing virtual network
        with one subnet. If NIC <vm name>01 exists, use it. Otherwise, create one.
:return: NIC object ''' network_interface_name = self.name + '01' nic = None if self.tags is None: self.tags = {} self.log("Create default NIC {0}".format(network_interface_name)) self.log("Check to see if NIC {0} exists".format(network_interface_name)) try: nic = self.network_client.network_interfaces.get(self.resource_group, network_interface_name) except CloudError: pass if nic: self.log("NIC {0} found.".format(network_interface_name)) self.check_provisioning_state(nic) return nic self.log("NIC {0} does not exist.".format(network_interface_name)) virtual_network_resource_group = None if self.virtual_network_resource_group: virtual_network_resource_group = self.virtual_network_resource_group else: virtual_network_resource_group = self.resource_group if self.virtual_network_name: try: self.network_client.virtual_networks.list(virtual_network_resource_group, self.virtual_network_name) virtual_network_name = self.virtual_network_name except CloudError as exc: self.fail("Error: fetching virtual network {0} - {1}".format(self.virtual_network_name, str(exc))) else: # Find a virtual network no_vnets_msg = "Error: unable to find virtual network in resource group {0}. 
A virtual network " \ "with at least one subnet must exist in order to create a NIC for the virtual " \ "machine.".format(virtual_network_resource_group) virtual_network_name = None try: vnets = self.network_client.virtual_networks.list(virtual_network_resource_group) except CloudError: self.log('cloud error!') self.fail(no_vnets_msg) for vnet in vnets: virtual_network_name = vnet.name self.log('vnet name: {0}'.format(vnet.name)) break if not virtual_network_name: self.fail(no_vnets_msg) if self.subnet_name: try: subnet = self.network_client.subnets.get(virtual_network_resource_group, virtual_network_name, self.subnet_name) subnet_id = subnet.id except Exception as exc: self.fail("Error: fetching subnet {0} - {1}".format(self.subnet_name, str(exc))) else: no_subnets_msg = "Error: unable to find a subnet in virtual network {0}. A virtual network " \ "with at least one subnet must exist in order to create a NIC for the virtual " \ "machine.".format(virtual_network_name) subnet_id = None try: subnets = self.network_client.subnets.list(virtual_network_resource_group, virtual_network_name) except CloudError: self.fail(no_subnets_msg) for subnet in subnets: subnet_id = subnet.id self.log('subnet id: {0}'.format(subnet_id)) break if not subnet_id: self.fail(no_subnets_msg) pip = None if self.public_ip_allocation_method != 'Disabled': self.results['actions'].append('Created default public IP {0}'.format(self.name + '01')) sku = self.network_models.PublicIPAddressSku(name="Standard") if self.zones else None pip_facts = self.create_default_pip(self.resource_group, self.location, self.name + '01', self.public_ip_allocation_method, sku=sku) pip = self.network_models.PublicIPAddress(id=pip_facts.id, location=pip_facts.location, resource_guid=pip_facts.resource_guid, sku=sku) self.tags['_own_pip_'] = self.name + '01' self.results['actions'].append('Created default security group {0}'.format(self.name + '01')) group = self.create_default_securitygroup(self.resource_group, 
self.location, self.name + '01', self.os_type, self.open_ports) self.tags['_own_nsg_'] = self.name + '01' parameters = self.network_models.NetworkInterface( location=self.location, ip_configurations=[ self.network_models.NetworkInterfaceIPConfiguration( private_ip_allocation_method='Dynamic', ) ] ) parameters.ip_configurations[0].subnet = self.network_models.Subnet(id=subnet_id) parameters.ip_configurations[0].name = 'default' parameters.network_security_group = self.network_models.NetworkSecurityGroup(id=group.id, location=group.location, resource_guid=group.resource_guid) parameters.ip_configurations[0].public_ip_address = pip self.log("Creating NIC {0}".format(network_interface_name)) self.log(self.serialize_obj(parameters, 'NetworkInterface'), pretty_print=True) self.results['actions'].append("Created NIC {0}".format(network_interface_name)) try: poller = self.network_client.network_interfaces.create_or_update(self.resource_group, network_interface_name, parameters) new_nic = self.get_poller_result(poller) self.tags['_own_nic_'] = network_interface_name except Exception as exc: self.fail("Error creating network interface {0} - {1}".format(network_interface_name, str(exc))) return new_nic def parse_network_interface(self, nic): nic = self.parse_resource_to_dict(nic) if 'name' not in nic: self.fail("Invalid network interface {0}".format(str(nic))) return format_resource_id(val=nic['name'], subscription_id=nic['subscription_id'], resource_group=nic['resource_group'], namespace='Microsoft.Network', types='networkInterfaces') def main(): AzureRMVirtualMachine() if __name__ == '__main__': main()
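The `delete_vm_storage` method above relies on `extract_names_from_blob_uri` to split an unmanaged-disk VHD URI into the storage account, container and blob names. A minimal sketch of that kind of parsing follows — the regex and the default endpoint suffix are assumptions for illustration, not Ansible's actual helper:

```python
import re

def parse_blob_uri(uri, storage_suffix="core.windows.net"):
    # Expected shape: https://<account>.blob.<suffix>/<container>/<blob path>
    pattern = (r'^https?://(?P<accountname>[^.]+)\.blob\.'
               + re.escape(storage_suffix)
               + r'/(?P<containername>[^/]+)/(?P<blobname>.+)$')
    match = re.match(pattern, uri)
    if match is None:
        raise ValueError("not a recognisable blob URI: {0}".format(uri))
    return match.groupdict()
```

For example, `parse_blob_uri("https://mystore.blob.core.windows.net/vhds/vm01.vhd")` yields the three names that `delete_vm_storage` consumes to locate and delete the blob.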
closed
ansible/ansible
https://github.com/ansible/ansible
65,354
VMware: vmware_host_dns misses required parameter in examples
##### SUMMARY
As @k3x pointed out in #63374 the examples in `vmware_host_dns` miss a required parameter.

##### ISSUE TYPE
- Documentation Report

##### COMPONENT NAME
vmware_host_dns

##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
  config file = None
  configured module search path = [u'/home/mario/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /home/mario/rme/ansible/lib/ansible
  executable location = /home/mario/rme/ansible/bin/ansible
  python version = 2.7.16 (default, Oct 10 2019, 22:02:15) [GCC 8.3.0]
```
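The module declares `type` as a required argument and additionally demands `device` via `required_if` when `type=dhcp`, which is exactly why examples that omit `type` fail. A toy re-implementation of those two checks, for illustration only (not AnsibleModule's real validation code):

```python
def check_dns_params(params):
    """Mimic the module's required / required_if rules for its examples."""
    errors = []
    if 'type' not in params:
        errors.append("missing required arguments: type")
    elif params['type'] == 'dhcp' and 'device' not in params:
        errors.append("type is dhcp but all of the following are missing: device")
    return errors
```

The static examples in the documentation only pass this check once a `type: static` line is added to each task.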
https://github.com/ansible/ansible/issues/65354
https://github.com/ansible/ansible/pull/65355
34ecd6cb25258116d394ffcbaf9ec29504a3919a
deb0cbbf738967c2d7ff03229e7ff9eff51854a2
2019-11-28T18:26:07Z
python
2019-11-29T01:20:33Z
lib/ansible/modules/cloud/vmware/vmware_host_dns.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright: (c) 2018, Christian Kotte <[email protected]> # # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = { 'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community' } DOCUMENTATION = r''' --- module: vmware_host_dns short_description: Manage DNS configuration of an ESXi host system description: - This module can be used to configure DNS for the default TCP/IP stack on an ESXi host system. version_added: '2.10' author: - Christian Kotte (@ckotte) - Mario Lenz (@mariolenz) notes: - This module is a replacement for the module C(vmware_dns_config) - Tested on vSphere 6.7 requirements: - python >= 2.6 - PyVmomi options: type: description: - Type of IP assignment. Either C(dhcp) or C(static). - A VMkernel adapter needs to be set to DHCP if C(type) is set to C(dhcp). type: str choices: [ 'dhcp', 'static' ] required: true device: description: - The VMkernel network adapter to obtain DNS settings from. - Needs to get its IP through DHCP, a static network configuration combined with a dynamic DNS configuration doesn't work. - The parameter is only required in case of C(type) is set to C(dhcp). type: str host_name: description: - The hostname to be used for the ESXi host. - Cannot be used when configuring a complete cluster. type: str domain: description: - The domain name to be used for the the ESXi host. type: str dns_servers: description: - A list of DNS servers to be used. - The order of the DNS servers is important as they are used consecutively in order. type: list search_domains: description: - A list of domains to be searched through by the resolver. type: list verbose: description: - Verbose output of the DNS server configuration change. - Explains if an DNS server was added, removed, or if the DNS server sequence was changed. 
type: bool default: false esxi_hostname: description: - Name of the host system to work with. - This parameter is required if C(cluster_name) is not specified and you connect to a vCenter. - Cannot be used when you connect directly to an ESXi host. type: str cluster_name: description: - Name of the cluster from which all host systems will be used. - This parameter is required if C(esxi_hostname) is not specified and you connect to a vCenter. - Cannot be used when you connect directly to an ESXi host. type: str extends_documentation_fragment: vmware.documentation ''' EXAMPLES = r''' - name: Configure DNS for an ESXi host vmware_host_dns: hostname: '{{ vcenter_hostname }}' username: '{{ vcenter_username }}' password: '{{ vcenter_password }}' esxi_hostname: '{{ esxi_hostname }}' host_name: esx01 domain: example.local dns_servers: - 192.168.1.10 - 192.168.1.11 search_domains: - subdomain.example.local - example.local delegate_to: localhost - name: Configure DNS for all ESXi hosts of a cluster vmware_host_dns: hostname: '{{ vcenter_hostname }}' username: '{{ vcenter_username }}' password: '{{ vcenter_password }}' cluster_name: '{{ cluster_name }}' domain: example.local dns_servers: - 192.168.1.10 - 192.168.1.11 search_domains: - subdomain.example.local - example.local delegate_to: localhost - name: Configure DNS via DHCP for an ESXi host vmware_host_dns: hostname: '{{ vcenter_hostname }}' username: '{{ vcenter_username }}' password: '{{ vcenter_password }}' esxi_hostname: '{{ esxi_hostname }}' type: dhcp device: vmk0 delegate_to: localhost ''' RETURN = r''' dns_config_result: description: metadata about host system's DNS configuration returned: always type: dict sample: { "esx01.example.local": { "changed": true, "dns_servers_changed": ["192.168.1.12", "192.168.1.13"], "dns_servers": ["192.168.1.10", "192.168.1.11"], "dns_servers_previous": ["192.168.1.10", "192.168.1.11", "192.168.1.12", "192.168.1.13"], "domain": "example.local", "host_name": "esx01", "msg": "DNS 
servers and Search domains changed", "search_domains_changed": ["subdomain.example.local"], "search_domains": ["subdomain.example.local", "example.local"], "search_domains_previous": ["example.local"], }, } ''' try: from pyVmomi import vim, vmodl except ImportError: pass from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec from ansible.module_utils._text import to_native class VmwareHostDNS(PyVmomi): """Class to manage DNS configuration of an ESXi host system""" def __init__(self, module): super(VmwareHostDNS, self).__init__(module) self.cluster_name = self.params.get('cluster_name') self.esxi_host_name = self.params.get('esxi_hostname') if self.is_vcenter(): if not self.cluster_name and not self.esxi_host_name: self.module.fail_json( msg="You connected to a vCenter but didn't specify the cluster_name or esxi_hostname you want to configure." ) else: if self.cluster_name: self.module.warn( "You connected directly to an ESXi host, cluster_name will be ignored." ) if self.esxi_host_name: self.module.warn( "You connected directly to an ESXi host, esxi_host_name will be ignored." 
) self.hosts = self.get_all_host_objs(cluster_name=self.cluster_name, esxi_host_name=self.esxi_host_name) if not self.hosts: self.module.fail_json(msg="Failed to find host system(s).") self.network_type = self.params.get('type') self.vmkernel_device = self.params.get('device') self.host_name = self.params.get('host_name') self.domain = self.params.get('domain') self.dns_servers = self.params.get('dns_servers') self.search_domains = self.params.get('search_domains') def ensure(self): """Function to manage DNS configuration of an ESXi host system""" results = dict(changed=False, dns_config_result=dict()) verbose = self.module.params.get('verbose', False) host_change_list = [] for host in self.hosts: changed = False changed_list = [] results['dns_config_result'][host.name] = dict(changed='', msg='') host_netstack_config = host.config.network.netStackInstance for instance in host_netstack_config: if instance.key == 'defaultTcpipStack': netstack_spec = vim.host.NetworkConfig.NetStackSpec() netstack_spec.operation = 'edit' netstack_spec.netStackInstance = vim.host.NetStackInstance() netstack_spec.netStackInstance.key = 'defaultTcpipStack' dns_config = vim.host.DnsConfig() results['dns_config_result'][host.name]['dns_config'] = self.network_type if self.network_type == 'static': if instance.dnsConfig.dhcp: results['dns_config_result'][host.name]['domain'] = self.domain results['dns_config_result'][host.name]['dns_servers'] = self.dns_servers results['dns_config_result'][host.name]['search_domains'] = self.search_domains results['dns_config_result'][host.name]['dns_config_previous'] = 'DHCP' changed = True changed_list.append("DNS configuration") dns_config.dhcp = False dns_config.virtualNicDevice = None if self.host_name: dns_config.hostName = self.host_name else: dns_config.hostName = instance.dnsConfig.hostName dns_config.domainName = self.domain dns_config.address = self.dns_servers dns_config.searchDomain = self.search_domains else: 
results['dns_config_result'][host.name]['host_name'] = self.host_name # Check host name if self.host_name: if instance.dnsConfig.hostName != self.host_name: results['dns_config_result'][host.name]['host_name_previous'] = instance.dnsConfig.hostName changed = True changed_list.append("Host name") dns_config.hostName = self.host_name else: dns_config.hostName = instance.dnsConfig.hostName # Check domain results['dns_config_result'][host.name]['domain'] = self.domain if self.domain: if instance.dnsConfig.domainName != self.domain: results['dns_config_result'][host.name]['domain_previous'] = instance.dnsConfig.domainName changed = True changed_list.append("Domain") dns_config.domainName = self.domain else: dns_config.domainName = instance.dnsConfig.domainName # Check DNS server(s) results['dns_config_result'][host.name]['dns_servers'] = self.dns_servers if self.dns_servers: if instance.dnsConfig.address != self.dns_servers: results['dns_config_result'][host.name]['dns_servers_previous'] = instance.dnsConfig.address results['dns_config_result'][host.name]['dns_servers_changed'] = ( self.get_differt_entries(instance.dnsConfig.address, self.dns_servers) ) changed = True # build verbose message if verbose: dns_servers_verbose_message = self.build_changed_message( instance.dnsConfig.address, self.dns_servers ) else: changed_list.append("DNS servers") dns_config.address = self.dns_servers else: dns_config.address = instance.dnsConfig.address # Check search domain config results['dns_config_result'][host.name]['search_domains'] = self.search_domains if self.search_domains: if instance.dnsConfig.searchDomain != self.search_domains: results['dns_config_result'][host.name]['search_domains_previous'] = instance.dnsConfig.searchDomain results['dns_config_result'][host.name]['search_domains_changed'] = ( self.get_differt_entries(instance.dnsConfig.searchDomain, self.search_domains) ) changed = True changed_list.append("Search domains") dns_config.searchDomain = self.search_domains 
else: dns_config.searchDomain = instance.dnsConfig.searchDomain elif self.network_type == 'dhcp' and not instance.dnsConfig.dhcp: results['dns_config_result'][host.name]['device'] = self.vmkernel_device results['dns_config_result'][host.name]['dns_config_previous'] = 'static' changed = True changed_list.append("DNS configuration") dns_config.dhcp = True dns_config.virtualNicDevice = self.vmkernel_device netstack_spec.netStackInstance.dnsConfig = dns_config config = vim.host.NetworkConfig() config.netStackSpec = [netstack_spec] if changed: if self.module.check_mode: changed_suffix = ' would be changed' else: changed_suffix = ' changed' if len(changed_list) > 2: message = ', '.join(changed_list[:-1]) + ', and ' + str(changed_list[-1]) elif len(changed_list) == 2: message = ' and '.join(changed_list) elif len(changed_list) == 1: message = changed_list[0] if verbose and dns_servers_verbose_message: if changed_list: message = message + changed_suffix + '. ' + dns_servers_verbose_message + '.' 
else: message = dns_servers_verbose_message else: message += changed_suffix results['dns_config_result'][host.name]['changed'] = True host_network_system = host.configManager.networkSystem if not self.module.check_mode: try: host_network_system.UpdateNetworkConfig(config, 'modify') except vim.fault.AlreadyExists: self.module.fail_json( msg="Network entity specified in the configuration already exist on host '%s'" % host.name ) except vim.fault.NotFound: self.module.fail_json( msg="Network entity specified in the configuration doesn't exist on host '%s'" % host.name ) except vim.fault.ResourceInUse: self.module.fail_json(msg="Resource is in use on host '%s'" % host.name) except vmodl.fault.InvalidArgument: self.module.fail_json( msg="An invalid parameter is passed in for one of the networking objects for host '%s'" % host.name ) except vmodl.fault.NotSupported as not_supported: self.module.fail_json( msg="Operation isn't supported for the instance on '%s' : %s" % (host.name, to_native(not_supported.msg)) ) except vim.fault.HostConfigFault as config_fault: self.module.fail_json( msg="Failed to configure TCP/IP stacks for host '%s' due to : %s" % (host.name, to_native(config_fault.msg)) ) else: results['dns_config_result'][host.name]['changed'] = False message = 'All settings are already configured' results['dns_config_result'][host.name]['msg'] = message host_change_list.append(changed) if any(host_change_list): results['changed'] = True self.module.exit_json(**results) def build_changed_message(self, dns_servers_configured, dns_servers_new): """Build changed message""" check_mode = 'would be ' if self.module.check_mode else '' # get differences add = self.get_not_in_list_one(dns_servers_new, dns_servers_configured) remove = self.get_not_in_list_one(dns_servers_configured, dns_servers_new) diff_servers = list(dns_servers_configured) if add and remove: for server in add: diff_servers.append(server) for server in remove: diff_servers.remove(server) if dns_servers_new 
!= diff_servers: message = ( "DNS server %s %sadded and %s %sremoved and the server sequence %schanged as well" % (self.array_to_string(add), check_mode, self.array_to_string(remove), check_mode, check_mode) ) else: if dns_servers_new != dns_servers_configured: message = ( "DNS server %s %sreplaced with %s" % (self.array_to_string(remove), check_mode, self.array_to_string(add)) ) else: message = ( "DNS server %s %sremoved and %s %sadded" % (self.array_to_string(remove), check_mode, self.array_to_string(add), check_mode) ) elif add: for server in add: diff_servers.append(server) if dns_servers_new != diff_servers: message = ( "DNS server %s %sadded and the server sequence %schanged as well" % (self.array_to_string(add), check_mode, check_mode) ) else: message = "DNS server %s %sadded" % (self.array_to_string(add), check_mode) elif remove: for server in remove: diff_servers.remove(server) if dns_servers_new != diff_servers: message = ( "DNS server %s %sremoved and the server sequence %schanged as well" % (self.array_to_string(remove), check_mode, check_mode) ) else: message = "DNS server %s %sremoved" % (self.array_to_string(remove), check_mode) else: message = "DNS server sequence %schanged" % check_mode return message @staticmethod def get_not_in_list_one(list1, list2): """Return entries that ore not in list one""" return [x for x in list1 if x not in set(list2)] @staticmethod def array_to_string(array): """Return string from array""" if len(array) > 2: string = ( ', '.join("'{0}'".format(element) for element in array[:-1]) + ', and ' + "'{0}'".format(str(array[-1])) ) elif len(array) == 2: string = ' and '.join("'{0}'".format(element) for element in array) elif len(array) == 1: string = "'{0}'".format(array[0]) return string @staticmethod def get_differt_entries(list1, list2): """Return different entries of two lists""" return [a for a in list1 + list2 if (a not in list1) or (a not in list2)] def main(): """Main""" argument_spec = vmware_argument_spec() 
argument_spec.update( type=dict(required=True, type='str', choices=['dhcp', 'static']), device=dict(type='str'), host_name=dict(required=False, type='str'), domain=dict(required=False, type='str'), dns_servers=dict(required=False, type='list'), search_domains=dict(required=False, type='list'), esxi_hostname=dict(required=False, type='str'), cluster_name=dict(required=False, type='str'), verbose=dict(type='bool', default=False, required=False) ) module = AnsibleModule( argument_spec=argument_spec, required_if=[ ['type', 'dhcp', ['device']], ], mutually_exclusive=[ ['cluster_name', 'host_name'], ['cluster_name', 'esxi_host_name'], ['static', 'device'], ['dhcp', 'host_name'], ['dhcp', 'domain'], ['dhcp', 'dns_servers'], ['dhcp', 'search_domains'], ], supports_check_mode=True ) dns = VmwareHostDNS(module) dns.ensure() if __name__ == '__main__': main()
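The changed-DNS-server reporting in `vmware_host_dns` hinges on two small list helpers, `get_not_in_list_one` and `get_differt_entries`. Copied out standalone here to show their behaviour (the sample addresses are made up):

```python
def get_not_in_list_one(list1, list2):
    """Entries of list1 that are absent from list2 (order preserved)."""
    return [x for x in list1 if x not in set(list2)]

def get_differt_entries(list1, list2):
    """Symmetric difference of the two lists, in concatenated order."""
    return [a for a in list1 + list2 if (a not in list1) or (a not in list2)]
```

For a configured list `['192.168.1.10', '192.168.1.11', '192.168.1.12']` and a desired list `['192.168.1.10', '192.168.1.13']`, `get_differt_entries` reports `['192.168.1.11', '192.168.1.12', '192.168.1.13']`, which is what ends up in `dns_servers_changed`.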
closed
ansible/ansible
https://github.com/ansible/ansible
59,164
Proxmox version detection is broken in proxmox 6
##### SUMMARY
An hour ago I upgraded my Proxmox to the new stable version 6. Since then my playbooks fail with this error:

`fatal: [proxmox]: FAILED! => {"changed": false, "msg": "authorization on proxmox cluster failed with exception: could not convert string to float: '6.0-4'"}`

It seems like Proxmox changed the API.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
proxmox (https://github.com/mrdrogdrog/ansible/blob/6430205d396b7c1733de22a898c51823f67d5bf4/lib/ansible/modules/cloud/misc/proxmox.py#L484)

##### ANSIBLE VERSION
```
ansible 2.8.2
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/tilman/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.7.3 (default, Jun 24 2019, 04:54:02) [GCC 9.1.0]
```

##### OS / ENVIRONMENT
Proxmox 6

##### STEPS TO REPRODUCE
- Use the latest ansible version
- Use the latest stable version of proxmox (6)
- Use the proxmox module to do anything. E.g. create a container
- Get the error

##### EXPECTED RESULTS
Normal runthrough

##### ACTUAL RESULTS
Crash with an error
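The crash comes from calling `float()` on a PVE 6 version string: `float('6.0-4')` raises `ValueError` because of the `-4` package-release suffix. One hedged way to compare versions without `float()` is to extract just the numeric major.minor pair — this is an illustration of the problem, not necessarily the fix that was merged:

```python
import re

def proxmox_major_minor(version_string):
    # '6.0-4' -> (6, 0); '4.2' -> (4, 2)
    match = re.match(r'(\d+)\.(\d+)', version_string)
    if match is None:
        raise ValueError("unrecognised version: {0}".format(version_string))
    return int(match.group(1)), int(match.group(2))
```

With this, `proxmox_major_minor('6.0-4') >= (4, 2)` evaluates cleanly where `float('6.0-4')` would crash.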
https://github.com/ansible/ansible/issues/59164
https://github.com/ansible/ansible/pull/59165
0407af936a093c9e3c9feb098bf21e13f69abd7e
38193f6b60caa2e3725cb987376a80074821a950
2019-07-16T21:48:58Z
python
2019-11-29T17:16:40Z
changelogs/fragments/proxmox-6-version-detection.yaml
closed
ansible/ansible
https://github.com/ansible/ansible
59,164
Proxmox version detection is broken in proxmox 6
##### SUMMARY
An hour ago I upgraded my Proxmox to the new stable version 6. Since then my playbooks fail with this error:

`fatal: [proxmox]: FAILED! => {"changed": false, "msg": "authorization on proxmox cluster failed with exception: could not convert string to float: '6.0-4'"}`

It seems like Proxmox changed the API.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
proxmox (https://github.com/mrdrogdrog/ansible/blob/6430205d396b7c1733de22a898c51823f67d5bf4/lib/ansible/modules/cloud/misc/proxmox.py#L484)

##### ANSIBLE VERSION
```
ansible 2.8.2
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/tilman/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.7.3 (default, Jun 24 2019, 04:54:02) [GCC 9.1.0]
```

##### OS / ENVIRONMENT
Proxmox 6

##### STEPS TO REPRODUCE
- Use the latest ansible version
- Use the latest stable version of proxmox (6)
- Use the proxmox module to do anything. E.g. create a container
- Get the error

##### EXPECTED RESULTS
Normal runthrough

##### ACTUAL RESULTS
Crash with an error
https://github.com/ansible/ansible/issues/59164
https://github.com/ansible/ansible/pull/59165
0407af936a093c9e3c9feb098bf21e13f69abd7e
38193f6b60caa2e3725cb987376a80074821a950
2019-07-16T21:48:58Z
python
2019-11-29T17:16:40Z
lib/ansible/modules/cloud/misc/proxmox.py
#!/usr/bin/python # Copyright: Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} DOCUMENTATION = ''' --- module: proxmox short_description: management of instances in Proxmox VE cluster description: - allows you to create/delete/stop instances in Proxmox VE cluster - Starting in Ansible 2.1, it automatically detects containerization type (lxc for PVE 4, openvz for older) version_added: "2.0" options: api_host: description: - the host of the Proxmox VE cluster required: true api_user: description: - the user to authenticate with required: true api_password: description: - the password to authenticate with - you can use PROXMOX_PASSWORD environment variable vmid: description: - the instance id - if not set, the next available VM ID will be fetched from ProxmoxAPI. - if not set, will be fetched from PromoxAPI based on the hostname validate_certs: description: - enable / disable https certificate verification type: bool default: 'no' node: description: - Proxmox VE node, when new VM will be created - required only for C(state=present) - for another states will be autodiscovered pool: description: - Proxmox VE resource pool version_added: "2.3" password: description: - the instance root password - required only for C(state=present) hostname: description: - the instance hostname - required only for C(state=present) - must be unique if vmid is not passed ostemplate: description: - the template for VM creating - required only for C(state=present) disk: description: - hard disk size in GB for instance default: 3 cores: description: - Specify number of cores per socket. 
default: 1 version_added: 2.4 cpus: description: - numbers of allocated cpus for instance default: 1 memory: description: - memory size in MB for instance default: 512 swap: description: - swap memory size in MB for instance default: 0 netif: description: - specifies network interfaces for the container. As a hash/dictionary defining interfaces. mounts: description: - specifies additional mounts (separate disks) for the container. As a hash/dictionary defining mount points version_added: "2.2" ip_address: description: - specifies the address the container will be assigned onboot: description: - specifies whether a VM will be started during system bootup type: bool default: 'no' storage: description: - target storage default: 'local' cpuunits: description: - CPU weight for a VM default: 1000 nameserver: description: - sets DNS server IP address for a container searchdomain: description: - sets DNS search domain for a container timeout: description: - timeout for operations default: 30 force: description: - forcing operations - can be used only with states C(present), C(stopped), C(restarted) - with C(state=present) force option allow to overwrite existing container - with states C(stopped) , C(restarted) allow to force stop instance type: bool default: 'no' state: description: - Indicate desired state of the instance choices: ['present', 'started', 'absent', 'stopped', 'restarted'] default: present pubkey: description: - Public key to add to /root/.ssh/authorized_keys. This was added on Proxmox 4.2, it is ignored for earlier versions version_added: "2.3" unprivileged: version_added: "2.3" description: - Indicate if the container should be unprivileged type: bool default: 'no' notes: - Requires proxmoxer and requests modules on host. This modules can be installed with pip. 
requirements: [ "proxmoxer", "python >= 2.7", "requests" ] author: Sergei Antipov (@UnderGreen) ''' EXAMPLES = ''' # Create new container with minimal options - proxmox: vmid: 100 node: uk-mc02 api_user: root@pam api_password: 1q2w3e api_host: node1 password: 123456 hostname: example.org ostemplate: 'local:vztmpl/ubuntu-14.04-x86_64.tar.gz' # Create new container automatically selecting the next available vmid. - proxmox: node: 'uk-mc02' api_user: 'root@pam' api_password: '1q2w3e' api_host: 'node1' password: '123456' hostname: 'example.org' ostemplate: 'local:vztmpl/ubuntu-14.04-x86_64.tar.gz' # Create new container with minimal options with force(it will rewrite existing container) - proxmox: vmid: 100 node: uk-mc02 api_user: root@pam api_password: 1q2w3e api_host: node1 password: 123456 hostname: example.org ostemplate: 'local:vztmpl/ubuntu-14.04-x86_64.tar.gz' force: yes # Create new container with minimal options use environment PROXMOX_PASSWORD variable(you should export it before) - proxmox: vmid: 100 node: uk-mc02 api_user: root@pam api_host: node1 password: 123456 hostname: example.org ostemplate: 'local:vztmpl/ubuntu-14.04-x86_64.tar.gz' # Create new container with minimal options defining network interface with dhcp - proxmox: vmid: 100 node: uk-mc02 api_user: root@pam api_password: 1q2w3e api_host: node1 password: 123456 hostname: example.org ostemplate: 'local:vztmpl/ubuntu-14.04-x86_64.tar.gz' netif: '{"net0":"name=eth0,ip=dhcp,ip6=dhcp,bridge=vmbr0"}' # Create new container with minimal options defining network interface with static ip - proxmox: vmid: 100 node: uk-mc02 api_user: root@pam api_password: 1q2w3e api_host: node1 password: 123456 hostname: example.org ostemplate: 'local:vztmpl/ubuntu-14.04-x86_64.tar.gz' netif: '{"net0":"name=eth0,gw=192.168.0.1,ip=192.168.0.2/24,bridge=vmbr0"}' # Create new container with minimal options defining a mount with 8GB - proxmox: vmid: 100 node: uk-mc02 api_user: root@pam api_password: 1q2w3e api_host: node1 
password: 123456 hostname: example.org ostemplate: 'local:vztmpl/ubuntu-14.04-x86_64.tar.gz' mounts: '{"mp0":"local:8,mp=/mnt/test/"}' # Create new container with minimal options defining a cpu core limit - proxmox: vmid: 100 node: uk-mc02 api_user: root@pam api_password: 1q2w3e api_host: node1 password: 123456 hostname: example.org ostemplate: 'local:vztmpl/ubuntu-14.04-x86_64.tar.gz' cores: 2 # Start container - proxmox: vmid: 100 api_user: root@pam api_password: 1q2w3e api_host: node1 state: started # Start container with mount. You should enter a 90-second timeout because servers with additional disks take longer to boot. - proxmox: vmid: 100 api_user: root@pam api_password: 1q2w3e api_host: node1 state: started timeout: 90 # Stop container - proxmox: vmid: 100 api_user: root@pam api_password: 1q2w3e api_host: node1 state: stopped # Stop container with force - proxmox: vmid: 100 api_user: root@pam api_password: 1q2w3e api_host: node1 force: yes state: stopped # Restart container (a stopped or mounted container cannot be restarted) - proxmox: vmid: 100 api_user: root@pam api_password: 1q2w3e api_host: node1 state: restarted # Remove container - proxmox: vmid: 100 api_user: root@pam api_password: 1q2w3e api_host: node1 state: absent ''' import os import time import traceback try: from proxmoxer import ProxmoxAPI HAS_PROXMOXER = True except ImportError: HAS_PROXMOXER = False from ansible.module_utils.basic import AnsibleModule from ansible.module_utils._text import to_native VZ_TYPE = None def get_nextvmid(module, proxmox): try: vmid = proxmox.cluster.nextid.get() return vmid except Exception as e: module.fail_json(msg="Unable to get next vmid.
Failed with exception: %s" % to_native(e), exception=traceback.format_exc()) def get_vmid(proxmox, hostname): return [vm['vmid'] for vm in proxmox.cluster.resources.get(type='vm') if 'name' in vm and vm['name'] == hostname] def get_instance(proxmox, vmid): return [vm for vm in proxmox.cluster.resources.get(type='vm') if vm['vmid'] == int(vmid)] def content_check(proxmox, node, ostemplate, template_store): return [True for cnt in proxmox.nodes(node).storage(template_store).content.get() if cnt['volid'] == ostemplate] def node_check(proxmox, node): return [True for nd in proxmox.nodes.get() if nd['node'] == node] def create_instance(module, proxmox, vmid, node, disk, storage, cpus, memory, swap, timeout, **kwargs): proxmox_node = proxmox.nodes(node) kwargs = dict((k, v) for k, v in kwargs.items() if v is not None) if VZ_TYPE == 'lxc': kwargs['cpulimit'] = cpus kwargs['rootfs'] = disk if 'netif' in kwargs: kwargs.update(kwargs['netif']) del kwargs['netif'] if 'mounts' in kwargs: kwargs.update(kwargs['mounts']) del kwargs['mounts'] if 'pubkey' in kwargs: if float(proxmox.version.get()['version']) >= 4.2: kwargs['ssh-public-keys'] = kwargs['pubkey'] del kwargs['pubkey'] else: kwargs['cpus'] = cpus kwargs['disk'] = disk taskid = getattr(proxmox_node, VZ_TYPE).create(vmid=vmid, storage=storage, memory=memory, swap=swap, **kwargs) while timeout: if (proxmox_node.tasks(taskid).status.get()['status'] == 'stopped' and proxmox_node.tasks(taskid).status.get()['exitstatus'] == 'OK'): return True timeout -= 1 if timeout == 0: module.fail_json(msg='Reached timeout while waiting for creating VM. 
Last line in task before timeout: %s' % proxmox_node.tasks(taskid).log.get()[:1]) time.sleep(1) return False def start_instance(module, proxmox, vm, vmid, timeout): taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.start.post() while timeout: if (proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped' and proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK'): return True timeout -= 1 if timeout == 0: module.fail_json(msg='Reached timeout while waiting for starting VM. Last line in task before timeout: %s' % proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1]) time.sleep(1) return False def stop_instance(module, proxmox, vm, vmid, timeout, force): if force: taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.shutdown.post(forceStop=1) else: taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.shutdown.post() while timeout: if (proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped' and proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK'): return True timeout -= 1 if timeout == 0: module.fail_json(msg='Reached timeout while waiting for stopping VM. Last line in task before timeout: %s' % proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1]) time.sleep(1) return False def umount_instance(module, proxmox, vm, vmid, timeout): taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.umount.post() while timeout: if (proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped' and proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK'): return True timeout -= 1 if timeout == 0: module.fail_json(msg='Reached timeout while waiting for unmounting VM. 
Last line in task before timeout: %s' % proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1]) time.sleep(1) return False def main(): module = AnsibleModule( argument_spec=dict( api_host=dict(required=True), api_user=dict(required=True), api_password=dict(no_log=True), vmid=dict(required=False), validate_certs=dict(type='bool', default='no'), node=dict(), pool=dict(), password=dict(no_log=True), hostname=dict(), ostemplate=dict(), disk=dict(type='str', default='3'), cores=dict(type='int', default=1), cpus=dict(type='int', default=1), memory=dict(type='int', default=512), swap=dict(type='int', default=0), netif=dict(type='dict'), mounts=dict(type='dict'), ip_address=dict(), onboot=dict(type='bool', default='no'), storage=dict(default='local'), cpuunits=dict(type='int', default=1000), nameserver=dict(), searchdomain=dict(), timeout=dict(type='int', default=30), force=dict(type='bool', default='no'), state=dict(default='present', choices=['present', 'absent', 'stopped', 'started', 'restarted']), pubkey=dict(type='str', default=None), unprivileged=dict(type='bool', default='no') ) ) if not HAS_PROXMOXER: module.fail_json(msg='proxmoxer required for this module') state = module.params['state'] api_user = module.params['api_user'] api_host = module.params['api_host'] api_password = module.params['api_password'] vmid = module.params['vmid'] validate_certs = module.params['validate_certs'] node = module.params['node'] disk = module.params['disk'] cpus = module.params['cpus'] memory = module.params['memory'] swap = module.params['swap'] storage = module.params['storage'] hostname = module.params['hostname'] if module.params['ostemplate'] is not None: template_store = module.params['ostemplate'].split(":")[0] timeout = module.params['timeout'] # If password not set get it from PROXMOX_PASSWORD env if not api_password: try: api_password = os.environ['PROXMOX_PASSWORD'] except KeyError as e: module.fail_json(msg='You should set api_password param or use PROXMOX_PASSWORD 
environment variable') try: proxmox = ProxmoxAPI(api_host, user=api_user, password=api_password, verify_ssl=validate_certs) global VZ_TYPE VZ_TYPE = 'openvz' if float(proxmox.version.get()['version']) < 4.0 else 'lxc' except Exception as e: module.fail_json(msg='authorization on proxmox cluster failed with exception: %s' % e) # If vmid not set get the Next VM id from ProxmoxAPI # If hostname is set get the VM id from ProxmoxAPI if not vmid and state == 'present': vmid = get_nextvmid(module, proxmox) elif not vmid and hostname: hosts = get_vmid(proxmox, hostname) if len(hosts) == 0: module.fail_json(msg="Vmid could not be fetched => Hostname doesn't exist (action: %s)" % state) vmid = hosts[0] elif not vmid: module.exit_json(changed=False, msg="Vmid could not be fetched for the following action: %s" % state) if state == 'present': try: if get_instance(proxmox, vmid) and not module.params['force']: module.exit_json(changed=False, msg="VM with vmid = %s already exists" % vmid) # If no vmid was passed, there cannot be another VM named 'hostname' if not module.params['vmid'] and get_vmid(proxmox, hostname) and not module.params['force']: module.exit_json(changed=False, msg="VM with hostname %s already exists and has ID number %s" % (hostname, get_vmid(proxmox, hostname)[0])) elif not (node and module.params['hostname'] and module.params['password'] and module.params['ostemplate']): module.fail_json(msg='node, hostname, password and ostemplate are mandatory for creating vm') elif not node_check(proxmox, node): module.fail_json(msg="node '%s' does not exist in cluster" % node) elif not content_check(proxmox, node, module.params['ostemplate'], template_store): module.fail_json(msg="ostemplate '%s' does not exist on node %s and storage %s" % (module.params['ostemplate'], node, template_store)) create_instance(module, proxmox, vmid, node, disk, storage, cpus, memory, swap, timeout, cores=module.params['cores'], pool=module.params['pool'], password=module.params['password'],
hostname=module.params['hostname'], ostemplate=module.params['ostemplate'], netif=module.params['netif'], mounts=module.params['mounts'], ip_address=module.params['ip_address'], onboot=int(module.params['onboot']), cpuunits=module.params['cpuunits'], nameserver=module.params['nameserver'], searchdomain=module.params['searchdomain'], force=int(module.params['force']), pubkey=module.params['pubkey'], unprivileged=int(module.params['unprivileged'])) module.exit_json(changed=True, msg="deployed VM %s from template %s" % (vmid, module.params['ostemplate'])) except Exception as e: module.fail_json(msg="creation of %s VM %s failed with exception: %s" % (VZ_TYPE, vmid, e)) elif state == 'started': try: vm = get_instance(proxmox, vmid) if not vm: module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid) if getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'running': module.exit_json(changed=False, msg="VM %s is already running" % vmid) if start_instance(module, proxmox, vm, vmid, timeout): module.exit_json(changed=True, msg="VM %s started" % vmid) except Exception as e: module.fail_json(msg="starting of VM %s failed with exception: %s" % (vmid, e)) elif state == 'stopped': try: vm = get_instance(proxmox, vmid) if not vm: module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid) if getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'mounted': if module.params['force']: if umount_instance(module, proxmox, vm, vmid, timeout): module.exit_json(changed=True, msg="VM %s is shutting down" % vmid) else: module.exit_json(changed=False, msg=("VM %s is already shutdown, but mounted.
" "You can use force option to umount it.") % vmid) if getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'stopped': module.exit_json(changed=False, msg="VM %s is already shutdown" % vmid) if stop_instance(module, proxmox, vm, vmid, timeout, force=module.params['force']): module.exit_json(changed=True, msg="VM %s is shutting down" % vmid) except Exception as e: module.fail_json(msg="stopping of VM %s failed with exception: %s" % (vmid, e)) elif state == 'restarted': try: vm = get_instance(proxmox, vmid) if not vm: module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid) if (getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'stopped' or getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'mounted'): module.exit_json(changed=False, msg="VM %s is not running" % vmid) if (stop_instance(module, proxmox, vm, vmid, timeout, force=module.params['force']) and start_instance(module, proxmox, vm, vmid, timeout)): module.exit_json(changed=True, msg="VM %s is restarted" % vmid) except Exception as e: module.fail_json(msg="restarting of VM %s failed with exception: %s" % (vmid, e)) elif state == 'absent': try: vm = get_instance(proxmox, vmid) if not vm: module.exit_json(changed=False, msg="VM %s does not exist" % vmid) if getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'running': module.exit_json(changed=False, msg="VM %s is running. Stop it before deletion." % vmid) if getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'mounted': module.exit_json(changed=False, msg="VM %s is mounted. Stop it with force option before deletion."
% vmid) taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE).delete(vmid) while timeout: if (proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped' and proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK'): module.exit_json(changed=True, msg="VM %s removed" % vmid) timeout -= 1 if timeout == 0: module.fail_json(msg='Reached timeout while waiting for removing VM. Last line in task before timeout: %s' % proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1]) time.sleep(1) except Exception as e: module.fail_json(msg="deletion of VM %s failed with exception: %s" % (vmid, to_native(e))) if __name__ == '__main__': main()
closed
ansible/ansible
https://github.com/ansible/ansible
59,164
Proxmox version detection is broken in proxmox 6
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY An hour ago I upgraded my Proxmox to the new stable version 6. Since then my playbooks fail with this error: `fatal: [proxmox]: FAILED! => {"changed": false, "msg": "authorization on proxmox cluster failed with exception: could not convert string to float: '6.0-4'"}` It seems like the proxmox changed the API. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME proxmox (https://github.com/mrdrogdrog/ansible/blob/6430205d396b7c1733de22a898c51823f67d5bf4/lib/ansible/modules/cloud/misc/proxmox.py#L484) ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` ansible 2.8.2 config file = /etc/ansible/ansible.cfg configured module search path = ['/home/tilman/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3.7/site-packages/ansible executable location = /usr/bin/ansible python version = 3.7.3 (default, Jun 24 2019, 04:54:02) [GCC 9.1.0] ``` ##### OS / ENVIRONMENT Proxmox 6 ##### STEPS TO REPRODUCE - Use the latest ansible version - Use the latest stable version of proxmox (6) - Use the proxmox module to do anything. E.g. create a container - Get the error ##### EXPECTED RESULTS Normal runthrough ##### ACTUAL RESULTS Crash with an error
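The failure in the report boils down to Python's `float()` rejecting the `major.minor-patch` string that Proxmox 6 returns (e.g. `'6.0-4'`). Below is a minimal standalone sketch reproducing the error and one tolerant way to extract the major version; the helper name `proxmox_major_version` is illustrative only, not the fix that was actually merged in the linked PR:

```python
# Minimal sketch (assumption: not the actual Ansible fix) showing why the
# reported parse fails and how to tolerate Proxmox's "6.0-4"-style strings.

def proxmox_major_version(version_string):
    # Keep only the leading numeric component: "6.0-4" -> "6" -> 6
    return int(version_string.split('.')[0].split('-')[0])

# float() chokes on the PVE 6 format, which is the reported traceback:
try:
    float('6.0-4')
    broken = False
except ValueError:
    broken = True

print(broken)                          # True: the bug from the issue
print(proxmox_major_version('6.0-4'))  # 6
print(proxmox_major_version('4.0'))    # 4
```

Comparing only the integer major version sidesteps the packaging suffix entirely, so the `openvz` vs. `lxc` decision keeps working on newer releases.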
https://github.com/ansible/ansible/issues/59164
https://github.com/ansible/ansible/pull/59165
0407af936a093c9e3c9feb098bf21e13f69abd7e
38193f6b60caa2e3725cb987376a80074821a950
2019-07-16T21:48:58Z
python
2019-11-29T17:16:40Z
lib/ansible/modules/cloud/misc/proxmox_kvm.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright: (c) 2016, Abdoul Bah (@helldorado) <bahabdoul at gmail.com> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} DOCUMENTATION = r''' --- module: proxmox_kvm short_description: Management of Qemu(KVM) Virtual Machines in Proxmox VE cluster. description: - Allows you to create/delete/stop Qemu(KVM) Virtual Machines in Proxmox VE cluster. version_added: "2.3" author: "Abdoul Bah (@helldorado) <bahabdoul at gmail.com>" options: acpi: description: - Specify if ACPI should be enabled/disabled. type: bool default: 'yes' agent: description: - Specify if the QEMU Guest Agent should be enabled/disabled. type: bool args: description: - Pass arbitrary arguments to kvm. - This option is for experts only! default: "-serial unix:/var/run/qemu-server/VMID.serial,server,nowait" api_host: description: - Specify the target host of the Proxmox VE cluster. required: true api_user: description: - Specify the user to authenticate with. required: true api_password: description: - Specify the password to authenticate with. - You can use C(PROXMOX_PASSWORD) environment variable. autostart: description: - Specify if the VM should be automatically restarted after crash (currently ignored in PVE API). type: bool default: 'no' balloon: description: - Specify the amount of RAM for the VM in MB. - Using zero disables the balloon driver. default: 0 bios: description: - Specify the BIOS implementation. choices: ['seabios', 'ovmf'] boot: description: - Specify the boot order -> boot on floppy C(a), hard disk C(c), CD-ROM C(d), or network C(n). - You can combine to set order. default: cnd bootdisk: description: - Enable booting from specified disk. C((ide|sata|scsi|virtio)\d+) clone: description: - Name of VM to be cloned. 
If C(vmid) is set, C(clone) can take an arbitrary value but is required for initiating the clone. cores: description: - Specify number of cores per socket. default: 1 cpu: description: - Specify emulated CPU type. default: kvm64 cpulimit: description: - Specify if CPU usage will be limited. Value 0 indicates no CPU limit. - If the computer has 2 CPUs, it has a total of '2' CPU time cpuunits: description: - Specify CPU weight for a VM. - You can disable fair-scheduler configuration by setting this to 0 default: 1000 delete: description: - Specify a list of settings you want to delete. description: description: - Specify the description for the VM. Only used on the configuration web interface. - This is saved as comment inside the configuration file. digest: description: - Specify whether to prevent changes if the current configuration file has a different SHA1 digest. - This can be used to prevent concurrent modifications. force: description: - Allow force-stopping the VM. - Can be used only with states C(stopped), C(restarted). type: bool format: description: - Target drive's backing file's data format. - Used only with clone default: qcow2 choices: [ "cloop", "cow", "qcow", "qcow2", "qed", "raw", "vmdk" ] freeze: description: - Specify if PVE should freeze CPU at startup (use 'c' monitor command to start execution). type: bool full: description: - Create a full copy of all disks. This is always done when you clone a normal VM. - For VM templates, we try to create a linked clone by default. - Used only with clone type: bool default: 'yes' hostpci: description: - Specify a hash/dictionary to map host PCI devices into the guest. C(hostpci='{"key":"value", "key":"value"}'). - Keys allowed are - C(hostpci[n]) where 0 ≀ n ≀ N. - Values allowed are - C("host="HOSTPCIID[;HOSTPCIID2...]",pcie="1|0",rombar="1|0",x-vga="1|0""). - The C(host) parameter is Host PCI device pass through. HOSTPCIID syntax is C(bus:dev.func) (hexadecimal numbers).
- C(pcie=boolean) I(default=0) Choose the PCI-express bus (needs the q35 machine model). - C(rombar=boolean) I(default=1) Specify whether or not the device's ROM will be visible in the guest's memory map. - C(x-vga=boolean) I(default=0) Enable vfio-vga device support. - /!\ This option allows direct access to host hardware. So it is no longer possible to migrate such machines - use with special care. hotplug: description: - Selectively enable hotplug features. - This is a comma separated list of hotplug features C('network', 'disk', 'cpu', 'memory' and 'usb'). - Value 0 disables hotplug completely and value 1 is an alias for the default C('network,disk,usb'). hugepages: description: - Enable/disable hugepages memory. choices: ['any', '2', '1024'] ide: description: - A hash/dictionary of volume used as IDE hard disk or CD-ROM. C(ide='{"key":"value", "key":"value"}'). - Keys allowed are - C(ide[n]) where 0 ≀ n ≀ 3. - Values allowed are - C("storage:size,format=value"). - C(storage) is the storage identifier where to create the disk. - C(size) is the size of the disk in GB. - C(format) is the drive's backing file's data format. C(qcow2|raw|subvol). keyboard: description: - Sets the keyboard layout for VNC server. kvm: description: - Enable/disable KVM hardware virtualization. type: bool default: 'yes' localtime: description: - Sets the real time clock to local time. - This is enabled by default if ostype indicates a Microsoft OS. type: bool lock: description: - Lock/unlock the VM. choices: ['migrate', 'backup', 'snapshot', 'rollback'] machine: description: - Specifies the Qemu machine type. - type => C((pc|pc(-i440fx)?-\d+\.\d+(\.pxe)?|q35|pc-q35-\d+\.\d+(\.pxe)?)) memory: description: - Memory size in MB for instance. default: 512 migrate_downtime: description: - Sets maximum tolerated downtime (in seconds) for migrations. migrate_speed: description: - Sets maximum speed (in MB/s) for migrations. - A value of 0 is no limit. name: description: - Specifies the VM name. 
Only used on the configuration web interface. - Required only for C(state=present). net: description: - A hash/dictionary of network interfaces for the VM. C(net='{"key":"value", "key":"value"}'). - Keys allowed are - C(net[n]) where 0 ≀ n ≀ N. - Values allowed are - C("model="XX:XX:XX:XX:XX:XX",bridge="value",rate="value",tag="value",firewall="1|0",trunks="vlanid""). - Model is one of C(e1000 e1000-82540em e1000-82544gc e1000-82545em i82551 i82557b i82559er ne2k_isa ne2k_pci pcnet rtl8139 virtio vmxnet3). - C(XX:XX:XX:XX:XX:XX) should be a unique MAC address. This is automatically generated if not specified. - The C(bridge) parameter can be used to automatically add the interface to a bridge device. The Proxmox VE standard bridge is called 'vmbr0'. - Option C(rate) is used to limit traffic bandwidth from and to this interface. It is specified as a floating point number; the unit is 'Megabytes per second'. - If you specify no bridge, we create a kvm 'user' (NATed) network device, which provides DHCP and DNS services. newid: description: - VMID for the clone. Used only with clone. - If newid is not set, the next available VM ID will be fetched from ProxmoxAPI. node: description: - Proxmox VE node, where the new VM will be created. - Only required for C(state=present). - For other states, it will be autodiscovered. numa: description: - A hash/dictionary of NUMA topology. C(numa='{"key":"value", "key":"value"}'). - Keys allowed are - C(numa[n]) where 0 ≀ n ≀ N. - Values allowed are - C("cpu="<id[-id];...>",hostnodes="<id[-id];...>",memory="number",policy="(bind|interleave|preferred)""). - C(cpus) CPUs accessing this NUMA node. - C(hostnodes) Host NUMA nodes to use. - C(memory) Amount of memory this NUMA node provides. - C(policy) NUMA allocation policy. onboot: description: - Specifies whether a VM will be started during system bootup. type: bool default: 'yes' ostype: description: - Specifies guest operating system.
This is used to enable special optimization/features for specific operating systems. - The l26 is Linux 2.6/3.X Kernel. choices: ['other', 'wxp', 'w2k', 'w2k3', 'w2k8', 'wvista', 'win7', 'win8', 'win10', 'l24', 'l26', 'solaris'] default: l26 parallel: description: - A hash/dictionary to map host parallel devices. C(parallel='{"key":"value", "key":"value"}'). - Keys allowed are - C(parallel[n]) where 0 ≀ n ≀ 2. - Values allowed are - C("/dev/parport\d+|/dev/usb/lp\d+"). pool: description: - Add the new VM to the specified pool. protection: description: - Enable/disable the protection flag of the VM. This will enable/disable the remove VM and remove disk operations. type: bool reboot: description: - Allow reboot. If set to C(yes), the VM exits on reboot. type: bool revert: description: - Revert a pending change. sata: description: - A hash/dictionary of volume used as sata hard disk or CD-ROM. C(sata='{"key":"value", "key":"value"}'). - Keys allowed are - C(sata[n]) where 0 ≀ n ≀ 5. - Values allowed are - C("storage:size,format=value"). - C(storage) is the storage identifier where to create the disk. - C(size) is the size of the disk in GB. - C(format) is the drive's backing file's data format. C(qcow2|raw|subvol). scsi: description: - A hash/dictionary of volume used as SCSI hard disk or CD-ROM. C(scsi='{"key":"value", "key":"value"}'). - Keys allowed are - C(scsi[n]) where 0 ≀ n ≀ 13. - Values allowed are - C("storage:size,format=value"). - C(storage) is the storage identifier where to create the disk. - C(size) is the size of the disk in GB. - C(format) is the drive's backing file's data format. C(qcow2|raw|subvol). scsihw: description: - Specifies the SCSI controller model. choices: ['lsi', 'lsi53c810', 'virtio-scsi-pci', 'virtio-scsi-single', 'megasas', 'pvscsi'] serial: description: - A hash/dictionary of serial device to create inside the VM. C('{"key":"value", "key":"value"}'). - Keys allowed are - serial[n](str; required) where 0 ≀ n ≀ 3.
- Values allowed are - C((/dev/.+|socket)). - /!\ If you pass through a host serial device, it is no longer possible to migrate such machines - use with special care. shares: description: - Sets the amount of memory shares for auto-ballooning. (0 - 50000). - The larger the number is, the more memory this VM gets. - The number is relative to weights of all other running VMs. - Using 0 disables auto-ballooning, which means no limit. skiplock: description: - Ignore locks - Only root is allowed to use this option. smbios: description: - Specifies SMBIOS type 1 fields. snapname: description: - The name of the snapshot. Used only with clone. sockets: description: - Sets the number of CPU sockets. (1 - N). default: 1 startdate: description: - Sets the initial date of the real time clock. - Valid formats for date are C('now') or C('2016-09-25T16:01:21') or C('2016-09-25'). startup: description: - Startup and shutdown behavior. C([[order=]\d+] [,up=\d+] [,down=\d+]). - Order is a non-negative number defining the general startup order. - Shutdown is done in reverse order. state: description: - Indicates desired state of the instance. - If C(current), the current state of the VM will be fetched. You can access it with C(results.status) choices: ['present', 'started', 'absent', 'stopped', 'restarted','current'] default: present storage: description: - Target storage for full clone. tablet: description: - Enables/disables the USB tablet device. type: bool default: 'no' target: description: - Target node. Only allowed if the original VM is on shared storage. - Used only with clone tdf: description: - Enables/disables time drift fix. type: bool template: description: - Enables/disables the template. type: bool default: 'no' timeout: description: - Timeout for operations. default: 30 update: description: - If C(yes), the VM will be updated with the new values.
- Because of API operations and for security reasons, updating the following parameters is disabled - C(net, virtio, ide, sata, scsi). For example, updating C(net) changes the MAC address and C(virtio) always creates a new disk... type: bool default: 'no' validate_certs: description: - If C(no), SSL certificates will not be validated. This should only be used on personally controlled sites using self-signed certificates. type: bool default: 'no' vcpus: description: - Sets number of hotplugged vcpus. vga: description: - Select VGA type. If you want to use high resolution modes (>= 1280x1024x16) then you should use option 'std' or 'vmware'. choices: ['std', 'cirrus', 'vmware', 'qxl', 'serial0', 'serial1', 'serial2', 'serial3', 'qxl2', 'qxl3', 'qxl4'] default: std virtio: description: - A hash/dictionary of volume used as VIRTIO hard disk. C(virtio='{"key":"value", "key":"value"}'). - Keys allowed are - C(virtio[n]) where 0 ≀ n ≀ 15. - Values allowed are - C("storage:size,format=value"). - C(storage) is the storage identifier where to create the disk. - C(size) is the size of the disk in GB. - C(format) is the drive's backing file's data format. C(qcow2|raw|subvol). vmid: description: - Specifies the VM ID. Instead use I(name) parameter. - If vmid is not set, the next available VM ID will be fetched from ProxmoxAPI. watchdog: description: - Creates a virtual hardware watchdog device. requirements: [ "proxmoxer", "requests" ] ''' EXAMPLES = ''' # Create new VM with minimal options - proxmox_kvm: api_user : root@pam api_password: secret api_host : helldorado name : spynal node : sabrewulf # Create new VM with minimal options and given vmid - proxmox_kvm: api_user : root@pam api_password: secret api_host : helldorado name : spynal node : sabrewulf vmid : 100 # Create new VM with two network interface options.
- proxmox_kvm: api_user : root@pam api_password: secret api_host : helldorado name : spynal node : sabrewulf net : '{"net0":"virtio,bridge=vmbr1,rate=200", "net1":"e1000,bridge=vmbr2,"}' # Create new VM with one network interface, three virtio hard disks, 4 cores, and 2 vcpus. - proxmox_kvm: api_user : root@pam api_password: secret api_host : helldorado name : spynal node : sabrewulf net : '{"net0":"virtio,bridge=vmbr1,rate=200"}' virtio : '{"virtio0":"VMs_LVM:10", "virtio1":"VMs:2,format=qcow2", "virtio2":"VMs:5,format=raw"}' cores : 4 vcpus : 2 # Clone VM with only source VM name - proxmox_kvm: api_user : root@pam api_password: secret api_host : helldorado clone : spynal # The VM source name : zavala # The target VM name node : sabrewulf storage : VMs format : qcow2 timeout : 500 # Note: The task can take a while. Adapt the timeout if needed. # Clone VM with source vmid and target newid and raw format - proxmox_kvm: api_user : root@pam api_password: secret api_host : helldorado clone : arbitrary_name vmid : 108 newid : 152 name : zavala # The target VM name node : sabrewulf storage : LVM_STO format : raw timeout : 300 # Note: The task can take a while. Adapt the timeout if needed. # Create new VM and lock it for snapshot.
- proxmox_kvm: api_user : root@pam api_password: secret api_host : helldorado name : spynal node : sabrewulf lock : snapshot # Create new VM and set protection to disable the remove VM and remove disk operations - proxmox_kvm: api_user : root@pam api_password: secret api_host : helldorado name : spynal node : sabrewulf protection : yes # Start VM - proxmox_kvm: api_user : root@pam api_password: secret api_host : helldorado name : spynal node : sabrewulf state : started # Stop VM - proxmox_kvm: api_user : root@pam api_password: secret api_host : helldorado name : spynal node : sabrewulf state : stopped # Stop VM with force - proxmox_kvm: api_user : root@pam api_password: secret api_host : helldorado name : spynal node : sabrewulf state : stopped force : yes # Restart VM - proxmox_kvm: api_user : root@pam api_password: secret api_host : helldorado name : spynal node : sabrewulf state : restarted # Remove VM - proxmox_kvm: api_user : root@pam api_password: secret api_host : helldorado name : spynal node : sabrewulf state : absent # Get VM current state - proxmox_kvm: api_user : root@pam api_password: secret api_host : helldorado name : spynal node : sabrewulf state : current # Update VM configuration - proxmox_kvm: api_user : root@pam api_password: secret api_host : helldorado name : spynal node : sabrewulf cores : 8 memory : 16384 update : yes # Delete QEMU parameters - proxmox_kvm: api_user : root@pam api_password: secret api_host : helldorado name : spynal node : sabrewulf delete : 'args,template,cpulimit' # Revert a pending change - proxmox_kvm: api_user : root@pam api_password: secret api_host : helldorado name : spynal node : sabrewulf revert : 'template,cpulimit' ''' RETURN = ''' devices: description: The list of devices created or used. 
returned: success type: dict sample: ' { "ide0": "VMS_LVM:vm-115-disk-1", "ide1": "VMs:115/vm-115-disk-3.raw", "virtio0": "VMS_LVM:vm-115-disk-2", "virtio1": "VMs:115/vm-115-disk-1.qcow2", "virtio2": "VMs:115/vm-115-disk-2.raw" }' mac: description: List of mac address created and net[n] attached. Useful when you want to use provision systems like Foreman via PXE. returned: success type: dict sample: ' { "net0": "3E:6E:97:D2:31:9F", "net1": "B6:A1:FC:EF:78:A4" }' vmid: description: The VM vmid. returned: success type: int sample: 115 status: description: - The current virtual machine status. - Returned only when C(state=current) returned: success type: dict sample: '{ "changed": false, "msg": "VM kropta with vmid = 110 is running", "status": "running" }' ''' import os import re import time import traceback try: from proxmoxer import ProxmoxAPI HAS_PROXMOXER = True except ImportError: HAS_PROXMOXER = False from ansible.module_utils.basic import AnsibleModule from ansible.module_utils._text import to_native VZ_TYPE = 'qemu' def get_nextvmid(module, proxmox): try: vmid = proxmox.cluster.nextid.get() return vmid except Exception as e: module.fail_json(msg="Unable to get next vmid. Failed with exception: %s" % to_native(e), exception=traceback.format_exc()) def get_vmid(proxmox, name): return [vm['vmid'] for vm in proxmox.cluster.resources.get(type='vm') if vm.get('name') == name] def get_vm(proxmox, vmid): return [vm for vm in proxmox.cluster.resources.get(type='vm') if vm['vmid'] == int(vmid)] def node_check(proxmox, node): return [True for nd in proxmox.nodes.get() if nd['node'] == node] def get_vminfo(module, proxmox, node, vmid, **kwargs): global results results = {} mac = {} devices = {} try: vm = proxmox.nodes(node).qemu(vmid).config.get() except Exception as e: module.fail_json(msg='Getting information for VM with vmid = %s failed with exception: %s' % (vmid, e)) # Sanitize kwargs. Remove not defined args and ensure True and False converted to int. 
kwargs = dict((k, v) for k, v in kwargs.items() if v is not None) # Convert all dict in kwargs to elements. For hostpci[n], ide[n], net[n], numa[n], parallel[n], sata[n], scsi[n], serial[n], virtio[n] for k in list(kwargs.keys()): if isinstance(kwargs[k], dict): kwargs.update(kwargs[k]) del kwargs[k] # Split information by type for k, v in kwargs.items(): if re.match(r'net[0-9]', k) is not None: interface = k k = vm[k] k = re.search('=(.*?),', k).group(1) mac[interface] = k if (re.match(r'virtio[0-9]', k) is not None or re.match(r'ide[0-9]', k) is not None or re.match(r'scsi[0-9]', k) is not None or re.match(r'sata[0-9]', k) is not None): device = k k = vm[k] k = re.search('(.*?),', k).group(1) devices[device] = k results['mac'] = mac results['devices'] = devices results['vmid'] = int(vmid) def settings(module, proxmox, vmid, node, name, timeout, **kwargs): proxmox_node = proxmox.nodes(node) # Sanitize kwargs. Remove not defined args and ensure True and False converted to int. kwargs = dict((k, v) for k, v in kwargs.items() if v is not None) if getattr(proxmox_node, VZ_TYPE)(vmid).config.set(**kwargs) is None: return True else: return False def create_vm(module, proxmox, vmid, newid, node, name, memory, cpu, cores, sockets, timeout, update, **kwargs): # Available only in PVE 4 only_v4 = ['force', 'protection', 'skiplock'] # valide clone parameters valid_clone_params = ['format', 'full', 'pool', 'snapname', 'storage', 'target'] clone_params = {} # Default args for vm. Note: -args option is for experts only. It allows you to pass arbitrary arguments to kvm. vm_args = "-serial unix:/var/run/qemu-server/{0}.serial,server,nowait".format(vmid) proxmox_node = proxmox.nodes(node) # Sanitize kwargs. Remove not defined args and ensure True and False converted to int. 
kwargs = dict((k, v) for k, v in kwargs.items() if v is not None) kwargs.update(dict([k, int(v)] for k, v in kwargs.items() if isinstance(v, bool))) # The features work only on PVE 4 if PVE_MAJOR_VERSION < 4: for p in only_v4: if p in kwargs: del kwargs[p] # If update, don't update disk (virtio, ide, sata, scsi) and network interface if update: if 'virtio' in kwargs: del kwargs['virtio'] if 'sata' in kwargs: del kwargs['sata'] if 'scsi' in kwargs: del kwargs['scsi'] if 'ide' in kwargs: del kwargs['ide'] if 'net' in kwargs: del kwargs['net'] # Convert all dict in kwargs to elements. For hostpci[n], ide[n], net[n], numa[n], parallel[n], sata[n], scsi[n], serial[n], virtio[n] for k in list(kwargs.keys()): if isinstance(kwargs[k], dict): kwargs.update(kwargs[k]) del kwargs[k] # Rename numa_enabled to numa. According to the API documentation if 'numa_enabled' in kwargs: kwargs['numa'] = kwargs['numa_enabled'] del kwargs['numa_enabled'] # -args and skiplock require root@pam user if module.params['api_user'] == "root@pam" and module.params['args'] is None: if not update: kwargs['args'] = vm_args elif module.params['api_user'] == "root@pam" and module.params['args'] is not None: kwargs['args'] = module.params['args'] elif module.params['api_user'] != "root@pam" and module.params['args'] is not None: module.fail_json(msg='args parameter requires root@pam user.') if module.params['api_user'] != "root@pam" and module.params['skiplock'] is not None: module.fail_json(msg='skiplock parameter requires root@pam user.
') if update: if getattr(proxmox_node, VZ_TYPE)(vmid).config.set(name=name, memory=memory, cpu=cpu, cores=cores, sockets=sockets, **kwargs) is None: return True else: return False elif module.params['clone'] is not None: for param in valid_clone_params: if module.params[param] is not None: clone_params[param] = module.params[param] clone_params.update(dict([k, int(v)] for k, v in clone_params.items() if isinstance(v, bool))) taskid = proxmox_node.qemu(vmid).clone.post(newid=newid, name=name, **clone_params) else: taskid = getattr(proxmox_node, VZ_TYPE).create(vmid=vmid, name=name, memory=memory, cpu=cpu, cores=cores, sockets=sockets, **kwargs) while timeout: if (proxmox_node.tasks(taskid).status.get()['status'] == 'stopped' and proxmox_node.tasks(taskid).status.get()['exitstatus'] == 'OK'): return True timeout = timeout - 1 if timeout == 0: module.fail_json(msg='Reached timeout while waiting for creating VM. Last line in task before timeout: %s' % proxmox_node.tasks(taskid).log.get()[:1]) time.sleep(1) return False def start_vm(module, proxmox, vm, vmid, timeout): taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.start.post() while timeout: if (proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped' and proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK'): return True timeout -= 1 if timeout == 0: module.fail_json(msg='Reached timeout while waiting for starting VM. 
Last line in task before timeout: %s' % proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1]) time.sleep(1) return False def stop_vm(module, proxmox, vm, vmid, timeout, force): if force: taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.shutdown.post(forceStop=1) else: taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.shutdown.post() while timeout: if (proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped' and proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK'): return True timeout -= 1 if timeout == 0: module.fail_json(msg='Reached timeout while waiting for stopping VM. Last line in task before timeout: %s' % proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1]) time.sleep(1) return False def main(): module = AnsibleModule( argument_spec=dict( acpi=dict(type='bool', default='yes'), agent=dict(type='bool'), args=dict(type='str', default=None), api_host=dict(required=True), api_user=dict(required=True), api_password=dict(no_log=True), autostart=dict(type='bool', default='no'), balloon=dict(type='int', default=0), bios=dict(choices=['seabios', 'ovmf']), boot=dict(type='str', default='cnd'), bootdisk=dict(type='str'), clone=dict(type='str', default=None), cores=dict(type='int', default=1), cpu=dict(type='str', default='kvm64'), cpulimit=dict(type='int'), cpuunits=dict(type='int', default=1000), delete=dict(type='str', default=None), description=dict(type='str'), digest=dict(type='str'), force=dict(type='bool', default=None), format=dict(type='str', default='qcow2', choices=['cloop', 'cow', 'qcow', 'qcow2', 'qed', 'raw', 'vmdk']), freeze=dict(type='bool'), full=dict(type='bool', default='yes'), hostpci=dict(type='dict'), hotplug=dict(type='str'), hugepages=dict(choices=['any', '2', '1024']), ide=dict(type='dict', default=None), keyboard=dict(type='str'), kvm=dict(type='bool', default='yes'), localtime=dict(type='bool'), lock=dict(choices=['migrate', 'backup', 'snapshot', 
'rollback']), machine=dict(type='str'), memory=dict(type='int', default=512), migrate_downtime=dict(type='int'), migrate_speed=dict(type='int'), name=dict(type='str'), net=dict(type='dict'), newid=dict(type='int', default=None), node=dict(), numa=dict(type='dict'), numa_enabled=dict(type='bool'), onboot=dict(type='bool', default='yes'), ostype=dict(default='l26', choices=['other', 'wxp', 'w2k', 'w2k3', 'w2k8', 'wvista', 'win7', 'win8', 'win10', 'l24', 'l26', 'solaris']), parallel=dict(type='dict'), pool=dict(type='str'), protection=dict(type='bool'), reboot=dict(type='bool'), revert=dict(type='str', default=None), sata=dict(type='dict'), scsi=dict(type='dict'), scsihw=dict(choices=['lsi', 'lsi53c810', 'virtio-scsi-pci', 'virtio-scsi-single', 'megasas', 'pvscsi']), serial=dict(type='dict'), shares=dict(type='int'), skiplock=dict(type='bool'), smbios=dict(type='str'), snapname=dict(type='str'), sockets=dict(type='int', default=1), startdate=dict(type='str'), startup=dict(), state=dict(default='present', choices=['present', 'absent', 'stopped', 'started', 'restarted', 'current']), storage=dict(type='str'), tablet=dict(type='bool', default='no'), target=dict(type='str'), tdf=dict(type='bool'), template=dict(type='bool', default='no'), timeout=dict(type='int', default=30), update=dict(type='bool', default='no'), validate_certs=dict(type='bool', default='no'), vcpus=dict(type='int', default=None), vga=dict(default='std', choices=['std', 'cirrus', 'vmware', 'qxl', 'serial0', 'serial1', 'serial2', 'serial3', 'qxl2', 'qxl3', 'qxl4']), virtio=dict(type='dict', default=None), vmid=dict(type='int', default=None), watchdog=dict(), ), mutually_exclusive=[('delete', 'revert'), ('delete', 'update'), ('revert', 'update'), ('clone', 'update'), ('clone', 'delete'), ('clone', 'revert')], required_one_of=[('name', 'vmid',)], required_if=[('state', 'present', ['node'])] ) if not HAS_PROXMOXER: module.fail_json(msg='proxmoxer required for this module') api_user = 
module.params['api_user'] api_host = module.params['api_host'] api_password = module.params['api_password'] clone = module.params['clone'] cpu = module.params['cpu'] cores = module.params['cores'] delete = module.params['delete'] memory = module.params['memory'] name = module.params['name'] newid = module.params['newid'] node = module.params['node'] revert = module.params['revert'] sockets = module.params['sockets'] state = module.params['state'] timeout = module.params['timeout'] update = bool(module.params['update']) vmid = module.params['vmid'] validate_certs = module.params['validate_certs'] # If password not set get it from PROXMOX_PASSWORD env if not api_password: try: api_password = os.environ['PROXMOX_PASSWORD'] except KeyError as e: module.fail_json(msg='You should set api_password param or use PROXMOX_PASSWORD environment variable') try: proxmox = ProxmoxAPI(api_host, user=api_user, password=api_password, verify_ssl=validate_certs) global VZ_TYPE global PVE_MAJOR_VERSION PVE_MAJOR_VERSION = 3 if float(proxmox.version.get()['version']) < 4.0 else 4 except Exception as e: module.fail_json(msg='authorization on proxmox cluster failed with exception: %s' % e) # If vmid not set get the Next VM id from ProxmoxAPI # If vm name is set get the VM id from ProxmoxAPI if not vmid: if state == 'present' and (not update and not clone) and (not delete and not revert): try: vmid = get_nextvmid(module, proxmox) except Exception as e: module.fail_json(msg="Can't get the next vmid for VM {0} automatically. 
Ensure your cluster state is good".format(name)) else: try: if not clone: vmid = get_vmid(proxmox, name)[0] else: vmid = get_vmid(proxmox, clone)[0] except Exception as e: if not clone: module.fail_json(msg="VM {0} does not exist in cluster.".format(name)) else: module.fail_json(msg="VM {0} does not exist in cluster.".format(clone)) if clone is not None: if get_vmid(proxmox, name): module.exit_json(changed=False, msg="VM with name <%s> already exists" % name) if vmid is not None: vm = get_vm(proxmox, vmid) if not vm: module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid) if not newid: try: newid = get_nextvmid(module, proxmox) except Exception as e: module.fail_json(msg="Can't get the next vmid for VM {0} automatically. Ensure your cluster state is good".format(name)) else: vm = get_vm(proxmox, newid) if vm: module.exit_json(changed=False, msg="vmid %s with VM name %s already exists" % (newid, name)) if delete is not None: try: settings(module, proxmox, vmid, node, name, timeout, delete=delete) module.exit_json(changed=True, msg="Settings have been deleted on VM {0} with vmid {1}".format(name, vmid)) except Exception as e: module.fail_json(msg='Unable to delete settings on VM {0} with vmid {1}: '.format(name, vmid) + str(e)) elif revert is not None: try: settings(module, proxmox, vmid, node, name, timeout, revert=revert) module.exit_json(changed=True, msg="Settings have been reverted on VM {0} with vmid {1}".format(name, vmid)) except Exception as e: module.fail_json(msg='Unable to revert settings on VM {0} with vmid {1}: Maybe it is not a pending task...
'.format(name, vmid) + str(e)) if state == 'present': try: if get_vm(proxmox, vmid) and not (update or clone): module.exit_json(changed=False, msg="VM with vmid <%s> already exists" % vmid) elif get_vmid(proxmox, name) and not (update or clone): module.exit_json(changed=False, msg="VM with name <%s> already exists" % name) elif not (node, name): module.fail_json(msg='node, name is mandatory for creating/updating vm') elif not node_check(proxmox, node): module.fail_json(msg="node '%s' does not exist in cluster" % node) create_vm(module, proxmox, vmid, newid, node, name, memory, cpu, cores, sockets, timeout, update, acpi=module.params['acpi'], agent=module.params['agent'], autostart=module.params['autostart'], balloon=module.params['balloon'], bios=module.params['bios'], boot=module.params['boot'], bootdisk=module.params['bootdisk'], cpulimit=module.params['cpulimit'], cpuunits=module.params['cpuunits'], description=module.params['description'], digest=module.params['digest'], force=module.params['force'], freeze=module.params['freeze'], hostpci=module.params['hostpci'], hotplug=module.params['hotplug'], hugepages=module.params['hugepages'], ide=module.params['ide'], keyboard=module.params['keyboard'], kvm=module.params['kvm'], localtime=module.params['localtime'], lock=module.params['lock'], machine=module.params['machine'], migrate_downtime=module.params['migrate_downtime'], migrate_speed=module.params['migrate_speed'], net=module.params['net'], numa=module.params['numa'], numa_enabled=module.params['numa_enabled'], onboot=module.params['onboot'], ostype=module.params['ostype'], parallel=module.params['parallel'], pool=module.params['pool'], protection=module.params['protection'], reboot=module.params['reboot'], sata=module.params['sata'], scsi=module.params['scsi'], scsihw=module.params['scsihw'], serial=module.params['serial'], shares=module.params['shares'], skiplock=module.params['skiplock'], smbios1=module.params['smbios'], snapname=module.params['snapname'], 
startdate=module.params['startdate'], startup=module.params['startup'], tablet=module.params['tablet'], target=module.params['target'], tdf=module.params['tdf'], template=module.params['template'], vcpus=module.params['vcpus'], vga=module.params['vga'], virtio=module.params['virtio'], watchdog=module.params['watchdog']) if not clone: get_vminfo(module, proxmox, node, vmid, ide=module.params['ide'], net=module.params['net'], sata=module.params['sata'], scsi=module.params['scsi'], virtio=module.params['virtio']) if update: module.exit_json(changed=True, msg="VM %s with vmid %s updated" % (name, vmid)) elif clone is not None: module.exit_json(changed=True, msg="VM %s with newid %s cloned from vm with vmid %s" % (name, newid, vmid)) else: module.exit_json(changed=True, msg="VM %s with vmid %s deployed" % (name, vmid), **results) except Exception as e: if update: module.fail_json(msg="Unable to update vm {0} with vmid {1}=".format(name, vmid) + str(e)) elif clone is not None: module.fail_json(msg="Unable to clone vm {0} from vmid {1}=".format(name, vmid) + str(e)) else: module.fail_json(msg="creation of %s VM %s with vmid %s failed with exception=%s" % (VZ_TYPE, name, vmid, e)) elif state == 'started': try: vm = get_vm(proxmox, vmid) if not vm: module.fail_json(msg='VM with vmid <%s> does not exist in cluster' % vmid) if getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'running': module.exit_json(changed=False, msg="VM %s is already running" % vmid) if start_vm(module, proxmox, vm, vmid, timeout): module.exit_json(changed=True, msg="VM %s started" % vmid) except Exception as e: module.fail_json(msg="starting of VM %s failed with exception: %s" % (vmid, e)) elif state == 'stopped': try: vm = get_vm(proxmox, vmid) if not vm: module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid) if getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'stopped': module.exit_json(changed=False, 
msg="VM %s is already stopped" % vmid) if stop_vm(module, proxmox, vm, vmid, timeout, force=module.params['force']): module.exit_json(changed=True, msg="VM %s is shutting down" % vmid) except Exception as e: module.fail_json(msg="stopping of VM %s failed with exception: %s" % (vmid, e)) elif state == 'restarted': try: vm = get_vm(proxmox, vmid) if not vm: module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid) if getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'stopped': module.exit_json(changed=False, msg="VM %s is not running" % vmid) if stop_vm(module, proxmox, vm, vmid, timeout, force=module.params['force']) and start_vm(module, proxmox, vm, vmid, timeout): module.exit_json(changed=True, msg="VM %s is restarted" % vmid) except Exception as e: module.fail_json(msg="restarting of VM %s failed with exception: %s" % (vmid, e)) elif state == 'absent': try: vm = get_vm(proxmox, vmid) if not vm: module.exit_json(changed=False, msg="VM %s does not exist" % vmid) if getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] == 'running': module.exit_json(changed=False, msg="VM %s is running. Stop it before deletion." % vmid) taskid = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE).delete(vmid) while timeout: if (proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['status'] == 'stopped' and proxmox.nodes(vm[0]['node']).tasks(taskid).status.get()['exitstatus'] == 'OK'): module.exit_json(changed=True, msg="VM %s removed" % vmid) timeout -= 1 if timeout == 0: module.fail_json(msg='Reached timeout while waiting for removing VM. 
Last line in task before timeout: %s' % proxmox.nodes(vm[0]['node']).tasks(taskid).log.get()[:1]) time.sleep(1) except Exception as e: module.fail_json(msg="deletion of VM %s failed with exception: %s" % (vmid, e)) elif state == 'current': status = {} try: vm = get_vm(proxmox, vmid) if not vm: module.fail_json(msg='VM with vmid = %s does not exist in cluster' % vmid) current = getattr(proxmox.nodes(vm[0]['node']), VZ_TYPE)(vmid).status.current.get()['status'] status['status'] = current if status: module.exit_json(changed=False, msg="VM %s with vmid = %s is %s" % (name, vmid, current), **status) except Exception as e: module.fail_json(msg="Unable to get vm {0} with vmid = {1} status: ".format(name, vmid) + str(e)) if __name__ == '__main__': main()
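The `started`, `stopped`, `restarted` and `absent` states of the module above all repeat the same idiom: submit an asynchronous Proxmox task, then poll `tasks(taskid).status.get()` until it reports `stopped` with `exitstatus == 'OK'`, decrementing a one-second timeout counter. A minimal sketch of that polling loop follows; the `wait_for_task` helper name and the `interval` parameter are assumptions for illustration, not part of the module:

```python
import time


def wait_for_task(get_status, timeout=30, interval=1.0):
    """Poll a Proxmox-style task until it finishes or the timeout expires.

    get_status must return a dict with 'status' and (once finished)
    'exitstatus' keys, mirroring what proxmoxer's
    nodes(node).tasks(taskid).status.get() returns.
    Returns True only if the task stopped with exitstatus 'OK'.
    """
    while timeout > 0:
        status = get_status()
        if status.get('status') == 'stopped':
            # Task finished; success depends on its exit status.
            return status.get('exitstatus') == 'OK'
        timeout -= 1
        time.sleep(interval)
    # Timeout exhausted while the task was still running.
    return False
```

The module inlines this loop separately in `create_vm`, `start_vm`, `stop_vm` and the `absent` branch rather than sharing a helper.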
closed
ansible/ansible
https://github.com/ansible/ansible
62,620
win_nssm Is setting some defaults that should have an option to overwrite
<!--- Verify first that your feature was not already discussed on GitHub --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Describe the new feature/improvement briefly below --> The win_nssm module has a few defaults set now that I think should allow for customization. For example I'd like to be able to set the following: `AppRotateBytes 1000000` `AppRotateOnline 1` `AppStopMethodConsole 15000` `AppStopMethodSkip 6`. I had been setting them via win_shell command, but now the module and shell commands are battling one another. https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/windows/win_nssm.ps1#L449 https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/windows/win_nssm.ps1#L439 ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> win_nssm.ps1
https://github.com/ansible/ansible/issues/62620
https://github.com/ansible/ansible/pull/65131
2acfa0e08cf27400282f87e6b1b1cfdbcbc103a3
d8982b4992c5944dc060a59728243169669956cc
2019-09-19T19:28:53Z
python
2019-12-01T20:49:07Z
changelogs/fragments/win_nssm-Implement-additional-parameters.yml
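The feature this issue requests amounts to "hard-coded values become defaults that a task may override". A small illustrative Python sketch, where the helper name and dict layout are hypothetical; the default values mirror the ones hard-coded in win_nssm.ps1, and the overrides are the ones the issue author wants to set:

```python
def merge_nssm_settings(defaults, overrides=None):
    """Apply user-supplied overrides on top of the module's hard-coded
    defaults, leaving the defaults in place for anything not overridden."""
    merged = dict(defaults)
    merged.update(overrides or {})
    return merged


# Rotation defaults currently hard-coded by win_nssm.ps1.
DEFAULTS = {
    'AppRotateFiles': 1,
    'AppRotateOnline': 0,
    'AppRotateSeconds': 86400,
    'AppRotateBytes': 104858,
}

# Values the issue author had been setting via win_shell instead.
print(merge_nssm_settings(DEFAULTS, {'AppRotateOnline': 1,
                                     'AppRotateBytes': 1000000}))
```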
lib/ansible/modules/windows/win_nssm.ps1
#!powershell # Copyright: (c) 2015, George Frank <[email protected]> # Copyright: (c) 2015, Adam Keech <[email protected]> # Copyright: (c) 2015, Hans-Joachim Kliemeck <[email protected]> # Copyright: (c) 2019, Kevin Subileau (@ksubileau) # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) #Requires -Module Ansible.ModuleUtils.Legacy #Requires -Module Ansible.ModuleUtils.ArgvParser #Requires -Module Ansible.ModuleUtils.CommandUtil $ErrorActionPreference = "Stop" $start_modes_map = @{ "auto" = "SERVICE_AUTO_START" "delayed" = "SERVICE_DELAYED_AUTO_START" "manual" = "SERVICE_DEMAND_START" "disabled" = "SERVICE_DISABLED" } $params = Parse-Args -arguments $args -supports_check_mode $true $check_mode = Get-AnsibleParam -obj $params -name "_ansible_check_mode" -type "bool" -default $false $diff_mode = Get-AnsibleParam -obj $params -name "_ansible_diff" -type "bool" -default $false $name = Get-AnsibleParam -obj $params -name "name" -type "str" -failifempty $true $state = Get-AnsibleParam -obj $params -name "state" -type "str" -default "present" -validateset "present","absent","started","stopped","restarted" -resultobj $result $display_name = Get-AnsibleParam -obj $params -name 'display_name' -type 'str' $description = Get-AnsibleParam -obj $params -name 'description' -type 'str' $application = Get-AnsibleParam -obj $params -name "application" -type "path" $appDirectory = Get-AnsibleParam -obj $params -name "working_directory" -aliases "app_directory","chdir" -type "path" $appParameters = Get-AnsibleParam -obj $params -name "app_parameters" $appArguments = Get-AnsibleParam -obj $params -name "arguments" -aliases "app_parameters_free_form" $stdoutFile = Get-AnsibleParam -obj $params -name "stdout_file" -type "path" $stderrFile = Get-AnsibleParam -obj $params -name "stderr_file" -type "path" $executable = Get-AnsibleParam -obj $params -name "executable" -type "path" -default "nssm.exe" # Deprecated options since 2.8. 
Remove in 2.12 $startMode = Get-AnsibleParam -obj $params -name "start_mode" -type "str" -default "auto" -validateset $start_modes_map.Keys -resultobj $result $dependencies = Get-AnsibleParam -obj $params -name "dependencies" -type "list" $user = Get-AnsibleParam -obj $params -name "user" -type "str" $password = Get-AnsibleParam -obj $params -name "password" -type "str" $result = @{ changed = $false } $diff_text = $null function Invoke-NssmCommand { [CmdletBinding()] param( [Parameter(Mandatory=$true,ValueFromRemainingArguments=$true)] [string[]]$arguments ) $command = Argv-ToString -arguments (@($executable) + $arguments) $result = Run-Command -command $command $result.arguments = $command return $result } function Get-NssmServiceStatus { [CmdletBinding()] param( [Parameter(Mandatory=$true)] [string]$service ) return Invoke-NssmCommand -arguments @("status", $service) } function Get-NssmServiceParameter { [CmdletBinding()] param( [Parameter(Mandatory=$true)] [string]$service, [Parameter(Mandatory=$true)] [Alias("param")] [string]$parameter, [Parameter(Mandatory=$false)] [string]$subparameter ) $arguments = @("get", $service, $parameter) if($subparameter -ne "") { $arguments += $subparameter } return Invoke-NssmCommand -arguments $arguments } function Set-NssmServiceParameter { [CmdletBinding()] param( [Parameter(Mandatory=$true)] [string]$service, [Parameter(Mandatory=$true)] [string]$parameter, [Parameter(Mandatory=$true,ValueFromRemainingArguments=$true)] [Alias("value")] [string[]]$arguments ) return Invoke-NssmCommand -arguments (@("set", $service, $parameter) + $arguments) } function Reset-NssmServiceParameter { [CmdletBinding()] param( [Parameter(Mandatory=$true)] [string]$service, [Parameter(Mandatory=$true)] [Alias("param")] [string]$parameter ) return Invoke-NssmCommand -arguments @("reset", $service, $parameter) } function Update-NssmServiceParameter { <# .SYNOPSIS A generic cmdlet to idempotently set a nssm service parameter. 
.PARAMETER service [String] The service name .PARAMETER parameter [String] The name of the nssm parameter to set. .PARAMETER arguments [String[]] Target value (or list of values) or array of arguments to pass to the 'nssm set' command. .PARAMETER compare [scriptblock] An optional idempotency check scriptblock that must return true when the current value is equal to the desired value. Useful when 'nssm get' doesn't return the same value as 'nssm set' takes as an argument, like for the ObjectName parameter. #> [CmdletBinding(SupportsShouldProcess=$true)] param( [Parameter(Mandatory=$true)] [string]$service, [Parameter(Mandatory=$true)] [string]$parameter, [Parameter(Mandatory=$true,ValueFromRemainingArguments=$true)] [AllowEmptyString()] [AllowNull()] [Alias("value")] [string[]]$arguments, [Parameter()] [scriptblock]$compare = {param($actual,$expected) @(Compare-Object -ReferenceObject $actual -DifferenceObject $expected).Length -eq 0} ) if($null -eq $arguments) { return } $arguments = @($arguments | Where-Object { $_ -ne '' }) $nssm_result = Get-NssmServiceParameter -service $service -parameter $parameter if ($nssm_result.rc -ne 0) { $result.nssm_error_cmd = $nssm_result.arguments $result.nssm_error_log = $nssm_result.stderr Fail-Json -obj $result -message "Error retrieving $parameter for service ""$service""" } $current_values = @($nssm_result.stdout.split("`n`r") | Where-Object { $_ -ne '' }) if (-not $compare.Invoke($current_values,$arguments)) { if ($PSCmdlet.ShouldProcess($service, "Update '$parameter' parameter")) { if($arguments.Count -gt 0) { $nssm_result = Set-NssmServiceParameter -service $service -parameter $parameter -arguments $arguments } else { $nssm_result = Reset-NssmServiceParameter -service $service -parameter $parameter } if ($nssm_result.rc -ne 0) { $result.nssm_error_cmd = $nssm_result.arguments $result.nssm_error_log = $nssm_result.stderr Fail-Json -obj $result -message "Error setting $parameter for service ""$service""" } } $script:diff_text +=
"-$parameter = $($current_values -join ', ')`n+$parameter = $($arguments -join ', ')`n" $result.changed_by = $parameter $result.changed = $true } } function Test-NssmServiceExists { [CmdletBinding()] param( [Parameter(Mandatory=$true)] [string]$service ) return [bool](Get-Service -Name $service -ErrorAction SilentlyContinue) } function Invoke-NssmStart { [CmdletBinding()] param( [Parameter(Mandatory=$true)] [string]$service ) $nssm_result = Invoke-NssmCommand -arguments @("start", $service) if ($nssm_result.rc -ne 0) { $result.nssm_error_cmd = $nssm_result.arguments $result.nssm_error_log = $nssm_result.stderr Fail-Json -obj $result -message "Error starting service ""$service""" } } function Invoke-NssmStop { [CmdletBinding()] param( [Parameter(Mandatory=$true)] [string]$service ) $nssm_result = Invoke-NssmCommand -arguments @("stop", $service) if ($nssm_result.rc -ne 0) { $result.nssm_error_cmd = $nssm_result.arguments $result.nssm_error_log = $nssm_result.stderr Fail-Json -obj $result -message "Error stopping service ""$service""" } } function Start-NssmService { [CmdletBinding(SupportsShouldProcess=$true)] param( [Parameter(Mandatory=$true)] [string]$service ) $currentStatus = Get-NssmServiceStatus -service $service if ($currentStatus.rc -ne 0) { $result.nssm_error_cmd = $currentStatus.arguments $result.nssm_error_log = $currentStatus.stderr Fail-Json -obj $result -message "Error starting service ""$service""" } if ($currentStatus.stdout -notlike "*SERVICE_RUNNING*") { if ($PSCmdlet.ShouldProcess($service, "Start service")) { switch -wildcard ($currentStatus.stdout) { "*SERVICE_STOPPED*" { Invoke-NssmStart -service $service } "*SERVICE_CONTINUE_PENDING*" { Invoke-NssmStop -service $service; Invoke-NssmStart -service $service } "*SERVICE_PAUSE_PENDING*" { Invoke-NssmStop -service $service; Invoke-NssmStart -service $service } "*SERVICE_PAUSED*" { Invoke-NssmStop -service $service; Invoke-NssmStart -service $service } "*SERVICE_START_PENDING*" { Invoke-NssmStop 
-service $service; Invoke-NssmStart -service $service } "*SERVICE_STOP_PENDING*" { Invoke-NssmStop -service $service; Invoke-NssmStart -service $service } } } $result.changed_by = "start_service" $result.changed = $true } } function Stop-NssmService { [CmdletBinding(SupportsShouldProcess=$true)] param( [Parameter(Mandatory=$true)] [string]$service ) $currentStatus = Get-NssmServiceStatus -service $service if ($currentStatus.rc -ne 0) { $result.nssm_error_cmd = $currentStatus.arguments $result.nssm_error_log = $currentStatus.stderr Fail-Json -obj $result -message "Error stopping service ""$service""" } if ($currentStatus.stdout -notlike "*SERVICE_STOPPED*") { if ($PSCmdlet.ShouldProcess($service, "Stop service")) { Invoke-NssmStop -service $service } $result.changed_by = "stop_service" $result.changed = $true } } if (($null -ne $appParameters) -and ($null -ne $appArguments)) { Fail-Json $result "'app_parameters' and 'arguments' are mutually exclusive but have both been set." } # Backward compatibility for old parameters style. Remove the block bellow in 2.12 if ($null -ne $appParameters) { Add-DeprecationWarning -obj $result -message "The parameter 'app_parameters' will be removed soon, use 'arguments' instead" -version 2.12 if ($appParameters -isnot [string]) { Fail-Json -obj $result -message "The app_parameters parameter must be a string representing a dictionary." 
} # Convert dict-as-string form to list $escapedAppParameters = $appParameters.TrimStart("@").TrimStart("{").TrimEnd("}").Replace("; ","`n").Replace("\","\\") $appParametersHash = ConvertFrom-StringData -StringData $escapedAppParameters $appParamsArray = @() $appParametersHash.GetEnumerator() | Foreach-Object { if ($_.Name -ne "_") { $appParamsArray += $_.Name } $appParamsArray += $_.Value } $appArguments = @($appParamsArray) # The rest of the code should use only the new $appArguments variable } if ($state -in @("started","stopped","restarted")) { Add-DeprecationWarning -obj $result -message "The values 'started', 'stopped', and 'restarted' for 'state' will be removed soon, use the win_service module to start or stop the service instead" -version 2.12 } if ($params.ContainsKey('start_mode')) { Add-DeprecationWarning -obj $result -message "The parameter 'start_mode' will be removed soon, use the win_service module instead" -version 2.12 } if ($null -ne $dependencies) { Add-DeprecationWarning -obj $result -message "The parameter 'dependencies' will be removed soon, use the win_service module instead" -version 2.12 } if ($null -ne $user) { Add-DeprecationWarning -obj $result -message "The parameter 'user' will be removed soon, use the win_service module instead" -version 2.12 } if ($null -ne $password) { Add-DeprecationWarning -obj $result -message "The parameter 'password' will be removed soon, use the win_service module instead" -version 2.12 } if ($state -ne 'absent') { if ($null -eq $application) { Fail-Json -obj $result -message "The application parameter must be defined when the state is not absent." } if (-not (Test-Path -LiteralPath $application -PathType Leaf)) { Fail-Json -obj $result -message "The application specified ""$application"" does not exist on the host." 
} if($null -eq $appDirectory) { $appDirectory = (Get-Item -LiteralPath $application).DirectoryName } if ($user -and -not $password) { Fail-Json -obj $result -message "User without password is informed for service ""$name""" } } $service_exists = Test-NssmServiceExists -service $name if ($state -eq 'absent') { if ($service_exists) { if(-not $check_mode) { if ((Get-Service -Name $name).Status -ne "Stopped") { $nssm_result = Invoke-NssmStop -service $name } $nssm_result = Invoke-NssmCommand -arguments @("remove", $name, "confirm") if ($nssm_result.rc -ne 0) { $result.nssm_error_cmd = $nssm_result.arguments $result.nssm_error_log = $nssm_result.stderr Fail-Json -obj $result -message "Error removing service ""$name""" } } $diff_text += "-[$name]" $result.changed_by = "remove_service" $result.changed = $true } } else { $diff_text_added_prefix = '' if (-not $service_exists) { if(-not $check_mode) { $nssm_result = Invoke-NssmCommand -arguments @("install", $name, $application) if ($nssm_result.rc -ne 0) { $result.nssm_error_cmd = $nssm_result.arguments $result.nssm_error_log = $nssm_result.stderr Fail-Json -obj $result -message "Error installing service ""$name""" } $service_exists = $true } $diff_text_added_prefix = '+' $result.changed_by = "install_service" $result.changed = $true } $diff_text += "$diff_text_added_prefix[$name]`n" # We cannot configure a service that was created above in check mode as it won't actually exist if ($service_exists) { $common_params = @{ service = $name WhatIf = $check_mode } Update-NssmServiceParameter -parameter "Application" -value $application @common_params Update-NssmServiceParameter -parameter "DisplayName" -value $display_name @common_params Update-NssmServiceParameter -parameter "Description" -value $description @common_params Update-NssmServiceParameter -parameter "AppDirectory" -value $appDirectory @common_params if ($null -ne $appArguments) { $singleLineParams = "" if ($appArguments -is [array]) { $singleLineParams = 
Argv-ToString -arguments $appArguments } else { $singleLineParams = $appArguments.ToString() } $result.nssm_app_parameters = $appArguments $result.nssm_single_line_app_parameters = $singleLineParams Update-NssmServiceParameter -parameter "AppParameters" -value $singleLineParams @common_params } Update-NssmServiceParameter -parameter "AppStdout" -value $stdoutFile @common_params Update-NssmServiceParameter -parameter "AppStderr" -value $stderrFile @common_params ### # Setup file rotation so we don't accidentally consume too much disk ### #set files to overwrite Update-NssmServiceParameter -parameter "AppStdoutCreationDisposition" -value 2 @common_params Update-NssmServiceParameter -parameter "AppStderrCreationDisposition" -value 2 @common_params #enable file rotation Update-NssmServiceParameter -parameter "AppRotateFiles" -value 1 @common_params #don't rotate until the service restarts Update-NssmServiceParameter -parameter "AppRotateOnline" -value 0 @common_params #both of the below conditions must be met before rotation will happen #minimum age before rotating Update-NssmServiceParameter -parameter "AppRotateSeconds" -value 86400 @common_params #minimum size before rotating Update-NssmServiceParameter -parameter "AppRotateBytes" -value 104858 @common_params ############## DEPRECATED block since 2.8. 
Remove in 2.12 ############## Update-NssmServiceParameter -parameter "DependOnService" -arguments $dependencies @common_params if ($user) { $fullUser = $user if (-Not($user.contains("@")) -And ($user.Split("\").count -eq 1)) { $fullUser = ".\" + $user } # Use custom compare callback to test only the username (and not the password) Update-NssmServiceParameter -parameter "ObjectName" -arguments @($fullUser, $password) -compare {param($actual,$expected) $actual[0] -eq $expected[0]} @common_params } $mappedMode = $start_modes_map.$startMode Update-NssmServiceParameter -parameter "Start" -value $mappedMode @common_params if ($state -in "stopped","restarted") { Stop-NssmService @common_params } if($state -in "started","restarted") { Start-NssmService @common_params } ######################################################################## } } if ($diff_mode -and $result.changed -eq $true) { $result.diff = @{ prepared = $diff_text } } Exit-Json $result
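The deprecated `app_parameters` path in the PowerShell above turns a dict-as-string such as `@{_='C:\app'; extra='-v'}` into a flat argument list, with the key `_` contributing only its value. A rough Python sketch of that conversion — the helper name is invented, and since PowerShell hashtable enumeration order is not guaranteed, this order-preserving version is an assumption:

```python
# Hypothetical sketch (not the module's code): convert a PowerShell
# dict-as-string into the flat argument list the module builds above.
def app_parameters_to_arguments(app_parameters):
    # Strip the @{...} wrapper, then split "key=value" entries on "; ",
    # mirroring the TrimStart/Replace chain in the PowerShell source
    body = app_parameters.lstrip("@").lstrip("{").rstrip("}")
    arguments = []
    for entry in body.split("; "):
        key, _, value = entry.partition("=")
        if key != "_":  # "_" marks a bare (positional) value
            arguments.append(key)
        arguments.append(value)
    return arguments
```

This is why the docs call `app_parameters` "a string representing a dictionary": it is parsed, not passed through, which is also why the plain `arguments` form replaced it.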
closed
ansible/ansible
https://github.com/ansible/ansible
62,620
win_nssm Is setting some defaults that should have an option to overwrite
<!--- Verify first that your feature was not already discussed on GitHub --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Describe the new feature/improvement briefly below --> The win_nssm module has a few defaults set now that I think should allow for customization. For example I'd like to be able to set the following: `AppRotateBytes 1000000` `AppRotateOnline 1` `AppStopMethodConsole 15000` `AppStopMethodSkip 6`. I had been setting them via win_shell command, but now the module and shell commands are battling one another. https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/windows/win_nssm.ps1#L449 https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/windows/win_nssm.ps1#L439 ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> win_nssm.ps1
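Until such options exist, the settings from the report map to plain `nssm set` invocations of the kind one would run through win_shell. A hedged Python sketch that merely assembles those command strings — the helper name is invented; the parameter names and values are the reporter's:

```python
# Hypothetical helper: build "nssm set" commands for settings the module
# does not (yet) expose as options. Values come from the issue text.
def nssm_set_commands(service, settings):
    return ["nssm set {0} {1} {2}".format(service, name, value)
            for name, value in settings]

commands = nssm_set_commands("myservice", [
    ("AppRotateBytes", 1000000),
    ("AppRotateOnline", 1),
    ("AppStopMethodConsole", 15000),
    ("AppStopMethodSkip", 6),
])
```

Running these after the module has applied its own hardcoded values is exactly the "battling one another" the reporter describes: each side keeps overwriting the other's settings.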
https://github.com/ansible/ansible/issues/62620
https://github.com/ansible/ansible/pull/65131
2acfa0e08cf27400282f87e6b1b1cfdbcbc103a3
d8982b4992c5944dc060a59728243169669956cc
2019-09-19T19:28:53Z
python
2019-12-01T20:49:07Z
lib/ansible/modules/windows/win_nssm.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright: (c) 2015, Heyo # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # this is a windows documentation stub. actual code lives in the .ps1 # file of the same name ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} DOCUMENTATION = r''' --- module: win_nssm version_added: "2.0" short_description: Install a service using NSSM description: - Install a Windows service using the NSSM wrapper. - NSSM is a service helper which doesn't suck. See U(https://nssm.cc/) for more information. requirements: - "nssm >= 2.24.0 # (install via M(win_chocolatey)) C(win_chocolatey: name=nssm)" options: name: description: - Name of the service to operate on. type: str required: true state: description: - State of the service on the system. - Values C(started), C(stopped), and C(restarted) are deprecated since v2.8, please use the M(win_service) module instead to start, stop or restart the service. type: str choices: [ absent, present, started, stopped, restarted ] default: present application: description: - The application binary to run as a service - Required when I(state) is C(present), C(started), C(stopped), or C(restarted). type: path executable: description: - The location of the NSSM utility (in case it is not located in your PATH). type: path default: nssm.exe version_added: "2.8.0" description: description: - The description to set for the service. type: str version_added: "2.8.0" display_name: description: - The display name to set for the service. type: str version_added: "2.8.0" working_directory: version_added: "2.8.0" description: - The working directory to run the service executable from (defaults to the directory containing the application binary) type: path aliases: [ app_directory, chdir ] stdout_file: description: - Path to receive output. type: path stderr_file: description: - Path to receive error output. 
type: path app_parameters: description: - A string representing a dictionary of parameters to be passed to the application when it starts. - DEPRECATED since v2.8, please use I(arguments) instead. - This is mutually exclusive with I(arguments). type: str arguments: description: - Parameters to be passed to the application when it starts. - This can be either a simple string or a list. - This parameter was renamed from I(app_parameters_free_form) in 2.8. - This is mutually exclusive with I(app_parameters). aliases: [ app_parameters_free_form ] type: str version_added: "2.3" dependencies: description: - Service dependencies that has to be started to trigger startup, separated by comma. - DEPRECATED since v2.8, please use the M(win_service) module instead. type: list user: description: - User to be used for service startup. - DEPRECATED since v2.8, please use the M(win_service) module instead. type: str password: description: - Password to be used for service startup. - DEPRECATED since v2.8, please use the M(win_service) module instead. type: str start_mode: description: - If C(auto) is selected, the service will start at bootup. - C(delayed) causes a delayed but automatic start after boot (added in version 2.5). - C(manual) means that the service will start only when another service needs it. - C(disabled) means that the service will stay off, regardless if it is needed or not. - DEPRECATED since v2.8, please use the M(win_service) module instead. type: str choices: [ auto, delayed, disabled, manual ] default: auto seealso: - module: win_service notes: - The service will NOT be started after its creation when C(state=present). - Once the service is created, you can use the M(win_service) module to start it or configure some additionals properties, such as its startup type, dependencies, service account, and so on. 
author: - Adam Keech (@smadam813) - George Frank (@georgefrank) - Hans-Joachim Kliemeck (@h0nIg) - Michael Wild (@themiwi) - Kevin Subileau (@ksubileau) ''' EXAMPLES = r''' - name: Install the foo service win_nssm: name: foo application: C:\windows\foo.exe # This will yield the following command: C:\windows\foo.exe bar "true" - name: Install the Consul service with a list of parameters win_nssm: name: Consul application: C:\consul\consul.exe arguments: - agent - -config-dir=C:\consul\config # This is strictly equivalent to the previous example - name: Install the Consul service with an arbitrary string of parameters win_nssm: name: Consul application: C:\consul\consul.exe arguments: agent -config-dir=C:\consul\config # Install the foo service, and then configure and start it with win_service - name: Install the foo service, redirecting stdout and stderr to the same file win_nssm: name: foo application: C:\windows\foo.exe stdout_file: C:\windows\foo.log stderr_file: C:\windows\foo.log - name: Configure and start the foo service using win_service win_service: name: foo dependencies: [ adf, tcpip ] user: foouser password: secret start_mode: manual state: started - name: Remove the foo service win_nssm: name: foo state: absent '''
closed
ansible/ansible
https://github.com/ansible/ansible
63,998
Order of elements in ansible_facts['disks'] array does not match actual disk order
##### SUMMARY On a system with multiple disks, ansible_facts['disks'][0] is not guaranteed to be the first disk. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME win_disk_facts ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below Ansible 2.8.5 on Tower 3.5.2 ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ``` ##### OS / ENVIRONMENT Windows Server 2019 Datacenter running in AWS ##### STEPS TO REPRODUCE Use the following playbook to illustrate the issue <!--- Paste example playbooks or commands between quotes below --> ```yaml --- - name: Configure local storage hosts: win1.example.com tasks: - name: Disk facts are populated win_disk_facts: - name: Print second disk size debug: var: ansible_facts['disks'][1]['size'] - name: Extract second disk as standalone fact set_fact: second_disk: "{{ ansible_facts['disks'] | json_query('[?number==`1`]') }}" - name: Print actual second disk size debug: var: second_disk[0]['size'] ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS ansible_facts['disks'][0] should contain information for the first disk (number: 0) ansible_facts['disks'][1] should contain information for the second disk (number: 1) ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> The first element in the ansible_facts['disks'] array is the second disk instead of the first. 
<!--- Paste verbatim command output between quotes --> ```paste below SSH password: PLAY [Configure local storage] ************************************************* TASK [Gathering Facts] ********************************************************* ok: [win1.example.com] TASK [Disk facts are populated] ************************************************ ok: [win1.example.com] TASK [Print second disk size] ************************************************** ok: [win1.example.com] => { "ansible_facts['disks'][1]['size']": "32212254720" } TASK [Extract second disk as standalone fact] ********************************** ok: [win1.example.com] TASK [Print actual second disk size] ******************************************* ok: [win1.example.com] => { "second_disk[0]['size']": "5368709120" } PLAY RECAP ********************************************************************* win1.example.com : ok=5 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` Users should not need to add json_query filters to obtain the correct disk. This may be a bug, or an enhancement, or a documentation issue.
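The two workarounds implied above — filtering by disk number, or normalising the list order once — are easy to see in a small Python sketch. The list shape mimics `ansible_facts['disks']` and the sizes are taken from the reporter's output:

```python
# Facts list in the order the module happened to return them: the disk
# numbered 1 comes first, so positional indexing picks the wrong disk.
disks = [
    {"number": 1, "size": 5368709120},
    {"number": 0, "size": 32212254720},
]

# Equivalent of the json_query('[?number==`1`]') filter from the playbook
second_disk = next(d for d in disks if d["number"] == 1)

# Or sort once, so that disks[i] really is disk number i
ordered = sorted(disks, key=lambda d: d["number"])
```

Sorting by `number` before indexing is the behaviour the linked fix gives the module itself, which removes the need for the json_query step.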
https://github.com/ansible/ansible/issues/63998
https://github.com/ansible/ansible/pull/64997
d8982b4992c5944dc060a59728243169669956cc
03dce68227cb5732ef463943cfb2bd0e09d5d4ed
2019-10-27T23:37:06Z
python
2019-12-01T20:54:18Z
changelogs/fragments/win_disk_facts-Set-output-array-order-by-disk-number.yml
closed
ansible/ansible
https://github.com/ansible/ansible
63,998
Order of elements in ansible_facts['disks'] array does not match actual disk order
##### SUMMARY On a system with multiple disks, ansible_facts['disks'][0] is not guaranteed to be the first disk. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME win_disk_facts ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ```paste below Ansible 2.8.5 on Tower 3.5.2 ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ```paste below ``` ##### OS / ENVIRONMENT Windows Server 2019 Datacenter running in AWS ##### STEPS TO REPRODUCE Use the following playbook to illustrate the issue <!--- Paste example playbooks or commands between quotes below --> ```yaml --- - name: Configure local storage hosts: win1.example.com tasks: - name: Disk facts are populated win_disk_facts: - name: Print second disk size debug: var: ansible_facts['disks'][1]['size'] - name: Extract second disk as standalone fact set_fact: second_disk: "{{ ansible_facts['disks'] | json_query('[?number==`1`]') }}" - name: Print actual second disk size debug: var: second_disk[0]['size'] ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS ansible_facts['disks'][0] should contain information for the first disk (number: 0) ansible_facts['disks'][1] should contain information for the second disk (number: 1) ##### ACTUAL RESULTS <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> The first element in the ansible_facts['disks'] array is the second disk instead of the first. 
<!--- Paste verbatim command output between quotes --> ```paste below SSH password: PLAY [Configure local storage] ************************************************* TASK [Gathering Facts] ********************************************************* ok: [win1.example.com] TASK [Disk facts are populated] ************************************************ ok: [win1.example.com] TASK [Print second disk size] ************************************************** ok: [win1.example.com] => { "ansible_facts['disks'][1]['size']": "32212254720" } TASK [Extract second disk as standalone fact] ********************************** ok: [win1.example.com] TASK [Print actual second disk size] ******************************************* ok: [win1.example.com] => { "second_disk[0]['size']": "5368709120" } PLAY RECAP ********************************************************************* win1.example.com : ok=5 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` Users should not need to add json_query filters to obtain the correct disk. This may be a bug, or an enhancement, or a documentation issue.
https://github.com/ansible/ansible/issues/63998
https://github.com/ansible/ansible/pull/64997
d8982b4992c5944dc060a59728243169669956cc
03dce68227cb5732ef463943cfb2bd0e09d5d4ed
2019-10-27T23:37:06Z
python
2019-12-01T20:54:18Z
lib/ansible/modules/windows/win_disk_facts.ps1
#!powershell # Copyright: (c) 2017, Marc Tschapek <[email protected]> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) #Requires -Module Ansible.ModuleUtils.Legacy #AnsibleRequires -OSVersion 6.2 $ErrorActionPreference = "Stop" Set-StrictMode -Version 2.0 # Functions function Test-Admin { $CurrentUser = New-Object Security.Principal.WindowsPrincipal $([Security.Principal.WindowsIdentity]::GetCurrent()) $IsAdmin = $CurrentUser.IsInRole([Security.Principal.WindowsBuiltinRole]::Administrator) return $IsAdmin } # Check admin rights if (-not (Test-Admin)) { Fail-Json -obj @{} -message "Module was not started with elevated rights" } # Create a new result object $result = @{ changed = $false ansible_facts = @{ ansible_disks = @() } } # Search disks try { $disks = Get-Disk } catch { Fail-Json -obj $result -message "Failed to search the disks on the target: $($_.Exception.Message)" } foreach ($disk in $disks) { $disk_info = @{} $pdisk = Get-PhysicalDisk -ErrorAction SilentlyContinue | Where-Object { $_.DeviceId -eq $disk.Number } if ($pdisk) { $disk_info["physical_disk"] += @{ size = $pdisk.Size allocated_size = $pdisk.AllocatedSize device_id = $pdisk.DeviceId friendly_name = $pdisk.FriendlyName operational_status = $pdisk.OperationalStatus health_status = $pdisk.HealthStatus bus_type = $pdisk.BusType usage_type = $pdisk.Usage supported_usages = $pdisk.SupportedUsages spindle_speed = $pdisk.SpindleSpeed firmware_version = $pdisk.FirmwareVersion physical_location = $pdisk.PhysicalLocation manufacturer = $pdisk.Manufacturer model = $pdisk.Model can_pool = $pdisk.CanPool indication_enabled = $pdisk.IsIndicationEnabled partial = $pdisk.IsPartial serial_number = $pdisk.SerialNumber object_id = $pdisk.ObjectId unique_id = $pdisk.UniqueId } if ([single]"$([System.Environment]::OSVersion.Version.Major).$([System.Environment]::OSVersion.Version.Minor)" -ge 6.3) { $disk_info.physical_disk.media_type = $pdisk.MediaType } if (-not 
$pdisk.CanPool) { $disk_info.physical_disk.cannot_pool_reason = $pdisk.CannotPoolReason } $vdisk = Get-VirtualDisk -PhysicalDisk $pdisk -ErrorAction SilentlyContinue if ($vdisk) { $disk_info["virtual_disk"] += @{ size = $vdisk.Size allocated_size = $vdisk.AllocatedSize footprint_on_pool = $vdisk.FootprintOnPool name = $vdisk.name friendly_name = $vdisk.FriendlyName operational_status = $vdisk.OperationalStatus health_status = $vdisk.HealthStatus provisioning_type = $vdisk.ProvisioningType allocation_unit_size = $vdisk.AllocationUnitSize media_type = $vdisk.MediaType parity_layout = $vdisk.ParityLayout access = $vdisk.Access detached_reason = $vdisk.DetachedReason write_cache_size = $vdisk.WriteCacheSize fault_domain_awareness = $vdisk.FaultDomainAwareness inter_leave = $vdisk.InterLeave deduplication_enabled = $vdisk.IsDeduplicationEnabled enclosure_aware = $vdisk.IsEnclosureAware manual_attach = $vdisk.IsManualAttach snapshot = $vdisk.IsSnapshot tiered = $vdisk.IsTiered physical_sector_size = $vdisk.PhysicalSectorSize logical_sector_size = $vdisk.LogicalSectorSize available_copies = $vdisk.NumberOfAvailableCopies columns = $vdisk.NumberOfColumns groups = $vdisk.NumberOfGroups physical_disk_redundancy = $vdisk.PhysicalDiskRedundancy read_cache_size = $vdisk.ReadCacheSize request_no_spof = $vdisk.RequestNoSinglePointOfFailure resiliency_setting_name = $vdisk.ResiliencySettingName object_id = $vdisk.ObjectId unique_id_format = $vdisk.UniqueIdFormat unique_id = $vdisk.UniqueId } } } $win32_disk_drive = Get-CimInstance -ClassName Win32_DiskDrive -ErrorAction SilentlyContinue | Where-Object { if ($_.SerialNumber) { $_.SerialNumber -eq $disk.SerialNumber } elseif ($disk.UniqueIdFormat -eq 'Vendor Specific') { $_.PNPDeviceID -eq $disk.UniqueId.split(':')[0] } } if ($win32_disk_drive) { $disk_info["win32_disk_drive"] += @{ availability=$win32_disk_drive.Availability bytes_per_sector=$win32_disk_drive.BytesPerSector capabilities=$win32_disk_drive.Capabilities 
capability_descriptions=$win32_disk_drive.CapabilityDescriptions caption=$win32_disk_drive.Caption compression_method=$win32_disk_drive.CompressionMethod config_manager_error_code=$win32_disk_drive.ConfigManagerErrorCode config_manager_user_config=$win32_disk_drive.ConfigManagerUserConfig creation_class_name=$win32_disk_drive.CreationClassName default_block_size=$win32_disk_drive.DefaultBlockSize description=$win32_disk_drive.Description device_id=$win32_disk_drive.DeviceID error_cleared=$win32_disk_drive.ErrorCleared error_description=$win32_disk_drive.ErrorDescription error_methodology=$win32_disk_drive.ErrorMethodology firmware_revision=$win32_disk_drive.FirmwareRevision index=$win32_disk_drive.Index install_date=$win32_disk_drive.InstallDate interface_type=$win32_disk_drive.InterfaceType last_error_code=$win32_disk_drive.LastErrorCode manufacturer=$win32_disk_drive.Manufacturer max_block_size=$win32_disk_drive.MaxBlockSize max_media_size=$win32_disk_drive.MaxMediaSize media_loaded=$win32_disk_drive.MediaLoaded media_type=$win32_disk_drive.MediaType min_block_size=$win32_disk_drive.MinBlockSize model=$win32_disk_drive.Model name=$win32_disk_drive.Name needs_cleaning=$win32_disk_drive.NeedsCleaning number_of_media_supported=$win32_disk_drive.NumberOfMediaSupported partitions=$win32_disk_drive.Partitions pnp_device_id=$win32_disk_drive.PNPDeviceID power_management_capabilities=$win32_disk_drive.PowerManagementCapabilities power_management_supported=$win32_disk_drive.PowerManagementSupported scsi_bus=$win32_disk_drive.SCSIBus scsi_logical_unit=$win32_disk_drive.SCSILogicalUnit scsi_port=$win32_disk_drive.SCSIPort scsi_target_id=$win32_disk_drive.SCSITargetId sectors_per_track=$win32_disk_drive.SectorsPerTrack serial_number=$win32_disk_drive.SerialNumber signature=$win32_disk_drive.Signature size=$win32_disk_drive.Size status=$win32_disk_drive.status status_info=$win32_disk_drive.StatusInfo system_creation_class_name=$win32_disk_drive.SystemCreationClassName 
system_name=$win32_disk_drive.SystemName total_cylinders=$win32_disk_drive.TotalCylinders total_heads=$win32_disk_drive.TotalHeads total_sectors=$win32_disk_drive.TotalSectors total_tracks=$win32_disk_drive.TotalTracks tracks_per_cylinder=$win32_disk_drive.TracksPerCylinder } } $disk_info.number = $disk.Number $disk_info.size = $disk.Size $disk_info.bus_type = $disk.BusType $disk_info.friendly_name = $disk.FriendlyName $disk_info.partition_style = $disk.PartitionStyle $disk_info.partition_count = $disk.NumberOfPartitions $disk_info.operational_status = $disk.OperationalStatus $disk_info.sector_size = $disk.PhysicalSectorSize $disk_info.read_only = $disk.IsReadOnly $disk_info.bootable = $disk.IsBoot $disk_info.system_disk = $disk.IsSystem $disk_info.clustered = $disk.IsClustered $disk_info.manufacturer = $disk.Manufacturer $disk_info.model = $disk.Model $disk_info.firmware_version = $disk.FirmwareVersion $disk_info.location = $disk.Location $disk_info.serial_number = $disk.SerialNumber $disk_info.unique_id = $disk.UniqueId $disk_info.guid = $disk.Guid $disk_info.path = $disk.Path $parts = Get-Partition -DiskNumber $($disk.Number) -ErrorAction SilentlyContinue if ($parts) { $disk_info["partitions"] += @() foreach ($part in $parts) { $partition_info = @{ number = $part.PartitionNumber size = $part.Size type = $part.Type drive_letter = $part.DriveLetter transition_state = $part.TransitionState offset = $part.Offset hidden = $part.IsHidden shadow_copy = $part.IsShadowCopy guid = $part.Guid access_paths = $part.AccessPaths } if ($disks.PartitionStyle -eq "GPT") { $partition_info.gpt_type = $part.GptType $partition_info.no_default_driveletter = $part.NoDefaultDriveLetter } elseif ($disks.PartitionStyle -eq "MBR") { $partition_info.mbr_type = $part.MbrType $partition_info.active = $part.IsActive } $vols = Get-Volume -Partition $part -ErrorAction SilentlyContinue if ($vols) { $partition_info["volumes"] += @() foreach ($vol in $vols) { $volume_info = @{ size = $vol.Size 
size_remaining = $vol.SizeRemaining type = $vol.FileSystem label = $vol.FileSystemLabel health_status = $vol.HealthStatus drive_type = $vol.DriveType object_id = $vol.ObjectId path = $vol.Path } if ([System.Environment]::OSVersion.Version.Major -ge 10) { $volume_info.allocation_unit_size = $vol.AllocationUnitSize } else { $volPath = ($vol.Path.TrimStart("\\?\")).TrimEnd("\") $BlockSize = (Get-CimInstance -Query "SELECT BlockSize FROM Win32_Volume WHERE DeviceID like '%$volPath%'" -ErrorAction SilentlyContinue | Select-Object BlockSize).BlockSize $volume_info.allocation_unit_size = $BlockSize } $partition_info.volumes += $volume_info } } $disk_info.partitions += $partition_info } } $result.ansible_facts.ansible_disks += $disk_info } # Return result Exit-Json -obj $result
closed
ansible/ansible
https://github.com/ansible/ansible
63,903
ufw module does not support specifying both in & out interfaces for forwarding rules
##### SUMMARY Currently the module ufw does not allow to specify two network interfaces for forwarding rules (rule='allow' routed='yes'). What I'd like to achieve is this rule: `ufw route allow in on ethX out on ethY to 10.0.0.0/8 from 192.168.0.0/24` To my knowledge, the best one can do is pick one interface and decide to flag it with "in" or "out" traffic using the `direction` property. So the equivalent of either: `ufw route allow in on ethX to 10.0.0.0/8 from 192.168.0.0/24` or `ufw route allow out on ethY to 10.0.0.0/8 from 192.168.0.0/24` Is there a mechanism to provide both interfaces to the module? ##### ISSUE TYPE - Feature request ##### COMPONENT NAME module ufw ##### ANSIBLE VERSION ``` ansible 2.8.2 config file = /mnt/c/dev/sandboxes/ansible/ansible.cfg configured module search path = [u'/home/me/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /home/me/virtualenvs/ansible/local/lib/python2.7/site-packages/ansible executable location = /home/me/virtualenvs/ansible/bin/ansible python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609] ``` ##### CONFIGURATION ``` ANSIBLE_PIPELINING(/mnt/c/dev/sandboxes/ansible/ansible.cfg) = True DEFAULT_ROLES_PATH(/mnt/c/dev/sandboxes/ansible/ansible.cfg) = [u'/mnt/c/dev/sandboxes/ansible/.imported_roles', u'/mnt/c/dev/sandboxes/ansible/roles'] DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /home/me/.vault_pass.txt ``` ##### OS / ENVIRONMENT Ubuntu Linux 18.04 LTS ##### STEPS TO REPRODUCE Cannot really (re)produce, we just can't express it. There is a single parameter `interface` and we would need 2. (Or a convention to pass a tuple of interfaces into `interface`, comma separated?) ##### EXPECTED RESULTS Being able to specify an "in" interface and an "out" interface. In the following snippet, only the "out" interface can be mentioned. 
```yaml changed: [testvm1] => (item={u'direction': u'out', u'to_ip': u'0.0.0.0/0', u'interface': u'ethY', u'route': u'yes', u'from_ip': u'192.168.0.0/24', u'rule': u'allow'}) => { "ansible_loop_var": "item", "changed": true, "commands": [ "/usr/sbin/ufw status verbose", "/bin/grep -h '^### tuple' /lib/ufw/user.rules /lib/ufw/user6.rules /etc/ufw/user.rules /etc/ufw/user6.rules /var/lib/ufw/user.rules /var/lib/ufw/user6.rules", "/usr/sbin/ufw --version", "/usr/sbin/ufw route allow out on ethY from 192.168.0.0/24 to 0.0.0.0/0", "/usr/sbin/ufw status verbose", "/bin/grep -h '^### tuple' /lib/ufw/user.rules /lib/ufw/user6.rules /etc/ufw/user.rules /etc/ufw/user6.rules /var/lib/ufw/user.rules /var/lib/ufw/user6.rules" ], "invocation": { "module_args": { "app": null, "comment": null, "default": null, "delete": false, "direction": "out", "from_ip": "192.168.0.0/24", "from_port": null, "insert": null, "insert_relative_to": "zero", "interface": "ethY", "log": false, "logging": null, "proto": null, "route": true, "rule": "allow", "state": null, "to_ip": "0.0.0.0/0", "to_port": null } }, "item": { "direction": "out", "from_ip": "192.168.0.0/24", "interface": "ethY", "route": "yes", "rule": "allow", "to_ip": "0.0.0.0/0" }, "msg": "Status: active\nLogging: off\nDefault: deny (incoming), allow (outgoing), deny (routed)\nNew profiles: skip\n\nTo Action From\n-- ------ ----\n10.0.0.0/8 on ethY ALLOW FWD 10.254.33.0/24" } ``` ##### ACTUAL RESULTS We must choose which interface we want to specify as there is only one `interface` parameter. (In my case, I will specify the output interface preferably over the incoming interface and count on the address range to be selective enough but YMVV)
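The rule the reporter wants combines an `in on` and an `out on` clause in a single command. A sketch of how the command line could be assembled if the module grew a second interface option — `interface_in`/`interface_out` are hypothetical parameter names here, not the module's API at the time of the report:

```python
# Hypothetical sketch: assemble the dual-interface route rule the report
# asks for. Either interface clause may be omitted by passing None.
def build_route_rule(action, interface_in, interface_out, from_ip, to_ip):
    parts = ["ufw", "route", action]
    if interface_in:
        parts += ["in", "on", interface_in]
    if interface_out:
        parts += ["out", "on", interface_out]
    parts += ["from", from_ip, "to", to_ip]
    return " ".join(parts)

rule = build_route_rule("allow", "ethX", "ethY",
                        "192.168.0.0/24", "10.0.0.0/8")
```

With only one interface argument, the function degenerates to the single-clause rules the module could already express, which is why the existing `interface` + `direction` pair cannot cover the forwarding case.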
https://github.com/ansible/ansible/issues/63903
https://github.com/ansible/ansible/pull/65382
03dce68227cb5732ef463943cfb2bd0e09d5d4ed
a0b8b85fa5ab512f0ece4c660aba754fc85edc9e
2019-10-24T12:42:48Z
python
2019-12-02T07:01:44Z
changelogs/fragments/63903-ufw.yaml
closed
ansible/ansible
https://github.com/ansible/ansible
63,903
ufw module does not support specifying both in & out interfaces for forwarding rules
##### SUMMARY Currently the module ufw does not allow to specify two network interfaces for forwarding rules (rule='allow' routed='yes'). What I'd like to achieve is this rule: `ufw route allow in on ethX out on ethY to 10.0.0.0/8 from 192.168.0.0/24` To my knowledge, the best one can do is pick one interface and decide to flag it with "in" or "out" traffic using the `direction` property. So the equivalent of either: `ufw route allow in on ethX to 10.0.0.0/8 from 192.168.0.0/24` or `ufw route allow out on ethY to 10.0.0.0/8 from 192.168.0.0/24` Is there a mechanism to provide both interfaces to the module? ##### ISSUE TYPE - Feature request ##### COMPONENT NAME module ufw ##### ANSIBLE VERSION ``` ansible 2.8.2 config file = /mnt/c/dev/sandboxes/ansible/ansible.cfg configured module search path = [u'/home/me/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /home/me/virtualenvs/ansible/local/lib/python2.7/site-packages/ansible executable location = /home/me/virtualenvs/ansible/bin/ansible python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609] ``` ##### CONFIGURATION ``` ANSIBLE_PIPELINING(/mnt/c/dev/sandboxes/ansible/ansible.cfg) = True DEFAULT_ROLES_PATH(/mnt/c/dev/sandboxes/ansible/ansible.cfg) = [u'/mnt/c/dev/sandboxes/ansible/.imported_roles', u'/mnt/c/dev/sandboxes/ansible/roles'] DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /home/me/.vault_pass.txt ``` ##### OS / ENVIRONMENT Ubuntu Linux 18.04 LTS ##### STEPS TO REPRODUCE Cannot really (re)produce, we just can't express it. There is a single parameter `interface` and we would need 2. (Or a convention to pass a tuple of interfaces into `interface`, comma separated?) ##### EXPECTED RESULTS Being able to specify an "in" interface and an "out" interface. In the following snippet, only the "out" interface can be mentioned. 
```yaml changed: [testvm1] => (item={u'direction': u'out', u'to_ip': u'0.0.0.0/0', u'interface': u'ethY', u'route': u'yes', u'from_ip': u'192.168.0.0/24', u'rule': u'allow'}) => { "ansible_loop_var": "item", "changed": true, "commands": [ "/usr/sbin/ufw status verbose", "/bin/grep -h '^### tuple' /lib/ufw/user.rules /lib/ufw/user6.rules /etc/ufw/user.rules /etc/ufw/user6.rules /var/lib/ufw/user.rules /var/lib/ufw/user6.rules", "/usr/sbin/ufw --version", "/usr/sbin/ufw route allow out on ethY from 192.168.0.0/24 to 0.0.0.0/0", "/usr/sbin/ufw status verbose", "/bin/grep -h '^### tuple' /lib/ufw/user.rules /lib/ufw/user6.rules /etc/ufw/user.rules /etc/ufw/user6.rules /var/lib/ufw/user.rules /var/lib/ufw/user6.rules" ], "invocation": { "module_args": { "app": null, "comment": null, "default": null, "delete": false, "direction": "out", "from_ip": "192.168.0.0/24", "from_port": null, "insert": null, "insert_relative_to": "zero", "interface": "ethY", "log": false, "logging": null, "proto": null, "route": true, "rule": "allow", "state": null, "to_ip": "0.0.0.0/0", "to_port": null } }, "item": { "direction": "out", "from_ip": "192.168.0.0/24", "interface": "ethY", "route": "yes", "rule": "allow", "to_ip": "0.0.0.0/0" }, "msg": "Status: active\nLogging: off\nDefault: deny (incoming), allow (outgoing), deny (routed)\nNew profiles: skip\n\nTo Action From\n-- ------ ----\n10.0.0.0/8 on ethY ALLOW FWD 10.254.33.0/24" } ``` ##### ACTUAL RESULTS We must choose which interface we want to specify as there is only one `interface` parameter. (In my case, I will specify the output interface preferably over the incoming interface and count on the address range to be selective enough but YMVV)
https://github.com/ansible/ansible/issues/63903
https://github.com/ansible/ansible/pull/65382
03dce68227cb5732ef463943cfb2bd0e09d5d4ed
a0b8b85fa5ab512f0ece4c660aba754fc85edc9e
2019-10-24T12:42:48Z
python
2019-12-02T07:01:44Z
lib/ansible/modules/system/ufw.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright: (c) 2014, Ahti Kitsik <[email protected]> # Copyright: (c) 2014, Jarno Keskikangas <[email protected]> # Copyright: (c) 2013, Aleksey Ovcharenko <[email protected]> # Copyright: (c) 2013, James Martin <[email protected]> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} DOCUMENTATION = r''' --- module: ufw short_description: Manage firewall with UFW description: - Manage firewall with UFW. version_added: 1.6 author: - Aleksey Ovcharenko (@ovcharenko) - Jarno Keskikangas (@pyykkis) - Ahti Kitsik (@ahtik) notes: - See C(man ufw) for more examples. requirements: - C(ufw) package options: state: description: - C(enabled) reloads firewall and enables firewall on boot. - C(disabled) unloads firewall and disables firewall on boot. - C(reloaded) reloads firewall. - C(reset) disables and resets firewall to installation defaults. type: str choices: [ disabled, enabled, reloaded, reset ] default: description: - Change the default policy for incoming or outgoing traffic. type: str choices: [ allow, deny, reject ] aliases: [ policy ] direction: description: - Select direction for a rule or default policy command. type: str choices: [ in, incoming, out, outgoing, routed ] logging: description: - Toggles logging. Logged packets use the LOG_KERN syslog facility. type: str choices: [ 'on', 'off', low, medium, high, full ] insert: description: - Insert the corresponding rule as rule number NUM. - Note that ufw numbers rules starting with 1. type: int insert_relative_to: description: - Allows to interpret the index in I(insert) relative to a position. - C(zero) interprets the rule number as an absolute index (i.e. 1 is the first rule). 
- C(first-ipv4) interprets the rule number relative to the index of the first IPv4 rule, or relative to the position where the first IPv4 rule would be if there is currently none. - C(last-ipv4) interprets the rule number relative to the index of the last IPv4 rule, or relative to the position where the last IPv4 rule would be if there is currently none. - C(first-ipv6) interprets the rule number relative to the index of the first IPv6 rule, or relative to the position where the first IPv6 rule would be if there is currently none. - C(last-ipv6) interprets the rule number relative to the index of the last IPv6 rule, or relative to the position where the last IPv6 rule would be if there is currently none. type: str choices: [ first-ipv4, first-ipv6, last-ipv4, last-ipv6, zero ] default: zero version_added: "2.8" rule: description: - Add firewall rule type: str choices: [ allow, deny, limit, reject ] log: description: - Log new connections matched to this rule type: bool from_ip: description: - Source IP address. type: str default: any aliases: [ from, src ] from_port: description: - Source port. type: str to_ip: description: - Destination IP address. type: str default: any aliases: [ dest, to] to_port: description: - Destination port. type: str aliases: [ port ] proto: description: - TCP/IP protocol. type: str choices: [ any, tcp, udp, ipv6, esp, ah, gre, igmp ] aliases: [ protocol ] name: description: - Use profile located in C(/etc/ufw/applications.d). type: str aliases: [ app ] delete: description: - Delete rule. type: bool interface: description: - Specify interface for rule. type: str aliases: [ if ] route: description: - Apply the rule to routed/forwarded packets. type: bool comment: description: - Add a comment to the rule. Requires UFW version >=0.35. 
type: str version_added: "2.4" ''' EXAMPLES = r''' - name: Allow everything and enable UFW ufw: state: enabled policy: allow - name: Set logging ufw: logging: 'on' # Sometimes it is desirable to let the sender know when traffic is # being denied, rather than simply ignoring it. In these cases, use # reject instead of deny. In addition, log rejected connections: - ufw: rule: reject port: auth log: yes # ufw supports connection rate limiting, which is useful for protecting # against brute-force login attacks. ufw will deny connections if an IP # address has attempted to initiate 6 or more connections in the last # 30 seconds. See http://www.debian-administration.org/articles/187 # for details. Typical usage is: - ufw: rule: limit port: ssh proto: tcp # Allow OpenSSH. (Note that as ufw manages its own state, simply removing # a rule=allow task can leave those ports exposed. Either use delete=yes # or a separate state=reset task) - ufw: rule: allow name: OpenSSH - name: Delete OpenSSH rule ufw: rule: allow name: OpenSSH delete: yes - name: Deny all access to port 53 ufw: rule: deny port: '53' - name: Allow port range 60000-61000 ufw: rule: allow port: 60000:61000 proto: tcp - name: Allow all access to tcp port 80 ufw: rule: allow port: '80' proto: tcp - name: Allow all access from RFC1918 networks to this host ufw: rule: allow src: '{{ item }}' loop: - 10.0.0.0/8 - 172.16.0.0/12 - 192.168.0.0/16 - name: Deny access to udp port 514 from host 1.2.3.4 and include a comment ufw: rule: deny proto: udp src: 1.2.3.4 port: '514' comment: Block syslog - name: Allow incoming access to eth0 from 1.2.3.5 port 5469 to 1.2.3.4 port 5469 ufw: rule: allow interface: eth0 direction: in proto: udp src: 1.2.3.5 from_port: '5469' dest: 1.2.3.4 to_port: '5469' # Note that IPv6 must be enabled in /etc/default/ufw for IPv6 firewalling to work. 
- name: Deny all traffic from the IPv6 2001:db8::/32 to tcp port 25 on this host ufw: rule: deny proto: tcp src: 2001:db8::/32 port: '25' - name: Deny all IPv6 traffic to tcp port 20 on this host # this should be the first IPv6 rule ufw: rule: deny proto: tcp port: '20' to_ip: "::" insert: 0 insert_relative_to: first-ipv6 - name: Deny all IPv4 traffic to tcp port 20 on this host # This should be the third to last IPv4 rule # (insert: -1 addresses the second to last IPv4 rule; # so the new rule will be inserted before the second # to last IPv4 rule, and will be come the third to last # IPv4 rule.) ufw: rule: deny proto: tcp port: '20' to_ip: "::" insert: -1 insert_relative_to: last-ipv4 # Can be used to further restrict a global FORWARD policy set to allow - name: Deny forwarded/routed traffic from subnet 1.2.3.0/24 to subnet 4.5.6.0/24 ufw: rule: deny route: yes src: 1.2.3.0/24 dest: 4.5.6.0/24 ''' import re from operator import itemgetter from ansible.module_utils.basic import AnsibleModule def compile_ipv4_regexp(): r = r"((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}" r += r"(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])" return re.compile(r) def compile_ipv6_regexp(): """ validation pattern provided by : https://stackoverflow.com/questions/53497/regular-expression-that-matches- valid-ipv6-addresses#answer-17871737 """ r = r"(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:" r += r"|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}" r += r"(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4})" r += r"{1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]" r += r"{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]" r += r"{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4})" r += r"{0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]" r += r"|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}" r += 
r"[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}" r += r"[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))" return re.compile(r) def main(): command_keys = ['state', 'default', 'rule', 'logging'] module = AnsibleModule( argument_spec=dict( state=dict(type='str', choices=['enabled', 'disabled', 'reloaded', 'reset']), default=dict(type='str', aliases=['policy'], choices=['allow', 'deny', 'reject']), logging=dict(type='str', choices=['full', 'high', 'low', 'medium', 'off', 'on']), direction=dict(type='str', choices=['in', 'incoming', 'out', 'outgoing', 'routed']), delete=dict(type='bool', default=False), route=dict(type='bool', default=False), insert=dict(type='int'), insert_relative_to=dict(choices=['zero', 'first-ipv4', 'last-ipv4', 'first-ipv6', 'last-ipv6'], default='zero'), rule=dict(type='str', choices=['allow', 'deny', 'limit', 'reject']), interface=dict(type='str', aliases=['if']), log=dict(type='bool', default=False), from_ip=dict(type='str', default='any', aliases=['from', 'src']), from_port=dict(type='str'), to_ip=dict(type='str', default='any', aliases=['dest', 'to']), to_port=dict(type='str', aliases=['port']), proto=dict(type='str', aliases=['protocol'], choices=['ah', 'any', 'esp', 'ipv6', 'tcp', 'udp', 'gre', 'igmp']), name=dict(type='str', aliases=['app']), comment=dict(type='str'), ), supports_check_mode=True, mutually_exclusive=[ ['name', 'proto', 'logging'], ], required_one_of=([command_keys]), required_by=dict( interface=('direction', ), ), ) cmds = [] ipv4_regexp = compile_ipv4_regexp() ipv6_regexp = compile_ipv6_regexp() def filter_line_that_not_start_with(pattern, content): return ''.join([line for line in content.splitlines(True) if line.startswith(pattern)]) def filter_line_that_contains(pattern, content): return [line for line in content.splitlines(True) if pattern in line] def filter_line_that_not_contains(pattern, content): return ''.join([line for line in content.splitlines(True) if pattern not in line])
def filter_line_that_match_func(match_func, content): return ''.join([line for line in content.splitlines(True) if match_func(line) is not None]) def filter_line_that_contains_ipv4(content): return filter_line_that_match_func(ipv4_regexp.search, content) def filter_line_that_contains_ipv6(content): return filter_line_that_match_func(ipv6_regexp.search, content) def is_starting_by_ipv4(ip): return ipv4_regexp.match(ip) is not None def is_starting_by_ipv6(ip): return ipv6_regexp.match(ip) is not None def execute(cmd, ignore_error=False): cmd = ' '.join(map(itemgetter(-1), filter(itemgetter(0), cmd))) cmds.append(cmd) (rc, out, err) = module.run_command(cmd, environ_update={"LANG": "C"}) if rc != 0 and not ignore_error: module.fail_json(msg=err or out, commands=cmds) return out def get_current_rules(): user_rules_files = ["/lib/ufw/user.rules", "/lib/ufw/user6.rules", "/etc/ufw/user.rules", "/etc/ufw/user6.rules", "/var/lib/ufw/user.rules", "/var/lib/ufw/user6.rules"] cmd = [[grep_bin], ["-h"], ["'^### tuple'"]] cmd.extend([[f] for f in user_rules_files]) return execute(cmd, ignore_error=True) def ufw_version(): """ Returns the major and minor version of ufw installed on the system. 
""" out = execute([[ufw_bin], ["--version"]]) lines = [x for x in out.split('\n') if x.strip() != ''] if len(lines) == 0: module.fail_json(msg="Failed to get ufw version.", rc=0, out=out) matches = re.search(r'^ufw.+(\d+)\.(\d+)(?:\.(\d+))?.*$', lines[0]) if matches is None: module.fail_json(msg="Failed to get ufw version.", rc=0, out=out) # Convert version to numbers major = int(matches.group(1)) minor = int(matches.group(2)) rev = 0 if matches.group(3) is not None: rev = int(matches.group(3)) return major, minor, rev params = module.params commands = dict((key, params[key]) for key in command_keys if params[key]) # Ensure ufw is available ufw_bin = module.get_bin_path('ufw', True) grep_bin = module.get_bin_path('grep', True) # Save the pre state and rules in order to recognize changes pre_state = execute([[ufw_bin], ['status verbose']]) pre_rules = get_current_rules() changed = False # Execute filter for (command, value) in commands.items(): cmd = [[ufw_bin], [module.check_mode, '--dry-run']] if command == 'state': states = {'enabled': 'enable', 'disabled': 'disable', 'reloaded': 'reload', 'reset': 'reset'} if value in ['reloaded', 'reset']: changed = True if module.check_mode: # "active" would also match "inactive", hence the space ufw_enabled = pre_state.find(" active") != -1 if (value == 'disabled' and ufw_enabled) or (value == 'enabled' and not ufw_enabled): changed = True else: execute(cmd + [['-f'], [states[value]]]) elif command == 'logging': extract = re.search(r'Logging: (on|off)(?: \(([a-z]+)\))?', pre_state) if extract: current_level = extract.group(2) current_on_off_value = extract.group(1) if value != "off": if current_on_off_value == "off": changed = True elif value != "on" and value != current_level: changed = True elif current_on_off_value != "off": changed = True else: changed = True if not module.check_mode: execute(cmd + [[command], [value]]) elif command == 'default': if params['direction'] not in ['outgoing', 'incoming', 'routed', None]: 
module.fail_json(msg='For default, direction must be one of "outgoing", "incoming" and "routed", or direction must not be specified.') if module.check_mode: regexp = r'Default: (deny|allow|reject) \(incoming\), (deny|allow|reject) \(outgoing\), (deny|allow|reject|disabled) \(routed\)' extract = re.search(regexp, pre_state) if extract is not None: current_default_values = {} current_default_values["incoming"] = extract.group(1) current_default_values["outgoing"] = extract.group(2) current_default_values["routed"] = extract.group(3) v = current_default_values[params['direction'] or 'incoming'] if v not in (value, 'disabled'): changed = True else: changed = True else: execute(cmd + [[command], [value], [params['direction']]]) elif command == 'rule': if params['direction'] not in ['in', 'out', None]: module.fail_json(msg='For rules, direction must be one of "in" and "out", or direction must not be specified.') # Rules are constructed according to the long format # # ufw [--dry-run] [route] [delete] [insert NUM] allow|deny|reject|limit [in|out on INTERFACE] [log|log-all] \ # [from ADDRESS [port PORT]] [to ADDRESS [port PORT]] \ # [proto protocol] [app application] [comment COMMENT] cmd.append([module.boolean(params['route']), 'route']) cmd.append([module.boolean(params['delete']), 'delete']) if params['insert'] is not None: relative_to_cmd = params['insert_relative_to'] if relative_to_cmd == 'zero': insert_to = params['insert'] else: (dummy, numbered_state, dummy) = module.run_command([ufw_bin, 'status', 'numbered']) numbered_line_re = re.compile(R'^\[ *([0-9]+)\] ') lines = [(numbered_line_re.match(line), '(v6)' in line) for line in numbered_state.splitlines()] lines = [(int(matcher.group(1)), ipv6) for (matcher, ipv6) in lines if matcher] last_number = max([no for (no, ipv6) in lines]) if lines else 0 has_ipv4 = any([not ipv6 for (no, ipv6) in lines]) has_ipv6 = any([ipv6 for (no, ipv6) in lines]) if relative_to_cmd == 'first-ipv4': relative_to = 1 elif 
relative_to_cmd == 'last-ipv4': relative_to = max([no for (no, ipv6) in lines if not ipv6]) if has_ipv4 else 1 elif relative_to_cmd == 'first-ipv6': relative_to = max([no for (no, ipv6) in lines if not ipv6]) + 1 if has_ipv4 else 1 elif relative_to_cmd == 'last-ipv6': relative_to = last_number if has_ipv6 else last_number + 1 insert_to = params['insert'] + relative_to if insert_to > last_number: # ufw does not like it when the insert number is larger than the # maximal rule number for IPv4/IPv6. insert_to = None cmd.append([insert_to is not None, "insert %s" % insert_to]) cmd.append([value]) cmd.append([params['direction'], "%s" % params['direction']]) cmd.append([params['interface'], "on %s" % params['interface']]) cmd.append([module.boolean(params['log']), 'log']) for (key, template) in [('from_ip', "from %s"), ('from_port', "port %s"), ('to_ip', "to %s"), ('to_port', "port %s"), ('proto', "proto %s"), ('name', "app '%s'")]: value = params[key] cmd.append([value, template % (value)]) ufw_major, ufw_minor, dummy = ufw_version() # comment is supported only in ufw version after 0.35 if (ufw_major == 0 and ufw_minor >= 35) or ufw_major > 0: cmd.append([params['comment'], "comment '%s'" % params['comment']]) rules_dry = execute(cmd) if module.check_mode: nb_skipping_line = len(filter_line_that_contains("Skipping", rules_dry)) if not (nb_skipping_line > 0 and nb_skipping_line == len(rules_dry.splitlines(True))): rules_dry = filter_line_that_not_start_with("### tuple", rules_dry) # ufw dry-run doesn't send all rules so have to compare ipv4 or ipv6 rules if is_starting_by_ipv4(params['from_ip']) or is_starting_by_ipv4(params['to_ip']): if filter_line_that_contains_ipv4(pre_rules) != filter_line_that_contains_ipv4(rules_dry): changed = True elif is_starting_by_ipv6(params['from_ip']) or is_starting_by_ipv6(params['to_ip']): if filter_line_that_contains_ipv6(pre_rules) != filter_line_that_contains_ipv6(rules_dry): changed = True elif pre_rules != rules_dry: changed = True 
# Get the new state if module.check_mode: return module.exit_json(changed=changed, commands=cmds) else: post_state = execute([[ufw_bin], ['status'], ['verbose']]) if not changed: post_rules = get_current_rules() changed = (pre_state != post_state) or (pre_rules != post_rules) return module.exit_json(changed=changed, commands=cmds, msg=post_state.rstrip()) if __name__ == '__main__': main()
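As a side note on the module source above: the `execute` helper builds the final CLI string from a list of `[condition, text]` entries, keeping only entries whose first element is truthy and joining each entry's last element. A minimal standalone sketch of that idiom (the flag values here are made up for the demo):

```python
from operator import itemgetter

# Each entry is [keep?, text]; one-element entries like ["ufw"] are
# always kept because their first element (the text itself) is truthy.
route = True
delete = False
interface = "eth0"

cmd = [
    ["ufw"],
    [route, "route"],
    [delete, "delete"],
    ["allow"],
    [interface, "in on %s" % interface],
]

# filter(itemgetter(0), ...) drops entries whose flag is falsy;
# map(itemgetter(-1), ...) then extracts each command fragment.
line = " ".join(map(itemgetter(-1), filter(itemgetter(0), cmd)))
print(line)  # ufw route allow in on eth0
```

This keeps the command template declarative: every optional fragment is listed once, and its inclusion is decided by the flag in front of it.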
closed
ansible/ansible
https://github.com/ansible/ansible
63,903
ufw module does not support specifying both in & out interfaces for forwarding rules
##### SUMMARY Currently the module ufw does not allow to specify two network interfaces for forwarding rules (rule='allow' routed='yes'). What I'd like to achieve is this rule: `ufw route allow in on ethX out on ethY to 10.0.0.0/8 from 192.168.0.0/24` To my knowledge, the best one can do is pick one interface and decide to flag it with "in" or "out" traffic using the `direction` property. So the equivalent of either: `ufw route allow in on ethX to 10.0.0.0/8 from 192.168.0.0/24` or `ufw route allow out on ethY to 10.0.0.0/8 from 192.168.0.0/24` Is there a mechanism to provide both interfaces to the module? ##### ISSUE TYPE - Feature request ##### COMPONENT NAME module ufw ##### ANSIBLE VERSION ``` ansible 2.8.2 config file = /mnt/c/dev/sandboxes/ansible/ansible.cfg configured module search path = [u'/home/me/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /home/me/virtualenvs/ansible/local/lib/python2.7/site-packages/ansible executable location = /home/me/virtualenvs/ansible/bin/ansible python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609] ``` ##### CONFIGURATION ``` ANSIBLE_PIPELINING(/mnt/c/dev/sandboxes/ansible/ansible.cfg) = True DEFAULT_ROLES_PATH(/mnt/c/dev/sandboxes/ansible/ansible.cfg) = [u'/mnt/c/dev/sandboxes/ansible/.imported_roles', u'/mnt/c/dev/sandboxes/ansible/roles'] DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /home/me/.vault_pass.txt ``` ##### OS / ENVIRONMENT Ubuntu Linux 18.04 LTS ##### STEPS TO REPRODUCE Cannot really (re)produce, we just can't express it. There is a single parameter `interface` and we would need 2. (Or a convention to pass a tuple of interfaces into `interface`, comma separated?) ##### EXPECTED RESULTS Being able to specify an "in" interface and an "out" interface. In the following snippet, only the "out" interface can be mentioned. 
```yaml changed: [testvm1] => (item={u'direction': u'out', u'to_ip': u'0.0.0.0/0', u'interface': u'ethY', u'route': u'yes', u'from_ip': u'192.168.0.0/24', u'rule': u'allow'}) => { "ansible_loop_var": "item", "changed": true, "commands": [ "/usr/sbin/ufw status verbose", "/bin/grep -h '^### tuple' /lib/ufw/user.rules /lib/ufw/user6.rules /etc/ufw/user.rules /etc/ufw/user6.rules /var/lib/ufw/user.rules /var/lib/ufw/user6.rules", "/usr/sbin/ufw --version", "/usr/sbin/ufw route allow out on ethY from 192.168.0.0/24 to 0.0.0.0/0", "/usr/sbin/ufw status verbose", "/bin/grep -h '^### tuple' /lib/ufw/user.rules /lib/ufw/user6.rules /etc/ufw/user.rules /etc/ufw/user6.rules /var/lib/ufw/user.rules /var/lib/ufw/user6.rules" ], "invocation": { "module_args": { "app": null, "comment": null, "default": null, "delete": false, "direction": "out", "from_ip": "192.168.0.0/24", "from_port": null, "insert": null, "insert_relative_to": "zero", "interface": "ethY", "log": false, "logging": null, "proto": null, "route": true, "rule": "allow", "state": null, "to_ip": "0.0.0.0/0", "to_port": null } }, "item": { "direction": "out", "from_ip": "192.168.0.0/24", "interface": "ethY", "route": "yes", "rule": "allow", "to_ip": "0.0.0.0/0" }, "msg": "Status: active\nLogging: off\nDefault: deny (incoming), allow (outgoing), deny (routed)\nNew profiles: skip\n\nTo Action From\n-- ------ ----\n10.0.0.0/8 on ethY ALLOW FWD 10.254.33.0/24" } ``` ##### ACTUAL RESULTS We must choose which interface we want to specify as there is only one `interface` parameter. (In my case, I will specify the output interface preferably over the incoming interface and count on the address range to be selective enough but YMVV)
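Related to the ufw.py source earlier in this file: `ufw_version()` extracts major/minor/revision from the `ufw --version` output with a regex. A standalone sketch of that parsing idea (simplified error handling; not the module's exact code):

```python
import re

def parse_ufw_version(out):
    # The first non-empty line carries the version, e.g. "ufw 0.35".
    lines = [x for x in out.split("\n") if x.strip() != ""]
    m = re.search(r"^ufw.+(\d+)\.(\d+)(?:\.(\d+))?", lines[0])
    if m is None:
        raise ValueError("cannot parse ufw version: %r" % out)
    major, minor = int(m.group(1)), int(m.group(2))
    rev = int(m.group(3)) if m.group(3) is not None else 0
    return major, minor, rev

print(parse_ufw_version("ufw 0.35\nCopyright 2008-2015 Canonical Ltd.\n"))
# (0, 35, 0)
```

The module uses the parsed tuple only for a single capability check (comments require ufw >= 0.35), which is why a missing revision defaults to 0.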
https://github.com/ansible/ansible/issues/63903
https://github.com/ansible/ansible/pull/65382
03dce68227cb5732ef463943cfb2bd0e09d5d4ed
a0b8b85fa5ab512f0ece4c660aba754fc85edc9e
2019-10-24T12:42:48Z
python
2019-12-02T07:01:44Z
test/integration/targets/ufw/tasks/main.yml
--- # Make sure ufw is installed - name: Install EPEL repository (RHEL only) include_role: name: setup_epel when: ansible_distribution == 'RedHat' - name: Install iptables (SuSE only) package: name: iptables when: ansible_os_family == 'Suse' - name: Install ufw package: name: ufw # Run the tests - block: - include_tasks: run-test.yml with_fileglob: - "tests/*.yml" # Cleanup always: - pause: # ufw creates backups of the rule files with a timestamp; if reset is called # twice in a row fast enough (so that both timestamps are taken in the same second), # the second call will notice that the backup files are already there and fail. # Waiting one second fixes this problem. seconds: 1 - name: Reset ufw to factory defaults and disable ufw: state: reset
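The ufw module's check mode (see ufw.py earlier in this file) diffs only the IPv4 or only the IPv6 rule set, chosen by classifying the rule's from/to address. A standalone sketch of that classification, reusing the module's IPv4 pattern idea but with a deliberately simplified IPv6 stand-in (the module's real IPv6 regex is far more thorough):

```python
import re

# Same IPv4 pattern idea as the module's compile_ipv4_regexp().
IPV4 = re.compile(
    r"((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}"
    r"(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])"
)
# Simplified stand-in for the module's full IPv6 validator: some hex
# digits, then a colon, then more hex/colon/dot characters.
IPV6 = re.compile(r"^[0-9a-fA-F]*:[0-9a-fA-F:.]*")

def is_starting_by_ipv4(ip):
    return IPV4.match(ip) is not None

def is_starting_by_ipv6(ip):
    return not is_starting_by_ipv4(ip) and IPV6.match(ip) is not None

print(is_starting_by_ipv4("224.0.0.251"))  # True
print(is_starting_by_ipv4("10.0.0.0/8"))   # True (matches the address prefix)
print(is_starting_by_ipv6("ff02::fb"))     # True
print(is_starting_by_ipv4("any"))          # False
```

Classifying by prefix (via `match`, not `fullmatch`) is what lets CIDR notation such as `10.0.0.0/8` be recognized as IPv4 without parsing the mask.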
closed
ansible/ansible
https://github.com/ansible/ansible
63,903
ufw module does not support specifying both in & out interfaces for forwarding rules
##### SUMMARY Currently the module ufw does not allow to specify two network interfaces for forwarding rules (rule='allow' routed='yes'). What I'd like to achieve is this rule: `ufw route allow in on ethX out on ethY to 10.0.0.0/8 from 192.168.0.0/24` To my knowledge, the best one can do is pick one interface and decide to flag it with "in" or "out" traffic using the `direction` property. So the equivalent of either: `ufw route allow in on ethX to 10.0.0.0/8 from 192.168.0.0/24` or `ufw route allow out on ethY to 10.0.0.0/8 from 192.168.0.0/24` Is there a mechanism to provide both interfaces to the module? ##### ISSUE TYPE - Feature request ##### COMPONENT NAME module ufw ##### ANSIBLE VERSION ``` ansible 2.8.2 config file = /mnt/c/dev/sandboxes/ansible/ansible.cfg configured module search path = [u'/home/me/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /home/me/virtualenvs/ansible/local/lib/python2.7/site-packages/ansible executable location = /home/me/virtualenvs/ansible/bin/ansible python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609] ``` ##### CONFIGURATION ``` ANSIBLE_PIPELINING(/mnt/c/dev/sandboxes/ansible/ansible.cfg) = True DEFAULT_ROLES_PATH(/mnt/c/dev/sandboxes/ansible/ansible.cfg) = [u'/mnt/c/dev/sandboxes/ansible/.imported_roles', u'/mnt/c/dev/sandboxes/ansible/roles'] DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /home/me/.vault_pass.txt ``` ##### OS / ENVIRONMENT Ubuntu Linux 18.04 LTS ##### STEPS TO REPRODUCE Cannot really (re)produce, we just can't express it. There is a single parameter `interface` and we would need 2. (Or a convention to pass a tuple of interfaces into `interface`, comma separated?) ##### EXPECTED RESULTS Being able to specify an "in" interface and an "out" interface. In the following snippet, only the "out" interface can be mentioned. 
```yaml changed: [testvm1] => (item={u'direction': u'out', u'to_ip': u'0.0.0.0/0', u'interface': u'ethY', u'route': u'yes', u'from_ip': u'192.168.0.0/24', u'rule': u'allow'}) => { "ansible_loop_var": "item", "changed": true, "commands": [ "/usr/sbin/ufw status verbose", "/bin/grep -h '^### tuple' /lib/ufw/user.rules /lib/ufw/user6.rules /etc/ufw/user.rules /etc/ufw/user6.rules /var/lib/ufw/user.rules /var/lib/ufw/user6.rules", "/usr/sbin/ufw --version", "/usr/sbin/ufw route allow out on ethY from 192.168.0.0/24 to 0.0.0.0/0", "/usr/sbin/ufw status verbose", "/bin/grep -h '^### tuple' /lib/ufw/user.rules /lib/ufw/user6.rules /etc/ufw/user.rules /etc/ufw/user6.rules /var/lib/ufw/user.rules /var/lib/ufw/user6.rules" ], "invocation": { "module_args": { "app": null, "comment": null, "default": null, "delete": false, "direction": "out", "from_ip": "192.168.0.0/24", "from_port": null, "insert": null, "insert_relative_to": "zero", "interface": "ethY", "log": false, "logging": null, "proto": null, "route": true, "rule": "allow", "state": null, "to_ip": "0.0.0.0/0", "to_port": null } }, "item": { "direction": "out", "from_ip": "192.168.0.0/24", "interface": "ethY", "route": "yes", "rule": "allow", "to_ip": "0.0.0.0/0" }, "msg": "Status: active\nLogging: off\nDefault: deny (incoming), allow (outgoing), deny (routed)\nNew profiles: skip\n\nTo Action From\n-- ------ ----\n10.0.0.0/8 on ethY ALLOW FWD 10.254.33.0/24" } ``` ##### ACTUAL RESULTS We must choose which interface we want to specify as there is only one `interface` parameter. (In my case, I will specify the output interface preferably over the incoming interface and count on the address range to be selective enough but YMVV)
https://github.com/ansible/ansible/issues/63903
https://github.com/ansible/ansible/pull/65382
03dce68227cb5732ef463943cfb2bd0e09d5d4ed
a0b8b85fa5ab512f0ece4c660aba754fc85edc9e
2019-10-24T12:42:48Z
python
2019-12-02T07:01:44Z
test/integration/targets/ufw/tasks/tests/interface.yml
closed
ansible/ansible
https://github.com/ansible/ansible
63,903
ufw module does not support specifying both in & out interfaces for forwarding rules
##### SUMMARY Currently the module ufw does not allow to specify two network interfaces for forwarding rules (rule='allow' routed='yes'). What I'd like to achieve is this rule: `ufw route allow in on ethX out on ethY to 10.0.0.0/8 from 192.168.0.0/24` To my knowledge, the best one can do is pick one interface and decide to flag it with "in" or "out" traffic using the `direction` property. So the equivalent of either: `ufw route allow in on ethX to 10.0.0.0/8 from 192.168.0.0/24` or `ufw route allow out on ethY to 10.0.0.0/8 from 192.168.0.0/24` Is there a mechanism to provide both interfaces to the module? ##### ISSUE TYPE - Feature request ##### COMPONENT NAME module ufw ##### ANSIBLE VERSION ``` ansible 2.8.2 config file = /mnt/c/dev/sandboxes/ansible/ansible.cfg configured module search path = [u'/home/me/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /home/me/virtualenvs/ansible/local/lib/python2.7/site-packages/ansible executable location = /home/me/virtualenvs/ansible/bin/ansible python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609] ``` ##### CONFIGURATION ``` ANSIBLE_PIPELINING(/mnt/c/dev/sandboxes/ansible/ansible.cfg) = True DEFAULT_ROLES_PATH(/mnt/c/dev/sandboxes/ansible/ansible.cfg) = [u'/mnt/c/dev/sandboxes/ansible/.imported_roles', u'/mnt/c/dev/sandboxes/ansible/roles'] DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /home/me/.vault_pass.txt ``` ##### OS / ENVIRONMENT Ubuntu Linux 18.04 LTS ##### STEPS TO REPRODUCE Cannot really (re)produce, we just can't express it. There is a single parameter `interface` and we would need 2. (Or a convention to pass a tuple of interfaces into `interface`, comma separated?) ##### EXPECTED RESULTS Being able to specify an "in" interface and an "out" interface. In the following snippet, only the "out" interface can be mentioned. 
```yaml changed: [testvm1] => (item={u'direction': u'out', u'to_ip': u'0.0.0.0/0', u'interface': u'ethY', u'route': u'yes', u'from_ip': u'192.168.0.0/24', u'rule': u'allow'}) => { "ansible_loop_var": "item", "changed": true, "commands": [ "/usr/sbin/ufw status verbose", "/bin/grep -h '^### tuple' /lib/ufw/user.rules /lib/ufw/user6.rules /etc/ufw/user.rules /etc/ufw/user6.rules /var/lib/ufw/user.rules /var/lib/ufw/user6.rules", "/usr/sbin/ufw --version", "/usr/sbin/ufw route allow out on ethY from 192.168.0.0/24 to 0.0.0.0/0", "/usr/sbin/ufw status verbose", "/bin/grep -h '^### tuple' /lib/ufw/user.rules /lib/ufw/user6.rules /etc/ufw/user.rules /etc/ufw/user6.rules /var/lib/ufw/user.rules /var/lib/ufw/user6.rules" ], "invocation": { "module_args": { "app": null, "comment": null, "default": null, "delete": false, "direction": "out", "from_ip": "192.168.0.0/24", "from_port": null, "insert": null, "insert_relative_to": "zero", "interface": "ethY", "log": false, "logging": null, "proto": null, "route": true, "rule": "allow", "state": null, "to_ip": "0.0.0.0/0", "to_port": null } }, "item": { "direction": "out", "from_ip": "192.168.0.0/24", "interface": "ethY", "route": "yes", "rule": "allow", "to_ip": "0.0.0.0/0" }, "msg": "Status: active\nLogging: off\nDefault: deny (incoming), allow (outgoing), deny (routed)\nNew profiles: skip\n\nTo Action From\n-- ------ ----\n10.0.0.0/8 on ethY ALLOW FWD 10.254.33.0/24" } ``` ##### ACTUAL RESULTS We must choose which interface we want to specify as there is only one `interface` parameter. (In my case, I will specify the output interface preferably over the incoming interface and count on the address range to be selective enough but YMVV)
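One of the module's subtler behaviours is resolving `insert`/`insert_relative_to` against the numbered rule list from `ufw status numbered`. A standalone sketch of that resolution logic, assuming rules are given as `(number, is_ipv6)` tuples (this mirrors, but does not reuse, the module's code):

```python
def resolve_insert(index, relative_to, rules):
    # rules: list of (rule_number, is_ipv6); ufw numbers rules from 1,
    # with all IPv4 rules listed before all IPv6 rules.
    last = max((no for no, _ in rules), default=0)
    has_v4 = any(not v6 for _, v6 in rules)
    has_v6 = any(v6 for _, v6 in rules)
    if relative_to == "zero":
        return index
    if relative_to == "first-ipv4":
        base = 1
    elif relative_to == "last-ipv4":
        base = max(no for no, v6 in rules if not v6) if has_v4 else 1
    elif relative_to == "first-ipv6":
        base = max(no for no, v6 in rules if not v6) + 1 if has_v4 else 1
    else:  # "last-ipv6"
        base = last if has_v6 else last + 1
    return index + base

rules = [(1, False), (2, False), (3, True), (4, True)]
print(resolve_insert(0, "first-ipv6", rules))  # 3
print(resolve_insert(-1, "last-ipv4", rules))  # 1
```

So `insert: 0` with `insert_relative_to: first-ipv6` lands the new rule at the position of the first IPv6 rule, and negative offsets count backwards from the chosen anchor.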
https://github.com/ansible/ansible/issues/63903
https://github.com/ansible/ansible/pull/65382
03dce68227cb5732ef463943cfb2bd0e09d5d4ed
a0b8b85fa5ab512f0ece4c660aba754fc85edc9e
2019-10-24T12:42:48Z
python
2019-12-02T07:01:44Z
test/units/modules/system/test_ufw.py
from units.compat import unittest
from units.compat.mock import patch
from ansible.module_utils import basic
from ansible.module_utils._text import to_bytes
import ansible.modules.system.ufw as module
import json


# mock ufw messages

ufw_version_35 = """ufw 0.35\nCopyright 2008-2015 Canonical Ltd.\n"""

ufw_verbose_header = """Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----"""


ufw_status_verbose_with_port_7000 = ufw_verbose_header + """
7000/tcp                   ALLOW IN    Anywhere
7000/tcp (v6)              ALLOW IN    Anywhere (v6)
"""

user_rules_with_port_7000 = """### tuple ### allow tcp 7000 0.0.0.0/0 any 0.0.0.0/0 in
### tuple ### allow tcp 7000 ::/0 any ::/0 in
"""

user_rules_with_ipv6 = """### tuple ### allow udp 5353 0.0.0.0/0 any 224.0.0.251 in
### tuple ### allow udp 5353 ::/0 any ff02::fb in
"""

ufw_status_verbose_with_ipv6 = ufw_verbose_header + """
5353/udp                   ALLOW IN    224.0.0.251
5353/udp                   ALLOW IN    ff02::fb
"""

ufw_status_verbose_nothing = ufw_verbose_header

skippg_adding_existing_rules = "Skipping adding existing rule\nSkipping adding existing rule (v6)\n"

grep_config_cli = "grep -h '^### tuple' /lib/ufw/user.rules /lib/ufw/user6.rules /etc/ufw/user.rules /etc/ufw/user6.rules "
grep_config_cli += "/var/lib/ufw/user.rules /var/lib/ufw/user6.rules"

dry_mode_cmd_with_port_700 = {
    "ufw status verbose": ufw_status_verbose_with_port_7000,
    "ufw --version": ufw_version_35,
    "ufw --dry-run allow from any to any port 7000 proto tcp": skippg_adding_existing_rules,
    "ufw --dry-run delete allow from any to any port 7000 proto tcp": "",
    "ufw --dry-run delete allow from any to any port 7001 proto tcp": user_rules_with_port_7000,
    grep_config_cli: user_rules_with_port_7000
}

# setup configuration :
# ufw reset
# ufw enable
# ufw allow proto udp to any port 5353 from 224.0.0.251
# ufw allow proto udp to any port 5353 from ff02::fb
dry_mode_cmd_with_ipv6 = {
    "ufw status verbose": ufw_status_verbose_with_ipv6,
    "ufw --version": ufw_version_35,
    # CONTENT of the command sudo ufw --dry-run delete allow in from ff02::fb port 5353 proto udp | grep -E "^### tupple"
    "ufw --dry-run delete allow from ff02::fb to any port 5353 proto udp": "### tuple ### allow udp any ::/0 5353 ff02::fb in",
    grep_config_cli: user_rules_with_ipv6,
    "ufw --dry-run allow from ff02::fb to any port 5353 proto udp": skippg_adding_existing_rules,
    "ufw --dry-run allow from 224.0.0.252 to any port 5353 proto udp": """### tuple ### allow udp 5353 0.0.0.0/0 any 224.0.0.251 in
### tuple ### allow udp 5353 0.0.0.0/0 any 224.0.0.252 in
""",
    "ufw --dry-run allow from 10.0.0.0/24 to any port 1577 proto udp": "### tuple ### allow udp 1577 0.0.0.0/0 any 10.0.0.0/24 in"
}

dry_mode_cmd_nothing = {
    "ufw status verbose": ufw_status_verbose_nothing,
    "ufw --version": ufw_version_35,
    grep_config_cli: "",
    "ufw --dry-run allow from any to :: port 23": "### tuple ### allow any 23 :: any ::/0 in"
}


def do_nothing_func_nothing(*args, **kwarg):
    return 0, dry_mode_cmd_nothing[args[0]], ""


def do_nothing_func_ipv6(*args, **kwarg):
    return 0, dry_mode_cmd_with_ipv6[args[0]], ""


def do_nothing_func_port_7000(*args, **kwarg):
    return 0, dry_mode_cmd_with_port_700[args[0]], ""


def set_module_args(args):
    """prepare arguments so that they will be picked up during module creation"""
    args = json.dumps({'ANSIBLE_MODULE_ARGS': args})
    basic._ANSIBLE_ARGS = to_bytes(args)


class AnsibleExitJson(Exception):
    """Exception class to be raised by module.exit_json and caught by the test case"""
    pass


class AnsibleFailJson(Exception):
    """Exception class to be raised by module.fail_json and caught by the test case"""
    pass


def exit_json(*args, **kwargs):
    """function to patch over exit_json; package return data into an exception"""
    if 'changed' not in kwargs:
        kwargs['changed'] = False
    raise AnsibleExitJson(kwargs)


def fail_json(*args, **kwargs):
    """function to patch over fail_json; package return data into an exception"""
    kwargs['failed'] = True
    raise AnsibleFailJson(kwargs)


def get_bin_path(self, arg, required=False):
    """Mock AnsibleModule.get_bin_path"""
    return arg


class TestUFW(unittest.TestCase):

    def setUp(self):
        self.mock_module_helper = patch.multiple(basic.AnsibleModule,
                                                 exit_json=exit_json,
                                                 fail_json=fail_json,
                                                 get_bin_path=get_bin_path)
        self.mock_module_helper.start()
        self.addCleanup(self.mock_module_helper.stop)

    def test_filter_line_that_contains_ipv4(self):
        reg = module.compile_ipv4_regexp()

        self.assertTrue(reg.search("### tuple ### allow udp 5353 ::/0 any ff02::fb in") is None)
        self.assertTrue(reg.search("### tuple ### allow udp 5353 0.0.0.0/0 any 224.0.0.251 in") is not None)

        self.assertTrue(reg.match("ff02::fb") is None)
        self.assertTrue(reg.match("224.0.0.251") is not None)
        self.assertTrue(reg.match("10.0.0.0/8") is not None)
        self.assertTrue(reg.match("somethingElse") is None)
        self.assertTrue(reg.match("::") is None)
        self.assertTrue(reg.match("any") is None)

    def test_filter_line_that_contains_ipv6(self):
        reg = module.compile_ipv6_regexp()
        self.assertTrue(reg.search("### tuple ### allow udp 5353 ::/0 any ff02::fb in") is not None)
        self.assertTrue(reg.search("### tuple ### allow udp 5353 0.0.0.0/0 any 224.0.0.251 in") is None)
        self.assertTrue(reg.search("### tuple ### allow any 23 :: any ::/0 in") is not None)
        self.assertTrue(reg.match("ff02::fb") is not None)
        self.assertTrue(reg.match("224.0.0.251") is None)
        self.assertTrue(reg.match("::") is not None)

    def test_check_mode_add_rules(self):
        set_module_args({
            'rule': 'allow',
            'proto': 'tcp',
            'port': '7000',
            '_ansible_check_mode': True
        })
        result = self.__getResult(do_nothing_func_port_7000)
        self.assertFalse(result.exception.args[0]['changed'])

    def test_check_mode_delete_existing_rules(self):
        set_module_args({
            'rule': 'allow',
            'proto': 'tcp',
            'port': '7000',
            'delete': 'yes',
            '_ansible_check_mode': True,
        })
        self.assertTrue(self.__getResult(do_nothing_func_port_7000).exception.args[0]['changed'])

    def test_check_mode_delete_not_existing_rules(self):
        set_module_args({
            'rule': 'allow',
            'proto': 'tcp',
            'port': '7001',
            'delete': 'yes',
            '_ansible_check_mode': True,
        })
        self.assertFalse(self.__getResult(do_nothing_func_port_7000).exception.args[0]['changed'])

    def test_enable_mode(self):
        set_module_args({
            'state': 'enabled',
            '_ansible_check_mode': True
        })
        self.assertFalse(self.__getResult(do_nothing_func_port_7000).exception.args[0]['changed'])

    def test_disable_mode(self):
        set_module_args({
            'state': 'disabled',
            '_ansible_check_mode': True
        })
        self.assertTrue(self.__getResult(do_nothing_func_port_7000).exception.args[0]['changed'])

    def test_logging_off(self):
        set_module_args({
            'logging': 'off',
            '_ansible_check_mode': True
        })
        self.assertTrue(self.__getResult(do_nothing_func_port_7000).exception.args[0]['changed'])

    def test_logging_on(self):
        set_module_args({
            'logging': 'on',
            '_ansible_check_mode': True
        })
        self.assertFalse(self.__getResult(do_nothing_func_port_7000).exception.args[0]['changed'])

    def test_default_changed(self):
        set_module_args({
            'default': 'allow',
            "direction": "incoming",
            '_ansible_check_mode': True
        })
        self.assertTrue(self.__getResult(do_nothing_func_port_7000).exception.args[0]['changed'])

    def test_default_not_changed(self):
        set_module_args({
            'default': 'deny',
            "direction": "incoming",
            '_ansible_check_mode': True
        })
        self.assertFalse(self.__getResult(do_nothing_func_port_7000).exception.args[0]['changed'])

    def test_ipv6_remove(self):
        set_module_args({
            'rule': 'allow',
            'proto': 'udp',
            'port': '5353',
            'from': 'ff02::fb',
            'delete': 'yes',
            '_ansible_check_mode': True,
        })
        self.assertTrue(self.__getResult(do_nothing_func_ipv6).exception.args[0]['changed'])

    def test_ipv6_add_existing(self):
        set_module_args({
            'rule': 'allow',
            'proto': 'udp',
            'port': '5353',
            'from': 'ff02::fb',
            '_ansible_check_mode': True,
        })
        self.assertFalse(self.__getResult(do_nothing_func_ipv6).exception.args[0]['changed'])

    def test_add_not_existing_ipv4_submask(self):
        set_module_args({
            'rule': 'allow',
            'proto': 'udp',
            'port': '1577',
            'from': '10.0.0.0/24',
            '_ansible_check_mode': True,
        })
        self.assertTrue(self.__getResult(do_nothing_func_ipv6).exception.args[0]['changed'])

    def test_ipv4_add_with_existing_ipv6(self):
        set_module_args({
            'rule': 'allow',
            'proto': 'udp',
            'port': '5353',
            'from': '224.0.0.252',
            '_ansible_check_mode': True,
        })
        self.assertTrue(self.__getResult(do_nothing_func_ipv6).exception.args[0]['changed'])

    def test_ipv6_add_from_nothing(self):
        set_module_args({
            'rule': 'allow',
            'port': '23',
            'to': '::',
            '_ansible_check_mode': True,
        })
        result = self.__getResult(do_nothing_func_nothing).exception.args[0]
        print(result)
        self.assertTrue(result['changed'])

    def __getResult(self, cmd_fun):
        with patch.object(basic.AnsibleModule, 'run_command') as mock_run_command:
            mock_run_command.side_effect = cmd_fun
            with self.assertRaises(AnsibleExitJson) as result:
                module.main()
        return result
closed
ansible/ansible
https://github.com/ansible/ansible
63,910
openssh_keypair handles encrypted key files poorly
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
`openssh_keypair` handles encrypted key files poorly.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
openssh_keypair

##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
  config file = /home/thom/git/dotfiles-ansible/ansible.cfg
  configured module search path = ['/home/thom/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.7.4 (default, Oct 4 2019, 06:57:26) [GCC 9.2.0]
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(/home/thom/git/dotfiles-ansible/ansible.cfg) = True
DEFAULT_HOST_LIST(/home/thom/git/dotfiles-ansible/ansible.cfg) = ['/home/thom/git/dotfiles-ansible/hosts']
DEFAULT_MANAGED_STR(/home/thom/git/dotfiles-ansible/ansible.cfg) = This file is managed by Ansible.%n template: {file} date: %Y-%m-%d %H:%M:%S user: {uid} host: {host}
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Arch Linux
```OpenSSH_8.1p1, OpenSSL 1.1.1d 10 Sep 2019```

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Obtain keyfile from this machine
  openssh_keypair:
    path: "/tmp/sshkey"
    type: ed25519
```
I'm using ed25519 because it's independent of `size`; not sure if it also happens with eg. RSA.

0. Generate ``/tmp/sshkey`` using ``ssh-keygen -t ed25519 -f /tmp/sshkey -N password``
1. Run the above playbook, observe it marks it as `changed`.
2. Observe ``/tmp/sshkey.pub`` is now empty.
3. Run it again, it's `changed` again
4. It has now replaced the SSH key and the public key is now correct.

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
1. Either give me an error about encrypted `.ssh` files already existing or properly replace it.
https://github.com/ansible/ansible/issues/63910
https://github.com/ansible/ansible/pull/64436
a0b8b85fa5ab512f0ece4c660aba754fc85edc9e
da73bbd73c94b6bd5cc459f2e813b11014e44a7e
2019-10-24T15:17:25Z
python
2019-12-02T07:12:38Z
changelogs/fragments/64436-openssh_keypair-add-password-protected-key-check.yml
closed
ansible/ansible
https://github.com/ansible/ansible
63,910
openssh_keypair handles encrypted key files poorly
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
`openssh_keypair` handles encrypted key files poorly.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
openssh_keypair

##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
  config file = /home/thom/git/dotfiles-ansible/ansible.cfg
  configured module search path = ['/home/thom/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.7.4 (default, Oct 4 2019, 06:57:26) [GCC 9.2.0]
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(/home/thom/git/dotfiles-ansible/ansible.cfg) = True
DEFAULT_HOST_LIST(/home/thom/git/dotfiles-ansible/ansible.cfg) = ['/home/thom/git/dotfiles-ansible/hosts']
DEFAULT_MANAGED_STR(/home/thom/git/dotfiles-ansible/ansible.cfg) = This file is managed by Ansible.%n template: {file} date: %Y-%m-%d %H:%M:%S user: {uid} host: {host}
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Arch Linux
```OpenSSH_8.1p1, OpenSSL 1.1.1d 10 Sep 2019```

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Obtain keyfile from this machine
  openssh_keypair:
    path: "/tmp/sshkey"
    type: ed25519
```
I'm using ed25519 because it's independent of `size`; not sure if it also happens with eg. RSA.

0. Generate ``/tmp/sshkey`` using ``ssh-keygen -t ed25519 -f /tmp/sshkey -N password``
1. Run the above playbook, observe it marks it as `changed`.
2. Observe ``/tmp/sshkey.pub`` is now empty.
3. Run it again, it's `changed` again
4. It has now replaced the SSH key and the public key is now correct.

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
1. Either give me an error about encrypted `.ssh` files already existing or properly replace it.
https://github.com/ansible/ansible/issues/63910
https://github.com/ansible/ansible/pull/64436
a0b8b85fa5ab512f0ece4c660aba754fc85edc9e
da73bbd73c94b6bd5cc459f2e813b11014e44a7e
2019-10-24T15:17:25Z
python
2019-12-02T07:12:38Z
lib/ansible/modules/crypto/openssh_keypair.py
#!/usr/bin/python
# -*- coding: utf-8 -*-

# (c) 2018, David Kainz <[email protected]> <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function
__metaclass__ = type

ANSIBLE_METADATA = {
    'metadata_version': '1.1',
    'status': ['preview'],
    'supported_by': 'community'
}

DOCUMENTATION = '''
---
module: openssh_keypair
author: "David Kainz (@lolcube)"
version_added: "2.8"
short_description: Generate OpenSSH private and public keys.
description:
    - "This module allows one to (re)generate OpenSSH private and public keys. It uses
      ssh-keygen to generate keys. One can generate C(rsa), C(dsa), C(rsa1), C(ed25519)
      or C(ecdsa) private keys."
requirements:
    - "ssh-keygen"
options:
    state:
        description:
            - Whether the private and public keys should exist or not, taking action if the state is different from what is stated.
        type: str
        default: present
        choices: [ present, absent ]
    size:
        description:
            - "Specifies the number of bits in the private key to create. For RSA keys, the minimum size is 1024 bits
              and the default is 4096 bits. Generally, 2048 bits is considered sufficient. DSA keys must be exactly 1024 bits
              as specified by FIPS 186-2. For ECDSA keys, size determines the key length by selecting from one of three
              elliptic curve sizes: 256, 384 or 521 bits. Attempting to use bit lengths other than these three values for
              ECDSA keys will cause this module to fail. Ed25519 keys have a fixed length and the size will be ignored."
        type: int
    type:
        description:
            - "The algorithm used to generate the SSH private key. C(rsa1) is for protocol version 1.
              C(rsa1) is deprecated and may not be supported by every version of ssh-keygen."
        type: str
        default: rsa
        choices: ['rsa', 'dsa', 'rsa1', 'ecdsa', 'ed25519']
    force:
        description:
            - Should the key be regenerated even if it already exists
        type: bool
        default: false
    path:
        description:
            - Name of the files containing the public and private key. The file containing the public key will have the extension C(.pub).
        type: path
        required: true
    comment:
        description:
            - Provides a new comment to the public key. When checking if the key is in the correct state this will be ignored.
        type: str
        version_added: "2.9"

extends_documentation_fragment: files
'''

EXAMPLES = '''
# Generate an OpenSSH keypair with the default values (4096 bits, rsa)
- openssh_keypair:
    path: /tmp/id_ssh_rsa

# Generate an OpenSSH rsa keypair with a different size (2048 bits)
- openssh_keypair:
    path: /tmp/id_ssh_rsa
    size: 2048

# Force regenerate an OpenSSH keypair if it already exists
- openssh_keypair:
    path: /tmp/id_ssh_rsa
    force: True

# Generate an OpenSSH keypair with a different algorithm (dsa)
- openssh_keypair:
    path: /tmp/id_ssh_dsa
    type: dsa
'''

RETURN = '''
size:
    description: Size (in bits) of the SSH private key
    returned: changed or success
    type: int
    sample: 4096
type:
    description: Algorithm used to generate the SSH private key
    returned: changed or success
    type: str
    sample: rsa
filename:
    description: Path to the generated SSH private key file
    returned: changed or success
    type: str
    sample: /tmp/id_ssh_rsa
fingerprint:
    description: The fingerprint of the key.
    returned: changed or success
    type: str
    sample: SHA256:r4YCZxihVjedH2OlfjVGI6Y5xAYtdCwk8VxKyzVyYfM
public_key:
    description: The public key of the generated SSH private key
    returned: changed or success
    type: str
    sample: ssh-rsa AAAAB3Nza(...omitted...)veL4E3Xcw== test_key
comment:
    description: The comment of the generated key
    returned: changed or success
    type: str
    sample: test@comment
'''

import os
import stat
import errno

from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native


class KeypairError(Exception):
    pass


class Keypair(object):

    def __init__(self, module):
        self.path = module.params['path']
        self.state = module.params['state']
        self.force = module.params['force']
        self.size = module.params['size']
        self.type = module.params['type']
        self.comment = module.params['comment']
        self.changed = False
        self.check_mode = module.check_mode
        self.privatekey = None
        self.fingerprint = {}
        self.public_key = {}

        if self.type in ('rsa', 'rsa1'):
            self.size = 4096 if self.size is None else self.size
            if self.size < 1024:
                module.fail_json(msg=('For RSA keys, the minimum size is 1024 bits and the default is 4096 bits. '
                                      'Attempting to use bit lengths under 1024 will cause the module to fail.'))

        if self.type == 'dsa':
            self.size = 1024 if self.size is None else self.size
            if self.size != 1024:
                module.fail_json(msg=('DSA keys must be exactly 1024 bits as specified by FIPS 186-2.'))

        if self.type == 'ecdsa':
            self.size = 256 if self.size is None else self.size
            if self.size not in (256, 384, 521):
                module.fail_json(msg=('For ECDSA keys, size determines the key length by selecting from '
                                      'one of three elliptic curve sizes: 256, 384 or 521 bits. '
                                      'Attempting to use bit lengths other than these three values for '
                                      'ECDSA keys will cause this module to fail. '))

        if self.type == 'ed25519':
            self.size = 256

    def generate(self, module):
        # generate a keypair
        if not self.isPrivateKeyValid(module, perms_required=False) or self.force:
            args = [
                module.get_bin_path('ssh-keygen', True),
                '-q', '-N', '', '-b', str(self.size),
                '-t', self.type, '-f', self.path,
            ]

            if self.comment:
                args.extend(['-C', self.comment])
            else:
                args.extend(['-C', ""])

            try:
                if os.path.exists(self.path) and not os.access(self.path, os.W_OK):
                    os.chmod(self.path, stat.S_IWUSR + stat.S_IRUSR)
                self.changed = True
                stdin_data = None
                if os.path.exists(self.path):
                    stdin_data = 'y'
                module.run_command(args, data=stdin_data)
                proc = module.run_command([module.get_bin_path('ssh-keygen', True), '-lf', self.path])
                self.fingerprint = proc[1].split()
                pubkey = module.run_command([module.get_bin_path('ssh-keygen', True), '-yf', self.path])
                self.public_key = pubkey[1].strip('\n')
            except Exception as e:
                self.remove()
                module.fail_json(msg="%s" % to_native(e))

        elif not self.isPublicKeyValid(module, perms_required=False):
            pubkey = module.run_command([module.get_bin_path('ssh-keygen', True), '-yf', self.path])
            pubkey = pubkey[1].strip('\n')
            try:
                self.changed = True
                with open(self.path + ".pub", "w") as pubkey_f:
                    pubkey_f.write(pubkey + '\n')
                os.chmod(self.path + ".pub", stat.S_IWUSR + stat.S_IRUSR + stat.S_IRGRP + stat.S_IROTH)
            except IOError:
                module.fail_json(
                    msg='The public key is missing or does not match the private key. '
                        'Unable to regenerate the public key.')
            self.public_key = pubkey

            if self.comment:
                try:
                    if os.path.exists(self.path) and not os.access(self.path, os.W_OK):
                        os.chmod(self.path, stat.S_IWUSR + stat.S_IRUSR)
                    args = [module.get_bin_path('ssh-keygen', True),
                            '-q', '-o', '-c', '-C', self.comment, '-f', self.path]
                    module.run_command(args)
                except IOError:
                    module.fail_json(
                        msg='Unable to update the comment for the public key.')

        file_args = module.load_file_common_arguments(module.params)
        if module.set_fs_attributes_if_different(file_args, False):
            self.changed = True
        file_args['path'] = file_args['path'] + '.pub'
        if module.set_fs_attributes_if_different(file_args, False):
            self.changed = True

    def isPrivateKeyValid(self, module, perms_required=True):
        # check if the key is correct
        def _check_state():
            return os.path.exists(self.path)

        if _check_state():
            proc = module.run_command([module.get_bin_path('ssh-keygen', True), '-lf', self.path], check_rc=False)
            if not proc[0] == 0:
                if os.path.isdir(self.path):
                    module.fail_json(msg='%s is a directory. Please specify a path to a file.' % (self.path))
                return False

            fingerprint = proc[1].split()
            keysize = int(fingerprint[0])
            keytype = fingerprint[-1][1:-1].lower()
        else:
            return False

        def _check_perms(module):
            file_args = module.load_file_common_arguments(module.params)
            return not module.set_fs_attributes_if_different(file_args, False)

        def _check_type():
            return self.type == keytype

        def _check_size():
            return self.size == keysize

        self.fingerprint = fingerprint

        if not perms_required:
            return _check_state() and _check_type() and _check_size()

        return _check_state() and _check_perms(module) and _check_type() and _check_size()

    def isPublicKeyValid(self, module, perms_required=True):

        def _get_pubkey_content():
            if os.path.exists(self.path + ".pub"):
                with open(self.path + ".pub", "r") as pubkey_f:
                    present_pubkey = pubkey_f.read().strip(' \n')
                return present_pubkey
            else:
                return False

        def _parse_pubkey(pubkey_content):
            if pubkey_content:
                parts = pubkey_content.split(' ', 2)
                return parts[0], parts[1], '' if len(parts) <= 2 else parts[2]
            return False

        def _pubkey_valid(pubkey):
            if pubkey_parts:
                return pubkey_parts[:2] == _parse_pubkey(pubkey)[:2]
            return False

        def _comment_valid():
            if pubkey_parts:
                return pubkey_parts[2] == self.comment
            return False

        def _check_perms(module):
            file_args = module.load_file_common_arguments(module.params)
            file_args['path'] = file_args['path'] + '.pub'
            return not module.set_fs_attributes_if_different(file_args, False)

        pubkey = module.run_command([module.get_bin_path('ssh-keygen', True), '-yf', self.path])
        pubkey = pubkey[1].strip('\n')
        pubkey_parts = _parse_pubkey(_get_pubkey_content())

        if _pubkey_valid(pubkey):
            self.public_key = pubkey

        if not self.comment:
            return _pubkey_valid(pubkey)

        if not perms_required:
            return _pubkey_valid(pubkey) and _comment_valid()

        return _pubkey_valid(pubkey) and _comment_valid() and _check_perms(module)

    def dump(self):
        # return result as a dict
        """Serialize the object into a dictionary."""
        result = {
            'changed': self.changed,
            'size': self.size,
            'type': self.type,
            'filename': self.path,
            # On removal this has no value
            'fingerprint': self.fingerprint[1] if self.fingerprint else '',
            'public_key': self.public_key,
            'comment': self.comment if self.comment else '',
        }

        return result

    def remove(self):
        """Remove the resource from the filesystem."""
        try:
            os.remove(self.path)
            self.changed = True
        except OSError as exc:
            if exc.errno != errno.ENOENT:
                raise KeypairError(exc)
            else:
                pass

        if os.path.exists(self.path + ".pub"):
            try:
                os.remove(self.path + ".pub")
                self.changed = True
            except OSError as exc:
                if exc.errno != errno.ENOENT:
                    raise KeypairError(exc)
                else:
                    pass


def main():

    # Define Ansible Module
    module = AnsibleModule(
        argument_spec=dict(
            state=dict(type='str', default='present', choices=['present', 'absent']),
            size=dict(type='int'),
            type=dict(type='str', default='rsa', choices=['rsa', 'dsa', 'rsa1', 'ecdsa', 'ed25519']),
            force=dict(type='bool', default=False),
            path=dict(type='path', required=True),
            comment=dict(type='str'),
        ),
        supports_check_mode=True,
        add_file_common_args=True,
    )

    # Check if Path exists
    base_dir = os.path.dirname(module.params['path']) or '.'
    if not os.path.isdir(base_dir):
        module.fail_json(
            name=base_dir,
            msg='The directory %s does not exist or the file is not a directory' % base_dir
        )

    keypair = Keypair(module)

    if keypair.state == 'present':

        if module.check_mode:
            result = keypair.dump()
            result['changed'] = module.params['force'] or not keypair.isPrivateKeyValid(module) or not keypair.isPublicKeyValid(module)
            module.exit_json(**result)

        try:
            keypair.generate(module)
        except Exception as exc:
            module.fail_json(msg=to_native(exc))
    else:

        if module.check_mode:
            keypair.changed = os.path.exists(module.params['path'])
            if keypair.changed:
                keypair.fingerprint = {}
            result = keypair.dump()
            module.exit_json(**result)

        try:
            keypair.remove()
        except Exception as exc:
            module.fail_json(msg=to_native(exc))

    result = keypair.dump()
    module.exit_json(**result)


if __name__ == '__main__':
    main()
closed
ansible/ansible
https://github.com/ansible/ansible
63,910
openssh_keypair handles encrypted key files poorly
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
`openssh_keypair` handles encrypted key files poorly.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
openssh_keypair

##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
  config file = /home/thom/git/dotfiles-ansible/ansible.cfg
  configured module search path = ['/home/thom/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.7.4 (default, Oct 4 2019, 06:57:26) [GCC 9.2.0]
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(/home/thom/git/dotfiles-ansible/ansible.cfg) = True
DEFAULT_HOST_LIST(/home/thom/git/dotfiles-ansible/ansible.cfg) = ['/home/thom/git/dotfiles-ansible/hosts']
DEFAULT_MANAGED_STR(/home/thom/git/dotfiles-ansible/ansible.cfg) = This file is managed by Ansible.%n template: {file} date: %Y-%m-%d %H:%M:%S user: {uid} host: {host}
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Arch Linux
```OpenSSH_8.1p1, OpenSSL 1.1.1d 10 Sep 2019```

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Obtain keyfile from this machine
  openssh_keypair:
    path: "/tmp/sshkey"
    type: ed25519
```
I'm using ed25519 because it's independent of `size`; not sure if it also happens with eg. RSA.

0. Generate ``/tmp/sshkey`` using ``ssh-keygen -t ed25519 -f /tmp/sshkey -N password``
1. Run the above playbook, observe it marks it as `changed`.
2. Observe ``/tmp/sshkey.pub`` is now empty.
3. Run it again, it's `changed` again
4. It has now replaced the SSH key and the public key is now correct.

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
1. Either give me an error about encrypted `.ssh` files already existing or properly replace it.
https://github.com/ansible/ansible/issues/63910
https://github.com/ansible/ansible/pull/64436
a0b8b85fa5ab512f0ece4c660aba754fc85edc9e
da73bbd73c94b6bd5cc459f2e813b11014e44a7e
2019-10-24T15:17:25Z
python
2019-12-02T07:12:38Z
test/integration/targets/openssh_keypair/tasks/main.yml
- name: Generate privatekey1 - standard
  openssh_keypair:
    path: '{{ output_dir }}/privatekey1'
  register: privatekey1_result

- name: Generate privatekey1 - standard (idempotent)
  openssh_keypair:
    path: '{{ output_dir }}/privatekey1'
  register: privatekey1_idem_result

- name: Generate privatekey2 - size 2048
  openssh_keypair:
    path: '{{ output_dir }}/privatekey2'
    size: 2048

- name: Generate privatekey3 - type dsa
  openssh_keypair:
    path: '{{ output_dir }}/privatekey3'
    type: dsa

- name: Generate privatekey4 - standard
  openssh_keypair:
    path: '{{ output_dir }}/privatekey4'

- name: Delete privatekey4 - standard
  openssh_keypair:
    state: absent
    path: '{{ output_dir }}/privatekey4'

- name: Generate privatekey5 - standard
  openssh_keypair:
    path: '{{ output_dir }}/privatekey5'
  register: publickey_gen

- name: Generate privatekey6
  openssh_keypair:
    path: '{{ output_dir }}/privatekey6'
    type: rsa

- name: Regenerate privatekey6 via force
  openssh_keypair:
    path: '{{ output_dir }}/privatekey6'
    type: rsa
    force: yes
  register: output_regenerated_via_force

- name: Create broken key
  copy:
    dest: '{{ item }}'
    content: ''
    mode: '0700'
  loop:
    - '{{ output_dir }}/privatekeybroken'
    - '{{ output_dir }}/privatekeybroken.pub'

- name: Regenerate broken key
  openssh_keypair:
    path: '{{ output_dir }}/privatekeybroken'
    type: rsa
  register: output_broken

- name: Generate read-only private key
  openssh_keypair:
    path: '{{ output_dir }}/privatekeyreadonly'
    type: rsa
    mode: '0200'

- name: Regenerate read-only private key via force
  openssh_keypair:
    path: '{{ output_dir }}/privatekeyreadonly'
    type: rsa
    force: yes
  register: output_read_only

- name: Generate privatekey7 - standard with comment
  openssh_keypair:
    path: '{{ output_dir }}/privatekey7'
    comment: 'test@privatekey7'
  register: privatekey7_result

- name: Modify privatekey7 comment
  openssh_keypair:
    path: '{{ output_dir }}/privatekey7'
    comment: 'test_modified@privatekey7'
  register: privatekey7_modified_result

- import_tasks: ../tests/validate.yml
closed
ansible/ansible
https://github.com/ansible/ansible
63,910
openssh_keypair handles encrypted key files poorly
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
`openssh_keypair` handles encrypted key files poorly.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
openssh_keypair

##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
  config file = /home/thom/git/dotfiles-ansible/ansible.cfg
  configured module search path = ['/home/thom/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.7.4 (default, Oct 4 2019, 06:57:26) [GCC 9.2.0]
```

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(/home/thom/git/dotfiles-ansible/ansible.cfg) = True
DEFAULT_HOST_LIST(/home/thom/git/dotfiles-ansible/ansible.cfg) = ['/home/thom/git/dotfiles-ansible/hosts']
DEFAULT_MANAGED_STR(/home/thom/git/dotfiles-ansible/ansible.cfg) = This file is managed by Ansible.%n template: {file} date: %Y-%m-%d %H:%M:%S user: {uid} host: {host}
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Arch Linux
```OpenSSH_8.1p1, OpenSSL 1.1.1d 10 Sep 2019```

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Obtain keyfile from this machine
  openssh_keypair:
    path: "/tmp/sshkey"
    type: ed25519
```
I'm using ed25519 because it's independent of `size`; not sure if it also happens with eg. RSA.

0. Generate ``/tmp/sshkey`` using ``ssh-keygen -t ed25519 -f /tmp/sshkey -N password``
1. Run the above playbook, observe it marks it as `changed`.
2. Observe ``/tmp/sshkey.pub`` is now empty.
3. Run it again, it's `changed` again
4. It has now replaced the SSH key and the public key is now correct.

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
1. Either give me an error about encrypted `.ssh` files already existing or properly replace it.
https://github.com/ansible/ansible/issues/63910
https://github.com/ansible/ansible/pull/64436
a0b8b85fa5ab512f0ece4c660aba754fc85edc9e
da73bbd73c94b6bd5cc459f2e813b11014e44a7e
2019-10-24T15:17:25Z
python
2019-12-02T07:12:38Z
test/integration/targets/openssh_keypair/tests/validate.yml
---
- name: Log privatekey1 return values
  debug:
    var: privatekey1_result

- name: Validate privatekey1 return fingerprint
  assert:
    that:
      - privatekey1_result["fingerprint"] is string
      - privatekey1_result["fingerprint"].startswith("SHA256:")
  # only distro old enough that it still gives md5 with no prefix
  when: ansible_distribution != 'CentOS' and ansible_distribution_major_version != '6'

- name: Validate privatekey1 return public_key
  assert:
    that:
      - privatekey1_result["public_key"] is string
      - privatekey1_result["public_key"].startswith("ssh-rsa ")

- name: Validate privatekey1 return size value
  assert:
    that:
      - privatekey1_result["size"]|type_debug == 'int'
      - privatekey1_result["size"] == 4096

- name: Validate privatekey1 return key type
  assert:
    that:
      - privatekey1_result["type"] is string
      - privatekey1_result["type"] == "rsa"

- name: Validate privatekey1 (test - RSA key with size 4096 bits)
  shell: "ssh-keygen -lf {{ output_dir }}/privatekey1 | grep -o -E '^[0-9]+'"
  register: privatekey1

- name: Validate privatekey1 (assert - RSA key with size 4096 bits)
  assert:
    that:
      - privatekey1.stdout == '4096'

- name: Validate privatekey1 idempotence
  assert:
    that:
      - privatekey1_idem_result is not changed

- name: Validate privatekey2 (test - RSA key with size 2048 bits)
  shell: "ssh-keygen -lf {{ output_dir }}/privatekey2 | grep -o -E '^[0-9]+'"
  register: privatekey2

- name: Validate privatekey2 (assert - RSA key with size 2048 bits)
  assert:
    that:
      - privatekey2.stdout == '2048'

- name: Validate privatekey3 (test - DSA key with size 1024 bits)
  shell: "ssh-keygen -lf {{ output_dir }}/privatekey3 | grep -o -E '^[0-9]+'"
  register: privatekey3

- name: Validate privatekey3 (assert - DSA key with size 4096 bits)
  assert:
    that:
      - privatekey3.stdout == '1024'

- name: Validate privatekey4 (test - Ensure key has been removed)
  stat:
    path: '{{ output_dir }}/privatekey4'
  register: privatekey4

- name: Validate privatekey4 (assert - Ensure key has been removed)
  assert:
    that:
      - privatekey4.stat.exists == False

- name: Validate privatekey5 (assert - Public key module output equal to the public key on host)
  assert:
    that:
      - "publickey_gen.public_key == lookup('file', output_dir ~ '/privatekey5.pub').strip('\n')"

- name: Verify that privatekey6 will be regenerated via force
  assert:
    that:
      - output_regenerated_via_force is changed

- name: Verify that broken key will be regenerated
  assert:
    that:
      - output_broken is changed

- name: Verify that read-only key will be regenerated
  assert:
    that:
      - output_read_only is changed

- name: Validate privatekey7 (assert - Public key remains the same after comment change)
  assert:
    that:
      - privatekey7_result.public_key == privatekey7_modified_result.public_key

- name: Validate privatekey7 comment on creation
  assert:
    that:
      - privatekey7_result.comment == 'test@privatekey7'

- name: Validate privatekey7 comment update
  assert:
    that:
      - privatekey7_modified_result.comment == 'test_modified@privatekey7'
closed
ansible/ansible
https://github.com/ansible/ansible
65,109
"ansible-galaxy collection install" fails with URL as parameter
##### SUMMARY Ansible galaxy collection installation in command line fails when parameter is an URL. ```Shell ansible-galaxy collection install http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz Process install dependency map ERROR! Invalid collection name 'http', name must be in the format <namespace>.<collection>. ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME **ansible-galaxy** *collection_name* as url parameter should be supported according the documentation: ```Shell ansible-galaxy collection install -h positional arguments: collection_name The collection(s) name or path/url to a tar.gz collection artifact. This is mutually exclusive with --requirements-file. ``` ##### ANSIBLE VERSION ``` ansible 2.9.1 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /tmp/venv-project/local/lib/python2.7/site-packages/ansible executable location = /tmp/venv-project/bin/ansible python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0] ``` ##### CONFIGURATION ``` DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /home/ubuntu/.ansible/.vault-dev.txt HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False ``` ##### OS / ENVIRONMENT OS : Ubuntu. Ansible usage via native *apt* package or *pip*. 
##### STEPS TO REPRODUCE Create some ansible collection and publish/upload it on some http endpoint: `http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz` Installation via local file is working fine (=> collection package is correct): ```Shell rm -rf ~/.ansible/collections/ wget http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz ansible-galaxy collection install my_namespace-my_collection-0.1.tar.gz Process install dependency map Starting collection install process Installing 'my_namespace-my_collection:0.1-SNAPSHOT' to '/home/ubuntu/.ansible/collections/ansible_collections/my_namespace/my_collection' ``` Installation via [requirement file](https://docs.ansible.com/ansible/latest/galaxy/user_guide.html#install-multiple-collections-with-a-requirements-file) (like #61680) is working fine too: With `requirements.yml`: ```YAML --- collections: - name: http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz ``` ```Shell rm -rf ~/.ansible/collections/ ansible-galaxy collection install -r requirements.yml Process install dependency map Starting collection install process Installing 'my_namespace-my_collection:0.1-SNAPSHOT' to '/home/ubuntu/.ansible/collections/ansible_collections/my_namespace/my_collection' ``` But using command line installation fails: ```Shell rm -rf ~/.ansible/collections/ ansible-galaxy collection install http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz Process install dependency map ERROR! Invalid collection name 'http', name must be in the format <namespace>.<collection>. ``` ##### EXPECTED RESULTS ```Shell Installing 'my_namespace-my_collection:0.1-SNAPSHOT' to '/home/ubuntu/.ansible/collections/ansible_collections/my_namespace/my_collection' ``` ##### ACTUAL RESULTS ```Shell ERROR! Invalid collection name 'http', name must be in the format <namespace>.<collection>. ```
https://github.com/ansible/ansible/issues/65109
https://github.com/ansible/ansible/pull/65272
41472ee3878be215af8b933b2b04b4a72b9165ca
694ef5660d45fcb97c9beea5b2750f6eadcf5e93
2019-11-20T12:51:12Z
python
2019-12-02T18:55:31Z
changelogs/fragments/collection-install-url.yaml
closed
ansible/ansible
https://github.com/ansible/ansible
65,109
"ansible-galaxy collection install" fails with URL as parameter
##### SUMMARY Ansible galaxy collection installation in command line fails when parameter is an URL. ```Shell ansible-galaxy collection install http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz Process install dependency map ERROR! Invalid collection name 'http', name must be in the format <namespace>.<collection>. ``` ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME **ansible-galaxy** *collection_name* as url parameter should be supported according the documentation: ```Shell ansible-galaxy collection install -h positional arguments: collection_name The collection(s) name or path/url to a tar.gz collection artifact. This is mutually exclusive with --requirements-file. ``` ##### ANSIBLE VERSION ``` ansible 2.9.1 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /tmp/venv-project/local/lib/python2.7/site-packages/ansible executable location = /tmp/venv-project/bin/ansible python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0] ``` ##### CONFIGURATION ``` DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /home/ubuntu/.ansible/.vault-dev.txt HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False ``` ##### OS / ENVIRONMENT OS : Ubuntu. Ansible usage via native *apt* package or *pip*. 
##### STEPS TO REPRODUCE Create some ansible collection and publish/upload it on some http endpoint: `http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz` Installation via local file is working fine (=> collection package is correct): ```Shell rm -rf ~/.ansible/collections/ wget http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz ansible-galaxy collection install my_namespace-my_collection-0.1.tar.gz Process install dependency map Starting collection install process Installing 'my_namespace-my_collection:0.1-SNAPSHOT' to '/home/ubuntu/.ansible/collections/ansible_collections/my_namespace/my_collection' ``` Installation via [requirement file](https://docs.ansible.com/ansible/latest/galaxy/user_guide.html#install-multiple-collections-with-a-requirements-file) (like #61680) is working fine too: With `requirements.yml`: ```YAML --- collections: - name: http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz ``` ```Shell rm -rf ~/.ansible/collections/ ansible-galaxy collection install -r requirements.yml Process install dependency map Starting collection install process Installing 'my_namespace-my_collection:0.1-SNAPSHOT' to '/home/ubuntu/.ansible/collections/ansible_collections/my_namespace/my_collection' ``` But using command line installation fails: ```Shell rm -rf ~/.ansible/collections/ ansible-galaxy collection install http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz Process install dependency map ERROR! Invalid collection name 'http', name must be in the format <namespace>.<collection>. ``` ##### EXPECTED RESULTS ```Shell Installing 'my_namespace-my_collection:0.1-SNAPSHOT' to '/home/ubuntu/.ansible/collections/ansible_collections/my_namespace/my_collection' ``` ##### ACTUAL RESULTS ```Shell ERROR! Invalid collection name 'http', name must be in the format <namespace>.<collection>. ```
https://github.com/ansible/ansible/issues/65109
https://github.com/ansible/ansible/pull/65272
41472ee3878be215af8b933b2b04b4a72b9165ca
694ef5660d45fcb97c9beea5b2750f6eadcf5e93
2019-11-20T12:51:12Z
python
2019-12-02T18:55:31Z
lib/ansible/cli/galaxy.py
# Copyright: (c) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import os.path
import re
import shutil
import textwrap
import time
import yaml

from jinja2 import BaseLoader, Environment, FileSystemLoader
from yaml.error import YAMLError

import ansible.constants as C
from ansible import context
from ansible.cli import CLI
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import build_collection, install_collections, publish_collection, \
    validate_collection_name
from ansible.galaxy.login import GalaxyLogin
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken, NoTokenSentinel
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.playbook.role.requirement import RoleRequirement
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink

display = Display()


class GalaxyCLI(CLI):
    '''command to manage Ansible roles in shared repositories, the default of which is Ansible Galaxy *https://galaxy.ansible.com*.'''

    SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url")

    def __init__(self, args):
        # Inject role into sys.argv[1] as a backwards compatibility step
        if len(args) > 1 and args[1] not in ['-h', '--help', '--version'] and 'role' not in args and 'collection' not in args:
            # TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
            # Remove this in Ansible 2.13 when we also remove -v as an option on the root parser for ansible-galaxy.
            idx = 2 if args[1].startswith('-v') else 1
            args.insert(idx, 'role')

        self.api_servers = []
        self.galaxy = None
        super(GalaxyCLI, self).__init__(args)

    def init_parser(self):
        ''' create an options parser for bin/ansible '''

        super(GalaxyCLI, self).init_parser(
            desc="Perform various Role and Collection related operations.",
        )

        # Common arguments that apply to more than 1 action
        common = opt_help.argparse.ArgumentParser(add_help=False)
        common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL')
        common.add_argument('--api-key', dest='api_key',
                            help='The Ansible Galaxy API key which can be found at '
                                 'https://galaxy.ansible.com/me/preferences. You can also use ansible-galaxy login to '
                                 'retrieve this key or set the token for the GALAXY_SERVER_LIST entry.')
        common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs',
                            default=C.GALAXY_IGNORE_CERTS, help='Ignore SSL certificate validation errors.')
        opt_help.add_verbosity_options(common)

        force = opt_help.argparse.ArgumentParser(add_help=False)
        force.add_argument('-f', '--force', dest='force', action='store_true', default=False,
                           help='Force overwriting an existing role or collection')

        github = opt_help.argparse.ArgumentParser(add_help=False)
        github.add_argument('github_user', help='GitHub username')
        github.add_argument('github_repo', help='GitHub repository')

        offline = opt_help.argparse.ArgumentParser(add_help=False)
        offline.add_argument('--offline', dest='offline', default=False, action='store_true',
                             help="Don't query the galaxy API when creating roles")

        default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '')
        roles_path = opt_help.argparse.ArgumentParser(add_help=False)
        roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True),
                                default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction,
                                help='The path to the directory containing your roles. The default is the first '
                                     'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path)

        # Add sub parser for the Galaxy role type (role or collection)
        type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type')
        type_parser.required = True

        # Add sub parser for the Galaxy collection actions
        collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.')
        collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action')
        collection_parser.required = True
        self.add_init_options(collection_parser, parents=[common, force])
        self.add_build_options(collection_parser, parents=[common, force])
        self.add_publish_options(collection_parser, parents=[common])
        self.add_install_options(collection_parser, parents=[common, force])

        # Add sub parser for the Galaxy role actions
        role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.')
        role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action')
        role_parser.required = True
        self.add_init_options(role_parser, parents=[common, force, offline])
        self.add_remove_options(role_parser, parents=[common, roles_path])
        self.add_delete_options(role_parser, parents=[common, github])
        self.add_list_options(role_parser, parents=[common, roles_path])
        self.add_search_options(role_parser, parents=[common])
        self.add_import_options(role_parser, parents=[common, github])
        self.add_setup_options(role_parser, parents=[common, roles_path])
        self.add_login_options(role_parser, parents=[common])
        self.add_info_options(role_parser, parents=[common, roles_path, offline])
        self.add_install_options(role_parser, parents=[common, force, roles_path])

    def add_init_options(self, parser, parents=None):
        galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'

        init_parser = parser.add_parser('init', parents=parents,
                                        help='Initialize new {0} with the base structure of a '
                                             '{0}.'.format(galaxy_type))
        init_parser.set_defaults(func=self.execute_init)

        init_parser.add_argument('--init-path', dest='init_path', default='./',
                                 help='The path in which the skeleton {0} will be created. The default is the '
                                      'current working directory.'.format(galaxy_type))
        init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type),
                                 default=C.GALAXY_ROLE_SKELETON,
                                 help='The path to a {0} skeleton that the new {0} should be based '
                                      'upon.'.format(galaxy_type))

        obj_name_kwargs = {}
        if galaxy_type == 'collection':
            obj_name_kwargs['type'] = validate_collection_name
        init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()),
                                 **obj_name_kwargs)

        if galaxy_type == 'role':
            init_parser.add_argument('--type', dest='role_type', action='store', default='default',
                                     help="Initialize using an alternate role type. Valid types include: 'container', "
                                          "'apb' and 'network'.")

    def add_remove_options(self, parser, parents=None):
        remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.')
        remove_parser.set_defaults(func=self.execute_remove)

        remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+')

    def add_delete_options(self, parser, parents=None):
        delete_parser = parser.add_parser('delete', parents=parents,
                                          help='Removes the role from Galaxy. It does not remove or alter the actual '
                                               'GitHub repository.')
        delete_parser.set_defaults(func=self.execute_delete)

    def add_list_options(self, parser, parents=None):
        list_parser = parser.add_parser('list', parents=parents,
                                        help='Show the name and version of each role installed in the roles_path.')
        list_parser.set_defaults(func=self.execute_list)

        list_parser.add_argument('role', help='Role', nargs='?', metavar='role')

    def add_search_options(self, parser, parents=None):
        search_parser = parser.add_parser('search', parents=parents,
                                          help='Search the Galaxy database by tags, platforms, author and multiple '
                                               'keywords.')
        search_parser.set_defaults(func=self.execute_search)

        search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by')
        search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by')
        search_parser.add_argument('--author', dest='author', help='GitHub username')
        search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*')

    def add_import_options(self, parser, parents=None):
        import_parser = parser.add_parser('import', parents=parents, help='Import a role')
        import_parser.set_defaults(func=self.execute_import)

        import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
                                   help="Don't wait for import results.")
        import_parser.add_argument('--branch', dest='reference',
                                   help='The name of a branch to import. Defaults to the repository\'s default branch '
                                        '(usually master)')
        import_parser.add_argument('--role-name', dest='role_name',
                                   help='The name the role should have, if different than the repo name')
        import_parser.add_argument('--status', dest='check_status', action='store_true', default=False,
                                   help='Check the status of the most recent import request for given github_'
                                        'user/github_repo.')

    def add_setup_options(self, parser, parents=None):
        setup_parser = parser.add_parser('setup', parents=parents,
                                         help='Manage the integration between Galaxy and the given source.')
        setup_parser.set_defaults(func=self.execute_setup)

        setup_parser.add_argument('--remove', dest='remove_id', default=None,
                                  help='Remove the integration matching the provided ID value. Use --list to see '
                                       'ID values.')
        setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False,
                                  help='List all of your integrations.')
        setup_parser.add_argument('source', help='Source')
        setup_parser.add_argument('github_user', help='GitHub username')
        setup_parser.add_argument('github_repo', help='GitHub repository')
        setup_parser.add_argument('secret', help='Secret')

    def add_login_options(self, parser, parents=None):
        login_parser = parser.add_parser('login', parents=parents,
                                         help="Login to api.github.com server in order to use ansible-galaxy role sub "
                                              "command such as 'import', 'delete', 'publish', and 'setup'")
        login_parser.set_defaults(func=self.execute_login)

        login_parser.add_argument('--github-token', dest='token', default=None,
                                  help='Identify with github token rather than username and password.')

    def add_info_options(self, parser, parents=None):
        info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.')
        info_parser.set_defaults(func=self.execute_info)

        info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]')

    def add_install_options(self, parser, parents=None):
        galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'

        args_kwargs = {}
        if galaxy_type == 'collection':
            args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \
                                  'mutually exclusive with --requirements-file.'
            ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \
                                 'collection. This will not ignore dependency conflict errors.'
        else:
            args_kwargs['help'] = 'Role name, URL or tar file'
            ignore_errors_help = 'Ignore errors and continue with the next specified role.'

        install_parser = parser.add_parser('install', parents=parents,
                                           help='Install {0}(s) from file(s), URL(s) or Ansible '
                                                'Galaxy'.format(galaxy_type))
        install_parser.set_defaults(func=self.execute_install)

        install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs)
        install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
                                    help=ignore_errors_help)

        install_exclusive = install_parser.add_mutually_exclusive_group()
        install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
                                       help="Don't download {0}s listed as dependencies.".format(galaxy_type))
        install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
                                       help="Force overwriting an existing {0} and its "
                                            "dependencies.".format(galaxy_type))

        if galaxy_type == 'collection':
            install_parser.add_argument('-p', '--collections-path', dest='collections_path',
                                        default=C.COLLECTIONS_PATHS[0],
                                        help='The path to the directory containing your collections.')
            install_parser.add_argument('-r', '--requirements-file', dest='requirements',
                                        help='A file containing a list of collections to be installed.')
        else:
            install_parser.add_argument('-r', '--role-file', dest='role_file',
                                        help='A file containing a list of roles to be imported.')
            install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
                                        default=False,
                                        help='Use tar instead of the scm archive option when packaging the role.')

    def add_build_options(self, parser, parents=None):
        build_parser = parser.add_parser('build', parents=parents,
                                         help='Build an Ansible collection artifact that can be publish to Ansible '
                                              'Galaxy.')
        build_parser.set_defaults(func=self.execute_build)

        build_parser.add_argument('args', metavar='collection', nargs='*', default=('.',),
                                  help='Path to the collection(s) directory to build. This should be the directory '
                                       'that contains the galaxy.yml file. The default is the current working '
                                       'directory.')
        build_parser.add_argument('--output-path', dest='output_path', default='./',
                                  help='The path in which the collection is built to. The default is the current '
                                       'working directory.')

    def add_publish_options(self, parser, parents=None):
        publish_parser = parser.add_parser('publish', parents=parents,
                                           help='Publish a collection artifact to Ansible Galaxy.')
        publish_parser.set_defaults(func=self.execute_publish)

        publish_parser.add_argument('args', metavar='collection_path',
                                    help='The path to the collection tarball to publish.')
        publish_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
                                    help="Don't wait for import validation results.")
        publish_parser.add_argument('--import-timeout', dest='import_timeout', type=int, default=0,
                                    help="The time to wait for the collection import process to finish.")

    def post_process_args(self, options):
        options = super(GalaxyCLI, self).post_process_args(options)
        display.verbosity = options.verbosity
        return options

    def run(self):
        super(GalaxyCLI, self).run()

        self.galaxy = Galaxy()

        def server_config_def(section, key, required):
            return {
                'description': 'The %s of the %s Galaxy server' % (key, section),
                'ini': [
                    {
                        'section': 'galaxy_server.%s' % section,
                        'key': key,
                    }
                ],
                'env': [
                    {'name': 'ANSIBLE_GALAXY_SERVER_%s_%s' % (section.upper(), key.upper())},
                ],
                'required': required,
            }
        server_def = [('url', True), ('username', False), ('password', False), ('token', False),
                      ('auth_url', False)]

        config_servers = []
        for server_key in (C.GALAXY_SERVER_LIST or []):
            # Config definitions are looked up dynamically based on the C.GALAXY_SERVER_LIST entry. We look up the
            # section [galaxy_server.<server>] for the values url, username, password, and token.
            config_dict = dict((k, server_config_def(server_key, k, req)) for k, req in server_def)
            defs = AnsibleLoader(yaml.safe_dump(config_dict)).get_single_data()
            C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs)

            server_options = C.config.get_plugin_options('galaxy_server', server_key)
            # auth_url is used to create the token, but not directly by GalaxyAPI, so
            # it doesn't need to be passed as kwarg to GalaxyApi
            auth_url = server_options.pop('auth_url', None)
            token_val = server_options['token'] or NoTokenSentinel
            username = server_options['username']

            # default case if no auth info is provided.
            server_options['token'] = None

            if username:
                server_options['token'] = BasicAuthToken(username, server_options['password'])
            else:
                if token_val:
                    if auth_url:
                        server_options['token'] = KeycloakToken(access_token=token_val,
                                                                auth_url=auth_url,
                                                                validate_certs=not context.CLIARGS['ignore_certs'])
                    else:
                        # The galaxy v1 / github / django / 'Token'
                        server_options['token'] = GalaxyToken(token=token_val)

            config_servers.append(GalaxyAPI(self.galaxy, server_key, **server_options))

        cmd_server = context.CLIARGS['api_server']
        cmd_token = GalaxyToken(token=context.CLIARGS['api_key'])
        if cmd_server:
            # Cmd args take precedence over the config entry but fist check if the arg was a name and use that config
            # entry, otherwise create a new API entry for the server specified.
            config_server = next((s for s in config_servers if s.name == cmd_server), None)
            if config_server:
                self.api_servers.append(config_server)
            else:
                self.api_servers.append(GalaxyAPI(self.galaxy, 'cmd_arg', cmd_server, token=cmd_token))
        else:
            self.api_servers = config_servers

        # Default to C.GALAXY_SERVER if no servers were defined
        if len(self.api_servers) == 0:
            self.api_servers.append(GalaxyAPI(self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token))

        context.CLIARGS['func']()

    @property
    def api(self):
        return self.api_servers[0]

    def _parse_requirements_file(self, requirements_file, allow_old_format=True):
        """
        Parses an Ansible requirement.yml file and returns all the roles and/or collections defined in it. There are 2
        requirements file format:

            # v1 (roles only)
            - src: The source of the role, required if include is not set. Can be Galaxy role name, URL to a SCM repo or tarball.
              name: Downloads the role to the specified name, defaults to Galaxy name from Galaxy or name of repo if src is a URL.
              scm: If src is a URL, specify the SCM. Only git or hd are supported and defaults ot git.
              version: The version of the role to download. Can also be tag, commit, or branch name and defaults to master.
              include: Path to additional requirements.yml files.

            # v2 (roles and collections)
            ---
            roles:
            # Same as v1 format just under the roles key

            collections:
            - namespace.collection
            - name: namespace.collection
              version: version identifier, multiple identifiers are separated by ','
              source: the URL or a predefined source name that relates to C.GALAXY_SERVER_LIST

        :param requirements_file: The path to the requirements file.
        :param allow_old_format: Will fail if a v1 requirements file is found and this is set to False.
        :return: a dict containing roles and collections to found in the requirements file.
        """
        requirements = {
            'roles': [],
            'collections': [],
        }

        b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict')
        if not os.path.exists(b_requirements_file):
            raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file))

        display.vvv("Reading requirement file at '%s'" % requirements_file)
        with open(b_requirements_file, 'rb') as req_obj:
            try:
                file_requirements = yaml.safe_load(req_obj)
            except YAMLError as err:
                raise AnsibleError(
                    "Failed to parse the requirements yml at '%s' with the following error:\n%s"
                    % (to_native(requirements_file), to_native(err)))

        if requirements_file is None:
            raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file))

        def parse_role_req(requirement):
            if "include" not in requirement:
                role = RoleRequirement.role_yaml_parse(requirement)
                display.vvv("found role %s in yaml file" % to_text(role))
                if "name" not in role and "src" not in role:
                    raise AnsibleError("Must specify name or src for role")
                return [GalaxyRole(self.galaxy, self.api, **role)]
            else:
                b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict")
                if not os.path.isfile(b_include_path):
                    raise AnsibleError("Failed to find include requirements file '%s' in '%s'"
                                       % (to_native(b_include_path), to_native(requirements_file)))

                with open(b_include_path, 'rb') as f_include:
                    try:
                        return [GalaxyRole(self.galaxy, self.api, **r) for r in
                                (RoleRequirement.role_yaml_parse(i) for i in yaml.safe_load(f_include))]
                    except Exception as e:
                        raise AnsibleError("Unable to load data from include requirements file: %s %s"
                                           % (to_native(requirements_file), to_native(e)))

        if isinstance(file_requirements, list):
            # Older format that contains only roles
            if not allow_old_format:
                raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains "
                                   "a list of collections to install")

            for role_req in file_requirements:
                requirements['roles'] += parse_role_req(role_req)
        else:
            # Newer format with a collections and/or roles key
            extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
            if extra_keys:
                raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements "
                                   "file. Found: %s" % (to_native(", ".join(extra_keys))))

            for role_req in file_requirements.get('roles', []):
                requirements['roles'] += parse_role_req(role_req)

            for collection_req in file_requirements.get('collections', []):
                if isinstance(collection_req, dict):
                    req_name = collection_req.get('name', None)
                    if req_name is None:
                        raise AnsibleError("Collections requirement entry should contain the key name.")

                    req_version = collection_req.get('version', '*')
                    req_source = collection_req.get('source', None)
                    if req_source:
                        # Try and match up the requirement source with our list of Galaxy API servers defined in the
                        # config, otherwise create a server with that URL without any auth.
                        req_source = next(iter([a for a in self.api_servers if req_source in [a.name, a.api_server]]),
                                          GalaxyAPI(self.galaxy, "explicit_requirement_%s" % req_name, req_source))

                    requirements['collections'].append((req_name, req_version, req_source))
                else:
                    requirements['collections'].append((collection_req, '*', None))

        return requirements

    @staticmethod
    def exit_without_ignore(rc=1):
        """
        Exits with the specified return code unless the
        option --ignore-errors was specified
        """
        if not context.CLIARGS['ignore_errors']:
            raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')

    @staticmethod
    def _display_role_info(role_info):

        text = [u"", u"Role: %s" % to_text(role_info['name'])]
        text.append(u"\tdescription: %s" % role_info.get('description', ''))

        for k in sorted(role_info.keys()):

            if k in GalaxyCLI.SKIP_INFO_KEYS:
                continue

            if isinstance(role_info[k], dict):
                text.append(u"\t%s:" % (k))
                for key in sorted(role_info[k].keys()):
                    if key in GalaxyCLI.SKIP_INFO_KEYS:
                        continue
                    text.append(u"\t\t%s: %s" % (key, role_info[k][key]))
            else:
                text.append(u"\t%s: %s" % (k, role_info[k]))

        return u'\n'.join(text)

    @staticmethod
    def _resolve_path(path):
        return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))

    @staticmethod
    def _get_skeleton_galaxy_yml(template_path, inject_data):
        with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj:
            meta_template = to_text(template_obj.read(), errors='surrogate_or_strict')

        galaxy_meta = get_collections_galaxy_meta_info()

        required_config = []
        optional_config = []
        for meta_entry in galaxy_meta:
            config_list = required_config if meta_entry.get('required', False) else optional_config

            value = inject_data.get(meta_entry['key'], None)
            if not value:
                meta_type = meta_entry.get('type', 'str')

                if meta_type == 'str':
                    value = ''
                elif meta_type == 'list':
                    value = []
                elif meta_type == 'dict':
                    value = {}

            meta_entry['value'] = value
            config_list.append(meta_entry)

        link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
        const_pattern = re.compile(r"C\(([^)]+)\)")

        def comment_ify(v):
            if isinstance(v, list):
                v = ". ".join([l.rstrip('.') for l in v])

            v = link_pattern.sub(r"\1 <\2>", v)
            v = const_pattern.sub(r"'\1'", v)

            return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False)

        def to_yaml(v):
            return yaml.safe_dump(v, default_flow_style=False).rstrip()

        env = Environment(loader=BaseLoader)
        env.filters['comment_ify'] = comment_ify
        env.filters['to_yaml'] = to_yaml

        template = env.from_string(meta_template)
        meta_value = template.render({'required_config': required_config, 'optional_config': optional_config})

        return meta_value

    ############################
    # execute actions
    ############################

    def execute_role(self):
        """
        Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init
        as listed below.
        """
        # To satisfy doc build
        pass

    def execute_collection(self):
        """
        Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as
        listed below.
        """
        # To satisfy doc build
        pass

    def execute_build(self):
        """
        Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy.
        By default, this command builds from the current working directory. You can optionally pass in the
        collection input path (where the ``galaxy.yml`` file is).
        """
        force = context.CLIARGS['force']
        output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path'])
        b_output_path = to_bytes(output_path, errors='surrogate_or_strict')

        if not os.path.exists(b_output_path):
            os.makedirs(b_output_path)
        elif os.path.isfile(b_output_path):
            raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path))

        for collection_path in context.CLIARGS['args']:
            collection_path = GalaxyCLI._resolve_path(collection_path)
            build_collection(collection_path, output_path, force)

    def execute_init(self):
        """
        Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format.
        Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
        """

        galaxy_type = context.CLIARGS['type']
        init_path = context.CLIARGS['init_path']
        force = context.CLIARGS['force']
        obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)]

        obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)]

        inject_data = dict(
            description='your {0} description'.format(galaxy_type),
            ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'),
        )
        if galaxy_type == 'role':
            inject_data.update(dict(
                author='your name',
                company='your company (optional)',
                license='license (GPL-2.0-or-later, MIT, etc)',
                role_name=obj_name,
                role_type=context.CLIARGS['role_type'],
                issue_tracker_url='http://example.com/issue/tracker',
                repository_url='http://example.com/repository',
                documentation_url='http://docs.example.com',
                homepage_url='http://example.com',
                min_ansible_version=ansible_version[:3],  # x.y
            ))

            obj_path = os.path.join(init_path, obj_name)
        elif galaxy_type == 'collection':
            namespace, collection_name = obj_name.split('.', 1)

            inject_data.update(dict(
                namespace=namespace,
                collection_name=collection_name,
                version='1.0.0',
                readme='README.md',
                authors=['your name <[email protected]>'],
                license=['GPL-2.0-or-later'],
                repository='http://example.com/repository',
                documentation='http://docs.example.com',
                homepage='http://example.com',
                issues='http://example.com/issue/tracker',
                build_ignore=[],
            ))

            obj_path = os.path.join(init_path, namespace, collection_name)

        b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict')

        if os.path.exists(b_obj_path):
            if os.path.isfile(obj_path):
                raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path))
            elif not force:
                raise AnsibleError("- the directory %s already exists. "
                                   "You can use --force to re-initialize this directory,\n"
                                   "however it will reset any main.yml files that may have\n"
                                   "been modified there already."
% to_native(obj_path)) if obj_skeleton is not None: own_skeleton = False skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE else: own_skeleton = True obj_skeleton = self.galaxy.default_role_skeleton_path skeleton_ignore_expressions = ['^.*/.git_keep$'] obj_skeleton = os.path.expanduser(obj_skeleton) skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions] if not os.path.exists(obj_skeleton): raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format( to_native(obj_skeleton), galaxy_type) ) template_env = Environment(loader=FileSystemLoader(obj_skeleton)) # create role directory if not os.path.exists(b_obj_path): os.makedirs(b_obj_path) for root, dirs, files in os.walk(obj_skeleton, topdown=True): rel_root = os.path.relpath(root, obj_skeleton) rel_dirs = rel_root.split(os.sep) rel_root_dir = rel_dirs[0] if galaxy_type == 'collection': # A collection can contain templates in playbooks/*/templates and roles/*/templates in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs else: in_templates_dir = rel_root_dir == 'templates' dirs[:] = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)] for f in files: filename, ext = os.path.splitext(f) if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re): continue elif galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2': # Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options # dynamically which requires special options to be set. # The templated data's keys must match the key name but the inject data contains collection_name # instead of name. We just make a copy and change the key back to name for this file. 
template_data = inject_data.copy() template_data['name'] = template_data.pop('collection_name') meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data) b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict') with open(b_dest_file, 'wb') as galaxy_obj: galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict')) elif ext == ".j2" and not in_templates_dir: src_template = os.path.join(rel_root, f) dest_file = os.path.join(obj_path, rel_root, filename) template_env.get_template(src_template).stream(inject_data).dump(dest_file, encoding='utf-8') else: f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton) shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path)) for d in dirs: b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict') if not os.path.exists(b_dir_path): os.makedirs(b_dir_path) display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name)) def execute_info(self): """ prints out detailed information about an installed role as well as info available from the galaxy API. 
""" roles_path = context.CLIARGS['roles_path'] data = '' for role in context.CLIARGS['args']: role_info = {'path': roles_path} gr = GalaxyRole(self.galaxy, self.api, role) install_info = gr.install_info if install_info: if 'version' in install_info: install_info['installed_version'] = install_info['version'] del install_info['version'] role_info.update(install_info) remote_data = False if not context.CLIARGS['offline']: remote_data = self.api.lookup_role_by_name(role, False) if remote_data: role_info.update(remote_data) if gr.metadata: role_info.update(gr.metadata) req = RoleRequirement() role_spec = req.role_yaml_parse({'role': role}) if role_spec: role_info.update(role_spec) data = self._display_role_info(role_info) # FIXME: This is broken in both 1.9 and 2.0 as # _display_role_info() always returns something if not data: data = u"\n- the role %s was not found" % role self.pager(data) def execute_install(self): """ Install one or more roles(``ansible-galaxy role install``), or one or more collections(``ansible-galaxy collection install``). You can pass in a list (roles or collections) or use the file option listed below (these are mutually exclusive). If you pass in a list, it can be a name (which will be downloaded via the galaxy API and github), or it can be a local tar archive file. 
""" if context.CLIARGS['type'] == 'collection': collections = context.CLIARGS['args'] force = context.CLIARGS['force'] output_path = context.CLIARGS['collections_path'] ignore_certs = context.CLIARGS['ignore_certs'] ignore_errors = context.CLIARGS['ignore_errors'] requirements_file = context.CLIARGS['requirements'] no_deps = context.CLIARGS['no_deps'] force_deps = context.CLIARGS['force_with_deps'] if collections and requirements_file: raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.") elif not collections and not requirements_file: raise AnsibleError("You must specify a collection name or a requirements file.") if requirements_file: requirements_file = GalaxyCLI._resolve_path(requirements_file) requirements = self._parse_requirements_file(requirements_file, allow_old_format=False)['collections'] else: requirements = [] for collection_input in collections: name, dummy, requirement = collection_input.partition(':') requirements.append((name, requirement or '*', None)) output_path = GalaxyCLI._resolve_path(output_path) collections_path = C.COLLECTIONS_PATHS if len([p for p in collections_path if p.startswith(output_path)]) == 0: display.warning("The specified collections path '%s' is not part of the configured Ansible " "collections paths '%s'. The installed collection won't be picked up in an Ansible " "run." 
% (to_text(output_path), to_text(":".join(collections_path)))) if os.path.split(output_path)[1] != 'ansible_collections': output_path = os.path.join(output_path, 'ansible_collections') b_output_path = to_bytes(output_path, errors='surrogate_or_strict') if not os.path.exists(b_output_path): os.makedirs(b_output_path) install_collections(requirements, output_path, self.api_servers, (not ignore_certs), ignore_errors, no_deps, force, force_deps) return 0 role_file = context.CLIARGS['role_file'] if not context.CLIARGS['args'] and role_file is None: # the user needs to specify one of either --role-file or specify a single user/role name raise AnsibleOptionsError("- you must specify a user/role name or a roles file") no_deps = context.CLIARGS['no_deps'] force_deps = context.CLIARGS['force_with_deps'] force = context.CLIARGS['force'] or force_deps roles_left = [] if role_file: if not (role_file.endswith('.yaml') or role_file.endswith('.yml')): raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension") roles_left = self._parse_requirements_file(role_file)['roles'] else: # roles were specified directly, so we'll just go out grab them # (and their dependencies, unless the user doesn't want us to). 
for rname in context.CLIARGS['args']: role = RoleRequirement.role_yaml_parse(rname.strip()) roles_left.append(GalaxyRole(self.galaxy, self.api, **role)) for role in roles_left: # only process roles in roles files when names matches if given if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']: display.vvv('Skipping role %s' % role.name) continue display.vvv('Processing role %s ' % role.name) # query the galaxy API for the role data if role.install_info is not None: if role.install_info['version'] != role.version or force: if force: display.display('- changing role %s from %s to %s' % (role.name, role.install_info['version'], role.version or "unspecified")) role.remove() else: display.warning('- %s (%s) is already installed - use --force to change version to %s' % (role.name, role.install_info['version'], role.version or "unspecified")) continue else: if not force: display.display('- %s is already installed, skipping.' % str(role)) continue try: installed = role.install() except AnsibleError as e: display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e))) self.exit_without_ignore() continue # install dependencies, if we want them if not no_deps and installed: if not role.metadata: display.warning("Meta file %s is empty. Skipping dependencies." % role.path) else: role_dependencies = role.metadata.get('dependencies') or [] for dep in role_dependencies: display.debug('Installing dep %s' % dep) dep_req = RoleRequirement() dep_info = dep_req.role_yaml_parse(dep) dep_role = GalaxyRole(self.galaxy, self.api, **dep_info) if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None: # we know we can skip this, as it's not going to # be found on galaxy.ansible.com continue if dep_role.install_info is None: if dep_role not in roles_left: display.display('- adding dependency: %s' % to_text(dep_role)) roles_left.append(dep_role) else: display.display('- dependency %s already pending installation.' 
% dep_role.name) else: if dep_role.install_info['version'] != dep_role.version: if force_deps: display.display('- changing dependant role %s from %s to %s' % (dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified")) dep_role.remove() roles_left.append(dep_role) else: display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' % (to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version'])) else: if force_deps: roles_left.append(dep_role) else: display.display('- dependency %s is already installed, skipping.' % dep_role.name) if not installed: display.warning("- %s was NOT installed successfully." % role.name) self.exit_without_ignore() return 0 def execute_remove(self): """ removes the list of roles passed as arguments from the local system. """ if not context.CLIARGS['args']: raise AnsibleOptionsError('- you must specify at least one role to remove.') for role_name in context.CLIARGS['args']: role = GalaxyRole(self.galaxy, self.api, role_name) try: if role.remove(): display.display('- successfully removed %s' % role_name) else: display.display('- %s is not installed, skipping.' % role_name) except Exception as e: raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e))) return 0 def execute_list(self): """ lists the roles installed on the local system or matches a single role passed as an argument. 
""" def _display_role(gr): install_info = gr.install_info version = None if install_info: version = install_info.get("version", None) if not version: version = "(unknown version)" display.display("- %s, %s" % (gr.name, version)) if context.CLIARGS['role']: # show the requested role, if it exists name = context.CLIARGS['role'] gr = GalaxyRole(self.galaxy, self.api, name) if gr.metadata: display.display('# %s' % os.path.dirname(gr.path)) _display_role(gr) else: display.display("- the role %s was not found" % name) else: # show all valid roles in the roles_path directory roles_path = context.CLIARGS['roles_path'] path_found = False warnings = [] for path in roles_path: role_path = os.path.expanduser(path) if not os.path.exists(role_path): warnings.append("- the configured path %s does not exist." % role_path) continue elif not os.path.isdir(role_path): warnings.append("- the configured path %s, exists, but it is not a directory." % role_path) continue display.display('# %s' % role_path) path_files = os.listdir(role_path) path_found = True for path_file in path_files: gr = GalaxyRole(self.galaxy, self.api, path_file, path=path) if gr.metadata: _display_role(gr) for w in warnings: display.warning(w) if not path_found: raise AnsibleOptionsError("- None of the provided paths was usable. Please specify a valid path with --roles-path") return 0 def execute_publish(self): """ Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish. 
""" collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args']) wait = context.CLIARGS['wait'] timeout = context.CLIARGS['import_timeout'] publish_collection(collection_path, self.api, wait, timeout) def execute_search(self): ''' searches for roles on the Ansible Galaxy server''' page_size = 1000 search = None if context.CLIARGS['args']: search = '+'.join(context.CLIARGS['args']) if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']: raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.") response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'], tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size) if response['count'] == 0: display.display("No roles match your search.", color=C.COLOR_ERROR) return True data = [u''] if response['count'] > page_size: data.append(u"Found %d roles matching your search. Showing first %s." % (response['count'], page_size)) else: data.append(u"Found %d roles matching your search:" % response['count']) max_len = [] for role in response['results']: max_len.append(len(role['username'] + '.' + role['name'])) name_len = max(max_len) format_str = u" %%-%ds %%s" % name_len data.append(u'') data.append(format_str % (u"Name", u"Description")) data.append(format_str % (u"----", u"-----------")) for role in response['results']: data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description'])) data = u'\n'.join(data) self.pager(data) return True def execute_login(self): """ verify user's identify via Github and retrieve an auth token from Ansible Galaxy. 
""" # Authenticate with github and retrieve a token if context.CLIARGS['token'] is None: if C.GALAXY_TOKEN: github_token = C.GALAXY_TOKEN else: login = GalaxyLogin(self.galaxy) github_token = login.create_github_token() else: github_token = context.CLIARGS['token'] galaxy_response = self.api.authenticate(github_token) if context.CLIARGS['token'] is None and C.GALAXY_TOKEN is None: # Remove the token we created login.remove_github_token() # Store the Galaxy token token = GalaxyToken() token.set(galaxy_response['token']) display.display("Successfully logged into Galaxy as %s" % galaxy_response['username']) return 0 def execute_import(self): """ used to import a role into Ansible Galaxy """ colors = { 'INFO': 'normal', 'WARNING': C.COLOR_WARN, 'ERROR': C.COLOR_ERROR, 'SUCCESS': C.COLOR_OK, 'FAILED': C.COLOR_ERROR, } github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict') github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict') if context.CLIARGS['check_status']: task = self.api.get_import_task(github_user=github_user, github_repo=github_repo) else: # Submit an import request task = self.api.create_import_task(github_user, github_repo, reference=context.CLIARGS['reference'], role_name=context.CLIARGS['role_name']) if len(task) > 1: # found multiple roles associated with github_user/github_repo display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." 
% (github_user, github_repo), color='yellow') display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED) for t in task: display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED) display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo), color=C.COLOR_CHANGED) return 0 # found a single role as expected display.display("Successfully submitted import request %d" % task[0]['id']) if not context.CLIARGS['wait']: display.display("Role name: %s" % task[0]['summary_fields']['role']['name']) display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo'])) if context.CLIARGS['check_status'] or context.CLIARGS['wait']: # Get the status of the import msg_list = [] finished = False while not finished: task = self.api.get_import_task(task_id=task[0]['id']) for msg in task[0]['summary_fields']['task_messages']: if msg['id'] not in msg_list: display.display(msg['message_text'], color=colors[msg['message_type']]) msg_list.append(msg['id']) if task[0]['state'] in ['SUCCESS', 'FAILED']: finished = True else: time.sleep(10) return 0 def execute_setup(self): """ Setup an integration from Github or Travis for Ansible Galaxy roles""" if context.CLIARGS['setup_list']: # List existing integration secrets secrets = self.api.list_secrets() if len(secrets) == 0: # None found display.display("No integrations found.") return 0 display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK) display.display("---------- ---------- ----------", color=C.COLOR_OK) for secret in secrets: display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'], secret['github_repo']), color=C.COLOR_OK) return 0 if context.CLIARGS['remove_id']: # Remove a secret self.api.remove_secret(context.CLIARGS['remove_id']) display.display("Secret removed. 
Integrations using this secret will no longer work.", color=C.COLOR_OK)
            return 0

        source = context.CLIARGS['source']
        github_user = context.CLIARGS['github_user']
        github_repo = context.CLIARGS['github_repo']
        secret = context.CLIARGS['secret']

        resp = self.api.add_secret(source, github_user, github_repo, secret)
        display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))

        return 0

    def execute_delete(self):
        """ Delete a role from Ansible Galaxy. """

        github_user = context.CLIARGS['github_user']
        github_repo = context.CLIARGS['github_repo']
        resp = self.api.delete_role(github_user, github_repo)

        if len(resp['deleted_roles']) > 1:
            display.display("Deleted the following roles:")
            display.display("ID     User            Name")
            display.display("------ --------------- ----------")
            for role in resp['deleted_roles']:
                display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name))

        display.display(resp['status'])

        return True
closed
ansible/ansible
https://github.com/ansible/ansible
65,109
"ansible-galaxy collection install" fails with URL as parameter
##### SUMMARY
Installing an Ansible Galaxy collection from the command line fails when the parameter is a URL.
```Shell
ansible-galaxy collection install http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz
Process install dependency map
ERROR! Invalid collection name 'http', name must be in the format <namespace>.<collection>.
```

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
**ansible-galaxy**

A URL should be supported as the *collection_name* parameter according to the documentation:
```Shell
ansible-galaxy collection install -h
positional arguments:
  collection_name       The collection(s) name or path/url to a tar.gz
                        collection artifact. This is mutually exclusive with
                        --requirements-file.
```

##### ANSIBLE VERSION
```
ansible 2.9.1
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /tmp/venv-project/local/lib/python2.7/site-packages/ansible
  executable location = /tmp/venv-project/bin/ansible
  python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```

##### CONFIGURATION
```
DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /home/ubuntu/.ansible/.vault-dev.txt
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
```

##### OS / ENVIRONMENT
OS: Ubuntu. Ansible installed via the native *apt* package or via *pip*.
##### STEPS TO REPRODUCE
Create an Ansible collection and publish/upload it to some HTTP endpoint:
`http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz`

Installation from a local file works fine (so the collection package itself is correct):
```Shell
rm -rf ~/.ansible/collections/
wget http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz
ansible-galaxy collection install my_namespace-my_collection-0.1.tar.gz
Process install dependency map
Starting collection install process
Installing 'my_namespace-my_collection:0.1-SNAPSHOT' to '/home/ubuntu/.ansible/collections/ansible_collections/my_namespace/my_collection'
```

Installation via a [requirements file](https://docs.ansible.com/ansible/latest/galaxy/user_guide.html#install-multiple-collections-with-a-requirements-file) (as in #61680) works fine too. With `requirements.yml`:
```YAML
---
collections:
- name: http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz
```
```Shell
rm -rf ~/.ansible/collections/
ansible-galaxy collection install -r requirements.yml
Process install dependency map
Starting collection install process
Installing 'my_namespace-my_collection:0.1-SNAPSHOT' to '/home/ubuntu/.ansible/collections/ansible_collections/my_namespace/my_collection'
```

But passing the URL directly on the command line fails:
```Shell
rm -rf ~/.ansible/collections/
ansible-galaxy collection install http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz
Process install dependency map
ERROR! Invalid collection name 'http', name must be in the format <namespace>.<collection>.
```

##### EXPECTED RESULTS
```Shell
Installing 'my_namespace-my_collection:0.1-SNAPSHOT' to '/home/ubuntu/.ansible/collections/ansible_collections/my_namespace/my_collection'
```

##### ACTUAL RESULTS
```Shell
ERROR! Invalid collection name 'http', name must be in the format <namespace>.<collection>.
```
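The reported error follows directly from how `GalaxyCLI.execute_install` parses positional arguments (see the `collection_input.partition(':')` loop in the source earlier in this file): everything before the first colon is taken as the collection name, so a URL's scheme (`http`) is mistaken for the name. Below is a minimal sketch reproducing that split, plus one plausible URL-aware variant; the variant is an illustration only, not the actual change made in the linked pull request.

```python
from urllib.parse import urlparse  # the Py2-compatible source uses six.moves.urllib.parse


def parse_requirement(collection_input):
    """Mirror of the name:version split in GalaxyCLI.execute_install()."""
    name, _, requirement = collection_input.partition(':')
    return name, requirement or '*'


def parse_requirement_url_aware(collection_input):
    """Sketch of a URL-aware variant: treat http(s) inputs as artifact URLs
    instead of splitting them on ':'. Illustrative only, not the real fix."""
    if urlparse(collection_input).scheme.lower() in ('http', 'https'):
        # the version requirement is carried by the artifact itself
        return collection_input, '*'
    return parse_requirement(collection_input)


url = 'http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz'

# The current split mistakes the URL scheme for a collection name,
# which is exactly the reported "Invalid collection name 'http'" error:
print(parse_requirement('my_namespace.my_collection:0.1'))  # ('my_namespace.my_collection', '0.1')
print(parse_requirement(url)[0])                            # 'http'

# The URL-aware variant leaves the URL intact:
print(parse_requirement_url_aware(url)[0] == url)           # True
```

The key point is that a URL (or a file path) has to be recognized *before* the `name:version` split; the requirements-file code path already treats such entries as sources rather than names, which is why it works while the positional argument does not.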
https://github.com/ansible/ansible/issues/65109
https://github.com/ansible/ansible/pull/65272
41472ee3878be215af8b933b2b04b4a72b9165ca
694ef5660d45fcb97c9beea5b2750f6eadcf5e93
2019-11-20T12:51:12Z
python
2019-12-02T18:55:31Z
lib/ansible/galaxy/collection.py
# Copyright: (c) 2019, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import fnmatch import json import operator import os import shutil import sys import tarfile import tempfile import threading import time import yaml from contextlib import contextmanager from distutils.version import LooseVersion, StrictVersion from hashlib import sha256 from io import BytesIO from yaml.error import YAMLError try: import queue except ImportError: import Queue as queue # Python 2 import ansible.constants as C from ansible.errors import AnsibleError from ansible.galaxy import get_collections_galaxy_meta_info from ansible.galaxy.api import CollectionVersionMetadata, GalaxyError from ansible.module_utils import six from ansible.module_utils._text import to_bytes, to_native, to_text from ansible.utils.collection_loader import AnsibleCollectionRef from ansible.utils.display import Display from ansible.utils.hashing import secure_hash, secure_hash_s from ansible.module_utils.urls import open_url urlparse = six.moves.urllib.parse.urlparse urllib_error = six.moves.urllib.error display = Display() MANIFEST_FORMAT = 1 class CollectionRequirement: _FILE_MAPPING = [(b'MANIFEST.json', 'manifest_file'), (b'FILES.json', 'files_file')] def __init__(self, namespace, name, b_path, api, versions, requirement, force, parent=None, metadata=None, files=None, skip=False): """ Represents a collection requirement, the versions that are available to be installed as well as any dependencies the collection has. :param namespace: The collection namespace. :param name: The collection name. :param b_path: Byte str of the path to the collection tarball if it has already been downloaded. :param api: The GalaxyAPI to use if the collection is from Galaxy. :param versions: A list of versions of the collection that are available. 
:param requirement: The version requirement string used to verify the list of versions fit the requirements. :param force: Whether the force flag applied to the collection. :param parent: The name of the parent the collection is a dependency of. :param metadata: The galaxy.api.CollectionVersionMetadata that has already been retrieved from the Galaxy server. :param files: The files that exist inside the collection. This is based on the FILES.json file inside the collection artifact. :param skip: Whether to skip installing the collection. Should be set if the collection is already installed and force is not set. """ self.namespace = namespace self.name = name self.b_path = b_path self.api = api self.versions = set(versions) self.force = force self.skip = skip self.required_by = [] self._metadata = metadata self._files = files self.add_requirement(parent, requirement) def __str__(self): return to_native("%s.%s" % (self.namespace, self.name)) def __unicode__(self): return u"%s.%s" % (self.namespace, self.name) @property def latest_version(self): try: return max([v for v in self.versions if v != '*'], key=LooseVersion) except ValueError: # ValueError: max() arg is an empty sequence return '*' @property def dependencies(self): if self._metadata: return self._metadata.dependencies elif len(self.versions) > 1: return None self._get_metadata() return self._metadata.dependencies def add_requirement(self, parent, requirement): self.required_by.append((parent, requirement)) new_versions = set(v for v in self.versions if self._meets_requirements(v, requirement, parent)) if len(new_versions) == 0: if self.skip: force_flag = '--force-with-deps' if parent else '--force' version = self.latest_version if self.latest_version != '*' else 'unknown' msg = "Cannot meet requirement %s:%s as it is already installed at version '%s'. 
                      Use %s to overwrite" \
                      % (to_text(self), requirement, version, force_flag)
                raise AnsibleError(msg)
            elif parent is None:
                msg = "Cannot meet requirement %s for dependency %s" % (requirement, to_text(self))
            else:
                msg = "Cannot meet dependency requirement '%s:%s' for collection %s" \
                      % (to_text(self), requirement, parent)

            collection_source = to_text(self.b_path, nonstring='passthru') or self.api.api_server
            req_by = "\n".join(
                "\t%s - '%s:%s'" % (to_text(p) if p else 'base', to_text(self), r)
                for p, r in self.required_by
            )

            versions = ", ".join(sorted(self.versions, key=LooseVersion))
            raise AnsibleError(
                "%s from source '%s'. Available versions before last requirement added: %s\nRequirements from:\n%s"
                % (msg, collection_source, versions, req_by)
            )

        self.versions = new_versions

    def install(self, path, b_temp_path):
        if self.skip:
            display.display("Skipping '%s' as it is already installed" % to_text(self))
            return

        # Install if it is not
        collection_path = os.path.join(path, self.namespace, self.name)
        b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
        display.display("Installing '%s:%s' to '%s'" % (to_text(self), self.latest_version, collection_path))

        if self.b_path is None:
            download_url = self._metadata.download_url
            artifact_hash = self._metadata.artifact_sha256
            headers = {}
            self.api._add_auth_token(headers, download_url, required=False)
            self.b_path = _download_file(download_url, b_temp_path, artifact_hash, self.api.validate_certs,
                                         headers=headers)

        if os.path.exists(b_collection_path):
            shutil.rmtree(b_collection_path)
        os.makedirs(b_collection_path)

        with tarfile.open(self.b_path, mode='r') as collection_tar:
            files_member_obj = collection_tar.getmember('FILES.json')
            with _tarfile_extract(collection_tar, files_member_obj) as files_obj:
                files = json.loads(to_text(files_obj.read(), errors='surrogate_or_strict'))

            _extract_tar_file(collection_tar, 'MANIFEST.json', b_collection_path, b_temp_path)
            _extract_tar_file(collection_tar, 'FILES.json', b_collection_path, b_temp_path)

            for file_info in files['files']:
                file_name = file_info['name']
                if file_name == '.':
                    continue

                if file_info['ftype'] == 'file':
                    _extract_tar_file(collection_tar, file_name, b_collection_path, b_temp_path,
                                      expected_hash=file_info['chksum_sha256'])
                else:
                    os.makedirs(os.path.join(b_collection_path, to_bytes(file_name, errors='surrogate_or_strict')))

    def set_latest_version(self):
        self.versions = set([self.latest_version])
        self._get_metadata()

    def _get_metadata(self):
        if self._metadata:
            return
        self._metadata = self.api.get_collection_version_metadata(self.namespace, self.name, self.latest_version)

    def _meets_requirements(self, version, requirements, parent):
        """
        Supports version identifiers can be '==', '!=', '>', '>=', '<', '<=', '*'. Each requirement is delimited by ','
        """
        op_map = {
            '!=': operator.ne,
            '==': operator.eq,
            '=': operator.eq,
            '>=': operator.ge,
            '>': operator.gt,
            '<=': operator.le,
            '<': operator.lt,
        }

        for req in list(requirements.split(',')):
            op_pos = 2 if len(req) > 1 and req[1] == '=' else 1
            op = op_map.get(req[:op_pos])

            requirement = req[op_pos:]
            if not op:
                requirement = req
                op = operator.eq

            # In the case we are checking a new requirement on a base requirement (parent != None) we can't accept
            # version as '*' (unknown version) unless the requirement is also '*'.
            if parent and version == '*' and requirement != '*':
                break
            elif requirement == '*' or version == '*':
                continue

            if not op(LooseVersion(version), LooseVersion(requirement)):
                break
        else:
            return True

        # The loop was broken early, it does not meet all the requirements
        return False

    @staticmethod
    def from_tar(b_path, force, parent=None):
        if not tarfile.is_tarfile(b_path):
            raise AnsibleError("Collection artifact at '%s' is not a valid tar file." % to_native(b_path))

        info = {}

        with tarfile.open(b_path, mode='r') as collection_tar:
            for b_member_name, property_name in CollectionRequirement._FILE_MAPPING:
                n_member_name = to_native(b_member_name)
                try:
                    member = collection_tar.getmember(n_member_name)
                except KeyError:
                    raise AnsibleError("Collection at '%s' does not contain the required file %s."
                                       % (to_native(b_path), n_member_name))

                with _tarfile_extract(collection_tar, member) as member_obj:
                    try:
                        info[property_name] = json.loads(to_text(member_obj.read(), errors='surrogate_or_strict'))
                    except ValueError:
                        raise AnsibleError("Collection tar file member %s does not contain a valid json string."
                                           % n_member_name)

        meta = info['manifest_file']['collection_info']
        files = info['files_file']['files']

        namespace = meta['namespace']
        name = meta['name']
        version = meta['version']
        meta = CollectionVersionMetadata(namespace, name, version, None, None, meta['dependencies'])

        return CollectionRequirement(namespace, name, b_path, None, [version], version, force, parent=parent,
                                     metadata=meta, files=files)

    @staticmethod
    def from_path(b_path, force, parent=None):
        info = {}
        for b_file_name, property_name in CollectionRequirement._FILE_MAPPING:
            b_file_path = os.path.join(b_path, b_file_name)
            if not os.path.exists(b_file_path):
                continue

            with open(b_file_path, 'rb') as file_obj:
                try:
                    info[property_name] = json.loads(to_text(file_obj.read(), errors='surrogate_or_strict'))
                except ValueError:
                    raise AnsibleError("Collection file at '%s' does not contain a valid json string."
                                       % to_native(b_file_path))

        if 'manifest_file' in info:
            manifest = info['manifest_file']['collection_info']
            namespace = manifest['namespace']
            name = manifest['name']
            version = manifest['version']
            dependencies = manifest['dependencies']
        else:
            display.warning("Collection at '%s' does not have a MANIFEST.json file, cannot detect version."
                            % to_text(b_path))
            parent_dir, name = os.path.split(to_text(b_path, errors='surrogate_or_strict'))
            namespace = os.path.split(parent_dir)[1]

            version = '*'
            dependencies = {}

        meta = CollectionVersionMetadata(namespace, name, version, None, None, dependencies)

        files = info.get('files_file', {}).get('files', {})

        return CollectionRequirement(namespace, name, b_path, None, [version], version, force, parent=parent,
                                     metadata=meta, files=files, skip=True)

    @staticmethod
    def from_name(collection, apis, requirement, force, parent=None):
        namespace, name = collection.split('.', 1)
        galaxy_meta = None

        for api in apis:
            try:
                if not (requirement == '*' or requirement.startswith('<') or requirement.startswith('>') or
                        requirement.startswith('!=')):
                    if requirement.startswith('='):
                        requirement = requirement.lstrip('=')

                    resp = api.get_collection_version_metadata(namespace, name, requirement)

                    galaxy_meta = resp
                    versions = [resp.version]
                else:
                    resp = api.get_collection_versions(namespace, name)

                    # Galaxy supports semver but ansible-galaxy does not. We ignore any versions that don't match
                    # StrictVersion (x.y.z) and only support pre-releases if an explicit version was set (done above).
                    versions = [v for v in resp if StrictVersion.version_re.match(v)]
            except GalaxyError as err:
                if err.http_code == 404:
                    display.vvv("Collection '%s' is not available from server %s %s"
                                % (collection, api.name, api.api_server))
                    continue
                raise

            display.vvv("Collection '%s' obtained from server %s %s" % (collection, api.name, api.api_server))
            break
        else:
            raise AnsibleError("Failed to find collection %s:%s" % (collection, requirement))

        req = CollectionRequirement(namespace, name, None, api, versions, requirement, force, parent=parent,
                                    metadata=galaxy_meta)
        return req


def build_collection(collection_path, output_path, force):
    """
    Creates the Ansible collection artifact in a .tar.gz file.

    :param collection_path: The path to the collection to build. This should be the directory that contains the
        galaxy.yml file.
    :param output_path: The path to create the collection build artifact. This should be a directory.
    :param force: Whether to overwrite an existing collection build artifact or fail.
    :return: The path to the collection build artifact.
    """
    b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
    b_galaxy_path = os.path.join(b_collection_path, b'galaxy.yml')
    if not os.path.exists(b_galaxy_path):
        raise AnsibleError("The collection galaxy.yml path '%s' does not exist." % to_native(b_galaxy_path))

    collection_meta = _get_galaxy_yml(b_galaxy_path)
    file_manifest = _build_files_manifest(b_collection_path, collection_meta['namespace'], collection_meta['name'],
                                          collection_meta['build_ignore'])
    collection_manifest = _build_manifest(**collection_meta)

    collection_output = os.path.join(output_path, "%s-%s-%s.tar.gz" % (collection_meta['namespace'],
                                                                       collection_meta['name'],
                                                                       collection_meta['version']))

    b_collection_output = to_bytes(collection_output, errors='surrogate_or_strict')
    if os.path.exists(b_collection_output):
        if os.path.isdir(b_collection_output):
            raise AnsibleError("The output collection artifact '%s' already exists, "
                               "but is a directory - aborting" % to_native(collection_output))
        elif not force:
            raise AnsibleError("The file '%s' already exists. You can use --force to re-create "
                               "the collection artifact." % to_native(collection_output))

    _build_collection_tar(b_collection_path, b_collection_output, collection_manifest, file_manifest)


def publish_collection(collection_path, api, wait, timeout):
    """
    Publish an Ansible collection tarball into an Ansible Galaxy server.

    :param collection_path: The path to the collection tarball to publish.
    :param api: A GalaxyAPI to publish the collection to.
    :param wait: Whether to wait until the import process is complete.
    :param timeout: The time in seconds to wait for the import process to finish, 0 is indefinite.
    """
    import_uri = api.publish_collection(collection_path)

    if wait:
        # Galaxy returns a url fragment which differs between v2 and v3. The second to last entry is
        # always the task_id, though.
        # v2: {"task": "https://galaxy-dev.ansible.com/api/v2/collection-imports/35573/"}
        # v3: {"task": "/api/automation-hub/v3/imports/collections/838d1308-a8f4-402c-95cb-7823f3806cd8/"}
        task_id = None
        for path_segment in reversed(import_uri.split('/')):
            if path_segment:
                task_id = path_segment
                break

        if not task_id:
            raise AnsibleError("Publishing the collection did not return valid task info. Cannot wait for task status. Returned task info: '%s'" % import_uri)

        display.display("Collection has been published to the Galaxy server %s %s" % (api.name, api.api_server))
        with _display_progress():
            api.wait_import_task(task_id, timeout)
        display.display("Collection has been successfully published and imported to the Galaxy server %s %s"
                        % (api.name, api.api_server))
    else:
        display.display("Collection has been pushed to the Galaxy server %s %s, not waiting until import has "
                        "completed due to --no-wait being set. Import task results can be found at %s"
                        % (api.name, api.api_server, import_uri))


def install_collections(collections, output_path, apis, validate_certs, ignore_errors, no_deps, force, force_deps):
    """
    Install Ansible collections to the path specified.

    :param collections: The collections to install, should be a list of tuples with (name, requirement, Galaxy server).
    :param output_path: The path to install the collections to.
    :param apis: A list of GalaxyAPIs to query when searching for a collection.
    :param validate_certs: Whether to validate the certificates if downloading a tarball.
    :param ignore_errors: Whether to ignore any errors when installing the collection.
    :param no_deps: Ignore any collection dependencies and only install the base requirements.
    :param force: Re-install a collection if it has already been installed.
    :param force_deps: Re-install a collection as well as its dependencies if they have already been installed.
    """
    existing_collections = _find_existing_collections(output_path)

    with _tempdir() as b_temp_path:
        display.display("Process install dependency map")
        with _display_progress():
            dependency_map = _build_dependency_map(collections, existing_collections, b_temp_path, apis,
                                                   validate_certs, force, force_deps, no_deps)

        display.display("Starting collection install process")
        with _display_progress():
            for collection in dependency_map.values():
                try:
                    collection.install(output_path, b_temp_path)
                except AnsibleError as err:
                    if ignore_errors:
                        display.warning("Failed to install collection %s but skipping due to --ignore-errors being set. "
                                        "Error: %s" % (to_text(collection), to_text(err)))
                    else:
                        raise


def validate_collection_name(name):
    """
    Validates the collection name as an input from the user or a requirements file fit the requirements.

    :param name: The input name with optional range specifier split by ':'.
    :return: The input value, required for argparse validation.
    """
    collection, dummy, dummy = name.partition(':')
    if AnsibleCollectionRef.is_valid_collection_name(collection):
        return name

    raise AnsibleError("Invalid collection name '%s', "
                       "name must be in the format <namespace>.<collection>. "
                       "Please make sure namespace and collection name contains "
                       "characters from [a-zA-Z0-9_] only." % name)


@contextmanager
def _tempdir():
    b_temp_path = tempfile.mkdtemp(dir=to_bytes(C.DEFAULT_LOCAL_TMP, errors='surrogate_or_strict'))
    yield b_temp_path
    shutil.rmtree(b_temp_path)


@contextmanager
def _tarfile_extract(tar, member):
    tar_obj = tar.extractfile(member)
    yield tar_obj
    tar_obj.close()


@contextmanager
def _display_progress():
    config_display = C.GALAXY_DISPLAY_PROGRESS
    display_wheel = sys.stdout.isatty() if config_display is None else config_display

    if not display_wheel:
        yield
        return

    def progress(display_queue, actual_display):
        actual_display.debug("Starting display_progress display thread")
        t = threading.current_thread()

        while True:
            for c in "|/-\\":
                actual_display.display(c + "\b", newline=False)
                time.sleep(0.1)

            # Display a message from the main thread
            while True:
                try:
                    method, args, kwargs = display_queue.get(block=False, timeout=0.1)
                except queue.Empty:
                    break
                else:
                    func = getattr(actual_display, method)
                    func(*args, **kwargs)

            if getattr(t, "finish", False):
                actual_display.debug("Received end signal for display_progress display thread")
                return

    class DisplayThread(object):

        def __init__(self, display_queue):
            self.display_queue = display_queue

        def __getattr__(self, attr):
            def call_display(*args, **kwargs):
                self.display_queue.put((attr, args, kwargs))

            return call_display

    # Temporary override the global display class with our own which add the calls to a queue for the thread to call.
    global display
    old_display = display
    try:
        display_queue = queue.Queue()
        display = DisplayThread(display_queue)
        t = threading.Thread(target=progress, args=(display_queue, old_display))
        t.daemon = True
        t.start()

        try:
            yield
        finally:
            t.finish = True
            t.join()
    except Exception:
        # The exception is re-raised so we can sure the thread is finished and not using the display anymore
        raise
    finally:
        display = old_display


def _get_galaxy_yml(b_galaxy_yml_path):
    meta_info = get_collections_galaxy_meta_info()

    mandatory_keys = set()
    string_keys = set()
    list_keys = set()
    dict_keys = set()

    for info in meta_info:
        if info.get('required', False):
            mandatory_keys.add(info['key'])

        key_list_type = {
            'str': string_keys,
            'list': list_keys,
            'dict': dict_keys,
        }[info.get('type', 'str')]
        key_list_type.add(info['key'])

    all_keys = frozenset(list(mandatory_keys) + list(string_keys) + list(list_keys) + list(dict_keys))

    try:
        with open(b_galaxy_yml_path, 'rb') as g_yaml:
            galaxy_yml = yaml.safe_load(g_yaml)
    except YAMLError as err:
        raise AnsibleError("Failed to parse the galaxy.yml at '%s' with the following error:\n%s"
                           % (to_native(b_galaxy_yml_path), to_native(err)))

    set_keys = set(galaxy_yml.keys())
    missing_keys = mandatory_keys.difference(set_keys)
    if missing_keys:
        raise AnsibleError("The collection galaxy.yml at '%s' is missing the following mandatory keys: %s"
                           % (to_native(b_galaxy_yml_path), ", ".join(sorted(missing_keys))))

    extra_keys = set_keys.difference(all_keys)
    if len(extra_keys) > 0:
        display.warning("Found unknown keys in collection galaxy.yml at '%s': %s"
                        % (to_text(b_galaxy_yml_path), ", ".join(extra_keys)))

    # Add the defaults if they have not been set
    for optional_string in string_keys:
        if optional_string not in galaxy_yml:
            galaxy_yml[optional_string] = None

    for optional_list in list_keys:
        list_val = galaxy_yml.get(optional_list, None)

        if list_val is None:
            galaxy_yml[optional_list] = []
        elif not isinstance(list_val, list):
            galaxy_yml[optional_list] = [list_val]

    for optional_dict in dict_keys:
        if optional_dict not in galaxy_yml:
            galaxy_yml[optional_dict] = {}

    # license is a builtin var in Python, to avoid confusion we just rename it to license_ids
    galaxy_yml['license_ids'] = galaxy_yml['license']
    del galaxy_yml['license']

    return galaxy_yml


def _build_files_manifest(b_collection_path, namespace, name, ignore_patterns):
    # We always ignore .pyc and .retry files as well as some well known version control directories. The ignore
    # patterns can be extended by the build_ignore key in galaxy.yml
    b_ignore_patterns = [
        b'galaxy.yml',
        b'*.pyc',
        b'*.retry',
        b'tests/output',  # Ignore ansible-test result output directory.
        to_bytes('{0}-{1}-*.tar.gz'.format(namespace, name)),  # Ignores previously built artifacts in the root dir.
    ]
    b_ignore_patterns += [to_bytes(p) for p in ignore_patterns]
    b_ignore_dirs = frozenset([b'CVS', b'.bzr', b'.hg', b'.git', b'.svn', b'__pycache__', b'.tox'])

    entry_template = {
        'name': None,
        'ftype': None,
        'chksum_type': None,
        'chksum_sha256': None,
        'format': MANIFEST_FORMAT
    }
    manifest = {
        'files': [
            {
                'name': '.',
                'ftype': 'dir',
                'chksum_type': None,
                'chksum_sha256': None,
                'format': MANIFEST_FORMAT,
            },
        ],
        'format': MANIFEST_FORMAT,
    }

    def _walk(b_path, b_top_level_dir):
        for b_item in os.listdir(b_path):
            b_abs_path = os.path.join(b_path, b_item)
            b_rel_base_dir = b'' if b_path == b_top_level_dir else b_path[len(b_top_level_dir) + 1:]
            b_rel_path = os.path.join(b_rel_base_dir, b_item)
            rel_path = to_text(b_rel_path, errors='surrogate_or_strict')

            if os.path.isdir(b_abs_path):
                if any(b_item == b_path for b_path in b_ignore_dirs) or \
                        any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
                    display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
                    continue

                if os.path.islink(b_abs_path):
                    b_link_target = os.path.realpath(b_abs_path)

                    if not b_link_target.startswith(b_top_level_dir):
                        display.warning("Skipping '%s' as it is a symbolic link to a directory outside the collection"
                                        % to_text(b_abs_path))
                        continue

                manifest_entry = entry_template.copy()
                manifest_entry['name'] = rel_path
                manifest_entry['ftype'] = 'dir'

                manifest['files'].append(manifest_entry)

                _walk(b_abs_path, b_top_level_dir)
            else:
                if any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
                    display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
                    continue

                manifest_entry = entry_template.copy()
                manifest_entry['name'] = rel_path
                manifest_entry['ftype'] = 'file'
                manifest_entry['chksum_type'] = 'sha256'
                manifest_entry['chksum_sha256'] = secure_hash(b_abs_path, hash_func=sha256)

                manifest['files'].append(manifest_entry)

    _walk(b_collection_path, b_collection_path)

    return manifest


def _build_manifest(namespace, name, version, authors, readme, tags, description, license_ids, license_file,
                    dependencies, repository, documentation, homepage, issues, **kwargs):
    manifest = {
        'collection_info': {
            'namespace': namespace,
            'name': name,
            'version': version,
            'authors': authors,
            'readme': readme,
            'tags': tags,
            'description': description,
            'license': license_ids,
            'license_file': license_file if license_file else None,  # Handle galaxy.yml having an empty string (None)
            'dependencies': dependencies,
            'repository': repository,
            'documentation': documentation,
            'homepage': homepage,
            'issues': issues,
        },
        'file_manifest_file': {
            'name': 'FILES.json',
            'ftype': 'file',
            'chksum_type': 'sha256',
            'chksum_sha256': None,  # Filled out in _build_collection_tar
            'format': MANIFEST_FORMAT
        },
        'format': MANIFEST_FORMAT,
    }

    return manifest


def _build_collection_tar(b_collection_path, b_tar_path, collection_manifest, file_manifest):
    files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
    collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
    collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')

    with _tempdir() as b_temp_path:
        b_tar_filepath = os.path.join(b_temp_path, os.path.basename(b_tar_path))

        with tarfile.open(b_tar_filepath, mode='w:gz') as tar_file:
            # Add the MANIFEST.json and FILES.json file to the archive
            for name, b in [('MANIFEST.json', collection_manifest_json), ('FILES.json', files_manifest_json)]:
                b_io = BytesIO(b)
                tar_info = tarfile.TarInfo(name)
                tar_info.size = len(b)
                tar_info.mtime = time.time()
                tar_info.mode = 0o0644
                tar_file.addfile(tarinfo=tar_info, fileobj=b_io)

            for file_info in file_manifest['files']:
                if file_info['name'] == '.':
                    continue

                # arcname expects a native string, cannot be bytes
                filename = to_native(file_info['name'], errors='surrogate_or_strict')
                b_src_path = os.path.join(b_collection_path, to_bytes(filename, errors='surrogate_or_strict'))

                def reset_stat(tarinfo):
                    tarinfo.mode = 0o0755 if tarinfo.isdir() else 0o0644
                    tarinfo.uid = tarinfo.gid = 0
                    tarinfo.uname = tarinfo.gname = ''
                    return tarinfo

                tar_file.add(os.path.realpath(b_src_path), arcname=filename, recursive=False, filter=reset_stat)

        shutil.copy(b_tar_filepath, b_tar_path)
        collection_name = "%s.%s" % (collection_manifest['collection_info']['namespace'],
                                     collection_manifest['collection_info']['name'])
        display.display('Created collection for %s at %s' % (collection_name, to_text(b_tar_path)))


def _find_existing_collections(path):
    collections = []

    b_path = to_bytes(path, errors='surrogate_or_strict')
    for b_namespace in os.listdir(b_path):
        b_namespace_path = os.path.join(b_path, b_namespace)
        if os.path.isfile(b_namespace_path):
            continue

        for b_collection in os.listdir(b_namespace_path):
            b_collection_path = os.path.join(b_namespace_path, b_collection)
            if os.path.isdir(b_collection_path):
                req = CollectionRequirement.from_path(b_collection_path, False)
                display.vvv("Found installed collection %s:%s at '%s'" % (to_text(req), req.latest_version,
                                                                          to_text(b_collection_path)))
                collections.append(req)

    return collections


def _build_dependency_map(collections, existing_collections, b_temp_path, apis, validate_certs, force, force_deps,
                          no_deps):
    dependency_map = {}

    # First build the dependency map on the actual requirements
    for name, version, source in collections:
        _get_collection_info(dependency_map, existing_collections, name, version, source, b_temp_path, apis,
                             validate_certs, (force or force_deps))

    checked_parents = set([to_text(c) for c in dependency_map.values() if c.skip])
    while len(dependency_map) != len(checked_parents):
        while not no_deps:  # Only parse dependencies if no_deps was not set
            parents_to_check = set(dependency_map.keys()).difference(checked_parents)

            deps_exhausted = True
            for parent in parents_to_check:
                parent_info = dependency_map[parent]

                if parent_info.dependencies:
                    deps_exhausted = False
                    for dep_name, dep_requirement in parent_info.dependencies.items():
                        _get_collection_info(dependency_map, existing_collections, dep_name, dep_requirement,
                                             parent_info.api, b_temp_path, apis, validate_certs, force_deps,
                                             parent=parent)

                checked_parents.add(parent)

            # No extra dependencies were resolved, exit loop
            if deps_exhausted:
                break

        # Now we have resolved the deps to our best extent, now select the latest version for collections with
        # multiple versions found and go from there
        deps_not_checked = set(dependency_map.keys()).difference(checked_parents)
        for collection in deps_not_checked:
            dependency_map[collection].set_latest_version()
            if no_deps or len(dependency_map[collection].dependencies) == 0:
                checked_parents.add(collection)

    return dependency_map


def _get_collection_info(dep_map, existing_collections, collection, requirement, source, b_temp_path, apis,
                         validate_certs, force, parent=None):
    dep_msg = ""
    if parent:
        dep_msg = " - as dependency of %s" % parent
    display.vvv("Processing requirement collection '%s'%s" % (to_text(collection), dep_msg))

    b_tar_path = None
    if os.path.isfile(to_bytes(collection, errors='surrogate_or_strict')):
        display.vvvv("Collection requirement '%s' is a tar artifact" % to_text(collection))
        b_tar_path = to_bytes(collection, errors='surrogate_or_strict')
    elif urlparse(collection).scheme:
        display.vvvv("Collection requirement '%s' is a URL to a tar artifact" % collection)
        b_tar_path = _download_file(collection, b_temp_path, None, validate_certs)

    if b_tar_path:
        req = CollectionRequirement.from_tar(b_tar_path, force, parent=parent)

        collection_name = to_text(req)
        if collection_name in dep_map:
            collection_info = dep_map[collection_name]
            collection_info.add_requirement(None, req.latest_version)
        else:
            collection_info = req
    else:
        validate_collection_name(collection)

        display.vvvv("Collection requirement '%s' is the name of a collection" % collection)
        if collection in dep_map:
            collection_info = dep_map[collection]
            collection_info.add_requirement(parent, requirement)
        else:
            apis = [source] if source else apis
            collection_info = CollectionRequirement.from_name(collection, apis, requirement, force, parent=parent)

    existing = [c for c in existing_collections if to_text(c) == to_text(collection_info)]
    if existing and not collection_info.force:
        # Test that the installed collection fits the requirement
        existing[0].add_requirement(to_text(collection_info), requirement)
        collection_info = existing[0]

    dep_map[to_text(collection_info)] = collection_info


def _download_file(url, b_path, expected_hash, validate_certs, headers=None):
    bufsize = 65536
    digest = sha256()

    urlsplit = os.path.splitext(to_text(url.rsplit('/', 1)[1]))
    b_file_name = to_bytes(urlsplit[0], errors='surrogate_or_strict')
    b_file_ext = to_bytes(urlsplit[1], errors='surrogate_or_strict')
    b_file_path = tempfile.NamedTemporaryFile(dir=b_path, prefix=b_file_name, suffix=b_file_ext, delete=False).name

    display.vvv("Downloading %s to %s" % (url, to_text(b_path)))
    # Galaxy redirs downloads to S3 which reject the request if an Authorization header is attached so don't redir that
    resp = open_url(to_native(url, errors='surrogate_or_strict'), validate_certs=validate_certs, headers=headers,
                    unredirected_headers=['Authorization'])

    with open(b_file_path, 'wb') as download_file:
        data = resp.read(bufsize)
        while data:
            digest.update(data)
            download_file.write(data)
            data = resp.read(bufsize)

    if expected_hash:
        actual_hash = digest.hexdigest()
        display.vvvv("Validating downloaded file hash %s with expected hash %s" % (actual_hash, expected_hash))
        if expected_hash != actual_hash:
            raise AnsibleError("Mismatch artifact hash with downloaded file")

    return b_file_path


def _extract_tar_file(tar, filename, b_dest, b_temp_path, expected_hash=None):
    n_filename = to_native(filename, errors='surrogate_or_strict')
    try:
        member = tar.getmember(n_filename)
    except KeyError:
        raise AnsibleError("Collection tar at '%s' does not contain the expected file '%s'." % (to_native(tar.name),
                                                                                                n_filename))

    with tempfile.NamedTemporaryFile(dir=b_temp_path, delete=False) as tmpfile_obj:
        bufsize = 65536
        sha256_digest = sha256()
        with _tarfile_extract(tar, member) as tar_obj:
            data = tar_obj.read(bufsize)
            while data:
                tmpfile_obj.write(data)
                tmpfile_obj.flush()
                sha256_digest.update(data)
                data = tar_obj.read(bufsize)

        actual_hash = sha256_digest.hexdigest()
        if expected_hash and actual_hash != expected_hash:
            raise AnsibleError("Checksum mismatch for '%s' inside collection at '%s'"
                               % (n_filename, to_native(tar.name)))

        b_dest_filepath = os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict'))
        b_parent_dir = os.path.split(b_dest_filepath)[0]
        if not os.path.exists(b_parent_dir):
            # Seems like Galaxy does not validate if all file entries have a corresponding dir ftype entry. This check
            # makes sure we create the parent directory even if it wasn't set in the metadata.
            os.makedirs(b_parent_dir)

        shutil.move(to_bytes(tmpfile_obj.name, errors='surrogate_or_strict'), b_dest_filepath)
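The `_meets_requirements` logic above can be exercised in isolation. The following standalone sketch reimplements the operator-map parsing with a plain tuple-based version comparison instead of `LooseVersion` (the helper names `meets_requirements` and `_ver` are illustrative, not part of the module's API):

```python
import operator

def _ver(v):
    # Simplified stand-in for LooseVersion: compare dotted numeric versions as int tuples.
    return tuple(int(p) for p in v.split('.'))

def meets_requirements(version, requirements):
    """Check a version string against comma-separated specifiers like '>=1.0.0,<2.0.0'."""
    op_map = {'!=': operator.ne, '==': operator.eq, '=': operator.eq,
              '>=': operator.ge, '>': operator.gt, '<=': operator.le, '<': operator.lt}
    for req in requirements.split(','):
        # A two-character operator always has '=' in the second position.
        op_pos = 2 if len(req) > 1 and req[1] == '=' else 1
        op = op_map.get(req[:op_pos])
        requirement = req[op_pos:]
        if not op:
            # Bare version string means exact match.
            requirement = req
            op = operator.eq
        if requirement == '*' or version == '*':
            continue  # wildcard matches anything
        if not op(_ver(version), _ver(requirement)):
            return False
    return True

print(meets_requirements('1.5.0', '>=1.0.0,<2.0.0'))  # True
print(meets_requirements('2.1.0', '>=1.0.0,<2.0.0'))  # False
```

This mirrors how a single requirement string such as `>=1.0.0,<2.0.0` from a collection's dependency map is split and evaluated clause by clause.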
closed
ansible/ansible
https://github.com/ansible/ansible
65,109
"ansible-galaxy collection install" fails with URL as parameter
##### SUMMARY
Ansible Galaxy collection installation on the command line fails when the parameter is a URL.

```Shell
ansible-galaxy collection install http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz
Process install dependency map
ERROR! Invalid collection name 'http', name must be in the format <namespace>.<collection>.
```

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
**ansible-galaxy**

*collection_name* as a URL parameter should be supported according to the documentation:
```Shell
ansible-galaxy collection install -h

positional arguments:
  collection_name  The collection(s) name or path/url to a tar.gz collection artifact. This is mutually exclusive with --requirements-file.
```

##### ANSIBLE VERSION
```
ansible 2.9.1
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /tmp/venv-project/local/lib/python2.7/site-packages/ansible
  executable location = /tmp/venv-project/bin/ansible
  python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```

##### CONFIGURATION
```
DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /home/ubuntu/.ansible/.vault-dev.txt
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
```

##### OS / ENVIRONMENT
OS: Ubuntu. Ansible usage via native *apt* package or *pip*.

##### STEPS TO REPRODUCE
Create an Ansible collection and publish/upload it to an HTTP endpoint:
`http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz`

Installation via local file works fine (=> collection package is correct):
```Shell
rm -rf ~/.ansible/collections/
wget http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz
ansible-galaxy collection install my_namespace-my_collection-0.1.tar.gz
Process install dependency map
Starting collection install process
Installing 'my_namespace-my_collection:0.1-SNAPSHOT' to '/home/ubuntu/.ansible/collections/ansible_collections/my_namespace/my_collection'
```

Installation via [requirement file](https://docs.ansible.com/ansible/latest/galaxy/user_guide.html#install-multiple-collections-with-a-requirements-file) (like #61680) works fine too:

With `requirements.yml`:
```YAML
---
collections:
  - name: http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz
```

```Shell
rm -rf ~/.ansible/collections/
ansible-galaxy collection install -r requirements.yml
Process install dependency map
Starting collection install process
Installing 'my_namespace-my_collection:0.1-SNAPSHOT' to '/home/ubuntu/.ansible/collections/ansible_collections/my_namespace/my_collection'
```

But command line installation fails:
```Shell
rm -rf ~/.ansible/collections/
ansible-galaxy collection install http://repo.company.com/ansible-collection/my_namespace-my_collection-0.1.tar.gz
Process install dependency map
ERROR! Invalid collection name 'http', name must be in the format <namespace>.<collection>.
```

##### EXPECTED RESULTS
```Shell
Installing 'my_namespace-my_collection:0.1-SNAPSHOT' to '/home/ubuntu/.ansible/collections/ansible_collections/my_namespace/my_collection'
```

##### ACTUAL RESULTS
```Shell
ERROR! Invalid collection name 'http', name must be in the format <namespace>.<collection>.
```
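The error text (`Invalid collection name 'http'`) suggests the CLI argument path treats the URL as a `namespace.name:version` string and rejects it before the installer's own dispatch runs. A minimal sketch of that dispatch, as performed by `_get_collection_info` in `lib/ansible/galaxy/collection.py` (the helper name `classify_requirement` is mine, not the module's):

```python
import os
from urllib.parse import urlparse

def classify_requirement(collection):
    """Decide whether a requirement is a local tar, a URL, or a Galaxy name,
    mirroring the if/elif/else dispatch in _get_collection_info."""
    if os.path.isfile(collection):
        return 'tar'          # local tarball -> CollectionRequirement.from_tar
    if urlparse(collection).scheme:
        return 'url'          # URL -> _download_file then from_tar
    return 'name'             # otherwise validate_collection_name + from_name

print(classify_requirement('http://repo.company.com/ns-col-0.1.tar.gz'))  # url
print(classify_requirement('my_namespace.my_collection'))                 # name
```

The requirements-file code path reaches this dispatch, which is presumably why the URL works there; the fix would be to apply the same classification to positional command-line arguments instead of validating them strictly as collection names.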
https://github.com/ansible/ansible/issues/65109
https://github.com/ansible/ansible/pull/65272
41472ee3878be215af8b933b2b04b4a72b9165ca
694ef5660d45fcb97c9beea5b2750f6eadcf5e93
2019-11-20T12:51:12Z
python
2019-12-02T18:55:31Z
test/units/cli/test_galaxy.py
# -*- coding: utf-8 -*-
# (c) 2016, Adrian Likins <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.

# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import ansible
import json
import os
import pytest
import shutil
import tarfile
import tempfile
import yaml

import ansible.constants as C

from ansible import context
from ansible.cli.galaxy import GalaxyCLI
from ansible.galaxy.api import GalaxyAPI
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.utils import context_objects as co
from units.compat import unittest
from units.compat.mock import patch, MagicMock


@pytest.fixture(autouse='function')
def reset_cli_args():
    co.GlobalCLIArgs._Singleton__instance = None
    yield
    co.GlobalCLIArgs._Singleton__instance = None


class TestGalaxy(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        '''creating prerequisites for installing a role; setUpClass occurs ONCE whereas setUp occurs with every method tested.'''
        # class data for easy viewing: role_dir, role_tar, role_name, role_req, role_path

        cls.temp_dir = tempfile.mkdtemp(prefix='ansible-test_galaxy-')
        os.chdir(cls.temp_dir)

        if os.path.exists("./delete_me"):
            shutil.rmtree("./delete_me")

        # creating framework for a role
        gc = GalaxyCLI(args=["ansible-galaxy", "init", "--offline", "delete_me"])
        gc.run()
        cls.role_dir = "./delete_me"
        cls.role_name = "delete_me"

        # making a temp dir for role installation
        cls.role_path = os.path.join(tempfile.mkdtemp(), "roles")
        if not os.path.isdir(cls.role_path):
            os.makedirs(cls.role_path)

        # creating a tar file name for class data
        cls.role_tar = './delete_me.tar.gz'
        cls.makeTar(cls.role_tar, cls.role_dir)

        # creating a temp file with installation requirements
        cls.role_req = './delete_me_requirements.yml'
        fd = open(cls.role_req, "w")
        fd.write("- 'src': '%s'\n 'name': '%s'\n 'path': '%s'" % (cls.role_tar, cls.role_name, cls.role_path))
        fd.close()

    @classmethod
    def makeTar(cls, output_file, source_dir):
        ''' used for making a tarfile from a role directory '''
        # adding directory into a tar file
        try:
            tar = tarfile.open(output_file, "w:gz")
            tar.add(source_dir, arcname=os.path.basename(source_dir))
        except AttributeError:  # tarfile obj. has no attribute __exit__ prior to python 2.7
            pass
        finally:  # ensuring closure of tarfile obj
            tar.close()

    @classmethod
    def tearDownClass(cls):
        '''After tests are finished removes things created in setUpClass'''
        # deleting the temp role directory
        if os.path.exists(cls.role_dir):
            shutil.rmtree(cls.role_dir)
        if os.path.exists(cls.role_req):
            os.remove(cls.role_req)
        if os.path.exists(cls.role_tar):
            os.remove(cls.role_tar)
        if os.path.isdir(cls.role_path):
            shutil.rmtree(cls.role_path)

        os.chdir('/')
        shutil.rmtree(cls.temp_dir)

    def setUp(self):
        # Reset the stored command line args
        co.GlobalCLIArgs._Singleton__instance = None
        self.default_args = ['ansible-galaxy']

    def tearDown(self):
        # Reset the stored command line args
        co.GlobalCLIArgs._Singleton__instance = None

    def test_init(self):
        galaxy_cli = GalaxyCLI(args=self.default_args)
        self.assertTrue(isinstance(galaxy_cli, GalaxyCLI))

    def test_display_min(self):
        gc = GalaxyCLI(args=self.default_args)
        role_info = {'name': 'some_role_name'}
        display_result = gc._display_role_info(role_info)
        self.assertTrue(display_result.find('some_role_name') > -1)

    def test_display_galaxy_info(self):
        gc = GalaxyCLI(args=self.default_args)
        galaxy_info = {}
        role_info = {'name': 'some_role_name',
                     'galaxy_info': galaxy_info}
        display_result = gc._display_role_info(role_info)
        if display_result.find('\n\tgalaxy_info:') == -1:
            self.fail('Expected galaxy_info to be indented once')

    def test_run(self):
        ''' verifies that the GalaxyCLI object's api is created and that execute() is called. '''
        gc = GalaxyCLI(args=["ansible-galaxy", "install", "--ignore-errors", "imaginary_role"])
        gc.parse()
        with patch.object(ansible.cli.CLI, "run", return_value=None) as mock_run:
            gc.run()

            # testing
            self.assertIsInstance(gc.galaxy, ansible.galaxy.Galaxy)
            self.assertEqual(mock_run.call_count, 1)
            self.assertTrue(isinstance(gc.api, ansible.galaxy.api.GalaxyAPI))

    def test_execute_remove(self):
        # installing role
        gc = GalaxyCLI(args=["ansible-galaxy", "install", "-p", self.role_path, "-r", self.role_req, '--force'])
        gc.run()

        # location where the role was installed
        role_file = os.path.join(self.role_path, self.role_name)

        # removing role
        # Have to reset the arguments in the context object manually since we're doing the
        # equivalent of running the command line program twice
        co.GlobalCLIArgs._Singleton__instance = None
        gc = GalaxyCLI(args=["ansible-galaxy", "remove", role_file, self.role_name])
        gc.run()

        # testing role was removed
        removed_role = not os.path.exists(role_file)
        self.assertTrue(removed_role)

    def test_exit_without_ignore_without_flag(self):
        ''' tests that GalaxyCLI exits with the error specified if the --ignore-errors flag is not used '''
        gc = GalaxyCLI(args=["ansible-galaxy", "install", "--server=None", "fake_role_name"])
        with patch.object(ansible.utils.display.Display, "display", return_value=None) as mocked_display:
            # testing that error expected is raised
            self.assertRaises(AnsibleError, gc.run)
            self.assertTrue(mocked_display.called_once_with("- downloading role 'fake_role_name', owned by "))

    def test_exit_without_ignore_with_flag(self):
        ''' tests that GalaxyCLI exits without the error specified if the --ignore-errors flag is used '''
        # testing with --ignore-errors flag
        gc = GalaxyCLI(args=["ansible-galaxy", "install", "--server=None", "fake_role_name", "--ignore-errors"])
        with patch.object(ansible.utils.display.Display, "display", return_value=None) as mocked_display:
            gc.run()
            self.assertTrue(mocked_display.called_once_with("- downloading role 'fake_role_name', owned by "))

    def test_parse_no_action(self):
        ''' testing the options parser when no action is given '''
        gc = GalaxyCLI(args=["ansible-galaxy", ""])
        self.assertRaises(SystemExit, gc.parse)

    def test_parse_invalid_action(self):
        ''' testing the options parser when an invalid action is given '''
        gc = GalaxyCLI(args=["ansible-galaxy", "NOT_ACTION"])
        self.assertRaises(SystemExit, gc.parse)

    def test_parse_delete(self):
        ''' testing the options parser when the action 'delete' is given '''
        gc = GalaxyCLI(args=["ansible-galaxy", "delete", "foo", "bar"])
        gc.parse()
        self.assertEqual(context.CLIARGS['verbosity'], 0)

    def test_parse_import(self):
        ''' testing the options parser when the action 'import' is given '''
        gc = GalaxyCLI(args=["ansible-galaxy", "import", "foo", "bar"])
        gc.parse()
        self.assertEqual(context.CLIARGS['wait'], True)
        self.assertEqual(context.CLIARGS['reference'], None)
        self.assertEqual(context.CLIARGS['check_status'], False)
        self.assertEqual(context.CLIARGS['verbosity'], 0)

    def test_parse_info(self):
        ''' testing the options parser when the action 'info' is given '''
        gc = GalaxyCLI(args=["ansible-galaxy", "info", "foo", "bar"])
        gc.parse()
        self.assertEqual(context.CLIARGS['offline'], False)

    def test_parse_init(self):
        ''' testing the options parser when the action 'init' is given '''
        gc = GalaxyCLI(args=["ansible-galaxy", "init", "foo"])
        gc.parse()
        self.assertEqual(context.CLIARGS['offline'], False)
        self.assertEqual(context.CLIARGS['force'], False)

    def test_parse_install(self):
        ''' testing the options parser when the action 'install' is given '''
        gc = GalaxyCLI(args=["ansible-galaxy", "install"])
        gc.parse()
        self.assertEqual(context.CLIARGS['ignore_errors'], False)
        self.assertEqual(context.CLIARGS['no_deps'], False)
        self.assertEqual(context.CLIARGS['role_file'], None)
        self.assertEqual(context.CLIARGS['force'], False)

    def test_parse_list(self):
        ''' testing the options parser when the action 'list' is given '''
        gc = GalaxyCLI(args=["ansible-galaxy", "list"])
        gc.parse()
        self.assertEqual(context.CLIARGS['verbosity'], 0)

    def test_parse_login(self):
        ''' testing the options parser when the action 'login' is given '''
        gc = GalaxyCLI(args=["ansible-galaxy", "login"])
        gc.parse()
        self.assertEqual(context.CLIARGS['verbosity'], 0)
        self.assertEqual(context.CLIARGS['token'], None)

    def test_parse_remove(self):
        ''' testing the options parser when the action 'remove' is given '''
        gc = GalaxyCLI(args=["ansible-galaxy", "remove", "foo"])
        gc.parse()
        self.assertEqual(context.CLIARGS['verbosity'], 0)

    def test_parse_search(self):
        ''' testing the options parser when the action 'search' is given '''
        gc = GalaxyCLI(args=["ansible-galaxy", "search"])
        gc.parse()
        self.assertEqual(context.CLIARGS['platforms'], None)
        self.assertEqual(context.CLIARGS['galaxy_tags'], None)
        self.assertEqual(context.CLIARGS['author'], None)

    def test_parse_setup(self):
        ''' testing the options parser when the action 'setup' is given '''
        gc = GalaxyCLI(args=["ansible-galaxy", "setup", "source", "github_user", "github_repo", "secret"])
        gc.parse()
        self.assertEqual(context.CLIARGS['verbosity'], 0)
        self.assertEqual(context.CLIARGS['remove_id'], None)
        self.assertEqual(context.CLIARGS['setup_list'], False)


class ValidRoleTests(object):

    expected_role_dirs = ('defaults', 'files', 'handlers', 'meta', 'tasks', 'templates', 'vars', 'tests')

    @classmethod
    def setUpRole(cls, role_name, galaxy_args=None, skeleton_path=None, use_explicit_type=False):
        if galaxy_args is None:
            galaxy_args = []

        if skeleton_path is not None:
            cls.role_skeleton_path = skeleton_path
            galaxy_args +=
['--role-skeleton', skeleton_path] # Make temp directory for testing cls.test_dir = tempfile.mkdtemp() if not os.path.isdir(cls.test_dir): os.makedirs(cls.test_dir) cls.role_dir = os.path.join(cls.test_dir, role_name) cls.role_name = role_name # create role using default skeleton args = ['ansible-galaxy'] if use_explicit_type: args += ['role'] args += ['init', '-c', '--offline'] + galaxy_args + ['--init-path', cls.test_dir, cls.role_name] gc = GalaxyCLI(args=args) gc.run() cls.gc = gc if skeleton_path is None: cls.role_skeleton_path = gc.galaxy.default_role_skeleton_path @classmethod def tearDownClass(cls): if os.path.isdir(cls.test_dir): shutil.rmtree(cls.test_dir) def test_metadata(self): with open(os.path.join(self.role_dir, 'meta', 'main.yml'), 'r') as mf: metadata = yaml.safe_load(mf) self.assertIn('galaxy_info', metadata, msg='unable to find galaxy_info in metadata') self.assertIn('dependencies', metadata, msg='unable to find dependencies in metadata') def test_readme(self): readme_path = os.path.join(self.role_dir, 'README.md') self.assertTrue(os.path.exists(readme_path), msg='Readme doesn\'t exist') def test_main_ymls(self): need_main_ymls = set(self.expected_role_dirs) - set(['meta', 'tests', 'files', 'templates']) for d in need_main_ymls: main_yml = os.path.join(self.role_dir, d, 'main.yml') self.assertTrue(os.path.exists(main_yml)) expected_string = "---\n# {0} file for {1}".format(d, self.role_name) with open(main_yml, 'r') as f: self.assertEqual(expected_string, f.read().strip()) def test_role_dirs(self): for d in self.expected_role_dirs: self.assertTrue(os.path.isdir(os.path.join(self.role_dir, d)), msg="Expected role subdirectory {0} doesn't exist".format(d)) def test_travis_yml(self): with open(os.path.join(self.role_dir, '.travis.yml'), 'r') as f: contents = f.read() with open(os.path.join(self.role_skeleton_path, '.travis.yml'), 'r') as f: expected_contents = f.read() self.assertEqual(expected_contents, contents, msg='.travis.yml does not match 
expected') def test_readme_contents(self): with open(os.path.join(self.role_dir, 'README.md'), 'r') as readme: contents = readme.read() with open(os.path.join(self.role_skeleton_path, 'README.md'), 'r') as f: expected_contents = f.read() self.assertEqual(expected_contents, contents, msg='README.md does not match expected') def test_test_yml(self): with open(os.path.join(self.role_dir, 'tests', 'test.yml'), 'r') as f: test_playbook = yaml.safe_load(f) print(test_playbook) self.assertEqual(len(test_playbook), 1) self.assertEqual(test_playbook[0]['hosts'], 'localhost') self.assertEqual(test_playbook[0]['remote_user'], 'root') self.assertListEqual(test_playbook[0]['roles'], [self.role_name], msg='The list of roles included in the test play doesn\'t match') class TestGalaxyInitDefault(unittest.TestCase, ValidRoleTests): @classmethod def setUpClass(cls): cls.setUpRole(role_name='delete_me') def test_metadata_contents(self): with open(os.path.join(self.role_dir, 'meta', 'main.yml'), 'r') as mf: metadata = yaml.safe_load(mf) self.assertEqual(metadata.get('galaxy_info', dict()).get('author'), 'your name', msg='author was not set properly in metadata') class TestGalaxyInitAPB(unittest.TestCase, ValidRoleTests): @classmethod def setUpClass(cls): cls.setUpRole('delete_me_apb', galaxy_args=['--type=apb']) def test_metadata_apb_tag(self): with open(os.path.join(self.role_dir, 'meta', 'main.yml'), 'r') as mf: metadata = yaml.safe_load(mf) self.assertIn('apb', metadata.get('galaxy_info', dict()).get('galaxy_tags', []), msg='apb tag not set in role metadata') def test_metadata_contents(self): with open(os.path.join(self.role_dir, 'meta', 'main.yml'), 'r') as mf: metadata = yaml.safe_load(mf) self.assertEqual(metadata.get('galaxy_info', dict()).get('author'), 'your name', msg='author was not set properly in metadata') def test_apb_yml(self): self.assertTrue(os.path.exists(os.path.join(self.role_dir, 'apb.yml')), msg='apb.yml was not created') def test_test_yml(self): with 
open(os.path.join(self.role_dir, 'tests', 'test.yml'), 'r') as f: test_playbook = yaml.safe_load(f) print(test_playbook) self.assertEqual(len(test_playbook), 1) self.assertEqual(test_playbook[0]['hosts'], 'localhost') self.assertFalse(test_playbook[0]['gather_facts']) self.assertEqual(test_playbook[0]['connection'], 'local') self.assertIsNone(test_playbook[0]['tasks'], msg='We\'re expecting an unset list of tasks in test.yml') class TestGalaxyInitContainer(unittest.TestCase, ValidRoleTests): @classmethod def setUpClass(cls): cls.setUpRole('delete_me_container', galaxy_args=['--type=container']) def test_metadata_container_tag(self): with open(os.path.join(self.role_dir, 'meta', 'main.yml'), 'r') as mf: metadata = yaml.safe_load(mf) self.assertIn('container', metadata.get('galaxy_info', dict()).get('galaxy_tags', []), msg='container tag not set in role metadata') def test_metadata_contents(self): with open(os.path.join(self.role_dir, 'meta', 'main.yml'), 'r') as mf: metadata = yaml.safe_load(mf) self.assertEqual(metadata.get('galaxy_info', dict()).get('author'), 'your name', msg='author was not set properly in metadata') def test_meta_container_yml(self): self.assertTrue(os.path.exists(os.path.join(self.role_dir, 'meta', 'container.yml')), msg='container.yml was not created') def test_test_yml(self): with open(os.path.join(self.role_dir, 'tests', 'test.yml'), 'r') as f: test_playbook = yaml.safe_load(f) print(test_playbook) self.assertEqual(len(test_playbook), 1) self.assertEqual(test_playbook[0]['hosts'], 'localhost') self.assertFalse(test_playbook[0]['gather_facts']) self.assertEqual(test_playbook[0]['connection'], 'local') self.assertIsNone(test_playbook[0]['tasks'], msg='We\'re expecting an unset list of tasks in test.yml') class TestGalaxyInitSkeleton(unittest.TestCase, ValidRoleTests): @classmethod def setUpClass(cls): role_skeleton_path = os.path.join(os.path.split(__file__)[0], 'test_data', 'role_skeleton') cls.setUpRole('delete_me_skeleton', 
skeleton_path=role_skeleton_path, use_explicit_type=True) def test_empty_files_dir(self): files_dir = os.path.join(self.role_dir, 'files') self.assertTrue(os.path.isdir(files_dir)) self.assertListEqual(os.listdir(files_dir), [], msg='we expect the files directory to be empty, is ignore working?') def test_template_ignore_jinja(self): test_conf_j2 = os.path.join(self.role_dir, 'templates', 'test.conf.j2') self.assertTrue(os.path.exists(test_conf_j2), msg="The test.conf.j2 template doesn't seem to exist, is it being rendered as test.conf?") with open(test_conf_j2, 'r') as f: contents = f.read() expected_contents = '[defaults]\ntest_key = {{ test_variable }}' self.assertEqual(expected_contents, contents.strip(), msg="test.conf.j2 doesn't contain what it should, is it being rendered?") def test_template_ignore_jinja_subfolder(self): test_conf_j2 = os.path.join(self.role_dir, 'templates', 'subfolder', 'test.conf.j2') self.assertTrue(os.path.exists(test_conf_j2), msg="The test.conf.j2 template doesn't seem to exist, is it being rendered as test.conf?") with open(test_conf_j2, 'r') as f: contents = f.read() expected_contents = '[defaults]\ntest_key = {{ test_variable }}' self.assertEqual(expected_contents, contents.strip(), msg="test.conf.j2 doesn't contain what it should, is it being rendered?") def test_template_ignore_similar_folder(self): self.assertTrue(os.path.exists(os.path.join(self.role_dir, 'templates_extra', 'templates.txt'))) def test_skeleton_option(self): self.assertEqual(self.role_skeleton_path, context.CLIARGS['role_skeleton'], msg='Skeleton path was not parsed properly from the command line') @pytest.mark.parametrize('cli_args, expected', [ (['ansible-galaxy', 'collection', 'init', 'abc.def'], 0), (['ansible-galaxy', 'collection', 'init', 'abc.def', '-vvv'], 3), (['ansible-galaxy', '-vv', 'collection', 'init', 'abc.def'], 2), # Due to our manual parsing we want to verify that -v set in the sub parser takes precedence. 
This behaviour is # deprecated and tests should be removed when the code that handles it is removed (['ansible-galaxy', '-vv', 'collection', 'init', 'abc.def', '-v'], 1), (['ansible-galaxy', '-vv', 'collection', 'init', 'abc.def', '-vvvv'], 4), (['ansible-galaxy', '-vvv', 'init', 'name'], 3), (['ansible-galaxy', '-vvvvv', 'init', '-v', 'name'], 1), ]) def test_verbosity_arguments(cli_args, expected, monkeypatch): # Mock out the functions so we don't actually execute anything for func_name in [f for f in dir(GalaxyCLI) if f.startswith("execute_")]: monkeypatch.setattr(GalaxyCLI, func_name, MagicMock()) cli = GalaxyCLI(args=cli_args) cli.run() assert context.CLIARGS['verbosity'] == expected @pytest.fixture() def collection_skeleton(request, tmp_path_factory): name, skeleton_path = request.param galaxy_args = ['ansible-galaxy', 'collection', 'init', '-c'] if skeleton_path is not None: galaxy_args += ['--collection-skeleton', skeleton_path] test_dir = to_text(tmp_path_factory.mktemp('test-Γ…Γ‘ΕšΓŒΞ²ΕΓˆ Collections')) galaxy_args += ['--init-path', test_dir, name] GalaxyCLI(args=galaxy_args).run() namespace_name, collection_name = name.split('.', 1) collection_dir = os.path.join(test_dir, namespace_name, collection_name) return collection_dir @pytest.mark.parametrize('collection_skeleton', [ ('ansible_test.my_collection', None), ], indirect=True) def test_collection_default(collection_skeleton): meta_path = os.path.join(collection_skeleton, 'galaxy.yml') with open(meta_path, 'r') as galaxy_meta: metadata = yaml.safe_load(galaxy_meta) assert metadata['namespace'] == 'ansible_test' assert metadata['name'] == 'my_collection' assert metadata['authors'] == ['your name <[email protected]>'] assert metadata['readme'] == 'README.md' assert metadata['version'] == '1.0.0' assert metadata['description'] == 'your collection description' assert metadata['license'] == ['GPL-2.0-or-later'] assert metadata['tags'] == [] assert metadata['dependencies'] == {} assert 
metadata['documentation'] == 'http://docs.example.com' assert metadata['repository'] == 'http://example.com/repository' assert metadata['homepage'] == 'http://example.com' assert metadata['issues'] == 'http://example.com/issue/tracker' for d in ['docs', 'plugins', 'roles']: assert os.path.isdir(os.path.join(collection_skeleton, d)), \ "Expected collection subdirectory {0} doesn't exist".format(d) @pytest.mark.parametrize('collection_skeleton', [ ('ansible_test.delete_me_skeleton', os.path.join(os.path.split(__file__)[0], 'test_data', 'collection_skeleton')), ], indirect=True) def test_collection_skeleton(collection_skeleton): meta_path = os.path.join(collection_skeleton, 'galaxy.yml') with open(meta_path, 'r') as galaxy_meta: metadata = yaml.safe_load(galaxy_meta) assert metadata['namespace'] == 'ansible_test' assert metadata['name'] == 'delete_me_skeleton' assert metadata['authors'] == ['Ansible Cow <[email protected]>', 'Tu Cow <[email protected]>'] assert metadata['version'] == '0.1.0' assert metadata['readme'] == 'README.md' assert len(metadata) == 5 assert os.path.exists(os.path.join(collection_skeleton, 'README.md')) # Test empty directories exist and are empty for empty_dir in ['plugins/action', 'plugins/filter', 'plugins/inventory', 'plugins/lookup', 'plugins/module_utils', 'plugins/modules']: assert os.listdir(os.path.join(collection_skeleton, empty_dir)) == [] # Test files that don't end with .j2 were not templated doc_file = os.path.join(collection_skeleton, 'docs', 'My Collection.md') with open(doc_file, 'r') as f: doc_contents = f.read() assert doc_contents.strip() == 'Welcome to my test collection doc for {{ namespace }}.' 
    # Test files that end with .j2 but are in the templates directory were not templated
    for template_dir in ['playbooks/templates', 'playbooks/templates/subfolder',
                         'roles/common/templates', 'roles/common/templates/subfolder']:
        test_conf_j2 = os.path.join(collection_skeleton, template_dir, 'test.conf.j2')
        assert os.path.exists(test_conf_j2)

        with open(test_conf_j2, 'r') as f:
            contents = f.read()

        expected_contents = '[defaults]\ntest_key = {{ test_variable }}'
        assert expected_contents == contents.strip()


@pytest.fixture()
def collection_artifact(collection_skeleton, tmp_path_factory):
    ''' Creates a collection artifact tarball that is ready to be published and installed '''
    output_dir = to_text(tmp_path_factory.mktemp('test-Γ…Γ‘ΕšΓŒΞ²ΕΓˆ Output'))

    # Because we call GalaxyCLI in collection_skeleton we need to reset the singleton back to None so it uses the new
    # args, we reset the original args once it is done.
    orig_cli_args = co.GlobalCLIArgs._Singleton__instance
    try:
        co.GlobalCLIArgs._Singleton__instance = None
        galaxy_args = ['ansible-galaxy', 'collection', 'build', collection_skeleton, '--output-path', output_dir]
        gc = GalaxyCLI(args=galaxy_args)
        gc.run()

        yield output_dir
    finally:
        co.GlobalCLIArgs._Singleton__instance = orig_cli_args


def test_invalid_skeleton_path():
    expected = "- the skeleton path '/fake/path' does not exist, cannot init collection"

    gc = GalaxyCLI(args=['ansible-galaxy', 'collection', 'init', 'my.collection', '--collection-skeleton',
                         '/fake/path'])
    with pytest.raises(AnsibleError, match=expected):
        gc.run()


@pytest.mark.parametrize("name", [
    "",
    "invalid",
    "hyphen-ns.collection",
    "ns.hyphen-collection",
    "ns.collection.weird",
])
def test_invalid_collection_name_init(name):
    expected = "Invalid collection name '%s', name must be in the format <namespace>.<collection>" % name

    gc = GalaxyCLI(args=['ansible-galaxy', 'collection', 'init', name])
    with pytest.raises(AnsibleError, match=expected):
        gc.run()


@pytest.mark.parametrize("name, expected", [
    ("", ""),
    ("invalid", "invalid"),
    ("invalid:1.0.0", "invalid"),
    ("hyphen-ns.collection", "hyphen-ns.collection"),
    ("ns.hyphen-collection", "ns.hyphen-collection"),
    ("ns.collection.weird", "ns.collection.weird"),
])
def test_invalid_collection_name_install(name, expected, tmp_path_factory):
    install_path = to_text(tmp_path_factory.mktemp('test-Γ…Γ‘ΕšΓŒΞ²ΕΓˆ Collections'))

    expected = "Invalid collection name '%s', name must be in the format <namespace>.<collection>" % expected

    gc = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', name, '-p', os.path.join(install_path, 'install')])
    with pytest.raises(AnsibleError, match=expected):
        gc.run()


@pytest.mark.parametrize('collection_skeleton', [
    ('ansible_test.build_collection', None),
], indirect=True)
def test_collection_build(collection_artifact):
    tar_path = os.path.join(collection_artifact, 'ansible_test-build_collection-1.0.0.tar.gz')
    assert tarfile.is_tarfile(tar_path)

    with tarfile.open(tar_path, mode='r') as tar:
        tar_members = tar.getmembers()

        valid_files = ['MANIFEST.json', 'FILES.json', 'roles', 'docs', 'plugins', 'plugins/README.md', 'README.md']
        assert len(tar_members) == 7

        # Verify the uid and gid is 0 and the correct perms are set
        for member in tar_members:
            assert member.name in valid_files

            assert member.gid == 0
            assert member.gname == ''
            assert member.uid == 0
            assert member.uname == ''
            if member.isdir():
                assert member.mode == 0o0755
            else:
                assert member.mode == 0o0644

        manifest_file = tar.extractfile(tar_members[0])
        try:
            manifest = json.loads(to_text(manifest_file.read()))
        finally:
            manifest_file.close()

        coll_info = manifest['collection_info']
        file_manifest = manifest['file_manifest_file']
        assert manifest['format'] == 1
        assert len(manifest.keys()) == 3

        assert coll_info['namespace'] == 'ansible_test'
        assert coll_info['name'] == 'build_collection'
        assert coll_info['version'] == '1.0.0'
        assert coll_info['authors'] == ['your name <[email protected]>']
        assert coll_info['readme'] == 'README.md'
        assert coll_info['tags'] == []
        assert coll_info['description'] == 'your collection description'
        assert coll_info['license'] == ['GPL-2.0-or-later']
        assert coll_info['license_file'] is None
        assert coll_info['dependencies'] == {}
        assert coll_info['repository'] == 'http://example.com/repository'
        assert coll_info['documentation'] == 'http://docs.example.com'
        assert coll_info['homepage'] == 'http://example.com'
        assert coll_info['issues'] == 'http://example.com/issue/tracker'
        assert len(coll_info.keys()) == 14

        assert file_manifest['name'] == 'FILES.json'
        assert file_manifest['ftype'] == 'file'
        assert file_manifest['chksum_type'] == 'sha256'
        assert file_manifest['chksum_sha256'] is not None  # Order of keys makes it hard to verify the checksum
        assert file_manifest['format'] == 1
        assert len(file_manifest.keys()) == 5

        files_file = tar.extractfile(tar_members[1])
        try:
            files = json.loads(to_text(files_file.read()))
        finally:
            files_file.close()

        assert len(files['files']) == 6
        assert files['format'] == 1
        assert len(files.keys()) == 2

        valid_files_entries = ['.', 'roles', 'docs', 'plugins', 'plugins/README.md', 'README.md']
        for file_entry in files['files']:
            assert file_entry['name'] in valid_files_entries
            assert file_entry['format'] == 1

            if file_entry['name'] == 'plugins/README.md':
                assert file_entry['ftype'] == 'file'
                assert file_entry['chksum_type'] == 'sha256'
                # Can't test the actual checksum as the html link changes based on the version.
                assert file_entry['chksum_sha256'] is not None
            elif file_entry['name'] == 'README.md':
                assert file_entry['ftype'] == 'file'
                assert file_entry['chksum_type'] == 'sha256'
                assert file_entry['chksum_sha256'] == '45923ca2ece0e8ce31d29e5df9d8b649fe55e2f5b5b61c9724d7cc187bd6ad4a'
            else:
                assert file_entry['ftype'] == 'dir'
                assert file_entry['chksum_type'] is None
                assert file_entry['chksum_sha256'] is None

            assert len(file_entry.keys()) == 5


@pytest.fixture()
def collection_install(reset_cli_args, tmp_path_factory, monkeypatch):
    mock_install = MagicMock()
    monkeypatch.setattr(ansible.cli.galaxy, 'install_collections', mock_install)

    mock_warning = MagicMock()
    monkeypatch.setattr(ansible.utils.display.Display, 'warning', mock_warning)

    output_dir = to_text((tmp_path_factory.mktemp('test-Γ…Γ‘ΕšΓŒΞ²ΕΓˆ Output')))
    yield mock_install, mock_warning, output_dir


def test_collection_install_with_names(collection_install):
    mock_install, mock_warning, output_dir = collection_install

    galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', 'namespace2.collection:1.0.1',
                   '--collections-path', output_dir]
    GalaxyCLI(args=galaxy_args).run()

    collection_path = os.path.join(output_dir, 'ansible_collections')
    assert os.path.isdir(collection_path)

    assert mock_warning.call_count == 1
    assert "The specified collections path '%s' is not part of the configured Ansible collections path" % output_dir \
        in mock_warning.call_args[0][0]

    assert mock_install.call_count == 1
    assert mock_install.call_args[0][0] == [('namespace.collection', '*', None),
                                            ('namespace2.collection', '1.0.1', None)]
    assert mock_install.call_args[0][1] == collection_path
    assert len(mock_install.call_args[0][2]) == 1
    assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
    assert mock_install.call_args[0][2][0].validate_certs is True
    assert mock_install.call_args[0][3] is True
    assert mock_install.call_args[0][4] is False
    assert mock_install.call_args[0][5] is False
    assert mock_install.call_args[0][6] is False
    assert mock_install.call_args[0][7] is False


def test_collection_install_with_requirements_file(collection_install):
    mock_install, mock_warning, output_dir = collection_install

    requirements_file = os.path.join(output_dir, 'requirements.yml')
    with open(requirements_file, 'wb') as req_obj:
        req_obj.write(b'''---
collections:
- namespace.coll
- name: namespace2.coll
  version: '>2.0.1'
''')

    galaxy_args = ['ansible-galaxy', 'collection', 'install', '--requirements-file', requirements_file,
                   '--collections-path', output_dir]
    GalaxyCLI(args=galaxy_args).run()

    collection_path = os.path.join(output_dir, 'ansible_collections')
    assert os.path.isdir(collection_path)

    assert mock_warning.call_count == 1
    assert "The specified collections path '%s' is not part of the configured Ansible collections path" % output_dir \
        in mock_warning.call_args[0][0]

    assert mock_install.call_count == 1
    assert mock_install.call_args[0][0] == [('namespace.coll', '*', None), ('namespace2.coll', '>2.0.1', None)]
    assert mock_install.call_args[0][1] == collection_path
    assert len(mock_install.call_args[0][2]) == 1
    assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
    assert mock_install.call_args[0][2][0].validate_certs is True
    assert mock_install.call_args[0][3] is True
    assert mock_install.call_args[0][4] is False
    assert mock_install.call_args[0][5] is False
    assert mock_install.call_args[0][6] is False
    assert mock_install.call_args[0][7] is False


def test_collection_install_with_relative_path(collection_install, monkeypatch):
    mock_install = collection_install[0]

    mock_req = MagicMock()
    mock_req.return_value = {'collections': [('namespace.coll', '*', None)]}
    monkeypatch.setattr(ansible.cli.galaxy.GalaxyCLI, '_parse_requirements_file', mock_req)
    monkeypatch.setattr(os, 'makedirs', MagicMock())

    requirements_file = './requirements.myl'
    collections_path = './ansible_collections'
    galaxy_args = ['ansible-galaxy', 'collection', 'install', '--requirements-file', requirements_file,
                   '--collections-path', collections_path]
    GalaxyCLI(args=galaxy_args).run()

    assert mock_install.call_count == 1
    assert mock_install.call_args[0][0] == [('namespace.coll', '*', None)]
    assert mock_install.call_args[0][1] == os.path.abspath(collections_path)
    assert len(mock_install.call_args[0][2]) == 1
    assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
    assert mock_install.call_args[0][2][0].validate_certs is True
    assert mock_install.call_args[0][3] is True
    assert mock_install.call_args[0][4] is False
    assert mock_install.call_args[0][5] is False
    assert mock_install.call_args[0][6] is False
    assert mock_install.call_args[0][7] is False

    assert mock_req.call_count == 1
    assert mock_req.call_args[0][0] == os.path.abspath(requirements_file)


def test_collection_install_with_unexpanded_path(collection_install, monkeypatch):
    mock_install = collection_install[0]

    mock_req = MagicMock()
    mock_req.return_value = {'collections': [('namespace.coll', '*', None)]}
    monkeypatch.setattr(ansible.cli.galaxy.GalaxyCLI, '_parse_requirements_file', mock_req)
    monkeypatch.setattr(os, 'makedirs', MagicMock())

    requirements_file = '~/requirements.myl'
    collections_path = '~/ansible_collections'
    galaxy_args = ['ansible-galaxy', 'collection', 'install', '--requirements-file', requirements_file,
                   '--collections-path', collections_path]
    GalaxyCLI(args=galaxy_args).run()

    assert mock_install.call_count == 1
    assert mock_install.call_args[0][0] == [('namespace.coll', '*', None)]
    assert mock_install.call_args[0][1] == os.path.expanduser(os.path.expandvars(collections_path))
    assert len(mock_install.call_args[0][2]) == 1
    assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
    assert mock_install.call_args[0][2][0].validate_certs is True
    assert mock_install.call_args[0][3] is True
    assert mock_install.call_args[0][4] is False
    assert mock_install.call_args[0][5] is False
    assert mock_install.call_args[0][6] is False
    assert mock_install.call_args[0][7] is False

    assert mock_req.call_count == 1
    assert mock_req.call_args[0][0] == os.path.expanduser(os.path.expandvars(requirements_file))


def test_collection_install_in_collection_dir(collection_install, monkeypatch):
    mock_install, mock_warning, output_dir = collection_install

    collections_path = C.COLLECTIONS_PATHS[0]

    galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', 'namespace2.collection:1.0.1',
                   '--collections-path', collections_path]
    GalaxyCLI(args=galaxy_args).run()

    assert mock_warning.call_count == 0

    assert mock_install.call_count == 1
    assert mock_install.call_args[0][0] == [('namespace.collection', '*', None),
                                            ('namespace2.collection', '1.0.1', None)]
    assert mock_install.call_args[0][1] == os.path.join(collections_path, 'ansible_collections')
    assert len(mock_install.call_args[0][2]) == 1
    assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
    assert mock_install.call_args[0][2][0].validate_certs is True
    assert mock_install.call_args[0][3] is True
    assert mock_install.call_args[0][4] is False
    assert mock_install.call_args[0][5] is False
    assert mock_install.call_args[0][6] is False
    assert mock_install.call_args[0][7] is False


def test_collection_install_name_and_requirements_fail(collection_install):
    test_path = collection_install[2]
    expected = 'The positional collection_name arg and --requirements-file are mutually exclusive.'

    with pytest.raises(AnsibleError, match=expected):
        GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path',
                        test_path, '--requirements-file', test_path]).run()


def test_collection_install_no_name_and_requirements_fail(collection_install):
    test_path = collection_install[2]
    expected = 'You must specify a collection name or a requirements file.'

    with pytest.raises(AnsibleError, match=expected):
        GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', '--collections-path', test_path]).run()


def test_collection_install_path_with_ansible_collections(collection_install):
    mock_install, mock_warning, output_dir = collection_install

    collection_path = os.path.join(output_dir, 'ansible_collections')

    galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', 'namespace2.collection:1.0.1',
                   '--collections-path', collection_path]
    GalaxyCLI(args=galaxy_args).run()

    assert os.path.isdir(collection_path)

    assert mock_warning.call_count == 1
    assert "The specified collections path '%s' is not part of the configured Ansible collections path" \
        % collection_path in mock_warning.call_args[0][0]

    assert mock_install.call_count == 1
    assert mock_install.call_args[0][0] == [('namespace.collection', '*', None),
                                            ('namespace2.collection', '1.0.1', None)]
    assert mock_install.call_args[0][1] == collection_path
    assert len(mock_install.call_args[0][2]) == 1
    assert mock_install.call_args[0][2][0].api_server == 'https://galaxy.ansible.com'
    assert mock_install.call_args[0][2][0].validate_certs is True
    assert mock_install.call_args[0][3] is True
    assert mock_install.call_args[0][4] is False
    assert mock_install.call_args[0][5] is False
    assert mock_install.call_args[0][6] is False
    assert mock_install.call_args[0][7] is False


def test_collection_install_ignore_certs(collection_install):
    mock_install, mock_warning, output_dir = collection_install

    galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path', output_dir,
                   '--ignore-certs']
    GalaxyCLI(args=galaxy_args).run()

    assert mock_install.call_args[0][3] is False


def test_collection_install_force(collection_install):
    mock_install, mock_warning, output_dir = collection_install

    galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path', output_dir,
                   '--force']
    GalaxyCLI(args=galaxy_args).run()

    assert mock_install.call_args[0][6] is True


def test_collection_install_force_deps(collection_install):
    mock_install, mock_warning, output_dir = collection_install

    galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path', output_dir,
                   '--force-with-deps']
    GalaxyCLI(args=galaxy_args).run()

    assert mock_install.call_args[0][7] is True


def test_collection_install_no_deps(collection_install):
    mock_install, mock_warning, output_dir = collection_install

    galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path', output_dir,
                   '--no-deps']
    GalaxyCLI(args=galaxy_args).run()

    assert mock_install.call_args[0][5] is True


def test_collection_install_ignore(collection_install):
    mock_install, mock_warning, output_dir = collection_install

    galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path', output_dir,
                   '--ignore-errors']
    GalaxyCLI(args=galaxy_args).run()

    assert mock_install.call_args[0][4] is True


def test_collection_install_custom_server(collection_install):
    mock_install, mock_warning, output_dir = collection_install

    galaxy_args = ['ansible-galaxy', 'collection', 'install', 'namespace.collection', '--collections-path', output_dir,
                   '--server', 'https://galaxy-dev.ansible.com']
    GalaxyCLI(args=galaxy_args).run()

    assert len(mock_install.call_args[0][2]) == 1
    assert mock_install.call_args[0][2][0].api_server == 'https://galaxy-dev.ansible.com'
    assert mock_install.call_args[0][2][0].validate_certs is True


@pytest.fixture()
def requirements_file(request, tmp_path_factory):
    content = request.param

    test_dir = to_text(tmp_path_factory.mktemp('test-Γ…Γ‘ΕšΓŒΞ²ΕΓˆ Collections Requirements'))
    requirements_file = os.path.join(test_dir, 'requirements.yml')

    if content:
        with open(requirements_file, 'wb') as req_obj:
            req_obj.write(to_bytes(content))

    yield requirements_file


@pytest.fixture()
def requirements_cli(monkeypatch):
    monkeypatch.setattr(GalaxyCLI, 'execute_install', MagicMock())
    cli = GalaxyCLI(args=['ansible-galaxy', 'install'])
    cli.run()
    return cli


@pytest.mark.parametrize('requirements_file', [None], indirect=True)
def test_parse_requirements_file_that_doesnt_exist(requirements_cli, requirements_file):
    expected = "The requirements file '%s' does not exist." % to_native(requirements_file)
    with pytest.raises(AnsibleError, match=expected):
        requirements_cli._parse_requirements_file(requirements_file)


@pytest.mark.parametrize('requirements_file', ['not a valid yml file: hi: world'], indirect=True)
def test_parse_requirements_file_that_isnt_yaml(requirements_cli, requirements_file):
    expected = "Failed to parse the requirements yml at '%s' with the following error" % to_native(requirements_file)
    with pytest.raises(AnsibleError, match=expected):
        requirements_cli._parse_requirements_file(requirements_file)


@pytest.mark.parametrize('requirements_file', [('''
# Older role based requirements.yml
- galaxy.role
- anotherrole
''')], indirect=True)
def test_parse_requirements_in_older_format_illegal(requirements_cli, requirements_file):
    expected = "Expecting requirements file to be a dict with the key 'collections' that contains a list of " \
               "collections to install"
    with pytest.raises(AnsibleError, match=expected):
        requirements_cli._parse_requirements_file(requirements_file, allow_old_format=False)


@pytest.mark.parametrize('requirements_file', ['''
collections:
- version: 1.0.0
'''], indirect=True)
def test_parse_requirements_without_mandatory_name_key(requirements_cli, requirements_file):
    expected = "Collections requirement entry should contain the key name."
with pytest.raises(AnsibleError, match=expected): requirements_cli._parse_requirements_file(requirements_file) @pytest.mark.parametrize('requirements_file', [(''' collections: - namespace.collection1 - namespace.collection2 '''), (''' collections: - name: namespace.collection1 - name: namespace.collection2 ''')], indirect=True) def test_parse_requirements(requirements_cli, requirements_file): expected = { 'roles': [], 'collections': [('namespace.collection1', '*', None), ('namespace.collection2', '*', None)] } actual = requirements_cli._parse_requirements_file(requirements_file) assert actual == expected @pytest.mark.parametrize('requirements_file', [''' collections: - name: namespace.collection1 version: ">=1.0.0,<=2.0.0" source: https://galaxy-dev.ansible.com - namespace.collection2'''], indirect=True) def test_parse_requirements_with_extra_info(requirements_cli, requirements_file): actual = requirements_cli._parse_requirements_file(requirements_file) assert len(actual['roles']) == 0 assert len(actual['collections']) == 2 assert actual['collections'][0][0] == 'namespace.collection1' assert actual['collections'][0][1] == '>=1.0.0,<=2.0.0' assert actual['collections'][0][2].api_server == 'https://galaxy-dev.ansible.com' assert actual['collections'][0][2].name == 'explicit_requirement_namespace.collection1' assert actual['collections'][0][2].token is None assert actual['collections'][0][2].username is None assert actual['collections'][0][2].password is None assert actual['collections'][0][2].validate_certs is True assert actual['collections'][1] == ('namespace.collection2', '*', None) @pytest.mark.parametrize('requirements_file', [''' roles: - username.role_name - src: username2.role_name2 - src: ssh://github.com/user/repo scm: git collections: - namespace.collection2 '''], indirect=True) def test_parse_requirements_with_roles_and_collections(requirements_cli, requirements_file): actual = requirements_cli._parse_requirements_file(requirements_file) assert 
len(actual['roles']) == 3 assert actual['roles'][0].name == 'username.role_name' assert actual['roles'][1].name == 'username2.role_name2' assert actual['roles'][2].name == 'repo' assert actual['roles'][2].src == 'ssh://github.com/user/repo' assert len(actual['collections']) == 1 assert actual['collections'][0] == ('namespace.collection2', '*', None) @pytest.mark.parametrize('requirements_file', [''' collections: - name: namespace.collection - name: namespace2.collection2 source: https://galaxy-dev.ansible.com/ - name: namespace3.collection3 source: server '''], indirect=True) def test_parse_requirements_with_collection_source(requirements_cli, requirements_file): galaxy_api = GalaxyAPI(requirements_cli.api, 'server', 'https://config-server') requirements_cli.api_servers.append(galaxy_api) actual = requirements_cli._parse_requirements_file(requirements_file) assert actual['roles'] == [] assert len(actual['collections']) == 3 assert actual['collections'][0] == ('namespace.collection', '*', None) assert actual['collections'][1][0] == 'namespace2.collection2' assert actual['collections'][1][1] == '*' assert actual['collections'][1][2].api_server == 'https://galaxy-dev.ansible.com/' assert actual['collections'][1][2].name == 'explicit_requirement_namespace2.collection2' assert actual['collections'][1][2].token is None assert actual['collections'][2] == ('namespace3.collection3', '*', galaxy_api) @pytest.mark.parametrize('requirements_file', [''' - username.included_role - src: https://github.com/user/repo '''], indirect=True) def test_parse_requirements_roles_with_include(requirements_cli, requirements_file): reqs = [ 'ansible.role', {'include': requirements_file}, ] parent_requirements = os.path.join(os.path.dirname(requirements_file), 'parent.yaml') with open(to_bytes(parent_requirements), 'wb') as req_fd: req_fd.write(to_bytes(yaml.safe_dump(reqs))) actual = requirements_cli._parse_requirements_file(parent_requirements) assert len(actual['roles']) == 3 assert 
actual['collections'] == [] assert actual['roles'][0].name == 'ansible.role' assert actual['roles'][1].name == 'username.included_role' assert actual['roles'][2].name == 'repo' assert actual['roles'][2].src == 'https://github.com/user/repo' @pytest.mark.parametrize('requirements_file', [''' - username.role - include: missing.yml '''], indirect=True) def test_parse_requirements_roles_with_include_missing(requirements_cli, requirements_file): expected = "Failed to find include requirements file 'missing.yml' in '%s'" % to_native(requirements_file) with pytest.raises(AnsibleError, match=expected): requirements_cli._parse_requirements_file(requirements_file)
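The normalization behaviour these parsing tests assert — bare strings and dict entries both collapsing to `(name, version, source)` tuples, with errors for old-style lists and missing `name` keys — can be sketched in plain Python. This is an illustrative standalone helper (`parse_collection_requirements` is a hypothetical name, not Ansible's internal implementation, and it operates on an already-loaded YAML structure rather than a file):

```python
def parse_collection_requirements(data):
    """Normalize a loaded requirements.yml structure into a list of
    (name, version, source) tuples, mirroring the behaviour the tests
    above assert. Sketch only; not Ansible's actual code."""
    if not isinstance(data, dict) or 'collections' not in data:
        # Old role-style requirements files are plain lists and are rejected here
        raise ValueError(
            "Expecting requirements file to be a dict with the key 'collections' "
            "that contains a list of collections to install")

    requirements = []
    for entry in data['collections'] or []:
        if isinstance(entry, str):
            # Bare 'namespace.collection' strings become dicts with defaults
            entry = {'name': entry}
        if 'name' not in entry:
            raise ValueError("Collections requirement entry should contain the key name.")
        requirements.append((entry['name'],
                             entry.get('version', '*'),
                             entry.get('source')))
    return requirements


reqs = parse_collection_requirements({
    'collections': [
        'namespace.collection1',
        {'name': 'namespace.collection2', 'version': '>=1.0.0,<=2.0.0'},
    ]
})
print(reqs)
```

The version defaults to the `'*'` wildcard and the source to `None`, which matches the tuple shapes the parametrized tests compare against.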
closed
ansible/ansible
https://github.com/ansible/ansible
63,869
aws lightsail does not support wait: yes on create
##### SUMMARY In #63770 it was found that the `wait` specified in the module docs does not apply to create operations. Wait functions are only defined for `delete_instance`, `restart_instance`, and `startstop_instance`. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lightsail ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` $ ansible --version ansible 2.10.0.dev0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/jill/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /home/jill/src/ansible/lib/ansible executable location = /home/jill/src/ansible/bin/ansible python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0] ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### STEPS TO REPRODUCE ``` - name: Start the instance lightsail: name: "{{ instance_name }}" state: running wait: yes register: result - assert: that: - result.changed == True - result.instance.state.name == 'running' ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS test pass ##### ACTUAL RESULTS assertion failed
https://github.com/ansible/ansible/issues/63869
https://github.com/ansible/ansible/pull/65275
02e7c5a19f1e864d0c86b04a424bdea51fd5cb25
37ce55fd79c54d5b33f39fc1c50da382753bf6cb
2019-10-23T16:32:04Z
python
2019-12-02T20:12:44Z
hacking/aws_config/testing_policies/compute-policy.json
{# Not all Autoscaling API Actions allow specified resources #}
{# See http://docs.aws.amazon.com/autoscaling/latest/userguide/control-access-using-iam.html#policy-auto-scaling-resources #}
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DescribeAutoscaling",
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribePolicies"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AllowAutoscaling",
            "Effect": "Allow",
            "Action": [
                "autoscaling:*LaunchConfiguration",
                "autoscaling:*LoadBalancers",
                "autoscaling:*AutoScalingGroup",
                "autoscaling:*MetricsCollection",
                "autoscaling:PutScalingPolicy",
                "autoscaling:DeletePolicy",
                "autoscaling:*Tags"
            ],
            "Resource": [
                "arn:aws:autoscaling:{{aws_region}}:{{aws_account}}:*"
            ]
        },
        {# Note that not all EC2 API Actions allow a specific resource #}
        {# See http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ec2-api-permissions.html#ec2-api-unsupported-resource-permissions #}
        {
            "Sid": "AllowUnspecifiedEC2Resource",
            "Effect": "Allow",
            "Action": [
                "ec2:*LaunchTemplate",
                "ec2:*LaunchTemplateVersion",
                "ec2:*LaunchTemplateVersions",
                "ec2:AttachVolume",
                "ec2:CreateImage",
                "ec2:CreateKeyPair",
                "ec2:CreateSecurityGroup",
                "ec2:CreateSnapshot",
                "ec2:CreateTags",
                "ec2:DeleteKeyPair",
                "ec2:DeleteSnapshot",
                "ec2:DeleteTags",
                "ec2:DeregisterImage",
                "ec2:Describe*",
                "ec2:ImportKeyPair",
                "ec2:ModifyImageAttribute",
                "ec2:ModifyInstanceAttribute",
                "ec2:RegisterImage",
                "ec2:ReplaceIamInstanceProfileAssociation",
                "ec2:ReportInstanceStatus"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AllowSpecifiedEC2Resource",
            "Effect": "Allow",
            "Action": [
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:DeleteRouteTable",
                "ec2:DeleteSecurityGroup",
                "ec2:DeleteVolume",
                "ec2:RevokeSecurityGroupEgress",
                "ec2:RevokeSecurityGroupIngress",
                "ec2:RunInstances",
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:TerminateInstances",
                "ec2:UpdateSecurityGroupRuleDescriptionsIngress",
                "ec2:UpdateSecurityGroupRuleDescriptionsEgress"
            ],
            "Resource": [
                "arn:aws:ec2:{{aws_region}}::image/*",
                "arn:aws:ec2:{{aws_region}}:{{aws_account}}:*"
            ]
        },
        {# According to http://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/load-balancer-authentication-access-control.html #}
        {# Resource level access control is not possible for the new ELB API (providing Application Load Balancer functionality #}
        {# While it remains possible for the old API, there is no distinction of the Actions between old API and new API #}
        {
            "Sid": "AllowLoadBalancerOperations",
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:*LoadBalancer",
                "elasticloadbalancing:*LoadBalancers",
                "elasticloadbalancing:*LoadBalancerListeners",
                "elasticloadbalancing:*TargetGroup",
                "elasticloadbalancing:AddTags",
                "elasticloadbalancing:ConfigureHealthCheck",
                "elasticloadbalancing:CreateListener",
                "elasticloadbalancing:CreateRule",
                "elasticloadbalancing:DeleteListener",
                "elasticloadbalancing:DeleteRule",
                "elasticloadbalancing:DescribeInstanceHealth",
                "elasticloadbalancing:DescribeLoadBalancer*",
                "elasticloadbalancing:DescribeTags",
                "elasticloadbalancing:ModifyListener",
                "elasticloadbalancing:ModifyLoadBalancerAttributes",
                "elasticloadbalancing:ModifyRule",
                "elasticloadbalancing:RemoveTags"
            ],
            "Resource": "*"
        },
        {# Only certain lambda actions can be restricted to a specific resource #}
        {# http://docs.aws.amazon.com/lambda/latest/dg/lambda-api-permissions-ref.html #}
        {
            "Sid": "AllowApiGateway",
            "Effect": "Allow",
            "Action": [
                "apigateway:*"
            ],
            "Resource": [
                "arn:aws:apigateway:{{aws_region}}::/*"
            ]
        },
        {
            "Sid": "AllowGetUserForLambdaCreation",
            "Effect": "Allow",
            "Action": [
                "iam:GetUser"
            ],
            "Resource": [
                "arn:aws:iam::{{aws_account}}:user/ansible_integration_tests"
            ]
        },
        {
            "Sid": "AllowLambdaManagementWithoutResource",
            "Effect": "Allow",
            "Action": [
                "lambda:CreateEventSourceMapping",
                "lambda:GetAccountSettings",
                "lambda:GetEventSourceMapping",
                "lambda:List*",
                "lambda:TagResource",
                "lambda:UntagResource"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AllowLambdaManagementWithResource",
            "Effect": "Allow",
            "Action": [
                "lambda:AddPermission",
                "lambda:CreateAlias",
                "lambda:CreateFunction",
                "lambda:DeleteAlias",
                "lambda:DeleteFunction",
                "lambda:GetAlias",
                "lambda:GetFunction",
                "lambda:GetFunctionConfiguration",
                "lambda:GetPolicy",
                "lambda:InvokeFunction",
                "lambda:PublishVersion",
                "lambda:RemovePermission",
                "lambda:UpdateAlias",
                "lambda:UpdateEventSourceMapping",
                "lambda:UpdateFunctionCode",
                "lambda:UpdateFunctionConfiguration"
            ],
            "Resource": "arn:aws:lambda:{{aws_region}}:{{aws_account}}:function:*"
        },
        {
            "Sid": "AllowRoleManagement",
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": [
                "arn:aws:iam::{{aws_account}}:role/ansible_lambda_role",
                "arn:aws:iam::{{aws_account}}:role/ecsInstanceRole",
                "arn:aws:iam::{{aws_account}}:role/ec2InstanceRole",
                "arn:aws:iam::{{aws_account}}:role/ecsServiceRole",
                "arn:aws:iam::{{aws_account}}:role/aws_eks_cluster_role",
                "arn:aws:iam::{{aws_account}}:role/ecsTaskExecutionRole"
            ]
        },
        {
            "Sid": "AllowSESManagement",
            "Effect": "Allow",
            "Action": [
                "ses:VerifyEmailIdentity",
                "ses:DeleteIdentity",
                "ses:GetIdentityVerificationAttributes",
                "ses:GetIdentityNotificationAttributes",
                "ses:VerifyDomainIdentity",
                "ses:SetIdentityNotificationTopic",
                "ses:SetIdentityHeadersInNotificationsEnabled",
                "ses:SetIdentityFeedbackForwardingEnabled",
                "ses:GetIdentityPolicies",
                "ses:PutIdentityPolicy",
                "ses:DeleteIdentityPolicy",
                "ses:ListIdentityPolicies",
                "ses:SetIdentityFeedbackForwardingEnabled",
                "ses:ListReceiptRuleSets",
                "ses:DescribeReceiptRuleSet",
                "ses:DescribeActiveReceiptRuleSet",
                "ses:SetActiveReceiptRuleSet",
                "ses:CreateReceiptRuleSet",
                "ses:DeleteReceiptRuleSet"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "AllowSNSManagement",
            "Effect": "Allow",
            "Action": [
                "SNS:CreateTopic",
                "SNS:DeleteTopic",
                "SNS:GetTopicAttributes",
                "SNS:ListSubscriptions",
                "SNS:ListSubscriptionsByTopic",
                "SNS:ListTopics",
                "SNS:SetTopicAttributes",
                "SNS:Subscribe",
                "SNS:Unsubscribe"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "AllowStepFunctionsStateMachine",
            "Effect": "Allow",
            "Action": [
                "states:CreateStateMachine",
                "states:DeleteStateMachine",
                "states:DescribeStateMachine",
                "states:ListStateMachines",
                "states:ListTagsForResource",
                "states:TagResource",
                "states:UntagResource",
                "states:UpdateStateMachine"
            ],
            "Resource": [
                "arn:aws:states:*"
            ]
        },
        {
            "Sid": "AllowLightsail",
            "Effect": "Allow",
            "Action": [
                "lightsail:CreateInstances",
                "lightsail:CreateKeyPair",
                "lightsail:DeleteInstance",
                "lightsail:DeleteKeyPair",
                "lightsail:GetInstance",
                "lightsail:GetInstances",
                "lightsail:GetKeyPairs",
                "lightsail:RebootInstance",
                "lightsail:StartInstance",
                "lightsail:StopInstance"
            ],
            "Resource": "arn:aws:lightsail:*:*:*"
        }
    ]
}
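The policy file above is a Jinja2 template rather than plain JSON: `{# … #}` comments are stripped and `{{aws_region}}`/`{{aws_account}}` are substituted at render time, after which the result must parse as JSON. That render-then-parse step can be sanity-checked with only the standard library on a small fragment (the region and account values below are made up for illustration):

```python
import json

# A minimal fragment of the templated policy; substituting the two
# placeholders by hand stands in for a full Jinja2 render here.
template = '{"Resource": ["arn:aws:autoscaling:{{aws_region}}:{{aws_account}}:*"]}'

rendered = (template
            .replace('{{aws_region}}', 'us-east-1')
            .replace('{{aws_account}}', '123456789012'))

# If substitution missed a placeholder, json.loads would still succeed
# (the braces live inside a string), so check for leftovers explicitly.
assert '{{' not in rendered
policy = json.loads(rendered)
print(policy['Resource'][0])
```

For the full file, a real Jinja2 environment would also drop the `{# … #}` comment lines, which would otherwise make the document invalid JSON.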
closed
ansible/ansible
https://github.com/ansible/ansible
63,869
aws lightsail does not support wait: yes on create
##### SUMMARY In #63770 it was found that the `wait` specified in the module docs does not apply to create operations. Wait functions are only defined for `delete_instance`, `restart_instance`, and `startstop_instance`. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lightsail ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` $ ansible --version ansible 2.10.0.dev0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/jill/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /home/jill/src/ansible/lib/ansible executable location = /home/jill/src/ansible/bin/ansible python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0] ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### STEPS TO REPRODUCE ``` - name: Start the instance lightsail: name: "{{ instance_name }}" state: running wait: yes register: result - assert: that: - result.changed == True - result.instance.state.name == 'running' ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS test pass ##### ACTUAL RESULTS assertion failed
https://github.com/ansible/ansible/issues/63869
https://github.com/ansible/ansible/pull/65275
02e7c5a19f1e864d0c86b04a424bdea51fd5cb25
37ce55fd79c54d5b33f39fc1c50da382753bf6cb
2019-10-23T16:32:04Z
python
2019-12-02T20:12:44Z
lib/ansible/modules/cloud/amazon/lightsail.py
#!/usr/bin/python
# -*- coding: utf-8 -*-

# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import absolute_import, division, print_function
__metaclass__ = type


ANSIBLE_METADATA = {'metadata_version': '1.1',
                    'status': ['preview'],
                    'supported_by': 'community'}


DOCUMENTATION = '''
---
module: lightsail
short_description: Create or delete a virtual machine instance in AWS Lightsail
description:
  - Creates or instances in AWS Lightsail and optionally wait for it to be 'running'.
version_added: "2.4"
author: "Nick Ball (@nickball)"
options:
  state:
    description:
      - Indicate desired state of the target.
    default: present
    choices: ['present', 'absent', 'running', 'restarted', 'stopped']
    type: str
  name:
    description:
      - Name of the instance.
    required: true
    type: str
  zone:
    description:
      - AWS availability zone in which to launch the instance.
      - Required when I(state=present)
    type: str
  blueprint_id:
    description:
      - ID of the instance blueprint image.
      - Required when I(state=present)
    type: str
  bundle_id:
    description:
      - Bundle of specification info for the instance.
      - Required when I(state=present).
    type: str
  user_data:
    description:
      - Launch script that can configure the instance with additional data.
    type: str
  key_pair_name:
    description:
      - Name of the key pair to use with the instance.
    type: str
  wait:
    description:
      - Wait for the instance to be in state 'running' before returning.
      - If I(wait=false) an ip_address may not be returned.
    type: bool
    default: true
  wait_timeout:
    description:
      - How long before I(wait) gives up, in seconds.
    default: 300
    type: int

requirements:
  - "python >= 2.6"
  - boto3

extends_documentation_fragment:
  - aws
  - ec2
'''

EXAMPLES = '''
# Create a new Lightsail instance, register the instance details
- lightsail:
    state: present
    name: myinstance
    region: us-east-1
    zone: us-east-1a
    blueprint_id: ubuntu_16_04
    bundle_id: nano_1_0
    key_pair_name: id_rsa
    user_data: " echo 'hello world' > /home/ubuntu/test.txt"
    wait_timeout: 500
  register: my_instance

- debug:
    msg: "Name is {{ my_instance.instance.name }}"

- debug:
    msg: "IP is {{ my_instance.instance.public_ip_address }}"

# Delete an instance if present
- lightsail:
    state: absent
    region: us-east-1
    name: myinstance
'''

RETURN = '''
changed:
  description: if a snapshot has been modified/created
  returned: always
  type: bool
  sample:
    changed: true
instance:
  description: instance data
  returned: always
  type: dict
  sample:
    arn: "arn:aws:lightsail:us-east-1:448830907657:Instance/1fef0175-d6c8-480e-84fa-214f969cda87"
    blueprint_id: "ubuntu_16_04"
    blueprint_name: "Ubuntu"
    bundle_id: "nano_1_0"
    created_at: "2017-03-27T08:38:59.714000-04:00"
    hardware:
      cpu_count: 1
      ram_size_in_gb: 0.5
    is_static_ip: false
    location:
      availability_zone: "us-east-1a"
      region_name: "us-east-1"
    name: "my_instance"
    networking:
      monthly_transfer:
        gb_per_month_allocated: 1024
      ports:
        - access_direction: "inbound"
          access_from: "Anywhere (0.0.0.0/0)"
          access_type: "public"
          common_name: ""
          from_port: 80
          protocol: tcp
          to_port: 80
        - access_direction: "inbound"
          access_from: "Anywhere (0.0.0.0/0)"
          access_type: "public"
          common_name: ""
          from_port: 22
          protocol: tcp
          to_port: 22
    private_ip_address: "172.26.8.14"
    public_ip_address: "34.207.152.202"
    resource_type: "Instance"
    ssh_key_name: "keypair"
    state:
      code: 16
      name: running
    support_code: "588307843083/i-0997c97831ee21e33"
    username: "ubuntu"
'''

import time
import traceback

try:
    import botocore
    HAS_BOTOCORE = True
except ImportError:
    HAS_BOTOCORE = False

try:
    import boto3
except ImportError:
    # will be caught by imported HAS_BOTO3
    pass

from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ec2 import (ec2_argument_spec, get_aws_connection_info, boto3_conn,
                                      HAS_BOTO3, camel_dict_to_snake_dict)


def create_instance(module, client, instance_name):
    """
    Create an instance

    module: Ansible module object
    client: authenticated lightsail connection object
    instance_name: name of instance to delete

    Returns a dictionary of instance information
    about the new instance.
    """

    changed = False

    # Check if instance already exists
    inst = None
    try:
        inst = _find_instance_info(client, instance_name)
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Code'] != 'NotFoundException':
            module.fail_json(msg='Error finding instance {0}, error: {1}'.format(instance_name, e))

    zone = module.params.get('zone')
    blueprint_id = module.params.get('blueprint_id')
    bundle_id = module.params.get('bundle_id')
    key_pair_name = module.params.get('key_pair_name')
    user_data = module.params.get('user_data')
    user_data = '' if user_data is None else user_data

    resp = None
    if inst is None:
        try:
            resp = client.create_instances(
                instanceNames=[
                    instance_name
                ],
                availabilityZone=zone,
                blueprintId=blueprint_id,
                bundleId=bundle_id,
                userData=user_data,
                keyPairName=key_pair_name,
            )
            resp = resp['operations'][0]
        except botocore.exceptions.ClientError as e:
            module.fail_json(msg='Unable to create instance {0}, error: {1}'.format(instance_name, e))
        changed = True

    inst = _find_instance_info(client, instance_name)

    return (changed, inst)


def delete_instance(module, client, instance_name):
    """
    Terminates an instance

    module: Ansible module object
    client: authenticated lightsail connection object
    instance_name: name of instance to delete

    Returns a dictionary of instance information
    about the instance deleted (pre-deletion).

    If the instance to be deleted is running
    "changed" will be set to False.
    """

    # It looks like deleting removes the instance immediately, nothing to wait for
    wait = module.params.get('wait')
    wait_timeout = int(module.params.get('wait_timeout'))
    wait_max = time.time() + wait_timeout

    changed = False

    inst = None
    try:
        inst = _find_instance_info(client, instance_name)
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Code'] != 'NotFoundException':
            module.fail_json(msg='Error finding instance {0}, error: {1}'.format(instance_name, e))

    # If instance doesn't exist, then return with 'changed:false'
    if not inst:
        return changed, {}

    # Wait for instance to exit transition state before deleting
    if wait:
        while wait_max > time.time() and inst is not None and inst['state']['name'] in ('pending', 'stopping'):
            try:
                time.sleep(5)
                inst = _find_instance_info(client, instance_name)
            except botocore.exceptions.ClientError as e:
                if e.response['ResponseMetadata']['HTTPStatusCode'] == "403":
                    module.fail_json(msg="Failed to delete instance {0}. Check that you have permissions to perform the operation.".format(instance_name),
                                     exception=traceback.format_exc())
                elif e.response['Error']['Code'] == "RequestExpired":
                    module.fail_json(msg="RequestExpired: Failed to delete instance {0}.".format(instance_name), exception=traceback.format_exc())
                # sleep and retry
                time.sleep(10)

    # Attempt to delete
    if inst is not None:
        while not changed and ((wait and wait_max > time.time()) or (not wait)):
            try:
                client.delete_instance(instanceName=instance_name)
                changed = True
            except botocore.exceptions.ClientError as e:
                module.fail_json(msg='Error deleting instance {0}, error: {1}'.format(instance_name, e))

    # Timed out
    if wait and not changed and wait_max <= time.time():
        module.fail_json(msg="wait for instance delete timeout at %s" % time.asctime())

    return (changed, inst)


def restart_instance(module, client, instance_name):
    """
    Reboot an existing instance

    module: Ansible module object
    client: authenticated lightsail connection object
    instance_name: name of instance to reboot

    Returns a dictionary of instance information
    about the restarted instance

    If the instance was not able to reboot,
    "changed" will be set to False.

    Wait will not apply here as this is an OS-level operation
    """
    wait = module.params.get('wait')
    wait_timeout = int(module.params.get('wait_timeout'))
    wait_max = time.time() + wait_timeout

    changed = False

    inst = None
    try:
        inst = _find_instance_info(client, instance_name)
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Code'] != 'NotFoundException':
            module.fail_json(msg='Error finding instance {0}, error: {1}'.format(instance_name, e))

    # Wait for instance to exit transition state before state change
    if wait:
        while wait_max > time.time() and inst is not None and inst['state']['name'] in ('pending', 'stopping'):
            try:
                time.sleep(5)
                inst = _find_instance_info(client, instance_name)
            except botocore.exceptions.ClientError as e:
                if e.response['ResponseMetadata']['HTTPStatusCode'] == "403":
                    module.fail_json(msg="Failed to restart instance {0}. Check that you have permissions to perform the operation.".format(instance_name),
                                     exception=traceback.format_exc())
                elif e.response['Error']['Code'] == "RequestExpired":
                    module.fail_json(msg="RequestExpired: Failed to restart instance {0}.".format(instance_name), exception=traceback.format_exc())
                time.sleep(3)

    # send reboot
    if inst is not None:
        try:
            client.reboot_instance(instanceName=instance_name)
        except botocore.exceptions.ClientError as e:
            if e.response['Error']['Code'] != 'NotFoundException':
                module.fail_json(msg='Unable to reboot instance {0}, error: {1}'.format(instance_name, e))
        changed = True

    return (changed, inst)


def startstop_instance(module, client, instance_name, state):
    """
    Starts or stops an existing instance

    module: Ansible module object
    client: authenticated lightsail connection object
    instance_name: name of instance to start/stop
    state: Target state ("running" or "stopped")

    Returns a dictionary of instance information
    about the instance started/stopped

    If the instance was not able to state change,
    "changed" will be set to False.
    """
    wait = module.params.get('wait')
    wait_timeout = int(module.params.get('wait_timeout'))
    wait_max = time.time() + wait_timeout

    changed = False

    inst = None
    try:
        inst = _find_instance_info(client, instance_name)
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Code'] != 'NotFoundException':
            module.fail_json(msg='Error finding instance {0}, error: {1}'.format(instance_name, e))

    # Wait for instance to exit transition state before state change
    if wait:
        while wait_max > time.time() and inst is not None and inst['state']['name'] in ('pending', 'stopping'):
            try:
                time.sleep(5)
                inst = _find_instance_info(client, instance_name)
            except botocore.exceptions.ClientError as e:
                if e.response['ResponseMetadata']['HTTPStatusCode'] == "403":
                    module.fail_json(msg="Failed to start/stop instance {0}. Check that you have permissions to perform the operation".format(instance_name),
                                     exception=traceback.format_exc())
                elif e.response['Error']['Code'] == "RequestExpired":
                    module.fail_json(msg="RequestExpired: Failed to start/stop instance {0}.".format(instance_name), exception=traceback.format_exc())
                time.sleep(1)

    # Try state change
    if inst is not None and inst['state']['name'] != state:
        try:
            if state == 'running':
                client.start_instance(instanceName=instance_name)
            else:
                client.stop_instance(instanceName=instance_name)
        except botocore.exceptions.ClientError as e:
            module.fail_json(msg='Unable to change state for instance {0}, error: {1}'.format(instance_name, e))
        changed = True
        # Grab current instance info
        inst = _find_instance_info(client, instance_name)

    return (changed, inst)


def core(module):
    region, ec2_url, aws_connect_kwargs = get_aws_connection_info(module, boto3=True)
    if not region:
        module.fail_json(msg='region must be specified')

    client = None
    try:
        client = boto3_conn(module, conn_type='client', resource='lightsail',
                            region=region, endpoint=ec2_url, **aws_connect_kwargs)
    except (botocore.exceptions.ClientError, botocore.exceptions.ValidationError) as e:
        module.fail_json(msg='Failed while connecting to the lightsail service: %s' % e, exception=traceback.format_exc())

    changed = False
    state = module.params['state']
    name = module.params['name']

    if state == 'absent':
        changed, instance_dict = delete_instance(module, client, name)
    elif state in ('running', 'stopped'):
        changed, instance_dict = startstop_instance(module, client, name, state)
    elif state == 'restarted':
        changed, instance_dict = restart_instance(module, client, name)
    elif state == 'present':
        changed, instance_dict = create_instance(module, client, name)

    module.exit_json(changed=changed, instance=camel_dict_to_snake_dict(instance_dict))


def _find_instance_info(client, instance_name):
    ''' handle exceptions where this function is called '''
    inst = None
    try:
        inst = client.get_instance(instanceName=instance_name)
    except botocore.exceptions.ClientError as e:
        raise
    return inst['instance']


def main():
    argument_spec = ec2_argument_spec()
    argument_spec.update(dict(
        name=dict(type='str', required=True),
        state=dict(type='str', default='present', choices=['present', 'absent', 'stopped', 'running', 'restarted']),
        zone=dict(type='str'),
        blueprint_id=dict(type='str'),
        bundle_id=dict(type='str'),
        key_pair_name=dict(type='str'),
        user_data=dict(type='str'),
        wait=dict(type='bool', default=True),
        wait_timeout=dict(default=300, type='int'),
    ))

    module = AnsibleModule(argument_spec=argument_spec)

    if not HAS_BOTO3:
        module.fail_json(msg='Python module "boto3" is missing, please install it')

    if not HAS_BOTOCORE:
        module.fail_json(msg='Python module "botocore" is missing, please install it')

    try:
        core(module)
    except (botocore.exceptions.ClientError, Exception) as e:
        module.fail_json(msg=str(e), exception=traceback.format_exc())


if __name__ == '__main__':
    main()
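The bug this record describes is that `create_instance` never waits for the new instance to reach 'running', even though the module documents `wait`/`wait_timeout`; only delete, restart, and start/stop have wait loops. A generic polling helper in the spirit of those loops could look like the following sketch (not the merged fix; `get_instance_info` stands in for any callable that returns the instance dict, such as a wrapper around `_find_instance_info`):

```python
import time


def wait_for_instance_state(get_instance_info, instance_name,
                            desired='running', timeout=300, sleep=5):
    """Poll until the named instance reaches the desired state or the
    timeout expires. Illustrative sketch only: get_instance_info is a
    caller-supplied callable, not part of the real module."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        inst = get_instance_info(instance_name)
        if inst['state']['name'] == desired:
            return inst
        time.sleep(sleep)
    # Mirrors the module's existing timeout message style
    raise RuntimeError('wait for instance %s timeout at %s'
                       % (instance_name, time.asctime()))
```

With such a helper, `create_instance` could call `wait_for_instance_state(...)` after `client.create_instances(...)` whenever `wait` is true, so that `state: present` honours `wait: yes` the same way the other operations do.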
closed
ansible/ansible
https://github.com/ansible/ansible
63,869
aws lightsail does not support wait: yes on create
##### SUMMARY In #63770 it was found that the `wait` specified in the module docs does not apply to create operations. Wait functions are only defined for `delete_instance`, `restart_instance`, and `startstop_instance`. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lightsail ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` $ ansible --version ansible 2.10.0.dev0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/jill/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /home/jill/src/ansible/lib/ansible executable location = /home/jill/src/ansible/bin/ansible python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0] ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### STEPS TO REPRODUCE ``` - name: Start the instance lightsail: name: "{{ instance_name }}" state: running wait: yes register: result - assert: that: - result.changed == True - result.instance.state.name == 'running' ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS test pass ##### ACTUAL RESULTS assertion failed
https://github.com/ansible/ansible/issues/63869
https://github.com/ansible/ansible/pull/65275
02e7c5a19f1e864d0c86b04a424bdea51fd5cb25
37ce55fd79c54d5b33f39fc1c50da382753bf6cb
2019-10-23T16:32:04Z
python
2019-12-02T20:12:44Z
test/integration/targets/lightsail/defaults/main.yml
instance_name: "{{ resource_prefix }}_instance"
keypair_name: "{{ resource_prefix }}_keypair"
zone: "{{ aws_region }}a"
closed
ansible/ansible
https://github.com/ansible/ansible
63,869
aws lightsail does not support wait: yes on create
##### SUMMARY In #63770 it was found that the `wait` specified in the module docs does not apply to create operations. Wait functions are only defined for `delete_instance`, `restart_instance`, and `startstop_instance`. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lightsail ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` $ ansible --version ansible 2.10.0.dev0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/jill/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /home/jill/src/ansible/lib/ansible executable location = /home/jill/src/ansible/bin/ansible python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0] ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### STEPS TO REPRODUCE ``` - name: Start the instance lightsail: name: "{{ instance_name }}" state: running wait: yes register: result - assert: that: - result.changed == True - result.instance.state.name == 'running' ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS test pass ##### ACTUAL RESULTS assertion failed
https://github.com/ansible/ansible/issues/63869
https://github.com/ansible/ansible/pull/65275
02e7c5a19f1e864d0c86b04a424bdea51fd5cb25
37ce55fd79c54d5b33f39fc1c50da382753bf6cb
2019-10-23T16:32:04Z
python
2019-12-02T20:12:44Z
test/integration/targets/lightsail/library/lightsail_keypair.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright: Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} try: from botocore.exceptions import ClientError, BotoCoreError import boto3 except ImportError: pass # caught by AnsibleAWSModule from ansible.module_utils.aws.core import AnsibleAWSModule from ansible.module_utils.ec2 import (get_aws_connection_info, boto3_conn) def create_keypair(module, client, keypair_name): """ Create a keypair to use for your lightsail instance """ try: client.create_key_pair(keyPairName=keypair_name) except ClientError as e: if "Some names are already in use" in e.response['Error']['Message']: module.exit_json(changed=False) module.fail_json_aws(e) module.exit_json(changed=True) def delete_keypair(module, client, keypair_name): """ Delete a keypair in lightsail """ try: client.delete_key_pair(keyPairName=keypair_name) except ClientError as e: if e.response['Error']['Code'] == "NotFoundException": module.exit_json(changed=False) module.fail_json_aws(e) module.exit_json(changed=True) def main(): argument_spec = dict( name=dict(type='str', required=True), state=dict(type='str', default='present', choices=['present', 'absent']), ) module = AnsibleAWSModule(argument_spec=argument_spec) region, ec2_url, aws_connect_params = get_aws_connection_info(module, boto3=True) try: client = boto3_conn(module, conn_type='client', resource='lightsail', region=region, endpoint=ec2_url, **aws_connect_params) except ClientError as e: module.fail_json_aws(e) keypair_name = module.params.get('name') state = module.params.get('state') if state == 'present': create_keypair(module, client, keypair_name) else: delete_keypair(module, client, keypair_name) if __name__ == '__main__': main()
closed
ansible/ansible
https://github.com/ansible/ansible
63,869
aws lightsail does not support wait: yes on create
##### SUMMARY In #63770 it was found that the `wait` specified in the module docs does not apply to create operations. Wait functions are only defined for `delete_instance`, `restart_instance`, and `startstop_instance`. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME lightsail ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` $ ansible --version ansible 2.10.0.dev0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/jill/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /home/jill/src/ansible/lib/ansible executable location = /home/jill/src/ansible/bin/ansible python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0] ``` ##### CONFIGURATION N/A ##### OS / ENVIRONMENT N/A ##### STEPS TO REPRODUCE ``` - name: Start the instance lightsail: name: "{{ instance_name }}" state: running wait: yes register: result - assert: that: - result.changed == True - result.instance.state.name == 'running' ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS test pass ##### ACTUAL RESULTS assertion failed
https://github.com/ansible/ansible/issues/63869
https://github.com/ansible/ansible/pull/65275
02e7c5a19f1e864d0c86b04a424bdea51fd5cb25
37ce55fd79c54d5b33f39fc1c50da382753bf6cb
2019-10-23T16:32:04Z
python
2019-12-02T20:12:44Z
test/integration/targets/lightsail/tasks/main.yml
--- - module_defaults: group/aws: aws_access_key: '{{ aws_access_key | default(omit) }}' aws_secret_key: '{{ aws_secret_key | default(omit) }}' security_token: '{{ security_token | default(omit) }}' region: '{{ aws_region | default(omit) }}' block: # ==== Tests =================================================== - name: Create a new keypair in lightsail lightsail_keypair: name: "{{ keypair_name }}" - name: Create a new instance lightsail: name: "{{ instance_name }}" zone: "{{ zone }}" blueprint_id: amazon_linux bundle_id: nano_2_0 key_pair_name: "{{ keypair_name }}" register: result - assert: that: - result.changed == True - "'instance' in result and result.instance.name == instance_name" - "result.instance.state.name in ['pending', 'running']" - name: Make sure create is idempotent lightsail: name: "{{ instance_name }}" zone: "{{ zone }}" blueprint_id: amazon_linux bundle_id: nano_2_0 key_pair_name: "{{ keypair_name }}" register: result - assert: that: - result.changed == False - name: Start the running instance lightsail: name: "{{ instance_name }}" state: running register: result - assert: that: - result.changed == False - name: Stop the instance lightsail: name: "{{ instance_name }}" state: stopped register: result - assert: that: - result.changed == True - "result.instance.state.name in ['stopping', 'stopped']" - name: Stop the stopped instance lightsail: name: "{{ instance_name }}" state: stopped register: result - assert: that: - result.changed == False - name: Start the instance lightsail: name: "{{ instance_name }}" state: running register: result - assert: that: - result.changed == True - "result.instance.state.name in ['running', 'pending']" - name: Restart the instance lightsail: name: "{{ instance_name }}" state: restarted register: result - assert: that: - result.changed == True - name: Delete the instance lightsail: name: "{{ instance_name }}" state: absent register: result - assert: that: - result.changed == True - name: Make sure instance deletion is idempotent lightsail: name: "{{ instance_name }}" state: absent register: result - assert: that: - result.changed == False # ==== Cleanup ==================================================== always: - name: Cleanup - delete instance lightsail: name: "{{ instance_name }}" state: absent ignore_errors: yes - name: Cleanup - delete keypair lightsail_keypair: name: "{{ keypair_name }}" state: absent ignore_errors: yes
closed
ansible/ansible
https://github.com/ansible/ansible
64,850
Galaxy Collection throws generic 401 while installing collections from Automation Hub.
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> If a user passes an incorrect username or password via the configuration for a galaxy_server the error message is unhelpful. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> ansible-galaxy ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` ansible --version ansible 2.9.0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /bin/ansible python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ``` ansible-config dump --only-changed DEFAULT_JINJA2_NATIVE(/etc/ansible/ansible.cfg) = True GALAXY_SERVER_LIST(/etc/ansible/ansible.cfg) = [u'automation_hub'] ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> RHEL7 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ``` [galaxy] server_list = automation_hub [galaxy_server.automation_hub] url=https://cloud.redhat.com/api/automation-hub/ username=$INCORRECTUSERNAME password=$PASSWORD ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS Actual error from automation hub api is exposed: ``` < HTTP/1.1 401 Unauthorized < Server: openresty/1.13.6.1 < Content-Type: text/plain < Content-Length: 40 < x-rh-insights-request-id: < X-Content-Type-Options: nosniff < Date: Thu, 14 Nov 2019 18:05:51 GMT < Connection: keep-alive < Set-Cookie: ; path=/; HttpOnly; Secure < X-Frame-Options: SAMEORIGIN < ``` Insights services authentication failed <!--- Describe what you expected to happen when running the steps above --> ##### ACTUAL RESULTS generic and not useful error is thrown: ``` ansible-galaxy collection install splunk.enterprise_security -vvv ansible-galaxy 2.9.0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /bin/ansible-galaxy python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] Using /etc/ansible/ansible.cfg as config file Process install dependency map Processing requirement collection 'splunk.enterprise_security' ERROR! Error when finding available api versions from automation_hub (https://cloud.redhat.com/api/automation-hub/) (HTTP Code: 401, Message: Unknown error returned by Galaxy server.) ``` <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below ```
https://github.com/ansible/ansible/issues/64850
https://github.com/ansible/ansible/pull/65273
0e5a83a1cc2379afc70c45588e677ddd3b911dc2
6586b7132c839b2f60582ff363a99c62156e2e50
2019-11-14T18:14:10Z
python
2019-12-02T21:36:05Z
changelogs/fragments/galaxy-error-reason.yaml
closed
ansible/ansible
https://github.com/ansible/ansible
64,850
Galaxy Collection throws generic 401 while installing collections from Automation Hub.
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> If a user passes an incorrect username or password via the configuration for a galaxy_server the error message is unhelpful. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> ansible-galaxy ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` ansible --version ansible 2.9.0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /bin/ansible python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ``` ansible-config dump --only-changed DEFAULT_JINJA2_NATIVE(/etc/ansible/ansible.cfg) = True GALAXY_SERVER_LIST(/etc/ansible/ansible.cfg) = [u'automation_hub'] ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. --> RHEL7 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ``` [galaxy] server_list = automation_hub [galaxy_server.automation_hub] url=https://cloud.redhat.com/api/automation-hub/ username=$INCORRECTUSERNAME password=$PASSWORD ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS Actual error from automation hub api is exposed: ``` < HTTP/1.1 401 Unauthorized < Server: openresty/1.13.6.1 < Content-Type: text/plain < Content-Length: 40 < x-rh-insights-request-id: < X-Content-Type-Options: nosniff < Date: Thu, 14 Nov 2019 18:05:51 GMT < Connection: keep-alive < Set-Cookie: ; path=/; HttpOnly; Secure < X-Frame-Options: SAMEORIGIN < ``` Insights services authentication failed <!--- Describe what you expected to happen when running the steps above --> ##### ACTUAL RESULTS generic and not useful error is thrown: ``` ansible-galaxy collection install splunk.enterprise_security -vvv ansible-galaxy 2.9.0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /bin/ansible-galaxy python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] Using /etc/ansible/ansible.cfg as config file Process install dependency map Processing requirement collection 'splunk.enterprise_security' ERROR! Error when finding available api versions from automation_hub (https://cloud.redhat.com/api/automation-hub/) (HTTP Code: 401, Message: Unknown error returned by Galaxy server.) ``` <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below ```
https://github.com/ansible/ansible/issues/64850
https://github.com/ansible/ansible/pull/65273
0e5a83a1cc2379afc70c45588e677ddd3b911dc2
6586b7132c839b2f60582ff363a99c62156e2e50
2019-11-14T18:14:10Z
python
2019-12-02T21:36:05Z
lib/ansible/galaxy/api.py
# (C) 2013, James Cammarata <[email protected]> # Copyright: (c) 2019, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import (absolute_import, division, print_function) __metaclass__ = type import json import os import tarfile import uuid import time from ansible import context from ansible.errors import AnsibleError from ansible.module_utils.six import string_types from ansible.module_utils.six.moves.urllib.error import HTTPError from ansible.module_utils.six.moves.urllib.parse import quote as urlquote, urlencode, urlparse from ansible.module_utils._text import to_bytes, to_native, to_text from ansible.module_utils.urls import open_url from ansible.utils.display import Display from ansible.utils.hashing import secure_hash_s try: from urllib.parse import urlparse except ImportError: # Python 2 from urlparse import urlparse display = Display() def g_connect(versions): """ Wrapper to lazily initialize connection info to Galaxy and verify the API versions required are available on the endpoint. :param versions: A list of API versions that the function supports. """ def decorator(method): def wrapped(self, *args, **kwargs): if not self._available_api_versions: display.vvvv("Initial connection to galaxy_server: %s" % self.api_server) # Determine the type of Galaxy server we are talking to. First try it unauthenticated then with Bearer # auth for Automation Hub. n_url = self.api_server error_context_msg = 'Error when finding available api versions from %s (%s)' % (self.name, n_url) if self.api_server == 'https://galaxy.ansible.com' or self.api_server == 'https://galaxy.ansible.com/': n_url = 'https://galaxy.ansible.com/api/' try: data = self._call_galaxy(n_url, method='GET', error_context_msg=error_context_msg) except (AnsibleError, GalaxyError, ValueError, KeyError): # Either the URL doesnt exist, or other error. 
Or the URL exists, but isn't a galaxy API # root (not JSON, no 'available_versions') so try appending '/api/' n_url = _urljoin(n_url, '/api/') # let exceptions here bubble up data = self._call_galaxy(n_url, method='GET', error_context_msg=error_context_msg) if 'available_versions' not in data: raise AnsibleError("Tried to find galaxy API root at %s but no 'available_versions' are available on %s" % (n_url, self.api_server)) # Update api_server to point to the "real" API root, which in this case # was the configured url + '/api/' appended. self.api_server = n_url # Default to only supporting v1, if only v1 is returned we also assume that v2 is available even though # it isn't returned in the available_versions dict. available_versions = data.get('available_versions', {u'v1': u'v1/'}) if list(available_versions.keys()) == [u'v1']: available_versions[u'v2'] = u'v2/' self._available_api_versions = available_versions display.vvvv("Found API version '%s' with Galaxy server %s (%s)" % (', '.join(available_versions.keys()), self.name, self.api_server)) # Verify that the API versions the function works with are available on the server specified. available_versions = set(self._available_api_versions.keys()) common_versions = set(versions).intersection(available_versions) if not common_versions: raise AnsibleError("Galaxy action %s requires API versions '%s' but only '%s' are available on %s %s" % (method.__name__, ", ".join(versions), ", ".join(available_versions), self.name, self.api_server)) return method(self, *args, **kwargs) return wrapped return decorator def _urljoin(*args): return '/'.join(to_native(a, errors='surrogate_or_strict').strip('/') for a in args + ('',) if a) class GalaxyError(AnsibleError): """ Error for bad Galaxy server responses. 
""" def __init__(self, http_error, message): super(GalaxyError, self).__init__(message) self.http_code = http_error.code self.url = http_error.geturl() try: http_msg = to_text(http_error.read()) err_info = json.loads(http_msg) except (AttributeError, ValueError): err_info = {} url_split = self.url.split('/') if 'v2' in url_split: galaxy_msg = err_info.get('message', 'Unknown error returned by Galaxy server.') code = err_info.get('code', 'Unknown') full_error_msg = u"%s (HTTP Code: %d, Message: %s Code: %s)" % (message, self.http_code, galaxy_msg, code) elif 'v3' in url_split: errors = err_info.get('errors', []) if not errors: errors = [{}] # Defaults are set below, we just need to make sure 1 error is present. message_lines = [] for error in errors: error_msg = error.get('detail') or error.get('title') or 'Unknown error returned by Galaxy server.' error_code = error.get('code') or 'Unknown' message_line = u"(HTTP Code: %d, Message: %s Code: %s)" % (self.http_code, error_msg, error_code) message_lines.append(message_line) full_error_msg = "%s %s" % (message, ', '.join(message_lines)) else: # v1 and unknown API endpoints galaxy_msg = err_info.get('default', 'Unknown error returned by Galaxy server.') full_error_msg = u"%s (HTTP Code: %d, Message: %s)" % (message, self.http_code, galaxy_msg) self.message = to_native(full_error_msg) class CollectionVersionMetadata: def __init__(self, namespace, name, version, download_url, artifact_sha256, dependencies): """ Contains common information about a collection on a Galaxy server to smooth through API differences for Collection and define a standard meta info for a collection. :param namespace: The namespace name. :param name: The collection name. :param version: The version that the metadata refers to. :param download_url: The URL to download the collection. :param artifact_sha256: The SHA256 of the collection artifact for later verification. :param dependencies: A dict of dependencies of the collection. 
""" self.namespace = namespace self.name = name self.version = version self.download_url = download_url self.artifact_sha256 = artifact_sha256 self.dependencies = dependencies class GalaxyAPI: """ This class is meant to be used as a API client for an Ansible Galaxy server """ def __init__(self, galaxy, name, url, username=None, password=None, token=None): self.galaxy = galaxy self.name = name self.username = username self.password = password self.token = token self.api_server = url self.validate_certs = not context.CLIARGS['ignore_certs'] self._available_api_versions = {} display.debug('Validate TLS certificates for %s: %s' % (self.api_server, self.validate_certs)) @property @g_connect(['v1', 'v2', 'v3']) def available_api_versions(self): # Calling g_connect will populate self._available_api_versions return self._available_api_versions def _call_galaxy(self, url, args=None, headers=None, method=None, auth_required=False, error_context_msg=None): headers = headers or {} self._add_auth_token(headers, url, required=auth_required) try: display.vvvv("Calling Galaxy at %s" % url) resp = open_url(to_native(url), data=args, validate_certs=self.validate_certs, headers=headers, method=method, timeout=20) except HTTPError as e: raise GalaxyError(e, error_context_msg) except Exception as e: raise AnsibleError("Unknown error when attempting to call Galaxy at '%s': %s" % (url, to_native(e))) resp_data = to_text(resp.read(), errors='surrogate_or_strict') try: data = json.loads(resp_data) except ValueError: raise AnsibleError("Failed to parse Galaxy response from '%s' as JSON:\n%s" % (resp.url, to_native(resp_data))) return data def _add_auth_token(self, headers, url, token_type=None, required=False): # Don't add the auth token if one is already present if 'Authorization' in headers: return if not self.token and required: raise AnsibleError("No access token or username set. 
A token can be set with --api-key, with " "'ansible-galaxy login', or set in ansible.cfg.") if self.token: headers.update(self.token.headers()) @g_connect(['v1']) def authenticate(self, github_token): """ Retrieve an authentication token """ url = _urljoin(self.api_server, self.available_api_versions['v1'], "tokens") + '/' args = urlencode({"github_token": github_token}) resp = open_url(url, data=args, validate_certs=self.validate_certs, method="POST") data = json.loads(to_text(resp.read(), errors='surrogate_or_strict')) return data @g_connect(['v1']) def create_import_task(self, github_user, github_repo, reference=None, role_name=None): """ Post an import request """ url = _urljoin(self.api_server, self.available_api_versions['v1'], "imports") + '/' args = { "github_user": github_user, "github_repo": github_repo, "github_reference": reference if reference else "" } if role_name: args['alternate_role_name'] = role_name elif github_repo.startswith('ansible-role'): args['alternate_role_name'] = github_repo[len('ansible-role') + 1:] data = self._call_galaxy(url, args=urlencode(args), method="POST") if data.get('results', None): return data['results'] return data @g_connect(['v1']) def get_import_task(self, task_id=None, github_user=None, github_repo=None): """ Check the status of an import task. """ url = _urljoin(self.api_server, self.available_api_versions['v1'], "imports") if task_id is not None: url = "%s?id=%d" % (url, task_id) elif github_user is not None and github_repo is not None: url = "%s?github_user=%s&github_repo=%s" % (url, github_user, github_repo) else: raise AnsibleError("Expected task_id or github_user and github_repo") data = self._call_galaxy(url) return data['results'] @g_connect(['v1']) def lookup_role_by_name(self, role_name, notify=True): """ Find a role by name. 
""" role_name = to_text(urlquote(to_bytes(role_name))) try: parts = role_name.split(".") user_name = ".".join(parts[0:-1]) role_name = parts[-1] if notify: display.display("- downloading role '%s', owned by %s" % (role_name, user_name)) except Exception: raise AnsibleError("Invalid role name (%s). Specify role as format: username.rolename" % role_name) url = _urljoin(self.api_server, self.available_api_versions['v1'], "roles", "?owner__username=%s&name=%s" % (user_name, role_name)) data = self._call_galaxy(url) if len(data["results"]) != 0: return data["results"][0] return None @g_connect(['v1']) def fetch_role_related(self, related, role_id): """ Fetch the list of related items for the given role. The url comes from the 'related' field of the role. """ results = [] try: url = _urljoin(self.api_server, self.available_api_versions['v1'], "roles", role_id, related, "?page_size=50") data = self._call_galaxy(url) results = data['results'] done = (data.get('next_link', None) is None) # https://github.com/ansible/ansible/issues/64355 # api_server contains part of the API path but next_link includes the the /api part so strip it out. url_info = urlparse(self.api_server) base_url = "%s://%s/" % (url_info.scheme, url_info.netloc) while not done: url = _urljoin(base_url, data['next_link']) data = self._call_galaxy(url) results += data['results'] done = (data.get('next_link', None) is None) except Exception as e: display.warning("Unable to retrieve role (id=%s) data (%s), but this is not fatal so we continue: %s" % (role_id, related, to_text(e))) return results @g_connect(['v1']) def get_list(self, what): """ Fetch the list of items specified. 
""" try: url = _urljoin(self.api_server, self.available_api_versions['v1'], what, "?page_size") data = self._call_galaxy(url) if "results" in data: results = data['results'] else: results = data done = True if "next" in data: done = (data.get('next_link', None) is None) while not done: url = _urljoin(self.api_server, data['next_link']) data = self._call_galaxy(url) results += data['results'] done = (data.get('next_link', None) is None) return results except Exception as error: raise AnsibleError("Failed to download the %s list: %s" % (what, to_native(error))) @g_connect(['v1']) def search_roles(self, search, **kwargs): search_url = _urljoin(self.api_server, self.available_api_versions['v1'], "search", "roles", "?") if search: search_url += '&autocomplete=' + to_text(urlquote(to_bytes(search))) tags = kwargs.get('tags', None) platforms = kwargs.get('platforms', None) page_size = kwargs.get('page_size', None) author = kwargs.get('author', None) if tags and isinstance(tags, string_types): tags = tags.split(',') search_url += '&tags_autocomplete=' + '+'.join(tags) if platforms and isinstance(platforms, string_types): platforms = platforms.split(',') search_url += '&platforms_autocomplete=' + '+'.join(platforms) if page_size: search_url += '&page_size=%s' % page_size if author: search_url += '&username_autocomplete=%s' % author data = self._call_galaxy(search_url) return data @g_connect(['v1']) def add_secret(self, source, github_user, github_repo, secret): url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets") + '/' args = urlencode({ "source": source, "github_user": github_user, "github_repo": github_repo, "secret": secret }) data = self._call_galaxy(url, args=args, method="POST") return data @g_connect(['v1']) def list_secrets(self): url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets") data = self._call_galaxy(url, auth_required=True) return data @g_connect(['v1']) def remove_secret(self, 
secret_id): url = _urljoin(self.api_server, self.available_api_versions['v1'], "notification_secrets", secret_id) + '/' data = self._call_galaxy(url, auth_required=True, method='DELETE') return data @g_connect(['v1']) def delete_role(self, github_user, github_repo): url = _urljoin(self.api_server, self.available_api_versions['v1'], "removerole", "?github_user=%s&github_repo=%s" % (github_user, github_repo)) data = self._call_galaxy(url, auth_required=True, method='DELETE') return data # Collection APIs # @g_connect(['v2', 'v3']) def publish_collection(self, collection_path): """ Publishes a collection to a Galaxy server and returns the import task URI. :param collection_path: The path to the collection tarball to publish. :return: The import task URI that contains the import results. """ display.display("Publishing collection artifact '%s' to %s %s" % (collection_path, self.name, self.api_server)) b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict') if not os.path.exists(b_collection_path): raise AnsibleError("The collection path specified '%s' does not exist." % to_native(collection_path)) elif not tarfile.is_tarfile(b_collection_path): raise AnsibleError("The collection path specified '%s' is not a tarball, use 'ansible-galaxy collection " "build' to create a proper release artifact." 
% to_native(collection_path)) with open(b_collection_path, 'rb') as collection_tar: data = collection_tar.read() boundary = '--------------------------%s' % uuid.uuid4().hex b_file_name = os.path.basename(b_collection_path) part_boundary = b"--" + to_bytes(boundary, errors='surrogate_or_strict') form = [ part_boundary, b"Content-Disposition: form-data; name=\"sha256\"", to_bytes(secure_hash_s(data), errors='surrogate_or_strict'), part_boundary, b"Content-Disposition: file; name=\"file\"; filename=\"%s\"" % b_file_name, b"Content-Type: application/octet-stream", b"", data, b"%s--" % part_boundary, ] data = b"\r\n".join(form) headers = { 'Content-type': 'multipart/form-data; boundary=%s' % boundary, 'Content-length': len(data), } if 'v3' in self.available_api_versions: n_url = _urljoin(self.api_server, self.available_api_versions['v3'], 'artifacts', 'collections') + '/' else: n_url = _urljoin(self.api_server, self.available_api_versions['v2'], 'collections') + '/' resp = self._call_galaxy(n_url, args=data, headers=headers, method='POST', auth_required=True, error_context_msg='Error when publishing collection to %s (%s)' % (self.name, self.api_server)) return resp['task'] @g_connect(['v2', 'v3']) def wait_import_task(self, task_id, timeout=0): """ Waits until the import process on the Galaxy server has completed or the timeout is reached. :param task_id: The id of the import task to wait for. This can be parsed out of the return value for GalaxyAPI.publish_collection. :param timeout: The timeout in seconds, 0 is no timeout. """ # TODO: actually verify that v3 returns the same structure as v2, right now this is just an assumption. state = 'waiting' data = None # Construct the appropriate URL per version if 'v3' in self.available_api_versions: full_url = _urljoin(self.api_server, self.available_api_versions['v3'], 'imports/collections', task_id, '/') else: # TODO: Should we have a trailing slash here? 
I'm working with what the unittests ask # for but a trailing slash may be more correct full_url = _urljoin(self.api_server, self.available_api_versions['v2'], 'collection-imports', task_id) display.display("Waiting until Galaxy import task %s has completed" % full_url) start = time.time() wait = 2 while timeout == 0 or (time.time() - start) < timeout: data = self._call_galaxy(full_url, method='GET', auth_required=True, error_context_msg='Error when getting import task results at %s' % full_url) state = data.get('state', 'waiting') if data.get('finished_at', None): break display.vvv('Galaxy import process has a status of %s, wait %d seconds before trying again' % (state, wait)) time.sleep(wait) # poor man's exponential backoff algo so we don't flood the Galaxy API, cap at 30 seconds. wait = min(30, wait * 1.5) if state == 'waiting': raise AnsibleError("Timeout while waiting for the Galaxy import process to finish, check progress at '%s'" % to_native(full_url)) for message in data.get('messages', []): level = message['level'] if level == 'error': display.error("Galaxy import error message: %s" % message['message']) elif level == 'warning': display.warning("Galaxy import warning message: %s" % message['message']) else: display.vvv("Galaxy import message: %s - %s" % (level, message['message'])) if state == 'failed': code = to_native(data['error'].get('code', 'UNKNOWN')) description = to_native( data['error'].get('description', "Unknown error, see %s for more details" % full_url)) raise AnsibleError("Galaxy import process failed: %s (Code: %s)" % (description, code)) @g_connect(['v2', 'v3']) def get_collection_version_metadata(self, namespace, name, version): """ Gets the collection information from the Galaxy server about a specific Collection version. :param namespace: The collection namespace. :param name: The collection name. :param version: Optional version of the collection to get the information for. 
:return: CollectionVersionMetadata about the collection at the version requested. """ api_path = self.available_api_versions.get('v3', self.available_api_versions.get('v2')) url_paths = [self.api_server, api_path, 'collections', namespace, name, 'versions', version] n_collection_url = _urljoin(*url_paths) error_context_msg = 'Error when getting collection version metadata for %s.%s:%s from %s (%s)' \ % (namespace, name, version, self.name, self.api_server) data = self._call_galaxy(n_collection_url, error_context_msg=error_context_msg) return CollectionVersionMetadata(data['namespace']['name'], data['collection']['name'], data['version'], data['download_url'], data['artifact']['sha256'], data['metadata']['dependencies']) @g_connect(['v2', 'v3']) def get_collection_versions(self, namespace, name): """ Gets a list of available versions for a collection on a Galaxy server. :param namespace: The collection namespace. :param name: The collection name. :return: A list of versions that are available. """ if 'v3' in self.available_api_versions: api_path = self.available_api_versions['v3'] results_key = 'data' pagination_path = ['links', 'next'] else: api_path = self.available_api_versions['v2'] results_key = 'results' pagination_path = ['next'] n_url = _urljoin(self.api_server, api_path, 'collections', namespace, name, 'versions') error_context_msg = 'Error when getting available collection versions for %s.%s from %s (%s)' \ % (namespace, name, self.name, self.api_server) data = self._call_galaxy(n_url, error_context_msg=error_context_msg) versions = [] while True: versions += [v['version'] for v in data[results_key]] next_link = data for path in pagination_path: next_link = next_link.get(path, {}) if not next_link: break data = self._call_galaxy(to_native(next_link, errors='surrogate_or_strict'), error_context_msg=error_context_msg) return versions
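The pagination loop at the end of `get_collection_versions` accumulates one page of results, then follows the pagination pointer — `['next']` on v2, `['links', 'next']` on v3 — until it comes back empty. That walk can be sketched standalone (`walk_pages` and the canned page dicts below are illustrative names, not part of the Galaxy client):

```python
def walk_pages(fetch, first_page, results_key, pagination_path):
    """Collect items from Galaxy-style paginated responses.

    fetch: callable mapping a page reference to its decoded JSON body.
    results_key: 'results' (v2) or 'data' (v3).
    pagination_path: ['next'] (v2) or ['links', 'next'] (v3).
    """
    items = []
    data = fetch(first_page)
    while True:
        items.extend(data[results_key])
        # Descend through the pagination path; a missing or null key
        # at any step means there is no next page.
        next_link = data
        for key in pagination_path:
            next_link = next_link.get(key) or {}
        if not next_link:
            break
        data = fetch(next_link)
    return items


# v2-shaped pages: top-level 'next' pointer, 'results' payload.
v2_pages = {
    'page1': {'results': ['1.0.0', '1.0.1'], 'next': 'page2'},
    'page2': {'results': ['1.0.2'], 'next': None},
}
print(walk_pages(v2_pages.__getitem__, 'page1', 'results', ['next']))
# -> ['1.0.0', '1.0.1', '1.0.2']
```

The same function handles the v3 shape by passing `results_key='data'` and `pagination_path=['links', 'next']`, which is why the real method only switches those two parameters between API versions.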
closed
ansible/ansible
https://github.com/ansible/ansible
64,850
Galaxy Collection throws generic 401 while installing collections from Automation Hub.
<!--- Verify first that your issue is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY <!--- Explain the problem briefly below --> If a user passes an incorrect username or password via the configuration for a galaxy_server the error message is unhelpful. ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME <!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure --> ansible-galaxy ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes --> ``` ansible --version ansible 2.9.0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /bin/ansible python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] ``` ##### CONFIGURATION <!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes --> ``` ansible-config dump --only-changed DEFAULT_JINJA2_NATIVE(/etc/ansible/ansible.cfg) = True GALAXY_SERVER_LIST(/etc/ansible/ansible.cfg) = [u'automation_hub'] ``` ##### OS / ENVIRONMENT <!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. 
--> RHEL7 ##### STEPS TO REPRODUCE <!--- Describe exactly how to reproduce the problem, using a minimal test-case --> <!--- Paste example playbooks or commands between quotes below --> ``` [galaxy] server_list = automation_hub [galaxy_server.automation_hub] url=https://cloud.redhat.com/api/automation-hub/ username=$INCORRECTUSERNAME password=$PASSWORD ``` <!--- HINT: You can paste gist.github.com links for larger files --> ##### EXPECTED RESULTS Actual error from automation hub api is exposed: ``` < HTTP/1.1 401 Unauthorized < Server: openresty/1.13.6.1 < Content-Type: text/plain < Content-Length: 40 < x-rh-insights-request-id: < X-Content-Type-Options: nosniff < Date: Thu, 14 Nov 2019 18:05:51 GMT < Connection: keep-alive < Set-Cookie: ; path=/; HttpOnly; Secure < X-Frame-Options: SAMEORIGIN < ``` Insights services authentication failed <!--- Describe what you expected to happen when running the steps above --> ##### ACTUAL RESULTS generic and not useful error is thrown: ``` ansible-galaxy collection install splunk.enterprise_security -vvv ansible-galaxy 2.9.0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = /bin/ansible-galaxy python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] Using /etc/ansible/ansible.cfg as config file Process install dependency map Processing requirement collection 'splunk.enterprise_security' ERROR! Error when finding available api versions from automation_hub (https://cloud.redhat.com/api/automation-hub/) (HTTP Code: 401, Message: Unknown error returned by Galaxy server.) ``` <!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes --> ```paste below ```
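The 401 in this report carries a plain-text body (`Insights services authentication failed`) that gets discarded once JSON decoding of the error response fails, leaving only the generic message. A minimal sketch of a fallback that would surface it — `galaxy_error_message` is a hypothetical helper for illustration, not the fix that shipped in the linked PR:

```python
import json

GENERIC = 'Unknown error returned by Galaxy server.'


def galaxy_error_message(http_body, default=GENERIC):
    """Best-effort extraction of a human-readable server error message.

    Tries the JSON shapes Galaxy is known to return ('msg'/'message'
    keys); on a non-JSON body, returns the raw text instead of the
    generic fallback, so a plain-text 401 stays visible to the user.
    """
    text = http_body.strip()
    if not text:
        return default
    try:
        data = json.loads(text)
    except ValueError:
        # Not JSON at all, e.g. 'Insights services authentication failed'
        return text
    if isinstance(data, dict):
        return data.get('msg') or data.get('message') or default
    return default


print(galaxy_error_message('Insights services authentication failed'))
# -> Insights services authentication failed
```

With such a fallback in place, the `ansible-galaxy` error above would include the Automation Hub response body instead of only "Unknown error returned by Galaxy server."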
https://github.com/ansible/ansible/issues/64850
https://github.com/ansible/ansible/pull/65273
0e5a83a1cc2379afc70c45588e677ddd3b911dc2
6586b7132c839b2f60582ff363a99c62156e2e50
2019-11-14T18:14:10Z
python
2019-12-02T21:36:05Z
test/units/galaxy/test_api.py
# -*- coding: utf-8 -*- # Copyright: (c) 2019, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import json import os import re import pytest import tarfile import tempfile import time from io import BytesIO, StringIO from units.compat.mock import MagicMock from ansible import context from ansible.errors import AnsibleError from ansible.galaxy import api as galaxy_api from ansible.galaxy.api import CollectionVersionMetadata, GalaxyAPI, GalaxyError from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken from ansible.module_utils._text import to_native, to_text from ansible.module_utils.six.moves.urllib import error as urllib_error from ansible.utils import context_objects as co from ansible.utils.display import Display @pytest.fixture(autouse='function') def reset_cli_args(): co.GlobalCLIArgs._Singleton__instance = None # Required to initialise the GalaxyAPI object context.CLIARGS._store = {'ignore_certs': False} yield co.GlobalCLIArgs._Singleton__instance = None @pytest.fixture() def collection_artifact(tmp_path_factory): ''' Creates a collection artifact tarball that is ready to be published ''' output_dir = to_text(tmp_path_factory.mktemp('test-Γ…Γ‘ΕšΓŒΞ²ΕΓˆ Output')) tar_path = os.path.join(output_dir, 'namespace-collection-v1.0.0.tar.gz') with tarfile.open(tar_path, 'w:gz') as tfile: b_io = BytesIO(b"\x00\x01\x02\x03") tar_info = tarfile.TarInfo('test') tar_info.size = 4 tar_info.mode = 0o0644 tfile.addfile(tarinfo=tar_info, fileobj=b_io) yield tar_path def get_test_galaxy_api(url, version, token_ins=None, token_value=None): token_value = token_value or "my token" token_ins = token_ins or GalaxyToken(token_value) api = GalaxyAPI(None, "test", url) # Warning, this doesn't test g_connect() because _available_api_versions is set here.
That means # that urls for v2 servers have to append '/api/' themselves in the input data. api._available_api_versions = {version: '%s' % version} api.token = token_ins return api def test_api_no_auth(): api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/") actual = {} api._add_auth_token(actual, "") assert actual == {} def test_api_no_auth_but_required(): expected = "No access token or username set. A token can be set with --api-key, with 'ansible-galaxy login', " \ "or set in ansible.cfg." with pytest.raises(AnsibleError, match=expected): GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/")._add_auth_token({}, "", required=True) def test_api_token_auth(): token = GalaxyToken(token=u"my_token") api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token) actual = {} api._add_auth_token(actual, "", required=True) assert actual == {'Authorization': 'Token my_token'} def test_api_token_auth_with_token_type(monkeypatch): token = KeycloakToken(auth_url='https://api.test/') mock_token_get = MagicMock() mock_token_get.return_value = 'my_token' monkeypatch.setattr(token, 'get', mock_token_get) api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token) actual = {} api._add_auth_token(actual, "", token_type="Bearer", required=True) assert actual == {'Authorization': 'Bearer my_token'} def test_api_token_auth_with_v3_url(monkeypatch): token = KeycloakToken(auth_url='https://api.test/') mock_token_get = MagicMock() mock_token_get.return_value = 'my_token' monkeypatch.setattr(token, 'get', mock_token_get) api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token) actual = {} api._add_auth_token(actual, "https://galaxy.ansible.com/api/v3/resource/name", required=True) assert actual == {'Authorization': 'Bearer my_token'} def test_api_token_auth_with_v2_url(): token = GalaxyToken(token=u"my_token") api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token) actual = {} # Add v3 to random 
part of URL but response should only see the v2 as the full URI path segment. api._add_auth_token(actual, "https://galaxy.ansible.com/api/v2/resourcev3/name", required=True) assert actual == {'Authorization': 'Token my_token'} def test_api_basic_auth_password(): token = BasicAuthToken(username=u"user", password=u"pass") api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token) actual = {} api._add_auth_token(actual, "", required=True) assert actual == {'Authorization': 'Basic dXNlcjpwYXNz'} def test_api_basic_auth_no_password(): token = BasicAuthToken(username=u"user") api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token) actual = {} api._add_auth_token(actual, "", required=True) assert actual == {'Authorization': 'Basic dXNlcjo='} def test_api_dont_override_auth_header(): api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/") actual = {'Authorization': 'Custom token'} api._add_auth_token(actual, "", required=True) assert actual == {'Authorization': 'Custom token'} def test_initialise_galaxy(monkeypatch): mock_open = MagicMock() mock_open.side_effect = [ StringIO(u'{"available_versions":{"v1":"v1/"}}'), StringIO(u'{"token":"my token"}'), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/") actual = api.authenticate("github_token") assert len(api.available_api_versions) == 2 assert api.available_api_versions['v1'] == u'v1/' assert api.available_api_versions['v2'] == u'v2/' assert actual == {u'token': u'my token'} assert mock_open.call_count == 2 assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/' assert mock_open.mock_calls[1][1][0] == 'https://galaxy.ansible.com/api/v1/tokens/' assert mock_open.mock_calls[1][2]['data'] == 'github_token=github_token' def test_initialise_galaxy_with_auth(monkeypatch): mock_open = MagicMock() mock_open.side_effect = [ StringIO(u'{"available_versions":{"v1":"v1/"}}'), StringIO(u'{"token":"my 
token"}'), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=GalaxyToken(token='my_token')) actual = api.authenticate("github_token") assert len(api.available_api_versions) == 2 assert api.available_api_versions['v1'] == u'v1/' assert api.available_api_versions['v2'] == u'v2/' assert actual == {u'token': u'my token'} assert mock_open.call_count == 2 assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/' assert mock_open.mock_calls[1][1][0] == 'https://galaxy.ansible.com/api/v1/tokens/' assert mock_open.mock_calls[1][2]['data'] == 'github_token=github_token' def test_initialise_automation_hub(monkeypatch): mock_open = MagicMock() mock_open.side_effect = [ StringIO(u'{"available_versions":{"v2": "v2/", "v3":"v3/"}}'), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) token = KeycloakToken(auth_url='https://api.test/') mock_token_get = MagicMock() mock_token_get.return_value = 'my_token' monkeypatch.setattr(token, 'get', mock_token_get) api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=token) assert len(api.available_api_versions) == 2 assert api.available_api_versions['v2'] == u'v2/' assert api.available_api_versions['v3'] == u'v3/' assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/' assert mock_open.mock_calls[0][2]['headers'] == {'Authorization': 'Bearer my_token'} def test_initialise_unknown(monkeypatch): mock_open = MagicMock() mock_open.side_effect = [ urllib_error.HTTPError('https://galaxy.ansible.com/api/', 500, 'msg', {}, StringIO(u'{"msg":"raw error"}')), urllib_error.HTTPError('https://galaxy.ansible.com/api/api/', 500, 'msg', {}, StringIO(u'{"msg":"raw error"}')), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/", token=GalaxyToken(token='my_token')) expected = "Error when finding available api versions from test (%s) (HTTP Code: 500, 
Message: Unknown " \ "error returned by Galaxy server.)" % api.api_server with pytest.raises(AnsibleError, match=re.escape(expected)): api.authenticate("github_token") def test_get_available_api_versions(monkeypatch): mock_open = MagicMock() mock_open.side_effect = [ StringIO(u'{"available_versions":{"v1":"v1/","v2":"v2/"}}'), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) api = GalaxyAPI(None, "test", "https://galaxy.ansible.com/api/") actual = api.available_api_versions assert len(actual) == 2 assert actual['v1'] == u'v1/' assert actual['v2'] == u'v2/' assert mock_open.call_count == 1 assert mock_open.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/' def test_publish_collection_missing_file(): fake_path = u'/fake/Γ…Γ‘ΕšΓŒΞ²ΕΓˆ/path' expected = to_native("The collection path specified '%s' does not exist." % fake_path) api = get_test_galaxy_api("https://galaxy.ansible.com/api/", "v2") with pytest.raises(AnsibleError, match=expected): api.publish_collection(fake_path) def test_publish_collection_not_a_tarball(): expected = "The collection path specified '{0}' is not a tarball, use 'ansible-galaxy collection build' to " \ "create a proper release artifact." 
api = get_test_galaxy_api("https://galaxy.ansible.com/api/", "v2") with tempfile.NamedTemporaryFile(prefix=u'Γ…Γ‘ΕšΓŒΞ²ΕΓˆ') as temp_file: temp_file.write(b"\x00") temp_file.flush() with pytest.raises(AnsibleError, match=expected.format(to_native(temp_file.name))): api.publish_collection(temp_file.name) def test_publish_collection_unsupported_version(): expected = "Galaxy action publish_collection requires API versions 'v2, v3' but only 'v1' are available on test " \ "https://galaxy.ansible.com/api/" api = get_test_galaxy_api("https://galaxy.ansible.com/api/", "v1") with pytest.raises(AnsibleError, match=expected): api.publish_collection("path") @pytest.mark.parametrize('api_version, collection_url', [ ('v2', 'collections'), ('v3', 'artifacts/collections'), ]) def test_publish_collection(api_version, collection_url, collection_artifact, monkeypatch): api = get_test_galaxy_api("https://galaxy.ansible.com/api/", api_version) mock_call = MagicMock() mock_call.return_value = {'task': 'http://task.url/'} monkeypatch.setattr(api, '_call_galaxy', mock_call) actual = api.publish_collection(collection_artifact) assert actual == 'http://task.url/' assert mock_call.call_count == 1 assert mock_call.mock_calls[0][1][0] == 'https://galaxy.ansible.com/api/%s/%s/' % (api_version, collection_url) assert mock_call.mock_calls[0][2]['headers']['Content-length'] == len(mock_call.mock_calls[0][2]['args']) assert mock_call.mock_calls[0][2]['headers']['Content-type'].startswith( 'multipart/form-data; boundary=--------------------------') assert mock_call.mock_calls[0][2]['args'].startswith(b'--------------------------') assert mock_call.mock_calls[0][2]['method'] == 'POST' assert mock_call.mock_calls[0][2]['auth_required'] is True @pytest.mark.parametrize('api_version, collection_url, response, expected', [ ('v2', 'collections', {}, 'Error when publishing collection to test (%s) (HTTP Code: 500, Message: Unknown error returned by Galaxy ' 'server. 
Code: Unknown)'), ('v2', 'collections', { 'message': u'Galaxy error messΓ€ge', 'code': 'GWE002', }, u'Error when publishing collection to test (%s) (HTTP Code: 500, Message: Galaxy error messΓ€ge Code: GWE002)'), ('v3', 'artifact/collections', {}, 'Error when publishing collection to test (%s) (HTTP Code: 500, Message: Unknown error returned by Galaxy ' 'server. Code: Unknown)'), ('v3', 'artifact/collections', { 'errors': [ { 'code': 'conflict.collection_exists', 'detail': 'Collection "mynamespace-mycollection-4.1.1" already exists.', 'title': 'Conflict.', 'status': '400', }, { 'code': 'quantum_improbability', 'title': u'RΓ€ndom(?) quantum improbability.', 'source': {'parameter': 'the_arrow_of_time'}, 'meta': {'remediation': 'Try again before'}, }, ], }, u'Error when publishing collection to test (%s) (HTTP Code: 500, Message: Collection ' u'"mynamespace-mycollection-4.1.1" already exists. Code: conflict.collection_exists), (HTTP Code: 500, ' u'Message: RΓ€ndom(?) quantum improbability. 
Code: quantum_improbability)') ]) def test_publish_failure(api_version, collection_url, response, expected, collection_artifact, monkeypatch): api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version) expected_url = '%s/api/%s/%s' % (api.api_server, api_version, collection_url) mock_open = MagicMock() mock_open.side_effect = urllib_error.HTTPError(expected_url, 500, 'msg', {}, StringIO(to_text(json.dumps(response)))) monkeypatch.setattr(galaxy_api, 'open_url', mock_open) with pytest.raises(GalaxyError, match=re.escape(to_native(expected % api.api_server))): api.publish_collection(collection_artifact) @pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [ ('https://galaxy.server.com/api', 'v2', 'Token', GalaxyToken('my token'), '1234', 'https://galaxy.server.com/api/v2/collection-imports/1234'), ('https://galaxy.server.com/api/automation-hub/', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), '1234', 'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'), ]) def test_wait_import_task(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch): api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins) if token_ins: mock_token_get = MagicMock() mock_token_get.return_value = 'my token' monkeypatch.setattr(token_ins, 'get', mock_token_get) mock_open = MagicMock() mock_open.return_value = StringIO(u'{"state":"success","finished_at":"time"}') monkeypatch.setattr(galaxy_api, 'open_url', mock_open) mock_display = MagicMock() monkeypatch.setattr(Display, 'display', mock_display) api.wait_import_task(import_uri) assert mock_open.call_count == 1 assert mock_open.mock_calls[0][1][0] == full_import_uri assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type assert mock_display.call_count == 1 assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % 
full_import_uri @pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [ ('https://galaxy.server.com/api/', 'v2', 'Token', GalaxyToken('my token'), '1234', 'https://galaxy.server.com/api/v2/collection-imports/1234'), ('https://galaxy.server.com/api/automation-hub', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), '1234', 'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'), ]) def test_wait_import_task_multiple_requests(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch): api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins) if token_ins: mock_token_get = MagicMock() mock_token_get.return_value = 'my token' monkeypatch.setattr(token_ins, 'get', mock_token_get) mock_open = MagicMock() mock_open.side_effect = [ StringIO(u'{"state":"test"}'), StringIO(u'{"state":"success","finished_at":"time"}'), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) mock_display = MagicMock() monkeypatch.setattr(Display, 'display', mock_display) mock_vvv = MagicMock() monkeypatch.setattr(Display, 'vvv', mock_vvv) monkeypatch.setattr(time, 'sleep', MagicMock()) api.wait_import_task(import_uri) assert mock_open.call_count == 2 assert mock_open.mock_calls[0][1][0] == full_import_uri assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type assert mock_open.mock_calls[1][1][0] == full_import_uri assert mock_open.mock_calls[1][2]['headers']['Authorization'] == '%s my token' % token_type assert mock_display.call_count == 1 assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri assert mock_vvv.call_count == 1 assert mock_vvv.mock_calls[0][1][0] == \ 'Galaxy import process has a status of test, wait 2 seconds before trying again' @pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri,', [ 
('https://galaxy.server.com/api/', 'v2', 'Token', GalaxyToken('my token'), '1234', 'https://galaxy.server.com/api/v2/collection-imports/1234'), ('https://galaxy.server.com/api/automation-hub/', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), '1234', 'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'), ]) def test_wait_import_task_with_failure(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch): api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins) if token_ins: mock_token_get = MagicMock() mock_token_get.return_value = 'my token' monkeypatch.setattr(token_ins, 'get', mock_token_get) mock_open = MagicMock() mock_open.side_effect = [ StringIO(to_text(json.dumps({ 'finished_at': 'some_time', 'state': 'failed', 'error': { 'code': 'GW001', 'description': u'BecΓ€use I said so!', }, 'messages': [ { 'level': 'error', 'message': u'SomΓ© error', }, { 'level': 'warning', 'message': u'Some wΓ€rning', }, { 'level': 'info', 'message': u'SomΓ© info', }, ], }))), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) mock_display = MagicMock() monkeypatch.setattr(Display, 'display', mock_display) mock_vvv = MagicMock() monkeypatch.setattr(Display, 'vvv', mock_vvv) mock_warn = MagicMock() monkeypatch.setattr(Display, 'warning', mock_warn) mock_err = MagicMock() monkeypatch.setattr(Display, 'error', mock_err) expected = to_native(u'Galaxy import process failed: BecΓ€use I said so! 
(Code: GW001)') with pytest.raises(AnsibleError, match=re.escape(expected)): api.wait_import_task(import_uri) assert mock_open.call_count == 1 assert mock_open.mock_calls[0][1][0] == full_import_uri assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type assert mock_display.call_count == 1 assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri assert mock_vvv.call_count == 1 assert mock_vvv.mock_calls[0][1][0] == u'Galaxy import message: info - SomΓ© info' assert mock_warn.call_count == 1 assert mock_warn.mock_calls[0][1][0] == u'Galaxy import warning message: Some wΓ€rning' assert mock_err.call_count == 1 assert mock_err.mock_calls[0][1][0] == u'Galaxy import error message: SomΓ© error' @pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [ ('https://galaxy.server.com/api/', 'v2', 'Token', GalaxyToken('my_token'), '1234', 'https://galaxy.server.com/api/v2/collection-imports/1234'), ('https://galaxy.server.com/api/automation-hub/', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), '1234', 'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'), ]) def test_wait_import_task_with_failure_no_error(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch): api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins) if token_ins: mock_token_get = MagicMock() mock_token_get.return_value = 'my token' monkeypatch.setattr(token_ins, 'get', mock_token_get) mock_open = MagicMock() mock_open.side_effect = [ StringIO(to_text(json.dumps({ 'finished_at': 'some_time', 'state': 'failed', 'error': {}, 'messages': [ { 'level': 'error', 'message': u'SomΓ© error', }, { 'level': 'warning', 'message': u'Some wΓ€rning', }, { 'level': 'info', 'message': u'SomΓ© info', }, ], }))), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) mock_display = MagicMock() 
monkeypatch.setattr(Display, 'display', mock_display) mock_vvv = MagicMock() monkeypatch.setattr(Display, 'vvv', mock_vvv) mock_warn = MagicMock() monkeypatch.setattr(Display, 'warning', mock_warn) mock_err = MagicMock() monkeypatch.setattr(Display, 'error', mock_err) expected = 'Galaxy import process failed: Unknown error, see %s for more details \\(Code: UNKNOWN\\)' % full_import_uri with pytest.raises(AnsibleError, match=expected): api.wait_import_task(import_uri) assert mock_open.call_count == 1 assert mock_open.mock_calls[0][1][0] == full_import_uri assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type assert mock_display.call_count == 1 assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri assert mock_vvv.call_count == 1 assert mock_vvv.mock_calls[0][1][0] == u'Galaxy import message: info - SomΓ© info' assert mock_warn.call_count == 1 assert mock_warn.mock_calls[0][1][0] == u'Galaxy import warning message: Some wΓ€rning' assert mock_err.call_count == 1 assert mock_err.mock_calls[0][1][0] == u'Galaxy import error message: SomΓ© error' @pytest.mark.parametrize('server_url, api_version, token_type, token_ins, import_uri, full_import_uri', [ ('https://galaxy.server.com/api', 'v2', 'Token', GalaxyToken('my token'), '1234', 'https://galaxy.server.com/api/v2/collection-imports/1234'), ('https://galaxy.server.com/api/automation-hub', 'v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), '1234', 'https://galaxy.server.com/api/automation-hub/v3/imports/collections/1234/'), ]) def test_wait_import_task_timeout(server_url, api_version, token_type, token_ins, import_uri, full_import_uri, monkeypatch): api = get_test_galaxy_api(server_url, api_version, token_ins=token_ins) if token_ins: mock_token_get = MagicMock() mock_token_get.return_value = 'my token' monkeypatch.setattr(token_ins, 'get', mock_token_get) def return_response(*args, **kwargs): return 
StringIO(u'{"state":"waiting"}') mock_open = MagicMock() mock_open.side_effect = return_response monkeypatch.setattr(galaxy_api, 'open_url', mock_open) mock_display = MagicMock() monkeypatch.setattr(Display, 'display', mock_display) mock_vvv = MagicMock() monkeypatch.setattr(Display, 'vvv', mock_vvv) monkeypatch.setattr(time, 'sleep', MagicMock()) expected = "Timeout while waiting for the Galaxy import process to finish, check progress at '%s'" % full_import_uri with pytest.raises(AnsibleError, match=expected): api.wait_import_task(import_uri, 1) assert mock_open.call_count > 1 assert mock_open.mock_calls[0][1][0] == full_import_uri assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type assert mock_open.mock_calls[1][1][0] == full_import_uri assert mock_open.mock_calls[1][2]['headers']['Authorization'] == '%s my token' % token_type assert mock_display.call_count == 1 assert mock_display.mock_calls[0][1][0] == 'Waiting until Galaxy import task %s has completed' % full_import_uri # expected_wait_msg = 'Galaxy import process has a status of waiting, wait {0} seconds before trying again' assert mock_vvv.call_count > 9 # 1st is opening Galaxy token file. 
# FIXME: # assert mock_vvv.mock_calls[1][1][0] == expected_wait_msg.format(2) # assert mock_vvv.mock_calls[2][1][0] == expected_wait_msg.format(3) # assert mock_vvv.mock_calls[3][1][0] == expected_wait_msg.format(4) # assert mock_vvv.mock_calls[4][1][0] == expected_wait_msg.format(6) # assert mock_vvv.mock_calls[5][1][0] == expected_wait_msg.format(10) # assert mock_vvv.mock_calls[6][1][0] == expected_wait_msg.format(15) # assert mock_vvv.mock_calls[7][1][0] == expected_wait_msg.format(22) # assert mock_vvv.mock_calls[8][1][0] == expected_wait_msg.format(30) @pytest.mark.parametrize('api_version, token_type, version, token_ins', [ ('v2', None, 'v2.1.13', None), ('v3', 'Bearer', 'v1.0.0', KeycloakToken(auth_url='https://api.test/api/automation-hub/')), ]) def test_get_collection_version_metadata_no_version(api_version, token_type, version, token_ins, monkeypatch): api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version, token_ins=token_ins) if token_ins: mock_token_get = MagicMock() mock_token_get.return_value = 'my token' monkeypatch.setattr(token_ins, 'get', mock_token_get) mock_open = MagicMock() mock_open.side_effect = [ StringIO(to_text(json.dumps({ 'download_url': 'https://downloadme.com', 'artifact': { 'sha256': 'ac47b6fac117d7c171812750dacda655b04533cf56b31080b82d1c0db3c9d80f', }, 'namespace': { 'name': 'namespace', }, 'collection': { 'name': 'collection', }, 'version': version, 'metadata': { 'dependencies': {}, } }))), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) actual = api.get_collection_version_metadata('namespace', 'collection', version) assert isinstance(actual, CollectionVersionMetadata) assert actual.namespace == u'namespace' assert actual.name == u'collection' assert actual.download_url == u'https://downloadme.com' assert actual.artifact_sha256 == u'ac47b6fac117d7c171812750dacda655b04533cf56b31080b82d1c0db3c9d80f' assert actual.version == version assert actual.dependencies == {} assert mock_open.call_count == 1 assert 
mock_open.mock_calls[0][1][0] == '%s%s/collections/namespace/collection/versions/%s' \ % (api.api_server, api_version, version) # v2 calls don't need auth, so no authz header or token_type if token_type: assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type @pytest.mark.parametrize('api_version, token_type, token_ins, response', [ ('v2', None, None, { 'count': 2, 'next': None, 'previous': None, 'results': [ { 'version': '1.0.0', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.0', }, { 'version': '1.0.1', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.1', }, ], }), # TODO: Verify this once Automation Hub is actually out ('v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), { 'count': 2, 'next': None, 'previous': None, 'data': [ { 'version': '1.0.0', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.0', }, { 'version': '1.0.1', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.1', }, ], }), ]) def test_get_collection_versions(api_version, token_type, token_ins, response, monkeypatch): api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version, token_ins=token_ins) if token_ins: mock_token_get = MagicMock() mock_token_get.return_value = 'my token' monkeypatch.setattr(token_ins, 'get', mock_token_get) mock_open = MagicMock() mock_open.side_effect = [ StringIO(to_text(json.dumps(response))), ] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) actual = api.get_collection_versions('namespace', 'collection') assert actual == [u'1.0.0', u'1.0.1'] assert mock_open.call_count == 1 assert mock_open.mock_calls[0][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \ 'versions' % api_version if token_ins: assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type @pytest.mark.parametrize('api_version,
token_type, token_ins, responses', [ ('v2', None, None, [ { 'count': 6, 'next': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/?page=2', 'previous': None, 'results': [ { 'version': '1.0.0', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.0', }, { 'version': '1.0.1', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.1', }, ], }, { 'count': 6, 'next': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/?page=3', 'previous': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions', 'results': [ { 'version': '1.0.2', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.2', }, { 'version': '1.0.3', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.3', }, ], }, { 'count': 6, 'next': None, 'previous': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/?page=2', 'results': [ { 'version': '1.0.4', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.4', }, { 'version': '1.0.5', 'href': 'https://galaxy.server.com/api/v2/collections/namespace/collection/versions/1.0.5', }, ], }, ]), ('v3', 'Bearer', KeycloakToken(auth_url='https://api.test/'), [ { 'count': 6, 'links': { 'next': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/?page=2', 'previous': None, }, 'data': [ { 'version': '1.0.0', 'href': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/1.0.0', }, { 'version': '1.0.1', 'href': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/1.0.1', }, ], }, { 'count': 6, 'links': { 'next': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/?page=3', 'previous': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions', }, 'data': [ { 'version': '1.0.2', 'href': 
'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/1.0.2', }, { 'version': '1.0.3', 'href': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/1.0.3', }, ], }, { 'count': 6, 'links': { 'next': None, 'previous': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/?page=2', }, 'data': [ { 'version': '1.0.4', 'href': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/1.0.4', }, { 'version': '1.0.5', 'href': 'https://galaxy.server.com/api/v3/collections/namespace/collection/versions/1.0.5', }, ], }, ]), ]) def test_get_collection_versions_pagination(api_version, token_type, token_ins, responses, monkeypatch): api = get_test_galaxy_api('https://galaxy.server.com/api/', api_version, token_ins=token_ins) if token_ins: mock_token_get = MagicMock() mock_token_get.return_value = 'my token' monkeypatch.setattr(token_ins, 'get', mock_token_get) mock_open = MagicMock() mock_open.side_effect = [StringIO(to_text(json.dumps(r))) for r in responses] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) actual = api.get_collection_versions('namespace', 'collection') assert actual == [u'1.0.0', u'1.0.1', u'1.0.2', u'1.0.3', u'1.0.4', u'1.0.5'] assert mock_open.call_count == 3 assert mock_open.mock_calls[0][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \ 'versions' % api_version assert mock_open.mock_calls[1][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \ 'versions/?page=2' % api_version assert mock_open.mock_calls[2][1][0] == 'https://galaxy.server.com/api/%s/collections/namespace/collection/' \ 'versions/?page=3' % api_version if token_type: assert mock_open.mock_calls[0][2]['headers']['Authorization'] == '%s my token' % token_type assert mock_open.mock_calls[1][2]['headers']['Authorization'] == '%s my token' % token_type assert mock_open.mock_calls[2][2]['headers']['Authorization'] == '%s my token' % token_type 
@pytest.mark.parametrize('responses', [ [ { 'count': 2, 'results': [{'name': '3.5.1', }, {'name': '3.5.2'}], 'next_link': None, 'next': None, 'previous_link': None, 'previous': None }, ], [ { 'count': 2, 'results': [{'name': '3.5.1'}], 'next_link': '/api/v1/roles/432/versions/?page=2&page_size=50', 'next': '/roles/432/versions/?page=2&page_size=50', 'previous_link': None, 'previous': None }, { 'count': 2, 'results': [{'name': '3.5.2'}], 'next_link': None, 'next': None, 'previous_link': '/api/v1/roles/432/versions/?&page_size=50', 'previous': '/roles/432/versions/?page_size=50', }, ] ]) def test_get_role_versions_pagination(monkeypatch, responses): api = get_test_galaxy_api('https://galaxy.com/api/', 'v1') mock_open = MagicMock() mock_open.side_effect = [StringIO(to_text(json.dumps(r))) for r in responses] monkeypatch.setattr(galaxy_api, 'open_url', mock_open) actual = api.fetch_role_related('versions', 432) assert actual == [{'name': '3.5.1'}, {'name': '3.5.2'}] assert mock_open.call_count == len(responses) assert mock_open.mock_calls[0][1][0] == 'https://galaxy.com/api/v1/roles/432/versions/?page_size=50' if len(responses) == 2: assert mock_open.mock_calls[1][1][0] == 'https://galaxy.com/api/v1/roles/432/versions/?page=2&page_size=50'
closed
ansible/ansible
https://github.com/ansible/ansible
64,850
Galaxy Collection throws generic 401 while installing collections from Automation Hub.
##### SUMMARY
If a user passes an incorrect username or password via the configuration for a galaxy_server, the error message is unhelpful.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
ansible-galaxy

##### ANSIBLE VERSION
```
ansible --version
ansible 2.9.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /bin/ansible
  python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```

##### CONFIGURATION
```
ansible-config dump --only-changed
DEFAULT_JINJA2_NATIVE(/etc/ansible/ansible.cfg) = True
GALAXY_SERVER_LIST(/etc/ansible/ansible.cfg) = [u'automation_hub']
```

##### OS / ENVIRONMENT
RHEL7

##### STEPS TO REPRODUCE
```
[galaxy]
server_list = automation_hub

[galaxy_server.automation_hub]
url=https://cloud.redhat.com/api/automation-hub/
username=$INCORRECTUSERNAME
password=$PASSWORD
```

##### EXPECTED RESULTS
The actual error from the Automation Hub API is exposed:
```
< HTTP/1.1 401 Unauthorized
< Server: openresty/1.13.6.1
< Content-Type: text/plain
< Content-Length: 40
< x-rh-insights-request-id:
< X-Content-Type-Options: nosniff
< Date: Thu, 14 Nov 2019 18:05:51 GMT
< Connection: keep-alive
< Set-Cookie: ; path=/; HttpOnly; Secure
< X-Frame-Options: SAMEORIGIN
<
```
Insights services authentication failed

##### ACTUAL RESULTS
A generic, unhelpful error is thrown:
```
ansible-galaxy collection install splunk.enterprise_security -vvv
ansible-galaxy 2.9.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /bin/ansible-galaxy
  python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
Process install dependency map
Processing requirement collection 'splunk.enterprise_security'
ERROR! Error when finding available api versions from automation_hub (https://cloud.redhat.com/api/automation-hub/) (HTTP Code: 401, Message: Unknown error returned by Galaxy server.)
```
https://github.com/ansible/ansible/issues/64850
https://github.com/ansible/ansible/pull/65273
0e5a83a1cc2379afc70c45588e677ddd3b911dc2
6586b7132c839b2f60582ff363a99c62156e2e50
2019-11-14T18:14:10Z
python
2019-12-02T21:36:05Z
test/units/galaxy/test_collection_install.py
# -*- coding: utf-8 -*- # Copyright: (c) 2019, Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) # Make coding more python3-ish from __future__ import (absolute_import, division, print_function) __metaclass__ = type import copy import json import os import pytest import re import shutil import tarfile import yaml from io import BytesIO, StringIO from units.compat.mock import MagicMock import ansible.module_utils.six.moves.urllib.error as urllib_error from ansible import context from ansible.cli.galaxy import GalaxyCLI from ansible.errors import AnsibleError from ansible.galaxy import collection, api from ansible.module_utils._text import to_bytes, to_native, to_text from ansible.utils import context_objects as co from ansible.utils.display import Display def call_galaxy_cli(args): orig = co.GlobalCLIArgs._Singleton__instance co.GlobalCLIArgs._Singleton__instance = None try: GalaxyCLI(args=['ansible-galaxy', 'collection'] + args).run() finally: co.GlobalCLIArgs._Singleton__instance = orig def artifact_json(namespace, name, version, dependencies, server): json_str = json.dumps({ 'artifact': { 'filename': '%s-%s-%s.tar.gz' % (namespace, name, version), 'sha256': '2d76f3b8c4bab1072848107fb3914c345f71a12a1722f25c08f5d3f51f4ab5fd', 'size': 1234, }, 'download_url': '%s/download/%s-%s-%s.tar.gz' % (server, namespace, name, version), 'metadata': { 'namespace': namespace, 'name': name, 'dependencies': dependencies, }, 'version': version }) return to_text(json_str) def artifact_versions_json(namespace, name, versions, galaxy_api, available_api_versions=None): results = [] available_api_versions = available_api_versions or {} api_version = 'v2' if 'v3' in available_api_versions: api_version = 'v3' for version in versions: results.append({ 'href': '%s/api/%s/%s/%s/versions/%s/' % (galaxy_api.api_server, api_version, namespace, name, version), 'version': version, }) if api_version == 'v2': json_str = json.dumps({ 
'count': len(versions), 'next': None, 'previous': None, 'results': results }) if api_version == 'v3': response = {'meta': {'count': len(versions)}, 'data': results, 'links': {'first': None, 'last': None, 'next': None, 'previous': None}, } json_str = json.dumps(response) return to_text(json_str) def error_json(galaxy_api, errors_to_return=None, available_api_versions=None): errors_to_return = errors_to_return or [] available_api_versions = available_api_versions or {} response = {} api_version = 'v2' if 'v3' in available_api_versions: api_version = 'v3' if api_version == 'v2': assert len(errors_to_return) <= 1 if errors_to_return: response = errors_to_return[0] if api_version == 'v3': response['errors'] = errors_to_return json_str = json.dumps(response) return to_text(json_str) @pytest.fixture(autouse='function') def reset_cli_args(): co.GlobalCLIArgs._Singleton__instance = None yield co.GlobalCLIArgs._Singleton__instance = None @pytest.fixture() def collection_artifact(request, tmp_path_factory): test_dir = to_text(tmp_path_factory.mktemp('test-Γ…Γ‘ΕšΓŒΞ²ΕΓˆ Collections Input')) namespace = 'ansible_namespace' collection = 'collection' skeleton_path = os.path.join(os.path.dirname(os.path.split(__file__)[0]), 'cli', 'test_data', 'collection_skeleton') collection_path = os.path.join(test_dir, namespace, collection) call_galaxy_cli(['init', '%s.%s' % (namespace, collection), '-c', '--init-path', test_dir, '--collection-skeleton', skeleton_path]) dependencies = getattr(request, 'param', None) if dependencies: galaxy_yml = os.path.join(collection_path, 'galaxy.yml') with open(galaxy_yml, 'rb+') as galaxy_obj: existing_yaml = yaml.safe_load(galaxy_obj) existing_yaml['dependencies'] = dependencies galaxy_obj.seek(0) galaxy_obj.write(to_bytes(yaml.safe_dump(existing_yaml))) galaxy_obj.truncate() call_galaxy_cli(['build', collection_path, '--output-path', test_dir]) collection_tar = os.path.join(test_dir, '%s-%s-0.1.0.tar.gz' % (namespace, collection)) return 
to_bytes(collection_path), to_bytes(collection_tar) @pytest.fixture() def galaxy_server(): context.CLIARGS._store = {'ignore_certs': False} galaxy_api = api.GalaxyAPI(None, 'test_server', 'https://galaxy.ansible.com') return galaxy_api def test_build_requirement_from_path(collection_artifact): actual = collection.CollectionRequirement.from_path(collection_artifact[0], True) assert actual.namespace == u'ansible_namespace' assert actual.name == u'collection' assert actual.b_path == collection_artifact[0] assert actual.api is None assert actual.skip is True assert actual.versions == set([u'*']) assert actual.latest_version == u'*' assert actual.dependencies == {} def test_build_requirement_from_path_with_manifest(collection_artifact): manifest_path = os.path.join(collection_artifact[0], b'MANIFEST.json') manifest_value = json.dumps({ 'collection_info': { 'namespace': 'namespace', 'name': 'name', 'version': '1.1.1', 'dependencies': { 'ansible_namespace.collection': '*' } } }) with open(manifest_path, 'wb') as manifest_obj: manifest_obj.write(to_bytes(manifest_value)) actual = collection.CollectionRequirement.from_path(collection_artifact[0], True) # While the folder name suggests a different collection, we treat MANIFEST.json as the source of truth. assert actual.namespace == u'namespace' assert actual.name == u'name' assert actual.b_path == collection_artifact[0] assert actual.api is None assert actual.skip is True assert actual.versions == set([u'1.1.1']) assert actual.latest_version == u'1.1.1' assert actual.dependencies == {'ansible_namespace.collection': '*'} def test_build_requirement_from_path_invalid_manifest(collection_artifact): manifest_path = os.path.join(collection_artifact[0], b'MANIFEST.json') with open(manifest_path, 'wb') as manifest_obj: manifest_obj.write(b"not json") expected = "Collection file at '%s' does not contain a valid json string." 
% to_native(manifest_path) with pytest.raises(AnsibleError, match=expected): collection.CollectionRequirement.from_path(collection_artifact[0], True) def test_build_requirement_from_tar(collection_artifact): actual = collection.CollectionRequirement.from_tar(collection_artifact[1], True, True) assert actual.namespace == u'ansible_namespace' assert actual.name == u'collection' assert actual.b_path == collection_artifact[1] assert actual.api is None assert actual.skip is False assert actual.versions == set([u'0.1.0']) assert actual.latest_version == u'0.1.0' assert actual.dependencies == {} def test_build_requirement_from_tar_fail_not_tar(tmp_path_factory): test_dir = to_bytes(tmp_path_factory.mktemp('test-Γ…Γ‘ΕšΓŒΞ²ΕΓˆ Collections Input')) test_file = os.path.join(test_dir, b'fake.tar.gz') with open(test_file, 'wb') as test_obj: test_obj.write(b"\x00\x01\x02\x03") expected = "Collection artifact at '%s' is not a valid tar file." % to_native(test_file) with pytest.raises(AnsibleError, match=expected): collection.CollectionRequirement.from_tar(test_file, True, True) def test_build_requirement_from_tar_no_manifest(tmp_path_factory): test_dir = to_bytes(tmp_path_factory.mktemp('test-Γ…Γ‘ΕšΓŒΞ²ΕΓˆ Collections Input')) json_data = to_bytes(json.dumps( { 'files': [], 'format': 1, } )) tar_path = os.path.join(test_dir, b'ansible-collections.tar.gz') with tarfile.open(tar_path, 'w:gz') as tfile: b_io = BytesIO(json_data) tar_info = tarfile.TarInfo('FILES.json') tar_info.size = len(json_data) tar_info.mode = 0o0644 tfile.addfile(tarinfo=tar_info, fileobj=b_io) expected = "Collection at '%s' does not contain the required file MANIFEST.json." 
% to_native(tar_path) with pytest.raises(AnsibleError, match=expected): collection.CollectionRequirement.from_tar(tar_path, True, True) def test_build_requirement_from_tar_no_files(tmp_path_factory): test_dir = to_bytes(tmp_path_factory.mktemp('test-Γ…Γ‘ΕšΓŒΞ²ΕΓˆ Collections Input')) json_data = to_bytes(json.dumps( { 'collection_info': {}, } )) tar_path = os.path.join(test_dir, b'ansible-collections.tar.gz') with tarfile.open(tar_path, 'w:gz') as tfile: b_io = BytesIO(json_data) tar_info = tarfile.TarInfo('MANIFEST.json') tar_info.size = len(json_data) tar_info.mode = 0o0644 tfile.addfile(tarinfo=tar_info, fileobj=b_io) expected = "Collection at '%s' does not contain the required file FILES.json." % to_native(tar_path) with pytest.raises(AnsibleError, match=expected): collection.CollectionRequirement.from_tar(tar_path, True, True) def test_build_requirement_from_tar_invalid_manifest(tmp_path_factory): test_dir = to_bytes(tmp_path_factory.mktemp('test-Γ…Γ‘ΕšΓŒΞ²ΕΓˆ Collections Input')) json_data = b"not a json" tar_path = os.path.join(test_dir, b'ansible-collections.tar.gz') with tarfile.open(tar_path, 'w:gz') as tfile: b_io = BytesIO(json_data) tar_info = tarfile.TarInfo('MANIFEST.json') tar_info.size = len(json_data) tar_info.mode = 0o0644 tfile.addfile(tarinfo=tar_info, fileobj=b_io) expected = "Collection tar file member MANIFEST.json does not contain a valid json string." 
with pytest.raises(AnsibleError, match=expected): collection.CollectionRequirement.from_tar(tar_path, True, True) def test_build_requirement_from_name(galaxy_server, monkeypatch): mock_get_versions = MagicMock() mock_get_versions.return_value = ['2.1.9', '2.1.10'] monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions) actual = collection.CollectionRequirement.from_name('namespace.collection', [galaxy_server], '*', True, True) assert actual.namespace == u'namespace' assert actual.name == u'collection' assert actual.b_path is None assert actual.api == galaxy_server assert actual.skip is False assert actual.versions == set([u'2.1.9', u'2.1.10']) assert actual.latest_version == u'2.1.10' assert actual.dependencies is None assert mock_get_versions.call_count == 1 assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection') def test_build_requirement_from_name_with_prerelease(galaxy_server, monkeypatch): mock_get_versions = MagicMock() mock_get_versions.return_value = ['1.0.1', '2.0.1-beta.1', '2.0.1'] monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions) actual = collection.CollectionRequirement.from_name('namespace.collection', [galaxy_server], '*', True, True) assert actual.namespace == u'namespace' assert actual.name == u'collection' assert actual.b_path is None assert actual.api == galaxy_server assert actual.skip is False assert actual.versions == set([u'1.0.1', u'2.0.1']) assert actual.latest_version == u'2.0.1' assert actual.dependencies is None assert mock_get_versions.call_count == 1 assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection') def test_build_requirment_from_name_with_prerelease_explicit(galaxy_server, monkeypatch): mock_get_info = MagicMock() mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.1-beta.1', None, None, {}) monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info) actual = 
collection.CollectionRequirement.from_name('namespace.collection', [galaxy_server], '2.0.1-beta.1', True, True) assert actual.namespace == u'namespace' assert actual.name == u'collection' assert actual.b_path is None assert actual.api == galaxy_server assert actual.skip is False assert actual.versions == set([u'2.0.1-beta.1']) assert actual.latest_version == u'2.0.1-beta.1' assert actual.dependencies == {} assert mock_get_info.call_count == 1 assert mock_get_info.mock_calls[0][1] == ('namespace', 'collection', '2.0.1-beta.1') def test_build_requirement_from_name_second_server(galaxy_server, monkeypatch): mock_get_versions = MagicMock() mock_get_versions.return_value = ['1.0.1', '1.0.2', '1.0.3'] monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions) broken_server = copy.copy(galaxy_server) broken_server.api_server = 'https://broken.com/' mock_404 = MagicMock() mock_404.side_effect = api.GalaxyError(urllib_error.HTTPError('https://galaxy.server.com', 404, 'msg', {}, StringIO()), "custom msg") monkeypatch.setattr(broken_server, 'get_collection_versions', mock_404) actual = collection.CollectionRequirement.from_name('namespace.collection', [broken_server, galaxy_server], '>1.0.1', False, True) assert actual.namespace == u'namespace' assert actual.name == u'collection' assert actual.b_path is None # assert actual.api == galaxy_server assert actual.skip is False assert actual.versions == set([u'1.0.2', u'1.0.3']) assert actual.latest_version == u'1.0.3' assert actual.dependencies is None assert mock_404.call_count == 1 assert mock_404.mock_calls[0][1] == ('namespace', 'collection') assert mock_get_versions.call_count == 1 assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection') def test_build_requirement_from_name_missing(galaxy_server, monkeypatch): mock_open = MagicMock() mock_open.side_effect = api.GalaxyError(urllib_error.HTTPError('https://galaxy.server.com', 404, 'msg', {}, StringIO()), "") 
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_open) expected = "Failed to find collection namespace.collection:*" with pytest.raises(AnsibleError, match=expected): collection.CollectionRequirement.from_name('namespace.collection', [galaxy_server, galaxy_server], '*', False, True) def test_build_requirement_from_name_401_unauthorized(galaxy_server, monkeypatch): mock_open = MagicMock() mock_open.side_effect = api.GalaxyError(urllib_error.HTTPError('https://galaxy.server.com', 401, 'msg', {}, StringIO()), "error") monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_open) expected = "error (HTTP Code: 401, Message: Unknown error returned by Galaxy server.)" with pytest.raises(api.GalaxyError, match=re.escape(expected)): collection.CollectionRequirement.from_name('namespace.collection', [galaxy_server, galaxy_server], '*', False) def test_build_requirement_from_name_single_version(galaxy_server, monkeypatch): mock_get_info = MagicMock() mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.0', None, None, {}) monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info) actual = collection.CollectionRequirement.from_name('namespace.collection', [galaxy_server], '2.0.0', True, True) assert actual.namespace == u'namespace' assert actual.name == u'collection' assert actual.b_path is None assert actual.api == galaxy_server assert actual.skip is False assert actual.versions == set([u'2.0.0']) assert actual.latest_version == u'2.0.0' assert actual.dependencies == {} assert mock_get_info.call_count == 1 assert mock_get_info.mock_calls[0][1] == ('namespace', 'collection', '2.0.0') def test_build_requirement_from_name_multiple_versions_one_match(galaxy_server, monkeypatch): mock_get_versions = MagicMock() mock_get_versions.return_value = ['2.0.0', '2.0.1', '2.0.2'] monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions) mock_get_info = MagicMock() 
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.1', None, None, {}) monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info) actual = collection.CollectionRequirement.from_name('namespace.collection', [galaxy_server], '>=2.0.1,<2.0.2', True, True) assert actual.namespace == u'namespace' assert actual.name == u'collection' assert actual.b_path is None assert actual.api == galaxy_server assert actual.skip is False assert actual.versions == set([u'2.0.1']) assert actual.latest_version == u'2.0.1' assert actual.dependencies == {} assert mock_get_versions.call_count == 1 assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection') assert mock_get_info.call_count == 1 assert mock_get_info.mock_calls[0][1] == ('namespace', 'collection', '2.0.1') def test_build_requirement_from_name_multiple_version_results(galaxy_server, monkeypatch): mock_get_versions = MagicMock() mock_get_versions.return_value = ['2.0.0', '2.0.1', '2.0.2', '2.0.3', '2.0.4', '2.0.5'] monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions) actual = collection.CollectionRequirement.from_name('namespace.collection', [galaxy_server], '!=2.0.2', True, True) assert actual.namespace == u'namespace' assert actual.name == u'collection' assert actual.b_path is None assert actual.api == galaxy_server assert actual.skip is False assert actual.versions == set([u'2.0.0', u'2.0.1', u'2.0.3', u'2.0.4', u'2.0.5']) assert actual.latest_version == u'2.0.5' assert actual.dependencies is None assert mock_get_versions.call_count == 1 assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection') @pytest.mark.parametrize('versions, requirement, expected_filter, expected_latest', [ [['1.0.0', '1.0.1'], '*', ['1.0.0', '1.0.1'], '1.0.1'], [['1.0.0', '1.0.5', '1.1.0'], '>1.0.0,<1.1.0', ['1.0.5'], '1.0.5'], [['1.0.0', '1.0.5', '1.1.0'], '>1.0.0,<=1.0.5', ['1.0.5'], '1.0.5'], [['1.0.0', '1.0.5', '1.1.0'], '>=1.1.0', 
['1.1.0'], '1.1.0'], [['1.0.0', '1.0.5', '1.1.0'], '!=1.1.0', ['1.0.0', '1.0.5'], '1.0.5'], [['1.0.0', '1.0.5', '1.1.0'], '==1.0.5', ['1.0.5'], '1.0.5'], [['1.0.0', '1.0.5', '1.1.0'], '1.0.5', ['1.0.5'], '1.0.5'], [['1.0.0', '2.0.0', '3.0.0'], '>=2', ['2.0.0', '3.0.0'], '3.0.0'], ]) def test_add_collection_requirements(versions, requirement, expected_filter, expected_latest): req = collection.CollectionRequirement('namespace', 'name', None, 'https://galaxy.com', versions, requirement, False) assert req.versions == set(expected_filter) assert req.latest_version == expected_latest def test_add_collection_requirement_to_unknown_installed_version(): req = collection.CollectionRequirement('namespace', 'name', None, 'https://galaxy.com', ['*'], '*', False, skip=True) expected = "Cannot meet requirement namespace.name:1.0.0 as it is already installed at version 'unknown'." with pytest.raises(AnsibleError, match=expected): req.add_requirement(str(req), '1.0.0') def test_add_collection_wildcard_requirement_to_unknown_installed_version(): req = collection.CollectionRequirement('namespace', 'name', None, 'https://galaxy.com', ['*'], '*', False, skip=True) req.add_requirement(str(req), '*') assert req.versions == set('*') assert req.latest_version == '*' def test_add_collection_requirement_with_conflict(galaxy_server): expected = "Cannot meet requirement ==1.0.2 for dependency namespace.name from source '%s'. 
Available versions " \ "before last requirement added: 1.0.0, 1.0.1\n" \ "Requirements from:\n" \ "\tbase - 'namespace.name:==1.0.2'" % galaxy_server.api_server with pytest.raises(AnsibleError, match=expected): collection.CollectionRequirement('namespace', 'name', None, galaxy_server, ['1.0.0', '1.0.1'], '==1.0.2', False) def test_add_requirement_to_existing_collection_with_conflict(galaxy_server): req = collection.CollectionRequirement('namespace', 'name', None, galaxy_server, ['1.0.0', '1.0.1'], '*', False) expected = "Cannot meet dependency requirement 'namespace.name:1.0.2' for collection namespace.collection2 from " \ "source '%s'. Available versions before last requirement added: 1.0.0, 1.0.1\n" \ "Requirements from:\n" \ "\tbase - 'namespace.name:*'\n" \ "\tnamespace.collection2 - 'namespace.name:1.0.2'" % galaxy_server.api_server with pytest.raises(AnsibleError, match=re.escape(expected)): req.add_requirement('namespace.collection2', '1.0.2') def test_add_requirement_to_installed_collection_with_conflict(): source = 'https://galaxy.ansible.com' req = collection.CollectionRequirement('namespace', 'name', None, source, ['1.0.0', '1.0.1'], '*', False, skip=True) expected = "Cannot meet requirement namespace.name:1.0.2 as it is already installed at version '1.0.1'. " \ "Use --force to overwrite" with pytest.raises(AnsibleError, match=re.escape(expected)): req.add_requirement(None, '1.0.2') def test_add_requirement_to_installed_collection_with_conflict_as_dep(): source = 'https://galaxy.ansible.com' req = collection.CollectionRequirement('namespace', 'name', None, source, ['1.0.0', '1.0.1'], '*', False, skip=True) expected = "Cannot meet requirement namespace.name:1.0.2 as it is already installed at version '1.0.1'. 
" \ "Use --force-with-deps to overwrite" with pytest.raises(AnsibleError, match=re.escape(expected)): req.add_requirement('namespace.collection2', '1.0.2') def test_install_skipped_collection(monkeypatch): mock_display = MagicMock() monkeypatch.setattr(Display, 'display', mock_display) req = collection.CollectionRequirement('namespace', 'name', None, 'source', ['1.0.0'], '*', False, skip=True) req.install(None, None) assert mock_display.call_count == 1 assert mock_display.mock_calls[0][1][0] == "Skipping 'namespace.name' as it is already installed" def test_install_collection(collection_artifact, monkeypatch): mock_display = MagicMock() monkeypatch.setattr(Display, 'display', mock_display) collection_tar = collection_artifact[1] output_path = os.path.join(os.path.split(collection_tar)[0], b'output') collection_path = os.path.join(output_path, b'ansible_namespace', b'collection') os.makedirs(os.path.join(collection_path, b'delete_me')) # Create a folder to verify the install cleans out the dir temp_path = os.path.join(os.path.split(collection_tar)[0], b'temp') os.makedirs(temp_path) req = collection.CollectionRequirement.from_tar(collection_tar, True, True) req.install(to_text(output_path), temp_path) # Ensure the temp directory is empty, nothing is left behind assert os.listdir(temp_path) == [] actual_files = os.listdir(collection_path) actual_files.sort() assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles'] assert mock_display.call_count == 1 assert mock_display.mock_calls[0][1][0] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" \ % to_text(collection_path) def test_install_collection_with_download(galaxy_server, collection_artifact, monkeypatch): collection_tar = collection_artifact[1] output_path = os.path.join(os.path.split(collection_tar)[0], b'output') collection_path = os.path.join(output_path, b'ansible_namespace', b'collection') mock_display = MagicMock() 
    monkeypatch.setattr(Display, 'display', mock_display)

    mock_download = MagicMock()
    mock_download.return_value = collection_tar
    monkeypatch.setattr(collection, '_download_file', mock_download)
    monkeypatch.setattr(galaxy_server, '_available_api_versions', {'v2': 'v2/'})

    temp_path = os.path.join(os.path.split(collection_tar)[0], b'temp')
    os.makedirs(temp_path)

    meta = api.CollectionVersionMetadata('ansible_namespace', 'collection', '0.1.0', 'https://downloadme.com', 'myhash', {})
    req = collection.CollectionRequirement('ansible_namespace', 'collection', None, galaxy_server, ['0.1.0'], '*', False,
                                           metadata=meta)
    req.install(to_text(output_path), temp_path)

    # Ensure the temp directory is empty, nothing is left behind
    assert os.listdir(temp_path) == []

    actual_files = os.listdir(collection_path)
    actual_files.sort()
    assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles']

    assert mock_display.call_count == 1
    assert mock_display.mock_calls[0][1][0] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" \
        % to_text(collection_path)

    assert mock_download.call_count == 1
    assert mock_download.mock_calls[0][1][0] == 'https://downloadme.com'
    assert mock_download.mock_calls[0][1][1] == temp_path
    assert mock_download.mock_calls[0][1][2] == 'myhash'
    assert mock_download.mock_calls[0][1][3] is True


def test_install_collections_from_tar(collection_artifact, monkeypatch):
    collection_path, collection_tar = collection_artifact
    temp_path = os.path.split(collection_tar)[0]
    shutil.rmtree(collection_path)

    mock_display = MagicMock()
    monkeypatch.setattr(Display, 'display', mock_display)

    collection.install_collections([(to_text(collection_tar), '*', None,)], to_text(temp_path),
                                   [u'https://galaxy.ansible.com'], True, False, False, False, False)

    assert os.path.isdir(collection_path)

    actual_files = os.listdir(collection_path)
    actual_files.sort()
    assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles']

    with open(os.path.join(collection_path, b'MANIFEST.json'), 'rb') as manifest_obj:
        actual_manifest = json.loads(to_text(manifest_obj.read()))

    assert actual_manifest['collection_info']['namespace'] == 'ansible_namespace'
    assert actual_manifest['collection_info']['name'] == 'collection'
    assert actual_manifest['collection_info']['version'] == '0.1.0'

    # Filter out the progress cursor display calls.
    display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
    assert len(display_msgs) == 3
    assert display_msgs[0] == "Process install dependency map"
    assert display_msgs[1] == "Starting collection install process"
    assert display_msgs[2] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" % to_text(collection_path)


def test_install_collections_existing_without_force(collection_artifact, monkeypatch):
    collection_path, collection_tar = collection_artifact
    temp_path = os.path.split(collection_tar)[0]

    mock_display = MagicMock()
    monkeypatch.setattr(Display, 'display', mock_display)

    # If we don't delete collection_path it will think the original build skeleton is installed so we expect a skip
    collection.install_collections([(to_text(collection_tar), '*', None,)], to_text(temp_path),
                                   [u'https://galaxy.ansible.com'], True, False, False, False, False)

    assert os.path.isdir(collection_path)

    actual_files = os.listdir(collection_path)
    actual_files.sort()
    assert actual_files == [b'README.md', b'docs', b'galaxy.yml', b'playbooks', b'plugins', b'roles']

    # Filter out the progress cursor display calls.
    display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
    assert len(display_msgs) == 4

    # Msg1 is the warning about not MANIFEST.json, cannot really check message as it has line breaks which varies based
    # on the path size
    assert display_msgs[1] == "Process install dependency map"
    assert display_msgs[2] == "Starting collection install process"
    assert display_msgs[3] == "Skipping 'ansible_namespace.collection' as it is already installed"


# Makes sure we don't get stuck in some recursive loop
@pytest.mark.parametrize('collection_artifact', [
    {'ansible_namespace.collection': '>=0.0.1'},
], indirect=True)
def test_install_collection_with_circular_dependency(collection_artifact, monkeypatch):
    collection_path, collection_tar = collection_artifact
    temp_path = os.path.split(collection_tar)[0]
    shutil.rmtree(collection_path)

    mock_display = MagicMock()
    monkeypatch.setattr(Display, 'display', mock_display)

    collection.install_collections([(to_text(collection_tar), '*', None,)], to_text(temp_path),
                                   [u'https://galaxy.ansible.com'], True, False, False, False, False)

    assert os.path.isdir(collection_path)

    actual_files = os.listdir(collection_path)
    actual_files.sort()
    assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles']

    with open(os.path.join(collection_path, b'MANIFEST.json'), 'rb') as manifest_obj:
        actual_manifest = json.loads(to_text(manifest_obj.read()))

    assert actual_manifest['collection_info']['namespace'] == 'ansible_namespace'
    assert actual_manifest['collection_info']['name'] == 'collection'
    assert actual_manifest['collection_info']['version'] == '0.1.0'

    # Filter out the progress cursor display calls.
    display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
    assert len(display_msgs) == 3
    assert display_msgs[0] == "Process install dependency map"
    assert display_msgs[1] == "Starting collection install process"
    assert display_msgs[2] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" % to_text(collection_path)
closed
ansible/ansible
https://github.com/ansible/ansible
64,902
allow_duplicates: an example of a document doesn't work
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
<!--- Explain the problem briefly below -->
[The documentation](https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html#role-duplication-and-execution) says that `allow_duplicates: true` makes it possible to execute the same role multiple times in a play, but it doesn't work.

Actually, I'm not sure whether the current behavior is intended or the documentation simply wasn't updated. However, I think the variable should enable that repeated execution as stated in the example, because we can do the same thing in the `roles` directive.

Edit (sivel): This appears to have been caused by 376b199c0540e39189bdf6b31b9a60eadffa3989

Something is likely looking at the wrong reference of `play.roles`. Maybe instead of using `self._extend_value` in `_load_roles` we can switch to doing `self.roles[:0] = roles` so the reference stays the same. Or someone can track down the incorrect reference.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
allow_duplicates

##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.1
  config file = /home/knagamin/t/ansible/ansible-test-allow_duplicates/ansible.cfg
  configured module search path = ['/home/knagamin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/knagamin/.local/lib/python3.7/site-packages/ansible
  executable location = /home/knagamin/.local/bin/ansible
  python version = 3.7.5 (default, Oct 17 2019, 12:16:48) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```

NOTE: I tried some older versions and found that it works correctly in v2.7.0 but not in v2.7.1.

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
* target OS version

```
$ uname -srvmpio
Linux 5.3.11-300.fc31.x86_64 #1 SMP Tue Nov 12 19:08:07 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
```

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
* Directory structure

```
β”œβ”€β”€ playbook.yml
└── roles
    └── test_role
        β”œβ”€β”€ meta
        β”‚   └── main.yml
        └── tasks
            └── main.yml
```

* `./playbook.yml`

```yaml
---
- name: test for allow_duplicates
  hosts: localhost
  gather_facts: false
  roles:
    - role: test_role
    - role: test_role
    - role: test_role
```

* `./roles/test_role/tasks/main.yml`

```yaml
---
# tasks file for test_role
- name: Just show a message
  debug:
    msg: "hoge"
```

* `./roles/test_role/meta/main.yml`

```yaml
galaxy_info:
  author: your name
  description: your role description
  company: your company (optional)
  license: license (GPL-2.0-or-later, MIT, etc)
  min_ansible_version: 2.9
  galaxy_tags: []
dependencies: []
allow_duplicates: true
```

<!--- HINT: You can paste gist.github.com links for larger files -->

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->

```
PLAY [test for allow_duplicates] *************************************************************************************

TASK [test_role : Just show a message] *******************************************************************************
ok: [localhost] => {
    "msg": "hoge"
}

TASK [test_role : Just show a message] *******************************************************************************
ok: [localhost] => {
    "msg": "hoge"
}

TASK [test_role : Just show a message] *******************************************************************************
ok: [localhost] => {
    "msg": "hoge"
}

PLAY RECAP ***********************************************************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```

##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->

```paste below
PLAY [test for allow_duplicates] *************************************************************************************

TASK [test_role : Just show a message] *******************************************************************************
ok: [localhost] => {
    "msg": "hoge"
}

PLAY RECAP ***********************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```
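The rebinding problem sivel describes comes down to a basic Python distinction: `lst[:0] = items` mutates the existing list object in place, while `lst = items + lst` binds the name to a brand-new list, so any other holder of the old reference (here, code that grabbed `play.roles` earlier) keeps seeing stale data. A minimal sketch of the two behaviors (the variable names are illustrative, not ansible's):

```python
play_roles = []           # the list created at construction time
observer = play_roles     # another component keeps a reference to it

# What a rebinding update effectively does: build a new list, rebind the name.
play_roles = ['role_a'] + play_roles
print(observer)           # [] -- the old list object is untouched

# The suggested alternative: in-place slice assignment keeps the same object.
play_roles = []
observer = play_roles
play_roles[:0] = ['role_a']
print(observer)           # ['role_a'] -- same object, update is visible
print(observer is play_roles)  # True
```

This is why a prepend via `self.roles[:0] = roles` would let every existing reference to `play.roles` observe the newly loaded roles, while returning a fresh list from `self._extend_value` does not.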
https://github.com/ansible/ansible/issues/64902
https://github.com/ansible/ansible/pull/65063
4be8b2134f0f6ed794ef57a621534f9561f91895
daecbb9bf080bc639ca8a5e5d67cee5ed1a0b439
2019-11-15T16:05:02Z
python
2019-12-03T15:21:54Z
changelogs/fragments/64902-fix-allow-duplicates-in-single-role.yml
closed
ansible/ansible
https://github.com/ansible/ansible
64,902
allow_duplicates: an example of a document doesn't work
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
<!--- Explain the problem briefly below -->
[The documentation](https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html#role-duplication-and-execution) says that `allow_duplicates: true` makes it possible to execute the same role multiple times in a play, but it doesn't work.

Actually, I'm not sure whether the current behavior is intended or the documentation simply wasn't updated. However, I think the variable should enable that repeated execution as stated in the example, because we can do the same thing in the `roles` directive.

Edit (sivel): This appears to have been caused by 376b199c0540e39189bdf6b31b9a60eadffa3989

Something is likely looking at the wrong reference of `play.roles`. Maybe instead of using `self._extend_value` in `_load_roles` we can switch to doing `self.roles[:0] = roles` so the reference stays the same. Or someone can track down the incorrect reference.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
allow_duplicates

##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.1
  config file = /home/knagamin/t/ansible/ansible-test-allow_duplicates/ansible.cfg
  configured module search path = ['/home/knagamin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/knagamin/.local/lib/python3.7/site-packages/ansible
  executable location = /home/knagamin/.local/bin/ansible
  python version = 3.7.5 (default, Oct 17 2019, 12:16:48) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```

NOTE: I tried some older versions and found that it works correctly in v2.7.0 but not in v2.7.1.

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
* target OS version

```
$ uname -srvmpio
Linux 5.3.11-300.fc31.x86_64 #1 SMP Tue Nov 12 19:08:07 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
```

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
* Directory structure

```
β”œβ”€β”€ playbook.yml
└── roles
    └── test_role
        β”œβ”€β”€ meta
        β”‚   └── main.yml
        └── tasks
            └── main.yml
```

* `./playbook.yml`

```yaml
---
- name: test for allow_duplicates
  hosts: localhost
  gather_facts: false
  roles:
    - role: test_role
    - role: test_role
    - role: test_role
```

* `./roles/test_role/tasks/main.yml`

```yaml
---
# tasks file for test_role
- name: Just show a message
  debug:
    msg: "hoge"
```

* `./roles/test_role/meta/main.yml`

```yaml
galaxy_info:
  author: your name
  description: your role description
  company: your company (optional)
  license: license (GPL-2.0-or-later, MIT, etc)
  min_ansible_version: 2.9
  galaxy_tags: []
dependencies: []
allow_duplicates: true
```

<!--- HINT: You can paste gist.github.com links for larger files -->

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->

```
PLAY [test for allow_duplicates] *************************************************************************************

TASK [test_role : Just show a message] *******************************************************************************
ok: [localhost] => {
    "msg": "hoge"
}

TASK [test_role : Just show a message] *******************************************************************************
ok: [localhost] => {
    "msg": "hoge"
}

TASK [test_role : Just show a message] *******************************************************************************
ok: [localhost] => {
    "msg": "hoge"
}

PLAY RECAP ***********************************************************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```

##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->

```paste below
PLAY [test for allow_duplicates] *************************************************************************************

TASK [test_role : Just show a message] *******************************************************************************
ok: [localhost] => {
    "msg": "hoge"
}

PLAY RECAP ***********************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```
https://github.com/ansible/ansible/issues/64902
https://github.com/ansible/ansible/pull/65063
4be8b2134f0f6ed794ef57a621534f9561f91895
daecbb9bf080bc639ca8a5e5d67cee5ed1a0b439
2019-11-15T16:05:02Z
python
2019-12-03T15:21:54Z
lib/ansible/playbook/play.py
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.

# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleParserError, AnsibleAssertionError
from ansible.module_utils._text import to_native
from ansible.module_utils.six import string_types
from ansible.playbook.attribute import FieldAttribute
from ansible.playbook.base import Base
from ansible.playbook.block import Block
from ansible.playbook.collectionsearch import CollectionSearch
from ansible.playbook.helpers import load_list_of_blocks, load_list_of_roles
from ansible.playbook.role import Role
from ansible.playbook.taggable import Taggable
from ansible.vars.manager import preprocess_vars
from ansible.utils.display import Display

display = Display()

__all__ = ['Play']


class Play(Base, Taggable, CollectionSearch):

    """
    A play is a language feature that represents a list of roles and/or
    task/handler blocks to execute on a given set of hosts.

    Usage:

       Play.load(datastructure) -> Play
       Play.something(...)
    """

    # =================================================================================
    _hosts = FieldAttribute(isa='list', required=True, listof=string_types, always_post_validate=True)

    # Facts
    _gather_facts = FieldAttribute(isa='bool', default=None, always_post_validate=True)
    _gather_subset = FieldAttribute(isa='list', default=(lambda: C.DEFAULT_GATHER_SUBSET), listof=string_types, always_post_validate=True)
    _gather_timeout = FieldAttribute(isa='int', default=C.DEFAULT_GATHER_TIMEOUT, always_post_validate=True)
    _fact_path = FieldAttribute(isa='string', default=C.DEFAULT_FACT_PATH)

    # Variable Attributes
    _vars_files = FieldAttribute(isa='list', default=list, priority=99)
    _vars_prompt = FieldAttribute(isa='list', default=list, always_post_validate=False)

    # Role Attributes
    _roles = FieldAttribute(isa='list', default=list, priority=90)

    # Block (Task) Lists Attributes
    _handlers = FieldAttribute(isa='list', default=list)
    _pre_tasks = FieldAttribute(isa='list', default=list)
    _post_tasks = FieldAttribute(isa='list', default=list)
    _tasks = FieldAttribute(isa='list', default=list)

    # Flag/Setting Attributes
    _force_handlers = FieldAttribute(isa='bool', default=context.cliargs_deferred_get('force_handlers'), always_post_validate=True)
    _max_fail_percentage = FieldAttribute(isa='percent', always_post_validate=True)
    _serial = FieldAttribute(isa='list', default=list, always_post_validate=True)
    _strategy = FieldAttribute(isa='string', default=C.DEFAULT_STRATEGY, always_post_validate=True)
    _order = FieldAttribute(isa='string', always_post_validate=True)

    # =================================================================================

    def __init__(self):
        super(Play, self).__init__()

        self._included_conditional = None
        self._included_path = None
        self._removed_hosts = []
        self.ROLE_CACHE = {}

        self.only_tags = set(context.CLIARGS.get('tags', [])) or frozenset(('all',))
        self.skip_tags = set(context.CLIARGS.get('skip_tags', []))

    def __repr__(self):
        return self.get_name()

    def get_name(self):
        ''' return the name of the Play '''
        return self.name

    @staticmethod
    def load(data, variable_manager=None, loader=None, vars=None):
        if ('name' not in data or data['name'] is None) and 'hosts' in data:
            if data['hosts'] is None or all(host is None for host in data['hosts']):
                raise AnsibleParserError("Hosts list cannot be empty - please check your playbook")
            if isinstance(data['hosts'], list):
                data['name'] = ','.join(data['hosts'])
            else:
                data['name'] = data['hosts']
        p = Play()
        if vars:
            p.vars = vars.copy()
        return p.load_data(data, variable_manager=variable_manager, loader=loader)

    def preprocess_data(self, ds):
        '''
        Adjusts play datastructure to cleanup old/legacy items
        '''

        if not isinstance(ds, dict):
            raise AnsibleAssertionError('while preprocessing data (%s), ds should be a dict but was a %s' % (ds, type(ds)))

        # The use of 'user' in the Play datastructure was deprecated to
        # line up with the same change for Tasks, due to the fact that
        # 'user' conflicted with the user module.
        if 'user' in ds:
            # this should never happen, but error out with a helpful message
            # to the user if it does...
            if 'remote_user' in ds:
                raise AnsibleParserError("both 'user' and 'remote_user' are set for %s. "
                                         "The use of 'user' is deprecated, and should be removed" % self.get_name(), obj=ds)

            ds['remote_user'] = ds['user']
            del ds['user']

        return super(Play, self).preprocess_data(ds)

    def _load_tasks(self, attr, ds):
        '''
        Loads a list of blocks from a list which may be mixed tasks/blocks.
        Bare tasks outside of a block are given an implicit block.
        '''
        try:
            return load_list_of_blocks(ds=ds, play=self, variable_manager=self._variable_manager, loader=self._loader)
        except AssertionError as e:
            raise AnsibleParserError("A malformed block was encountered while loading tasks: %s" % to_native(e), obj=self._ds, orig_exc=e)

    def _load_pre_tasks(self, attr, ds):
        '''
        Loads a list of blocks from a list which may be mixed tasks/blocks.
        Bare tasks outside of a block are given an implicit block.
        '''
        try:
            return load_list_of_blocks(ds=ds, play=self, variable_manager=self._variable_manager, loader=self._loader)
        except AssertionError as e:
            raise AnsibleParserError("A malformed block was encountered while loading pre_tasks", obj=self._ds, orig_exc=e)

    def _load_post_tasks(self, attr, ds):
        '''
        Loads a list of blocks from a list which may be mixed tasks/blocks.
        Bare tasks outside of a block are given an implicit block.
        '''
        try:
            return load_list_of_blocks(ds=ds, play=self, variable_manager=self._variable_manager, loader=self._loader)
        except AssertionError as e:
            raise AnsibleParserError("A malformed block was encountered while loading post_tasks", obj=self._ds, orig_exc=e)

    def _load_handlers(self, attr, ds):
        '''
        Loads a list of blocks from a list which may be mixed handlers/blocks.
        Bare handlers outside of a block are given an implicit block.
        '''
        try:
            return self._extend_value(
                self.handlers,
                load_list_of_blocks(ds=ds, play=self, use_handlers=True, variable_manager=self._variable_manager, loader=self._loader),
                prepend=True
            )
        except AssertionError as e:
            raise AnsibleParserError("A malformed block was encountered while loading handlers", obj=self._ds, orig_exc=e)

    def _load_roles(self, attr, ds):
        '''
        Loads and returns a list of RoleInclude objects from the datastructure
        list of role definitions and creates the Role from those objects
        '''

        if ds is None:
            ds = []

        try:
            role_includes = load_list_of_roles(ds, play=self, variable_manager=self._variable_manager,
                                               loader=self._loader, collection_search_list=self.collections)
        except AssertionError as e:
            raise AnsibleParserError("A malformed role declaration was encountered.", obj=self._ds, orig_exc=e)

        roles = []
        for ri in role_includes:
            roles.append(Role.load(ri, play=self))

        return self._extend_value(
            self.roles,
            roles,
            prepend=True
        )

    def _load_vars_prompt(self, attr, ds):
        new_ds = preprocess_vars(ds)
        vars_prompts = []
        if new_ds is not None:
            for prompt_data in new_ds:
                if 'name' not in prompt_data:
                    raise AnsibleParserError("Invalid vars_prompt data structure", obj=ds)
                else:
                    vars_prompts.append(prompt_data)
        return vars_prompts

    def _compile_roles(self):
        '''
        Handles the role compilation step, returning a flat list of tasks
        with the lowest level dependencies first. For example, if a role R
        has a dependency D1, which also has a dependency D2, the tasks from
        D2 are merged first, followed by D1, and lastly by the tasks from
        the parent role R last. This is done for all roles in the Play.
        '''

        block_list = []

        if len(self.roles) > 0:
            for r in self.roles:
                # Don't insert tasks from ``import/include_role``, preventing
                # duplicate execution at the wrong time
                if r.from_include:
                    continue
                block_list.extend(r.compile(play=self))

        return block_list

    def compile_roles_handlers(self):
        '''
        Handles the role handler compilation step, returning a flat list of Handlers
        This is done for all roles in the Play.
        '''

        block_list = []

        if len(self.roles) > 0:
            for r in self.roles:
                if r.from_include:
                    continue
                block_list.extend(r.get_handler_blocks(play=self))

        return block_list

    def compile(self):
        '''
        Compiles and returns the task list for this play, compiled from the
        roles (which are themselves compiled recursively) and/or the list of
        tasks specified in the play.
        '''

        # create a block containing a single flush handlers meta
        # task, so we can be sure to run handlers at certain points
        # of the playbook execution
        flush_block = Block.load(
            data={'meta': 'flush_handlers'},
            play=self,
            variable_manager=self._variable_manager,
            loader=self._loader
        )

        block_list = []

        block_list.extend(self.pre_tasks)
        block_list.append(flush_block)
        block_list.extend(self._compile_roles())
        block_list.extend(self.tasks)
        block_list.append(flush_block)
        block_list.extend(self.post_tasks)
        block_list.append(flush_block)

        return block_list

    def get_vars(self):
        return self.vars.copy()

    def get_vars_files(self):
        if self.vars_files is None:
            return []
        elif not isinstance(self.vars_files, list):
            return [self.vars_files]
        return self.vars_files

    def get_handlers(self):
        return self.handlers[:]

    def get_roles(self):
        return self.roles[:]

    def get_tasks(self):
        tasklist = []
        for task in self.pre_tasks + self.tasks + self.post_tasks:
            if isinstance(task, Block):
                tasklist.append(task.block + task.rescue + task.always)
            else:
                tasklist.append(task)
        return tasklist

    def serialize(self):
        data = super(Play, self).serialize()

        roles = []
        for role in self.get_roles():
            roles.append(role.serialize())
        data['roles'] = roles
        data['included_path'] = self._included_path

        return data

    def deserialize(self, data):
        super(Play, self).deserialize(data)

        self._included_path = data.get('included_path', None)
        if 'roles' in data:
            role_data = data.get('roles', [])
            roles = []
            for role in role_data:
                r = Role()
                r.deserialize(role)
                roles.append(r)

            setattr(self, 'roles', roles)
            del data['roles']

    def copy(self):
        new_me = super(Play, self).copy()
        new_me.ROLE_CACHE = self.ROLE_CACHE.copy()
        new_me._included_conditional = self._included_conditional
        new_me._included_path = self._included_path
        return new_me
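In `_load_roles` above, `self._extend_value(self.roles, roles, prepend=True)` returns a freshly built list that the attribute machinery then rebinds to `self.roles`, which is the stale-reference pattern flagged in the issue. A standalone toy model of the two update strategies (the class and method names are illustrative only, not ansible code):

```python
class ToyPlay:
    """Minimal stand-in for Play: holds a roles list that other code observes."""

    def __init__(self):
        self.roles = []

    def load_roles_rebinding(self, new_roles):
        # Mirrors the _extend_value(..., prepend=True) pattern: a new list is
        # built and the attribute is rebound to it.
        self.roles = list(new_roles) + self.roles

    def load_roles_in_place(self, new_roles):
        # The alternative suggested in the issue: mutate the existing list so
        # earlier references to play.roles keep seeing the current contents.
        self.roles[:0] = new_roles


play = ToyPlay()
snapshot = play.roles               # e.g. a strategy grabbing play.roles early
play.load_roles_rebinding(['web'])
print(snapshot)                     # [] -- snapshot still points at the old list

play = ToyPlay()
snapshot = play.roles
play.load_roles_in_place(['web'])
print(snapshot)                     # ['web'] -- same object, change is visible
```

Under this model, whichever component caches `play.roles` before `_load_roles` runs only sees the loaded roles when the in-place variant is used.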
closed
ansible/ansible
https://github.com/ansible/ansible
64,902
allow_duplicates: an example of a document doesn't work
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->

##### SUMMARY
<!--- Explain the problem briefly below -->
[The documentation](https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html#role-duplication-and-execution) says that `allow_duplicates: true` makes it possible to execute the same role multiple times in a play, but it doesn't work.

Actually, I'm not sure whether the current behavior is intended or the documentation simply wasn't updated. However, I think the variable should enable that repeated execution as stated in the example, because we can do the same thing in the `roles` directive.

Edit (sivel): This appears to have been caused by 376b199c0540e39189bdf6b31b9a60eadffa3989

Something is likely looking at the wrong reference of `play.roles`. Maybe instead of using `self._extend_value` in `_load_roles` we can switch to doing `self.roles[:0] = roles` so the reference stays the same. Or someone can track down the incorrect reference.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
allow_duplicates

##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.1
  config file = /home/knagamin/t/ansible/ansible-test-allow_duplicates/ansible.cfg
  configured module search path = ['/home/knagamin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/knagamin/.local/lib/python3.7/site-packages/ansible
  executable location = /home/knagamin/.local/bin/ansible
  python version = 3.7.5 (default, Oct 17 2019, 12:16:48) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```

NOTE: I tried some older versions and found that it works correctly in v2.7.0 but not in v2.7.1.

##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```

##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
* target OS version

```
$ uname -srvmpio
Linux 5.3.11-300.fc31.x86_64 #1 SMP Tue Nov 12 19:08:07 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
```

##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
* Directory structure

```
β”œβ”€β”€ playbook.yml
└── roles
    └── test_role
        β”œβ”€β”€ meta
        β”‚   └── main.yml
        └── tasks
            └── main.yml
```

* `./playbook.yml`

```yaml
---
- name: test for allow_duplicates
  hosts: localhost
  gather_facts: false
  roles:
    - role: test_role
    - role: test_role
    - role: test_role
```

* `./roles/test_role/tasks/main.yml`

```yaml
---
# tasks file for test_role
- name: Just show a message
  debug:
    msg: "hoge"
```

* `./roles/test_role/meta/main.yml`

```yaml
galaxy_info:
  author: your name
  description: your role description
  company: your company (optional)
  license: license (GPL-2.0-or-later, MIT, etc)
  min_ansible_version: 2.9
  galaxy_tags: []
dependencies: []
allow_duplicates: true
```

<!--- HINT: You can paste gist.github.com links for larger files -->

##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->

```
PLAY [test for allow_duplicates] *************************************************************************************

TASK [test_role : Just show a message] *******************************************************************************
ok: [localhost] => {
    "msg": "hoge"
}

TASK [test_role : Just show a message] *******************************************************************************
ok: [localhost] => {
    "msg": "hoge"
}

TASK [test_role : Just show a message] *******************************************************************************
ok: [localhost] => {
    "msg": "hoge"
}

PLAY RECAP ***********************************************************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```

##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->

```paste below
PLAY [test for allow_duplicates] *************************************************************************************

TASK [test_role : Just show a message] *******************************************************************************
ok: [localhost] => {
    "msg": "hoge"
}

PLAY RECAP ***********************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```
https://github.com/ansible/ansible/issues/64902
https://github.com/ansible/ansible/pull/65063
4be8b2134f0f6ed794ef57a621534f9561f91895
daecbb9bf080bc639ca8a5e5d67cee5ed1a0b439
2019-11-15T16:05:02Z
python
2019-12-03T15:21:54Z
test/integration/targets/include_import/roles/dup_allowed_role/meta/main.yml