| Column | Type | Length / values |
|---|---|---|
| status | stringclasses | 1 value |
| repo_name | stringclasses | 31 values |
| repo_url | stringclasses | 31 values |
| issue_id | int64 | 1 to 104k |
| title | stringlengths | 4 to 369 |
| body | stringlengths | 0 to 254k |
| issue_url | stringlengths | 37 to 56 |
| pull_url | stringlengths | 37 to 54 |
| before_fix_sha | stringlengths | 40 (fixed) |
| after_fix_sha | stringlengths | 40 (fixed) |
| report_datetime | timestamp[us, tz=UTC] | |
| language | stringclasses | 5 values |
| commit_datetime | timestamp[us, tz=UTC] | |
| updated_file | stringlengths | 4 to 188 |
| file_content | stringlengths | 0 to 5.12M |
---

status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 64902
title: allow_duplicates: an example of a document doesn't work
body:
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
[The document](https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html#role-duplication-and-execution) says that `allow_duplicates: true` makes it possible to execute the same role multiple times in a play, but it doesn't work. I'm not sure whether the current behavior is intended or the documentation simply wasn't updated; either way, I think the setting should control role re-execution as the example states, since the `roles` directive allows listing a role more than once.
Edit (sivel): This appears to have been caused by 376b199c0540e39189bdf6b31b9a60eadffa3989
Something is likely looking at the wrong reference of `play.roles`, maybe instead of using `self._extend_value` in `_load_roles` we can switch to doing: `self.roles[:0] = roles` so the reference stays the same. Or someone can track down the incorrect reference.
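For illustration, here is a minimal, standalone sketch of the aliasing behavior described above (plain Python with stand-in names, not Ansible's actual objects):

```python
# Rebinding the name (roughly what an _extend_value-style helper does)
# creates a brand-new list, so a reference taken earlier goes stale.
roles = ["existing_role"]
strategy_view = roles              # another component holds this reference
roles = ["new_role"] + roles       # rebinds the name to a *new* list
print(strategy_view)               # ['existing_role']  -- stale

# In-place slice assignment mutates the original list instead, so every
# holder of the reference sees the update; this is the
# `self.roles[:0] = roles` suggestion.
roles = ["existing_role"]
strategy_view = roles
roles[:0] = ["new_role"]           # same list object, new contents
print(strategy_view)               # ['new_role', 'existing_role']
```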
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
allow_duplicates
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.1
config file = /home/knagamin/t/ansible/ansible-test-allow_duplicates/ansible.cfg
configured module search path = ['/home/knagamin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/knagamin/.local/lib/python3.7/site-packages/ansible
executable location = /home/knagamin/.local/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:16:48) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
NOTE: I tried some older versions and found that it works correctly in v2.7.0 but not in v2.7.1.
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
* target OS version
```
$ uname -srvmpio
Linux 5.3.11-300.fc31.x86_64 #1 SMP Tue Nov 12 19:08:07 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
* Directory structure
```
├── playbook.yml
└── roles
    └── test_role
        ├── meta
        │   └── main.yml
        └── tasks
            └── main.yml
```
* `./playbook.yml`
```yaml
---
- name: test for allow_duplicates
hosts: localhost
gather_facts: false
roles:
- role: test_role
- role: test_role
- role: test_role
```
* `./roles/test_role/tasks/main.yml`
```yaml
---
# tasks file for test_role
- name: Just show a message
debug:
msg: "hoge"
```
* `./roles/test_role/meta/main.yml`
```yaml
galaxy_info:
author: your name
description: your role description
company: your company (optional)
license: license (GPL-2.0-or-later, MIT, etc)
min_ansible_version: 2.9
galaxy_tags: []
dependencies: []
allow_duplicates: true
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
PLAY [test for allow_duplicates] *************************************************************************************
TASK [test_role : Just show a message] *******************************************************************************
ok: [localhost] => {
"msg": "hoge"
}
TASK [test_role : Just show a message] *******************************************************************************
ok: [localhost] => {
"msg": "hoge"
}
TASK [test_role : Just show a message] *******************************************************************************
ok: [localhost] => {
"msg": "hoge"
}
PLAY RECAP ***********************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [test for allow_duplicates] *************************************************************************************
TASK [test_role : Just show a message] *******************************************************************************
ok: [localhost] => {
"msg": "hoge"
}
PLAY RECAP ***********************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
issue_url: https://github.com/ansible/ansible/issues/64902
pull_url: https://github.com/ansible/ansible/pull/65063
before_fix_sha: 4be8b2134f0f6ed794ef57a621534f9561f91895
after_fix_sha: daecbb9bf080bc639ca8a5e5d67cee5ed1a0b439
report_datetime: 2019-11-15T16:05:02Z
language: python
commit_datetime: 2019-12-03T15:21:54Z
updated_file: test/integration/targets/include_import/roles/dup_allowed_role/tasks/main.yml
file_content: (empty)

---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 64902
title: allow_duplicates: an example of a document doesn't work
body: (identical to the first issue 64902 record above)
issue_url: https://github.com/ansible/ansible/issues/64902
pull_url: https://github.com/ansible/ansible/pull/65063
before_fix_sha: 4be8b2134f0f6ed794ef57a621534f9561f91895
after_fix_sha: daecbb9bf080bc639ca8a5e5d67cee5ed1a0b439
report_datetime: 2019-11-15T16:05:02Z
language: python
commit_datetime: 2019-12-03T15:21:54Z
updated_file: test/integration/targets/include_import/runme.sh
file_content:

```bash
#!/usr/bin/env bash
set -eux
export ANSIBLE_ROLES_PATH=./roles
function gen_task_files() {
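# Creates tasks/hello/tasks-file-001.yml through tasks-file-039.yml, used by
# the "Tons of top level include_tasks" test below.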
for i in $(seq -f '%03g' 1 39); do
echo -e "- name: Hello Message\n debug:\n msg: Task file ${i}" > "tasks/hello/tasks-file-${i}.yml"
done
}
## Adhoc
ansible -m include_role -a name=role1 localhost
## Import (static)
# Playbook
ANSIBLE_STRATEGY='linear' ansible-playbook playbook/test_import_playbook.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook playbook/test_import_playbook.yml -i inventory "$@"
ANSIBLE_STRATEGY='linear' ansible-playbook playbook/test_import_playbook_tags.yml -i inventory "$@" --tags canary1,canary22,validate --skip-tags skipme
# Tasks
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_import_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_import_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_import_tasks_tags.yml -i inventory "$@" --tags tasks1,canary1,validate
# Role
ANSIBLE_STRATEGY='linear' ansible-playbook role/test_import_role.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook role/test_import_role.yml -i inventory "$@"
## Include (dynamic)
# Tasks
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_tasks_tags.yml -i inventory "$@" --tags tasks1,canary1,validate
# Role
ANSIBLE_STRATEGY='linear' ansible-playbook role/test_include_role.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook role/test_include_role.yml -i inventory "$@"
## Max Recursion Depth
# https://github.com/ansible/ansible/issues/23609
ANSIBLE_STRATEGY='linear' ansible-playbook test_role_recursion.yml -i inventory "$@"
## Nested tasks
# https://github.com/ansible/ansible/issues/34782
ANSIBLE_STRATEGY='linear' ansible-playbook test_nested_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_nested_tasks.yml -i inventory "$@"
## Tons of top level include_tasks
# https://github.com/ansible/ansible/issues/36053
# Fixed by https://github.com/ansible/ansible/pull/36075
gen_task_files
ANSIBLE_STRATEGY='linear' ansible-playbook test_copious_include_tasks.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook test_copious_include_tasks.yml -i inventory "$@"
rm -f tasks/hello/*.yml
# Included tasks should inherit attrs from non-dynamic blocks in parent chain
# https://github.com/ansible/ansible/pull/38827
ANSIBLE_STRATEGY='linear' ansible-playbook test_grandparent_inheritance.yml -i inventory "$@"
# undefined_var
ANSIBLE_STRATEGY='linear' ansible-playbook undefined_var/playbook.yml -i inventory "$@"
ANSIBLE_STRATEGY='free' ansible-playbook undefined_var/playbook.yml -i inventory "$@"
# include_ + apply (explicit inheritance)
ANSIBLE_STRATEGY='linear' ansible-playbook apply/include_apply.yml -i inventory "$@" --tags foo
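# grep exits non-zero when the expected error text is absent, so suspend
# errexit while capturing the output and assert on $OUT afterwards.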
set +e
OUT=$(ANSIBLE_STRATEGY='linear' ansible-playbook apply/import_apply.yml -i inventory "$@" --tags foo 2>&1 | grep 'ERROR! Invalid options for import_tasks: apply')
set -e
if [[ -z "$OUT" ]]; then
echo "apply on import_tasks did not cause error"
exit 1
fi
# Test that duplicate items in loop are not deduped
ANSIBLE_STRATEGY='linear' ansible-playbook tasks/test_include_dupe_loop.yml -i inventory "$@" | tee test_include_dupe_loop.out
test "$(grep -c '"item=foo"' test_include_dupe_loop.out)" = 3
ANSIBLE_STRATEGY='free' ansible-playbook tasks/test_include_dupe_loop.yml -i inventory "$@" | tee test_include_dupe_loop.out
test "$(grep -c '"item=foo"' test_include_dupe_loop.out)" = 3
ansible-playbook public_exposure/playbook.yml -i inventory "$@"
ansible-playbook public_exposure/no_bleeding.yml -i inventory "$@"
ansible-playbook public_exposure/no_overwrite_roles.yml -i inventory "$@"
# https://github.com/ansible/ansible/pull/48068
ANSIBLE_HOST_PATTERN_MISMATCH=warning ansible-playbook run_once/playbook.yml "$@"
# https://github.com/ansible/ansible/issues/48936
ansible-playbook -v handler_addressing/playbook.yml 2>&1 | tee test_handler_addressing.out
test "$(grep -E -c 'include handler task|ERROR! The requested handler '"'"'do_import'"'"' was not found' test_handler_addressing.out)" = 2
# https://github.com/ansible/ansible/issues/49969
ansible-playbook -v parent_templating/playbook.yml 2>&1 | tee test_parent_templating.out
test "$(grep -E -c 'Templating the path of the parent include_tasks failed.' test_parent_templating.out)" = 0
# https://github.com/ansible/ansible/issues/54618
ansible-playbook test_loop_var_bleed.yaml "$@"
# https://github.com/ansible/ansible/issues/56580
ansible-playbook valid_include_keywords/playbook.yml "$@"
```

---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 64902
title: allow_duplicates: an example of a document doesn't work
body: (identical to the first issue 64902 record above)
issue_url: https://github.com/ansible/ansible/issues/64902
pull_url: https://github.com/ansible/ansible/pull/65063
before_fix_sha: 4be8b2134f0f6ed794ef57a621534f9561f91895
after_fix_sha: daecbb9bf080bc639ca8a5e5d67cee5ed1a0b439
report_datetime: 2019-11-15T16:05:02Z
language: python
commit_datetime: 2019-12-03T15:21:54Z
updated_file: test/integration/targets/include_import/tasks/test_allow_single_role_dup.yml
file_content: (empty)

---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 59237
title: win_share: remove all other permissions of a share
body:
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
I have a playbook that grants the user Appli_Urba access on a share EAI. Everything is OK, but if I have another user on that share, the playbook removes it.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_share
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.8.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jul 13 2018, 13:06:57) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
[14:07:41]root@vsrvkermit playbook]# ansible-config dump --only-changed
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/datas/ansible/roles']
```
##### OS / ENVIRONMENT
Target is Windows 2012 R2
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
Original state:
```
C:\Windows\system32>net share EAI
Share name EAI
Path D:\Partage\EAI
Remark Repertoire EAI
Maximum users No limit
Users
Caching Manual caching of documents
Permission VSRVQSRM\Appli_Urba, FULL
PAM\EXP_TRT, READ
```
I manually added PAM\EXP_TRT with READ rights.
I run the playbook like this:
```
- name: Add share EAI
win_share:
name: EAI
description: Repertoire EAI
path: D:\Partage\EAI
list: no
full: Appli_Urba
when: ansible_distribution_version is version('6.2','>=')
```
This removes the manually added user.
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Keep the existing users and just add Appli_Urba if it does not exist.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
C:\Windows\system32>net share EAI
Share name EAI
Path D:\Partage\EAI
Remark Repertoire EAI
Maximum users No limit
Users
Caching Manual caching of documents
Permission VSRVQSRM\Appli_Urba, FULL
```
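The updated changelog fragment in this record suggests the fix adds an `append` option for access rules. As a rough illustration only (hypothetical names, in Python; the real logic lives in win_share.ps1), append-style reconciliation skips revoking entries it does not manage:

```python
# Hypothetical sketch of reconciling share permissions with an opt-in
# append mode; not the module's actual code.
def reconcile(current, desired, append=False):
    """Return (accounts_to_revoke, rules_to_grant) for a share."""
    to_grant = {user: right for user, right in desired.items()
                if current.get(user) != right}
    # Without append, any rule not explicitly desired is revoked; with
    # append, pre-existing rules such as PAM\EXP_TRT are left alone.
    to_revoke = [] if append else [u for u in current if u not in desired]
    return to_revoke, to_grant

current = {r"VSRVQSRM\Appli_Urba": "Full", r"PAM\EXP_TRT": "Read"}
desired = {r"VSRVQSRM\Appli_Urba": "Full"}
print(reconcile(current, desired))               # (['PAM\\EXP_TRT'], {})
print(reconcile(current, desired, append=True))  # ([], {})
```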
issue_url: https://github.com/ansible/ansible/issues/59237
pull_url: https://github.com/ansible/ansible/pull/59469
before_fix_sha: ed54b9b4418f895f0809bffb5f491553836ec634
after_fix_sha: 584824f560dd88b4f35a4632e082e5945b0495bd
report_datetime: 2019-07-18T12:12:04Z
language: python
commit_datetime: 2019-12-04T04:16:10Z
updated_file: changelogs/fragments/win_share-Implement-append-paramtere-for-access-rules.yml
file_content: (empty)

---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 59237
title: win_share: remove all other permissions of a share
body: (identical to the issue 59237 record above)
issue_url: https://github.com/ansible/ansible/issues/59237
pull_url: https://github.com/ansible/ansible/pull/59469
before_fix_sha: ed54b9b4418f895f0809bffb5f491553836ec634
after_fix_sha: 584824f560dd88b4f35a4632e082e5945b0495bd
report_datetime: 2019-07-18T12:12:04Z
language: python
commit_datetime: 2019-12-04T04:16:10Z
updated_file: lib/ansible/modules/windows/win_share.ps1
file_content:

```powershell
#!powershell
# Copyright: (c) 2015, Hans-Joachim Kliemeck <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#Requires -Module Ansible.ModuleUtils.Legacy
#Requires -Module Ansible.ModuleUtils.SID
#Functions
Function NormalizeAccounts {
param(
[parameter(valuefrompipeline=$true)]
$users
)
$users = $users.Trim()
If ($users -eq "") {
$splitUsers = [Collections.Generic.List[String]] @()
}
Else {
$splitUsers = [Collections.Generic.List[String]] $users.Split(",")
}
$normalizedUsers = [Collections.Generic.List[String]] @()
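# Round-trip each name through a SID and back to an NTAccount so different
# spellings of the same account compare equal in the checks below.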
ForEach($splitUser in $splitUsers) {
$sid = Convert-ToSID -account_name $splitUser
if (!$sid) {
Fail-Json $result "$splitUser is not a valid user or group on the host machine or domain"
}
$normalizedUser = (New-Object System.Security.Principal.SecurityIdentifier($sid)).Translate([System.Security.Principal.NTAccount])
$normalizedUsers.Add($normalizedUser)
}
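# The leading comma wraps the list in a one-element array so the PowerShell
# pipeline does not unroll it into separate strings on return.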
return ,$normalizedUsers
}
$result = @{
changed = $false
actions = @() # More for debug purposes
}
$params = Parse-Args $args -supports_check_mode $true
# While the -SmbShare cmdlets have a -WhatIf parameter, they don't honor it, need to skip the cmdlet if in check mode
$check_mode = Get-AnsibleParam -obj $params -name "_ansible_check_mode" -type "bool" -default $false
$name = Get-AnsibleParam -obj $params -name "name" -type "str" -failifempty $true
$state = Get-AnsibleParam -obj $params -name "state" -type "str" -default "present" -validateset "present","absent"
if (-not (Get-Command -Name Get-SmbShare -ErrorAction SilentlyContinue)) {
Fail-Json $result "The current host does not support the -SmbShare cmdlets required by this module. Please run on Server 2012 or Windows 8 and later"
}
$share = Get-SmbShare -Name $name -ErrorAction SilentlyContinue
If ($state -eq "absent") {
If ($share) {
# See message around -WhatIf where $check_mode is defined
if (-not $check_mode) {
Remove-SmbShare -Force -Name $name | Out-Null
}
$result.actions += "Remove-SmbShare -Force -Name $name"
$result.changed = $true
}
} Else {
$path = Get-AnsibleParam -obj $params -name "path" -type "path" -failifempty $true
$description = Get-AnsibleParam -obj $params -name "description" -type "str" -default ""
$permissionList = Get-AnsibleParam -obj $params -name "list" -type "bool" -default $false
$folderEnum = if ($permissionList) { "Unrestricted" } else { "AccessBased" }
$permissionRead = Get-AnsibleParam -obj $params -name "read" -type "str" -default "" | NormalizeAccounts
$permissionChange = Get-AnsibleParam -obj $params -name "change" -type "str" -default "" | NormalizeAccounts
$permissionFull = Get-AnsibleParam -obj $params -name "full" -type "str" -default "" | NormalizeAccounts
$permissionDeny = Get-AnsibleParam -obj $params -name "deny" -type "str" -default "" | NormalizeAccounts
$cachingMode = Get-AnsibleParam -obj $params -name "caching_mode" -type "str" -default "Manual" -validateSet "BranchCache","Documents","Manual","None","Programs","Unknown"
$encrypt = Get-AnsibleParam -obj $params -name "encrypt" -type "bool" -default $false
If (-Not (Test-Path -Path $path)) {
Fail-Json $result "$path directory does not exist on the host"
}
# normalize path and remove slash at the end
$path = (Get-Item $path).FullName -replace "\\$"
# need to (re-)create share
If (-not $share) {
if (-not $check_mode) {
New-SmbShare -Name $name -Path $path | Out-Null
}
$share = Get-SmbShare -Name $name -ErrorAction SilentlyContinue
$result.changed = $true
$result.actions += "New-SmbShare -Name $name -Path $path"
# if in check mode we cannot run the below as no share exists so just
# exit early
if ($check_mode) {
Exit-Json -obj $result
}
}
If ($share.Path -ne $path) {
if (-not $check_mode) {
Remove-SmbShare -Force -Name $name | Out-Null
New-SmbShare -Name $name -Path $path | Out-Null
}
$share = Get-SmbShare -Name $name -ErrorAction SilentlyContinue
$result.changed = $true
$result.actions += "Remove-SmbShare -Force -Name $name"
$result.actions += "New-SmbShare -Name $name -Path $path"
}
# updates
If ($share.Description -ne $description) {
if (-not $check_mode) {
Set-SmbShare -Force -Name $name -Description $description | Out-Null
}
$result.changed = $true
$result.actions += "Set-SmbShare -Force -Name $name -Description $description"
}
If ($share.FolderEnumerationMode -ne $folderEnum) {
if (-not $check_mode) {
Set-SmbShare -Force -Name $name -FolderEnumerationMode $folderEnum | Out-Null
}
$result.changed = $true
$result.actions += "Set-SmbShare -Force -Name $name -FolderEnumerationMode $folderEnum"
}
if ($share.CachingMode -ne $cachingMode) {
if (-not $check_mode) {
Set-SmbShare -Force -Name $name -CachingMode $cachingMode | Out-Null
}
$result.changed = $true
$result.actions += "Set-SmbShare -Force -Name $name -CachingMode $cachingMode"
}
if ($share.EncryptData -ne $encrypt) {
if (-not $check_mode) {
Set-SmbShare -Force -Name $name -EncryptData $encrypt | Out-Null
}
$result.changed = $true
$result.actions += "Set-SmbShare -Force -Name $name -EncryptData $encrypt"
}
# clean permissions that imply others
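# Full implies Change and Read, and Change implies Read; dropping the implied
# entries avoids granting redundant or conflicting rules further down.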
ForEach ($user in $permissionFull) {
$permissionChange.remove($user) | Out-Null
$permissionRead.remove($user) | Out-Null
}
ForEach ($user in $permissionChange) {
$permissionRead.remove($user) | Out-Null
}
# remove permissions
$permissions = Get-SmbShareAccess -Name $name
ForEach ($permission in $permissions) {
If ($permission.AccessControlType -eq "Deny") {
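# Get-SmbShareAccess may have returned a single CIM instance rather than an
# array, so count the entries by iterating instead of relying on .Count.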
$cim_count = 0
foreach ($count in $permissions) {
$cim_count++
}
# Don't remove the Deny entry for Everyone if there are no other permissions set (cim_count == 1)
if (-not ($permission.AccountName -eq 'Everyone' -and $cim_count -eq 1)) {
If (-not ($permissionDeny.Contains($permission.AccountName))) {
if (-not $check_mode) {
Unblock-SmbShareAccess -Force -Name $name -AccountName $permission.AccountName | Out-Null
}
$result.changed = $true
$result.actions += "Unblock-SmbShareAccess -Force -Name $name -AccountName $($permission.AccountName)"
} else {
# Remove from the deny list as it already has the permissions
$permissionDeny.remove($permission.AccountName) | Out-Null
}
}
} ElseIf ($permission.AccessControlType -eq "Allow") {
If ($permission.AccessRight -eq "Full") {
If (-not ($permissionFull.Contains($permission.AccountName))) {
if (-not $check_mode) {
Revoke-SmbShareAccess -Force -Name $name -AccountName $permission.AccountName | Out-Null
}
$result.changed = $true
$result.actions += "Revoke-SmbShareAccess -Force -Name $name -AccountName $($permission.AccountName)"
Continue
}
# user got requested permissions
$permissionFull.remove($permission.AccountName) | Out-Null
} ElseIf ($permission.AccessRight -eq "Change") {
If (-not ($permissionChange.Contains($permission.AccountName))) {
if (-not $check_mode) {
Revoke-SmbShareAccess -Force -Name $name -AccountName $permission.AccountName | Out-Null
}
$result.changed = $true
$result.actions += "Revoke-SmbShareAccess -Force -Name $name -AccountName $($permission.AccountName)"
Continue
}
# user got requested permissions
$permissionChange.remove($permission.AccountName) | Out-Null
} ElseIf ($permission.AccessRight -eq "Read") {
If (-not ($permissionRead.Contains($permission.AccountName))) {
if (-not $check_mode) {
Revoke-SmbShareAccess -Force -Name $name -AccountName $permission.AccountName | Out-Null
}
$result.changed = $true
$result.actions += "Revoke-SmbShareAccess -Force -Name $name -AccountName $($permission.AccountName)"
Continue
}
# user got requested permissions
$permissionRead.Remove($permission.AccountName) | Out-Null
}
}
}
# add missing permissions
ForEach ($user in $permissionRead) {
if (-not $check_mode) {
Grant-SmbShareAccess -Force -Name $name -AccountName $user -AccessRight "Read" | Out-Null
}
$result.changed = $true
$result.actions += "Grant-SmbShareAccess -Force -Name $name -AccountName $user -AccessRight Read"
}
ForEach ($user in $permissionChange) {
if (-not $check_mode) {
Grant-SmbShareAccess -Force -Name $name -AccountName $user -AccessRight "Change" | Out-Null
}
$result.changed = $true
$result.actions += "Grant-SmbShareAccess -Force -Name $name -AccountName $user -AccessRight Change"
}
ForEach ($user in $permissionFull) {
if (-not $check_mode) {
Grant-SmbShareAccess -Force -Name $name -AccountName $user -AccessRight "Full" | Out-Null
}
$result.changed = $true
$result.actions += "Grant-SmbShareAccess -Force -Name $name -AccountName $user -AccessRight Full"
}
ForEach ($user in $permissionDeny) {
if (-not $check_mode) {
Block-SmbShareAccess -Force -Name $name -AccountName $user | Out-Null
}
$result.changed = $true
$result.actions += "Block-SmbShareAccess -Force -Name $name -AccountName $user"
}
}
Exit-Json $result
```

---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 59237
title: win_share: remove all other permissions of a share
body: (identical to the issue 59237 record above)
issue_url: https://github.com/ansible/ansible/issues/59237
pull_url: https://github.com/ansible/ansible/pull/59469
before_fix_sha: ed54b9b4418f895f0809bffb5f491553836ec634
after_fix_sha: 584824f560dd88b4f35a4632e082e5945b0495bd
report_datetime: 2019-07-18T12:12:04Z
language: python
commit_datetime: 2019-12-04T04:16:10Z
updated_file: lib/ansible/modules/windows/win_share.py
file_content:

```python
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Hans-Joachim Kliemeck <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: win_share
version_added: "2.1"
short_description: Manage Windows shares
description:
- Add, modify or remove Windows share and set share permissions.
requirements:
- As this module uses newer cmdlets like New-SmbShare, it can only run on
Windows 8 / Windows 2012 or newer.
- This is due to the reliance on the WMI provider MSFT_SmbShare
U(https://msdn.microsoft.com/en-us/library/hh830471) which was only added
with these Windows releases.
options:
name:
description:
- Share name.
type: str
required: yes
path:
description:
- Share directory.
type: path
required: yes
state:
description:
- Specify whether to add C(present) or remove C(absent) the specified share.
type: str
choices: [ absent, present ]
default: present
description:
description:
- Share description.
type: str
list:
description:
- Specify whether to allow or deny file listing, in case user has no permission on share. Also known as Access-Based Enumeration.
type: bool
default: no
read:
description:
- Specify user list that should get read access on share, separated by comma.
type: str
change:
description:
- Specify user list that should get read and write access on share, separated by comma.
type: str
full:
description:
- Specify user list that should get full access on share, separated by comma.
type: str
deny:
description:
- Specify user list that should get no access, regardless of implied access on share, separated by comma.
type: str
caching_mode:
description:
- Set the CachingMode for this share.
type: str
choices: [ BranchCache, Documents, Manual, None, Programs, Unknown ]
default: Manual
version_added: "2.3"
encrypt:
description: Sets whether to encrypt the traffic to the share or not.
type: bool
default: no
version_added: "2.4"
author:
- Hans-Joachim Kliemeck (@h0nIg)
- David Baumann (@daBONDi)
'''
EXAMPLES = r'''
# Playbook example
# Add share and set permissions
---
- name: Add secret share
win_share:
name: internal
description: top secret share
path: C:\shares\internal
list: no
full: Administrators,CEO
read: HR-Global
deny: HR-External
- name: Add public company share
win_share:
name: company
description: top secret share
path: C:\shares\company
list: yes
full: Administrators,CEO
read: Global
- name: Remove previously added share
win_share:
name: internal
state: absent
'''
RETURN = r'''
actions:
description: A list of action cmdlets that were run by the module.
returned: success
type: list
sample: ['New-SmbShare -Name share -Path C:\temp']
'''
```

---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 59237
title: win_share: remove all other permissions of a share
body: (identical to the issue 59237 record above)
issue_url: https://github.com/ansible/ansible/issues/59237
pull_url: https://github.com/ansible/ansible/pull/59469
before_fix_sha: ed54b9b4418f895f0809bffb5f491553836ec634
after_fix_sha: 584824f560dd88b4f35a4632e082e5945b0495bd
report_datetime: 2019-07-18T12:12:04Z
language: python
commit_datetime: 2019-12-04T04:16:10Z
updated_file: test/integration/targets/win_share/tasks/tests.yml
file_content:

```yaml
---
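# Each scenario below runs three times: once in check mode (must report a
# change without applying it), once for real, and once more to verify
# idempotency (no change reported).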
- name: create share check
win_share:
name: "{{test_win_share_name}}"
path: "{{test_win_share_path}}"
state: present
register: create_share_check
check_mode: yes
- name: check if share exists check
win_shell: Get-SmbShare | Where-Object { $_.Name -eq '{{test_win_share_name}}' }
register: create_share_actual_check
- name: assert create share check
assert:
that:
- create_share_check is changed
- create_share_actual_check.stdout_lines == []
- name: create share
win_share:
name: "{{test_win_share_name}}"
path: "{{test_win_share_path}}"
state: present
register: create_share
- name: check if share exists
win_shell: Get-SmbShare | Where-Object { $_.Name -eq '{{test_win_share_name}}' }
register: create_share_actual
- name: assert create share
assert:
that:
- create_share is changed
- create_share_actual.stdout_lines != []
- name: create share again
win_share:
name: "{{test_win_share_name}}"
path: "{{test_win_share_path}}"
state: present
register: create_share_again
- name: check if share exists again
win_shell: Get-SmbShare | Where-Object { $_.Name -eq '{{test_win_share_name}}' }
register: create_share_actual_again
- name: assert create share again
assert:
that:
- create_share_again is not changed
- create_share_actual_again.stdout_lines == create_share_actual.stdout_lines
- name: set caching mode to Programs check
win_share:
name: "{{test_win_share_name}}"
state: present
path: "{{test_win_share_path}}"
caching_mode: Programs
register: caching_mode_programs_check
check_mode: yes
- name: get actual caching mode check
win_command: powershell.exe "(Get-SmbShare -Name '{{test_win_share_name}}').CachingMode"
register: caching_mode_programs_actual_check
- name: assert caching mode to Programs check
assert:
that:
- caching_mode_programs_check is changed
- caching_mode_programs_actual_check.stdout == "Manual\r\n"
- name: set caching mode to Programs
win_share:
name: "{{test_win_share_name}}"
state: present
path: "{{test_win_share_path}}"
caching_mode: Programs
register: caching_mode_programs
- name: get actual caching mode
win_command: powershell.exe "(Get-SmbShare -Name '{{test_win_share_name}}').CachingMode"
register: caching_mode_programs_actual
- name: assert caching mode to Programs
assert:
that:
- caching_mode_programs is changed
- caching_mode_programs_actual.stdout == "Programs\r\n"
- name: set caching mode to Programs again
win_share:
name: "{{test_win_share_name}}"
state: present
path: "{{test_win_share_path}}"
caching_mode: Programs
register: caching_mode_programs_again
- name: get actual caching mode again
win_command: powershell.exe "(Get-SmbShare -Name '{{test_win_share_name}}').CachingMode"
register: caching_mode_programs_actual_again
- name: assert caching mode to Programs again
assert:
that:
- caching_mode_programs_again is not changed
- caching_mode_programs_actual_again.stdout == "Programs\r\n"
- name: set encryption on share check
win_share:
name: "{{test_win_share_name}}"
state: present
path: "{{test_win_share_path}}"
encrypt: True
register: encrypt_on_check
check_mode: yes
- name: get actual encrypt mode check
win_command: powershell.exe "(Get-SmbShare -Name '{{test_win_share_name}}').EncryptData"
register: encrypt_on_actual_check
- name: assert set encryption on check
assert:
that:
- encrypt_on_check is changed
- encrypt_on_actual_check.stdout == "False\r\n"
- name: set encryption on share
win_share:
name: "{{test_win_share_name}}"
state: present
path: "{{test_win_share_path}}"
encrypt: True
register: encrypt_on
- name: get actual encrypt mode
win_command: powershell.exe "(Get-SmbShare -Name '{{test_win_share_name}}').EncryptData"
register: encrypt_on_actual
- name: assert set encryption on
assert:
that:
- encrypt_on is changed
- encrypt_on_actual.stdout == "True\r\n"
- name: set encryption on share again
win_share:
name: "{{test_win_share_name}}"
state: present
path: "{{test_win_share_path}}"
encrypt: True
register: encrypt_on_again
- name: get actual encrypt mode again
win_command: powershell.exe "(Get-SmbShare -Name '{{test_win_share_name}}').EncryptData"
register: encrypt_on_actual
- name: assert set encryption on again
assert:
that:
- encrypt_on_again is not changed
- encrypt_on_actual.stdout == "True\r\n"
- name: set description check
win_share:
name: "{{test_win_share_name}}"
state: present
path: "{{test_win_share_path}}"
description: description
register: change_decription_check
check_mode: yes
- name: get actual description check
win_command: powershell.exe "(Get-SmbShare -Name '{{test_win_share_name}}').Description"
register: change_description_actual_check
- name: assert change description check
assert:
that:
- change_decription_check is changed
- change_description_actual_check.stdout == "\r\n"
- name: set description
win_share:
name: "{{test_win_share_name}}"
state: present
path: "{{test_win_share_path}}"
description: description
register: change_decription
- name: get actual description
win_command: powershell.exe "(Get-SmbShare -Name '{{test_win_share_name}}').Description"
register: change_description_actual
- name: assert change description
assert:
that:
- change_decription is changed
- change_description_actual.stdout == "description\r\n"
- name: set description again
win_share:
name: "{{test_win_share_name}}"
state: present
path: "{{test_win_share_path}}"
description: description
register: change_decription_again
- name: get actual description again
win_command: powershell.exe "(Get-SmbShare -Name '{{test_win_share_name}}').Description"
register: change_description_actual_again
- name: assert change description again
assert:
that:
- change_decription_again is not changed
- change_description_actual_again.stdout == "description\r\n"
- name: set allow list check
win_share:
name: "{{test_win_share_name}}"
state: present
path: "{{test_win_share_path}}"
list: True
register: allow_list_check
check_mode: yes
- name: get actual allow listing check
win_command: powershell.exe "(Get-SmbShare -Name '{{test_win_share_name}}').FolderEnumerationMode"
register: allow_list_actual_check
- name: assert allow list check
assert:
that:
- allow_list_check is changed
- allow_list_actual_check.stdout == "AccessBased\r\n"
- name: set allow list
win_share:
name: "{{test_win_share_name}}"
state: present
path: "{{test_win_share_path}}"
list: True
register: allow_list
- name: get actual allow listing
win_command: powershell.exe "(Get-SmbShare -Name '{{test_win_share_name}}').FolderEnumerationMode"
register: allow_list_actual
- name: assert allow list
assert:
that:
- allow_list is changed
- allow_list_actual.stdout == "Unrestricted\r\n"
- name: set allow list again
win_share:
name: "{{test_win_share_name}}"
state: present
path: "{{test_win_share_path}}"
list: True
register: allow_list_again
- name: get actual allow listing again
win_command: powershell.exe "(Get-SmbShare -Name '{{test_win_share_name}}').FolderEnumerationMode"
register: allow_list_actual_again
- name: assert allow list check again
assert:
that:
- allow_list_again is not changed
- allow_list_actual_again.stdout == "Unrestricted\r\n"
- name: set deny list check
win_share:
name: "{{test_win_share_name}}"
state: present
path: "{{test_win_share_path}}"
list: False
register: deny_list_check
check_mode: yes
- name: get actual deny listing check
win_command: powershell.exe "(Get-SmbShare -Name '{{test_win_share_name}}').FolderEnumerationMode"
register: deny_list_actual_check
- name: assert deny list check
assert:
that:
- deny_list_check is changed
- deny_list_actual_check.stdout == "Unrestricted\r\n"
- name: set deny list
win_share:
name: "{{test_win_share_name}}"
state: present
path: "{{test_win_share_path}}"
list: False
register: deny_list
- name: get actual deny listing
win_command: powershell.exe "(Get-SmbShare -Name '{{test_win_share_name}}').FolderEnumerationMode"
register: deny_list_actual
- name: assert deny list
assert:
that:
- deny_list is changed
- deny_list_actual.stdout == "AccessBased\r\n"
- name: set deny list again
win_share:
name: "{{test_win_share_name}}"
state: present
path: "{{test_win_share_path}}"
list: False
register: deny_list_again
- name: get actual deny listing again
win_command: powershell.exe "(Get-SmbShare -Name '{{test_win_share_name}}').FolderEnumerationMode"
register: deny_list_actual_again
- name: assert deny list again
assert:
that:
- deny_list_again is not changed
- deny_list_actual_again.stdout == "AccessBased\r\n"
- name: set ACLs on share check
win_share:
name: "{{test_win_share_name}}"
state: present
path: "{{test_win_share_path}}"
full: Administrators
change: Users
read: Guests
deny: Remote Desktop Users
register: set_acl_check
check_mode: yes
- name: get actual share ACLs check
win_shell: foreach ($acl in Get-SmbShareAccess -Name '{{test_win_share_name}}') { Write-Host "$($acl.AccessRight)|$($acl.AccessControlType)|$($acl.AccountName)" }
register: set_acl_actual_check
- name: assert set ACLs on share check
assert:
that:
- set_acl_check is changed
- set_acl_actual_check.stdout == "Full|Deny|Everyone\n"
- name: set ACLs on share
win_share:
name: "{{test_win_share_name}}"
state: present
path: "{{test_win_share_path}}"
full: Administrators
change: Users
read: Guests
deny: Remote Desktop Users
register: set_acl
- name: get actual share ACLs
win_shell: foreach ($acl in Get-SmbShareAccess -Name '{{test_win_share_name}}') { Write-Host "$($acl.AccessRight)|$($acl.AccessControlType)|$($acl.AccountName)" }
register: set_acl_actual
- name: assert set ACLs on share
assert:
that:
- set_acl is changed
- set_acl_actual.stdout_lines|length == 4
- set_acl_actual.stdout_lines[0] == 'Full|Deny|BUILTIN\\Remote Desktop Users'
- set_acl_actual.stdout_lines[1] == 'Read|Allow|BUILTIN\\Guests'
- set_acl_actual.stdout_lines[2] == 'Change|Allow|BUILTIN\\Users'
- set_acl_actual.stdout_lines[3] == 'Full|Allow|BUILTIN\\Administrators'
- name: set ACLs on share again
win_share:
name: "{{test_win_share_name}}"
state: present
path: "{{test_win_share_path}}"
full: Administrators
change: Users
read: Guests
deny: Remote Desktop Users
register: set_acl_again
- name: get actual share ACLs again
win_shell: foreach ($acl in Get-SmbShareAccess -Name '{{test_win_share_name}}') { Write-Host "$($acl.AccessRight)|$($acl.AccessControlType)|$($acl.AccountName)" }
register: set_acl_actual_again
- name: assert set ACLs on share again
assert:
that:
- set_acl_again is not changed
- set_acl_actual_again.stdout_lines|length == 4
- set_acl_actual_again.stdout_lines[0] == 'Full|Deny|BUILTIN\\Remote Desktop Users'
- set_acl_actual_again.stdout_lines[1] == 'Read|Allow|BUILTIN\\Guests'
- set_acl_actual_again.stdout_lines[2] == 'Change|Allow|BUILTIN\\Users'
- set_acl_actual_again.stdout_lines[3] == 'Full|Allow|BUILTIN\\Administrators'
- name: remove share check
win_share:
name: "{{test_win_share_name}}"
state: absent
register: remove_share_check
check_mode: yes
- name: check if share is removed check
win_shell: Get-SmbShare | Where-Object { $_.Name -eq '{{test_win_share_name}}' }
register: remove_share_actual_check
- name: assert remove share check
assert:
that:
- remove_share_check is changed
- remove_share_actual_check.stdout_lines != []
- name: remove share
win_share:
name: "{{test_win_share_name}}"
state: absent
register: remove_share
- name: check if share is removed
win_shell: Get-SmbShare | Where-Object { $_.Name -eq '{{test_win_share_name}}' }
register: remove_share_actual
- name: assert remove share
assert:
that:
- remove_share is changed
- remove_share_actual.stdout_lines == []
- name: remove share again
win_share:
name: "{{test_win_share_name}}"
state: absent
register: remove_share_again
- name: check if share is removed again
win_shell: Get-SmbShare | Where-Object { $_.Name -eq '{{test_win_share_name}}' }
register: remove_share_actual_again
- name: assert remove share again
assert:
that:
- remove_share_again is not changed
- remove_share_actual_again.stdout_lines == []
```

---
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 64957
title: extract() filter fails when key does not exist in container
body:
##### SUMMARY
`extract()` filter fails when key does not exist in container
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
core filters
##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/bidord/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/bidord/dev/ansible/lib/ansible
executable location = /home/bidord/dev/ansible/bin/ansible
python version = 2.7.15+ (default, Jul 9 2019, 16:51:35) [GCC 7.4.0]
```
##### CONFIGURATION
(default)
```
```
##### OS / ENVIRONMENT
Any.
##### STEPS TO REPRODUCE
test-extract.yml:
```yaml
---
- hosts: localhost
gather_facts: false
vars:
container:
key:
subkey: value
tasks:
- ignore_errors: true
block:
- name: bad container
debug:
msg: "{{ 'key' | extract(badcontainer) | default('SUCCESS') }}"
- name: bad container, subkey
debug:
msg: "{{ 'key' | extract(badcontainer, 'subkey') | default('SUCCESS') }}"
- name: bad container, subkey as attribute
debug:
msg: "{{ ('key' | extract(badcontainer)).subkey | default('SUCCESS') }}"
- name: standard dict, bad key
debug:
msg: "{{ 'badkey' | extract(container) | default('SUCCESS') }}"
- name: standard dict, bad key, subkey
debug:
msg: "{{ 'badkey' | extract(container, 'subkey') | default('SUCCESS') }}"
- name: standard dict, bad key, subkey as attribute
debug:
msg: "{{ ('badkey' | extract(container)).subsubkey | default('SUCCESS') }}"
- name: standard dict, bad subkey
debug:
msg: "{{ 'key' | extract(container, 'badsubkey') | default('SUCCESS') }}"
- name: standard dict, bad subkey, subsubkey
debug:
msg: "{{ 'key' | extract(container, ['badsubkey', 'subsubkey']) | default('SUCCESS') }}"
- name: standard dict, bad subkey, subkey as attribute
debug:
msg: "{{ ('key' | extract(container, 'badsubkey')).subsubkey | default('SUCCESS') }}"
- name: hostvars, bad key
debug:
msg: "{{ 'badkey' | extract(hostvars) | default('SUCCESS') }}"
- name: hostvars, bad key, subkey
debug:
msg: "{{ 'badkey' | extract(hostvars, 'subkey') | default('SUCCESS') }}"
- name: hostvars, bad key, subkey as attribute
debug:
msg: "{{ ('badkey' | extract(hostvars)).subsubkey | default('SUCCESS') }}"
- name: hostvars, bad subkey
debug:
msg: "{{ 'localhost' | extract(hostvars, 'badsubkey') | default('SUCCESS') }}"
- name: hostvars, bad subkey, subsubkey
debug:
msg: "{{ 'localhost' | extract(hostvars, ['badsubkey', 'subsubkey']) | default('SUCCESS') }}"
- name: hostvars, bad subkey, subsubkey as attribute
debug:
msg: "{{ ('localhost' | extract(hostvars, 'badsubkey')).subsubkey | default('SUCCESS') }}"
```
##### EXPECTED RESULTS
All tests should print `SUCCESS`.
```
$ ANSIBLE_STDOUT_CALLBACK=yaml ansible-playbook test-extract.yml
PLAY [localhost] **************************************************************************************************
TASK [bad container] **********************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [bad container, subkey] **************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [bad container, subkey as attribute] *************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad key] *************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad key, subkey] *****************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad key, subkey as attribute] ****************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey] **********************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey, subsubkey] ***********************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey, subkey as attribute] *************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad key] ******************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad key, subkey] **********************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad key, subkey as attribute] *********************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey] ***************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey, subsubkey] ****************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey, subsubkey as attribute] ***************************************************************
ok: [localhost] =>
msg: SUCCESS
PLAY RECAP ********************************************************************************************************
localhost : ok=15 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
Some tests fail during the execution of `extract()`.
Others return `Undefined` instead of `AnsibleUndefined`, which then fails if we try to access a subkey using jinja2 `.` syntax.
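The distinction matters because jinja2's plain `Undefined` raises `UndefinedError` as soon as an attribute is accessed, while Ansible's `AnsibleUndefined` returns a further undefined object that `default()` can still catch. A minimal sketch of the difference (hand-written illustration, not playbook output):
```python
from jinja2.runtime import Undefined

u = Undefined(name='badkey')
u.subkey   # raises UndefinedError: 'badkey' is undefined
# AnsibleUndefined.__getattr__ instead returns another AnsibleUndefined,
# which is why ('key' | extract(...)).subkey | default('SUCCESS') can only
# recover when extract() propagates AnsibleUndefined.
```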
```
$ ANSIBLE_STDOUT_CALLBACK=yaml ansible-playbook test-extract.yml
PLAY [localhost] **************************************************************************************************
TASK [bad container] **********************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [bad container, subkey] **************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [bad container, subkey as attribute] *************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad key] *************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'badkey'
fatal: [localhost]: FAILED! =>
msg: Unexpected failure during module execution.
...ignoring
TASK [standard dict, bad key, subkey] *****************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'badkey'
fatal: [localhost]: FAILED! =>
msg: Unexpected failure during module execution.
...ignoring
TASK [standard dict, bad key, subkey as attribute] ****************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'badkey'
fatal: [localhost]: FAILED! =>
msg: Unexpected failure during module execution.
...ignoring
TASK [standard dict, bad subkey] **********************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey, subsubkey] ***********************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey, subkey as attribute] *************************************************************
fatal: [localhost]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: None is undefined
The error appears to be in '/home/bidord/dev/ansible-test/test-extract.yml': line 44, column 11, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: standard dict, bad subkey, subkey as attribute
^ here
...ignoring
TASK [hostvars, bad key] ******************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad key, subkey] **********************************************************************************
fatal: [localhost]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: "hostvars['badkey']" is undefined
The error appears to be in '/home/bidord/dev/ansible-test/test-extract.yml': line 52, column 11, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: hostvars, bad key, subkey
^ here
...ignoring
TASK [hostvars, bad key, subkey as attribute] *********************************************************************
fatal: [localhost]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: "hostvars['badkey']" is undefined
The error appears to be in '/home/bidord/dev/ansible-test/test-extract.yml': line 56, column 11, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: hostvars, bad key, subkey as attribute
^ here
...ignoring
TASK [hostvars, bad subkey] ***************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey, subsubkey] ****************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey, subsubkey as attribute] ***************************************************************
fatal: [localhost]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: None is undefined
The error appears to be in '/home/bidord/dev/ansible-test/test-extract.yml': line 68, column 11, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: hostvars, bad subkey, subsubkey as attribute
^ here
...ignoring
PLAY RECAP ********************************************************************************************************
localhost : ok=15 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=7
```
Edit: the actual results above were originally a wrong copy/paste from a previous version of test-extract.yml
|
https://github.com/ansible/ansible/issues/64957
|
https://github.com/ansible/ansible/pull/64959
|
94043849855d4c4f573c4844aa7ac3e797b387d7
|
03c16096d737a43166719e9b8e9f816a533200f4
| 2019-11-17T14:04:14Z |
python
| 2019-12-04T12:24:52Z |
changelogs/fragments/64959-extract-filter-when-key-does-not-exist.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,957 |
extract() filter fails when key does not exist in container
|
##### SUMMARY
`extract()` filter fails when key does not exist in container
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
core filters
##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/bidord/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/bidord/dev/ansible/lib/ansible
executable location = /home/bidord/dev/ansible/bin/ansible
python version = 2.7.15+ (default, Jul 9 2019, 16:51:35) [GCC 7.4.0]
```
##### CONFIGURATION
(default)
```
```
##### OS / ENVIRONMENT
Any.
##### STEPS TO REPRODUCE
test-extract.yml:
```yaml
---
- hosts: localhost
gather_facts: false
vars:
container:
key:
subkey: value
tasks:
- ignore_errors: true
block:
- name: bad container
debug:
msg: "{{ 'key' | extract(badcontainer) | default('SUCCESS') }}"
- name: bad container, subkey
debug:
msg: "{{ 'key' | extract(badcontainer, 'subkey') | default('SUCCESS') }}"
- name: bad container, subkey as attribute
debug:
msg: "{{ ('key' | extract(badcontainer)).subkey | default('SUCCESS') }}"
- name: standard dict, bad key
debug:
msg: "{{ 'badkey' | extract(container) | default('SUCCESS') }}"
- name: standard dict, bad key, subkey
debug:
msg: "{{ 'badkey' | extract(container, 'subkey') | default('SUCCESS') }}"
- name: standard dict, bad key, subkey as attribute
debug:
msg: "{{ ('badkey' | extract(container)).subsubkey | default('SUCCESS') }}"
- name: standard dict, bad subkey
debug:
msg: "{{ 'key' | extract(container, 'badsubkey') | default('SUCCESS') }}"
- name: standard dict, bad subkey, subsubkey
debug:
msg: "{{ 'key' | extract(container, ['badsubkey', 'subsubkey']) | default('SUCCESS') }}"
- name: standard dict, bad subkey, subkey as attribute
debug:
msg: "{{ ('key' | extract(container, 'badsubkey')).subsubkey | default('SUCCESS') }}"
- name: hostvars, bad key
debug:
msg: "{{ 'badkey' | extract(hostvars) | default('SUCCESS') }}"
- name: hostvars, bad key, subkey
debug:
msg: "{{ 'badkey' | extract(hostvars, 'subkey') | default('SUCCESS') }}"
- name: hostvars, bad key, subkey as attribute
debug:
msg: "{{ ('badkey' | extract(hostvars)).subsubkey | default('SUCCESS') }}"
- name: hostvars, bad subkey
debug:
msg: "{{ 'localhost' | extract(hostvars, 'badsubkey') | default('SUCCESS') }}"
- name: hostvars, bad subkey, subsubkey
debug:
msg: "{{ 'localhost' | extract(hostvars, ['badsubkey', 'subsubkey']) | default('SUCCESS') }}"
- name: hostvars, bad subkey, subsubkey as attribute
debug:
msg: "{{ ('localhost' | extract(hostvars, 'badsubkey')).subsubkey | default('SUCCESS') }}"
```
##### EXPECTED RESULTS
All tests should print `SUCCESS`.
```
$ ANSIBLE_STDOUT_CALLBACK=yaml ansible-playbook test-extract.yml
PLAY [localhost] **************************************************************************************************
TASK [bad container] **********************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [bad container, subkey] **************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [bad container, subkey as attribute] *************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad key] *************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad key, subkey] *****************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad key, subkey as attribute] ****************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey] **********************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey, subsubkey] ***********************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey, subkey as attribute] *************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad key] ******************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad key, subkey] **********************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad key, subkey as attribute] *********************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey] ***************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey, subsubkey] ****************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey, subsubkey as attribute] ***************************************************************
ok: [localhost] =>
msg: SUCCESS
PLAY RECAP ********************************************************************************************************
localhost : ok=15 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
Some tests fail during the execution of `extract()`.
Others return `Undefined` instead of `AnsibleUndefined`, which then fails if we try to access a subkey using jinja2 `.` syntax.
```
$ ANSIBLE_STDOUT_CALLBACK=yaml ansible-playbook test-extract.yml
PLAY [localhost] **************************************************************************************************
TASK [bad container] **********************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [bad container, subkey] **************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [bad container, subkey as attribute] *************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad key] *************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'badkey'
fatal: [localhost]: FAILED! =>
msg: Unexpected failure during module execution.
...ignoring
TASK [standard dict, bad key, subkey] *****************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'badkey'
fatal: [localhost]: FAILED! =>
msg: Unexpected failure during module execution.
...ignoring
TASK [standard dict, bad key, subkey as attribute] ****************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'badkey'
fatal: [localhost]: FAILED! =>
msg: Unexpected failure during module execution.
...ignoring
TASK [standard dict, bad subkey] **********************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey, subsubkey] ***********************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey, subkey as attribute] *************************************************************
fatal: [localhost]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: None is undefined
The error appears to be in '/home/bidord/dev/ansible-test/test-extract.yml': line 44, column 11, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: standard dict, bad subkey, subkey as attribute
^ here
...ignoring
TASK [hostvars, bad key] ******************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad key, subkey] **********************************************************************************
fatal: [localhost]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: "hostvars['badkey']" is undefined
The error appears to be in '/home/bidord/dev/ansible-test/test-extract.yml': line 52, column 11, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: hostvars, bad key, subkey
^ here
...ignoring
TASK [hostvars, bad key, subkey as attribute] *********************************************************************
fatal: [localhost]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: "hostvars['badkey']" is undefined
The error appears to be in '/home/bidord/dev/ansible-test/test-extract.yml': line 56, column 11, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: hostvars, bad key, subkey as attribute
^ here
...ignoring
TASK [hostvars, bad subkey] ***************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey, subsubkey] ****************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey, subsubkey as attribute] ***************************************************************
fatal: [localhost]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: None is undefined
The error appears to be in '/home/bidord/dev/ansible-test/test-extract.yml': line 68, column 11, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: hostvars, bad subkey, subsubkey as attribute
^ here
...ignoring
PLAY RECAP ********************************************************************************************************
localhost : ok=15 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=7
```
Edit: the actual results above were originally a wrong copy/paste from a previous version of test-extract.yml
|
https://github.com/ansible/ansible/issues/64957
|
https://github.com/ansible/ansible/pull/64959
|
94043849855d4c4f573c4844aa7ac3e797b387d7
|
03c16096d737a43166719e9b8e9f816a533200f4
| 2019-11-17T14:04:14Z |
python
| 2019-12-04T12:24:52Z |
lib/ansible/plugins/filter/core.py
|
# (c) 2012, Jeroen Hoekx <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import base64
import crypt
import glob
import hashlib
import itertools
import json
import ntpath
import os.path
import re
import string
import sys
import time
import uuid
import yaml
import datetime
from functools import partial
from random import Random, SystemRandom, shuffle
from jinja2.filters import environmentfilter, do_groupby as _do_groupby
from ansible.errors import AnsibleError, AnsibleFilterError
from ansible.module_utils.six import iteritems, string_types, integer_types, reraise
from ansible.module_utils.six.moves import reduce, shlex_quote
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.common._collections_compat import Mapping, MutableMapping
from ansible.parsing.ajson import AnsibleJSONEncoder
from ansible.parsing.yaml.dumper import AnsibleDumper
from ansible.template import recursive_check_defined
from ansible.utils.display import Display
from ansible.utils.encrypt import passlib_or_crypt
from ansible.utils.hashing import md5s, checksum_s
from ansible.utils.unicode import unicode_wrap
from ansible.utils.vars import merge_hash
display = Display()
UUID_NAMESPACE_ANSIBLE = uuid.UUID('361E6D51-FAEC-444A-9079-341386DA8E2E')
def to_yaml(a, *args, **kw):
'''Make verbose, human readable yaml'''
default_flow_style = kw.pop('default_flow_style', None)
transformed = yaml.dump(a, Dumper=AnsibleDumper, allow_unicode=True, default_flow_style=default_flow_style, **kw)
return to_text(transformed)
def to_nice_yaml(a, indent=4, *args, **kw):
'''Make verbose, human readable yaml'''
transformed = yaml.dump(a, Dumper=AnsibleDumper, indent=indent, allow_unicode=True, default_flow_style=False, **kw)
return to_text(transformed)
def to_json(a, *args, **kw):
''' Convert the value to JSON '''
return json.dumps(a, cls=AnsibleJSONEncoder, *args, **kw)
def to_nice_json(a, indent=4, sort_keys=True, *args, **kw):
'''Make verbose, human readable JSON'''
try:
return json.dumps(a, indent=indent, sort_keys=sort_keys, separators=(',', ': '), cls=AnsibleJSONEncoder, *args, **kw)
except Exception as e:
# Fallback to the to_json filter
display.warning(u'Unable to convert data using to_nice_json, falling back to to_json: %s' % to_text(e))
return to_json(a, *args, **kw)
def to_bool(a):
''' return a bool for the arg '''
if a is None or isinstance(a, bool):
return a
if isinstance(a, string_types):
a = a.lower()
if a in ('yes', 'on', '1', 'true', 1):
return True
return False
def to_datetime(string, format="%Y-%m-%d %H:%M:%S"):
return datetime.datetime.strptime(string, format)
def strftime(string_format, second=None):
''' return a date string using string. See https://docs.python.org/2/library/time.html#time.strftime for format '''
if second is not None:
try:
second = int(second)
except Exception:
raise AnsibleFilterError('Invalid value for epoch value (%s)' % second)
return time.strftime(string_format, time.localtime(second))
def quote(a):
''' return its argument quoted for shell usage '''
return shlex_quote(to_text(a))
def fileglob(pathname):
''' return list of matched regular files for glob '''
return [g for g in glob.glob(pathname) if os.path.isfile(g)]
def regex_replace(value='', pattern='', replacement='', ignorecase=False):
''' Perform a `re.sub` returning a string '''
value = to_text(value, errors='surrogate_or_strict', nonstring='simplerepr')
if ignorecase:
flags = re.I
else:
flags = 0
_re = re.compile(pattern, flags=flags)
return _re.sub(replacement, value)
def regex_findall(value, regex, multiline=False, ignorecase=False):
''' Perform re.findall and return the list of matches '''
flags = 0
if ignorecase:
flags |= re.I
if multiline:
flags |= re.M
return re.findall(regex, value, flags)
def regex_search(value, regex, *args, **kwargs):
''' Perform re.search and return the list of matches or a backref '''
groups = list()
for arg in args:
if arg.startswith('\\g'):
match = re.match(r'\\g<(\S+)>', arg).group(1)
groups.append(match)
elif arg.startswith('\\'):
match = int(re.match(r'\\(\d+)', arg).group(1))
groups.append(match)
else:
raise AnsibleFilterError('Unknown argument')
flags = 0
if kwargs.get('ignorecase'):
flags |= re.I
if kwargs.get('multiline'):
flags |= re.M
match = re.search(regex, value, flags)
if match:
if not groups:
return match.group()
else:
items = list()
for item in groups:
items.append(match.group(item))
return items
def ternary(value, true_val, false_val, none_val=None):
''' value ? true_val : false_val '''
if value is None and none_val is not None:
return none_val
elif bool(value):
return true_val
else:
return false_val
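# Example (sketch): {{ (count > 3) | ternary('many', 'few') }} returns 'many'
# when the condition is truthy; if none_val is given, a None input returns
# none_val instead of falling through to false_val.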
def regex_escape(string, re_type='python'):
'''Escape all regular expressions special characters from STRING.'''
if re_type == 'python':
return re.escape(string)
elif re_type == 'posix_basic':
# list of BRE special chars:
# https://en.wikibooks.org/wiki/Regular_Expressions/POSIX_Basic_Regular_Expressions
return regex_replace(string, r'([].[^$*\\])', r'\\\1')
# TODO: implement posix_extended
# It's similar to, but different from python regex, which is similar to,
# but different from PCRE. It's possible that re.escape would work here.
# https://remram44.github.io/regex-cheatsheet/regex.html#programs
elif re_type == 'posix_extended':
raise AnsibleFilterError('Regex type (%s) not yet implemented' % re_type)
else:
raise AnsibleFilterError('Invalid regex type (%s)' % re_type)
def from_yaml(data):
if isinstance(data, string_types):
return yaml.safe_load(data)
return data
def from_yaml_all(data):
if isinstance(data, string_types):
return yaml.safe_load_all(data)
return data
@environmentfilter
def rand(environment, end, start=None, step=None, seed=None):
if seed is None:
r = SystemRandom()
else:
r = Random(seed)
if isinstance(end, integer_types):
if not start:
start = 0
if not step:
step = 1
return r.randrange(start, end, step)
elif hasattr(end, '__iter__'):
if start or step:
raise AnsibleFilterError('start and step can only be used with integer values')
return r.choice(end)
else:
raise AnsibleFilterError('random can only be used on sequences and integers')
def randomize_list(mylist, seed=None):
try:
mylist = list(mylist)
if seed:
r = Random(seed)
r.shuffle(mylist)
else:
shuffle(mylist)
except Exception:
pass
return mylist
def get_hash(data, hashtype='sha1'):
try: # see if hash is supported
h = hashlib.new(hashtype)
except Exception:
return None
h.update(to_bytes(data, errors='surrogate_or_strict'))
return h.hexdigest()
def get_encrypted_password(password, hashtype='sha512', salt=None, salt_size=None, rounds=None):
passlib_mapping = {
'md5': 'md5_crypt',
'blowfish': 'bcrypt',
'sha256': 'sha256_crypt',
'sha512': 'sha512_crypt',
}
hashtype = passlib_mapping.get(hashtype, hashtype)
try:
return passlib_or_crypt(password, hashtype, salt=salt, salt_size=salt_size, rounds=rounds)
except AnsibleError as e:
reraise(AnsibleFilterError, AnsibleFilterError(to_native(e), orig_exc=e), sys.exc_info()[2])
def to_uuid(string, namespace=UUID_NAMESPACE_ANSIBLE):
uuid_namespace = namespace
if not isinstance(uuid_namespace, uuid.UUID):
try:
uuid_namespace = uuid.UUID(namespace)
except (AttributeError, ValueError) as e:
raise AnsibleFilterError("Invalid value '%s' for 'namespace': %s" % (to_native(namespace), to_native(e)))
# uuid.uuid5() requires bytes on Python 2 and bytes or text on Python 3
return to_text(uuid.uuid5(uuid_namespace, to_native(string, errors='surrogate_or_strict')))
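# Example (sketch): to_uuid is deterministic -- {{ 'example.com' | to_uuid }}
# always returns the same UUIDv5 derived from UUID_NAMESPACE_ANSIBLE, and a
# namespace=... argument derives under a caller-supplied namespace instead.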
def mandatory(a, msg=None):
    ''' Make a variable mandatory '''
    from jinja2.runtime import Undefined
if isinstance(a, Undefined):
if a._undefined_name is not None:
name = "'%s' " % to_text(a._undefined_name)
else:
name = ''
if msg is not None:
raise AnsibleFilterError(to_native(msg))
else:
raise AnsibleFilterError("Mandatory variable %s not defined." % name)
return a
def combine(*terms, **kwargs):
recursive = kwargs.get('recursive', False)
if len(kwargs) > 1 or (len(kwargs) == 1 and 'recursive' not in kwargs):
raise AnsibleFilterError("'recursive' is the only valid keyword argument")
dicts = []
for t in terms:
if isinstance(t, MutableMapping):
recursive_check_defined(t)
dicts.append(t)
elif isinstance(t, list):
recursive_check_defined(t)
dicts.append(combine(*t, **kwargs))
else:
raise AnsibleFilterError("|combine expects dictionaries, got " + repr(t))
if recursive:
return reduce(merge_hash, dicts)
else:
return dict(itertools.chain(*map(iteritems, dicts)))
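# Example (sketch): {{ {'a': {'x': 1}} | combine({'a': {'y': 2}}) }} returns
# {'a': {'y': 2}} because top-level keys are simply overwritten, while
# combine(..., recursive=True) merges nested mappings via merge_hash and
# returns {'a': {'x': 1, 'y': 2}}.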
def comment(text, style='plain', **kw):
# Predefined comment types
comment_styles = {
'plain': {
'decoration': '# '
},
'erlang': {
'decoration': '% '
},
'c': {
'decoration': '// '
},
'cblock': {
'beginning': '/*',
'decoration': ' * ',
'end': ' */'
},
'xml': {
'beginning': '<!--',
'decoration': ' - ',
'end': '-->'
}
}
# Pointer to the right comment type
style_params = comment_styles[style]
if 'decoration' in kw:
prepostfix = kw['decoration']
else:
prepostfix = style_params['decoration']
# Default params
p = {
'newline': '\n',
'beginning': '',
'prefix': (prepostfix).rstrip(),
'prefix_count': 1,
'decoration': '',
'postfix': (prepostfix).rstrip(),
'postfix_count': 1,
'end': ''
}
# Update default params
p.update(style_params)
p.update(kw)
# Compose substrings for the final string
str_beginning = ''
if p['beginning']:
str_beginning = "%s%s" % (p['beginning'], p['newline'])
str_prefix = ''
if p['prefix']:
if p['prefix'] != p['newline']:
str_prefix = str(
"%s%s" % (p['prefix'], p['newline'])) * int(p['prefix_count'])
else:
str_prefix = str(
"%s" % (p['newline'])) * int(p['prefix_count'])
str_text = ("%s%s" % (
p['decoration'],
# Prepend each line of the text with the decorator
text.replace(
p['newline'], "%s%s" % (p['newline'], p['decoration'])))).replace(
# Remove trailing spaces when only decorator is on the line
"%s%s" % (p['decoration'], p['newline']),
"%s%s" % (p['decoration'].rstrip(), p['newline']))
str_postfix = p['newline'].join(
[''] + [p['postfix'] for x in range(p['postfix_count'])])
str_end = ''
if p['end']:
str_end = "%s%s" % (p['newline'], p['end'])
# Return the final string
return "%s%s%s%s%s" % (
str_beginning,
str_prefix,
str_text,
str_postfix,
str_end)
def extract(item, container, morekeys=None):
from jinja2.runtime import Undefined
value = container[item]
if value is not Undefined and morekeys is not None:
if not isinstance(morekeys, list):
morekeys = [morekeys]
try:
value = reduce(lambda d, k: d[k], morekeys, value)
except KeyError:
value = Undefined()
return value
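# A sketch, not in the original file: the lookup above has two problems that
# issue 64957 documents. container[item] raises KeyError for a plain dict
# (only hostvars-style containers return Undefined), and "value is not
# Undefined" compares an *instance* against the *class*, so it is always true.
# One hedged fix, in line with the linked PR, walks every key through
# environment.getitem(), which returns the environment's undefined type
# (AnsibleUndefined in Ansible) for any missing key:
#
# @environmentfilter
# def extract(environment, item, container, morekeys=None):
#     if morekeys is None:
#         keys = [item]
#     elif isinstance(morekeys, list):
#         keys = [item] + morekeys
#     else:
#         keys = [item, morekeys]
#     value = container
#     for key in keys:
#         value = environment.getitem(value, key)
#     return value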
@environmentfilter
def do_groupby(environment, value, attribute):
"""Overridden groupby filter for jinja2, to address an issue with
jinja2>=2.9.0,<2.9.5 where a namedtuple was returned which
has repr that prevents ansible.template.safe_eval.safe_eval from being
able to parse and eval the data.
jinja2<2.9.0,>=2.9.5 is not affected, as <2.9.0 uses a tuple, and
>=2.9.5 uses a standard tuple repr on the namedtuple.
The adaptation here, is to run the jinja2 `do_groupby` function, and
cast all of the namedtuples to a regular tuple.
See https://github.com/ansible/ansible/issues/20098
We may be able to remove this in the future.
"""
return [tuple(t) for t in _do_groupby(environment, value, attribute)]
def b64encode(string, encoding='utf-8'):
return to_text(base64.b64encode(to_bytes(string, encoding=encoding, errors='surrogate_or_strict')))
def b64decode(string, encoding='utf-8'):
return to_text(base64.b64decode(to_bytes(string, errors='surrogate_or_strict')), encoding=encoding)
def flatten(mylist, levels=None):
ret = []
for element in mylist:
        if element in (None, 'None', 'null'):
            # ignore undefined items without aborting the rest of the list
            continue
elif is_sequence(element):
if levels is None:
ret.extend(flatten(element))
elif levels >= 1:
# decrement as we go down the stack
ret.extend(flatten(element, levels=(int(levels) - 1)))
else:
ret.append(element)
else:
ret.append(element)
return ret
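# Example (sketch): flatten([1, [2, [3]]]) -> [1, 2, 3], while
# flatten([1, [2, [3]]], levels=1) -> [1, 2, [3]], since the remaining depth
# is decremented on each recursion.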
def subelements(obj, subelements, skip_missing=False):
'''Accepts a dict or list of dicts, and a dotted accessor and produces a product
of the element and the results of the dotted accessor
>>> obj = [{"name": "alice", "groups": ["wheel"], "authorized": ["/tmp/alice/onekey.pub"]}]
>>> subelements(obj, 'groups')
[({'name': 'alice', 'groups': ['wheel'], 'authorized': ['/tmp/alice/onekey.pub']}, 'wheel')]
'''
if isinstance(obj, dict):
element_list = list(obj.values())
elif isinstance(obj, list):
element_list = obj[:]
else:
raise AnsibleFilterError('obj must be a list of dicts or a nested dict')
if isinstance(subelements, list):
subelement_list = subelements[:]
elif isinstance(subelements, string_types):
subelement_list = subelements.split('.')
else:
raise AnsibleFilterError('subelements must be a list or a string')
results = []
for element in element_list:
values = element
for subelement in subelement_list:
try:
values = values[subelement]
except KeyError:
if skip_missing:
values = []
break
raise AnsibleFilterError("could not find %r key in iterated item %r" % (subelement, values))
except TypeError:
raise AnsibleFilterError("the key %s should point to a dictionary, got '%s'" % (subelement, values))
if not isinstance(values, list):
raise AnsibleFilterError("the key %r should point to a list, got %r" % (subelement, values))
for value in values:
results.append((element, value))
return results
def dict_to_list_of_dict_key_value_elements(mydict, key_name='key', value_name='value'):
''' takes a dictionary and transforms it into a list of dictionaries,
with each having a 'key' and 'value' keys that correspond to the keys and values of the original '''
if not isinstance(mydict, Mapping):
raise AnsibleFilterError("dict2items requires a dictionary, got %s instead." % type(mydict))
ret = []
for key in mydict:
ret.append({key_name: key, value_name: mydict[key]})
return ret
def list_of_dict_key_value_elements_to_dict(mylist, key_name='key', value_name='value'):
''' takes a list of dicts with each having a 'key' and 'value' keys, and transforms the list into a dictionary,
effectively as the reverse of dict2items '''
if not is_sequence(mylist):
raise AnsibleFilterError("items2dict requires a list, got %s instead." % type(mylist))
return dict((item[key_name], item[value_name]) for item in mylist)
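# Example (sketch): {{ {'a': 1, 'b': 2} | dict2items }} yields
# [{'key': 'a', 'value': 1}, {'key': 'b', 'value': 2}], and piping that list
# back through items2dict restores the original mapping; key_name/value_name
# let both filters work with different field names.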
def random_mac(value, seed=None):
''' takes string prefix, and return it completed with random bytes
to get a complete 6 bytes MAC address '''
if not isinstance(value, string_types):
raise AnsibleFilterError('Invalid value type (%s) for random_mac (%s)' % (type(value), value))
value = value.lower()
mac_items = value.split(':')
if len(mac_items) > 5:
raise AnsibleFilterError('Invalid value (%s) for random_mac: 5 colon(:) separated items max' % value)
err = ""
for mac in mac_items:
if len(mac) == 0:
err += ",empty item"
continue
if not re.match('[a-f0-9]{2}', mac):
err += ",%s not hexa byte" % mac
err = err.strip(',')
if len(err):
raise AnsibleFilterError('Invalid value (%s) for random_mac: %s' % (value, err))
if seed is None:
r = SystemRandom()
else:
r = Random(seed)
    # Generate a random integer between 0x1000000000 and 0xFFFFFFFFFF
v = r.randint(68719476736, 1099511627775)
# Select first n chars to complement input prefix
remain = 2 * (6 - len(mac_items))
rnd = ('%x' % v)[:remain]
return value + re.sub(r'(..)', r':\1', rnd)
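# Example (sketch): '52:54:00' | random_mac completes the three given octets
# with three random ones, e.g. '52:54:00:3b:9f:01'; a seed=... argument makes
# the result reproducible across runs.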
class FilterModule(object):
''' Ansible core jinja2 filters '''
def filters(self):
return {
# jinja2 overrides
'groupby': do_groupby,
# base 64
'b64decode': b64decode,
'b64encode': b64encode,
# uuid
'to_uuid': to_uuid,
# json
'to_json': to_json,
'to_nice_json': to_nice_json,
'from_json': json.loads,
# yaml
'to_yaml': to_yaml,
'to_nice_yaml': to_nice_yaml,
'from_yaml': from_yaml,
'from_yaml_all': from_yaml_all,
# path
'basename': partial(unicode_wrap, os.path.basename),
'dirname': partial(unicode_wrap, os.path.dirname),
'expanduser': partial(unicode_wrap, os.path.expanduser),
'expandvars': partial(unicode_wrap, os.path.expandvars),
'realpath': partial(unicode_wrap, os.path.realpath),
'relpath': partial(unicode_wrap, os.path.relpath),
'splitext': partial(unicode_wrap, os.path.splitext),
'win_basename': partial(unicode_wrap, ntpath.basename),
'win_dirname': partial(unicode_wrap, ntpath.dirname),
'win_splitdrive': partial(unicode_wrap, ntpath.splitdrive),
# file glob
'fileglob': fileglob,
# types
'bool': to_bool,
'to_datetime': to_datetime,
# date formatting
'strftime': strftime,
# quote string for shell usage
'quote': quote,
# hash filters
# md5 hex digest of string
'md5': md5s,
# sha1 hex digest of string
'sha1': checksum_s,
# checksum of string as used by ansible for checksumming files
'checksum': checksum_s,
# generic hashing
'password_hash': get_encrypted_password,
'hash': get_hash,
# regex
'regex_replace': regex_replace,
'regex_escape': regex_escape,
'regex_search': regex_search,
'regex_findall': regex_findall,
# ? : ;
'ternary': ternary,
# random stuff
'random': rand,
'shuffle': randomize_list,
# undefined
'mandatory': mandatory,
# comment-style decoration
'comment': comment,
# debug
'type_debug': lambda o: o.__class__.__name__,
# Data structures
'combine': combine,
'extract': extract,
'flatten': flatten,
'dict2items': dict_to_list_of_dict_key_value_elements,
'items2dict': list_of_dict_key_value_elements_to_dict,
'subelements': subelements,
# Misc
'random_mac': random_mac,
}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,957 |
extract() filter fails when key does not exist in container
|
##### SUMMARY
`extract()` filter fails when key does not exist in container
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
core filters
##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/bidord/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/bidord/dev/ansible/lib/ansible
executable location = /home/bidord/dev/ansible/bin/ansible
python version = 2.7.15+ (default, Jul 9 2019, 16:51:35) [GCC 7.4.0]
```
##### CONFIGURATION
(default)
```
```
##### OS / ENVIRONMENT
Any.
##### STEPS TO REPRODUCE
test-extract.yml:
```yaml
---
- hosts: localhost
gather_facts: false
vars:
container:
key:
subkey: value
tasks:
- ignore_errors: true
block:
- name: bad container
debug:
msg: "{{ 'key' | extract(badcontainer) | default('SUCCESS') }}"
- name: bad container, subkey
debug:
msg: "{{ 'key' | extract(badcontainer, 'subkey') | default('SUCCESS') }}"
- name: bad container, subkey as attribute
debug:
msg: "{{ ('key' | extract(badcontainer)).subkey | default('SUCCESS') }}"
- name: standard dict, bad key
debug:
msg: "{{ 'badkey' | extract(container) | default('SUCCESS') }}"
- name: standard dict, bad key, subkey
debug:
msg: "{{ 'badkey' | extract(container, 'subkey') | default('SUCCESS') }}"
- name: standard dict, bad key, subkey as attribute
debug:
msg: "{{ ('badkey' | extract(container)).subsubkey | default('SUCCESS') }}"
- name: standard dict, bad subkey
debug:
msg: "{{ 'key' | extract(container, 'badsubkey') | default('SUCCESS') }}"
- name: standard dict, bad subkey, subsubkey
debug:
msg: "{{ 'key' | extract(container, ['badsubkey', 'subsubkey']) | default('SUCCESS') }}"
- name: standard dict, bad subkey, subkey as attribute
debug:
msg: "{{ ('key' | extract(container, 'badsubkey')).subsubkey | default('SUCCESS') }}"
- name: hostvars, bad key
debug:
msg: "{{ 'badkey' | extract(hostvars) | default('SUCCESS') }}"
- name: hostvars, bad key, subkey
debug:
msg: "{{ 'badkey' | extract(hostvars, 'subkey') | default('SUCCESS') }}"
- name: hostvars, bad key, subkey as attribute
debug:
msg: "{{ ('badkey' | extract(hostvars)).subsubkey | default('SUCCESS') }}"
- name: hostvars, bad subkey
debug:
msg: "{{ 'localhost' | extract(hostvars, 'badsubkey') | default('SUCCESS') }}"
- name: hostvars, bad subkey, subsubkey
debug:
msg: "{{ 'localhost' | extract(hostvars, ['badsubkey', 'subsubkey']) | default('SUCCESS') }}"
- name: hostvars, bad subkey, subsubkey as attribute
debug:
msg: "{{ ('localhost' | extract(hostvars, 'badsubkey')).subsubkey | default('SUCCESS') }}"
```
##### EXPECTED RESULTS
All tests should print `SUCCESS`.
```
$ ANSIBLE_STDOUT_CALLBACK=yaml ansible-playbook test-extract.yml
PLAY [localhost] **************************************************************************************************
TASK [bad container] **********************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [bad container, subkey] **************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [bad container, subkey as attribute] *************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad key] *************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad key, subkey] *****************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad key, subkey as attribute] ****************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey] **********************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey, subsubkey] ***********************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey, subkey as attribute] *************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad key] ******************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad key, subkey] **********************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad key, subkey as attribute] *********************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey] ***************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey, subsubkey] ****************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey, subsubkey as attribute] ***************************************************************
ok: [localhost] =>
msg: SUCCESS
PLAY RECAP ********************************************************************************************************
localhost : ok=15 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
Some tests fail during the execution of `extract()`.
Others return `Undefined` instead of `AnsibleUndefined`, which then fails if we try to access a subkey using jinja2 `.` syntax.
```
$ ANSIBLE_STDOUT_CALLBACK=yaml ansible-playbook test-extract.yml
PLAY [localhost] **************************************************************************************************
TASK [bad container] **********************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [bad container, subkey] **************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [bad container, subkey as attribute] *************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad key] *************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'badkey'
fatal: [localhost]: FAILED! =>
msg: Unexpected failure during module execution.
...ignoring
TASK [standard dict, bad key, subkey] *****************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'badkey'
fatal: [localhost]: FAILED! =>
msg: Unexpected failure during module execution.
...ignoring
TASK [standard dict, bad key, subkey as attribute] ****************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'badkey'
fatal: [localhost]: FAILED! =>
msg: Unexpected failure during module execution.
...ignoring
TASK [standard dict, bad subkey] **********************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey, subsubkey] ***********************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey, subkey as attribute] *************************************************************
fatal: [localhost]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: None is undefined
The error appears to be in '/home/bidord/dev/ansible-test/test-extract.yml': line 44, column 11, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: standard dict, bad subkey, subkey as attribute
^ here
...ignoring
TASK [hostvars, bad key] ******************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad key, subkey] **********************************************************************************
fatal: [localhost]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: "hostvars['badkey']" is undefined
The error appears to be in '/home/bidord/dev/ansible-test/test-extract.yml': line 52, column 11, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: hostvars, bad key, subkey
^ here
...ignoring
TASK [hostvars, bad key, subkey as attribute] *********************************************************************
fatal: [localhost]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: "hostvars['badkey']" is undefined
The error appears to be in '/home/bidord/dev/ansible-test/test-extract.yml': line 56, column 11, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: hostvars, bad key, subkey as attribute
^ here
...ignoring
TASK [hostvars, bad subkey] ***************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey, subsubkey] ****************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey, subsubkey as attribute] ***************************************************************
fatal: [localhost]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: None is undefined
The error appears to be in '/home/bidord/dev/ansible-test/test-extract.yml': line 68, column 11, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: hostvars, bad subkey, subsubkey as attribute
^ here
...ignoring
PLAY RECAP ********************************************************************************************************
localhost : ok=15 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=7
```
Edit: the actual results above were originally a wrong copy/paste from a previous version of test-extract.yml
|
https://github.com/ansible/ansible/issues/64957
|
https://github.com/ansible/ansible/pull/64959
|
94043849855d4c4f573c4844aa7ac3e797b387d7
|
03c16096d737a43166719e9b8e9f816a533200f4
| 2019-11-17T14:04:14Z |
python
| 2019-12-04T12:24:52Z |
lib/ansible/vars/hostvars.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from jinja2.runtime import Undefined
from ansible.module_utils.common._collections_compat import Mapping
from ansible.template import Templar
STATIC_VARS = [
'ansible_version',
'ansible_play_hosts',
'ansible_dependent_role_names',
'ansible_play_role_names',
'ansible_role_names',
'inventory_hostname',
'inventory_hostname_short',
'inventory_file',
'inventory_dir',
'groups',
'group_names',
'omit',
'playbook_dir',
'play_hosts',
'role_names',
'ungrouped',
]
__all__ = ['HostVars', 'HostVarsVars']
# Note -- this is a Mapping, not a MutableMapping
class HostVars(Mapping):
''' A special view of vars_cache that adds values from the inventory when needed. '''
def __init__(self, inventory, variable_manager, loader):
self._lookup = dict()
self._inventory = inventory
self._loader = loader
self._variable_manager = variable_manager
variable_manager._hostvars = self
def set_variable_manager(self, variable_manager):
self._variable_manager = variable_manager
variable_manager._hostvars = self
def set_inventory(self, inventory):
self._inventory = inventory
def _find_host(self, host_name):
# does not use inventory.hosts so it can create localhost on demand
return self._inventory.get_host(host_name)
def raw_get(self, host_name):
'''
Similar to __getitem__, however the returned data is not run through
the templating engine to expand variables in the hostvars.
'''
host = self._find_host(host_name)
if host is None:
return Undefined(name="hostvars['%s']" % host_name)
return self._variable_manager.get_vars(host=host, include_hostvars=False)
def __getitem__(self, host_name):
data = self.raw_get(host_name)
if isinstance(data, Undefined):
return data
return HostVarsVars(data, loader=self._loader)
def set_host_variable(self, host, varname, value):
self._variable_manager.set_host_variable(host, varname, value)
def set_nonpersistent_facts(self, host, facts):
self._variable_manager.set_nonpersistent_facts(host, facts)
def set_host_facts(self, host, facts):
self._variable_manager.set_host_facts(host, facts)
def __contains__(self, host_name):
# does not use inventory.hosts so it can create localhost on demand
return self._find_host(host_name) is not None
def __iter__(self):
for host in self._inventory.hosts:
yield host
def __len__(self):
return len(self._inventory.hosts)
def __repr__(self):
out = {}
for host in self._inventory.hosts:
out[host] = self.get(host)
return repr(out)
def __deepcopy__(self, memo):
# We do not need to deepcopy because HostVars is immutable,
# however we have to implement the method so we can deepcopy
# variables' dicts that contain HostVars.
return self
class HostVarsVars(Mapping):
def __init__(self, variables, loader):
self._vars = variables
self._loader = loader
def __getitem__(self, var):
templar = Templar(variables=self._vars, loader=self._loader)
        value = templar.template(self._vars[var], fail_on_undefined=False, static_vars=STATIC_VARS)
        return value
def __contains__(self, var):
return (var in self._vars)
def __iter__(self):
for var in self._vars.keys():
yield var
def __len__(self):
return len(self._vars.keys())
def __repr__(self):
templar = Templar(variables=self._vars, loader=self._loader)
return repr(templar.template(self._vars, fail_on_undefined=False, static_vars=STATIC_VARS))
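# A sketch note, not in the original file: raw_get() above returns a plain
# jinja2 Undefined (with a helpful name) for unknown hosts rather than
# AnsibleUndefined. Together with the extract() filter in
# lib/ansible/plugins/filter/core.py, this is why, in issue 64957,
#   {{ ('badkey' | extract(hostvars)).subkey | default('SUCCESS') }}
# fails: attribute access on jinja2's Undefined raises immediately instead of
# propagating an undefined value that default() could still catch.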
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,957 |
extract() filter fails when key does not exist in container
|
##### SUMMARY
`extract()` filter fails when key does not exist in container
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
core filters
##### ANSIBLE VERSION
```
ansible 2.10.0.dev0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/bidord/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/bidord/dev/ansible/lib/ansible
executable location = /home/bidord/dev/ansible/bin/ansible
python version = 2.7.15+ (default, Jul 9 2019, 16:51:35) [GCC 7.4.0]
```
##### CONFIGURATION
(default)
```
```
##### OS / ENVIRONMENT
Any.
##### STEPS TO REPRODUCE
test-extract.yml:
```yaml
---
- hosts: localhost
gather_facts: false
vars:
container:
key:
subkey: value
tasks:
- ignore_errors: true
block:
- name: bad container
debug:
msg: "{{ 'key' | extract(badcontainer) | default('SUCCESS') }}"
- name: bad container, subkey
debug:
msg: "{{ 'key' | extract(badcontainer, 'subkey') | default('SUCCESS') }}"
- name: bad container, subkey as attribute
debug:
msg: "{{ ('key' | extract(badcontainer)).subkey | default('SUCCESS') }}"
- name: standard dict, bad key
debug:
msg: "{{ 'badkey' | extract(container) | default('SUCCESS') }}"
- name: standard dict, bad key, subkey
debug:
msg: "{{ 'badkey' | extract(container, 'subkey') | default('SUCCESS') }}"
- name: standard dict, bad key, subkey as attribute
debug:
msg: "{{ ('badkey' | extract(container)).subsubkey | default('SUCCESS') }}"
- name: standard dict, bad subkey
debug:
msg: "{{ 'key' | extract(container, 'badsubkey') | default('SUCCESS') }}"
- name: standard dict, bad subkey, subsubkey
debug:
msg: "{{ 'key' | extract(container, ['badsubkey', 'subsubkey']) | default('SUCCESS') }}"
- name: standard dict, bad subkey, subkey as attribute
debug:
msg: "{{ ('key' | extract(container, 'badsubkey')).subsubkey | default('SUCCESS') }}"
- name: hostvars, bad key
debug:
msg: "{{ 'badkey' | extract(hostvars) | default('SUCCESS') }}"
- name: hostvars, bad key, subkey
debug:
msg: "{{ 'badkey' | extract(hostvars, 'subkey') | default('SUCCESS') }}"
- name: hostvars, bad key, subkey as attribute
debug:
msg: "{{ ('badkey' | extract(hostvars)).subsubkey | default('SUCCESS') }}"
- name: hostvars, bad subkey
debug:
msg: "{{ 'localhost' | extract(hostvars, 'badsubkey') | default('SUCCESS') }}"
- name: hostvars, bad subkey, subsubkey
debug:
msg: "{{ 'localhost' | extract(hostvars, ['badsubkey', 'subsubkey']) | default('SUCCESS') }}"
- name: hostvars, bad subkey, subsubkey as attribute
debug:
msg: "{{ ('localhost' | extract(hostvars, 'badsubkey')).subsubkey | default('SUCCESS') }}"
```
##### EXPECTED RESULTS
All tests should print `SUCCESS`.
```
$ ANSIBLE_STDOUT_CALLBACK=yaml ansible-playbook test-extract.yml
PLAY [localhost] **************************************************************************************************
TASK [bad container] **********************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [bad container, subkey] **************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [bad container, subkey as attribute] *************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad key] *************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad key, subkey] *****************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad key, subkey as attribute] ****************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey] **********************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey, subsubkey] ***********************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey, subkey as attribute] *************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad key] ******************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad key, subkey] **********************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad key, subkey as attribute] *********************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey] ***************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey, subsubkey] ****************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey, subsubkey as attribute] ***************************************************************
ok: [localhost] =>
msg: SUCCESS
PLAY RECAP ********************************************************************************************************
localhost : ok=15 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
##### ACTUAL RESULTS
Some tests fail during the execution of `extract()`.
Others return `Undefined` instead of `AnsibleUndefined`, which then fails if we try to access a subkey using jinja2 `.` syntax.
```
$ ANSIBLE_STDOUT_CALLBACK=yaml ansible-playbook test-extract.yml
PLAY [localhost] **************************************************************************************************
TASK [bad container] **********************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [bad container, subkey] **************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [bad container, subkey as attribute] *************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad key] *************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'badkey'
fatal: [localhost]: FAILED! =>
msg: Unexpected failure during module execution.
...ignoring
TASK [standard dict, bad key, subkey] *****************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'badkey'
fatal: [localhost]: FAILED! =>
msg: Unexpected failure during module execution.
...ignoring
TASK [standard dict, bad key, subkey as attribute] ****************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'badkey'
fatal: [localhost]: FAILED! =>
msg: Unexpected failure during module execution.
...ignoring
TASK [standard dict, bad subkey] **********************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey, subsubkey] ***********************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [standard dict, bad subkey, subkey as attribute] *************************************************************
fatal: [localhost]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: None is undefined
The error appears to be in '/home/bidord/dev/ansible-test/test-extract.yml': line 44, column 11, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: standard dict, bad subkey, subkey as attribute
^ here
...ignoring
TASK [hostvars, bad key] ******************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad key, subkey] **********************************************************************************
fatal: [localhost]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: "hostvars['badkey']" is undefined
The error appears to be in '/home/bidord/dev/ansible-test/test-extract.yml': line 52, column 11, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: hostvars, bad key, subkey
^ here
...ignoring
TASK [hostvars, bad key, subkey as attribute] *********************************************************************
fatal: [localhost]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: "hostvars['badkey']" is undefined
The error appears to be in '/home/bidord/dev/ansible-test/test-extract.yml': line 56, column 11, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: hostvars, bad key, subkey as attribute
^ here
...ignoring
TASK [hostvars, bad subkey] ***************************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey, subsubkey] ****************************************************************************
ok: [localhost] =>
msg: SUCCESS
TASK [hostvars, bad subkey, subsubkey as attribute] ***************************************************************
fatal: [localhost]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: None is undefined
The error appears to be in '/home/bidord/dev/ansible-test/test-extract.yml': line 68, column 11, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: hostvars, bad subkey, subsubkey as attribute
^ here
...ignoring
PLAY RECAP ********************************************************************************************************
localhost : ok=15 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=7
```
Edit: the actual results above were originally a wrong copy/paste from a previous version of test-extract.yml
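For context, a minimal sketch of how the filter can avoid raising on missing keys is shown below. This is an editorial illustration that assumes `extract()` is registered as a Jinja2 environment filter; it is not claimed to be the exact patch from the linked PR.
```python
# Minimal sketch, assuming registration via Jinja2's environmentfilter
# decorator; illustrative only, not necessarily the shipped fix.
from jinja2 import environmentfilter


@environmentfilter
def extract(environment, item, container, morekeys=None):
    # Normalize the optional extra keys into a single lookup chain.
    if morekeys is None:
        keys = [item]
    elif isinstance(morekeys, list):
        keys = [item] + morekeys
    else:
        keys = [item, morekeys]

    value = container
    for key in keys:
        # environment.getitem() returns an Undefined object for a missing
        # key instead of raising KeyError, so `| default(...)` and attribute
        # access on the result can degrade gracefully.
        value = environment.getitem(value, key)
    return value
```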
|
https://github.com/ansible/ansible/issues/64957
|
https://github.com/ansible/ansible/pull/64959
|
94043849855d4c4f573c4844aa7ac3e797b387d7
|
03c16096d737a43166719e9b8e9f816a533200f4
| 2019-11-17T14:04:14Z |
python
| 2019-12-04T12:24:52Z |
test/integration/targets/filters/tasks/main.yml
|
# test code for filters
# Copyright: (c) 2014, Michael DeHaan <[email protected]>
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
- set_fact:
output_dir: "{{ lookup('env', 'OUTPUT_DIR') }}"
- name: a dummy task to test the changed and success filters
shell: echo hi
register: some_registered_var
- debug:
var: some_registered_var
- name: Verify that we workaround a py26 json bug
template:
src: py26json.j2
dest: "{{ output_dir }}/py26json.templated"
mode: 0644
- name: 9851 - Verify that we don't trigger https://github.com/ansible/ansible/issues/9851
copy:
content: " [{{ item | to_nice_json }}]"
dest: "{{ output_dir }}/9851.out"
with_items:
- {"k": "Quotes \"'\n"}
- name: 9851 - copy known good output into place
copy:
src: 9851.txt
dest: "{{ output_dir }}/9851.txt"
- name: 9851 - Compare generated json to known good
shell: diff -w {{ output_dir }}/9851.out {{ output_dir }}/9851.txt
register: diff_result_9851
- name: 9851 - verify generated file matches known good
assert:
that:
- 'diff_result_9851.stdout == ""'
- name: fill in a basic template
template:
src: foo.j2
dest: "{{ output_dir }}/foo.templated"
mode: 0644
register: template_result
- name: copy known good into place
copy:
src: foo.txt
dest: "{{ output_dir }}/foo.txt"
- name: compare templated file to known good
shell: diff -w {{ output_dir }}/foo.templated {{ output_dir }}/foo.txt
register: diff_result
- name: verify templated file matches known good
assert:
that:
- 'diff_result.stdout == ""'
- name: Verify human_readable
tags: "human_readable"
assert:
that:
- '"1.00 Bytes" == 1|human_readable'
- '"1.00 bits" == 1|human_readable(isbits=True)'
- '"10.00 KB" == 10240|human_readable'
- '"97.66 MB" == 102400000|human_readable'
- '"0.10 GB" == 102400000|human_readable(unit="G")'
- '"0.10 Gb" == 102400000|human_readable(isbits=True, unit="G")'
- name: Verify human_to_bytes
tags: "human_to_bytes"
assert:
that:
- "{{'0'|human_to_bytes}} == 0"
- "{{'0.1'|human_to_bytes}} == 0"
- "{{'0.9'|human_to_bytes}} == 1"
- "{{'1'|human_to_bytes}} == 1"
- "{{'10.00 KB'|human_to_bytes}} == 10240"
- "{{ '11 MB'|human_to_bytes}} == 11534336"
- "{{ '1.1 GB'|human_to_bytes}} == 1181116006"
- "{{'10.00 Kb'|human_to_bytes(isbits=True)}} == 10240"
- name: Verify human_to_bytes (bad string)
set_fact:
bad_string: "{{ '10.00 foo' | human_to_bytes }}"
ignore_errors: yes
tags: human_to_bytes
register: _human_bytes_test
- name: Verify human_to_bytes (bad string)
tags: human_to_bytes
assert:
that: "{{_human_bytes_test.failed}}"
- name: Test extract
assert:
that:
- '"c" == 2 | extract(["a", "b", "c"])'
- '"b" == 1 | extract(["a", "b", "c"])'
- '"a" == 0 | extract(["a", "b", "c"])'
- name: Container lookups with extract
assert:
that:
- "'x' == [0]|map('extract',['x','y'])|list|first"
- "'y' == [1]|map('extract',['x','y'])|list|first"
- "42 == ['x']|map('extract',{'x':42,'y':31})|list|first"
- "31 == ['x','y']|map('extract',{'x':42,'y':31})|list|last"
- "'local' == ['localhost']|map('extract',hostvars,'ansible_connection')|list|first"
- "'local' == ['localhost']|map('extract',hostvars,['ansible_connection'])|list|first"
# map was added to jinja2 in version 2.7
when: "{{ ( lookup('pipe', '{{ ansible_python[\"executable\"] }} -c \"import jinja2; print(jinja2.__version__)\"') is version('2.7', '>=') ) }}"
- name: Test json_query filter
assert:
that:
- "users | json_query('[*].hosts[].host') == ['host_a', 'host_b', 'host_c', 'host_d']"
- name: Test hash filter
assert:
that:
- '"{{ "hash" | hash("sha1") }}" == "2346ad27d7568ba9896f1b7da6b5991251debdf2"'
- '"{{ "cafΓ©" | hash("sha1") }}" == "f424452a9673918c6f09b0cdd35b20be8e6ae7d7"'
- debug:
var: "'http://mary:[email protected]:9000/dir/index.html?query=term#fragment' | urlsplit"
verbosity: 1
tags: debug
- name: Test urlsplit filter
assert:
that:
- "'http://mary:[email protected]:9000/dir/index.html?query=term#fragment' | urlsplit('fragment') == 'fragment'"
- "'http://mary:[email protected]:9000/dir/index.html?query=term#fragment' | urlsplit('hostname') == 'www.acme.com'"
- "'http://mary:[email protected]:9000/dir/index.html?query=term#fragment' | urlsplit('netloc') == 'mary:[email protected]:9000'"
- "'http://mary:[email protected]:9000/dir/index.html?query=term#fragment' | urlsplit('path') == '/dir/index.html'"
- "'http://mary:[email protected]:9000/dir/index.html?query=term#fragment' | urlsplit('port') == 9000"
- "'http://mary:[email protected]:9000/dir/index.html?query=term#fragment' | urlsplit('query') == 'query=term'"
- "'http://mary:[email protected]:9000/dir/index.html?query=term#fragment' | urlsplit('scheme') == 'http'"
- "'http://mary:[email protected]:9000/dir/index.html?query=term#fragment' | urlsplit('username') == 'mary'"
- "'http://mary:[email protected]:9000/dir/index.html?query=term#fragment' | urlsplit('password') == 'MySecret'"
- "'http://mary:[email protected]:9000/dir/index.html?query=term#fragment' | urlsplit == { 'fragment': 'fragment', 'hostname': 'www.acme.com', 'netloc': 'mary:[email protected]:9000', 'password': 'MySecret', 'path': '/dir/index.html', 'port': 9000, 'query': 'query=term', 'scheme': 'http', 'username': 'mary' }"
- name: Test urlsplit filter bad argument
debug:
var: "'http://www.acme.com:9000/dir/index.html' | urlsplit('bad_filter')"
register: _bad_urlsplit_filter
ignore_errors: yes
- name: Verify urlsplit filter showed an error message
assert:
that:
- _bad_urlsplit_filter is failed
- "'unknown URL component' in _bad_urlsplit_filter.msg"
- name: Test urldecode filter
set_fact:
urldecoded_string: key="@{}é&%£ foo bar '(;\<>""°)
- name: Test urlencode filter
set_fact:
urlencoded_string: '{{ urldecoded_string|urlencode }}'
- name: Verify urlencode and urldecode
assert:
that:
- urldecoded_string == urlencoded_string|urldecode
- name: Flatten tests
block:
- name: use flatten
set_fact:
flat_full: '{{orig_list|flatten}}'
flat_one: '{{orig_list|flatten(levels=1)}}'
flat_two: '{{orig_list|flatten(levels=2)}}'
flat_tuples: '{{ [1,3] | zip([2,4]) | list | flatten }}'
- name: Verify flatten filter works as expected
assert:
that:
- flat_full == [1, 2, 3, 4, 5, 6, 7]
- flat_one == [1, 2, 3, [4, [5]], 6, 7]
- flat_two == [1, 2, 3, 4, [5], 6, 7]
- flat_tuples == [1, 2, 3, 4]
vars:
orig_list: [1, 2, [3, [4, [5]], 6], 7]
- name: Test base64 filter
assert:
that:
- "'Ansible - γγγ¨γΏ\n' | b64encode == 'QW5zaWJsZSAtIOOBj+OCieOBqOOBvwo='"
- "'QW5zaWJsZSAtIOOBj+OCieOBqOOBvwo=' | b64decode == 'Ansible - γγγ¨γΏ\n'"
- "'Ansible - γγγ¨γΏ\n' | b64encode(encoding='utf-16-le') == 'QQBuAHMAaQBiAGwAZQAgAC0AIABPMIkwaDB/MAoA'"
- "'QQBuAHMAaQBiAGwAZQAgAC0AIABPMIkwaDB/MAoA' | b64decode(encoding='utf-16-le') == 'Ansible - γγγ¨γΏ\n'"
- name: Test random_mac filter bad argument type
debug:
var: "0 | random_mac"
register: _bad_random_mac_filter
ignore_errors: yes
- name: Verify random_mac filter showed a bad argument type error message
assert:
that:
- _bad_random_mac_filter is failed
- "_bad_random_mac_filter.msg is match('Invalid value type (.*int.*) for random_mac .*')"
- name: Test random_mac filter bad argument value
debug:
var: "'dummy' | random_mac"
register: _bad_random_mac_filter
ignore_errors: yes
- name: Verify random_mac filter showed a bad argument value error message
assert:
that:
- _bad_random_mac_filter is failed
- "_bad_random_mac_filter.msg is match('Invalid value (.*) for random_mac: .* not hexa byte')"
- name: Test random_mac filter prefix too big
debug:
var: "'00:00:00:00:00:00' | random_mac"
register: _bad_random_mac_filter
ignore_errors: yes
- name: Verify random_mac filter showed a prefix too big error message
assert:
that:
- _bad_random_mac_filter is failed
- "_bad_random_mac_filter.msg is match('Invalid value (.*) for random_mac: 5 colon.* separated items max')"
- name: Verify random_mac filter
assert:
that:
- "'00' | random_mac is match('^00:[a-f0-9][a-f0-9]:[a-f0-9][a-f0-9]:[a-f0-9][a-f0-9]:[a-f0-9][a-f0-9]:[a-f0-9][a-f0-9]$')"
- "'00:00' | random_mac is match('^00:00:[a-f0-9][a-f0-9]:[a-f0-9][a-f0-9]:[a-f0-9][a-f0-9]:[a-f0-9][a-f0-9]$')"
- "'00:00:00' | random_mac is match('^00:00:00:[a-f0-9][a-f0-9]:[a-f0-9][a-f0-9]:[a-f0-9][a-f0-9]$')"
- "'00:00:00:00' | random_mac is match('^00:00:00:[a-f0-9][a-f0-9]:[a-f0-9][a-f0-9]:[a-f0-9][a-f0-9]$')"
- "'00:00:00:00:00' | random_mac is match('^00:00:00:00:00:[a-f0-9][a-f0-9]$')"
- "'00:00:00' | random_mac != '00:00:00' | random_mac"
- name: Verify random_mac filter with seed
assert:
that:
- "'00:00:00' | random_mac(seed='test') == '00:00:00' | random_mac(seed='test')"
- "'00:00:00' | random_mac(seed='test') != '00:00:00' | random_mac(seed='another_test')"
- name: Verify that union can be chained
vars:
unions: '{{ [1,2,3]|union([4,5])|union([6,7]) }}'
assert:
that:
- "unions|type_debug == 'list'"
- "unions|length == 7"
- name: Test union with unhashable item
vars:
unions: '{{ [1,2,3]|union([{}]) }}'
assert:
that:
- "unions|type_debug == 'list'"
- "unions|length == 4"
- name: Test ipaddr filter
assert:
that:
- "'192.168.0.1/32' | ipaddr('netmask') == '255.255.255.255'"
- "'192.168.0.1/24' | ipaddr('netmask') == '255.255.255.0'"
- "not '192.168.0.1/31' | ipaddr('broadcast')"
- "'192.168.0.1/24' | ipaddr('broadcast') == '192.168.0.255'"
- "'192.168.0.1/24' | ipaddr('prefix') == 24"
- "'192.168.0.1/24' | ipaddr('address') == '192.168.0.1'"
- "'192.168.0.1/24' | ipaddr('network') == '192.168.0.0'"
- "'fe80::dead:beef/64' | ipaddr('broadcast') == 'fe80::ffff:ffff:ffff:ffff'"
- "'::1/120' | ipaddr('netmask') == 'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ff00'"
- "{{ subnets | ipaddr(1) }} == ['10.1.1.1/24', '10.1.2.1/24']"
- "{{ subnets | ipaddr('1') }} == ['10.1.1.1/24', '10.1.2.1/24']"
- "{{ subnets | ipaddr(-1) }} == ['10.1.1.255/24', '10.1.2.255/24']"
- "{{ subnets | ipaddr('-1') }} == ['10.1.1.255/24', '10.1.2.255/24']"
- "'{{ prefix | ipaddr(1) }}' == '10.1.1.1/24'"
- "'{{ prefix | ipaddr('1') }}' == '10.1.1.1/24'"
- "'{{ prefix | ipaddr('network') }}' == '10.1.1.0'"
- "'{{ prefix | ipaddr('-1') }}' == '10.1.1.255/24'"
vars:
subnets: ['10.1.1.0/24', '10.1.2.0/24']
prefix: '10.1.1.0/24'
- name: Ensure dict2items works with hostvars
debug:
msg: "{{ item.key }}"
loop: "{{ hostvars|dict2items }}"
loop_control:
label: "{{ item.key }}"
- name: Ensure combining two dictionaries containing undefined variables provides a helpful error
block:
- set_fact:
foo:
key1: value1
- set_fact:
combined: "{{ foo | combine({'key2': undef_variable}) }}"
ignore_errors: yes
register: result
- assert:
that:
- "result.msg.startswith('The task includes an option with an undefined variable')"
- set_fact:
combined: "{{ foo | combine({'key2': {'nested': [undef_variable]}})}}"
ignore_errors: yes
register: result
- assert:
that:
- "result.msg.startswith('The task includes an option with an undefined variable')"
- set_fact:
key2: is_defined
- set_fact:
combined: "{{ foo | combine({'key2': key2}) }}"
- assert:
that:
- "combined.key2 == 'is_defined'"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,654 |
[WARNING]: template parsing did not produce documentation
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
$ansible-doc -l | grep UNDOCUMENTED
[WARNING]: template parsing did not produce documentation.
[WARNING]: win_template parsing did not produce documentation.
template UNDOCUMENTED
win_template UNDOCUMENTED
There are 2 undocumented modules.
$ansible --version
ansible 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/smith/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible-2.9.0-py3.7.egg/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 21 2019, 7:17:39) [GCC 7.4.0]
#
|
https://github.com/ansible/ansible/issues/64654
|
https://github.com/ansible/ansible/pull/65230
|
c04fc52aadd1ae5f29611590e98adabdd83ffdd1
|
770430fd071ce4adf068a9acbe7558198baedf34
| 2019-11-10T14:37:43Z |
python
| 2019-12-04T20:23:45Z |
lib/ansible/modules/files/template.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# This is a virtual module that is entirely implemented as an action plugin and runs on the controller
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: template
version_added: historical
options:
follow:
description:
- Determine whether symbolic links should be followed.
- When set to C(yes) symbolic links will be followed, if they exist.
- When set to C(no) symbolic links will not be followed.
- Previous to Ansible 2.4, this was hardcoded as C(yes).
type: bool
default: no
version_added: '2.4'
notes:
- You can use the M(copy) module with the C(content:) option if you prefer the template inline,
as part of the playbook.
- For Windows you can use M(win_template) which uses '\\r\\n' as C(newline_sequence) by default.
seealso:
- module: copy
- module: win_copy
- module: win_template
author:
- Ansible Core Team
- Michael DeHaan
extends_documentation_fragment:
- backup
- files
- template_common
- validate
'''
EXAMPLES = r'''
- name: Template a file to /etc/files.conf
template:
src: /mytemplates/foo.j2
dest: /etc/file.conf
owner: bin
group: wheel
mode: '0644'
- name: Template a file, using symbolic modes (equivalent to 0644)
template:
src: /mytemplates/foo.j2
dest: /etc/file.conf
owner: bin
group: wheel
mode: u=rw,g=r,o=r
- name: Copy a version of named.conf that is dependent on the OS. setype obtained by doing ls -Z /etc/named.conf on original file
template:
src: named.conf_{{ ansible_os_family }}.j2
dest: /etc/named.conf
group: named
setype: named_conf_t
mode: 0640
- name: Create a DOS-style text file from a template
template:
src: config.ini.j2
dest: /share/windows/config.ini
newline_sequence: '\r\n'
- name: Copy a new sudoers file into place, after passing validation with visudo
template:
src: /mine/sudoers
dest: /etc/sudoers
validate: /usr/sbin/visudo -cf %s
- name: Update sshd configuration safely, avoid locking yourself out
template:
src: etc/ssh/sshd_config.j2
dest: /etc/ssh/sshd_config
owner: root
group: root
mode: '0600'
validate: /usr/sbin/sshd -t -f %s
backup: yes
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,654 |
[WARNING]: template parsing did not produce documentation
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
$ansible-doc -l | grep UNDOCUMENTED
[WARNING]: template parsing did not produce documentation.
[WARNING]: win_template parsing did not produce documentation.
template UNDOCUMENTED
win_template UNDOCUMENTED
There are 2 undocumented modules.
$ansible --version
ansible 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/smith/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible-2.9.0-py3.7.egg/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 21 2019, 7:17:39) [GCC 7.4.0]
#
|
https://github.com/ansible/ansible/issues/64654
|
https://github.com/ansible/ansible/pull/65230
|
c04fc52aadd1ae5f29611590e98adabdd83ffdd1
|
770430fd071ce4adf068a9acbe7558198baedf34
| 2019-11-10T14:37:43Z |
python
| 2019-12-04T20:23:45Z |
lib/ansible/modules/windows/win_template.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# this is a virtual module that is entirely implemented server side
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: win_template
version_added: "1.9.2"
options:
backup:
description:
- Determine whether a backup should be created.
- When set to C(yes), create a backup file including the timestamp information
so you can get the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
version_added: '2.8'
newline_sequence:
default: '\r\n'
force:
version_added: '2.4'
notes:
- Beware fetching files from windows machines when creating templates because certain tools, such as Powershell ISE,
and regedit's export facility add a Byte Order Mark as the first character of the file, which can cause tracebacks.
- You can use the M(win_copy) module with the C(content:) option if you prefer the template inline, as part of the
playbook.
- For Linux you can use M(template) which uses '\\n' as C(newline_sequence) by default.
seealso:
- module: win_copy
- module: copy
- module: template
author:
- Jon Hawkesworth (@jhawkesworth)
extends_documentation_fragment:
- template_common
'''
EXAMPLES = r'''
- name: Create a file from a Jinja2 template
win_template:
src: /mytemplates/file.conf.j2
dest: C:\Temp\file.conf
- name: Create a Unix-style file from a Jinja2 template
win_template:
src: unix/config.conf.j2
dest: C:\share\unix\config.conf
newline_sequence: '\n'
backup: yes
'''
RETURN = r'''
backup_file:
description: Name of the backup file that was created.
returned: if backup=yes
type: str
sample: C:\Path\To\File.txt.11540.20150212-220915.bak
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,654 |
[WARNING]: template parsing did not produce documentation
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
```
$ansible-doc -l | grep UNDOCUMENTED
[WARNING]: template parsing did not produce documentation.
[WARNING]: win_template parsing did not produce documentation.
template UNDOCUMENTED
win_template UNDOCUMENTED
There are 2 undocumented modules.
$ansible --version
ansible 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/smith/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible-2.9.0-py3.7.egg/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 21 2019, 7:17:39) [GCC 7.4.0]
#
|
https://github.com/ansible/ansible/issues/64654
|
https://github.com/ansible/ansible/pull/65230
|
c04fc52aadd1ae5f29611590e98adabdd83ffdd1
|
770430fd071ce4adf068a9acbe7558198baedf34
| 2019-11-10T14:37:43Z |
python
| 2019-12-04T20:23:45Z |
lib/ansible/plugins/doc_fragments/template_common.py
|
# -*- coding: utf-8 -*-
# Copyright (c) 2019 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
# Standard template documentation fragment, used by template and win_template.
DOCUMENTATION = r'''
short_description: Template a file out to a remote server
description:
- Templates are processed by the L(Jinja2 templating language,http://jinja.pocoo.org/docs/).
- Documentation on the template formatting can be found in the
L(Template Designer Documentation,http://jinja.pocoo.org/docs/templates/).
- Additional variables listed below can be used in templates.
- C(ansible_managed) (configurable via the C(defaults) section of C(ansible.cfg)) contains a string which can be used to
describe the template name, host, modification time of the template file and the owner uid.
- C(template_host) contains the node name of the template's machine.
- C(template_uid) is the numeric user id of the owner.
- C(template_path) is the path of the template.
- C(template_fullpath) is the absolute path of the template.
- C(template_destpath) is the path of the template on the remote system (added in 2.8).
- C(template_run_date) is the date that the template was rendered.
options:
src:
description:
- Path of a Jinja2 formatted template on the Ansible controller.
- This can be a relative or an absolute path.
- The file must be encoded with C(utf-8) but I(output_encoding) can be used to control the encoding of the output
template.
type: path
required: yes
dest:
description:
- Location to render the template to on the remote machine.
type: path
required: yes
newline_sequence:
description:
- Specify the newline sequence to use for templating files.
type: str
choices: [ '\n', '\r', '\r\n' ]
default: '\n'
version_added: '2.4'
block_start_string:
description:
- The string marking the beginning of a block.
type: str
default: '{%'
version_added: '2.4'
block_end_string:
description:
- The string marking the end of a block.
type: str
default: '%}'
version_added: '2.4'
variable_start_string:
description:
- The string marking the beginning of a print statement.
type: str
default: '{{'
version_added: '2.4'
variable_end_string:
description:
- The string marking the end of a print statement.
type: str
default: '}}'
version_added: '2.4'
trim_blocks:
description:
- Determine when newlines should be removed from blocks.
- When set to C(yes) the first newline after a block is removed (block, not variable tag!).
type: bool
default: yes
version_added: '2.4'
lstrip_blocks:
description:
- Determine when leading spaces and tabs should be stripped.
- When set to C(yes) leading spaces and tabs are stripped from the start of a line to a block.
- This functionality requires Jinja 2.7 or newer.
type: bool
default: no
version_added: '2.6'
force:
description:
- Determine when the file is being transferred if the destination already exists.
- When set to C(yes), replace the remote file when contents are different than the source.
- When set to C(no), the file will only be transferred if the destination does not exist.
type: bool
default: yes
output_encoding:
description:
- Overrides the encoding used to write the template file defined by C(dest).
- It defaults to C(utf-8), but any encoding supported by python can be used.
- The source template file must always be encoded using C(utf-8), for homogeneity.
type: str
default: utf-8
version_added: '2.7'
notes:
- Including a string that uses a date in the template will result in the template being marked 'changed' each time.
- Since Ansible 0.9, templates are loaded with C(trim_blocks=True).
- >
Also, you can override jinja2 settings by adding a special header to template file.
i.e. C(#jinja2:variable_start_string:'[%', variable_end_string:'%]', trim_blocks: False)
which changes the variable interpolation markers to C([% var %]) instead of C({{ var }}).
This is the best way to prevent evaluation of things that look like, but should not be Jinja2.
- Using raw/endraw in Jinja2 will not work as you expect because templates in Ansible are recursively
evaluated.
- To find Byte Order Marks in files, use C(Format-Hex <file> -Count 16) on Windows, and use C(od -a -t x1 -N 16 <file>)
on Linux.
'''
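# Illustrative example (editorial addition, not part of the shipped fragment):
# a template that starts with the override header below switches the variable
# markers to [% var %] while leaving block tags at their defaults.
#
#   #jinja2:variable_start_string:'[%', variable_end_string:'%]'
#   ServerName [% inventory_hostname %]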
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,488 |
azure_rm_galleryimageversion_module should be able to take data disk snapshots as input.
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
azure_rm_galleryimageversion_module should be able to take data disk snapshots as input.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
azure_rm_galleryimageversion_module
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Starting from SIG API version 2019-07-01, we support creating an image version from snapshots, consisting of 1 OS disk snapshot and 0 to many data disk snapshots. Having this feature gives customers full power to create image versions from snapshots.
Here is a sample request entity:
{"name":"1.0.0","location":"West US","properties":{"publishingProfile":{"excludeFromLatest":false},"storageProfile":{"osDiskImage":{"source":{"id":"/subscriptions/<subId>/resourceGroups/rgName/providers/Microsoft.Compute/snapshots/osSnapshotName"}},"dataDiskImages":[{"lun":0,"source":{"id":"/subscriptions/<subId>/resourceGroups/rg/providers/Microsoft.Compute/snapshots/datadiskSnapshot1"}}]}}}
So for data disk, user needs to provide "lun" and "source".
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
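For illustration, a playbook snippet for the requested behaviour could look like the sketch below; the `storage_profile`, `os_disk` and `data_disks` parameter names are hypothetical assumptions made for this issue, not the module's current interface:
```yaml
- name: Create a gallery image version from OS and data disk snapshots (hypothetical syntax)
  azure_rm_galleryimageversion:
    resource_group: myResourceGroup
    gallery_name: myGallery
    gallery_image_name: myImage
    name: 1.0.0
    location: West US
    storage_profile:
      os_disk:
        source: "/subscriptions/<subId>/resourceGroups/rgName/providers/Microsoft.Compute/snapshots/osSnapshotName"
      data_disks:
        - lun: 0
          source: "/subscriptions/<subId>/resourceGroups/rg/providers/Microsoft.Compute/snapshots/datadiskSnapshot1"
```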
|
https://github.com/ansible/ansible/issues/63488
|
https://github.com/ansible/ansible/pull/65405
|
2dcaa108d8eb388512096bc5da9032c9bf81af04
|
cff80f131942a75692293714757f1d2c9c5578f4
| 2019-10-15T01:02:29Z |
python
| 2019-12-05T00:31:47Z |
lib/ansible/module_utils/azure_rm_common_ext.py
|
# Copyright (c) 2019 Zim Kalinowski, (@zikalino)
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from ansible.module_utils.azure_rm_common import AzureRMModuleBase
import re
from ansible.module_utils.common.dict_transformations import _camel_to_snake, _snake_to_camel
from ansible.module_utils.six import string_types
class AzureRMModuleBaseExt(AzureRMModuleBase):
def inflate_parameters(self, spec, body, level):
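# Relocate flat module parameters into their REST request-body positions:
# each option's 'disposition' (for example '/properties/publishingProfile')
# names where the value belongs; list bodies are walked item by item and
# nested 'options' specs are inflated recursively.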
if isinstance(body, list):
for item in body:
self.inflate_parameters(spec, item, level)
return
for name in spec.keys():
# first check if option was passed
param = body.get(name)
if not param:
if spec[name].get('purgeIfNone', False):
body.pop(name, None)
continue
# check if pattern needs to be used
pattern = spec[name].get('pattern', None)
if pattern:
if pattern == 'camelize':
param = _snake_to_camel(param, True)
else:
param = self.normalize_resource_id(param, pattern)
body[name] = param
disposition = spec[name].get('disposition', '*')
if level == 0 and not disposition.startswith('/'):
continue
if disposition == '/':
disposition = '/*'
parts = disposition.split('/')
if parts[0] == '':
# should fail if level is > 0?
parts.pop(0)
target_dict = body
elem = body.pop(name)
while len(parts) > 1:
target_dict = target_dict.setdefault(parts.pop(0), {})
targetName = parts[0] if parts[0] != '*' else name
target_dict[targetName] = elem
if spec[name].get('options'):
self.inflate_parameters(spec[name].get('options'), target_dict[targetName], level + 1)
def normalize_resource_id(self, value, pattern):
'''
Return a proper resource id string.
:param value: It could be a resource name, resource id or dict containing parts from the pattern.
:param pattern: pattern of the resource id, just like in Azure Swagger.
'''
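# Example (illustrative): with pattern
# '/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.Compute/snapshots/{name}'
# a bare name such as 'mySnapshot' expands to a full resource id, filling
# subscription_id and resource_group from the module's current context.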
value_dict = {}
if isinstance(value, string_types):
value_parts = value.split('/')
if len(value_parts) == 1:
value_dict['name'] = value
else:
pattern_parts = pattern.split('/')
if len(value_parts) != len(pattern_parts):
return None
for i in range(len(value_parts)):
if pattern_parts[i].startswith('{'):
value_dict[pattern_parts[i][1:-1]] = value_parts[i]
elif value_parts[i].lower() != pattern_parts[i].lower():
return None
elif isinstance(value, dict):
value_dict = value
else:
return None
if not value_dict.get('subscription_id'):
value_dict['subscription_id'] = self.subscription_id
if not value_dict.get('resource_group'):
value_dict['resource_group'] = self.resource_group
# check if any extra values passed
for k in value_dict:
if not ('{' + k + '}') in pattern:
return None
# format url
return pattern.format(**value_dict)
def idempotency_check(self, old_params, new_params):
'''
Return True if something changed. Function will use fields from module_arg_spec to perform dependency checks.
:param old_params: old parameters dictionary, body from Get request.
:param new_params: new parameters dictionary, unpacked module parameters.
'''
modifiers = {}
result = {}
self.create_compare_modifiers(self.module.argument_spec, '', modifiers)
self.results['modifiers'] = modifiers
return self.default_compare(modifiers, new_params, old_params, '', self.results)
def create_compare_modifiers(self, arg_spec, path, result):
for k in arg_spec.keys():
o = arg_spec[k]
updatable = o.get('updatable', True)
comparison = o.get('comparison', 'default')
disposition = o.get('disposition', '*')
if disposition == '/':
disposition = '/*'
p = (path +
('/' if len(path) > 0 else '') +
disposition.replace('*', k) +
('/*' if o['type'] == 'list' else ''))
if comparison != 'default' or not updatable:
result[p] = {'updatable': updatable, 'comparison': comparison}
if o.get('options'):
self.create_compare_modifiers(o.get('options'), p, result)
def default_compare(self, modifiers, new, old, path, result):
'''
Default dictionary comparison.
This function will work well with most of the Azure resources.
It correctly handles "location" comparison.
Value handling:
- if "new" value is None, it will be taken from "old" dictionary if "incremental_update"
is enabled.
List handling:
- if list contains "name" field it will be sorted by "name" before comparison is done.
- if module has "incremental_update" set, items missing in the new list will be copied
from the old list
Warnings:
If field is marked as non-updatable, appropriate warning will be printed out and
"new" structure will be updated to old value.
:modifiers: Optional dictionary of modifiers, where key is the path and value is dict of modifiers
:param new: New version
:param old: Old version
Returns True if no difference between structures has been detected.
Returns False if difference was detected.
'''
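# Example (illustrative): with a 'location' comparison modifier, 'West US'
# and 'westus' both normalize to 'westus' and compare equal, so no spurious
# change is reported.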
if new is None:
return True
elif isinstance(new, dict):
comparison_result = True
if not isinstance(old, dict):
result['compare'].append('changed [' + path + '] old dict is null')
comparison_result = False
else:
for k in set(new.keys()) | set(old.keys()):
new_item = new.get(k, None)
old_item = old.get(k, None)
if new_item is None:
if isinstance(old_item, dict):
new[k] = old_item
result['compare'].append('new item was empty, using old [' + path + '][ ' + k + ' ]')
elif not self.default_compare(modifiers, new_item, old_item, path + '/' + k, result):
comparison_result = False
return comparison_result
elif isinstance(new, list):
comparison_result = True
if not isinstance(old, list) or len(new) != len(old):
result['compare'].append('changed [' + path + '] length is different or old value is null')
comparison_result = False
else:
if isinstance(old[0], dict):
key = None
if 'id' in old[0] and 'id' in new[0]:
key = 'id'
elif 'name' in old[0] and 'name' in new[0]:
key = 'name'
else:
key = next(iter(old[0]))
new = sorted(new, key=lambda x: x.get(key, None))
old = sorted(old, key=lambda x: x.get(key, None))
else:
new = sorted(new)
old = sorted(old)
for i in range(len(new)):
if not self.default_compare(modifiers, new[i], old[i], path + '/*', result):
comparison_result = False
return comparison_result
else:
updatable = modifiers.get(path, {}).get('updatable', True)
comparison = modifiers.get(path, {}).get('comparison', 'default')
if comparison == 'ignore':
return True
elif comparison == 'default' or comparison == 'sensitive':
if isinstance(old, string_types) and isinstance(new, string_types):
new = new.lower()
old = old.lower()
elif comparison == 'location':
if isinstance(old, string_types) and isinstance(new, string_types):
new = new.replace(' ', '').lower()
old = old.replace(' ', '').lower()
if str(new) != str(old):
result['compare'].append('changed [' + path + '] ' + str(new) + ' != ' + str(old) + ' - ' + str(comparison))
if updatable:
return False
else:
self.module.warn("property '" + path + "' cannot be updated (" + str(old) + "->" + str(new) + ")")
return True
else:
return True
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,488 |
azure_rm_galleryimageversion_module should be able to take data disk snapshots as input.
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
azure_rm_galleryimageversion_module should be able to take data disk snapshots as input.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
azure_rm_galleryimageversion_module
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Starting from SIG API version 2019-07-01, we support creating an image version from snapshots, consisting of 1 OS disk snapshot and 0 to many data disk snapshots. Having this feature gives customers full power to create image versions from snapshots.
Here is a sample request entity:
{"name":"1.0.0","location":"West US","properties":{"publishingProfile":{"excludeFromLatest":false},"storageProfile":{"osDiskImage":{"source":{"id":"/subscriptions/<subId>/resourceGroups/rgName/providers/Microsoft.Compute/snapshots/osSnapshotName"}},"dataDiskImages":[{"lun":0,"source":{"id":"/subscriptions/<subId>/resourceGroups/rg/providers/Microsoft.Compute/snapshots/datadiskSnapshot1"}}]}}}
So for data disk, user needs to provide "lun" and "source".
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/63488
|
https://github.com/ansible/ansible/pull/65405
|
2dcaa108d8eb388512096bc5da9032c9bf81af04
|
cff80f131942a75692293714757f1d2c9c5578f4
| 2019-10-15T01:02:29Z |
python
| 2019-12-05T00:31:47Z |
lib/ansible/modules/cloud/azure/azure_rm_galleryimageversion.py
|
#!/usr/bin/python
#
# Copyright (c) 2019 Zim Kalinowski, (@zikalino)
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: azure_rm_galleryimageversion
version_added: '2.9'
short_description: Manage Azure SIG Image Version instance
description:
- Create, update and delete instance of Azure SIG Image Version.
options:
resource_group:
description:
- The name of the resource group.
required: true
type: str
gallery_name:
description:
- The name of the Shared Image Gallery in which the Image Definition resides.
required: true
type: str
gallery_image_name:
description:
- The name of the gallery Image Definition in which the Image Version is to be created.
required: true
type: str
name:
description:
- The name of the gallery Image Version to be created.
- Needs to follow semantic version name pattern, The allowed characters are digit and period.
- Digits must be within the range of a 32-bit integer. For example <MajorVersion>.<MinorVersion>.<Patch>.
required: true
type: str
location:
description:
- Resource location.
type: str
publishing_profile:
description:
- Publishing profile.
required: true
type: dict
suboptions:
target_regions:
description:
- The target regions where the Image Version is going to be replicated to.
- This property is updatable.
type: list
suboptions:
name:
description:
- Region name.
type: str
regional_replica_count:
description:
- The number of replicas of the Image Version to be created per region.
- This property would take effect for a region when regionalReplicaCount is not specified.
- This property is updatable.
type: str
storage_account_type:
description:
- Storage account type.
type: str
managed_image:
description:
- Managed image reference, could be resource ID, or dictionary containing I(resource_group) and I(name).
snapshot:
description:
- Source snapshot to be used.
replica_count:
description:
- The number of replicas of the Image Version to be created per region.
- This property would take effect for a region when regionalReplicaCount is not specified.
- This property is updatable.
type: int
exclude_from_latest:
description:
If I(exclude_from_latest=true), Virtual Machines deployed from the latest version of the Image Definition won't use this Image Version.
type: bool
end_of_life_date:
description:
- The end of life date of the gallery Image Version.
- This property can be used for decommissioning purposes.
- This property is updatable. Format should be according to ISO-8601, for instance "2019-06-26".
type: str
storage_account_type:
description:
- Specifies the storage account type to be used to store the image.
- This property is not updatable.
type: str
state:
description:
- Assert the state of the GalleryImageVersion.
- Use C(present) to create or update an GalleryImageVersion and C(absent) to delete it.
default: present
choices:
- absent
- present
type: str
extends_documentation_fragment:
- azure
- azure_tags
author:
- Zim Kalinowski (@zikalino)
'''
EXAMPLES = '''
- name: Create or update a simple gallery Image Version.
azure_rm_galleryimageversion:
resource_group: myResourceGroup
gallery_name: myGallery1283
gallery_image_name: myImage
name: 10.1.3
location: West US
publishing_profile:
end_of_life_date: "2020-10-01t00:00:00+00:00"
exclude_from_latest: yes
replica_count: 3
storage_account_type: Standard_LRS
target_regions:
- name: West US
regional_replica_count: 1
- name: East US
regional_replica_count: 2
storage_account_type: Standard_ZRS
managed_image:
name: myImage
resource_group: myResourceGroup
'''
RETURN = '''
id:
description:
- Resource ID.
returned: always
type: str
sample: "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Compute/galleries/myGalle
ry1283/images/myImage/versions/10.1.3"
'''
import time
import json
import re
from ansible.module_utils.azure_rm_common_ext import AzureRMModuleBaseExt
from ansible.module_utils.azure_rm_common_rest import GenericRestClient
from copy import deepcopy
try:
from msrestazure.azure_exceptions import CloudError
except ImportError:
# This is handled in azure_rm_common
pass
class Actions:
NoAction, Create, Update, Delete = range(4)
class AzureRMGalleryImageVersions(AzureRMModuleBaseExt):
def __init__(self):
self.module_arg_spec = dict(
resource_group=dict(
type='str',
updatable=False,
disposition='resourceGroupName',
required=True
),
gallery_name=dict(
type='str',
updatable=False,
disposition='galleryName',
required=True
),
gallery_image_name=dict(
type='str',
updatable=False,
disposition='galleryImageName',
required=True
),
name=dict(
type='str',
updatable=False,
disposition='galleryImageVersionName',
required=True
),
location=dict(
type='str',
updatable=False,
disposition='/',
comparison='location'
),
publishing_profile=dict(
type='dict',
disposition='/properties/publishingProfile',
options=dict(
target_regions=dict(
type='list',
disposition='targetRegions',
options=dict(
name=dict(
type='str',
required=True,
comparison='location'
),
regional_replica_count=dict(
type='int',
disposition='regionalReplicaCount'
),
storage_account_type=dict(
type='str',
disposition='storageAccountType'
)
)
),
managed_image=dict(
type='raw',
pattern=('/subscriptions/{subscription_id}/resourceGroups'
'/{resource_group}/providers/Microsoft.Compute'
'/images/{name}'),
comparison='ignore'
),
snapshot=dict(
type='raw',
pattern=('/subscriptions/{subscription_id}/resourceGroups'
'/{resource_group}/providers/Microsoft.Compute'
'/snapshots/{name}'),
comparison='ignore'
),
replica_count=dict(
type='int',
disposition='replicaCount'
),
exclude_from_latest=dict(
type='bool',
disposition='excludeFromLatest'
),
end_of_life_date=dict(
type='str',
disposition='endOfLifeDate'
),
storage_account_type=dict(
type='str',
disposition='storageAccountType',
choices=['Standard_LRS',
'Standard_ZRS']
)
)
),
state=dict(
type='str',
default='present',
choices=['present', 'absent']
)
)
self.resource_group = None
self.gallery_name = None
self.gallery_image_name = None
self.name = None
self.gallery_image_version = None
self.results = dict(changed=False)
self.mgmt_client = None
self.state = None
self.url = None
self.status_code = [200, 201, 202]
self.to_do = Actions.NoAction
self.body = {}
self.query_parameters = {}
self.query_parameters['api-version'] = '2019-07-01'
self.header_parameters = {}
self.header_parameters['Content-Type'] = 'application/json; charset=utf-8'
super(AzureRMGalleryImageVersions, self).__init__(derived_arg_spec=self.module_arg_spec,
supports_check_mode=True,
supports_tags=True)
def exec_module(self, **kwargs):
for key in list(self.module_arg_spec.keys()):
if hasattr(self, key):
setattr(self, key, kwargs[key])
elif kwargs[key] is not None:
self.body[key] = kwargs[key]
self.inflate_parameters(self.module_arg_spec, self.body, 0)
old_response = None
response = None
self.mgmt_client = self.get_mgmt_svc_client(GenericRestClient,
base_url=self._cloud_environment.endpoints.resource_manager)
resource_group = self.get_resource_group(self.resource_group)
if 'location' not in self.body:
self.body['location'] = resource_group.location
self.url = ('/subscriptions' +
'/{{ subscription_id }}' +
'/resourceGroups' +
'/{{ resource_group }}' +
'/providers' +
'/Microsoft.Compute' +
'/galleries' +
'/{{ gallery_name }}' +
'/images' +
'/{{ image_name }}' +
'/versions' +
'/{{ version_name }}')
self.url = self.url.replace('{{ subscription_id }}', self.subscription_id)
self.url = self.url.replace('{{ resource_group }}', self.resource_group)
self.url = self.url.replace('{{ gallery_name }}', self.gallery_name)
self.url = self.url.replace('{{ image_name }}', self.gallery_image_name)
self.url = self.url.replace('{{ version_name }}', self.name)
old_response = self.get_resource()
if not old_response:
self.log("GalleryImageVersion instance doesn't exist")
if self.state == 'absent':
self.log("Old instance didn't exist")
else:
self.to_do = Actions.Create
else:
self.log('GalleryImageVersion instance already exists')
if self.state == 'absent':
self.to_do = Actions.Delete
else:
modifiers = {}
self.create_compare_modifiers(self.module_arg_spec, '', modifiers)
self.results['modifiers'] = modifiers
self.results['compare'] = []
if not self.default_compare(modifiers, self.body, old_response, '', self.results):
self.to_do = Actions.Update
# Fix for differences between API versions 2019-03-01 and 2019-07-01: move
# the snapshot and managed_image sources from publishingProfile into the
# storageProfile layout that the 2019-07-01 API expects.
snapshot = self.body.get('properties', {}).get('publishingProfile', {}).pop('snapshot', None)
if snapshot is not None:
self.body['properties'].setdefault('storageProfile', {}).setdefault('osDiskImage', {}).setdefault('source', {})['id'] = snapshot
managed_image = self.body.get('properties', {}).get('publishingProfile', {}).pop('managed_image', None)
if managed_image:
self.body['properties'].setdefault('storageProfile', {}).setdefault('source', {})['id'] = managed_image
if (self.to_do == Actions.Create) or (self.to_do == Actions.Update):
self.log('Need to Create / Update the GalleryImageVersion instance')
if self.check_mode:
self.results['changed'] = True
return self.results
response = self.create_update_resource()
self.results['changed'] = True
self.log('Creation / Update done')
elif self.to_do == Actions.Delete:
self.log('GalleryImageVersion instance deleted')
self.results['changed'] = True
if self.check_mode:
return self.results
self.delete_resource()
else:
self.log('GalleryImageVersion instance unchanged')
self.results['changed'] = False
response = old_response
if response:
self.results["id"] = response["id"]
return self.results
def create_update_resource(self):
# self.log('Creating / Updating the GalleryImageVersion instance {0}'.format(self.))
try:
response = self.mgmt_client.query(self.url,
'PUT',
self.query_parameters,
self.header_parameters,
self.body,
self.status_code,
600,
30)
except CloudError as exc:
self.log('Error attempting to create the GalleryImageVersion instance.')
self.fail('Error creating the GalleryImageVersion instance: {0}'.format(str(exc)))
try:
response = json.loads(response.text)
except Exception:
response = {'text': response.text}
while response['properties']['provisioningState'] == 'Creating':
time.sleep(60)
response = self.get_resource()
return response
def delete_resource(self):
# self.log('Deleting the GalleryImageVersion instance {0}'.format(self.))
try:
response = self.mgmt_client.query(self.url,
'DELETE',
self.query_parameters,
self.header_parameters,
None,
self.status_code,
600,
30)
except CloudError as e:
self.log('Error attempting to delete the GalleryImageVersion instance.')
self.fail('Error deleting the GalleryImageVersion instance: {0}'.format(str(e)))
return True
def get_resource(self):
# self.log('Checking if the GalleryImageVersion instance {0} is present'.format(self.))
found = False
try:
response = self.mgmt_client.query(self.url,
'GET',
self.query_parameters,
self.header_parameters,
None,
self.status_code,
600,
30)
response = json.loads(response.text)
found = True
self.log("Response : {0}".format(response))
# self.log("AzureFirewall instance : {0} found".format(response.name))
except CloudError as e:
self.log('Did not find the AzureFirewall instance.')
if found is True:
return response
return False
def main():
AzureRMGalleryImageVersions()
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,488 |
azure_rm_galleryimageversion_module should be able to take data disk snapshots as input.
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
azure_rm_galleryimageversion_module should be able to take data disk snapshots as input.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
azure_rm_galleryimageversion_module
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
Starting from SIG API version 2019-07-01, we support creating an image version from snapshots, consisting of 1 OS disk snapshot and 0 to many data disk snapshots. Having this feature gives customers full power to create image versions from snapshots.
Here is a sample request entity:
{"name":"1.0.0","location":"West US","properties":{"publishingProfile":{"excludeFromLatest":false},"storageProfile":{"osDiskImage":{"source":{"id":"/subscriptions/<subId>/resourceGroups/rgName/providers/Microsoft.Compute/snapshots/osSnapshotName"}},"dataDiskImages":[{"lun":0,"source":{"id":"/subscriptions/<subId>/resourceGroups/rg/providers/Microsoft.Compute/snapshots/datadiskSnapshot1"}}]}}}
So for data disk, user needs to provide "lun" and "source".
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
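A minimal Python sketch of how the module could assemble that `storageProfile` from user input (the helper and its parameters are hypothetical, not the final module API):

```python
# Hypothetical helper: turn snapshot IDs into the storageProfile section of
# the PUT body shown in the sample request above.
def build_storage_profile(os_snapshot_id, data_disk_snapshots):
    """data_disk_snapshots is a list of (lun, snapshot_id) pairs."""
    profile = {
        'osDiskImage': {'source': {'id': os_snapshot_id}},
    }
    if data_disk_snapshots:
        profile['dataDiskImages'] = [
            {'lun': lun, 'source': {'id': snapshot_id}}
            for lun, snapshot_id in data_disk_snapshots
        ]
    return profile
```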
|
https://github.com/ansible/ansible/issues/63488
|
https://github.com/ansible/ansible/pull/65405
|
2dcaa108d8eb388512096bc5da9032c9bf81af04
|
cff80f131942a75692293714757f1d2c9c5578f4
| 2019-10-15T01:02:29Z |
python
| 2019-12-05T00:31:47Z |
test/integration/targets/azure_rm_gallery/tasks/main.yml
|
- name: Prepare random number
set_fact:
rpfx: "{{ resource_group | hash('md5') | truncate(7, True, '') }}{{ 1000 | random }}"
run_once: yes
- name: Create virtual network
azure_rm_virtualnetwork:
resource_group: "{{ resource_group }}"
name: testVnet
address_prefixes: "10.0.0.0/16"
- name: Add subnet
azure_rm_subnet:
resource_group: "{{ resource_group }}"
name: testSubnet
address_prefix: "10.0.1.0/24"
virtual_network: testVnet
- name: Create public IP address
azure_rm_publicipaddress:
resource_group: "{{ resource_group }}"
allocation_method: Static
name: testPublicIP
- name: Create virtual network interface cards for VM A and B
azure_rm_networkinterface:
resource_group: "{{ resource_group }}"
name: "vmforimage{{ rpfx }}nic"
virtual_network: testVnet
subnet: testSubnet
- name: Create VM
azure_rm_virtualmachine:
resource_group: "{{ resource_group }}"
name: "vmforimage{{ rpfx }}"
location: eastus
admin_username: testuser
admin_password: "Password1234!"
vm_size: Standard_B1ms
network_interfaces: "vmforimage{{ rpfx }}nic"
image:
offer: UbuntuServer
publisher: Canonical
sku: 16.04-LTS
version: latest
- name: Get VM facts
azure_rm_virtualmachine_facts:
resource_group: "{{ resource_group }}"
name: "vmforimage{{ rpfx }}"
register: output
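# Import the VM's OS disk blob as a managed snapshot (exercises the Import create_option).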
- name: Create a snapshot by importing an unmanaged blob from the same subscription.
azure_rm_snapshot:
resource_group: "{{ resource_group }}"
name: "mySnapshot-{{ rpfx }}"
location: eastus
creation_data:
create_option: Import
source_uri: 'https://{{ output.vms[0].storage_account_name }}.blob.core.windows.net/{{ output.vms[0].storage_container_name }}/{{ output.vms[0].storage_blob_name }}'
register: output
- assert:
that:
- output.changed
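# Azure requires the VM to be generalized before it can be captured as a custom image.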
- name: Generalize VM
azure_rm_virtualmachine:
resource_group: "{{ resource_group }}"
name: "vmforimage{{ rpfx }}"
generalized: yes
- name: Create custom image
azure_rm_image:
resource_group: "{{ resource_group }}"
name: testimagea
source: "vmforimage{{ rpfx }}"
- name: Create or update a simple gallery.
azure_rm_gallery:
resource_group: "{{ resource_group }}"
name: myGallery{{ rpfx }}
location: eastus
description: This is the gallery description.
register: output
- assert:
that:
- output.changed
- name: Create or update a simple gallery - idempotent
azure_rm_gallery:
resource_group: "{{ resource_group }}"
name: myGallery{{ rpfx }}
location: eastus
description: This is the gallery description.
register: output
- assert:
that:
- not output.changed
- name: Create or update a simple gallery - change description
azure_rm_gallery:
resource_group: "{{ resource_group }}"
name: myGallery{{ rpfx }}
location: eastus
description: This is the gallery description - xxx.
register: output
- assert:
that:
- output.changed
- name: Get a gallery info.
azure_rm_gallery_info:
resource_group: "{{ resource_group }}"
name: myGallery{{ rpfx }}
register: output
- assert:
that:
- not output.changed
- output.galleries['id'] != None
- output.galleries['name'] != None
- output.galleries['location'] != None
- output.galleries['description'] != None
- output.galleries['provisioning_state'] != None
- name: Create or update gallery image
azure_rm_galleryimage:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
name: myImage
location: eastus
os_type: linux
os_state: generalized
identifier:
publisher: myPublisherName
offer: myOfferName
sku: mySkuName
description: Image Description
register: output
- assert:
that:
- output.changed
- name: Create or update gallery image - idempotent
azure_rm_galleryimage:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
name: myImage
location: eastus
os_type: linux
os_state: generalized
identifier:
publisher: myPublisherName
offer: myOfferName
sku: mySkuName
description: Image Description
register: output
- assert:
that:
- not output.changed
- name: Create or update gallery image - change description
azure_rm_galleryimage:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
name: myImage
location: eastus
os_type: linux
os_state: generalized
identifier:
publisher: myPublisherName
offer: myOfferName
sku: mySkuName
description: Image Description XXXs
register: output
- assert:
that:
- output.changed
- name: Get a gallery image info.
azure_rm_galleryimage_info:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
name: myImage
register: output
- assert:
that:
- not output.changed
- output.images['id'] != None
- output.images['name'] != None
- output.images['location'] != None
- output.images['os_state'] != None
- output.images['os_type'] != None
- output.images['identifier'] != None
- name: Create or update a simple gallery Image Version.
azure_rm_galleryimageversion:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
gallery_image_name: myImage
name: 10.1.3
location: eastus
publishing_profile:
end_of_life_date: "2020-10-01t00:00:00+00:00"
exclude_from_latest: yes
replica_count: 3
storage_account_type: Standard_LRS
target_regions:
- name: eastus
regional_replica_count: 1
- name: westus
regional_replica_count: 2
storage_account_type: Standard_ZRS
managed_image:
name: testimagea
resource_group: "{{ resource_group }}"
register: output
- assert:
that:
- output.changed
- name: Create or update a simple gallery Image Version - idempotent
azure_rm_galleryimageversion:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
gallery_image_name: myImage
name: 10.1.3
location: eastus
publishing_profile:
end_of_life_date: "2020-10-01t00:00:00+00:00"
exclude_from_latest: yes
replica_count: 3
storage_account_type: Standard_LRS
target_regions:
- name: eastus
regional_replica_count: 1
- name: westus
regional_replica_count: 2
storage_account_type: Standard_ZRS
managed_image:
name: testimagea
resource_group: "{{ resource_group }}"
register: output
- assert:
that:
- not output.changed
- name: Create or update a simple gallery Image Version - change end of life
azure_rm_galleryimageversion:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
gallery_image_name: myImage
name: 10.1.3
location: eastus
publishing_profile:
end_of_life_date: "2021-10-01t00:00:00+00:00"
exclude_from_latest: yes
replica_count: 3
storage_account_type: Standard_LRS
target_regions:
- name: eastus
regional_replica_count: 1
- name: westus
regional_replica_count: 2
storage_account_type: Standard_ZRS
managed_image:
name: testimagea
resource_group: "{{ resource_group }}"
register: output
- assert:
that:
- output.changed
- name: Get a simple gallery Image Version info.
azure_rm_galleryimageversion_info:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
gallery_image_name: myImage
name: 10.1.3
register: output
- assert:
that:
- not output.changed
- output.versions['id'] != None
- output.versions['name'] != None
- output.versions['location'] != None
- output.versions['publishing_profile'] != None
- output.versions['provisioning_state'] != None
- name: Delete gallery image Version.
azure_rm_galleryimageversion:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
gallery_image_name: myImage
name: 10.1.3
state: absent
register: output
- assert:
that:
- output.changed
- name: Delete gallery image
azure_rm_galleryimage:
resource_group: "{{ resource_group }}"
gallery_name: myGallery{{ rpfx }}
name: myImage
state: absent
register: output
- assert:
that:
- output.changed
- name: Delete gallery
azure_rm_gallery:
resource_group: "{{ resource_group }}"
name: myGallery{{ rpfx }}
state: absent
register: output
- assert:
that:
- output.changed
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,024 |
BUG: ec2_instance module stopped working after update to python v 3.8 / ansible v 2.9.1
|
##### SUMMARY
ec2_instance module stopped working after update to v 2.9.1
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_instance module
https://docs.ansible.com/ansible/latest/modules/ec2_instance_module.html
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user0/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.0 (default, Oct 23 2019, 18:51:26) [GCC 9.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
```
uname -a
Linux work3 5.3.11-arch1-1 #1 SMP PREEMPT Tue, 12 Nov 2019 22:19:48 +0000 x86_64 GNU/Linux
```
##### STEPS TO REPRODUCE
previously working snippet
```yaml
- name: stop image instance
register: image_instance
ec2_instance:
region: "{{amazon_region}}"
instance_ids: "{{instance_id}}"
state: stopped
wait: yes
wait_timeout: 320
aws_access_key: "{{aws_access_key_id}}"
aws_secret_key: "{{aws_secret_access_key}}"
validate_certs: no
```
##### EXPECTED RESULTS
invocation success
##### ACTUAL RESULTS
invocation failure
```paste below
Traceback (most recent call last):
File "/home/user0/.ansible/tmp/ansible-tmp-1574115528.3222384-162345873183700/AnsiballZ_ec2_instance.py", line 102, in <module>
_ansiballz_main()
File "/home/user0/.ansible/tmp/ansible-tmp-1574115528.3222384-162345873183700/AnsiballZ_ec2_instance.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/user0/.ansible/tmp/ansible-tmp-1574115528.3222384-162345873183700/AnsiballZ_ec2_instance.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.cloud.amazon.ec2_instance', init_globals=None, run_name='__main__', alter_sys=True)
File "/usr/lib/python3.8/runpy.py", line 205, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 95, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/ansible_ec2_instance_payload_hz5qgsqh/ansible_ec2_instance_payload.zip/ansible/modules/cloud/amazon/ec2_instance.py", line 1710, in <module>
File "/tmp/ansible_ec2_instance_payload_hz5qgsqh/ansible_ec2_instance_payload.zip/ansible/modules/cloud/amazon/ec2_instance.py", line 1681, in main
File "/tmp/ansible_ec2_instance_payload_hz5qgsqh/ansible_ec2_instance_payload.zip/ansible/modules/cloud/amazon/ec2_instance.py", line 1297, in find_instances
RuntimeError: dictionary keys changed during iteration
```
#### Code in error:
The failing code really does change the dictionary's keys while iterating over them, which Python 3.8 now reports as a RuntimeError:
[ec2_instance.py#L1297](https://github.com/ansible/ansible/blob/v2.9.1/lib/ansible/modules/cloud/amazon/ec2_instance.py#L1297)
```
for key in filters.keys():
if not key.startswith("tag:"):
filters[key.replace("_", "-")] = filters.pop(key)
```
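A minimal sketch of the usual fix, assuming the same filter-munging logic: materialize the keys with `list()` before mutating the dict, so iteration never observes the size change:

```python
# Snapshot the keys first; popping/inserting while iterating the live
# keys() view is what raises RuntimeError on Python 3.8.
filters = {'instance_state_name': 'running', 'tag:Name': 'web'}
for key in list(filters.keys()):
    if not key.startswith('tag:'):
        filters[key.replace('_', '-')] = filters.pop(key)
print(filters)  # {'tag:Name': 'web', 'instance-state-name': 'running'}
```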
|
https://github.com/ansible/ansible/issues/65024
|
https://github.com/ansible/ansible/pull/65521
|
c266fc3b74665fd7313b84f2c0a050024151475c
|
7d3cc250ef548771f788b9f0119eca1d8164ff96
| 2019-11-18T23:26:33Z |
python
| 2019-12-05T10:02:59Z |
lib/ansible/modules/cloud/amazon/ec2_instance.py
|
#!/usr/bin/python
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ec2_instance
short_description: Create & manage EC2 instances
description:
- Create and manage AWS EC2 instances.
- >
Note: This module does not support creating
L(EC2 Spot instances,https://aws.amazon.com/ec2/spot/). The M(ec2) module
can create and manage spot instances.
version_added: "2.5"
author:
- Ryan Scott Brown (@ryansb)
requirements: [ "boto3", "botocore" ]
options:
instance_ids:
description:
- If you specify one or more instance IDs, only instances that have the specified IDs are returned.
type: list
state:
description:
- Goal state for the instances.
choices: [present, terminated, running, started, stopped, restarted, rebooted, absent]
default: present
type: str
wait:
description:
- Whether or not to wait for the desired state (use wait_timeout to customize this).
default: true
type: bool
wait_timeout:
description:
- How long to wait (in seconds) for the instance to finish booting/terminating.
default: 600
type: int
instance_type:
description:
- Instance type to use for the instance, see U(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html)
Only required when instance is not already present.
default: t2.micro
type: str
user_data:
description:
- Opaque blob of data which is made available to the ec2 instance
type: str
tower_callback:
description:
- Preconfigured user-data to enable an instance to perform a Tower callback (Linux only).
- Mutually exclusive with I(user_data).
- For Windows instances, to enable remote access via Ansible set I(tower_callback.windows) to true, and optionally set an admin password.
- If using 'windows' and 'set_password', callback to Tower will not be performed but the instance will be ready to receive winrm connections from Ansible.
type: dict
suboptions:
tower_address:
description:
- IP address or DNS name of Tower server. Must be accessible via this address from the VPC that this instance will be launched in.
type: str
job_template_id:
description:
- Either the integer ID of the Tower Job Template, or the name (name supported only for Tower 3.2+).
type: str
host_config_key:
description:
- Host configuration secret key generated by the Tower job template.
type: str
tags:
description:
- A hash/dictionary of tags to add to the new instance or to add/remove from an existing one.
type: dict
purge_tags:
description:
- Delete any tags not specified in the task that are on the instance.
This means you have to specify all the desired tags on each task affecting an instance.
default: false
type: bool
image:
description:
- An image to use for the instance. The M(ec2_ami_info) module may be used to retrieve images.
One of I(image) or I(image_id) is required when instance is not already present.
type: dict
suboptions:
id:
description:
- The AMI ID.
type: str
ramdisk:
description:
- Overrides the AMI's default ramdisk ID.
type: str
kernel:
description:
- a string AKI to override the AMI kernel.
image_id:
description:
- I(ami) ID to use for the instance. One of I(image) or I(image_id) is required when instance is not already present.
- This is an alias for I(image.id).
type: str
security_groups:
description:
- A list of security group IDs or names (strings). Mutually exclusive with I(security_group).
type: list
security_group:
description:
- A security group ID or name. Mutually exclusive with I(security_groups).
type: str
name:
description:
- The Name tag for the instance.
type: str
vpc_subnet_id:
description:
- The subnet ID in which to launch the instance (VPC)
If none is provided, ec2_instance will choose the default zone of the default VPC.
aliases: ['subnet_id']
type: str
network:
description:
- Either a dictionary containing the key 'interfaces' corresponding to a list of network interface IDs or
containing specifications for a single network interface.
- Use the ec2_eni module to create ENIs with special settings.
type: dict
suboptions:
interfaces:
description:
- a list of ENI IDs (strings) or a list of objects containing the key I(id).
type: list
assign_public_ip:
description:
- when true assigns a public IP address to the interface
type: bool
private_ip_address:
description:
- an IPv4 address to assign to the interface
type: str
ipv6_addresses:
description:
- a list of IPv6 addresses to assign to the network interface
type: list
source_dest_check:
description:
- controls whether source/destination checking is enabled on the interface
type: bool
description:
description:
- a description for the network interface
type: str
private_ip_addresses:
description:
- a list of IPv4 addresses to assign to the network interface
type: list
subnet_id:
description:
- the subnet to connect the network interface to
type: str
delete_on_termination:
description:
- Delete the interface when the instance it is attached to is
terminated.
type: bool
device_index:
description:
- The index of the interface to modify
type: int
groups:
description:
- a list of security group IDs to attach to the interface
type: list
volumes:
description:
- A list of block device mappings, by default this will always use the AMI root device so the volumes option is primarily for adding more storage.
- A mapping contains the (optional) keys device_name, virtual_name, ebs.volume_type, ebs.volume_size, ebs.kms_key_id,
ebs.iops, and ebs.delete_on_termination.
- For more information about each parameter, see U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html).
type: list
launch_template:
description:
- The EC2 launch template to base instance configuration on.
type: dict
suboptions:
id:
description:
- the ID of the launch template (optional if name is specified).
type: str
name:
description:
- the pretty name of the launch template (optional if id is specified).
type: str
version:
description:
- the specific version of the launch template to use. If unspecified, the template default is chosen.
key_name:
description:
- Name of the SSH access key to assign to the instance - must exist in the region the instance is created.
type: str
availability_zone:
description:
- Specify an availability zone to use the default subnet in it. Useful if not specifying the I(vpc_subnet_id) parameter.
- If no subnet, ENI, or availability zone is provided, the default subnet in the default VPC will be used in the first AZ (alphabetically sorted).
type: str
instance_initiated_shutdown_behavior:
description:
- Whether to stop or terminate an instance upon shutdown.
choices: ['stop', 'terminate']
type: str
tenancy:
description:
- What type of tenancy to allow an instance to use. Default is shared tenancy. Dedicated tenancy will incur additional charges.
choices: ['dedicated', 'default']
type: str
termination_protection:
description:
- Whether to enable termination protection.
This module will not terminate an instance with termination protection active, it must be turned off first.
type: bool
cpu_credit_specification:
description:
- For T series instances, choose whether to allow increased charges to buy CPU credits if the default pool is depleted.
- Choose I(unlimited) to enable buying additional CPU credits.
choices: ['unlimited', 'standard']
type: str
cpu_options:
description:
- Reduce the number of vCPU exposed to the instance.
- Those parameters can only be set at instance launch. The two suboptions threads_per_core and core_count are mandatory.
- See U(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html) for combinations available.
- Requires botocore >= 1.10.16
version_added: 2.7
type: dict
suboptions:
threads_per_core:
description:
- Select the number of threads per core to enable. Disable or Enable Intel HT.
choices: [1, 2]
required: true
type: int
core_count:
description:
- Set the number of core to enable.
required: true
type: int
detailed_monitoring:
description:
- Whether to allow detailed cloudwatch metrics to be collected, enabling more detailed alerting.
type: bool
ebs_optimized:
description:
- Whether instance is should use optimized EBS volumes, see U(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html).
type: bool
filters:
description:
- A dict of filters to apply when deciding whether existing instances match and should be altered. Each dict item
consists of a filter key and a filter value. See
U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html).
for possible filters. Filter names and values are case sensitive.
- By default, instances are filtered for counting by their "Name" tag, base AMI, state (running, by default), and
subnet ID. Any queryable filter can be used. Good candidates are specific tags, SSH keys, or security groups.
type: dict
instance_role:
description:
- The ARN or name of an EC2-enabled instance role to be used. If a name is not provided in arn format
then the ListInstanceProfiles permission must also be granted.
U(https://docs.aws.amazon.com/IAM/latest/APIReference/API_ListInstanceProfiles.html) If no full ARN is provided,
the role with a matching name will be used from the active AWS account.
type: str
placement_group:
description:
- The placement group that needs to be assigned to the instance
version_added: 2.8
type: str
extends_documentation_fragment:
- aws
- ec2
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
# Terminate every running instance in a region. Use with EXTREME caution.
- ec2_instance:
state: absent
filters:
instance-state-name: running
# restart a particular instance by its ID
- ec2_instance:
state: restarted
instance_ids:
- i-12345678
# start an instance with a public IP address
- ec2_instance:
name: "public-compute-instance"
key_name: "prod-ssh-key"
vpc_subnet_id: subnet-5ca1ab1e
instance_type: c5.large
security_group: default
network:
assign_public_ip: true
image_id: ami-123456
tags:
Environment: Testing
# start an instance and Add EBS
- ec2_instance:
name: "public-withebs-instance"
vpc_subnet_id: subnet-5ca1ab1e
instance_type: t2.micro
key_name: "prod-ssh-key"
security_group: default
volumes:
- device_name: /dev/sda1
ebs:
volume_size: 16
delete_on_termination: true
# start an instance with a cpu_options
- ec2_instance:
name: "public-cpuoption-instance"
vpc_subnet_id: subnet-5ca1ab1e
tags:
Environment: Testing
instance_type: c4.large
volumes:
- device_name: /dev/sda1
ebs:
delete_on_termination: true
cpu_options:
core_count: 1
threads_per_core: 1
# start an instance and have it begin a Tower callback on boot
- ec2_instance:
name: "tower-callback-test"
key_name: "prod-ssh-key"
vpc_subnet_id: subnet-5ca1ab1e
security_group: default
tower_callback:
# IP or hostname of tower server
tower_address: 1.2.3.4
job_template_id: 876
host_config_key: '[secret config key goes here]'
network:
assign_public_ip: true
image_id: ami-123456
cpu_credit_specification: unlimited
tags:
SomeThing: "A value"
# start an instance with ENI (An existing ENI ID is required)
- ec2_instance:
name: "public-eni-instance"
key_name: "prod-ssh-key"
vpc_subnet_id: subnet-5ca1ab1e
network:
interfaces:
- id: "eni-12345"
tags:
Env: "eni_on"
volumes:
- device_name: /dev/sda1
ebs:
delete_on_termination: true
instance_type: t2.micro
image_id: ami-123456
# add second ENI interface
- ec2_instance:
name: "public-eni-instance"
network:
interfaces:
- id: "eni-12345"
- id: "eni-67890"
image_id: ami-123456
tags:
Env: "eni_on"
instance_type: t2.micro
'''
RETURN = '''
instances:
description: a list of ec2 instances
returned: when wait == true
type: complex
contains:
ami_launch_index:
description: The AMI launch index, which can be used to find this instance in the launch group.
returned: always
type: int
sample: 0
architecture:
description: The architecture of the image
returned: always
type: str
sample: x86_64
block_device_mappings:
description: Any block device mapping entries for the instance.
returned: always
type: complex
contains:
device_name:
description: The device name exposed to the instance (for example, /dev/sdh or xvdh).
returned: always
type: str
sample: /dev/sdh
ebs:
description: Parameters used to automatically set up EBS volumes when the instance is launched.
returned: always
type: complex
contains:
attach_time:
description: The time stamp when the attachment initiated.
returned: always
type: str
sample: "2017-03-23T22:51:24+00:00"
delete_on_termination:
description: Indicates whether the volume is deleted on instance termination.
returned: always
type: bool
sample: true
status:
description: The attachment state.
returned: always
type: str
sample: attached
volume_id:
description: The ID of the EBS volume
returned: always
type: str
sample: vol-12345678
client_token:
description: The idempotency token you provided when you launched the instance, if applicable.
returned: always
type: str
sample: mytoken
ebs_optimized:
description: Indicates whether the instance is optimized for EBS I/O.
returned: always
type: bool
sample: false
hypervisor:
description: The hypervisor type of the instance.
returned: always
type: str
sample: xen
iam_instance_profile:
description: The IAM instance profile associated with the instance, if applicable.
returned: always
type: complex
contains:
arn:
description: The Amazon Resource Name (ARN) of the instance profile.
returned: always
type: str
sample: "arn:aws:iam::000012345678:instance-profile/myprofile"
id:
description: The ID of the instance profile
returned: always
type: str
sample: JFJ397FDG400FG9FD1N
image_id:
description: The ID of the AMI used to launch the instance.
returned: always
type: str
sample: ami-0011223344
instance_id:
description: The ID of the instance.
returned: always
type: str
sample: i-012345678
instance_type:
description: The instance type size of the running instance.
returned: always
type: str
sample: t2.micro
key_name:
description: The name of the key pair, if this instance was launched with an associated key pair.
returned: always
type: str
sample: my-key
launch_time:
description: The time the instance was launched.
returned: always
type: str
sample: "2017-03-23T22:51:24+00:00"
monitoring:
description: The monitoring for the instance.
returned: always
type: complex
contains:
state:
description: Indicates whether detailed monitoring is enabled. Otherwise, basic monitoring is enabled.
returned: always
type: str
sample: disabled
network_interfaces:
description: One or more network interfaces for the instance.
returned: always
type: complex
contains:
association:
description: The association information for an Elastic IPv4 associated with the network interface.
returned: always
type: complex
contains:
ip_owner_id:
description: The ID of the owner of the Elastic IP address.
returned: always
type: str
sample: amazon
public_dns_name:
description: The public DNS name.
returned: always
type: str
sample: ""
public_ip:
description: The public IP address or Elastic IP address bound to the network interface.
returned: always
type: str
sample: 1.2.3.4
attachment:
description: The network interface attachment.
returned: always
type: complex
contains:
attach_time:
description: The time stamp when the attachment initiated.
returned: always
type: str
sample: "2017-03-23T22:51:24+00:00"
attachment_id:
description: The ID of the network interface attachment.
returned: always
type: str
sample: eni-attach-3aff3f
delete_on_termination:
description: Indicates whether the network interface is deleted when the instance is terminated.
returned: always
type: bool
sample: true
device_index:
description: The index of the device on the instance for the network interface attachment.
returned: always
type: int
sample: 0
status:
description: The attachment state.
returned: always
type: str
sample: attached
description:
description: The description.
returned: always
type: str
sample: My interface
groups:
description: One or more security groups.
returned: always
type: list
elements: dict
contains:
group_id:
description: The ID of the security group.
returned: always
type: str
sample: sg-abcdef12
group_name:
description: The name of the security group.
returned: always
type: str
sample: mygroup
ipv6_addresses:
description: One or more IPv6 addresses associated with the network interface.
returned: always
type: list
elements: dict
contains:
ipv6_address:
description: The IPv6 address.
returned: always
type: str
sample: "2001:0db8:85a3:0000:0000:8a2e:0370:7334"
mac_address:
description: The MAC address.
returned: always
type: str
sample: "00:11:22:33:44:55"
network_interface_id:
description: The ID of the network interface.
returned: always
type: str
sample: eni-01234567
owner_id:
description: The AWS account ID of the owner of the network interface.
returned: always
type: str
sample: 01234567890
private_ip_address:
description: The IPv4 address of the network interface within the subnet.
returned: always
type: str
sample: 10.0.0.1
private_ip_addresses:
description: The private IPv4 addresses associated with the network interface.
returned: always
type: list
elements: dict
contains:
association:
description: The association information for an Elastic IP address (IPv4) associated with the network interface.
returned: always
type: complex
contains:
ip_owner_id:
description: The ID of the owner of the Elastic IP address.
returned: always
type: str
sample: amazon
public_dns_name:
description: The public DNS name.
returned: always
type: str
sample: ""
public_ip:
description: The public IP address or Elastic IP address bound to the network interface.
returned: always
type: str
sample: 1.2.3.4
primary:
description: Indicates whether this IPv4 address is the primary private IP address of the network interface.
returned: always
type: bool
sample: true
private_ip_address:
description: The private IPv4 address of the network interface.
returned: always
type: str
sample: 10.0.0.1
source_dest_check:
description: Indicates whether source/destination checking is enabled.
returned: always
type: bool
sample: true
status:
description: The status of the network interface.
returned: always
type: str
sample: in-use
subnet_id:
description: The ID of the subnet for the network interface.
returned: always
type: str
sample: subnet-0123456
vpc_id:
description: The ID of the VPC for the network interface.
returned: always
type: str
sample: vpc-0123456
placement:
description: The location where the instance launched, if applicable.
returned: always
type: complex
contains:
availability_zone:
description: The Availability Zone of the instance.
returned: always
type: str
sample: ap-southeast-2a
group_name:
description: The name of the placement group the instance is in (for cluster compute instances).
returned: always
type: str
sample: ""
tenancy:
description: The tenancy of the instance (if the instance is running in a VPC).
returned: always
type: str
sample: default
private_dns_name:
description: The private DNS name.
returned: always
type: str
sample: ip-10-0-0-1.ap-southeast-2.compute.internal
private_ip_address:
description: The IPv4 address of the network interface within the subnet.
returned: always
type: str
sample: 10.0.0.1
product_codes:
description: One or more product codes.
returned: always
type: list
elements: dict
contains:
product_code_id:
description: The product code.
returned: always
type: str
sample: aw0evgkw8ef3n2498gndfgasdfsd5cce
product_code_type:
description: The type of product code.
returned: always
type: str
sample: marketplace
public_dns_name:
description: The public DNS name assigned to the instance.
returned: always
type: str
sample:
public_ip_address:
description: The public IPv4 address assigned to the instance
returned: always
type: str
sample: 52.0.0.1
root_device_name:
description: The device name of the root device
returned: always
type: str
sample: /dev/sda1
root_device_type:
description: The type of root device used by the AMI.
returned: always
type: str
sample: ebs
security_groups:
description: One or more security groups for the instance.
returned: always
type: list
elements: dict
contains:
group_id:
description: The ID of the security group.
returned: always
type: str
sample: sg-0123456
group_name:
description: The name of the security group.
returned: always
type: str
sample: my-security-group
network.source_dest_check:
description: Indicates whether source/destination checking is enabled.
returned: always
type: bool
sample: true
state:
description: The current state of the instance.
returned: always
type: complex
contains:
code:
description: The low byte represents the state.
returned: always
type: int
sample: 16
name:
description: The name of the state.
returned: always
type: str
sample: running
state_transition_reason:
description: The reason for the most recent state transition.
returned: always
type: str
sample:
subnet_id:
description: The ID of the subnet in which the instance is running.
returned: always
type: str
sample: subnet-00abcdef
tags:
description: Any tags assigned to the instance.
returned: always
type: dict
sample:
virtualization_type:
description: The type of virtualization of the AMI.
returned: always
type: str
sample: hvm
vpc_id:
description: The ID of the VPC the instance is in.
returned: always
type: dict
sample: vpc-0011223344
'''
import re
import uuid
import string
import textwrap
import time
from collections import namedtuple
try:
import boto3
import botocore.exceptions
except ImportError:
pass
from ansible.module_utils.six import text_type, string_types
from ansible.module_utils.six.moves.urllib import parse as urlparse
from ansible.module_utils._text import to_bytes, to_native
import ansible.module_utils.ec2 as ec2_utils
from ansible.module_utils.ec2 import (boto3_conn,
ec2_argument_spec,
get_aws_connection_info,
AWSRetry,
ansible_dict_to_boto3_filter_list,
compare_aws_tags,
boto3_tag_list_to_ansible_dict,
ansible_dict_to_boto3_tag_list,
camel_dict_to_snake_dict)
from ansible.module_utils.aws.core import AnsibleAWSModule
module = None
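# Module-level handle, set to the AnsibleAWSModule instance in main() so helper functions can reach it.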
def tower_callback_script(tower_conf, windows=False, passwd=None):
script_url = 'https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1'
if windows and passwd is not None:
script_tpl = """<powershell>
$admin = [adsi]("WinNT://./administrator, user")
$admin.PSBase.Invoke("SetPassword", "{PASS}")
Invoke-Expression ((New-Object System.Net.Webclient).DownloadString('{SCRIPT}'))
</powershell>
"""
return to_native(textwrap.dedent(script_tpl).format(PASS=passwd, SCRIPT=script_url))
elif windows and passwd is None:
script_tpl = """<powershell>
$admin = [adsi]("WinNT://./administrator, user")
Invoke-Expression ((New-Object System.Net.Webclient).DownloadString('{SCRIPT}'))
</powershell>
"""
return to_native(textwrap.dedent(script_tpl).format(PASS=passwd, SCRIPT=script_url))
elif not windows:
for p in ['tower_address', 'job_template_id', 'host_config_key']:
if p not in tower_conf:
module.fail_json(msg="Incomplete tower_callback configuration. tower_callback.{0} not set.".format(p))
if isinstance(tower_conf['job_template_id'], string_types):
tower_conf['job_template_id'] = urlparse.quote(tower_conf['job_template_id'])
tpl = string.Template(textwrap.dedent("""#!/bin/bash
set -x
retry_attempts=10
attempt=0
while [[ $attempt -lt $retry_attempts ]]
do
status_code=`curl --max-time 10 -v -k -s -i \
--data "host_config_key=${host_config_key}" \
'https://${tower_address}/api/v2/job_templates/${template_id}/callback/' \
| head -n 1 \
| awk '{print $2}'`
if [[ $status_code == 404 ]]
then
status_code=`curl --max-time 10 -v -k -s -i \
--data "host_config_key=${host_config_key}" \
'https://${tower_address}/api/v1/job_templates/${template_id}/callback/' \
| head -n 1 \
| awk '{print $2}'`
# fall back to using V1 API for Tower 3.1 and below, since v2 API will always 404
fi
if [[ $status_code == 201 ]]
then
exit 0
fi
attempt=$(( attempt + 1 ))
echo "$${status_code} received... retrying in 1 minute. (Attempt $${attempt})"
sleep 60
done
exit 1
"""))
return tpl.safe_substitute(tower_address=tower_conf['tower_address'],
template_id=tower_conf['job_template_id'],
host_config_key=tower_conf['host_config_key'])
raise NotImplementedError("Only windows with remote-prep or non-windows with tower job callback supported so far.")
@AWSRetry.jittered_backoff()
def manage_tags(match, new_tags, purge_tags, ec2):
changed = False
old_tags = boto3_tag_list_to_ansible_dict(match['Tags'])
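# compare_aws_tags yields the tags to create/update and, when purge_tags is set, the ones to remove.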
tags_to_set, tags_to_delete = compare_aws_tags(
old_tags, new_tags,
purge_tags=purge_tags,
)
if tags_to_set:
ec2.create_tags(
Resources=[match['InstanceId']],
Tags=ansible_dict_to_boto3_tag_list(tags_to_set))
changed |= True
if tags_to_delete:
delete_with_current_values = dict((k, old_tags.get(k)) for k in tags_to_delete)
ec2.delete_tags(
Resources=[match['InstanceId']],
Tags=ansible_dict_to_boto3_tag_list(delete_with_current_values))
changed |= True
return changed
def build_volume_spec(params):
volumes = params.get('volumes') or []
for volume in volumes:
if 'ebs' in volume:
for int_value in ['volume_size', 'iops']:
if int_value in volume['ebs']:
volume['ebs'][int_value] = int(volume['ebs'][int_value])
return [ec2_utils.snake_dict_to_camel_dict(v, capitalize_first=True) for v in volumes]
def add_or_update_instance_profile(instance, desired_profile_name):
instance_profile_setting = instance.get('IamInstanceProfile')
if instance_profile_setting and desired_profile_name:
if desired_profile_name in (instance_profile_setting.get('Name'), instance_profile_setting.get('Arn')):
# great, the profile we asked for is what's there
return False
else:
desired_arn = determine_iam_role(desired_profile_name)
if instance_profile_setting.get('Arn') == desired_arn:
return False
# update association
ec2 = module.client('ec2')
try:
association = ec2.describe_iam_instance_profile_associations(Filters=[{'Name': 'instance-id', 'Values': [instance['InstanceId']]}])
except botocore.exceptions.ClientError as e:
# check for InvalidAssociationID.NotFound
module.fail_json_aws(e, "Could not find instance profile association")
try:
resp = ec2.replace_iam_instance_profile_association(
AssociationId=association['IamInstanceProfileAssociations'][0]['AssociationId'],
IamInstanceProfile={'Arn': determine_iam_role(desired_profile_name)}
)
return True
except botocore.exceptions.ClientError as e:
module.fail_json_aws(e, "Could not associate instance profile")
if not instance_profile_setting and desired_profile_name:
# create association
ec2 = module.client('ec2')
try:
resp = ec2.associate_iam_instance_profile(
IamInstanceProfile={'Arn': determine_iam_role(desired_profile_name)},
InstanceId=instance['InstanceId']
)
return True
except botocore.exceptions.ClientError as e:
module.fail_json_aws(e, "Could not associate new instance profile")
return False
def build_network_spec(params, ec2=None):
"""
Returns list of interfaces [complex]
Interface type: {
'AssociatePublicIpAddress': True|False,
'DeleteOnTermination': True|False,
'Description': 'string',
'DeviceIndex': 123,
'Groups': [
'string',
],
'Ipv6AddressCount': 123,
'Ipv6Addresses': [
{
'Ipv6Address': 'string'
},
],
'NetworkInterfaceId': 'string',
'PrivateIpAddress': 'string',
'PrivateIpAddresses': [
{
'Primary': True|False,
'PrivateIpAddress': 'string'
},
],
'SecondaryPrivateIpAddressCount': 123,
'SubnetId': 'string'
},
"""
if ec2 is None:
ec2 = module.client('ec2')
interfaces = []
network = params.get('network') or {}
if not network.get('interfaces'):
# they only specified one interface
spec = {
'DeviceIndex': 0,
}
if network.get('assign_public_ip') is not None:
spec['AssociatePublicIpAddress'] = network['assign_public_ip']
if params.get('vpc_subnet_id'):
spec['SubnetId'] = params['vpc_subnet_id']
else:
default_vpc = get_default_vpc(ec2)
if default_vpc is None:
module.fail_json(
msg="No default subnet could be found - you must include a VPC subnet ID (vpc_subnet_id parameter) to create an instance")
else:
sub = get_default_subnet(ec2, default_vpc)
spec['SubnetId'] = sub['SubnetId']
if network.get('private_ip_address'):
spec['PrivateIpAddress'] = network['private_ip_address']
if params.get('security_group') or params.get('security_groups'):
groups = discover_security_groups(
group=params.get('security_group'),
groups=params.get('security_groups'),
subnet_id=spec['SubnetId'],
ec2=ec2
)
spec['Groups'] = [g['GroupId'] for g in groups]
if network.get('description') is not None:
spec['Description'] = network['description']
# TODO more special snowflake network things
return [spec]
# handle list of `network.interfaces` options
for idx, interface_params in enumerate(network.get('interfaces', [])):
spec = {
'DeviceIndex': idx,
}
if isinstance(interface_params, string_types):
# naive case where user gave
# network_interfaces: [eni-1234, eni-4567, ....]
# put into normal data structure so we don't dupe code
interface_params = {'id': interface_params}
if interface_params.get('id') is not None:
# if an ID is provided, we don't want to set any other parameters.
spec['NetworkInterfaceId'] = interface_params['id']
interfaces.append(spec)
continue
spec['DeleteOnTermination'] = interface_params.get('delete_on_termination', True)
if interface_params.get('ipv6_addresses'):
spec['Ipv6Addresses'] = [{'Ipv6Address': a} for a in interface_params.get('ipv6_addresses', [])]
if interface_params.get('private_ip_address'):
spec['PrivateIpAddress'] = interface_params.get('private_ip_address')
if interface_params.get('description'):
spec['Description'] = interface_params.get('description')
if interface_params.get('subnet_id', params.get('vpc_subnet_id')):
spec['SubnetId'] = interface_params.get('subnet_id', params.get('vpc_subnet_id'))
elif not spec.get('SubnetId') and not interface_params.get('id'):
# TODO grab a subnet from default VPC
raise ValueError('Failed to assign subnet to interface {0}'.format(interface_params))
interfaces.append(spec)
return interfaces
def warn_if_public_ip_assignment_changed(instance):
# This is a non-modifiable attribute.
assign_public_ip = (module.params.get('network') or {}).get('assign_public_ip')
if assign_public_ip is None:
return
# Check that public ip assignment is the same and warn if not
public_dns_name = instance.get('PublicDnsName')
if (public_dns_name and not assign_public_ip) or (assign_public_ip and not public_dns_name):
module.warn(
"Unable to modify public ip assignment to {0} for instance {1}. "
"Whether or not to assign a public IP is determined during instance creation.".format(
assign_public_ip, instance['InstanceId']))
def warn_if_cpu_options_changed(instance):
# This is a non-modifiable attribute.
cpu_options = module.params.get('cpu_options')
if cpu_options is None:
return
# Check that the CpuOptions set are the same and warn if not
core_count_curr = instance['CpuOptions'].get('CoreCount')
core_count = cpu_options.get('core_count')
threads_per_core_curr = instance['CpuOptions'].get('ThreadsPerCore')
threads_per_core = cpu_options.get('threads_per_core')
if core_count_curr != core_count:
module.warn(
"Unable to modify core_count from {0} to {1}. "
"Assigning a number of core is determinted during instance creation".format(
core_count_curr, core_count))
if threads_per_core_curr != threads_per_core:
module.warn(
"Unable to modify threads_per_core from {0} to {1}. "
"Assigning a number of threads per core is determined during instance creation.".format(
threads_per_core_curr, threads_per_core))
def discover_security_groups(group, groups, parent_vpc_id=None, subnet_id=None, ec2=None):
if ec2 is None:
ec2 = module.client('ec2')
if subnet_id is not None:
try:
sub = ec2.describe_subnets(SubnetIds=[subnet_id])
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == 'InvalidGroup.NotFound':
module.fail_json(
"Could not find subnet {0} to associate security groups. Please check the vpc_subnet_id and security_groups parameters.".format(
subnet_id
)
)
module.fail_json_aws(e, msg="Error while searching for subnet {0} parent VPC.".format(subnet_id))
except botocore.exceptions.BotoCoreError as e:
module.fail_json_aws(e, msg="Error while searching for subnet {0} parent VPC.".format(subnet_id))
parent_vpc_id = sub['Subnets'][0]['VpcId']
vpc = {
'Name': 'vpc-id',
'Values': [parent_vpc_id]
}
# because filter lists are AND in the security groups API,
# make two separate requests for groups by ID and by name
id_filters = [vpc]
name_filters = [vpc]
if group:
name_filters.append(
dict(
Name='group-name',
Values=[group]
)
)
if group.startswith('sg-'):
id_filters.append(
dict(
Name='group-id',
Values=[group]
)
)
if groups:
name_filters.append(
dict(
Name='group-name',
Values=groups
)
)
if [g for g in groups if g.startswith('sg-')]:
id_filters.append(
dict(
Name='group-id',
Values=[g for g in groups if g.startswith('sg-')]
)
)
found_groups = []
for f_set in (id_filters, name_filters):
if len(f_set) > 1:
found_groups.extend(ec2.get_paginator(
'describe_security_groups'
).paginate(
Filters=f_set
).search('SecurityGroups[]'))
return list(dict((g['GroupId'], g) for g in found_groups).values())
def build_top_level_options(params):
spec = {}
if params.get('image_id'):
spec['ImageId'] = params['image_id']
elif isinstance(params.get('image'), dict):
image = params.get('image', {})
spec['ImageId'] = image.get('id')
if 'ramdisk' in image:
spec['RamdiskId'] = image['ramdisk']
if 'kernel' in image:
spec['KernelId'] = image['kernel']
if not spec.get('ImageId') and not params.get('launch_template'):
module.fail_json(msg="You must include an image_id or image.id parameter to create an instance, or use a launch_template.")
if params.get('key_name') is not None:
spec['KeyName'] = params.get('key_name')
if params.get('user_data') is not None:
spec['UserData'] = to_native(params.get('user_data'))
elif params.get('tower_callback') is not None:
spec['UserData'] = tower_callback_script(
tower_conf=params.get('tower_callback'),
windows=params.get('tower_callback').get('windows', False),
passwd=params.get('tower_callback').get('set_password'),
)
if params.get('launch_template') is not None:
spec['LaunchTemplate'] = {}
if not (params.get('launch_template').get('id') or params.get('launch_template').get('name')):
module.fail_json(msg="Could not create instance with launch template. Either launch_template.name or launch_template.id parameters are required")
if params.get('launch_template').get('id') is not None:
spec['LaunchTemplate']['LaunchTemplateId'] = params.get('launch_template').get('id')
if params.get('launch_template').get('name') is not None:
spec['LaunchTemplate']['LaunchTemplateName'] = params.get('launch_template').get('name')
if params.get('launch_template').get('version') is not None:
spec['LaunchTemplate']['Version'] = to_native(params.get('launch_template').get('version'))
if params.get('detailed_monitoring', False):
spec['Monitoring'] = {'Enabled': True}
if params.get('cpu_credit_specification') is not None:
spec['CreditSpecification'] = {'CpuCredits': params.get('cpu_credit_specification')}
if params.get('tenancy') is not None:
spec['Placement'] = {'Tenancy': params.get('tenancy')}
if params.get('placement_group'):
if 'Placement' in spec:
spec['Placement']['GroupName'] = str(params.get('placement_group'))
else:
spec.setdefault('Placement', {'GroupName': str(params.get('placement_group'))})
if params.get('ebs_optimized') is not None:
spec['EbsOptimized'] = params.get('ebs_optimized')
if params.get('instance_initiated_shutdown_behavior'):
spec['InstanceInitiatedShutdownBehavior'] = params.get('instance_initiated_shutdown_behavior')
if params.get('termination_protection') is not None:
spec['DisableApiTermination'] = params.get('termination_protection')
if params.get('cpu_options') is not None:
spec['CpuOptions'] = {}
spec['CpuOptions']['ThreadsPerCore'] = params.get('cpu_options').get('threads_per_core')
spec['CpuOptions']['CoreCount'] = params.get('cpu_options').get('core_count')
return spec
def build_instance_tags(params, propagate_tags_to_volumes=True):
tags = params.get('tags', {})
if params.get('name') is not None:
if tags is None:
tags = {}
tags['Name'] = params.get('name')
return [
{
'ResourceType': 'volume',
'Tags': ansible_dict_to_boto3_tag_list(tags),
},
{
'ResourceType': 'instance',
'Tags': ansible_dict_to_boto3_tag_list(tags),
},
]
def build_run_instance_spec(params, ec2=None):
if ec2 is None:
ec2 = module.client('ec2')
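# A fresh client token lets EC2 treat retries of this RunInstances call as idempotent.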
spec = dict(
ClientToken=uuid.uuid4().hex,
MaxCount=1,
MinCount=1,
)
# network parameters
spec['NetworkInterfaces'] = build_network_spec(params, ec2)
spec['BlockDeviceMappings'] = build_volume_spec(params)
spec.update(**build_top_level_options(params))
spec['TagSpecifications'] = build_instance_tags(params)
# IAM profile
if params.get('instance_role'):
spec['IamInstanceProfile'] = dict(Arn=determine_iam_role(params.get('instance_role')))
spec['InstanceType'] = params['instance_type']
return spec
def await_instances(ids, state='OK'):
if not module.params.get('wait', True):
# the user asked not to wait for anything
return
if module.check_mode:
# In check mode, there is no change even if you wait.
return
state_opts = {
'OK': 'instance_status_ok',
'STOPPED': 'instance_stopped',
'TERMINATED': 'instance_terminated',
'EXISTS': 'instance_exists',
'RUNNING': 'instance_running',
}
if state not in state_opts:
module.fail_json(msg="Cannot wait for state {0}, invalid state".format(state))
waiter = module.client('ec2').get_waiter(state_opts[state])
try:
waiter.wait(
InstanceIds=ids,
WaiterConfig={
'Delay': 15,
'MaxAttempts': module.params.get('wait_timeout', 600) // 15,
}
)
except botocore.exceptions.WaiterConfigError as e:
module.fail_json(msg="{0}. Error waiting for instances {1} to reach state {2}".format(
to_native(e), ', '.join(ids), state))
except botocore.exceptions.WaiterError as e:
module.warn("Instances {0} took too long to reach state {1}. {2}".format(
', '.join(ids), state, to_native(e)))
def diff_instance_and_params(instance, params, ec2=None, skip=None):
"""boto3 instance obj, module params"""
if ec2 is None:
ec2 = module.client('ec2')
if skip is None:
skip = []
changes_to_apply = []
id_ = instance['InstanceId']
ParamMapper = namedtuple('ParamMapper', ['param_key', 'instance_key', 'attribute_name', 'add_value'])
def value_wrapper(v):
return {'Value': v}
param_mappings = [
ParamMapper('ebs_optimized', 'EbsOptimized', 'ebsOptimized', value_wrapper),
ParamMapper('termination_protection', 'DisableApiTermination', 'disableApiTermination', value_wrapper),
# user data is an immutable property
# ParamMapper('user_data', 'UserData', 'userData', value_wrapper),
]
for mapping in param_mappings:
if params.get(mapping.param_key) is not None and mapping.instance_key not in skip:
value = AWSRetry.jittered_backoff()(ec2.describe_instance_attribute)(Attribute=mapping.attribute_name, InstanceId=id_)
if params.get(mapping.param_key) is not None and value[mapping.instance_key]['Value'] != params.get(mapping.param_key):
arguments = dict(
InstanceId=instance['InstanceId'],
# Attribute=mapping.attribute_name,
)
arguments[mapping.instance_key] = mapping.add_value(params.get(mapping.param_key))
changes_to_apply.append(arguments)
if (params.get('network') or {}).get('source_dest_check') is not None:
# network.source_dest_check is nested, so needs to be treated separately
check = bool(params.get('network').get('source_dest_check'))
if instance['SourceDestCheck'] != check:
changes_to_apply.append(dict(
InstanceId=instance['InstanceId'],
SourceDestCheck={'Value': check},
))
return changes_to_apply
def change_network_attachments(instance, params, ec2):
if (params.get('network') or {}).get('interfaces') is not None:
new_ids = []
for inty in params.get('network').get('interfaces'):
if isinstance(inty, dict) and 'id' in inty:
new_ids.append(inty['id'])
elif isinstance(inty, string_types):
new_ids.append(inty)
# network.interfaces can create the need to attach new interfaces
old_ids = [inty['NetworkInterfaceId'] for inty in instance['NetworkInterfaces']]
to_attach = set(new_ids) - set(old_ids)
for eni_id in to_attach:
ec2.attach_network_interface(
DeviceIndex=new_ids.index(eni_id),
InstanceId=instance['InstanceId'],
NetworkInterfaceId=eni_id,
)
return bool(len(to_attach))
return False
def find_instances(ec2, ids=None, filters=None):
paginator = ec2.get_paginator('describe_instances')
if ids:
return list(paginator.paginate(
InstanceIds=ids,
).search('Reservations[].Instances[]'))
elif filters is None:
module.fail_json(msg="No filters provided when they were required")
elif filters is not None:
# Iterate over a snapshot of the keys: mutating a dict while iterating its
# live keys() view raises "RuntimeError: dictionary keys changed during iteration" on Python 3.8.
for key in list(filters.keys()):
if not key.startswith("tag:"):
filters[key.replace("_", "-")] = filters.pop(key)
return list(paginator.paginate(
Filters=ansible_dict_to_boto3_filter_list(filters)
).search('Reservations[].Instances[]'))
return []
@AWSRetry.jittered_backoff()
def get_default_vpc(ec2):
vpcs = ec2.describe_vpcs(Filters=ansible_dict_to_boto3_filter_list({'isDefault': 'true'}))
if len(vpcs.get('Vpcs', [])):
return vpcs.get('Vpcs')[0]
return None
@AWSRetry.jittered_backoff()
def get_default_subnet(ec2, vpc, availability_zone=None):
subnets = ec2.describe_subnets(
Filters=ansible_dict_to_boto3_filter_list({
'vpc-id': vpc['VpcId'],
'state': 'available',
'default-for-az': 'true',
})
)
if len(subnets.get('Subnets', [])):
if availability_zone is not None:
subs_by_az = dict((subnet['AvailabilityZone'], subnet) for subnet in subnets.get('Subnets'))
if availability_zone in subs_by_az:
return subs_by_az[availability_zone]
# to have a deterministic sorting order, we sort by AZ so we'll always pick the `a` subnet first
# there can only be one default-for-az subnet per AZ, so the AZ key is always unique in this list
by_az = sorted(subnets.get('Subnets'), key=lambda s: s['AvailabilityZone'])
return by_az[0]
return None
def ensure_instance_state(state, ec2=None):
if ec2 is None:
ec2 = module.client('ec2')
if state in ('running', 'started'):
changed, failed, instances, failure_reason = change_instance_state(filters=module.params.get('filters'), desired_state='RUNNING')
if failed:
module.fail_json(
msg="Unable to start instances: {0}".format(failure_reason),
reboot_success=list(changed),
reboot_failed=failed)
module.exit_json(
msg='Instances started',
reboot_success=list(changed),
changed=bool(len(changed)),
reboot_failed=[],
instances=[pretty_instance(i) for i in instances],
)
elif state in ('restarted', 'rebooted'):
changed, failed, instances, failure_reason = change_instance_state(
filters=module.params.get('filters'),
desired_state='STOPPED')
changed, failed, instances, failure_reason = change_instance_state(
filters=module.params.get('filters'),
desired_state='RUNNING')
if failed:
module.fail_json(
msg="Unable to restart instances: {0}".format(failure_reason),
reboot_success=list(changed),
reboot_failed=failed)
module.exit_json(
msg='Instances restarted',
reboot_success=list(changed),
changed=bool(len(changed)),
reboot_failed=[],
instances=[pretty_instance(i) for i in instances],
)
elif state in ('stopped',):
changed, failed, instances, failure_reason = change_instance_state(
filters=module.params.get('filters'),
desired_state='STOPPED')
if failed:
module.fail_json(
msg="Unable to stop instances: {0}".format(failure_reason),
stop_success=list(changed),
stop_failed=failed)
module.exit_json(
msg='Instances stopped',
stop_success=list(changed),
changed=bool(len(changed)),
stop_failed=[],
instances=[pretty_instance(i) for i in instances],
)
elif state in ('absent', 'terminated'):
terminated, terminate_failed, instances, failure_reason = change_instance_state(
filters=module.params.get('filters'),
desired_state='TERMINATED')
if terminate_failed:
module.fail_json(
msg="Unable to terminate instances: {0}".format(failure_reason),
terminate_success=list(terminated),
terminate_failed=terminate_failed)
module.exit_json(
msg='Instances terminated',
terminate_success=list(terminated),
changed=bool(len(terminated)),
terminate_failed=[],
instances=[pretty_instance(i) for i in instances],
)
@AWSRetry.jittered_backoff()
def change_instance_state(filters, desired_state, ec2=None):
"""Takes STOPPED/RUNNING/TERMINATED"""
if ec2 is None:
ec2 = module.client('ec2')
changed = set()
instances = find_instances(ec2, filters=filters)
to_change = set(i['InstanceId'] for i in instances if i['State']['Name'].upper() != desired_state)
unchanged = set()
failure_reason = ""
for inst in instances:
try:
if desired_state == 'TERMINATED':
if module.check_mode:
changed.add(inst['InstanceId'])
continue
# TODO use a client-token to prevent double-sends of these start/stop/terminate commands
# https://docs.aws.amazon.com/AWSEC2/latest/APIReference/Run_Instance_Idempotency.html
resp = ec2.terminate_instances(InstanceIds=[inst['InstanceId']])
for i in resp['TerminatingInstances']:
changed.add(i['InstanceId'])
if desired_state == 'STOPPED':
if inst['State']['Name'] in ('stopping', 'stopped'):
unchanged.add(inst['InstanceId'])
continue
if module.check_mode:
changed.add(inst['InstanceId'])
continue
resp = ec2.stop_instances(InstanceIds=[inst['InstanceId']])
for i in resp['StoppingInstances']:
changed.add(i['InstanceId'])
if desired_state == 'RUNNING':
if module.check_mode:
changed.add(inst['InstanceId'])
continue
resp = ec2.start_instances(InstanceIds=[inst['InstanceId']])
                for i in resp['StartingInstances']:
                    changed.add(i['InstanceId'])
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
try:
failure_reason = to_native(e.message)
except AttributeError:
failure_reason = to_native(e)
if changed:
await_instances(ids=list(changed) + list(unchanged), state=desired_state)
change_failed = list(to_change - changed)
instances = find_instances(ec2, ids=list(i['InstanceId'] for i in instances))
return changed, change_failed, instances, failure_reason
def pretty_instance(i):
instance = camel_dict_to_snake_dict(i, ignore_list=['Tags'])
instance['tags'] = boto3_tag_list_to_ansible_dict(i['Tags'])
return instance
def determine_iam_role(name_or_arn):
if re.match(r'^arn:aws:iam::\d+:instance-profile/[\w+=/,.@-]+$', name_or_arn):
return name_or_arn
iam = module.client('iam', retry_decorator=AWSRetry.jittered_backoff())
try:
role = iam.get_instance_profile(InstanceProfileName=name_or_arn, aws_retry=True)
return role['InstanceProfile']['Arn']
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == 'NoSuchEntity':
module.fail_json_aws(e, msg="Could not find instance_role {0}".format(name_or_arn))
module.fail_json_aws(e, msg="An error occurred while searching for instance_role {0}. Please try supplying the full ARN.".format(name_or_arn))
def handle_existing(existing_matches, changed, ec2, state):
if state in ('running', 'started') and [i for i in existing_matches if i['State']['Name'] != 'running']:
ins_changed, failed, instances, failure_reason = change_instance_state(filters=module.params.get('filters'), desired_state='RUNNING')
if failed:
module.fail_json(msg="Couldn't start instances: {0}. Failure reason: {1}".format(instances, failure_reason))
module.exit_json(
changed=bool(len(ins_changed)) or changed,
instances=[pretty_instance(i) for i in instances],
instance_ids=[i['InstanceId'] for i in instances],
)
changes = diff_instance_and_params(existing_matches[0], module.params)
for c in changes:
AWSRetry.jittered_backoff()(ec2.modify_instance_attribute)(**c)
changed |= bool(changes)
changed |= add_or_update_instance_profile(existing_matches[0], module.params.get('instance_role'))
changed |= change_network_attachments(existing_matches[0], module.params, ec2)
altered = find_instances(ec2, ids=[i['InstanceId'] for i in existing_matches])
module.exit_json(
changed=bool(len(changes)) or changed,
instances=[pretty_instance(i) for i in altered],
instance_ids=[i['InstanceId'] for i in altered],
changes=changes,
)
def ensure_present(existing_matches, changed, ec2, state):
if len(existing_matches):
try:
handle_existing(existing_matches, changed, ec2, state)
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(
e, msg="Failed to handle existing instances {0}".format(', '.join([i['InstanceId'] for i in existing_matches])),
# instances=[pretty_instance(i) for i in existing_matches],
# instance_ids=[i['InstanceId'] for i in existing_matches],
)
try:
instance_spec = build_run_instance_spec(module.params)
        # If check mode is enabled, suspend the 'ensure' logic and just return the spec.
if module.check_mode:
module.exit_json(
changed=True,
spec=instance_spec,
)
instance_response = run_instances(ec2, **instance_spec)
instances = instance_response['Instances']
instance_ids = [i['InstanceId'] for i in instances]
for ins in instances:
changes = diff_instance_and_params(ins, module.params, skip=['UserData', 'EbsOptimized'])
for c in changes:
try:
AWSRetry.jittered_backoff()(ec2.modify_instance_attribute)(**c)
except botocore.exceptions.ClientError as e:
module.fail_json_aws(e, msg="Could not apply change {0} to new instance.".format(str(c)))
if not module.params.get('wait'):
module.exit_json(
changed=True,
instance_ids=instance_ids,
spec=instance_spec,
)
await_instances(instance_ids)
instances = ec2.get_paginator('describe_instances').paginate(
InstanceIds=instance_ids
).search('Reservations[].Instances[]')
module.exit_json(
changed=True,
instances=[pretty_instance(i) for i in instances],
instance_ids=instance_ids,
spec=instance_spec,
)
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Failed to create new EC2 instance")
@AWSRetry.jittered_backoff()
def run_instances(ec2, **instance_spec):
try:
return ec2.run_instances(**instance_spec)
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == 'InvalidParameterValue' and "Invalid IAM Instance Profile ARN" in e.response['Error']['Message']:
# If the instance profile has just been created, it takes some time to be visible by ec2
# So we wait 10 second and retry the run_instances
time.sleep(10)
return ec2.run_instances(**instance_spec)
else:
raise e
def main():
global module
argument_spec = ec2_argument_spec()
argument_spec.update(dict(
state=dict(default='present', choices=['present', 'started', 'running', 'stopped', 'restarted', 'rebooted', 'terminated', 'absent']),
wait=dict(default=True, type='bool'),
wait_timeout=dict(default=600, type='int'),
# count=dict(default=1, type='int'),
image=dict(type='dict'),
image_id=dict(type='str'),
instance_type=dict(default='t2.micro', type='str'),
user_data=dict(type='str'),
tower_callback=dict(type='dict'),
ebs_optimized=dict(type='bool'),
vpc_subnet_id=dict(type='str', aliases=['subnet_id']),
availability_zone=dict(type='str'),
security_groups=dict(default=[], type='list'),
security_group=dict(type='str'),
instance_role=dict(type='str'),
name=dict(type='str'),
tags=dict(type='dict'),
purge_tags=dict(type='bool', default=False),
filters=dict(type='dict', default=None),
launch_template=dict(type='dict'),
key_name=dict(type='str'),
cpu_credit_specification=dict(type='str', choices=['standard', 'unlimited']),
cpu_options=dict(type='dict', options=dict(
core_count=dict(type='int', required=True),
threads_per_core=dict(type='int', choices=[1, 2], required=True)
)),
tenancy=dict(type='str', choices=['dedicated', 'default']),
placement_group=dict(type='str'),
instance_initiated_shutdown_behavior=dict(type='str', choices=['stop', 'terminate']),
termination_protection=dict(type='bool'),
detailed_monitoring=dict(type='bool'),
instance_ids=dict(default=[], type='list'),
network=dict(default=None, type='dict'),
volumes=dict(default=None, type='list'),
))
# running/present are synonyms
# as are terminated/absent
module = AnsibleAWSModule(
argument_spec=argument_spec,
mutually_exclusive=[
['security_groups', 'security_group'],
['availability_zone', 'vpc_subnet_id'],
['tower_callback', 'user_data'],
['image_id', 'image'],
],
supports_check_mode=True
)
if module.params.get('network'):
if module.params.get('network').get('interfaces'):
if module.params.get('security_group'):
module.fail_json(msg="Parameter network.interfaces can't be used with security_group")
if module.params.get('security_groups'):
module.fail_json(msg="Parameter network.interfaces can't be used with security_groups")
state = module.params.get('state')
ec2 = module.client('ec2')
if module.params.get('filters') is None:
filters = {
# all states except shutting-down and terminated
'instance-state-name': ['pending', 'running', 'stopping', 'stopped']
}
if state == 'stopped':
# only need to change instances that aren't already stopped
filters['instance-state-name'] = ['stopping', 'pending', 'running']
if isinstance(module.params.get('instance_ids'), string_types):
filters['instance-id'] = [module.params.get('instance_ids')]
elif isinstance(module.params.get('instance_ids'), list) and len(module.params.get('instance_ids')):
filters['instance-id'] = module.params.get('instance_ids')
else:
if not module.params.get('vpc_subnet_id'):
if module.params.get('network'):
# grab AZ from one of the ENIs
ints = module.params.get('network').get('interfaces')
if ints:
filters['network-interface.network-interface-id'] = []
for i in ints:
if isinstance(i, dict):
i = i['id']
filters['network-interface.network-interface-id'].append(i)
else:
sub = get_default_subnet(ec2, get_default_vpc(ec2), availability_zone=module.params.get('availability_zone'))
filters['subnet-id'] = sub['SubnetId']
else:
filters['subnet-id'] = [module.params.get('vpc_subnet_id')]
if module.params.get('name'):
filters['tag:Name'] = [module.params.get('name')]
if module.params.get('image_id'):
filters['image-id'] = [module.params.get('image_id')]
elif (module.params.get('image') or {}).get('id'):
filters['image-id'] = [module.params.get('image', {}).get('id')]
module.params['filters'] = filters
if module.params.get('cpu_options') and not module.botocore_at_least('1.10.16'):
module.fail_json(msg="cpu_options is only supported with botocore >= 1.10.16")
existing_matches = find_instances(ec2, filters=module.params.get('filters'))
changed = False
if state not in ('terminated', 'absent') and existing_matches:
for match in existing_matches:
warn_if_public_ip_assignment_changed(match)
warn_if_cpu_options_changed(match)
tags = module.params.get('tags') or {}
name = module.params.get('name')
if name:
tags['Name'] = name
changed |= manage_tags(match, tags, module.params.get('purge_tags', False), ec2)
if state in ('present', 'running', 'started'):
ensure_present(existing_matches=existing_matches, changed=changed, ec2=ec2, state=state)
elif state in ('restarted', 'rebooted', 'stopped', 'absent', 'terminated'):
if existing_matches:
ensure_instance_state(state, ec2)
else:
module.exit_json(
msg='No matching instances found',
changed=False,
instances=[],
)
else:
module.fail_json(msg="We don't handle the state {0}".format(state))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,024 |
BUG: ec2_instance module stopped working after update to python v 3.8 / ansible v 2.9.1
|
##### SUMMARY
ec2_instance module stopped working after update to v 2.9.1
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_instance module
https://docs.ansible.com/ansible/latest/modules/ec2_instance_module.html
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user0/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.0 (default, Oct 23 2019, 18:51:26) [GCC 9.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
```
uname -a
Linux work3 5.3.11-arch1-1 #1 SMP PREEMPT Tue, 12 Nov 2019 22:19:48 +0000 x86_64 GNU/Linux
```
##### STEPS TO REPRODUCE
previously working snippet
```yaml
- name: stop image instance
register: image_instance
ec2_instance:
region: "{{amazon_region}}"
instance_ids: "{{instance_id}}"
state: stopped
wait: yes
wait_timeout: 320
aws_access_key: "{{aws_access_key_id}}"
aws_secret_key: "{{aws_secret_access_key}}"
validate_certs: no
```
##### EXPECTED RESULTS
invocation success
##### ACTUAL RESULTS
invocation failure
```paste below
Traceback (most recent call last):
File "/home/user0/.ansible/tmp/ansible-tmp-1574115528.3222384-162345873183700/AnsiballZ_ec2_instance.py", line 102, in <module>
_ansiballz_main()
File "/home/user0/.ansible/tmp/ansible-tmp-1574115528.3222384-162345873183700/AnsiballZ_ec2_instance.py", line 94, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/user0/.ansible/tmp/ansible-tmp-1574115528.3222384-162345873183700/AnsiballZ_ec2_instance.py", line 40, in invoke_module
runpy.run_module(mod_name='ansible.modules.cloud.amazon.ec2_instance', init_globals=None, run_name='__main__', alter_sys=True)
File "/usr/lib/python3.8/runpy.py", line 205, in run_module
return _run_module_code(code, init_globals, run_name, mod_spec)
File "/usr/lib/python3.8/runpy.py", line 95, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/usr/lib/python3.8/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmp/ansible_ec2_instance_payload_hz5qgsqh/ansible_ec2_instance_payload.zip/ansible/modules/cloud/amazon/ec2_instance.py", line 1710, in <module>
File "/tmp/ansible_ec2_instance_payload_hz5qgsqh/ansible_ec2_instance_payload.zip/ansible/modules/cloud/amazon/ec2_instance.py", line 1681, in main
File "/tmp/ansible_ec2_instance_payload_hz5qgsqh/ansible_ec2_instance_payload.zip/ansible/modules/cloud/amazon/ec2_instance.py", line 1297, in find_instances
RuntimeError: dictionary keys changed during iteration
```
#### code in error:
The code in error is in fact doing exactly what the traceback says: changing dictionary keys during iteration:
[ec2_instance.py#L1297](https://github.com/ansible/ansible/blob/v2.9.1/lib/ansible/modules/cloud/amazon/ec2_instance.py#L1297)
```
for key in filters.keys():
if not key.startswith("tag:"):
filters[key.replace("_", "-")] = filters.pop(key)
```
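A Python 3.8-safe rewrite is to snapshot the keys before mutating the dict. A minimal sketch (the filter values here are made up for illustration):
```python
# Sketch only: iterating over a snapshot of the keys lets us pop and re-insert
# entries without mutating the dict while it is being iterated.
filters = {"instance_state_name": ["running"], "tag:Name": ["web"]}
for key in list(filters):
    if not key.startswith("tag:"):
        filters[key.replace("_", "-")] = filters.pop(key)
print(filters)  # {'tag:Name': ['web'], 'instance-state-name': ['running']}
```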
|
https://github.com/ansible/ansible/issues/65024
|
https://github.com/ansible/ansible/pull/65521
|
c266fc3b74665fd7313b84f2c0a050024151475c
|
7d3cc250ef548771f788b9f0119eca1d8164ff96
| 2019-11-18T23:26:33Z |
python
| 2019-12-05T10:02:59Z |
lib/ansible/modules/cloud/amazon/ec2_vol_info.py
|
#!/usr/bin/python
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ec2_vol_info
short_description: Gather information about ec2 volumes in AWS
description:
- Gather information about ec2 volumes in AWS.
- This module was called C(ec2_vol_facts) before Ansible 2.9. The usage did not change.
version_added: "2.1"
requirements: [ boto3 ]
author: "Rob White (@wimnat)"
options:
filters:
type: dict
description:
- A dict of filters to apply. Each dict item consists of a filter key and a filter value.
- See U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVolumes.html) for possible filters.
extends_documentation_fragment:
- aws
- ec2
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
# Gather information about all volumes
- ec2_vol_info:
# Gather information about a particular volume using volume ID
- ec2_vol_info:
filters:
volume-id: vol-00112233
# Gather information about any volume with a tag key Name and value Example
- ec2_vol_info:
filters:
"tag:Name": Example
# Gather information about any volume that is attached
- ec2_vol_info:
filters:
attachment.status: attached
'''
# TODO: Disabled the RETURN as it was breaking docs building. Someone needs to
# fix this
RETURN = '''# '''
import traceback
try:
from botocore.exceptions import ClientError
except ImportError:
pass # caught by imported HAS_BOTO3
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ec2 import ec2_argument_spec, get_aws_connection_info, boto3_conn, HAS_BOTO3, boto3_tag_list_to_ansible_dict
from ansible.module_utils.ec2 import ansible_dict_to_boto3_filter_list, camel_dict_to_snake_dict
def get_volume_info(volume, region):
attachment = volume["attachments"]
volume_info = {
'create_time': volume["create_time"],
'id': volume["volume_id"],
'encrypted': volume["encrypted"],
'iops': volume["iops"] if "iops" in volume else None,
'size': volume["size"],
'snapshot_id': volume["snapshot_id"],
'status': volume["state"],
'type': volume["volume_type"],
'zone': volume["availability_zone"],
'region': region,
'attachment_set': {
'attach_time': attachment[0]["attach_time"] if len(attachment) > 0 else None,
'device': attachment[0]["device"] if len(attachment) > 0 else None,
'instance_id': attachment[0]["instance_id"] if len(attachment) > 0 else None,
'status': attachment[0]["state"] if len(attachment) > 0 else None,
'delete_on_termination': attachment[0]["delete_on_termination"] if len(attachment) > 0 else None
},
'tags': boto3_tag_list_to_ansible_dict(volume['tags']) if "tags" in volume else None
}
return volume_info
def describe_volumes_with_backoff(connection, filters):
paginator = connection.get_paginator('describe_volumes')
return paginator.paginate(Filters=filters).build_full_result()
def list_ec2_volumes(connection, module, region):
# Replace filter key underscores with dashes, for compatibility, except if we're dealing with tags
sanitized_filters = module.params.get("filters")
    # iterate over a snapshot of the keys: mutating a dict while iterating over
    # it directly raises "RuntimeError: dictionary keys changed during iteration"
    # on Python 3.8 (see issue 65024 above)
    for key in list(sanitized_filters):
if not key.startswith("tag:"):
sanitized_filters[key.replace("_", "-")] = sanitized_filters.pop(key)
volume_dict_array = []
try:
all_volumes = describe_volumes_with_backoff(connection, ansible_dict_to_boto3_filter_list(sanitized_filters))
except ClientError as e:
module.fail_json(msg=e.response, exception=traceback.format_exc())
for volume in all_volumes["Volumes"]:
volume = camel_dict_to_snake_dict(volume, ignore_list=['Tags'])
volume_dict_array.append(get_volume_info(volume, region))
module.exit_json(volumes=volume_dict_array)
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
filters=dict(default={}, type='dict')
)
)
module = AnsibleModule(argument_spec=argument_spec)
if module._name == 'ec2_vol_facts':
module.deprecate("The 'ec2_vol_facts' module has been renamed to 'ec2_vol_info'", version='2.13')
if not HAS_BOTO3:
module.fail_json(msg='boto3 required for this module')
region, ec2_url, aws_connect_params = get_aws_connection_info(module, boto3=True)
connection = boto3_conn(
module,
conn_type='client',
resource='ec2',
region=region,
endpoint=ec2_url,
**aws_connect_params
)
list_ec2_volumes(connection, module, region)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,319 |
mysql_info: change order of collecting and filtering items
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
collect data based on the filters
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
mysql_info
##### ADDITIONAL INFORMATION
On large databases mysql_info runs for a very long time because it calculates the size of all databases/tables even when this information is not needed (for example, when querying user information only)
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
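A minimal sketch of the requested behavior, resolving the filter before collecting so that only the matching collectors run (the function and collector names here are illustrative, not the module's actual API):
```python
# Sketch only: resolve the filter first, then run just the matching collectors,
# so expensive subsets are never computed unless they were requested.
def get_info(filter_, collectors):
    """collectors maps subset name -> zero-argument callable."""
    if not filter_:
        return {name: collect() for name, collect in collectors.items()}
    include = {f for f in filter_ if not f.startswith('!')}
    exclude = {f.lstrip('!') for f in filter_ if f.startswith('!')}
    # mirror the documented behavior: if both are given, excludes are ignored
    wanted = include if include else set(collectors) - exclude
    return {name: collect() for name, collect in collectors.items() if name in wanted}

def expensive_size_scan():
    # stands in for the per-table size aggregation that makes large servers slow
    raise RuntimeError("should not run when only 'version' was requested")

info = get_info(['version'], {
    'version': lambda: {'major': 5, 'minor': 5, 'release': 60},
    'databases': expensive_size_scan,
})
print(info)  # {'version': {'major': 5, 'minor': 5, 'release': 60}}
```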
|
https://github.com/ansible/ansible/issues/63319
|
https://github.com/ansible/ansible/pull/63371
|
8b684644e0f30e180b690297abc04a67641a8c9c
|
c59e061cff2e883f566a31b4f88e62f2fbd680e7
| 2019-10-10T05:51:35Z |
python
| 2019-12-05T13:29:58Z |
changelogs/fragments/63371-mysql_info_add_exclude_fields_parameter.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,319 |
mysql_info: change order of collecting and filtering items
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
collect data based on the filters
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
mysql_info
##### ADDITIONAL INFORMATION
On large databases mysql_info runs for a very long time because it calculates the size of all databases/tables even when this information is not needed (for example, when querying user information only)
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/63319
|
https://github.com/ansible/ansible/pull/63371
|
8b684644e0f30e180b690297abc04a67641a8c9c
|
c59e061cff2e883f566a31b4f88e62f2fbd680e7
| 2019-10-10T05:51:35Z |
python
| 2019-12-05T13:29:58Z |
lib/ansible/modules/database/mysql/mysql_info.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Andrew Klychkov (@Andersson007) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: mysql_info
short_description: Gather information about MySQL servers
description:
- Gathers information about MySQL servers.
version_added: '2.9'
options:
filter:
description:
- Limit the collected information by comma separated string or YAML list.
- Allowable values are C(version), C(databases), C(settings), C(global_status),
C(users), C(engines), C(master_status), C(slave_status), C(slave_hosts).
- By default, collects all subsets.
    - You can use '!' before a value (for example, C(!settings)) to exclude it from the information.
- If you pass including and excluding values to the filter, for example, I(filter=!settings,version),
the excluding values, C(!settings) in this case, will be ignored.
type: list
elements: str
login_db:
description:
- Database name to connect to.
- It makes sense if I(login_user) is allowed to connect to a specific database only.
type: str
author:
- Andrew Klychkov (@Andersson007)
extends_documentation_fragment: mysql
'''
EXAMPLES = r'''
# Display info from mysql-hosts group (using creds from ~/.my.cnf to connect):
# ansible mysql-hosts -m mysql_info
# Display only databases and users info:
# ansible mysql-hosts -m mysql_info -a 'filter=databases,users'
# Display only slave status:
# ansible standby -m mysql_info -a 'filter=slave_status'
# Display all info from databases group except settings:
# ansible databases -m mysql_info -a 'filter=!settings'
- name: Collect all possible information using passwordless root access
mysql_info:
login_user: root
- name: Get MySQL version with non-default credentials
mysql_info:
login_user: mysuperuser
login_password: mysuperpass
filter: version
- name: Collect all info except settings and users by root
mysql_info:
login_user: root
login_password: rootpass
filter: "!settings,!users"
- name: Collect info about databases and version using ~/.my.cnf as a credential file
become: yes
mysql_info:
filter:
- databases
- version
- name: Collect info about databases and version using ~alice/.my.cnf as a credential file
become: yes
mysql_info:
config_file: /home/alice/.my.cnf
filter:
- databases
'''
RETURN = r'''
version:
description: Database server version.
returned: if not excluded by filter
type: dict
sample: { "version": { "major": 5, "minor": 5, "release": 60 } }
contains:
major:
description: Major server version.
returned: if not excluded by filter
type: int
sample: 5
minor:
description: Minor server version.
returned: if not excluded by filter
type: int
sample: 5
release:
description: Release server version.
returned: if not excluded by filter
type: int
sample: 60
databases:
description: Information about databases.
returned: if not excluded by filter
type: dict
sample:
- { "mysql": { "size": 656594 }, "information_schema": { "size": 73728 } }
contains:
size:
description: Database size in bytes.
returned: if not excluded by filter
type: dict
sample: { 'size': 656594 }
settings:
description: Global settings (variables) information.
returned: if not excluded by filter
type: dict
sample:
- { "innodb_open_files": 300, innodb_page_size": 16384 }
global_status:
description: Global status information.
returned: if not excluded by filter
type: dict
sample:
- { "Innodb_buffer_pool_read_requests": 123, "Innodb_buffer_pool_reads": 32 }
version_added: "2.10"
users:
description: Users information.
returned: if not excluded by filter
type: dict
sample:
- { "localhost": { "root": { "Alter_priv": "Y", "Alter_routine_priv": "Y" } } }
engines:
description: Information about the server's storage engines.
returned: if not excluded by filter
type: dict
sample:
- { "CSV": { "Comment": "CSV storage engine", "Savepoints": "NO", "Support": "YES", "Transactions": "NO", "XA": "NO" } }
master_status:
description: Master status information.
returned: if master
type: dict
sample:
- { "Binlog_Do_DB": "", "Binlog_Ignore_DB": "mysql", "File": "mysql-bin.000001", "Position": 769 }
slave_status:
description: Slave status information.
returned: if standby
type: dict
sample:
- { "192.168.1.101": { "3306": { "replication_user": { "Connect_Retry": 60, "Exec_Master_Log_Pos": 769, "Last_Errno": 0 } } } }
slave_hosts:
description: Slave status information.
returned: if master
type: dict
sample:
- { "2": { "Host": "", "Master_id": 1, "Port": 3306 } }
'''
from decimal import Decimal
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.mysql import (
mysql_connect,
mysql_common_argument_spec,
mysql_driver,
mysql_driver_fail_msg,
)
from ansible.module_utils.six import iteritems
from ansible.module_utils._text import to_native
# ===========================================
# MySQL module specific support methods.
#
class MySQL_Info(object):
"""Class for collection MySQL instance information.
Arguments:
module (AnsibleModule): Object of AnsibleModule class.
cursor (pymysql/mysql-python): Cursor class for interaction with
the database.
Note:
If you need to add a new subset:
1. add a new key with the same name to self.info attr in self.__init__()
2. add a new private method to get the information
3. add invocation of the new method to self.__collect()
4. add info about the new subset to the DOCUMENTATION block
5. add info about the new subset with an example to RETURN block
"""
def __init__(self, module, cursor):
self.module = module
self.cursor = cursor
self.info = {
'version': {},
'databases': {},
'settings': {},
'global_status': {},
'engines': {},
'users': {},
'master_status': {},
'slave_hosts': {},
'slave_status': {},
}
def get_info(self, filter_):
"""Get MySQL instance information based on filter_.
Arguments:
filter_ (list): List of collected subsets (e.g., databases, users, etc.),
when it is empty, return all available information.
"""
self.__collect()
inc_list = []
exc_list = []
if filter_:
partial_info = {}
for fi in filter_:
if fi.lstrip('!') not in self.info:
self.module.warn('filter element: %s is not allowable, ignored' % fi)
continue
if fi[0] == '!':
exc_list.append(fi.lstrip('!'))
else:
inc_list.append(fi)
if inc_list:
for i in self.info:
if i in inc_list:
partial_info[i] = self.info[i]
else:
for i in self.info:
if i not in exc_list:
partial_info[i] = self.info[i]
return partial_info
else:
return self.info
def __collect(self):
"""Collect all possible subsets."""
self.__get_databases()
self.__get_global_variables()
self.__get_global_status()
self.__get_engines()
self.__get_users()
self.__get_master_status()
self.__get_slave_status()
self.__get_slaves()
def __get_engines(self):
"""Get storage engines info."""
res = self.__exec_sql('SHOW ENGINES')
if res:
for line in res:
engine = line['Engine']
self.info['engines'][engine] = {}
for vname, val in iteritems(line):
if vname != 'Engine':
self.info['engines'][engine][vname] = val
def __convert(self, val):
"""Convert unserializable data."""
try:
if isinstance(val, Decimal):
val = float(val)
else:
val = int(val)
except ValueError:
pass
except TypeError:
pass
return val
def __get_global_variables(self):
"""Get global variables (instance settings)."""
res = self.__exec_sql('SHOW GLOBAL VARIABLES')
if res:
for var in res:
self.info['settings'][var['Variable_name']] = self.__convert(var['Value'])
ver = self.info['settings']['version'].split('.')
release = ver[2].split('-')[0]
self.info['version'] = dict(
major=int(ver[0]),
minor=int(ver[1]),
release=int(release),
)
def __get_global_status(self):
"""Get global status."""
res = self.__exec_sql('SHOW GLOBAL STATUS')
if res:
for var in res:
self.info['global_status'][var['Variable_name']] = self.__convert(var['Value'])
def __get_master_status(self):
"""Get master status if the instance is a master."""
res = self.__exec_sql('SHOW MASTER STATUS')
if res:
for line in res:
for vname, val in iteritems(line):
self.info['master_status'][vname] = self.__convert(val)
def __get_slave_status(self):
"""Get slave status if the instance is a slave."""
res = self.__exec_sql('SHOW SLAVE STATUS')
if res:
for line in res:
host = line['Master_Host']
if host not in self.info['slave_status']:
self.info['slave_status'][host] = {}
port = line['Master_Port']
if port not in self.info['slave_status'][host]:
self.info['slave_status'][host][port] = {}
user = line['Master_User']
if user not in self.info['slave_status'][host][port]:
self.info['slave_status'][host][port][user] = {}
for vname, val in iteritems(line):
if vname not in ('Master_Host', 'Master_Port', 'Master_User'):
self.info['slave_status'][host][port][user][vname] = self.__convert(val)
def __get_slaves(self):
"""Get slave hosts info if the instance is a master."""
res = self.__exec_sql('SHOW SLAVE HOSTS')
if res:
for line in res:
srv_id = line['Server_id']
if srv_id not in self.info['slave_hosts']:
self.info['slave_hosts'][srv_id] = {}
for vname, val in iteritems(line):
if vname != 'Server_id':
self.info['slave_hosts'][srv_id][vname] = self.__convert(val)
def __get_users(self):
"""Get user info."""
res = self.__exec_sql('SELECT * FROM mysql.user')
if res:
for line in res:
host = line['Host']
if host not in self.info['users']:
self.info['users'][host] = {}
user = line['User']
self.info['users'][host][user] = {}
for vname, val in iteritems(line):
if vname not in ('Host', 'User'):
self.info['users'][host][user][vname] = self.__convert(val)
def __get_databases(self):
"""Get info about databases."""
query = ('SELECT table_schema AS "name", '
'SUM(data_length + index_length) AS "size" '
'FROM information_schema.TABLES GROUP BY table_schema')
res = self.__exec_sql(query)
if res:
for db in res:
self.info['databases'][db['name']] = {}
self.info['databases'][db['name']]['size'] = int(db['size'])
def __exec_sql(self, query, ddl=False):
"""Execute SQL.
Arguments:
ddl (bool): If True, return True or False.
Used for queries that don't return any rows
(mainly for DDL queries) (default False).
"""
try:
self.cursor.execute(query)
if not ddl:
res = self.cursor.fetchall()
return res
return True
except Exception as e:
self.module.fail_json(msg="Cannot execute SQL '%s': %s" % (query, to_native(e)))
return False
# ===========================================
# Module execution.
#
def main():
argument_spec = mysql_common_argument_spec()
argument_spec.update(
login_db=dict(type='str'),
filter=dict(type='list'),
)
    # The module only gathers information and changes nothing,
    # so check_mode is supported trivially
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
db = module.params['login_db']
connect_timeout = module.params['connect_timeout']
login_user = module.params['login_user']
login_password = module.params['login_password']
ssl_cert = module.params['client_cert']
ssl_key = module.params['client_key']
ssl_ca = module.params['ca_cert']
config_file = module.params['config_file']
filter_ = module.params['filter']
if filter_:
filter_ = [f.strip() for f in filter_]
if mysql_driver is None:
module.fail_json(msg=mysql_driver_fail_msg)
cursor = mysql_connect(module, login_user, login_password,
config_file, ssl_cert, ssl_key, ssl_ca, db,
connect_timeout=connect_timeout, cursor_class='DictCursor')
###############################
# Create object and do main job
mysql = MySQL_Info(module, cursor)
module.exit_json(changed=False, **mysql.get_info(filter_))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,319 |
mysql_info: change order of collecting and filtering items
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
collect data based on the filters
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
mysql_info
##### ADDITIONAL INFORMATION
On large databases mysql_info runs for a very long time because it calculates the size of all databases/tables even when this information is not needed (for example, when querying user information only)
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/63319
|
https://github.com/ansible/ansible/pull/63371
|
8b684644e0f30e180b690297abc04a67641a8c9c
|
c59e061cff2e883f566a31b4f88e62f2fbd680e7
| 2019-10-10T05:51:35Z |
python
| 2019-12-05T13:29:58Z |
test/integration/targets/mysql_info/aliases
|
destructive
shippable/posix/group1
skip/osx
skip/freebsd
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,319 |
mysql_info: change order of collecting and filtering items
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
collect data based on the filters
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
mysql_info
##### ADDITIONAL INFORMATION
On large databases mysql_info runs for a very long time because it calculates the size of all databases/tables even when this information is not needed (for example, when querying user information only)
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/63319
|
https://github.com/ansible/ansible/pull/63371
|
8b684644e0f30e180b690297abc04a67641a8c9c
|
c59e061cff2e883f566a31b4f88e62f2fbd680e7
| 2019-10-10T05:51:35Z |
python
| 2019-12-05T13:29:58Z |
test/integration/targets/mysql_info/tasks/main.yml
|
# Test code for mysql_info module
# Copyright: (c) 2019, Andrew Klychkov (@Andersson007) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
###################
# Prepare for tests
#
# Create role for tests
- name: mysql_info - create mysql user {{ user_name }}
mysql_user:
name: '{{ user_name }}'
password: '{{ user_pass }}'
state: present
priv: '*.*:ALL'
login_unix_socket: '{{ mysql_socket }}'
# Create default MySQL config file with credentials
- name: mysql_info - create default config file
template:
src: my.cnf.j2
dest: '/root/.my.cnf'
mode: 0400
# Create non-default MySQL config file with credentials
- name: mysql_info - create non-default config file
template:
src: my.cnf.j2
dest: '/root/non-default_my.cnf'
mode: 0400
###############
# Do tests
# Access by default cred file
- name: mysql_info - collect default cred file
mysql_info:
login_user: '{{ user_name }}'
register: result
- assert:
that:
- result.changed == false
- result.version != {}
- result.settings != {}
- result.global_status != {}
- result.databases != {}
- result.engines != {}
- result.users != {}
# Access by non-default cred file
- name: mysql_info - check non-default cred file
mysql_info:
login_user: '{{ user_name }}'
config_file: '/root/non-default_my.cnf'
register: result
- assert:
that:
- result.changed == false
- result.version != {}
# Remove cred files
- name: mysql_info - remove cred files
file:
path: '{{ item }}'
state: absent
with_items:
- '/root/.my.cnf'
- '/root/non-default_my.cnf'
# Access with password
- name: mysql_info - check access with password
mysql_info:
login_user: '{{ user_name }}'
login_password: '{{ user_pass }}'
register: result
- assert:
that:
- result.changed == false
- result.version != {}
# Test excluding
- name: Collect all info except settings and users
mysql_info:
login_user: '{{ user_name }}'
login_password: '{{ user_pass }}'
filter: "!settings,!users"
register: result
- assert:
that:
- result.changed == false
- result.version != {}
- result.global_status != {}
- result.databases != {}
- result.engines != {}
- result.settings is not defined
- result.users is not defined
# Test including
- name: Collect info only about version and databases
mysql_info:
login_user: '{{ user_name }}'
login_password: '{{ user_pass }}'
filter:
- version
- databases
register: result
- assert:
that:
- result.changed == false
- result.version != {}
- result.databases != {}
- result.engines is not defined
- result.settings is not defined
- result.global_status is not defined
- result.users is not defined
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,121 |
Encrypting string from STDIN output result right after value
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When encrypting a single variable from stdin, the resulting output appears immediately after the typed value, on the same line.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`ansible/lib/ansible/cli/vault.py`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible --version
ansible 2.9.1
config file = /Users/alexandrechouinard/.ansible.cfg
configured module search path = ['/Users/alexandrechouinard/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/alexandrechouinard/workspace/repos/ops2.0/provisioning-prometheus-vm/venv/lib/python3.7/site-packages/ansible
executable location = /Users/alexandrechouinard/workspace/repos/ops2.0/provisioning-prometheus-vm/venv/bin/ansible
python version = 3.7.4 (default, Sep 7 2019, 18:27:02) [Clang 10.0.1 (clang-1001.0.46.4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ansible-config dump --only-changed
DEFAULT_REMOTE_USER(env: ANSIBLE_REMOTE_USER) = alexc
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Mac OSX
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. Encrypt a string using STDIN
1. Input vault password 2 times
1. Type variable value
1. Input ctrl+d two times
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
ansible-vault encrypt_string --stdin-name 'variable'
New Vault password:
Confirm New Vault password:
Reading plaintext input from stdin. (ctrl-d to end input)
value
variable: !vault |
$ANSIBLE_VAULT;1.1;AES256
63393237383235396431646465663363663433366232623736633136313039626264663832333731
6136373531393136656135343135393964393439633038610a313130336635366362316264343662
32343834656337643338393430636533366131326166323934623834646335626639393835393762
3530306336356336660a306366333839376364666561613736306261393966613035623763366564
3636
Encryption successful
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
ansible-vault encrypt_string --stdin-name 'variable'
New Vault password:
Confirm New Vault password:
Reading plaintext input from stdin. (ctrl-d to end input)
valuevariable: !vault |
$ANSIBLE_VAULT;1.1;AES256
63393237383235396431646465663363663433366232623736633136313039626264663832333731
6136373531393136656135343135393964393439633038610a313130336635366362316264343662
32343834656337643338393430636533366131326166323934623834646335626639393835393762
3530306336356336660a306366333839376364666561613736306261393966613035623763366564
3636
Encryption successful
```
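One plausible remedy, sketched below: emit a newline before printing the encrypted block whenever the input read from stdin did not end with one and stdout is a terminal. This is a sketch of the idea only, not the actual patch from the linked PR:
```python
import sys

def read_stdin_plaintext():
    # Sketch only: pad tty output with a newline when the typed/piped value
    # lacks one, so the vaulted YAML starts on its own line.
    stdin_text = sys.stdin.read()
    if sys.stdout.isatty() and not stdin_text.endswith('\n'):
        sys.stderr.write('\n')
    return stdin_text
```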
|
https://github.com/ansible/ansible/issues/65121
|
https://github.com/ansible/ansible/pull/65122
|
a0f26b40cbe5015140574d7e168b23d6d30699ab
|
edc7c4ddee3122f356ba8f8b9438fecb7a8ad95e
| 2019-11-20T17:51:02Z |
python
| 2019-12-05T20:42:15Z |
changelogs/fragments/65122-fix-encrypt_string-stdin-name-ouput-tty.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,121 |
Encrypting string from STDIN output result right after value
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When encrypting a single variable from stdin, the resulting output appears immediately after the typed value, on the same line.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`ansible/lib/ansible/cli/vault.py`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible --version
ansible 2.9.1
config file = /Users/alexandrechouinard/.ansible.cfg
configured module search path = ['/Users/alexandrechouinard/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/alexandrechouinard/workspace/repos/ops2.0/provisioning-prometheus-vm/venv/lib/python3.7/site-packages/ansible
executable location = /Users/alexandrechouinard/workspace/repos/ops2.0/provisioning-prometheus-vm/venv/bin/ansible
python version = 3.7.4 (default, Sep 7 2019, 18:27:02) [Clang 10.0.1 (clang-1001.0.46.4)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ansible-config dump --only-changed
DEFAULT_REMOTE_USER(env: ANSIBLE_REMOTE_USER) = alexc
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Mac OSX
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. Encrypt a string using STDIN
1. Input vault password 2 times
1. Type variable value
1. Input ctrl+d two times
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
```
ansible-vault encrypt_string --stdin-name 'variable'
New Vault password:
Confirm New Vault password:
Reading plaintext input from stdin. (ctrl-d to end input)
value
variable: !vault |
$ANSIBLE_VAULT;1.1;AES256
63393237383235396431646465663363663433366232623736633136313039626264663832333731
6136373531393136656135343135393964393439633038610a313130336635366362316264343662
32343834656337643338393430636533366131326166323934623834646335626639393835393762
3530306336356336660a306366333839376364666561613736306261393966613035623763366564
3636
Encryption successful
```
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```
ansible-vault encrypt_string --stdin-name 'variable'
New Vault password:
Confirm New Vault password:
Reading plaintext input from stdin. (ctrl-d to end input)
valuevariable: !vault |
$ANSIBLE_VAULT;1.1;AES256
63393237383235396431646465663363663433366232623736633136313039626264663832333731
6136373531393136656135343135393964393439633038610a313130336635366362316264343662
32343834656337643338393430636533366131326166323934623834646335626639393835393762
3530306336356336660a306366333839376364666561613736306261393966613035623763366564
3636
Encryption successful
```
|
https://github.com/ansible/ansible/issues/65121
|
https://github.com/ansible/ansible/pull/65122
|
a0f26b40cbe5015140574d7e168b23d6d30699ab
|
edc7c4ddee3122f356ba8f8b9438fecb7a8ad95e
| 2019-11-20T17:51:02Z |
python
| 2019-12-05T20:42:15Z |
lib/ansible/cli/vault.py
|
# (c) 2014, James Tanner <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import sys
from ansible import constants as C
from ansible import context
from ansible.cli import CLI
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleOptionsError
from ansible.module_utils._text import to_text, to_bytes
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.vault import VaultEditor, VaultLib, match_encrypt_secret
from ansible.utils.display import Display
display = Display()
class VaultCLI(CLI):
''' can encrypt any structured data file used by Ansible.
This can include *group_vars/* or *host_vars/* inventory variables,
variables loaded by *include_vars* or *vars_files*, or variable files
passed on the ansible-playbook command line with *-e @file.yml* or *-e @file.json*.
Role variables and defaults are also included!
Because Ansible tasks, handlers, and other objects are data, these can also be encrypted with vault.
If you'd like to not expose what variables you are using, you can keep an individual task file entirely encrypted.
'''
FROM_STDIN = "stdin"
FROM_ARGS = "the command line args"
FROM_PROMPT = "the interactive prompt"
def __init__(self, args):
self.b_vault_pass = None
self.b_new_vault_pass = None
self.encrypt_string_read_stdin = False
self.encrypt_secret = None
self.encrypt_vault_id = None
self.new_encrypt_secret = None
self.new_encrypt_vault_id = None
super(VaultCLI, self).__init__(args)
def init_parser(self):
super(VaultCLI, self).init_parser(
desc="encryption/decryption utility for Ansible data files",
epilog="\nSee '%s <command> --help' for more information on a specific command.\n\n" % os.path.basename(sys.argv[0])
)
common = opt_help.argparse.ArgumentParser(add_help=False)
opt_help.add_vault_options(common)
opt_help.add_verbosity_options(common)
subparsers = self.parser.add_subparsers(dest='action')
subparsers.required = True
output = opt_help.argparse.ArgumentParser(add_help=False)
output.add_argument('--output', default=None, dest='output_file',
help='output file name for encrypt or decrypt; use - for stdout',
type=opt_help.unfrack_path())
# For encrypting actions, we can also specify which of multiple vault ids should be used for encrypting
vault_id = opt_help.argparse.ArgumentParser(add_help=False)
vault_id.add_argument('--encrypt-vault-id', default=[], dest='encrypt_vault_id',
action='store', type=str,
help='the vault id used to encrypt (required if more than vault-id is provided)')
create_parser = subparsers.add_parser('create', help='Create new vault encrypted file', parents=[vault_id, common])
create_parser.set_defaults(func=self.execute_create)
create_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*')
decrypt_parser = subparsers.add_parser('decrypt', help='Decrypt vault encrypted file', parents=[output, common])
decrypt_parser.set_defaults(func=self.execute_decrypt)
decrypt_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*')
edit_parser = subparsers.add_parser('edit', help='Edit vault encrypted file', parents=[vault_id, common])
edit_parser.set_defaults(func=self.execute_edit)
edit_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*')
view_parser = subparsers.add_parser('view', help='View vault encrypted file', parents=[common])
view_parser.set_defaults(func=self.execute_view)
view_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*')
encrypt_parser = subparsers.add_parser('encrypt', help='Encrypt YAML file', parents=[common, output, vault_id])
encrypt_parser.set_defaults(func=self.execute_encrypt)
encrypt_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*')
enc_str_parser = subparsers.add_parser('encrypt_string', help='Encrypt a string', parents=[common, output, vault_id])
enc_str_parser.set_defaults(func=self.execute_encrypt_string)
enc_str_parser.add_argument('args', help='String to encrypt', metavar='string_to_encrypt', nargs='*')
enc_str_parser.add_argument('-p', '--prompt', dest='encrypt_string_prompt',
action='store_true',
help="Prompt for the string to encrypt")
enc_str_parser.add_argument('-n', '--name', dest='encrypt_string_names',
action='append',
help="Specify the variable name")
enc_str_parser.add_argument('--stdin-name', dest='encrypt_string_stdin_name',
default=None,
help="Specify the variable name for stdin")
rekey_parser = subparsers.add_parser('rekey', help='Re-key a vault encrypted file', parents=[common, vault_id])
rekey_parser.set_defaults(func=self.execute_rekey)
rekey_new_group = rekey_parser.add_mutually_exclusive_group()
rekey_new_group.add_argument('--new-vault-password-file', default=None, dest='new_vault_password_file',
help="new vault password file for rekey", type=opt_help.unfrack_path())
rekey_new_group.add_argument('--new-vault-id', default=None, dest='new_vault_id', type=str,
help='the new vault identity to use for rekey')
rekey_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*')
def post_process_args(self, options):
options = super(VaultCLI, self).post_process_args(options)
display.verbosity = options.verbosity
if options.vault_ids:
for vault_id in options.vault_ids:
if u';' in vault_id:
raise AnsibleOptionsError("'%s' is not a valid vault id. The character ';' is not allowed in vault ids" % vault_id)
if getattr(options, 'output_file', None) and len(options.args) > 1:
raise AnsibleOptionsError("At most one input file may be used with the --output option")
if options.action == 'encrypt_string':
if '-' in options.args or not options.args or options.encrypt_string_stdin_name:
self.encrypt_string_read_stdin = True
# TODO: prompting from stdin and reading from stdin seem mutually exclusive, but verify that.
if options.encrypt_string_prompt and self.encrypt_string_read_stdin:
raise AnsibleOptionsError('The --prompt option is not supported if also reading input from stdin')
return options
def run(self):
super(VaultCLI, self).run()
loader = DataLoader()
# set default restrictive umask
old_umask = os.umask(0o077)
vault_ids = list(context.CLIARGS['vault_ids'])
# there are 3 types of actions, those that just 'read' (decrypt, view) and only
# need to ask for a password once, and those that 'write' (create, encrypt) that
# ask for a new password and confirm it, and 'read/write (rekey) that asks for the
# old password, then asks for a new one and confirms it.
default_vault_ids = C.DEFAULT_VAULT_IDENTITY_LIST
vault_ids = default_vault_ids + vault_ids
action = context.CLIARGS['action']
# TODO: instead of prompting for these before, we could let VaultEditor
# call a callback when it needs it.
if action in ['decrypt', 'view', 'rekey', 'edit']:
vault_secrets = self.setup_vault_secrets(loader, vault_ids=vault_ids,
vault_password_files=list(context.CLIARGS['vault_password_files']),
ask_vault_pass=context.CLIARGS['ask_vault_pass'])
if not vault_secrets:
raise AnsibleOptionsError("A vault password is required to use Ansible's Vault")
if action in ['encrypt', 'encrypt_string', 'create']:
encrypt_vault_id = None
# no --encrypt-vault-id context.CLIARGS['encrypt_vault_id'] for 'edit'
if action not in ['edit']:
encrypt_vault_id = context.CLIARGS['encrypt_vault_id'] or C.DEFAULT_VAULT_ENCRYPT_IDENTITY
vault_secrets = None
vault_secrets = \
self.setup_vault_secrets(loader,
vault_ids=vault_ids,
vault_password_files=list(context.CLIARGS['vault_password_files']),
ask_vault_pass=context.CLIARGS['ask_vault_pass'],
create_new_password=True)
if len(vault_secrets) > 1 and not encrypt_vault_id:
raise AnsibleOptionsError("The vault-ids %s are available to encrypt. Specify the vault-id to encrypt with --encrypt-vault-id" %
','.join([x[0] for x in vault_secrets]))
if not vault_secrets:
raise AnsibleOptionsError("A vault password is required to use Ansible's Vault")
encrypt_secret = match_encrypt_secret(vault_secrets,
encrypt_vault_id=encrypt_vault_id)
# only one secret for encrypt for now, use the first vault_id and use its first secret
# TODO: exception if more than one?
self.encrypt_vault_id = encrypt_secret[0]
self.encrypt_secret = encrypt_secret[1]
if action in ['rekey']:
encrypt_vault_id = context.CLIARGS['encrypt_vault_id'] or C.DEFAULT_VAULT_ENCRYPT_IDENTITY
# print('encrypt_vault_id: %s' % encrypt_vault_id)
# print('default_encrypt_vault_id: %s' % default_encrypt_vault_id)
            # new_vault_ids should only ever be one item;
            # load the default vault ids if we are using encrypt-vault-id
new_vault_ids = []
if encrypt_vault_id:
new_vault_ids = default_vault_ids
if context.CLIARGS['new_vault_id']:
new_vault_ids.append(context.CLIARGS['new_vault_id'])
new_vault_password_files = []
if context.CLIARGS['new_vault_password_file']:
new_vault_password_files.append(context.CLIARGS['new_vault_password_file'])
new_vault_secrets = \
self.setup_vault_secrets(loader,
vault_ids=new_vault_ids,
vault_password_files=new_vault_password_files,
ask_vault_pass=context.CLIARGS['ask_vault_pass'],
create_new_password=True)
if not new_vault_secrets:
raise AnsibleOptionsError("A new vault password is required to use Ansible's Vault rekey")
# There is only one new_vault_id currently and one new_vault_secret, or we
# use the id specified in --encrypt-vault-id
new_encrypt_secret = match_encrypt_secret(new_vault_secrets,
encrypt_vault_id=encrypt_vault_id)
self.new_encrypt_vault_id = new_encrypt_secret[0]
self.new_encrypt_secret = new_encrypt_secret[1]
loader.set_vault_secrets(vault_secrets)
# FIXME: do we need to create VaultEditor here? its not reused
vault = VaultLib(vault_secrets)
self.editor = VaultEditor(vault)
context.CLIARGS['func']()
# and restore umask
os.umask(old_umask)
def execute_encrypt(self):
''' encrypt the supplied file using the provided vault secret '''
if not context.CLIARGS['args'] and sys.stdin.isatty():
display.display("Reading plaintext input from stdin", stderr=True)
for f in context.CLIARGS['args'] or ['-']:
            # Fixme: use the correct vault secret/id
self.editor.encrypt_file(f, self.encrypt_secret,
vault_id=self.encrypt_vault_id,
output_file=context.CLIARGS['output_file'])
if sys.stdout.isatty():
display.display("Encryption successful", stderr=True)
@staticmethod
def format_ciphertext_yaml(b_ciphertext, indent=None, name=None):
indent = indent or 10
block_format_var_name = ""
if name:
block_format_var_name = "%s: " % name
block_format_header = "%s!vault |" % block_format_var_name
lines = []
vault_ciphertext = to_text(b_ciphertext)
lines.append(block_format_header)
for line in vault_ciphertext.splitlines():
lines.append('%s%s' % (' ' * indent, line))
yaml_ciphertext = '\n'.join(lines)
return yaml_ciphertext
def execute_encrypt_string(self):
''' encrypt the supplied string using the provided vault secret '''
b_plaintext = None
# Holds tuples (the_text, the_source_of_the_string, the variable name if its provided).
b_plaintext_list = []
# remove the non-option '-' arg (used to indicate 'read from stdin') from the candidate args so
# we don't add it to the plaintext list
args = [x for x in context.CLIARGS['args'] if x != '-']
# We can prompt and read input, or read from stdin, but not both.
if context.CLIARGS['encrypt_string_prompt']:
msg = "String to encrypt: "
name = None
name_prompt_response = display.prompt('Variable name (enter for no name): ')
# TODO: enforce var naming rules?
if name_prompt_response != "":
name = name_prompt_response
# TODO: could prompt for which vault_id to use for each plaintext string
# currently, it will just be the default
# could use private=True for shadowed input if useful
prompt_response = display.prompt(msg)
if prompt_response == '':
raise AnsibleOptionsError('The plaintext provided from the prompt was empty, not encrypting')
b_plaintext = to_bytes(prompt_response)
b_plaintext_list.append((b_plaintext, self.FROM_PROMPT, name))
# read from stdin
if self.encrypt_string_read_stdin:
if sys.stdout.isatty():
display.display("Reading plaintext input from stdin. (ctrl-d to end input)", stderr=True)
stdin_text = sys.stdin.read()
if stdin_text == '':
raise AnsibleOptionsError('stdin was empty, not encrypting')
b_plaintext = to_bytes(stdin_text)
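            # Note (issue 65121 above): when this text lacks a trailing newline,
            # the encrypted YAML printed later lands on the same tty line as the
            # typed value; a newline guard here would keep them apart.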
# defaults to None
name = context.CLIARGS['encrypt_string_stdin_name']
b_plaintext_list.append((b_plaintext, self.FROM_STDIN, name))
# use any leftover args as strings to encrypt
# Try to match args up to --name options
if context.CLIARGS.get('encrypt_string_names', False):
name_and_text_list = list(zip(context.CLIARGS['encrypt_string_names'], args))
# Some but not enough --name's to name each var
if len(args) > len(name_and_text_list):
# Trying to avoid ever showing the plaintext in the output, so this warning is vague to avoid that.
display.display('The number of --name options do not match the number of args.',
stderr=True)
display.display('The last named variable will be "%s". The rest will not have'
' names.' % context.CLIARGS['encrypt_string_names'][-1],
stderr=True)
# Add the rest of the args without specifying a name
for extra_arg in args[len(name_and_text_list):]:
name_and_text_list.append((None, extra_arg))
# if no --names are provided, just use the args without a name.
else:
name_and_text_list = [(None, x) for x in args]
# Convert the plaintext text objects to bytestrings and collect
for name_and_text in name_and_text_list:
name, plaintext = name_and_text
if plaintext == '':
raise AnsibleOptionsError('The plaintext provided from the command line args was empty, not encrypting')
b_plaintext = to_bytes(plaintext)
b_plaintext_list.append((b_plaintext, self.FROM_ARGS, name))
# TODO: specify vault_id per string?
# Format the encrypted strings and any corresponding stderr output
outputs = self._format_output_vault_strings(b_plaintext_list, vault_id=self.encrypt_vault_id)
for output in outputs:
err = output.get('err', None)
out = output.get('out', '')
if err:
sys.stderr.write(err)
print(out)
if sys.stdout.isatty():
display.display("Encryption successful", stderr=True)
# TODO: offer block or string ala eyaml
def _format_output_vault_strings(self, b_plaintext_list, vault_id=None):
        # If we are only showing one item in the output, we don't need to include commented
# delimiters in the text
show_delimiter = False
if len(b_plaintext_list) > 1:
show_delimiter = True
# list of dicts {'out': '', 'err': ''}
output = []
# Encrypt the plaintext, and format it into a yaml block that can be pasted into a playbook.
# For more than one input, show some differentiating info in the stderr output so we can tell them
# apart. If we have a var name, we include that in the yaml
for index, b_plaintext_info in enumerate(b_plaintext_list):
# (the text itself, which input it came from, its name)
b_plaintext, src, name = b_plaintext_info
b_ciphertext = self.editor.encrypt_bytes(b_plaintext, self.encrypt_secret,
vault_id=vault_id)
# block formatting
yaml_text = self.format_ciphertext_yaml(b_ciphertext, name=name)
err_msg = None
if show_delimiter:
human_index = index + 1
if name:
err_msg = '# The encrypted version of variable ("%s", the string #%d from %s).\n' % (name, human_index, src)
else:
                    err_msg = '# The encrypted version of the string #%d from %s.\n' % (human_index, src)
output.append({'out': yaml_text, 'err': err_msg})
return output
def execute_decrypt(self):
''' decrypt the supplied file using the provided vault secret '''
if not context.CLIARGS['args'] and sys.stdin.isatty():
display.display("Reading ciphertext input from stdin", stderr=True)
for f in context.CLIARGS['args'] or ['-']:
self.editor.decrypt_file(f, output_file=context.CLIARGS['output_file'])
if sys.stdout.isatty():
display.display("Decryption successful", stderr=True)
def execute_create(self):
''' create and open a file in an editor that will be encrypted with the provided vault secret when closed'''
if len(context.CLIARGS['args']) > 1:
raise AnsibleOptionsError("ansible-vault create can take only one filename argument")
self.editor.create_file(context.CLIARGS['args'][0], self.encrypt_secret,
vault_id=self.encrypt_vault_id)
def execute_edit(self):
''' open and decrypt an existing vaulted file in an editor, that will be encrypted again when closed'''
for f in context.CLIARGS['args']:
self.editor.edit_file(f)
def execute_view(self):
''' open, decrypt and view an existing vaulted file using a pager using the supplied vault secret '''
for f in context.CLIARGS['args']:
# Note: vault should return byte strings because it could encrypt
# and decrypt binary files. We are responsible for changing it to
# unicode here because we are displaying it and therefore can make
# the decision that the display doesn't have to be precisely what
# the input was (leave that to decrypt instead)
plaintext = self.editor.plaintext(f)
self.pager(to_text(plaintext))
def execute_rekey(self):
''' re-encrypt a vaulted file with a new secret, the previous secret is required '''
for f in context.CLIARGS['args']:
# FIXME: plumb in vault_id, use the default new_vault_secret for now
self.editor.rekey_file(f, self.new_encrypt_secret,
self.new_encrypt_vault_id)
display.display("Rekey successful", stderr=True)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,018 |
Improve win_find Performance
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Currently, win_find can take a significant amount of time to iterate through large collections of directories.
For example, the DLL directories in Windows show a significant variance in execution time with relatively similar disk space utilization:
a DLL directory at 3GB of disk space and 20,000 directories can take 10-15 minutes to complete a scan with win_find, while
a DLL directory at 4GB of disk space and 120,000 directories can take over 6 hours to complete.
##### ISSUE TYPE
- Bug report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
win_find.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
Parameters:
```yaml
scan_dir_list:
- path: 'C:\Windows\System32'
scandir: true
recurse: true
pattern:
- '*.com'
- '*.exe'
- '*.dll'
- '*.ocx'
- '*.sys'
```
Task:
```yaml
- name: Get full list of files from directory
win_find:
paths: "{{ item.path }}"
get_checksum: true
recurse: "{{ item.recurse | default('false') }}"
patterns: "{{item.pattern | default(omit) }}"
use_regex: "{{ item.use_regex | default(omit) }}"
file_type: file
with_items: "{{ scan_dir_list }}"
register: find_file_list
ignore_errors: true
```
|
https://github.com/ansible/ansible/issues/63018
|
https://github.com/ansible/ansible/pull/65536
|
96cbbdd59fe82574b9292bf3cafe34bb8b9ceade
|
fcdebe41e99d8c32ba9219b66df3a65346200b46
| 2019-10-01T16:15:19Z |
python
| 2019-12-06T00:01:11Z |
changelogs/fragments/win_find-performance.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,018 |
Improve win_find Performance
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Currently, win_find can take a significant amount of time to iterate through large collections of directories.
For example, the DLL directories in Windows show a significant variance in execution time with relatively similar disk space utilization:
a DLL directory at 3GB of disk space and 20,000 directories can take 10-15 minutes to complete a scan with win_find, while
a DLL directory at 4GB of disk space and 120,000 directories can take over 6 hours to complete.
##### ISSUE TYPE
- Bug report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
win_find.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
Parameters:
```yaml
scan_dir_list:
- path: 'C:\Windows\System32'
scandir: true
recurse: true
pattern:
- '*.com'
- '*.exe'
- '*.dll'
- '*.ocx'
- '*.sys'
```
Task:
```yaml
- name: Get full list of files from directory
win_find:
paths: "{{ item.path }}"
get_checksum: true
recurse: "{{ item.recurse | default('false') }}"
patterns: "{{item.pattern | default(omit) }}"
use_regex: "{{ item.use_regex | default(omit) }}"
file_type: file
with_items: "{{ scan_dir_list }}"
register: find_file_list
ignore_errors: true
```
|
https://github.com/ansible/ansible/issues/63018
|
https://github.com/ansible/ansible/pull/65536
|
96cbbdd59fe82574b9292bf3cafe34bb8b9ceade
|
fcdebe41e99d8c32ba9219b66df3a65346200b46
| 2019-10-01T16:15:19Z |
python
| 2019-12-06T00:01:11Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
No notable changes
Modules
=======
Modules removed
---------------
The following modules no longer exist:
* letsencrypt use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ldap_attr use :ref:`ldap_attrs <ldap_attrs_module>` instead.
The following functionality will be removed in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`openssl_csr <openssl_csr_module>` module's option ``version`` no longer supports values other than ``1`` (the current only standardized CSR version).
* :ref:`docker_container <docker_container_module>`: the ``trust_image_content`` option will be removed. It has always been ignored by the module.
* :ref:`iam_managed_policy <iam_managed_policy_module>`: the ``fail_on_delete`` option will be removed. It has always been ignored by the module.
* :ref:`s3_lifecycle <s3_lifecycle_module>`: the ``requester_pays`` option will be removed. It has always been ignored by the module.
* :ref:`s3_sync <s3_sync_module>`: the ``retries`` option will be removed. It has always been ignored by the module.
* The return values ``err`` and ``out`` of :ref:`docker_stack <docker_stack_module>` have been deprecated. Use ``stdout`` and ``stderr`` from now on instead.
* :ref:`cloudformation <cloudformation_module>`: the ``template_format`` option will be removed. It has been ignored by the module since Ansible 2.3.
* :ref:`data_pipeline <data_pipeline_module>`: the ``version`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_eip <ec2_eip_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.3.
* :ref:`ec2_key <ec2_key_module>`: the ``wait`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_key <ec2_key_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_lc <ec2_lc_module>`: the ``associate_public_ip_address`` option will be removed. It has always been ignored by the module.
* :ref:`iam_policy <iam_policy_module>`: the ``policy_document`` option will be removed. To maintain the existing behavior use the ``policy_json`` option and read the file with the ``lookup`` plugin.
The following functionality will change in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`docker_container <docker_container_module>` module has a new option, ``container_default_behavior``, whose default value will change from ``compatibility`` to ``no_defaults``. Set to an explicit value to avoid deprecation warnings.
* The :ref:`docker_container <docker_container_module>` module's ``network_mode`` option will be set by default to the name of the first network in ``networks`` if at least one network is given and ``networks_cli_compatible`` is ``true`` (will be default from Ansible 2.12 on). Set to an explicit value to avoid deprecation warnings if you specify networks and set ``networks_cli_compatible`` to ``true``. The current default (not specifying it) is equivalent to the value ``default``.
* :ref:`iam_policy <iam_policy_module>`: the default value for the ``skip_duplicates`` option will change from ``true`` to ``false``. To maintain the existing behavior explicitly set it to ``true``.
* :ref:`iam_role <iam_role_module>`: the ``purge_policies`` option (also known as ``purge_policy``) default value will change from ``true`` to ``false``.
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ``vmware_dns_config`` use :ref:`vmware_host_dns <vmware_host_dns_module>` instead.
Noteworthy module changes
-------------------------
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in the :ref:`pacman <pacman_module>` module has been removed; use ``extra_args=--recursive`` instead (see the example after this list).
* :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module does not require VM name which was a required parameter for releases prior to Ansible 2.10.
* :ref:`zabbix_action <zabbix_action_module>` no longer requires ``esc_period`` and ``event_source`` arguments when ``state=absent``.
* :ref:`gitlab_user <gitlab_user_module>` no longer requires ``name``, ``email`` and ``password`` arguments when ``state=absent``.
* :ref:`win_pester <win_pester_module>` no longer runs every ``*.ps1`` file in the directory specified, since that could execute potentially unknown scripts. It now follows Pester's own default behaviour of only running tests for files named like ``*.tests.ps1``.
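A hedged illustration of the ``pacman`` change above (the package name ``foo`` is a placeholder):

.. code-block:: yaml

    - name: Remove a package and its unneeded dependencies (was recurse=yes)
      pacman:
        name: foo
        state: absent
        extra_args: --recursive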
Plugins
=======
Noteworthy plugin changes
-------------------------
* The ``hashi_vault`` lookup plugin now returns the latest version when using the KV v2 secrets engine. Previously, it returned all versions of the secret which required additional steps to extract and filter the desired version.
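A minimal sketch of the changed lookup behavior (the vault URL, token, and secret path are placeholder values, and the KV v2 ``data`` path convention is assumed):

.. code-block:: yaml

    - name: Read only the latest version of a KV v2 secret
      debug:
        msg: "{{ lookup('hashi_vault', 'secret=secret/data/hello token=s.placeholder url=http://myvault:8200') }}"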
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,018 |
Improve win_find Performance
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Currently, win_find can take a significant amount of time to iterate through large collections of directories.
For example, the DLL directories in Windows show a significant variance in execution time with relatively similar disk space utilization:
a DLL directory at 3GB of disk space and 20,000 directories can take 10-15 minutes to complete a scan with win_find, while
a DLL directory at 4GB of disk space and 120,000 directories can take over 6 hours to complete.
##### ISSUE TYPE
- Bug report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
win_find.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
Parameters:
```yaml
scan_dir_list:
- path: 'C:\Windows\System32'
scandir: true
recurse: true
pattern:
- '*.com'
- '*.exe'
- '*.dll'
- '*.ocx'
- '*.sys'
```
Task:
```yaml
- name: Get full list of files from directory
win_find:
paths: "{{ item.path }}"
get_checksum: true
recurse: "{{ item.recurse | default('false') }}"
patterns: "{{item.pattern | default(omit) }}"
use_regex: "{{ item.use_regex | default(omit) }}"
file_type: file
with_items: "{{ scan_dir_list }}"
register: find_file_list
ignore_errors: true
```
|
https://github.com/ansible/ansible/issues/63018
|
https://github.com/ansible/ansible/pull/65536
|
96cbbdd59fe82574b9292bf3cafe34bb8b9ceade
|
fcdebe41e99d8c32ba9219b66df3a65346200b46
| 2019-10-01T16:15:19Z |
python
| 2019-12-06T00:01:11Z |
lib/ansible/modules/windows/win_find.ps1
|
#!powershell
# Copyright: (c) 2016, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#Requires -Module Ansible.ModuleUtils.Legacy
$ErrorActionPreference = "Stop"
$params = Parse-Args -arguments $args -supports_check_mode $true
$_remote_tmp = Get-AnsibleParam $params "_ansible_remote_tmp" -type "path" -default $env:TMP
$paths = Get-AnsibleParam -obj $params -name 'paths' -failifempty $true
$age = Get-AnsibleParam -obj $params -name 'age'
$age_stamp = Get-AnsibleParam -obj $params -name 'age_stamp' -default 'mtime' -ValidateSet 'mtime','ctime','atime'
$file_type = Get-AnsibleParam -obj $params -name 'file_type' -default 'file' -ValidateSet 'file','directory'
$follow = Get-AnsibleParam -obj $params -name 'follow' -type "bool" -default $false
$hidden = Get-AnsibleParam -obj $params -name 'hidden' -type "bool" -default $false
$patterns = Get-AnsibleParam -obj $params -name 'patterns' -aliases "regex","regexp"
$recurse = Get-AnsibleParam -obj $params -name 'recurse' -type "bool" -default $false
$size = Get-AnsibleParam -obj $params -name 'size'
$use_regex = Get-AnsibleParam -obj $params -name 'use_regex' -type "bool" -default $false
$get_checksum = Get-AnsibleParam -obj $params -name 'get_checksum' -type "bool" -default $true
$checksum_algorithm = Get-AnsibleParam -obj $params -name 'checksum_algorithm' -default 'sha1' -ValidateSet 'md5', 'sha1', 'sha256', 'sha384', 'sha512'
$result = @{
files = @()
examined = 0
matched = 0
changed = $false
}
# C# code to determine link target, copied from http://chrisbensen.blogspot.com.au/2010/06/getfinalpathnamebyhandle.html
$symlink_util = @"
using System;
using System.Text;
using Microsoft.Win32.SafeHandles;
using System.ComponentModel;
using System.Runtime.InteropServices;
namespace Ansible.Command {
public class SymLinkHelper {
private const int FILE_SHARE_WRITE = 2;
private const int CREATION_DISPOSITION_OPEN_EXISTING = 3;
private const int FILE_FLAG_BACKUP_SEMANTICS = 0x02000000;
[DllImport("kernel32.dll", EntryPoint = "GetFinalPathNameByHandleW", CharSet = CharSet.Unicode, SetLastError = true)]
public static extern int GetFinalPathNameByHandle(IntPtr handle, [In, Out] StringBuilder path, int bufLen, int flags);
[DllImport("kernel32.dll", EntryPoint = "CreateFileW", CharSet = CharSet.Unicode, SetLastError = true)]
public static extern SafeFileHandle CreateFile(string lpFileName, int dwDesiredAccess,
int dwShareMode, IntPtr SecurityAttributes, int dwCreationDisposition, int dwFlagsAndAttributes, IntPtr hTemplateFile);
public static string GetSymbolicLinkTarget(System.IO.DirectoryInfo symlink) {
SafeFileHandle directoryHandle = CreateFile(symlink.FullName, 0, 2, System.IntPtr.Zero, CREATION_DISPOSITION_OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, System.IntPtr.Zero);
if(directoryHandle.IsInvalid)
throw new Win32Exception(Marshal.GetLastWin32Error());
StringBuilder path = new StringBuilder(512);
int size = GetFinalPathNameByHandle(directoryHandle.DangerousGetHandle(), path, path.Capacity, 0);
if (size<0)
                throw new Win32Exception(Marshal.GetLastWin32Error());
            // The remarks section of GetFinalPathNameByHandle mentions the return being prefixed with "\\?\"
            // More information about "\\?\" here -> http://msdn.microsoft.com/en-us/library/aa365247(v=VS.85).aspx
if (path[0] == '\\' && path[1] == '\\' && path[2] == '?' && path[3] == '\\')
return path.ToString().Substring(4);
else
return path.ToString();
}
}
}
"@
$original_tmp = $env:TMP
$env:TMP = $_remote_tmp
Add-Type -TypeDefinition $symlink_util
$env:TMP = $original_tmp
Function Assert-Age($info) {
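    # A non-negative age matches files at least that old; a negative age matches
    # files whose timestamp falls within the last abs(age) (units: s, m, h, d, w;
    # plain numbers are seconds), mirroring the module's 'age' option docs.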
$valid_match = $true
if ($null -ne $age) {
$seconds_per_unit = @{'s'=1; 'm'=60; 'h'=3600; 'd'=86400; 'w'=604800}
$seconds_pattern = '^(-?\d+)(s|m|h|d|w)?$'
$match = $age -match $seconds_pattern
if ($match) {
[int]$specified_seconds = $matches[1]
if ($null -eq $matches[2]) {
$chosen_unit = 's'
} else {
$chosen_unit = $matches[2]
}
$abs_seconds = $specified_seconds * ($seconds_per_unit.$chosen_unit)
$epoch = New-Object -Type DateTime -ArgumentList 1970, 1, 1, 0, 0, 0, 0
if ($age_stamp -eq 'mtime') {
$age_comparison = $epoch.AddSeconds($info.lastwritetime)
} elseif ($age_stamp -eq 'ctime') {
$age_comparison = $epoch.AddSeconds($info.creationtime)
} elseif ($age_stamp -eq 'atime') {
$age_comparison = $epoch.AddSeconds($info.lastaccesstime)
}
if ($specified_seconds -ge 0) {
$start_date = (Get-Date).AddSeconds($abs_seconds * -1)
if ($age_comparison -gt $start_date) {
$valid_match = $false
}
} else {
$start_date = (Get-Date).AddSeconds($abs_seconds)
if ($age_comparison -lt $start_date) {
$valid_match = $false
}
}
} else {
throw "failed to process age for file $($info.FullName)"
}
}
$valid_match
}
Function Assert-FileType($info) {
$valid_match = $true
if ($file_type -eq 'directory' -and $info.isdir -eq $false) {
$valid_match = $false
}
if ($file_type -eq 'file' -and $info.isdir -eq $true) {
$valid_match = $false
}
$valid_match
}
Function Assert-Hidden($info) {
$valid_match = $true
if ($hidden -eq $true -and $info.ishidden -eq $false) {
$valid_match = $false
}
if ($hidden -eq $false -and $info.ishidden -eq $true) {
$valid_match = $false
}
$valid_match
}
Function Assert-Pattern($info) {
$valid_match = $false
if ($null -ne $patterns) {
foreach ($pattern in $patterns) {
if ($use_regex -eq $true) {
# Use -match for regex matching
if ($info.filename -match $pattern) {
$valid_match = $true
}
} else {
# Use -like for wildcard matching
if ($info.filename -like $pattern) {
$valid_match = $true
}
}
}
} else {
$valid_match = $true
}
$valid_match
}
Function Assert-Size($info) {
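    # A non-negative size matches files at least that large; a negative size
    # matches files no larger than its absolute value (suffixes: b, k, m, g, t;
    # plain numbers are bytes), mirroring the module's 'size' option docs.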
$valid_match = $true
if ($null -ne $size) {
$bytes_per_unit = @{'b'=1; 'k'=1024; 'm'=1024*1024; 'g'=1024*1024*1024; 't'=1024*1024*1024*1024}
$size_pattern = '^(-?\d+)(b|k|m|g|t)?$'
$match = $size -match $size_pattern
if ($match) {
[int64]$specified_size = $matches[1]
if ($null -eq $matches[2]) {
$chosen_byte = 'b'
} else {
$chosen_byte = $matches[2]
}
$abs_size = $specified_size * ($bytes_per_unit.$chosen_byte)
if ($specified_size -ge 0) {
if ($info.size -lt $abs_size) {
$valid_match = $false
}
} else {
if ($info.size -gt $abs_size * -1) {
$valid_match = $false
}
}
} else {
throw "failed to process size for file $($info.FullName)"
}
}
$valid_match
}
Function Assert-FileStat($info) {
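    # Returns the stat hashtable when every filter matches, otherwise $false.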
$age_match = Assert-Age -info $info
$file_type_match = Assert-FileType -info $info
$hidden_match = Assert-Hidden -info $info
$pattern_match = Assert-Pattern -info $info
$size_match = Assert-Size -info $info
if ($age_match -and $file_type_match -and $hidden_match -and $pattern_match -and $size_match) {
$info
} else {
$false
}
}
Function Get-FileStat($file) {
$epoch = New-Object -Type DateTime -ArgumentList 1970, 1, 1, 0, 0, 0, 0
$access_control = $file.GetAccessControl()
$attributes = @()
foreach ($attribute in ($file.Attributes -split ',')) {
$attributes += $attribute.Trim()
}
$file_stat = @{
isreadonly = $attributes -contains 'ReadOnly'
ishidden = $attributes -contains 'Hidden'
isarchive = $attributes -contains 'Archive'
attributes = $file.Attributes.ToString()
owner = $access_control.Owner
lastwritetime = (New-TimeSpan -Start $epoch -End $file.LastWriteTime).TotalSeconds
creationtime = (New-TimeSpan -Start $epoch -End $file.CreationTime).TotalSeconds
lastaccesstime = (New-TimeSpan -Start $epoch -End $file.LastAccessTime).TotalSeconds
path = $file.FullName
filename = $file.Name
}
$islnk = $false
$isdir = $attributes -contains 'Directory'
$isshared = $false
if ($attributes -contains 'ReparsePoint') {
        # TODO: Find a way to differentiate between soft and junction links
$islnk = $true
# Try and get the symlink source, can result in failure if link is broken
try {
$lnk_source = [Ansible.Command.SymLinkHelper]::GetSymbolicLinkTarget($file)
$file_stat.lnk_source = $lnk_source
} catch {}
} elseif ($file.PSIsContainer) {
$isdir = $true
$share_info = Get-CIMInstance -Class Win32_Share -Filter "Path='$($file.Fullname -replace '\\', '\\')'"
if ($null -ne $share_info) {
$isshared = $true
$file_stat.sharename = $share_info.Name
}
# only get the size of a directory if there are files (not directories) inside the folder
        # Get-ChildItem -LiteralPath does not work properly on older OSes, use .NET instead
$dir_files = @()
try {
$dir_files = $file.EnumerateFiles("*", [System.IO.SearchOption]::AllDirectories)
} catch [System.IO.DirectoryNotFoundException] { # Broken ReparsePoint/Symlink, cannot enumerate
} catch [System.UnauthorizedAccessException] {} # No ListDirectory permissions, Get-ChildItem ignored this
$size = 0
foreach ($dir_file in $dir_files) {
$size += $dir_file.Length
}
$file_stat.size = $size
} else {
$file_stat.size = $file.length
$file_stat.extension = $file.Extension
if ($get_checksum) {
try {
$checksum = Get-FileChecksum -path $path -algorithm $checksum_algorithm
$file_stat.checksum = $checksum
} catch {
throw "failed to get checksum for file $($file.FullName)"
}
}
}
$file_stat.islnk = $islnk
$file_stat.isdir = $isdir
$file_stat.isshared = $isshared
Assert-FileStat -info $file_stat
}
Function Get-FilesInFolder($path) {
$items = @()
    # Get-ChildItem -LiteralPath can bomb out on older OSes, use .NET instead
$dir = New-Object -TypeName System.IO.DirectoryInfo -ArgumentList $path
$dir_files = @()
try {
$dir_files = $dir.EnumerateFileSystemInfos("*", [System.IO.SearchOption]::TopDirectoryOnly)
} catch [System.IO.DirectoryNotFoundException] { # Broken ReparsePoint/Symlink, cannot enumerate
} catch [System.UnauthorizedAccessException] {} # No ListDirectory permissions, Get-ChildItem ignored this
foreach ($item in $dir_files) {
if ($item -is [System.IO.DirectoryInfo] -and $recurse) {
if (($item.Attributes -like '*ReparsePoint*' -and $follow) -or ($item.Attributes -notlike '*ReparsePoint*')) {
# File is a link and we want to follow a link OR file is not a link
$items += $item.FullName
$items += Get-FilesInFolder -path $item.FullName
} else {
# File is a link but we don't want to follow a link
$items += $item.FullName
}
} else {
$items += $item.FullName
}
}
$items
}
$paths_to_check = @()
foreach ($path in $paths) {
if (Test-Path -LiteralPath $path) {
if ((Get-Item -LiteralPath $path -Force).PSIsContainer) {
$paths_to_check += Get-FilesInFolder -path $path
} else {
Fail-Json $result "Argument path $path is a file not a directory"
}
} else {
Fail-Json $result "Argument path $path does not exist cannot get information on"
}
}
$paths_to_check = $paths_to_check | Select-Object -Unique | Sort-Object
foreach ($path in $paths_to_check) {
try {
$file = Get-Item -LiteralPath $path -Force
$info = Get-FileStat -file $file
} catch {
Add-Warning -obj $result -message "win_find failed to check some files, these files were ignored and will not be part of the result output"
break
}
$new_examined = $result.examined + 1
$result.examined = $new_examined
if ($info -ne $false) {
$files = $result.Files
$files += $info
$new_matched = $result.matched + 1
$result.matched = $new_matched
$result.files = $files
}
}
Exit-Json $result
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,018 |
Improve win_find Performance
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Currently, win_find can take a significant amount of time to iterate through large collections of directories.
For example, the DLL directories in Windows show a significant variance in execution time with relatively similar disk space utilization:
a DLL directory at 3GB of disk space and 20,000 directories can take 10-15 minutes to complete a scan with win_find, while
a DLL directory at 4GB of disk space and 120,000 directories can take over 6 hours to complete.
##### ISSUE TYPE
- Bug report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
win_find.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
Parameters:
```yaml
scan_dir_list:
- path: 'C:\Windows\System32'
scandir: true
recurse: true
pattern:
- '*.com'
- '*.exe'
- '*.dll'
- '*.ocx'
- '*.sys'
```
Task:
```yaml
- name: Get full list of files from directory
win_find:
paths: "{{ item.path }}"
get_checksum: true
recurse: "{{ item.recurse | default('false') }}"
patterns: "{{item.pattern | default(omit) }}"
use_regex: "{{ item.use_regex | default(omit) }}"
file_type: file
with_items: "{{ scan_dir_list }}"
register: find_file_list
ignore_errors: true
```
|
https://github.com/ansible/ansible/issues/63018
|
https://github.com/ansible/ansible/pull/65536
|
96cbbdd59fe82574b9292bf3cafe34bb8b9ceade
|
fcdebe41e99d8c32ba9219b66df3a65346200b46
| 2019-10-01T16:15:19Z |
python
| 2019-12-06T00:01:11Z |
lib/ansible/modules/windows/win_find.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2016, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# this is a windows documentation stub. actual code lives in the .ps1
# file of the same name
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: win_find
version_added: "2.3"
short_description: Return a list of files based on specific criteria
description:
- Return a list of files based on specified criteria.
- Multiple criteria are AND'd together.
- For non-Windows targets, use the M(find) module instead.
options:
age:
description:
- Select files or folders whose age is equal to or greater than
the specified time.
- Use a negative age to find files equal to or less than
the specified time.
      - You can choose seconds, minutes, hours, days or weeks
        by specifying the first letter of any of
        those words (e.g., "2s", "10d", "1w").
type: str
age_stamp:
description:
- Choose the file property against which we compare C(age).
- The default attribute we compare with is the last modification time.
type: str
choices: [ atime, ctime, mtime ]
default: mtime
checksum_algorithm:
description:
- Algorithm to determine the checksum of a file.
      - Will throw an error if the host is unable to use the specified algorithm.
type: str
choices: [ md5, sha1, sha256, sha384, sha512 ]
default: sha1
file_type:
description: Type of file to search for.
type: str
choices: [ directory, file ]
default: file
follow:
description:
- Set this to C(yes) to follow symlinks in the path.
- This needs to be used in conjunction with C(recurse).
type: bool
default: no
get_checksum:
description:
      - Whether to return a checksum of the file in the return info (default sha1);
        use C(checksum_algorithm) to change from the default.
type: bool
default: yes
hidden:
description: Set this to include hidden files or folders.
type: bool
default: no
paths:
description:
- List of paths of directories to search for files or folders in.
- This can be supplied as a single path or a list of paths.
type: list
required: yes
patterns:
description:
      - One or more (PowerShell or regex) patterns to compare filenames with.
- The type of pattern matching is controlled by C(use_regex) option.
- The patterns restrict the list of files or folders to be returned based on the filenames.
      - For a file to be matched, it only has to match one of the patterns provided.
type: list
aliases: [ "regex", "regexp" ]
recurse:
description:
- Will recursively descend into the directory looking for files or folders.
type: bool
default: no
size:
description:
- Select files or folders whose size is equal to or greater than the specified size.
- Use a negative value to find files equal to or less than the specified size.
- You can specify the size with a suffix of the byte type i.e. kilo = k, mega = m...
- Size is not evaluated for symbolic links.
type: str
use_regex:
description:
- Will set patterns to run as a regex check if set to C(yes).
type: bool
default: no
author:
- Jordan Borean (@jborean93)
'''
EXAMPLES = r'''
- name: Find files in path
win_find:
paths: D:\Temp
- name: Find hidden files in path
win_find:
paths: D:\Temp
hidden: yes
- name: Find files in multiple paths
win_find:
paths:
- C:\Temp
- D:\Temp
- name: Find files in directory while searching recursively
win_find:
paths: D:\Temp
recurse: yes
- name: Find files in directory while following symlinks
win_find:
paths: D:\Temp
recurse: yes
follow: yes
- name: Find files with .log and .out extension using powershell wildcards
win_find:
paths: D:\Temp
patterns: [ '*.log', '*.out' ]
- name: Find files in path based on regex pattern
win_find:
paths: D:\Temp
patterns: out_\d{8}-\d{6}.log
- name: Find files older than 1 day
win_find:
paths: D:\Temp
age: 86400
- name: Find files older than 1 day based on create time
win_find:
paths: D:\Temp
age: 86400
age_stamp: ctime
- name: Find files older than 1 day with unit syntax
win_find:
paths: D:\Temp
age: 1d
- name: Find files newer than 1 hour
win_find:
paths: D:\Temp
age: -3600
- name: Find files newer than 1 hour with unit syntax
win_find:
paths: D:\Temp
age: -1h
- name: Find files larger than 1MB
win_find:
paths: D:\Temp
size: 1048576
- name: Find files larger than 1GB with unit syntax
win_find:
paths: D:\Temp
size: 1g
- name: Find files smaller than 1MB
win_find:
paths: D:\Temp
size: -1048576
- name: Find files smaller than 1GB with unit syntax
win_find:
paths: D:\Temp
size: -1g
- name: Find folders/symlinks in multiple paths
win_find:
paths:
- C:\Temp
- D:\Temp
file_type: directory
- name: Find files and return SHA256 checksum of files found
win_find:
paths: C:\Temp
get_checksum: yes
checksum_algorithm: sha256
- name: Find files and do not return the checksum
win_find:
paths: C:\Temp
get_checksum: no
'''
RETURN = r'''
examined:
    description: The number of files/folders that were checked.
returned: always
type: int
sample: 10
matched:
description: The number of files/folders that match the criteria.
returned: always
type: int
sample: 2
files:
    description: Information on the files/folders that match the criteria, returned as a list of dictionary elements
        for each file matched. The entries are sorted by the path value alphabetically.
returned: success
type: complex
contains:
attributes:
description: attributes of the file at path in raw form.
returned: success, path exists
type: str
sample: "Archive, Hidden"
checksum:
description: The checksum of a file based on checksum_algorithm specified.
returned: success, path exists, path is a file, get_checksum == True
type: str
sample: 09cb79e8fc7453c84a07f644e441fd81623b7f98
creationtime:
description: The create time of the file represented in seconds since epoch.
returned: success, path exists
type: float
sample: 1477984205.15
extension:
description: The extension of the file at path.
returned: success, path exists, path is a file
type: str
sample: ".ps1"
filename:
description: The name of the file.
returned: success, path exists
type: str
sample: temp
isarchive:
description: If the path is ready for archiving or not.
returned: success, path exists
type: bool
sample: true
isdir:
description: If the path is a directory or not.
returned: success, path exists
type: bool
sample: true
ishidden:
description: If the path is hidden or not.
returned: success, path exists
type: bool
sample: true
islnk:
description: If the path is a symbolic link or junction or not.
returned: success, path exists or deduped files
type: bool
sample: true
isreadonly:
description: If the path is read only or not.
returned: success, path exists
type: bool
sample: true
isshared:
description: If the path is shared or not.
returned: success, path exists
type: bool
sample: true
lastaccesstime:
description: The last access time of the file represented in seconds since epoch.
returned: success, path exists
type: float
sample: 1477984205.15
lastwritetime:
description: The last modification time of the file represented in seconds since epoch.
returned: success, path exists
type: float
sample: 1477984205.15
lnk_source:
description: The target of the symbolic link, will return null if not a link or the link is broken.
returned: success, path exists, path is a symbolic link
type: str
sample: C:\temp
owner:
description: The owner of the file.
returned: success, path exists
type: str
sample: BUILTIN\Administrators
path:
description: The full absolute path to the file.
returned: success, path exists
type: str
            sample: C:\Temp
sharename:
description: The name of share if folder is shared.
returned: success, path exists, path is a directory and isshared == True
type: str
sample: file-share
size:
description: The size in bytes of a file or folder.
returned: success, path exists, path is not a link
type: int
sample: 1024
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,018 |
Improve win_find Performance
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Currently, win_find can take a significant amount of time to iterate through large collections of directories.
For example, the DLL directories in Windows show a significant variance in execution time with relatively similar disk space utilization:
a DLL directory at 3GB of disk space and 20,000 directories can take 10-15 minutes to complete a scan with win_find, while
a DLL directory at 4GB of disk space and 120,000 directories can take over 6 hours to complete.
##### ISSUE TYPE
- Bug report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
win_find.py
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
Parameters:
```yaml
scan_dir_list:
- path: 'C:\Windows\System32'
scandir: true
recurse: true
pattern:
- '*.com'
- '*.exe'
- '*.dll'
- '*.ocx'
- '*.sys'
```
Task:
```yaml
- name: Get full list of files from directory
win_find:
paths: "{{ item.path }}"
get_checksum: true
recurse: "{{ item.recurse | default('false') }}"
patterns: "{{item.pattern | default(omit) }}"
use_regex: "{{ item.use_regex | default(omit) }}"
file_type: file
with_items: "{{ scan_dir_list }}"
register: find_file_list
ignore_errors: true
```
|
https://github.com/ansible/ansible/issues/63018
|
https://github.com/ansible/ansible/pull/65536
|
96cbbdd59fe82574b9292bf3cafe34bb8b9ceade
|
fcdebe41e99d8c32ba9219b66df3a65346200b46
| 2019-10-01T16:15:19Z |
python
| 2019-12-06T00:01:11Z |
test/integration/targets/win_find/tasks/tests.yml
|
---
- name: expect failure when not setting paths
win_find:
patterns: a
register: actual
failed_when: "actual.msg != 'Get-AnsibleParam: Missing required argument: paths'"
- name: expect failure when setting paths to a file
win_find:
paths: "{{win_find_dir}}\\single\\large.ps1"
register: actual
failed_when: actual.msg != 'Argument path ' + win_find_dir + '\\single\\large.ps1 is a file not a directory'
- name: expect failure when path is set to a non existent folder
win_find:
paths: "{{win_find_dir}}\\thisisafakefolder"
register: actual
failed_when: actual.msg != 'Argument path ' + win_find_dir + '\\thisisafakefolder does not exist cannot get information on'
- name: get files in single directory
win_find:
paths: "{{win_find_dir}}\\single"
register: actual
- name: set expected value for files in a single directory
set_fact:
expected:
changed: False
examined: 5
failed: False
files:
- { isarchive: True,
attributes: Archive,
checksum: f8d100cdcf0e6c1007db2f8dd0b7ee2884df89af,
creationtime: 1477984205,
extension: .ps1,
filename: large.ps1,
ishidden: False,
isdir: False,
islnk: False,
lastaccesstime: 1477984205,
lastwritetime: 1477984205,
owner: BUILTIN\Administrators,
path: "{{win_find_dir}}\\single\\large.ps1",
isreadonly: False,
isshared: False,
size: 260002 }
- { isarchive: True,
attributes: Archive,
checksum: 8df33cee3325596517df5bb5aa980cf9c5c1fda3,
creationtime: 1477984205,
extension: .log,
filename: out_20161101-091005.log,
ishidden: False,
isdir: False,
islnk: False,
lastaccesstime: 1477984205,
lastwritetime: 1477984205,
owner: BUILTIN\Administrators,
path: "{{win_find_dir}}\\single\\out_20161101-091005.log",
isreadonly: False,
isshared: False,
size: 14 }
- { isarchive: True,
attributes: Archive,
checksum: 86f7e437faa5a7fce15d1ddcb9eaeaea377667b8,
creationtime: 1477984205,
extension: .ps1,
filename: small.ps1,
ishidden: False,
isdir: False,
islnk: False,
lastaccesstime: 1477984205,
lastwritetime: 1477984205,
owner: BUILTIN\Administrators,
path: "{{win_find_dir}}\\single\\small.ps1",
isreadonly: False,
isshared: False,
size: 1 }
- { isarchive: True,
attributes: Archive,
checksum: 8df33cee3325596517df5bb5aa980cf9c5c1fda3,
creationtime: 1477984205,
extension: .ps1,
filename: test.ps1,
ishidden: False,
isdir: False,
islnk: False,
lastaccesstime: 1477984205,
lastwritetime: 1477984205,
owner: BUILTIN\Administrators,
path: "{{win_find_dir}}\\single\\test.ps1",
isreadonly: False,
isshared: False,
size: 14 }
matched: 4
- name: assert actual == expected
assert:
that: actual == expected
- name: find hidden files
win_find:
paths: ['{{win_find_dir}}\\single', '{{win_find_dir}}\\nested']
hidden: True
register: actual
- name: set fact for hidden files
set_fact:
expected:
changed: False
examined: 11
failed: False
files:
- { isarchive: True,
attributes: "Hidden, Archive",
checksum: 8df33cee3325596517df5bb5aa980cf9c5c1fda3,
creationtime: 1477984205,
extension: .ps1,
filename: hidden.ps1,
ishidden: True,
isdir: False,
islnk: False,
lastaccesstime: 1477984205,
lastwritetime: 1477984205,
owner: BUILTIN\Administrators,
path: "{{win_find_dir}}\\single\\hidden.ps1",
isreadonly: False,
isshared: False,
size: 14 }
matched: 1
- name: assert actual == expected
assert:
that: actual == expected
- name: find file based on pattern
win_find:
paths: '{{win_find_dir}}\\single'
patterns: ['*.log', 'out_*']
register: actual_pattern
- name: find file based on pattern regex
win_find:
paths: '{{win_find_dir}}\\single'
patterns: "out_\\d{8}-\\d{6}.log"
use_regex: True
register: actual_regex
- name: set fact for pattern files
set_fact:
expected:
changed: False
examined: 5
failed: False
files:
- { isarchive: True,
attributes: Archive,
checksum: 8df33cee3325596517df5bb5aa980cf9c5c1fda3,
creationtime: 1477984205,
extension: .log,
filename: out_20161101-091005.log,
ishidden: False,
isdir: False,
islnk: False,
lastaccesstime: 1477984205,
lastwritetime: 1477984205,
owner: BUILTIN\Administrators,
path: "{{win_find_dir}}\\single\\out_20161101-091005.log",
isreadonly: False,
isshared: False,
size: 14 }
matched: 1
- name: assert actual == expected
assert:
that:
- actual_pattern == expected
- actual_regex == expected
- name: find files with recurse set
win_find:
paths: "{{win_find_dir}}\\nested"
recurse: True
patterns: "*.ps1"
register: actual
- name: set expected value for files in a nested directory
set_fact:
expected:
changed: False
examined: 8
failed: False
files:
- { isarchive: True,
attributes: Archive,
checksum: 8df33cee3325596517df5bb5aa980cf9c5c1fda3,
creationtime: 1477984205,
extension: .ps1,
filename: file.ps1,
ishidden: False,
isdir: False,
islnk: False,
lastaccesstime: 1477984205,
lastwritetime: 1477984205,
owner: BUILTIN\Administrators,
path: "{{win_find_dir}}\\nested\\file.ps1",
isreadonly: False,
isshared: False,
size: 14 }
- { isarchive: True,
attributes: Archive,
checksum: 8df33cee3325596517df5bb5aa980cf9c5c1fda3,
creationtime: 1477984205,
extension: .ps1,
filename: test.ps1,
ishidden: False,
isdir: False,
islnk: False,
lastaccesstime: 1477984205,
lastwritetime: 1477984205,
owner: BUILTIN\Administrators,
path: "{{win_find_dir}}\\nested\\sub-nest\\test.ps1",
isreadonly: False,
isshared: False,
size: 14 }
- { isarchive: True,
attributes: Archive,
checksum: 8df33cee3325596517df5bb5aa980cf9c5c1fda3,
creationtime: 1477984205,
extension: .ps1,
filename: test.ps1,
ishidden: False,
isdir: False,
islnk: False,
lastaccesstime: 1477984205,
lastwritetime: 1477984205,
owner: BUILTIN\Administrators,
path: "{{win_find_dir}}\\nested\\test.ps1",
isreadonly: False,
isshared: False,
size: 14 }
matched: 3
- name: assert actual == expected
assert:
that: actual == expected
- name: find files with recurse set and follow links
win_find:
paths: "{{win_find_dir}}\\nested"
recurse: True
follow: True
patterns: "*.ps1"
register: actual
- name: set expected value for files in a nested directory while following links
set_fact:
expected:
changed: False
examined: 10
failed: False
files:
- { isarchive: True,
attributes: Archive,
checksum: 8df33cee3325596517df5bb5aa980cf9c5c1fda3,
creationtime: 1477984205,
extension: .ps1,
filename: file.ps1,
ishidden: False,
isdir: False,
islnk: False,
lastaccesstime: 1477984205,
lastwritetime: 1477984205,
owner: BUILTIN\Administrators,
path: "{{win_find_dir}}\\nested\\file.ps1",
isreadonly: False,
isshared: False,
size: 14 }
- { isarchive: True,
attributes: Archive,
checksum: 8df33cee3325596517df5bb5aa980cf9c5c1fda3,
creationtime: 1477984205,
extension: .ps1,
filename: link.ps1,
ishidden: False,
isdir: False,
islnk: False,
lastaccesstime: 1477984205,
lastwritetime: 1477984205,
owner: BUILTIN\Administrators,
path: "{{win_find_dir}}\\nested\\link\\link.ps1",
isreadonly: False,
isshared: False,
size: 14 }
- { isarchive: True,
attributes: Archive,
checksum: 8df33cee3325596517df5bb5aa980cf9c5c1fda3,
creationtime: 1477984205,
extension: .ps1,
filename: test.ps1,
ishidden: False,
isdir: False,
islnk: False,
lastaccesstime: 1477984205,
lastwritetime: 1477984205,
owner: BUILTIN\Administrators,
path: "{{win_find_dir}}\\nested\\sub-nest\\test.ps1",
isreadonly: False,
isshared: False,
size: 14 }
- { isarchive: True,
attributes: Archive,
checksum: 8df33cee3325596517df5bb5aa980cf9c5c1fda3,
creationtime: 1477984205,
extension: .ps1,
filename: test.ps1,
ishidden: False,
isdir: False,
islnk: False,
lastaccesstime: 1477984205,
lastwritetime: 1477984205,
owner: BUILTIN\Administrators,
path: "{{win_find_dir}}\\nested\\test.ps1",
isreadonly: False,
isshared: False,
size: 14 }
matched: 4
- name: assert actual == expected
assert:
that: actual == expected
- name: find directories
win_find:
paths: "{{win_find_dir}}\\link-dest"
file_type: directory
register: actual
- name: set expected fact for directories with recurse and follow
set_fact:
expected:
changed: False
examined: 2
failed: False
files:
- { isarchive: False,
attributes: Directory,
creationtime: 1477984205,
filename: sub-link,
ishidden: False,
isdir: True,
islnk: False,
lastaccesstime: 1477984205,
lastwritetime: 1477984205,
owner: BUILTIN\Administrators,
path: "{{win_find_dir}}\\link-dest\\sub-link",
isreadonly: False,
isshared: False,
size: 0 }
matched: 1
- name: assert actual == expected
assert:
that: actual == expected
- name: find directories recurse and follow with a broken link
win_find:
paths: "{{win_find_dir}}"
file_type: directory
recurse: True
follow: True
register: actual
- name: check directory count with recurse and follow is correct
assert:
that:
- actual.examined == 37
- actual.matched == 17
- actual.files[0].filename == 'broken-link'
- actual.files[0].islnk == True
- actual.files[6].filename == 'junction-link'
- actual.files[6].islnk == True
- actual.files[6].lnk_source == win_find_dir + '\\junction-link-dest'
- actual.files[11].filename == 'link'
- actual.files[11].islnk == True
- actual.files[11].lnk_source == win_find_dir + '\\link-dest'
- actual.files[15].filename == 'folder'
- actual.files[15].islnk == False
- actual.files[15].isshared == True
- actual.files[15].sharename == 'folder-share'
- name: filter files by size without byte specified
win_find:
paths: "{{win_find_dir}}\\single"
size: 260002
register: actual_without_byte
- name: filter files by size with byte specified
win_find:
paths: "{{win_find_dir}}\\single"
size: 253k
register: actual_with_byte
- name: set expected fact for files by size
set_fact:
expected:
changed: False
examined: 5
failed: False
files:
- { isarchive: True,
attributes: Archive,
checksum: f8d100cdcf0e6c1007db2f8dd0b7ee2884df89af,
creationtime: 1477984205,
extension: ".ps1",
filename: large.ps1,
ishidden: False,
isdir: False,
islnk: False,
lastaccesstime: 1477984205,
lastwritetime: 1477984205,
owner: BUILTIN\Administrators,
path: "{{win_find_dir}}\\single\\large.ps1",
isreadonly: False,
isshared: False,
size: 260002 }
matched: 1
- name: assert actual == expected
assert:
that:
- actual_without_byte == expected
- actual_with_byte == expected
- name: filter files by size (less than) without byte specified
win_find:
paths: "{{win_find_dir}}\\single"
size: -4
register: actual_without_byte
- name: filter files by size (less than) with byte specified
win_find:
paths: "{{win_find_dir}}\\single"
size: -4b
register: actual_with_byte
- name: set expected fact for files by size (less than)
set_fact:
expected:
changed: False
examined: 5
failed: False
files:
- { isarchive: True,
attributes: Archive,
checksum: 86f7e437faa5a7fce15d1ddcb9eaeaea377667b8,
creationtime: 1477984205,
extension: ".ps1",
filename: small.ps1,
ishidden: False,
isdir: False,
islnk: False,
lastaccesstime: 1477984205,
lastwritetime: 1477984205,
owner: BUILTIN\Administrators,
path: "{{win_find_dir}}\\single\\small.ps1",
isreadonly: False,
isshared: False,
size: 1 }
matched: 1
- name: assert actual == expected
assert:
that:
- actual_without_byte == expected
- actual_with_byte == expected
# For dates we cannot assert against expected as the times change; this is a poor man's attempt at testing
- name: filter files by age without unit specified
win_find:
paths: "{{win_find_dir}}\\date"
age: 3600
register: actual_without_unit
- name: filter files by age with unit specified
win_find:
paths: "{{win_find_dir}}\\date"
age: 1h
register: actual_with_unit
- name: assert dates match each other
assert:
that:
- actual_without_unit == actual_with_unit
- actual_without_unit.matched == 1
- actual_without_unit.files[0].checksum == 'd1185139c47f5bc951e2e9135922fe31059206b1'
- actual_without_unit.files[0].path == win_find_dir + '\\date\\old.ps1'
- name: filter files by age (newer than) without unit specified
win_find:
paths: "{{win_find_dir}}\\date"
age: -3600
register: actual_without_unit
- name: filter files by age (newer than) with unit specified
win_find:
paths: "{{win_find_dir}}\\date"
age: -1h
register: actual_with_unit
- name: assert dates match each other
assert:
that:
- actual_without_unit == actual_with_unit
- actual_without_unit.matched == 1
- actual_without_unit.files[0].checksum == 'af99d0e98df4531b9f26c942f41d65c58766bfa9'
- actual_without_unit.files[0].path == win_find_dir + '\\date\\new.ps1'
- name: get list of files with md5 checksum
win_find:
paths: "{{win_find_dir}}\\single"
patterns: test.ps1
checksum_algorithm: md5
register: actual_md5_checksum
- name: assert md5 checksum value
assert:
that:
- actual_md5_checksum.files[0].checksum == 'd1713d0f1d2e8fae230328d8fd59de01'
- name: get list of files with sha1 checksum
win_find:
paths: "{{win_find_dir}}\\single"
patterns: test.ps1
checksum_algorithm: sha1
register: actual_sha1_checksum
- name: assert sha1 checksum value
assert:
that:
- actual_sha1_checksum.files[0].checksum == '8df33cee3325596517df5bb5aa980cf9c5c1fda3'
- name: get list of files with sha256 checksum
win_find:
paths: "{{win_find_dir}}\\single"
patterns: test.ps1
checksum_algorithm: sha256
register: actual_sha256_checksum
- name: assert sha256 checksum value
assert:
that:
- actual_sha256_checksum.files[0].checksum == 'c20d2eba7ffda0079812721b6f4e4e109e2f0c5e8cc3d1273a060df6f7d9f339'
- name: get list of files with sha384 checksum
win_find:
paths: "{{win_find_dir}}\\single"
patterns: test.ps1
checksum_algorithm: sha384
register: actual_sha384_checksum
- name: assert sha384 checksum value
assert:
that:
- actual_sha384_checksum.files[0].checksum == 'aed515eb216b9c7009ae8c4680f46c1e22004528b231aa0482a8587543bca47d3504e9f77e884eb2d11b2f9f5dc01651'
- name: get list of files with sha512 checksum
win_find:
paths: "{{win_find_dir}}\\single"
patterns: test.ps1
checksum_algorithm: sha512
register: actual_sha512_checksum
- name: assert sha512 checksum value
assert:
that:
- actual_sha512_checksum.files[0].checksum == '05abf64a68c4731699c23b4fc6894a36646fce525f3c96f9cf743b5d0c3bfd933dad0e95e449e3afe1f74d534d69a53b8f46cf835763dd42915813c897b02b87'
- name: get list of files without checksum
win_find:
paths: "{{win_find_dir}}\\single"
patterns: test.ps1
get_checksum: False
register: actual_no_checksum
- name: assert no checksum is returned
assert:
that:
- actual_no_checksum.files[0].checksum is undefined
# https://github.com/ansible/ansible/issues/26158
- name: get list of files in an empty nested directory
win_find:
paths: '{{win_find_dir}}\emptynested'
register: actual_empty_nested
- name: assert get list of files in an empty nested directory
assert:
that:
- actual_empty_nested.matched == 0
- name: create new folders for security tests
win_file:
path: '{{win_find_dir}}\{{item}}'
state: directory
with_items:
- secure-tests\secure\internal-folder
- secure-tests\open\internal-folder
- name: create random password for test user
set_fact:
test_win_find_password: password123! + {{ lookup('password', '/dev/null chars=ascii_letters,digits length=8') }}
- name: create test user who does not have access to secure folder
win_user:
name: '{{test_win_find_username}}'
password: '{{test_win_find_password}}'
state: present
groups:
- Users
- name: change owner of secure folder
win_owner:
path: '{{win_find_dir}}\secure-tests\secure'
user: BUILTIN\Administrators
recurse: yes
- name: set explicit inheritance of secure folder for the Administrators accounts
win_acl:
user: BUILTIN\Administrators
path: '{{win_find_dir}}\secure-tests\secure'
rights: FullControl
type: allow
state: present
inherit: None
- name: remove inheritance on the secure folder
win_acl_inheritance:
path: '{{win_find_dir}}\secure-tests\secure'
reorganize: no
state: absent
- name: run win_find with under-privileged account
win_find:
paths: '{{win_find_dir}}\secure-tests'
recurse: yes
file_type: directory
register: secure_result
become: yes
become_method: runas
become_user: '{{test_win_find_username}}'
vars:
ansible_become_password: '{{test_win_find_password}}'
- name: assert win_find only examined 2 files with under-privileged account
assert:
that:
- secure_result.examined == 2
- secure_result.matched == 2
- secure_result.files[0].path == win_find_dir + "\secure-tests\open"
- secure_result.files[1].path == win_find_dir + "\secure-tests\open\internal-folder"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,556 |
win_get_url doesn't follow redirects
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When using win_get_url on Ansible 2.9.x to download files, it stops on the first 301 redirect and creates a 372-byte file containing the HTTP response of the web server instead of the file that sits behind the redirect.
This happens on any Windows version that I tested: 2012 R2, 2016, 2019
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
win_get_url
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /Users/[REDACTED]/ansible.cfg
configured module search path = ['/Users/[REDACTED]/library', '/Users/[REDACTED]/ara/plugins/modules']
ansible python module location = /Users/[REDACTED]/lib/python3.7/site-packages/ansible
executable location = /Users/[REDACTED]/bin/ansible
python version = 3.7.5 (default, Nov 1 2019, 02:16:32) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(/Users/[REDACTED]/ansible.cfg) = True
ANSIBLE_PIPELINING(/Users/[REDACTED]/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/Users/[REDACTED]/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no
ANSIBLE_SSH_RETRIES(/Users/[REDACTED]/ansible.cfg) = 3
CACHE_PLUGIN(/Users/[REDACTED]/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/Users/[REDACTED]/ansible.cfg) = ~/.ansible/facts.cachedir
CACHE_PLUGIN_TIMEOUT(/Users/[REDACTED]/ansible.cfg) = 300
DEFAULT_ACTION_PLUGIN_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/ara/plugins/actions']
DEFAULT_CALLBACK_PLUGIN_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/ara/plugins/callbacks']
DEFAULT_CALLBACK_WHITELIST(/Users/[REDACTED]/ansible.cfg) = ['profile_roles', 'profile_tasks', 'timer', 'junit']
DEFAULT_FORKS(/Users/[REDACTED]/ansible.cfg) = 100
DEFAULT_GATHERING(/Users/[REDACTED]/ansible.cfg) = smart
DEFAULT_HOST_LIST(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/inventory.sh']
DEFAULT_LOG_PATH(/Users/[REDACTED]/ansible.cfg) = /Users/res/.ansible/SLAnsible.log
DEFAULT_LOOKUP_PLUGIN_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/plugins/lookup']
DEFAULT_MODULE_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/library', '/Users/[REDACTED]/ara/plugins/modules']
DEFAULT_REMOTE_USER(/Users/[REDACTED]/ansible.cfg) = stylelabs
DEFAULT_ROLES_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/roles_galaxy', '/Users/[REDACTED]/roles_mansible']
DEFAULT_STDOUT_CALLBACK(/Users/[REDACTED]/ansible.cfg) = yaml
DEFAULT_TIMEOUT(/Users/[REDACTED]/ansible.cfg) = 20
HOST_KEY_CHECKING(/Users/[REDACTED]/ansible.cfg) = False
RETRY_FILES_ENABLED(/Users/[REDACTED]/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/Users/[REDACTED]/ansible.cfg) = ignore
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS: Windows 2012 R2, Windows 2016, Windows 2019
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: all
vars:
zabbix_win_download_link: "https://www.zabbix.com/downloads/4.0.0/zabbix_agents-4.0.0-win-amd64-openssl.zip"
zabbix_win_install_dir: "c:\\windows\\temp"
zabbix_win_package: "zabbix.zip"
tasks:
- name: "Windows | Download Zabbix Agent Zip file"
win_get_url:
url: "{{ zabbix_win_download_link }}"
dest: '{{ zabbix_win_install_dir }}\{{ zabbix_win_package }}'
force: False
follow_redirects: safe
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I'm expecting the file behind the redirect to be downloaded locally, not the Redirect HTTP response.
This was working in Ansible 2.8 and 2.7.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Windows | Download Zabbix Agent Zip file] *******************************************************************************************************************************************************************************************************
task path: /Users/[REDACTED]/win-get-url.yml:9
Thursday 05 December 2019 13:23:19 +0100 (0:00:04.039) 0:00:04.201 *****
Thursday 05 December 2019 13:23:19 +0100 (0:00:04.039) 0:00:04.199 *****
Using module file /Users/[REDACTED]/lib/python3.7/site-packages/ansible/modules/windows/win_get_url.ps1
Pipelining is enabled.
<[REDACTED]> ESTABLISH WINRM CONNECTION FOR USER: stylelabs on PORT 5986 TO [REDACTED]
EXEC (via pipeline wrapper)
changed: [[REDACTED]] => changed=true
checksum_dest: 5ab849c3b74d71be4d7d520de6c374e64fa6553c
checksum_src: 5ab849c3b74d71be4d7d520de6c374e64fa6553c
dest: c:\windows\temp\zabbix.zip
elapsed: 0.325719
invocation:
module_args:
checksum: null
checksum_algorithm: sha1
checksum_url: null
client_cert: null
client_cert_password: null
dest: c:\windows\temp\zabbix.zip
follow_redirects: safe
force: false
force_basic_auth: false
headers: null
http_agent: ansible-httpget
maximum_redirection: 50
method: null
proxy_password: null
proxy_url: null
proxy_use_default_credential: false
proxy_username: null
timeout: 30
url: https://www.zabbix.com/downloads/4.0.0/zabbix_agents-4.0.0-win-amd64-openssl.zip
url_password: null
url_username: null
use_default_credential: false
use_proxy: true
validate_certs: true
msg: Moved Permanently
size: 372
status_code: 301
url: https://www.zabbix.com/downloads/4.0.0/zabbix_agents-4.0.0-win-amd64-openssl.zip
META: ran handlers
META: ran handlers
```
File content
```
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved <a href="https://assets.zabbix.com/downloads/4.0.0/zabbix_agents-4.0.0-win-amd64-openssl.zip">here</a>.</p>
<hr>
<address>Apache/2.4.10 (Debian) Server at www.zabbix.com Port 443</address>
</body></html>
```
The following should be enough (it's even the default per the documentation), but it's not:
```yaml
follow_redirects: safe
```
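For reference, the behaviour can be reproduced outside Ansible with a bare .NET web request. This is a minimal hedged sketch (the URL is the one from the report; any 301-redirecting URL works) showing that a GET with auto-redirect disabled, which is effectively what the 2.9 `safe` handling did, returns the 301 itself rather than the target file:
```powershell
# Minimal repro outside Ansible: with AllowAutoRedirect disabled, a GET
# returns the 301 response itself instead of the redirect target.
$url = "https://www.zabbix.com/downloads/4.0.0/zabbix_agents-4.0.0-win-amd64-openssl.zip"
$request = [System.Net.WebRequest]::Create($url)
$request.Method = "GET"
$request.AllowAutoRedirect = $false  # what the broken 'safe' handling effectively set for GET

$response = $request.GetResponse()
[int]$response.StatusCode            # 301
$response.Headers["Location"]        # the assets.zabbix.com URL from the HTML above
$response.Close()
```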
|
https://github.com/ansible/ansible/issues/65556
|
https://github.com/ansible/ansible/pull/65584
|
eaba5572cd1f206ae850c6730d50f32c58cc3131
|
9a81f5c3b7a723cc878a404dcf20037ea11bfeb7
| 2019-12-05T12:26:12Z |
python
| 2019-12-06T01:47:35Z |
changelogs/fragments/win_get_url-redirection.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,556 |
win_get_url doesn't follow redirects
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When using win_get_url on Ansible 2.9.x to download files, it stops at the first 301 redirect and creates a 372-byte file containing the web server's HTTP response instead of the file behind the redirect.
This happens on every Windows version I tested: 2012 R2, 2016, and 2019
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
win_get_url
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /Users/[REDACTED]/ansible.cfg
configured module search path = ['/Users/[REDACTED]/library', '/Users/[REDACTED]/ara/plugins/modules']
ansible python module location = /Users/[REDACTED]/lib/python3.7/site-packages/ansible
executable location = /Users/[REDACTED]/bin/ansible
python version = 3.7.5 (default, Nov 1 2019, 02:16:32) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(/Users/[REDACTED]/ansible.cfg) = True
ANSIBLE_PIPELINING(/Users/[REDACTED]/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/Users/[REDACTED]/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no
ANSIBLE_SSH_RETRIES(/Users/[REDACTED]/ansible.cfg) = 3
CACHE_PLUGIN(/Users/[REDACTED]/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/Users/[REDACTED]/ansible.cfg) = ~/.ansible/facts.cachedir
CACHE_PLUGIN_TIMEOUT(/Users/[REDACTED]/ansible.cfg) = 300
DEFAULT_ACTION_PLUGIN_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/ara/plugins/actions']
DEFAULT_CALLBACK_PLUGIN_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/ara/plugins/callbacks']
DEFAULT_CALLBACK_WHITELIST(/Users/[REDACTED]/ansible.cfg) = ['profile_roles', 'profile_tasks', 'timer', 'junit']
DEFAULT_FORKS(/Users/[REDACTED]/ansible.cfg) = 100
DEFAULT_GATHERING(/Users/[REDACTED]/ansible.cfg) = smart
DEFAULT_HOST_LIST(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/inventory.sh']
DEFAULT_LOG_PATH(/Users/[REDACTED]/ansible.cfg) = /Users/res/.ansible/SLAnsible.log
DEFAULT_LOOKUP_PLUGIN_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/plugins/lookup']
DEFAULT_MODULE_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/library', '/Users/[REDACTED]/ara/plugins/modules']
DEFAULT_REMOTE_USER(/Users/[REDACTED]/ansible.cfg) = stylelabs
DEFAULT_ROLES_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/roles_galaxy', '/Users/[REDACTED]/roles_mansible']
DEFAULT_STDOUT_CALLBACK(/Users/[REDACTED]/ansible.cfg) = yaml
DEFAULT_TIMEOUT(/Users/[REDACTED]/ansible.cfg) = 20
HOST_KEY_CHECKING(/Users/[REDACTED]/ansible.cfg) = False
RETRY_FILES_ENABLED(/Users/[REDACTED]/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/Users/[REDACTED]/ansible.cfg) = ignore
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS: Windows 2012 R2, Windows 2016, Windows 2019
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: all
vars:
zabbix_win_download_link: "https://www.zabbix.com/downloads/4.0.0/zabbix_agents-4.0.0-win-amd64-openssl.zip"
zabbix_win_install_dir: "c:\\windows\\temp"
zabbix_win_package: "zabbix.zip"
tasks:
- name: "Windows | Download Zabbix Agent Zip file"
win_get_url:
url: "{{ zabbix_win_download_link }}"
dest: '{{ zabbix_win_install_dir }}\{{ zabbix_win_package }}'
force: False
follow_redirects: safe
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I'm expecting the file behind the redirect to be downloaded locally, not the Redirect HTTP response.
This was working in Ansible 2.8 and 2.7.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Windows | Download Zabbix Agent Zip file] *******************************************************************************************************************************************************************************************************
task path: /Users/[REDACTED]/win-get-url.yml:9
Thursday 05 December 2019 13:23:19 +0100 (0:00:04.039) 0:00:04.201 *****
Thursday 05 December 2019 13:23:19 +0100 (0:00:04.039) 0:00:04.199 *****
Using module file /Users/[REDACTED]/lib/python3.7/site-packages/ansible/modules/windows/win_get_url.ps1
Pipelining is enabled.
<[REDACTED]> ESTABLISH WINRM CONNECTION FOR USER: stylelabs on PORT 5986 TO [REDACTED]
EXEC (via pipeline wrapper)
changed: [[REDACTED]] => changed=true
checksum_dest: 5ab849c3b74d71be4d7d520de6c374e64fa6553c
checksum_src: 5ab849c3b74d71be4d7d520de6c374e64fa6553c
dest: c:\windows\temp\zabbix.zip
elapsed: 0.325719
invocation:
module_args:
checksum: null
checksum_algorithm: sha1
checksum_url: null
client_cert: null
client_cert_password: null
dest: c:\windows\temp\zabbix.zip
follow_redirects: safe
force: false
force_basic_auth: false
headers: null
http_agent: ansible-httpget
maximum_redirection: 50
method: null
proxy_password: null
proxy_url: null
proxy_use_default_credential: false
proxy_username: null
timeout: 30
url: https://www.zabbix.com/downloads/4.0.0/zabbix_agents-4.0.0-win-amd64-openssl.zip
url_password: null
url_username: null
use_default_credential: false
use_proxy: true
validate_certs: true
msg: Moved Permanently
size: 372
status_code: 301
url: https://www.zabbix.com/downloads/4.0.0/zabbix_agents-4.0.0-win-amd64-openssl.zip
META: ran handlers
META: ran handlers
```
File content
```
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved <a href="https://assets.zabbix.com/downloads/4.0.0/zabbix_agents-4.0.0-win-amd64-openssl.zip">here</a>.</p>
<hr>
<address>Apache/2.4.10 (Debian) Server at www.zabbix.com Port 443</address>
</body></html>
```
The following should be enough (it's even the default per the documentation), but it's not:
```yaml
follow_redirects: safe
```
|
https://github.com/ansible/ansible/issues/65556
|
https://github.com/ansible/ansible/pull/65584
|
eaba5572cd1f206ae850c6730d50f32c58cc3131
|
9a81f5c3b7a723cc878a404dcf20037ea11bfeb7
| 2019-12-05T12:26:12Z |
python
| 2019-12-06T01:47:35Z |
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.WebRequest.psm1
|
# Copyright (c) 2019 Ansible Project
# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
Function Get-AnsibleWebRequest {
<#
.SYNOPSIS
Creates a System.Net.WebRequest object based on common URL module options in Ansible.
.DESCRIPTION
Will create a WebRequest based on common input options within Ansible. This can be used manually or with
Invoke-WithWebRequest.
.PARAMETER Uri
The URI to create the web request for.
.PARAMETER Method
The protocol method to use. If omitted, the default method for the specified URI protocol is used.
.PARAMETER FollowRedirects
Whether to follow redirect responses. This is only valid when using an HTTP URI.
all - Will follow all redirects
none - Will follow no redirects
safe - Will only follow redirects when GET or HEAD is used as the Method
.PARAMETER Headers
A hashtable or dictionary of header values to set on the request. This is only valid for a HTTP URI.
.PARAMETER HttpAgent
A string to set for the 'User-Agent' header. This is only valid for a HTTP URI.
.PARAMETER MaximumRedirection
The maximum number of redirections that will be followed. This is only valid for a HTTP URI.
.PARAMETER Timeout
The timeout in seconds that defines how long to wait until the request times out.
.PARAMETER ValidateCerts
Whether to validate SSL certificates, default to True.
.PARAMETER ClientCert
The path to PFX file to use for X509 authentication. This is only valid for a HTTP URI. This path can either
be a filesystem path (C:\folder\cert.pfx) or a PSPath to a credential (Cert:\CurrentUser\My\<thumbprint>).
.PARAMETER ClientCertPassword
The password for the PFX certificate if required. This is only valid for a HTTP URI.
.PARAMETER ForceBasicAuth
Whether to set the Basic auth header on the first request instead of when required. This is only valid for a
HTTP URI.
.PARAMETER UrlUsername
The username to use for authenticating with the target.
.PARAMETER UrlPassword
The password to use for authenticating with the target.
.PARAMETER UseDefaultCredential
Whether to use the current user's credentials if available. This will only work when using Become, using SSH with
password auth, or WinRM with CredSSP or Kerberos with credential delegation.
.PARAMETER UseProxy
Whether to use the default proxy defined in IE (WinINet) for the user or set no proxy at all. This should not
be set to True when ProxyUrl is also defined.
.PARAMETER ProxyUrl
An explicit proxy server to use for the request instead of relying on the default proxy in IE. This is only
valid for a HTTP URI.
.PARAMETER ProxyUsername
An optional username to use for proxy authentication.
.PARAMETER ProxyPassword
The password for ProxyUsername.
.PARAMETER ProxyUseDefaultCredential
Whether to use the current user's credentials for proxy authentication if available. This will only work when
using Become, using SSH with password auth, or WinRM with CredSSP or Kerberos with credential delegation.
.PARAMETER Module
The AnsibleBasic module that can be used as a backup parameter source or a way to return warnings back to the
Ansible controller.
.EXAMPLE
$spec = @{
options = @{}
}
$spec.options += $ansible_web_request_options
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)
$web_request = Get-AnsibleWebRequest -Module $module
#>
[CmdletBinding()]
[OutputType([System.Net.WebRequest])]
Param (
[Alias("url")]
[System.Uri]
$Uri,
[System.String]
$Method,
[Alias("follow_redirects")]
[ValidateSet("all", "none", "safe")]
[System.String]
$FollowRedirects = "safe",
[System.Collections.IDictionary]
$Headers,
[Alias("http_agent")]
[System.String]
$HttpAgent = "ansible-httpget",
[Alias("maximum_redirection")]
[System.Int32]
$MaximumRedirection = 50,
[System.Int32]
$Timeout = 30,
[Alias("validate_certs")]
[System.Boolean]
$ValidateCerts = $true,
# Credential params
[Alias("client_cert")]
[System.String]
$ClientCert,
[Alias("client_cert_password")]
[System.String]
$ClientCertPassword,
[Alias("force_basic_auth")]
[Switch]
$ForceBasicAuth,
[Alias("url_username")]
[System.String]
$UrlUsername,
[Alias("url_password")]
[System.String]
$UrlPassword,
[Alias("use_default_credential")]
[Switch]
$UseDefaultCredential,
# Proxy params
[Alias("use_proxy")]
[System.Boolean]
$UseProxy = $true,
[Alias("proxy_url")]
[System.String]
$ProxyUrl,
[Alias("proxy_username")]
[System.String]
$ProxyUsername,
[Alias("proxy_password")]
[System.String]
$ProxyPassword,
[Alias("proxy_use_default_credential")]
[Switch]
$ProxyUseDefaultCredential,
[ValidateScript({ $_.GetType().FullName -eq 'Ansible.Basic.AnsibleModule' })]
[System.Object]
$Module
)
# Set module options for parameters unless they were explicitly passed in.
if ($Module) {
foreach ($param in $PSCmdlet.MyInvocation.MyCommand.Parameters.GetEnumerator()) {
if ($PSBoundParameters.ContainsKey($param.Key)) {
# Was set explicitly we want to use that value
continue
}
foreach ($alias in @($Param.Key) + $param.Value.Aliases) {
if ($Module.Params.ContainsKey($alias)) {
$var_value = $Module.Params.$alias -as $param.Value.ParameterType
Set-Variable -Name $param.Key -Value $var_value
break
}
}
}
}
# Disable certificate validation if requested
# FUTURE: set this on ServerCertificateValidationCallback of the HttpWebRequest once .NET 4.5 is the minimum
if (-not $ValidateCerts) {
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }
}
# Enable TLS1.1/TLS1.2 if they're available but disabled (eg. .NET 4.5)
$security_protocols = [System.Net.ServicePointManager]::SecurityProtocol -bor [System.Net.SecurityProtocolType]::SystemDefault
if ([System.Net.SecurityProtocolType].GetMember("Tls11").Count -gt 0) {
$security_protocols = $security_protocols -bor [System.Net.SecurityProtocolType]::Tls11
}
if ([System.Net.SecurityProtocolType].GetMember("Tls12").Count -gt 0) {
$security_protocols = $security_protocols -bor [System.Net.SecurityProtocolType]::Tls12
}
[System.Net.ServicePointManager]::SecurityProtocol = $security_protocols
$web_request = [System.Net.WebRequest]::Create($Uri)
if ($Method) {
$web_request.Method = $Method
}
$web_request.Timeout = $Timeout * 1000
if ($UseDefaultCredential -and $web_request -is [System.Net.HttpWebRequest]) {
$web_request.UseDefaultCredentials = $true
} elseif ($UrlUsername) {
if ($ForceBasicAuth) {
$auth_value = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $UrlUsername, $UrlPassword)))
$web_request.Headers.Add("Authorization", "Basic $auth_value")
} else {
$credential = New-Object -TypeName System.Net.NetworkCredential -ArgumentList $UrlUsername, $UrlPassword
$web_request.Credentials = $credential
}
}
if ($ClientCert) {
# Expecting either a filepath or PSPath (Cert:\CurrentUser\My\<thumbprint>)
$cert = Get-Item -LiteralPath $ClientCert -ErrorAction SilentlyContinue
if ($null -eq $cert) {
Write-Error -Message "Client certificate '$ClientCert' does not exist" -Category ObjectNotFound
return
}
$crypto_ns = 'System.Security.Cryptography.X509Certificates'
if ($cert.PSProvider.Name -ne 'Certificate') {
try {
$cert = New-Object -TypeName "$crypto_ns.X509Certificate2" -ArgumentList @(
$ClientCert, $ClientCertPassword
)
} catch [System.Security.Cryptography.CryptographicException] {
Write-Error -Message "Failed to read client certificate at '$ClientCert'" -Exception $_.Exception -Category SecurityError
return
}
}
$web_request.ClientCertificates = New-Object -TypeName "$crypto_ns.X509Certificate2Collection" -ArgumentList @(
$cert
)
}
if (-not $UseProxy) {
$proxy = $null
} elseif ($ProxyUrl) {
$proxy = New-Object -TypeName System.Net.WebProxy -ArgumentList $ProxyUrl, $true
} else {
$proxy = $web_request.Proxy
}
# $web_request.Proxy may return $null for a FTP web request. We only set the credentials if we have an actual
# proxy to work with, otherwise just ignore the credentials property.
if ($null -ne $proxy) {
if ($ProxyUseDefaultCredential) {
# Weird hack: $web_request.Proxy returns an IWebProxy object which only guarantees the Credentials
# property. We cannot set UseDefaultCredentials so we just set the Credentials to the
# DefaultCredentials in the CredentialCache which does the same thing.
$proxy.Credentials = [System.Net.CredentialCache]::DefaultCredentials
} elseif ($ProxyUsername) {
$proxy.Credentials = New-Object -TypeName System.Net.NetworkCredential -ArgumentList @(
$ProxyUsername, $ProxyPassword
)
} else {
$proxy.Credentials = $null
}
$web_request.Proxy = $proxy
}
# Some parameters only apply when dealing with a HttpWebRequest
if ($web_request -is [System.Net.HttpWebRequest]) {
if ($Headers) {
foreach ($header in $Headers.GetEnumerator()) {
switch ($header.Key) {
Accept { $web_request.Accept = $header.Value }
Connection { $web_request.Connection = $header.Value }
Content-Length { $web_request.ContentLength = $header.Value }
Content-Type { $web_request.ContentType = $header.Value }
Expect { $web_request.Expect = $header.Value }
Date { $web_request.Date = $header.Value }
Host { $web_request.Host = $header.Value }
If-Modified-Since { $web_request.IfModifiedSince = $header.Value }
Range { $web_request.AddRange($header.Value) }
Referer { $web_request.Referer = $header.Value }
Transfer-Encoding {
$web_request.SendChunked = $true
$web_request.TransferEncoding = $header.Value
}
User-Agent { continue }
default { $web_request.Headers.Add($header.Key, $header.Value) }
}
}
}
# For backwards compatibility we need to support setting the User-Agent if the header was set in the task.
# We just need to make sure that if an explicit http_agent module was set then that takes priority.
if ($Headers -and $Headers.ContainsKey("User-Agent")) {
if ($HttpAgent -eq $ansible_web_request_options.http_agent.default) {
$HttpAgent = $Headers['User-Agent']
} elseif ($null -ne $Module) {
$Module.Warn("The 'User-Agent' header and the 'http_agent' was set, using the 'http_agent' for web request")
}
}
$web_request.UserAgent = $HttpAgent
switch ($FollowRedirects) {
none { $web_request.AllowAutoRedirect = $false }
safe {
if ($web_request.Method -in @("GET", "HEAD")) {
# 'safe' only follows redirects for GET/HEAD, matching the parameter docs above
$web_request.AllowAutoRedirect = $true
} else {
$web_request.AllowAutoRedirect = $false
}
}
all { $web_request.AllowAutoRedirect = $true }
}
if ($MaximumRedirection -eq 0) {
$web_request.AllowAutoRedirect = $false
} else {
$web_request.MaximumAutomaticRedirections = $MaximumRedirection
}
}
return $web_request
}
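# A quick hedged sanity check of the 'safe' policy above, assuming this module
# has been imported into a session; the URL is illustrative only. A GET is
# allowed to auto-redirect while any other method is not:
#
#     $get = Get-AnsibleWebRequest -Uri "https://example.org/file.zip" -Method GET -FollowRedirects safe
#     $get.AllowAutoRedirect   # True
#
#     $post = Get-AnsibleWebRequest -Uri "https://example.org/file.zip" -Method POST -FollowRedirects safe
#     $post.AllowAutoRedirect  # False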
Function Invoke-WithWebRequest {
<#
.SYNOPSIS
Invokes a ScriptBlock with the WebRequest.
.DESCRIPTION
Invokes the ScriptBlock and handle extra information like accessing the response stream, closing those streams
safely as well as setting common module return values.
.PARAMETER Module
The Ansible.Basic module to set the return values for. This will set the following return values;
elapsed - The total time, in seconds, that it took to send the web request and process the response
msg - The human readable description of the response status code
status_code - An int that is the response status code
.PARAMETER Request
The System.Net.WebRequest to call. This can either be manually crafted or created with Get-AnsibleWebRequest.
.PARAMETER Script
The ScriptBlock to invoke during the web request. This ScriptBlock should take in the params
Param ([System.Net.WebResponse]$Response, [System.IO.Stream]$Stream)
This scriptblock should manage the response based on what it needs to do.
.PARAMETER Body
An optional Stream to send to the target during the request.
.PARAMETER IgnoreBadResponse
By default a WebException will be raised for a non 2xx status code and the Script will not be invoked. This
parameter can be set to process all responses regardless of the status code.
.EXAMPLE Basic module that downloads a file
$spec = @{
options = @{
path = @{ type = "path"; required = $true }
}
}
$spec.options += $ansible_web_request_options
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)
$web_request = Get-AnsibleWebRequest -Module $module
Invoke-WithWebRequest -Module $module -Request $web_request -Script {
Param ([System.Net.WebResponse]$Response, [System.IO.Stream]$Stream)
$fs = [System.IO.File]::Create($module.Params.path)
try {
$Stream.CopyTo($fs)
$fs.Flush()
} finally {
$fs.Dispose()
}
}
#>
[CmdletBinding()]
param (
[Parameter(Mandatory=$true)]
[System.Object]
[ValidateScript({ $_.GetType().FullName -eq 'Ansible.Basic.AnsibleModule' })]
$Module,
[Parameter(Mandatory=$true)]
[System.Net.WebRequest]
$Request,
[Parameter(Mandatory=$true)]
[ScriptBlock]
$Script,
[AllowNull()]
[System.IO.Stream]
$Body,
[Switch]
$IgnoreBadResponse
)
$start = Get-Date
if ($null -ne $Body) {
$request_st = $Request.GetRequestStream()
try {
$Body.CopyTo($request_st)
$request_st.Flush()
} finally {
$request_st.Close()
}
}
try {
try {
$web_response = $Request.GetResponse()
} catch [System.Net.WebException] {
# A WebResponse with a status code not in the 200 range will raise a WebException. We check if the
# exception raised contains the actual response and continue on if IgnoreBadResponse is set. We also
# make sure we set the status_code return value on the Module object if possible
if ($_.Exception.PSObject.Properties.Name -match "Response") {
$web_response = $_.Exception.Response
if (-not $IgnoreBadResponse -or $null -eq $web_response) {
$Module.Result.msg = $_.Exception.StatusDescription
$Module.Result.status_code = $_.Exception.Response.StatusCode
throw $_
}
} else {
throw $_
}
}
if ($Request.RequestUri.IsFile) {
# A FileWebResponse won't have these properties set
$Module.Result.msg = "OK"
$Module.Result.status_code = 200
} else {
$Module.Result.msg = $web_response.StatusDescription
$Module.Result.status_code = $web_response.StatusCode
}
$response_stream = $web_response.GetResponseStream()
try {
# Invoke the ScriptBlock and pass in WebResponse and ResponseStream
&$Script -Response $web_response -Stream $response_stream
} finally {
$response_stream.Dispose()
}
} finally {
if ($web_response) {
$web_response.Close()
}
$Module.Result.elapsed = ((Get-Date) - $start).TotalSeconds
}
}
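# Hedged usage sketch for -IgnoreBadResponse, assuming $module was created as in
# the Get-AnsibleWebRequest example above; the URL is illustrative. Without the
# switch a 404 raises a WebException; with it the error body can still be read:
#
#     $request = Get-AnsibleWebRequest -Uri "https://example.org/missing" -Module $module
#     Invoke-WithWebRequest -Module $module -Request $request -IgnoreBadResponse -Script {
#         Param ([System.Net.WebResponse]$Response, [System.IO.Stream]$Stream)
#
#         $reader = New-Object -TypeName System.IO.StreamReader -ArgumentList $Stream
#         $module.Result.content = $reader.ReadToEnd()
#     }
#     # $module.Result.status_code is now 404 and content holds the error body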
$ansible_web_request_options = @{
url = @{ type="str"; required=$true }
method = @{ type="str" }
follow_redirects = @{ type="str"; choices=@("all","none","safe"); default="safe" }
headers = @{ type="dict" }
http_agent = @{ type="str"; default="ansible-httpget" }
maximum_redirection = @{ type="int"; default=50 }
timeout = @{ type="int"; default=30 } # Was defaulted to 10 in win_get_url but 30 in win_uri so we use 30
validate_certs = @{ type="bool"; default=$true }
# Credential options
client_cert = @{ type="str" }
client_cert_password = @{ type="str"; no_log=$true }
force_basic_auth = @{ type="bool"; default=$false }
url_username = @{ type="str"; aliases=@("user", "username") } # user was used in win_uri
url_password = @{ type="str"; aliases=@("password"); no_log=$true }
use_default_credential = @{ type="bool"; default=$false }
# Proxy options
use_proxy = @{ type="bool"; default=$true }
proxy_url = @{ type="str" }
proxy_username = @{ type="str" }
proxy_password = @{ type="str"; no_log=$true }
proxy_use_default_credential = @{ type="bool"; default=$false }
}
$export_members = @{
Function = "Get-AnsibleWebRequest", "Invoke-WithWebRequest"
Variable = "ansible_web_request_options"
}
Export-ModuleMember @export_members
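# End-to-end hedged sketch of a module consuming this util. The 'path' option,
# the requires directives, and the output handling are illustrative assumptions
# modelled on the .EXAMPLE blocks above:
#
#     #AnsibleRequires -CSharpUtil Ansible.Basic
#     #AnsibleRequires -PowerShell Ansible.ModuleUtils.WebRequest
#
#     $spec = @{ options = @{ path = @{ type = "path"; required = $true } } }
#     $spec.options += $ansible_web_request_options
#     $module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)
#
#     $web_request = Get-AnsibleWebRequest -Module $module
#     Invoke-WithWebRequest -Module $module -Request $web_request -Script {
#         Param ([System.Net.WebResponse]$Response, [System.IO.Stream]$Stream)
#
#         $fs = [System.IO.File]::Create($module.Params.path)
#         try {
#             $Stream.CopyTo($fs)
#         } finally {
#             $fs.Dispose()
#         }
#     }
#     $module.ExitJson()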
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,556 |
win_get_url doesn't follow redirects
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When using win_get_url on Ansible 2.9.x to download files, it stops at the first 301 redirect and creates a 372-byte file containing the web server's HTTP response instead of the file behind the redirect.
This happens on every Windows version I tested: 2012 R2, 2016, and 2019
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
win_get_url
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /Users/[REDACTED]/ansible.cfg
configured module search path = ['/Users/[REDACTED]/library', '/Users/[REDACTED]/ara/plugins/modules']
ansible python module location = /Users/[REDACTED]/lib/python3.7/site-packages/ansible
executable location = /Users/[REDACTED]/bin/ansible
python version = 3.7.5 (default, Nov 1 2019, 02:16:32) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(/Users/[REDACTED]/ansible.cfg) = True
ANSIBLE_PIPELINING(/Users/[REDACTED]/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/Users/[REDACTED]/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no
ANSIBLE_SSH_RETRIES(/Users/[REDACTED]/ansible.cfg) = 3
CACHE_PLUGIN(/Users/[REDACTED]/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/Users/[REDACTED]/ansible.cfg) = ~/.ansible/facts.cachedir
CACHE_PLUGIN_TIMEOUT(/Users/[REDACTED]/ansible.cfg) = 300
DEFAULT_ACTION_PLUGIN_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/ara/plugins/actions']
DEFAULT_CALLBACK_PLUGIN_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/ara/plugins/callbacks']
DEFAULT_CALLBACK_WHITELIST(/Users/[REDACTED]/ansible.cfg) = ['profile_roles', 'profile_tasks', 'timer', 'junit']
DEFAULT_FORKS(/Users/[REDACTED]/ansible.cfg) = 100
DEFAULT_GATHERING(/Users/[REDACTED]/ansible.cfg) = smart
DEFAULT_HOST_LIST(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/inventory.sh']
DEFAULT_LOG_PATH(/Users/[REDACTED]/ansible.cfg) = /Users/res/.ansible/SLAnsible.log
DEFAULT_LOOKUP_PLUGIN_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/plugins/lookup']
DEFAULT_MODULE_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/library', '/Users/[REDACTED]/ara/plugins/modules']
DEFAULT_REMOTE_USER(/Users/[REDACTED]/ansible.cfg) = stylelabs
DEFAULT_ROLES_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/roles_galaxy', '/Users/[REDACTED]/roles_mansible']
DEFAULT_STDOUT_CALLBACK(/Users/[REDACTED]/ansible.cfg) = yaml
DEFAULT_TIMEOUT(/Users/[REDACTED]/ansible.cfg) = 20
HOST_KEY_CHECKING(/Users/[REDACTED]/ansible.cfg) = False
RETRY_FILES_ENABLED(/Users/[REDACTED]/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/Users/[REDACTED]/ansible.cfg) = ignore
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS: Windows 2012 R2, Windows 2016, Windows 2019
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: all
vars:
zabbix_win_download_link: "https://www.zabbix.com/downloads/4.0.0/zabbix_agents-4.0.0-win-amd64-openssl.zip"
zabbix_win_install_dir: "c:\\windows\\temp"
zabbix_win_package: "zabbix.zip"
tasks:
- name: "Windows | Download Zabbix Agent Zip file"
win_get_url:
url: "{{ zabbix_win_download_link }}"
dest: '{{ zabbix_win_install_dir }}\{{ zabbix_win_package }}'
force: False
follow_redirects: safe
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I'm expecting the file behind the redirect to be downloaded locally, not the Redirect HTTP response.
This was working in Ansible 2.8 and 2.7.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Windows | Download Zabbix Agent Zip file] *******************************************************************************************************************************************************************************************************
task path: /Users/[REDACTED]/win-get-url.yml:9
Thursday 05 December 2019 13:23:19 +0100 (0:00:04.039) 0:00:04.201 *****
Thursday 05 December 2019 13:23:19 +0100 (0:00:04.039) 0:00:04.199 *****
Using module file /Users/[REDACTED]/lib/python3.7/site-packages/ansible/modules/windows/win_get_url.ps1
Pipelining is enabled.
<[REDACTED]> ESTABLISH WINRM CONNECTION FOR USER: stylelabs on PORT 5986 TO [REDACTED]
EXEC (via pipeline wrapper)
changed: [[REDACTED]] => changed=true
checksum_dest: 5ab849c3b74d71be4d7d520de6c374e64fa6553c
checksum_src: 5ab849c3b74d71be4d7d520de6c374e64fa6553c
dest: c:\windows\temp\zabbix.zip
elapsed: 0.325719
invocation:
module_args:
checksum: null
checksum_algorithm: sha1
checksum_url: null
client_cert: null
client_cert_password: null
dest: c:\windows\temp\zabbix.zip
follow_redirects: safe
force: false
force_basic_auth: false
headers: null
http_agent: ansible-httpget
maximum_redirection: 50
method: null
proxy_password: null
proxy_url: null
proxy_use_default_credential: false
proxy_username: null
timeout: 30
url: https://www.zabbix.com/downloads/4.0.0/zabbix_agents-4.0.0-win-amd64-openssl.zip
url_password: null
url_username: null
use_default_credential: false
use_proxy: true
validate_certs: true
msg: Moved Permanently
size: 372
status_code: 301
url: https://www.zabbix.com/downloads/4.0.0/zabbix_agents-4.0.0-win-amd64-openssl.zip
META: ran handlers
META: ran handlers
```
File content
```
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved <a href="https://assets.zabbix.com/downloads/4.0.0/zabbix_agents-4.0.0-win-amd64-openssl.zip">here</a>.</p>
<hr>
<address>Apache/2.4.10 (Debian) Server at www.zabbix.com Port 443</address>
</body></html>
```
The following should be enough (it's even the default per the documentation), but it's not:
```yaml
follow_redirects: safe
```
|
https://github.com/ansible/ansible/issues/65556
|
https://github.com/ansible/ansible/pull/65584
|
eaba5572cd1f206ae850c6730d50f32c58cc3131
|
9a81f5c3b7a723cc878a404dcf20037ea11bfeb7
| 2019-12-05T12:26:12Z |
python
| 2019-12-06T01:47:35Z |
test/integration/targets/win_get_url/tasks/tests_url.yml
|
- name: download single file (check)
win_get_url:
url: https://{{ httpbin_host }}/base64/SG93IG5vdyBicm93biBjb3c=
dest: '{{ testing_dir }}\output.txt'
check_mode: yes
register: http_download_check
- name: get result of download single file (check)
win_stat:
path: '{{ testing_dir }}\output.txt'
register: http_download_result_check
- name: assert download single file (check)
assert:
that:
- http_download_check is not failed
- http_download_check is changed
- http_download_check.url
- http_download_check.dest
- not http_download_result_check.stat.exists
- name: download single file
win_get_url:
url: https://{{ httpbin_host }}/base64/SG93IG5vdyBicm93biBjb3c=
dest: '{{ testing_dir }}\output.txt'
register: http_download
- name: get result of download single file
win_stat:
path: '{{ testing_dir }}\output.txt'
register: http_download_result
- name: assert download single file
assert:
that:
- http_download is not failed
- http_download is changed
- http_download.url
- http_download.dest
- http_download_result.stat.exists
- name: download single file (idempotent)
win_get_url:
url: https://{{ httpbin_host }}/base64/SG93IG5vdyBicm93biBjb3c=
dest: '{{ testing_dir }}\output.txt'
register: http_download_again
- name: assert download single file (idempotent)
assert:
that:
- not http_download_again is changed
# Cannot use httpbin as the Last-Modified date is generated dynamically
- name: download file for force=no tests
win_get_url:
url: https://ansible-ci-files.s3.amazonaws.com/test/integration/roles/test_win_get_url/SlimFTPd.exe
dest: '{{ testing_dir }}\output'
- name: download single file with force no
win_get_url:
url: https://ansible-ci-files.s3.amazonaws.com/test/integration/roles/test_win_get_url/SlimFTPd.exe
dest: '{{ testing_dir }}\output'
force: no
register: http_download_no_force
- name: assert download single file with force no
assert:
that:
- http_download_no_force is not changed
- name: manually change the content and last modified time of the downloaded file to an older datetime
win_shell: |
$path = '{{ testing_dir }}\output'
Set-Content -LiteralPath $path -Value 'abc'
(Get-Item -LiteralPath $path).LastWriteTime = (Get-Date -Date "01/01/1970")
- name: download newer file with force no
win_get_url:
url: https://ansible-ci-files.s3.amazonaws.com/test/integration/roles/test_win_get_url/SlimFTPd.exe
dest: '{{ testing_dir }}\output'
force: no
register: http_download_newer_no_force
- name: assert download newer file with force no
assert:
that:
- http_download_newer_no_force is changed
- name: download file to directory
win_get_url:
url: https://{{ httpbin_host }}/image/png
dest: '{{ testing_dir }}'
register: http_download_to_directory
- name: get result of download to directory
win_stat:
path: '{{ testing_dir }}\png'
register: http_download_to_directory_result
- name: assert download file to directory
assert:
that:
- http_download_to_directory is changed
- http_download_to_directory_result.stat.exists
- name: download to path with env var
win_get_url:
url: https://{{ httpbin_host }}/image/jpeg
dest: '%TEST_WIN_GET_URL%\jpeg.jpg'
register: http_download_with_env
environment:
TEST_WIN_GET_URL: '{{ testing_dir }}'
- name: get result of download to path with env var
win_stat:
path: '{{ testing_dir }}\jpeg.jpg'
register: http_download_with_env_result
- name: assert download to path with env var
assert:
that:
- http_download_with_env is changed
- http_download_with_env_result.stat.exists
- name: fail when link returns 404
win_get_url:
url: https://{{ httpbin_host }}/status/404
dest: '{{ testing_dir }}\skynet_module.html'
ignore_errors: yes
register: fail_download_404
- name: assert fail when link returns 404
assert:
that:
- fail_download_404 is not changed
- fail_download_404 is failed
- fail_download_404.status_code == 404
- name: fail when dest is an invalid path
win_get_url:
url: https://{{ httpbin_host }}/base64/YQ==
dest: Q:\Filez\Cyberdyne.html
register: fail_invalid_path
failed_when: '"The path ''Q:\Filez'' does not exist for destination ''Q:\Filez\Cyberdyne.html''" not in fail_invalid_path.msg'
- name: test basic authentication
win_get_url:
url: http://{{ httpbin_host }}/basic-auth/username/password
dest: '{{ testing_dir }}\basic.txt'
url_username: username
url_password: password
register: basic_auth
- name: assert test basic authentication
assert:
that:
- basic_auth is changed
- basic_auth.status_code == 200
# httpbin hidden-basic-auth returns 404 not found on auth failure which stops the automatic auth handler from working.
# Setting force_basic_auth=yes means the Basic auth header is sent in the original request not after a 401 response
- name: test force basic authentication
win_get_url:
url: http://{{ httpbin_host }}/hidden-basic-auth/username/password
dest: '{{ testing_dir }}\force-basic.txt'
url_username: username
url_password: password
force_basic_auth: yes
register: force_basic_auth
- name: assert test force basic auth
assert:
that:
- force_basic_auth is changed
- force_basic_auth.status_code == 200
- name: timeout request
win_get_url:
url: https://{{ httpbin_host }}/delay/7
dest: '{{ testing_dir }}\timeout.txt'
timeout: 3
register: timeout_req
failed_when: 'timeout_req.msg != "Error downloading ''https://" + httpbin_host + "/delay/7'' to ''" + testing_dir + "\\timeout.txt'': The operation has timed out"'
- name: send request with headers
win_get_url:
url: https://{{ httpbin_host }}/headers
dest: '{{ testing_dir }}\headers.txt'
headers:
testing: 123
User-Agent: 'badAgent'
accept: 'text/html'
register: headers
- name: get result of send request with headers
slurp:
path: '{{ testing_dir }}\headers.txt'
register: headers_actual
- name: assert send request with headers
assert:
that:
- headers is changed
- headers.status_code == 200
- (headers_actual.content | b64decode | from_json).headers.Testing == '123'
- (headers_actual.content | b64decode | from_json).headers["User-Agent"] == 'badAgent'
- (headers_actual.content | b64decode | from_json).headers.Accept == 'text/html'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,556 |
win_get_url doesn't follow redirects
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When using win_get_url on Ansible 2.9.x to download files, it stops at the first 301 redirect and creates a 372-byte file containing the web server's HTTP response instead of the file behind the redirect.
This happens on every Windows version I tested: 2012 R2, 2016, and 2019
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
win_get_url
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /Users/[REDACTED]/ansible.cfg
configured module search path = ['/Users/[REDACTED]/library', '/Users/[REDACTED]/ara/plugins/modules']
ansible python module location = /Users/[REDACTED]/lib/python3.7/site-packages/ansible
executable location = /Users/[REDACTED]/bin/ansible
python version = 3.7.5 (default, Nov 1 2019, 02:16:32) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(/Users/[REDACTED]/ansible.cfg) = True
ANSIBLE_PIPELINING(/Users/[REDACTED]/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/Users/[REDACTED]/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no
ANSIBLE_SSH_RETRIES(/Users/[REDACTED]/ansible.cfg) = 3
CACHE_PLUGIN(/Users/[REDACTED]/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/Users/[REDACTED]/ansible.cfg) = ~/.ansible/facts.cachedir
CACHE_PLUGIN_TIMEOUT(/Users/[REDACTED]/ansible.cfg) = 300
DEFAULT_ACTION_PLUGIN_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/ara/plugins/actions']
DEFAULT_CALLBACK_PLUGIN_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/ara/plugins/callbacks']
DEFAULT_CALLBACK_WHITELIST(/Users/[REDACTED]/ansible.cfg) = ['profile_roles', 'profile_tasks', 'timer', 'junit']
DEFAULT_FORKS(/Users/[REDACTED]/ansible.cfg) = 100
DEFAULT_GATHERING(/Users/[REDACTED]/ansible.cfg) = smart
DEFAULT_HOST_LIST(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/inventory.sh']
DEFAULT_LOG_PATH(/Users/[REDACTED]/ansible.cfg) = /Users/res/.ansible/SLAnsible.log
DEFAULT_LOOKUP_PLUGIN_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/plugins/lookup']
DEFAULT_MODULE_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/library', '/Users/[REDACTED]/ara/plugins/modules']
DEFAULT_REMOTE_USER(/Users/[REDACTED]/ansible.cfg) = stylelabs
DEFAULT_ROLES_PATH(/Users/[REDACTED]/ansible.cfg) = ['/Users/[REDACTED]/roles_galaxy', '/Users/[REDACTED]/roles_mansible']
DEFAULT_STDOUT_CALLBACK(/Users/[REDACTED]/ansible.cfg) = yaml
DEFAULT_TIMEOUT(/Users/[REDACTED]/ansible.cfg) = 20
HOST_KEY_CHECKING(/Users/[REDACTED]/ansible.cfg) = False
RETRY_FILES_ENABLED(/Users/[REDACTED]/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/Users/[REDACTED]/ansible.cfg) = ignore
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Target OS: Windows 2012 R2, Windows 2016, Windows 2019
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: all
vars:
zabbix_win_download_link: "https://www.zabbix.com/downloads/4.0.0/zabbix_agents-4.0.0-win-amd64-openssl.zip"
zabbix_win_install_dir: "c:\\windows\\temp"
zabbix_win_package: "zabbix.zip"
tasks:
- name: "Windows | Download Zabbix Agent Zip file"
win_get_url:
url: "{{ zabbix_win_download_link }}"
dest: '{{ zabbix_win_install_dir }}\{{ zabbix_win_package }}'
force: False
follow_redirects: safe
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I'm expecting the file behind the redirect to be downloaded locally, not the Redirect HTTP response.
This was working in Ansible 2.8 and 2.7.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [Windows | Download Zabbix Agent Zip file] *******************************************************************************************************************************************************************************************************
task path: /Users/[REDACTED]/win-get-url.yml:9
Thursday 05 December 2019 13:23:19 +0100 (0:00:04.039) 0:00:04.201 *****
Thursday 05 December 2019 13:23:19 +0100 (0:00:04.039) 0:00:04.199 *****
Using module file /Users/[REDACTED]/lib/python3.7/site-packages/ansible/modules/windows/win_get_url.ps1
Pipelining is enabled.
<[REDACTED]> ESTABLISH WINRM CONNECTION FOR USER: stylelabs on PORT 5986 TO [REDACTED]
EXEC (via pipeline wrapper)
changed: [[REDACTED]] => changed=true
checksum_dest: 5ab849c3b74d71be4d7d520de6c374e64fa6553c
checksum_src: 5ab849c3b74d71be4d7d520de6c374e64fa6553c
dest: c:\windows\temp\zabbix.zip
elapsed: 0.325719
invocation:
module_args:
checksum: null
checksum_algorithm: sha1
checksum_url: null
client_cert: null
client_cert_password: null
dest: c:\windows\temp\zabbix.zip
follow_redirects: safe
force: false
force_basic_auth: false
headers: null
http_agent: ansible-httpget
maximum_redirection: 50
method: null
proxy_password: null
proxy_url: null
proxy_use_default_credential: false
proxy_username: null
timeout: 30
url: https://www.zabbix.com/downloads/4.0.0/zabbix_agents-4.0.0-win-amd64-openssl.zip
url_password: null
url_username: null
use_default_credential: false
use_proxy: true
validate_certs: true
msg: Moved Permanently
size: 372
status_code: 301
url: https://www.zabbix.com/downloads/4.0.0/zabbix_agents-4.0.0-win-amd64-openssl.zip
META: ran handlers
META: ran handlers
```
File content
```
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved <a href="https://assets.zabbix.com/downloads/4.0.0/zabbix_agents-4.0.0-win-amd64-openssl.zip">here</a>.</p>
<hr>
<address>Apache/2.4.10 (Debian) Server at www.zabbix.com Port 443</address>
</body></html>
```
The following should be enough (it's even the default per the documentation), but it's not:
```yaml
follow_redirects: safe
```
|
https://github.com/ansible/ansible/issues/65556
|
https://github.com/ansible/ansible/pull/65584
|
eaba5572cd1f206ae850c6730d50f32c58cc3131
|
9a81f5c3b7a723cc878a404dcf20037ea11bfeb7
| 2019-12-05T12:26:12Z |
python
| 2019-12-06T01:47:35Z |
test/integration/targets/win_uri/tasks/main.yml
|
---
# get with mismatch https
# get with mismatch https and ignore validation
- name: get request without return_content
win_uri:
url: https://{{httpbin_host}}/get
return_content: no
register: get_request_without_content
- name: assert get request without return_content
assert:
that:
- not get_request_without_content.changed
- get_request_without_content.content is not defined
- get_request_without_content.json is not defined
- get_request_without_content.status_code == 200
- name: get request with xml content
win_uri:
url: https://{{httpbin_host}}/xml
return_content: yes
register: get_request_with_xml_content
- name: assert get request with xml content
assert:
that:
- not get_request_with_xml_content.changed
- get_request_with_xml_content.content is defined
- get_request_with_xml_content.json is not defined
- get_request_with_xml_content.status_code == 200
- name: get request with binary content
win_uri:
url: https://{{httpbin_host}}/image/png
return_content: yes
register: get_request_with_binary_content
- name: assert get request with binary content
assert:
that:
- not get_request_with_binary_content.changed
- get_request_with_binary_content.content is defined
- get_request_with_binary_content.json is not defined
- get_request_with_binary_content.status_code == 200
- name: get request with return_content and dest (check mode)
win_uri:
url: https://{{httpbin_host}}/get
return_content: yes
dest: '{{ remote_tmp_dir }}\get.json'
register: get_request_with_dest_check
check_mode: yes
- name: get stat of downloaded file (check mode)
win_stat:
path: '{{ remote_tmp_dir }}\get.json'
register: get_request_with_dest_actual_check
- name: assert get request with return_content and dest (check mode)
assert:
that:
- get_request_with_dest_check.changed
- get_request_with_dest_check.content is defined
- get_request_with_dest_check.json is defined
- get_request_with_dest_actual_check.stat.exists == False
- name: get request with return_content and dest
win_uri:
url: https://{{httpbin_host}}/get
return_content: yes
dest: '{{ remote_tmp_dir }}\get.json'
register: get_request_with_dest
- name: get stat of downloaded file
win_stat:
path: '{{ remote_tmp_dir }}\get.json'
checksum_algorithm: sha1
get_checksum: yes
register: get_request_with_dest_actual
- name: assert get request with return_content and dest
assert:
that:
- get_request_with_dest.changed
- get_request_with_dest.content is defined
- get_request_with_dest.json is defined
- get_request_with_dest_actual.stat.exists == True
- get_request_with_dest_actual.stat.checksum == get_request_with_dest.content|hash('sha1')
- name: get request with return_content and dest (idempotent)
win_uri:
url: https://{{httpbin_host}}/get
return_content: yes
dest: '{{ remote_tmp_dir }}\get.json'
register: get_request_with_dest_again
- name: assert get request with return_content and dest (idempotent)
assert:
that:
- not get_request_with_dest_again.changed
- name: test request with creates option should skip
win_uri:
url: https://{{httpbin_host}}/get
creates: '{{ remote_tmp_dir }}\get.json'
register: request_with_creates_skipped
- name: assert test request with creates option should skip
assert:
that:
- not request_with_creates_skipped.changed
- request_with_creates_skipped.skipped
- name: test request with creates option should not skip
win_uri:
url: https://{{httpbin_host}}/get
creates: '{{ remote_tmp_dir }}\fake.json'
register: request_with_creates_not_skipped
- name: assert test request with creates option should not skip
assert:
that:
- not request_with_creates_not_skipped.changed
- request_with_creates_not_skipped.skipped is not defined
- name: post request with return_content, dest and different content
win_uri:
url: https://{{httpbin_host}}/post
method: POST
content_type: application/json
body: '{"foo": "bar"}'
return_content: yes
dest: '{{ remote_tmp_dir }}\get.json'
register: post_request_with_different_content
- name: get stat of downloaded file
win_stat:
path: '{{ remote_tmp_dir }}\get.json'
checksum_algorithm: sha1
get_checksum: yes
register: post_request_with_different_content_actual
- name: assert post request with return_content, dest and different content
assert:
that:
- post_request_with_different_content.changed
- post_request_with_different_content_actual.stat.exists == True
- post_request_with_different_content_actual.stat.checksum == post_request_with_different_content.content|hash('sha1')
- name: test redirect without follow_redirects
win_uri:
url: https://{{httpbin_host}}/redirect/2
follow_redirects: none
status_code: 302
register: redirect_without_follow
- name: assert redirect without follow_redirects
assert:
that:
- not redirect_without_follow.changed
- redirect_without_follow.location|default("") == '/relative-redirect/1'
- redirect_without_follow.status_code == 302
- name: test redirect with follow_redirects
win_uri:
url: https://{{httpbin_host}}/redirect/2
follow_redirects: all
register: redirect_with_follow
- name: assert redirect with follow_redirects
assert:
that:
- not redirect_with_follow.changed
- redirect_with_follow.location is not defined
- redirect_with_follow.status_code == 200
- redirect_with_follow.response_uri == 'https://{{httpbin_host}}/get'
- name: get request with redirect of TLS
win_uri:
url: https://{{httpbin_host}}/redirect/2
follow_redirects: all
register: redirect_with_follow_tls
- name: assert redirect with redirect of TLS
assert:
that:
- not redirect_with_follow_tls.changed
- redirect_with_follow_tls.location is not defined
- redirect_with_follow_tls.status_code == 200
- redirect_with_follow_tls.response_uri == 'https://{{httpbin_host}}/get'
- name: test basic auth
win_uri:
url: https://{{httpbin_host}}/basic-auth/user/passwd
user: user
password: passwd
register: basic_auth
- name: assert test basic auth
assert:
that:
- not basic_auth.changed
- basic_auth.status_code == 200
- name: test basic auth with force auth
win_uri:
url: https://{{httpbin_host}}/hidden-basic-auth/user/passwd
user: user
password: passwd
force_basic_auth: yes
register: basic_auth_forced
- name: assert test basic auth with forced auth
assert:
that:
- not basic_auth_forced.changed
- basic_auth_forced.status_code == 200
- name: test PUT
win_uri:
url: https://{{httpbin_host}}/put
method: PUT
body: foo=bar
return_content: yes
register: put_request
- name: assert test PUT
assert:
that:
- not put_request.changed
- put_request.status_code == 200
- put_request.json.data == 'foo=bar'
- name: test OPTIONS
win_uri:
url: https://{{httpbin_host}}/
method: OPTIONS
register: option_request
- name: assert test OPTIONS
assert:
that:
- not option_request.changed
- option_request.status_code == 200
- 'option_request.allow.split(", ")|sort == ["GET", "HEAD", "OPTIONS"]'
# SNI Tests
- name: validate status_codes are correct
win_uri:
url: https://{{httpbin_host}}/status/202
status_code:
- 202
- 418
method: POST
body: foo
register: status_code_check
- name: assert validate status_codes are correct
assert:
that:
- not status_code_check.changed
- status_code_check.status_code == 202
- name: send JSON body with dict type
win_uri:
url: https://{{httpbin_host}}/post
method: POST
body:
foo: bar
list:
- 1
- 2
dict:
foo: bar
headers:
'Content-Type': 'text/json'
return_content: yes
register: json_as_dict
- name: set fact of expected json dict
set_fact:
json_as_dict_value:
foo: bar
list:
- 1
- 2
dict:
foo: bar
- name: assert send JSON body with dict type
assert:
that:
- not json_as_dict.changed
- json_as_dict.json.json == json_as_dict_value
- json_as_dict.status_code == 200
- name: send JSON body with 1 item in list
win_uri:
url: https://{{httpbin_host}}/post
method: POST
body:
- foo: bar
headers:
'Content-Type': 'text/json'
return_content: yes
register: json_as_oneitemlist
- name: set fact of expected json 1 item list
set_fact:
json_as_oneitemlist_value:
- foo: bar
- name: assert send JSON body with 1 item in list
assert:
that:
- not json_as_oneitemlist.changed
- json_as_oneitemlist.json.json == json_as_oneitemlist_value
- json_as_oneitemlist.status_code == 200
- name: get request with custom headers
win_uri:
url: https://{{httpbin_host}}/get
headers:
Test-Header: hello
Another-Header: world
return_content: yes
register: get_custom_header
- name: assert request with custom headers
assert:
that:
- not get_custom_header.changed
- get_custom_header.status_code == 200
- get_custom_header.json.headers['Test-Header'] == 'hello'
- get_custom_header.json.headers['Another-Header'] == 'world'
- name: Validate invalid method
win_uri:
url: https://{{ httpbin_host }}/anything
method: UNKNOWN
register: invalid_method
ignore_errors: yes
- name: Assert invalid method fails
assert:
that:
- invalid_method is failure
- invalid_method.status_code == 405
- invalid_method.status_description == 'METHOD NOT ALLOWED'
# client cert auth tests
- name: get request with timeout
win_uri:
url: https://{{httpbin_host}}/delay/10
timeout: 5
register: get_with_timeout_fail
failed_when: '"The operation has timed out" not in get_with_timeout_fail.msg'
- name: connect to fakepath that does not exist
win_uri:
url: https://{{httpbin_host}}/fakepath
status_code: 404
return_content: yes
register: invalid_path
# verifies the return values are still set on a non 200 response
- name: assert connect to fakepath that does not exist
assert:
that:
- not invalid_path.changed
- invalid_path.status_code == 404
- invalid_path.status_description == 'NOT FOUND'
- invalid_path.content is defined
- invalid_path.method == 'GET'
- invalid_path.connection is defined
- name: post request with custom headers
win_uri:
url: https://{{httpbin_host}}/post
method: POST
headers:
Test-Header: hello
Another-Header: world
content_type: application/json
body: '{"foo": "bar"}'
return_content: yes
register: post_request_with_custom_headers
- name: assert post with custom headers
assert:
that:
- not post_request_with_custom_headers.changed
- post_request_with_custom_headers.status_code == 200
- post_request_with_custom_headers.json.headers['Content-Type'] == "application/json"
- post_request_with_custom_headers.json.headers['Test-Header'] == 'hello'
- post_request_with_custom_headers.json.headers['Another-Header'] == 'world'
- name: validate status codes as list of strings
win_uri:
url: https://{{httpbin_host}}/status/202
status_code:
- '202'
- '418'
method: POST
body: foo
return_content: yes
register: request_status_code_string
- name: assert status codes as list of strings
assert:
that:
- not request_status_code_string.changed
- request_status_code_string.status_code == 202
- name: validate status codes as comma separated list
win_uri:
url: https://{{httpbin_host}}/status/202
status_code: 202, 418
method: POST
body: foo
return_content: yes
register: request_status_code_comma
- name: assert status codes as comma separated list
assert:
that:
- not request_status_code_comma.changed
- request_status_code_comma.status_code == 202
# https://github.com/ansible/ansible/issues/55294
- name: get json content that is an array
win_uri:
url: https://{{httpbin_host}}/base64/{{ '[{"abc":"def"}]' | b64encode }}
return_content: yes
register: content_array
- name: assert content of json array
assert:
that:
- not content_array is changed
- content_array.content == '[{"abc":"def"}]'
- content_array.json == [{"abc":"def"}]
- name: send request with explicit http_agent
win_uri:
url: https://{{httpbin_host}}/get
http_agent: test-agent
return_content: yes
register: http_agent_option
- name: assert send request with explicit http_agent
assert:
that:
- http_agent_option.json.headers['User-Agent'] == 'test-agent'
- name: send request with explicit User-Agent header
win_uri:
url: https://{{httpbin_host}}/get
headers:
User-Agent: test-agent
return_content: yes
register: http_agent_header
- name: assert send request with explicit User-Agent header
assert:
that:
- http_agent_header.json.headers['User-Agent'] == 'test-agent'
- name: send request with explicit http_agent and header (http_agent wins)
win_uri:
url: https://{{httpbin_host}}/get
http_agent: test-agent-option
headers:
User-Agent: test-agent-header
return_content: yes
register: http_agent_combo
- name: assert send request with explicit http_agent and header (http_agent wins)
assert:
that:
- http_agent_combo.json.headers['User-Agent'] == 'test-agent-option'
- http_agent_combo.warnings[0] == "The 'User-Agent' header and the 'http_agent' was set, using the 'http_agent' for web request"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,560 |
mysql_user module check_implicit_admin doesn't fallback to login credentials
|
##### SUMMARY
The mysql_user module with check_implicit_admin=yes tries to connect to the database without a password, but when that login fails it doesn't fall back to the login_user and login_password credentials.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
mysql_user
##### ANSIBLE VERSION
```
2.9.0
```
##### OS / ENVIRONMENT
Ubuntu 18.04
##### STEPS TO REPRODUCE
```yaml
- name: Update root password for all root accounts
mysql_user: name=root password={{ mysql_root_password }} login_user=root login_password={{ mysql_root_password }} host={{ item }} check_implicit_admin=yes
with_items:
- "{{ current_hostname.stdout | lower }}"
- 127.0.0.1
- ::1
- localhost
```
The first run is OK, but on the second run Ansible fails when trying to connect as root without a password, with no fallback to the login credentials.
##### EXPECTED RESULTS
First run -> root / no password
Other runs -> root / mysql_root_password
##### ACTUAL RESULTS
Provisioning fails on every run after the first.
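A minimal sketch of the expected fallback behavior (the helper name is hypothetical, and it assumes the MySQL driver raises an exception on a failed login rather than exiting the module):
```python
import pymysql as mysql_driver  # MySQLdb could be substituted

def connect_with_implicit_admin(login_user, login_password, **kwargs):
    # check_implicit_admin: try a passwordless root login first
    try:
        return mysql_driver.connect(user='root', passwd='', **kwargs)
    except Exception:
        # implicit admin login failed; fall back to the supplied credentials
        return mysql_driver.connect(user=login_user, passwd=login_password, **kwargs)
```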
|
https://github.com/ansible/ansible/issues/64560
|
https://github.com/ansible/ansible/pull/64585
|
f21e72d55ac2e4408036dcf18a5b5da61883b3e5
|
47aea84924e8149e153107564c8b029dc4f52c27
| 2019-11-07T13:09:16Z |
python
| 2019-12-06T05:52:34Z |
lib/ansible/module_utils/mysql.py
|
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Jonathan Mainguy <[email protected]>, 2015
# Most of this was originally added by Sven Schliesing @muffl0n in the mysql_user.py module
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
try:
import pymysql as mysql_driver
_mysql_cursor_param = 'cursor'
except ImportError:
try:
import MySQLdb as mysql_driver
import MySQLdb.cursors
_mysql_cursor_param = 'cursorclass'
except ImportError:
mysql_driver = None
from ansible.module_utils._text import to_native
mysql_driver_fail_msg = 'The PyMySQL (Python 2.7 and Python 3.X) or MySQL-python (Python 2.X) module is required.'
def mysql_connect(module, login_user=None, login_password=None, config_file='', ssl_cert=None, ssl_key=None, ssl_ca=None, db=None, cursor_class=None,
connect_timeout=30):
config = {}
if ssl_ca is not None or ssl_key is not None or ssl_cert is not None:
config['ssl'] = {}
if module.params['login_unix_socket']:
config['unix_socket'] = module.params['login_unix_socket']
else:
config['host'] = module.params['login_host']
config['port'] = module.params['login_port']
if os.path.exists(config_file):
config['read_default_file'] = config_file
# If login_user or login_password are given, they should override the
# config file
if login_user is not None:
config['user'] = login_user
if login_password is not None:
config['passwd'] = login_password
if ssl_cert is not None:
config['ssl']['cert'] = ssl_cert
if ssl_key is not None:
config['ssl']['key'] = ssl_key
if ssl_ca is not None:
config['ssl']['ca'] = ssl_ca
if db is not None:
config['db'] = db
if connect_timeout is not None:
config['connect_timeout'] = connect_timeout
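    # Note: this helper makes a single connection attempt and exits via
    # module.fail_json() on failure, so a caller implementing
    # check_implicit_admin cannot simply catch an exception here and
    # retry with the explicit login credentials.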
try:
db_connection = mysql_driver.connect(**config)
except Exception as e:
module.fail_json(msg="unable to connect to database: %s" % to_native(e))
if cursor_class == 'DictCursor':
return db_connection.cursor(**{_mysql_cursor_param: mysql_driver.cursors.DictCursor})
else:
return db_connection.cursor()
def mysql_common_argument_spec():
return dict(
login_user=dict(type='str', default=None),
login_password=dict(type='str', no_log=True),
login_host=dict(type='str', default='localhost'),
login_port=dict(type='int', default=3306),
login_unix_socket=dict(type='str'),
config_file=dict(type='path', default='~/.my.cnf'),
connect_timeout=dict(type='int', default=30),
client_cert=dict(type='path', aliases=['ssl_cert']),
client_key=dict(type='path', aliases=['ssl_key']),
ca_cert=dict(type='path', aliases=['ssl_ca']),
)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,560 |
mysql_user module check_implicit_admin doesn't fallback to login credentials
|
##### SUMMARY
The mysql_user module with check_implicit_admin=yes tries to connect to the database without a password, but when that login fails it doesn't fall back to the login_user and login_password credentials.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
mysql_user
##### ANSIBLE VERSION
```
2.9.0
```
##### OS / ENVIRONMENT
Ubuntu 18.04
##### STEPS TO REPRODUCE
```yaml
- name: Update root password for all root accounts
mysql_user: name=root password={{ mysql_root_password }} login_user=root login_password={{ mysql_root_password }} host={{ item }} check_implicit_admin=yes
with_items:
- "{{ current_hostname.stdout | lower }}"
- 127.0.0.1
- ::1
- localhost
```
The first run is OK, but on the second run Ansible fails when trying to connect as root without a password, with no fallback to the login credentials.
##### EXPECTED RESULTS
First run -> root / no password
Other runs -> root / mysql_root_password
##### ACTUAL RESULTS
Provisioning fails on every run after the first.
|
https://github.com/ansible/ansible/issues/64560
|
https://github.com/ansible/ansible/pull/64585
|
f21e72d55ac2e4408036dcf18a5b5da61883b3e5
|
47aea84924e8149e153107564c8b029dc4f52c27
| 2019-11-07T13:09:16Z |
python
| 2019-12-06T05:52:34Z |
lib/ansible/modules/database/mysql/mysql_info.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Andrew Klychkov (@Andersson007) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: mysql_info
short_description: Gather information about MySQL servers
description:
- Gathers information about MySQL servers.
version_added: '2.9'
options:
filter:
description:
- Limit the collected information by comma separated string or YAML list.
- Allowable values are C(version), C(databases), C(settings), C(global_status),
C(users), C(engines), C(master_status), C(slave_status), C(slave_hosts).
- By default, collects all subsets.
- You can use '!' before value (for example, C(!settings)) to exclude it from the information.
- If you pass including and excluding values to the filter, for example, I(filter=!settings,version),
the excluding values, C(!settings) in this case, will be ignored.
type: list
elements: str
login_db:
description:
- Database name to connect to.
- It makes sense if I(login_user) is allowed to connect to a specific database only.
type: str
exclude_fields:
description:
- List of fields which are not needed to collect.
- "Supports elements: C(db_size). Unsupported elements will be ignored"
type: list
elements: str
version_added: '2.10'
author:
- Andrew Klychkov (@Andersson007)
extends_documentation_fragment: mysql
'''
EXAMPLES = r'''
# Display info from mysql-hosts group (using creds from ~/.my.cnf to connect):
# ansible mysql-hosts -m mysql_info
# Display only databases and users info:
# ansible mysql-hosts -m mysql_info -a 'filter=databases,users'
# Display only slave status:
# ansible standby -m mysql_info -a 'filter=slave_status'
# Display all info from databases group except settings:
# ansible databases -m mysql_info -a 'filter=!settings'
- name: Collect all possible information using passwordless root access
mysql_info:
login_user: root
- name: Get MySQL version with non-default credentials
mysql_info:
login_user: mysuperuser
login_password: mysuperpass
filter: version
- name: Collect all info except settings and users by root
mysql_info:
login_user: root
login_password: rootpass
filter: "!settings,!users"
- name: Collect info about databases and version using ~/.my.cnf as a credential file
become: yes
mysql_info:
filter:
- databases
- version
- name: Collect info about databases and version using ~alice/.my.cnf as a credential file
become: yes
mysql_info:
config_file: /home/alice/.my.cnf
filter:
- databases
- version
- name: Collect info about databases excluding their sizes
become: yes
mysql_info:
config_file: /home/alice/.my.cnf
filter:
- databases
exclude_fields: db_size
'''
RETURN = r'''
version:
description: Database server version.
returned: if not excluded by filter
type: dict
sample: { "version": { "major": 5, "minor": 5, "release": 60 } }
contains:
major:
description: Major server version.
returned: if not excluded by filter
type: int
sample: 5
minor:
description: Minor server version.
returned: if not excluded by filter
type: int
sample: 5
release:
description: Release server version.
returned: if not excluded by filter
type: int
sample: 60
databases:
description: Information about databases.
returned: if not excluded by filter
type: dict
sample:
- { "mysql": { "size": 656594 }, "information_schema": { "size": 73728 } }
contains:
size:
description: Database size in bytes.
returned: if not excluded by filter
type: dict
sample: { 'size': 656594 }
settings:
description: Global settings (variables) information.
returned: if not excluded by filter
type: dict
sample:
- { "innodb_open_files": 300, innodb_page_size": 16384 }
global_status:
description: Global status information.
returned: if not excluded by filter
type: dict
sample:
- { "Innodb_buffer_pool_read_requests": 123, "Innodb_buffer_pool_reads": 32 }
version_added: "2.10"
users:
description: Users information.
returned: if not excluded by filter
type: dict
sample:
- { "localhost": { "root": { "Alter_priv": "Y", "Alter_routine_priv": "Y" } } }
engines:
description: Information about the server's storage engines.
returned: if not excluded by filter
type: dict
sample:
- { "CSV": { "Comment": "CSV storage engine", "Savepoints": "NO", "Support": "YES", "Transactions": "NO", "XA": "NO" } }
master_status:
description: Master status information.
returned: if master
type: dict
sample:
- { "Binlog_Do_DB": "", "Binlog_Ignore_DB": "mysql", "File": "mysql-bin.000001", "Position": 769 }
slave_status:
description: Slave status information.
returned: if standby
type: dict
sample:
- { "192.168.1.101": { "3306": { "replication_user": { "Connect_Retry": 60, "Exec_Master_Log_Pos": 769, "Last_Errno": 0 } } } }
slave_hosts:
description: Slave status information.
returned: if master
type: dict
sample:
- { "2": { "Host": "", "Master_id": 1, "Port": 3306 } }
'''
from decimal import Decimal
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.mysql import (
mysql_connect,
mysql_common_argument_spec,
mysql_driver,
mysql_driver_fail_msg,
)
from ansible.module_utils.six import iteritems
from ansible.module_utils._text import to_native
# ===========================================
# MySQL module specific support methods.
#
class MySQL_Info(object):
"""Class for collection MySQL instance information.
Arguments:
module (AnsibleModule): Object of AnsibleModule class.
cursor (pymysql/mysql-python): Cursor class for interaction with
the database.
Note:
If you need to add a new subset:
1. add a new key with the same name to self.info attr in self.__init__()
2. add a new private method to get the information
3. add invocation of the new method to self.__collect()
4. add info about the new subset to the DOCUMENTATION block
5. add info about the new subset with an example to RETURN block
"""
def __init__(self, module, cursor):
self.module = module
self.cursor = cursor
self.info = {
'version': {},
'databases': {},
'settings': {},
'global_status': {},
'engines': {},
'users': {},
'master_status': {},
'slave_hosts': {},
'slave_status': {},
}
def get_info(self, filter_, exclude_fields):
"""Get MySQL instance information based on filter_.
Arguments:
            filter_ (list): List of collected subsets (e.g., databases, users, etc.);
                when it is empty, return all available information.
            exclude_fields (set): Fields to exclude from the collected subsets
                (currently only 'db_size' is supported).
"""
self.__collect(exclude_fields)
inc_list = []
exc_list = []
if filter_:
partial_info = {}
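            # split the filter into include and exclude lists; when any
            # include values are present, excludes are ignored (see DOCUMENTATION)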
for fi in filter_:
if fi.lstrip('!') not in self.info:
self.module.warn('filter element: %s is not allowable, ignored' % fi)
continue
if fi[0] == '!':
exc_list.append(fi.lstrip('!'))
else:
inc_list.append(fi)
if inc_list:
for i in self.info:
if i in inc_list:
partial_info[i] = self.info[i]
else:
for i in self.info:
if i not in exc_list:
partial_info[i] = self.info[i]
return partial_info
else:
return self.info
def __collect(self, exclude_fields):
"""Collect all possible subsets."""
self.__get_databases(exclude_fields)
self.__get_global_variables()
self.__get_global_status()
self.__get_engines()
self.__get_users()
self.__get_master_status()
self.__get_slave_status()
self.__get_slaves()
def __get_engines(self):
"""Get storage engines info."""
res = self.__exec_sql('SHOW ENGINES')
if res:
for line in res:
engine = line['Engine']
self.info['engines'][engine] = {}
for vname, val in iteritems(line):
if vname != 'Engine':
self.info['engines'][engine][vname] = val
def __convert(self, val):
"""Convert unserializable data."""
try:
if isinstance(val, Decimal):
val = float(val)
else:
val = int(val)
except ValueError:
pass
except TypeError:
pass
return val
def __get_global_variables(self):
"""Get global variables (instance settings)."""
res = self.__exec_sql('SHOW GLOBAL VARIABLES')
if res:
for var in res:
self.info['settings'][var['Variable_name']] = self.__convert(var['Value'])
ver = self.info['settings']['version'].split('.')
release = ver[2].split('-')[0]
self.info['version'] = dict(
major=int(ver[0]),
minor=int(ver[1]),
release=int(release),
)
def __get_global_status(self):
"""Get global status."""
res = self.__exec_sql('SHOW GLOBAL STATUS')
if res:
for var in res:
self.info['global_status'][var['Variable_name']] = self.__convert(var['Value'])
def __get_master_status(self):
"""Get master status if the instance is a master."""
res = self.__exec_sql('SHOW MASTER STATUS')
if res:
for line in res:
for vname, val in iteritems(line):
self.info['master_status'][vname] = self.__convert(val)
def __get_slave_status(self):
"""Get slave status if the instance is a slave."""
res = self.__exec_sql('SHOW SLAVE STATUS')
if res:
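            # nest the result as master host -> master port -> replication user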
for line in res:
host = line['Master_Host']
if host not in self.info['slave_status']:
self.info['slave_status'][host] = {}
port = line['Master_Port']
if port not in self.info['slave_status'][host]:
self.info['slave_status'][host][port] = {}
user = line['Master_User']
if user not in self.info['slave_status'][host][port]:
self.info['slave_status'][host][port][user] = {}
for vname, val in iteritems(line):
if vname not in ('Master_Host', 'Master_Port', 'Master_User'):
self.info['slave_status'][host][port][user][vname] = self.__convert(val)
def __get_slaves(self):
"""Get slave hosts info if the instance is a master."""
res = self.__exec_sql('SHOW SLAVE HOSTS')
if res:
for line in res:
srv_id = line['Server_id']
if srv_id not in self.info['slave_hosts']:
self.info['slave_hosts'][srv_id] = {}
for vname, val in iteritems(line):
if vname != 'Server_id':
self.info['slave_hosts'][srv_id][vname] = self.__convert(val)
def __get_users(self):
"""Get user info."""
res = self.__exec_sql('SELECT * FROM mysql.user')
if res:
for line in res:
host = line['Host']
if host not in self.info['users']:
self.info['users'][host] = {}
user = line['User']
self.info['users'][host][user] = {}
for vname, val in iteritems(line):
if vname not in ('Host', 'User'):
self.info['users'][host][user][vname] = self.__convert(val)
def __get_databases(self, exclude_fields):
"""Get info about databases."""
        if exclude_fields and 'db_size' in exclude_fields:
            query = ('SELECT table_schema AS "name" '
                     'FROM information_schema.TABLES GROUP BY table_schema')
        else:
            # also taken when exclude_fields contains only unsupported
            # elements, which are ignored
            query = ('SELECT table_schema AS "name", '
                     'SUM(data_length + index_length) AS "size" '
                     'FROM information_schema.TABLES GROUP BY table_schema')
res = self.__exec_sql(query)
if res:
for db in res:
self.info['databases'][db['name']] = {}
if not exclude_fields or 'db_size' not in exclude_fields:
self.info['databases'][db['name']]['size'] = int(db['size'])
def __exec_sql(self, query, ddl=False):
"""Execute SQL.
Arguments:
ddl (bool): If True, return True or False.
Used for queries that don't return any rows
(mainly for DDL queries) (default False).
"""
try:
self.cursor.execute(query)
if not ddl:
res = self.cursor.fetchall()
return res
return True
except Exception as e:
self.module.fail_json(msg="Cannot execute SQL '%s': %s" % (query, to_native(e)))
return False
# ===========================================
# Module execution.
#
def main():
argument_spec = mysql_common_argument_spec()
argument_spec.update(
login_db=dict(type='str'),
filter=dict(type='list'),
exclude_fields=dict(type='list'),
)
    # The module supports check_mode
    # because it doesn't change anything
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
db = module.params['login_db']
connect_timeout = module.params['connect_timeout']
login_user = module.params['login_user']
login_password = module.params['login_password']
ssl_cert = module.params['client_cert']
ssl_key = module.params['client_key']
ssl_ca = module.params['ca_cert']
config_file = module.params['config_file']
filter_ = module.params['filter']
exclude_fields = module.params['exclude_fields']
if filter_:
filter_ = [f.strip() for f in filter_]
if exclude_fields:
exclude_fields = set([f.strip() for f in exclude_fields])
if mysql_driver is None:
module.fail_json(msg=mysql_driver_fail_msg)
cursor = mysql_connect(module, login_user, login_password,
config_file, ssl_cert, ssl_key, ssl_ca, db,
connect_timeout=connect_timeout, cursor_class='DictCursor')
###############################
# Create object and do main job
mysql = MySQL_Info(module, cursor)
module.exit_json(changed=False, **mysql.get_info(filter_, exclude_fields))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,560 |
mysql_user module check_implicit_admin doesn't fallback to login credentials
|
##### SUMMARY
The mysql_user module with check_implicit_admin=yes tries to connect to the database without a password, but when that login fails it doesn't fall back to the login_user and login_password credentials.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
mysql_user
##### ANSIBLE VERSION
```
2.9.0
```
##### OS / ENVIRONMENT
Ubuntu 18.04
##### STEPS TO REPRODUCE
```yaml
- name: Update root password for all root accounts
mysql_user: name=root password={{ mysql_root_password }} login_user=root login_password={{ mysql_root_password }} host={{ item }} check_implicit_admin=yes
with_items:
- "{{ current_hostname.stdout | lower }}"
- 127.0.0.1
- ::1
- localhost
```
The first run is OK, but on the second run Ansible fails when trying to connect as root without a password, with no fallback to the login credentials.
##### EXPECTED RESULTS
First run -> root / no password
Other runs -> root / mysql_root_password
##### ACTUAL RESULTS
Provisioning fails on every run after the first.
|
https://github.com/ansible/ansible/issues/64560
|
https://github.com/ansible/ansible/pull/64585
|
f21e72d55ac2e4408036dcf18a5b5da61883b3e5
|
47aea84924e8149e153107564c8b029dc4f52c27
| 2019-11-07T13:09:16Z |
python
| 2019-12-06T05:52:34Z |
test/integration/targets/mysql_user/defaults/main.yml
|
---
# defaults file for test_mysql_user
db_name: 'data'
user_name_1: 'db_user1'
user_name_2: 'db_user2'
user_password_1: 'gadfFDSdtTU^Sdfuj'
user_password_2: 'jkFKUdfhdso78yi&td'
db_names:
- clientdb
- employeedb
- providerdb
tmp_dir: '/tmp'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,560 |
mysql_user module check_implicit_admin doesn't fallback to login credentials
|
##### SUMMARY
The mysql_user module with check_implicit_admin=yes tries to connect to the database without a password, but when that login fails it doesn't fall back to the login_user and login_password credentials.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
mysql_user
##### ANSIBLE VERSION
```
2.9.0
```
##### OS / ENVIRONMENT
Ubuntu 18.04
##### STEPS TO REPRODUCE
```yaml
- name: Update root password for all root accounts
mysql_user: name=root password={{ mysql_root_password }} login_user=root login_password={{ mysql_root_password }} host={{ item }} check_implicit_admin=yes
with_items:
- "{{ current_hostname.stdout | lower }}"
- 127.0.0.1
- ::1
- localhost
```
The first run is OK, but on the second run Ansible fails when trying to connect as root without a password, with no fallback to the login credentials.
##### EXPECTED RESULTS
First run -> root / no password
Other runs -> root / mysql_root_password
##### ACTUAL RESULTS
Provisioning fails on every run after the first.
|
https://github.com/ansible/ansible/issues/64560
|
https://github.com/ansible/ansible/pull/64585
|
f21e72d55ac2e4408036dcf18a5b5da61883b3e5
|
47aea84924e8149e153107564c8b029dc4f52c27
| 2019-11-07T13:09:16Z |
python
| 2019-12-06T05:52:34Z |
test/integration/targets/mysql_user/tasks/issue-64560.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,560 |
mysql_user module check_implicit_admin doesn't fallback to login credentials
|
##### SUMMARY
The mysql_user module with check_implicit_admin=yes tries to connect to the database without a password, but when that login fails it doesn't fall back to the login_user and login_password credentials.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
mysql_user
##### ANSIBLE VERSION
```
2.9.0
```
##### OS / ENVIRONMENT
Ubuntu 18.04
##### STEPS TO REPRODUCE
```yaml
- name: Update root password for all root accounts
mysql_user: name=root password={{ mysql_root_password }} login_user=root login_password={{ mysql_root_password }} host={{ item }} check_implicit_admin=yes
with_items:
- "{{ current_hostname.stdout | lower }}"
- 127.0.0.1
- ::1
- localhost
```
The first run is OK, but on the second run Ansible fails when trying to connect as root without a password, with no fallback to the login credentials.
##### EXPECTED RESULTS
First run -> root / no password
Other runs -> root / mysql_root_password
##### ACTUAL RESULTS
Provisioning fails on every run after the first.
|
https://github.com/ansible/ansible/issues/64560
|
https://github.com/ansible/ansible/pull/64585
|
f21e72d55ac2e4408036dcf18a5b5da61883b3e5
|
47aea84924e8149e153107564c8b029dc4f52c27
| 2019-11-07T13:09:16Z |
python
| 2019-12-06T05:52:34Z |
test/integration/targets/mysql_user/tasks/main.yml
|
# test code for the mysql_user module
# (c) 2014, Wayne Rosario <[email protected]>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# ============================================================
# create mysql user and verify user is added to mysql database
#
- include: create_user.yml user_name={{user_name_1}} user_password={{ user_password_1 }}
- include: assert_user.yml user_name={{user_name_1}}
- include: remove_user.yml user_name={{user_name_1}} user_password={{ user_password_1 }}
- include: assert_no_user.yml user_name={{user_name_1}}
# ============================================================
# Create mysql user that already exist on mysql database
#
- include: create_user.yml user_name={{user_name_1}} user_password={{ user_password_1 }}
- name: create mysql user that already exist (expect changed=false)
mysql_user:
name: '{{user_name_1}}'
password: '{{user_password_1}}'
state: present
login_unix_socket: '{{ mysql_socket }}'
register: result
- name: assert output message mysql user was not created
assert: { that: "result.changed == false" }
# ============================================================
# remove mysql user and verify user is removed from mysql database
#
- name: remove mysql user state=absent (expect changed=true)
mysql_user:
name: '{{ user_name_1 }}'
password: '{{ user_password_1 }}'
state: absent
login_unix_socket: '{{ mysql_socket }}'
register: result
- name: assert output message mysql user was removed
assert: { that: "result.changed == true" }
- include: assert_no_user.yml user_name={{user_name_1}}
# ============================================================
# remove mysql user that does not exist on mysql database
#
- name: remove mysql user that does not exist state=absent (expect changed=false)
mysql_user:
name: '{{ user_name_1 }}'
password: '{{ user_password_1 }}'
state: absent
login_unix_socket: '{{ mysql_socket }}'
register: result
- name: assert output message mysql user that does not exist
assert: { that: "result.changed == false" }
- include: assert_no_user.yml user_name={{user_name_1}}
# ============================================================
# Create user with no privileges and verify default privileges are assigned
#
- name: create user with no privileges state=present (expect changed=true)
mysql_user:
name: '{{ user_name_1 }}'
password: '{{ user_password_1 }}'
state: present
login_unix_socket: '{{ mysql_socket }}'
register: result
- include: assert_user.yml user_name={{user_name_1}} priv=USAGE
- include: remove_user.yml user_name={{user_name_1}} user_password={{ user_password_1 }}
- include: assert_no_user.yml user_name={{user_name_1}}
# ============================================================
# Create user with select privileges and verify select privileges are assigned
#
- name: create user with select privilege state=present (expect changed=true)
mysql_user:
name: '{{ user_name_2 }}'
password: '{{ user_password_2 }}'
state: present
priv: '*.*:SELECT'
login_unix_socket: '{{ mysql_socket }}'
register: result
- include: assert_user.yml user_name={{user_name_2}} priv=SELECT
- include: remove_user.yml user_name={{user_name_2}} user_password={{ user_password_2 }}
- include: assert_no_user.yml user_name={{user_name_2}}
# ============================================================
# Assert user has access to multiple databases
#
- name: give users access to multiple databases
mysql_user:
name: '{{ item[0] }}'
priv: '{{ item[1] }}.*:ALL'
append_privs: yes
password: '{{ user_password_1 }}'
login_unix_socket: '{{ mysql_socket }}'
with_nested:
- [ '{{ user_name_1 }}', '{{ user_name_2 }}']
- "{{db_names}}"
- name: show grants access for user1 on multiple database
command: mysql "-e SHOW GRANTS FOR '{{ user_name_1 }}'@'localhost';"
register: result
- name: assert grant access for user1 on multiple database
assert: { that: "'{{ item }}' in result.stdout" }
with_items: "{{db_names}}"
- name: show grants access for user2 on multiple database
command: mysql "-e SHOW GRANTS FOR '{{ user_name_2 }}'@'localhost';"
register: result
- name: assert grant access for user2 on multiple database
assert: { that: "'{{ item }}' in result.stdout" }
with_items: "{{db_names}}"
- include: remove_user.yml user_name={{user_name_1}} user_password={{ user_password_1 }}
- include: remove_user.yml user_name={{user_name_2}} user_password={{ user_password_1 }}
- name: give user access to database via wildcard
mysql_user:
name: '{{ user_name_1 }}'
priv: '%db.*:SELECT'
append_privs: yes
password: '{{ user_password_1 }}'
login_unix_socket: '{{ mysql_socket }}'
- name: show grants access for user1 on multiple database
command: mysql "-e SHOW GRANTS FOR '{{ user_name_1 }}'@'localhost';"
register: result
- name: assert grant access for user1 on multiple database
assert:
that:
- "'%db' in result.stdout"
- "'SELECT' in result.stdout"
- name: change user access to database via wildcard
mysql_user:
name: '{{ user_name_1 }}'
priv: '%db.*:INSERT'
append_privs: yes
password: '{{ user_password_1 }}'
login_unix_socket: '{{ mysql_socket }}'
- name: show grants access for user1 on multiple database
command: mysql "-e SHOW GRANTS FOR '{{ user_name_1 }}'@'localhost';"
register: result
- name: assert grant access for user1 on multiple database
assert:
that:
- "'%db' in result.stdout"
- "'INSERT' in result.stdout"
- include: remove_user.yml user_name={{user_name_1}} user_password={{ user_password_1 }}
# ============================================================
# Update user password for a user.
# Assert the user password is updated and old password can no longer be used.
#
#- include: user_password_update_test.yml
# ============================================================
# Assert create user with SELECT privileges, attempt to create database and update privileges to create database
#
- include: test_privs.yml current_privilege=SELECT current_append_privs=no
# ============================================================
# Assert creating user with SELECT privileges, attempt to create database and append privileges to create database
#
- include: test_privs.yml current_privilege=DROP current_append_privs=yes
# ============================================================
# Assert create user with SELECT privileges, attempt to create database and update privileges to create database
#
- include: test_privs.yml current_privilege='UPDATE,ALTER' current_append_privs=no
# ============================================================
# Assert creating user with SELECT privileges, attempt to create database and append privileges to create database
#
- include: test_privs.yml current_privilege='INSERT,DELETE' current_append_privs=yes
- import_tasks: issue-29511.yaml
tags:
- issue-29511
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,249 |
Exception from file connectionpool.py with every ad-hoc command executed on Windows node connecting by WinRM and certificate
|
##### SUMMARY
An exception from connectionpool.py is shown for every ad-hoc command executed on a Windows node when connecting via WinRM with certificate authentication. Ansible 2.8.x doesn't show this exception.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Ansible Connection WinRM
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/etsadm/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
```
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 30
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/usr/share/ansible/roles']
DEFAULT_VAULT_PASSWORD_FILE(/etc/ansible/ansible.cfg) = /home/etsadm/ansible_pwd.txt
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
```
Dynamic inventory created with these variables for all nodes:
'all': {
'vars': {
'ansible_winrm_cert_pem': '/etc/ansible/certificado/cert.pem',
'ansible_winrm_transport': 'certificate',
'ansible_winrm_server_cert_validation': 'ignore',
'ansible_winrm_cert_key_pem': '/etc/ansible/certificado/key.pem',
'ansible_connection': 'winrm'
}
}
##### OS / ENVIRONMENT
RHEL 7.7 x86_64
Ansible 2.9.1
Python 2.7.5
##### STEPS TO REPRODUCE
Every ad-hoc command executed on a Windows node, connecting via WinRM with a certificate, throws an exception, but the command itself runs fine.
Example:
<!--- Paste example playbooks or commands between quotes below -->
``` ansible -m win_ping windows_node_client ```
##### ACTUAL RESULTS
```
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/etsadm/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts/generaInventario.sh as it did not pass its verify_file() method
Parsed /etc/ansible/hosts/generaInventario.sh inventory source with script plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/minimal.pyc
META: ran handlers
Using module file /usr/lib/python2.7/site-packages/ansible/modules/windows/win_ping.ps1
Pipelining is enabled.
<windows_node_client> ESTABLISH WINRM CONNECTION FOR USER: None on PORT 5986 TO windows_node_client
Traceback (most recent call last):
File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
msg = self.format(record)
File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
return fmt.format(record)
File "/usr/lib64/python2.7/logging/__init__.py", line 467, in format
s = self._fmt % record.__dict__
KeyError: 'user'
Logged from file connectionpool.py, line 735
EXEC (via pipeline wrapper)
windows_node_client | SUCCESS => {
"changed": false,
"invocation": {
"module_args": {
"data": "pong"
}
},
"ping": "pong"
}
META: ran handlers
META: ran handlers
```
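A minimal workaround sketch, assuming the goal is only to stop third-party log records from breaking on the `%(user)s` format field (the `UserInjector` name is hypothetical):
```python
import getpass
import logging

class UserInjector(logging.Filter):
    """Attach the 'user' attribute that the log format string expects."""
    def filter(self, record):
        if not hasattr(record, 'user'):
            record.user = getpass.getuser()
        return True

for handler in logging.root.handlers:
    handler.addFilter(UserInjector())
```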
|
https://github.com/ansible/ansible/issues/65249
|
https://github.com/ansible/ansible/pull/65582
|
09fca101b72aa19494b0332ad3dac1f42a724802
|
b7822276424d26565c33954db7436a5f6ab8063c
| 2019-11-25T11:59:35Z |
python
| 2019-12-06T20:06:52Z |
changelogs/fragments/logging-traceback.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,249 |
Exception from file connectionpool.py with every ad-hoc command executed on Windows node connecting by WinRM and certificate
|
##### SUMMARY
An exception from connectionpool.py is shown for every ad-hoc command executed on a Windows node when connecting via WinRM with certificate authentication. Ansible 2.8.x doesn't show this exception.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Ansible Connection WinRM
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/etsadm/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
```
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 30
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/usr/share/ansible/roles']
DEFAULT_VAULT_PASSWORD_FILE(/etc/ansible/ansible.cfg) = /home/etsadm/ansible_pwd.txt
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
```
Dynamic inventory created with these variables for all nodes:
'all': {
'vars': {
'ansible_winrm_cert_pem': '/etc/ansible/certificado/cert.pem',
'ansible_winrm_transport': 'certificate',
'ansible_winrm_server_cert_validation': 'ignore',
'ansible_winrm_cert_key_pem': '/etc/ansible/certificado/key.pem',
'ansible_connection': 'winrm'
}
}
##### OS / ENVIRONMENT
RHEL 7.7 x86_64
Ansible 2.9.1
Python 2.7.5
##### STEPS TO REPRODUCE
Every ad-hoc command executed on a Windows node, connecting via WinRM with a certificate, throws an exception, but the command itself runs fine.
Example:
<!--- Paste example playbooks or commands between quotes below -->
``` ansible -m win_ping windows_node_client ```
##### ACTUAL RESULTS
```
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/etsadm/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts/generaInventario.sh as it did not pass its verify_file() method
Parsed /etc/ansible/hosts/generaInventario.sh inventory source with script plugin
Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/minimal.pyc
META: ran handlers
Using module file /usr/lib/python2.7/site-packages/ansible/modules/windows/win_ping.ps1
Pipelining is enabled.
<windows_node_client> ESTABLISH WINRM CONNECTION FOR USER: None on PORT 5986 TO windows_node_client
Traceback (most recent call last):
File "/usr/lib64/python2.7/logging/__init__.py", line 851, in emit
msg = self.format(record)
File "/usr/lib64/python2.7/logging/__init__.py", line 724, in format
return fmt.format(record)
File "/usr/lib64/python2.7/logging/__init__.py", line 467, in format
s = self._fmt % record.__dict__
KeyError: 'user'
Logged from file connectionpool.py, line 735
EXEC (via pipeline wrapper)
windows_node_client | SUCCESS => {
"changed": false,
"invocation": {
"module_args": {
"data": "pong"
}
},
"ping": "pong"
}
META: ran handlers
META: ran handlers
```
|
https://github.com/ansible/ansible/issues/65249
|
https://github.com/ansible/ansible/pull/65582
|
09fca101b72aa19494b0332ad3dac1f42a724802
|
b7822276424d26565c33954db7436a5f6ab8063c
| 2019-11-25T11:59:35Z |
python
| 2019-12-06T20:06:52Z |
lib/ansible/utils/display.py
|
# (c) 2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import errno
import fcntl
import getpass
import locale
import logging
import os
import random
import subprocess
import sys
import textwrap
import time
from struct import unpack, pack
from termios import TIOCGWINSZ
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.six import with_metaclass
from ansible.utils.color import stringc
from ansible.utils.singleton import Singleton
from ansible.utils.unsafe_proxy import wrap_var
try:
# Python 2
input = raw_input
except NameError:
# Python 3, we already have raw_input
pass
class FilterBlackList(logging.Filter):
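    # Drops any record whose logger name matches one of the blacklisted
    # names; logging.Filter(name) matches that logger and its children.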
def __init__(self, blacklist):
self.blacklist = [logging.Filter(name) for name in blacklist]
def filter(self, record):
return not any(f.filter(record) for f in self.blacklist)
logger = None
# TODO: make this a callback event instead
if getattr(C, 'DEFAULT_LOG_PATH'):
path = C.DEFAULT_LOG_PATH
if path and (os.path.exists(path) and os.access(path, os.W_OK)) or os.access(os.path.dirname(path), os.W_OK):
        logging.basicConfig(filename=path, level=logging.INFO, format='%(asctime)s p=%(process)d u=%(user)s | %(message)s')
logger = logging.LoggerAdapter(logging.getLogger('ansible'), {'user': getpass.getuser()})
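        # Note: '%(user)s' in the format above is supplied only for ansible's
        # own logger via this LoggerAdapter; records emitted by third-party
        # libraries (e.g. urllib3's connectionpool) lack the field, so the
        # root handler raises KeyError: 'user' while formatting them.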
for handler in logging.root.handlers:
handler.addFilter(FilterBlackList(getattr(C, 'DEFAULT_LOG_FILTER', [])))
else:
print("[WARNING]: log file at %s is not writeable and we cannot create it, aborting\n" % path, file=sys.stderr)
# map color to log levels
color_to_log_level = {C.COLOR_ERROR: logging.ERROR,
C.COLOR_WARN: logging.WARNING,
C.COLOR_OK: logging.INFO,
C.COLOR_SKIP: logging.WARNING,
C.COLOR_UNREACHABLE: logging.ERROR,
C.COLOR_DEBUG: logging.DEBUG,
C.COLOR_CHANGED: logging.INFO,
C.COLOR_DEPRECATE: logging.WARNING,
C.COLOR_VERBOSE: logging.INFO}
b_COW_PATHS = (
b"/usr/bin/cowsay",
b"/usr/games/cowsay",
b"/usr/local/bin/cowsay", # BSD path for cowsay
b"/opt/local/bin/cowsay", # MacPorts path for cowsay
)
class Display(with_metaclass(Singleton, object)):
def __init__(self, verbosity=0):
self.columns = None
self.verbosity = verbosity
# list of all deprecation messages to prevent duplicate display
self._deprecations = {}
self._warns = {}
self._errors = {}
self.b_cowsay = None
self.noncow = C.ANSIBLE_COW_SELECTION
self.set_cowsay_info()
if self.b_cowsay:
try:
cmd = subprocess.Popen([self.b_cowsay, "-l"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
self.cows_available = set([to_text(c) for c in out.split()])
if C.ANSIBLE_COW_WHITELIST and any(C.ANSIBLE_COW_WHITELIST):
self.cows_available = set(C.ANSIBLE_COW_WHITELIST).intersection(self.cows_available)
except Exception:
# could not execute cowsay for some reason
self.b_cowsay = False
self._set_column_width()
def set_cowsay_info(self):
if C.ANSIBLE_NOCOWS:
return
if C.ANSIBLE_COW_PATH:
self.b_cowsay = C.ANSIBLE_COW_PATH
else:
for b_cow_path in b_COW_PATHS:
if os.path.exists(b_cow_path):
self.b_cowsay = b_cow_path
def display(self, msg, color=None, stderr=False, screen_only=False, log_only=False, newline=True):
""" Display a message to the user
Note: msg *must* be a unicode string to prevent UnicodeError tracebacks.
"""
nocolor = msg
if color:
msg = stringc(msg, color)
if not log_only:
if not msg.endswith(u'\n') and newline:
msg2 = msg + u'\n'
else:
msg2 = msg
msg2 = to_bytes(msg2, encoding=self._output_encoding(stderr=stderr))
if sys.version_info >= (3,):
# Convert back to text string on python3
# We first convert to a byte string so that we get rid of
# characters that are invalid in the user's locale
msg2 = to_text(msg2, self._output_encoding(stderr=stderr), errors='replace')
# Note: After Display() class is refactored need to update the log capture
# code in 'bin/ansible-connection' (and other relevant places).
if not stderr:
fileobj = sys.stdout
else:
fileobj = sys.stderr
fileobj.write(msg2)
try:
fileobj.flush()
except IOError as e:
# Ignore EPIPE in case fileobj has been prematurely closed, eg.
# when piping to "head -n1"
if e.errno != errno.EPIPE:
raise
if logger and not screen_only:
# We first convert to a byte string so that we get rid of
# color and characters that are invalid in the user's locale
msg2 = to_bytes(nocolor.lstrip(u'\n'))
if sys.version_info >= (3,):
# Convert back to text string on python3
msg2 = to_text(msg2, self._output_encoding(stderr=stderr))
lvl = logging.INFO
if color:
# set logger level based on color (not great)
try:
lvl = color_to_log_level[color]
except KeyError:
# this should not happen, but JIC
raise AnsibleAssertionError('Invalid color supplied to display: %s' % color)
# actually log
logger.log(lvl, msg2)
def v(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=0)
def vv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=1)
def vvv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=2)
def vvvv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=3)
def vvvvv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=4)
def vvvvvv(self, msg, host=None):
return self.verbose(msg, host=host, caplevel=5)
def debug(self, msg, host=None):
if C.DEFAULT_DEBUG:
if host is None:
self.display("%6d %0.5f: %s" % (os.getpid(), time.time(), msg), color=C.COLOR_DEBUG)
else:
self.display("%6d %0.5f [%s]: %s" % (os.getpid(), time.time(), host, msg), color=C.COLOR_DEBUG)
def verbose(self, msg, host=None, caplevel=2):
to_stderr = C.VERBOSE_TO_STDERR
if self.verbosity > caplevel:
if host is None:
self.display(msg, color=C.COLOR_VERBOSE, stderr=to_stderr)
else:
self.display("<%s> %s" % (host, msg), color=C.COLOR_VERBOSE, stderr=to_stderr)
def deprecated(self, msg, version=None, removed=False):
''' used to print out a deprecation message.'''
if not removed and not C.DEPRECATION_WARNINGS:
return
if not removed:
if version:
new_msg = "[DEPRECATION WARNING]: %s. This feature will be removed in version %s." % (msg, version)
else:
new_msg = "[DEPRECATION WARNING]: %s. This feature will be removed in a future release." % (msg)
new_msg = new_msg + " Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.\n\n"
else:
raise AnsibleError("[DEPRECATED]: %s.\nPlease update your playbooks." % msg)
wrapped = textwrap.wrap(new_msg, self.columns, drop_whitespace=False)
new_msg = "\n".join(wrapped) + "\n"
if new_msg not in self._deprecations:
self.display(new_msg.strip(), color=C.COLOR_DEPRECATE, stderr=True)
self._deprecations[new_msg] = 1
def warning(self, msg, formatted=False):
if not formatted:
new_msg = "[WARNING]: %s" % msg
wrapped = textwrap.wrap(new_msg, self.columns)
new_msg = "\n".join(wrapped) + "\n"
else:
new_msg = "\n[WARNING]: \n%s" % msg
if new_msg not in self._warns:
self.display(new_msg, color=C.COLOR_WARN, stderr=True)
self._warns[new_msg] = 1
def system_warning(self, msg):
if C.SYSTEM_WARNINGS:
self.warning(msg)
def banner(self, msg, color=None, cows=True):
'''
Prints a header-looking line with cowsay or stars with length depending on terminal width (3 minimum)
'''
if self.b_cowsay and cows:
try:
self.banner_cowsay(msg)
return
except OSError:
self.warning("somebody cleverly deleted cowsay or something during the PB run. heh.")
msg = msg.strip()
star_len = self.columns - len(msg)
if star_len <= 3:
star_len = 3
stars = u"*" * star_len
self.display(u"\n%s %s" % (msg, stars), color=color)
def banner_cowsay(self, msg, color=None):
if u": [" in msg:
msg = msg.replace(u"[", u"")
if msg.endswith(u"]"):
msg = msg[:-1]
runcmd = [self.b_cowsay, b"-W", b"60"]
if self.noncow:
thecow = self.noncow
if thecow == 'random':
thecow = random.choice(list(self.cows_available))
runcmd.append(b'-f')
runcmd.append(to_bytes(thecow))
runcmd.append(to_bytes(msg))
cmd = subprocess.Popen(runcmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
self.display(u"%s\n" % to_text(out), color=color)
def error(self, msg, wrap_text=True):
if wrap_text:
new_msg = u"\n[ERROR]: %s" % msg
wrapped = textwrap.wrap(new_msg, self.columns)
new_msg = u"\n".join(wrapped) + u"\n"
else:
new_msg = u"ERROR! %s" % msg
if new_msg not in self._errors:
self.display(new_msg, color=C.COLOR_ERROR, stderr=True)
self._errors[new_msg] = 1
@staticmethod
def prompt(msg, private=False):
prompt_string = to_bytes(msg, encoding=Display._output_encoding())
if sys.version_info >= (3,):
# Convert back into text on python3. We do this double conversion
# to get rid of characters that are illegal in the user's locale
prompt_string = to_text(prompt_string)
if private:
return getpass.getpass(prompt_string)
else:
return input(prompt_string)
def do_var_prompt(self, varname, private=True, prompt=None, encrypt=None, confirm=False, salt_size=None, salt=None, default=None, unsafe=None):
result = None
if sys.__stdin__.isatty():
do_prompt = self.prompt
if prompt and default is not None:
msg = "%s [%s]: " % (prompt, default)
elif prompt:
msg = "%s: " % prompt
else:
msg = 'input for %s: ' % varname
if confirm:
while True:
result = do_prompt(msg, private)
second = do_prompt("confirm " + msg, private)
if result == second:
break
self.display("***** VALUES ENTERED DO NOT MATCH ****")
else:
result = do_prompt(msg, private)
else:
result = None
self.warning("Not prompting as we are not in interactive mode")
# if result is false and default is not None
if not result and default is not None:
result = default
if encrypt:
# Circular import because encrypt needs a display class
from ansible.utils.encrypt import do_encrypt
result = do_encrypt(result, encrypt, salt_size, salt)
# handle utf-8 chars
result = to_text(result, errors='surrogate_or_strict')
if unsafe:
result = wrap_var(result)
return result
@staticmethod
def _output_encoding(stderr=False):
encoding = locale.getpreferredencoding()
# https://bugs.python.org/issue6202
# Python2 hardcodes an obsolete value on Mac. Use MacOSX defaults
# instead.
if encoding in ('mac-roman',):
encoding = 'utf-8'
return encoding
def _set_column_width(self):
if os.isatty(0):
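            # the winsize struct is (rows, cols, xpixel, ypixel); take cols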
tty_size = unpack('HHHH', fcntl.ioctl(0, TIOCGWINSZ, pack('HHHH', 0, 0, 0, 0)))[1]
else:
tty_size = 0
self.columns = max(79, tty_size - 1)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,096 |
vmware_vm_inventory.py ignore port parameter
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When I run [vmware_vm_inventory.py](https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/inventory/vmware_vm_inventory.py) in a firewalled environment without access to port `443` (i.e., only port 1443 is forwarded to a single VCSA's port 443) and set `with_tags: True` and `port: 1443`, I get an error about port `443` instead (`HTTPSConnectionPool(host='foo.bar.com', port=443)`).
But if I set `with_tags: False`, or run it on the same subnet as the VCSA host (i.e., with port 443 access), the script works as expected, without errors.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
I suspect the problem is NOT `vmware_vm_inventory.py` but one of its dependencies.
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.6
config file = /ansible/ansible.cfg
configured module search path = ['/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:25:15) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
DEFAULT_BECOME_METHOD(env: ANSIBLE_BECOME_METHOD) = sudo
DEFAULT_BECOME_USER(env: ANSIBLE_BECOME_USER) = root
DEFAULT_STRATEGY(/ansible/ansible.cfg) = mitogen_linear
DEFAULT_STRATEGY_PLUGIN_PATH(/ansible/ansible.cfg) = ['/usr/lib/python3.6/site-packages/ansible_mitogen/plugins/strategy']
HOST_KEY_CHECKING(env: ANSIBLE_HOST_KEY_CHECKING) = False
INVENTORY_ENABLED(/ansible/ansible.cfg) = ['vmware_vm_inventory']
```
Config:
```paste below
plugin: vmware_vm_inventory
strict: False
hostname: foo.bar.com
port: 1443
username: [email protected]
password: fooBar
validate_certs: False
with_tags: True
```
I'm using a Dockerfile like this:
```
RUN set -euxo pipefail ;\
sed -i 's/http\:\/\/dl-cdn.alpinelinux.org/https\:\/\/alpine.global.ssl.fastly.net/g' /etc/apk/repositories ;\
apk add --no-cache --update python3 ca-certificates openssh-client sshpass dumb-init bash git jq ;\
apk add --no-cache --update --virtual .build-deps python3-dev build-base libffi-dev openssl-dev ;\
pip3 install --no-cache --upgrade pip ;\
pip3 install --no-cache --upgrade setuptools ansible ;\
pip3 install --no-cache mitogen ansible-lint ; \
pip3 install --no-cache --upgrade pywinrm ; \
apk del --no-cache --purge .build-deps ;\
rm -rf /var/cache/apk/* ;\
rm -rf /root/.cache ;\
ln -s /usr/bin/python3 /usr/bin/python ;\
mkdir -p /etc/ansible/ ;\
/bin/echo -e "[local]\nlocalhost ansible_connection=local" > /etc/ansible/hosts ;\
ssh-keygen -q -t ed25519 -N '' -f /root/.ssh/id_ed25519 ;\
mkdir -p ~/.ssh && echo "Host *" > ~/.ssh/config && echo " StrictHostKeyChecking no" >> ~/.ssh/config
# Dependency to enumerate VSphere hosts
# REQUIRED? pip3 libxml2-dev libxslt-dev python-dev ;\
RUN apk add py3-lxml ;\
pip3 install --no-cache --upgrade PyVmomi ;\
pip3 install git+https://github.com/vmware/vsphere-automation-sdk-python
# Optionally download the latest version (from Github)
COPY --chown=ansible:ansible ./vmware_vm_inventory.py /usr/lib/python3.7/site-packages/ansible/plugins/inventory/vmware_vm_inventory.py
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.10.2
PRETTY_NAME="Alpine Linux v3.10"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. Install Ansible on a subnet without 443 access (e.g., port 1443 only)
2. Enter that port in .vmware.yml as `port: 1443`
3. Follow example from [the documentation](https://docs.ansible.com/ansible/latest/plugins/inventory.html).
<!--- Paste example playbooks or commands between quotes below -->
Error:
> [WARNING]: * Failed to parse /ansible/inventories/dev/.vmware.yml with vmware_vm_inventory plugin: HTTPSConnectionPool(host='foo.bar.com', port=443): Max retries exceeded with url: /api (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object
> at 0x7f7ac7985350>: Failed to establish a new connection: [Errno 111] Connection refused'))
>
> [WARNING]: Unable to parse /ansible/inventories/dev/.vmware.yml as an inventory source
>
> [WARNING]: No inventory was parsed, only implicit localhost is available
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Entering `port: 1234` should make *all* connections use that port, not require some parts to still use port `443`.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Port 443 seems required regardless of the port the user entered.
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/64096
|
https://github.com/ansible/ansible/pull/65568
|
b7822276424d26565c33954db7436a5f6ab8063c
|
c97360d21f2ba003abc881b9b3d03ab0ef672508
| 2019-10-30T10:02:20Z |
python
| 2019-12-06T20:23:12Z |
changelogs/fragments/vmware_vm_inventory_port.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,096 |
vmware_vm_inventory.py ignore port parameter
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
I am running [vmware_vm_inventory.py]( https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/inventory/vmware_vm_inventory.py) in a firewalled environment without access to port `443` (i.e., only port 1443 is forwarded to port 443 of a single VCSA). With `with_tags: True` and `port: 1443` set, I get an error about port `443` instead (`HTTPSConnectionPool(host='foo.bar.com', port=443`).
But if I set `with_tags: False`, or run it on the same subnet as the VCSA host (i.e., with port 443 access), the script works as expected, without errors.
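For reference, the REST login used for the tag feature appears to build its session from the hostname alone. Below is a minimal sketch of how the configured port could be honored; whether `create_vsphere_client` accepts a `host:port` value in `server` is an assumption on my part, not confirmed behavior:
```python
# Hedged sketch, not the plugin's actual fix: hand the configured port to the
# vSphere Automation SDK instead of letting the REST session default to 443.
import requests
from vmware.vapi.vsphere.client import create_vsphere_client

def login_vapi(hostname, username, password, port=443, validate_certs=True):
    session = requests.Session()
    session.verify = validate_certs
    # "host:port" instead of the bare hostname -- assumption: the SDK parses
    # the port out of the server string.
    return create_vsphere_client(server='%s:%s' % (hostname, port),
                                 username=username,
                                 password=password,
                                 session=session)
```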
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
I suspect the problem is NOT `vmware_vm_inventory.py` but one of its dependencies.
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.6
config file = /ansible/ansible.cfg
configured module search path = ['/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:25:15) [GCC 8.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
DEFAULT_BECOME_METHOD(env: ANSIBLE_BECOME_METHOD) = sudo
DEFAULT_BECOME_USER(env: ANSIBLE_BECOME_USER) = root
DEFAULT_STRATEGY(/ansible/ansible.cfg) = mitogen_linear
DEFAULT_STRATEGY_PLUGIN_PATH(/ansible/ansible.cfg) = ['/usr/lib/python3.6/site-packages/ansible_mitogen/plugins/strategy']
HOST_KEY_CHECKING(env: ANSIBLE_HOST_KEY_CHECKING) = False
INVENTORY_ENABLED(/ansible/ansible.cfg) = ['vmware_vm_inventory']
```
Config:
```paste below
plugin: vmware_vm_inventory
strict: False
hostname: foo.bar.com
port: 1443
username: [email protected]
password: fooBar
validate_certs: False
with_tags: True
```
I'm using a Dockerfile like this:
```
RUN set -euxo pipefail ;\
sed -i 's/http\:\/\/dl-cdn.alpinelinux.org/https\:\/\/alpine.global.ssl.fastly.net/g' /etc/apk/repositories ;\
apk add --no-cache --update python3 ca-certificates openssh-client sshpass dumb-init bash git jq ;\
apk add --no-cache --update --virtual .build-deps python3-dev build-base libffi-dev openssl-dev ;\
pip3 install --no-cache --upgrade pip ;\
pip3 install --no-cache --upgrade setuptools ansible ;\
pip3 install --no-cache mitogen ansible-lint ; \
pip3 install --no-cache --upgrade pywinrm ; \
apk del --no-cache --purge .build-deps ;\
rm -rf /var/cache/apk/* ;\
rm -rf /root/.cache ;\
ln -s /usr/bin/python3 /usr/bin/python ;\
mkdir -p /etc/ansible/ ;\
/bin/echo -e "[local]\nlocalhost ansible_connection=local" > /etc/ansible/hosts ;\
ssh-keygen -q -t ed25519 -N '' -f /root/.ssh/id_ed25519 ;\
mkdir -p ~/.ssh && echo "Host *" > ~/.ssh/config && echo " StrictHostKeyChecking no" >> ~/.ssh/config
# Dependency to enumerate VSphere hosts
# REQUIRED? pip3 libxml2-dev libxslt-dev python-dev ;\
RUN apk add py3-lxml ;\
pip3 install --no-cache --upgrade PyVmomi ;\
pip3 install git+https://github.com/vmware/vsphere-automation-sdk-python
# Optionally download the latest version (from Github)
COPY --chown=ansible:ansible ./vmware_vm_inventory.py /usr/lib/python3.7/site-packages/ansible/plugins/inventory/vmware_vm_inventory.py
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
```
cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.10.2
PRETTY_NAME="Alpine Linux v3.10"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
```
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
1. Install Ansible on a subnet without 443 access (e.g., port 1443 only)
2. Enter that port in .vmware.yml as `port: 1443`
3. Follow example from [the documentation](https://docs.ansible.com/ansible/latest/plugins/inventory.html).
<!--- Paste example playbooks or commands between quotes below -->
Error:
> [WARNING]: * Failed to parse /ansible/inventories/dev/.vmware.yml with vmware_vm_inventory plugin: HTTPSConnectionPool(host='foo.bar.com', port=443): Max retries exceeded with url: /api (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object
> at 0x7f7ac7985350>: Failed to establish a new connection: [Errno 111] Connection refused'))
>
> [WARNING]: Unable to parse /ansible/inventories/dev/.vmware.yml as an inventory source
>
> [WARNING]: No inventory was parsed, only implicit localhost is available
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Entering `port: 1234` should make *all* connections use that port, not require some parts to still use port `443`.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Port 443 seems required regardless of the port the user entered.
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/64096
|
https://github.com/ansible/ansible/pull/65568
|
b7822276424d26565c33954db7436a5f6ab8063c
|
c97360d21f2ba003abc881b9b3d03ab0ef672508
| 2019-10-30T10:02:20Z |
python
| 2019-12-06T20:23:12Z |
lib/ansible/plugins/inventory/vmware_vm_inventory.py
|
#
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, Abhijeet Kasurde <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: vmware_vm_inventory
plugin_type: inventory
short_description: VMware Guest inventory source
version_added: "2.7"
author:
- Abhijeet Kasurde (@Akasurde)
description:
- Get virtual machines as inventory hosts from VMware environment.
- Uses any file which ends with vmware.yml, vmware.yaml, vmware_vm_inventory.yml, or vmware_vm_inventory.yaml as a YAML configuration file.
- The inventory_hostname is always the 'Name' and UUID of the virtual machine. UUID is added as VMware allows virtual machines with the same name.
extends_documentation_fragment:
- inventory_cache
requirements:
- "Python >= 2.7"
- "PyVmomi"
- "requests >= 2.3"
- "vSphere Automation SDK - For tag feature"
- "vCloud Suite SDK - For tag feature"
options:
hostname:
description: Name of vCenter or ESXi server.
required: True
env:
- name: VMWARE_HOST
- name: VMWARE_SERVER
username:
description: Name of vSphere admin user.
required: True
env:
- name: VMWARE_USER
- name: VMWARE_USERNAME
password:
description: Password of vSphere admin user.
required: True
env:
- name: VMWARE_PASSWORD
port:
description: Port number used to connect to vCenter or ESXi Server.
default: 443
env:
- name: VMWARE_PORT
validate_certs:
description:
- Allows connection when SSL certificates are not valid. Set to C(false) when certificates are not trusted.
default: True
type: boolean
env:
- name: VMWARE_VALIDATE_CERTS
with_tags:
description:
- Include tags and associated virtual machines.
- Requires 'vSphere Automation SDK' library to be installed on the given controller machine.
- Please refer to the following URL for installation steps
- 'https://code.vmware.com/web/sdk/65/vsphere-automation-python'
default: False
type: boolean
properties:
description:
- Specify the list of VMware schema properties associated with the VM.
- These properties will be populated in hostvars of the given VM.
- Each value in the list specifies the path to a specific property in VM object.
type: list
default: [ 'name', 'config.cpuHotAddEnabled', 'config.cpuHotRemoveEnabled',
'config.instanceUuid', 'config.hardware.numCPU', 'config.template',
'config.name', 'guest.hostName', 'guest.ipAddress',
'guest.guestId', 'guest.guestState', 'runtime.maxMemoryUsage',
'customValue'
]
version_added: "2.9"
'''
EXAMPLES = '''
# Sample configuration file for VMware Guest dynamic inventory
plugin: vmware_vm_inventory
strict: False
hostname: 10.65.223.31
username: [email protected]
password: Esxi@123$%
validate_certs: False
with_tags: True
# Gather minimum set of properties for VMware guest
plugin: vmware_vm_inventory
strict: False
hostname: 10.65.223.31
username: [email protected]
password: Esxi@123$%
validate_certs: False
with_tags: False
properties:
- 'name'
- 'guest.ipAddress'
'''
import ssl
import atexit
from ansible.errors import AnsibleError, AnsibleParserError
try:
# requests is required for exception handling of the ConnectionError
import requests
HAS_REQUESTS = True
except ImportError:
HAS_REQUESTS = False
try:
from pyVim import connect
from pyVmomi import vim, vmodl
HAS_PYVMOMI = True
except ImportError:
HAS_PYVMOMI = False
try:
from com.vmware.vapi.std_client import DynamicID
from vmware.vapi.vsphere.client import create_vsphere_client
HAS_VSPHERE = True
except ImportError:
HAS_VSPHERE = False
from ansible.plugins.inventory import BaseInventoryPlugin, Cacheable
class BaseVMwareInventory:
def __init__(self, hostname, username, password, port, validate_certs, with_tags):
self.hostname = hostname
self.username = username
self.password = password
self.port = port
self.with_tags = with_tags
self.validate_certs = validate_certs
self.content = None
self.rest_content = None
def do_login(self):
"""
Check requirements and do login
"""
self.check_requirements()
self.content = self._login()
if self.with_tags:
self.rest_content = self._login_vapi()
def _login_vapi(self):
"""
Login to vCenter API using REST call
Returns: connection object
"""
session = requests.Session()
session.verify = self.validate_certs
if not self.validate_certs:
# Disable warning shown at stdout
requests.packages.urllib3.disable_warnings()
client = create_vsphere_client(server=self.hostname,
username=self.username,
password=self.password,
session=session)
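# NOTE: self.port is not passed to create_vsphere_client() above, so this
# REST session always targets the default HTTPS port (443). Contrast with
# _login() below, which does pass self.port to connect.SmartConnect().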
if client is None:
raise AnsibleError("Failed to login to %s using %s" % (self.hostname, self.username))
return client
def _login(self):
"""
Login to vCenter or ESXi server
Returns: connection object
"""
if self.validate_certs and not hasattr(ssl, 'SSLContext'):
raise AnsibleError('pyVim does not support changing verification mode with python < 2.7.9. Either update '
'python or set validate_certs to false in configuration YAML file.')
ssl_context = None
if not self.validate_certs and hasattr(ssl, 'SSLContext'):
ssl_context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ssl_context.verify_mode = ssl.CERT_NONE
service_instance = None
try:
service_instance = connect.SmartConnect(host=self.hostname, user=self.username,
pwd=self.password, sslContext=ssl_context,
port=self.port)
except vim.fault.InvalidLogin as e:
raise AnsibleParserError("Unable to log on to vCenter or ESXi API at %s:%s as %s: %s" % (self.hostname, self.port, self.username, e.msg))
except vim.fault.NoPermission as e:
raise AnsibleParserError("User %s does not have required permission"
" to log on to vCenter or ESXi API at %s:%s : %s" % (self.username, self.hostname, self.port, e.msg))
except (requests.ConnectionError, ssl.SSLError) as e:
raise AnsibleParserError("Unable to connect to vCenter or ESXi API at %s on TCP/%s: %s" % (self.hostname, self.port, e))
except vmodl.fault.InvalidRequest as e:
# Request is malformed
raise AnsibleParserError("Failed to get a response from server %s:%s as "
"request is malformed: %s" % (self.hostname, self.port, e.msg))
except Exception as e:
raise AnsibleParserError("Unknown error while connecting to vCenter or ESXi API at %s:%s : %s" % (self.hostname, self.port, e))
if service_instance is None:
raise AnsibleParserError("Unknown error while connecting to vCenter or ESXi API at %s:%s" % (self.hostname, self.port))
atexit.register(connect.Disconnect, service_instance)
return service_instance.RetrieveContent()
def check_requirements(self):
""" Check all requirements for this inventory are satisified"""
if not HAS_REQUESTS:
raise AnsibleParserError('Please install "requests" Python module as this is required'
' for VMware Guest dynamic inventory plugin.')
elif not HAS_PYVMOMI:
raise AnsibleParserError('Please install "PyVmomi" Python module as this is required'
' for VMware Guest dynamic inventory plugin.')
if HAS_REQUESTS:
# Pyvmomi 5.5 and onwards requires requests 2.3
# https://github.com/vmware/pyvmomi/blob/master/requirements.txt
required_version = (2, 3)
requests_version = requests.__version__.split(".")[:2]
try:
requests_major_minor = tuple(map(int, requests_version))
except ValueError:
raise AnsibleParserError("Failed to parse 'requests' library version.")
if requests_major_minor < required_version:
raise AnsibleParserError("'requests' library version should"
" be >= %s, found: %s." % (".".join([str(w) for w in required_version]),
requests.__version__))
if not HAS_VSPHERE and self.with_tags:
raise AnsibleError("Unable to find 'vSphere Automation SDK' Python library which is required."
" Please refer this URL for installation steps"
" - https://code.vmware.com/web/sdk/65/vsphere-automation-python")
if not all([self.hostname, self.username, self.password]):
raise AnsibleError("Missing one of the following : hostname, username, password. Please read "
"the documentation for more information.")
def _get_managed_objects_properties(self, vim_type, properties=None):
"""
Look up a Managed Object Reference in vCenter / ESXi Environment
:param vim_type: Type of vim object e.g, for datacenter - vim.Datacenter
:param properties: List of properties related to vim object e.g. Name
:return: local content object
"""
# Get Root Folder
root_folder = self.content.rootFolder
if properties is None:
properties = ['name']
# Create Container View with default root folder
mor = self.content.viewManager.CreateContainerView(root_folder, [vim_type], True)
# Create Traversal spec
traversal_spec = vmodl.query.PropertyCollector.TraversalSpec(
name="traversal_spec",
path='view',
skip=False,
type=vim.view.ContainerView
)
# Create Property Spec
property_spec = vmodl.query.PropertyCollector.PropertySpec(
type=vim_type,  # Type of object to be retrieved
all=False,
pathSet=properties
)
# Create Object Spec
object_spec = vmodl.query.PropertyCollector.ObjectSpec(
obj=mor,
skip=True,
selectSet=[traversal_spec]
)
# Create Filter Spec
filter_spec = vmodl.query.PropertyCollector.FilterSpec(
objectSet=[object_spec],
propSet=[property_spec],
reportMissingObjectsInResults=False
)
return self.content.propertyCollector.RetrieveContents([filter_spec])
@staticmethod
def _get_object_prop(vm, attributes):
"""Safely get a property or return None"""
result = vm
for attribute in attributes:
try:
result = getattr(result, attribute)
except (AttributeError, IndexError):
return None
return result
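# Example walk: _get_object_prop(vm, ['guest', 'ipAddress']) returns
# vm.guest.ipAddress, or None if any attribute along the path is missing.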
class InventoryModule(BaseInventoryPlugin, Cacheable):
NAME = 'vmware_vm_inventory'
def verify_file(self, path):
"""
Verify plugin configuration file and mark this plugin active
Args:
path: Path of configuration YAML file
Returns: True if everything is correct, else False
"""
valid = False
if super(InventoryModule, self).verify_file(path):
if path.endswith(('vmware.yaml', 'vmware.yml', 'vmware_vm_inventory.yaml', 'vmware_vm_inventory.yml')):
valid = True
return valid
def parse(self, inventory, loader, path, cache=True):
"""
Parses the inventory file
"""
super(InventoryModule, self).parse(inventory, loader, path, cache=cache)
cache_key = self.get_cache_key(path)
config_data = self._read_config_data(path)
# set _options from config data
self._consume_options(config_data)
self.pyv = BaseVMwareInventory(
hostname=self.get_option('hostname'),
username=self.get_option('username'),
password=self.get_option('password'),
port=self.get_option('port'),
with_tags=self.get_option('with_tags'),
validate_certs=self.get_option('validate_certs')
)
self.pyv.do_login()
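# Note: do_login() already calls check_requirements() internally, so the
# next call repeats those checks.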
self.pyv.check_requirements()
source_data = None
if cache:
cache = self.get_option('cache')
update_cache = False
if cache:
try:
source_data = self._cache[cache_key]
except KeyError:
update_cache = True
using_current_cache = cache and not update_cache
cacheable_results = self._populate_from_source(source_data, using_current_cache)
if update_cache:
self._cache[cache_key] = cacheable_results
def _populate_from_cache(self, source_data):
""" Populate cache using source data """
hostvars = source_data.pop('_meta', {}).get('hostvars', {})
for group in source_data:
if group == 'all':
continue
else:
self.inventory.add_group(group)
hosts = source_data[group].get('hosts', [])
for host in hosts:
self._populate_host_vars([host], hostvars.get(host, {}), group)
self.inventory.add_child('all', group)
def _populate_from_source(self, source_data, using_current_cache):
"""
Populate inventory data from direct source
"""
if using_current_cache:
self._populate_from_cache(source_data)
return source_data
cacheable_results = {'_meta': {'hostvars': {}}}
hostvars = {}
objects = self.pyv._get_managed_objects_properties(vim_type=vim.VirtualMachine,
properties=['name'])
if self.pyv.with_tags:
tag_svc = self.pyv.rest_content.tagging.Tag
tag_association = self.pyv.rest_content.tagging.TagAssociation
tags_info = dict()
tags = tag_svc.list()
for tag in tags:
tag_obj = tag_svc.get(tag)
tags_info[tag_obj.id] = tag_obj.name
if tag_obj.name not in cacheable_results:
cacheable_results[tag_obj.name] = {'hosts': []}
self.inventory.add_group(tag_obj.name)
for vm_obj in objects:
for vm_obj_property in vm_obj.propSet:
# VMware does not provide a way to uniquely identify a VM by its name
# i.e. there can be two virtual machines with the same name
# Appending "_" and the VMware UUID to make it unique
if not vm_obj.obj.config:
# Sometimes orphaned VMs return no configuration
continue
current_host = vm_obj_property.val + "_" + vm_obj.obj.config.uuid
if current_host not in hostvars:
hostvars[current_host] = {}
self.inventory.add_host(current_host)
host_ip = vm_obj.obj.guest.ipAddress
if host_ip:
self.inventory.set_variable(current_host, 'ansible_host', host_ip)
self._populate_host_properties(vm_obj, current_host)
# Only gather tag-related facts if the vSphere Automation SDK is installed and with_tags is enabled.
if HAS_VSPHERE and self.pyv.with_tags:
# Add virtual machine to appropriate tag group
vm_mo_id = vm_obj.obj._GetMoId()
vm_dynamic_id = DynamicID(type='VirtualMachine', id=vm_mo_id)
attached_tags = tag_association.list_attached_tags(vm_dynamic_id)
for tag_id in attached_tags:
self.inventory.add_child(tags_info[tag_id], current_host)
cacheable_results[tags_info[tag_id]]['hosts'].append(current_host)
# Based on power state of virtual machine
vm_power = str(vm_obj.obj.summary.runtime.powerState)
if vm_power not in cacheable_results:
cacheable_results[vm_power] = {'hosts': []}
self.inventory.add_group(vm_power)
cacheable_results[vm_power]['hosts'].append(current_host)
self.inventory.add_child(vm_power, current_host)
# Based on guest id
vm_guest_id = vm_obj.obj.config.guestId
if vm_guest_id and vm_guest_id not in cacheable_results:
cacheable_results[vm_guest_id] = {'hosts': []}
self.inventory.add_group(vm_guest_id)
cacheable_results[vm_guest_id]['hosts'].append(current_host)
self.inventory.add_child(vm_guest_id, current_host)
for host in hostvars:
h = self.inventory.get_host(host)
cacheable_results['_meta']['hostvars'][h.name] = h.vars
return cacheable_results
def _populate_host_properties(self, vm_obj, current_host):
# Load VM properties in host_vars
vm_properties = self.get_option('properties') or []
field_mgr = self.pyv.content.customFieldsManager.field
for vm_prop in vm_properties:
if vm_prop == 'customValue':
for cust_value in vm_obj.obj.customValue:
self.inventory.set_variable(current_host,
[y.name for y in field_mgr if y.key == cust_value.key][0],
cust_value.value)
else:
vm_value = self.pyv._get_object_prop(vm_obj.obj, vm_prop.split("."))
self.inventory.set_variable(current_host, vm_prop, vm_value)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,188 |
Impossible to modify attribute of a ovirt VM created by cloning a template
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
We maintain a VM's configuration with the ovirt_vm module, using a single playbook (that does not change) and a YAML configuration file describing the VM (that changes during the life of the VM).
The VM is created from a template as a clone/independent version. The creation completes with ansible-playbook without error.
But when we try to change an attribute (for example, description) in the YAML configuration file, the execution of the playbook fails.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ovirt_vm
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
Empty output
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
The target is an up-to-date RHV installation (version 4.3.6.7).
We tried two different hosts to execute the playbook:
- Ubuntu 18.04 with ansible and ovirt-engine-sdk-python installed with pip3
- the HostedEngine of RHV: RHEL 7.7 with up-to-date versions of the ovirt-engine-sdk-python and ansible packages
The result is the same in both cases.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
First execution of the playbook to create the VM:
ansible-playbook template_bug.yaml -e 'vm_file=vm.yaml'
Content of the vm.yaml file:
```yaml
---
vm_name: "TEST-TemplateBug"
vm_template: "SYS-Ubuntu16"
vm_description: "Original description"
```
Second execution of the playbook in order to modify the VM description:
Same command but with a different vm.yaml file:
```yaml
---
vm_name: "TEST-TemplateBug"
vm_template: "SYS-Ubuntu16"
vm_description: "New description"
```
Content of the playbook:
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
# Doc : https://docs.ansible.com/ansible/latest/modules/ovirt_vm_module.html
- name: Template Bug
hosts: localhost
connection: local
gather_facts: false
vars_files:
- "rhv_connection.yaml"
- "{{ vm_file }}"
vars_prompt:
- name: rhv_password
prompt: "Password"
private: yes
pre_tasks:
- name: "RHV connection"
ovirt_auth:
url: "{{ rhv_url }}"
username: "{{ rhv_login }}"
password: "{{ rhv_password }}"
ca_file: "{{ rhv_cert }}"
insecure: false
tags:
- always
tasks:
- name: "VM {{ vm_name }} creation/modification"
ovirt_vm:
auth: "{{ ovirt_auth }}"
name: "{{ vm_name }}"
template: "{{ vm_template }}"
description: "{{ vm_description }}"
state: present
clone: yes
cluster: "{{ rhv_cluster }}"
post_tasks:
- name: RHV log out
ovirt_auth:
state: absent
ovirt_auth: "{{ ovirt_auth }}"
tags:
- always
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
We expected the modification of description attribute of the VM.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The second execution of ansible-playbook failed with an error.
<!--- Paste verbatim command output between quotes -->
```paste below
ansible-playbook template_bug.yaml -e 'vm_file=vm.yaml'
Password:
PLAY [Template Bug] **************************************************************************************************************************************************************************************************************************
TASK [RHV connection] ************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [VM TEST-TemplateBug creation/modification] *********************************************************************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Error: Fault reason is "Operation Failed". Fault detail is "[Cannot edit VM. The requested template doesn't belong to the same base template as the original template.]". HTTP response code is 400.
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot edit VM. The requested template doesn't belong to the same base template as the original template.]\". HTTP response code is 400."}
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
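The error suggests the module re-sends the template reference when updating an existing VM, not only at creation time. Below is a minimal sketch of the kind of guard that would avoid this; the names `build_vm_entity` and `is_new` are illustrative assumptions, not the module's real internals:
```python
# Hedged sketch with assumed names; only the ovirtsdk4 types are real.
import ovirtsdk4.types as otypes

def build_vm_entity(params, is_new):
    return otypes.Vm(
        name=params['name'],
        description=params.get('description'),
        # Send the template only when the VM is being created; re-sending it
        # on an update triggers "The requested template doesn't belong to the
        # same base template as the original template" on RHV.
        template=otypes.Template(name=params['template']) if is_new else None,
    )
```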
|
https://github.com/ansible/ansible/issues/65188
|
https://github.com/ansible/ansible/pull/65317
|
5c9539903eec844e59fb5a729fcbfe63e79d682c
|
9c79de2e1eeee604f874a7a328d1ab4105681204
| 2019-11-22T14:03:37Z |
python
| 2019-12-09T10:09:33Z |
lib/ansible/modules/cloud/ovirt/ovirt_vm.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ovirt_vm
short_description: Module to manage Virtual Machines in oVirt/RHV
version_added: "2.2"
author:
- Ondra Machacek (@machacekondra)
description:
- This module manages whole lifecycle of the Virtual Machine(VM) in oVirt/RHV.
- Since a VM can hold many states in oVirt/RHV, see the notes for how the states of the VM are handled.
options:
name:
description:
- Name of the Virtual Machine to manage.
- If the VM doesn't exist, C(name) is required. Otherwise C(id) or C(name) can be used.
id:
description:
- ID of the Virtual Machine to manage.
state:
description:
- Should the Virtual Machine be running/stopped/present/absent/suspended/next_run/registered/exported/reboot.
When C(state) is I(registered) and the unregistered VM's name
belongs to a VM already registered in the engine in the same DC,
registering the unregistered VM fails.
- I(present) state will create/update the VM and won't change its state if it already exists.
- I(running) state will create/update VM and start it.
- I(next_run) state updates the VM and if the VM has next run configuration it will be rebooted.
- Please check I(notes) to more detailed description of states.
- I(exported) state will export the VM to export domain or as OVA.
- I(registered) is supported since 2.4.
- I(reboot) is supported since 2.10; the virtual machine is rebooted only if it's in the up state.
choices: [ absent, next_run, present, registered, running, stopped, suspended, exported, reboot ]
default: present
cluster:
description:
- Name of the cluster, where Virtual Machine should be created.
- Required if creating VM.
allow_partial_import:
description:
- Boolean indicating whether to allow partial registration of the Virtual Machine when C(state) is registered.
type: bool
version_added: "2.4"
vnic_profile_mappings:
description:
- "Mapper which maps an external virtual NIC profile to one that exists in the engine when C(state) is registered.
vnic_profile is described by the following dictionary:"
suboptions:
source_network_name:
description:
- The network name of the source network.
source_profile_name:
description:
- The profile name related to the source network.
target_profile_id:
description:
- The id of the target profile id to be mapped to in the engine.
version_added: "2.5"
cluster_mappings:
description:
- "Mapper which maps cluster name between VM's OVF and the destination cluster this VM should be registered to,
relevant when C(state) is registered.
Cluster mapping is described by the following dictionary:"
suboptions:
source_name:
description:
- The name of the source cluster.
dest_name:
description:
- The name of the destination cluster.
version_added: "2.5"
role_mappings:
description:
- "Mapper which maps role name between VM's OVF and the destination role this VM should be registered to,
relevant when C(state) is registered.
Role mapping is described by the following dictionary:"
suboptions:
source_name:
description:
- The name of the source role.
dest_name:
description:
- The name of the destination role.
version_added: "2.5"
domain_mappings:
description:
- "Mapper which maps aaa domain name between VM's OVF and the destination aaa domain this VM should be registered to,
relevant when C(state) is registered.
The aaa domain mapping is described by the following dictionary:"
suboptions:
source_name:
description:
- The name of the source aaa domain.
dest_name:
description:
- The name of the destination aaa domain.
version_added: "2.5"
affinity_group_mappings:
description:
- "Mapper which maps affinity name between VM's OVF and the destination affinity this VM should be registered to,
relevant when C(state) is registered."
version_added: "2.5"
affinity_label_mappings:
description:
- "Mapper which maps affinity label name between VM's OVF and the destination label this VM should be registered to,
relevant when C(state) is registered."
version_added: "2.5"
lun_mappings:
description:
- "Mapper which maps lun between VM's OVF and the destination lun this VM should contain, relevant when C(state) is registered.
lun_mappings is described by the following dictionary:
- C(logical_unit_id): The logical unit number to identify a logical unit,
- C(logical_unit_port): The port being used to connect with the LUN disk.
- C(logical_unit_portal): The portal being used to connect with the LUN disk.
- C(logical_unit_address): The address of the block storage host.
- C(logical_unit_target): The iSCSI specification located on an iSCSI server
- C(logical_unit_username): Username to be used to connect to the block storage host.
- C(logical_unit_password): Password to be used to connect to the block storage host.
- C(storage_type): The storage type which the LUN reside on (iscsi or fcp)"
version_added: "2.5"
reassign_bad_macs:
description:
- "Boolean indication whether to reassign bad macs when C(state) is registered."
type: bool
version_added: "2.5"
template:
description:
- Name of the template, which should be used to create Virtual Machine.
- Required if creating VM.
- If template is not specified and VM doesn't exist, VM will be created from I(Blank) template.
template_version:
description:
- Version number of the template to be used for VM.
- By default the latest available version of the template is used.
version_added: "2.3"
use_latest_template_version:
description:
- Specify if latest template version should be used, when running a stateless VM.
- If this parameter is set to I(yes) stateless VM is created.
type: bool
version_added: "2.3"
storage_domain:
description:
- Name of the storage domain where all template disks should be created.
- This parameter is considered only when C(template) is provided.
- IMPORTANT - This parameter is not idempotent; if the VM exists and you specify a different storage domain,
the disk won't move.
version_added: "2.4"
disk_format:
description:
- Specify format of the disk.
- If C(cow) format is used, disk will by created as sparse, so space will be allocated for the volume as needed, also known as I(thin provision).
- If C(raw) format is used, disk storage will be allocated right away, also known as I(preallocated).
- Note that this option isn't idempotent as it's not currently possible to change format of the disk via API.
- This parameter is considered only when C(template) and C(storage domain) is provided.
choices: [ cow, raw ]
default: cow
version_added: "2.4"
memory:
description:
- Amount of memory of the Virtual Machine. Prefix uses IEC 60027-2 standard (for example 1GiB, 1024MiB).
- Default value is set by engine.
memory_guaranteed:
description:
- Amount of minimal guaranteed memory of the Virtual Machine.
Prefix uses IEC 60027-2 standard (for example 1GiB, 1024MiB).
- C(memory_guaranteed) parameter can't be lower than C(memory) parameter.
- Default value is set by engine.
memory_max:
description:
- Upper bound of virtual machine memory up to which memory hot-plug can be performed.
Prefix uses IEC 60027-2 standard (for example 1GiB, 1024MiB).
- Default value is set by engine.
version_added: "2.5"
cpu_shares:
description:
- Set a CPU shares for this Virtual Machine.
- Default value is set by oVirt/RHV engine.
cpu_cores:
description:
- Number of virtual CPUs cores of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
cpu_sockets:
description:
- Number of virtual CPUs sockets of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
cpu_threads:
description:
- Number of threads per core of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
version_added: "2.5"
type:
description:
- Type of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
- I(high_performance) is supported since Ansible 2.5 and oVirt/RHV 4.2.
choices: [ desktop, server, high_performance ]
quota_id:
description:
- "Virtual Machine quota ID to be used for disk. By default quota is chosen by oVirt/RHV engine."
version_added: "2.5"
operating_system:
description:
- Operating system of the Virtual Machine.
- Default value is set by oVirt/RHV engine.
- "Possible values: debian_7, freebsd, freebsdx64, other, other_linux,
other_linux_ppc64, other_ppc64, rhel_3, rhel_4, rhel_4x64, rhel_5, rhel_5x64,
rhel_6, rhel_6x64, rhel_6_ppc64, rhel_7x64, rhel_7_ppc64, sles_11, sles_11_ppc64,
ubuntu_12_04, ubuntu_12_10, ubuntu_13_04, ubuntu_13_10, ubuntu_14_04, ubuntu_14_04_ppc64,
windows_10, windows_10x64, windows_2003, windows_2003x64, windows_2008, windows_2008x64,
windows_2008r2x64, windows_2008R2x64, windows_2012x64, windows_2012R2x64, windows_7,
windows_7x64, windows_8, windows_8x64, windows_xp"
boot_devices:
description:
- List of boot devices which should be used to boot. For example C([ cdrom, hd ]).
- Default value is set by oVirt/RHV engine.
choices: [ cdrom, hd, network ]
boot_menu:
description:
- "I(True) enable menu to select boot device, I(False) to disable it. By default is chosen by oVirt/RHV engine."
type: bool
version_added: "2.5"
usb_support:
description:
- "I(True) enable USB support, I(False) to disable it. By default is chosen by oVirt/RHV engine."
type: bool
version_added: "2.5"
serial_console:
description:
- "I(True) enable VirtIO serial console, I(False) to disable it. By default is chosen by oVirt/RHV engine."
type: bool
version_added: "2.5"
sso:
description:
- "I(True) enable Single Sign On by Guest Agent, I(False) to disable it. By default is chosen by oVirt/RHV engine."
type: bool
version_added: "2.5"
host:
description:
- Specify host where Virtual Machine should be running. By default the host is chosen by engine scheduler.
- This parameter is used only when C(state) is I(running) or I(present).
high_availability:
description:
- If I(yes) Virtual Machine will be set as highly available.
- If I(no) Virtual Machine won't be set as highly available.
- If no value is passed, default value is set by oVirt/RHV engine.
type: bool
high_availability_priority:
description:
- Indicates the priority of the virtual machine inside the run and migration queues.
Virtual machines with higher priorities will be started and migrated before virtual machines with lower
priorities. The value is an integer between 0 and 100. The higher the value, the higher the priority.
- If no value is passed, default value is set by oVirt/RHV engine.
version_added: "2.5"
lease:
description:
- Name of the storage domain this virtual machine lease resides on. Pass an empty string to remove the lease.
- NOTE - Supported since oVirt 4.1.
version_added: "2.4"
custom_compatibility_version:
description:
- "Enables a virtual machine to be customized to its own compatibility version. If
'C(custom_compatibility_version)' is set, it overrides the cluster's compatibility version
for this particular virtual machine."
version_added: "2.7"
host_devices:
description:
- Single Root I/O Virtualization - technology that allows a single device to expose multiple endpoints that can be passed to VMs
- host_devices is a list of dictionaries, each containing the name and state of a device
version_added: "2.7"
delete_protected:
description:
- If I(yes) Virtual Machine will be set as delete protected.
- If I(no) Virtual Machine won't be set as delete protected.
- If no value is passed, default value is set by oVirt/RHV engine.
type: bool
stateless:
description:
- If I(yes) Virtual Machine will be set as stateless.
- If I(no) Virtual Machine will be unset as stateless.
- If no value is passed, default value is set by oVirt/RHV engine.
type: bool
clone:
description:
- If I(yes) then the disks of the created virtual machine will be cloned and independent of the template.
- This parameter is used only when C(state) is I(running) or I(present) and VM didn't exist before.
type: bool
default: 'no'
clone_permissions:
description:
- If I(yes) then the permissions of the template (only the direct ones, not the inherited ones)
will be copied to the created virtual machine.
- This parameter is used only when C(state) is I(running) or I(present) and VM didn't exist before.
type: bool
default: 'no'
cd_iso:
description:
- ISO file from ISO storage domain which should be attached to Virtual Machine.
- If you pass empty string the CD will be ejected from VM.
- If used with C(state) I(running) or I(present) and VM is running the CD will be attached to VM.
- If used with C(state) I(running) or I(present) and VM is down the CD will be attached to VM persistently.
force:
description:
- Please check the I(Synopsis) for a more detailed description of the force parameter; it can behave differently
in different situations.
type: bool
default: 'no'
nics:
description:
- List of NICs, which should be attached to Virtual Machine. NIC is described by following dictionary.
suboptions:
name:
description:
- Name of the NIC.
profile_name:
description:
- Profile name where NIC should be attached.
interface:
description:
- Type of the network interface.
choices: ['virtio', 'e1000', 'rtl8139']
default: 'virtio'
mac_address:
description:
- Custom MAC address of the network interface, by default it's obtained from MAC pool.
- "NOTE - This parameter is used only when C(state) is I(running) or I(present) and is able to only create NICs.
To manage NICs of the VM in more depth please use M(ovirt_nic) module instead."
disks:
description:
- List of disks, which should be attached to Virtual Machine. Disk is described by following dictionary.
suboptions:
name:
description:
- Name of the disk. Either C(name) or C(id) is required.
id:
description:
- ID of the disk. Either C(name) or C(id) is required.
interface:
description:
- Interface of the disk.
choices: ['virtio', 'ide']
default: 'virtio'
bootable:
description:
- I(True) if the disk should be bootable, default is non bootable.
type: bool
activate:
description:
- I(True) if the disk should be activated, default is activated.
- "NOTE - This parameter is used only when C(state) is I(running) or I(present) and is able to only attach disks.
To manage disks of the VM in more depth please use M(ovirt_disk) module instead."
type: bool
sysprep:
description:
- Dictionary with values for Windows Virtual Machine initialization using sysprep.
suboptions:
host_name:
description:
- Hostname to be set to Virtual Machine when deployed.
active_directory_ou:
description:
- Active Directory Organizational Unit, to be used for login of user.
org_name:
description:
- Organization name to be set to Windows Virtual Machine.
domain:
description:
- Domain to be set to Windows Virtual Machine.
timezone:
description:
- Timezone to be set to Windows Virtual Machine.
ui_language:
description:
- UI language of the Windows Virtual Machine.
system_locale:
description:
- System localization of the Windows Virtual Machine.
input_locale:
description:
- Input localization of the Windows Virtual Machine.
windows_license_key:
description:
- License key to be set to Windows Virtual Machine.
user_name:
description:
- Username to be used for set password to Windows Virtual Machine.
root_password:
description:
- Password to be set for username to Windows Virtual Machine.
cloud_init:
description:
- Dictionary with values for Unix-like Virtual Machine initialization using cloud init.
suboptions:
host_name:
description:
- Hostname to be set to Virtual Machine when deployed.
timezone:
description:
- Timezone to be set to Virtual Machine when deployed.
user_name:
description:
- Username to be used to set password to Virtual Machine when deployed.
root_password:
description:
- Password to be set for user specified by C(user_name) parameter.
authorized_ssh_keys:
description:
- Use this SSH keys to login to Virtual Machine.
regenerate_ssh_keys:
description:
- If I(True) SSH keys will be regenerated on Virtual Machine.
type: bool
custom_script:
description:
- Cloud-init script which will be executed on Virtual Machine when deployed.
- This is appended to the end of the cloud-init script generated by any other options.
dns_servers:
description:
- DNS servers to be configured on Virtual Machine.
dns_search:
description:
- DNS search domains to be configured on Virtual Machine.
nic_boot_protocol:
description:
- Set boot protocol of the network interface of Virtual Machine.
choices: ['none', 'dhcp', 'static']
nic_ip_address:
description:
- If boot protocol is static, set this IP address to network interface of Virtual Machine.
nic_netmask:
description:
- If boot protocol is static, set this netmask to network interface of Virtual Machine.
nic_gateway:
description:
- If boot protocol is static, set this gateway to network interface of Virtual Machine.
nic_boot_protocol_v6:
description:
- Set boot protocol of the network interface of Virtual Machine.
choices: ['none', 'dhcp', 'static']
version_added: "2.9"
nic_ip_address_v6:
description:
- If boot protocol is static, set this IP address to network interface of Virtual Machine.
version_added: "2.9"
nic_netmask_v6:
description:
- If boot protocol is static, set this netmask to network interface of Virtual Machine.
version_added: "2.9"
nic_gateway_v6:
description:
- If boot protocol is static, set this gateway to network interface of Virtual Machine.
- For IPv6 addresses the value is an integer in the range of 0-128, which represents the subnet prefix.
version_added: "2.9"
nic_name:
description:
- Set name to network interface of Virtual Machine.
nic_on_boot:
description:
- If I(True) network interface will be set to start on boot.
type: bool
cloud_init_nics:
description:
- List of dictionaries representing network interfaces to be setup by cloud init.
- This option is used when the user needs to set up more network interfaces via cloud-init.
- If one network interface is enough, user should use C(cloud_init) I(nic_*) parameters. C(cloud_init) I(nic_*) parameters
are merged with C(cloud_init_nics) parameters.
suboptions:
nic_boot_protocol:
description:
- Set boot protocol of the network interface of Virtual Machine. Can be one of C(none), C(dhcp) or C(static).
nic_ip_address:
description:
- If boot protocol is static, set this IP address to network interface of Virtual Machine.
nic_netmask:
description:
- If boot protocol is static, set this netmask to network interface of Virtual Machine.
nic_gateway:
description:
- If boot protocol is static, set this gateway to network interface of Virtual Machine.
nic_boot_protocol_v6:
description:
- Set boot protocol of the network interface of Virtual Machine. Can be one of C(none), C(dhcp) or C(static).
version_added: "2.9"
nic_ip_address_v6:
description:
- If boot protocol is static, set this IP address to network interface of Virtual Machine.
version_added: "2.9"
nic_netmask_v6:
description:
- If boot protocol is static, set this netmask to network interface of Virtual Machine.
version_added: "2.9"
nic_gateway_v6:
description:
- If boot protocol is static, set this gateway to network interface of Virtual Machine.
- For IPv6 addresses the value is an integer in the range of 0-128, which represents the subnet prefix.
version_added: "2.9"
nic_name:
description:
- Set name to network interface of Virtual Machine.
nic_on_boot:
description:
- If I(True) network interface will be set to start on boot.
type: bool
version_added: "2.3"
cloud_init_persist:
description:
- "If I(yes) the C(cloud_init) or C(sysprep) parameters will be saved for the virtual machine
and the virtual machine won't be started as run-once."
type: bool
version_added: "2.5"
aliases: [ 'sysprep_persist' ]
default: 'no'
kernel_params_persist:
description:
- "If I(true) C(kernel_params), C(initrd_path) and C(kernel_path) will persist in virtual machine configuration,
if I(False) it will be used for run once."
- Usable with oVirt 4.3 and lower; removed in oVirt 4.4.
type: bool
version_added: "2.8"
kernel_path:
description:
- Path to a kernel image used to boot the virtual machine.
- Kernel image must be stored on either the ISO domain or on the host's storage.
- Usable with oVirt 4.3 and lower; removed in oVirt 4.4.
version_added: "2.3"
initrd_path:
description:
- Path to an initial ramdisk to be used with the kernel specified by C(kernel_path) option.
- Ramdisk image must be stored on either the ISO domain or on the host's storage.
- Usable with oVirt 4.3 and lower; removed in oVirt 4.4.
version_added: "2.3"
kernel_params:
description:
- Kernel command line parameters (formatted as string) to be used with the kernel specified by C(kernel_path) option.
- Usable with oVirt 4.3 and lower; removed in oVirt 4.4.
version_added: "2.3"
instance_type:
description:
- Name of virtual machine's hardware configuration.
- By default no instance type is used.
version_added: "2.3"
description:
description:
- Description of the Virtual Machine.
version_added: "2.3"
comment:
description:
- Comment of the Virtual Machine.
version_added: "2.3"
timezone:
description:
- Sets time zone offset of the guest hardware clock.
- For example C(Etc/GMT)
version_added: "2.3"
serial_policy:
description:
- Specify a serial number policy for the Virtual Machine.
- Following options are supported.
- C(vm) - Sets the Virtual Machine's UUID as its serial number.
- C(host) - Sets the host's UUID as the Virtual Machine's serial number.
- C(custom) - Allows you to specify a custom serial number in C(serial_policy_value).
choices: ['vm', 'host', 'custom']
version_added: "2.3"
serial_policy_value:
description:
- Allows you to specify a custom serial number.
- This parameter is used only when C(serial_policy) is I(custom).
version_added: "2.3"
vmware:
description:
- Dictionary of values to be used to connect to VMware and import
a virtual machine to oVirt.
suboptions:
username:
description:
- The username to authenticate against the VMware.
password:
description:
- The password to authenticate against the VMware.
url:
description:
- The URL to be passed to the I(virt-v2v) tool for conversion.
- For example I(vpx://wmware_user@vcenter-host/DataCenter/Cluster/esxi-host?no_verify=1)
drivers_iso:
description:
- The name of the ISO containing drivers that can be used during the I(virt-v2v) conversion process.
sparse:
description:
- Specifies the disk allocation policy of the resulting virtual machine. I(true) for sparse, I(false) for preallocated.
type: bool
default: true
storage_domain:
description:
- Specifies the target storage domain for converted disks. This is required parameter.
version_added: "2.3"
xen:
description:
- Dictionary of values to be used to connect to XEN and import
a virtual machine to oVirt.
suboptions:
url:
description:
- The URL to be passed to the I(virt-v2v) tool for conversion.
- For example I(xen+ssh://[email protected]). This is required parameter.
drivers_iso:
description:
- The name of the ISO containing drivers that can be used during the I(virt-v2v) conversion process.
sparse:
description:
- Specifies the disk allocation policy of the resulting virtual machine. I(true) for sparse, I(false) for preallocated.
type: bool
default: true
storage_domain:
description:
- Specifies the target storage domain for converted disks. This is required parameter.
version_added: "2.3"
kvm:
description:
- Dictionary of values to be used to connect to kvm and import
a virtual machine to oVirt.
suboptions:
name:
description:
- The name of the KVM virtual machine.
username:
description:
- The username to authenticate against the KVM.
password:
description:
- The password to authenticate against the KVM.
url:
description:
- The URL to be passed to the I(virt-v2v) tool for conversion.
- For example I(qemu:///system). This is required parameter.
drivers_iso:
description:
- The name of the ISO containing drivers that can be used during the I(virt-v2v) conversion process.
sparse:
description:
- Specifies the disk allocation policy of the resulting virtual machine. I(true) for sparse, I(false) for preallocated.
type: bool
default: true
storage_domain:
description:
- Specifies the target storage domain for converted disks. This is required parameter.
version_added: "2.3"
cpu_mode:
description:
- "CPU mode of the virtual machine. It can be some of the following: I(host_passthrough), I(host_model) or I(custom)."
- "For I(host_passthrough) CPU type you need to set C(placement_policy) to I(pinned)."
- "If no value is passed, default value is set by oVirt/RHV engine."
version_added: "2.5"
placement_policy:
description:
- "The configuration of the virtual machine's placement policy."
- "If no value is passed, default value is set by oVirt/RHV engine."
- "Placement policy can be one of the following values:"
suboptions:
migratable:
description:
- "Allow manual and automatic migration."
pinned:
description:
- "Do not allow migration."
user_migratable:
description:
- "Allow manual migration only."
version_added: "2.5"
ticket:
description:
- "If I(true), in addition return I(remote_vv_file) inside I(vm) dictionary, which contains compatible
content for remote-viewer application. Works only C(state) is I(running)."
version_added: "2.7"
type: bool
cpu_pinning:
description:
- "CPU Pinning topology to map virtual machine CPU to host CPU."
- "CPU Pinning topology is a list of dictionary which can have following values:"
suboptions:
cpu:
description:
- "Number of the host CPU."
vcpu:
description:
- "Number of the virtual machine CPU."
version_added: "2.5"
soundcard_enabled:
description:
- "If I(true), the sound card is added to the virtual machine."
type: bool
version_added: "2.5"
smartcard_enabled:
description:
- "If I(true), use smart card authentication."
type: bool
version_added: "2.5"
io_threads:
description:
- "Number of IO threads used by virtual machine. I(0) means IO threading disabled."
version_added: "2.5"
ballooning_enabled:
description:
- "If I(true), use memory ballooning."
- "Memory balloon is a guest device, which may be used to re-distribute / reclaim the host memory
based on VM needs in a dynamic way. In this way it's possible to create memory overcommitment states."
type: bool
version_added: "2.5"
numa_tune_mode:
description:
- "Set how the memory allocation for NUMA nodes of this VM is applied (relevant if NUMA nodes are set for this VM)."
- "It can be one of the following: I(interleave), I(preferred) or I(strict)."
- "If no value is passed, default value is set by oVirt/RHV engine."
choices: ['interleave', 'preferred', 'strict']
version_added: "2.6"
numa_nodes:
description:
- "List of vNUMA Nodes to set for this VM and pin them to assigned host's physical NUMA node."
- "Each vNUMA node is described by following dictionary:"
suboptions:
index:
description:
- "The index of this NUMA node (mandatory)."
memory:
description:
- "Memory size of the NUMA node in MiB (mandatory)."
cores:
description:
- "list of VM CPU cores indexes to be included in this NUMA node (mandatory)."
numa_node_pins:
description:
- "list of physical NUMA node indexes to pin this virtual NUMA node to."
version_added: "2.6"
rng_device:
description:
- "Random number generator (RNG). You can choose of one the following devices I(urandom), I(random) or I(hwrng)."
- "In order to select I(hwrng), you must have it enabled on cluster first."
- "/dev/urandom is used for cluster version >= 4.1, and /dev/random for cluster version <= 4.0"
version_added: "2.5"
custom_properties:
description:
- "Properties sent to VDSM to configure various hooks."
- "Custom properties is a list of dictionary which can have following values:"
suboptions:
name:
description:
- "Name of the custom property. For example: I(hugepages), I(vhost), I(sap_agent), etc."
regexp:
description:
- "Regular expression to set for custom property."
value:
description:
- "Value to set for custom property."
version_added: "2.5"
watchdog:
description:
- "Assign watchdog device for the virtual machine."
- "Watchdogs is a dictionary which can have following values:"
suboptions:
model:
description:
- "Model of the watchdog device. For example: I(i6300esb), I(diag288) or I(null)."
action:
description:
- "Watchdog action to be performed when watchdog is triggered. For example: I(none), I(reset), I(poweroff), I(pause) or I(dump)."
version_added: "2.5"
graphical_console:
description:
- "Assign graphical console to the virtual machine."
suboptions:
headless_mode:
description:
- If I(true) disable the graphics console for this virtual machine.
type: bool
protocol:
description:
- Graphical protocol, a list of I(spice), I(vnc), or both.
type: list
disconnect_action:
description:
- "Returns the action that will take place when the graphic console(SPICE only) is disconnected. The options are:"
- I(none) No action is taken.
- I(lock_screen) Locks the currently active user session.
- I(logout) Logs out the currently active user session.
- I(reboot) Initiates a graceful virtual machine reboot.
- I(shutdown) Initiates a graceful virtual machine shutdown.
type: str
version_added: "2.10"
keyboard_layout:
description:
- The keyboard layout to use with this graphic console.
- This option is only available for the VNC console type.
- If no keyboard is enabled then it won't be reported.
type: str
version_added: "2.10"
monitors:
description:
- The number of monitors opened for this graphic console.
- This option is only available for the SPICE protocol.
- Possible values are 1, 2 or 4.
type: int
version_added: "2.10"
version_added: "2.5"
exclusive:
description:
- "When C(state) is I(exported) this parameter indicates if the existing VM with the
same name should be overwritten."
version_added: "2.8"
type: bool
export_domain:
description:
- "When C(state) is I(exported) this parameter specifies the name of the export storage domain."
version_added: "2.8"
export_ova:
description:
- Dictionary of values to be used to export VM as OVA.
suboptions:
host:
description:
- The name of the destination host where the OVA has to be exported.
directory:
description:
- The name of the directory where the OVA has to be exported.
filename:
description:
- The name of the exported OVA file.
version_added: "2.8"
force_migrate:
description:
- If I(true), the VM will migrate when I(placement_policy=user-migratable) but not when I(placement_policy=pinned).
version_added: "2.8"
type: bool
migrate:
description:
- "If I(true), the VM will migrate to any available host."
version_added: "2.8"
type: bool
next_run:
description:
- "If I(true), the update will not be applied to the VM immediately and will only be applied when the virtual machine is restarted."
- NOTE - If there are multiple next run configuration changes on the VM, the first change may get reverted if this option is not passed.
version_added: "2.8"
type: bool
snapshot_name:
description:
- "Snapshot to clone VM from."
- "Snapshot with description specified should exist."
- "You have to specify the C(snapshot_vm) parameter with the name of the virtual machine that owns this snapshot."
version_added: "2.9"
snapshot_vm:
description:
- "Source VM to clone VM from."
- "VM should have snapshot specified by C(snapshot)."
- "If C(snapshot_name) specified C(snapshot_vm) is required."
version_added: "2.9"
custom_emulated_machine:
description:
- "Sets the value of the custom_emulated_machine attribute."
version_added: "2.10"
notes:
- If the VM is in I(UNASSIGNED) or I(UNKNOWN) state before any operation, the module will fail.
If the VM is in I(IMAGE_LOCKED) state before any operation, we try to wait for the VM to be I(DOWN).
If the VM is in I(SAVING_STATE) state before any operation, we try to wait for the VM to be I(SUSPENDED).
If the VM is in I(POWERING_DOWN) state before any operation, we try to wait for the VM to be I(UP) or I(DOWN). The VM can
get into I(UP) state from I(POWERING_DOWN) state when there is no ACPI or guest agent running inside the VM, or
if the shutdown operation fails.
When the user specifies I(UP) C(state), we always wait for the VM to be in I(UP) state in case the VM is I(MIGRATING),
I(REBOOTING), I(POWERING_UP), I(RESTORING_STATE) or I(WAIT_FOR_LAUNCH). In other states we run the start operation on the VM.
When the user specifies I(stopped) C(state) and passes the C(force) parameter set to I(true), we forcibly stop the VM in
any state. If the user doesn't pass the C(force) parameter, we always wait for the VM to be in I(UP) state in case the VM is
I(MIGRATING), I(REBOOTING), I(POWERING_UP), I(RESTORING_STATE) or I(WAIT_FOR_LAUNCH). If the VM is in I(PAUSED) or
I(SUSPENDED) state, we start the VM. Then we gracefully shut down the VM.
When the user specifies I(suspended) C(state), we always wait for the VM to be in I(UP) state in case the VM is I(MIGRATING),
I(REBOOTING), I(POWERING_UP), I(RESTORING_STATE) or I(WAIT_FOR_LAUNCH). If the VM is in I(PAUSED) or I(DOWN) state,
we start the VM. Then we suspend the VM.
When the user specifies I(absent) C(state), we forcibly stop the VM in any state and remove it.
- "If you update a VM parameter that requires a reboot, the oVirt engine always creates a new snapshot for the VM,
and an Ansible playbook will report this as changed."
extends_documentation_fragment: ovirt
'''
EXAMPLES = '''
# Examples don't contain auth parameter for simplicity,
# look at ovirt_auth module to see how to reuse authentication:
- name: Creates a new Virtual Machine from template named 'rhel7_template'
ovirt_vm:
state: present
name: myvm
template: rhel7_template
cluster: mycluster
- name: Register VM
ovirt_vm:
state: registered
storage_domain: mystorage
cluster: mycluster
name: myvm
- name: Register VM using id
ovirt_vm:
state: registered
storage_domain: mystorage
cluster: mycluster
id: 1111-1111-1111-1111
- name: Register VM, allowing partial import
ovirt_vm:
state: registered
storage_domain: mystorage
allow_partial_import: "True"
cluster: mycluster
id: 1111-1111-1111-1111
- name: Register VM with vnic profile mappings and reassign bad macs
ovirt_vm:
state: registered
storage_domain: mystorage
cluster: mycluster
id: 1111-1111-1111-1111
vnic_profile_mappings:
- source_network_name: mynetwork
source_profile_name: mynetwork
target_profile_id: 3333-3333-3333-3333
- source_network_name: mynetwork2
source_profile_name: mynetwork2
target_profile_id: 4444-4444-4444-4444
reassign_bad_macs: "True"
- name: Register VM with mappings
ovirt_vm:
state: registered
storage_domain: mystorage
cluster: mycluster
id: 1111-1111-1111-1111
role_mappings:
- source_name: Role_A
dest_name: Role_B
domain_mappings:
- source_name: Domain_A
dest_name: Domain_B
lun_mappings:
- source_storage_type: iscsi
source_logical_unit_id: 1IET_000d0001
source_logical_unit_port: 3260
source_logical_unit_portal: 1
source_logical_unit_address: 10.34.63.203
source_logical_unit_target: iqn.2016-08-09.brq.str-01:omachace
dest_storage_type: iscsi
dest_logical_unit_id: 1IET_000d0002
dest_logical_unit_port: 3260
dest_logical_unit_portal: 1
dest_logical_unit_address: 10.34.63.204
dest_logical_unit_target: iqn.2016-08-09.brq.str-02:omachace
affinity_group_mappings:
- source_name: Affinity_A
dest_name: Affinity_B
affinity_label_mappings:
- source_name: Label_A
dest_name: Label_B
cluster_mappings:
- source_name: cluster_A
dest_name: cluster_B
- name: Creates a stateless VM which will always use latest template version
ovirt_vm:
name: myvm
template: rhel7
cluster: mycluster
use_latest_template_version: true
# Creates a new server rhel7 Virtual Machine from Blank template
# on brq01 cluster with 2GiB memory and 2 vcpu cores/sockets
# and attach bootable disk with name rhel7_disk and attach virtio NIC
- ovirt_vm:
state: present
cluster: brq01
name: myvm
memory: 2GiB
cpu_cores: 2
cpu_sockets: 2
cpu_shares: 1024
type: server
operating_system: rhel_7x64
disks:
- name: rhel7_disk
bootable: True
nics:
- name: nic1
# Change VM Name
- ovirt_vm:
id: 00000000-0000-0000-0000-000000000000
name: "new_vm_name"
- name: Run VM with cloud init
ovirt_vm:
name: rhel7
template: rhel7
cluster: Default
memory: 1GiB
high_availability: true
high_availability_priority: 50 # Available from Ansible 2.5
cloud_init:
nic_boot_protocol: static
nic_ip_address: 10.34.60.86
nic_netmask: 255.255.252.0
nic_gateway: 10.34.63.254
nic_name: eth1
nic_on_boot: true
host_name: example.com
custom_script: |
write_files:
- content: |
Hello, world!
path: /tmp/greeting.txt
permissions: '0644'
user_name: root
root_password: super_password
- name: Run VM with cloud init, with multiple network interfaces
ovirt_vm:
name: rhel7_4
template: rhel7
cluster: mycluster
cloud_init_nics:
- nic_name: eth0
nic_boot_protocol: dhcp
nic_on_boot: true
- nic_name: eth1
nic_boot_protocol: static
nic_ip_address: 10.34.60.86
nic_netmask: 255.255.252.0
nic_gateway: 10.34.63.254
nic_on_boot: true
# IP version 6 parameters are supported since ansible 2.9
- nic_name: eth2
nic_boot_protocol_v6: static
nic_ip_address_v6: '2620:52:0:2282:b898:1f69:6512:36c5'
nic_gateway_v6: '2620:52:0:2282:b898:1f69:6512:36c9'
nic_netmask_v6: '120'
nic_on_boot: true
- nic_name: eth3
nic_on_boot: true
nic_boot_protocol_v6: dhcp
- name: Run VM with sysprep
ovirt_vm:
name: windows2012R2_AD
template: windows2012R2
cluster: Default
memory: 3GiB
high_availability: true
sysprep:
host_name: windowsad.example.com
user_name: Administrator
root_password: SuperPassword123
- name: Migrate/Run VM to/on host named 'host1'
ovirt_vm:
state: running
name: myvm
host: host1
- name: Migrate VM to any available host
ovirt_vm:
state: running
name: myvm
migrate: true
- name: Change VMs CD
ovirt_vm:
name: myvm
cd_iso: drivers.iso
- name: Eject VMs CD
ovirt_vm:
name: myvm
cd_iso: ''
- name: Boot VM from CD
ovirt_vm:
name: myvm
cd_iso: centos7_x64.iso
boot_devices:
- cdrom
- name: Stop vm
ovirt_vm:
state: stopped
name: myvm
- name: Upgrade memory to already created VM
ovirt_vm:
name: myvm
memory: 4GiB
- name: Hot plug memory to already created and running VM (VM won't be restarted)
ovirt_vm:
name: myvm
memory: 4GiB
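# A sketch (not part of the original examples): tune VM devices using
# options documented above; the values are illustrative.
- name: Enable ballooning, IO threads and an RNG device on a VM
  ovirt_vm:
    name: myvm
    cluster: mycluster
    ballooning_enabled: true
    io_threads: 2
    rng_device: urandom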
# Create/update a VM to run with two vNUMA nodes and pin them to physical NUMA nodes as follows:
# vnuma index 0-> numa index 0, vnuma index 1-> numa index 1
- name: Create a VM to run with two vNUMA nodes
ovirt_vm:
name: myvm
cluster: mycluster
numa_tune_mode: "interleave"
numa_nodes:
- index: 0
cores: [0]
memory: 20
numa_node_pins: [0]
- index: 1
cores: [1]
memory: 30
numa_node_pins: [1]
- name: Update an existing VM to run without previously created vNUMA nodes (i.e. remove all vNUMA nodes+NUMA pinning setting)
ovirt_vm:
name: myvm
cluster: mycluster
state: "present"
numa_tune_mode: "interleave"
numa_nodes:
- index: -1
# When change on the VM needs restart of the VM, use next_run state,
# The VM will be updated and rebooted if there are any changes.
# If present state would be used, VM won't be restarted.
- ovirt_vm:
state: next_run
name: myvm
boot_devices:
- network
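# A sketch with illustrative values: set VDSM custom properties on a VM.
# The property names must be defined on the engine side.
- name: Set custom properties on VM
  ovirt_vm:
    name: myvm
    cluster: mycluster
    custom_properties:
      - name: hugepages
        value: "2048"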
- name: Import virtual machine from VMware
ovirt_vm:
state: stopped
cluster: mycluster
name: vmware_win10
timeout: 1800
poll_interval: 30
vmware:
url: vpx://[email protected]/Folder1/Cluster1/2.3.4.5?no_verify=1
name: windows10
storage_domain: mynfs
username: user
password: password
- name: Create vm from template and create all disks on specific storage domain
ovirt_vm:
name: vm_test
cluster: mycluster
template: mytemplate
storage_domain: mynfs
nics:
- name: nic1
- name: Remove VM, if VM is running it will be stopped
ovirt_vm:
state: absent
name: myvm
# Defining a specific quota for a VM:
# Since Ansible 2.5
- ovirt_quotas_facts:
data_center: Default
name: myquota
- ovirt_vm:
name: myvm
sso: False
boot_menu: True
usb_support: True
serial_console: True
quota_id: "{{ ovirt_quotas[0]['id'] }}"
- name: Create a VM that has the console configured for both Spice and VNC
ovirt_vm:
name: myvm
template: mytemplate
cluster: mycluster
graphical_console:
protocol:
- spice
- vnc
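# A sketch using the watchdog option documented above; the model and
# action values are illustrative.
- name: Add a watchdog device that resets the VM when triggered
  ovirt_vm:
    name: myvm
    cluster: mycluster
    watchdog:
      model: i6300esb
      action: reset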
# Execute remote viewer to VM
- block:
- name: Create a ticket for console for a running VM
ovirt_vm:
name: myvm
ticket: true
state: running
register: myvm
- name: Save ticket to file
copy:
content: "{{ myvm.vm.remote_vv_file }}"
dest: ~/vvfile.vv
- name: Run remote viewer with file
command: remote-viewer ~/vvfile.vv
# Default value of host_device state is present
- name: Attach host devices to virtual machine
ovirt_vm:
name: myvm
host: myhost
placement_policy: pinned
host_devices:
- name: pci_0000_00_06_0
- name: pci_0000_00_07_0
state: absent
- name: pci_0000_00_08_0
state: present
- name: Export the VM as OVA
ovirt_vm:
name: myvm
state: exported
cluster: mycluster
export_ova:
host: myhost
filename: myvm.ova
directory: /tmp/
- name: Clone VM from snapshot
ovirt_vm:
snapshot_vm: myvm
snapshot_name: myvm_snap
name: myvm_clone
state: present
'''
RETURN = '''
id:
description: ID of the VM which is managed
returned: On success if VM is found.
type: str
sample: 7de90f31-222c-436c-a1ca-7e655bd5b60c
vm:
description: "Dictionary of all the VM attributes. VM attributes can be found on your oVirt/RHV instance
at following url: http://ovirt.github.io/ovirt-engine-api-model/master/#types/vm.
Additionally when user sent ticket=true, this module will return also remote_vv_file
parameter in vm dictionary, which contains remote-viewer compatible file to open virtual
machine console. Please note that this file contains sensitive information."
returned: On success if VM is found.
type: dict
'''
import traceback
try:
import ovirtsdk4.types as otypes
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ovirt import (
BaseModule,
check_params,
check_sdk,
convert_to_bytes,
create_connection,
equal,
get_dict_of_struct,
get_entity,
get_link_name,
get_id_by_name,
ovirt_full_argument_spec,
search_by_attributes,
search_by_name,
wait,
engine_supported,
)
class VmsModule(BaseModule):
def __init__(self, *args, **kwargs):
super(VmsModule, self).__init__(*args, **kwargs)
self._initialization = None
self._is_new = False
def __get_template_with_version(self):
"""
oVirt/RHV version 4.1 doesn't support searching by template+version_number,
so we need to list all templates with the specific name and then iterate
through its versions until we find the one we are looking for.
"""
template = None
templates_service = self._connection.system_service().templates_service()
if self.param('template'):
clusters_service = self._connection.system_service().clusters_service()
cluster = search_by_name(clusters_service, self.param('cluster'))
data_center = self._connection.follow_link(cluster.data_center)
templates = templates_service.list(
search='name=%s and datacenter=%s' % (self.param('template'), data_center.name)
)
if self.param('template_version'):
templates = [
t for t in templates
if t.version.version_number == self.param('template_version')
]
if not templates:
raise ValueError(
"Template with name '%s' and version '%s' in data center '%s' was not found" % (
self.param('template'),
self.param('template_version'),
data_center.name
)
)
template = sorted(templates, key=lambda t: t.version.version_number, reverse=True)[0]
elif self._is_new:
# If template isn't specified and the VM is about to be created, use the default (Blank) template:
template = templates_service.template_service('00000000-0000-0000-0000-000000000000').get()
return template
def __get_storage_domain_and_all_template_disks(self, template):
if self.param('template') is None:
return None
if self.param('storage_domain') is None:
return None
disks = list()
for att in self._connection.follow_link(template.disk_attachments):
disks.append(
otypes.DiskAttachment(
disk=otypes.Disk(
id=att.disk.id,
format=otypes.DiskFormat(self.param('disk_format')),
storage_domains=[
otypes.StorageDomain(
id=get_id_by_name(
self._connection.system_service().storage_domains_service(),
self.param('storage_domain')
)
)
]
)
)
)
return disks
def __get_snapshot(self):
if self.param('snapshot_vm') is None:
return None
if self.param('snapshot_name') is None:
return None
vms_service = self._connection.system_service().vms_service()
vm_id = get_id_by_name(vms_service, self.param('snapshot_vm'))
vm_service = vms_service.vm_service(vm_id)
snaps_service = vm_service.snapshots_service()
snaps = snaps_service.list()
snap = next(
(s for s in snaps if s.description == self.param('snapshot_name')),
None
)
return snap
def __get_cluster(self):
if self.param('cluster') is not None:
return self.param('cluster')
elif self.param('snapshot_name') is not None and self.param('snapshot_vm') is not None:
vms_service = self._connection.system_service().vms_service()
vm = search_by_name(vms_service, self.param('snapshot_vm'))
return self._connection.system_service().clusters_service().cluster_service(vm.cluster.id).get().name
def build_entity(self):
template = self.__get_template_with_version()
cluster = self.__get_cluster()
snapshot = self.__get_snapshot()
display = self.param('graphical_console') or dict()
disk_attachments = self.__get_storage_domain_and_all_template_disks(template)
return otypes.Vm(
id=self.param('id'),
name=self.param('name'),
cluster=otypes.Cluster(
name=cluster
) if cluster else None,
disk_attachments=disk_attachments,
template=otypes.Template(
id=template.id,
) if template else None,
use_latest_template_version=self.param('use_latest_template_version'),
stateless=self.param('stateless') or self.param('use_latest_template_version'),
delete_protected=self.param('delete_protected'),
custom_emulated_machine=self.param('custom_emulated_machine'),
bios=(
otypes.Bios(boot_menu=otypes.BootMenu(enabled=self.param('boot_menu')))
) if self.param('boot_menu') is not None else None,
console=(
otypes.Console(enabled=self.param('serial_console'))
) if self.param('serial_console') is not None else None,
usb=(
otypes.Usb(enabled=self.param('usb_support'))
) if self.param('usb_support') is not None else None,
sso=(
otypes.Sso(
methods=[otypes.Method(id=otypes.SsoMethod.GUEST_AGENT)] if self.param('sso') else []
)
) if self.param('sso') is not None else None,
quota=otypes.Quota(id=self._module.params.get('quota_id')) if self.param('quota_id') is not None else None,
high_availability=otypes.HighAvailability(
enabled=self.param('high_availability'),
priority=self.param('high_availability_priority'),
) if self.param('high_availability') is not None or self.param('high_availability_priority') else None,
lease=otypes.StorageDomainLease(
storage_domain=otypes.StorageDomain(
id=get_id_by_name(
service=self._connection.system_service().storage_domains_service(),
name=self.param('lease')
) if self.param('lease') else None
)
) if self.param('lease') is not None else None,
cpu=otypes.Cpu(
topology=otypes.CpuTopology(
cores=self.param('cpu_cores'),
sockets=self.param('cpu_sockets'),
threads=self.param('cpu_threads'),
) if any((
self.param('cpu_cores'),
self.param('cpu_sockets'),
self.param('cpu_threads')
)) else None,
cpu_tune=otypes.CpuTune(
vcpu_pins=[
otypes.VcpuPin(vcpu=int(pin['vcpu']), cpu_set=str(pin['cpu'])) for pin in self.param('cpu_pinning')
],
) if self.param('cpu_pinning') else None,
mode=otypes.CpuMode(self.param('cpu_mode')) if self.param('cpu_mode') else None,
) if any((
self.param('cpu_cores'),
self.param('cpu_sockets'),
self.param('cpu_threads'),
self.param('cpu_mode'),
self.param('cpu_pinning')
)) else None,
cpu_shares=self.param('cpu_shares'),
os=otypes.OperatingSystem(
type=self.param('operating_system'),
boot=otypes.Boot(
devices=[
otypes.BootDevice(dev) for dev in self.param('boot_devices')
],
) if self.param('boot_devices') else None,
cmdline=self.param('kernel_params') if self.param('kernel_params_persist') else None,
initrd=self.param('initrd_path') if self.param('kernel_params_persist') else None,
kernel=self.param('kernel_path') if self.param('kernel_params_persist') else None,
) if (
self.param('operating_system') or self.param('boot_devices') or self.param('kernel_params_persist')
) else None,
type=otypes.VmType(
self.param('type')
) if self.param('type') else None,
memory=convert_to_bytes(
self.param('memory')
) if self.param('memory') else None,
memory_policy=otypes.MemoryPolicy(
guaranteed=convert_to_bytes(self.param('memory_guaranteed')),
ballooning=self.param('ballooning_enabled'),
max=convert_to_bytes(self.param('memory_max')),
) if any((
self.param('memory_guaranteed'),
self.param('ballooning_enabled') is not None,
self.param('memory_max')
)) else None,
instance_type=otypes.InstanceType(
id=get_id_by_name(
self._connection.system_service().instance_types_service(),
self.param('instance_type'),
),
) if self.param('instance_type') else None,
custom_compatibility_version=otypes.Version(
major=self._get_major(self.param('custom_compatibility_version')),
minor=self._get_minor(self.param('custom_compatibility_version')),
) if self.param('custom_compatibility_version') is not None else None,
description=self.param('description'),
comment=self.param('comment'),
time_zone=otypes.TimeZone(
name=self.param('timezone'),
) if self.param('timezone') else None,
serial_number=otypes.SerialNumber(
policy=otypes.SerialNumberPolicy(self.param('serial_policy')),
value=self.param('serial_policy_value'),
) if (
self.param('serial_policy') is not None or
self.param('serial_policy_value') is not None
) else None,
placement_policy=otypes.VmPlacementPolicy(
affinity=otypes.VmAffinity(self.param('placement_policy')),
hosts=[
otypes.Host(name=self.param('host')),
] if self.param('host') else None,
) if self.param('placement_policy') else None,
soundcard_enabled=self.param('soundcard_enabled'),
display=otypes.Display(
smartcard_enabled=self.param('smartcard_enabled'),
disconnect_action=display.get('disconnect_action'),
keyboard_layout=display.get('keyboard_layout'),
monitors=display.get('monitors'),
) if (
self.param('smartcard_enabled') is not None or
display.get('disconnect_action') is not None or
display.get('keyboard_layout') is not None or
display.get('monitors') is not None
) else None,
io=otypes.Io(
threads=self.param('io_threads'),
) if self.param('io_threads') is not None else None,
numa_tune_mode=otypes.NumaTuneMode(
self.param('numa_tune_mode')
) if self.param('numa_tune_mode') else None,
rng_device=otypes.RngDevice(
source=otypes.RngSource(self.param('rng_device')),
) if self.param('rng_device') else None,
custom_properties=[
otypes.CustomProperty(
name=cp.get('name'),
regexp=cp.get('regexp'),
value=str(cp.get('value')),
) for cp in self.param('custom_properties') if cp
] if self.param('custom_properties') is not None else None,
initialization=self.get_initialization() if self.param('cloud_init_persist') else None,
snapshots=[otypes.Snapshot(id=snapshot.id)] if snapshot is not None else None,
)
def _get_export_domain_service(self):
provider_name = self._module.params['export_domain']
export_sds_service = self._connection.system_service().storage_domains_service()
export_sd_id = get_id_by_name(export_sds_service, provider_name)
return export_sds_service.service(export_sd_id)
def post_export_action(self, entity):
self._service = self._get_export_domain_service().vms_service()
def update_check(self, entity):
res = self._update_check(entity)
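# If a next-run configuration exists, the pending configuration has to
# match the requested parameters too, otherwise an update is still needed: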
if entity.next_run_configuration_exists:
res = res and self._update_check(self._service.service(entity.id).get(next_run=True))
return res
def _update_check(self, entity):
def check_cpu_pinning():
if self.param('cpu_pinning'):
current = []
if entity.cpu.cpu_tune:
current = [(str(pin.cpu_set), int(pin.vcpu)) for pin in entity.cpu.cpu_tune.vcpu_pins]
passed = [(str(pin['cpu']), int(pin['vcpu'])) for pin in self.param('cpu_pinning')]
return sorted(current) == sorted(passed)
return True
def check_custom_properties():
if self.param('custom_properties'):
current = []
if entity.custom_properties:
current = [(cp.name, cp.regexp, str(cp.value)) for cp in entity.custom_properties]
passed = [(cp.get('name'), cp.get('regexp'), str(cp.get('value'))) for cp in self.param('custom_properties') if cp]
return sorted(current) == sorted(passed)
return True
def check_host():
if self.param('host') is not None:
return self.param('host') in [self._connection.follow_link(host).name for host in getattr(entity.placement_policy, 'hosts', None) or []]
return True
def check_custom_compatibility_version():
if self.param('custom_compatibility_version') is not None:
return (self._get_minor(self.param('custom_compatibility_version')) == self._get_minor(entity.custom_compatibility_version) and
self._get_major(self.param('custom_compatibility_version')) == self._get_major(entity.custom_compatibility_version))
return True
cpu_mode = getattr(entity.cpu, 'mode')
vm_display = entity.display
provided_vm_display = self.param('graphical_console') or dict()
return (
check_cpu_pinning() and
check_custom_properties() and
check_host() and
check_custom_compatibility_version() and
not self.param('cloud_init_persist') and
not self.param('kernel_params_persist') and
equal(self.param('cluster'), get_link_name(self._connection, entity.cluster)) and equal(convert_to_bytes(self.param('memory')), entity.memory) and
equal(convert_to_bytes(self.param('memory_guaranteed')), entity.memory_policy.guaranteed) and
equal(convert_to_bytes(self.param('memory_max')), entity.memory_policy.max) and
equal(self.param('cpu_cores'), entity.cpu.topology.cores) and
equal(self.param('cpu_sockets'), entity.cpu.topology.sockets) and
equal(self.param('cpu_threads'), entity.cpu.topology.threads) and
equal(self.param('cpu_mode'), str(cpu_mode) if cpu_mode else None) and
equal(self.param('type'), str(entity.type)) and
equal(self.param('name'), str(entity.name)) and
equal(self.param('operating_system'), str(entity.os.type)) and
equal(self.param('boot_menu'), entity.bios.boot_menu.enabled) and
equal(self.param('soundcard_enabled'), entity.soundcard_enabled) and
equal(self.param('smartcard_enabled'), getattr(vm_display, 'smartcard_enabled', False)) and
equal(self.param('io_threads'), entity.io.threads) and
equal(self.param('ballooning_enabled'), entity.memory_policy.ballooning) and
equal(self.param('serial_console'), getattr(entity.console, 'enabled', None)) and
equal(self.param('usb_support'), entity.usb.enabled) and
equal(self.param('sso'), True if entity.sso.methods else False) and
equal(self.param('quota_id'), getattr(entity.quota, 'id', None)) and
equal(self.param('high_availability'), entity.high_availability.enabled) and
equal(self.param('high_availability_priority'), entity.high_availability.priority) and
equal(self.param('lease'), get_link_name(self._connection, getattr(entity.lease, 'storage_domain', None))) and
equal(self.param('stateless'), entity.stateless) and
equal(self.param('cpu_shares'), entity.cpu_shares) and
equal(self.param('delete_protected'), entity.delete_protected) and
equal(self.param('custom_emulated_machine'), entity.custom_emulated_machine) and
equal(self.param('use_latest_template_version'), entity.use_latest_template_version) and
equal(self.param('boot_devices'), [str(dev) for dev in getattr(entity.os.boot, 'devices', [])]) and
equal(self.param('instance_type'), get_link_name(self._connection, entity.instance_type), ignore_case=True) and
equal(self.param('description'), entity.description) and
equal(self.param('comment'), entity.comment) and
equal(self.param('timezone'), getattr(entity.time_zone, 'name', None)) and
equal(self.param('serial_policy'), str(getattr(entity.serial_number, 'policy', None))) and
equal(self.param('serial_policy_value'), getattr(entity.serial_number, 'value', None)) and
equal(self.param('placement_policy'), str(entity.placement_policy.affinity) if entity.placement_policy else None) and
equal(self.param('numa_tune_mode'), str(entity.numa_tune_mode)) and
equal(self.param('rng_device'), str(entity.rng_device.source) if entity.rng_device else None) and
equal(provided_vm_display.get('monitors'), getattr(vm_display, 'monitors', None)) and
equal(provided_vm_display.get('keyboard_layout'), getattr(vm_display, 'keyboard_layout', None)) and
equal(provided_vm_display.get('disconnect_action'), getattr(vm_display, 'disconnect_action', None), ignore_case=True)
)
def pre_create(self, entity):
# Mark if entity exists before touching it:
if entity is None:
self._is_new = True
def post_update(self, entity):
self.post_present(entity.id)
def post_present(self, entity_id):
# After creation of the VM, attach disks and NICs:
entity = self._service.service(entity_id).get()
self.__attach_disks(entity)
self.__attach_nics(entity)
self._attach_cd(entity)
self.changed = self.__attach_numa_nodes(entity) or self.changed
self.changed = self.__attach_watchdog(entity) or self.changed
self.changed = self.__attach_graphical_console(entity) or self.changed
self.changed = self.__attach_host_devices(entity) or self.changed
def pre_remove(self, entity):
# Forcibly stop the VM, if it's not in DOWN state:
if entity.status != otypes.VmStatus.DOWN:
if not self._module.check_mode:
self.changed = self.action(
action='stop',
action_condition=lambda vm: vm.status != otypes.VmStatus.DOWN,
wait_condition=lambda vm: vm.status == otypes.VmStatus.DOWN,
)['changed']
def __suspend_shutdown_common(self, vm_service):
if vm_service.get().status in [
otypes.VmStatus.MIGRATING,
otypes.VmStatus.POWERING_UP,
otypes.VmStatus.REBOOT_IN_PROGRESS,
otypes.VmStatus.WAIT_FOR_LAUNCH,
otypes.VmStatus.UP,
otypes.VmStatus.RESTORING_STATE,
]:
self._wait_for_UP(vm_service)
def _pre_shutdown_action(self, entity):
vm_service = self._service.vm_service(entity.id)
self.__suspend_shutdown_common(vm_service)
if entity.status in [otypes.VmStatus.SUSPENDED, otypes.VmStatus.PAUSED]:
vm_service.start()
self._wait_for_UP(vm_service)
return vm_service.get()
def _pre_suspend_action(self, entity):
vm_service = self._service.vm_service(entity.id)
self.__suspend_shutdown_common(vm_service)
if entity.status in [otypes.VmStatus.PAUSED, otypes.VmStatus.DOWN]:
vm_service.start()
self._wait_for_UP(vm_service)
return vm_service.get()
def _post_start_action(self, entity):
vm_service = self._service.service(entity.id)
self._wait_for_UP(vm_service)
self._attach_cd(vm_service.get())
def _attach_cd(self, entity):
cd_iso = self.param('cd_iso')
if cd_iso is not None:
vm_service = self._service.service(entity.id)
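# Changing the 'current' CD affects only the running session; this is done
# when the VM is UP and state=running, otherwise the persistent
# configuration is updated: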
current = vm_service.get().status == otypes.VmStatus.UP and self.param('state') == 'running'
cdroms_service = vm_service.cdroms_service()
cdrom_device = cdroms_service.list()[0]
cdrom_service = cdroms_service.cdrom_service(cdrom_device.id)
cdrom = cdrom_service.get(current=current)
if getattr(cdrom.file, 'id', '') != cd_iso:
if not self._module.check_mode:
cdrom_service.update(
cdrom=otypes.Cdrom(
file=otypes.File(id=cd_iso)
),
current=current,
)
self.changed = True
return entity
def _migrate_vm(self, entity):
vm_host = self.param('host')
vm_service = self._service.vm_service(entity.id)
# The VM can only be migrated while it is UP:
if entity.status == otypes.VmStatus.UP:
if vm_host is not None:
hosts_service = self._connection.system_service().hosts_service()
current_vm_host = hosts_service.host_service(entity.host.id).get().name
if vm_host != current_vm_host:
if not self._module.check_mode:
vm_service.migrate(host=otypes.Host(name=vm_host), force=self.param('force_migrate'))
self._wait_for_UP(vm_service)
self.changed = True
elif self.param('migrate'):
if not self._module.check_mode:
vm_service.migrate(force=self.param('force_migrate'))
self._wait_for_UP(vm_service)
self.changed = True
return entity
def _wait_for_UP(self, vm_service):
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.UP,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
def _wait_for_vm_disks(self, vm_service):
disks_service = self._connection.system_service().disks_service()
for da in vm_service.disk_attachments_service().list():
disk_service = disks_service.disk_service(da.disk.id)
wait(
service=disk_service,
condition=lambda disk: disk.status == otypes.DiskStatus.OK if disk.storage_type == otypes.DiskStorageType.IMAGE else True,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
def wait_for_down(self, vm):
"""
This function will first wait for the status DOWN of the VM.
Then it will find the active snapshot and, for stateless VMs, wait until its
state is OK and the stateless snapshot is removed.
"""
vm_service = self._service.vm_service(vm.id)
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.DOWN,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
if vm.stateless:
snapshots_service = vm_service.snapshots_service()
snapshots = snapshots_service.list()
snap_active = [
snap for snap in snapshots
if snap.snapshot_type == otypes.SnapshotType.ACTIVE
][0]
snap_stateless = [
snap for snap in snapshots
if snap.snapshot_type == otypes.SnapshotType.STATELESS
]
# Stateless snapshot may be already removed:
if snap_stateless:
"""
We need to wait for the active snapshot ID to be removed, as it is the current
stateless snapshot. Then we need to wait for the stateless snapshot ID to
be ready for use, because it will become the active snapshot.
"""
wait(
service=snapshots_service.snapshot_service(snap_active.id),
condition=lambda snap: snap is None,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
wait(
service=snapshots_service.snapshot_service(snap_stateless[0].id),
condition=lambda snap: snap.snapshot_status == otypes.SnapshotStatus.OK,
wait=self.param('wait'),
timeout=self.param('timeout'),
)
return True
def __attach_graphical_console(self, entity):
graphical_console = self.param('graphical_console')
if not graphical_console:
return False
vm_service = self._service.service(entity.id)
gcs_service = vm_service.graphics_consoles_service()
graphical_consoles = gcs_service.list()
# Remove all graphical consoles if there are any:
if bool(graphical_console.get('headless_mode')):
if not self._module.check_mode:
for gc in graphical_consoles:
gcs_service.console_service(gc.id).remove()
return len(graphical_consoles) > 0
# If there are no graphics consoles yet, add the requested ones:
protocol = graphical_console.get('protocol')
current_protocols = [str(gc.protocol) for gc in graphical_consoles]
if not current_protocols:
if not self._module.check_mode:
for p in protocol:
gcs_service.add(
otypes.GraphicsConsole(
protocol=otypes.GraphicsType(p),
)
)
return True
# Update consoles:
if sorted(protocol) != sorted(current_protocols):
if not self._module.check_mode:
for gc in graphical_consoles:
gcs_service.console_service(gc.id).remove()
for p in protocol:
gcs_service.add(
otypes.GraphicsConsole(
protocol=otypes.GraphicsType(p),
)
)
return True
def __attach_disks(self, entity):
if not self.param('disks'):
return
vm_service = self._service.service(entity.id)
disks_service = self._connection.system_service().disks_service()
disk_attachments_service = vm_service.disk_attachments_service()
self._wait_for_vm_disks(vm_service)
for disk in self.param('disks'):
# If disk ID is not specified, find disk by name:
disk_id = disk.get('id')
if disk_id is None:
disk_id = getattr(
search_by_name(
service=disks_service,
name=disk.get('name')
),
'id',
None
)
# Attach disk to VM:
disk_attachment = disk_attachments_service.attachment_service(disk_id)
if get_entity(disk_attachment) is None:
if not self._module.check_mode:
disk_attachments_service.add(
otypes.DiskAttachment(
disk=otypes.Disk(
id=disk_id,
),
active=disk.get('activate', True),
interface=otypes.DiskInterface(
disk.get('interface', 'virtio')
),
bootable=disk.get('bootable', False),
)
)
self.changed = True
def __get_vnic_profile_id(self, nic):
"""
Return the VNIC profile ID looked up by its name. Because there can be
multiple VNIC profiles with the same name, the cluster is used as an additional filter.
"""
vnics_service = self._connection.system_service().vnic_profiles_service()
clusters_service = self._connection.system_service().clusters_service()
cluster = search_by_name(clusters_service, self.param('cluster'))
profiles = [
profile for profile in vnics_service.list()
if profile.name == nic.get('profile_name')
]
cluster_networks = [
net.id for net in self._connection.follow_link(cluster.networks)
]
try:
return next(
profile.id for profile in profiles
if profile.network.id in cluster_networks
)
except StopIteration:
raise Exception(
"Profile '%s' was not found in cluster '%s'" % (
nic.get('profile_name'),
self.param('cluster')
)
)
def __attach_numa_nodes(self, entity):
updated = False
numa_nodes_service = self._service.service(entity.id).numa_nodes_service()
if len(self.param('numa_nodes')) > 0:
# Remove all existing virtual numa nodes before adding new ones
existed_numa_nodes = numa_nodes_service.list()
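# Sort the nodes into descending index order, so the highest-indexed
# node is removed first: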
existed_numa_nodes.sort(key=lambda node: node.index, reverse=True)
for current_numa_node in existed_numa_nodes:
numa_nodes_service.node_service(current_numa_node.id).remove()
updated = True
for numa_node in self.param('numa_nodes'):
if numa_node is None or numa_node.get('index') is None or numa_node.get('cores') is None or numa_node.get('memory') is None:
continue
numa_nodes_service.add(
otypes.VirtualNumaNode(
index=numa_node.get('index'),
memory=numa_node.get('memory'),
cpu=otypes.Cpu(
cores=[
otypes.Core(
index=core
) for core in numa_node.get('cores')
],
),
numa_node_pins=[
otypes.NumaNodePin(
index=pin
) for pin in numa_node.get('numa_node_pins')
] if numa_node.get('numa_node_pins') is not None else None,
)
)
updated = True
return updated
def __attach_watchdog(self, entity):
watchdogs_service = self._service.service(entity.id).watchdogs_service()
watchdog = self.param('watchdog')
if watchdog is not None:
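# Three cases: model omitted -> remove the existing watchdog;
# model given but none exists -> add it; otherwise update it if changed: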
current_watchdog = next(iter(watchdogs_service.list()), None)
if watchdog.get('model') is None and current_watchdog:
watchdogs_service.watchdog_service(current_watchdog.id).remove()
return True
elif watchdog.get('model') is not None and current_watchdog is None:
watchdogs_service.add(
otypes.Watchdog(
model=otypes.WatchdogModel(watchdog.get('model').lower()),
action=otypes.WatchdogAction(watchdog.get('action')),
)
)
return True
elif current_watchdog is not None:
if (
str(current_watchdog.model).lower() != watchdog.get('model').lower() or
str(current_watchdog.action).lower() != watchdog.get('action').lower()
):
watchdogs_service.watchdog_service(current_watchdog.id).update(
otypes.Watchdog(
model=otypes.WatchdogModel(watchdog.get('model')),
action=otypes.WatchdogAction(watchdog.get('action')),
)
)
return True
return False
def __attach_nics(self, entity):
# Attach NICs to VM, if specified:
nics_service = self._service.service(entity.id).nics_service()
for nic in self.param('nics'):
if search_by_name(nics_service, nic.get('name')) is None:
if not self._module.check_mode:
nics_service.add(
otypes.Nic(
name=nic.get('name'),
interface=otypes.NicInterface(
nic.get('interface', 'virtio')
),
vnic_profile=otypes.VnicProfile(
id=self.__get_vnic_profile_id(nic),
) if nic.get('profile_name') else None,
mac=otypes.Mac(
address=nic.get('mac_address')
) if nic.get('mac_address') else None,
)
)
self.changed = True
def get_initialization(self):
if self._initialization is not None:
return self._initialization
sysprep = self.param('sysprep')
cloud_init = self.param('cloud_init')
cloud_init_nics = self.param('cloud_init_nics') or []
if cloud_init is not None:
cloud_init_nics.append(cloud_init)
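# cloud_init itself may carry nic_* keys; the comprehension below pops
# them off, so only Initialization attributes remain for the
# **cloud_init expansion: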
if cloud_init or cloud_init_nics:
self._initialization = otypes.Initialization(
nic_configurations=[
otypes.NicConfiguration(
boot_protocol=otypes.BootProtocol(
nic.pop('nic_boot_protocol').lower()
) if nic.get('nic_boot_protocol') else None,
ipv6_boot_protocol=otypes.BootProtocol(
nic.pop('nic_boot_protocol_v6').lower()
) if nic.get('nic_boot_protocol_v6') else None,
name=nic.pop('nic_name', None),
on_boot=nic.pop('nic_on_boot', None),
ip=otypes.Ip(
address=nic.pop('nic_ip_address', None),
netmask=nic.pop('nic_netmask', None),
gateway=nic.pop('nic_gateway', None),
version=otypes.IpVersion('v4')
) if (
nic.get('nic_gateway') is not None or
nic.get('nic_netmask') is not None or
nic.get('nic_ip_address') is not None
) else None,
ipv6=otypes.Ip(
address=nic.pop('nic_ip_address_v6', None),
netmask=nic.pop('nic_netmask_v6', None),
gateway=nic.pop('nic_gateway_v6', None),
version=otypes.IpVersion('v6')
) if (
nic.get('nic_gateway_v6') is not None or
nic.get('nic_netmask_v6') is not None or
nic.get('nic_ip_address_v6') is not None
) else None,
)
for nic in cloud_init_nics
if (
nic.get('nic_boot_protocol_v6') is not None or
nic.get('nic_ip_address_v6') is not None or
nic.get('nic_gateway_v6') is not None or
nic.get('nic_netmask_v6') is not None or
nic.get('nic_gateway') is not None or
nic.get('nic_netmask') is not None or
nic.get('nic_ip_address') is not None or
nic.get('nic_boot_protocol') is not None or
nic.get('nic_on_boot') is not None
)
] if cloud_init_nics else None,
**cloud_init
)
elif sysprep:
self._initialization = otypes.Initialization(
**sysprep
)
return self._initialization
def __attach_host_devices(self, entity):
vm_service = self._service.service(entity.id)
host_devices_service = vm_service.host_devices_service()
host_devices = self.param('host_devices')
updated = False
if host_devices:
device_names = [dev.name for dev in host_devices_service.list()]
for device in host_devices:
device_name = device.get('name')
state = device.get('state', 'present')
if state == 'absent' and device_name in device_names:
updated = True
if not self._module.check_mode:
device_id = get_id_by_name(host_devices_service, device.get('name'))
host_devices_service.device_service(device_id).remove()
elif state == 'present' and device_name not in device_names:
updated = True
if not self._module.check_mode:
host_devices_service.add(
otypes.HostDevice(
name=device.get('name'),
)
)
return updated
def _get_role_mappings(module):
roleMappings = list()
for roleMapping in module.params['role_mappings']:
roleMappings.append(
otypes.RegistrationRoleMapping(
from_=otypes.Role(
name=roleMapping['source_name'],
) if roleMapping['source_name'] else None,
to=otypes.Role(
name=roleMapping['dest_name'],
) if roleMapping['dest_name'] else None,
)
)
return roleMappings
def _get_affinity_group_mappings(module):
affinityGroupMappings = list()
for affinityGroupMapping in module.params['affinity_group_mappings']:
affinityGroupMappings.append(
otypes.RegistrationAffinityGroupMapping(
from_=otypes.AffinityGroup(
name=affinityGroupMapping['source_name'],
) if affinityGroupMapping['source_name'] else None,
to=otypes.AffinityGroup(
name=affinityGroupMapping['dest_name'],
) if affinityGroupMapping['dest_name'] else None,
)
)
return affinityGroupMappings
def _get_affinity_label_mappings(module):
affinityLabelMappings = list()
for affinityLabelMapping in module.params['affinity_label_mappings']:
affinityLabelMappings.append(
otypes.RegistrationAffinityLabelMapping(
from_=otypes.AffinityLabel(
name=affinityLabelMapping['source_name'],
) if affinityLabelMapping['source_name'] else None,
to=otypes.AffinityLabel(
name=affinityLabelMapping['dest_name'],
) if affinityLabelMapping['dest_name'] else None,
)
)
return affinityLabelMappings
def _get_domain_mappings(module):
domainMappings = list()
for domainMapping in module.params['domain_mappings']:
domainMappings.append(
otypes.RegistrationDomainMapping(
from_=otypes.Domain(
name=domainMapping['source_name'],
) if domainMapping['source_name'] else None,
to=otypes.Domain(
name=domainMapping['dest_name'],
) if domainMapping['dest_name'] else None,
)
)
return domainMappings
def _get_lun_mappings(module):
lunMappings = list()
for lunMapping in module.params['lun_mappings']:
lunMappings.append(
otypes.RegistrationLunMapping(
from_=otypes.Disk(
lun_storage=otypes.HostStorage(
type=otypes.StorageType(lunMapping['source_storage_type'])
if (lunMapping['source_storage_type'] in
['iscsi', 'fcp']) else None,
logical_units=[
otypes.LogicalUnit(
id=lunMapping['source_logical_unit_id'],
)
],
),
) if lunMapping['source_logical_unit_id'] else None,
to=otypes.Disk(
lun_storage=otypes.HostStorage(
type=otypes.StorageType(lunMapping['dest_storage_type'])
if (lunMapping['dest_storage_type'] in
['iscsi', 'fcp']) else None,
logical_units=[
otypes.LogicalUnit(
id=lunMapping['dest_logical_unit_id'],
port=lunMapping['dest_logical_unit_port'],
portal=lunMapping['dest_logical_unit_portal'],
address=lunMapping['dest_logical_unit_address'],
target=lunMapping['dest_logical_unit_target'],
password=lunMapping['dest_logical_unit_password'],
username=lunMapping['dest_logical_unit_username'],
)
],
),
) if lunMapping['dest_logical_unit_id'] else None,
),
)
return lunMappings
def _get_cluster_mappings(module):
clusterMappings = list()
for clusterMapping in module.params['cluster_mappings']:
clusterMappings.append(
otypes.RegistrationClusterMapping(
from_=otypes.Cluster(
name=clusterMapping['source_name'],
),
to=otypes.Cluster(
name=clusterMapping['dest_name'],
) if clusterMapping['dest_name'] else None,
)
)
return clusterMappings
def _get_vnic_profile_mappings(module):
vnicProfileMappings = list()
for vnicProfileMapping in module.params['vnic_profile_mappings']:
vnicProfileMappings.append(
otypes.VnicProfileMapping(
source_network_name=vnicProfileMapping['source_network_name'],
source_network_profile_name=vnicProfileMapping['source_profile_name'],
target_vnic_profile=otypes.VnicProfile(
id=vnicProfileMapping['target_profile_id'],
) if vnicProfileMapping['target_profile_id'] else None,
)
)
return vnicProfileMappings
def import_vm(module, connection):
vms_service = connection.system_service().vms_service()
if search_by_name(vms_service, module.params['name']) is not None:
return False
events_service = connection.system_service().events_service()
last_event = events_service.list(max=1)[0]
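# Remember the last event ID, so only events generated after the import
# starts are considered below: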
external_type = [
tmp for tmp in ['kvm', 'xen', 'vmware']
if module.params[tmp] is not None
][0]
external_vm = module.params[external_type]
imports_service = connection.system_service().external_vm_imports_service()
imported_vm = imports_service.add(
otypes.ExternalVmImport(
vm=otypes.Vm(
name=module.params['name']
),
name=external_vm.get('name'),
username=external_vm.get('username', 'test'),
password=external_vm.get('password', 'test'),
provider=otypes.ExternalVmProviderType(external_type),
url=external_vm.get('url'),
cluster=otypes.Cluster(
name=module.params['cluster'],
) if module.params['cluster'] else None,
storage_domain=otypes.StorageDomain(
name=external_vm.get('storage_domain'),
) if external_vm.get('storage_domain') else None,
sparse=external_vm.get('sparse', True),
host=otypes.Host(
name=module.params['host'],
) if module.params['host'] else None,
)
)
# Wait until an event with code 1152 appears for our VM:
vms_service = connection.system_service().vms_service()
wait(
service=vms_service.vm_service(imported_vm.vm.id),
condition=lambda vm: len([
event
for event in events_service.list(
from_=int(last_event.id),
search='type=1152 and vm.id=%s' % vm.id,
)
]) > 0 if vm is not None else False,
fail_condition=lambda vm: vm is None,
timeout=module.params['timeout'],
poll_interval=module.params['poll_interval'],
)
return True
def check_deprecated_params(module, connection):
if engine_supported(connection, '4.4') and \
(module.params.get('kernel_params_persist') is not None or
module.params.get('kernel_path') is not None or
module.params.get('initrd_path') is not None or
module.params.get('kernel_params') is not None):
module.warn("Parameters 'kernel_params_persist', 'kernel_path', 'initrd_path', 'kernel_params' are not supported since oVirt 4.4.")
def control_state(vm, vms_service, module):
if vm is None:
return
force = module.params['force']
state = module.params['state']
vm_service = vms_service.vm_service(vm.id)
if vm.status == otypes.VmStatus.IMAGE_LOCKED:
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.DOWN,
)
elif vm.status == otypes.VmStatus.SAVING_STATE:
# Result state is SUSPENDED, we should wait to be suspended:
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.SUSPENDED,
)
elif (
vm.status == otypes.VmStatus.UNASSIGNED or
vm.status == otypes.VmStatus.UNKNOWN
):
# Invalid states:
module.fail_json(msg="Not possible to control VM, if it's in '{0}' status".format(vm.status))
elif vm.status == otypes.VmStatus.POWERING_DOWN:
if (force and state == 'stopped') or state == 'absent':
vm_service.stop()
wait(
service=vm_service,
condition=lambda vm: vm.status == otypes.VmStatus.DOWN,
)
else:
# If the VM is powering down, wait for it to reach DOWN or UP.
# The VM can end up in UP state when there is no guest agent
# or ACPI on the VM, or when the shutdown operation crashed:
wait(
service=vm_service,
condition=lambda vm: vm.status in [otypes.VmStatus.DOWN, otypes.VmStatus.UP],
)
def main():
argument_spec = ovirt_full_argument_spec(
state=dict(type='str', default='present', choices=[
'absent', 'next_run', 'present', 'registered', 'running', 'stopped', 'suspended', 'exported', 'reboot'
]),
name=dict(type='str'),
id=dict(type='str'),
cluster=dict(type='str'),
allow_partial_import=dict(type='bool'),
template=dict(type='str'),
template_version=dict(type='int'),
use_latest_template_version=dict(type='bool'),
storage_domain=dict(type='str'),
disk_format=dict(type='str', default='cow', choices=['cow', 'raw']),
disks=dict(type='list', default=[]),
memory=dict(type='str'),
memory_guaranteed=dict(type='str'),
memory_max=dict(type='str'),
cpu_sockets=dict(type='int'),
cpu_cores=dict(type='int'),
cpu_shares=dict(type='int'),
cpu_threads=dict(type='int'),
type=dict(type='str', choices=['server', 'desktop', 'high_performance']),
operating_system=dict(type='str'),
cd_iso=dict(type='str'),
boot_devices=dict(type='list', choices=['cdrom', 'hd', 'network']),
vnic_profile_mappings=dict(default=[], type='list'),
cluster_mappings=dict(default=[], type='list'),
role_mappings=dict(default=[], type='list'),
affinity_group_mappings=dict(default=[], type='list'),
affinity_label_mappings=dict(default=[], type='list'),
lun_mappings=dict(default=[], type='list'),
domain_mappings=dict(default=[], type='list'),
reassign_bad_macs=dict(default=None, type='bool'),
boot_menu=dict(type='bool'),
serial_console=dict(type='bool'),
usb_support=dict(type='bool'),
sso=dict(type='bool'),
quota_id=dict(type='str'),
high_availability=dict(type='bool'),
high_availability_priority=dict(type='int'),
lease=dict(type='str'),
stateless=dict(type='bool'),
delete_protected=dict(type='bool'),
custom_emulated_machine=dict(type='str'),
force=dict(type='bool', default=False),
nics=dict(type='list', default=[]),
cloud_init=dict(type='dict'),
cloud_init_nics=dict(type='list', default=[]),
cloud_init_persist=dict(type='bool', default=False, aliases=['sysprep_persist']),
kernel_params_persist=dict(type='bool', default=False),
sysprep=dict(type='dict'),
host=dict(type='str'),
clone=dict(type='bool', default=False),
clone_permissions=dict(type='bool', default=False),
kernel_path=dict(type='str'),
initrd_path=dict(type='str'),
kernel_params=dict(type='str'),
instance_type=dict(type='str'),
description=dict(type='str'),
comment=dict(type='str'),
timezone=dict(type='str'),
serial_policy=dict(type='str', choices=['vm', 'host', 'custom']),
serial_policy_value=dict(type='str'),
vmware=dict(type='dict'),
xen=dict(type='dict'),
kvm=dict(type='dict'),
cpu_mode=dict(type='str'),
placement_policy=dict(type='str'),
custom_compatibility_version=dict(type='str'),
ticket=dict(type='bool', default=None),
cpu_pinning=dict(type='list'),
soundcard_enabled=dict(type='bool', default=None),
smartcard_enabled=dict(type='bool', default=None),
io_threads=dict(type='int', default=None),
ballooning_enabled=dict(type='bool', default=None),
rng_device=dict(type='str'),
numa_tune_mode=dict(type='str', choices=['interleave', 'preferred', 'strict']),
numa_nodes=dict(type='list', default=[]),
custom_properties=dict(type='list'),
watchdog=dict(type='dict'),
host_devices=dict(type='list'),
graphical_console=dict(
type='dict',
options=dict(
headless_mode=dict(type='bool'),
protocol=dict(type='list'),
disconnect_action=dict(type='str'),
keyboard_layout=dict(type='str'),
monitors=dict(type='int'),
)
),
exclusive=dict(type='bool'),
export_domain=dict(default=None),
export_ova=dict(type='dict'),
force_migrate=dict(type='bool'),
migrate=dict(type='bool', default=None),
next_run=dict(type='bool'),
snapshot_name=dict(type='str'),
snapshot_vm=dict(type='str'),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
required_one_of=[['id', 'name']],
required_if=[
('state', 'registered', ['storage_domain']),
],
required_together=[['snapshot_name', 'snapshot_vm']]
)
check_sdk(module)
check_params(module)
try:
state = module.params['state']
auth = module.params.pop('auth')
connection = create_connection(auth)
check_deprecated_params(module, connection)
vms_service = connection.system_service().vms_service()
vms_module = VmsModule(
connection=connection,
module=module,
service=vms_service,
)
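# all_content=True requests the complete VM representation, so the
# comparisons in update_check have all attributes available: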
vm = vms_module.search_entity(list_params={'all_content': True})
# Boolean variable to mark if vm existed before module was executed
vm_existed = vm is not None
control_state(vm, vms_service, module)
if state in ('present', 'running', 'next_run'):
if module.params['xen'] or module.params['kvm'] or module.params['vmware']:
vms_module.changed = import_vm(module, connection)
# In case of wait=false and state=running, wait for the VM to be created.
# If the VM doesn't exist, wait for the VM to reach DOWN state;
# otherwise don't wait for any state, just update the VM:
ret = vms_module.create(
entity=vm,
result_state=otypes.VmStatus.DOWN if vm is None else None,
update_params={'next_run': module.params['next_run']} if module.params['next_run'] is not None else None,
clone=module.params['clone'],
clone_permissions=module.params['clone_permissions'],
_wait=True if not module.params['wait'] and state == 'running' else module.params['wait'],
)
# If VM is going to be created and check_mode is on, return now:
if module.check_mode and ret.get('id') is None:
module.exit_json(**ret)
vms_module.post_present(ret['id'])
# Run the VM if it was just created, else don't run it:
if state == 'running':
def kernel_persist_check():
return ((module.params.get('kernel_params') or
module.params.get('initrd_path') or
module.params.get('kernel_path')) and
not module.params.get('cloud_init_persist'))
initialization = vms_module.get_initialization()
ret = vms_module.action(
action='start',
post_action=vms_module._post_start_action,
action_condition=lambda vm: (
vm.status not in [
otypes.VmStatus.MIGRATING,
otypes.VmStatus.POWERING_UP,
otypes.VmStatus.REBOOT_IN_PROGRESS,
otypes.VmStatus.WAIT_FOR_LAUNCH,
otypes.VmStatus.UP,
otypes.VmStatus.RESTORING_STATE,
]
),
wait_condition=lambda vm: vm.status == otypes.VmStatus.UP,
# Start action kwargs:
use_cloud_init=True if not module.params.get('cloud_init_persist') and module.params.get('cloud_init') else None,
use_sysprep=True if not module.params.get('cloud_init_persist') and module.params.get('sysprep') else None,
vm=otypes.Vm(
placement_policy=otypes.VmPlacementPolicy(
hosts=[otypes.Host(name=module.params['host'])]
) if module.params['host'] else None,
initialization=initialization,
os=otypes.OperatingSystem(
cmdline=module.params.get('kernel_params'),
initrd=module.params.get('initrd_path'),
kernel=module.params.get('kernel_path'),
) if (kernel_persist_check()) else None,
) if (
kernel_persist_check() or
module.params.get('host') or
initialization is not None
and not module.params.get('cloud_init_persist')
) else None,
)
if module.params['ticket']:
vm_service = vms_service.vm_service(ret['id'])
graphics_consoles_service = vm_service.graphics_consoles_service()
graphics_console = graphics_consoles_service.list()[0]
console_service = graphics_consoles_service.console_service(graphics_console.id)
ticket = console_service.remote_viewer_connection_file()
if ticket:
ret['vm']['remote_vv_file'] = ticket
if state == 'next_run':
# Apply next run configuration, if needed:
vm = vms_service.vm_service(ret['id']).get()
if vm.next_run_configuration_exists:
ret = vms_module.action(
action='reboot',
entity=vm,
action_condition=lambda vm: vm.status == otypes.VmStatus.UP,
wait_condition=lambda vm: vm.status == otypes.VmStatus.UP,
)
# Allow migrating the VM when state is present.
if vm_existed:
vms_module._migrate_vm(vm)
ret['changed'] = vms_module.changed
elif state == 'stopped':
if module.params['xen'] or module.params['kvm'] or module.params['vmware']:
vms_module.changed = import_vm(module, connection)
ret = vms_module.create(
entity=vm,
result_state=otypes.VmStatus.DOWN if vm is None else None,
clone=module.params['clone'],
clone_permissions=module.params['clone_permissions'],
)
if module.params['force']:
ret = vms_module.action(
action='stop',
action_condition=lambda vm: vm.status != otypes.VmStatus.DOWN,
wait_condition=vms_module.wait_for_down,
)
else:
ret = vms_module.action(
action='shutdown',
pre_action=vms_module._pre_shutdown_action,
action_condition=lambda vm: vm.status != otypes.VmStatus.DOWN,
wait_condition=vms_module.wait_for_down,
)
vms_module.post_present(ret['id'])
elif state == 'suspended':
ret = vms_module.create(
entity=vm,
result_state=otypes.VmStatus.DOWN if vm is None else None,
clone=module.params['clone'],
clone_permissions=module.params['clone_permissions'],
)
vms_module.post_present(ret['id'])
ret = vms_module.action(
action='suspend',
pre_action=vms_module._pre_suspend_action,
action_condition=lambda vm: vm.status != otypes.VmStatus.SUSPENDED,
wait_condition=lambda vm: vm.status == otypes.VmStatus.SUSPENDED,
)
elif state == 'absent':
ret = vms_module.remove()
elif state == 'registered':
storage_domains_service = connection.system_service().storage_domains_service()
# Find the storage domain with unregistered VM:
sd_id = get_id_by_name(storage_domains_service, module.params['storage_domain'])
storage_domain_service = storage_domains_service.storage_domain_service(sd_id)
vms_service = storage_domain_service.vms_service()
# Find the unregistered VM we want to register:
vms = vms_service.list(unregistered=True)
vm = next(
(vm for vm in vms if (vm.id == module.params['id'] or vm.name == module.params['name'])),
None
)
changed = False
if vm is None:
vm = vms_module.search_entity()
if vm is None:
raise ValueError(
"VM '%s(%s)' wasn't found." % (module.params['name'], module.params['id'])
)
else:
# Register the vm into the system:
changed = True
vm_service = vms_service.vm_service(vm.id)
vm_service.register(
allow_partial_import=module.params['allow_partial_import'],
cluster=otypes.Cluster(
name=module.params['cluster']
) if module.params['cluster'] else None,
vnic_profile_mappings=_get_vnic_profile_mappings(module)
if module.params['vnic_profile_mappings'] else None,
reassign_bad_macs=module.params['reassign_bad_macs']
if module.params['reassign_bad_macs'] is not None else None,
registration_configuration=otypes.RegistrationConfiguration(
cluster_mappings=_get_cluster_mappings(module),
role_mappings=_get_role_mappings(module),
domain_mappings=_get_domain_mappings(module),
lun_mappings=_get_lun_mappings(module),
affinity_group_mappings=_get_affinity_group_mappings(module),
affinity_label_mappings=_get_affinity_label_mappings(module),
) if (module.params['cluster_mappings']
or module.params['role_mappings']
or module.params['domain_mappings']
or module.params['lun_mappings']
or module.params['affinity_group_mappings']
or module.params['affinity_label_mappings']) else None
)
if module.params['wait']:
vm = vms_module.wait_for_import()
else:
# Fetch vm to initialize return.
vm = vm_service.get()
ret = {
'changed': changed,
'id': vm.id,
'vm': get_dict_of_struct(vm)
}
elif state == 'exported':
if module.params['export_domain']:
export_service = vms_module._get_export_domain_service()
export_vm = search_by_attributes(export_service.vms_service(), id=vm.id)
ret = vms_module.action(
entity=vm,
action='export',
action_condition=lambda t: export_vm is None or module.params['exclusive'],
wait_condition=lambda t: t is not None,
post_action=vms_module.post_export_action,
storage_domain=otypes.StorageDomain(id=export_service.get().id),
exclusive=module.params['exclusive'],
)
elif module.params['export_ova']:
export_vm = module.params['export_ova']
ret = vms_module.action(
entity=vm,
action='export_to_path_on_host',
host=otypes.Host(name=export_vm.get('host')),
directory=export_vm.get('directory'),
filename=export_vm.get('filename'),
)
elif state == 'reboot':
ret = vms_module.action(
action='reboot',
entity=vm,
action_condition=lambda vm: vm.status == otypes.VmStatus.UP,
wait_condition=lambda vm: vm.status == otypes.VmStatus.UP,
)
module.exit_json(**ret)
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
finally:
connection.close(logout=auth.get('token') is None)
if __name__ == "__main__":
main()
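The block above leans on the `value if condition else None` idiom throughout, so the oVirt SDK only receives settings the user actually requested. A minimal standalone sketch of the same pattern, with purely hypothetical names in place of the SDK types:
```python
# Sketch of the "omit unless requested" idiom used by the module above.
# build_placement() stands in for otypes.VmPlacementPolicy construction;
# all names here are illustrative, not part of the oVirt SDK.
def build_placement(params):
    host = params.get('host')
    # Return a placement structure only when a host was requested, so the
    # caller can pass the result straight into a keyword argument.
    return {'hosts': [{'name': host}]} if host else None

print(build_placement({'host': 'node1'}))  # {'hosts': [{'name': 'node1'}]}
print(build_placement({}))                 # None
```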
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,595 |
maven_artifact fails with "Connection timed out" because our Artifactory blocks if there are a lot of 401.
|
`maven_artifact` uses httplib2, which first attempts anonymous access even when a user/password is provided.
So access log looks like:
```
/opt/bitnami/apps/artifactory/artifactory_home/logs/request.log:
20191108102644|3|REQUEST|188.163.xxx.xxx|non_authenticated_user|GET|/libs-snapshot-local/xxx/2.14.0-SNAPSHOT/maven-metadata.xml|HTTP/1.1|401|0
20191108102644|2|REQUEST|188.163.xxx.xxx|reader|GET|/libs-snapshot-local/xxx/2.14.0-SNAPSHOT/maven-metadata.xml|HTTP/1.1|200|792
```
Our Artifactory repository blocks access (per IP?) if too many 401s happen.
Ansible then fails with:
```
failed: [stage-server] (item={'name': 'tp-sc', 'jmx_port': 18004}) => {"ansible_loop_var": "item", "changed": false,
"msg": "Failed to connect to example.com at port 443: [Errno 110] Connection timed out",
"status": -1,
"url": "https://example.com/artifactory/libs-snapshot-local/xxx/2.14.0-SNAPSHOT/server-2.14.0-20191108.084713-5.jar.md5"}
```
There is a patch that fixes the behavior of httplib2 : https://github.com/DorianGeraut/maven_artifact_work/commit/5fff2870ba1e8dee864b21eb3a2ea40c10547484
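One way to avoid the anonymous-first round trip is to send credentials preemptively. `fetch_url` reads `url_username`, `url_password`, and `force_basic_auth` from `module.params`, so a hedged sketch of that approach (whether the linked fix took exactly this route is not shown here) could look like:
```python
from ansible.module_utils.urls import fetch_url

def request_with_preemptive_auth(module, url, timeout=10, headers=None):
    # fetch_url reads these keys from module.params; force_basic_auth makes
    # it send the Authorization header on the first request, avoiding the
    # anonymous probe that produces a 401 in the repository access log.
    module.params['url_username'] = module.params.get('username') or ''
    module.params['url_password'] = module.params.get('password') or ''
    module.params['force_basic_auth'] = True
    return fetch_url(module, url, timeout=timeout, headers=headers)
```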
|
https://github.com/ansible/ansible/issues/64595
|
https://github.com/ansible/ansible/pull/64808
|
9c79de2e1eeee604f874a7a328d1ab4105681204
|
30132861af49ba7e3b9697de51305ca88a912b2c
| 2019-11-08T11:19:46Z |
python
| 2019-12-09T10:26:12Z |
lib/ansible/modules/packaging/language/maven_artifact.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2014, Chris Schmidt <chris.schmidt () contrastsecurity.com>
#
# Built using https://github.com/hamnis/useful-scripts/blob/master/python/download-maven-artifact
# as a reference and starting point.
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: maven_artifact
short_description: Downloads an Artifact from a Maven Repository
version_added: "2.0"
description:
- Downloads an artifact from a maven repository given the maven coordinates provided to the module.
- Can retrieve snapshots or release versions of the artifact and will resolve the latest available
version if one is not specified.
author: "Chris Schmidt (@chrisisbeef)"
requirements:
- lxml
- boto if using a S3 repository (s3://...)
options:
group_id:
description:
- The Maven groupId coordinate
required: true
artifact_id:
description:
- The maven artifactId coordinate
required: true
version:
description:
- The maven version coordinate
- Mutually exclusive with I(version_by_spec).
version_by_spec:
description:
- The maven dependency version ranges.
- See supported version ranges on U(https://cwiki.apache.org/confluence/display/MAVENOLD/Dependency+Mediation+and+Conflict+Resolution)
- The range type "(,1.0],[1.2,)" and "(,1.1),(1.1,)" is not supported.
- Mutually exclusive with I(version).
version_added: "2.10"
classifier:
description:
- The maven classifier coordinate
extension:
description:
- The maven type/extension coordinate
default: jar
repository_url:
description:
- The URL of the Maven Repository to download from.
- Use s3://... if the repository is hosted on Amazon S3, added in version 2.2.
- Use file://... if the repository is local, added in version 2.6
default: http://repo1.maven.org/maven2
username:
description:
- The username to authenticate as to the Maven Repository. Use the AWS access key if the repository is hosted on S3
aliases: [ "aws_secret_key" ]
password:
description:
- The password to authenticate with to the Maven Repository. Use the AWS secret access key if the repository is hosted on S3
aliases: [ "aws_secret_access_key" ]
headers:
description:
- Add custom HTTP headers to a request in hash/dict format.
type: dict
version_added: "2.8"
dest:
description:
- The path where the artifact should be written.
- If file mode or ownership are specified and the destination path already exists, they affect the downloaded file
required: true
state:
description:
- The desired state of the artifact
default: present
choices: [present,absent]
timeout:
description:
- Specifies a timeout in seconds for the connection attempt
default: 10
version_added: "2.3"
validate_certs:
description:
- If C(no), SSL certificates will not be validated. This should only be set to C(no) when no other option exists.
type: bool
default: 'yes'
version_added: "1.9.3"
keep_name:
description:
- If C(yes), the downloaded artifact's name is preserved, i.e. the version number remains part of it.
- This option only has effect when C(dest) is a directory and C(version) is set to C(latest) or C(version_by_spec)
is defined.
type: bool
default: 'no'
version_added: "2.4"
verify_checksum:
description:
- If C(never), the md5 checksum will never be downloaded and verified.
- If C(download), the md5 checksum will be downloaded and verified only after artifact download. This is the default.
- If C(change), the md5 checksum will be downloaded and verified if the destination already exists,
to verify if they are identical. This was the behaviour before 2.6. Since it downloads the md5 before (maybe)
downloading the artifact, and since some repository software, when acting as a proxy/cache, returns a 404 error
if the artifact has not been cached yet, it may fail unexpectedly.
If you still need it, you should consider using C(always) instead - if you deal with a checksum, it is better to
use it to verify integrity after download.
- C(always) combines C(download) and C(change).
required: false
default: 'download'
choices: ['never', 'download', 'change', 'always']
version_added: "2.6"
extends_documentation_fragment:
- files
'''
EXAMPLES = '''
# Download the latest version of the JUnit framework artifact from Maven Central
- maven_artifact:
group_id: junit
artifact_id: junit
dest: /tmp/junit-latest.jar
# Download JUnit 4.11 from Maven Central
- maven_artifact:
group_id: junit
artifact_id: junit
version: 4.11
dest: /tmp/junit-4.11.jar
# Download an artifact from a private repository requiring authentication
- maven_artifact:
group_id: com.company
artifact_id: library-name
repository_url: 'https://repo.company.com/maven'
username: user
password: pass
dest: /tmp/library-name-latest.jar
# Download a WAR File to the Tomcat webapps directory to be deployed
- maven_artifact:
group_id: com.company
artifact_id: web-app
extension: war
repository_url: 'https://repo.company.com/maven'
dest: /var/lib/tomcat7/webapps/web-app.war
# Keep a downloaded artifact's name, i.e. retain the version
- maven_artifact:
version: latest
artifact_id: spring-core
group_id: org.springframework
dest: /tmp/
keep_name: yes
# Download the latest version of the JUnit framework artifact from Maven local
- maven_artifact:
group_id: junit
artifact_id: junit
dest: /tmp/junit-latest.jar
repository_url: "file://{{ lookup('env','HOME') }}/.m2/repository"
# Download the latest version between 3.8 and 4.0 (exclusive) of the JUnit framework artifact from Maven Central
- maven_artifact:
group_id: junit
artifact_id: junit
version_by_spec: "[3.8,4.0)"
dest: /tmp/
'''
import hashlib
import os
import posixpath
import shutil
import io
import tempfile
import traceback
from ansible.module_utils.ansible_release import __version__ as ansible_version
from re import match
LXML_ETREE_IMP_ERR = None
try:
from lxml import etree
HAS_LXML_ETREE = True
except ImportError:
LXML_ETREE_IMP_ERR = traceback.format_exc()
HAS_LXML_ETREE = False
BOTO_IMP_ERR = None
try:
import boto3
HAS_BOTO = True
except ImportError:
BOTO_IMP_ERR = traceback.format_exc()
HAS_BOTO = False
SEMANTIC_VERSION_IMP_ERR = None
try:
from semantic_version import Version, Spec
HAS_SEMANTIC_VERSION = True
except ImportError:
SEMANTIC_VERSION_IMP_ERR = traceback.format_exc()
HAS_SEMANTIC_VERSION = False
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.six.moves.urllib.parse import urlparse
from ansible.module_utils.urls import fetch_url
from ansible.module_utils._text import to_bytes, to_native, to_text
def split_pre_existing_dir(dirname):
'''
Return the first pre-existing directory and a list of the new directories that will be created.
'''
head, tail = os.path.split(dirname)
b_head = to_bytes(head, errors='surrogate_or_strict')
if not os.path.exists(b_head):
if head == dirname:
return None, [head]
else:
(pre_existing_dir, new_directory_list) = split_pre_existing_dir(head)
else:
return head, [tail]
new_directory_list.append(tail)
return pre_existing_dir, new_directory_list
def adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed):
'''
Walk the new directories list and make sure that permissions are as we would expect
'''
if new_directory_list:
first_sub_dir = new_directory_list.pop(0)
if not pre_existing_dir:
working_dir = first_sub_dir
else:
working_dir = os.path.join(pre_existing_dir, first_sub_dir)
directory_args['path'] = working_dir
changed = module.set_fs_attributes_if_different(directory_args, changed)
changed = adjust_recursive_directory_permissions(working_dir, new_directory_list, module, directory_args, changed)
return changed
class Artifact(object):
def __init__(self, group_id, artifact_id, version, version_by_spec, classifier='', extension='jar'):
if not group_id:
raise ValueError("group_id must be set")
if not artifact_id:
raise ValueError("artifact_id must be set")
self.group_id = group_id
self.artifact_id = artifact_id
self.version = version
self.version_by_spec = version_by_spec
self.classifier = classifier
if not extension:
self.extension = "jar"
else:
self.extension = extension
def is_snapshot(self):
return self.version and self.version.endswith("SNAPSHOT")
def path(self, with_version=True):
base = posixpath.join(self.group_id.replace(".", "/"), self.artifact_id)
if with_version and self.version:
base = posixpath.join(base, self.version)
return base
def _generate_filename(self):
if self.classifier:
return self.artifact_id + "-" + self.classifier + "." + self.extension
return self.artifact_id + "." + self.extension
def get_filename(self, filename=None):
if not filename:
filename = self._generate_filename()
elif os.path.isdir(filename):
filename = os.path.join(filename, self._generate_filename())
return filename
def __str__(self):
result = "%s:%s:%s" % (self.group_id, self.artifact_id, self.version)
if self.classifier:
result = "%s:%s:%s:%s:%s" % (self.group_id, self.artifact_id, self.extension, self.classifier, self.version)
elif self.extension != "jar":
result = "%s:%s:%s:%s" % (self.group_id, self.artifact_id, self.extension, self.version)
return result
@staticmethod
def parse(input):
parts = input.split(":")
if len(parts) >= 3:
g = parts[0]
a = parts[1]
v = parts[len(parts) - 1]
t = None
c = None
if len(parts) == 4:
t = parts[2]
if len(parts) == 5:
t = parts[2]
c = parts[3]
return Artifact(g, a, v, None, c, t)
else:
return None
class MavenDownloader:
def __init__(self, module, base, local=False, headers=None):
self.module = module
if base.endswith("/"):
base = base.rstrip("/")
self.base = base
self.local = local
self.headers = headers
self.user_agent = "Ansible {0} maven_artifact".format(ansible_version)
self.latest_version_found = None
self.metadata_file_name = "maven-metadata-local.xml" if local else "maven-metadata.xml"
def find_version_by_spec(self, artifact):
path = "/%s/%s" % (artifact.path(False), self.metadata_file_name)
content = self._getContent(self.base + path, "Failed to retrieve the maven metadata file: " + path)
xml = etree.fromstring(content)
original_versions = xml.xpath("/metadata/versioning/versions/version/text()")
versions = []
for version in original_versions:
try:
versions.append(Version.coerce(version))
except ValueError:
# This means that version string is not a valid semantic versioning
pass
parse_versions_syntax = {
# example -> (,1.0]
r"^\(,(?P<upper_bound>[0-9.]*)]$": "<={upper_bound}",
# example -> 1.0
r"^(?P<version>[0-9.]*)$": "~={version}",
# example -> [1.0]
r"^\[(?P<version>[0-9.]*)\]$": "=={version}",
# example -> [1.2, 1.3]
r"^\[(?P<lower_bound>[0-9.]*),\s*(?P<upper_bound>[0-9.]*)\]$": ">={lower_bound},<={upper_bound}",
# example -> [1.2, 1.3)
r"^\[(?P<lower_bound>[0-9.]*),\s*(?P<upper_bound>[0-9.]+)\)$": ">={lower_bound},<{upper_bound}",
# example -> [1.5,)
r"^\[(?P<lower_bound>[0-9.]*),\)$": ">={lower_bound}",
}
for regex, spec_format in parse_versions_syntax.items():
regex_result = match(regex, artifact.version_by_spec)
if regex_result:
spec = Spec(spec_format.format(**regex_result.groupdict()))
selected_version = spec.select(versions)
if not selected_version:
raise ValueError("No version found with this spec version: {0}".format(artifact.version_by_spec))
# Handle Maven repos that omit the patch number on the first build (e.g. 3.8 instead of 3.8.0)
if str(selected_version) not in original_versions:
selected_version.patch = None
return str(selected_version)
raise ValueError("The spec version {0} is not supported! ".format(artifact.version_by_spec))
def find_latest_version_available(self, artifact):
if self.latest_version_found:
return self.latest_version_found
path = "/%s/%s" % (artifact.path(False), self.metadata_file_name)
content = self._getContent(self.base + path, "Failed to retrieve the maven metadata file: " + path)
xml = etree.fromstring(content)
v = xml.xpath("/metadata/versioning/versions/version[last()]/text()")
if v:
self.latest_version_found = v[0]
return v[0]
def find_uri_for_artifact(self, artifact):
if artifact.version_by_spec:
artifact.version = self.find_version_by_spec(artifact)
if artifact.version == "latest":
artifact.version = self.find_latest_version_available(artifact)
if artifact.is_snapshot():
if self.local:
return self._uri_for_artifact(artifact, artifact.version)
path = "/%s/%s" % (artifact.path(), self.metadata_file_name)
content = self._getContent(self.base + path, "Failed to retrieve the maven metadata file: " + path)
xml = etree.fromstring(content)
for snapshotArtifact in xml.xpath("/metadata/versioning/snapshotVersions/snapshotVersion"):
classifier = snapshotArtifact.xpath("classifier/text()")
artifact_classifier = classifier[0] if classifier else ''
extension = snapshotArtifact.xpath("extension/text()")
artifact_extension = extension[0] if extension else ''
if artifact_classifier == artifact.classifier and artifact_extension == artifact.extension:
return self._uri_for_artifact(artifact, snapshotArtifact.xpath("value/text()")[0])
timestamp_xmlpath = xml.xpath("/metadata/versioning/snapshot/timestamp/text()")
if timestamp_xmlpath:
timestamp = timestamp_xmlpath[0]
build_number = xml.xpath("/metadata/versioning/snapshot/buildNumber/text()")[0]
return self._uri_for_artifact(artifact, artifact.version.replace("SNAPSHOT", timestamp + "-" + build_number))
return self._uri_for_artifact(artifact, artifact.version)
def _uri_for_artifact(self, artifact, version=None):
if artifact.is_snapshot() and not version:
raise ValueError("Expected uniqueversion for snapshot artifact " + str(artifact))
elif not artifact.is_snapshot():
version = artifact.version
if artifact.classifier:
return posixpath.join(self.base, artifact.path(), artifact.artifact_id + "-" + version + "-" + artifact.classifier + "." + artifact.extension)
return posixpath.join(self.base, artifact.path(), artifact.artifact_id + "-" + version + "." + artifact.extension)
# for small files, directly get the full content
def _getContent(self, url, failmsg, force=True):
if self.local:
parsed_url = urlparse(url)
if os.path.isfile(parsed_url.path):
with io.open(parsed_url.path, 'rb') as f:
return f.read()
if force:
raise ValueError(failmsg + " because can not find file: " + url)
return None
response = self._request(url, failmsg, force)
if response:
return response.read()
return None
# only for HTTP request
def _request(self, url, failmsg, force=True):
url_to_use = url
parsed_url = urlparse(url)
if parsed_url.scheme == 's3':
parsed_url = urlparse(url)
bucket_name = parsed_url.netloc
key_name = parsed_url.path[1:]
client = boto3.client('s3', aws_access_key_id=self.module.params.get('username', ''), aws_secret_access_key=self.module.params.get('password', ''))
url_to_use = client.generate_presigned_url('get_object', Params={'Bucket': bucket_name, 'Key': key_name}, ExpiresIn=10)
req_timeout = self.module.params.get('timeout')
# Hack to add parameters in the way that fetch_url expects
self.module.params['url_username'] = self.module.params.get('username', '')
self.module.params['url_password'] = self.module.params.get('password', '')
self.module.params['http_agent'] = self.user_agent
response, info = fetch_url(self.module, url_to_use, timeout=req_timeout, headers=self.headers)
if info['status'] == 200:
return response
if force:
raise ValueError(failmsg + " because of " + info['msg'] + "for URL " + url_to_use)
return None
def download(self, tmpdir, artifact, verify_download, filename=None):
if (not artifact.version and not artifact.version_by_spec) or artifact.version == "latest":
artifact = Artifact(artifact.group_id, artifact.artifact_id, self.find_latest_version_available(artifact), None,
artifact.classifier, artifact.extension)
url = self.find_uri_for_artifact(artifact)
tempfd, tempname = tempfile.mkstemp(dir=tmpdir)
try:
# copy to temp file
if self.local:
parsed_url = urlparse(url)
if os.path.isfile(parsed_url.path):
shutil.copy2(parsed_url.path, tempname)
else:
return "Can not find local file: " + parsed_url.path
else:
response = self._request(url, "Failed to download artifact " + str(artifact))
with os.fdopen(tempfd, 'wb') as f:
shutil.copyfileobj(response, f)
if verify_download:
invalid_md5 = self.is_invalid_md5(tempname, url)
if invalid_md5:
# if verify_change was set, the previous file would be deleted
os.remove(tempname)
return invalid_md5
except Exception as e:
os.remove(tempname)
raise e
# all good, now copy temp file to target
shutil.move(tempname, artifact.get_filename(filename))
return None
def is_invalid_md5(self, file, remote_url):
if os.path.exists(file):
local_md5 = self._local_md5(file)
if self.local:
parsed_url = urlparse(remote_url)
remote_md5 = self._local_md5(parsed_url.path)
else:
try:
remote_md5 = to_text(self._getContent(remote_url + '.md5', "Failed to retrieve MD5", False), errors='strict')
except UnicodeError as e:
return "Cannot retrieve a valid md5 from %s: %s" % (remote_url, to_native(e))
if not remote_md5:
return "Cannot find md5 from " + remote_url
try:
# The remote md5 file may contain either the bare md5 or "md5 filename";
# keep only the md5 part.
remote_md5 = remote_md5.split(None)[0]
except IndexError:
# remote_md5 was empty; keep the original string. This should not
# happen since remote_md5 was checked above.
pass
if local_md5 == remote_md5:
return None
else:
return "Checksum does not match: we computed " + local_md5 + "but the repository states " + remote_md5
return "Path does not exist: " + file
def _local_md5(self, file):
md5 = hashlib.md5()
with io.open(file, 'rb') as f:
for chunk in iter(lambda: f.read(8192), b''):
md5.update(chunk)
return md5.hexdigest()
def main():
module = AnsibleModule(
argument_spec=dict(
group_id=dict(required=True),
artifact_id=dict(required=True),
version=dict(default=None),
version_by_spec=dict(default=None),
classifier=dict(default=''),
extension=dict(default='jar'),
repository_url=dict(default='http://repo1.maven.org/maven2'),
username=dict(default=None, aliases=['aws_secret_key']),
password=dict(default=None, no_log=True, aliases=['aws_secret_access_key']),
headers=dict(type='dict'),
state=dict(default="present", choices=["present", "absent"]), # TODO - Implement a "latest" state
timeout=dict(default=10, type='int'),
dest=dict(type="path", required=True),
validate_certs=dict(required=False, default=True, type='bool'),
keep_name=dict(required=False, default=False, type='bool'),
verify_checksum=dict(required=False, default='download', choices=['never', 'download', 'change', 'always'])
),
add_file_common_args=True,
mutually_exclusive=([('version', 'version_by_spec')])
)
if not HAS_LXML_ETREE:
module.fail_json(msg=missing_required_lib('lxml'), exception=LXML_ETREE_IMP_ERR)
if module.params['version_by_spec'] and not HAS_SEMANTIC_VERSION:
module.fail_json(msg=missing_required_lib('semantic_version'), exception=SEMANTIC_VERSION_IMP_ERR)
repository_url = module.params["repository_url"]
if not repository_url:
repository_url = "http://repo1.maven.org/maven2"
try:
parsed_url = urlparse(repository_url)
except AttributeError as e:
module.fail_json(msg='url parsing went wrong %s' % e)
local = parsed_url.scheme == "file"
if parsed_url.scheme == 's3' and not HAS_BOTO:
module.fail_json(msg=missing_required_lib('boto3', reason='when using s3:// repository URLs'),
exception=BOTO_IMP_ERR)
group_id = module.params["group_id"]
artifact_id = module.params["artifact_id"]
version = module.params["version"]
version_by_spec = module.params["version_by_spec"]
classifier = module.params["classifier"]
extension = module.params["extension"]
headers = module.params['headers']
state = module.params["state"]
dest = module.params["dest"]
b_dest = to_bytes(dest, errors='surrogate_or_strict')
keep_name = module.params["keep_name"]
verify_checksum = module.params["verify_checksum"]
verify_download = verify_checksum in ['download', 'always']
verify_change = verify_checksum in ['change', 'always']
downloader = MavenDownloader(module, repository_url, local, headers)
if not version_by_spec and not version:
version = "latest"
try:
artifact = Artifact(group_id, artifact_id, version, version_by_spec, classifier, extension)
except ValueError as e:
module.fail_json(msg=e.args[0])
changed = False
prev_state = "absent"
if dest.endswith(os.sep):
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if not os.path.exists(b_dest):
(pre_existing_dir, new_directory_list) = split_pre_existing_dir(dest)
os.makedirs(b_dest)
directory_args = module.load_file_common_arguments(module.params)
directory_mode = module.params["directory_mode"]
if directory_mode is not None:
directory_args['mode'] = directory_mode
else:
directory_args['mode'] = None
changed = adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed)
if os.path.isdir(b_dest):
version_part = version
if version == 'latest':
version_part = downloader.find_latest_version_available(artifact)
elif version_by_spec:
version_part = downloader.find_version_by_spec(artifact)
filename = "{artifact_id}{version_part}{classifier}.{extension}".format(
artifact_id=artifact_id,
version_part="-{0}".format(version_part) if keep_name else "",
classifier="-{0}".format(classifier) if classifier else "",
extension=extension
)
dest = posixpath.join(dest, filename)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.lexists(b_dest) and ((not verify_change) or not downloader.is_invalid_md5(dest, downloader.find_uri_for_artifact(artifact))):
prev_state = "present"
if prev_state == "absent":
try:
download_error = downloader.download(module.tmpdir, artifact, verify_download, b_dest)
if download_error is None:
changed = True
else:
module.fail_json(msg="Cannot retrieve the artifact to destination: " + download_error)
except ValueError as e:
module.fail_json(msg=e.args[0])
module.params['dest'] = dest
file_args = module.load_file_common_arguments(module.params)
changed = module.set_fs_attributes_if_different(file_args, changed)
if changed:
module.exit_json(state=state, dest=dest, group_id=group_id, artifact_id=artifact_id, version=version, classifier=classifier,
extension=extension, repository_url=repository_url, changed=changed)
else:
module.exit_json(state=state, dest=dest, changed=changed)
if __name__ == '__main__':
main()
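The `version_by_spec` handling above translates Maven range syntax into `semantic_version` specs; `find_version_by_spec` turns `[3.8,4.0)` into the spec string `>=3.8,<4.0`. A small illustration of the resulting library call (assuming `semantic_version` with the pre-3.0 `Spec` class the module imports):
```python
from semantic_version import Spec, Version

# Maven range "[3.8,4.0)" is mapped by the module to ">=3.8,<4.0".
spec = Spec(">=3.8,<4.0")
candidates = [Version("3.7.0"), Version("3.8.0"), Version("3.8.2"), Version("4.0.0")]
# select() returns the highest candidate inside the range, here 3.8.2.
print(spec.select(candidates))
```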
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,954 |
ansible_virtualization_ facts inside podman container are wrong
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When running ansible inside a container with podman, I get
```
"ansible_virtualization_role": "host",
"ansible_virtualization_type": "kvm",
```
It does not detect that it is a guest running inside a container
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
setup module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
[root@ff0affb75461 /]# ansible-config dump --only-changed
[root@ff0affb75461 /]#
```
##### OS / ENVIRONMENT
podman version 1.4.4
from latest centos 7 extras
`$ podman run -ti centos:7`
##### STEPS TO REPRODUCE
inside a podman container I get
```
[root@ff0affb75461 /]# ansible localhost -m setup|grep 'ansible_virtualization'
"ansible_virtualization_role": "host",
"ansible_virtualization_type": "kvm",
```
inside a docker container I get
```
[root@f443309ca58b /]# ansible localhost -m setup|grep 'ansible_virtualization'
"ansible_virtualization_role": "guest",
"ansible_virtualization_type": "docker",
```
##### EXPECTED RESULTS
```
[root@ff0affb75461 /]# ansible localhost -m setup|grep 'ansible_virtualization'
"ansible_virtualization_role": "guest",
"ansible_virtualization_type": "podman",
```
##### ACTUAL RESULTS
```paste below
[root@ff0affb75461 /]# ansible localhost -m setup|grep 'ansible_virtualization'
"ansible_virtualization_role": "host",
"ansible_virtualization_type": "kvm",
```
|
https://github.com/ansible/ansible/issues/64954
|
https://github.com/ansible/ansible/pull/64981
|
f8216db21f079f60656e5a4df09276d4b311b855
|
2e82989b3bd0c6b8e124c8d769e5a0210cb5c086
| 2019-11-17T08:37:45Z |
python
| 2019-12-10T05:56:34Z |
changelogs/fragments/64954_virtualization_podman.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,954 |
ansible_virtualization_ facts inside podman container are wrong
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
When running ansible inside a container with podman, I get
```
"ansible_virtualization_role": "host",
"ansible_virtualization_type": "kvm",
```
It does not detect that it is a guest running inside a container
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
setup module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
[root@ff0affb75461 /]# ansible-config dump --only-changed
[root@ff0affb75461 /]#
```
##### OS / ENVIRONMENT
podman version 1.4.4
from latest centos 7 extras
`$ podman run -ti centos:7`
##### STEPS TO REPRODUCE
inside a podman container I get
```
[root@ff0affb75461 /]# ansible localhost -m setup|grep 'ansible_virtualization'
"ansible_virtualization_role": "host",
"ansible_virtualization_type": "kvm",
```
inside a docker container I get
```
[root@f443309ca58b /]# ansible localhost -m setup|grep 'ansible_virtualization'
"ansible_virtualization_role": "guest",
"ansible_virtualization_type": "docker",
```
##### EXPECTED RESULTS
```
[root@ff0affb75461 /]# ansible localhost -m setup|grep 'ansible_virtualization'
"ansible_virtualization_role": "guest",
"ansible_virtualization_type": "podman",
```
##### ACTUAL RESULTS
```paste below
[root@ff0affb75461 /]# ansible localhost -m setup|grep 'ansible_virtualization'
"ansible_virtualization_role": "host",
"ansible_virtualization_type": "kvm",
```
|
https://github.com/ansible/ansible/issues/64954
|
https://github.com/ansible/ansible/pull/64981
|
f8216db21f079f60656e5a4df09276d4b311b855
|
2e82989b3bd0c6b8e124c8d769e5a0210cb5c086
| 2019-11-17T08:37:45Z |
python
| 2019-12-10T05:56:34Z |
lib/ansible/module_utils/facts/virtual/linux.py
|
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import glob
import os
import re
from ansible.module_utils.facts.virtual.base import Virtual, VirtualCollector
from ansible.module_utils.facts.utils import get_file_content, get_file_lines
class LinuxVirtual(Virtual):
"""
This is a Linux-specific subclass of Virtual. It defines
- virtualization_type
- virtualization_role
"""
platform = 'Linux'
# For more information, check: http://people.redhat.com/~rjones/virt-what/
def get_virtual_facts(self):
virtual_facts = {}
# lxc/docker
if os.path.exists('/proc/1/cgroup'):
for line in get_file_lines('/proc/1/cgroup'):
if re.search(r'/docker(/|-[0-9a-f]+\.scope)', line):
virtual_facts['virtualization_type'] = 'docker'
virtual_facts['virtualization_role'] = 'guest'
return virtual_facts
if re.search('/lxc/', line) or re.search('/machine.slice/machine-lxc', line):
virtual_facts['virtualization_type'] = 'lxc'
virtual_facts['virtualization_role'] = 'guest'
return virtual_facts
# lxc does not always appear in cgroups anymore but sets 'container=lxc' environment var, requires root privs
if os.path.exists('/proc/1/environ'):
for line in get_file_lines('/proc/1/environ'):
if re.search('container=lxc', line):
virtual_facts['virtualization_type'] = 'lxc'
virtual_facts['virtualization_role'] = 'guest'
return virtual_facts
if os.path.exists('/proc/vz') and not os.path.exists('/proc/lve'):
virtual_facts['virtualization_type'] = 'openvz'
if os.path.exists('/proc/bc'):
virtual_facts['virtualization_role'] = 'host'
else:
virtual_facts['virtualization_role'] = 'guest'
return virtual_facts
systemd_container = get_file_content('/run/systemd/container')
if systemd_container:
virtual_facts['virtualization_type'] = systemd_container
virtual_facts['virtualization_role'] = 'guest'
return virtual_facts
if os.path.exists("/proc/xen"):
virtual_facts['virtualization_type'] = 'xen'
virtual_facts['virtualization_role'] = 'guest'
try:
for line in get_file_lines('/proc/xen/capabilities'):
if "control_d" in line:
virtual_facts['virtualization_role'] = 'host'
except IOError:
pass
return virtual_facts
# assume guest for this block
virtual_facts['virtualization_role'] = 'guest'
product_name = get_file_content('/sys/devices/virtual/dmi/id/product_name')
if product_name in ('KVM', 'Bochs', 'AHV'):
virtual_facts['virtualization_type'] = 'kvm'
return virtual_facts
if product_name == 'RHEV Hypervisor':
virtual_facts['virtualization_type'] = 'RHEV'
return virtual_facts
if product_name in ('VMware Virtual Platform', 'VMware7,1'):
virtual_facts['virtualization_type'] = 'VMware'
return virtual_facts
if product_name in ('OpenStack Compute', 'OpenStack Nova'):
virtual_facts['virtualization_type'] = 'openstack'
return virtual_facts
bios_vendor = get_file_content('/sys/devices/virtual/dmi/id/bios_vendor')
if bios_vendor == 'Xen':
virtual_facts['virtualization_type'] = 'xen'
return virtual_facts
if bios_vendor == 'innotek GmbH':
virtual_facts['virtualization_type'] = 'virtualbox'
return virtual_facts
if bios_vendor in ('Amazon EC2', 'DigitalOcean', 'Hetzner'):
virtual_facts['virtualization_type'] = 'kvm'
return virtual_facts
sys_vendor = get_file_content('/sys/devices/virtual/dmi/id/sys_vendor')
KVM_SYS_VENDORS = ('QEMU', 'oVirt', 'Amazon EC2', 'DigitalOcean', 'Google', 'Scaleway', 'Nutanix')
if sys_vendor in KVM_SYS_VENDORS:
virtual_facts['virtualization_type'] = 'kvm'
return virtual_facts
# FIXME: This does also match hyperv
if sys_vendor == 'Microsoft Corporation':
virtual_facts['virtualization_type'] = 'VirtualPC'
return virtual_facts
if sys_vendor == 'Parallels Software International Inc.':
virtual_facts['virtualization_type'] = 'parallels'
return virtual_facts
if sys_vendor == 'OpenStack Foundation':
virtual_facts['virtualization_type'] = 'openstack'
return virtual_facts
# unassume guest
del virtual_facts['virtualization_role']
if os.path.exists('/proc/self/status'):
for line in get_file_lines('/proc/self/status'):
if re.match(r'^VxID:\s+\d+', line):
virtual_facts['virtualization_type'] = 'linux_vserver'
if re.match(r'^VxID:\s+0', line):
virtual_facts['virtualization_role'] = 'host'
else:
virtual_facts['virtualization_role'] = 'guest'
return virtual_facts
if os.path.exists('/proc/cpuinfo'):
for line in get_file_lines('/proc/cpuinfo'):
if re.match('^model name.*QEMU Virtual CPU', line):
virtual_facts['virtualization_type'] = 'kvm'
elif re.match('^vendor_id.*User Mode Linux', line):
virtual_facts['virtualization_type'] = 'uml'
elif re.match('^model name.*UML', line):
virtual_facts['virtualization_type'] = 'uml'
elif re.match('^machine.*CHRP IBM pSeries .emulated by qemu.', line):
virtual_facts['virtualization_type'] = 'kvm'
elif re.match('^vendor_id.*PowerVM Lx86', line):
virtual_facts['virtualization_type'] = 'powervm_lx86'
elif re.match('^vendor_id.*IBM/S390', line):
virtual_facts['virtualization_type'] = 'PR/SM'
lscpu = self.module.get_bin_path('lscpu')
if lscpu:
rc, out, err = self.module.run_command(["lscpu"])
if rc == 0:
for line in out.splitlines():
data = line.split(":", 1)
key = data[0].strip()
if key == 'Hypervisor':
virtual_facts['virtualization_type'] = data[1].strip()
else:
virtual_facts['virtualization_type'] = 'ibm_systemz'
else:
continue
if virtual_facts['virtualization_type'] == 'PR/SM':
virtual_facts['virtualization_role'] = 'LPAR'
else:
virtual_facts['virtualization_role'] = 'guest'
return virtual_facts
# Beware that we can have both kvm and virtualbox running on a single system
if os.path.exists("/proc/modules") and os.access('/proc/modules', os.R_OK):
modules = []
for line in get_file_lines("/proc/modules"):
data = line.split(" ", 1)
modules.append(data[0])
if 'kvm' in modules:
virtual_facts['virtualization_type'] = 'kvm'
virtual_facts['virtualization_role'] = 'host'
if os.path.isdir('/rhev/'):
# Check whether this is a RHEV hypervisor (is vdsm running ?)
for f in glob.glob('/proc/[0-9]*/comm'):
try:
with open(f) as virt_fh:
comm_content = virt_fh.read().rstrip()
if comm_content == 'vdsm':
virtual_facts['virtualization_type'] = 'RHEV'
break
except Exception:
pass
return virtual_facts
if 'vboxdrv' in modules:
virtual_facts['virtualization_type'] = 'virtualbox'
virtual_facts['virtualization_role'] = 'host'
return virtual_facts
if 'virtio' in modules:
virtual_facts['virtualization_type'] = 'kvm'
virtual_facts['virtualization_role'] = 'guest'
return virtual_facts
# In older Linux Kernel versions, /sys filesystem is not available
# dmidecode is the safest option to parse virtualization related values
dmi_bin = self.module.get_bin_path('dmidecode')
# We still want to continue even if dmidecode is not available
if dmi_bin is not None:
(rc, out, err) = self.module.run_command('%s -s system-product-name' % dmi_bin)
if rc == 0:
# Strip out commented lines (specific dmidecode output)
vendor_name = ''.join([line.strip() for line in out.splitlines() if not line.startswith('#')])
if vendor_name.startswith('VMware'):
virtual_facts['virtualization_type'] = 'VMware'
virtual_facts['virtualization_role'] = 'guest'
return virtual_facts
# If none of the above matches, return 'NA' for virtualization_type
# and virtualization_role. This allows for proper grouping.
virtual_facts['virtualization_type'] = 'NA'
virtual_facts['virtualization_role'] = 'NA'
return virtual_facts
class LinuxVirtualCollector(VirtualCollector):
_fact_class = LinuxVirtual
_platform = 'Linux'
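None of the container checks in `get_virtual_facts` above match podman: its containers typically expose `libpod` in `/proc/1/cgroup` and create `/run/.containerenv` inside the container. A hedged sketch of an additional guard along those lines (paths and patterns are assumptions based on the report, not the actual patch):
```python
import os
import re

def looks_like_podman():
    # /run/.containerenv is created by podman inside its containers.
    if os.path.exists('/run/.containerenv'):
        return True
    # Many podman setups also show 'libpod-<id>' in the PID 1 cgroup path.
    try:
        with open('/proc/1/cgroup') as f:
            return any(re.search(r'/libpod-[0-9a-f]+', line) for line in f)
    except IOError:
        return False
```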
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,249 |
Doc for win_format does not mention it is for Windows v6.2 or higher
|
##### SUMMARY
Documentation for win_format does not mention that it only works on Windows v6.2 or higher - this is only apparent from looking at the code.
The similar module win_partition includes the line:
A minimum Operating System Version of 6.2 is required to use this module. To check if your OS is compatible, see https://docs.microsoft.com/en-us/windows/desktop/sysinfo/operating-system-version.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
win_format
##### ANSIBLE VERSION
2.8.4
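For reference, the 6.2 floor mentioned above is straightforward to assert programmatically; a generic sketch (the real module implements its checks in PowerShell, so this is illustrative only):
```python
from distutils.version import LooseVersion

def meets_windows_minimum(os_version, minimum='6.2'):
    # Windows 8 / Server 2012 report kernel version 6.2, the documented
    # floor for the storage cmdlets these modules rely on.
    return LooseVersion(os_version) >= LooseVersion(minimum)

print(meets_windows_minimum('6.1.7601'))    # False -- Server 2008 R2
print(meets_windows_minimum('10.0.17763'))  # True  -- Server 2019
```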
|
https://github.com/ansible/ansible/issues/62249
|
https://github.com/ansible/ansible/pull/65617
|
88bba21708668487fbc6fdaeb2bd148b6208cecc
|
02539c9a3741af7f11bff9ee31d23364ed39befd
| 2019-09-13T10:17:07Z |
python
| 2019-12-10T15:16:35Z |
lib/ansible/modules/windows/win_format.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Varun Chopra (@chopraaa) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
module: win_format
version_added: '2.8'
short_description: Formats an existing volume or a new volume on an existing partition on Windows
description:
- The M(win_format) module formats an existing volume or a new volume on an existing partition on Windows.
options:
drive_letter:
description:
- Used to specify the drive letter of the volume to be formatted.
type: str
path:
description:
- Used to specify the path to the volume to be formatted.
type: str
label:
description:
- Used to specify the label of the volume to be formatted.
type: str
new_label:
description:
- Used to specify the new file system label of the formatted volume.
type: str
file_system:
description:
- Used to specify the file system to be used when formatting the target volume.
type: str
choices: [ ntfs, refs, exfat, fat32, fat ]
allocation_unit_size:
description:
- Specifies the cluster size to use when formatting the volume.
- If no cluster size is specified when you format a partition, defaults are selected based on
the size of the partition.
- This value must be a multiple of the physical sector size of the disk.
type: int
large_frs:
description:
- Specifies that large File Record System (FRS) should be used.
type: bool
compress:
description:
- Enable compression on the resulting NTFS volume.
- NTFS compression is not supported where I(allocation_unit_size) is more than 4096.
type: bool
integrity_streams:
description:
- Enable integrity streams on the resulting ReFS volume.
type: bool
full:
description:
- A full format writes to every sector of the disk, takes much longer to perform than the
default (quick) format, and is not recommended on storage that is thinly provisioned.
- Specify C(true) for full format.
type: bool
force:
description:
- Specify if formatting should be forced for volumes that are not created from new partitions
or if the source and target file system are different.
type: bool
notes:
- One of three parameters (I(drive_letter), I(path) and I(label)) are mandatory to identify the target
volume but more than one cannot be specified at the same time.
- This module is idempotent if I(force) is not specified and file system labels remain preserved.
- For more information, see U(https://docs.microsoft.com/en-us/previous-versions/windows/desktop/stormgmt/format-msft-volume)
seealso:
- module: win_disk_facts
- module: win_partition
author:
- Varun Chopra (@chopraaa) <[email protected]>
'''
EXAMPLES = r'''
- name: Create a partition with drive letter D and size 5 GiB
win_partition:
drive_letter: D
partition_size: 5 GiB
disk_number: 1
- name: Full format the newly created partition as NTFS and label it
win_format:
drive_letter: D
file_system: NTFS
new_label: Formatted
full: True
'''
RETURN = r'''
#
'''
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,293 |
group module return values undocumented
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
group module return values are undocumented.
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
group
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/aschultz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.3 (default, Apr 09 2019, 05:18:21) [GCC]
```
|
https://github.com/ansible/ansible/issues/65293
|
https://github.com/ansible/ansible/pull/65294
|
f89db2af9978e2acaa8c56ee6fb6dc9908119d1a
|
d906fdeba275377ba54345a610d57275149584f8
| 2019-11-26T20:06:04Z |
python
| 2019-12-10T15:32:43Z |
lib/ansible/modules/system/group.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Stephen Fromm <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = '''
---
module: group
version_added: "0.0.2"
short_description: Add or remove groups
requirements:
- groupadd
- groupdel
- groupmod
description:
- Manage presence of groups on a host.
- For Windows targets, use the M(win_group) module instead.
options:
name:
description:
- Name of the group to manage.
type: str
required: true
gid:
description:
- Optional I(GID) to set for the group.
type: int
state:
description:
- Whether the group should be present or not on the remote host.
type: str
choices: [ absent, present ]
default: present
system:
description:
- If I(yes), indicates that the group created is a system group.
type: bool
default: no
local:
description:
- Forces the use of "local" command alternatives on platforms that implement it.
- This is useful in environments that use centralized authentication when you want to manipulate the local groups.
(e.g. it uses C(lgroupadd) instead of C(groupadd)).
- This requires that these commands exist on the targeted host, otherwise it will be a fatal error.
type: bool
default: no
version_added: "2.6"
non_unique:
description:
- This option allows to change the group ID to a non-unique value. Requires C(gid).
- Not supported on macOS or BusyBox distributions.
type: bool
default: no
version_added: "2.8"
seealso:
- module: user
- module: win_group
author:
- Stephen Fromm (@sfromm)
'''
EXAMPLES = '''
- name: Ensure group "somegroup" exists
group:
name: somegroup
state: present
'''
import grp
from ansible.module_utils._text import to_bytes
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.sys_info import get_platform_subclass
class Group(object):
"""
This is a generic Group manipulation class that is subclassed
based on platform.
A subclass may wish to override the following action methods:-
- group_del()
- group_add()
- group_mod()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None
GROUPFILE = '/etc/group'
def __new__(cls, *args, **kwargs):
new_cls = get_platform_subclass(Group)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.name = module.params['name']
self.gid = module.params['gid']
self.system = module.params['system']
self.local = module.params['local']
self.non_unique = module.params['non_unique']
def execute_command(self, cmd):
return self.module.run_command(cmd)
def group_del(self):
if self.local:
command_name = 'lgroupdel'
else:
command_name = 'groupdel'
cmd = [self.module.get_bin_path(command_name, True), self.name]
return self.execute_command(cmd)
def _local_check_gid_exists(self):
if self.gid:
for gr in grp.getgrall():
if self.gid == gr.gr_gid and self.name != gr.gr_name:
self.module.fail_json(msg="GID '{0}' already exists with group '{1}'".format(self.gid, gr.gr_name))
def group_add(self, **kwargs):
if self.local:
command_name = 'lgroupadd'
self._local_check_gid_exists()
else:
command_name = 'groupadd'
cmd = [self.module.get_bin_path(command_name, True)]
for key in kwargs:
if key == 'gid' and kwargs[key] is not None:
cmd.append('-g')
cmd.append(str(kwargs[key]))
if self.non_unique:
cmd.append('-o')
elif key == 'system' and kwargs[key] is True:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def group_mod(self, **kwargs):
if self.local:
command_name = 'lgroupmod'
self._local_check_gid_exists()
else:
command_name = 'groupmod'
cmd = [self.module.get_bin_path(command_name, True)]
info = self.group_info()
for key in kwargs:
if key == 'gid':
if kwargs[key] is not None and info[2] != int(kwargs[key]):
cmd.append('-g')
cmd.append(str(kwargs[key]))
if self.non_unique:
cmd.append('-o')
if len(cmd) == 1:
return (None, '', '')
if self.module.check_mode:
return (0, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
def group_exists(self):
try:
if grp.getgrnam(self.name):
return True
except KeyError:
return False
def group_info(self):
if not self.group_exists():
return False
try:
info = list(grp.getgrnam(self.name))
except KeyError:
return False
return info
# ===========================================
class SunOS(Group):
"""
This is a SunOS Group manipulation class. Solaris doesn't have
the 'system' group concept.
This overrides the following methods from the generic class:-
- group_add()
"""
platform = 'SunOS'
distribution = None
GROUPFILE = '/etc/group'
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('groupadd', True)]
for key in kwargs:
if key == 'gid' and kwargs[key] is not None:
cmd.append('-g')
cmd.append(str(kwargs[key]))
if self.non_unique:
cmd.append('-o')
cmd.append(self.name)
return self.execute_command(cmd)
# ===========================================
class AIX(Group):
"""
This is a AIX Group manipulation class.
This overrides the following methods from the generic class:-
- group_del()
- group_add()
- group_mod()
"""
platform = 'AIX'
distribution = None
GROUPFILE = '/etc/group'
def group_del(self):
cmd = [self.module.get_bin_path('rmgroup', True), self.name]
return self.execute_command(cmd)
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('mkgroup', True)]
for key in kwargs:
if key == 'gid' and kwargs[key] is not None:
cmd.append('id=' + str(kwargs[key]))
elif key == 'system' and kwargs[key] is True:
cmd.append('-a')
cmd.append(self.name)
return self.execute_command(cmd)
def group_mod(self, **kwargs):
cmd = [self.module.get_bin_path('chgroup', True)]
info = self.group_info()
for key in kwargs:
if key == 'gid':
if kwargs[key] is not None and info[2] != int(kwargs[key]):
cmd.append('id=' + str(kwargs[key]))
if len(cmd) == 1:
return (None, '', '')
if self.module.check_mode:
return (0, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
# ===========================================
class FreeBsdGroup(Group):
"""
This is a FreeBSD Group manipulation class.
This overrides the following methods from the generic class:-
- group_del()
- group_add()
- group_mod()
"""
platform = 'FreeBSD'
distribution = None
GROUPFILE = '/etc/group'
def group_del(self):
cmd = [self.module.get_bin_path('pw', True), 'groupdel', self.name]
return self.execute_command(cmd)
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('pw', True), 'groupadd', self.name]
if self.gid is not None:
cmd.append('-g')
cmd.append(str(self.gid))
if self.non_unique:
cmd.append('-o')
return self.execute_command(cmd)
def group_mod(self, **kwargs):
cmd = [self.module.get_bin_path('pw', True), 'groupmod', self.name]
info = self.group_info()
cmd_len = len(cmd)
if self.gid is not None and int(self.gid) != info[2]:
cmd.append('-g')
cmd.append(str(self.gid))
if self.non_unique:
cmd.append('-o')
# modify the group if cmd will do anything
if cmd_len != len(cmd):
if self.module.check_mode:
return (0, '', '')
return self.execute_command(cmd)
return (None, '', '')
class DragonFlyBsdGroup(FreeBsdGroup):
"""
This is a DragonFlyBSD Group manipulation class.
It inherits all behaviors from FreeBsdGroup class.
"""
platform = 'DragonFly'
# ===========================================
class DarwinGroup(Group):
"""
This is a Mac macOS Darwin Group manipulation class.
This overrides the following methods from the generic class:-
- group_del()
- group_add()
- group_mod()
group manipulation are done using dseditgroup(1).
"""
platform = 'Darwin'
distribution = None
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('dseditgroup', True)]
cmd += ['-o', 'create']
if self.gid is not None:
cmd += ['-i', str(self.gid)]
elif 'system' in kwargs and kwargs['system'] is True:
gid = self.get_lowest_available_system_gid()
if gid is not False:
self.gid = str(gid)
cmd += ['-i', str(self.gid)]
cmd += ['-L', self.name]
(rc, out, err) = self.execute_command(cmd)
return (rc, out, err)
def group_del(self):
cmd = [self.module.get_bin_path('dseditgroup', True)]
cmd += ['-o', 'delete']
cmd += ['-L', self.name]
(rc, out, err) = self.execute_command(cmd)
return (rc, out, err)
def group_mod(self, gid=None):
info = self.group_info()
if self.gid is not None and int(self.gid) != info[2]:
cmd = [self.module.get_bin_path('dseditgroup', True)]
cmd += ['-o', 'edit']
if gid is not None:
cmd += ['-i', str(gid)]
cmd += ['-L', self.name]
(rc, out, err) = self.execute_command(cmd)
return (rc, out, err)
return (None, '', '')
def get_lowest_available_system_gid(self):
# check for lowest available system gid (< 500)
try:
cmd = [self.module.get_bin_path('dscl', True)]
cmd += ['/Local/Default', '-list', '/Groups', 'PrimaryGroupID']
(rc, out, err) = self.execute_command(cmd)
lines = out.splitlines()
highest = 0
for group_info in lines:
parts = group_info.split(' ')
if len(parts) > 1:
gid = int(parts[-1])
if gid > highest and gid < 500:
highest = gid
if highest == 0 or highest == 499:
return False
return (highest + 1)
except Exception:
return False
class OpenBsdGroup(Group):
"""
This is a OpenBSD Group manipulation class.
This overrides the following methods from the generic class:-
- group_del()
- group_add()
- group_mod()
"""
platform = 'OpenBSD'
distribution = None
GROUPFILE = '/etc/group'
def group_del(self):
cmd = [self.module.get_bin_path('groupdel', True), self.name]
return self.execute_command(cmd)
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('groupadd', True)]
if self.gid is not None:
cmd.append('-g')
cmd.append(str(self.gid))
if self.non_unique:
cmd.append('-o')
cmd.append(self.name)
return self.execute_command(cmd)
def group_mod(self, **kwargs):
cmd = [self.module.get_bin_path('groupmod', True)]
info = self.group_info()
if self.gid is not None and int(self.gid) != info[2]:
cmd.append('-g')
cmd.append(str(self.gid))
if self.non_unique:
cmd.append('-o')
if len(cmd) == 1:
return (None, '', '')
if self.module.check_mode:
return (0, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
# ===========================================
class NetBsdGroup(Group):
"""
This is a NetBSD Group manipulation class.
This overrides the following methods from the generic class:-
- group_del()
- group_add()
- group_mod()
"""
platform = 'NetBSD'
distribution = None
GROUPFILE = '/etc/group'
def group_del(self):
cmd = [self.module.get_bin_path('groupdel', True), self.name]
return self.execute_command(cmd)
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('groupadd', True)]
if self.gid is not None:
cmd.append('-g')
cmd.append(str(self.gid))
if self.non_unique:
cmd.append('-o')
cmd.append(self.name)
return self.execute_command(cmd)
def group_mod(self, **kwargs):
cmd = [self.module.get_bin_path('groupmod', True)]
info = self.group_info()
if self.gid is not None and int(self.gid) != info[2]:
cmd.append('-g')
cmd.append(str(self.gid))
if self.non_unique:
cmd.append('-o')
if len(cmd) == 1:
return (None, '', '')
if self.module.check_mode:
return (0, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
# ===========================================
class BusyBoxGroup(Group):
"""
BusyBox group manipulation class for systems that have addgroup and delgroup.
It overrides the following methods:
- group_add()
- group_del()
- group_mod()
"""
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('addgroup', True)]
if self.gid is not None:
cmd.extend(['-g', str(self.gid)])
if self.system:
cmd.append('-S')
cmd.append(self.name)
return self.execute_command(cmd)
def group_del(self):
cmd = [self.module.get_bin_path('delgroup', True), self.name]
return self.execute_command(cmd)
def group_mod(self, **kwargs):
# Since there is no groupmod command, modify /etc/group directly
info = self.group_info()
if self.gid is not None and self.gid != info[2]:
with open('/etc/group', 'rb') as f:
b_groups = f.read()
b_name = to_bytes(self.name)
b_current_group_string = b'%s:x:%d:' % (b_name, info[2])
b_new_group_string = b'%s:x:%d:' % (b_name, self.gid)
if b':%d:' % self.gid in b_groups:
self.module.fail_json(msg="gid '{gid}' in use".format(gid=self.gid))
if self.module.check_mode:
return 0, '', ''
b_new_groups = b_groups.replace(b_current_group_string, b_new_group_string)
with open('/etc/group', 'wb') as f:
f.write(b_new_groups)
return 0, '', ''
return None, '', ''
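# Illustrative example (not part of the module): moving group 'devs' from
# gid 500 to gid 501 turns the /etc/group line
#     b'devs:x:500:'  into  b'devs:x:501:'
# after first checking that b':501:' is not already present in the file.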
class AlpineGroup(BusyBoxGroup):
platform = 'Linux'
distribution = 'Alpine'
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'present']),
name=dict(type='str', required=True),
gid=dict(type='int'),
system=dict(type='bool', default=False),
local=dict(type='bool', default=False),
non_unique=dict(type='bool', default=False),
),
supports_check_mode=True,
required_if=[
['non_unique', True, ['gid']],
],
)
group = Group(module)
module.debug('Group instantiated - platform %s' % group.platform)
if group.distribution:
module.debug('Group instantiated - distribution %s' % group.distribution)
rc = None
out = ''
err = ''
result = {}
result['name'] = group.name
result['state'] = group.state
if group.state == 'absent':
if group.group_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = group.group_del()
if rc != 0:
module.fail_json(name=group.name, msg=err)
elif group.state == 'present':
if not group.group_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = group.group_add(gid=group.gid, system=group.system)
else:
(rc, out, err) = group.group_mod(gid=group.gid)
if rc is not None and rc != 0:
module.fail_json(name=group.name, msg=err)
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
if group.group_exists():
info = group.group_info()
result['system'] = group.system
result['gid'] = info[2]
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,643 |
VMware: Cautions on using VMware modules with standalone ESXi
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
As a caution when using VMware modules with standalone ESXi, I think it is necessary to add to the following document that the API is read-only in the case of a free license.
https://docs.ansible.com/ansible/latest/scenario_guides/vmware_scenarios/faq.html
If I create a VM by executing vmware_guest with the free license, the following error occurs.
```
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to create virtual machine due to product versioning restrictions: Current license or ESXi version prohibits execution of the requested operation."}
```
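A minimal pyVmomi sketch (connection details are placeholders) to check the license edition before attempting write operations:
```python
from pyVim.connect import SmartConnectNoSSL, Disconnect

# Placeholder host and credentials - adjust for your ESXi host
si = SmartConnectNoSSL(host='esxi.example.com', user='root', pwd='secret')
try:
    for lic in si.RetrieveContent().licenseManager.licenses:
        # The free hypervisor typically reports a free/eval editionKey;
        # API write operations fail on such hosts.
        print(lic.name, lic.editionKey)
finally:
    Disconnect(si)
```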
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs/docsite/rst/scenario_guides/vmware_scenarios/faq.rst
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
# ansible --version
ansible 2.10.0.dev0
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /root/devel/ansible/lib/ansible
executable location = /root/devel/ansible/bin/ansible
python version = 3.6.8 (default, Jul 1 2019, 16:43:04) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
##### ADDITIONAL INFORMATION
see:
* https://github.com/ansible/ansible/issues/42262
* https://blogs.vmware.com/vsphere/2012/02/introduction-to-the-vsphere-api-part-1.html
thanks
|
https://github.com/ansible/ansible/issues/64643
|
https://github.com/ansible/ansible/pull/65569
|
056b035c98e286ac060695034dfee4b07000add1
|
5ebce4672b32c0f69d91d2e3af9ebb0de0c9e0f2
| 2019-11-10T05:53:45Z |
python
| 2019-12-10T21:45:32Z |
docs/docsite/rst/scenario_guides/vmware_scenarios/faq.rst
|
.. _vmware_faq:
******************
Ansible VMware FAQ
******************
vmware_guest
============
Can I deploy a virtual machine on a standalone ESXi server?
------------------------------------------------------------
Yes. ``vmware_guest`` can deploy a virtual machine with required settings on a standalone ESXi server.
Is ``/vm`` required for the ``vmware_guest`` module?
-------------------------------------------------
Prior to Ansible version 2.5, ``folder`` was an optional parameter with a default value of ``/vm``.
The folder parameter was used to discover information about virtual machines in the given infrastructure.
Starting with Ansible version 2.5, ``folder`` is still an optional parameter with no default value.
This parameter will now be used to identify a user's virtual machine, if multiple virtual machines or virtual
machine templates are found with the same name. VMware does not restrict the system administrator from creating virtual
machines with the same name.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 62,441 |
Module asa_acl crashes when lines is an empty array
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The module asa_acl crashes due to referencing the undefined local variable `acl_name` when the lines parameter is an empty array.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
asa_acl
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.5
config file = None
configured module search path = ['/Users/brettmitchell/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.2 (default, Jan 13 2019, 12:50:01) [Clang 10.0.0 (clang-1000.11.45.5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Check out the latest version of ansible
Run module asa_acl with an empty lines array
Note that the module crashes on line 162 in `parse_acl_name` due to `UnboundLocalError: local variable 'acl_name' referenced before assignment`
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The module should execute successfully with no operation being performed
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The module crashed
<!--- Paste verbatim command output between quotes -->
```paste below
Traceback (most recent call last):
File \"/Users/brettmitchell/.ansible/tmp/ansible-local-234681ttsab3b/ansible-tmp-1568737374.339541-43237010518318/AnsiballZ_asa_acl.py\", line 114, in <module>
_ansiballz_main()
File \"/Users/brettmitchell/.ansible/tmp/ansible-local-234681ttsab3b/ansible-tmp-1568737374.339541-43237010518318/AnsiballZ_asa_acl.py\", line 106, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File \"/Users/brettmitchell/.ansible/tmp/ansible-local-234681ttsab3b/ansible-tmp-1568737374.339541-43237010518318/AnsiballZ_asa_acl.py\", line 49, in invoke_module
imp.load_module('__main__', mod, module, MOD_DESC)
File \"/var/folders/h9/4gjsgyxn6kbb0pdrt9rtn0840000gn/T/ansible_asa_acl_payload_IhH99W/__main__.py\", line 222, in <module>
File \"/var/folders/h9/4gjsgyxn6kbb0pdrt9rtn0840000gn/T/ansible_asa_acl_payload_IhH99W/__main__.py\", line 192, in main
File \"/var/folders/h9/4gjsgyxn6kbb0pdrt9rtn0840000gn/T/ansible_asa_acl_payload_IhH99W/__main__.py\", line 162, in parse_acl_name
UnboundLocalError: local variable 'acl_name' referenced before assignment
```
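The crash is plain Python scoping: a variable assigned only inside a `for` body is never bound when the iterable is empty. A minimal standalone repro:
```python
def parse(lines):
    for line in lines:
        acl_name = line.split()[1]
    return acl_name  # never assigned when lines == []

parse([])  # raises UnboundLocalError, just like parse_acl_name
```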
|
https://github.com/ansible/ansible/issues/62441
|
https://github.com/ansible/ansible/pull/63838
|
bcc2ffdbf904a885813ed2007411b0c93936e9f1
|
895c8ce37341521582c413f0ad071de8e64cb6b7
| 2019-09-17T17:00:15Z |
python
| 2019-12-11T07:32:05Z |
lib/ansible/modules/network/asa/asa_acl.py
|
#!/usr/bin/python
#
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = """
---
module: asa_acl
version_added: "2.2"
author: "Patrick Ogenstad (@ogenstad)"
short_description: Manage access-lists on a Cisco ASA
description:
- This module allows you to work with access-lists on a Cisco ASA device.
extends_documentation_fragment: asa
options:
lines:
description:
- The ordered set of commands that should be configured in the
section. The commands must be the exact same commands as found
in the device running-config. Be sure to note the configuration
command syntax as some commands are automatically modified by the
device config parser.
required: true
aliases: [commands]
before:
description:
- The ordered set of commands to push on to the command stack if
a change needs to be made. This allows the playbook designer
the opportunity to perform configuration commands prior to pushing
any changes without affecting how the set of commands are matched
against the system.
after:
description:
- The ordered set of commands to append to the end of the command
        stack if a change needs to be made. Just like with I(before) this
allows the playbook designer to append a set of commands to be
executed after the command set.
match:
description:
- Instructs the module on the way to perform the matching of
the set of commands against the current device config. If
match is set to I(line), commands are matched line by line. If
match is set to I(strict), command lines are matched with respect
to position. Finally if match is set to I(exact), command lines
must be an equal match.
default: line
choices: ['line', 'strict', 'exact']
replace:
description:
- Instructs the module on the way to perform the configuration
on the device. If the replace argument is set to I(line) then
the modified lines are pushed to the device in configuration
mode. If the replace argument is set to I(block) then the entire
command block is pushed to the device in configuration mode if any
line is not correct.
default: line
choices: ['line', 'block']
force:
description:
- The force argument instructs the module to not consider the
current devices running-config. When set to true, this will
cause the module to push the contents of I(src) into the device
without first checking if already configured.
type: bool
default: 'no'
config:
description:
- The module, by default, will connect to the remote device and
retrieve the current running-config to use as a base for comparing
against the contents of source. There are times when it is not
desirable to have the task get the current running-config for
every task in a playbook. The I(config) argument allows the
implementer to pass in the configuration to use as the base
config for comparison.
"""
EXAMPLES = """
# Note: examples below use the following provider dict to handle
# transport and authentication to the node.
---
vars:
cli:
host: "{{ inventory_hostname }}"
username: cisco
password: cisco
transport: cli
authorize: yes
auth_pass: cisco
---
- asa_acl:
lines:
- access-list ACL-ANSIBLE extended permit tcp any any eq 82
- access-list ACL-ANSIBLE extended permit tcp any any eq www
- access-list ACL-ANSIBLE extended permit tcp any any eq 97
- access-list ACL-ANSIBLE extended permit tcp any any eq 98
- access-list ACL-ANSIBLE extended permit tcp any any eq 99
before: clear configure access-list ACL-ANSIBLE
match: strict
replace: block
provider: "{{ cli }}"
- asa_acl:
lines:
- access-list ACL-OUTSIDE extended permit tcp any any eq www
- access-list ACL-OUTSIDE extended permit tcp any any eq https
context: customer_a
provider: "{{ cli }}"
"""
RETURN = """
updates:
description: The set of commands that will be pushed to the remote device
returned: always
type: list
sample: ['access-list ACL-OUTSIDE extended permit tcp any any eq www']
"""
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.network.asa.asa import asa_argument_spec, check_args
from ansible.module_utils.network.asa.asa import get_config, load_config, run_commands
from ansible.module_utils.network.common.config import NetworkConfig, dumps
def get_acl_config(module, acl_name):
contents = module.params['config']
if not contents:
contents = get_config(module)
filtered_config = list()
for item in contents.split('\n'):
if item.startswith('access-list %s ' % acl_name):
filtered_config.append(item)
return NetworkConfig(indent=1, contents='\n'.join(filtered_config))
def parse_acl_name(module):
    acl_name = None
    first_line = True
    for line in module.params['lines']:
        ace = line.split()
        if ace[0] != 'access-list':
            module.fail_json(msg='All lines/commands must begin with "access-list" %s is not permitted' % ace[0])
        if len(ace) <= 1:
            module.fail_json(msg='All lines/commands must contain the name of the access-list')
        if first_line:
            acl_name = ace[1]
        else:
            if acl_name != ace[1]:
                module.fail_json(msg='All lines/commands must use the same access-list %s is not %s' % (ace[1], acl_name))
        first_line = False
    # guard against an empty `lines` list, which previously raised
    # UnboundLocalError: local variable 'acl_name' referenced before assignment
    if acl_name is None:
        module.fail_json(msg='lines must contain at least one access-list command')
    return acl_name
def main():
argument_spec = dict(
lines=dict(aliases=['commands'], required=True, type='list'),
before=dict(type='list'),
after=dict(type='list'),
match=dict(default='line', choices=['line', 'strict', 'exact']),
replace=dict(default='line', choices=['line', 'block']),
force=dict(default=False, type='bool'),
config=dict()
)
argument_spec.update(asa_argument_spec)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True)
lines = module.params['lines']
result = {'changed': False}
candidate = NetworkConfig(indent=1)
candidate.add(lines)
acl_name = parse_acl_name(module)
if not module.params['force']:
contents = get_acl_config(module, acl_name)
config = NetworkConfig(indent=1, contents=contents)
commands = candidate.difference(config)
commands = dumps(commands, 'commands').split('\n')
commands = [str(c) for c in commands if c]
else:
commands = str(candidate).split('\n')
if commands:
if module.params['before']:
commands[:0] = module.params['before']
if module.params['after']:
commands.extend(module.params['after'])
if not module.check_mode:
load_config(module, commands)
result['changed'] = True
result['updates'] = commands
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,034 |
nios_fixed_address not able to add options that don't require use_options
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
It is not possible to add options that don't require use_options e.g. host-name
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
net_tools, nios, lib/ansible/modules/net_tools/nios/nios_fixed_address.py
##### ANSIBLE VERSION
```
ansible 2.8.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/xxxx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
empty
```
##### OS / ENVIRONMENT
Ubuntu 18.04 with Ansible 2.8.6 from Ansible PPA (2.8.6-1ppa~bionic)
pip freeze|grep -i infoblox
infoblox-client==0.4.23
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: set dhcp options for a ipv4 fixed address
nios_fixed_address:
name: hostname.domain
ipaddr: "{{ vm_ipaddr }}"
mac: "{{ macaddress }}"
network: 10.0.0.0/24
network_view: default
options:
- name: host-name
value: "{{ vm_name }}"
state: present
provider: "{{ nios_provider }}"
connection: local
register: dhcp_results
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The host-name option should be set in the infoblox appliance for the specific fixed address.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Ansible terminates with the error below.
It is impossible to use any option that has no use_option flag. The source code has no special handling for these cases: "use_option" always defaults to "True". See line 221 in lib/ansible/modules/net_tools/nios/nios_fixed_address.py
The same error happens whichever value of use_option is used. I tried "yes", "no", "true", "false". It's also not possible to use the equivalent "num: 12" instead of "name: host-name".
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [localhost]: FAILED! => {"changed": false, "code": "Client.Ibap.Proto", "msg": "Option host-name can not have a use_option flag", "operation": "create_object", "type": "AdmConProtoError"}
```
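A possible direction for a fix (illustrative sketch only; the function name and the option set below are assumptions, not part of the module) would be to send use_option only for options that WAPI accepts it on:
```python
# Options that accept a use_option flag in WAPI (illustrative subset);
# everything else, such as host-name, must be sent without the flag.
USE_OPTION_SUPPORTED = {'routers', 'domain-name', 'domain-name-servers'}

def strip_use_option(opt):
    """Drop use_option for DHCP options that cannot carry the flag."""
    if opt.get('name') not in USE_OPTION_SUPPORTED:
        opt = dict((k, v) for k, v in opt.items() if k != 'use_option')
    return opt
```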
|
https://github.com/ansible/ansible/issues/64034
|
https://github.com/ansible/ansible/pull/65369
|
895c8ce37341521582c413f0ad071de8e64cb6b7
|
0685691d073cd2a271e4fc60543e6a9f8142dd56
| 2019-10-28T17:50:05Z |
python
| 2019-12-11T07:42:47Z |
lib/ansible/module_utils/net_tools/nios/api.py
|
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# (c) 2018 Red Hat Inc.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
import os
from functools import partial
from ansible.module_utils._text import to_native
from ansible.module_utils.six import iteritems
from ansible.module_utils._text import to_text
from ansible.module_utils.basic import env_fallback
try:
from infoblox_client.connector import Connector
from infoblox_client.exceptions import InfobloxException
HAS_INFOBLOX_CLIENT = True
except ImportError:
HAS_INFOBLOX_CLIENT = False
# defining nios constants
NIOS_DNS_VIEW = 'view'
NIOS_NETWORK_VIEW = 'networkview'
NIOS_HOST_RECORD = 'record:host'
NIOS_IPV4_NETWORK = 'network'
NIOS_IPV6_NETWORK = 'ipv6network'
NIOS_ZONE = 'zone_auth'
NIOS_PTR_RECORD = 'record:ptr'
NIOS_A_RECORD = 'record:a'
NIOS_AAAA_RECORD = 'record:aaaa'
NIOS_CNAME_RECORD = 'record:cname'
NIOS_MX_RECORD = 'record:mx'
NIOS_SRV_RECORD = 'record:srv'
NIOS_NAPTR_RECORD = 'record:naptr'
NIOS_TXT_RECORD = 'record:txt'
NIOS_NSGROUP = 'nsgroup'
NIOS_IPV4_FIXED_ADDRESS = 'fixedaddress'
NIOS_IPV6_FIXED_ADDRESS = 'ipv6fixedaddress'
NIOS_NEXT_AVAILABLE_IP = 'func:nextavailableip'
NIOS_IPV4_NETWORK_CONTAINER = 'networkcontainer'
NIOS_IPV6_NETWORK_CONTAINER = 'ipv6networkcontainer'
NIOS_MEMBER = 'member'
NIOS_PROVIDER_SPEC = {
'host': dict(fallback=(env_fallback, ['INFOBLOX_HOST'])),
'username': dict(fallback=(env_fallback, ['INFOBLOX_USERNAME'])),
'password': dict(fallback=(env_fallback, ['INFOBLOX_PASSWORD']), no_log=True),
'validate_certs': dict(type='bool', default=False, fallback=(env_fallback, ['INFOBLOX_SSL_VERIFY']), aliases=['ssl_verify']),
'silent_ssl_warnings': dict(type='bool', default=True),
'http_request_timeout': dict(type='int', default=10, fallback=(env_fallback, ['INFOBLOX_HTTP_REQUEST_TIMEOUT'])),
'http_pool_connections': dict(type='int', default=10),
'http_pool_maxsize': dict(type='int', default=10),
'max_retries': dict(type='int', default=3, fallback=(env_fallback, ['INFOBLOX_MAX_RETRIES'])),
'wapi_version': dict(default='2.1', fallback=(env_fallback, ['INFOBLOX_WAP_VERSION'])),
'max_results': dict(type='int', default=1000, fallback=(env_fallback, ['INFOBLOX_MAX_RETRIES']))
}
def get_connector(*args, **kwargs):
''' Returns an instance of infoblox_client.connector.Connector
:params args: positional arguments are silently ignored
:params kwargs: dict that is passed to Connector init
:returns: Connector
'''
if not HAS_INFOBLOX_CLIENT:
raise Exception('infoblox-client is required but does not appear '
'to be installed. It can be installed using the '
'command `pip install infoblox-client`')
if not set(kwargs.keys()).issubset(list(NIOS_PROVIDER_SPEC.keys()) + ['ssl_verify']):
raise Exception('invalid or unsupported keyword argument for connector')
for key, value in iteritems(NIOS_PROVIDER_SPEC):
if key not in kwargs:
# apply default values from NIOS_PROVIDER_SPEC since we cannot just
# assume the provider values are coming from AnsibleModule
if 'default' in value:
kwargs[key] = value['default']
# override any values with env variables unless they were
# explicitly set
env = ('INFOBLOX_%s' % key).upper()
if env in os.environ:
kwargs[key] = os.environ.get(env)
if 'validate_certs' in kwargs.keys():
kwargs['ssl_verify'] = kwargs['validate_certs']
kwargs.pop('validate_certs', None)
return Connector(kwargs)
def normalize_extattrs(value):
''' Normalize extattrs field to expected format
The module accepts extattrs as key/value pairs. This method will
transform the key/value pairs into a structure suitable for
sending across WAPI in the format of:
extattrs: {
key: {
value: <value>
}
}
'''
return dict([(k, {'value': v}) for k, v in iteritems(value)])
def flatten_extattrs(value):
''' Flatten the key/value struct for extattrs
WAPI returns extattrs field as a dict in form of:
extattrs: {
key: {
value: <value>
}
}
This method will flatten the structure to:
extattrs: {
key: value
}
'''
return dict([(k, v['value']) for k, v in iteritems(value)])
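# Illustrative example (not part of the module): the two helpers are inverses
# of one another:
#     normalize_extattrs({'Site': 'HQ'}) == {'Site': {'value': 'HQ'}}
#     flatten_extattrs({'Site': {'value': 'HQ'}}) == {'Site': 'HQ'}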
def member_normalize(member_spec):
''' Transforms the member module arguments into a valid WAPI struct
This function will transform the arguments into a structure that
is a valid WAPI structure in the format of:
{
key: <value>,
}
It will remove any arguments that are set to None since WAPI will error on
that condition.
The remainder of the value validation is performed by WAPI
Some parameters in ib_spec are passed as a list in order to pass the validation for elements.
In this function, they are converted to dictionary.
'''
member_elements = ['vip_setting', 'ipv6_setting', 'lan2_port_setting', 'mgmt_port_setting',
'pre_provisioning', 'network_setting', 'v6_network_setting',
'ha_port_setting', 'lan_port_setting', 'lan2_physical_setting',
'lan_ha_port_setting', 'mgmt_network_setting', 'v6_mgmt_network_setting']
for key in member_spec.keys():
if key in member_elements and member_spec[key] is not None:
member_spec[key] = member_spec[key][0]
if isinstance(member_spec[key], dict):
member_spec[key] = member_normalize(member_spec[key])
elif isinstance(member_spec[key], list):
for x in member_spec[key]:
if isinstance(x, dict):
x = member_normalize(x)
elif member_spec[key] is None:
del member_spec[key]
return member_spec
class WapiBase(object):
''' Base class for implementing Infoblox WAPI API '''
provider_spec = {'provider': dict(type='dict', options=NIOS_PROVIDER_SPEC)}
def __init__(self, provider):
self.connector = get_connector(**provider)
def __getattr__(self, name):
try:
return self.__dict__[name]
except KeyError:
if name.startswith('_'):
raise AttributeError("'%s' object has no attribute '%s'" % (self.__class__.__name__, name))
return partial(self._invoke_method, name)
def _invoke_method(self, name, *args, **kwargs):
try:
method = getattr(self.connector, name)
return method(*args, **kwargs)
except InfobloxException as exc:
if hasattr(self, 'handle_exception'):
self.handle_exception(name, exc)
else:
raise
class WapiLookup(WapiBase):
''' Implements WapiBase for lookup plugins '''
def handle_exception(self, method_name, exc):
if ('text' in exc.response):
raise Exception(exc.response['text'])
else:
raise Exception(exc)
class WapiInventory(WapiBase):
''' Implements WapiBase for dynamic inventory script '''
pass
class WapiModule(WapiBase):
''' Implements WapiBase for executing a NIOS module '''
def __init__(self, module):
self.module = module
provider = module.params['provider']
try:
super(WapiModule, self).__init__(provider)
except Exception as exc:
self.module.fail_json(msg=to_text(exc))
def handle_exception(self, method_name, exc):
''' Handles any exceptions raised
This method will be called if an InfobloxException is raised for
any call to the instance of Connector and also, in case of generic
exception. This method will then gracefully fail the module.
:args exc: instance of InfobloxException
'''
if ('text' in exc.response):
self.module.fail_json(
msg=exc.response['text'],
type=exc.response['Error'].split(':')[0],
code=exc.response.get('code'),
operation=method_name
)
else:
self.module.fail_json(msg=to_native(exc))
def run(self, ib_obj_type, ib_spec):
        ''' Runs the module and performs configuration tasks
:args ib_obj_type: the WAPI object type to operate against
:args ib_spec: the specification for the WAPI object as a dict
:returns: a results dict
'''
update = new_name = None
state = self.module.params['state']
if state not in ('present', 'absent'):
self.module.fail_json(msg='state must be one of `present`, `absent`, got `%s`' % state)
result = {'changed': False}
obj_filter = dict([(k, self.module.params[k]) for k, v in iteritems(ib_spec) if v.get('ib_req')])
# get object reference
ib_obj_ref, update, new_name = self.get_object_ref(self.module, ib_obj_type, obj_filter, ib_spec)
proposed_object = {}
for key, value in iteritems(ib_spec):
if self.module.params[key] is not None:
if 'transform' in value:
proposed_object[key] = value['transform'](self.module)
else:
proposed_object[key] = self.module.params[key]
# If configure_by_dns is set to False, then delete the default dns set in the param else throw exception
if not proposed_object.get('configure_for_dns') and proposed_object.get('view') == 'default'\
and ib_obj_type == NIOS_HOST_RECORD:
del proposed_object['view']
elif not proposed_object.get('configure_for_dns') and proposed_object.get('view') != 'default'\
and ib_obj_type == NIOS_HOST_RECORD:
self.module.fail_json(msg='DNS Bypass is not allowed if DNS view is set other than \'default\'')
if ib_obj_ref:
if len(ib_obj_ref) > 1:
for each in ib_obj_ref:
# To check for existing A_record with same name with input A_record by IP
if each.get('ipv4addr') and each.get('ipv4addr') == proposed_object.get('ipv4addr'):
current_object = each
# To check for existing Host_record with same name with input Host_record by IP
elif each.get('ipv4addrs')[0].get('ipv4addr') and each.get('ipv4addrs')[0].get('ipv4addr')\
== proposed_object.get('ipv4addrs')[0].get('ipv4addr'):
current_object = each
# Else set the current_object with input value
else:
current_object = obj_filter
ref = None
else:
current_object = ib_obj_ref[0]
if 'extattrs' in current_object:
current_object['extattrs'] = flatten_extattrs(current_object['extattrs'])
if current_object.get('_ref'):
ref = current_object.pop('_ref')
else:
current_object = obj_filter
ref = None
# checks if the object type is member to normalize the attributes being passed
if (ib_obj_type == NIOS_MEMBER):
proposed_object = member_normalize(proposed_object)
# checks if the name's field has been updated
if update and new_name:
proposed_object['name'] = new_name
check_remove = []
if (ib_obj_type == NIOS_HOST_RECORD):
# this check is for idempotency, as if the same ip address shall be passed
# add param will be removed, and same exists true for remove case as well.
            if 'ipv4addrs' in current_object and 'ipv4addrs' in proposed_object:
for each in current_object['ipv4addrs']:
if each['ipv4addr'] == proposed_object['ipv4addrs'][0]['ipv4addr']:
if 'add' in proposed_object['ipv4addrs'][0]:
del proposed_object['ipv4addrs'][0]['add']
break
check_remove += each.values()
if proposed_object['ipv4addrs'][0]['ipv4addr'] not in check_remove:
if 'remove' in proposed_object['ipv4addrs'][0]:
del proposed_object['ipv4addrs'][0]['remove']
res = None
modified = not self.compare_objects(current_object, proposed_object)
if 'extattrs' in proposed_object:
proposed_object['extattrs'] = normalize_extattrs(proposed_object['extattrs'])
# Checks if nios_next_ip param is passed in ipv4addrs/ipv4addr args
proposed_object = self.check_if_nios_next_ip_exists(proposed_object)
if state == 'present':
if ref is None:
if not self.module.check_mode:
self.create_object(ib_obj_type, proposed_object)
result['changed'] = True
# Check if NIOS_MEMBER and the flag to call function create_token is set
elif (ib_obj_type == NIOS_MEMBER) and (proposed_object['create_token']):
proposed_object = None
# the function creates a token that can be used by a pre-provisioned member to join the grid
result['api_results'] = self.call_func('create_token', ref, proposed_object)
result['changed'] = True
elif modified:
if 'ipv4addrs' in proposed_object:
if ('add' not in proposed_object['ipv4addrs'][0]) and ('remove' not in proposed_object['ipv4addrs'][0]):
self.check_if_recordname_exists(obj_filter, ib_obj_ref, ib_obj_type, current_object, proposed_object)
if (ib_obj_type in (NIOS_HOST_RECORD, NIOS_NETWORK_VIEW, NIOS_DNS_VIEW)):
run_update = True
proposed_object = self.on_update(proposed_object, ib_spec)
if 'ipv4addrs' in proposed_object:
                        if 'add' in proposed_object['ipv4addrs'][0] or 'remove' in proposed_object['ipv4addrs'][0]:
run_update, proposed_object = self.check_if_add_remove_ip_arg_exists(proposed_object)
if run_update:
res = self.update_object(ref, proposed_object)
result['changed'] = True
else:
res = ref
if (ib_obj_type in (NIOS_A_RECORD, NIOS_AAAA_RECORD, NIOS_PTR_RECORD, NIOS_SRV_RECORD)):
# popping 'view' key as update of 'view' is not supported with respect to a:record/aaaa:record/srv:record/ptr:record
if 'ipv4addrs' in proposed_object:
if 'add' in proposed_object['ipv4addrs'][0]:
run_update, proposed_object = self.check_if_add_remove_ip_arg_exists(proposed_object)
if run_update:
res = self.update_object(ref, proposed_object)
result['changed'] = True
else:
res = ref
if (ib_obj_type in (NIOS_A_RECORD, NIOS_AAAA_RECORD)):
# popping 'view' key as update of 'view' is not supported with respect to a:record/aaaa:record
proposed_object = self.on_update(proposed_object, ib_spec)
del proposed_object['view']
res = self.update_object(ref, proposed_object)
result['changed'] = True
elif 'network_view' in proposed_object:
proposed_object.pop('network_view')
result['changed'] = True
if not self.module.check_mode and res is None:
proposed_object = self.on_update(proposed_object, ib_spec)
self.update_object(ref, proposed_object)
result['changed'] = True
elif state == 'absent':
if ref is not None:
if 'ipv4addrs' in proposed_object:
if 'remove' in proposed_object['ipv4addrs'][0]:
self.check_if_add_remove_ip_arg_exists(proposed_object)
self.update_object(ref, proposed_object)
result['changed'] = True
elif not self.module.check_mode:
self.delete_object(ref)
result['changed'] = True
return result
def check_if_recordname_exists(self, obj_filter, ib_obj_ref, ib_obj_type, current_object, proposed_object):
''' Send POST request if host record input name and retrieved ref name is same,
but input IP and retrieved IP is different'''
        if 'name' in obj_filter and 'name' in ib_obj_ref[0] and ib_obj_type == NIOS_HOST_RECORD:
obj_host_name = obj_filter['name']
ref_host_name = ib_obj_ref[0]['name']
            if 'ipv4addrs' in current_object and 'ipv4addrs' in proposed_object:
current_ip_addr = current_object['ipv4addrs'][0]['ipv4addr']
proposed_ip_addr = proposed_object['ipv4addrs'][0]['ipv4addr']
            elif 'ipv6addrs' in current_object and 'ipv6addrs' in proposed_object:
current_ip_addr = current_object['ipv6addrs'][0]['ipv6addr']
proposed_ip_addr = proposed_object['ipv6addrs'][0]['ipv6addr']
if obj_host_name == ref_host_name and current_ip_addr != proposed_ip_addr:
self.create_object(ib_obj_type, proposed_object)
def check_if_nios_next_ip_exists(self, proposed_object):
''' Check if nios_next_ip argument is passed in ipaddr while creating
host record, if yes then format proposed object ipv4addrs and pass
func:nextavailableip and ipaddr range to create hostrecord with next
available ip in one call to avoid any race condition '''
if 'ipv4addrs' in proposed_object:
if 'nios_next_ip' in proposed_object['ipv4addrs'][0]['ipv4addr']:
ip_range = self.module._check_type_dict(proposed_object['ipv4addrs'][0]['ipv4addr'])['nios_next_ip']
proposed_object['ipv4addrs'][0]['ipv4addr'] = NIOS_NEXT_AVAILABLE_IP + ':' + ip_range
elif 'ipv4addr' in proposed_object:
if 'nios_next_ip' in proposed_object['ipv4addr']:
ip_range = self.module._check_type_dict(proposed_object['ipv4addr'])['nios_next_ip']
proposed_object['ipv4addr'] = NIOS_NEXT_AVAILABLE_IP + ':' + ip_range
return proposed_object
def check_if_add_remove_ip_arg_exists(self, proposed_object):
'''
This function shall check if add/remove param is set to true and
is passed in the args, then we will update the proposed dictionary
to add/remove IP to existing host_record, if the user passes false
param with the argument nothing shall be done.
:returns: True if param is changed based on add/remove, and also the
changed proposed_object.
'''
update = False
if 'add' in proposed_object['ipv4addrs'][0]:
if proposed_object['ipv4addrs'][0]['add']:
proposed_object['ipv4addrs+'] = proposed_object['ipv4addrs']
del proposed_object['ipv4addrs']
del proposed_object['ipv4addrs+'][0]['add']
update = True
else:
del proposed_object['ipv4addrs'][0]['add']
elif 'remove' in proposed_object['ipv4addrs'][0]:
if proposed_object['ipv4addrs'][0]['remove']:
proposed_object['ipv4addrs-'] = proposed_object['ipv4addrs']
del proposed_object['ipv4addrs']
del proposed_object['ipv4addrs-'][0]['remove']
update = True
else:
del proposed_object['ipv4addrs'][0]['remove']
return update, proposed_object
def issubset(self, item, objects):
''' Checks if item is a subset of objects
:args item: the subset item to validate
:args objects: superset list of objects to validate against
:returns: True if item is a subset of one entry in objects otherwise
this method will return None
'''
for obj in objects:
if isinstance(item, dict):
if all(entry in obj.items() for entry in item.items()):
return True
else:
if item in obj:
return True
def compare_objects(self, current_object, proposed_object):
for key, proposed_item in iteritems(proposed_object):
current_item = current_object.get(key)
# if proposed has a key that current doesn't then the objects are
# not equal and False will be immediately returned
if current_item is None:
return False
elif isinstance(proposed_item, list):
for subitem in proposed_item:
if not self.issubset(subitem, current_item):
return False
elif isinstance(proposed_item, dict):
return self.compare_objects(current_item, proposed_item)
else:
if current_item != proposed_item:
return False
return True
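    # Illustrative example (not part of the module): compare_objects performs a
    # subset comparison of the proposed object against the current object:
    #     current  = {'name': 'a.example.com', 'ttl': 300, 'comment': 'x'}
    #     proposed = {'name': 'a.example.com', 'ttl': 300}
    # compare_objects(current, proposed) -> True (no update needed), whereas a
    # differing value or a key missing from current returns False.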
def get_object_ref(self, module, ib_obj_type, obj_filter, ib_spec):
''' this function gets the reference object of pre-existing nios objects '''
update = False
old_name = new_name = None
if ('name' in obj_filter):
# gets and returns the current object based on name/old_name passed
try:
name_obj = self.module._check_type_dict(obj_filter['name'])
old_name = name_obj['old_name']
new_name = name_obj['new_name']
except TypeError:
name = obj_filter['name']
if old_name and new_name:
if (ib_obj_type == NIOS_HOST_RECORD):
test_obj_filter = dict([('name', old_name), ('view', obj_filter['view'])])
elif (ib_obj_type in (NIOS_AAAA_RECORD, NIOS_A_RECORD)):
test_obj_filter = obj_filter
else:
test_obj_filter = dict([('name', old_name)])
# get the object reference
ib_obj = self.get_object(ib_obj_type, test_obj_filter, return_fields=ib_spec.keys())
if ib_obj:
obj_filter['name'] = new_name
else:
test_obj_filter['name'] = new_name
ib_obj = self.get_object(ib_obj_type, test_obj_filter, return_fields=ib_spec.keys())
update = True
return ib_obj, update, new_name
if (ib_obj_type == NIOS_HOST_RECORD):
# to check only by name if dns bypassing is set
if not obj_filter['configure_for_dns']:
test_obj_filter = dict([('name', name)])
else:
test_obj_filter = dict([('name', name), ('view', obj_filter['view'])])
        elif (ib_obj_type in (NIOS_IPV4_FIXED_ADDRESS, NIOS_IPV6_FIXED_ADDRESS) and 'mac' in obj_filter):
test_obj_filter = dict([['mac', obj_filter['mac']]])
elif (ib_obj_type == NIOS_A_RECORD):
# resolves issue where a_record with uppercase name was returning null and was failing
test_obj_filter = obj_filter
test_obj_filter['name'] = test_obj_filter['name'].lower()
# check if test_obj_filter is empty copy passed obj_filter
else:
test_obj_filter = obj_filter
ib_obj = self.get_object(ib_obj_type, test_obj_filter.copy(), return_fields=ib_spec.keys())
elif (ib_obj_type == NIOS_ZONE):
# del key 'restart_if_needed' as nios_zone get_object fails with the key present
temp = ib_spec['restart_if_needed']
del ib_spec['restart_if_needed']
ib_obj = self.get_object(ib_obj_type, obj_filter.copy(), return_fields=ib_spec.keys())
# reinstate restart_if_needed if ib_obj is none, meaning there's no existing nios_zone ref
if not ib_obj:
ib_spec['restart_if_needed'] = temp
elif (ib_obj_type == NIOS_MEMBER):
# del key 'create_token' as nios_member get_object fails with the key present
temp = ib_spec['create_token']
del ib_spec['create_token']
ib_obj = self.get_object(ib_obj_type, obj_filter.copy(), return_fields=ib_spec.keys())
if temp:
# reinstate 'create_token' key
ib_spec['create_token'] = temp
else:
ib_obj = self.get_object(ib_obj_type, obj_filter.copy(), return_fields=ib_spec.keys())
return ib_obj, update, new_name
def on_update(self, proposed_object, ib_spec):
        ''' Event called before the update is sent to the API endpoint
This method will allow the final proposed object to be changed
and/or keys filtered before it is sent to the API endpoint to
be processed.
:args proposed_object: A dict item that will be encoded and sent
the API endpoint with the updated data structure
:returns: updated object to be sent to API endpoint
'''
keys = set()
for key, value in iteritems(proposed_object):
update = ib_spec[key].get('update', True)
if not update:
keys.add(key)
return dict([(k, v) for k, v in iteritems(proposed_object) if k not in keys])
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,034 |
nios_fixed_address not able to add options that don't require use_options
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
It is not possible to add options that don't require use_options e.g. host-name
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
net_tools, nios, lib/ansible/modules/net_tools/nios/nios_fixed_address.py
##### ANSIBLE VERSION
```
ansible 2.8.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/xxxx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
empty
```
##### OS / ENVIRONMENT
Ubuntu 18.04 with Ansible 2.8.6 from Ansible PPA (2.8.6-1ppa~bionic)
pip freeze|grep -i infoblox
infoblox-client==0.4.23
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: set dhcp options for a ipv4 fixed address
nios_fixed_address:
name: hostname.domain
ipaddr: "{{ vm_ipaddr }}"
mac: "{{ macaddress }}"
network: 10.0.0.0/24
network_view: default
options:
- name: host-name
value: "{{ vm_name }}"
state: present
provider: "{{ nios_provider }}"
connection: local
register: dhcp_results
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The host-name option should be set in the infoblox appliance for the specific fixed address.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Ansible terminates with the error below.
It is impossible to use any option that has no use_option flag. The source code has no special handling for these cases: "use_option" always defaults to "True". See line 221 in lib/ansible/modules/net_tools/nios/nios_fixed_address.py
The same error happens whichever value of use_option is used. I tried "yes", "no", "true", "false". It's also not possible to use the equivalent "num: 12" instead of "name: host-name".
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [localhost]: FAILED! => {"changed": false, "code": "Client.Ibap.Proto", "msg": "Option host-name can not have a use_option flag", "operation": "create_object", "type": "AdmConProtoError"}
```
|
https://github.com/ansible/ansible/issues/64034
|
https://github.com/ansible/ansible/pull/65369
|
895c8ce37341521582c413f0ad071de8e64cb6b7
|
0685691d073cd2a271e4fc60543e6a9f8142dd56
| 2019-10-28T17:50:05Z |
python
| 2019-12-11T07:42:47Z |
lib/ansible/modules/net_tools/nios/nios_fixed_address.py
|
#!/usr/bin/python
# Copyright (c) 2018 Red Hat, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'certified'}
DOCUMENTATION = '''
---
module: nios_fixed_address
version_added: "2.8"
author: "Sumit Jaiswal (@sjaiswal)"
short_description: Configure Infoblox NIOS DHCP Fixed Address
description:
- A fixed address is a specific IP address that a DHCP server
always assigns when a lease request comes from a particular
MAC address of the client.
- Supports both IPV4 and IPV6 internet protocols
requirements:
- infoblox-client
extends_documentation_fragment: nios
options:
name:
description:
      - Specifies the hostname with which the fixed DHCP IP address is stored
        for the respective MAC address.
required: false
ipaddr:
description:
- IPV4/V6 address of the fixed address.
required: true
mac:
description:
- The MAC address of the interface.
required: true
network:
description:
- Specifies the network range in which ipaddr exists.
required: true
aliases:
- network
network_view:
description:
- Configures the name of the network view to associate with this
configured instance.
required: false
default: default
options:
description:
- Configures the set of DHCP options to be included as part of
the configured network instance. This argument accepts a list
of values (see suboptions). When configuring suboptions at
least one of C(name) or C(num) must be specified.
suboptions:
name:
description:
- The name of the DHCP option to configure
num:
description:
- The number of the DHCP option to configure
value:
description:
- The value of the DHCP option specified by C(name)
required: true
use_option:
description:
- Only applies to a subset of options (see NIOS API documentation)
type: bool
default: 'yes'
vendor_class:
description:
- The name of the space this DHCP option is associated to
default: DHCP
extattrs:
description:
- Allows for the configuration of Extensible Attributes on the
instance of the object. This argument accepts a set of key / value
pairs for configuration.
comment:
description:
- Configures a text string comment to be associated with the instance
of this object. The provided text string will be configured on the
object instance.
state:
description:
- Configures the intended state of the instance of the object on
the NIOS server. When this value is set to C(present), the object
is configured on the device and when this value is set to C(absent)
the value is removed (if necessary) from the device.
default: present
choices:
- present
- absent
'''
EXAMPLES = '''
- name: configure ipv4 dhcp fixed address
nios_fixed_address:
name: ipv4_fixed
ipaddr: 192.168.10.1
mac: 08:6d:41:e8:fd:e8
network: 192.168.10.0/24
network_view: default
comment: this is a test comment
state: present
provider:
host: "{{ inventory_hostname_short }}"
username: admin
password: admin
connection: local
- name: configure a ipv6 dhcp fixed address
nios_fixed_address:
name: ipv6_fixed
ipaddr: fe80::1/10
mac: 08:6d:41:e8:fd:e8
network: fe80::/64
network_view: default
comment: this is a test comment
state: present
provider:
host: "{{ inventory_hostname_short }}"
username: admin
password: admin
connection: local
- name: set dhcp options for a ipv4 fixed address
nios_fixed_address:
name: ipv4_fixed
ipaddr: 192.168.10.1
mac: 08:6d:41:e8:fd:e8
network: 192.168.10.0/24
network_view: default
comment: this is a test comment
options:
- name: domain-name
value: ansible.com
state: present
provider:
host: "{{ inventory_hostname_short }}"
username: admin
password: admin
connection: local
- name: remove a ipv4 dhcp fixed address
nios_fixed_address:
name: ipv4_fixed
ipaddr: 192.168.10.1
mac: 08:6d:41:e8:fd:e8
network: 192.168.10.0/24
network_view: default
state: absent
provider:
host: "{{ inventory_hostname_short }}"
username: admin
password: admin
connection: local
'''
RETURN = ''' # '''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six import iteritems
from ansible.module_utils.net_tools.nios.api import WapiModule
from ansible.module_utils.network.common.utils import validate_ip_address, validate_ip_v6_address
from ansible.module_utils.net_tools.nios.api import NIOS_IPV4_FIXED_ADDRESS, NIOS_IPV6_FIXED_ADDRESS
def options(module):
''' Transforms the module argument into a valid WAPI struct
This function will transform the options argument into a structure that
is a valid WAPI structure in the format of:
{
name: <value>,
num: <value>,
value: <value>,
use_option: <value>,
vendor_class: <value>
}
It will remove any options that are set to None since WAPI will error on
that condition. It will also verify that either `name` or `num` is
set in the structure but does not validate the values are equal.
The remainder of the value validation is performed by WAPI
'''
options = list()
for item in module.params['options']:
opt = dict([(k, v) for k, v in iteritems(item) if v is not None])
if 'name' not in opt and 'num' not in opt:
module.fail_json(msg='one of `name` or `num` is required for option value')
options.append(opt)
return options
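# Illustrative example (not part of the module): given module params such as
#     [{'name': 'domain-name', 'num': None, 'value': 'ansible.com',
#       'use_option': True, 'vendor_class': 'DHCP'}]
# the function returns
#     [{'name': 'domain-name', 'value': 'ansible.com',
#       'use_option': True, 'vendor_class': 'DHCP'}]
# i.e. None-valued keys are dropped and name/num presence is enforced.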
def validate_ip_addr_type(ip, arg_spec, module):
'''This function will check if the argument ip is type v4/v6 and return appropriate infoblox network type
'''
check_ip = ip.split('/')
if validate_ip_address(check_ip[0]) and 'ipaddr' in arg_spec:
arg_spec['ipv4addr'] = arg_spec.pop('ipaddr')
module.params['ipv4addr'] = module.params.pop('ipaddr')
return NIOS_IPV4_FIXED_ADDRESS, arg_spec, module
elif validate_ip_v6_address(check_ip[0]) and 'ipaddr' in arg_spec:
arg_spec['ipv6addr'] = arg_spec.pop('ipaddr')
module.params['ipv6addr'] = module.params.pop('ipaddr')
return NIOS_IPV6_FIXED_ADDRESS, arg_spec, module
def main():
''' Main entry point for module execution
'''
option_spec = dict(
# one of name or num is required; enforced by the function options()
name=dict(),
num=dict(type='int'),
value=dict(required=True),
use_option=dict(type='bool', default=True),
vendor_class=dict(default='DHCP')
)
ib_spec = dict(
name=dict(required=True),
ipaddr=dict(required=True, aliases=['ipaddr'], ib_req=True),
mac=dict(required=True, aliases=['mac'], ib_req=True),
network=dict(required=True, aliases=['network'], ib_req=True),
network_view=dict(default='default', aliases=['network_view']),
options=dict(type='list', elements='dict', options=option_spec, transform=options),
extattrs=dict(type='dict'),
comment=dict()
)
argument_spec = dict(
provider=dict(required=True),
state=dict(default='present', choices=['present', 'absent'])
)
argument_spec.update(ib_spec)
argument_spec.update(WapiModule.provider_spec)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True)
# to get the argument ipaddr
obj_filter = dict([(k, module.params[k]) for k, v in iteritems(ib_spec) if v.get('ib_req')])
# to modify argument based on ipaddr type i.e. IPV4/IPV6
fixed_address_ip_type, ib_spec, module = validate_ip_addr_type(obj_filter['ipaddr'], ib_spec, module)
wapi = WapiModule(module)
result = wapi.run(fixed_address_ip_type, ib_spec)
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,034 |
nios_fixed_address not able to add options that don't require use_options
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
It is not possible to add options that don't require use_options e.g. host-name
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
net_tools, nios, lib/ansible/modules/net_tools/nios/nios_fixed_address.py
##### ANSIBLE VERSION
```
ansible 2.8.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/xxxx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
empty
```
##### OS / ENVIRONMENT
Ubuntu 18.04 with Ansible 2.8.6 from Ansible PPA (2.8.6-1ppa~bionic)
pip freeze|grep -i infoblox
infoblox-client==0.4.23
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: set dhcp options for a ipv4 fixed address
nios_fixed_address:
name: hostname.domain
ipaddr: "{{ vm_ipaddr }}"
mac: "{{ macaddress }}"
network: 10.0.0.0/24
network_view: default
options:
- name: host-name
value: "{{ vm_name }}"
state: present
provider: "{{ nios_provider }}"
connection: local
register: dhcp_results
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The host-name option should be set in the infoblox appliance for the specific fixed address.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Ansible terminates with the error below.
It is impossible to use any option that has no use_option flag. The source code has no special handling for these cases: "use_option" always defaults to "True". See line 221 in lib/ansible/modules/net_tools/nios/nios_fixed_address.py
The same error happens whichever value of use_option is used. I tried "yes", "no", "true", "false". It's also not possible to use the equivalent "num: 12" instead of "name: host-name".
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [localhost]: FAILED! => {"changed": false, "code": "Client.Ibap.Proto", "msg": "Option host-name can not have a use_option flag", "operation": "create_object", "type": "AdmConProtoError"}
```
|
https://github.com/ansible/ansible/issues/64034
|
https://github.com/ansible/ansible/pull/65369
|
895c8ce37341521582c413f0ad071de8e64cb6b7
|
0685691d073cd2a271e4fc60543e6a9f8142dd56
| 2019-10-28T17:50:05Z |
python
| 2019-12-11T07:42:47Z |
lib/ansible/modules/net_tools/nios/nios_txt_record.py
|
#!/usr/bin/python
# Copyright (c) 2018 Red Hat, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'certified'}
DOCUMENTATION = '''
---
module: nios_txt_record
version_added: "2.7"
author: "Corey Wanless (@coreywan)"
short_description: Configure Infoblox NIOS txt records
description:
- Adds and/or removes instances of txt record objects from
Infoblox NIOS servers. This module manages NIOS C(record:txt) objects
using the Infoblox WAPI interface over REST.
requirements:
- infoblox_client
extends_documentation_fragment: nios
options:
name:
description:
- Specifies the fully qualified hostname to add or remove from
the system
required: true
view:
description:
      - Sets the DNS view to associate this txt record with. The DNS
view must already be configured on the system
required: true
default: default
aliases:
- dns_view
text:
description:
- Text associated with the record. It can contain up to 255 bytes
per substring, up to a total of 512 bytes. To enter leading,
trailing, or embedded spaces in the text, add quotes around the
text to preserve the spaces.
ttl:
description:
      - Configures the TTL to be associated with this txt record
extattrs:
description:
- Allows for the configuration of Extensible Attributes on the
instance of the object. This argument accepts a set of key / value
pairs for configuration.
comment:
description:
- Configures a text string comment to be associated with the instance
of this object. The provided text string will be configured on the
object instance.
state:
description:
- Configures the intended state of the instance of the object on
the NIOS server. When this value is set to C(present), the object
is configured on the device and when this value is set to C(absent)
the value is removed (if necessary) from the device.
default: present
choices:
- present
- absent
'''
EXAMPLES = '''
- name: Ensure a text Record Exists
nios_txt_record:
name: fqdn.txt.record.com
text: mytext
state: present
view: External
provider:
host: "{{ inventory_hostname_short }}"
username: admin
password: admin
- name: Ensure a text Record does not exist
nios_txt_record:
name: fqdn.txt.record.com
text: mytext
state: absent
view: External
provider:
host: "{{ inventory_hostname_short }}"
username: admin
password: admin
'''
RETURN = ''' # '''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six import iteritems
from ansible.module_utils.net_tools.nios.api import WapiModule
def main():
''' Main entry point for module execution
'''
ib_spec = dict(
name=dict(required=True, ib_req=True),
view=dict(default='default', aliases=['dns_view'], ib_req=True),
text=dict(type='str'),
ttl=dict(type='int'),
extattrs=dict(type='dict'),
comment=dict(),
)
argument_spec = dict(
provider=dict(required=True),
state=dict(default='present', choices=['present', 'absent'])
)
argument_spec.update(ib_spec)
argument_spec.update(WapiModule.provider_spec)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True)
wapi = WapiModule(module)
result = wapi.run('record:txt', ib_spec)
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,220 |
vmware_category add "Associable Object Types" functionality
|
##### SUMMARY
Can functionality be added so that "Associable Object Types" can be configured for a tag category please?
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_category
##### ADDITIONAL INFORMATION
Perhaps the module could be extended to include a list, examples below;
```
- name: 'Create a Category'
connection: 'local'
vmware_category:
hostname: '{{ ansible_host }}'
username: '{{ ansible_vcenter_username }}'
password: '{{ ansible_vcenter_password }}'
category_name: "sas_luns"
category_description: "SAS LUNS"
category_cardinality: 'multiple'
associable_object_types:
- Datastore
- Datastore Cluster
state: present
validate_certs: '{{ validate_certs }}'
- name: 'Create a Category'
connection: 'local'
vmware_category:
hostname: '{{ ansible_host }}'
username: '{{ ansible_vcenter_username }}'
password: '{{ ansible_vcenter_password }}'
category_name: "Region 1"
category_description: "Region"
category_cardinality: 'multiple'
associable_object_types:
- All objects
state: present
validate_certs: '{{ validate_certs }}'
```
The type of Associable Object Types in vSphere 6.7 are:
All objects
Folder
Cluster
Datacenter
Datastore
Datastore Cluster
Distributed Port Group
Distributed Switch
Host
Content Library
Library item
Network
Resource Pool
vApp
Virtual Machine
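A minimal sketch of how the module might translate these labels into the type strings a tagging CreateSpec accepts (the exact vAPI type names below are assumptions, not confirmed against the SDK):
```
# Hypothetical label -> vAPI type-string mapping; every value is an assumption.
ASSOCIABLE_OBJECT_TYPES = {
    'Folder': 'Folder',
    'Cluster': 'ClusterComputeResource',
    'Datacenter': 'Datacenter',
    'Datastore': 'Datastore',
    'Datastore Cluster': 'StoragePod',
    'Distributed Port Group': 'DistributedVirtualPortgroup',
    'Distributed Switch': 'VmwareDistributedVirtualSwitch',
    'Host': 'HostSystem',
    'Content Library': 'com.vmware.content.Library',
    'Library item': 'com.vmware.content.library.Item',
    'Network': 'Network',
    'Resource Pool': 'ResourcePool',
    'vApp': 'VirtualApp',
    'Virtual Machine': 'VirtualMachine',
}

def to_associable_types(labels):
    """Translate user-facing labels into type strings for CreateSpec.associable_types."""
    if 'All objects' in labels:
        # An empty set is assumed to mean "no restriction", i.e. all object types.
        return set()
    return set(ASSOCIABLE_OBJECT_TYPES[label] for label in labels)
```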
|
https://github.com/ansible/ansible/issues/61220
|
https://github.com/ansible/ansible/pull/62347
|
24b8b629b9a608583262467bbdb63ed828530c78
|
19220a0607437ff29ccccdd67f5012cca0eee2f1
| 2019-08-23T11:40:12Z |
python
| 2019-12-11T08:15:07Z |
lib/ansible/modules/cloud/vmware/vmware_category.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, Abhijeet Kasurde <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: vmware_category
short_description: Manage VMware categories
description:
- This module can be used to create / delete / update VMware categories.
- The tag feature was introduced in vSphere 6, so this module is not supported on earlier versions of vSphere.
- All variables and VMware object names are case sensitive.
version_added: '2.7'
author:
- Abhijeet Kasurde (@Akasurde)
notes:
- Tested on vSphere 6.5
requirements:
- python >= 2.6
- PyVmomi
- vSphere Automation SDK
options:
category_name:
description:
- The name of category to manage.
required: True
type: str
category_description:
description:
- The category description.
- This is required only if C(state) is set to C(present).
- This parameter is ignored when C(state) is set to C(absent).
default: ''
type: str
category_cardinality:
description:
- The category cardinality.
- This parameter is ignored when updating an existing category.
choices: ['multiple', 'single']
default: 'multiple'
type: str
new_category_name:
description:
- The new name for an existing category.
- This value is used while updating an existing category.
type: str
state:
description:
- The state of category.
- If set to C(present) and the category does not exist, then the category is created.
- If set to C(present) and the category exists, then the category is updated.
- If set to C(absent) and the category exists, then the category is deleted.
- If set to C(absent) and the category does not exist, no action is taken.
- Updating an existing category only allows name and description changes.
default: 'present'
choices: [ 'present', 'absent' ]
type: str
extends_documentation_fragment: vmware_rest_client.documentation
'''
EXAMPLES = r'''
- name: Create a category
vmware_category:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
category_name: Sample_Cat_0001
category_description: Sample Description
category_cardinality: 'multiple'
state: present
- name: Rename category
vmware_category:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
category_name: Sample_Category_0001
new_category_name: Sample_Category_0002
state: present
- name: Update category description
vmware_category:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
category_name: Sample_Category_0001
category_description: Some fancy description
state: present
- name: Delete category
vmware_category:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
category_name: Sample_Category_0002
state: absent
'''
RETURN = r'''
category_results:
description: dictionary of category metadata
returned: on success
type: dict
sample: {
"category_id": "urn:vmomi:InventoryServiceCategory:d7120bda-9fa5-4f92-9d71-aa1acff2e5a8:GLOBAL",
"msg": "Category NewCat_0001 updated."
}
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware_rest_client import VmwareRestClient
try:
from com.vmware.cis.tagging_client import CategoryModel
from com.vmware.vapi.std.errors_client import Error
except ImportError:
pass
class VmwareCategory(VmwareRestClient):
def __init__(self, module):
super(VmwareCategory, self).__init__(module)
self.category_service = self.api_client.tagging.Category
self.global_categories = dict()
self.category_name = self.params.get('category_name')
self.get_all_categories()
def ensure_state(self):
"""Manage internal states of categories. """
desired_state = self.params.get('state')
states = {
'present': {
'present': self.state_update_category,
'absent': self.state_create_category,
},
'absent': {
'present': self.state_delete_category,
'absent': self.state_unchanged,
}
}
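# The nested dict dispatches on (desired state, current state): for example
# state=present with an existing category runs state_update_category, while
# state=absent with a missing category is a no-op (state_unchanged).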
states[desired_state][self.check_category_status()]()
def state_create_category(self):
"""Create category."""
category_spec = self.category_service.CreateSpec()
category_spec.name = self.category_name
category_spec.description = self.params.get('category_description')
if self.params.get('category_cardinality') == 'single':
category_spec.cardinality = CategoryModel.Cardinality.SINGLE
else:
category_spec.cardinality = CategoryModel.Cardinality.MULTIPLE
category_spec.associable_types = set()
try:
category_id = self.category_service.create(category_spec)
except Error as error:
self.module.fail_json(msg="%s" % self.get_error_message(error))
if category_id:
self.module.exit_json(changed=True,
category_results=dict(msg="Category '%s' created." % category_spec.name,
category_id=category_id))
self.module.exit_json(changed=False,
category_results=dict(msg="No category created", category_id=''))
def state_unchanged(self):
"""Return unchanged state."""
self.module.exit_json(changed=False)
def state_update_category(self):
"""Update category."""
category_id = self.global_categories[self.category_name]['category_id']
changed = False
results = dict(msg="Category %s is unchanged." % self.category_name,
category_id=category_id)
category_update_spec = self.category_service.UpdateSpec()
change_list = []
old_cat_desc = self.global_categories[self.category_name]['category_description']
new_cat_desc = self.params.get('category_description')
if new_cat_desc and new_cat_desc != old_cat_desc:
category_update_spec.description = new_cat_desc
results['msg'] = 'Category %s updated.' % self.category_name
change_list.append(True)
new_cat_name = self.params.get('new_category_name')
if new_cat_name in self.global_categories:
self.module.fail_json(msg="Unable to rename %s as %s already"
" exists in configuration." % (self.category_name, new_cat_name))
old_cat_name = self.global_categories[self.category_name]['category_name']
if new_cat_name and new_cat_name != old_cat_name:
category_update_spec.name = new_cat_name
results['msg'] = 'Category %s updated.' % self.category_name
change_list.append(True)
if any(change_list):
try:
self.category_service.update(category_id, category_update_spec)
changed = True
except Error as error:
self.module.fail_json(msg="%s" % self.get_error_message(error))
self.module.exit_json(changed=changed,
category_results=results)
def state_delete_category(self):
"""Delete category."""
category_id = self.global_categories[self.category_name]['category_id']
try:
self.category_service.delete(category_id=category_id)
except Error as error:
self.module.fail_json(msg="%s" % self.get_error_message(error))
self.module.exit_json(changed=True,
category_results=dict(msg="Category '%s' deleted." % self.category_name,
category_id=category_id))
def check_category_status(self):
"""
Check if category exists or not
Returns: 'present' if category found, else 'absent'
"""
if self.category_name in self.global_categories:
return 'present'
else:
return 'absent'
def get_all_categories(self):
"""Retrieve all category information."""
for category in self.category_service.list():
category_obj = self.category_service.get(category)
self.global_categories[category_obj.name] = dict(
category_description=category_obj.description,
category_used_by=category_obj.used_by,
category_cardinality=str(category_obj.cardinality),
category_associable_types=category_obj.associable_types,
category_id=category_obj.id,
category_name=category_obj.name,
)
def main():
argument_spec = VmwareRestClient.vmware_client_argument_spec()
argument_spec.update(
category_name=dict(type='str', required=True),
category_description=dict(type='str', default='', required=False),
category_cardinality=dict(type='str', choices=["multiple", "single"], default="multiple"),
new_category_name=dict(type='str'),
state=dict(type='str', choices=['present', 'absent'], default='present'),
)
module = AnsibleModule(argument_spec=argument_spec)
vmware_category = VmwareCategory(module)
vmware_category.ensure_state()
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,220 |
vmware_category add "Associable Object Types" functionality
|
##### SUMMARY
Can functionality be added so that "Associable Object Types" can be configured for a tag category please?
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_category
##### ADDITIONAL INFORMATION
Perhaps the module could be extended to include a list, examples below;
```
- name: 'Create a Category'
connection: 'local'
vmware_category:
hostname: '{{ ansible_host }}'
username: '{{ ansible_vcenter_username }}'
password: '{{ ansible_vcenter_password }}'
category_name: "sas_luns"
category_description: "SAS LUNS"
category_cardinality: 'multiple'
associable_object_types:
- Datastore
- Datastore Cluster
state: present
validate_certs: '{{ validate_certs }}'
- name: 'Create a Category'
connection: 'local'
vmware_category:
hostname: '{{ ansible_host }}'
username: '{{ ansible_vcenter_username }}'
password: '{{ ansible_vcenter_password }}'
category_name: "Region 1"
category_description: "Region"
category_cardinality: 'multiple'
associable_object_types:
- All objects
state: present
validate_certs: '{{ validate_certs }}'
```
The type of Associable Object Types in vSphere 6.7 are:
All objects
Folder
Cluster
Datacenter
Datastore
Datastore Cluster
Distributed Port Group
Distributed Switch
Host
Content Library
Library item
Network
Resource Pool
vApp
Virtual Machine
|
https://github.com/ansible/ansible/issues/61220
|
https://github.com/ansible/ansible/pull/62347
|
24b8b629b9a608583262467bbdb63ed828530c78
|
19220a0607437ff29ccccdd67f5012cca0eee2f1
| 2019-08-23T11:40:12Z |
python
| 2019-12-11T08:15:07Z |
test/integration/targets/vmware_category/aliases
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,220 |
vmware_category add "Associable Object Types" functionality
|
##### SUMMARY
Can functionality be added so that "Associable Object Types" can be configured for a tag category please?
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_category
##### ADDITIONAL INFORMATION
Perhaps the module could be extended to include a list, examples below;
```
- name: 'Create a Category'
connection: 'local'
vmware_category:
hostname: '{{ ansible_host }}'
username: '{{ ansible_vcenter_username }}'
password: '{{ ansible_vcenter_password }}'
category_name: "sas_luns"
category_description: "SAS LUNS"
category_cardinality: 'multiple'
associable_object_types:
- Datastore
- Datastore Cluster
state: present
validate_certs: '{{ validate_certs }}'
- name: 'Create a Category'
connection: 'local'
vmware_category:
hostname: '{{ ansible_host }}'
username: '{{ ansible_vcenter_username }}'
password: '{{ ansible_vcenter_password }}'
category_name: "Region 1"
category_description: "Region"
category_cardinality: 'multiple'
associable_object_types:
- All objects
state: present
validate_certs: '{{ validate_certs }}'
```
The type of Associable Object Types in vSphere 6.7 are:
All objects
Folder
Cluster
Datacenter
Datastore
Datastore Cluster
Distributed Port Group
Distributed Switch
Host
Content Library
Library item
Network
Resource Pool
vApp
Virtual Machine
|
https://github.com/ansible/ansible/issues/61220
|
https://github.com/ansible/ansible/pull/62347
|
24b8b629b9a608583262467bbdb63ed828530c78
|
19220a0607437ff29ccccdd67f5012cca0eee2f1
| 2019-08-23T11:40:12Z |
python
| 2019-12-11T08:15:07Z |
test/integration/targets/vmware_category/tasks/associable_obj_types.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,220 |
vmware_category add "Associable Object Types" functionality
|
##### SUMMARY
Can functionality be added so that "Associable Object Types" can be configured for a tag category please?
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_category
##### ADDITIONAL INFORMATION
Perhaps the module could be extended to include a list, examples below;
```
- name: 'Create a Category'
connection: 'local'
vmware_category:
hostname: '{{ ansible_host }}'
username: '{{ ansible_vcenter_username }}'
password: '{{ ansible_vcenter_password }}'
category_name: "sas_luns"
category_description: "SAS LUNS"
category_cardinality: 'multiple'
associable_object_types:
- Datastore
- Datastore Cluster
state: present
validate_certs: '{{ validate_certs }}'
- name: 'Create a Category'
connection: 'local'
vmware_category:
hostname: '{{ ansible_host }}'
username: '{{ ansible_vcenter_username }}'
password: '{{ ansible_vcenter_password }}'
category_name: "Region 1"
category_description: "Region"
category_cardinality: 'multiple'
associable_object_types:
- All objects
state: present
validate_certs: '{{ validate_certs }}'
```
The type of Associable Object Types in vSphere 6.7 are:
All objects
Folder
Cluster
Datacenter
Datastore
Datastore Cluster
Distributed Port Group
Distributed Switch
Host
Content Library
Library item
Network
Resource Pool
vApp
Virtual Machine
|
https://github.com/ansible/ansible/issues/61220
|
https://github.com/ansible/ansible/pull/62347
|
24b8b629b9a608583262467bbdb63ed828530c78
|
19220a0607437ff29ccccdd67f5012cca0eee2f1
| 2019-08-23T11:40:12Z |
python
| 2019-12-11T08:15:07Z |
test/integration/targets/vmware_category/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 64,613 |
vmware_guest throws error when guest is in 'suspended' state attempting to move to 'poweredon'
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
vmware_guest throws error when guest is in 'suspended' state attempting to move to 'poweredon'
All other state transitions (poweredon, poweredoff, restarted, shutdownguest, rebootguest) seem to work fine.
The error only occurs when the machine is in the 'suspended' state and we attempt to move it to 'poweredon' or any other state.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
vmware_guest
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ansible 2.7.8
config file = /Users/spollock/ansible/fs-vmw-control/ansible.cfg
configured module search path = [u'/Users/spollock/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/spollock/anaconda2/lib/python2.7/site-packages/ansible
executable location = /Users/spollock/anaconda2/bin/ansible
python version = 2.7.15 |Anaconda, Inc.| (default, Dec 14 2018, 13:10:39) [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
ProductName: Mac OS X
ProductVersion: 10.14.6
BuildVersion: 18G103
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "Changing Machine state for: {{ vcpwrname }} to State: {{ vcpwrstate }}"
vmware_guest:
validate_certs: no
state: "{{ vcpwrstate }}"
hostname: "{{ vcenterhostname }}"
username: "{{ vcusername }}"
password: "{{ vcpassword }}"
datacenter: "{{ vcdatacenter }}"
name: "{{ vcpwrname }}"
folder: "{{ vcpwrfolder }}"
delegate_to: localhost
tags: power
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Expected machine to return from suspended state.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes -->
```paste below
Instead, the following error is thrown:
TASK [power-single : Changing Machine state for: Pollock-W10 to State: poweredon] *************************************************************************
task path: /Users/spollock/ansible/fs-vmw-control/roles/power-single/tasks/main.yml:12
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: spollock
<localhost> EXEC /bin/sh -c 'echo ~spollock && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/spollock/.ansible/tmp/ansible-tmp-1573230765.56-35724150586079 `" && echo ansible-tmp-1573230765.56-35724150586079="` echo /Users/spollock/.ansible/tmp/ansible-tmp-1573230765.56-35724150586079 `" ) && sleep 0'
Using module file /Users/spollock/anaconda2/lib/python2.7/site-packages/ansible/modules/cloud/vmware/vmware_guest.py
<localhost> PUT /Users/spollock/.ansible/tmp/ansible-local-84161qhcXMy/tmpMTq2G4 TO /Users/spollock/.ansible/tmp/ansible-tmp-1573230765.56-35724150586079/AnsiballZ_vmware_guest.py
<localhost> EXEC /bin/sh -c 'chmod u+x /Users/spollock/.ansible/tmp/ansible-tmp-1573230765.56-35724150586079/ /Users/spollock/.ansible/tmp/ansible-tmp-1573230765.56-35724150586079/AnsiballZ_vmware_guest.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python /Users/spollock/.ansible/tmp/ansible-tmp-1573230765.56-35724150586079/AnsiballZ_vmware_guest.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /Users/spollock/.ansible/tmp/ansible-tmp-1573230765.56-35724150586079/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/Users/spollock/.ansible/tmp/ansible-tmp-1573230765.56-35724150586079/AnsiballZ_vmware_guest.py", line 113, in <module>
_ansiballz_main()
File "/Users/spollock/.ansible/tmp/ansible-tmp-1573230765.56-35724150586079/AnsiballZ_vmware_guest.py", line 105, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/Users/spollock/.ansible/tmp/ansible-tmp-1573230765.56-35724150586079/AnsiballZ_vmware_guest.py", line 48, in invoke_module
imp.load_module('__main__', mod, module, MOD_DESC)
File "/var/folders/3g/2y8gx9ds133d8qyhqk8yd5tc0000gp/T/ansible_vmware_guest_payload_Z5rt91/__main__.py", line 2396, in <module>
File "/var/folders/3g/2y8gx9ds133d8qyhqk8yd5tc0000gp/T/ansible_vmware_guest_payload_Z5rt91/__main__.py", line 2372, in main
KeyError: 'instance'
fatal: [localhost -> localhost]: FAILED! => {
"changed": false,
"module_stderr": "Traceback (most recent call last):\n File \"/Users/spollock/.ansible/tmp/ansible-tmp-1573230765.56-35724150586079/AnsiballZ_vmware_guest.py\", line 113, in <module>\n _ansiballz_main()\n File \"/Users/spollock/.ansible/tmp/ansible-tmp-1573230765.56-35724150586079/AnsiballZ_vmware_guest.py\", line 105, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/Users/spollock/.ansible/tmp/ansible-tmp-1573230765.56-35724150586079/AnsiballZ_vmware_guest.py\", line 48, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/var/folders/3g/2y8gx9ds133d8qyhqk8yd5tc0000gp/T/ansible_vmware_guest_payload_Z5rt91/__main__.py\", line 2396, in <module>\n File \"/var/folders/3g/2y8gx9ds133d8qyhqk8yd5tc0000gp/T/ansible_vmware_guest_payload_Z5rt91/__main__.py\", line 2372, in main\nKeyError: 'instance'\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
PLAY RECAP ************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
```
|
https://github.com/ansible/ansible/issues/64613
|
https://github.com/ansible/ansible/pull/65684
|
41e19a4058bf56ad797e5c57212cd05294ff8934
|
356e3e30faa0394e9062852ee849f269ce77f3a2
| 2019-11-08T16:33:58Z |
python
| 2019-12-12T04:16:24Z |
lib/ansible/module_utils/vmware.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Joseph Callen <jcallen () csc.com>
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, James E. King III (@jeking3) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import atexit
import ansible.module_utils.common._collections_compat as collections_compat
import json
import os
import re
import ssl
import time
import traceback
from random import randint
from distutils.version import StrictVersion
REQUESTS_IMP_ERR = None
try:
# requests is required for exception handling of the ConnectionError
import requests
HAS_REQUESTS = True
except ImportError:
REQUESTS_IMP_ERR = traceback.format_exc()
HAS_REQUESTS = False
PYVMOMI_IMP_ERR = None
try:
from pyVim import connect
from pyVmomi import vim, vmodl, VmomiSupport
HAS_PYVMOMI = True
HAS_PYVMOMIJSON = hasattr(VmomiSupport, 'VmomiJSONEncoder')
except ImportError:
PYVMOMI_IMP_ERR = traceback.format_exc()
HAS_PYVMOMI = False
HAS_PYVMOMIJSON = False
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.six import integer_types, iteritems, string_types, raise_from
from ansible.module_utils.basic import env_fallback, missing_required_lib
class TaskError(Exception):
def __init__(self, *args, **kwargs):
super(TaskError, self).__init__(*args, **kwargs)
def wait_for_task(task, max_backoff=64, timeout=3600):
"""Wait for given task using exponential back-off algorithm.
Args:
task: VMware task object
max_backoff: Maximum amount of sleep time in seconds
timeout: Timeout for the given task in seconds
Returns: Tuple with True and result for successful task
Raises: TaskError on failure
"""
failure_counter = 0
start_time = time.time()
while True:
if time.time() - start_time >= timeout:
raise TaskError("Timeout")
if task.info.state == vim.TaskInfo.State.success:
return True, task.info.result
if task.info.state == vim.TaskInfo.State.error:
error_msg = task.info.error
host_thumbprint = None
try:
error_msg = error_msg.msg
if hasattr(task.info.error, 'thumbprint'):
host_thumbprint = task.info.error.thumbprint
except AttributeError:
pass
finally:
raise_from(TaskError(error_msg, host_thumbprint), task.info.error)
if task.info.state in [vim.TaskInfo.State.running, vim.TaskInfo.State.queued]:
sleep_time = min(2 ** failure_counter + randint(1, 1000) / 1000, max_backoff)
time.sleep(sleep_time)
failure_counter += 1
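# Illustrative usage (an assumption, not called here): wait on a pyVmomi task
# returned by a power operation.
#
#   task = vm.PowerOn()
#   changed, result = wait_for_task(task, max_backoff=32, timeout=600)
#
# Between polls the loop sleeps min(2 ** n + jitter, max_backoff) seconds,
# so polling backs off exponentially for long-running tasks.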
def wait_for_vm_ip(content, vm, timeout=300):
facts = dict()
interval = 15
while timeout > 0:
_facts = gather_vm_facts(content, vm)
if _facts['ipv4'] or _facts['ipv6']:
facts = _facts
break
time.sleep(interval)
timeout -= interval
return facts
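# Illustrative usage (assumption): block until the guest reports an address,
# e.g. after a clone or power-on, then read it from the returned facts.
#
#   facts = wait_for_vm_ip(content, vm, timeout=600)
#   ip = facts.get('ipv4') or facts.get('ipv6')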
def find_obj(content, vimtype, name, first=True, folder=None):
container = content.viewManager.CreateContainerView(folder or content.rootFolder, recursive=True, type=vimtype)
# Get all objects matching type (and name if given)
obj_list = [obj for obj in container.view if not name or to_text(obj.name) == to_text(name)]
container.Destroy()
# Return first match or None
if first:
if obj_list:
return obj_list[0]
return None
# Return all matching objects or empty list
return obj_list
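# Illustrative lookups (assumptions):
#
#   host = find_obj(content, [vim.HostSystem], 'esxi01')                   # first match or None
#   all_vms = find_obj(content, [vim.VirtualMachine], None, first=False)   # every VM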
def find_dvspg_by_name(dv_switch, portgroup_name):
portgroup_name = quote_obj_name(portgroup_name)
portgroups = dv_switch.portgroup
for pg in portgroups:
if pg.name == portgroup_name:
return pg
return None
def find_object_by_name(content, name, obj_type, folder=None, recurse=True):
if not isinstance(obj_type, list):
obj_type = [obj_type]
objects = get_all_objs(content, obj_type, folder=folder, recurse=recurse)
for obj in objects:
if obj.name == name:
return obj
return None
def find_cluster_by_name(content, cluster_name, datacenter=None):
if datacenter and hasattr(datacenter, 'hostFolder'):
folder = datacenter.hostFolder
else:
folder = content.rootFolder
return find_object_by_name(content, cluster_name, [vim.ClusterComputeResource], folder=folder)
def find_datacenter_by_name(content, datacenter_name):
return find_object_by_name(content, datacenter_name, [vim.Datacenter])
def get_parent_datacenter(obj):
""" Walk the parent tree to find the objects datacenter """
if isinstance(obj, vim.Datacenter):
return obj
datacenter = None
while True:
if not hasattr(obj, 'parent'):
break
obj = obj.parent
if isinstance(obj, vim.Datacenter):
datacenter = obj
break
return datacenter
def find_datastore_by_name(content, datastore_name, datacenter_name=None):
return find_object_by_name(content, datastore_name, [vim.Datastore], datacenter_name)
def find_dvs_by_name(content, switch_name, folder=None):
return find_object_by_name(content, switch_name, [vim.DistributedVirtualSwitch], folder=folder)
def find_hostsystem_by_name(content, hostname):
return find_object_by_name(content, hostname, [vim.HostSystem])
def find_resource_pool_by_name(content, resource_pool_name):
return find_object_by_name(content, resource_pool_name, [vim.ResourcePool])
def find_network_by_name(content, network_name):
return find_object_by_name(content, quote_obj_name(network_name), [vim.Network])
def find_vm_by_id(content, vm_id, vm_id_type="vm_name", datacenter=None,
cluster=None, folder=None, match_first=False):
""" UUID is unique to a VM, every other id returns the first match. """
si = content.searchIndex
vm = None
if vm_id_type == 'dns_name':
vm = si.FindByDnsName(datacenter=datacenter, dnsName=vm_id, vmSearch=True)
elif vm_id_type == 'uuid':
# Search By BIOS UUID rather than instance UUID
vm = si.FindByUuid(datacenter=datacenter, instanceUuid=False, uuid=vm_id, vmSearch=True)
elif vm_id_type == 'instance_uuid':
vm = si.FindByUuid(datacenter=datacenter, instanceUuid=True, uuid=vm_id, vmSearch=True)
elif vm_id_type == 'ip':
vm = si.FindByIp(datacenter=datacenter, ip=vm_id, vmSearch=True)
elif vm_id_type == 'vm_name':
folder = None
if cluster:
folder = cluster
elif datacenter:
folder = datacenter.hostFolder
vm = find_vm_by_name(content, vm_id, folder)
elif vm_id_type == 'inventory_path':
searchpath = folder
# get all objects for this path
f_obj = si.FindByInventoryPath(searchpath)
if f_obj:
if isinstance(f_obj, vim.Datacenter):
f_obj = f_obj.vmFolder
for c_obj in f_obj.childEntity:
if not isinstance(c_obj, vim.VirtualMachine):
continue
if c_obj.name == vm_id:
vm = c_obj
if match_first:
break
return vm
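# Illustrative lookups (assumptions):
#
#   vm = find_vm_by_id(content, 'Pollock-W10', vm_id_type='vm_name', datacenter=dc)
#   vm = find_vm_by_id(content, '42 0a ...', vm_id_type='uuid')  # BIOS UUID, not instance UUID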
def find_vm_by_name(content, vm_name, folder=None, recurse=True):
return find_object_by_name(content, vm_name, [vim.VirtualMachine], folder=folder, recurse=recurse)
def find_host_portgroup_by_name(host, portgroup_name):
for portgroup in host.config.network.portgroup:
if portgroup.spec.name == portgroup_name:
return portgroup
return None
def compile_folder_path_for_object(vobj):
""" make a /vm/foo/bar/baz like folder path for an object """
paths = []
if isinstance(vobj, vim.Folder):
paths.append(vobj.name)
thisobj = vobj
while hasattr(thisobj, 'parent'):
thisobj = thisobj.parent
try:
moid = thisobj._moId
except AttributeError:
moid = None
if moid in ['group-d1', 'ha-folder-root']:
break
if isinstance(thisobj, vim.Folder):
paths.append(thisobj.name)
paths.reverse()
return '/' + '/'.join(paths)
def _get_vm_prop(vm, attributes):
"""Safely get a property or return None"""
result = vm
for attribute in attributes:
try:
result = getattr(result, attribute)
except (AttributeError, IndexError):
return None
return result
def gather_vm_facts(content, vm):
""" Gather facts from vim.VirtualMachine object. """
facts = {
'module_hw': True,
'hw_name': vm.config.name,
'hw_power_status': vm.summary.runtime.powerState,
'hw_guest_full_name': vm.summary.guest.guestFullName,
'hw_guest_id': vm.summary.guest.guestId,
'hw_product_uuid': vm.config.uuid,
'hw_processor_count': vm.config.hardware.numCPU,
'hw_cores_per_socket': vm.config.hardware.numCoresPerSocket,
'hw_memtotal_mb': vm.config.hardware.memoryMB,
'hw_interfaces': [],
'hw_datastores': [],
'hw_files': [],
'hw_esxi_host': None,
'hw_guest_ha_state': None,
'hw_is_template': vm.config.template,
'hw_folder': None,
'hw_version': vm.config.version,
'instance_uuid': vm.config.instanceUuid,
'guest_tools_status': _get_vm_prop(vm, ('guest', 'toolsRunningStatus')),
'guest_tools_version': _get_vm_prop(vm, ('guest', 'toolsVersion')),
'guest_question': vm.summary.runtime.question,
'guest_consolidation_needed': vm.summary.runtime.consolidationNeeded,
'ipv4': None,
'ipv6': None,
'annotation': vm.config.annotation,
'customvalues': {},
'snapshots': [],
'current_snapshot': None,
'vnc': {},
'moid': vm._moId,
'vimref': "vim.VirtualMachine:%s" % vm._moId,
}
# facts that may or may not exist
if vm.summary.runtime.host:
try:
host = vm.summary.runtime.host
facts['hw_esxi_host'] = host.summary.config.name
facts['hw_cluster'] = host.parent.name if host.parent and isinstance(host.parent, vim.ClusterComputeResource) else None
except vim.fault.NoPermission:
# User does not have read permission for the host system,
# proceed without this value. This value does not contribute or hamper
# provisioning or power management operations.
pass
if vm.summary.runtime.dasVmProtection:
facts['hw_guest_ha_state'] = vm.summary.runtime.dasVmProtection.dasProtected
datastores = vm.datastore
for ds in datastores:
facts['hw_datastores'].append(ds.info.name)
try:
files = vm.config.files
layout = vm.layout
if files:
facts['hw_files'] = [files.vmPathName]
for item in layout.snapshot:
for snap in item.snapshotFile:
if 'vmsn' in snap:
facts['hw_files'].append(snap)
for item in layout.configFile:
facts['hw_files'].append(os.path.join(os.path.dirname(files.vmPathName), item))
for item in vm.layout.logFile:
facts['hw_files'].append(os.path.join(files.logDirectory, item))
for item in vm.layout.disk:
for disk in item.diskFile:
facts['hw_files'].append(disk)
except Exception:
pass
facts['hw_folder'] = PyVmomi.get_vm_path(content, vm)
cfm = content.customFieldsManager
# Resolve custom values
for value_obj in vm.summary.customValue:
kn = value_obj.key
if cfm is not None and cfm.field:
for f in cfm.field:
if f.key == value_obj.key:
kn = f.name
# Exit the loop immediately, we found it
break
facts['customvalues'][kn] = value_obj.value
net_dict = {}
vmnet = _get_vm_prop(vm, ('guest', 'net'))
if vmnet:
for device in vmnet:
if device.deviceConfigId > 0:
net_dict[device.macAddress] = list(device.ipAddress)
if vm.guest.ipAddress:
if ':' in vm.guest.ipAddress:
facts['ipv6'] = vm.guest.ipAddress
else:
facts['ipv4'] = vm.guest.ipAddress
ethernet_idx = 0
for entry in vm.config.hardware.device:
if not hasattr(entry, 'macAddress'):
continue
if entry.macAddress:
mac_addr = entry.macAddress
mac_addr_dash = mac_addr.replace(':', '-')
else:
mac_addr = mac_addr_dash = None
if (hasattr(entry, 'backing') and hasattr(entry.backing, 'port') and
hasattr(entry.backing.port, 'portKey') and hasattr(entry.backing.port, 'portgroupKey')):
port_group_key = entry.backing.port.portgroupKey
port_key = entry.backing.port.portKey
else:
port_group_key = None
port_key = None
factname = 'hw_eth' + str(ethernet_idx)
facts[factname] = {
'addresstype': entry.addressType,
'label': entry.deviceInfo.label,
'macaddress': mac_addr,
'ipaddresses': net_dict.get(entry.macAddress, None),
'macaddress_dash': mac_addr_dash,
'summary': entry.deviceInfo.summary,
'portgroup_portkey': port_key,
'portgroup_key': port_group_key,
}
facts['hw_interfaces'].append('eth' + str(ethernet_idx))
ethernet_idx += 1
snapshot_facts = list_snapshots(vm)
if 'snapshots' in snapshot_facts:
facts['snapshots'] = snapshot_facts['snapshots']
facts['current_snapshot'] = snapshot_facts['current_snapshot']
facts['vnc'] = get_vnc_extraconfig(vm)
return facts
def deserialize_snapshot_obj(obj):
return {'id': obj.id,
'name': obj.name,
'description': obj.description,
'creation_time': obj.createTime,
'state': obj.state}
def list_snapshots_recursively(snapshots):
snapshot_data = []
for snapshot in snapshots:
snapshot_data.append(deserialize_snapshot_obj(snapshot))
snapshot_data = snapshot_data + list_snapshots_recursively(snapshot.childSnapshotList)
return snapshot_data
def get_current_snap_obj(snapshots, snapob):
snap_obj = []
for snapshot in snapshots:
if snapshot.snapshot == snapob:
snap_obj.append(snapshot)
snap_obj = snap_obj + get_current_snap_obj(snapshot.childSnapshotList, snapob)
return snap_obj
def list_snapshots(vm):
result = {}
snapshot = _get_vm_prop(vm, ('snapshot',))
if not snapshot:
return result
if vm.snapshot is None:
return result
result['snapshots'] = list_snapshots_recursively(vm.snapshot.rootSnapshotList)
current_snapref = vm.snapshot.currentSnapshot
current_snap_obj = get_current_snap_obj(vm.snapshot.rootSnapshotList, current_snapref)
if current_snap_obj:
result['current_snapshot'] = deserialize_snapshot_obj(current_snap_obj[0])
else:
result['current_snapshot'] = dict()
return result
def get_vnc_extraconfig(vm):
result = {}
for opts in vm.config.extraConfig:
for optkeyname in ['enabled', 'ip', 'port', 'password']:
if opts.key.lower() == "remotedisplay.vnc." + optkeyname:
result[optkeyname] = opts.value
return result
def vmware_argument_spec():
return dict(
hostname=dict(type='str',
required=False,
fallback=(env_fallback, ['VMWARE_HOST']),
),
username=dict(type='str',
aliases=['user', 'admin'],
required=False,
fallback=(env_fallback, ['VMWARE_USER'])),
password=dict(type='str',
aliases=['pass', 'pwd'],
required=False,
no_log=True,
fallback=(env_fallback, ['VMWARE_PASSWORD'])),
port=dict(type='int',
default=443,
fallback=(env_fallback, ['VMWARE_PORT'])),
validate_certs=dict(type='bool',
required=False,
default=True,
fallback=(env_fallback, ['VMWARE_VALIDATE_CERTS'])
),
proxy_host=dict(type='str',
required=False,
default=None,
fallback=(env_fallback, ['VMWARE_PROXY_HOST'])),
proxy_port=dict(type='int',
required=False,
default=None,
fallback=(env_fallback, ['VMWARE_PROXY_PORT'])),
)
def connect_to_api(module, disconnect_atexit=True, return_si=False):
hostname = module.params['hostname']
username = module.params['username']
password = module.params['password']
port = module.params.get('port', 443)
validate_certs = module.params['validate_certs']
if not hostname:
module.fail_json(msg="Hostname parameter is missing."
" Please specify this parameter in task or"
" export environment variable like 'export VMWARE_HOST=ESXI_HOSTNAME'")
if not username:
module.fail_json(msg="Username parameter is missing."
" Please specify this parameter in task or"
" export environment variable like 'export VMWARE_USER=ESXI_USERNAME'")
if not password:
module.fail_json(msg="Password parameter is missing."
" Please specify this parameter in task or"
" export environment variable like 'export VMWARE_PASSWORD=ESXI_PASSWORD'")
if validate_certs and not hasattr(ssl, 'SSLContext'):
module.fail_json(msg='pyVim does not support changing verification mode with python < 2.7.9. Either update '
'python or use validate_certs=false.')
elif validate_certs:
ssl_context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ssl_context.verify_mode = ssl.CERT_REQUIRED
ssl_context.check_hostname = True
ssl_context.load_default_certs()
elif hasattr(ssl, 'SSLContext'):
ssl_context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ssl_context.verify_mode = ssl.CERT_NONE
ssl_context.check_hostname = False
else: # Python < 2.7.9 or RHEL/Centos < 7.4
ssl_context = None
service_instance = None
proxy_host = module.params.get('proxy_host')
proxy_port = module.params.get('proxy_port')
connect_args = dict(
host=hostname,
port=port,
)
if ssl_context:
connect_args.update(sslContext=ssl_context)
msg_suffix = ''
try:
if proxy_host:
msg_suffix = " [proxy: %s:%d]" % (proxy_host, proxy_port)
connect_args.update(httpProxyHost=proxy_host, httpProxyPort=proxy_port)
smart_stub = connect.SmartStubAdapter(**connect_args)
session_stub = connect.VimSessionOrientedStub(smart_stub, connect.VimSessionOrientedStub.makeUserLoginMethod(username, password))
service_instance = vim.ServiceInstance('ServiceInstance', session_stub)
else:
connect_args.update(user=username, pwd=password)
service_instance = connect.SmartConnect(**connect_args)
except vim.fault.InvalidLogin as invalid_login:
msg = "Unable to log on to vCenter or ESXi API at %s:%s " % (hostname, port)
module.fail_json(msg="%s as %s: %s" % (msg, username, invalid_login.msg) + msg_suffix)
except vim.fault.NoPermission as no_permission:
module.fail_json(msg="User %s does not have required permission"
" to log on to vCenter or ESXi API at %s:%s : %s" % (username, hostname, port, no_permission.msg))
except (requests.ConnectionError, ssl.SSLError) as generic_req_exc:
module.fail_json(msg="Unable to connect to vCenter or ESXi API at %s on TCP/%s: %s" % (hostname, port, generic_req_exc))
except vmodl.fault.InvalidRequest as invalid_request:
# Request is malformed
msg = "Failed to get a response from server %s:%s " % (hostname, port)
module.fail_json(msg="%s as request is malformed: %s" % (msg, invalid_request.msg) + msg_suffix)
except Exception as generic_exc:
msg = "Unknown error while connecting to vCenter or ESXi API at %s:%s" % (hostname, port) + msg_suffix
module.fail_json(msg="%s : %s" % (msg, generic_exc))
if service_instance is None:
msg = "Unknown error while connecting to vCenter or ESXi API at %s:%s" % (hostname, port)
module.fail_json(msg=msg + msg_suffix)
# Disabling atexit should be used in special cases only.
# Such as IP change of the ESXi host which removes the connection anyway.
# Also removal significantly speeds up the return of the module
if disconnect_atexit:
atexit.register(connect.Disconnect, service_instance)
if return_si:
return service_instance, service_instance.RetrieveContent()
return service_instance.RetrieveContent()
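# Illustrative calls (assumptions): most modules reach this through the
# PyVmomi base class below, but the function can be used directly.
#
#   content = connect_to_api(module)                       # ServiceContent only
#   si, content = connect_to_api(module, return_si=True)   # plus the ServiceInstance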
def get_all_objs(content, vimtype, folder=None, recurse=True):
if not folder:
folder = content.rootFolder
obj = {}
container = content.viewManager.CreateContainerView(folder, vimtype, recurse)
for managed_object_ref in container.view:
obj.update({managed_object_ref: managed_object_ref.name})
return obj
def run_command_in_guest(content, vm, username, password, program_path, program_args, program_cwd, program_env):
result = {'failed': False}
tools_status = vm.guest.toolsStatus
if (tools_status == 'toolsNotInstalled' or
tools_status == 'toolsNotRunning'):
result['failed'] = True
result['msg'] = "VMwareTools is not installed or is not running in the guest"
return result
# https://github.com/vmware/pyvmomi/blob/master/docs/vim/vm/guest/NamePasswordAuthentication.rst
creds = vim.vm.guest.NamePasswordAuthentication(
username=username, password=password
)
try:
# https://github.com/vmware/pyvmomi/blob/master/docs/vim/vm/guest/ProcessManager.rst
pm = content.guestOperationsManager.processManager
# https://www.vmware.com/support/developer/converter-sdk/conv51_apireference/vim.vm.guest.ProcessManager.ProgramSpec.html
ps = vim.vm.guest.ProcessManager.ProgramSpec(
# programPath=program,
# arguments=args
programPath=program_path,
arguments=program_args,
workingDirectory=program_cwd,
)
res = pm.StartProgramInGuest(vm, creds, ps)
result['pid'] = res
pdata = pm.ListProcessesInGuest(vm, creds, [res])
# wait for pid to finish
while not pdata[0].endTime:
time.sleep(1)
pdata = pm.ListProcessesInGuest(vm, creds, [res])
result['owner'] = pdata[0].owner
result['startTime'] = pdata[0].startTime.isoformat()
result['endTime'] = pdata[0].endTime.isoformat()
result['exitCode'] = pdata[0].exitCode
if result['exitCode'] != 0:
result['failed'] = True
result['msg'] = "program exited non-zero"
else:
result['msg'] = "program completed successfully"
except Exception as e:
result['msg'] = str(e)
result['failed'] = True
return result
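# Illustrative call (assumption): run a program inside the guest using VMware
# Tools credentials and check the exit code.
#
#   r = run_command_in_guest(content, vm, 'root', 'secret',
#                            '/bin/touch', '/tmp/ok', '/tmp', None)
#   if r['failed']:
#       module.fail_json(msg=r['msg'])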
def serialize_spec(clonespec):
"""Serialize a clonespec or a relocation spec"""
data = {}
attrs = dir(clonespec)
attrs = [x for x in attrs if not x.startswith('_')]
for x in attrs:
xo = getattr(clonespec, x)
if callable(xo):
continue
xt = type(xo)
if xo is None:
data[x] = None
elif isinstance(xo, vim.vm.ConfigSpec):
data[x] = serialize_spec(xo)
elif isinstance(xo, vim.vm.RelocateSpec):
data[x] = serialize_spec(xo)
elif isinstance(xo, vim.vm.device.VirtualDisk):
data[x] = serialize_spec(xo)
elif isinstance(xo, vim.vm.device.VirtualDeviceSpec.FileOperation):
data[x] = to_text(xo)
elif isinstance(xo, vim.Description):
data[x] = {
'dynamicProperty': serialize_spec(xo.dynamicProperty),
'dynamicType': serialize_spec(xo.dynamicType),
'label': serialize_spec(xo.label),
'summary': serialize_spec(xo.summary),
}
elif hasattr(xo, 'name'):
data[x] = to_text(xo) + ':' + to_text(xo.name)
elif isinstance(xo, vim.vm.ProfileSpec):
pass
elif issubclass(xt, list):
data[x] = []
for xe in xo:
data[x].append(serialize_spec(xe))
elif issubclass(xt, string_types + integer_types + (float, bool)):
if issubclass(xt, integer_types):
data[x] = int(xo)
else:
data[x] = to_text(xo)
elif issubclass(xt, bool):
data[x] = xo
elif issubclass(xt, dict):
data[to_text(x)] = {}
for k, v in xo.items():
k = to_text(k)
data[x][k] = serialize_spec(v)
else:
data[x] = str(xt)
return data
def find_host_by_cluster_datacenter(module, content, datacenter_name, cluster_name, host_name):
dc = find_datacenter_by_name(content, datacenter_name)
if dc is None:
module.fail_json(msg="Unable to find datacenter with name %s" % datacenter_name)
cluster = find_cluster_by_name(content, cluster_name, datacenter=dc)
if cluster is None:
module.fail_json(msg="Unable to find cluster with name %s" % cluster_name)
for host in cluster.host:
if host.name == host_name:
return host, cluster
return None, cluster
def set_vm_power_state(content, vm, state, force, timeout=0):
"""
Set the power status for a VM determined by the current and
requested states. When force is true, the transition is attempted
even from intermediate power states such as 'suspended'.
"""
facts = gather_vm_facts(content, vm)
expected_state = state.replace('_', '').replace('-', '').lower()
current_state = facts['hw_power_status'].lower()
result = dict(
changed=False,
failed=False,
)
# Need Force
if not force and current_state not in ['poweredon', 'poweredoff']:
result['failed'] = True
result['msg'] = "Virtual Machine is in %s power state. Force is required!" % current_state
return result
# State is not already true
if current_state != expected_state:
task = None
try:
if expected_state == 'poweredoff':
task = vm.PowerOff()
elif expected_state == 'poweredon':
task = vm.PowerOn()
elif expected_state == 'restarted':
if current_state in ('poweredon', 'poweringon', 'resetting', 'poweredoff'):
task = vm.Reset()
else:
result['failed'] = True
result['msg'] = "Cannot restart virtual machine in the current state %s" % current_state
elif expected_state == 'suspended':
if current_state in ('poweredon', 'poweringon'):
task = vm.Suspend()
else:
result['failed'] = True
result['msg'] = 'Cannot suspend virtual machine in the current state %s' % current_state
elif expected_state in ['shutdownguest', 'rebootguest']:
if current_state == 'poweredon':
if vm.guest.toolsRunningStatus == 'guestToolsRunning':
if expected_state == 'shutdownguest':
task = vm.ShutdownGuest()
if timeout > 0:
result.update(wait_for_poweroff(vm, timeout))
else:
task = vm.RebootGuest()
# Set result['changed'] immediately because
# shutdown and reboot return None.
result['changed'] = True
else:
result['failed'] = True
result['msg'] = "VMware tools should be installed for guest shutdown/reboot"
else:
result['failed'] = True
result['msg'] = "Virtual machine %s must be in poweredon state for guest shutdown/reboot" % vm.name
else:
result['failed'] = True
result['msg'] = "Unsupported expected state provided: %s" % expected_state
except Exception as e:
result['failed'] = True
result['msg'] = to_text(e)
if task:
wait_for_task(task)
if task.info.state == 'error':
result['failed'] = True
result['msg'] = task.info.error.msg
else:
result['changed'] = True
# need to get new metadata if changed
result['instance'] = gather_vm_facts(content, vm)
return result
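# Illustrative call (assumption): resuming a suspended VM needs force=True,
# since 'suspended' is outside the ['poweredon', 'poweredoff'] allow-list above.
#
#   result = set_vm_power_state(content, vm, 'powered-on', force=True)
#   if result['failed']:
#       module.fail_json(**result)
#
# Note that the early "Force is required!" return above leaves the result
# without an 'instance' key, which may be what the KeyError in the issue
# report is tripping over.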
def wait_for_poweroff(vm, timeout=300):
result = dict()
interval = 15
while timeout > 0:
if vm.runtime.powerState.lower() == 'poweredoff':
break
time.sleep(interval)
timeout -= interval
else:
result['failed'] = True
result['msg'] = 'Timeout while waiting for VM power off.'
return result
def is_integer(value, type_of='int'):
try:
VmomiSupport.vmodlTypes[type_of](value)
return True
except (TypeError, ValueError):
return False
def is_boolean(value):
if str(value).lower() in ['true', 'on', 'yes', 'false', 'off', 'no']:
return True
return False
def is_truthy(value):
if str(value).lower() in ['true', 'on', 'yes']:
return True
return False
def quote_obj_name(object_name=None):
"""
Replace special characters in object name
with urllib quote equivalent
"""
if not object_name:
return None
from collections import OrderedDict
SPECIAL_CHARS = OrderedDict({
'%': '%25',
'/': '%2f',
'\\': '%5c'
})
for key in SPECIAL_CHARS.keys():
if key in object_name:
object_name = object_name.replace(key, SPECIAL_CHARS[key])
return object_name
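# Illustrative results (assumptions): '%' is replaced first, so the '%' in the
# substituted '%2f'/'%5c' sequences is never double-escaped.
#
#   quote_obj_name('portgroup/with/slash')  ->  'portgroup%2fwith%2fslash'
#   quote_obj_name('100%')                  ->  '100%25'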
class PyVmomi(object):
def __init__(self, module):
"""
Constructor
"""
if not HAS_REQUESTS:
module.fail_json(msg=missing_required_lib('requests'),
exception=REQUESTS_IMP_ERR)
if not HAS_PYVMOMI:
module.fail_json(msg=missing_required_lib('PyVmomi'),
exception=PYVMOMI_IMP_ERR)
self.module = module
self.params = module.params
self.current_vm_obj = None
self.si, self.content = connect_to_api(self.module, return_si=True)
self.custom_field_mgr = []
if self.content.customFieldsManager: # not an ESXi
self.custom_field_mgr = self.content.customFieldsManager.field
def is_vcenter(self):
"""
Check if given hostname is vCenter or ESXi host
Returns: True if given connection is with vCenter server
False if given connection is with ESXi server
"""
api_type = None
try:
api_type = self.content.about.apiType
except (vmodl.RuntimeFault, vim.fault.VimFault) as exc:
self.module.fail_json(msg="Failed to get status of vCenter server : %s" % exc.msg)
if api_type == 'VirtualCenter':
return True
elif api_type == 'HostAgent':
return False
def get_managed_objects_properties(self, vim_type, properties=None):
"""
Look up a Managed Object Reference in vCenter / ESXi Environment
:param vim_type: Type of vim object e.g, for datacenter - vim.Datacenter
:param properties: List of properties related to vim object e.g. Name
:return: local content object
"""
# Get Root Folder
root_folder = self.content.rootFolder
if properties is None:
properties = ['name']
# Create Container View with default root folder
mor = self.content.viewManager.CreateContainerView(root_folder, [vim_type], True)
# Create Traversal spec
traversal_spec = vmodl.query.PropertyCollector.TraversalSpec(
name="traversal_spec",
path='view',
skip=False,
type=vim.view.ContainerView
)
# Create Property Spec
property_spec = vmodl.query.PropertyCollector.PropertySpec(
type=vim_type, # Type of object to retrieved
all=False,
pathSet=properties
)
# Create Object Spec
object_spec = vmodl.query.PropertyCollector.ObjectSpec(
obj=mor,
skip=True,
selectSet=[traversal_spec]
)
# Create Filter Spec
filter_spec = vmodl.query.PropertyCollector.FilterSpec(
objectSet=[object_spec],
propSet=[property_spec],
reportMissingObjectsInResults=False
)
return self.content.propertyCollector.RetrieveContents([filter_spec])
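# Illustrative use (assumption): fetch only the names of every VM without
# materialising full managed objects, then unpack the property sets.
#
#   for obj_content in self.get_managed_objects_properties(vim.VirtualMachine, ['name']):
#       for prop in obj_content.propSet:
#           print(obj_content.obj, prop.val)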
# Virtual Machine related functions
def get_vm(self):
"""
Find unique virtual machine either by UUID, MoID or Name.
Returns: virtual machine object if found, else None.
"""
vm_obj = None
user_desired_path = None
use_instance_uuid = self.params.get('use_instance_uuid') or False
if 'uuid' in self.params and self.params['uuid']:
if not use_instance_uuid:
vm_obj = find_vm_by_id(self.content, vm_id=self.params['uuid'], vm_id_type="uuid")
elif use_instance_uuid:
vm_obj = find_vm_by_id(self.content,
vm_id=self.params['uuid'],
vm_id_type="instance_uuid")
elif 'name' in self.params and self.params['name']:
objects = self.get_managed_objects_properties(vim_type=vim.VirtualMachine, properties=['name'])
vms = []
for temp_vm_object in objects:
if (
len(temp_vm_object.propSet) == 1 and
temp_vm_object.propSet[0].val == self.params['name']):
vms.append(temp_vm_object.obj)
# get_managed_objects_properties may return multiple virtual machine,
# following code tries to find user desired one depending upon the folder specified.
if len(vms) > 1:
# We have found multiple virtual machines, decide depending upon folder value
if self.params['folder'] is None:
self.module.fail_json(msg="Multiple virtual machines with same name [%s] found, "
"Folder value is a required parameter to find uniqueness "
"of the virtual machine" % self.params['name'],
details="Please see documentation of the vmware_guest module "
"for folder parameter.")
# Get folder path where virtual machine is located
# User provided folder where user thinks virtual machine is present
user_folder = self.params['folder']
# User defined datacenter
user_defined_dc = self.params['datacenter']
# User defined datacenter's object
datacenter_obj = find_datacenter_by_name(self.content, self.params['datacenter'])
# Get Path for Datacenter
dcpath = compile_folder_path_for_object(vobj=datacenter_obj)
# Nested folder does not return trailing /
if not dcpath.endswith('/'):
dcpath += '/'
if user_folder in [None, '', '/']:
# User provided blank value or
# User provided only root value, we fail
self.module.fail_json(msg="vmware_guest found multiple virtual machines with same "
"name [%s], please specify folder path other than blank "
"or '/'" % self.params['name'])
elif user_folder.startswith('/vm/'):
# User provided nested folder under VMware default vm folder i.e. folder = /vm/india/finance
user_desired_path = "%s%s%s" % (dcpath, user_defined_dc, user_folder)
else:
# User defined datacenter is not nested i.e. dcpath = '/' , or
# User defined datacenter is nested i.e. dcpath = '/F0/DC0' or
# User provided folder starts with / and datacenter i.e. folder = /ha-datacenter/ or
# User defined folder starts with datacenter without '/' i.e.
# folder = DC0/vm/india/finance or
# folder = DC0/vm
user_desired_path = user_folder
for vm in vms:
# Check if user has provided same path as virtual machine
actual_vm_folder_path = self.get_vm_path(content=self.content, vm_name=vm)
if not actual_vm_folder_path.startswith("%s%s" % (dcpath, user_defined_dc)):
continue
if user_desired_path in actual_vm_folder_path:
vm_obj = vm
break
elif vms:
# Unique virtual machine found.
vm_obj = vms[0]
elif 'moid' in self.params and self.params['moid']:
vm_obj = VmomiSupport.templateOf('VirtualMachine')(self.params['moid'], self.si._stub)
if vm_obj:
self.current_vm_obj = vm_obj
return vm_obj
def gather_facts(self, vm):
"""
Gather facts of virtual machine.
Args:
vm: Name of virtual machine.
Returns: Facts dictionary of the given virtual machine.
"""
return gather_vm_facts(self.content, vm)
@staticmethod
def get_vm_path(content, vm_name):
"""
Find the path of virtual machine.
Args:
content: VMware content object
vm_name: virtual machine managed object
Returns: Folder of virtual machine if exists, else None
"""
folder_name = None
folder = vm_name.parent
if folder:
folder_name = folder.name
fp = folder.parent
# climb back up the tree to find our path, stop before the root folder
while fp is not None and fp.name is not None and fp != content.rootFolder:
folder_name = fp.name + '/' + folder_name
try:
fp = fp.parent
except Exception:
break
folder_name = '/' + folder_name
return folder_name
def get_vm_or_template(self, template_name=None):
"""
Find the virtual machine or virtual machine template using name
used for cloning purpose.
Args:
template_name: Name of virtual machine or virtual machine template
Returns: virtual machine or virtual machine template object
"""
template_obj = None
if not template_name:
return template_obj
if "/" in template_name:
vm_obj_path = os.path.dirname(template_name)
vm_obj_name = os.path.basename(template_name)
template_obj = find_vm_by_id(self.content, vm_obj_name, vm_id_type="inventory_path", folder=vm_obj_path)
if template_obj:
return template_obj
else:
template_obj = find_vm_by_id(self.content, vm_id=template_name, vm_id_type="uuid")
if template_obj:
return template_obj
objects = self.get_managed_objects_properties(vim_type=vim.VirtualMachine, properties=['name'])
templates = []
for temp_vm_object in objects:
if len(temp_vm_object.propSet) != 1:
continue
for temp_vm_object_property in temp_vm_object.propSet:
if temp_vm_object_property.val == template_name:
templates.append(temp_vm_object.obj)
break
if len(templates) > 1:
# We have found multiple virtual machine templates
self.module.fail_json(msg="Multiple virtual machines or templates with same name [%s] found." % template_name)
elif templates:
template_obj = templates[0]
return template_obj
# Cluster related functions
def find_cluster_by_name(self, cluster_name, datacenter_name=None):
"""
Find Cluster by name in given datacenter
Args:
cluster_name: Name of cluster name to find
datacenter_name: (optional) Name of datacenter
Returns: True if found
"""
return find_cluster_by_name(self.content, cluster_name, datacenter=datacenter_name)
def get_all_hosts_by_cluster(self, cluster_name):
"""
Get all hosts from cluster by cluster name
Args:
cluster_name: Name of cluster
Returns: List of hosts
"""
cluster_obj = self.find_cluster_by_name(cluster_name=cluster_name)
if cluster_obj:
return [host for host in cluster_obj.host]
else:
return []
# Hosts related functions
def find_hostsystem_by_name(self, host_name):
"""
Find Host by name
Args:
host_name: Name of ESXi host
Returns: True if found
"""
return find_hostsystem_by_name(self.content, hostname=host_name)
def get_all_host_objs(self, cluster_name=None, esxi_host_name=None):
"""
Get all host system managed object
Args:
cluster_name: Name of Cluster
esxi_host_name: Name of ESXi server
Returns: A list of all host system managed objects, else empty list
"""
host_obj_list = []
if not self.is_vcenter():
hosts = get_all_objs(self.content, [vim.HostSystem]).keys()
if hosts:
host_obj_list.append(list(hosts)[0])
else:
if cluster_name:
cluster_obj = self.find_cluster_by_name(cluster_name=cluster_name)
if cluster_obj:
host_obj_list = [host for host in cluster_obj.host]
else:
self.module.fail_json(changed=False, msg="Cluster '%s' not found" % cluster_name)
elif esxi_host_name:
if isinstance(esxi_host_name, str):
esxi_host_name = [esxi_host_name]
for host in esxi_host_name:
esxi_host_obj = self.find_hostsystem_by_name(host_name=host)
if esxi_host_obj:
host_obj_list.append(esxi_host_obj)
else:
self.module.fail_json(changed=False, msg="ESXi '%s' not found" % host)
return host_obj_list
def host_version_at_least(self, version=None, vm_obj=None, host_name=None):
"""
Check that the ESXi Host is at least a specific version number
Args:
vm_obj: virtual machine object, required one of vm_obj, host_name
host_name (string): ESXi host name
version (tuple): a version tuple, for example (6, 7, 0)
Returns: bool
"""
if vm_obj:
host_system = vm_obj.summary.runtime.host
elif host_name:
host_system = self.find_hostsystem_by_name(host_name=host_name)
else:
self.module.fail_json(msg='One of vm_obj or host_name must be set.')
if host_system and version:
host_version = host_system.summary.config.product.version
return StrictVersion(host_version) >= StrictVersion('.'.join(map(str, version)))
else:
self.module.fail_json(msg='Unable to get the ESXi host from vm: %s, or hostname %s, '
'or the passed ESXi version: %s is None.' % (vm_obj, host_name, version))
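# Illustrative guard (assumption): gate a 6.7-only feature on the host version.
#
#   if not self.host_version_at_least(version=(6, 7, 0), vm_obj=vm):
#       self.module.fail_json(msg="This operation requires ESXi 6.7 or later")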
# Network related functions
@staticmethod
def find_host_portgroup_by_name(host, portgroup_name):
"""
Find Portgroup on given host
Args:
host: Host config object
portgroup_name: Name of portgroup
Returns: Portgroup object if found else False
"""
for portgroup in host.config.network.portgroup:
if portgroup.spec.name == portgroup_name:
return portgroup
return False
def get_all_port_groups_by_host(self, host_system):
"""
Get all Port Group by host
Args:
host_system: Name of Host System
Returns: List of Port Group Spec
"""
pgs_list = []
for pg in host_system.config.network.portgroup:
pgs_list.append(pg)
return pgs_list
def find_network_by_name(self, network_name=None):
"""
Get network specified by name
Args:
network_name: Name of network
Returns: List of network managed objects
"""
networks = []
if not network_name:
return networks
objects = self.get_managed_objects_properties(vim_type=vim.Network, properties=['name'])
for temp_vm_object in objects:
if len(temp_vm_object.propSet) != 1:
continue
for temp_vm_object_property in temp_vm_object.propSet:
if temp_vm_object_property.val == network_name:
networks.append(temp_vm_object.obj)
break
return networks
def network_exists_by_name(self, network_name=None):
"""
Check if network with a specified name exists or not
Args:
network_name: Name of network
Returns: True if network exists else False
"""
ret = False
if not network_name:
return ret
ret = True if self.find_network_by_name(network_name=network_name) else False
return ret
# Datacenter
def find_datacenter_by_name(self, datacenter_name):
"""
Get datacenter managed object by name
Args:
datacenter_name: Name of datacenter
Returns: datacenter managed object if found else None
"""
return find_datacenter_by_name(self.content, datacenter_name=datacenter_name)
def is_datastore_valid(self, datastore_obj=None):
"""
Check if datastore selected is valid or not
Args:
datastore_obj: datastore managed object
Returns: True if datastore is valid, False if not
"""
if not datastore_obj \
or datastore_obj.summary.maintenanceMode != 'normal' \
or not datastore_obj.summary.accessible:
return False
return True
def find_datastore_by_name(self, datastore_name, datacenter_name=None):
"""
Get datastore managed object by name
Args:
datastore_name: Name of datastore
datacenter_name: Name of datacenter where the datastore resides. This is needed because Datastores can be
shared across Datacenters, so we need to specify the datacenter to ensure we get the correct Managed Object Reference
Returns: datastore managed object if found else None
"""
return find_datastore_by_name(self.content, datastore_name=datastore_name, datacenter_name=datacenter_name)
# Datastore cluster
def find_datastore_cluster_by_name(self, datastore_cluster_name):
"""
Get datastore cluster managed object by name
Args:
datastore_cluster_name: Name of datastore cluster
Returns: Datastore cluster managed object if found else None
"""
data_store_clusters = get_all_objs(self.content, [vim.StoragePod])
for dsc in data_store_clusters:
if dsc.name == datastore_cluster_name:
return dsc
return None
# Resource pool
def find_resource_pool_by_name(self, resource_pool_name, folder=None):
"""
Get resource pool managed object by name
Args:
resource_pool_name: Name of resource pool
folder: (optional) Folder managed object to search in; defaults to the root folder
Returns: Resource pool managed object if found else None
"""
if not folder:
folder = self.content.rootFolder
resource_pools = get_all_objs(self.content, [vim.ResourcePool], folder=folder)
for rp in resource_pools:
if rp.name == resource_pool_name:
return rp
return None
def find_resource_pool_by_cluster(self, resource_pool_name='Resources', cluster=None):
"""
Get resource pool managed object by cluster object
Args:
resource_pool_name: Name of resource pool
cluster: Managed object of cluster
Returns: Resource pool managed object if found else None
"""
desired_rp = None
if not cluster:
return desired_rp
if resource_pool_name != 'Resources':
# Resource pool name is different than default 'Resources'
resource_pools = cluster.resourcePool.resourcePool
if resource_pools:
for rp in resource_pools:
if rp.name == resource_pool_name:
desired_rp = rp
break
else:
desired_rp = cluster.resourcePool
return desired_rp
# VMDK stuff
def vmdk_disk_path_split(self, vmdk_path):
"""
Takes a string in the format
[datastore_name] path/to/vm_name.vmdk
Returns a tuple with multiple strings:
1. datastore_name: The name of the datastore (without brackets)
2. vmdk_fullpath: The "path/to/vm_name.vmdk" portion
3. vmdk_filename: The "vm_name.vmdk" portion of the string (os.path.basename equivalent)
4. vmdk_folder: The "path/to/" portion of the string (os.path.dirname equivalent)
"""
try:
datastore_name = re.match(r'^\[(.*?)\]', vmdk_path, re.DOTALL).groups()[0]
vmdk_fullpath = re.match(r'\[.*?\] (.*)$', vmdk_path).groups()[0]
vmdk_filename = os.path.basename(vmdk_fullpath)
vmdk_folder = os.path.dirname(vmdk_fullpath)
return datastore_name, vmdk_fullpath, vmdk_filename, vmdk_folder
except (IndexError, AttributeError) as e:
self.module.fail_json(msg="Bad path '%s' for filename disk vmdk image: %s" % (vmdk_path, to_native(e)))
def find_vmdk_file(self, datastore_obj, vmdk_fullpath, vmdk_filename, vmdk_folder):
"""
Return vSphere file object or fail_json
Args:
datastore_obj: Managed object of datastore
vmdk_fullpath: Path of VMDK file e.g., path/to/vm/vmdk_filename.vmdk
vmdk_filename: Name of vmdk e.g., VM0001_1.vmdk
vmdk_folder: Base dir of VMDK e.g., path/to/vm
"""
browser = datastore_obj.browser
datastore_name = datastore_obj.name
datastore_name_sq = "[" + datastore_name + "]"
if browser is None:
self.module.fail_json(msg="Unable to access browser for datastore %s" % datastore_name)
detail_query = vim.host.DatastoreBrowser.FileInfo.Details(
fileOwner=True,
fileSize=True,
fileType=True,
modification=True
)
search_spec = vim.host.DatastoreBrowser.SearchSpec(
details=detail_query,
matchPattern=[vmdk_filename],
searchCaseInsensitive=True,
)
search_res = browser.SearchSubFolders(
datastorePath=datastore_name_sq,
searchSpec=search_spec
)
changed = False
vmdk_path = datastore_name_sq + " " + vmdk_fullpath
try:
changed, result = wait_for_task(search_res)
except TaskError as task_e:
self.module.fail_json(msg=to_native(task_e))
if not changed:
self.module.fail_json(msg="No valid disk vmdk image found for path %s" % vmdk_path)
target_folder_paths = [
datastore_name_sq + " " + vmdk_folder + '/',
datastore_name_sq + " " + vmdk_folder,
]
for file_result in search_res.info.result:
for f in getattr(file_result, 'file'):
if f.path == vmdk_filename and file_result.folderPath in target_folder_paths:
return f
self.module.fail_json(msg="No vmdk file found for path specified [%s]" % vmdk_path)
#
# Conversion to JSON
#
def _deepmerge(self, d, u):
"""
Deep merges u into d.
Credit:
https://bit.ly/2EDOs1B (stackoverflow question 3232943)
License:
cc-by-sa 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
Changes:
using collections_compat for compatibility
Args:
- d (dict): dict to merge into
- u (dict): dict to merge into d
Returns:
dict, with u merged into d
"""
for k, v in iteritems(u):
if isinstance(v, collections_compat.Mapping):
d[k] = self._deepmerge(d.get(k, {}), v)
else:
d[k] = v
return d
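# Illustrative sketch (not part of the original module): _deepmerge combines
# nested dicts recursively, e.g.
#   _deepmerge({'a': {'b': 1}}, {'a': {'c': 2}}) -> {'a': {'b': 1, 'c': 2}}
# whereas a plain dict.update() would replace the whole 'a' mapping.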
def _extract(self, data, remainder):
"""
This is used to break down dotted properties for extraction.
Args:
- data (dict): result of _jsonify on a property
- remainder: the remainder of the dotted property to select
Return:
dict
"""
result = dict()
if '.' not in remainder:
result[remainder] = data[remainder]
return result
key, remainder = remainder.split('.', 1)
result[key] = self._extract(data[key], remainder)
return result
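# Illustrative sketch (not part of the original module): for a dotted
# property such as "hardware.memoryMB", _extract walks the JSON dict one key
# at a time and returns only the selected branch, e.g.
#   _extract({'hardware': {'memoryMB': 1024, 'numCPU': 2}}, 'hardware.memoryMB')
#   -> {'hardware': {'memoryMB': 1024}}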
def _jsonify(self, obj):
"""
Convert an object from pyVmomi into JSON.
Args:
- obj (object): vim object
Return:
dict
"""
return json.loads(json.dumps(obj, cls=VmomiSupport.VmomiJSONEncoder,
sort_keys=True, strip_dynamic=True))
def to_json(self, obj, properties=None):
"""
Convert a vSphere (pyVmomi) Object into JSON. This is a deep
transformation. The list of properties is optional - if not
provided then all properties are deeply converted. The resulting
JSON is sorted to improve human readability.
Requires upstream support from pyVmomi > 6.7.1
(https://github.com/vmware/pyvmomi/pull/732)
Args:
- obj (object): vim object
- properties (list, optional): list of properties following
the property collector specification, for example:
["config.hardware.memoryMB", "name", "overallStatus"]
default is a complete object dump, which can be large
Return:
dict
"""
if not HAS_PYVMOMIJSON:
self.module.fail_json(msg='The installed version of pyvmomi lacks JSON output support; need pyvmomi>6.7.1')
result = dict()
if properties:
for prop in properties:
try:
if '.' in prop:
key, remainder = prop.split('.', 1)
tmp = dict()
tmp[key] = self._extract(self._jsonify(getattr(obj, key)), remainder)
self._deepmerge(result, tmp)
else:
result[prop] = self._jsonify(getattr(obj, prop))
# To match gather_vm_facts output
prop_name = prop
if prop.lower() == '_moid':
prop_name = 'moid'
elif prop.lower() == '_vimref':
prop_name = 'vimref'
result[prop_name] = result[prop]
except (AttributeError, KeyError):
self.module.fail_json(msg="Property '{0}' not found.".format(prop))
else:
result = self._jsonify(obj)
return result
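# Illustrative usage sketch (not part of the original module): assuming `vm`
# is a vim.VirtualMachine object, a trimmed JSON view can be requested with:
#
#     info = self.to_json(vm, properties=['name', 'config.hardware.memoryMB'])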
def get_folder_path(self, cur):
full_path = '/' + cur.name
while hasattr(cur, 'parent') and cur.parent:
if cur.parent == self.content.rootFolder:
break
cur = cur.parent
full_path = '/' + cur.name + full_path
return full_path
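# Illustrative sketch (not part of the original module): for a folder chain
# rootFolder -> dc1 -> vm -> web, get_folder_path(web_folder) climbs the
# parent links (stopping before rootFolder) and returns "/dc1/vm/web".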
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 60,968 |
seport: Doesn't support numerical ports
|
##### SUMMARY
If `seport` is given a numerical port, or a list of numerical ports, it fails to parse them. This is contrary to the documentation examples.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
seport
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.4
config file = /Users/dkimsey/.ansible.cfg
configured module search path = ['/Users/dkimsey/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/dkimsey/.virtualenvs/ansible-2.8/lib/python3.7/site-packages/ansible
executable location = /Users/dkimsey/.virtualenvs/ansible-2.8/bin/ansible
python version = 3.7.3 (default, May 1 2019, 10:48:04) [Clang 10.0.0 (clang-1000.11.45.5)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_NOCOWS(/Users/dkimsey/.ansible.cfg) = True
ANSIBLE_PIPELINING(/Users/dkimsey/.ansible.cfg) = True
ANSIBLE_SSH_ARGS(/Users/dkimsey/.ansible.cfg) = -C -o ControlMaster=auto -o ControlPersist=300s -o ConnectTimeout=30
DEFAULT_BECOME(/Users/dkimsey/.ansible.cfg) = True
DIFF_ALWAYS(/Users/dkimsey/.ansible.cfg) = True
RETRY_FILES_ENABLED(/Users/dkimsey/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
OS X, Python 3.7
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: all
tasks:
- name: Allow memcached to listen on tcp ports 10000 and 10112
seport:
ports:
- 10000
- 10112
proto: tcp
setype: memcache_port_t
state: present
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
It should parse & apply correctly.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
It throws an exception, `AttributeError: 'int' object has no attribute 'split'`
<!--- Paste verbatim command output between quotes -->
```paste below
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 114, in <module>\n File \"<stdin>\", line 106, in _ansiballz_main\n File \"<stdin>\", line 49, in invoke_module\n File \"/tmp/ansible_seport_payload_EvYc5a/__main__.py\", line 307, in <module>\n File \"/tmp/ansible_seport_payload_EvYc5a/__main__.py\", line 297, in main\n File \"/tmp/ansible_seport_payload_EvYc5a/__main__.py\", line 206, in semanage_port_add\n File \"/tmp/ansible_seport_payload_EvYc5a/__main__.py\", line 159, in semanage_port_get_type\nAttributeError: 'int' object has no attribute 'split'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
```
Traceback (most recent call last):
File "<stdin>", line 114, in <module>
File "<stdin>", line 106, in _ansiballz_main
File "<stdin>", line 49, in invoke_module
File "/tmp/ansible_seport_payload_EvYc5a/__main__.py", line 307, in <module>
File "/tmp/ansible_seport_payload_EvYc5a/__main__.py", line 297, in main
File "/tmp/ansible_seport_payload_EvYc5a/__main__.py", line 206, in semanage_port_add
File "/tmp/ansible_seport_payload_EvYc5a/__main__.py", line 159, in semanage_port_get_type
AttributeError: 'int' object has no attribute 'split'
```
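A workaround until the module coerces integers itself (an assumption based on the traceback, not a confirmed fix) is to quote the ports so YAML hands them over as strings:
```yaml
ports:
  - "10000"
  - "10112"
```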
|
https://github.com/ansible/ansible/issues/60968
|
https://github.com/ansible/ansible/pull/65134
|
4a54873023c80b826e8cbcda28c44ce36fc97606
|
570c82f068d58b1e6ad9d2611bf647c3c82e6db0
| 2019-08-20T18:09:44Z |
python
| 2019-12-12T08:45:33Z |
lib/ansible/modules/system/seport.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2014, Dan Keder <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: seport
short_description: Manages SELinux network port type definitions
description:
- Manages SELinux network port type definitions.
version_added: "2.0"
options:
ports:
description:
- Ports or port ranges.
- Can be a list (since 2.6) or comma separated string.
type: list
required: true
proto:
description:
- Protocol for the specified port.
type: str
required: true
choices: [ tcp, udp ]
setype:
description:
- SELinux type for the specified port.
type: str
required: true
state:
description:
- Desired state of the port type definition.
type: str
choices: [ absent, present ]
default: present
reload:
description:
- Reload SELinux policy after commit.
type: bool
default: yes
ignore_selinux_state:
description:
- Run independent of selinux runtime state
type: bool
default: no
version_added: '2.8'
notes:
- The changes are persistent across reboots.
- Not tested on any debian based system.
requirements:
- libselinux-python
- policycoreutils-python
author:
- Dan Keder (@dankeder)
'''
EXAMPLES = r'''
- name: Allow Apache to listen on tcp port 8888
seport:
ports: 8888
proto: tcp
setype: http_port_t
state: present
- name: Allow sshd to listen on tcp port 8991
seport:
ports: 8991
proto: tcp
setype: ssh_port_t
state: present
- name: Allow memcached to listen on tcp ports 10000-10100 and 10112
seport:
ports: 10000-10100,10112
proto: tcp
setype: memcache_port_t
state: present
- name: Allow memcached to listen on tcp ports 10000-10100 and 10112
seport:
ports:
- 10000-10100
- 10112
proto: tcp
setype: memcache_port_t
state: present
'''
import traceback
SELINUX_IMP_ERR = None
try:
import selinux
HAVE_SELINUX = True
except ImportError:
SELINUX_IMP_ERR = traceback.format_exc()
HAVE_SELINUX = False
SEOBJECT_IMP_ERR = None
try:
import seobject
HAVE_SEOBJECT = True
except ImportError:
SEOBJECT_IMP_ERR = traceback.format_exc()
HAVE_SEOBJECT = False
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils._text import to_native
def get_runtime_status(ignore_selinux_state=False):
return True if ignore_selinux_state is True else selinux.is_selinux_enabled()
def semanage_port_get_ports(seport, setype, proto):
""" Get the list of ports that have the specified type definition.
:param seport: Instance of seobject.portRecords
:type setype: str
:param setype: SELinux type.
:type proto: str
:param proto: Protocol ('tcp' or 'udp')
:rtype: list
:return: List of ports that have the specified SELinux type.
"""
records = seport.get_all_by_type()
if (setype, proto) in records:
return records[(setype, proto)]
else:
return []
def semanage_port_get_type(seport, port, proto):
""" Get the SELinux type of the specified port.
:param seport: Instance of seobject.portRecords
:type port: str
:param port: Port or port range (example: "8080", "8080-9090")
:type proto: str
:param proto: Protocol ('tcp' or 'udp')
:rtype: tuple
:return: Tuple containing the SELinux type and MLS/MCS level, or None if not found.
"""
ports = port.split('-', 1)
if len(ports) == 1:
ports.extend(ports)
key = (int(ports[0]), int(ports[1]), proto)
records = seport.get_all()
if key in records:
return records[key]
else:
return None
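# Illustrative sketch (not part of the original module): the port argument
# must be a string here; a raw int from YAML (e.g. `ports: [10000]`) has no
# .split() and raises AttributeError. A defensive coercion along these lines
# would avoid that (an assumption, not the actual upstream patch):
#
#     ports = str(port).split('-', 1)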
def semanage_port_add(module, ports, proto, setype, do_reload, serange='s0', sestore=''):
""" Add SELinux port type definition to the policy.
:type module: AnsibleModule
:param module: Ansible module
:type ports: list
:param ports: List of ports and port ranges to add (e.g. ["8080", "8080-9090"])
:type proto: str
:param proto: Protocol ('tcp' or 'udp')
:type setype: str
:param setype: SELinux type
:type do_reload: bool
:param do_reload: Whether to reload SELinux policy after commit
:type serange: str
:param serange: SELinux MLS/MCS range (defaults to 's0')
:type sestore: str
:param sestore: SELinux store
:rtype: bool
:return: True if the policy was changed, otherwise False
"""
try:
seport = seobject.portRecords(sestore)
seport.set_reload(do_reload)
change = False
ports_by_type = semanage_port_get_ports(seport, setype, proto)
for port in ports:
if port not in ports_by_type:
change = True
port_type = semanage_port_get_type(seport, port, proto)
if port_type is None and not module.check_mode:
seport.add(port, proto, serange, setype)
elif port_type is not None and not module.check_mode:
seport.modify(port, proto, serange, setype)
except (ValueError, IOError, KeyError, OSError, RuntimeError) as e:
module.fail_json(msg="%s: %s\n" % (e.__class__.__name__, to_native(e)), exception=traceback.format_exc())
return change
def semanage_port_del(module, ports, proto, setype, do_reload, sestore=''):
""" Delete SELinux port type definition from the policy.
:type module: AnsibleModule
:param module: Ansible module
:type ports: list
:param ports: List of ports and port ranges to delete (e.g. ["8080", "8080-9090"])
:type proto: str
:param proto: Protocol ('tcp' or 'udp')
:type setype: str
:param setype: SELinux type.
:type do_reload: bool
:param do_reload: Whether to reload SELinux policy after commit
:type sestore: str
:param sestore: SELinux store
:rtype: bool
:return: True if the policy was changed, otherwise False
"""
try:
seport = seobject.portRecords(sestore)
seport.set_reload(do_reload)
change = False
ports_by_type = semanage_port_get_ports(seport, setype, proto)
for port in ports:
if port in ports_by_type:
change = True
if not module.check_mode:
seport.delete(port, proto)
except (ValueError, IOError, KeyError, OSError, RuntimeError) as e:
module.fail_json(msg="%s: %s\n" % (e.__class__.__name__, to_native(e)), exception=traceback.format_exc())
return change
def main():
module = AnsibleModule(
argument_spec=dict(
ignore_selinux_state=dict(type='bool', default=False),
ports=dict(type='list', required=True),
proto=dict(type='str', required=True, choices=['tcp', 'udp']),
setype=dict(type='str', required=True),
state=dict(type='str', default='present', choices=['absent', 'present']),
reload=dict(type='bool', default=True),
),
supports_check_mode=True,
)
if not HAVE_SELINUX:
module.fail_json(msg=missing_required_lib("libselinux-python"), exception=SELINUX_IMP_ERR)
if not HAVE_SEOBJECT:
module.fail_json(msg=missing_required_lib("policycoreutils-python"), exception=SEOBJECT_IMP_ERR)
ignore_selinux_state = module.params['ignore_selinux_state']
if not get_runtime_status(ignore_selinux_state):
module.fail_json(msg="SELinux is disabled on this host.")
ports = module.params['ports']
proto = module.params['proto']
setype = module.params['setype']
state = module.params['state']
do_reload = module.params['reload']
result = {
'ports': ports,
'proto': proto,
'setype': setype,
'state': state,
}
if state == 'present':
result['changed'] = semanage_port_add(module, ports, proto, setype, do_reload)
elif state == 'absent':
result['changed'] = semanage_port_del(module, ports, proto, setype, do_reload)
else:
module.fail_json(msg='Invalid value of argument "state": {0}'.format(state))
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,757 |
Fix simple typo: workind -> working
|
# Issue Type
[x] Bug (Typo)
# Steps to Replicate
1. Examine lib/ansible/plugins/shell/__init__.py.
2. Search for `workind`.
# Expected Behaviour
1. Should read `working`.
|
https://github.com/ansible/ansible/issues/65757
|
https://github.com/ansible/ansible/pull/65758
|
570c82f068d58b1e6ad9d2611bf647c3c82e6db0
|
cbc513e74893e1f224b553bd9c68505e7f7bd883
| 2019-12-12T09:16:14Z |
python
| 2019-12-12T10:07:14Z |
lib/ansible/plugins/shell/__init__.py
|
# (c) 2016 RedHat
#
# This file is part of Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import os.path
import random
import re
import time
import ansible.constants as C
from ansible.errors import AnsibleError
from ansible.module_utils.six import text_type
from ansible.module_utils.six.moves import shlex_quote
from ansible.module_utils._text import to_native
from ansible.plugins import AnsiblePlugin
_USER_HOME_PATH_RE = re.compile(r'^~[_.A-Za-z0-9][-_.A-Za-z0-9]*$')
class ShellBase(AnsiblePlugin):
def __init__(self):
super(ShellBase, self).__init__()
self.env = {}
self.tmpdir = None
self.executable = None
def _normalize_system_tmpdirs(self):
# Normalize the tmp directory strings. We don't use expanduser/expandvars because those
# can vary between remote user and become user. Therefore the safest practice will be for
# this to always be specified as full paths)
normalized_paths = [d.rstrip('/') for d in self.get_option('system_tmpdirs')]
# Make sure all system_tmpdirs are absolute otherwise they'd be relative to the login dir
# which is almost certainly going to fail in a cornercase.
if not all(os.path.isabs(d) for d in normalized_paths):
raise AnsibleError('The configured system_tmpdirs contains a relative path: {0}. All'
' system_tmpdirs must be absolute'.format(to_native(normalized_paths)))
self.set_option('system_tmpdirs', normalized_paths)
def set_options(self, task_keys=None, var_options=None, direct=None):
super(ShellBase, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
# set env if needed, deal with environment's 'dual nature' list of dicts or dict
env = self.get_option('environment')
if isinstance(env, list):
for env_dict in env:
self.env.update(env_dict)
else:
self.env.update(env)
# We can remove the try: except in the future when we make ShellBase a proper subset of
# *all* shells. Right now powershell and third party shells which do not use the
# shell_common documentation fragment (and so do not have system_tmpdirs) will fail
try:
self._normalize_system_tmpdirs()
except KeyError:
pass
def env_prefix(self, **kwargs):
return ' '.join(['%s=%s' % (k, shlex_quote(text_type(v))) for k, v in kwargs.items()])
def join_path(self, *args):
return os.path.join(*args)
# some shells (eg, powershell) are snooty about filenames/extensions, this lets the shell plugin have a say
def get_remote_filename(self, pathname):
base_name = os.path.basename(pathname.strip())
return base_name.strip()
def path_has_trailing_slash(self, path):
return path.endswith('/')
def chmod(self, paths, mode):
cmd = ['chmod', mode]
cmd.extend(paths)
cmd = [shlex_quote(c) for c in cmd]
return ' '.join(cmd)
def chown(self, paths, user):
cmd = ['chown', user]
cmd.extend(paths)
cmd = [shlex_quote(c) for c in cmd]
return ' '.join(cmd)
def set_user_facl(self, paths, user, mode):
"""Only sets acls for users as that's really all we need"""
cmd = ['setfacl', '-m', 'u:%s:%s' % (user, mode)]
cmd.extend(paths)
cmd = [shlex_quote(c) for c in cmd]
return ' '.join(cmd)
def remove(self, path, recurse=False):
path = shlex_quote(path)
cmd = 'rm -f '
if recurse:
cmd += '-r '
return cmd + "%s %s" % (path, self._SHELL_REDIRECT_ALLNULL)
def exists(self, path):
cmd = ['test', '-e', shlex_quote(path)]
return ' '.join(cmd)
def mkdtemp(self, basefile=None, system=False, mode=0o700, tmpdir=None):
if not basefile:
basefile = 'ansible-tmp-%s-%s' % (time.time(), random.randint(0, 2**48))
# When system is specified we have to create this in a directory where
# other users can read and access the tmp directory.
# This is because we use system to create tmp dirs for unprivileged users who are
# sudo'ing to a second unprivileged user.
# The 'system_tmpdirs' setting defines directories we can use for this purpose,
# the defaults are /tmp and /var/tmp.
# So we only allow one of those locations if system=True, using the
# passed in tmpdir if it is valid or the first one from the setting if not.
if system:
tmpdir = tmpdir.rstrip('/') if tmpdir else None
if tmpdir in self.get_option('system_tmpdirs'):
basetmpdir = tmpdir
else:
basetmpdir = self.get_option('system_tmpdirs')[0]
else:
if tmpdir is None:
basetmpdir = self.get_option('remote_tmp')
else:
basetmpdir = tmpdir
basetmp = self.join_path(basetmpdir, basefile)
cmd = 'mkdir -p %s echo %s %s' % (self._SHELL_SUB_LEFT, basetmp, self._SHELL_SUB_RIGHT)
cmd += ' %s echo %s=%s echo %s %s' % (self._SHELL_AND, basefile, self._SHELL_SUB_LEFT, basetmp, self._SHELL_SUB_RIGHT)
# change the umask in a subshell to achieve the desired mode
# also for directories created with `mkdir -p`
if mode:
tmp_umask = 0o777 & ~mode
cmd = '%s umask %o %s %s %s' % (self._SHELL_GROUP_LEFT, tmp_umask, self._SHELL_AND, cmd, self._SHELL_GROUP_RIGHT)
return cmd
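# Illustrative sketch (not part of the original plugin): with mode=0o700 the
# subshell umask is 0o077, so for a POSIX-style shell the generated command
# is roughly
#   ( umask 77 && mkdir -p <SUB>echo /tmp/ansible-tmp-...</SUB> && echo ansible-tmp-...=<SUB>echo /tmp/ansible-tmp-...</SUB> )
# where <SUB>...</SUB> stands for the shell's command-substitution markers
# (_SHELL_SUB_LEFT/_SHELL_SUB_RIGHT).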
def expand_user(self, user_home_path, username=''):
''' Return a command to expand tildes in a path
It can be either "~" or "~username". We just ignore $HOME
We use the POSIX definition of a username:
http://pubs.opengroup.org/onlinepubs/000095399/basedefs/xbd_chap03.html#tag_03_426
http://pubs.opengroup.org/onlinepubs/000095399/basedefs/xbd_chap03.html#tag_03_276
Falls back to 'current workind directory' as we assume 'home is where the remote user ends up'
'''
# Check that the user_path to expand is safe
if user_home_path != '~':
if not _USER_HOME_PATH_RE.match(user_home_path):
# shlex_quote will make the shell return the string verbatim
user_home_path = shlex_quote(user_home_path)
elif username:
# if present the user name is appended to resolve "that user's home"
user_home_path += username
return 'echo %s' % user_home_path
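# Illustrative sketch (not part of the original plugin): expand_user('~')
# returns the command `echo ~`, which the remote shell expands to the login
# user's home; expand_user('~', username='alice') returns `echo ~alice`
# instead (user name made up for the example).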
def pwd(self):
"""Return the working directory after connecting"""
return 'echo %spwd%s' % (self._SHELL_SUB_LEFT, self._SHELL_SUB_RIGHT)
def build_module_command(self, env_string, shebang, cmd, arg_path=None):
# don't quote the cmd if it's an empty string, because this will break pipelining mode
if cmd.strip() != '':
cmd = shlex_quote(cmd)
cmd_parts = []
if shebang:
shebang = shebang.replace("#!", "").strip()
else:
shebang = ""
cmd_parts.extend([env_string.strip(), shebang, cmd])
if arg_path is not None:
cmd_parts.append(arg_path)
new_cmd = " ".join(cmd_parts)
return new_cmd
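# Illustrative sketch (not part of the original plugin): for a pipelined
# module run, build_module_command('LANG=C', '#!/usr/bin/python', '') yields
# roughly "LANG=C /usr/bin/python" -- the empty cmd stays unquoted so stdin
# pipelining keeps working.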
def append_command(self, cmd, cmd_to_append):
"""Append an additional command if supported by the shell"""
if self._SHELL_AND:
cmd += ' %s %s' % (self._SHELL_AND, cmd_to_append)
return cmd
def wrap_for_exec(self, cmd):
"""wrap script execution with any necessary decoration (eg '&' for quoted powershell script paths)"""
return cmd
def quote(self, cmd):
"""Returns a shell-escaped string that can be safely used as one token in a shell command line"""
return shlex_quote(cmd)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,727 |
mysql_info doesn't list empty DBs
|
##### SUMMARY
The `mysql_info` module doesn't show empty DBs. It uses the following query in the code:
```
MariaDB [(none)]> SELECT table_schema AS 'name', SUM(data_length + index_length) AS "size" FROM information_schema.TABLES GROUP BY table_schema;
+--------------------+-----------+
| name | size |
+--------------------+-----------+
| d106953_tm | 470728704 |
| information_schema | 212992 |
| mysql | 6038300 |
| performance_schema | 0 |
+--------------------+-----------+
4 rows in set (0,005 sec)
```
But `show databases` shows something else:
```
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| d106953_tm |
| information_schema |
| innodb |
| mysql |
| performance_schema |
| testovic |
| tm_web |
| ttttt |
+--------------------+
8 rows in set (0,001 sec)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`mysql_info` modules
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = /home/jiri/.ansible.cfg
configured module search path = ['/home/jiri/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jiri/stow/ansible/venv/lib/python3.6/site-packages/ansible
executable location = /home/jiri/stow/ansible/venv/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
```
##### OS / ENVIRONMENT
Amazon Linux 2
rh-mariadb103-3.3-5.el7.x86_64
##### STEPS TO REPRODUCE
* create an empty DB, e.g. via the `mysql_db` module
* query via `mysql_info` which databases were found
##### EXPECTED RESULTS
should list even empty DBs
##### ACTUAL RESULTS
it seems only populated DBs are listed
|
https://github.com/ansible/ansible/issues/65727
|
https://github.com/ansible/ansible/pull/65755
|
80333f9c4b4f79ffe0af995be4aaffaa36524f4e
|
0079b8eaa205dc72df84efbc069670aaaeeb5143
| 2019-12-11T12:21:31Z |
python
| 2019-12-12T13:10:52Z |
changelogs/fragments/65755-mysql_info_doesnt_list_empty_dbs.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,727 |
mysql_info doesn't list empty DBs
|
##### SUMMARY
The `mysql_info` module doesn't show empty DBs. It uses the following query in the code:
```
MariaDB [(none)]> SELECT table_schema AS 'name', SUM(data_length + index_length) AS "size" FROM information_schema.TABLES GROUP BY table_schema;
+--------------------+-----------+
| name | size |
+--------------------+-----------+
| d106953_tm | 470728704 |
| information_schema | 212992 |
| mysql | 6038300 |
| performance_schema | 0 |
+--------------------+-----------+
4 rows in set (0,005 sec)
```
But `show databases` shows something else:
```
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| d106953_tm |
| information_schema |
| innodb |
| mysql |
| performance_schema |
| testovic |
| tm_web |
| ttttt |
+--------------------+
8 rows in set (0,001 sec)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`mysql_info` modules
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = /home/jiri/.ansible.cfg
configured module search path = ['/home/jiri/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jiri/stow/ansible/venv/lib/python3.6/site-packages/ansible
executable location = /home/jiri/stow/ansible/venv/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
```
##### OS / ENVIRONMENT
Amazon Linux 2
rh-mariadb103-3.3-5.el7.x86_64
##### STEPS TO REPRODUCE
* create an empty DB, e.g. via the `mysql_db` module
* query via `mysql_info` which databases were found
##### EXPECTED RESULTS
should list even empty DBs
##### ACTUAL RESULTS
it seems only populated DBs are listed
|
https://github.com/ansible/ansible/issues/65727
|
https://github.com/ansible/ansible/pull/65755
|
80333f9c4b4f79ffe0af995be4aaffaa36524f4e
|
0079b8eaa205dc72df84efbc069670aaaeeb5143
| 2019-12-11T12:21:31Z |
python
| 2019-12-12T13:10:52Z |
lib/ansible/modules/database/mysql/mysql_info.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Andrew Klychkov (@Andersson007) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: mysql_info
short_description: Gather information about MySQL servers
description:
- Gathers information about MySQL servers.
version_added: '2.9'
options:
filter:
description:
- Limit the collected information by comma separated string or YAML list.
- Allowable values are C(version), C(databases), C(settings), C(global_status),
C(users), C(engines), C(master_status), C(slave_status), C(slave_hosts).
- By default, collects all subsets.
- You can use '!' before value (for example, C(!settings)) to exclude it from the information.
- If you pass including and excluding values to the filter, for example, I(filter=!settings,version),
the excluding values, C(!settings) in this case, will be ignored.
type: list
elements: str
login_db:
description:
- Database name to connect to.
- It makes sense if I(login_user) is allowed to connect to a specific database only.
type: str
exclude_fields:
description:
- List of fields which are not needed to collect.
- "Supports elements: C(db_size). Unsupported elements will be ignored"
type: list
elements: str
version_added: '2.10'
notes:
- Calculating the size of a database might be slow, depending on the number and size of tables in it.
To avoid this, use I(exclude_fields=db_size).
seealso:
- module: mysql_variables
- module: mysql_db
- module: mysql_user
- module: mysql_replication
author:
- Andrew Klychkov (@Andersson007)
extends_documentation_fragment: mysql
'''
EXAMPLES = r'''
# Display info from mysql-hosts group (using creds from ~/.my.cnf to connect):
# ansible mysql-hosts -m mysql_info
# Display only databases and users info:
# ansible mysql-hosts -m mysql_info -a 'filter=databases,users'
# Display only slave status:
# ansible standby -m mysql_info -a 'filter=slave_status'
# Display all info from databases group except settings:
# ansible databases -m mysql_info -a 'filter=!settings'
- name: Collect all possible information using passwordless root access
mysql_info:
login_user: root
- name: Get MySQL version with non-default credentials
mysql_info:
login_user: mysuperuser
login_password: mysuperpass
filter: version
- name: Collect all info except settings and users by root
mysql_info:
login_user: root
login_password: rootpass
filter: "!settings,!users"
- name: Collect info about databases and version using ~/.my.cnf as a credential file
become: yes
mysql_info:
filter:
- databases
- version
- name: Collect info about databases and version using ~alice/.my.cnf as a credential file
become: yes
mysql_info:
config_file: /home/alice/.my.cnf
filter:
- databases
- version
- name: Collect info about databases excluding their sizes
become: yes
mysql_info:
config_file: /home/alice/.my.cnf
filter:
- databases
exclude_fields: db_size
'''
RETURN = r'''
version:
description: Database server version.
returned: if not excluded by filter
type: dict
sample: { "version": { "major": 5, "minor": 5, "release": 60 } }
contains:
major:
description: Major server version.
returned: if not excluded by filter
type: int
sample: 5
minor:
description: Minor server version.
returned: if not excluded by filter
type: int
sample: 5
release:
description: Release server version.
returned: if not excluded by filter
type: int
sample: 60
databases:
description: Information about databases.
returned: if not excluded by filter
type: dict
sample:
- { "mysql": { "size": 656594 }, "information_schema": { "size": 73728 } }
contains:
size:
description: Database size in bytes.
returned: if not excluded by filter
type: dict
sample: { 'size': 656594 }
settings:
description: Global settings (variables) information.
returned: if not excluded by filter
type: dict
sample:
- { "innodb_open_files": 300, innodb_page_size": 16384 }
global_status:
description: Global status information.
returned: if not excluded by filter
type: dict
sample:
- { "Innodb_buffer_pool_read_requests": 123, "Innodb_buffer_pool_reads": 32 }
version_added: "2.10"
users:
description: Users information.
returned: if not excluded by filter
type: dict
sample:
- { "localhost": { "root": { "Alter_priv": "Y", "Alter_routine_priv": "Y" } } }
engines:
description: Information about the server's storage engines.
returned: if not excluded by filter
type: dict
sample:
- { "CSV": { "Comment": "CSV storage engine", "Savepoints": "NO", "Support": "YES", "Transactions": "NO", "XA": "NO" } }
master_status:
description: Master status information.
returned: if master
type: dict
sample:
- { "Binlog_Do_DB": "", "Binlog_Ignore_DB": "mysql", "File": "mysql-bin.000001", "Position": 769 }
slave_status:
description: Slave status information.
returned: if standby
type: dict
sample:
- { "192.168.1.101": { "3306": { "replication_user": { "Connect_Retry": 60, "Exec_Master_Log_Pos": 769, "Last_Errno": 0 } } } }
slave_hosts:
description: Slave status information.
returned: if master
type: dict
sample:
- { "2": { "Host": "", "Master_id": 1, "Port": 3306 } }
'''
from decimal import Decimal
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.mysql import (
mysql_connect,
mysql_common_argument_spec,
mysql_driver,
mysql_driver_fail_msg,
)
from ansible.module_utils.six import iteritems
from ansible.module_utils._text import to_native
# ===========================================
# MySQL module specific support methods.
#
class MySQL_Info(object):
"""Class for collection MySQL instance information.
Arguments:
module (AnsibleModule): Object of AnsibleModule class.
cursor (pymysql/mysql-python): Cursor class for interaction with
the database.
Note:
If you need to add a new subset:
1. add a new key with the same name to self.info attr in self.__init__()
2. add a new private method to get the information
3. add invocation of the new method to self.__collect()
4. add info about the new subset to the DOCUMENTATION block
5. add info about the new subset with an example to RETURN block
"""
def __init__(self, module, cursor):
self.module = module
self.cursor = cursor
self.info = {
'version': {},
'databases': {},
'settings': {},
'global_status': {},
'engines': {},
'users': {},
'master_status': {},
'slave_hosts': {},
'slave_status': {},
}
def get_info(self, filter_, exclude_fields):
"""Get MySQL instance information based on filter_.
Arguments:
filter_ (list): List of collected subsets (e.g., databases, users, etc.),
when it is empty, return all available information.
"""
self.__collect(exclude_fields)
inc_list = []
exc_list = []
if filter_:
partial_info = {}
for fi in filter_:
if fi.lstrip('!') not in self.info:
self.module.warn('filter element: %s is not allowable, ignored' % fi)
continue
if fi[0] == '!':
exc_list.append(fi.lstrip('!'))
else:
inc_list.append(fi)
if inc_list:
for i in self.info:
if i in inc_list:
partial_info[i] = self.info[i]
else:
for i in self.info:
if i not in exc_list:
partial_info[i] = self.info[i]
return partial_info
else:
return self.info
def __collect(self, exclude_fields):
"""Collect all possible subsets."""
self.__get_databases(exclude_fields)
self.__get_global_variables()
self.__get_global_status()
self.__get_engines()
self.__get_users()
self.__get_master_status()
self.__get_slave_status()
self.__get_slaves()
def __get_engines(self):
"""Get storage engines info."""
res = self.__exec_sql('SHOW ENGINES')
if res:
for line in res:
engine = line['Engine']
self.info['engines'][engine] = {}
for vname, val in iteritems(line):
if vname != 'Engine':
self.info['engines'][engine][vname] = val
def __convert(self, val):
"""Convert unserializable data."""
try:
if isinstance(val, Decimal):
val = float(val)
else:
val = int(val)
except ValueError:
pass
except TypeError:
pass
return val
def __get_global_variables(self):
"""Get global variables (instance settings)."""
res = self.__exec_sql('SHOW GLOBAL VARIABLES')
if res:
for var in res:
self.info['settings'][var['Variable_name']] = self.__convert(var['Value'])
ver = self.info['settings']['version'].split('.')
release = ver[2].split('-')[0]
self.info['version'] = dict(
major=int(ver[0]),
minor=int(ver[1]),
release=int(release),
)
def __get_global_status(self):
"""Get global status."""
res = self.__exec_sql('SHOW GLOBAL STATUS')
if res:
for var in res:
self.info['global_status'][var['Variable_name']] = self.__convert(var['Value'])
def __get_master_status(self):
"""Get master status if the instance is a master."""
res = self.__exec_sql('SHOW MASTER STATUS')
if res:
for line in res:
for vname, val in iteritems(line):
self.info['master_status'][vname] = self.__convert(val)
def __get_slave_status(self):
"""Get slave status if the instance is a slave."""
res = self.__exec_sql('SHOW SLAVE STATUS')
if res:
for line in res:
host = line['Master_Host']
if host not in self.info['slave_status']:
self.info['slave_status'][host] = {}
port = line['Master_Port']
if port not in self.info['slave_status'][host]:
self.info['slave_status'][host][port] = {}
user = line['Master_User']
if user not in self.info['slave_status'][host][port]:
self.info['slave_status'][host][port][user] = {}
for vname, val in iteritems(line):
if vname not in ('Master_Host', 'Master_Port', 'Master_User'):
self.info['slave_status'][host][port][user][vname] = self.__convert(val)
def __get_slaves(self):
"""Get slave hosts info if the instance is a master."""
res = self.__exec_sql('SHOW SLAVE HOSTS')
if res:
for line in res:
srv_id = line['Server_id']
if srv_id not in self.info['slave_hosts']:
self.info['slave_hosts'][srv_id] = {}
for vname, val in iteritems(line):
if vname != 'Server_id':
self.info['slave_hosts'][srv_id][vname] = self.__convert(val)
def __get_users(self):
"""Get user info."""
res = self.__exec_sql('SELECT * FROM mysql.user')
if res:
for line in res:
host = line['Host']
if host not in self.info['users']:
self.info['users'][host] = {}
user = line['User']
self.info['users'][host][user] = {}
for vname, val in iteritems(line):
if vname not in ('Host', 'User'):
self.info['users'][host][user][vname] = self.__convert(val)
def __get_databases(self, exclude_fields):
"""Get info about databases."""
if exclude_fields and 'db_size' in exclude_fields:
query = ('SELECT table_schema AS "name" '
'FROM information_schema.TABLES GROUP BY table_schema')
else:
query = ('SELECT table_schema AS "name", '
'SUM(data_length + index_length) AS "size" '
'FROM information_schema.TABLES GROUP BY table_schema')
res = self.__exec_sql(query)
if res:
for db in res:
self.info['databases'][db['name']] = {}
if not exclude_fields or 'db_size' not in exclude_fields:
self.info['databases'][db['name']]['size'] = int(db['size'])
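# Illustrative sketch (not part of the original module): the GROUP BY over
# information_schema.TABLES only returns schemas that contain tables, which
# is why empty databases go missing. A query against
# information_schema.SCHEMATA would list them as well (an assumption, not
# the actual upstream patch), e.g.:
#
#     SELECT schema_name AS "name" FROM information_schema.SCHEMATA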
def __exec_sql(self, query, ddl=False):
"""Execute SQL.
Arguments:
ddl (bool): If True, return True or False.
Used for queries that don't return any rows
(mainly for DDL queries) (default False).
"""
try:
self.cursor.execute(query)
if not ddl:
res = self.cursor.fetchall()
return res
return True
except Exception as e:
self.module.fail_json(msg="Cannot execute SQL '%s': %s" % (query, to_native(e)))
return False
# ===========================================
# Module execution.
#
def main():
argument_spec = mysql_common_argument_spec()
argument_spec.update(
login_db=dict(type='str'),
filter=dict(type='list'),
exclude_fields=dict(type='list'),
)
# check_mode is supported trivially
# because the module doesn't change anything
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
db = module.params['login_db']
connect_timeout = module.params['connect_timeout']
login_user = module.params['login_user']
login_password = module.params['login_password']
ssl_cert = module.params['client_cert']
ssl_key = module.params['client_key']
ssl_ca = module.params['ca_cert']
config_file = module.params['config_file']
filter_ = module.params['filter']
exclude_fields = module.params['exclude_fields']
if filter_:
filter_ = [f.strip() for f in filter_]
if exclude_fields:
exclude_fields = set([f.strip() for f in exclude_fields])
if mysql_driver is None:
module.fail_json(msg=mysql_driver_fail_msg)
try:
cursor = mysql_connect(module, login_user, login_password,
config_file, ssl_cert, ssl_key, ssl_ca, db,
connect_timeout=connect_timeout, cursor_class='DictCursor')
except Exception as e:
module.fail_json(msg="unable to connect to database, check login_user and login_password are correct or %s has the credentials. "
"Exception message: %s" % (config_file, to_native(e)))
###############################
# Create object and do main job
mysql = MySQL_Info(module, cursor)
module.exit_json(changed=False, **mysql.get_info(filter_, exclude_fields))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,727 |
mysql_info doesn't list empty DBs
|
##### SUMMARY
The `mysql_info` module doesn't show empty DBs. It uses the following query in the code:
```
MariaDB [(none)]> SELECT table_schema AS 'name', SUM(data_length + index_length) AS "size" FROM information_schema.TABLES GROUP BY table_schema;
+--------------------+-----------+
| name | size |
+--------------------+-----------+
| d106953_tm | 470728704 |
| information_schema | 212992 |
| mysql | 6038300 |
| performance_schema | 0 |
+--------------------+-----------+
4 rows in set (0,005 sec)
```
But `show databases` shows something else:
```
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| d106953_tm |
| information_schema |
| innodb |
| mysql |
| performance_schema |
| testovic |
| tm_web |
| ttttt |
+--------------------+
8 rows in set (0,001 sec)
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`mysql_info` modules
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = /home/jiri/.ansible.cfg
configured module search path = ['/home/jiri/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jiri/stow/ansible/venv/lib/python3.6/site-packages/ansible
executable location = /home/jiri/stow/ansible/venv/bin/ansible
python version = 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
```
##### OS / ENVIRONMENT
Amazon Linux 2
rh-mariadb103-3.3-5.el7.x86_64
##### STEPS TO REPRODUCE
* create an empty DB, e.g. via the `mysql_db` module
* query via `mysql_info` which databases were found
##### EXPECTED RESULTS
should list even empty DBs
##### ACTUAL RESULTS
it seems only populated DBs are listed
|
https://github.com/ansible/ansible/issues/65727
|
https://github.com/ansible/ansible/pull/65755
|
80333f9c4b4f79ffe0af995be4aaffaa36524f4e
|
0079b8eaa205dc72df84efbc069670aaaeeb5143
| 2019-12-11T12:21:31Z |
python
| 2019-12-12T13:10:52Z |
test/integration/targets/mysql_info/tasks/main.yml
|
# Test code for mysql_info module
# Copyright: (c) 2019, Andrew Klychkov (@Andersson007) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
###################
# Prepare for tests
#
# Create role for tests
- name: mysql_info - create mysql user {{ user_name }}
mysql_user:
name: '{{ user_name }}'
password: '{{ user_pass }}'
state: present
priv: '*.*:ALL'
login_unix_socket: '{{ mysql_socket }}'
# Create default MySQL config file with credentials
- name: mysql_info - create default config file
template:
src: my.cnf.j2
dest: '/root/.my.cnf'
mode: 0400
# Create non-default MySQL config file with credentials
- name: mysql_info - create non-default config file
template:
src: my.cnf.j2
dest: '/root/non-default_my.cnf'
mode: 0400
###############
# Do tests
# Access by default cred file
- name: mysql_info - collect default cred file
mysql_info:
login_user: '{{ user_name }}'
register: result
- assert:
that:
- result.changed == false
- result.version != {}
- result.settings != {}
- result.global_status != {}
- result.databases != {}
- result.engines != {}
- result.users != {}
# Access by non-default cred file
- name: mysql_info - check non-default cred file
mysql_info:
login_user: '{{ user_name }}'
config_file: '/root/non-default_my.cnf'
register: result
- assert:
that:
- result.changed == false
- result.version != {}
# Remove cred files
- name: mysql_info - remove cred files
file:
path: '{{ item }}'
state: absent
with_items:
- '/root/.my.cnf'
- '/root/non-default_my.cnf'
# Access with password
- name: mysql_info - check access with password
mysql_info:
login_user: '{{ user_name }}'
login_password: '{{ user_pass }}'
register: result
- assert:
that:
- result.changed == false
- result.version != {}
# Test excluding
- name: Collect all info except settings and users
mysql_info:
login_user: '{{ user_name }}'
login_password: '{{ user_pass }}'
filter: "!settings,!users"
register: result
- assert:
that:
- result.changed == false
- result.version != {}
- result.global_status != {}
- result.databases != {}
- result.engines != {}
- result.settings is not defined
- result.users is not defined
# Test including
- name: Collect info only about version and databases
mysql_info:
login_user: '{{ user_name }}'
login_password: '{{ user_pass }}'
filter:
- version
- databases
register: result
- assert:
that:
- result.changed == false
- result.version != {}
- result.databases != {}
- result.engines is not defined
- result.settings is not defined
- result.global_status is not defined
- result.users is not defined
# Test exclude_fields: db_size
# 'unsupported' element is passed to check that an unsupported value
# won't break anything (it will be ignored according to the module's documentation).
- name: Collect info about databases excluding their sizes
mysql_info:
login_user: '{{ user_name }}'
login_password: '{{ user_pass }}'
filter:
- databases
exclude_fields:
- db_size
- unsupported
register: result
- assert:
that:
- result.changed == false
- result.databases != {}
- result.databases.mysql == {}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,304 |
zabbix_host fails: zabbix_host.py .. KeyError: 'inventory_mode'
|
##### SUMMARY
The module zabbix_host [fails](https://travis-ci.org/robertdebock/ansible-role-zabbix_web/jobs/617545189#L653) when given the following item:
```
ansible_loop_var": "item", "changed": false, "item": {"description": "Example server 1 description", "groups": ["Linux servers"], "interface_dns": "server1.example.com", "interface_ip": "192.168.127.127", "link_templates": ["Template OS Linux by Zabbix agent"], "name": "Example server 1", "visible_name": "Example server 1 name"
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
zabbix_host
##### ANSIBLE VERSION
```
$ ansible --version
ansible 2.9.1
config file = None
configured module search path = ['/home/robertdb/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:16:48) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
```
$ ansible-config dump --only-changed
# NO OUTPUT
```
##### OS / ENVIRONMENT
Debian
Centos-7
Centos-8
Ubuntu
##### STEPS TO REPRODUCE
Git clone my [zabbix_web Ansible role](https://github.com/robertdebock/ansible-role-zabbix_web) and run `molecule test`
```
git clone https://github.com/robertdebock/ansible-role-zabbix_web.git
cd ansible-role-zabbix_web
molecule test
```
##### EXPECTED RESULTS
I was not expecting an error. The host is actually added to Zabbix by the way.
##### ACTUAL RESULTS
```
TASK [ansible-role-zabbix_web : add zabbix hosts] ******************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'inventory_mode'
failed: [zabbix_web-centos-latest] (item=Example server 1) => {"ansible_loop_var": "item", "changed": false, "item": {"description": "Example server 1 description", "groups": ["Linux servers"], "interface_dns": "server1.example.com", "interface_ip": "192.168.127.127", "link_templates": ["Template OS Linux by Zabbix agent"], "name": "Example server 1", "visible_name": "Example server 1 name"}, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1574832740.9065554-150815383322675/AnsiballZ_zabbix_host.py\", line 102, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1574832740.9065554-150815383322675/AnsiballZ_zabbix_host.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1574832740.9065554-150815383322675/AnsiballZ_zabbix_host.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.monitoring.zabbix.zabbix_host', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_zabbix_host_payload_xl5geyrr/ansible_zabbix_host_payload.zip/ansible/modules/monitoring/zabbix/zabbix_host.py\", line 902, in <module>\n File \"/tmp/ansible_zabbix_host_payload_xl5geyrr/ansible_zabbix_host_payload.zip/ansible/modules/monitoring/zabbix/zabbix_host.py\", line 862, in main\n File \"/tmp/ansible_zabbix_host_payload_xl5geyrr/ansible_zabbix_host_payload.zip/ansible/modules/monitoring/zabbix/zabbix_host.py\", line 546, in check_all_properties\nKeyError: 'inventory_mode'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
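The traceback points at a direct `['inventory_mode']` lookup inside `check_all_properties`; a defensive read along these lines would return `None` instead of raising when the API response omits the key (a sketch of the idea, not the actual upstream patch):
```python
# hypothetical guard inside check_all_properties
inventory_mode = host.get('inventory_mode')
```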
|
https://github.com/ansible/ansible/issues/65304
|
https://github.com/ansible/ansible/pull/65392
|
06d997b2b2c3034dd5d567127df54b93f8ee0f34
|
7b2cfdacd00ddf907247270d228a6bf5f72258a1
| 2019-11-27T05:46:43Z |
python
| 2019-12-16T08:02:11Z |
changelogs/fragments/65304-fix_zabbix_host_inventory_mode_key_error.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,304 |
zabbix_host fails: zabbix_host.py .. KeyError: 'inventory_mode'
|
##### SUMMARY
The module zabbix_host [fails](https://travis-ci.org/robertdebock/ansible-role-zabbix_web/jobs/617545189#L653) when given the following item:
```
ansible_loop_var": "item", "changed": false, "item": {"description": "Example server 1 description", "groups": ["Linux servers"], "interface_dns": "server1.example.com", "interface_ip": "192.168.127.127", "link_templates": ["Template OS Linux by Zabbix agent"], "name": "Example server 1", "visible_name": "Example server 1 name"
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
zabbix_host
##### ANSIBLE VERSION
```
$ ansible --version
ansible 2.9.1
config file = None
configured module search path = ['/home/robertdb/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:16:48) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
```
$ ansible-config dump --only-changed
# NO OUTPUT
```
##### OS / ENVIRONMENT
Debian
Centos-7
Centos-8
Ubuntu
##### STEPS TO REPRODUCE
Git clone my [zabbix_web Ansible role](https://github.com/robertdebock/ansible-role-zabbix_web) and run `molecule test`
```
git clone https://github.com/robertdebock/ansible-role-zabbix_web.git
cd ansible-role-zabbix_web
molecule test
```
##### EXPECTED RESULTS
I was not expecting an error. The host is actually added to Zabbix by the way.
##### ACTUAL RESULTS
```
TASK [ansible-role-zabbix_web : add zabbix hosts] ******************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'inventory_mode'
failed: [zabbix_web-centos-latest] (item=Example server 1) => {"ansible_loop_var": "item", "changed": false, "item": {"description": "Example server 1 description", "groups": ["Linux servers"], "interface_dns": "server1.example.com", "interface_ip": "192.168.127.127", "link_templates": ["Template OS Linux by Zabbix agent"], "name": "Example server 1", "visible_name": "Example server 1 name"}, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1574832740.9065554-150815383322675/AnsiballZ_zabbix_host.py\", line 102, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1574832740.9065554-150815383322675/AnsiballZ_zabbix_host.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1574832740.9065554-150815383322675/AnsiballZ_zabbix_host.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.monitoring.zabbix.zabbix_host', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_zabbix_host_payload_xl5geyrr/ansible_zabbix_host_payload.zip/ansible/modules/monitoring/zabbix/zabbix_host.py\", line 902, in <module>\n File \"/tmp/ansible_zabbix_host_payload_xl5geyrr/ansible_zabbix_host_payload.zip/ansible/modules/monitoring/zabbix/zabbix_host.py\", line 862, in main\n File \"/tmp/ansible_zabbix_host_payload_xl5geyrr/ansible_zabbix_host_payload.zip/ansible/modules/monitoring/zabbix/zabbix_host.py\", line 546, in check_all_properties\nKeyError: 'inventory_mode'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
|
https://github.com/ansible/ansible/issues/65304
|
https://github.com/ansible/ansible/pull/65392
|
06d997b2b2c3034dd5d567127df54b93f8ee0f34
|
7b2cfdacd00ddf907247270d228a6bf5f72258a1
| 2019-11-27T05:46:43Z |
python
| 2019-12-16T08:02:11Z |
lib/ansible/modules/monitoring/zabbix/zabbix_host.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2013-2014, Epic Games, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: zabbix_host
short_description: Create/update/delete Zabbix hosts
description:
- This module allows you to create, modify and delete Zabbix host entries and associated group and template data.
version_added: "2.0"
author:
- "Cove (@cove)"
- Tony Minfei Ding (!UNKNOWN)
- Harrison Gu (@harrisongu)
- Werner Dijkerman (@dj-wasabi)
- Eike Frost (@eikef)
requirements:
- "python >= 2.6"
- "zabbix-api >= 0.5.4"
options:
host_name:
description:
- Name of the host in Zabbix.
- I(host_name) is the unique identifier used and cannot be updated using this module.
required: true
visible_name:
description:
- Visible name of the host in Zabbix.
version_added: '2.3'
description:
description:
- Description of the host in Zabbix.
version_added: '2.5'
host_groups:
description:
- List of host groups the host is part of.
link_templates:
description:
- List of templates linked to the host.
inventory_mode:
description:
- Configure the inventory mode.
choices: ['automatic', 'manual', 'disabled']
version_added: '2.1'
inventory_zabbix:
description:
            - Add facts for a Zabbix inventory (e.g. Tag) (see example below).
- Please review the interface documentation for more information on the supported properties
- U(https://www.zabbix.com/documentation/3.2/manual/api/reference/host/object#host_inventory)
version_added: '2.5'
status:
description:
- Monitoring status of the host.
choices: ['enabled', 'disabled']
default: 'enabled'
state:
description:
- State of the host.
            - On C(present), it will create the host if it does not exist or update it if the associated data differs.
            - On C(absent), it will remove the host if it exists.
choices: ['present', 'absent']
default: 'present'
proxy:
description:
- The name of the Zabbix proxy to be used.
interfaces:
type: list
description:
- List of interfaces to be created for the host (see example below).
- For more information, review host interface documentation at
- U(https://www.zabbix.com/documentation/4.0/manual/api/reference/hostinterface/object)
suboptions:
type:
description:
- Interface type to add
- Numerical values are also accepted for interface type
- 1 = agent
- 2 = snmp
- 3 = ipmi
- 4 = jmx
choices: ['agent', 'snmp', 'ipmi', 'jmx']
required: true
main:
type: int
description:
- Whether the interface is used as default.
- If multiple interfaces with the same type are provided, only one can be default.
- 0 (not default), 1 (default)
default: 0
choices: [0, 1]
useip:
type: int
description:
- Connect to host interface with IP address instead of DNS name.
- 0 (don't use ip), 1 (use ip)
default: 0
choices: [0, 1]
ip:
type: str
description:
- IP address used by host interface.
- Required if I(useip=1).
default: ''
dns:
type: str
description:
- DNS name of the host interface.
- Required if I(useip=0).
default: ''
port:
type: str
description:
- Port used by host interface.
                    - If not specified, the default port for each interface type is used
- 10050 if I(type='agent')
- 161 if I(type='snmp')
- 623 if I(type='ipmi')
- 12345 if I(type='jmx')
bulk:
type: int
description:
- Whether to use bulk SNMP requests.
- 0 (don't use bulk requests), 1 (use bulk requests)
choices: [0, 1]
default: 1
default: []
tls_connect:
description:
- Specifies what encryption to use for outgoing connections.
- Possible values, 1 (no encryption), 2 (PSK), 4 (certificate).
- Works only with >= Zabbix 3.0
default: 1
version_added: '2.5'
tls_accept:
description:
- Specifies what types of connections are allowed for incoming connections.
- The tls_accept parameter accepts values of 1 to 7
- Possible values, 1 (no encryption), 2 (PSK), 4 (certificate).
- Values can be combined.
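            - For example, a value of C(5) allows both unencrypted (1) and certificate-based (4) incoming connections.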
- Works only with >= Zabbix 3.0
default: 1
version_added: '2.5'
tls_psk_identity:
description:
            - A unique name by which this specific PSK is referred to by Zabbix components.
            - Do not put sensitive information in the PSK identity string; it is transmitted over the network unencrypted.
- Works only with >= Zabbix 3.0
version_added: '2.5'
tls_psk:
description:
            - The PSK value is a hard-to-guess string of hexadecimal digits.
- The preshared key, at least 32 hex digits. Required if either I(tls_connect) or I(tls_accept) has PSK enabled.
- Works only with >= Zabbix 3.0
version_added: '2.5'
ca_cert:
description:
- Required certificate issuer.
- Works only with >= Zabbix 3.0
version_added: '2.5'
aliases: [ tls_issuer ]
tls_subject:
description:
- Required certificate subject.
- Works only with >= Zabbix 3.0
version_added: '2.5'
ipmi_authtype:
description:
- IPMI authentication algorithm.
- Please review the Host object documentation for more information on the supported properties
- 'https://www.zabbix.com/documentation/3.4/manual/api/reference/host/object'
- Possible values are, C(0) (none), C(1) (MD2), C(2) (MD5), C(4) (straight), C(5) (OEM), C(6) (RMCP+),
with -1 being the API default.
- Please note that the Zabbix API will treat absent settings as default when updating
any of the I(ipmi_)-options; this means that if you attempt to set any of the four
options individually, the rest will be reset to default values.
version_added: '2.5'
ipmi_privilege:
description:
- IPMI privilege level.
- Please review the Host object documentation for more information on the supported properties
- 'https://www.zabbix.com/documentation/3.4/manual/api/reference/host/object'
- Possible values are C(1) (callback), C(2) (user), C(3) (operator), C(4) (admin), C(5) (OEM), with C(2)
being the API default.
- also see the last note in the I(ipmi_authtype) documentation
version_added: '2.5'
ipmi_username:
description:
- IPMI username.
- also see the last note in the I(ipmi_authtype) documentation
version_added: '2.5'
ipmi_password:
description:
- IPMI password.
- also see the last note in the I(ipmi_authtype) documentation
version_added: '2.5'
force:
description:
- Overwrite the host configuration, even if already present.
type: bool
default: 'yes'
version_added: '2.0'
extends_documentation_fragment:
- zabbix
'''
EXAMPLES = '''
- name: Create a new host or update an existing host's info
local_action:
module: zabbix_host
server_url: http://monitor.example.com
login_user: username
login_password: password
host_name: ExampleHost
visible_name: ExampleName
description: My ExampleHost Description
host_groups:
- Example group1
- Example group2
link_templates:
- Example template1
- Example template2
status: enabled
state: present
inventory_mode: manual
inventory_zabbix:
tag: "{{ your_tag }}"
alias: "{{ your_alias }}"
notes: "Special Informations: {{ your_informations | default('None') }}"
location: "{{ your_location }}"
site_rack: "{{ your_site_rack }}"
os: "{{ your_os }}"
hardware: "{{ your_hardware }}"
ipmi_authtype: 2
ipmi_privilege: 4
ipmi_username: username
ipmi_password: password
interfaces:
- type: 1
main: 1
useip: 1
ip: 10.xx.xx.xx
dns: ""
port: "10050"
- type: 4
main: 1
useip: 1
ip: 10.xx.xx.xx
dns: ""
port: "12345"
proxy: a.zabbix.proxy
- name: Update an existing host's TLS settings
local_action:
module: zabbix_host
server_url: http://monitor.example.com
login_user: username
login_password: password
host_name: ExampleHost
visible_name: ExampleName
host_groups:
- Example group1
tls_psk_identity: test
tls_connect: 2
tls_psk: 123456789abcdef123456789abcdef12
'''
import atexit
import copy
import traceback
try:
from zabbix_api import ZabbixAPI
HAS_ZABBIX_API = True
except ImportError:
ZBX_IMP_ERR = traceback.format_exc()
HAS_ZABBIX_API = False
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
class Host(object):
def __init__(self, module, zbx):
self._module = module
self._zapi = zbx
# exist host
def is_host_exist(self, host_name):
result = self._zapi.host.get({'filter': {'host': host_name}})
return result
# check if host group exists
def check_host_group_exist(self, group_names):
for group_name in group_names:
result = self._zapi.hostgroup.get({'filter': {'name': group_name}})
if not result:
self._module.fail_json(msg="Hostgroup not found: %s" % group_name)
return True
def get_template_ids(self, template_list):
template_ids = []
if template_list is None or len(template_list) == 0:
return template_ids
        for template in template_list:
            # use a separate name so the list being iterated is not shadowed
            template_result = self._zapi.template.get({'output': 'extend', 'filter': {'host': template}})
            if len(template_result) < 1:
                self._module.fail_json(msg="Template not found: %s" % template)
            else:
                template_id = template_result[0]['templateid']
                template_ids.append(template_id)
return template_ids
def add_host(self, host_name, group_ids, status, interfaces, proxy_id, visible_name, description, tls_connect,
tls_accept, tls_psk_identity, tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege,
ipmi_username, ipmi_password):
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
parameters = {'host': host_name, 'interfaces': interfaces, 'groups': group_ids, 'status': status,
'tls_connect': tls_connect, 'tls_accept': tls_accept}
if proxy_id:
parameters['proxy_hostid'] = proxy_id
if visible_name:
parameters['name'] = visible_name
if tls_psk_identity is not None:
parameters['tls_psk_identity'] = tls_psk_identity
if tls_psk is not None:
parameters['tls_psk'] = tls_psk
if tls_issuer is not None:
parameters['tls_issuer'] = tls_issuer
if tls_subject is not None:
parameters['tls_subject'] = tls_subject
if description:
parameters['description'] = description
if ipmi_authtype is not None:
parameters['ipmi_authtype'] = ipmi_authtype
if ipmi_privilege is not None:
parameters['ipmi_privilege'] = ipmi_privilege
if ipmi_username is not None:
parameters['ipmi_username'] = ipmi_username
if ipmi_password is not None:
parameters['ipmi_password'] = ipmi_password
host_list = self._zapi.host.create(parameters)
if len(host_list) >= 1:
return host_list['hostids'][0]
except Exception as e:
self._module.fail_json(msg="Failed to create host %s: %s" % (host_name, e))
def update_host(self, host_name, group_ids, status, host_id, interfaces, exist_interface_list, proxy_id,
visible_name, description, tls_connect, tls_accept, tls_psk_identity, tls_psk, tls_issuer, tls_subject, ipmi_authtype,
ipmi_privilege, ipmi_username, ipmi_password):
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
parameters = {'hostid': host_id, 'groups': group_ids, 'status': status, 'tls_connect': tls_connect,
'tls_accept': tls_accept}
if proxy_id >= 0:
parameters['proxy_hostid'] = proxy_id
if visible_name:
parameters['name'] = visible_name
if tls_psk_identity:
parameters['tls_psk_identity'] = tls_psk_identity
if tls_psk:
parameters['tls_psk'] = tls_psk
if tls_issuer:
parameters['tls_issuer'] = tls_issuer
if tls_subject:
parameters['tls_subject'] = tls_subject
if description:
parameters['description'] = description
if ipmi_authtype:
parameters['ipmi_authtype'] = ipmi_authtype
if ipmi_privilege:
parameters['ipmi_privilege'] = ipmi_privilege
if ipmi_username:
parameters['ipmi_username'] = ipmi_username
if ipmi_password:
parameters['ipmi_password'] = ipmi_password
self._zapi.host.update(parameters)
interface_list_copy = exist_interface_list
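            # note: despite its name, this aliases exist_interface_list, so
            # interfaces matched below are consumed from the shared list as the
            # loop progresses (relevant when several given interfaces share a type)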
if interfaces:
for interface in interfaces:
flag = False
interface_str = interface
for exist_interface in exist_interface_list:
interface_type = int(interface['type'])
exist_interface_type = int(exist_interface['type'])
if interface_type == exist_interface_type:
# update
interface_str['interfaceid'] = exist_interface['interfaceid']
self._zapi.hostinterface.update(interface_str)
flag = True
interface_list_copy.remove(exist_interface)
break
if not flag:
# add
interface_str['hostid'] = host_id
self._zapi.hostinterface.create(interface_str)
# remove
remove_interface_ids = []
for remove_interface in interface_list_copy:
interface_id = remove_interface['interfaceid']
remove_interface_ids.append(interface_id)
if len(remove_interface_ids) > 0:
self._zapi.hostinterface.delete(remove_interface_ids)
except Exception as e:
self._module.fail_json(msg="Failed to update host %s: %s" % (host_name, e))
def delete_host(self, host_id, host_name):
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.host.delete([host_id])
except Exception as e:
self._module.fail_json(msg="Failed to delete host %s: %s" % (host_name, e))
# get host by host name
def get_host_by_host_name(self, host_name):
host_list = self._zapi.host.get({'output': 'extend', 'selectInventory': 'extend', 'filter': {'host': [host_name]}})
if len(host_list) < 1:
self._module.fail_json(msg="Host not found: %s" % host_name)
else:
return host_list[0]
# get proxyid by proxy name
def get_proxyid_by_proxy_name(self, proxy_name):
proxy_list = self._zapi.proxy.get({'output': 'extend', 'filter': {'host': [proxy_name]}})
if len(proxy_list) < 1:
self._module.fail_json(msg="Proxy not found: %s" % proxy_name)
else:
return int(proxy_list[0]['proxyid'])
# get group ids by group names
def get_group_ids_by_group_names(self, group_names):
group_ids = []
if self.check_host_group_exist(group_names):
group_list = self._zapi.hostgroup.get({'output': 'extend', 'filter': {'name': group_names}})
for group in group_list:
group_id = group['groupid']
group_ids.append({'groupid': group_id})
return group_ids
# get host templates by host id
def get_host_templates_by_host_id(self, host_id):
template_ids = []
template_list = self._zapi.template.get({'output': 'extend', 'hostids': host_id})
for template in template_list:
template_ids.append(template['templateid'])
return template_ids
# get host groups by host id
def get_host_groups_by_host_id(self, host_id):
exist_host_groups = []
host_groups_list = self._zapi.hostgroup.get({'output': 'extend', 'hostids': host_id})
if len(host_groups_list) >= 1:
for host_groups_name in host_groups_list:
exist_host_groups.append(host_groups_name['name'])
return exist_host_groups
    # check whether the existing interfaces match the given interfaces
def check_interface_properties(self, exist_interface_list, interfaces):
interfaces_port_list = []
if interfaces is not None:
if len(interfaces) >= 1:
for interface in interfaces:
interfaces_port_list.append(str(interface['port']))
exist_interface_ports = []
if len(exist_interface_list) >= 1:
for exist_interface in exist_interface_list:
exist_interface_ports.append(str(exist_interface['port']))
if set(interfaces_port_list) != set(exist_interface_ports):
return True
for exist_interface in exist_interface_list:
            exist_interface_port = str(exist_interface['port'])
for interface in interfaces:
interface_port = str(interface['port'])
                if interface_port == exist_interface_port:
for key in interface.keys():
if str(exist_interface[key]) != str(interface[key]):
return True
return False
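    # Illustrative example (editor's note): with
    #   exist_interface_list = [{'type': '1', 'port': '10050', 'useip': '1'}]
    #   interfaces = [{'type': 1, 'port': '10050', 'useip': 0}]
    # the port sets match, but the per-key string comparison flags
    # useip ('1' != '0') and the method returns True, i.e. an update is needed.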
    # get the status of a host object
def get_host_status_by_host(self, host):
return host['status']
    # check all properties before linking or clearing templates
def check_all_properties(self, host_id, host_groups, status, interfaces, template_ids,
exist_interfaces, host, proxy_id, visible_name, description, host_name,
inventory_mode, inventory_zabbix, tls_accept, tls_psk_identity, tls_psk,
tls_issuer, tls_subject, tls_connect, ipmi_authtype, ipmi_privilege,
ipmi_username, ipmi_password):
# get the existing host's groups
exist_host_groups = self.get_host_groups_by_host_id(host_id)
if set(host_groups) != set(exist_host_groups):
return True
# get the existing status
exist_status = self.get_host_status_by_host(host)
if int(status) != int(exist_status):
return True
        # check whether the existing interfaces match the given interfaces
if self.check_interface_properties(exist_interfaces, interfaces):
return True
# get the existing templates
exist_template_ids = self.get_host_templates_by_host_id(host_id)
if set(list(template_ids)) != set(exist_template_ids):
return True
if int(host['proxy_hostid']) != int(proxy_id):
return True
# Check whether the visible_name has changed; Zabbix defaults to the technical hostname if not set.
if visible_name:
if host['name'] != visible_name:
return True
# Only compare description if it is given as a module parameter
if description:
if host['description'] != description:
return True
if inventory_mode:
if host['inventory']:
if int(host['inventory']['inventory_mode']) != self.inventory_mode_numeric(inventory_mode):
return True
elif inventory_mode != 'disabled':
return True
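        # Editor's note: the 'inventory_mode' lookup above is the source of the
        # KeyError reported in the issue; newer Zabbix API versions no longer
        # return 'inventory_mode' inside the 'inventory' object. A minimal
        # defensive sketch (an assumption, not the shipped fix) that also
        # consults the host object itself:
        #
        #     api_inventory_mode = host['inventory'].get(
        #         'inventory_mode', host.get('inventory_mode'))
        #     if api_inventory_mode is not None and \
        #             int(api_inventory_mode) != self.inventory_mode_numeric(inventory_mode):
        #         return True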
if inventory_zabbix:
proposed_inventory = copy.deepcopy(host['inventory'])
proposed_inventory.update(inventory_zabbix)
if proposed_inventory != host['inventory']:
return True
if tls_accept is not None and 'tls_accept' in host:
if int(host['tls_accept']) != tls_accept:
return True
if tls_psk_identity is not None and 'tls_psk_identity' in host:
if host['tls_psk_identity'] != tls_psk_identity:
return True
if tls_psk is not None and 'tls_psk' in host:
if host['tls_psk'] != tls_psk:
return True
if tls_issuer is not None and 'tls_issuer' in host:
if host['tls_issuer'] != tls_issuer:
return True
if tls_subject is not None and 'tls_subject' in host:
if host['tls_subject'] != tls_subject:
return True
if tls_connect is not None and 'tls_connect' in host:
if int(host['tls_connect']) != tls_connect:
return True
if ipmi_authtype is not None:
if int(host['ipmi_authtype']) != ipmi_authtype:
return True
if ipmi_privilege is not None:
if int(host['ipmi_privilege']) != ipmi_privilege:
return True
if ipmi_username is not None:
if host['ipmi_username'] != ipmi_username:
return True
if ipmi_password is not None:
if host['ipmi_password'] != ipmi_password:
return True
return False
# link or clear template of the host
def link_or_clear_template(self, host_id, template_id_list, tls_connect, tls_accept, tls_psk_identity, tls_psk,
tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege, ipmi_username, ipmi_password):
# get host's exist template ids
exist_template_id_list = self.get_host_templates_by_host_id(host_id)
exist_template_ids = set(exist_template_id_list)
template_ids = set(template_id_list)
template_id_list = list(template_ids)
# get unlink and clear templates
templates_clear = exist_template_ids.difference(template_ids)
templates_clear_list = list(templates_clear)
request_str = {'hostid': host_id, 'templates': template_id_list, 'templates_clear': templates_clear_list,
'tls_connect': tls_connect, 'tls_accept': tls_accept, 'ipmi_authtype': ipmi_authtype,
'ipmi_privilege': ipmi_privilege, 'ipmi_username': ipmi_username, 'ipmi_password': ipmi_password}
if tls_psk_identity is not None:
request_str['tls_psk_identity'] = tls_psk_identity
if tls_psk is not None:
request_str['tls_psk'] = tls_psk
if tls_issuer is not None:
request_str['tls_issuer'] = tls_issuer
if tls_subject is not None:
request_str['tls_subject'] = tls_subject
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.host.update(request_str)
except Exception as e:
self._module.fail_json(msg="Failed to link template to host: %s" % e)
    def inventory_mode_numeric(self, inventory_mode):
        if inventory_mode == "automatic":
            return 1
        elif inventory_mode == "manual":
            return 0
        elif inventory_mode == "disabled":
            return -1
        return inventory_mode
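    # e.g. inventory_mode_numeric('automatic') -> 1, 'manual' -> 0,
    # 'disabled' -> -1; any other value is passed through unchanged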
# Update the host inventory_mode
def update_inventory_mode(self, host_id, inventory_mode):
# nothing was set, do nothing
if not inventory_mode:
return
inventory_mode = self.inventory_mode_numeric(inventory_mode)
# watch for - https://support.zabbix.com/browse/ZBX-6033
request_str = {'hostid': host_id, 'inventory_mode': inventory_mode}
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.host.update(request_str)
except Exception as e:
self._module.fail_json(msg="Failed to set inventory_mode to host: %s" % e)
def update_inventory_zabbix(self, host_id, inventory):
if not inventory:
return
request_str = {'hostid': host_id, 'inventory': inventory}
try:
if self._module.check_mode:
self._module.exit_json(changed=True)
self._zapi.host.update(request_str)
except Exception as e:
self._module.fail_json(msg="Failed to set inventory to host: %s" % e)
def main():
module = AnsibleModule(
argument_spec=dict(
server_url=dict(type='str', required=True, aliases=['url']),
login_user=dict(type='str', required=True),
login_password=dict(type='str', required=True, no_log=True),
host_name=dict(type='str', required=True),
http_login_user=dict(type='str', required=False, default=None),
http_login_password=dict(type='str', required=False, default=None, no_log=True),
validate_certs=dict(type='bool', required=False, default=True),
host_groups=dict(type='list', required=False),
link_templates=dict(type='list', required=False),
status=dict(default="enabled", choices=['enabled', 'disabled']),
state=dict(default="present", choices=['present', 'absent']),
inventory_mode=dict(required=False, choices=['automatic', 'manual', 'disabled']),
ipmi_authtype=dict(type='int', default=None),
ipmi_privilege=dict(type='int', default=None),
ipmi_username=dict(type='str', required=False, default=None),
ipmi_password=dict(type='str', required=False, default=None, no_log=True),
tls_connect=dict(type='int', default=1),
tls_accept=dict(type='int', default=1),
tls_psk_identity=dict(type='str', required=False),
tls_psk=dict(type='str', required=False),
ca_cert=dict(type='str', required=False, aliases=['tls_issuer']),
tls_subject=dict(type='str', required=False),
inventory_zabbix=dict(required=False, type='dict'),
timeout=dict(type='int', default=10),
interfaces=dict(type='list', required=False),
force=dict(type='bool', default=True),
proxy=dict(type='str', required=False),
visible_name=dict(type='str', required=False),
description=dict(type='str', required=False)
),
supports_check_mode=True
)
if not HAS_ZABBIX_API:
module.fail_json(msg=missing_required_lib('zabbix-api', url='https://pypi.org/project/zabbix-api/'), exception=ZBX_IMP_ERR)
server_url = module.params['server_url']
login_user = module.params['login_user']
login_password = module.params['login_password']
http_login_user = module.params['http_login_user']
http_login_password = module.params['http_login_password']
validate_certs = module.params['validate_certs']
host_name = module.params['host_name']
visible_name = module.params['visible_name']
description = module.params['description']
host_groups = module.params['host_groups']
link_templates = module.params['link_templates']
inventory_mode = module.params['inventory_mode']
ipmi_authtype = module.params['ipmi_authtype']
ipmi_privilege = module.params['ipmi_privilege']
ipmi_username = module.params['ipmi_username']
ipmi_password = module.params['ipmi_password']
tls_connect = module.params['tls_connect']
tls_accept = module.params['tls_accept']
tls_psk_identity = module.params['tls_psk_identity']
tls_psk = module.params['tls_psk']
tls_issuer = module.params['ca_cert']
tls_subject = module.params['tls_subject']
inventory_zabbix = module.params['inventory_zabbix']
status = module.params['status']
state = module.params['state']
timeout = module.params['timeout']
interfaces = module.params['interfaces']
force = module.params['force']
proxy = module.params['proxy']
# convert enabled to 0; disabled to 1
status = 1 if status == "disabled" else 0
zbx = None
# login to zabbix
try:
zbx = ZabbixAPI(server_url, timeout=timeout, user=http_login_user, passwd=http_login_password,
validate_certs=validate_certs)
zbx.login(login_user, login_password)
atexit.register(zbx.logout)
except Exception as e:
module.fail_json(msg="Failed to connect to Zabbix server: %s" % e)
host = Host(module, zbx)
template_ids = []
if link_templates:
template_ids = host.get_template_ids(link_templates)
group_ids = []
if host_groups:
group_ids = host.get_group_ids_by_group_names(host_groups)
ip = ""
if interfaces:
# ensure interfaces are well-formed
for interface in interfaces:
if 'type' not in interface:
module.fail_json(msg="(interface) type needs to be specified for interface '%s'." % interface)
interfacetypes = {'agent': 1, 'snmp': 2, 'ipmi': 3, 'jmx': 4}
if interface['type'] in interfacetypes.keys():
interface['type'] = interfacetypes[interface['type']]
if interface['type'] < 1 or interface['type'] > 4:
module.fail_json(msg="Interface type can only be 1-4 for interface '%s'." % interface)
if 'useip' not in interface:
interface['useip'] = 0
if 'dns' not in interface:
if interface['useip'] == 0:
module.fail_json(msg="dns needs to be set if useip is 0 on interface '%s'." % interface)
interface['dns'] = ''
if 'ip' not in interface:
if interface['useip'] == 1:
module.fail_json(msg="ip needs to be set if useip is 1 on interface '%s'." % interface)
interface['ip'] = ''
if 'main' not in interface:
interface['main'] = 0
if 'port' in interface and not isinstance(interface['port'], str):
try:
interface['port'] = str(interface['port'])
except ValueError:
module.fail_json(msg="port should be convertable to string on interface '%s'." % interface)
if 'port' not in interface:
if interface['type'] == 1:
interface['port'] = "10050"
elif interface['type'] == 2:
interface['port'] = "161"
elif interface['type'] == 3:
interface['port'] = "623"
elif interface['type'] == 4:
interface['port'] = "12345"
if interface['type'] == 1:
ip = interface['ip']
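    # Illustrative example (editor's note): an input item such as
    #   {'type': 'agent', 'useip': 1, 'ip': '10.0.0.1'}
    # leaves the loop above normalized to
    #   {'type': 1, 'useip': 1, 'ip': '10.0.0.1', 'dns': '', 'main': 0, 'port': '10050'}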
# Use proxy specified, or set to 0
if proxy:
proxy_id = host.get_proxyid_by_proxy_name(proxy)
else:
proxy_id = 0
    # check if host exists
is_host_exist = host.is_host_exist(host_name)
if is_host_exist:
# get host id by host name
zabbix_host_obj = host.get_host_by_host_name(host_name)
host_id = zabbix_host_obj['hostid']
# If proxy is not specified as a module parameter, use the existing setting
if proxy is None:
proxy_id = int(zabbix_host_obj['proxy_hostid'])
if state == "absent":
# remove host
host.delete_host(host_id, host_name)
module.exit_json(changed=True, result="Successfully delete host %s" % host_name)
else:
if not host_groups:
# if host_groups have not been specified when updating an existing host, just
# get the group_ids from the existing host without updating them.
host_groups = host.get_host_groups_by_host_id(host_id)
group_ids = host.get_group_ids_by_group_names(host_groups)
# get existing host's interfaces
exist_interfaces = host._zapi.hostinterface.get({'output': 'extend', 'hostids': host_id})
# if no interfaces were specified with the module, start with an empty list
if not interfaces:
interfaces = []
            # When force=no is specified, merge existing interfaces into the
            # interfaces to update. When no interfaces have been specified,
            # reuse the existing interfaces as returned by the API.
            # Do the same for templates and host groups.
if not force or not interfaces:
for interface in copy.deepcopy(exist_interfaces):
# remove values not used during hostinterface.add/update calls
for key in tuple(interface.keys()):
if key in ['interfaceid', 'hostid', 'bulk']:
interface.pop(key, None)
for index in interface.keys():
if index in ['useip', 'main', 'type']:
interface[index] = int(interface[index])
if interface not in interfaces:
interfaces.append(interface)
if not force or link_templates is None:
template_ids = list(set(template_ids + host.get_host_templates_by_host_id(host_id)))
if not force:
for group_id in host.get_group_ids_by_group_names(host.get_host_groups_by_host_id(host_id)):
if group_id not in group_ids:
group_ids.append(group_id)
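            # Illustrative example (editor's note): with force=no and
            # host_groups=['Virtual machines'] on a host currently in
            # ['Linux servers'], group_ids ends up containing both groups;
            # force=no only ever adds interfaces, templates and groups,
            # it never removes them.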
# update host
if host.check_all_properties(host_id, host_groups, status, interfaces, template_ids,
exist_interfaces, zabbix_host_obj, proxy_id, visible_name,
description, host_name, inventory_mode, inventory_zabbix,
tls_accept, tls_psk_identity, tls_psk, tls_issuer, tls_subject, tls_connect,
ipmi_authtype, ipmi_privilege, ipmi_username, ipmi_password):
host.update_host(host_name, group_ids, status, host_id,
interfaces, exist_interfaces, proxy_id, visible_name, description, tls_connect, tls_accept,
tls_psk_identity, tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege, ipmi_username, ipmi_password)
host.link_or_clear_template(host_id, template_ids, tls_connect, tls_accept, tls_psk_identity,
tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege,
ipmi_username, ipmi_password)
host.update_inventory_mode(host_id, inventory_mode)
host.update_inventory_zabbix(host_id, inventory_zabbix)
module.exit_json(changed=True,
result="Successfully update host %s (%s) and linked with template '%s'"
% (host_name, ip, link_templates))
else:
module.exit_json(changed=False)
else:
if state == "absent":
# the host is already deleted.
module.exit_json(changed=False)
if not group_ids:
module.fail_json(msg="Specify at least one group for creating host '%s'." % host_name)
        if not interfaces:
module.fail_json(msg="Specify at least one interface for creating host '%s'." % host_name)
# create host
host_id = host.add_host(host_name, group_ids, status, interfaces, proxy_id, visible_name, description, tls_connect,
tls_accept, tls_psk_identity, tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege,
ipmi_username, ipmi_password)
host.link_or_clear_template(host_id, template_ids, tls_connect, tls_accept, tls_psk_identity,
tls_psk, tls_issuer, tls_subject, ipmi_authtype, ipmi_privilege, ipmi_username, ipmi_password)
host.update_inventory_mode(host_id, inventory_mode)
host.update_inventory_zabbix(host_id, inventory_zabbix)
module.exit_json(changed=True, result="Successfully added host %s (%s) and linked with template '%s'" % (
host_name, ip, link_templates))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,304 |
zabbix_host fails: zabbix_host.py .. KeyError: 'inventory_mode'
|
##### SUMMARY
The module zabbix_host [fails](https://travis-ci.org/robertdebock/ansible-role-zabbix_web/jobs/617545189#L653) with the following item:
```
ansible_loop_var": "item", "changed": false, "item": {"description": "Example server 1 description", "groups": ["Linux servers"], "interface_dns": "server1.example.com", "interface_ip": "192.168.127.127", "link_templates": ["Template OS Linux by Zabbix agent"], "name": "Example server 1", "visible_name": "Example server 1 name"
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
zabbix_host
##### ANSIBLE VERSION
```
$ ansible --version
ansible 2.9.1
config file = None
configured module search path = ['/home/robertdb/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.5 (default, Oct 17 2019, 12:16:48) [GCC 9.2.1 20190827 (Red Hat 9.2.1-1)]
```
##### CONFIGURATION
```
$ ansible-config dump --only-changed
# NO OUTPUT
```
##### OS / ENVIRONMENT
Debian
Centos-7
Centos-8
Ubuntu
##### STEPS TO REPRODUCE
Git clone my [zabbix_web Ansible role](https://github.com/robertdebock/ansible-role-zabbix_web) and run `molecule test`
```
git clone https://github.com/robertdebock/ansible-role-zabbix_web.git
cd ansible-role-zabbix_web
molecule test
```
##### EXPECTED RESULTS
I was not expecting an error. Note that the host is in fact added to Zabbix despite the failure.
##### ACTUAL RESULTS
```
TASK [ansible-role-zabbix_web : add zabbix hosts] ******************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'inventory_mode'
failed: [zabbix_web-centos-latest] (item=Example server 1) => {"ansible_loop_var": "item", "changed": false, "item": {"description": "Example server 1 description", "groups": ["Linux servers"], "interface_dns": "server1.example.com", "interface_ip": "192.168.127.127", "link_templates": ["Template OS Linux by Zabbix agent"], "name": "Example server 1", "visible_name": "Example server 1 name"}, "module_stderr": "Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-tmp-1574832740.9065554-150815383322675/AnsiballZ_zabbix_host.py\", line 102, in <module>\n _ansiballz_main()\n File \"/root/.ansible/tmp/ansible-tmp-1574832740.9065554-150815383322675/AnsiballZ_zabbix_host.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/root/.ansible/tmp/ansible-tmp-1574832740.9065554-150815383322675/AnsiballZ_zabbix_host.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible.modules.monitoring.zabbix.zabbix_host', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_zabbix_host_payload_xl5geyrr/ansible_zabbix_host_payload.zip/ansible/modules/monitoring/zabbix/zabbix_host.py\", line 902, in <module>\n File \"/tmp/ansible_zabbix_host_payload_xl5geyrr/ansible_zabbix_host_payload.zip/ansible/modules/monitoring/zabbix/zabbix_host.py\", line 862, in main\n File \"/tmp/ansible_zabbix_host_payload_xl5geyrr/ansible_zabbix_host_payload.zip/ansible/modules/monitoring/zabbix/zabbix_host.py\", line 546, in check_all_properties\nKeyError: 'inventory_mode'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
|
https://github.com/ansible/ansible/issues/65304
|
https://github.com/ansible/ansible/pull/65392
|
06d997b2b2c3034dd5d567127df54b93f8ee0f34
|
7b2cfdacd00ddf907247270d228a6bf5f72258a1
| 2019-11-27T05:46:43Z |
python
| 2019-12-16T08:02:11Z |
test/integration/targets/zabbix_host/tasks/zabbix_host_tests.yml
|
---
- name: "test: create host with many options set"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
visible_name: ExampleName
description: My ExampleHost Description
host_groups:
- Linux servers
- Zabbix servers
link_templates:
- Template App IMAP Service
- Template App NTP Service
status: enabled
state: present
inventory_mode: manual
inventory_zabbix:
tag: test-tag
alias: test-alias
notes: "Special Informations: test-info"
location: test-location
site_rack: test-rack
os: test-os
hardware: test-hw
interfaces:
- type: 1
main: 1
useip: 1
ip: 10.1.1.1
dns: ""
port: "10050"
- type: 1
main: 0
useip: 1
ip: 10.1.1.1
dns: ""
port: "{$MACRO}"
- type: 4
main: 1
useip: 1
ip: 10.1.1.1
dns: ""
port: "12345"
proxy: ExampleProxy
tls_psk_identity: test
tls_connect: 2
tls_psk: 123456789abcdef123456789abcdef12
register: zabbix_host1
- name: expect to succeed and that things changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: try to create the same host with the same settings"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
visible_name: ExampleName
description: My ExampleHost Description
host_groups:
- Linux servers
- Zabbix servers
link_templates:
- Template App IMAP Service
- Template App NTP Service
status: enabled
state: present
inventory_mode: manual
inventory_zabbix:
tag: test-tag
alias: test-alias
notes: "Special Informations: test-info"
location: test-location
site_rack: test-rack
os: test-os
hardware: test-hw
interfaces:
- type: 1
main: 1
useip: 1
ip: 10.1.1.1
dns: ""
port: "10050"
- type: 1
main: 0
useip: 1
ip: 10.1.1.1
dns: ""
port: "{$MACRO}"
- type: 4
main: 1
useip: 1
ip: 10.1.1.1
dns: ""
port: "12345"
proxy: ExampleProxy
tls_psk_identity: test
tls_connect: 2
tls_psk: 123456789abcdef123456789abcdef12
register: zabbix_host1
- name: updating with same values should be idempotent
assert:
that:
- "not zabbix_host1 is changed"
- name: "test: try to create the same host with the same settings and force false"
zabbix_host:
force: false
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
visible_name: ExampleName
description: My ExampleHost Description
host_groups:
- Linux servers
- Zabbix servers
link_templates:
- Template App IMAP Service
- Template App NTP Service
status: enabled
state: present
inventory_mode: manual
inventory_zabbix:
tag: test-tag
alias: test-alias
notes: "Special Informations: test-info"
location: test-location
site_rack: test-rack
os: test-os
hardware: test-hw
interfaces:
- type: 1
main: 1
useip: 1
ip: 10.1.1.1
dns: ""
port: "10050"
- type: 1
main: 0
useip: 1
ip: 10.1.1.1
dns: ""
port: "{$MACRO}"
- type: 4
main: 1
useip: 1
ip: 10.1.1.1
dns: ""
port: "12345"
proxy: ExampleProxy
tls_psk_identity: test
tls_connect: 2
tls_psk: 123456789abcdef123456789abcdef12
register: zabbix_host1
- name: updating with same values and force false should be idempotent
assert:
that:
- "not zabbix_host1 is changed"
- name: "test: try to create the same host changing one parameter in the inventory with force false"
zabbix_host:
force: false
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
visible_name: ExampleName
description: My ExampleHost Description
host_groups:
- Linux servers
- Zabbix servers
link_templates:
- Template App IMAP Service
- Template App NTP Service
status: enabled
state: present
inventory_mode: manual
inventory_zabbix:
tag: test-tag
alias: test-alias
notes: "Special Informations: test-info"
location: test-location
site_rack: test-rack
os: test-os
hardware: test-hw-modified
interfaces:
- type: 1
main: 1
useip: 1
ip: 10.1.1.1
dns: ""
port: "10050"
- type: 1
main: 0
useip: 1
ip: 10.1.1.1
dns: ""
port: "{$MACRO}"
- type: 4
main: 1
useip: 1
ip: 10.1.1.1
dns: ""
port: "12345"
proxy: ExampleProxy
tls_psk_identity: test
tls_connect: 2
tls_psk: 123456789abcdef123456789abcdef12
register: zabbix_host1
- name: changing the value of an already defined inventory should work and mark task as changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: change visible_name"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
visible_name: "ExampleName Changed"
register: zabbix_host1
- name: expect to succeed and that things changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: change visible_name (again)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
visible_name: "ExampleName Changed"
register: zabbix_host1
- name: updating with same values should be idempotent
assert:
that:
- "not zabbix_host1 is changed"
- name: "test: change description"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
description: "My ExampleHost Description Changed"
register: zabbix_host1
- name: expect to succeed and that things changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: change description (again)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
description: "My ExampleHost Description Changed"
register: zabbix_host1
- name: updating with same values should be idempotent
assert:
that:
- "not zabbix_host1 is changed"
- name: "test: change host groups (adding one group)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
host_groups:
- Linux servers
- Zabbix servers
- Virtual machines
register: zabbix_host1
- name: expect to succeed and that things changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: change host groups (remove one group)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
host_groups:
- Linux servers
- Zabbix servers
register: zabbix_host1
- name: expect to succeed and that things changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: change host groups (add one group using force=no)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
host_groups:
- Virtual machines
force: no
register: zabbix_host1
- name: expect to succeed and that things changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: change host groups (check whether we are at three groups)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
host_groups:
- Linux servers
- Zabbix servers
- Virtual machines
register: zabbix_host1
- name: expect to succeed and that things have not changed
assert:
that:
- "not zabbix_host1 is changed"
- name: "test: change host groups (attempt to remove all host groups)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
host_groups:
-
register: zabbix_host1
ignore_errors: yes
- name: expect to fail
assert:
that:
- "zabbix_host1 is failed"
- name: "test: change host linked templates (same as before)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
link_templates:
- Template App IMAP Service
- Template App NTP Service
register: zabbix_host1
- name: expect to succeed and that things have not changed
assert:
that:
- "not zabbix_host1 is changed"
- name: "test: change host linked templates (add one template)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
link_templates:
- Template App IMAP Service
- Template App NTP Service
- Template App HTTP Service
register: zabbix_host1
- name: expect to succeed and that things have changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: change host linked templates (add one template, using force=no)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
link_templates:
- Template App LDAP Service
force: no
register: zabbix_host1
- name: expect to succeed and that things have changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: change host linked templates (make sure we are at 4 templates)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
link_templates:
- Template App IMAP Service
- Template App NTP Service
- Template App HTTP Service
- Template App LDAP Service
register: zabbix_host1
- name: expect to succeed and that things have not changed
assert:
that:
- "not zabbix_host1 is changed"
- name: "test: change host linked templates (remove all templates)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
link_templates:
-
register: zabbix_host1
- name: expect to succeed and that things have changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: change host linked templates (check we have no templates left)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
link_templates:
-
register: zabbix_host1
- name: expect to succeed and that things have not changed
assert:
that:
- "not zabbix_host1 is changed"
- name: "test: change host status"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
status: disabled
register: zabbix_host1
- name: expect to succeed and that things have changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: change host status (again)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
status: disabled
register: zabbix_host1
- name: expect to succeed and that things have not changed
assert:
that:
- "not zabbix_host1 is changed"
- name: "test: change host inventory mode"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
inventory_mode: automatic
register: zabbix_host1
- name: expect to succeed and that things have changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: change host inventory mode"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
inventory_mode: automatic
register: zabbix_host1
- name: expect to succeed and that things have not changed
assert:
that:
- "not zabbix_host1 is changed"
- name: "test: change host inventory data (one field)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
inventory_zabbix:
tag: test-tag-two
alias: test-alias
notes: "Special Informations: test-info"
location: test-location
site_rack: test-rack
os: test-os
hardware: test-hw
register: zabbix_host1
- name: expect to succeed and that things have changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: change host inventory data (again)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
inventory_zabbix:
tag: test-tag-two
alias: test-alias
notes: "Special Informations: test-info"
location: test-location
site_rack: test-rack
os: test-os
hardware: test-hw
register: zabbix_host1
- name: expect to succeed and that things have not changed
assert:
that:
- "not zabbix_host1 is changed"
- name: "test: remove host proxy"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
proxy: ''
register: zabbix_host1
- name: expect to succeed and that things have changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: add host proxy"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
proxy: ExampleProxy
register: zabbix_host1
- name: expect to succeed and that things have changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: add host proxy (again)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
proxy: ExampleProxy
register: zabbix_host1
- name: expect to succeed and that things have not changed
assert:
that:
- "not zabbix_host1 is changed"
- name: "test: change tls settings"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
tls_psk_identity: test2
tls_connect: 4
tls_accept: 7
tls_psk: 123456789abcdef123456789abcdef13
tls_issuer: AcmeCorp
tls_subject: AcmeCorpServer
register: zabbix_host1
- name: expect to succeed and that things have changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: change tls settings (again)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
tls_psk_identity: test2
tls_connect: 4
tls_accept: 7
tls_psk: 123456789abcdef123456789abcdef13
tls_issuer: AcmeCorp
tls_subject: AcmeCorpServer
register: zabbix_host1
- name: expect to succeed and that things have not changed
assert:
that:
- "not zabbix_host1 is changed"
- name: "test: change interface settings (remove one)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
interfaces:
- type: 1
main: 1
useip: 1
ip: 10.1.1.1
dns: ""
port: "10050"
register: zabbix_host1
- name: expect to succeed and that things have changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: change interface settings (again)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
interfaces:
- type: 1
main: 1
useip: 1
ip: 10.1.1.1
dns: ""
port: "10050"
register: zabbix_host1
- name: expect to succeed and that things have not changed
assert:
that:
- "not zabbix_host1 is changed"
- name: "test: change interface settings (add one interface using force=no)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
interfaces:
- type: 4
main: 1
useip: 1
ip: 10.1.1.1
dns: ""
port: "12345"
force: no
register: zabbix_host1
- name: expect to succeed and that things have changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: change interface settings (verify that we are at two interfaces)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
interfaces:
- type: 1
main: 1
useip: 1
ip: 10.1.1.1
dns: ""
port: "10050"
- type: 4
main: 1
useip: 1
ip: 10.1.1.1
dns: ""
port: "12345"
register: zabbix_host1
- name: expect to succeed and that things have not changed
assert:
that:
- "not zabbix_host1 is changed"
- name: "test: add IPMI settings"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
ipmi_authtype: 2
ipmi_privilege: 4
ipmi_username: username
ipmi_password: password
register: zabbix_host1
- name: expect to succeed and that things have changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: add IPMI settings again"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
ipmi_authtype: 2
ipmi_privilege: 4
ipmi_username: username
ipmi_password: password
register: zabbix_host1
- name: expect to succeed and that things have not changed
assert:
that:
- "zabbix_host1 is not changed"
- name: "test: verify that an empty change is idempotent"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
register: zabbix_host1
- name: expect to succeed and that things have not changed
assert:
that:
- "zabbix_host1 is not changed"
- name: "test: IPMI set default values"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
ipmi_authtype: -1
ipmi_privilege: 2
ipmi_username: ""
ipmi_password: ""
register: zabbix_host1
- name: expect to succeed and that things have changed
assert:
that:
- "zabbix_host1 is changed"
- name: "test: IPMI set default values (again)"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
ipmi_authtype: -1
ipmi_privilege: 2
ipmi_username: ""
ipmi_password: ""
register: zabbix_host1
- name: expect to succeed and that things have not changed
assert:
that:
- "zabbix_host1 is not changed"
- name: "test: attempt to delete host created earlier"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
state: absent
register: zabbix_host1
- name: deleting a host is a change, right?
assert:
that:
- "zabbix_host1 is changed"
- name: "test: attempt deleting a non-existant host"
zabbix_host:
server_url: "{{ zabbix_server_url }}"
login_user: "{{ zabbix_login_user }}"
login_password: "{{ zabbix_login_password }}"
host_name: ExampleHost
state: absent
register: zabbix_host1
- name: deleting a non-existent host is not a change, right?
assert:
that:
- "not zabbix_host1 is changed"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,298 |
using the setup module under a role provided via a collection fails due to the wrong module being picked
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
A role provided via collection that includes a call to setup: gather_subset fails because the windows powershell version of setup is picked.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible plugin loader
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.1
config file = None
configured module search path = ['/home/aschultz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/aschultz/.virtualenvs/ansible/lib/python3.6/site-packages/ansible
executable location = /home/aschultz/.virtualenvs/ansible/bin/ansible
python version = 3.6.8 (default, Oct 8 2019, 16:29:04) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
Also tested
```
ansible 2.10.0.dev0
config file = None
configured module search path = ['/home/aschultz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/aschultz/.virtualenvs/ansible-devel/lib/python3.6/site-packages/ansible-2.10.0.dev0-py3.6.egg/ansible
executable location = /home/aschultz/.virtualenvs/ansible-devel/bin/ansible
python version = 3.6.8 (default, Oct 8 2019, 16:29:04) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
No changes
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
On a fedora31 host running under python 3.6.8 in a virtual environment against a CentOS7 host or against the localhost
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
I have provided a sample collection that reproduces the problem:
https://github.com/mwhahaha/ansible-collection-failure
On a linux host do:
```
git clone https://github.com/mwhahaha/ansible-collection-failure
cd ansible-collection-failure
ansible-galaxy collection build failure
ansible-galaxy collection install mwhahaha-failure-1.0.0.tar.gz
ansible-playbook sigh.yml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
setup gather_subset should complete successfully.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Ansible attempts to use the powershell setup module on a linux system.
<!--- Paste verbatim command output between quotes -->
```
$ ansible-playbook sigh.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] *********************************************************************************************************************************************************************
TASK [orly] **************************************************************************************************************************************************************************
TASK [sadness : Gather facts for sadness] ********************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: powershell: command not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 127}
PLAY RECAP ***************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
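For context, the suspect behavior is the loader's extension-agnostic lookup: when a module is requested without an extension, the loader globs for any `setup.*` and returns the first hit, and glob order follows the filesystem's directory order. Below is a minimal sketch of that failure mode, assuming a hypothetical directory that ships both variants (the temp-dir setup is illustrative, not Ansible API):
```python
import glob
import os
import tempfile

# Simulate a module tree that ships both a PowerShell and a Python 'setup'.
tmp = tempfile.mkdtemp()
for name in ('setup.ps1', 'setup.py'):
    open(os.path.join(tmp, name), 'w').close()

# The loader effectively does this for an extensionless request:
found = [f for f in glob.iglob(os.path.join(tmp, 'setup') + '.*')
         if os.path.splitext(f)[1] not in ('.pyc', '.pyo')]

# glob yields entries in OS directory order, which is not guaranteed,
# so found[0] can be setup.ps1 -- the failure reported above.
print(found[0])
```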
|
https://github.com/ansible/ansible/issues/65298
|
https://github.com/ansible/ansible/pull/65776
|
74e9b1e2190b4fa7f6fa59294d03ea154d44cfd8
|
6f76a48f59e4d1936f3f3bd1711b3999e1f3869b
| 2019-11-26T22:56:11Z |
python
| 2019-12-16T16:28:24Z |
changelogs/fragments/collection_loader-sort-plugins.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,298 |
using setup module under a role provided via a collection fails due to the wrong module being picked
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
A role provided via a collection that includes a call to `setup: gather_subset` fails because the Windows PowerShell version of setup is picked.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible plugin loader
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.1
config file = None
configured module search path = ['/home/aschultz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/aschultz/.virtualenvs/ansible/lib/python3.6/site-packages/ansible
executable location = /home/aschultz/.virtualenvs/ansible/bin/ansible
python version = 3.6.8 (default, Oct 8 2019, 16:29:04) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
Also tested with:
```
ansible 2.10.0.dev0
config file = None
configured module search path = ['/home/aschultz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/aschultz/.virtualenvs/ansible-devel/lib/python3.6/site-packages/ansible-2.10.0.dev0-py3.6.egg/ansible
executable location = /home/aschultz/.virtualenvs/ansible-devel/bin/ansible
python version = 3.6.8 (default, Oct 8 2019, 16:29:04) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
No changes
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
On a Fedora 31 host running Python 3.6.8 in a virtual environment, targeting either a CentOS 7 host or localhost.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
I have provided a sample collection that reproduces the problem:
https://github.com/mwhahaha/ansible-collection-failure
On a Linux host, run:
```
git clone https://github.com/mwhahaha/ansible-collection-failure
cd ansible-collection-failure
ansible-galaxy collection build failure
ansible-galaxy collection install mwhahaha-failure-1.0.0.tar.gz
ansible-playbook sigh.yml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The `setup` call with `gather_subset` should complete successfully.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Ansible attempts to use the PowerShell setup module on a Linux system.
<!--- Paste verbatim command output between quotes -->
```
$ ansible-playbook sigh.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] *********************************************************************************************************************************************************************
TASK [orly] **************************************************************************************************************************************************************************
TASK [sadness : Gather facts for sadness] ********************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: powershell: command not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 127}
PLAY RECAP ***************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
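The `lib/ansible/plugins/loader.py` source below already special-cases this for core module paths: `_get_paths` pushes any directory ending in `windows` to the end of the search order (see the HACK comment there). A standalone sketch of that reordering, with made-up paths:
```python
def reorder_windows_last(paths):
    """Mimic PluginLoader._get_paths: search 'windows' module dirs last."""
    reordered, win_dirs = [], []
    for path in paths:
        (win_dirs if path.endswith('windows') else reordered).append(path)
    return reordered + win_dirs


# Made-up module subdirectories in arbitrary discovery order.
print(reorder_windows_last([
    '/usr/lib/ansible/modules/windows',
    '/usr/lib/ansible/modules/system',
    '/usr/lib/ansible/modules/files',
]))
# -> ['/usr/lib/ansible/modules/system',
#     '/usr/lib/ansible/modules/files',
#     '/usr/lib/ansible/modules/windows']
```
Collection lookups resolve through `_find_fq_plugin` and the flatmap loader instead of this path list, which presumably is why the same name collision resurfaces there.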
|
https://github.com/ansible/ansible/issues/65298
|
https://github.com/ansible/ansible/pull/65776
|
74e9b1e2190b4fa7f6fa59294d03ea154d44cfd8
|
6f76a48f59e4d1936f3f3bd1711b3999e1f3869b
| 2019-11-26T22:56:11Z |
python
| 2019-12-16T16:28:24Z |
lib/ansible/plugins/loader.py
|
# (c) 2012, Daniel Hokka Zakrisson <[email protected]>
# (c) 2012-2014, Michael DeHaan <[email protected]> and others
# (c) 2017, Toshio Kuratomi <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import glob
import os
import os.path
import sys
import warnings
from collections import defaultdict
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_bytes, to_text, to_native
from ansible.module_utils.six import string_types
from ansible.parsing.utils.yaml import from_yaml
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.plugins import get_plugin_class, MODULE_CACHE, PATH_CACHE, PLUGIN_PATH_CACHE
from ansible.utils.collection_loader import AnsibleCollectionLoader, AnsibleFlatMapLoader, AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.plugin_docs import add_fragments
try:
import importlib.util
imp = None
except ImportError:
import imp
# HACK: keep Python 2.6 controller tests happy in CI until they're properly split
try:
from importlib import import_module
except ImportError:
import_module = __import__
display = Display()
def get_all_plugin_loaders():
return [(name, obj) for (name, obj) in globals().items() if isinstance(obj, PluginLoader)]
def add_all_plugin_dirs(path):
''' add any existing plugin dirs in the path provided '''
b_path = to_bytes(path, errors='surrogate_or_strict')
if os.path.isdir(b_path):
for name, obj in get_all_plugin_loaders():
if obj.subdir:
plugin_path = os.path.join(b_path, to_bytes(obj.subdir))
if os.path.isdir(plugin_path):
obj.add_directory(to_text(plugin_path))
else:
display.warning("Ignoring invalid path provided to plugin path: '%s' is not a directory" % to_text(path))
def get_shell_plugin(shell_type=None, executable=None):
if not shell_type:
# default to sh
shell_type = 'sh'
# mostly for backwards compat
if executable:
if isinstance(executable, string_types):
shell_filename = os.path.basename(executable)
try:
shell = shell_loader.get(shell_filename)
except Exception:
shell = None
if shell is None:
for shell in shell_loader.all():
if shell_filename in shell.COMPATIBLE_SHELLS:
shell_type = shell.SHELL_FAMILY
break
else:
raise AnsibleError("Either a shell type or a shell executable must be provided ")
shell = shell_loader.get(shell_type)
if not shell:
raise AnsibleError("Could not find the shell plugin required (%s)." % shell_type)
if executable:
setattr(shell, 'executable', executable)
return shell
def add_dirs_to_loader(which_loader, paths):
loader = getattr(sys.modules[__name__], '%s_loader' % which_loader)
for path in paths:
loader.add_directory(path, with_subdir=True)
class PluginLoader:
'''
PluginLoader loads plugins from the configured plugin directories.
It searches for plugins by iterating through the combined list of play basedirs, configured
paths, and the python path. The first match is used.
'''
def __init__(self, class_name, package, config, subdir, aliases=None, required_base_class=None):
aliases = {} if aliases is None else aliases
self.class_name = class_name
self.base_class = required_base_class
self.package = package
self.subdir = subdir
# FIXME: remove alias dict in favor of alias by symlink?
self.aliases = aliases
if config and not isinstance(config, list):
config = [config]
elif not config:
config = []
self.config = config
if class_name not in MODULE_CACHE:
MODULE_CACHE[class_name] = {}
if class_name not in PATH_CACHE:
PATH_CACHE[class_name] = None
if class_name not in PLUGIN_PATH_CACHE:
PLUGIN_PATH_CACHE[class_name] = defaultdict(dict)
# hold dirs added at runtime outside of config
self._extra_dirs = []
# caches
self._module_cache = MODULE_CACHE[class_name]
self._paths = PATH_CACHE[class_name]
self._plugin_path_cache = PLUGIN_PATH_CACHE[class_name]
self._searched_paths = set()
def _clear_caches(self):
if C.OLD_PLUGIN_CACHE_CLEARING:
self._paths = None
else:
# reset global caches
MODULE_CACHE[self.class_name] = {}
PATH_CACHE[self.class_name] = None
PLUGIN_PATH_CACHE[self.class_name] = defaultdict(dict)
# reset internal caches
self._module_cache = MODULE_CACHE[self.class_name]
self._paths = PATH_CACHE[self.class_name]
self._plugin_path_cache = PLUGIN_PATH_CACHE[self.class_name]
self._searched_paths = set()
def __setstate__(self, data):
'''
Deserializer.
'''
class_name = data.get('class_name')
package = data.get('package')
config = data.get('config')
subdir = data.get('subdir')
aliases = data.get('aliases')
base_class = data.get('base_class')
PATH_CACHE[class_name] = data.get('PATH_CACHE')
PLUGIN_PATH_CACHE[class_name] = data.get('PLUGIN_PATH_CACHE')
self.__init__(class_name, package, config, subdir, aliases, base_class)
self._extra_dirs = data.get('_extra_dirs', [])
self._searched_paths = data.get('_searched_paths', set())
def __getstate__(self):
'''
Serializer.
'''
return dict(
class_name=self.class_name,
base_class=self.base_class,
package=self.package,
config=self.config,
subdir=self.subdir,
aliases=self.aliases,
_extra_dirs=self._extra_dirs,
_searched_paths=self._searched_paths,
PATH_CACHE=PATH_CACHE[self.class_name],
PLUGIN_PATH_CACHE=PLUGIN_PATH_CACHE[self.class_name],
)
def format_paths(self, paths):
''' Returns a string suitable for printing of the search path '''
# Uses a list to get the order right
ret = []
for i in paths:
if i not in ret:
ret.append(i)
return os.pathsep.join(ret)
def print_paths(self):
return self.format_paths(self._get_paths(subdirs=False))
def _all_directories(self, dir):
results = []
results.append(dir)
for root, subdirs, files in os.walk(dir, followlinks=True):
if '__init__.py' in files:
for x in subdirs:
results.append(os.path.join(root, x))
return results
def _get_package_paths(self, subdirs=True):
''' Gets the path of a Python package '''
if not self.package:
return []
if not hasattr(self, 'package_path'):
m = __import__(self.package)
parts = self.package.split('.')[1:]
for parent_mod in parts:
m = getattr(m, parent_mod)
self.package_path = os.path.dirname(m.__file__)
if subdirs:
return self._all_directories(self.package_path)
return [self.package_path]
def _get_paths(self, subdirs=True):
''' Return a list of paths to search for plugins in '''
# FIXME: This is potentially buggy if subdirs is sometimes True and sometimes False.
# In current usage, everything calls this with subdirs=True except for module_utils_loader and ansible-doc
# which always calls it with subdirs=False. So there currently isn't a problem with this caching.
if self._paths is not None:
return self._paths
ret = self._extra_dirs[:]
# look in any configured plugin paths, allow one level deep for subcategories
if self.config is not None:
for path in self.config:
path = os.path.realpath(os.path.expanduser(path))
if subdirs:
contents = glob.glob("%s/*" % path) + glob.glob("%s/*/*" % path)
for c in contents:
if os.path.isdir(c) and c not in ret:
ret.append(c)
if path not in ret:
ret.append(path)
# look for any plugins installed in the package subtree
# Note package path always gets added last so that every other type of
# path is searched before it.
ret.extend(self._get_package_paths(subdirs=subdirs))
# HACK: because powershell modules are in the same directory
# hierarchy as other modules we have to process them last. This is
# because powershell only works on windows but the other modules work
# anywhere (possibly including windows if the correct language
# interpreter is installed). the non-powershell modules can have any
# file extension and thus powershell modules are picked up in that search.
# The non-hack way to fix this is to have powershell modules be
# a different PluginLoader/ModuleLoader. But that requires changing
# other things too (known thing to change would be PATHS_CACHE,
# PLUGIN_PATHS_CACHE, and MODULE_CACHE. Since those three dicts key
# on the class_name and neither regular modules nor powershell modules
# would have class_names, they would not work as written.
reordered_paths = []
win_dirs = []
for path in ret:
if path.endswith('windows'):
win_dirs.append(path)
else:
reordered_paths.append(path)
reordered_paths.extend(win_dirs)
# cache and return the result
self._paths = reordered_paths
return reordered_paths
def _load_config_defs(self, name, module, path):
''' Reads plugin docs to find configuration setting definitions, to push to config manager for later use '''
# plugins w/o class name don't support config
if self.class_name:
type_name = get_plugin_class(self.class_name)
# if type name != 'module_doc_fragment':
if type_name in C.CONFIGURABLE_PLUGINS:
dstring = AnsibleLoader(getattr(module, 'DOCUMENTATION', ''), file_name=path).get_single_data()
if dstring:
add_fragments(dstring, path, fragment_loader=fragment_loader)
if dstring and 'options' in dstring and isinstance(dstring['options'], dict):
C.config.initialize_plugin_configuration_definitions(type_name, name, dstring['options'])
display.debug('Loaded config def from plugin (%s/%s)' % (type_name, name))
def add_directory(self, directory, with_subdir=False):
''' Adds an additional directory to the search path '''
directory = os.path.realpath(directory)
if directory is not None:
if with_subdir:
directory = os.path.join(directory, self.subdir)
if directory not in self._extra_dirs:
# append the directory and invalidate the path cache
self._extra_dirs.append(directory)
self._clear_caches()
display.debug('Added %s to loader search path' % (directory))
def _find_fq_plugin(self, fq_name, extension):
plugin_type = AnsibleCollectionRef.legacy_plugin_dir_to_plugin_type(self.subdir)
acr = AnsibleCollectionRef.from_fqcr(fq_name, plugin_type)
n_resource = to_native(acr.resource, errors='strict')
# we want this before the extension is added
full_name = '{0}.{1}'.format(acr.n_python_package_name, n_resource)
if extension:
n_resource += extension
pkg = sys.modules.get(acr.n_python_package_name)
if not pkg:
# FIXME: there must be cheaper/safer way to do this
pkg = import_module(acr.n_python_package_name)
# if the package is one of our flatmaps, we need to consult its loader to find the path, since the file could be
# anywhere in the tree
if hasattr(pkg, '__loader__') and isinstance(pkg.__loader__, AnsibleFlatMapLoader):
try:
file_path = pkg.__loader__.find_file(n_resource)
return full_name, to_text(file_path)
except IOError:
# this loader already takes care of extensionless files, so if we didn't find it, just bail
return None, None
pkg_path = os.path.dirname(pkg.__file__)
n_resource_path = os.path.join(pkg_path, n_resource)
# FIXME: and is file or file link or ...
if os.path.exists(n_resource_path):
return full_name, to_text(n_resource_path)
# look for any matching extension in the package location (sans filter)
ext_blacklist = ['.pyc', '.pyo']
found_files = [f for f in glob.iglob(os.path.join(pkg_path, n_resource) + '.*') if os.path.isfile(f) and os.path.splitext(f)[1] not in ext_blacklist]
if not found_files:
return None, None
if len(found_files) > 1:
# TODO: warn?
pass
return full_name, to_text(found_files[0])
def find_plugin(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
''' Find a plugin named name '''
return self.find_plugin_with_name(name, mod_type, ignore_deprecated, check_aliases, collection_list)[1]
def find_plugin_with_name(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
''' Find a plugin named name '''
global _PLUGIN_FILTERS
if name in _PLUGIN_FILTERS[self.package]:
return None, None
if mod_type:
suffix = mod_type
elif self.class_name:
# Ansible plugins that run in the controller process (most plugins)
suffix = '.py'
else:
# Only Ansible Modules. Ansible modules can be any executable so
# they can have any suffix
suffix = ''
# FIXME: need this right now so we can still load shipped PS module_utils- come up with a more robust solution
if (AnsibleCollectionRef.is_valid_fqcr(name) or collection_list) and not name.startswith('Ansible'):
if '.' in name or not collection_list:
candidates = [name]
else:
candidates = ['{0}.{1}'.format(c, name) for c in collection_list]
# TODO: keep actual errors, not just assembled messages
errors = []
for candidate_name in candidates:
try:
# HACK: refactor this properly
if candidate_name.startswith('ansible.legacy'):
# just pass the raw name to the old lookup function to check in all the usual locations
full_name = name
p = self._find_plugin_legacy(name.replace('ansible.legacy.', '', 1), ignore_deprecated, check_aliases, suffix)
else:
full_name, p = self._find_fq_plugin(candidate_name, suffix)
if p:
return full_name, p
except Exception as ex:
errors.append(to_native(ex))
if errors:
display.debug(msg='plugin lookup for {0} failed; errors: {1}'.format(name, '; '.join(errors)))
return None, None
# if we got here, there's no collection list and it's not an FQ name, so do legacy lookup
return name, self._find_plugin_legacy(name, ignore_deprecated, check_aliases, suffix)
def _find_plugin_legacy(self, name, ignore_deprecated=False, check_aliases=False, suffix=None):
if check_aliases:
name = self.aliases.get(name, name)
# The particular cache to look for modules within. This matches the
# requested mod_type
pull_cache = self._plugin_path_cache[suffix]
try:
return pull_cache[name]
except KeyError:
# Cache miss. Now let's find the plugin
pass
# TODO: Instead of using the self._paths cache (PATH_CACHE) and
# self._searched_paths we could use an iterator. Before enabling that
# we need to make sure we don't want to add additional directories
# (add_directory()) once we start using the iterator. Currently, it
# looks like _get_paths() never forces a cache refresh so if we expect
# additional directories to be added later, it is buggy.
for path in (p for p in self._get_paths() if p not in self._searched_paths and os.path.isdir(p)):
display.debug('trying %s' % path)
try:
full_paths = (os.path.join(path, f) for f in os.listdir(path))
except OSError as e:
display.warning("Error accessing plugin paths: %s" % to_text(e))
for full_path in (f for f in full_paths if os.path.isfile(f) and not f.endswith('__init__.py')):
full_name = os.path.basename(full_path)
# HACK: We have no way of executing python byte compiled files as ansible modules so specifically exclude them
# FIXME: I believe this is only correct for modules and module_utils.
# For all other plugins we want .pyc and .pyo to be valid
if any(full_path.endswith(x) for x in C.BLACKLIST_EXTS):
continue
splitname = os.path.splitext(full_name)
base_name = splitname[0]
try:
extension = splitname[1]
except IndexError:
extension = ''
# Module found, now enter it into the caches that match this file
if base_name not in self._plugin_path_cache['']:
self._plugin_path_cache[''][base_name] = full_path
if full_name not in self._plugin_path_cache['']:
self._plugin_path_cache[''][full_name] = full_path
if base_name not in self._plugin_path_cache[extension]:
self._plugin_path_cache[extension][base_name] = full_path
if full_name not in self._plugin_path_cache[extension]:
self._plugin_path_cache[extension][full_name] = full_path
self._searched_paths.add(path)
try:
return pull_cache[name]
except KeyError:
# Didn't find the plugin in this directory. Load modules from the next one
pass
# if nothing is found, try finding alias/deprecated
if not name.startswith('_'):
alias_name = '_' + name
# We've already cached all the paths at this point
if alias_name in pull_cache:
if not ignore_deprecated and not os.path.islink(pull_cache[alias_name]):
# FIXME: this is not always the case, some are just aliases
display.deprecated('%s is kept for backwards compatibility but usage is discouraged. ' # pylint: disable=ansible-deprecated-no-version
'The module documentation details page may explain more about this rationale.' % name.lstrip('_'))
return pull_cache[alias_name]
return None
def has_plugin(self, name, collection_list=None):
''' Checks if a plugin named name exists '''
try:
return self.find_plugin(name, collection_list=collection_list) is not None
except Exception as ex:
if isinstance(ex, AnsibleError):
raise
# log and continue, likely an innocuous type/package loading failure in collections import
display.debug('has_plugin error: {0}'.format(to_text(ex)))
__contains__ = has_plugin
def _load_module_source(self, name, path):
# avoid collisions across plugins
if name.startswith('ansible_collections.'):
full_name = name
else:
full_name = '.'.join([self.package, name])
if full_name in sys.modules:
# Avoids double loading, See https://github.com/ansible/ansible/issues/13110
return sys.modules[full_name]
with warnings.catch_warnings():
warnings.simplefilter("ignore", RuntimeWarning)
if imp is None:
spec = importlib.util.spec_from_file_location(to_native(full_name), to_native(path))
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
sys.modules[full_name] = module
else:
with open(to_bytes(path), 'rb') as module_file:
# to_native is used here because imp.load_source's path is for tracebacks and python's traceback formatting uses native strings
module = imp.load_source(to_native(full_name), to_native(path), module_file)
return module
def _update_object(self, obj, name, path):
# set extra info on the module, in case we want it later
setattr(obj, '_original_path', path)
setattr(obj, '_load_name', name)
def get(self, name, *args, **kwargs):
''' instantiates a plugin of the given name using arguments '''
found_in_cache = True
class_only = kwargs.pop('class_only', False)
collection_list = kwargs.pop('collection_list', None)
if name in self.aliases:
name = self.aliases[name]
name, path = self.find_plugin_with_name(name, collection_list=collection_list)
if path is None:
return None
if path not in self._module_cache:
self._module_cache[path] = self._load_module_source(name, path)
self._load_config_defs(name, self._module_cache[path], path)
found_in_cache = False
obj = getattr(self._module_cache[path], self.class_name)
if self.base_class:
# The import path is hardcoded and should be the right place,
# so we are not expecting an ImportError.
module = __import__(self.package, fromlist=[self.base_class])
# Check whether this obj has the required base class.
try:
plugin_class = getattr(module, self.base_class)
except AttributeError:
return None
if not issubclass(obj, plugin_class):
return None
self._display_plugin_load(self.class_name, name, self._searched_paths, path, found_in_cache=found_in_cache, class_only=class_only)
if not class_only:
try:
obj = obj(*args, **kwargs)
except TypeError as e:
if "abstract" in e.args[0]:
# Abstract Base Class. The found plugin file does not
# fully implement the defined interface.
return None
raise
self._update_object(obj, name, path)
return obj
def _display_plugin_load(self, class_name, name, searched_paths, path, found_in_cache=None, class_only=None):
''' formats data to display debug info for plugin loading, also avoids processing unless really needed '''
if C.DEFAULT_DEBUG:
msg = 'Loading %s \'%s\' from %s' % (class_name, os.path.basename(name), path)
if len(searched_paths) > 1:
msg = '%s (searched paths: %s)' % (msg, self.format_paths(searched_paths))
if found_in_cache or class_only:
msg = '%s (found_in_cache=%s, class_only=%s)' % (msg, found_in_cache, class_only)
display.debug(msg)
def all(self, *args, **kwargs):
'''
Iterate through all plugins of this type
A plugin loader is initialized with a specific type. This function is an iterator returning
all of the plugins of that type to the caller.
:kwarg path_only: If this is set to True, then we return the paths to where the plugins reside
instead of an instance of the plugin. This conflicts with class_only and both should
not be set.
:kwarg class_only: If this is set to True then we return the python class which implements
a plugin rather than an instance of the plugin. This conflicts with path_only and both
should not be set.
:kwarg _dedupe: By default, we only return one plugin per plugin name. Deduplication happens
in the same way as the :meth:`get` and :meth:`find_plugin` methods resolve which plugin
should take precedence. If this is set to False, then we return all of the plugins
found, including those with duplicate names. In the case of duplicates, the order in
which they are returned is the one that would take precedence first, followed by the
others in decreasing precedence order. This should only be used by subclasses which
want to manage their own deduplication of the plugins.
:*args: Any extra arguments are passed to each plugin when it is instantiated.
:**kwargs: Any extra keyword arguments are passed to each plugin when it is instantiated.
'''
# TODO: Change the signature of this method to:
# def all(return_type='instance', args=None, kwargs=None):
# if args is None: args = []
# if kwargs is None: kwargs = {}
# return_type can be instance, class, or path.
# These changes will mean that plugin parameters won't conflict with our params and
# will also make it impossible to request both a path and a class at the same time.
#
# Move _dedupe to be a class attribute, CUSTOM_DEDUPE, with subclasses for filters and
# tests setting it to True
global _PLUGIN_FILTERS
dedupe = kwargs.pop('_dedupe', True)
path_only = kwargs.pop('path_only', False)
class_only = kwargs.pop('class_only', False)
# Having both path_only and class_only is a coding bug
if path_only and class_only:
raise AnsibleError('Do not set both path_only and class_only when calling PluginLoader.all()')
all_matches = []
found_in_cache = True
for i in self._get_paths():
all_matches.extend(glob.glob(os.path.join(i, "*.py")))
loaded_modules = set()
for path in sorted(all_matches, key=os.path.basename):
name = os.path.splitext(path)[0]
basename = os.path.basename(name)
if basename == '__init__' or basename in _PLUGIN_FILTERS[self.package]:
continue
if dedupe and basename in loaded_modules:
continue
loaded_modules.add(basename)
if path_only:
yield path
continue
if path not in self._module_cache:
try:
if self.subdir in ('filter_plugins', 'test_plugins'):
# filter and test plugin files can contain multiple plugins
# they must have a unique python module name to prevent them from shadowing each other
full_name = '{0}_{1}'.format(abs(hash(path)), basename)
else:
full_name = basename
module = self._load_module_source(full_name, path)
self._load_config_defs(basename, module, path)
except Exception as e:
display.warning("Skipping plugin (%s) as it seems to be invalid: %s" % (path, to_text(e)))
continue
self._module_cache[path] = module
found_in_cache = False
try:
obj = getattr(self._module_cache[path], self.class_name)
except AttributeError as e:
display.warning("Skipping plugin (%s) as it seems to be invalid: %s" % (path, to_text(e)))
continue
if self.base_class:
# The import path is hardcoded and should be the right place,
# so we are not expecting an ImportError.
module = __import__(self.package, fromlist=[self.base_class])
# Check whether this obj has the required base class.
try:
plugin_class = getattr(module, self.base_class)
except AttributeError:
continue
if not issubclass(obj, plugin_class):
continue
self._display_plugin_load(self.class_name, basename, self._searched_paths, path, found_in_cache=found_in_cache, class_only=class_only)
if not class_only:
try:
obj = obj(*args, **kwargs)
except TypeError as e:
display.warning("Skipping plugin (%s) as it seems to be incomplete: %s" % (path, to_text(e)))
self._update_object(obj, basename, path)
yield obj
class Jinja2Loader(PluginLoader):
"""
PluginLoader optimized for Jinja2 plugins
The filter and test plugins are Jinja2 plugins encapsulated inside of our plugin format.
The way the calling code is setup, we need to do a few things differently in the all() method
"""
def find_plugin(self, name, collection_list=None):
# Nothing using Jinja2Loader uses this method. We can't use the base class version because
# we deduplicate differently than the base class
if '.' in name:
return super(Jinja2Loader, self).find_plugin(name, collection_list=collection_list)
raise AnsibleError('No code should call find_plugin for Jinja2Loaders (Not implemented)')
def get(self, name, *args, **kwargs):
# Nothing using Jinja2Loader uses this method. We can't use the base class version because
# we deduplicate differently than the base class
if '.' in name:
return super(Jinja2Loader, self).get(name, *args, **kwargs)
raise AnsibleError('No code should call get for Jinja2Loaders (Not implemented)')
def all(self, *args, **kwargs):
"""
Differences with :meth:`PluginLoader.all`:
* We do not deduplicate ansible plugin names. This is because we don't care about our
plugin names, here. We care about the names of the actual jinja2 plugins which are inside
of our plugins.
* We reverse the order of the list of plugins compared to other PluginLoaders. This is
because of how calling code chooses to sync the plugins from the list. It adds all the
Jinja2 plugins from one of our Ansible plugins into a dict. Then it adds the Jinja2
plugins from the next Ansible plugin, overwriting any Jinja2 plugins that had the same
name. This is an encapsulation violation (the PluginLoader should not know about what
calling code does with the data) but we're pushing the common code here. We'll fix
this in the future by moving more of the common code into this PluginLoader.
* We return a list. We could iterate the list instead but that's extra work for no gain because
the API receiving this doesn't care. It just needs an iterable
"""
# We don't deduplicate ansible plugin names. Instead, calling code deduplicates jinja2
# plugin names.
kwargs['_dedupe'] = False
# We have to instantiate a list of all plugins so that we can reverse it. We reverse it so
# that calling code will deduplicate this correctly.
plugins = [p for p in super(Jinja2Loader, self).all(*args, **kwargs)]
plugins.reverse()
return plugins
def _load_plugin_filter():
filters = defaultdict(frozenset)
user_set = False
if C.PLUGIN_FILTERS_CFG is None:
filter_cfg = '/etc/ansible/plugin_filters.yml'
else:
filter_cfg = C.PLUGIN_FILTERS_CFG
user_set = True
if os.path.exists(filter_cfg):
with open(filter_cfg, 'rb') as f:
try:
filter_data = from_yaml(f.read())
except Exception as e:
display.warning(u'The plugin filter file, {0} was not parsable.'
u' Skipping: {1}'.format(filter_cfg, to_text(e)))
return filters
try:
version = filter_data['filter_version']
except KeyError:
display.warning(u'The plugin filter file, {0} was invalid.'
u' Skipping.'.format(filter_cfg))
return filters
# Try to convert for people specifying version as a float instead of string
version = to_text(version)
version = version.strip()
if version == u'1.0':
# Modules and action plugins share the same blacklist since the difference between the
# two isn't visible to the users
try:
filters['ansible.modules'] = frozenset(filter_data['module_blacklist'])
except TypeError:
display.warning(u'Unable to parse the plugin filter file {0} as'
u' module_blacklist is not a list.'
u' Skipping.'.format(filter_cfg))
return filters
filters['ansible.plugins.action'] = filters['ansible.modules']
else:
display.warning(u'The plugin filter file, {0} was a version not recognized by this'
u' version of Ansible. Skipping.'.format(filter_cfg))
else:
if user_set:
display.warning(u'The plugin filter file, {0} does not exist.'
u' Skipping.'.format(filter_cfg))
# Specialcase the stat module as Ansible can run very few things if stat is blacklisted.
if 'stat' in filters['ansible.modules']:
raise AnsibleError('The stat module was specified in the module blacklist file, {0}, but'
' Ansible will not function without the stat module. Please remove stat'
' from the blacklist.'.format(to_native(filter_cfg)))
return filters
def _configure_collection_loader():
if not any((isinstance(l, AnsibleCollectionLoader) for l in sys.meta_path)):
sys.meta_path.insert(0, AnsibleCollectionLoader(C.config))
# TODO: All of the following is initialization code. It should be moved inside of an initialization
# function which is called at some point early in the ansible and ansible-playbook CLI startup.
_PLUGIN_FILTERS = _load_plugin_filter()
_configure_collection_loader()
# doc fragments first
fragment_loader = PluginLoader(
'ModuleDocFragment',
'ansible.plugins.doc_fragments',
C.DOC_FRAGMENT_PLUGIN_PATH,
'doc_fragments',
)
action_loader = PluginLoader(
'ActionModule',
'ansible.plugins.action',
C.DEFAULT_ACTION_PLUGIN_PATH,
'action_plugins',
required_base_class='ActionBase',
)
cache_loader = PluginLoader(
'CacheModule',
'ansible.plugins.cache',
C.DEFAULT_CACHE_PLUGIN_PATH,
'cache_plugins',
)
callback_loader = PluginLoader(
'CallbackModule',
'ansible.plugins.callback',
C.DEFAULT_CALLBACK_PLUGIN_PATH,
'callback_plugins',
)
connection_loader = PluginLoader(
'Connection',
'ansible.plugins.connection',
C.DEFAULT_CONNECTION_PLUGIN_PATH,
'connection_plugins',
aliases={'paramiko': 'paramiko_ssh'},
required_base_class='ConnectionBase',
)
shell_loader = PluginLoader(
'ShellModule',
'ansible.plugins.shell',
'shell_plugins',
'shell_plugins',
)
module_loader = PluginLoader(
'',
'ansible.modules',
C.DEFAULT_MODULE_PATH,
'library',
)
module_utils_loader = PluginLoader(
'',
'ansible.module_utils',
C.DEFAULT_MODULE_UTILS_PATH,
'module_utils',
)
# NB: dedicated loader is currently necessary because PS module_utils expects "with subdir" lookup where
# regular module_utils doesn't. This can be revisited once we have more granular loaders.
ps_module_utils_loader = PluginLoader(
'',
'ansible.module_utils',
C.DEFAULT_MODULE_UTILS_PATH,
'module_utils',
)
lookup_loader = PluginLoader(
'LookupModule',
'ansible.plugins.lookup',
C.DEFAULT_LOOKUP_PLUGIN_PATH,
'lookup_plugins',
required_base_class='LookupBase',
)
filter_loader = Jinja2Loader(
'FilterModule',
'ansible.plugins.filter',
C.DEFAULT_FILTER_PLUGIN_PATH,
'filter_plugins',
)
test_loader = Jinja2Loader(
'TestModule',
'ansible.plugins.test',
C.DEFAULT_TEST_PLUGIN_PATH,
'test_plugins'
)
strategy_loader = PluginLoader(
'StrategyModule',
'ansible.plugins.strategy',
C.DEFAULT_STRATEGY_PLUGIN_PATH,
'strategy_plugins',
required_base_class='StrategyBase',
)
terminal_loader = PluginLoader(
'TerminalModule',
'ansible.plugins.terminal',
C.DEFAULT_TERMINAL_PLUGIN_PATH,
'terminal_plugins',
required_base_class='TerminalBase'
)
vars_loader = PluginLoader(
'VarsModule',
'ansible.plugins.vars',
C.DEFAULT_VARS_PLUGIN_PATH,
'vars_plugins',
)
cliconf_loader = PluginLoader(
'Cliconf',
'ansible.plugins.cliconf',
C.DEFAULT_CLICONF_PLUGIN_PATH,
'cliconf_plugins',
required_base_class='CliconfBase'
)
netconf_loader = PluginLoader(
'Netconf',
'ansible.plugins.netconf',
C.DEFAULT_NETCONF_PLUGIN_PATH,
'netconf_plugins',
required_base_class='NetconfBase'
)
inventory_loader = PluginLoader(
'InventoryModule',
'ansible.plugins.inventory',
C.DEFAULT_INVENTORY_PLUGIN_PATH,
'inventory_plugins'
)
httpapi_loader = PluginLoader(
'HttpApi',
'ansible.plugins.httpapi',
C.DEFAULT_HTTPAPI_PLUGIN_PATH,
'httpapi_plugins',
required_base_class='HttpApiBase',
)
become_loader = PluginLoader(
'BecomeModule',
'ansible.plugins.become',
C.BECOME_PLUGIN_PATH,
'become_plugins'
)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,298 |
using setup module under a role provided via a collection fails due to the wrong module being picked
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
A role provided via a collection that includes a call to `setup: gather_subset` fails because the Windows PowerShell version of setup is picked.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible plugin loader
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.1
config file = None
configured module search path = ['/home/aschultz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/aschultz/.virtualenvs/ansible/lib/python3.6/site-packages/ansible
executable location = /home/aschultz/.virtualenvs/ansible/bin/ansible
python version = 3.6.8 (default, Oct 8 2019, 16:29:04) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
Also tested with:
```
ansible 2.10.0.dev0
config file = None
configured module search path = ['/home/aschultz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/aschultz/.virtualenvs/ansible-devel/lib/python3.6/site-packages/ansible-2.10.0.dev0-py3.6.egg/ansible
executable location = /home/aschultz/.virtualenvs/ansible-devel/bin/ansible
python version = 3.6.8 (default, Oct 8 2019, 16:29:04) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
No changes
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
On a Fedora 31 host running Python 3.6.8 in a virtual environment, targeting either a CentOS 7 host or localhost.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
I have provided a sample collection that reproduces the problem:
https://github.com/mwhahaha/ansible-collection-failure
On a Linux host, run:
```
git clone https://github.com/mwhahaha/ansible-collection-failure
cd ansible-collection-failure
ansible-galaxy collection build failure
ansible-galaxy collection install mwhahaha-failure-1.0.0.tar.gz
ansible-playbook sigh.yml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The `setup` call with `gather_subset` should complete successfully.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Ansible attempts to use the PowerShell setup module on a Linux system.
<!--- Paste verbatim command output between quotes -->
```
$ ansible-playbook sigh.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] *********************************************************************************************************************************************************************
TASK [orly] **************************************************************************************************************************************************************************
TASK [sadness : Gather facts for sadness] ********************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: powershell: command not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 127}
PLAY RECAP ***************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
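In the `lib/ansible/utils/collection_loader.py` source below, `AnsibleFlatMapLoader.find_file` resolves an extensionless name with a regex and takes the first `os.walk` entry that matches, so both `setup.py` and `setup.ps1` satisfy `^setup(\..+)?$` and the winner depends on walk order. A minimal sketch of that match (the dirtree entries are illustrative); note that naive lexical sorting alone would still rank `setup.ps1` before `setup.py`, so a deterministic fix has to rank extensions as well:
```python
import re

filename = 'setup'
extensionless_re = re.compile(r'^{0}(\..+)?$'.format(re.escape(filename)))

# Entries as os.walk might yield them; the order is filesystem-dependent.
dirtree = [('/modules/windows', 'setup.ps1'), ('/modules/system', 'setup.py')]

matches = [f for _, f in dirtree if extensionless_re.match(f)]
print(matches)              # both files match the extensionless pattern
print(next(iter(matches)))  # first walk entry wins -- here setup.ps1
print(sorted(matches))      # ['setup.ps1', 'setup.py']: 's' < 'y', still .ps1 first
```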
|
https://github.com/ansible/ansible/issues/65298
|
https://github.com/ansible/ansible/pull/65776
|
74e9b1e2190b4fa7f6fa59294d03ea154d44cfd8
|
6f76a48f59e4d1936f3f3bd1711b3999e1f3869b
| 2019-11-26T22:56:11Z |
python
| 2019-12-16T16:28:24Z |
lib/ansible/utils/collection_loader.py
|
# (c) 2019 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import os.path
import pkgutil
import re
import sys
from types import ModuleType
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.six import iteritems, string_types, with_metaclass
from ansible.utils.singleton import Singleton
# HACK: keep Python 2.6 controller tests happy in CI until they're properly split
try:
from importlib import import_module
except ImportError:
import_module = __import__
_SYNTHETIC_PACKAGES = {
# these provide fallback package definitions when there are no on-disk paths
'ansible_collections': dict(type='pkg_only', allow_external_subpackages=True),
'ansible_collections.ansible': dict(type='pkg_only', allow_external_subpackages=True),
# these implement the ansible.builtin synthetic collection mapped to the packages inside the ansible distribution
'ansible_collections.ansible.builtin': dict(type='pkg_only'),
'ansible_collections.ansible.builtin.plugins': dict(type='map', map='ansible.plugins'),
'ansible_collections.ansible.builtin.plugins.module_utils': dict(type='map', map='ansible.module_utils', graft=True),
'ansible_collections.ansible.builtin.plugins.modules': dict(type='flatmap', flatmap='ansible.modules', graft=True),
}
# FIXME: exception handling/error logging
class AnsibleCollectionLoader(with_metaclass(Singleton, object)):
def __init__(self, config=None):
if config:
self._n_configured_paths = config.get_config_value('COLLECTIONS_PATHS')
else:
self._n_configured_paths = os.environ.get('ANSIBLE_COLLECTIONS_PATHS', '').split(os.pathsep)
if isinstance(self._n_configured_paths, string_types):
self._n_configured_paths = [self._n_configured_paths]
elif self._n_configured_paths is None:
self._n_configured_paths = []
# expand any placeholders in configured paths
self._n_configured_paths = [to_native(os.path.expanduser(p), errors='surrogate_or_strict') for p in self._n_configured_paths]
self._n_playbook_paths = []
self._default_collection = None
# pre-inject grafted package maps so we can force them to use the right loader instead of potentially delegating to a "normal" loader
for syn_pkg_def in (p for p in iteritems(_SYNTHETIC_PACKAGES) if p[1].get('graft')):
pkg_name = syn_pkg_def[0]
pkg_def = syn_pkg_def[1]
newmod = ModuleType(pkg_name)
newmod.__package__ = pkg_name
newmod.__file__ = '<ansible_synthetic_collection_package>'
pkg_type = pkg_def.get('type')
# TODO: need to rethink map style so we can just delegate all the loading
if pkg_type == 'flatmap':
newmod.__loader__ = AnsibleFlatMapLoader(import_module(pkg_def['flatmap']))
newmod.__path__ = []
sys.modules[pkg_name] = newmod
@property
def n_collection_paths(self):
return self._n_playbook_paths + self._n_configured_paths
def get_collection_path(self, collection_name):
if not AnsibleCollectionRef.is_valid_collection_name(collection_name):
raise ValueError('{0} is not a valid collection name'.format(to_native(collection_name)))
m = import_module('ansible_collections.{0}'.format(collection_name))
return m.__file__
def set_playbook_paths(self, b_playbook_paths):
if isinstance(b_playbook_paths, string_types):
b_playbook_paths = [b_playbook_paths]
# track visited paths; we have to preserve the dir order as-passed in case there are duplicate collections (first one wins)
added_paths = set()
# de-dupe and ensure the paths are native strings (Python seems to do this for package paths etc, so assume it's safe)
self._n_playbook_paths = [os.path.join(to_native(p), 'collections') for p in b_playbook_paths if not (p in added_paths or added_paths.add(p))]
# FIXME: only allow setting this once, or handle any necessary cache/package path invalidations internally?
# FIXME: is there a better place to store this?
# FIXME: only allow setting this once
def set_default_collection(self, collection_name):
self._default_collection = collection_name
@property
def default_collection(self):
return self._default_collection
def find_module(self, fullname, path=None):
if self._find_module(fullname, path, load=False)[0]:
return self
return None
def load_module(self, fullname):
mod = self._find_module(fullname, None, load=True)[1]
if not mod:
raise ImportError('module {0} not found'.format(fullname))
return mod
def _find_module(self, fullname, path, load):
# this loader is only concerned with items under the Ansible Collections namespace hierarchy, ignore others
if not fullname.startswith('ansible_collections.') and fullname != 'ansible_collections':
return False, None
if sys.modules.get(fullname):
if not load:
return True, None
return True, sys.modules[fullname]
newmod = None
# this loader implements key functionality for Ansible collections
# * implicit distributed namespace packages for the root Ansible namespace (no pkgutil.extend_path hackery reqd)
# * implicit package support for Python 2.7 (no need for __init__.py in collections, except to use standard Py2.7 tooling)
# * preventing controller-side code injection during collection loading
# * (default loader would execute arbitrary package code from all __init__.py's)
parent_pkg_name = '.'.join(fullname.split('.')[:-1])
parent_pkg = sys.modules.get(parent_pkg_name)
if parent_pkg_name and not parent_pkg:
raise ImportError('parent package {0} not found'.format(parent_pkg_name))
# are we at or below the collection level? eg a.mynamespace.mycollection.something.else
# if so, we don't want distributed namespace behavior; first mynamespace.mycollection on the path is where
# we'll load everything from (ie, don't fall back to another mynamespace.mycollection lower on the path)
sub_collection = fullname.count('.') > 1
synpkg_def = _SYNTHETIC_PACKAGES.get(fullname)
synpkg_remainder = ''
if not synpkg_def:
# if the parent is a grafted package, we have some special work to do, otherwise just look for stuff on disk
parent_synpkg_def = _SYNTHETIC_PACKAGES.get(parent_pkg_name)
if parent_synpkg_def and parent_synpkg_def.get('graft'):
synpkg_def = parent_synpkg_def
synpkg_remainder = '.' + fullname.rpartition('.')[2]
# FUTURE: collapse as much of this back to on-demand as possible (maybe stub packages that get replaced when actually loaded?)
if synpkg_def:
pkg_type = synpkg_def.get('type')
if not pkg_type:
raise KeyError('invalid synthetic package type (no package "type" specified)')
if pkg_type == 'map':
map_package = synpkg_def.get('map')
if not map_package:
raise KeyError('invalid synthetic map package definition (no target "map" defined)')
if not load:
return True, None
mod = import_module(map_package + synpkg_remainder)
sys.modules[fullname] = mod
return True, mod
elif pkg_type == 'flatmap':
raise NotImplementedError()
elif pkg_type == 'pkg_only':
if not load:
return True, None
newmod = ModuleType(fullname)
newmod.__package__ = fullname
newmod.__file__ = '<ansible_synthetic_collection_package>'
newmod.__loader__ = self
newmod.__path__ = []
if not synpkg_def.get('allow_external_subpackages'):
# if external subpackages are NOT allowed, we're done
sys.modules[fullname] = newmod
return True, newmod
# if external subpackages ARE allowed, check for on-disk implementations and return a normal
# package if we find one, otherwise return the one we created here
if not parent_pkg: # top-level package, look for NS subpackages on all collection paths
package_paths = [self._extend_path_with_ns(p, fullname) for p in self.n_collection_paths]
else: # subpackage; search in all subpaths (we'll limit later inside a collection)
package_paths = [self._extend_path_with_ns(p, fullname) for p in parent_pkg.__path__]
for candidate_child_path in package_paths:
code_object = None
is_package = True
location = None
# check for implicit sub-package first
if os.path.isdir(to_bytes(candidate_child_path)):
# Py3.x implicit namespace packages don't have a file location, so they don't support get_data
# (which assumes the parent dir or that the loader has an internal mapping); so we have to provide
# a bogus leaf file on the __file__ attribute for pkgutil.get_data to strip off
location = os.path.join(candidate_child_path, '__synthetic__')
else:
for source_path in [os.path.join(candidate_child_path, '__init__.py'),
candidate_child_path + '.py']:
if not os.path.isfile(to_bytes(source_path)):
continue
if not load:
return True, None
with open(to_bytes(source_path), 'rb') as fd:
source = fd.read()
code_object = compile(source=source, filename=source_path, mode='exec', flags=0, dont_inherit=True)
location = source_path
is_package = source_path.endswith('__init__.py')
break
if not location:
continue
newmod = ModuleType(fullname)
newmod.__file__ = location
newmod.__loader__ = self
if is_package:
if sub_collection: # we never want to search multiple instances of the same collection; use first found
newmod.__path__ = [candidate_child_path]
else:
newmod.__path__ = package_paths
newmod.__package__ = fullname
else:
newmod.__package__ = parent_pkg_name
sys.modules[fullname] = newmod
if code_object:
# FIXME: decide cases where we don't actually want to exec the code?
exec(code_object, newmod.__dict__)
return True, newmod
# even if we didn't find one on disk, fall back to a synthetic package if we have one...
if newmod:
sys.modules[fullname] = newmod
return True, newmod
# FIXME: need to handle the "no dirs present" case for at least the root and synthetic internal collections like ansible.builtin
return False, None
@staticmethod
def _extend_path_with_ns(path, ns):
ns_path_add = ns.rsplit('.', 1)[-1]
return os.path.join(path, ns_path_add)
def get_data(self, filename):
with open(filename, 'rb') as fd:
return fd.read()
class AnsibleFlatMapLoader(object):
_extension_blacklist = ['.pyc', '.pyo']
def __init__(self, root_package):
self._root_package = root_package
self._dirtree = None
def _init_dirtree(self):
# FIXME: thread safety
root_path = os.path.dirname(self._root_package.__file__)
flat_files = []
# FIXME: make this a dict of filename->dir for faster direct lookup?
# FIXME: deal with _ prefixed deprecated files (or require another method for collections?)
# FIXME: fix overloaded filenames (eg, rename Windows setup to win_setup)
for root, dirs, files in os.walk(root_path):
# add all files in this dir that don't have a blacklisted extension
flat_files.extend(((root, f) for f in files if not any((f.endswith(ext) for ext in self._extension_blacklist))))
self._dirtree = flat_files
def find_file(self, filename):
# FIXME: thread safety
if not self._dirtree:
self._init_dirtree()
if '.' not in filename: # no extension specified, use extension regex to filter
extensionless_re = re.compile(r'^{0}(\..+)?$'.format(re.escape(filename)))
# why doesn't Python have first()?
try:
# FIXME: store extensionless in a separate direct lookup?
filepath = next(os.path.join(r, f) for r, f in self._dirtree if extensionless_re.match(f))
except StopIteration:
raise IOError("couldn't find {0}".format(filename))
else: # actual filename, just look it up
# FIXME: this case sucks; make it a lookup
try:
filepath = next(os.path.join(r, f) for r, f in self._dirtree if f == filename)
except StopIteration:
raise IOError("couldn't find {0}".format(filename))
return filepath
def get_data(self, filename):
found_file = self.find_file(filename)
with open(found_file, 'rb') as fd:
return fd.read()
# TODO: implement these for easier inline debugging?
# def get_source(self, fullname):
# def get_code(self, fullname):
# def is_package(self, fullname):
class AnsibleCollectionRef:
# FUTURE: introspect plugin loaders to get these dynamically?
VALID_REF_TYPES = frozenset(to_text(r) for r in ['action', 'become', 'cache', 'callback', 'cliconf', 'connection',
'doc_fragments', 'filter', 'httpapi', 'inventory', 'lookup',
'module_utils', 'modules', 'netconf', 'role', 'shell', 'strategy',
'terminal', 'test', 'vars'])
# FIXME: tighten this up to match Python identifier reqs, etc
VALID_COLLECTION_NAME_RE = re.compile(to_text(r'^(\w+)\.(\w+)$'))
VALID_SUBDIRS_RE = re.compile(to_text(r'^\w+(\.\w+)*$'))
VALID_FQCR_RE = re.compile(to_text(r'^\w+\.\w+\.\w+(\.\w+)*$')) # can have 0-N included subdirs as well
def __init__(self, collection_name, subdirs, resource, ref_type):
"""
Create an AnsibleCollectionRef from components
:param collection_name: a collection name of the form 'namespace.collectionname'
:param subdirs: optional subdir segments to be appended below the plugin type (eg, 'subdir1.subdir2')
:param resource: the name of the resource being referenced (eg, 'mymodule', 'someaction', 'a_role')
:param ref_type: the type of the reference, eg 'module', 'role', 'doc_fragment'
"""
collection_name = to_text(collection_name, errors='strict')
if subdirs is not None:
subdirs = to_text(subdirs, errors='strict')
resource = to_text(resource, errors='strict')
ref_type = to_text(ref_type, errors='strict')
if not self.is_valid_collection_name(collection_name):
raise ValueError('invalid collection name (must be of the form namespace.collection): {0}'.format(to_native(collection_name)))
if ref_type not in self.VALID_REF_TYPES:
raise ValueError('invalid collection ref_type: {0}'.format(ref_type))
self.collection = collection_name
if subdirs:
if not re.match(self.VALID_SUBDIRS_RE, subdirs):
raise ValueError('invalid subdirs entry: {0} (must be empty/None or of the form subdir1.subdir2)'.format(to_native(subdirs)))
self.subdirs = subdirs
else:
self.subdirs = u''
self.resource = resource
self.ref_type = ref_type
package_components = [u'ansible_collections', self.collection]
if self.ref_type == u'role':
package_components.append(u'roles')
else:
# we assume it's a plugin
package_components += [u'plugins', self.ref_type]
if self.subdirs:
package_components.append(self.subdirs)
if self.ref_type == u'role':
# roles are their own resource
package_components.append(self.resource)
self.n_python_package_name = to_native('.'.join(package_components))
@staticmethod
def from_fqcr(ref, ref_type):
"""
Parse a string as a fully-qualified collection reference, raises ValueError if invalid
:param ref: collection reference to parse (a valid ref is of the form 'ns.coll.resource' or 'ns.coll.subdir1.subdir2.resource')
:param ref_type: the type of the reference, eg 'module', 'role', 'doc_fragment'
:return: a populated AnsibleCollectionRef object
"""
# assuming the fq_name is of the form (ns).(coll).(optional_subdir_N).(resource_name),
# we split the resource name off the right, split ns and coll off the left, and we're left with any optional
# subdirs that need to be added back below the plugin-specific subdir we'll add. So:
# ns.coll.resource -> ansible_collections.ns.coll.plugins.(plugintype).resource
# ns.coll.subdir1.resource -> ansible_collections.ns.coll.plugins.subdir1.(plugintype).resource
# ns.coll.rolename -> ansible_collections.ns.coll.roles.rolename
if not AnsibleCollectionRef.is_valid_fqcr(ref):
raise ValueError('{0} is not a valid collection reference'.format(to_native(ref)))
ref = to_text(ref, errors='strict')
ref_type = to_text(ref_type, errors='strict')
resource_splitname = ref.rsplit(u'.', 1)
package_remnant = resource_splitname[0]
resource = resource_splitname[1]
# split the left two components of the collection package name off, anything remaining is plugin-type
# specific subdirs to be added back on below the plugin type
package_splitname = package_remnant.split(u'.', 2)
if len(package_splitname) == 3:
subdirs = package_splitname[2]
else:
subdirs = u''
collection_name = u'.'.join(package_splitname[0:2])
return AnsibleCollectionRef(collection_name, subdirs, resource, ref_type)
@staticmethod
def try_parse_fqcr(ref, ref_type):
"""
Attempt to parse a string as a fully-qualified collection reference, returning None on failure (instead of raising an error)
:param ref: collection reference to parse (a valid ref is of the form 'ns.coll.resource' or 'ns.coll.subdir1.subdir2.resource')
:param ref_type: the type of the reference, eg 'module', 'role', 'doc_fragment'
:return: a populated AnsibleCollectionRef object on successful parsing, else None
"""
try:
return AnsibleCollectionRef.from_fqcr(ref, ref_type)
except ValueError:
pass
@staticmethod
def legacy_plugin_dir_to_plugin_type(legacy_plugin_dir_name):
"""
Utility method to convert from a PluginLoader dir name to a plugin ref_type
:param legacy_plugin_dir_name: PluginLoader dir name (eg, 'action_plugins', 'library')
:return: the corresponding plugin ref_type (eg, 'action', 'role')
"""
legacy_plugin_dir_name = to_text(legacy_plugin_dir_name)
plugin_type = legacy_plugin_dir_name.replace(u'_plugins', u'')
if plugin_type == u'library':
plugin_type = u'modules'
if plugin_type not in AnsibleCollectionRef.VALID_REF_TYPES:
raise ValueError('{0} cannot be mapped to a valid collection ref type'.format(to_native(legacy_plugin_dir_name)))
return plugin_type
@staticmethod
def is_valid_fqcr(ref, ref_type=None):
"""
Validates if is string is a well-formed fully-qualified collection reference (does not look up the collection itself)
:param ref: candidate collection reference to validate (a valid ref is of the form 'ns.coll.resource' or 'ns.coll.subdir1.subdir2.resource')
:param ref_type: optional reference type to enable deeper validation, eg 'module', 'role', 'doc_fragment'
:return: True if the collection ref passed is well-formed, False otherwise
"""
ref = to_text(ref)
if not ref_type:
return bool(re.match(AnsibleCollectionRef.VALID_FQCR_RE, ref))
return bool(AnsibleCollectionRef.try_parse_fqcr(ref, ref_type))
@staticmethod
def is_valid_collection_name(collection_name):
"""
Validates if the given string is a well-formed collection name (does not look up the collection itself)
:param collection_name: candidate collection name to validate (a valid name is of the form 'ns.collname')
:return: True if the collection name passed is well-formed, False otherwise
"""
collection_name = to_text(collection_name)
return bool(re.match(AnsibleCollectionRef.VALID_COLLECTION_NAME_RE, collection_name))
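# Editor's illustrative sketch (not part of the original module), showing how a
# fully-qualified reference maps to a Python package name:
#
#   acr = AnsibleCollectionRef.from_fqcr('testns.testcoll.subdir1.mymodule', 'modules')
#   assert acr.collection == 'testns.testcoll'
#   assert acr.subdirs == 'subdir1'
#   assert acr.n_python_package_name == 'ansible_collections.testns.testcoll.plugins.modules.subdir1'
#
#   role_acr = AnsibleCollectionRef.from_fqcr('testns.testcoll.myrole', 'role')
#   assert role_acr.n_python_package_name == 'ansible_collections.testns.testcoll.roles.myrole'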
def get_collection_role_path(role_name, collection_list=None):
acr = AnsibleCollectionRef.try_parse_fqcr(role_name, 'role')
if acr:
# looks like a valid qualified collection ref; skip the collection_list
role = acr.resource
collection_list = [acr.collection]
subdirs = acr.subdirs
resource = acr.resource
elif not collection_list:
return None # not a FQ role and no collection search list spec'd, nothing to do
else:
resource = role_name # treat as unqualified, loop through the collection search list to try and resolve
subdirs = ''
for collection_name in collection_list:
try:
acr = AnsibleCollectionRef(collection_name=collection_name, subdirs=subdirs, resource=resource, ref_type='role')
# FIXME: error handling/logging; need to catch any import failures and move along
# FIXME: this line shouldn't be necessary, but py2 pkgutil.get_data is delegating back to built-in loader when it shouldn't
pkg = import_module(acr.n_python_package_name)
if pkg is not None:
# the package is now loaded, get the collection's package and ask where it lives
path = os.path.dirname(to_bytes(sys.modules[acr.n_python_package_name].__file__, errors='surrogate_or_strict'))
return resource, to_text(path, errors='surrogate_or_strict'), collection_name
except IOError:
continue
except Exception as ex:
# FIXME: pick out typical import errors first, then error logging
continue
return None
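# Editor's illustrative sketch (not part of the original module). A fully-qualified role
# name resolves directly, while an unqualified name walks the supplied search list:
#
#   get_collection_role_path('testns.testcoll.myrole')
#   get_collection_role_path('myrole', collection_list=['testns.testcoll'])
#
# Either call returns (role_name, role_path, collection_name) when the collection
# package is importable, or None otherwise.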
_N_COLLECTION_PATH_RE = re.compile(r'/ansible_collections/([^/]+)/([^/]+)')
def get_collection_name_from_path(path):
"""
Return the containing collection name for a given path, or None if the path is not below a configured collection, or
the collection cannot be loaded (eg, the collection is masked by another of the same name higher in the configured
collection roots).
:param path: path to evaluate for collection containment
:return: collection name or None
"""
n_collection_paths = [to_native(os.path.realpath(to_bytes(p))) for p in AnsibleCollectionLoader().n_collection_paths]
b_path = os.path.realpath(to_bytes(path))
n_path = to_native(b_path)
for coll_path in n_collection_paths:
common_prefix = to_native(os.path.commonprefix([b_path, to_bytes(coll_path)]))
if common_prefix == coll_path:
# strip off the common prefix (handle weird testing cases of nested collection roots, eg)
collection_remnant = n_path[len(coll_path):]
# commonprefix may include the trailing /, prepend to the remnant if necessary (eg trailing / on root)
if collection_remnant and collection_remnant[0] != '/':
collection_remnant = '/' + collection_remnant
# the path lives under this collection root, see if it maps to a collection
found_collection = _N_COLLECTION_PATH_RE.search(collection_remnant)
if not found_collection:
continue
n_collection_name = '{0}.{1}'.format(*found_collection.groups())
loaded_collection_path = AnsibleCollectionLoader().get_collection_path(n_collection_name)
if not loaded_collection_path:
return None
# ensure we're using the canonical real path, with the bogus __synthetic__ stripped off
b_loaded_collection_path = os.path.dirname(os.path.realpath(to_bytes(loaded_collection_path)))
# if the collection path prefix matches the path prefix we were passed, it's the same collection that's loaded
if os.path.commonprefix([b_path, b_loaded_collection_path]) == b_loaded_collection_path:
return n_collection_name
return None # if not, it's a collection, but not the same collection the loader sees, so ignore it
def set_collection_playbook_paths(b_playbook_paths):
AnsibleCollectionLoader().set_playbook_paths(b_playbook_paths)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,298 |
using setup module under a role provided via a collection fails due to the wrong module being picked
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
A role provided via a collection that includes a call to `setup: gather_subset` fails because the Windows PowerShell version of `setup` is picked.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ansible plugin loader
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible 2.9.1
config file = None
configured module search path = ['/home/aschultz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/aschultz/.virtualenvs/ansible/lib/python3.6/site-packages/ansible
executable location = /home/aschultz/.virtualenvs/ansible/bin/ansible
python version = 3.6.8 (default, Oct 8 2019, 16:29:04) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
Also tested
```
ansible 2.10.0.dev0
config file = None
configured module search path = ['/home/aschultz/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/aschultz/.virtualenvs/ansible-devel/lib/python3.6/site-packages/ansible-2.10.0.dev0-py3.6.egg/ansible
executable location = /home/aschultz/.virtualenvs/ansible-devel/bin/ansible
python version = 3.6.8 (default, Oct 8 2019, 16:29:04) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
No changes
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
On a fedora31 host running under python 3.6.8 in a virtual environment against a CentOS7 host or against the localhost
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
I have provided a sample collection that reproduces the problem:
https://github.com/mwhahaha/ansible-collection-failure
On a linux host do:
```
git clone https://github.com/mwhahaha/ansible-collection-failure
cd ansible-collection-failure
ansible-galaxy collection build failure
ansible-galaxy collection install mwhahaha-failure-1.0.0.tar.gz
ansible-playbook sigh.yml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
setup gather_subset should complete successfully.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Ansible attempts to use the powershell setup module on a linux system.
<!--- Paste verbatim command output between quotes -->
```
$ ansible-playbook sigh.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] *********************************************************************************************************************************************************************
TASK [orly] **************************************************************************************************************************************************************************
TASK [sadness : Gather facts for sadness] ********************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: powershell: command not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 127}
PLAY RECAP ***************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/65298
|
https://github.com/ansible/ansible/pull/65776
|
74e9b1e2190b4fa7f6fa59294d03ea154d44cfd8
|
6f76a48f59e4d1936f3f3bd1711b3999e1f3869b
| 2019-11-26T22:56:11Z |
python
| 2019-12-16T16:28:24Z |
test/integration/targets/collections/collection_root_user/ansible_collections/testns/testcoll/roles/testrole/tasks/main.yml
|
- name: check collections list from role meta
plugin_lookup:
register: pluginlookup_out
- name: call role-local ping module
ping:
register: ping_out
- name: call unqualified module in another collection listed in role meta (testns.coll_in_sys)
systestmodule:
register: systestmodule_out
# verify that pluginloader caching doesn't prevent us from explicitly calling a builtin plugin with the same name
- name: call builtin ping module explicitly
ansible.builtin.ping:
register: builtinping_out
- debug:
msg: '{{ test_role_input | default("(undefined)") }}'
register: test_role_output
- set_fact:
testrole_source: collection
# FIXME: add tests to ensure that block/task level stuff in a collection-hosted role properly inherit role default/meta values
- assert:
that:
- pluginlookup_out.collection_list == ['testns.testcoll', 'ansible.builtin', 'testns.coll_in_sys', 'bogus.fromrolemeta']
- ping_out.source is defined and ping_out.source == 'user'
- systestmodule_out.source is defined and systestmodule_out.source == 'sys'
- builtinping_out.ping is defined and builtinping_out.ping == 'pong'
- test_role_input is not defined or test_role_input == test_role_output.msg
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,418 |
unit test documentation contains incorrect import references
|
##### SUMMARY
After this PR https://github.com/ansible/ansible/pull/46996 the Ansible unit tests were moved out of the `ansible` package scope.
The documentation page for `Unit Testing Ansible Modules` https://docs.ansible.com/ansible/latest/dev_guide/testing_units_modules.html#how-to-unit-test-ansible-modules has not been updated accordingly, and I'm trying to find a package on PyPI which contains the Ansible unit-testing libs that were removed from the main ansible package.
The documentation seems to be outdated after the new unit-testing tools structure was implemented, so how do I test my module?
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
https://docs.ansible.com/ansible/latest/dev_guide/testing_units_modules.html#how-to-unit-test-ansible-modules
##### ANSIBLE VERSION
ansible 2.9.1
|
https://github.com/ansible/ansible/issues/65418
|
https://github.com/ansible/ansible/pull/65775
|
6f76a48f59e4d1936f3f3bd1711b3999e1f3869b
|
40fb46f1e80df19b4843340d9d0983bcf1bd74b5
| 2019-12-02T10:49:52Z |
python
| 2019-12-16T17:07:26Z |
docs/docsite/rst/dev_guide/testing_units_modules.rst
|
:orphan:
.. _testing_units_modules:
****************************
Unit Testing Ansible Modules
****************************
.. highlight:: python
.. contents:: Topics
Introduction
============
This document explains why, how and when you should use unit tests for Ansible modules.
The document doesn't apply to other parts of Ansible for which the recommendations are
normally closer to the Python standard. There is basic documentation for Ansible unit
tests in the developer guide :ref:`testing_units`. This document should
be readable for a new Ansible module author. If you find it incomplete or confusing,
please open a bug or ask for help on Ansible IRC.
What Are Unit Tests?
====================
Ansible includes a set of unit tests in the :file:`test/units` directory. These tests primarily cover the
internals but can also cover Ansible modules. The structure of the unit tests matches
the structure of the code base, so the tests that reside in the :file:`test/units/modules/` directory
are organized by module groups.
Integration tests can be used for most modules, but there are situations where
cases cannot be verified using integration tests. This means that Ansible unit test cases
may extend beyond testing only minimal units and in some cases will include some
level of functional testing.
Why Use Unit Tests?
===================
Ansible unit tests have advantages and disadvantages. It is important to understand these.
Advantages include:
* Most unit tests are much faster than most Ansible integration tests. The complete suite
of unit tests can be run regularly by a developer on their local system.
* Unit tests can be run by developers who don't have access to the system which the module is
designed to work on, allowing a level of verification that changes to core functions
haven't broken module expectations.
* Unit tests can easily substitute system functions, allowing testing of software that
would otherwise be impractical to test. For example, the ``sleep()`` function can be replaced and we check
that a ten minute sleep was called without actually waiting ten minutes.
* Unit tests are run on different Python versions. This allows us to
ensure that the code behaves in the same way on different Python versions.
There are also some potential disadvantages to unit tests. Unit tests don't normally
test the actual, user-visible features of software directly, only the internal
implementation. Other disadvantages include:
* Unit tests that test the internal, non-visible features of software may make
refactoring difficult if those internal features have to change (see also
`Naming unit tests`_ below)
* Even if the internal feature is working correctly it is possible that there will be a
problem between the internal code tested and the actual result delivered to the user
Normally the Ansible integration tests (which are written in Ansible YAML) provide better
testing for most module functionality. If those tests already test a feature and perform
well there may be little point in providing a unit test covering the same area as well.
When To Use Unit Tests
======================
There are a number of situations where unit tests are a better choice than integration
tests. For example, testing things which are impossible, slow or very difficult to test
with integration tests, such as:
* Forcing rare / strange / random situations that can't be forced, such as specific network
failures and exceptions
* Extensive testing of slow configuration APIs
* Situations where the integration tests cannot be run as part of the main Ansible
continuous integration running in Shippable.
Providing quick feedback
------------------------
Example:
A single step of the rds_instance test cases can take up to 20
minutes (the time to create an RDS instance in Amazon). The entire
test run can last for well over an hour. All 16 of the unit tests
complete execution in less than 2 seconds.
The time saving provided by being able to run the code in a unit test makes it worth
creating a unit test when bug fixing a module, even if those tests do not often identify
problems later. As a basic goal, every module should have at least one unit test which
will give quick feedback in easy cases without having to wait for the integration tests to
complete.
Ensuring correct use of external interfaces
-------------------------------------------
Unit tests can check the way in which external services are run to ensure that they match
specifications or are as efficient as possible *even when the final output will not be changed*.
Example:
Package managers are often far more efficient when installing multiple packages at once
rather than each package separately. The final result is the
same: the packages are all installed, so the efficiency is difficult to verify through
integration tests. By providing a mock package manager and verifying that it is called
once, we can build a valuable test for module efficiency.
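A minimal sketch of such a test (``install_packages`` and the ``install`` method on the
mocked package manager are hypothetical names, not a real Ansible API)::

    from units.compat.mock import MagicMock

    def test_packages_installed_in_one_call():
        pkg_mgr = MagicMock()
        install_packages(pkg_mgr, ['httpd', 'mariadb', 'php'])  # hypothetical function under test
        # the valuable check: one call covering all packages, not one call per package
        pkg_mgr.install.assert_called_once_with(['httpd', 'mariadb', 'php'])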
Another related use is in the situation where an API has versions which behave
differently. A programmer working on a new version may change the module to work with the
new API version and unintentionally break the old version. A test case
which checks that the call happens properly for the old version can help avoid the
problem. In this situation it is very important to include version numbering in the test case
name (see `Naming unit tests`_ below).
Providing specific design tests
--------------------------------
By building a requirement for a particular part of the
code and then coding to that requirement, unit tests *can* sometimes improve the code and
help future developers understand that code.
Unit tests that test internal implementation details of code, on the other hand, almost
always do more harm than good. Testing that your packages to install are stored in a list
would slow down and confuse a future developer who might need to change that list into a
dictionary for efficiency. This problem can be reduced somewhat with clear test naming so
that the future developer immediately knows to delete the test case, but it is often
better to simply leave out the test case altogether and test for a real valuable feature
of the code, such as installing all of the packages supplied as arguments to the module.
How to unit test Ansible modules
================================
There are a number of techniques for unit testing modules. Beware that most
modules without unit tests are structured in a way that makes testing quite difficult and
can lead to very complicated tests which need more work than the code. Effectively using unit
tests may lead you to restructure your code. This is often a good thing and leads
to better code overall. Good restructuring can make your code clearer and easier to understand.
Naming unit tests
-----------------
Unit tests should have logical names. If a developer working on the module being tested
breaks the test case, it should be easy to figure out what the unit test covers from the name.
If a unit test is designed to verify compatibility with a specific software or API version
then include the version in the name of the unit test.
As an example, ``test_v2_state_present_should_call_create_server_with_name()`` would be a
good name, ``test_create_server()`` would not be.
Use of Mocks
------------
Mock objects (from https://docs.python.org/3/library/unittest.mock.html) can be very
useful in building unit tests for special / difficult cases, but they can also
lead to complex and confusing coding situations. One good use for mocks would be in
simulating an API. As with ``six``, a compatible ``mock`` package is bundled with Ansible's test framework (use
``import units.compat.mock``). See, for example, the sketch below and the mocking of :meth:`AnsibleModule.run_command` later in this document.
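As a short sketch, a simulated API client can return canned data and then a rare failure
that integration tests could not reliably trigger (the client here is purely
hypothetical)::

    from units.compat.mock import MagicMock

    api_client = MagicMock()
    # first call succeeds; second call simulates a rare network failure
    api_client.get.side_effect = [
        {'status': 'running'},
        IOError('connection reset by peer'),
    ]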
Ensuring failure cases are visible with mock objects
----------------------------------------------------
Functions like :meth:`module.fail_json` are normally expected to terminate execution. When you
run with a mock module object this doesn't happen since the mock always returns another mock
from a function call. You can set up the mock to raise an exception as shown above, or you can
assert that these functions have not been called in each test. For example::
module = MagicMock()
function_to_test(module, argument)
module.fail_json.assert_not_called()
This applies not only to calling the module's ``main`` function, but to almost any other
function in a module which receives the module object.
Mocking of the actual module
----------------------------
The setup of an actual module is quite complex (see `Passing Arguments`_ below) and often
isn't needed for most functions which use a module. Instead you can use a mock object as
the module and create any module attributes needed by the function you are testing. If
you do this, beware that the module exit functions need special handling as mentioned
above, either by throwing an exception or ensuring that they haven't been called. For example::
class AnsibleExitJson(Exception):
"""Exception class to be raised by module.exit_json and caught by the test case"""
pass
# you may also do the same for fail_json
module = MagicMock()
module.exit_json.side_effect = AnsibleExitJson

with self.assertRaises(AnsibleExitJson):
    my_module.test_this_function(module, argument)
module.fail_json.assert_not_called()
assert module.exit_json.call_args[1]['changed'] is True
API definition with unit test cases
-----------------------------------
API interaction is usually best tested with the function tests defined in Ansible's
integration testing section, which run against the actual API. There are several cases
where the unit tests are likely to work better.
Defining a module against an API specification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This case is especially important for modules interacting with web services, which provide
an API that Ansible uses but which are beyond the control of the user.
By writing a custom emulation of the calls that return data from the API, we can ensure
that only the features which are clearly defined in the specification of the API are
present in the message. This means that we can check that we use the correct
parameters and nothing else.
*Example: in rds_instance unit tests a simple instance state is defined*::
def simple_instance_list(status, pending):
return {u'DBInstances': [{u'DBInstanceArn': 'arn:aws:rds:us-east-1:1234567890:db:fakedb',
u'DBInstanceStatus': status,
u'PendingModifiedValues': pending,
u'DBInstanceIdentifier': 'fakedb'}]}
This is then used to create a list of states::
rds_client_double = MagicMock()
rds_client_double.describe_db_instances.side_effect = [
simple_instance_list('rebooting', {"a": "b", "c": "d"}),
simple_instance_list('available', {"c": "d", "e": "f"}),
simple_instance_list('rebooting', {"a": "b"}),
simple_instance_list('rebooting', {"e": "f", "g": "h"}),
simple_instance_list('rebooting', {}),
simple_instance_list('available', {"g": "h", "i": "j"}),
simple_instance_list('rebooting', {"i": "j", "k": "l"}),
simple_instance_list('available', {}),
simple_instance_list('available', {}),
]
These states are then used as returns from a mock object to ensure that the ``await`` function
waits through all of the states that would mean the RDS instance has not yet completed
configuration::
rds_i.await_resource(rds_client_double, "some-instance", "available", mod_mock,
await_pending=1)
assert(len(sleeper_double.mock_calls) > 5), "await_pending didn't wait enough"
By doing this we check that the ``await`` function will keep waiting through
potentially unusual states that would be impossible to reliably trigger through the
integration tests but which happen unpredictably in reality.
Defining a module to work against multiple API versions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This case is especially important for modules interacting with many different versions of
software; for example, package installation modules that might be expected to work with
many different operating system versions.
By using previously stored data from various versions of an API we can ensure that the
code is tested against the actual data which will be sent from that version of the system
even when the version is very obscure and unlikely to be available during testing.
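One way to sketch this is to parametrize a test over stored per-version responses
(``parse_version_info`` and the response payloads are hypothetical)::

    import pytest

    STORED_RESPONSES = {
        'v1': {'version': '1.0', 'data': {'name': 'db1'}},
        'v2': {'api_version': '2.0', 'payload': {'name': 'db1'}},
    }

    @pytest.mark.parametrize('api_version', sorted(STORED_RESPONSES))
    def test_parse_works_for_all_api_versions(api_version):
        info = parse_version_info(STORED_RESPONSES[api_version])  # hypothetical parser
        assert info['name'] == 'db1'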
Ansible special cases for unit testing
======================================
There are a number of special cases for unit testing the environment of an Ansible module.
The most common are documented below, and suggestions for others can be found by looking
at the source code of the existing unit tests or asking on the Ansible IRC channel or mailing
lists.
Module argument processing
--------------------------
There are two problems with running the main function of a module:
* Since the module is supposed to accept arguments on ``STDIN`` it is a bit difficult to
set up the arguments correctly so that the module will get them as parameters.
* All modules should finish by calling either the :meth:`module.fail_json` or
:meth:`module.exit_json`, but these won't work correctly in a testing environment.
Passing Arguments
-----------------
.. This section should be updated once https://github.com/ansible/ansible/pull/31456 is
closed since the function below will be provided in a library file.
To pass arguments to a module correctly, use the ``set_module_args`` function, which accepts a dictionary
as its parameter. Module creation and argument processing are
handled through the :class:`AnsibleModule` object in the basic section of the utilities. Normally
this accepts input on ``STDIN``, which is not convenient for unit testing. When the internal
``_ANSIBLE_ARGS`` variable is set, it will be treated as if the input came on ``STDIN`` to the module. Simply call ``set_module_args`` before setting up your module::
import json
from units.modules.utils import set_module_args
from ansible.module_utils._text import to_bytes
def test_already_registered(self):
set_module_args({
'activationkey': 'key',
'username': 'user',
'password': 'pass',
})
Handling exit correctly
-----------------------
.. This section should be updated once https://github.com/ansible/ansible/pull/31456 is
closed since the exit and failure functions below will be provided in a library file.
The :meth:`module.exit_json` function won't work properly in a testing environment since it
writes its return information to ``STDOUT`` upon exit, where it
is difficult to examine. This can be mitigated by replacing it (and :meth:`module.fail_json`) with
a function that raises an exception::
def exit_json(*args, **kwargs):
if 'changed' not in kwargs:
kwargs['changed'] = False
raise AnsibleExitJson(kwargs)
Now you can ensure that the first function called is the one you expected simply by
testing for the correct exception::
def test_returned_value(self):
set_module_args({
'activationkey': 'key',
'username': 'user',
'password': 'pass',
})
with self.assertRaises(AnsibleExitJson) as result:
my_module.main()
The same technique can be used to replace :meth:`module.fail_json` (which is used for failure
returns from modules) and for the ``aws_module.fail_json_aws()`` (used in modules for Amazon
Web Services).
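A matching replacement for the failure path, mirroring the complete example later in
this document, might look like this::

    def fail_json(*args, **kwargs):
        kwargs['failed'] = True
        raise AnsibleFailJson(kwargs)

    def test_module_fails_without_required_args(self):
        set_module_args({})
        with self.assertRaises(AnsibleFailJson):
            my_module.main()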
Running the main function
-------------------------
If you do want to run the actual main function of a module you must import the module, set
the arguments as above, set up the appropriate exit exception and then run the module::
# This test is based around pytest's features for individual test functions
import pytest
import ansible.modules.module.group.my_module as my_module
def test_main_function(monkeypatch):
monkeypatch.setattr(my_module.AnsibleModule, "exit_json", fake_exit_json)
set_module_args({
'activationkey': 'key',
'username': 'user',
'password': 'pass',
})
my_module.main()
Handling calls to external executables
--------------------------------------
Modules must use :meth:`AnsibleModule.run_command` in order to execute an external command. In
unit tests, this method needs to be mocked:
Here is a simple mock of :meth:`AnsibleModule.run_command` (taken from :file:`test/units/modules/packaging/os/test_rhn_register.py`)::
with patch.object(basic.AnsibleModule, 'run_command') as run_command:
run_command.return_value = 0, '', '' # successful execution, no output
with self.assertRaises(AnsibleExitJson) as result:
self.module.main()
self.assertFalse(result.exception.args[0]['changed'])
# Check that run_command has been called (two equivalent ways):
run_command.assert_called_once_with('/usr/bin/command args')
self.assertEqual(run_command.call_count, 1)
# or, for the opposite check, that run_command has *not* been called:
# self.assertFalse(run_command.called)
A Complete Example
------------------
The following example is a complete skeleton that reuses the mocks explained above and adds a new
mock for :meth:`AnsibleModule.get_bin_path`::
import json
from units.compat import unittest
from units.compat.mock import patch
from ansible.module_utils import basic
from ansible.module_utils._text import to_bytes
from ansible.modules.namespace import my_module
def set_module_args(args):
"""prepare arguments so that they will be picked up during module creation"""
args = json.dumps({'ANSIBLE_MODULE_ARGS': args})
basic._ANSIBLE_ARGS = to_bytes(args)
class AnsibleExitJson(Exception):
"""Exception class to be raised by module.exit_json and caught by the test case"""
pass
class AnsibleFailJson(Exception):
"""Exception class to be raised by module.fail_json and caught by the test case"""
pass
def exit_json(*args, **kwargs):
"""function to patch over exit_json; package return data into an exception"""
if 'changed' not in kwargs:
kwargs['changed'] = False
raise AnsibleExitJson(kwargs)
def fail_json(*args, **kwargs):
"""function to patch over fail_json; package return data into an exception"""
kwargs['failed'] = True
raise AnsibleFailJson(kwargs)
def get_bin_path(self, arg, required=False):
"""Mock AnsibleModule.get_bin_path"""
if arg.endswith('my_command'):
return '/usr/bin/my_command'
else:
if required:
fail_json(msg='%r not found !' % arg)
class TestMyModule(unittest.TestCase):
def setUp(self):
self.mock_module_helper = patch.multiple(basic.AnsibleModule,
exit_json=exit_json,
fail_json=fail_json,
get_bin_path=get_bin_path)
self.mock_module_helper.start()
self.addCleanup(self.mock_module_helper.stop)
def test_module_fail_when_required_args_missing(self):
with self.assertRaises(AnsibleFailJson):
set_module_args({})
self.module.main()
def test_ensure_command_called(self):
set_module_args({
'param1': 10,
'param2': 'test',
})
with patch.object(basic.AnsibleModule, 'run_command') as mock_run_command:
stdout = 'configuration updated'
stderr = ''
rc = 0
mock_run_command.return_value = rc, stdout, stderr # successful execution
with self.assertRaises(AnsibleExitJson) as result:
my_module.main()
self.assertFalse(result.exception.args[0]['changed'])  # ensure result is not changed
mock_run_command.assert_called_once_with('/usr/bin/my_command --value 10 --name test')
Restructuring modules to enable testing module set up and other processes
-------------------------------------------------------------------------
Often modules have a ``main()`` function which sets up the module and then performs other
actions. This can make it difficult to check argument processing. This can be made easier by
moving module configuration and initialization into a separate function. For example::
argument_spec = dict(
# module function variables
state=dict(choices=['absent', 'present', 'rebooted', 'restarted'], default='present'),
apply_immediately=dict(type='bool', default=False),
wait=dict(type='bool', default=False),
wait_timeout=dict(type='int', default=600),
allocated_storage=dict(type='int', aliases=['size']),
db_instance_identifier=dict(aliases=["id"], required=True),
)
def setup_module_object():
module = AnsibleAWSModule(
argument_spec=argument_spec,
required_if=required_if,
mutually_exclusive=[['old_instance_id', 'source_db_instance_identifier',
'db_snapshot_identifier']],
)
return module
def main():
module = setup_module_object()
validate_parameters(module)
conn = setup_client(module)
return_dict = run_task(module, conn)
module.exit_json(**return_dict)
This now makes it possible to run tests against the module initiation function::
def test_rds_module_setup_fails_if_db_instance_identifier_parameter_missing():
# db_instance_identifier parameter is missing
set_module_args({
'state': 'absent',
'apply_immediately': 'True',
})
with self.assertRaises(AnsibleFailJson) as result:
    setup_module_object()
See also ``test/units/module_utils/aws/test_rds.py``
Note that the ``argument_spec`` dictionary is visible in a module variable. This has
advantages, both in allowing explicit testing of the arguments and in allowing the easy
creation of module objects for testing.
The same restructuring technique can be valuable for testing other functionality, such as the part of the module which queries the object that the module configures.
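As a sketch, once the query logic lives in its own function it can be exercised with a
mocked connection, reusing the ``simple_instance_list`` helper shown earlier
(``get_instance_state`` is a hypothetical name)::

    def get_instance_state(conn, instance_id):
        """Query the state of the object that the module configures."""
        response = conn.describe_db_instances(DBInstanceIdentifier=instance_id)
        return response['DBInstances'][0]['DBInstanceStatus']

    def test_get_instance_state_reads_status():
        conn = MagicMock()
        conn.describe_db_instances.return_value = simple_instance_list('available', {})
        assert get_instance_state(conn, 'fakedb') == 'available'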
Traps for maintaining Python 2 compatibility
============================================
If you use the ``mock`` library backported for Python 2.6, a number of the
assert functions are missing but calls to them will return as if successful. This means that test cases should take great care *not* to use
functions marked as *new* in the Python 3 documentation, since the tests will likely always
succeed even if the code is broken when run on older versions of Python.
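A quick illustration of the trap (assuming an old ``mock`` release without the newer
assert helpers)::

    m = MagicMock()
    m.some_function(1)
    # On old mock releases a misspelled or not-yet-implemented assert helper is just an
    # unknown attribute: it returns a child Mock instead of raising, so the "assertion"
    # silently passes.
    m.some_function.assert_called_once()  # only exists in mock >= 2.0 / Python >= 3.6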
A helpful development approach is to ensure that all of the tests have been
run under Python 2.6 and that each assertion in the test cases has been checked to work by breaking
the code in Ansible to trigger that failure.
.. warning:: Maintain Python 2.6 compatibility
Please remember that modules need to maintain compatibility with Python 2.6 so the unittests for
modules should also be compatible with Python 2.6.
.. seealso::
:ref:`testing_units`
Ansible unit tests documentation
:ref:`testing_running_locally`
Running tests locally including gathering and reporting coverage data
:ref:`developing_modules_general`
Get started developing a module
`Python 3 documentation - 26.4. unittest - Unit testing framework <https://docs.python.org/3/library/unittest.html>`_
The documentation of the unittest framework in python 3
`Python 2 documentation - 25.3. unittest - Unit testing framework <https://docs.python.org/2/library/unittest.html>`_
The documentation of the earliest supported unittest framework - from Python 2.6
`pytest: helps you write better programs <https://docs.pytest.org/en/latest/>`_
The documentation of pytest - the framework actually used to run Ansible unit tests
`Development Mailing List <https://groups.google.com/group/ansible-devel>`_
Mailing list for development topics
`Testing Your Code (from The Hitchhiker's Guide to Python!) <https://docs.python-guide.org/writing/tests/>`_
General advice on testing Python code
`Uncle Bob's many videos on YouTube <https://www.youtube.com/watch?v=QedpQjxBPMA&list=PLlu0CT-JnSasQzGrGzddSczJQQU7295D2>`_
Unit testing is a part of various philosophies of software development, including
Extreme Programming (XP) and Clean Coding. Uncle Bob talks through how to benefit from this
`"Why Most Unit Testing is Waste" <https://rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf>`_
An article warning against the costs of unit testing
`'A Response to "Why Most Unit Testing is Waste"' <https://henrikwarne.com/2014/09/04/a-response-to-why-most-unit-testing-is-waste/>`_
A response pointing out how to maintain the value of unit tests
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,761 |
postgresql_privs fail after it's updated to 2.9.2
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
regarding the comment from https://github.com/ansible/ansible/pull/65098
```
{"attempts": 3, "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 102, in <module>\n File \"<stdin>\", line 94, in _ansiballz_main\n File \"<stdin>\", line 40, in invoke_module\n File \"/usr/lib/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_postgresql_privs_payload_gsems1r4/ansible_postgresql_privs_payload.zip/ansible/modules/database/postgresql/postgresql_privs.py\", line 1057, in <module>\n File \"/tmp/ansible_postgresql_privs_payload_gsems1r4/ansible_postgresql_privs_payload.zip/ansible/modules/database/postgresql/postgresql_privs.py\", line 1038, in main\n File \"/tmp/ansible_postgresql_privs_payload_gsems1r4/ansible_postgresql_privs_payload.zip/ansible/modules/database/postgresql/postgresql_privs.py\", line 752, in manipulate_privs\nTypeError: '<' not supported between instances of 'NoneType' and 'NoneType'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
The related changes were backported by https://github.com/ansible/ansible/pull/65098.
It seems that we need to add conditions to the `*.sort()` calls so they are skipped when the lists are empty or contain None elements.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```postgresql_privs```
```lib/ansible/modules/database/postgresql/postgresql_privs.py```
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9.2
```
in 2.9.1 it was ok
##### CONFIGURATION
Doesn't matter
##### OS / ENVIRONMENT
Doesn't matter
##### STEPS TO REPRODUCE
```yaml
- name: Create permissions for {{ client }}
postgresql_privs:
db: '{{ client }}'
roles: '{{ client }}'
privs: ALL
objs: ALL_IN_SCHEMA
register: result
retries: 3
delay: 10
until: result is not failed
become: true
become_user: postgres
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
``` ```
##### ACTUAL RESULTS
```
{"attempts": 3, "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 102, in <module>\n File \"<stdin>\", line 94, in _ansiballz_main\n File \"<stdin>\", line 40, in invoke_module\n File \"/usr/lib/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_postgresql_privs_payload_gsems1r4/ansible_postgresql_privs_payload.zip/ansible/modules/database/postgresql/postgresql_privs.py\", line 1057, in <module>\n File \"/tmp/ansible_postgresql_privs_payload_gsems1r4/ansible_postgresql_privs_payload.zip/ansible/modules/database/postgresql/postgresql_privs.py\", line 1038, in main\n File \"/tmp/ansible_postgresql_privs_payload_gsems1r4/ansible_postgresql_privs_payload.zip/ansible/modules/database/postgresql/postgresql_privs.py\", line 752, in manipulate_privs\nTypeError: '<' not supported between instances of 'NoneType' and 'NoneType'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
|
https://github.com/ansible/ansible/issues/65761
|
https://github.com/ansible/ansible/pull/65903
|
ec0885cf05027e0b220abf1feee96a9f7770cafa
|
9b85a51c64a687f8db4a9bfe3fea0f62f5f65af2
| 2019-12-12T10:48:40Z |
python
| 2019-12-17T13:53:51Z |
changelogs/fragments/65903-postgresql_privs_sort_lists_with_none_elements.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,761 |
postgresql_privs fail after it's updated to 2.9.2
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
regarding the comment from https://github.com/ansible/ansible/pull/65098
```
{"attempts": 3, "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 102, in <module>\n File \"<stdin>\", line 94, in _ansiballz_main\n File \"<stdin>\", line 40, in invoke_module\n File \"/usr/lib/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_postgresql_privs_payload_gsems1r4/ansible_postgresql_privs_payload.zip/ansible/modules/database/postgresql/postgresql_privs.py\", line 1057, in <module>\n File \"/tmp/ansible_postgresql_privs_payload_gsems1r4/ansible_postgresql_privs_payload.zip/ansible/modules/database/postgresql/postgresql_privs.py\", line 1038, in main\n File \"/tmp/ansible_postgresql_privs_payload_gsems1r4/ansible_postgresql_privs_payload.zip/ansible/modules/database/postgresql/postgresql_privs.py\", line 752, in manipulate_privs\nTypeError: '<' not supported between instances of 'NoneType' and 'NoneType'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
The related changes were backported by https://github.com/ansible/ansible/pull/65098.
It seems that we need to add conditions to the `*.sort()` calls so they are skipped when the lists are empty or contain None elements.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```postgresql_privs```
```lib/ansible/modules/database/postgresql/postgresql_privs.py```
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.9.2
```
in 2.9.1 it was ok
##### CONFIGURATION
Doesn't matter
##### OS / ENVIRONMENT
Doesn't matter
##### STEPS TO REPRODUCE
```yaml
- name: Create permissions for {{ client }}
postgresql_privs:
db: '{{ client }}'
roles: '{{ client }}'
privs: ALL
objs: ALL_IN_SCHEMA
register: result
retries: 3
delay: 10
until: result is not failed
become: true
become_user: postgres
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
``` ```
##### ACTUAL RESULTS
```
{"attempts": 3, "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 102, in <module>\n File \"<stdin>\", line 94, in _ansiballz_main\n File \"<stdin>\", line 40, in invoke_module\n File \"/usr/lib/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_postgresql_privs_payload_gsems1r4/ansible_postgresql_privs_payload.zip/ansible/modules/database/postgresql/postgresql_privs.py\", line 1057, in <module>\n File \"/tmp/ansible_postgresql_privs_payload_gsems1r4/ansible_postgresql_privs_payload.zip/ansible/modules/database/postgresql/postgresql_privs.py\", line 1038, in main\n File \"/tmp/ansible_postgresql_privs_payload_gsems1r4/ansible_postgresql_privs_payload.zip/ansible/modules/database/postgresql/postgresql_privs.py\", line 752, in manipulate_privs\nTypeError: '<' not supported between instances of 'NoneType' and 'NoneType'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
|
https://github.com/ansible/ansible/issues/65761
|
https://github.com/ansible/ansible/pull/65903
|
ec0885cf05027e0b220abf1feee96a9f7770cafa
|
9b85a51c64a687f8db4a9bfe3fea0f62f5f65af2
| 2019-12-12T10:48:40Z |
python
| 2019-12-17T13:53:51Z |
lib/ansible/modules/database/postgresql/postgresql_privs.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: Ansible Project
# Copyright: (c) 2019, Tobias Birkefeld (@tcraxs) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: postgresql_privs
version_added: '1.2'
short_description: Grant or revoke privileges on PostgreSQL database objects
description:
- Grant or revoke privileges on PostgreSQL database objects.
- This module is basically a wrapper around most of the functionality of
PostgreSQL's GRANT and REVOKE statements with detection of changes
(GRANT/REVOKE I(privs) ON I(type) I(objs) TO/FROM I(roles)).
options:
database:
description:
- Name of database to connect to.
required: yes
type: str
aliases:
- db
- login_db
state:
description:
- If C(present), the specified privileges are granted, if C(absent) they are revoked.
type: str
default: present
choices: [ absent, present ]
privs:
description:
- Comma separated list of privileges to grant/revoke.
type: str
aliases:
- priv
type:
description:
- Type of database object to set privileges on.
- The C(default_privs) choice is available starting at version 2.7.
- The C(foreign_data_wrapper) and C(foreign_server) object types are available from Ansible version '2.8'.
- The C(type) choice is available from Ansible version '2.10'.
type: str
default: table
choices: [ database, default_privs, foreign_data_wrapper, foreign_server, function,
group, language, table, tablespace, schema, sequence, type ]
objs:
description:
- Comma separated list of database objects to set privileges on.
- If I(type) is C(table), C(partition table), C(sequence) or C(function),
the special value C(ALL_IN_SCHEMA) can be provided instead to specify all
database objects of type I(type) in the schema specified via I(schema).
(This also works with PostgreSQL < 9.0.) (C(ALL_IN_SCHEMA) is available
for C(function) and C(partition table) from version 2.8)
- If I(type) is C(database), this parameter can be omitted, in which case
privileges are set for the database specified via I(database).
- 'If I(type) is I(function), colons (":") in object names will be
replaced with commas (needed to specify function signatures, see examples)'
type: str
aliases:
- obj
schema:
description:
- Schema that contains the database objects specified via I(objs).
- May only be provided if I(type) is C(table), C(sequence), C(function), C(type),
or C(default_privs). Defaults to C(public) in these cases.
- Pay attention, for embedded types when I(type=type)
I(schema) can be C(pg_catalog) or C(information_schema) respectively.
type: str
roles:
description:
- Comma separated list of role (user/group) names to set permissions for.
- The special value C(PUBLIC) can be provided instead to set permissions
for the implicitly defined PUBLIC group.
type: str
required: yes
aliases:
- role
fail_on_role:
version_added: '2.8'
description:
- If C(yes), fail when target role (for whom privs need to be granted) does not exist.
Otherwise just warn and continue.
default: yes
type: bool
session_role:
version_added: '2.8'
description:
- Switch to session_role after connecting.
- The specified session_role must be a role that the current login_user is a member of.
- Permissions checking for SQL commands is carried out as though the session_role were the one that had logged in originally.
type: str
target_roles:
description:
- A list of existing role (user/group) names to set as the
default permissions for database objects subsequently created by them.
- Parameter I(target_roles) is only available with C(type=default_privs).
type: str
version_added: '2.8'
grant_option:
description:
- Whether C(role) may grant/revoke the specified privileges/group memberships to others.
- Set to C(no) to revoke GRANT OPTION, leave unspecified to make no changes.
- I(grant_option) only has an effect if I(state) is C(present).
type: bool
aliases:
- admin_option
host:
description:
- Database host address. If unspecified, connect via Unix socket.
type: str
aliases:
- login_host
port:
description:
- Database port to connect to.
type: int
default: 5432
aliases:
- login_port
unix_socket:
description:
- Path to a Unix domain socket for local connections.
type: str
aliases:
- login_unix_socket
login:
description:
- The username to authenticate with.
type: str
default: postgres
aliases:
- login_user
password:
description:
- The password to authenticate with.
type: str
aliases:
- login_password
ssl_mode:
description:
- Determines whether or with what priority a secure SSL TCP/IP connection will be negotiated with the server.
- See https://www.postgresql.org/docs/current/static/libpq-ssl.html for more information on the modes.
- Default of C(prefer) matches libpq default.
type: str
default: prefer
choices: [ allow, disable, prefer, require, verify-ca, verify-full ]
version_added: '2.3'
ca_cert:
description:
- Specifies the name of a file containing SSL certificate authority (CA) certificate(s).
- If the file exists, the server's certificate will be verified to be signed by one of these authorities.
version_added: '2.3'
type: str
aliases:
- ssl_rootcert
notes:
- Parameters that accept comma separated lists (I(privs), I(objs), I(roles))
have singular alias names (I(priv), I(obj), I(role)).
- To revoke only C(GRANT OPTION) for a specific object, set I(state) to
C(present) and I(grant_option) to C(no) (see examples).
- Note that when revoking privileges from a role R, this role may still have
access via privileges granted to any role R is a member of including C(PUBLIC).
- Note that when revoking privileges from a role R, you do so as the user
specified via I(login). If R has been granted the same privileges by
another user also, R can still access database objects via these privileges.
- When revoking privileges, C(RESTRICT) is assumed (see PostgreSQL docs).
seealso:
- module: postgresql_user
- module: postgresql_owner
- module: postgresql_membership
- name: PostgreSQL privileges
description: General information about PostgreSQL privileges.
link: https://www.postgresql.org/docs/current/ddl-priv.html
- name: PostgreSQL GRANT command reference
description: Complete reference of the PostgreSQL GRANT command documentation.
link: https://www.postgresql.org/docs/current/sql-grant.html
- name: PostgreSQL REVOKE command reference
description: Complete reference of the PostgreSQL REVOKE command documentation.
link: https://www.postgresql.org/docs/current/sql-revoke.html
extends_documentation_fragment:
- postgres
author:
- Bernhard Weitzhofer (@b6d)
- Tobias Birkefeld (@tcraxs)
'''
EXAMPLES = r'''
# On database "library":
# GRANT SELECT, INSERT, UPDATE ON TABLE public.books, public.authors
# TO librarian, reader WITH GRANT OPTION
- name: Grant privs to librarian and reader on database library
postgresql_privs:
database: library
state: present
privs: SELECT,INSERT,UPDATE
type: table
objs: books,authors
schema: public
roles: librarian,reader
grant_option: yes
- name: Same as above leveraging default values
postgresql_privs:
db: library
privs: SELECT,INSERT,UPDATE
objs: books,authors
roles: librarian,reader
grant_option: yes
# REVOKE GRANT OPTION FOR INSERT ON TABLE books FROM reader
# Note that role "reader" will be *granted* INSERT privilege itself if this
# isn't already the case (since state: present).
- name: Revoke privs from reader
postgresql_privs:
db: library
state: present
priv: INSERT
obj: books
role: reader
grant_option: no
# "public" is the default schema. This also works for PostgreSQL 8.x.
- name: REVOKE INSERT, UPDATE ON ALL TABLES IN SCHEMA public FROM reader
postgresql_privs:
db: library
state: absent
privs: INSERT,UPDATE
objs: ALL_IN_SCHEMA
role: reader
- name: GRANT ALL PRIVILEGES ON SCHEMA public, math TO librarian
postgresql_privs:
db: library
privs: ALL
type: schema
objs: public,math
role: librarian
# Note the separation of arguments with colons.
- name: GRANT ALL PRIVILEGES ON FUNCTION math.add(int, int) TO librarian, reader
postgresql_privs:
db: library
privs: ALL
type: function
obj: add(int:int)
schema: math
roles: librarian,reader
# Note that group role memberships apply cluster-wide and therefore are not
# restricted to database "library" here.
- name: GRANT librarian, reader TO alice, bob WITH ADMIN OPTION
postgresql_privs:
db: library
type: group
objs: librarian,reader
roles: alice,bob
admin_option: yes
# Note that here "db: postgres" specifies the database to connect to, not the
# database to grant privileges on (which is specified via the "objs" param)
- name: GRANT ALL PRIVILEGES ON DATABASE library TO librarian
postgresql_privs:
db: postgres
privs: ALL
type: database
obj: library
role: librarian
# If objs is omitted for type "database", it defaults to the database
# to which the connection is established
- name: GRANT ALL PRIVILEGES ON DATABASE library TO librarian
postgresql_privs:
db: library
privs: ALL
type: database
role: librarian
# Available since version 2.7
# Objs must be set, ALL_DEFAULT to TABLES/SEQUENCES/TYPES/FUNCTIONS
# ALL_DEFAULT works only with privs=ALL
# For specific
- name: ALTER DEFAULT PRIVILEGES ON DATABASE library TO librarian
postgresql_privs:
db: library
objs: ALL_DEFAULT
privs: ALL
type: default_privs
role: librarian
grant_option: yes
# Available since version 2.7
# Objs must be set, ALL_DEFAULT to TABLES/SEQUENCES/TYPES/FUNCTIONS
# ALL_DEFAULT works only with privs=ALL
# For specific
- name: ALTER DEFAULT PRIVILEGES ON DATABASE library TO reader, step 1
postgresql_privs:
db: library
objs: TABLES,SEQUENCES
privs: SELECT
type: default_privs
role: reader
- name: ALTER DEFAULT PRIVILEGES ON DATABASE library TO reader, step 2
postgresql_privs:
db: library
objs: TYPES
privs: USAGE
type: default_privs
role: reader
# Available since version 2.8
- name: GRANT ALL PRIVILEGES ON FOREIGN DATA WRAPPER fdw TO reader
postgresql_privs:
db: test
objs: fdw
privs: ALL
type: foreign_data_wrapper
role: reader
# Available since version 2.10
- name: GRANT ALL PRIVILEGES ON TYPE customtype TO reader
postgresql_privs:
db: test
objs: customtype
privs: ALL
type: type
role: reader
# Available since version 2.8
- name: GRANT ALL PRIVILEGES ON FOREIGN SERVER fdw_server TO reader
postgresql_privs:
db: test
objs: fdw_server
privs: ALL
type: foreign_server
role: reader
# Available since version 2.8
# Grant 'execute' permissions on all functions in schema 'common' to role 'caller'
- name: GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA common TO caller
postgresql_privs:
type: function
state: present
privs: EXECUTE
roles: caller
objs: ALL_IN_SCHEMA
schema: common
# Available since version 2.8
# ALTER DEFAULT PRIVILEGES FOR ROLE librarian IN SCHEMA library GRANT SELECT ON TABLES TO reader
# GRANT SELECT privileges for new TABLES objects created by librarian as
# default to the role reader.
# For specific
- name: ALTER privs
postgresql_privs:
db: library
schema: library
objs: TABLES
privs: SELECT
type: default_privs
role: reader
target_roles: librarian
# Available since version 2.8
# ALTER DEFAULT PRIVILEGES FOR ROLE librarian IN SCHEMA library REVOKE SELECT ON TABLES FROM reader
# REVOKE SELECT privileges for new TABLES objects created by librarian as
# default from the role reader.
# For specific
- name: ALTER privs
postgresql_privs:
db: library
state: absent
schema: library
objs: TABLES
privs: SELECT
type: default_privs
role: reader
target_roles: librarian
# Available since version 2.10
- name: Grant type privileges for pg_catalog.numeric type to alice
postgresql_privs:
type: type
roles: alice
privs: ALL
objs: numeric
schema: pg_catalog
db: acme
'''
RETURN = r'''
queries:
description: List of executed queries.
returned: always
type: list
sample: ['REVOKE GRANT OPTION FOR INSERT ON TABLE "books" FROM "reader";']
version_added: '2.8'
'''
import traceback
PSYCOPG2_IMP_ERR = None
try:
import psycopg2
import psycopg2.extensions
except ImportError:
PSYCOPG2_IMP_ERR = traceback.format_exc()
psycopg2 = None
# import module snippets
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
from ansible.module_utils.database import pg_quote_identifier
from ansible.module_utils.postgres import postgres_common_argument_spec
from ansible.module_utils._text import to_native
VALID_PRIVS = frozenset(('SELECT', 'INSERT', 'UPDATE', 'DELETE', 'TRUNCATE',
'REFERENCES', 'TRIGGER', 'CREATE', 'CONNECT',
'TEMPORARY', 'TEMP', 'EXECUTE', 'USAGE', 'ALL'))
VALID_DEFAULT_OBJS = {'TABLES': ('ALL', 'SELECT', 'INSERT', 'UPDATE', 'DELETE', 'TRUNCATE', 'REFERENCES', 'TRIGGER'),
'SEQUENCES': ('ALL', 'SELECT', 'UPDATE', 'USAGE'),
'FUNCTIONS': ('ALL', 'EXECUTE'),
'TYPES': ('ALL', 'USAGE')}
executed_queries = []
class Error(Exception):
pass
def role_exists(module, cursor, rolname):
"""Check user exists or not"""
query = "SELECT 1 FROM pg_roles WHERE rolname = '%s'" % rolname
try:
cursor.execute(query)
return cursor.rowcount > 0
except Exception as e:
module.fail_json(msg="Cannot execute SQL '%s': %s" % (query, to_native(e)))
return False
# We don't have functools.partial in Python < 2.5
def partial(f, *args, **kwargs):
"""Partial function application"""
def g(*g_args, **g_kwargs):
new_kwargs = kwargs.copy()
new_kwargs.update(g_kwargs)
        return f(*(args + g_args), **new_kwargs)
g.f = f
g.args = args
g.kwargs = kwargs
return g
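# Illustrative usage sketch (names assumed, not executed by the module):
#   get_acls = partial(conn.get_table_acls, 'public')
#   get_acls(['books'])  # same as conn.get_table_acls('public', ['books'])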
class Connection(object):
"""Wrapper around a psycopg2 connection with some convenience methods"""
def __init__(self, params, module):
self.database = params.database
self.module = module
        # To use default values, keyword arguments must be absent, so
# check which values are empty and don't include in the **kw
# dictionary
params_map = {
"host": "host",
"login": "user",
"password": "password",
"port": "port",
"database": "database",
"ssl_mode": "sslmode",
"ca_cert": "sslrootcert"
}
kw = dict((params_map[k], getattr(params, k)) for k in params_map
if getattr(params, k) != '' and getattr(params, k) is not None)
# If a unix_socket is specified, incorporate it here.
is_localhost = "host" not in kw or kw["host"] == "" or kw["host"] == "localhost"
if is_localhost and params.unix_socket != "":
kw["host"] = params.unix_socket
sslrootcert = params.ca_cert
        if LooseVersion(psycopg2.__version__) < LooseVersion('2.4.3') and sslrootcert is not None:
            raise ValueError('psycopg2 must be at least 2.4.3 in order to use the ca_cert parameter')
self.connection = psycopg2.connect(**kw)
self.cursor = self.connection.cursor()
def commit(self):
self.connection.commit()
def rollback(self):
self.connection.rollback()
@property
def encoding(self):
"""Connection encoding in Python-compatible form"""
return psycopg2.extensions.encodings[self.connection.encoding]
# Methods for querying database objects
# PostgreSQL < 9.0 doesn't support "ALL TABLES IN SCHEMA schema"-like
# phrases in GRANT or REVOKE statements, therefore alternative methods are
# provided here.
def schema_exists(self, schema):
query = """SELECT count(*)
FROM pg_catalog.pg_namespace WHERE nspname = %s"""
self.cursor.execute(query, (schema,))
return self.cursor.fetchone()[0] > 0
def get_all_tables_in_schema(self, schema):
if not self.schema_exists(schema):
raise Error('Schema "%s" does not exist.' % schema)
query = """SELECT relname
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE nspname = %s AND relkind in ('r', 'v', 'm', 'p')"""
self.cursor.execute(query, (schema,))
return [t[0] for t in self.cursor.fetchall()]
def get_all_sequences_in_schema(self, schema):
if not self.schema_exists(schema):
raise Error('Schema "%s" does not exist.' % schema)
query = """SELECT relname
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE nspname = %s AND relkind = 'S'"""
self.cursor.execute(query, (schema,))
return [t[0] for t in self.cursor.fetchall()]
def get_all_functions_in_schema(self, schema):
if not self.schema_exists(schema):
raise Error('Schema "%s" does not exist.' % schema)
query = """SELECT p.proname, oidvectortypes(p.proargtypes)
FROM pg_catalog.pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE nspname = %s"""
self.cursor.execute(query, (schema,))
return ["%s(%s)" % (t[0], t[1]) for t in self.cursor.fetchall()]
# Methods for getting access control lists and group membership info
# To determine whether anything has changed after granting/revoking
# privileges, we compare the access control lists of the specified database
# objects before and afterwards. Python's list/string comparison should
# suffice for change detection, we should not actually have to parse ACLs.
# The same should apply to group membership information.
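    # Illustrative example (PostgreSQL's native aclitem format, data assumed):
    # a relacl value such as '{postgres=arwdDxt/postgres,reader=r/postgres}'
    # records that "reader" holds SELECT ("r") granted by "postgres".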
def get_table_acls(self, schema, tables):
query = """SELECT relacl
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE nspname = %s AND relkind in ('r','p','v','m') AND relname = ANY (%s)
ORDER BY relname"""
self.cursor.execute(query, (schema, tables))
return [t[0] for t in self.cursor.fetchall()]
def get_sequence_acls(self, schema, sequences):
query = """SELECT relacl
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE nspname = %s AND relkind = 'S' AND relname = ANY (%s)
ORDER BY relname"""
self.cursor.execute(query, (schema, sequences))
return [t[0] for t in self.cursor.fetchall()]
def get_function_acls(self, schema, function_signatures):
funcnames = [f.split('(', 1)[0] for f in function_signatures]
query = """SELECT proacl
FROM pg_catalog.pg_proc p
JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
WHERE nspname = %s AND proname = ANY (%s)
ORDER BY proname, proargtypes"""
self.cursor.execute(query, (schema, funcnames))
return [t[0] for t in self.cursor.fetchall()]
def get_schema_acls(self, schemas):
query = """SELECT nspacl FROM pg_catalog.pg_namespace
WHERE nspname = ANY (%s) ORDER BY nspname"""
self.cursor.execute(query, (schemas,))
return [t[0] for t in self.cursor.fetchall()]
def get_language_acls(self, languages):
query = """SELECT lanacl FROM pg_catalog.pg_language
WHERE lanname = ANY (%s) ORDER BY lanname"""
self.cursor.execute(query, (languages,))
return [t[0] for t in self.cursor.fetchall()]
def get_tablespace_acls(self, tablespaces):
query = """SELECT spcacl FROM pg_catalog.pg_tablespace
WHERE spcname = ANY (%s) ORDER BY spcname"""
self.cursor.execute(query, (tablespaces,))
return [t[0] for t in self.cursor.fetchall()]
def get_database_acls(self, databases):
query = """SELECT datacl FROM pg_catalog.pg_database
WHERE datname = ANY (%s) ORDER BY datname"""
self.cursor.execute(query, (databases,))
return [t[0] for t in self.cursor.fetchall()]
def get_group_memberships(self, groups):
query = """SELECT roleid, grantor, member, admin_option
FROM pg_catalog.pg_auth_members am
JOIN pg_catalog.pg_roles r ON r.oid = am.roleid
WHERE r.rolname = ANY(%s)
ORDER BY roleid, grantor, member"""
self.cursor.execute(query, (groups,))
return self.cursor.fetchall()
def get_default_privs(self, schema, *args):
query = """SELECT defaclacl
FROM pg_default_acl a
JOIN pg_namespace b ON a.defaclnamespace=b.oid
WHERE b.nspname = %s;"""
self.cursor.execute(query, (schema,))
return [t[0] for t in self.cursor.fetchall()]
def get_foreign_data_wrapper_acls(self, fdws):
query = """SELECT fdwacl FROM pg_catalog.pg_foreign_data_wrapper
WHERE fdwname = ANY (%s) ORDER BY fdwname"""
self.cursor.execute(query, (fdws,))
return [t[0] for t in self.cursor.fetchall()]
def get_foreign_server_acls(self, fs):
query = """SELECT srvacl FROM pg_catalog.pg_foreign_server
WHERE srvname = ANY (%s) ORDER BY srvname"""
self.cursor.execute(query, (fs,))
return [t[0] for t in self.cursor.fetchall()]
def get_type_acls(self, schema, types):
query = """SELECT t.typacl FROM pg_catalog.pg_type t
JOIN pg_catalog.pg_namespace n ON n.oid = t.typnamespace
WHERE n.nspname = %s AND t.typname = ANY (%s) ORDER BY typname"""
self.cursor.execute(query, (schema, types))
return [t[0] for t in self.cursor.fetchall()]
# Manipulating privileges
def manipulate_privs(self, obj_type, privs, objs, roles, target_roles,
state, grant_option, schema_qualifier=None, fail_on_role=True):
"""Manipulate database object privileges.
:param obj_type: Type of database object to grant/revoke
privileges for.
:param privs: Either a list of privileges to grant/revoke
or None if type is "group".
:param objs: List of database objects to grant/revoke
privileges for.
:param roles: Either a list of role names or "PUBLIC"
for the implicitly defined "PUBLIC" group
:param target_roles: List of role names to grant/revoke
default privileges as.
:param state: "present" to grant privileges, "absent" to revoke.
:param grant_option: Only for state "present": If True, set
grant/admin option. If False, revoke it.
If None, don't change grant option.
:param schema_qualifier: Some object types ("TABLE", "SEQUENCE",
"FUNCTION") must be qualified by schema.
Ignored for other Types.
"""
# get_status: function to get current status
if obj_type == 'table':
get_status = partial(self.get_table_acls, schema_qualifier)
elif obj_type == 'sequence':
get_status = partial(self.get_sequence_acls, schema_qualifier)
elif obj_type == 'function':
get_status = partial(self.get_function_acls, schema_qualifier)
elif obj_type == 'schema':
get_status = self.get_schema_acls
elif obj_type == 'language':
get_status = self.get_language_acls
elif obj_type == 'tablespace':
get_status = self.get_tablespace_acls
elif obj_type == 'database':
get_status = self.get_database_acls
elif obj_type == 'group':
get_status = self.get_group_memberships
elif obj_type == 'default_privs':
get_status = partial(self.get_default_privs, schema_qualifier)
elif obj_type == 'foreign_data_wrapper':
get_status = self.get_foreign_data_wrapper_acls
elif obj_type == 'foreign_server':
get_status = self.get_foreign_server_acls
elif obj_type == 'type':
get_status = partial(self.get_type_acls, schema_qualifier)
else:
raise Error('Unsupported database object type "%s".' % obj_type)
# Return False (nothing has changed) if there are no objs to work on.
if not objs:
return False
# obj_ids: quoted db object identifiers (sometimes schema-qualified)
if obj_type == 'function':
obj_ids = []
for obj in objs:
try:
f, args = obj.split('(', 1)
except Exception:
raise Error('Illegal function signature: "%s".' % obj)
obj_ids.append('"%s"."%s"(%s' % (schema_qualifier, f, args))
elif obj_type in ['table', 'sequence', 'type']:
obj_ids = ['"%s"."%s"' % (schema_qualifier, o) for o in objs]
else:
obj_ids = ['"%s"' % o for o in objs]
# set_what: SQL-fragment specifying what to set for the target roles:
# Either group membership or privileges on objects of a certain type
if obj_type == 'group':
set_what = ','.join('"%s"' % i for i in obj_ids)
elif obj_type == 'default_privs':
# We don't want privs to be quoted here
set_what = ','.join(privs)
else:
# function types are already quoted above
if obj_type != 'function':
obj_ids = [pg_quote_identifier(i, 'table') for i in obj_ids]
# Note: obj_type has been checked against a set of string literals
# and privs was escaped when it was parsed
# Note: Underscores are replaced with spaces to support multi-word obj_type
set_what = '%s ON %s %s' % (','.join(privs), obj_type.replace('_', ' '),
','.join(obj_ids))
# for_whom: SQL-fragment specifying for whom to set the above
if roles == 'PUBLIC':
for_whom = 'PUBLIC'
else:
for_whom = []
for r in roles:
if not role_exists(self.module, self.cursor, r):
if fail_on_role:
self.module.fail_json(msg="Role '%s' does not exist" % r.strip())
else:
self.module.warn("Role '%s' does not exist, pass it" % r.strip())
else:
for_whom.append('"%s"' % r)
if not for_whom:
return False
for_whom = ','.join(for_whom)
# as_who:
as_who = None
if target_roles:
as_who = ','.join('"%s"' % r for r in target_roles)
status_before = get_status(objs)
query = QueryBuilder(state) \
.for_objtype(obj_type) \
.with_grant_option(grant_option) \
.for_whom(for_whom) \
.as_who(as_who) \
.for_schema(schema_qualifier) \
.set_what(set_what) \
.for_objs(objs) \
.build()
executed_queries.append(query)
self.cursor.execute(query)
status_after = get_status(objs)
status_before.sort()
status_after.sort()
return status_before != status_after
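    # Illustrative example of the comparison above (data assumed): if
    # status_before == ['{postgres=arwdDxt/postgres}'] and status_after ==
    # ['{postgres=arwdDxt/postgres,reader=r/postgres}'], the sorted lists
    # differ and manipulate_privs() reports a change.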
class QueryBuilder(object):
def __init__(self, state):
self._grant_option = None
self._for_whom = None
self._as_who = None
self._set_what = None
self._obj_type = None
self._state = state
self._schema = None
self._objs = None
self.query = []
def for_objs(self, objs):
self._objs = objs
return self
def for_schema(self, schema):
self._schema = schema
return self
def with_grant_option(self, option):
self._grant_option = option
return self
def for_whom(self, who):
self._for_whom = who
return self
def as_who(self, target_roles):
self._as_who = target_roles
return self
def set_what(self, what):
self._set_what = what
return self
def for_objtype(self, objtype):
self._obj_type = objtype
return self
def build(self):
        if self._state == 'present':
            self.build_present()
        else:
            self.build_absent()
        return '\n'.join(self.query)
def add_default_revoke(self):
for obj in self._objs:
if self._as_who:
self.query.append(
'ALTER DEFAULT PRIVILEGES FOR ROLE {0} IN SCHEMA {1} REVOKE ALL ON {2} FROM {3};'.format(self._as_who,
self._schema, obj,
self._for_whom))
else:
self.query.append(
'ALTER DEFAULT PRIVILEGES IN SCHEMA {0} REVOKE ALL ON {1} FROM {2};'.format(self._schema, obj,
self._for_whom))
def add_grant_option(self):
if self._grant_option:
if self._obj_type == 'group':
self.query[-1] += ' WITH ADMIN OPTION;'
else:
self.query[-1] += ' WITH GRANT OPTION;'
else:
self.query[-1] += ';'
if self._obj_type == 'group':
self.query.append('REVOKE ADMIN OPTION FOR {0} FROM {1};'.format(self._set_what, self._for_whom))
elif not self._obj_type == 'default_privs':
self.query.append('REVOKE GRANT OPTION FOR {0} FROM {1};'.format(self._set_what, self._for_whom))
def add_default_priv(self):
for obj in self._objs:
if self._as_who:
self.query.append(
'ALTER DEFAULT PRIVILEGES FOR ROLE {0} IN SCHEMA {1} GRANT {2} ON {3} TO {4}'.format(self._as_who,
self._schema,
self._set_what,
obj,
self._for_whom))
else:
self.query.append(
'ALTER DEFAULT PRIVILEGES IN SCHEMA {0} GRANT {1} ON {2} TO {3}'.format(self._schema,
self._set_what,
obj,
self._for_whom))
self.add_grant_option()
if self._as_who:
self.query.append(
'ALTER DEFAULT PRIVILEGES FOR ROLE {0} IN SCHEMA {1} GRANT USAGE ON TYPES TO {2}'.format(self._as_who,
self._schema,
self._for_whom))
else:
self.query.append(
'ALTER DEFAULT PRIVILEGES IN SCHEMA {0} GRANT USAGE ON TYPES TO {1}'.format(self._schema, self._for_whom))
self.add_grant_option()
def build_present(self):
if self._obj_type == 'default_privs':
self.add_default_revoke()
self.add_default_priv()
else:
self.query.append('GRANT {0} TO {1}'.format(self._set_what, self._for_whom))
self.add_grant_option()
def build_absent(self):
if self._obj_type == 'default_privs':
self.query = []
for obj in ['TABLES', 'SEQUENCES', 'TYPES']:
if self._as_who:
self.query.append(
'ALTER DEFAULT PRIVILEGES FOR ROLE {0} IN SCHEMA {1} REVOKE ALL ON {2} FROM {3};'.format(self._as_who,
self._schema, obj,
self._for_whom))
else:
self.query.append(
'ALTER DEFAULT PRIVILEGES IN SCHEMA {0} REVOKE ALL ON {1} FROM {2};'.format(self._schema, obj,
self._for_whom))
else:
self.query.append('REVOKE {0} FROM {1};'.format(self._set_what, self._for_whom))
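    # Illustrative sketch of the builder output (values assumed, not executed):
    #   QueryBuilder('present').for_objtype('table').with_grant_option(False)
    #       .for_whom('"reader"').set_what('SELECT ON table "public"."books"')
    #       .for_objs(['books']).build()
    # returns the two statements:
    #   GRANT SELECT ON table "public"."books" TO "reader";
    #   REVOKE GRANT OPTION FOR SELECT ON table "public"."books" FROM "reader";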
def main():
argument_spec = postgres_common_argument_spec()
argument_spec.update(
database=dict(required=True, aliases=['db', 'login_db']),
state=dict(default='present', choices=['present', 'absent']),
privs=dict(required=False, aliases=['priv']),
type=dict(default='table',
choices=['table',
'sequence',
'function',
'database',
'schema',
'language',
'tablespace',
'group',
'default_privs',
'foreign_data_wrapper',
'foreign_server',
'type', ]),
objs=dict(required=False, aliases=['obj']),
schema=dict(required=False),
roles=dict(required=True, aliases=['role']),
session_role=dict(required=False),
target_roles=dict(required=False),
grant_option=dict(required=False, type='bool',
aliases=['admin_option']),
host=dict(default='', aliases=['login_host']),
unix_socket=dict(default='', aliases=['login_unix_socket']),
login=dict(default='postgres', aliases=['login_user']),
password=dict(default='', aliases=['login_password'], no_log=True),
fail_on_role=dict(type='bool', default=True),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
fail_on_role = module.params['fail_on_role']
# Create type object as namespace for module params
p = type('Params', (), module.params)
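    # Illustrative example (params assumed): with database=library and schema
    # unset, p.database == 'library' and p.schema is resolved by the code below.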
# param "schema": default, allowed depends on param "type"
if p.type in ['table', 'sequence', 'function', 'type', 'default_privs']:
p.schema = p.schema or 'public'
elif p.schema:
module.fail_json(msg='Argument "schema" is not allowed '
'for type "%s".' % p.type)
# param "objs": default, required depends on param "type"
if p.type == 'database':
p.objs = p.objs or p.database
elif not p.objs:
module.fail_json(msg='Argument "objs" is required '
'for type "%s".' % p.type)
# param "privs": allowed, required depends on param "type"
if p.type == 'group':
if p.privs:
module.fail_json(msg='Argument "privs" is not allowed '
'for type "group".')
elif not p.privs:
module.fail_json(msg='Argument "privs" is required '
'for type "%s".' % p.type)
# Connect to Database
if not psycopg2:
module.fail_json(msg=missing_required_lib('psycopg2'), exception=PSYCOPG2_IMP_ERR)
try:
conn = Connection(p, module)
except psycopg2.Error as e:
module.fail_json(msg='Could not connect to database: %s' % to_native(e), exception=traceback.format_exc())
except TypeError as e:
if 'sslrootcert' in e.args[0]:
module.fail_json(msg='Postgresql server must be at least version 8.4 to support sslrootcert')
module.fail_json(msg="unable to connect to database: %s" % to_native(e), exception=traceback.format_exc())
except ValueError as e:
# We raise this when the psycopg library is too old
module.fail_json(msg=to_native(e))
if p.session_role:
try:
conn.cursor.execute('SET ROLE "%s"' % p.session_role)
except Exception as e:
module.fail_json(msg="Could not switch to role %s: %s" % (p.session_role, to_native(e)), exception=traceback.format_exc())
try:
# privs
if p.privs:
privs = frozenset(pr.upper() for pr in p.privs.split(','))
if not privs.issubset(VALID_PRIVS):
module.fail_json(msg='Invalid privileges specified: %s' % privs.difference(VALID_PRIVS))
else:
privs = None
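        # Illustrative example (input assumed): privs="select,INSERT" becomes
        # frozenset({'SELECT', 'INSERT'}) and passes the VALID_PRIVS check.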
# objs:
if p.type == 'table' and p.objs == 'ALL_IN_SCHEMA':
objs = conn.get_all_tables_in_schema(p.schema)
elif p.type == 'sequence' and p.objs == 'ALL_IN_SCHEMA':
objs = conn.get_all_sequences_in_schema(p.schema)
elif p.type == 'function' and p.objs == 'ALL_IN_SCHEMA':
objs = conn.get_all_functions_in_schema(p.schema)
elif p.type == 'default_privs':
if p.objs == 'ALL_DEFAULT':
objs = frozenset(VALID_DEFAULT_OBJS.keys())
else:
objs = frozenset(obj.upper() for obj in p.objs.split(','))
if not objs.issubset(VALID_DEFAULT_OBJS):
module.fail_json(
msg='Invalid Object set specified: %s' % objs.difference(VALID_DEFAULT_OBJS.keys()))
# Again, do we have valid privs specified for object type:
valid_objects_for_priv = frozenset(obj for obj in objs if privs.issubset(VALID_DEFAULT_OBJS[obj]))
if not valid_objects_for_priv == objs:
module.fail_json(
msg='Invalid priv specified. Valid object for priv: {0}. Objects: {1}'.format(
valid_objects_for_priv, objs))
else:
objs = p.objs.split(',')
# function signatures are encoded using ':' to separate args
if p.type == 'function':
objs = [obj.replace(':', ',') for obj in objs]
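        # Illustrative examples (inputs assumed): for type=function,
        # objs="add_user(text:integer)" parses to ['add_user(text,integer)'];
        # for type=default_privs with privs=SELECT, objs="TABLES,TYPES" fails
        # the validation above because TYPES only accepts ALL or USAGE.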
# roles
if p.roles == 'PUBLIC':
roles = 'PUBLIC'
else:
roles = p.roles.split(',')
            if len(roles) == 1 and not role_exists(module, conn.cursor, roles[0]):
                if fail_on_role:
                    module.fail_json(msg="Role '%s' does not exist" % roles[0].strip())
                else:
                    module.warn("Role '%s' does not exist, nothing to do" % roles[0].strip())
                    module.exit_json(changed=False)
# check if target_roles is set with type: default_privs
if p.target_roles and not p.type == 'default_privs':
module.warn('"target_roles" will be ignored '
'Argument "type: default_privs" is required for usage of "target_roles".')
# target roles
if p.target_roles:
target_roles = p.target_roles.split(',')
else:
target_roles = None
changed = conn.manipulate_privs(
obj_type=p.type,
privs=privs,
objs=objs,
roles=roles,
target_roles=target_roles,
state=p.state,
grant_option=p.grant_option,
schema_qualifier=p.schema,
fail_on_role=fail_on_role,
)
    except Error as e:
        conn.rollback()
        module.fail_json(msg=to_native(e), exception=traceback.format_exc())
    except psycopg2.Error as e:
        conn.rollback()
        module.fail_json(msg=to_native(e))
if module.check_mode:
conn.rollback()
else:
conn.commit()
module.exit_json(changed=changed, queries=executed_queries)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,268 |
azure_rm_virtualmachinescaleset fails when using image that requires plan information
|
The module azure_rm_virtualmachine has fields for plan information. Module azure_rm_virtualmachinescaleset needs to have them and forward them when creating a VM with an image that requires plan info to be specified. The CIS hardened images from the Azure Marketplace are a prime example.
|
https://github.com/ansible/ansible/issues/65268
|
https://github.com/ansible/ansible/pull/65335
|
8a423868d960efdf02ef8ee9b88a0ea33de556c5
|
dfd998bcbc98c11154cb8a7576408b4ac3732e12
| 2019-11-25T23:51:26Z |
python
| 2019-12-18T04:55:33Z |
changelogs/fragments/65335-add-plan-to-azure-vmscaleset-module.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,268 |
azure_rm_virtualmachinescaleset fails when using image that requires plan information
|
The module azure_rm_virtualmachine has fields for plan information. Module azure_rm_virtualmachinescaleset needs to have them and forward them when creating a VM with an image that requires plan info to be specified. The CIS hardened images from the Azure Marketplace are a prime example.
|
https://github.com/ansible/ansible/issues/65268
|
https://github.com/ansible/ansible/pull/65335
|
8a423868d960efdf02ef8ee9b88a0ea33de556c5
|
dfd998bcbc98c11154cb8a7576408b4ac3732e12
| 2019-11-25T23:51:26Z |
python
| 2019-12-18T04:55:33Z |
lib/ansible/modules/cloud/azure/azure_rm_virtualmachinescaleset.py
|
#!/usr/bin/python
#
# Copyright (c) 2016 Sertac Ozercan, <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: azure_rm_virtualmachinescaleset
version_added: "2.4"
short_description: Manage Azure virtual machine scale sets
description:
- Create and update a virtual machine scale set.
- Note that this module was called M(azure_rm_virtualmachine_scaleset) before Ansible 2.8. The usage did not change.
options:
resource_group:
description:
- Name of the resource group containing the virtual machine scale set.
required: true
name:
description:
- Name of the virtual machine.
required: true
state:
description:
- Assert the state of the virtual machine scale set.
- State C(present) will check that the machine exists with the requested configuration. If the configuration
of the existing machine does not match, the machine will be updated.
- State C(absent) will remove the virtual machine scale set.
default: present
choices:
- absent
- present
location:
description:
- Valid Azure location. Defaults to location of the resource group.
short_hostname:
description:
- Short host name.
vm_size:
description:
- A valid Azure VM size value. For example, C(Standard_D4).
- The list of choices varies depending on the subscription and location. Check your subscription for available choices.
capacity:
description:
- Capacity of VMSS.
default: 1
tier:
description:
- SKU Tier.
choices:
- Basic
- Standard
upgrade_policy:
description:
- Upgrade policy.
- Required when creating the Azure virtual machine scale sets.
choices:
- Manual
- Automatic
admin_username:
description:
- Admin username used to access the host after it is created. Required when creating a VM.
admin_password:
description:
- Password for the admin username.
- Not required if the os_type is Linux and SSH password authentication is disabled by setting I(ssh_password_enabled=false).
ssh_password_enabled:
description:
- When the os_type is Linux, setting I(ssh_password_enabled=false) will disable SSH password authentication and require use of SSH keys.
type: bool
default: true
ssh_public_keys:
description:
- For I(os_type=Linux) provide a list of SSH keys.
- Each item in the list should be a dictionary where the dictionary contains two keys, C(path) and C(key_data).
- Set the C(path) to the default location of the authorized_keys files.
- On an Enterprise Linux host, for example, the I(path=/home/<admin username>/.ssh/authorized_keys).
Set C(key_data) to the actual value of the public key.
image:
description:
- Specifies the image used to build the VM.
- If a string, the image is sourced from a custom image based on the name.
- If a dict with the keys I(publisher), I(offer), I(sku), and I(version), the image is sourced from a Marketplace image.
        Note that setting I(version=latest) will get the most recent version of a given image.
- If a dict with the keys I(name) and I(resource_group), the image is sourced from a custom image based on the I(name) and I(resource_group) set.
Note that the key I(resource_group) is optional and if omitted, all images in the subscription will be searched for by I(name).
- Custom image support was added in Ansible 2.5.
required: true
os_disk_caching:
description:
- Type of OS disk caching.
choices:
- ReadOnly
- ReadWrite
default: ReadOnly
aliases:
- disk_caching
os_type:
description:
- Base type of operating system.
choices:
- Windows
- Linux
default: Linux
managed_disk_type:
description:
- Managed disk type.
choices:
- Standard_LRS
- Premium_LRS
data_disks:
description:
- Describes list of data disks.
version_added: "2.4"
suboptions:
lun:
description:
- The logical unit number for data disk.
default: 0
version_added: "2.4"
disk_size_gb:
description:
- The initial disk size in GB for blank data disks.
version_added: "2.4"
managed_disk_type:
description:
- Managed data disk type.
choices:
- Standard_LRS
- Premium_LRS
version_added: "2.4"
caching:
description:
- Type of data disk caching.
choices:
- ReadOnly
- ReadWrite
default: ReadOnly
version_added: "2.4"
virtual_network_resource_group:
description:
- When creating a virtual machine, if a specific virtual network from another resource group should be
used.
- Use this parameter to specify the resource group to use.
version_added: "2.5"
virtual_network_name:
description:
- Virtual Network name.
aliases:
- virtual_network
subnet_name:
description:
- Subnet name.
aliases:
- subnet
load_balancer:
description:
- Load balancer name.
version_added: "2.5"
application_gateway:
description:
- Application gateway name.
version_added: "2.8"
remove_on_absent:
description:
- When removing a VM using I(state=absent), also remove associated resources.
- It can be C(all) or a list with any of the following ['network_interfaces', 'virtual_storage', 'public_ips'].
- Any other input will be ignored.
default: ['all']
enable_accelerated_networking:
description:
        - Indicates whether to enable accelerated networking for the virtual machines in the scale set being created.
version_added: "2.7"
type: bool
security_group:
description:
- Existing security group with which to associate the subnet.
- It can be the security group name which is in the same resource group.
- It can be the resource ID.
- It can be a dict which contains I(name) and I(resource_group) of the security group.
version_added: "2.7"
aliases:
- security_group_name
overprovision:
description:
- Specifies whether the Virtual Machine Scale Set should be overprovisioned.
type: bool
default: True
version_added: "2.8"
single_placement_group:
description:
- When true this limits the scale set to a single placement group, of max size 100 virtual machines.
type: bool
default: True
version_added: "2.9"
zones:
description:
- A list of Availability Zones for your virtual machine scale set.
type: list
version_added: "2.8"
custom_data:
description:
        - Data which is made available to the virtual machine and used by, for example, C(cloud-init).
- Many images in the marketplace are not cloud-init ready. Thus, data sent to I(custom_data) would be ignored.
- If the image you are attempting to use is not listed in
U(https://docs.microsoft.com/en-us/azure/virtual-machines/linux/using-cloud-init#cloud-init-overview),
follow these steps U(https://docs.microsoft.com/en-us/azure/virtual-machines/linux/cloudinit-prepare-custom-image).
version_added: "2.8"
extends_documentation_fragment:
- azure
- azure_tags
author:
- Sertac Ozercan (@sozercan)
'''
EXAMPLES = '''
- name: Create VMSS
azure_rm_virtualmachinescaleset:
resource_group: myResourceGroup
name: testvmss
vm_size: Standard_DS1_v2
capacity: 2
virtual_network_name: testvnet
upgrade_policy: Manual
subnet_name: testsubnet
admin_username: adminUser
ssh_password_enabled: false
ssh_public_keys:
- path: /home/adminUser/.ssh/authorized_keys
          key_data: < insert your ssh public key here... >
managed_disk_type: Standard_LRS
image:
offer: CoreOS
publisher: CoreOS
sku: Stable
version: latest
data_disks:
- lun: 0
disk_size_gb: 64
caching: ReadWrite
managed_disk_type: Standard_LRS
- name: Create a VMSS with a custom image
azure_rm_virtualmachinescaleset:
resource_group: myResourceGroup
name: testvmss
vm_size: Standard_DS1_v2
capacity: 2
virtual_network_name: testvnet
upgrade_policy: Manual
subnet_name: testsubnet
admin_username: adminUser
admin_password: password01
managed_disk_type: Standard_LRS
image: customimage001
- name: Create a VMSS with over 100 instances
azure_rm_virtualmachinescaleset:
resource_group: myResourceGroup
name: testvmss
vm_size: Standard_DS1_v2
capacity: 120
single_placement_group: False
virtual_network_name: testvnet
upgrade_policy: Manual
subnet_name: testsubnet
admin_username: adminUser
admin_password: password01
managed_disk_type: Standard_LRS
image: customimage001
- name: Create a VMSS with a custom image from a particular resource group
azure_rm_virtualmachinescaleset:
resource_group: myResourceGroup
name: testvmss
vm_size: Standard_DS1_v2
capacity: 2
virtual_network_name: testvnet
upgrade_policy: Manual
subnet_name: testsubnet
admin_username: adminUser
admin_password: password01
managed_disk_type: Standard_LRS
image:
name: customimage001
resource_group: myResourceGroup
'''
RETURN = '''
azure_vmss:
description:
- Facts about the current state of the object.
- Note that facts are not part of the registered output but available directly.
returned: always
type: dict
sample: {
"properties": {
"overprovision": true,
"singlePlacementGroup": true,
"upgradePolicy": {
"mode": "Manual"
},
"virtualMachineProfile": {
"networkProfile": {
"networkInterfaceConfigurations": [
{
"name": "testvmss",
"properties": {
"dnsSettings": {
"dnsServers": []
},
"enableAcceleratedNetworking": false,
"ipConfigurations": [
{
"name": "default",
"properties": {
"privateIPAddressVersion": "IPv4",
"subnet": {
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Network/virtualNetworks/testvnet/subnets/testsubnet"
}
}
}
],
"primary": true
}
}
]
},
"osProfile": {
"adminUsername": "testuser",
"computerNamePrefix": "testvmss",
"linuxConfiguration": {
"disablePasswordAuthentication": true,
"ssh": {
"publicKeys": [
{
"keyData": "",
"path": "/home/testuser/.ssh/authorized_keys"
}
]
}
},
"secrets": []
},
"storageProfile": {
"dataDisks": [
{
"caching": "ReadWrite",
"createOption": "empty",
"diskSizeGB": 64,
"lun": 0,
"managedDisk": {
"storageAccountType": "Standard_LRS"
}
}
],
"imageReference": {
"offer": "CoreOS",
"publisher": "CoreOS",
"sku": "Stable",
"version": "899.17.0"
},
"osDisk": {
"caching": "ReadWrite",
"createOption": "fromImage",
"managedDisk": {
"storageAccountType": "Standard_LRS"
}
}
}
}
},
"sku": {
"capacity": 2,
"name": "Standard_DS1_v2",
"tier": "Standard"
},
"tags": null,
"type": "Microsoft.Compute/virtualMachineScaleSets"
}
''' # NOQA
import random
import re
import base64
try:
from msrestazure.azure_exceptions import CloudError
from msrestazure.tools import parse_resource_id
except ImportError:
# This is handled in azure_rm_common
pass
from ansible.module_utils.azure_rm_common import AzureRMModuleBase, azure_id_to_dict, format_resource_id
from ansible.module_utils.basic import to_native, to_bytes
AZURE_OBJECT_CLASS = 'VirtualMachineScaleSet'
AZURE_ENUM_MODULES = ['azure.mgmt.compute.models']
class AzureRMVirtualMachineScaleSet(AzureRMModuleBase):
def __init__(self):
self.module_arg_spec = dict(
resource_group=dict(type='str', required=True),
name=dict(type='str', required=True),
state=dict(choices=['present', 'absent'], default='present', type='str'),
location=dict(type='str'),
short_hostname=dict(type='str'),
vm_size=dict(type='str'),
tier=dict(type='str', choices=['Basic', 'Standard']),
capacity=dict(type='int', default=1),
upgrade_policy=dict(type='str', choices=['Automatic', 'Manual']),
admin_username=dict(type='str'),
admin_password=dict(type='str', no_log=True),
ssh_password_enabled=dict(type='bool', default=True),
ssh_public_keys=dict(type='list'),
image=dict(type='raw'),
os_disk_caching=dict(type='str', aliases=['disk_caching'], choices=['ReadOnly', 'ReadWrite'],
default='ReadOnly'),
os_type=dict(type='str', choices=['Linux', 'Windows'], default='Linux'),
managed_disk_type=dict(type='str', choices=['Standard_LRS', 'Premium_LRS']),
data_disks=dict(type='list'),
subnet_name=dict(type='str', aliases=['subnet']),
load_balancer=dict(type='str'),
application_gateway=dict(type='str'),
virtual_network_resource_group=dict(type='str'),
virtual_network_name=dict(type='str', aliases=['virtual_network']),
remove_on_absent=dict(type='list', default=['all']),
enable_accelerated_networking=dict(type='bool'),
security_group=dict(type='raw', aliases=['security_group_name']),
overprovision=dict(type='bool', default=True),
single_placement_group=dict(type='bool', default=True),
zones=dict(type='list'),
custom_data=dict(type='str')
)
self.resource_group = None
self.name = None
self.state = None
self.location = None
self.short_hostname = None
self.vm_size = None
self.capacity = None
self.tier = None
self.upgrade_policy = None
self.admin_username = None
self.admin_password = None
self.ssh_password_enabled = None
self.ssh_public_keys = None
self.image = None
self.os_disk_caching = None
self.managed_disk_type = None
self.data_disks = None
self.os_type = None
self.subnet_name = None
self.virtual_network_resource_group = None
self.virtual_network_name = None
self.tags = None
self.differences = None
self.load_balancer = None
self.application_gateway = None
self.enable_accelerated_networking = None
self.security_group = None
self.overprovision = None
self.single_placement_group = None
self.zones = None
self.custom_data = None
required_if = [
('state', 'present', [
'vm_size'])
]
mutually_exclusive = [('load_balancer', 'application_gateway')]
self.results = dict(
changed=False,
actions=[],
ansible_facts=dict(azure_vmss=None)
)
super(AzureRMVirtualMachineScaleSet, self).__init__(
derived_arg_spec=self.module_arg_spec,
supports_check_mode=True,
required_if=required_if,
mutually_exclusive=mutually_exclusive)
def exec_module(self, **kwargs):
nsg = None
for key in list(self.module_arg_spec.keys()) + ['tags']:
setattr(self, key, kwargs[key])
if self.module._name == 'azure_rm_virtualmachine_scaleset':
self.module.deprecate("The 'azure_rm_virtualmachine_scaleset' module has been renamed to 'azure_rm_virtualmachinescaleset'", version='2.12')
# make sure options are lower case
self.remove_on_absent = set([resource.lower() for resource in self.remove_on_absent])
# convert elements to ints
self.zones = [int(i) for i in self.zones] if self.zones else None
# default virtual_network_resource_group to resource_group
if not self.virtual_network_resource_group:
self.virtual_network_resource_group = self.resource_group
changed = False
results = dict()
vmss = None
disable_ssh_password = None
vmss_dict = None
virtual_network = None
subnet = None
image_reference = None
custom_image = False
load_balancer_backend_address_pools = None
load_balancer_inbound_nat_pools = None
load_balancer = None
application_gateway = None
application_gateway_backend_address_pools = None
support_lb_change = True
resource_group = self.get_resource_group(self.resource_group)
if not self.location:
# Set default location
self.location = resource_group.location
if self.custom_data:
self.custom_data = to_native(base64.b64encode(to_bytes(self.custom_data)))
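            # Illustrative example (input assumed): custom_data="#cloud-config\n..."
            # is what a user passes in; the line above base64-encodes it because
            # the Azure API expects the payload in that form.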
if self.state == 'present':
# Verify parameters and resolve any defaults
if self.vm_size and not self.vm_size_is_valid():
self.fail("Parameter error: vm_size {0} is not valid for your subscription and location.".format(
self.vm_size
))
# if self.virtual_network_name:
# virtual_network = self.get_virtual_network(self.virtual_network_name)
if self.ssh_public_keys:
msg = "Parameter error: expecting ssh_public_keys to be a list of type dict where " \
"each dict contains keys: path, key_data."
for key in self.ssh_public_keys:
if not isinstance(key, dict):
self.fail(msg)
if not key.get('path') or not key.get('key_data'):
self.fail(msg)
if self.image and isinstance(self.image, dict):
if all(key in self.image for key in ('publisher', 'offer', 'sku', 'version')):
marketplace_image = self.get_marketplace_image_version()
if self.image['version'] == 'latest':
self.image['version'] = marketplace_image.name
self.log("Using image version {0}".format(self.image['version']))
image_reference = self.compute_models.ImageReference(
publisher=self.image['publisher'],
offer=self.image['offer'],
sku=self.image['sku'],
version=self.image['version']
)
elif self.image.get('name'):
custom_image = True
image_reference = self.get_custom_image_reference(
self.image.get('name'),
self.image.get('resource_group'))
elif self.image.get('id'):
try:
image_reference = self.compute_models.ImageReference(id=self.image['id'])
except Exception as exc:
self.fail("id Error: Cannot get image from the reference id - {0}".format(self.image['id']))
else:
self.fail("parameter error: expecting image to contain [publisher, offer, sku, version], [name, resource_group] or [id]")
elif self.image and isinstance(self.image, str):
custom_image = True
image_reference = self.get_custom_image_reference(self.image)
elif self.image:
self.fail("parameter error: expecting image to be a string or dict not {0}".format(type(self.image).__name__))
disable_ssh_password = not self.ssh_password_enabled
if self.load_balancer:
load_balancer = self.get_load_balancer(self.load_balancer)
load_balancer_backend_address_pools = ([self.compute_models.SubResource(id=resource.id)
for resource in load_balancer.backend_address_pools]
if load_balancer.backend_address_pools else None)
load_balancer_inbound_nat_pools = ([self.compute_models.SubResource(id=resource.id)
for resource in load_balancer.inbound_nat_pools]
if load_balancer.inbound_nat_pools else None)
if self.application_gateway:
application_gateway = self.get_application_gateway(self.application_gateway)
application_gateway_backend_address_pools = ([self.compute_models.SubResource(id=resource.id)
for resource in application_gateway.backend_address_pools]
if application_gateway.backend_address_pools else None)
try:
self.log("Fetching virtual machine scale set {0}".format(self.name))
vmss = self.compute_client.virtual_machine_scale_sets.get(self.resource_group, self.name)
self.check_provisioning_state(vmss, self.state)
vmss_dict = self.serialize_vmss(vmss)
if self.state == 'present':
differences = []
results = vmss_dict
if self.os_disk_caching and \
self.os_disk_caching != vmss_dict['properties']['virtualMachineProfile']['storageProfile']['osDisk']['caching']:
self.log('CHANGED: virtual machine scale set {0} - OS disk caching'.format(self.name))
differences.append('OS Disk caching')
changed = True
vmss_dict['properties']['virtualMachineProfile']['storageProfile']['osDisk']['caching'] = self.os_disk_caching
if self.capacity and \
self.capacity != vmss_dict['sku']['capacity']:
self.log('CHANGED: virtual machine scale set {0} - Capacity'.format(self.name))
differences.append('Capacity')
changed = True
vmss_dict['sku']['capacity'] = self.capacity
if self.data_disks and \
len(self.data_disks) != len(vmss_dict['properties']['virtualMachineProfile']['storageProfile'].get('dataDisks', [])):
self.log('CHANGED: virtual machine scale set {0} - Data Disks'.format(self.name))
differences.append('Data Disks')
changed = True
if self.upgrade_policy and \
self.upgrade_policy != vmss_dict['properties']['upgradePolicy']['mode']:
self.log('CHANGED: virtual machine scale set {0} - Upgrade Policy'.format(self.name))
differences.append('Upgrade Policy')
changed = True
vmss_dict['properties']['upgradePolicy']['mode'] = self.upgrade_policy
if image_reference and \
image_reference.as_dict() != vmss_dict['properties']['virtualMachineProfile']['storageProfile']['imageReference']:
self.log('CHANGED: virtual machine scale set {0} - Image'.format(self.name))
differences.append('Image')
changed = True
vmss_dict['properties']['virtualMachineProfile']['storageProfile']['imageReference'] = image_reference.as_dict()
update_tags, vmss_dict['tags'] = self.update_tags(vmss_dict.get('tags', dict()))
if update_tags:
differences.append('Tags')
changed = True
if bool(self.overprovision) != bool(vmss_dict['properties']['overprovision']):
differences.append('overprovision')
changed = True
if bool(self.single_placement_group) != bool(vmss_dict['properties']['singlePlacementGroup']):
differences.append('single_placement_group')
changed = True
vmss_dict['zones'] = [int(i) for i in vmss_dict['zones']] if 'zones' in vmss_dict and vmss_dict['zones'] else None
if self.zones != vmss_dict['zones']:
self.log("CHANGED: virtual machine scale sets {0} zones".format(self.name))
differences.append('Zones')
changed = True
vmss_dict['zones'] = self.zones
nicConfigs = vmss_dict['properties']['virtualMachineProfile']['networkProfile']['networkInterfaceConfigurations']
backend_address_pool = nicConfigs[0]['properties']['ipConfigurations'][0]['properties'].get('loadBalancerBackendAddressPools', [])
backend_address_pool += nicConfigs[0]['properties']['ipConfigurations'][0]['properties'].get('applicationGatewayBackendAddressPools', [])
lb_or_ag_id = None
if (len(nicConfigs) != 1 or len(backend_address_pool) != 1):
                support_lb_change = False  # updating a VMSS that contains more than one load balancer is currently not supported
self.module.warn('Updating more than one load balancer on VMSS is currently not supported')
else:
if load_balancer:
lb_or_ag_id = "{0}/".format(load_balancer.id)
elif application_gateway:
lb_or_ag_id = "{0}/".format(application_gateway.id)
backend_address_pool_id = backend_address_pool[0].get('id')
if bool(lb_or_ag_id) != bool(backend_address_pool_id) or not backend_address_pool_id.startswith(lb_or_ag_id):
differences.append('load_balancer')
changed = True
if self.custom_data:
if self.custom_data != vmss_dict['properties']['virtualMachineProfile']['osProfile'].get('customData'):
differences.append('custom_data')
changed = True
vmss_dict['properties']['virtualMachineProfile']['osProfile']['customData'] = self.custom_data
self.differences = differences
elif self.state == 'absent':
self.log("CHANGED: virtual machine scale set {0} exists and requested state is 'absent'".format(self.name))
results = dict()
changed = True
except CloudError:
self.log('Virtual machine scale set {0} does not exist'.format(self.name))
if self.state == 'present':
self.log("CHANGED: virtual machine scale set {0} does not exist but state is 'present'.".format(self.name))
changed = True
self.results['changed'] = changed
self.results['ansible_facts']['azure_vmss'] = results
if self.check_mode:
return self.results
if changed:
if self.state == 'present':
if not vmss:
# Create the VMSS
self.log("Create virtual machine scale set {0}".format(self.name))
self.results['actions'].append('Created VMSS {0}'.format(self.name))
if self.os_type == 'Linux':
if disable_ssh_password and not self.ssh_public_keys:
self.fail("Parameter error: ssh_public_keys required when disabling SSH password.")
if not self.virtual_network_name:
default_vnet = self.create_default_vnet()
virtual_network = default_vnet.id
self.virtual_network_name = default_vnet.name
if self.subnet_name:
subnet = self.get_subnet(self.virtual_network_name, self.subnet_name)
if not self.short_hostname:
self.short_hostname = self.name
if not image_reference:
self.fail("Parameter error: an image is required when creating a virtual machine.")
managed_disk = self.compute_models.VirtualMachineScaleSetManagedDiskParameters(storage_account_type=self.managed_disk_type)
if self.security_group:
nsg = self.parse_nsg()
if nsg:
self.security_group = self.network_models.NetworkSecurityGroup(id=nsg.get('id'))
os_profile = None
if self.admin_username or self.custom_data or self.ssh_public_keys:
os_profile = self.compute_models.VirtualMachineScaleSetOSProfile(
admin_username=self.admin_username,
computer_name_prefix=self.short_hostname,
custom_data=self.custom_data
)
vmss_resource = self.compute_models.VirtualMachineScaleSet(
location=self.location,
overprovision=self.overprovision,
single_placement_group=self.single_placement_group,
tags=self.tags,
upgrade_policy=self.compute_models.UpgradePolicy(
mode=self.upgrade_policy
),
sku=self.compute_models.Sku(
name=self.vm_size,
capacity=self.capacity,
tier=self.tier,
),
virtual_machine_profile=self.compute_models.VirtualMachineScaleSetVMProfile(
os_profile=os_profile,
storage_profile=self.compute_models.VirtualMachineScaleSetStorageProfile(
os_disk=self.compute_models.VirtualMachineScaleSetOSDisk(
managed_disk=managed_disk,
create_option=self.compute_models.DiskCreateOptionTypes.from_image,
caching=self.os_disk_caching,
),
image_reference=image_reference,
),
network_profile=self.compute_models.VirtualMachineScaleSetNetworkProfile(
network_interface_configurations=[
self.compute_models.VirtualMachineScaleSetNetworkConfiguration(
name=self.name,
primary=True,
ip_configurations=[
self.compute_models.VirtualMachineScaleSetIPConfiguration(
name='default',
subnet=self.compute_models.ApiEntityReference(
id=subnet.id
),
primary=True,
load_balancer_backend_address_pools=load_balancer_backend_address_pools,
load_balancer_inbound_nat_pools=load_balancer_inbound_nat_pools,
application_gateway_backend_address_pools=application_gateway_backend_address_pools
)
],
enable_accelerated_networking=self.enable_accelerated_networking,
network_security_group=self.security_group
)
]
)
),
zones=self.zones
)
if self.admin_password:
vmss_resource.virtual_machine_profile.os_profile.admin_password = self.admin_password
if self.os_type == 'Linux' and os_profile:
vmss_resource.virtual_machine_profile.os_profile.linux_configuration = self.compute_models.LinuxConfiguration(
disable_password_authentication=disable_ssh_password
)
if self.ssh_public_keys:
ssh_config = self.compute_models.SshConfiguration()
ssh_config.public_keys = \
[self.compute_models.SshPublicKey(path=key['path'], key_data=key['key_data']) for key in self.ssh_public_keys]
vmss_resource.virtual_machine_profile.os_profile.linux_configuration.ssh = ssh_config
if self.data_disks:
data_disks = []
for data_disk in self.data_disks:
data_disk_managed_disk = self.compute_models.VirtualMachineScaleSetManagedDiskParameters(
storage_account_type=data_disk.get('managed_disk_type', None)
)
data_disk['caching'] = data_disk.get(
'caching',
self.compute_models.CachingTypes.read_only
)
data_disks.append(self.compute_models.VirtualMachineScaleSetDataDisk(
lun=data_disk.get('lun', None),
caching=data_disk.get('caching', None),
create_option=self.compute_models.DiskCreateOptionTypes.empty,
disk_size_gb=data_disk.get('disk_size_gb', None),
managed_disk=data_disk_managed_disk,
))
vmss_resource.virtual_machine_profile.storage_profile.data_disks = data_disks
self.log("Create virtual machine with parameters:")
self.create_or_update_vmss(vmss_resource)
elif self.differences and len(self.differences) > 0:
self.log("Update virtual machine scale set {0}".format(self.name))
self.results['actions'].append('Updated VMSS {0}'.format(self.name))
vmss_resource = self.get_vmss()
vmss_resource.virtual_machine_profile.storage_profile.os_disk.caching = self.os_disk_caching
vmss_resource.sku.capacity = self.capacity
vmss_resource.overprovision = self.overprovision
vmss_resource.single_placement_group = self.single_placement_group
if support_lb_change:
if self.load_balancer:
vmss_resource.virtual_machine_profile.network_profile.network_interface_configurations[0] \
.ip_configurations[0].load_balancer_backend_address_pools = load_balancer_backend_address_pools
vmss_resource.virtual_machine_profile.network_profile.network_interface_configurations[0] \
.ip_configurations[0].load_balancer_inbound_nat_pools = load_balancer_inbound_nat_pools
vmss_resource.virtual_machine_profile.network_profile.network_interface_configurations[0] \
.ip_configurations[0].application_gateway_backend_address_pools = None
elif self.application_gateway:
vmss_resource.virtual_machine_profile.network_profile.network_interface_configurations[0] \
.ip_configurations[0].application_gateway_backend_address_pools = application_gateway_backend_address_pools
vmss_resource.virtual_machine_profile.network_profile.network_interface_configurations[0] \
.ip_configurations[0].load_balancer_backend_address_pools = None
vmss_resource.virtual_machine_profile.network_profile.network_interface_configurations[0] \
.ip_configurations[0].load_balancer_inbound_nat_pools = None
if self.data_disks is not None:
data_disks = []
for data_disk in self.data_disks:
data_disks.append(self.compute_models.VirtualMachineScaleSetDataDisk(
lun=data_disk['lun'],
caching=data_disk['caching'],
create_option=self.compute_models.DiskCreateOptionTypes.empty,
disk_size_gb=data_disk['disk_size_gb'],
managed_disk=self.compute_models.VirtualMachineScaleSetManagedDiskParameters(
storage_account_type=data_disk.get('managed_disk_type', None)
),
))
vmss_resource.virtual_machine_profile.storage_profile.data_disks = data_disks
if image_reference is not None:
vmss_resource.virtual_machine_profile.storage_profile.image_reference = image_reference
self.log("Update virtual machine with parameters:")
self.create_or_update_vmss(vmss_resource)
self.results['ansible_facts']['azure_vmss'] = self.serialize_vmss(self.get_vmss())
elif self.state == 'absent':
# delete the VM
self.log("Delete virtual machine scale set {0}".format(self.name))
self.results['ansible_facts']['azure_vmss'] = None
self.delete_vmss(vmss)
# until we sort out how we want to do this globally
del self.results['actions']
return self.results
def get_vmss(self):
'''
Get the VMSS
:return: VirtualMachineScaleSet object
'''
try:
vmss = self.compute_client.virtual_machine_scale_sets.get(self.resource_group, self.name)
return vmss
except CloudError as exc:
self.fail("Error getting virtual machine scale set {0} - {1}".format(self.name, str(exc)))
def get_virtual_network(self, name):
try:
vnet = self.network_client.virtual_networks.get(self.virtual_network_resource_group, name)
return vnet
except CloudError as exc:
self.fail("Error fetching virtual network {0} - {1}".format(name, str(exc)))
def get_subnet(self, vnet_name, subnet_name):
self.log("Fetching subnet {0} in virtual network {1}".format(subnet_name, vnet_name))
try:
subnet = self.network_client.subnets.get(self.virtual_network_resource_group, vnet_name, subnet_name)
except CloudError as exc:
self.fail("Error: fetching subnet {0} in virtual network {1} - {2}".format(
subnet_name,
vnet_name,
str(exc)))
return subnet
def get_load_balancer(self, id):
id_dict = parse_resource_id(id)
try:
return self.network_client.load_balancers.get(id_dict.get('resource_group', self.resource_group), id_dict.get('name'))
except CloudError as exc:
self.fail("Error fetching load balancer {0} - {1}".format(id, str(exc)))
def get_application_gateway(self, id):
id_dict = parse_resource_id(id)
try:
return self.network_client.application_gateways.get(id_dict.get('resource_group', self.resource_group), id_dict.get('name'))
except CloudError as exc:
self.fail("Error fetching application_gateway {0} - {1}".format(id, str(exc)))
def serialize_vmss(self, vmss):
'''
Convert a VirtualMachineScaleSet object to dict.
:param vm: VirtualMachineScaleSet object
:return: dict
'''
result = self.serialize_obj(vmss, AZURE_OBJECT_CLASS, enum_modules=AZURE_ENUM_MODULES)
result['id'] = vmss.id
result['name'] = vmss.name
result['type'] = vmss.type
result['location'] = vmss.location
result['tags'] = vmss.tags
return result
def delete_vmss(self, vmss):
self.log("Deleting virtual machine scale set {0}".format(self.name))
self.results['actions'].append("Deleted virtual machine scale set {0}".format(self.name))
try:
poller = self.compute_client.virtual_machine_scale_sets.delete(self.resource_group, self.name)
# wait for the poller to finish
self.get_poller_result(poller)
except CloudError as exc:
self.fail("Error deleting virtual machine scale set {0} - {1}".format(self.name, str(exc)))
return True
def get_marketplace_image_version(self):
try:
versions = self.compute_client.virtual_machine_images.list(self.location,
self.image['publisher'],
self.image['offer'],
self.image['sku'])
except CloudError as exc:
self.fail("Error fetching image {0} {1} {2} - {3}".format(self.image['publisher'],
self.image['offer'],
self.image['sku'],
str(exc)))
        if versions:
            if self.image['version'] == 'latest':
                return versions[-1]
for version in versions:
if version.name == self.image['version']:
return version
self.fail("Error could not find image {0} {1} {2} {3}".format(self.image['publisher'],
self.image['offer'],
self.image['sku'],
self.image['version']))
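    # Illustrative example (data assumed): for publisher/offer/sku
    # CoreOS/CoreOS/Stable with version=latest, the last entry of the version
    # list (e.g. '899.17.0') is returned; an explicit version must match an
    # entry exactly or the method fails.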
def get_custom_image_reference(self, name, resource_group=None):
try:
if resource_group:
vm_images = self.compute_client.images.list_by_resource_group(resource_group)
else:
vm_images = self.compute_client.images.list()
except Exception as exc:
self.fail("Error fetching custom images from subscription - {0}".format(str(exc)))
for vm_image in vm_images:
if vm_image.name == name:
self.log("Using custom image id {0}".format(vm_image.id))
return self.compute_models.ImageReference(id=vm_image.id)
self.fail("Error could not find image with name {0}".format(name))
def create_or_update_vmss(self, params):
try:
poller = self.compute_client.virtual_machine_scale_sets.create_or_update(self.resource_group, self.name, params)
self.get_poller_result(poller)
except CloudError as exc:
self.fail("Error creating or updating virtual machine {0} - {1}".format(self.name, str(exc)))
def vm_size_is_valid(self):
'''
Validate self.vm_size against the list of virtual machine sizes available for the account and location.
:return: boolean
'''
try:
sizes = self.compute_client.virtual_machine_sizes.list(self.location)
except CloudError as exc:
self.fail("Error retrieving available machine sizes - {0}".format(str(exc)))
for size in sizes:
if size.name == self.vm_size:
return True
return False
def parse_nsg(self):
nsg = self.security_group
resource_group = self.resource_group
if isinstance(self.security_group, dict):
nsg = self.security_group.get('name')
resource_group = self.security_group.get('resource_group', self.resource_group)
id = format_resource_id(val=nsg,
subscription_id=self.subscription_id,
namespace='Microsoft.Network',
types='networkSecurityGroups',
resource_group=resource_group)
name = azure_id_to_dict(id).get('name')
return dict(id=id, name=name)
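    # Illustrative example (IDs assumed): security_group={'name': 'mynsg',
    # 'resource_group': 'nsg-rg'} resolves to name='mynsg' and
    # id='/subscriptions/<subscription_id>/resourceGroups/nsg-rg/providers/Microsoft.Network/networkSecurityGroups/mynsg'.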
def main():
AzureRMVirtualMachineScaleSet()
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,391 |
Meraki_content_filtering not working with net_id
|
##### SUMMARY
I am trying to apply content filtering rules using meraki_content_filtering. When I use net_id with a valid network ID, I get an error: fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "No network found with the name None", "response": "OK (unknown bytes)", "status": 200}
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
meraki_content_filter
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /home/amestruetemper.com/mwinslow/.ansible.cfg
configured module search path = [u'/home/amestruetemper.com/mwinslow/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST(/home/amestruetemper.com/mwinslow/.ansible.cfg) = [u'/home/amestruetemper.com/mwinslow/ansible/inventory']
DEFAULT_PRIVATE_KEY_FILE(/home/amestruetemper.com/mwinslow/.ansible.cfg) = /etc/ansible/keys/id_rsa
DEFAULT_ROLES_PATH(/home/amestruetemper.com/mwinslow/.ansible.cfg) = [u'/home/amestruetemper.com/mwinslow/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/home/amestruetemper.com/mwinslow/.ansible.cfg) = skippy
HOST_KEY_CHECKING(/home/amestruetemper.com/mwinslow/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
CentOS Linux release 7.6.1810 (Core)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a playbook following the doc but replace net_name with net_id
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: meraki test
hosts: localhost
tasks:
- name: Set Content Filtering
meraki_content_filtering:
auth_key: VALID_KEY
org_id: VALID_ORG
net_id: N_VALID_NET_ID
state: present
category_list_size: top sites
blocked_categories:
- "Adult and Pornography"
- "Alcohol and Tobacco"
- "Illegal"
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
It should add content filtering rules to the test network.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
It fails because it is looking for net_name. If I change net_id to net_name and put the network name in, it will work. I need to use net_id so that if a network name changes it won't affect the script.
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [meraki test] ************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************
ok: [localhost]
TASK [Set Content Filtering] **************************************************************************************************************************************************************
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "No network found with the name None", "response": "OK (unknown bytes)", "status": 200}
PLAY RECAP ********************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/59391
|
https://github.com/ansible/ansible/pull/59395
|
9fafbe3ab2e5d9d6802753fb0006b2726d6adb3b
|
71ea16995ae536b7073fe4c3ab2e8f3a126ebc28
| 2019-07-22T14:42:59Z |
python
| 2019-12-18T17:10:16Z |
changelogs/fragments/59395_meraki_content_filtering.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,391 |
Meraki_content_filtering not working with net_id
|
##### SUMMARY
I am trying to apply content filtering rules using meraki_content_filtering. When I use net_id with a valid network ID, I get this error: fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "No network found with the name None", "response": "OK (unknown bytes)", "status": 200}
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
meraki_content_filtering
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.8.1
config file = /home/amestruetemper.com/mwinslow/.ansible.cfg
configured module search path = [u'/home/amestruetemper.com/mwinslow/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_HOST_LIST(/home/amestruetemper.com/mwinslow/.ansible.cfg) = [u'/home/amestruetemper.com/mwinslow/ansible/inventory']
DEFAULT_PRIVATE_KEY_FILE(/home/amestruetemper.com/mwinslow/.ansible.cfg) = /etc/ansible/keys/id_rsa
DEFAULT_ROLES_PATH(/home/amestruetemper.com/mwinslow/.ansible.cfg) = [u'/home/amestruetemper.com/mwinslow/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/home/amestruetemper.com/mwinslow/.ansible.cfg) = skippy
HOST_KEY_CHECKING(/home/amestruetemper.com/mwinslow/.ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
CentOS Linux release 7.6.1810 (Core)
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Create a playbook following the doc but replace net_name with net_id
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: meraki test
hosts: localhost
tasks:
- name: Set Content Filtering
meraki_content_filtering:
auth_key: VALID_KEY
org_id: VALID_ORG
net_id: N_VALID_NET_ID
state: present
category_list_size: top sites
blocked_categories:
- "Adult and Pornography"
- "Alcohol and Tobacco"
- "Illegal"
delegate_to: localhost
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
It should add content filtering rules to the test network.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
It fails because it is looking for net_name. If I change net_id to net_name and put the network name in, it will work. I need to use net_id so that if a network name changes it won't affect the script.
<!--- Paste verbatim command output between quotes -->
```paste below
PLAY [meraki test] ************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************
ok: [localhost]
TASK [Set Content Filtering] **************************************************************************************************************************************************************
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "No network found with the name None", "response": "OK (unknown bytes)", "status": 200}
PLAY RECAP ********************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
|
https://github.com/ansible/ansible/issues/59391
|
https://github.com/ansible/ansible/pull/59395
|
9fafbe3ab2e5d9d6802753fb0006b2726d6adb3b
|
71ea16995ae536b7073fe4c3ab2e8f3a126ebc28
| 2019-07-22T14:42:59Z |
python
| 2019-12-18T17:10:16Z |
lib/ansible/modules/network/meraki/meraki_content_filtering.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Kevin Breit (@kbreit) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: meraki_content_filtering
short_description: Edit Meraki MX content filtering policies
version_added: "2.8"
description:
- Allows for setting policy on content filtering.
options:
auth_key:
description:
- Authentication key provided by the dashboard. Required if environmental variable MERAKI_KEY is not set.
type: str
net_name:
description:
- Name of a network.
aliases: [ network ]
type: str
net_id:
description:
- ID number of a network.
type: str
state:
description:
- States that a policy should be created or modified.
choices: [present, query]
default: present
type: str
allowed_urls:
description:
- List of URL patterns which should be allowed.
type: list
blocked_urls:
description:
- List of URL patterns which should be blocked.
type: list
blocked_categories:
description:
- List of content categories which should be blocked.
- Use the C(meraki_content_filtering_facts) module for a full list of categories.
type: list
category_list_size:
description:
- Determines whether a network filters for all URLs in a category or only the list of top blocked sites.
choices: [ top sites, full list ]
type: str
subset:
description:
- Display only certain facts.
choices: [categories, policy]
type: str
version_added: '2.9'
author:
- Kevin Breit (@kbreit)
extends_documentation_fragment: meraki
'''
EXAMPLES = r'''
- name: Set single allowed URL pattern
meraki_content_filtering:
auth_key: abc123
org_name: YourOrg
net_name: YourMXNet
allowed_urls:
- "http://www.ansible.com/*"
- name: Set blocked URL category
meraki_content_filtering:
auth_key: abc123
org_name: YourOrg
net_name: YourMXNet
state: present
category_list_size: full list
blocked_categories:
- "Adult and Pornography"
- name: Remove match patterns and categories
meraki_content_filtering:
auth_key: abc123
org_name: YourOrg
net_name: YourMXNet
state: present
category_list_size: full list
allowed_urls: []
blocked_urls: []
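# A hypothetical variant of the task above that identifies the network by
# ID instead of by name (the net_id value is a placeholder):
- name: Set blocked URL category by network ID
  meraki_content_filtering:
    auth_key: abc123
    org_name: YourOrg
    net_id: N_12345
    state: present
    category_list_size: full list
    blocked_categories:
      - "Adult and Pornography"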
'''
RETURN = r'''
data:
description: Information about the created or manipulated object.
returned: info
type: complex
contains:
id:
description: Identification string of network.
returned: success
type: str
sample: N_12345
'''
import os
from ansible.module_utils.basic import AnsibleModule, json, env_fallback
from ansible.module_utils.urls import fetch_url
from ansible.module_utils._text import to_native
from ansible.module_utils.common.dict_transformations import recursive_diff
from ansible.module_utils.network.meraki.meraki import MerakiModule, meraki_argument_spec
def get_category_dict(meraki, full_list, category):
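# Map a human-readable category name to the category ID the Meraki API
# expects; fail the module run if the name is not a known category.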
for i in full_list['categories']:
if i['name'] == category:
return i['id']
meraki.fail_json(msg="{0} is not a valid content filtering category".format(category))
def main():
# define the available arguments/parameters that a user can pass to
# the module
argument_spec = meraki_argument_spec()
argument_spec.update(
net_id=dict(type='str'),
net_name=dict(type='str', aliases=['network']),
state=dict(type='str', default='present', choices=['present', 'query']),
allowed_urls=dict(type='list'),
blocked_urls=dict(type='list'),
blocked_categories=dict(type='list'),
category_list_size=dict(type='str', choices=['top sites', 'full list']),
subset=dict(type='str', choices=['categories', 'policy']),
)
# the AnsibleModule object will be our abstraction working with Ansible
# this includes instantiation, a couple of common attr would be the
# args/params passed to the execution, as well as if the module
# supports check mode
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True,
)
meraki = MerakiModule(module, function='content_filtering')
module.params['follow_redirects'] = 'all'
category_urls = {'content_filtering': '/networks/{net_id}/contentFiltering/categories'}
policy_urls = {'content_filtering': '/networks/{net_id}/contentFiltering'}
meraki.url_catalog['categories'] = category_urls
meraki.url_catalog['policy'] = policy_urls
if meraki.params['net_name'] and meraki.params['net_id']:
meraki.fail_json(msg='net_name and net_id are mutually exclusive')
# manipulate or modify the state as needed (this is going to be the
# part where your module will do what it needs to do)
org_id = meraki.params['org_id']
if not org_id:
org_id = meraki.get_org_id(meraki.params['org_name'])
# Use an explicitly supplied net_id when given; otherwise resolve the ID
# from net_name.
net_id = meraki.params['net_id']
if net_id is None:
nets = meraki.get_nets(org_id=org_id)
net_id = meraki.get_net_id(org_id, meraki.params['net_name'], data=nets)
if meraki.params['state'] == 'query':
if meraki.params['subset']:
if meraki.params['subset'] == 'categories':
path = meraki.construct_path('categories', net_id=net_id)
elif meraki.params['subset'] == 'policy':
path = meraki.construct_path('policy', net_id=net_id)
meraki.result['data'] = meraki.request(path, method='GET')
else:
response_data = {'categories': None,
'policy': None,
}
path = meraki.construct_path('categories', net_id=net_id)
response_data['categories'] = meraki.request(path, method='GET')
path = meraki.construct_path('policy', net_id=net_id)
response_data['policy'] = meraki.request(path, method='GET')
meraki.result['data'] = response_data
if module.params['state'] == 'present':
payload = dict()
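# Build the PUT payload only from parameters the user actually supplied,
# so omitted options keep their current dashboard values.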
if meraki.params['allowed_urls'] is not None:
payload['allowedUrlPatterns'] = meraki.params['allowed_urls']
if meraki.params['blocked_urls'] is not None:
payload['blockedUrlPatterns'] = meraki.params['blocked_urls']
if meraki.params['blocked_categories'] is not None:
if len(meraki.params['blocked_categories']) == 0:  # Corner case for resetting
payload['blockedUrlCategories'] = []
else:
category_path = meraki.construct_path('categories', net_id=net_id)
categories = meraki.request(category_path, method='GET')
payload['blockedUrlCategories'] = []
for category in meraki.params['blocked_categories']:
payload['blockedUrlCategories'].append(get_category_dict(meraki,
categories,
category))
if meraki.params['category_list_size']:
if meraki.params['category_list_size'].lower() == 'top sites':
payload['urlCategoryListSize'] = "topSites"
elif meraki.params['category_list_size'].lower() == 'full list':
payload['urlCategoryListSize'] = "fullList"
path = meraki.construct_path('policy', net_id=net_id)
current = meraki.request(path, method='GET')
proposed = current.copy()
proposed.update(payload)
if meraki.is_update_required(current, payload) is True:
meraki.result['diff'] = dict()
diff = recursive_diff(current, payload)
meraki.result['diff']['before'] = diff[0]
meraki.result['diff']['after'] = diff[1]
if module.check_mode:
current.update(payload)
meraki.result['changed'] = True
meraki.result['data'] = current
meraki.exit_json(**meraki.result)
response = meraki.request(path, method='PUT', payload=json.dumps(payload))
meraki.result['data'] = response
meraki.result['changed'] = True
else:
meraki.result['data'] = current
if module.check_mode:
meraki.result['data'] = current
meraki.exit_json(**meraki.result)
meraki.result['data'] = current
meraki.exit_json(**meraki.result)
# in the event of a successful module execution, you will want to
# simple AnsibleModule.exit_json(), passing the key/value results
meraki.exit_json(**meraki.result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,084 |
ec2_launch_template return values do not conform to documentation
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The ec2_launch_template documentation states that it provides two return values, other than the common return values. This includes:
Key | Returned | Description
-- | -- | --
default_version (integer) | when state=present | The version that will be used if only the template name is specified. Often this is the same as the latest version, but not always.
latest_version (integer) | when state=present | Latest available version of the launch template
However, the function `format_module_output`, which is [used](https://github.com/ansible/ansible/blob/272dceef4258fdfdc90281ce1d0b02236f16a797/lib/ansible/modules/cloud/amazon/ec2_launch_template.py#L499) to generate this module's output, stores default_template and latest_template, not default_version and latest_version.
```python
output['default_template'] = [
v for v in template_versions
if v.get('default_version')
][0]
output['latest_template'] = [
v for v in template_versions
if (
v.get('version_number') and
int(v['version_number']) == int(template['latest_version_number'])
)
][0]
return output
```
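A minimal sketch of one way to reconcile the code with the documentation (hypothetical, not necessarily the merged fix) would derive the documented integers from the dicts the function already collects:
```python
# Hypothetical addition at the end of format_module_output():
output['default_version'] = int(output['default_template']['version_number'])
output['latest_version'] = int(output['latest_template']['version_number'])
```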
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_launch_template
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.8.4
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
1. Store results of ec2_launch_template
2. Examine the contents of the results, which does not conform to the documentation.
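For example, a minimal playbook of this shape surfaces the mismatch (all parameter values below are placeholders):
```yaml
- name: Create a launch template and inspect its return values
  ec2_launch_template:
    name: repro-template
    image_id: ami-00000000
    instance_type: t2.micro
  register: lt_out

- debug:
    var: lt_out
```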
##### EXPECTED RESULTS
Either two integer key/values should be available, or the documentation should correctly indicate that the output includes default_template and latest_template.
##### ACTUAL RESULTS
default_template and latest_template exist in the results, default_version and latest_version do not.
|
https://github.com/ansible/ansible/issues/61084
|
https://github.com/ansible/ansible/pull/61279
|
791e9dabe3c1cb50a315d82fbb7252f4a38885f6
|
c40832af482c3cc7b0c291ead5228ca582593419
| 2019-08-22T02:29:04Z |
python
| 2019-12-18T20:53:57Z |
changelogs/fragments/61279-ec2_launch_template-output.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,084 |
ec2_launch_template return values do not conform to documentation
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The ec2_launch_template documentation states that it provides two return values, other than the common return values. This includes:
Key | Returned | Description
-- | -- | --
default_version (integer) | when state=present | The version that will be used if only the template name is specified. Often this is the same as the latest version, but not always.
latest_version (integer) | when state=present | Latest available version of the launch template
However, the function `format_module_output`, which is [used](https://github.com/ansible/ansible/blob/272dceef4258fdfdc90281ce1d0b02236f16a797/lib/ansible/modules/cloud/amazon/ec2_launch_template.py#L499) to generate this module's output, stores default_template and latest_template, not default_version and latest_version.
```python
output['default_template'] = [
v for v in template_versions
if v.get('default_version')
][0]
output['latest_template'] = [
v for v in template_versions
if (
v.get('version_number') and
int(v['version_number']) == int(template['latest_version_number'])
)
][0]
return output
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_launch_template
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.8.4
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
1. Store results of ec2_launch_template
2. Examine the contents of the results, which does not conform to the documentation.
##### EXPECTED RESULTS
Either two integer key/values should be available, or the documentation should correctly indicate that the output includes default_template and latest_template.
##### ACTUAL RESULTS
default_template and latest_template exist in the results, default_version and latest_version do not.
|
https://github.com/ansible/ansible/issues/61084
|
https://github.com/ansible/ansible/pull/61279
|
791e9dabe3c1cb50a315d82fbb7252f4a38885f6
|
c40832af482c3cc7b0c291ead5228ca582593419
| 2019-08-22T02:29:04Z |
python
| 2019-12-18T20:53:57Z |
lib/ansible/modules/cloud/amazon/ec2_launch_template.py
|
#!/usr/bin/python
# Copyright (c) 2018 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: ec2_launch_template
version_added: "2.8"
short_description: Manage EC2 launch templates
description:
- Create, modify, and delete EC2 Launch Templates, which can be used to
create individual instances or with Autoscaling Groups.
- The I(ec2_instance) and I(ec2_asg) modules can, instead of specifying all
parameters on those tasks, be passed a Launch Template which contains
settings like instance size, disk type, subnet, and more.
requirements:
- botocore
- boto3 >= 1.6.0
extends_documentation_fragment:
- aws
- ec2
author:
- Ryan Scott Brown (@ryansb)
options:
template_id:
description:
- The ID for the launch template, can be used for all cases except creating a new Launch Template.
aliases: [id]
type: str
template_name:
description:
- The template name. This must be unique in the region-account combination you are using.
aliases: [name]
type: str
default_version:
description:
- Which version should be the default when users spin up new instances based on this template? By default, the latest version will be made the default.
type: str
default: latest
state:
description:
- Whether the launch template should exist or not.
- Deleting specific versions of a launch template is not supported at this time.
choices: [present, absent]
default: present
type: str
block_device_mappings:
description:
- The block device mapping. Supplying both a snapshot ID and an encryption
value as arguments for block-device mapping results in an error. This is
because only blank volumes can be encrypted on start, and these are not
created from a snapshot. If a snapshot is the basis for the volume, it
contains data by definition and its encryption status cannot be changed
using this action.
type: list
elements: dict
suboptions:
device_name:
description: The device name (for example, /dev/sdh or xvdh).
type: str
no_device:
description: Suppresses the specified device included in the block device mapping of the AMI.
type: str
virtual_name:
description: >
The virtual device name (ephemeralN). Instance store volumes are
numbered starting from 0. An instance type with 2 available instance
store volumes can specify mappings for ephemeral0 and ephemeral1. The
number of available instance store volumes depends on the instance
type. After you connect to the instance, you must mount the volume.
type: str
ebs:
description: Parameters used to automatically set up EBS volumes when the instance is launched.
type: dict
suboptions:
delete_on_termination:
description: Indicates whether the EBS volume is deleted on instance termination.
type: bool
encrypted:
description: >
Indicates whether the EBS volume is encrypted. Encrypted volumes
can only be attached to instances that support Amazon EBS
encryption. If you are creating a volume from a snapshot, you
can't specify an encryption value.
type: bool
iops:
description:
- The number of I/O operations per second (IOPS) that the volume
supports. For io1, this represents the number of IOPS that are
provisioned for the volume. For gp2, this represents the baseline
performance of the volume and the rate at which the volume
accumulates I/O credits for bursting. For more information about
General Purpose SSD baseline performance, I/O credits, and
bursting, see Amazon EBS Volume Types in the Amazon Elastic
Compute Cloud User Guide.
- >
Condition: This parameter is required for requests to create io1
volumes; it is not used in requests to create gp2, st1, sc1, or
standard volumes.
type: int
kms_key_id:
description: The ARN of the AWS Key Management Service (AWS KMS) CMK used for encryption.
type: str
snapshot_id:
description: The ID of the snapshot to create the volume from.
type: str
volume_size:
description:
- The size of the volume, in GiB.
- "Default: If you're creating the volume from a snapshot and don't specify a volume size, the default is the snapshot size."
type: int
volume_type:
description: The volume type
type: str
cpu_options:
description:
- Choose CPU settings for the EC2 instances that will be created with this template.
- For more information, see U(http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html)
type: dict
suboptions:
core_count:
description: The number of CPU cores for the instance.
type: int
threads_per_core:
description: >
The number of threads per CPU core. To disable Intel Hyper-Threading
Technology for the instance, specify a value of 1. Otherwise, specify
the default value of 2.
type: int
credit_specification:
description: The credit option for CPU usage of the instance. Valid for T2 or T3 instances only.
type: dict
suboptions:
cpu_credits:
description: >
The credit option for CPU usage of a T2 or T3 instance. Valid values
are C(standard) and C(unlimited).
type: str
disable_api_termination:
description: >
This helps protect instances from accidental termination. If set to true,
you can't terminate the instance using the Amazon EC2 console, CLI, or
API. To change this attribute to false after launch, use
I(ModifyInstanceAttribute).
type: bool
ebs_optimized:
description: >
Indicates whether the instance is optimized for Amazon EBS I/O. This
optimization provides dedicated throughput to Amazon EBS and an optimized
configuration stack to provide optimal Amazon EBS I/O performance. This
optimization isn't available with all instance types. Additional usage
charges apply when using an EBS-optimized instance.
type: bool
elastic_gpu_specifications:
type: list
elements: dict
description: Settings for Elastic GPU attachments. See U(https://aws.amazon.com/ec2/elastic-gpus/) for details.
suboptions:
type:
description: The type of Elastic GPU to attach
type: str
iam_instance_profile:
description: >
The name or ARN of an IAM instance profile. Requires permissions to
describe existing instance roles to confirm ARN is properly formed.
type: str
image_id:
description: >
The AMI ID to use for new instances launched with this template. This
value is region-dependent since AMIs are not global resources.
type: str
instance_initiated_shutdown_behavior:
description: >
Indicates whether an instance stops or terminates when you initiate
shutdown from the instance using the operating system shutdown command.
choices: [stop, terminate]
type: str
instance_market_options:
description: Options for alternative instance markets, currently only the spot market is supported.
type: dict
suboptions:
market_type:
description: The market type. This should always be 'spot'.
type: str
spot_options:
description: Spot-market specific settings.
type: dict
suboptions:
block_duration_minutes:
description: >
The required duration for the Spot Instances (also known as Spot
blocks), in minutes. This value must be a multiple of 60 (60,
120, 180, 240, 300, or 360).
type: int
instance_interruption_behavior:
description: The behavior when a Spot Instance is interrupted. The default is C(terminate).
choices: [hibernate, stop, terminate]
type: str
max_price:
description: The highest hourly price you're willing to pay for this Spot Instance.
type: str
spot_instance_type:
description: The request type to send.
choices: [one-time, persistent]
type: str
instance_type:
description: >
The instance type, such as C(c5.2xlarge). For a full list of instance types, see
U(http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html).
type: str
kernel_id:
description: >
The ID of the kernel. We recommend that you use PV-GRUB instead of
kernels and RAM disks. For more information, see
U(http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UserProvidedkernels.html)
type: str
key_name:
description:
- The name of the key pair. You can create a key pair using M(ec2_key).
- If you do not specify a key pair, you can't connect to the instance
unless you choose an AMI that is configured to allow users another way to
log in.
type: str
monitoring:
description: Settings for instance monitoring.
type: dict
suboptions:
enabled:
type: bool
description: Whether to turn on detailed monitoring for new instances. This will incur extra charges.
network_interfaces:
description: One or more network interfaces.
type: list
elements: dict
suboptions:
associate_public_ip_address:
description: Associates a public IPv4 address with eth0 for a new network interface.
type: bool
delete_on_termination:
description: Indicates whether the network interface is deleted when the instance is terminated.
type: bool
description:
description: A description for the network interface.
type: str
device_index:
description: The device index for the network interface attachment.
type: int
groups:
description: List of security group IDs to include on this instance.
type: list
elements: str
ipv6_address_count:
description: >
The number of IPv6 addresses to assign to a network interface. Amazon
EC2 automatically selects the IPv6 addresses from the subnet range.
You can't use this option if specifying the I(ipv6_addresses) option.
type: int
ipv6_addresses:
description: >
A list of one or more specific IPv6 addresses from the IPv6 CIDR
block range of your subnet. You can't use this option if you're
specifying the I(ipv6_address_count) option.
type: list
elements: str
network_interface_id:
description: The eni ID of a network interface to attach.
type: str
private_ip_address:
description: The primary private IPv4 address of the network interface.
type: str
subnet_id:
description: The ID of the subnet for the network interface.
type: str
placement:
description: The placement group settings for the instance.
type: dict
suboptions:
affinity:
description: The affinity setting for an instance on a Dedicated Host.
type: str
availability_zone:
description: The Availability Zone for the instance.
type: str
group_name:
description: The name of the placement group for the instance.
type: str
host_id:
description: The ID of the Dedicated Host for the instance.
type: str
tenancy:
description: >
The tenancy of the instance (if the instance is running in a VPC). An
instance with a tenancy of dedicated runs on single-tenant hardware.
type: str
ram_disk_id:
description: >
The ID of the RAM disk to launch the instance with. We recommend that you
use PV-GRUB instead of kernels and RAM disks. For more information, see
U(http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UserProvidedkernels.html)
type: str
security_group_ids:
description: A list of security group IDs (VPC or EC2-Classic) that the new instances will be added to.
type: list
elements: str
security_groups:
description: A list of security group names (VPC or EC2-Classic) that the new instances will be added to.
type: list
elements: str
tags:
type: dict
description:
- A set of key-value pairs to be applied to resources when this Launch Template is used.
- "Tag key constraints: Tag keys are case-sensitive and accept a maximum of 127 Unicode characters. May not begin with I(aws:)"
- "Tag value constraints: Tag values are case-sensitive and accept a maximum of 255 Unicode characters."
user_data:
description: >
The Base64-encoded user data to make available to the instance. For more information, see the Linux
U(http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) and Windows
U(http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-instance-metadata.html#instancedata-add-user-data)
documentation on user-data.
type: str
'''
EXAMPLES = '''
- name: Create an ec2 launch template
ec2_launch_template:
name: "my_template"
image_id: "ami-04b762b4289fba92b"
key_name: my_ssh_key
instance_type: t2.micro
iam_instance_profile: myTestProfile
disable_api_termination: true
- name: >
Create a new version of an existing ec2 launch template with a different instance type,
while leaving an older version as the default version
ec2_launch_template:
name: "my_template"
default_version: 1
instance_type: c5.4xlarge
- name: Delete an ec2 launch template
ec2_launch_template:
name: "my_template"
state: absent
# This module does not yet allow deletion of specific versions of launch templates
'''
RETURN = '''
latest_version:
description: Latest available version of the launch template
returned: when state=present
type: int
default_version:
description: The version that will be used if only the template name is specified. Often this is the same as the latest version, but not always.
returned: when state=present
type: int
'''
import re
from uuid import uuid4
from ansible.module_utils._text import to_text
from ansible.module_utils.aws.core import AnsibleAWSModule, is_boto3_error_code, get_boto3_client_method_parameters
from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict, snake_dict_to_camel_dict
from ansible.module_utils.ec2 import ansible_dict_to_boto3_tag_list, AWSRetry, boto3_tag_list_to_ansible_dict
try:
from botocore.exceptions import ClientError, BotoCoreError, WaiterError
except ImportError:
pass # caught by AnsibleAWSModule
def determine_iam_role(module, name_or_arn):
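# Accept either a full instance-profile ARN or a bare profile name; bare
# names are resolved through IAM so the EC2 API receives a well-formed ARN.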
if re.match(r'^arn:aws:iam::\d+:instance-profile/[\w+=/,.@-]+$', name_or_arn):
return name_or_arn
iam = module.client('iam', retry_decorator=AWSRetry.jittered_backoff())
try:
role = iam.get_instance_profile(InstanceProfileName=name_or_arn, aws_retry=True)
return {'arn': role['InstanceProfile']['Arn']}
except is_boto3_error_code('NoSuchEntity') as e:
module.fail_json_aws(e, msg="Could not find instance_role {0}".format(name_or_arn))
except (BotoCoreError, ClientError) as e: # pylint: disable=duplicate-except
module.fail_json_aws(e, msg="An error occurred while searching for instance_role {0}. Please try supplying the full ARN.".format(name_or_arn))
def existing_templates(module):
ec2 = module.client('ec2', retry_decorator=AWSRetry.jittered_backoff())
matches = None
try:
if module.params.get('template_id'):
matches = ec2.describe_launch_templates(LaunchTemplateIds=[module.params.get('template_id')])
elif module.params.get('template_name'):
matches = ec2.describe_launch_templates(LaunchTemplateNames=[module.params.get('template_name')])
except is_boto3_error_code('InvalidLaunchTemplateName.NotFoundException') as e:
# no named template was found, return nothing/empty versions
return None, []
except is_boto3_error_code('InvalidLaunchTemplateId.Malformed') as e: # pylint: disable=duplicate-except
module.fail_json_aws(e, msg='Launch template with ID {0} is not a valid ID. It should start with `lt-....`'.format(
module.params.get('launch_template_id')))
except is_boto3_error_code('InvalidLaunchTemplateId.NotFoundException') as e: # pylint: disable=duplicate-except
module.fail_json_aws(
e, msg='Launch template with ID {0} could not be found, please supply a name '
'instead so that a new template can be created'.format(module.params.get('launch_template_id')))
except (ClientError, BotoCoreError, WaiterError) as e: # pylint: disable=duplicate-except
module.fail_json_aws(e, msg='Could not check existing launch templates. This may be an IAM permission problem.')
else:
template = matches['LaunchTemplates'][0]
template_id, template_version, template_default = template['LaunchTemplateId'], template['LatestVersionNumber'], template['DefaultVersionNumber']
try:
return template, ec2.describe_launch_template_versions(LaunchTemplateId=template_id)['LaunchTemplateVersions']
except (ClientError, BotoCoreError, WaiterError) as e:
module.fail_json_aws(e, msg='Could not find launch template versions for {0} (ID: {1}).'.format(template['LaunchTemplateName'], template_id))
def params_to_launch_data(module, template_params):
if template_params.get('tags'):
template_params['tag_specifications'] = [
{
'resource_type': r_type,
'tags': [
{'Key': k, 'Value': v} for k, v
in template_params['tags'].items()
]
}
for r_type in ('instance', 'volume')
]
del template_params['tags']
if module.params.get('iam_instance_profile'):
template_params['iam_instance_profile'] = determine_iam_role(module, module.params['iam_instance_profile'])
params = snake_dict_to_camel_dict(
dict((k, v) for k, v in template_params.items() if v is not None),
capitalize_first=True,
)
return params
def delete_template(module):
ec2 = module.client('ec2', retry_decorator=AWSRetry.jittered_backoff())
template, template_versions = existing_templates(module)
deleted_versions = []
if template or template_versions:
non_default_versions = [to_text(t['VersionNumber']) for t in template_versions if not t['DefaultVersion']]
if non_default_versions:
try:
v_resp = ec2.delete_launch_template_versions(
LaunchTemplateId=template['LaunchTemplateId'],
Versions=non_default_versions,
)
if v_resp['UnsuccessfullyDeletedLaunchTemplateVersions']:
module.warn('Failed to delete template versions {0} on launch template {1}'.format(
v_resp['UnsuccessfullyDeletedLaunchTemplateVersions'],
template['LaunchTemplateId'],
))
deleted_versions = [camel_dict_to_snake_dict(v) for v in v_resp['SuccessfullyDeletedLaunchTemplateVersions']]
except (ClientError, BotoCoreError) as e:
module.fail_json_aws(e, msg="Could not delete existing versions of the launch template {0}".format(template['LaunchTemplateId']))
try:
resp = ec2.delete_launch_template(
LaunchTemplateId=template['LaunchTemplateId'],
)
except (ClientError, BotoCoreError) as e:
module.fail_json_aws(e, msg="Could not delete launch template {0}".format(template['LaunchTemplateId']))
return {
'deleted_versions': deleted_versions,
'deleted_template': camel_dict_to_snake_dict(resp['LaunchTemplate']),
'changed': True,
}
else:
return {'changed': False}
def create_or_update(module, template_options):
ec2 = module.client('ec2', retry_decorator=AWSRetry.jittered_backoff())
template, template_versions = existing_templates(module)
out = {}
lt_data = params_to_launch_data(module, dict((k, v) for k, v in module.params.items() if k in template_options))
if not (template or template_versions):
# create a full new one
try:
resp = ec2.create_launch_template(
LaunchTemplateName=module.params['template_name'],
LaunchTemplateData=lt_data,
ClientToken=uuid4().hex,
aws_retry=True,
)
except (ClientError, BotoCoreError) as e:
module.fail_json_aws(e, msg="Couldn't create launch template")
template, template_versions = existing_templates(module)
out['changed'] = True
elif template and template_versions:
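# Idempotency check: if the requested launch data matches the most recent
# existing version, report no change instead of creating a new version.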
most_recent = sorted(template_versions, key=lambda x: x['VersionNumber'])[-1]
if lt_data == most_recent['LaunchTemplateData']:
out['changed'] = False
return out
try:
resp = ec2.create_launch_template_version(
LaunchTemplateId=template['LaunchTemplateId'],
LaunchTemplateData=lt_data,
ClientToken=uuid4().hex,
aws_retry=True,
)
if module.params.get('default_version') in (None, ''):
# no need to do anything, leave the existing version as default
pass
elif module.params.get('default_version') == 'latest':
set_default = ec2.modify_launch_template(
LaunchTemplateId=template['LaunchTemplateId'],
DefaultVersion=to_text(resp['LaunchTemplateVersion']['VersionNumber']),
ClientToken=uuid4().hex,
aws_retry=True,
)
else:
try:
int(module.params.get('default_version'))
except ValueError:
module.fail_json(msg='default_version param was not a valid integer, got "{0}"'.format(module.params.get('default_version')))
set_default = ec2.modify_launch_template(
LaunchTemplateId=template['LaunchTemplateId'],
DefaultVersion=to_text(int(module.params.get('default_version'))),
ClientToken=uuid4().hex,
aws_retry=True,
)
except (ClientError, BotoCoreError) as e:
module.fail_json_aws(e, msg="Couldn't create subsequent launch template version")
template, template_versions = existing_templates(module)
out['changed'] = True
return out
def format_module_output(module):
output = {}
template, template_versions = existing_templates(module)
template = camel_dict_to_snake_dict(template)
template_versions = [camel_dict_to_snake_dict(v) for v in template_versions]
for v in template_versions:
for ts in (v['launch_template_data'].get('tag_specifications') or []):
ts['tags'] = boto3_tag_list_to_ansible_dict(ts.pop('tags'))
output.update(dict(template=template, versions=template_versions))
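# NOTE: the keys populated below are 'default_template' and
# 'latest_template' (full version dicts); the documented integer return
# values 'default_version' and 'latest_version' are not set here.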
output['default_template'] = [
v for v in template_versions
if v.get('default_version')
][0]
output['latest_template'] = [
v for v in template_versions
if (
v.get('version_number') and
int(v['version_number']) == int(template['latest_version_number'])
)
][0]
return output
def main():
template_options = dict(
block_device_mappings=dict(
type='list',
options=dict(
device_name=dict(),
ebs=dict(
type='dict',
options=dict(
delete_on_termination=dict(type='bool'),
encrypted=dict(type='bool'),
iops=dict(type='int'),
kms_key_id=dict(),
snapshot_id=dict(),
volume_size=dict(type='int'),
volume_type=dict(),
),
),
no_device=dict(),
virtual_name=dict(),
),
),
cpu_options=dict(
type='dict',
options=dict(
core_count=dict(type='int'),
threads_per_core=dict(type='int'),
),
),
credit_specification=dict(
type='dict',
options=dict(
cpu_credits=dict(),
),
),
disable_api_termination=dict(type='bool'),
ebs_optimized=dict(type='bool'),
elastic_gpu_specifications=dict(
options=dict(type=dict()),
type='list',
),
iam_instance_profile=dict(),
image_id=dict(),
instance_initiated_shutdown_behavior=dict(choices=['stop', 'terminate']),
instance_market_options=dict(
type='dict',
options=dict(
market_type=dict(),
spot_options=dict(
type='dict',
options=dict(
block_duration_minutes=dict(type='int'),
instance_interruption_behavior=dict(choices=['hibernate', 'stop', 'terminate']),
max_price=dict(),
spot_instance_type=dict(choices=['one-time', 'persistent']),
),
),
),
),
instance_type=dict(),
kernel_id=dict(),
key_name=dict(),
monitoring=dict(
type='dict',
options=dict(
enabled=dict(type='bool')
),
),
network_interfaces=dict(
type='list',
options=dict(
associate_public_ip_address=dict(type='bool'),
delete_on_termination=dict(type='bool'),
description=dict(),
device_index=dict(type='int'),
groups=dict(type='list'),
ipv6_address_count=dict(type='int'),
ipv6_addresses=dict(type='list'),
network_interface_id=dict(),
private_ip_address=dict(),
subnet_id=dict(),
),
),
placement=dict(
options=dict(
affinity=dict(),
availability_zone=dict(),
group_name=dict(),
host_id=dict(),
tenancy=dict(),
),
type='dict',
),
ram_disk_id=dict(),
security_group_ids=dict(type='list'),
security_groups=dict(type='list'),
tags=dict(type='dict'),
user_data=dict(),
)
arg_spec = dict(
state=dict(choices=['present', 'absent'], default='present'),
template_name=dict(aliases=['name']),
template_id=dict(aliases=['id']),
default_version=dict(default='latest'),
)
arg_spec.update(template_options)
module = AnsibleAWSModule(
argument_spec=arg_spec,
required_one_of=[
('template_name', 'template_id')
],
supports_check_mode=True
)
if not module.boto3_at_least('1.6.0'):
module.fail_json(msg="ec2_launch_template requires boto3 >= 1.6.0")
for interface in (module.params.get('network_interfaces') or []):
if interface.get('ipv6_addresses'):
interface['ipv6_addresses'] = [{'ipv6_address': x} for x in interface['ipv6_addresses']]
if module.params.get('state') == 'present':
out = create_or_update(module, template_options)
out.update(format_module_output(module))
elif module.params.get('state') == 'absent':
out = delete_template(module)
else:
module.fail_json(msg='Unsupported value "{0}" for `state` parameter'.format(module.params.get('state')))
module.exit_json(**out)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,084 |
ec2_launch_template return values do not conform to documentation
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The ec2_launch_template documentation states that it provides two return values, other than the common return values. This includes:
Key | Returned | Description
-- | -- | --
default_version (integer) | when state=present | The version that will be used if only the template name is specified. Often this is the same as the latest version, but not always.
latest_version (integer) | when state=present | Latest available version of the launch template
However, the function `format_module_output`, which is [used](https://github.com/ansible/ansible/blob/272dceef4258fdfdc90281ce1d0b02236f16a797/lib/ansible/modules/cloud/amazon/ec2_launch_template.py#L499) to generate this module's output, stores default_template and latest_template, not default_version and latest_version.
```python
output['default_template'] = [
v for v in template_versions
if v.get('default_version')
][0]
output['latest_template'] = [
v for v in template_versions
if (
v.get('version_number') and
int(v['version_number']) == int(template['latest_version_number'])
)
][0]
return output
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_launch_template
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.8.4
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
1. Store results of ec2_launch_template
2. Examine the contents of the results, which does not conform to the documentation.
##### EXPECTED RESULTS
Either two integer key/values should be available, or the documentation should correctly indicate that the output includes default_template and latest_template.
##### ACTUAL RESULTS
default_template and latest_template exist in the results, default_version and latest_version do not.
|
https://github.com/ansible/ansible/issues/61084
|
https://github.com/ansible/ansible/pull/61279
|
791e9dabe3c1cb50a315d82fbb7252f4a38885f6
|
c40832af482c3cc7b0c291ead5228ca582593419
| 2019-08-22T02:29:04Z |
python
| 2019-12-18T20:53:57Z |
test/integration/targets/ec2_launch_template/playbooks/roles/ec2_launch_template/tasks/main.yml
|
---
# A Note about ec2 environment variable name preference:
# - EC2_URL -> AWS_URL
# - EC2_ACCESS_KEY -> AWS_ACCESS_KEY_ID -> AWS_ACCESS_KEY
# - EC2_SECRET_KEY -> AWS_SECRET_ACCESS_KEY -> AWS_SECRET_KEY
# - EC2_REGION -> AWS_REGION
#
# - include: ../../../../../setup_ec2/tasks/common.yml module_name: ec2_instance
- module_defaults:
group/aws:
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
security_token: "{{ security_token | default(omit) }}"
region: "{{ aws_region }}"
block:
- include_tasks: cpu_options.yml
- include_tasks: iam_instance_role.yml
always:
- debug:
msg: teardown goes here
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,084 |
ec2_launch_template return values do not conform to documentation
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The ec2_launch_template documentation states that it provides two return values, other than the common return values. This includes:
Key | Returned | Description
-- | -- | --
default_version (integer) | when state=present | The version that will be used if only the template name is specified. Often this is the same as the latest version, but not always.
latest_version (integer) | when state=present | Latest available version of the launch template
However, the function `format_module_output`, which is [used](https://github.com/ansible/ansible/blob/272dceef4258fdfdc90281ce1d0b02236f16a797/lib/ansible/modules/cloud/amazon/ec2_launch_template.py#L499) to generate this module's output, stores default_template and latest_template, not default_version and latest_version.
```python
output['default_template'] = [
v for v in template_versions
if v.get('default_version')
][0]
output['latest_template'] = [
v for v in template_versions
if (
v.get('version_number') and
int(v['version_number']) == int(template['latest_version_number'])
)
][0]
return output
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_launch_template
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
2.8.4
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
N/A
##### OS / ENVIRONMENT
N/A
##### STEPS TO REPRODUCE
1. Store results of ec2_launch_template
2. Examine the contents of the results, which does not conform to the documentation.
##### EXPECTED RESULTS
Either two integer key/values should be available, or the documentation should correctly indicate that the output includes default_template and latest_template.
##### ACTUAL RESULTS
default_template and latest_template exist in the results, default_version and latest_version do not.
|
https://github.com/ansible/ansible/issues/61084
|
https://github.com/ansible/ansible/pull/61279
|
791e9dabe3c1cb50a315d82fbb7252f4a38885f6
|
c40832af482c3cc7b0c291ead5228ca582593419
| 2019-08-22T02:29:04Z |
python
| 2019-12-18T20:53:57Z |
test/integration/targets/ec2_launch_template/playbooks/roles/ec2_launch_template/tasks/versions.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,746 |
redfish_config contains deprecated call to be removed in 2.10
|
##### SUMMARY
redfish_config contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/modules/remote_management/redfish/redfish_config.py:277:8: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/remote_management/redfish/redfish_config.py
```
##### ANSIBLE VERSION
```
2.10
```
|
https://github.com/ansible/ansible/issues/65746
|
https://github.com/ansible/ansible/pull/65894
|
d3b6db37549517b5d8234e04b247f01e2f9b49f0
|
973e36c6b69db9e473e72502c7a4a9ad2d9193e7
| 2019-12-11T20:46:48Z |
python
| 2019-12-19T06:09:28Z |
changelogs/fragments/65894-redfish-bios-attributes.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,746 |
redfish_config contains deprecated call to be removed in 2.10
|
##### SUMMARY
redfish_config contains a call to Display.deprecated or AnsibleModule.deprecate that is scheduled for removal
```
lib/ansible/modules/remote_management/redfish/redfish_config.py:277:8: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/remote_management/redfish/redfish_config.py
```
##### ANSIBLE VERSION
```
2.10
```
|
https://github.com/ansible/ansible/issues/65746
|
https://github.com/ansible/ansible/pull/65894
|
d3b6db37549517b5d8234e04b247f01e2f9b49f0
|
973e36c6b69db9e473e72502c7a4a9ad2d9193e7
| 2019-12-11T20:46:48Z |
python
| 2019-12-19T06:09:28Z |
docs/docsite/rst/porting_guides/porting_guide_2.10.rst
|
.. _porting_2.10_guide:
**************************
Ansible 2.10 Porting Guide
**************************
This section discusses the behavioral changes between Ansible 2.9 and Ansible 2.10.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `Ansible Changelog for 2.10 <https://github.com/ansible/ansible/blob/devel/changelogs/CHANGELOG-v2.10.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
No notable changes
Modules
=======
Modules removed
---------------
The following modules no longer exist:
* letsencrypt use :ref:`acme_certificate <acme_certificate_module>` instead.
Deprecation notices
-------------------
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ldap_attr use :ref:`ldap_attrs <ldap_attrs_module>` instead.
The following functionality will be removed in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`openssl_csr <openssl_csr_module>` module's option ``version`` no longer supports values other than ``1`` (the current only standardized CSR version).
* :ref:`docker_container <docker_container_module>`: the ``trust_image_content`` option will be removed. It has always been ignored by the module.
* :ref:`iam_managed_policy <iam_managed_policy_module>`: the ``fail_on_delete`` option will be removed. It has always been ignored by the module.
* :ref:`s3_lifecycle <s3_lifecycle_module>`: the ``requester_pays`` option will be removed. It has always been ignored by the module.
* :ref:`s3_sync <s3_sync_module>`: the ``retries`` option will be removed. It has always been ignored by the module.
* The return values ``err`` and ``out`` of :ref:`docker_stack <docker_stack_module>` have been deprecated. Use ``stdout`` and ``stderr`` from now on instead.
* :ref:`cloudformation <cloudformation_module>`: the ``template_format`` option will be removed. It has been ignored by the module since Ansible 2.3.
* :ref:`data_pipeline <data_pipeline_module>`: the ``version`` option will be removed. It has always been ignored by the module.
* :ref:`ec2_eip <ec2_eip_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.3.
* :ref:`ec2_key <ec2_key_module>`: the ``wait`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_key <ec2_key_module>`: the ``wait_timeout`` option will be removed. It has had no effect since Ansible 2.5.
* :ref:`ec2_lc <ec2_lc_module>`: the ``associate_public_ip_address`` option will be removed. It has always been ignored by the module.
* :ref:`iam_policy <iam_policy_module>`: the ``policy_document`` option will be removed. To maintain the existing behavior use the ``policy_json`` option and read the file with the ``lookup`` plugin.
The following functionality will change in Ansible 2.14. Please update your playbooks accordingly.
* The :ref:`docker_container <docker_container_module>` module has a new option, ``container_default_behavior``, whose default value will change from ``compatibility`` to ``no_defaults``. Set to an explicit value to avoid deprecation warnings.
* The :ref:`docker_container <docker_container_module>` module's ``network_mode`` option will be set by default to the name of the first network in ``networks`` if at least one network is given and ``networks_cli_compatible`` is ``true`` (will be default from Ansible 2.12 on). Set to an explicit value to avoid deprecation warnings if you specify networks and set ``networks_cli_compatible`` to ``true``. The current default (not specifying it) is equivalent to the value ``default``.
* :ref:`iam_policy <iam_policy_module>`: the default value for the ``skip_duplicates`` option will change from ``true`` to ``false``. To maintain the existing behavior explicitly set it to ``true``.
* :ref:`iam_role <iam_role_module>`: the ``purge_policies`` option (also known as ``purge_policy``) default value will change from ``true`` to ``false``.
* :ref:`elb_network_lb <elb_network_lb_module>`: the default behaviour for the ``state`` option will change from ``absent`` to ``present``. To maintain the existing behavior explicitly set state to ``absent``.
The following modules will be removed in Ansible 2.14. Please update your playbooks accordingly.
* ``vmware_dns_config`` use :ref:`vmware_host_dns <vmware_host_dns_module>` instead.
Noteworthy module changes
-------------------------
* :ref:`vmware_datastore_maintenancemode <vmware_datastore_maintenancemode_module>` now returns ``datastore_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_kernel_manager <vmware_host_kernel_manager_module>` now returns ``host_kernel_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_ntp <vmware_host_ntp_module>` now returns ``host_ntp_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_host_service_manager <vmware_host_service_manager_module>` now returns ``host_service_status`` instead of Ansible internal key ``results``.
* :ref:`vmware_tag <vmware_tag_module>` now returns ``tag_status`` instead of Ansible internal key ``results``.
* The deprecated ``recurse`` option in the :ref:`pacman <pacman_module>` module has been removed; use ``extra_args=--recursive`` instead (see the sketch after this list).
* :ref:`vmware_guest_custom_attributes <vmware_guest_custom_attributes_module>` module does not require VM name which was a required parameter for releases prior to Ansible 2.10.
* :ref:`zabbix_action <zabbix_action_module>` no longer requires ``esc_period`` and ``event_source`` arguments when ``state=absent``.
* :ref:`gitlab_user <gitlab_user_module>` no longer requires ``name``, ``email`` and ``password`` arguments when ``state=absent``.
* :ref:`win_pester <win_pester_module>` no longer runs all ``*.ps1`` files in the directory specified, due to it executing potentially unknown scripts. It will follow the default behaviour of only running tests for files that match ``*.tests.ps1``, which is built into Pester itself.
* :ref:`win_find <win_find_module>` has been refactored to better match the behaviour of the ``find`` module. Here is what has changed:
* When the directory specified by ``paths`` does not exist or is a file, it will no longer fail and will just warn the user
    * Junction points are no longer reported as ``islnk``; use ``isjunction`` to properly report these files. This behaviour matches the :ref:`win_stat <win_stat_module>` module
    * Directories no longer return a ``size``; this matches the ``stat`` and ``find`` behaviour and was removed due to the difficulty of correctly reporting the size of a directory
Plugins
=======
Noteworthy plugin changes
-------------------------
* The ``hashi_vault`` lookup plugin now returns the latest version when using the KV v2 secrets engine. Previously, it returned all versions of the secret which required additional steps to extract and filter the desired version.
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,746 |
redfish_config contains deprecated call to be removed in 2.10
|
##### SUMMARY
redfish_config contains call to Display.deprecated or AnsibleModule.deprecate and is scheduled for removal
```
lib/ansible/modules/remote_management/redfish/redfish_config.py:277:8: ansible-deprecated-version: Deprecated version ('2.10') found in call to Display.deprecated or AnsibleModule.deprecate
```
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
```
lib/ansible/modules/remote_management/redfish/redfish_config.py
```
##### ANSIBLE VERSION
```
2.10
```
|
https://github.com/ansible/ansible/issues/65746
|
https://github.com/ansible/ansible/pull/65894
|
d3b6db37549517b5d8234e04b247f01e2f9b49f0
|
973e36c6b69db9e473e72502c7a4a9ad2d9193e7
| 2019-12-11T20:46:48Z |
python
| 2019-12-19T06:09:28Z |
lib/ansible/modules/remote_management/redfish/redfish_config.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2017-2018 Dell EMC Inc.
# GNU General Public License v3.0+ (see LICENSE or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'metadata_version': '1.1'}
DOCUMENTATION = '''
---
module: redfish_config
version_added: "2.7"
short_description: Manages Out-Of-Band controllers using Redfish APIs
description:
- Builds Redfish URIs locally and sends them to remote OOB controllers to
set or update a configuration attribute.
- Manages BIOS configuration settings.
- Manages OOB controller configuration settings.
options:
category:
required: true
description:
- Category to execute on OOB controller
type: str
command:
required: true
description:
- List of commands to execute on OOB controller
type: list
baseuri:
required: true
description:
- Base URI of OOB controller
type: str
username:
required: true
description:
- User for authentication with OOB controller
type: str
version_added: "2.8"
password:
required: true
description:
- Password for authentication with OOB controller
type: str
bios_attribute_name:
required: false
description:
- name of BIOS attr to update (deprecated - use bios_attributes instead)
default: 'null'
type: str
version_added: "2.8"
bios_attribute_value:
required: false
description:
- value of BIOS attr to update (deprecated - use bios_attributes instead)
default: 'null'
type: str
version_added: "2.8"
bios_attributes:
required: false
description:
- dictionary of BIOS attributes to update
default: {}
type: dict
version_added: "2.10"
timeout:
description:
- Timeout in seconds for URL requests to OOB controller
default: 10
type: int
version_added: "2.8"
boot_order:
required: false
description:
- list of BootOptionReference strings specifying the BootOrder
default: []
type: list
version_added: "2.10"
network_protocols:
required: false
description:
- setting dict of manager services to update
type: dict
version_added: "2.10"
resource_id:
required: false
description:
- The ID of the System, Manager or Chassis to modify
type: str
version_added: "2.10"
nic_addr:
required: false
description:
- EthernetInterface Address string on OOB controller
default: 'null'
type: str
version_added: '2.10'
nic_config:
required: false
description:
- setting dict of EthernetInterface on OOB controller
type: dict
version_added: '2.10'
author: "Jose Delarosa (@jose-delarosa)"
'''
EXAMPLES = '''
- name: Set BootMode to UEFI
redfish_config:
category: Systems
command: SetBiosAttributes
resource_id: 437XR1138R2
bios_attributes:
BootMode: "Uefi"
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Set multiple BootMode attributes
redfish_config:
category: Systems
command: SetBiosAttributes
resource_id: 437XR1138R2
bios_attributes:
BootMode: "Bios"
OneTimeBootMode: "Enabled"
BootSeqRetry: "Enabled"
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Enable PXE Boot for NIC1 using deprecated options
redfish_config:
category: Systems
command: SetBiosAttributes
resource_id: 437XR1138R2
bios_attribute_name: PxeDev1EnDis
bios_attribute_value: Enabled
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Set BIOS default settings with a timeout of 20 seconds
redfish_config:
category: Systems
command: SetBiosDefaultSettings
resource_id: 437XR1138R2
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
timeout: 20
- name: Set boot order
redfish_config:
category: Systems
command: SetBootOrder
boot_order:
- Boot0002
- Boot0001
- Boot0000
- Boot0003
- Boot0004
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Set boot order to the default
redfish_config:
category: Systems
command: SetDefaultBootOrder
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Set Manager Network Protocols
redfish_config:
category: Manager
command: SetNetworkProtocols
network_protocols:
SNMP:
ProtocolEnabled: True
Port: 161
HTTP:
ProtocolEnabled: False
Port: 8080
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
- name: Set Manager NIC
redfish_config:
category: Manager
command: SetManagerNic
nic_config:
DHCPv4:
DHCPEnabled: False
IPv4StaticAddresses:
Address: 192.168.1.3
Gateway: 192.168.1.1
SubnetMask: 255.255.255.0
baseuri: "{{ baseuri }}"
username: "{{ username }}"
password: "{{ password }}"
'''
RETURN = '''
msg:
description: Message with action result or error description
returned: always
type: str
sample: "Action was successful"
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.redfish_utils import RedfishUtils
from ansible.module_utils._text import to_native
# More will be added as module features are expanded
CATEGORY_COMMANDS_ALL = {
"Systems": ["SetBiosDefaultSettings", "SetBiosAttributes", "SetBootOrder",
"SetDefaultBootOrder"],
"Manager": ["SetNetworkProtocols", "SetManagerNic"]
}
def main():
result = {}
module = AnsibleModule(
argument_spec=dict(
category=dict(required=True),
command=dict(required=True, type='list'),
baseuri=dict(required=True),
username=dict(required=True),
password=dict(required=True, no_log=True),
bios_attribute_name=dict(default='null'),
bios_attribute_value=dict(default='null'),
bios_attributes=dict(type='dict', default={}),
timeout=dict(type='int', default=10),
boot_order=dict(type='list', elements='str', default=[]),
network_protocols=dict(
type='dict',
default={}
),
resource_id=dict(),
nic_addr=dict(default='null'),
nic_config=dict(
type='dict',
default={}
)
),
supports_check_mode=False
)
category = module.params['category']
command_list = module.params['command']
# admin credentials used for authentication
creds = {'user': module.params['username'],
'pswd': module.params['password']}
# timeout
timeout = module.params['timeout']
# BIOS attributes to update
bios_attributes = module.params['bios_attributes']
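    # NOTE: the deprecation below (version='2.10') is the Display.deprecated /
    # AnsibleModule.deprecate call flagged by the ansible-deprecated-version
    # sanity test in the issue above.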
if module.params['bios_attribute_name'] != 'null':
bios_attributes[module.params['bios_attribute_name']] = module.params[
'bios_attribute_value']
module.deprecate(msg='The bios_attribute_name/bios_attribute_value '
'options are deprecated. Use bios_attributes instead',
version='2.10')
# boot order
boot_order = module.params['boot_order']
# System, Manager or Chassis ID to modify
resource_id = module.params['resource_id']
# manager nic
nic_addr = module.params['nic_addr']
nic_config = module.params['nic_config']
# Build root URI
root_uri = "https://" + module.params['baseuri']
rf_utils = RedfishUtils(creds, root_uri, timeout, module,
resource_id=resource_id, data_modification=True)
# Check that Category is valid
if category not in CATEGORY_COMMANDS_ALL:
module.fail_json(msg=to_native("Invalid Category '%s'. Valid Categories = %s" % (category, CATEGORY_COMMANDS_ALL.keys())))
# Check that all commands are valid
for cmd in command_list:
# Fail if even one command given is invalid
if cmd not in CATEGORY_COMMANDS_ALL[category]:
module.fail_json(msg=to_native("Invalid Command '%s'. Valid Commands = %s" % (cmd, CATEGORY_COMMANDS_ALL[category])))
# Organize by Categories / Commands
if category == "Systems":
# execute only if we find a System resource
result = rf_utils._find_systems_resource()
if result['ret'] is False:
module.fail_json(msg=to_native(result['msg']))
for command in command_list:
if command == "SetBiosDefaultSettings":
result = rf_utils.set_bios_default_settings()
elif command == "SetBiosAttributes":
result = rf_utils.set_bios_attributes(bios_attributes)
elif command == "SetBootOrder":
result = rf_utils.set_boot_order(boot_order)
elif command == "SetDefaultBootOrder":
result = rf_utils.set_default_boot_order()
elif category == "Manager":
# execute only if we find a Manager service resource
result = rf_utils._find_managers_resource()
if result['ret'] is False:
module.fail_json(msg=to_native(result['msg']))
for command in command_list:
if command == "SetNetworkProtocols":
result = rf_utils.set_network_protocols(module.params['network_protocols'])
elif command == "SetManagerNic":
result = rf_utils.set_manager_nic(nic_addr, nic_config)
# Return data back or fail with proper message
if result['ret'] is True:
module.exit_json(changed=result['changed'], msg=to_native(result['msg']))
else:
module.fail_json(msg=to_native(result['msg']))
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 63,844 |
azure_rm_virtualmachine: VM OS disk policy gets updated even when not specified
|
##### SUMMARY
If you use the __azure_rm_virtualmachine__ module to manage a VM powerstate and the os_disk_caching attribute is not specified, the module sets *ReadOnly* as the default value.
Unfortunately, if the existing VM has a different OS disk cache type, e.g. *ReadWrite*, the module attempts to update the VM, even though no update is expected since os_disk_caching was never specified.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
* lib/ansible/modules/cloud/azure/azure_rm_virtualmachine.py
##### ANSIBLE VERSION
```paste below
ansible 2.8.5
config file = /home/***/ansible/ansible_dev.cfg
configured module search path = [u'/home/***/ansible/library']
ansible python module location = /home/***/ansible/.venv/lib/python2.7/site-packages/ansible
executable location = /home/***/ansible/.venv/bin/ansible
python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
##### CONFIGURATION
```paste below
DEFAULT_BECOME_METHOD(/home/***/ansible/ansible_dev.cfg) = sudo
DEFAULT_BECOME_USER(/home/***/ansible/ansible_dev.cfg) = root
DEFAULT_GATHER_TIMEOUT(/home/***/ansible/ansible_dev.cfg) = 30
DEFAULT_HOST_LIST(/home/***/ansible/ansible_dev.cfg) = [u'/home/***/ansible/inventory/azure-dev']
DEFAULT_LOG_PATH(/home/***/ansible/ansible_dev.cfg) = /home/***/ansible.log
DEFAULT_MODULE_PATH(/home/***/ansible/ansible_dev.cfg) = [u'/home/***/ansible/library']
DEFAULT_ROLES_PATH(/home/***/ansible/ansible_dev.cfg) = [u'/home/***/ansible/roles']
DEFAULT_VAULT_IDENTITY_LIST(/home/***/ansible/ansible_dev.cfg) = []
HOST_KEY_CHECKING(/home/***/ansible/ansible_dev.cfg) = False
INVENTORY_ENABLED(/home/***/ansible/ansible_dev.cfg) = [u'yaml', u'azure_rm', u'script']
RETRY_FILES_ENABLED(/home/***/ansible/ansible_dev.cfg) = False
```
##### OS / ENVIRONMENT
Controller: RHEL 7.4
Target VM: RHEL 7.4
##### STEPS TO REPRODUCE
1. Create VM with "ReadWrite" as disk cache type
1. Try to power it off:
```yaml
- name: Stop Azure VM
azure_rm_virtualmachine:
name: "{{ vm_name }}"
started: no
state: present
resource_group: "***"
subscription_id: "***"
tenant: "***"
client_id: "***"
secret: "***"
```
##### EXPECTED RESULTS
The VM is stopped and the OS disk caching is not updated.
##### ACTUAL RESULTS
The VM OS Disk caching is updated and the VM is stopped.
When debugging the code, there is a difference appended to the __self.differences__ list:
* OS Disk caching
The workaround right now is to specify the __os_disk_caching__ attribute with the OS disk caching type that matches the VM's existing one.
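A minimal sketch of that workaround (illustrative values; ``ReadWrite`` must match the cache type the VM was created with):
```yaml
- name: Stop Azure VM without triggering an OS disk update
  azure_rm_virtualmachine:
    name: "{{ vm_name }}"
    started: no
    state: present
    resource_group: "***"
    # explicitly match the existing VM's cache type so no difference is detected
    os_disk_caching: ReadWrite
```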
|
https://github.com/ansible/ansible/issues/63844
|
https://github.com/ansible/ansible/pull/65601
|
ee5822597038ab02f2587081366f8a146d6f227d
|
a168e73713f896b75487ce22306490de9ed2b3ce
| 2019-10-23T08:36:33Z |
python
| 2019-12-19T07:48:38Z |
lib/ansible/modules/cloud/azure/azure_rm_virtualmachine.py
|
#!/usr/bin/python
#
# Copyright (c) 2016 Matt Davis, <[email protected]>
# Chris Houseknecht, <[email protected]>
# Copyright (c) 2018 James E. King, III (@jeking3) <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: azure_rm_virtualmachine
version_added: "2.1"
short_description: Manage Azure virtual machines
description:
- Manage and configure virtual machines (VMs) and associated resources on Azure.
- Requires a resource group containing at least one virtual network with at least one subnet.
- Supports images from the Azure Marketplace, which can be discovered with M(azure_rm_virtualmachineimage_facts).
- Supports custom images since Ansible 2.5.
- To use I(custom_data) on a Linux image, the image must have cloud-init enabled. If cloud-init is not enabled, I(custom_data) is ignored.
options:
resource_group:
description:
- Name of the resource group containing the VM.
required: true
name:
description:
- Name of the VM.
required: true
custom_data:
description:
- Data made available to the VM and used by C(cloud-init).
- Only used on Linux images with C(cloud-init) enabled.
- Consult U(https://docs.microsoft.com/en-us/azure/virtual-machines/linux/using-cloud-init#cloud-init-overview) for cloud-init ready images.
- To enable cloud-init on a Linux image, follow U(https://docs.microsoft.com/en-us/azure/virtual-machines/linux/cloudinit-prepare-custom-image).
version_added: "2.5"
state:
description:
- State of the VM.
- Set to C(present) to create a VM with the configuration specified by other options, or to update the configuration of an existing VM.
- Set to C(absent) to remove a VM.
- Does not affect power state. Use I(started)/I(allocated)/I(restarted) parameters to change the power state of a VM.
default: present
choices:
- absent
- present
started:
description:
- Whether the VM is started or stopped.
            - Set to C(true) with I(state=present) to start the VM.
- Set to C(false) to stop the VM.
default: true
type: bool
allocated:
description:
- Whether the VM is allocated or deallocated, only useful with I(state=present).
default: True
type: bool
generalized:
description:
- Whether the VM is generalized or not.
- Set to C(true) with I(state=present) to generalize the VM.
- Generalizing a VM is irreversible.
type: bool
version_added: "2.8"
restarted:
description:
- Set to C(true) with I(state=present) to restart a running VM.
type: bool
location:
description:
- Valid Azure location for the VM. Defaults to location of the resource group.
short_hostname:
description:
- Name assigned internally to the host. On a Linux VM this is the name returned by the C(hostname) command.
- When creating a VM, short_hostname defaults to I(name).
vm_size:
description:
- A valid Azure VM size value. For example, C(Standard_D4).
- Choices vary depending on the subscription and location. Check your subscription for available choices.
- Required when creating a VM.
admin_username:
description:
- Admin username used to access the VM after it is created.
- Required when creating a VM.
admin_password:
description:
- Password for the admin username.
- Not required if the I(os_type=Linux) and SSH password authentication is disabled by setting I(ssh_password_enabled=false).
ssh_password_enabled:
description:
- Whether to enable or disable SSH passwords.
- When I(os_type=Linux), set to C(false) to disable SSH password authentication and require use of SSH keys.
default: true
type: bool
ssh_public_keys:
description:
- For I(os_type=Linux) provide a list of SSH keys.
- Accepts a list of dicts where each dictionary contains two keys, I(path) and I(key_data).
- Set I(path) to the default location of the authorized_keys files. For example, I(path=/home/<admin username>/.ssh/authorized_keys).
- Set I(key_data) to the actual value of the public key.
image:
description:
- The image used to build the VM.
- For custom images, the name of the image. To narrow the search to a specific resource group, a dict with the keys I(name) and I(resource_group).
- For Marketplace images, a dict with the keys I(publisher), I(offer), I(sku), and I(version).
- Set I(version=latest) to get the most recent version of a given image.
required: true
availability_set:
description:
- Name or ID of an existing availability set to add the VM to. The I(availability_set) should be in the same resource group as VM.
version_added: "2.5"
storage_account_name:
description:
- Name of a storage account that supports creation of VHD blobs.
- If not specified for a new VM, a new storage account named <vm name>01 will be created using storage type C(Standard_LRS).
aliases:
- storage_account
storage_container_name:
description:
- Name of the container to use within the storage account to store VHD blobs.
- If not specified, a default container will be created.
default: vhds
aliases:
- storage_container
storage_blob_name:
description:
- Name of the storage blob used to hold the OS disk image of the VM.
- Must end with '.vhd'.
- If not specified, defaults to the VM name + '.vhd'.
aliases:
- storage_blob
managed_disk_type:
description:
- Managed OS disk type.
- Create OS disk with managed disk if defined.
- If not defined, the OS disk will be created with virtual hard disk (VHD).
choices:
- Standard_LRS
- StandardSSD_LRS
- Premium_LRS
version_added: "2.4"
os_disk_name:
description:
- OS disk name.
version_added: "2.8"
os_disk_caching:
description:
- Type of OS disk caching.
choices:
- ReadOnly
- ReadWrite
default: ReadOnly
aliases:
- disk_caching
os_disk_size_gb:
description:
- Type of OS disk size in GB.
version_added: "2.7"
os_type:
description:
- Base type of operating system.
choices:
- Windows
- Linux
default: Linux
data_disks:
description:
- Describes list of data disks.
            - Use M(azure_rm_manageddisk) to manage the specific disk.
version_added: "2.4"
suboptions:
lun:
description:
- The logical unit number for data disk.
- This value is used to identify data disks within the VM and therefore must be unique for each data disk attached to a VM.
required: true
version_added: "2.4"
disk_size_gb:
description:
- The initial disk size in GB for blank data disks.
- This value cannot be larger than C(1023) GB.
- Size can be changed only when the virtual machine is deallocated.
                    - Not used when I(managed_disk_id) is defined.
version_added: "2.4"
managed_disk_type:
description:
- Managed data disk type.
- Only used when OS disk created with managed disk.
choices:
- Standard_LRS
- StandardSSD_LRS
- Premium_LRS
version_added: "2.4"
storage_account_name:
description:
- Name of an existing storage account that supports creation of VHD blobs.
                    - If not specified for a new VM, a new storage account whose name begins with I(name) will be created using storage type C(Standard_LRS).
- Only used when OS disk created with virtual hard disk (VHD).
- Used when I(managed_disk_type) not defined.
- Cannot be updated unless I(lun) updated.
version_added: "2.4"
storage_container_name:
description:
- Name of the container to use within the storage account to store VHD blobs.
                    - If no name is specified a default container named 'vhds' will be created.
- Only used when OS disk created with virtual hard disk (VHD).
- Used when I(managed_disk_type) not defined.
- Cannot be updated unless I(lun) updated.
default: vhds
version_added: "2.4"
storage_blob_name:
description:
- Name of the storage blob used to hold the OS disk image of the VM.
- Must end with '.vhd'.
                    - Defaults to the I(name) + timestamp + I(lun) + '.vhd'.
- Only used when OS disk created with virtual hard disk (VHD).
- Used when I(managed_disk_type) not defined.
- Cannot be updated unless I(lun) updated.
version_added: "2.4"
caching:
description:
- Type of data disk caching.
choices:
- ReadOnly
- ReadWrite
default: ReadOnly
version_added: "2.4"
public_ip_allocation_method:
description:
- Allocation method for the public IP of the VM.
- Used only if a network interface is not specified.
- When set to C(Dynamic), the public IP address may change any time the VM is rebooted or power cycled.
- The C(Disabled) choice was added in Ansible 2.6.
choices:
- Dynamic
- Static
- Disabled
default: Static
aliases:
- public_ip_allocation
open_ports:
description:
- List of ports to open in the security group for the VM, when a security group and network interface are created with a VM.
- For Linux hosts, defaults to allowing inbound TCP connections to port 22.
- For Windows hosts, defaults to opening ports 3389 and 5986.
network_interface_names:
description:
- Network interface names to add to the VM.
- Can be a string of name or resource ID of the network interface.
- Can be a dict containing I(resource_group) and I(name) of the network interface.
- If a network interface name is not provided when the VM is created, a default network interface will be created.
- To create a new network interface, at least one Virtual Network with one Subnet must exist.
type: list
aliases:
- network_interfaces
virtual_network_resource_group:
description:
- The resource group to use when creating a VM with another resource group's virtual network.
version_added: "2.4"
virtual_network_name:
description:
- The virtual network to use when creating a VM.
- If not specified, a new network interface will be created and assigned to the first virtual network found in the resource group.
- Use with I(virtual_network_resource_group) to place the virtual network in another resource group.
aliases:
- virtual_network
subnet_name:
description:
- Subnet for the VM.
- Defaults to the first subnet found in the virtual network or the subnet of the I(network_interface_name), if provided.
- If the subnet is in another resource group, specify the resource group with I(virtual_network_resource_group).
aliases:
- subnet
remove_on_absent:
description:
- Associated resources to remove when removing a VM using I(state=absent).
- To remove all resources related to the VM being removed, including auto-created resources, set to C(all).
- To remove only resources that were automatically created while provisioning the VM being removed, set to C(all_autocreated).
- To remove only specific resources, set to C(network_interfaces), C(virtual_storage) or C(public_ips).
- Any other input will be ignored.
type: list
default: ['all']
plan:
description:
- Third-party billing plan for the VM.
version_added: "2.5"
type: dict
suboptions:
name:
description:
- Billing plan name.
required: true
product:
description:
- Product name.
required: true
publisher:
description:
- Publisher offering the plan.
required: true
promotion_code:
description:
- Optional promotion code.
accept_terms:
description:
- Accept terms for Marketplace images that require it.
- Only Azure service admin/account admin users can purchase images from the Marketplace.
- Only valid when a I(plan) is specified.
type: bool
default: false
version_added: "2.7"
zones:
description:
- A list of Availability Zones for your VM.
type: list
version_added: "2.8"
license_type:
description:
- On-premise license for the image or disk.
- Only used for images that contain the Windows Server operating system.
- To remove all license type settings, set to the string C(None).
version_added: "2.8"
choices:
- Windows_Server
- Windows_Client
vm_identity:
description:
- Identity for the VM.
version_added: "2.8"
choices:
- SystemAssigned
winrm:
description:
- List of Windows Remote Management configurations of the VM.
version_added: "2.8"
suboptions:
protocol:
description:
- The protocol of the winrm listener.
required: true
choices:
- http
- https
source_vault:
description:
- The relative URL of the Key Vault containing the certificate.
certificate_url:
description:
- The URL of a certificate that has been uploaded to Key Vault as a secret.
certificate_store:
description:
- The certificate store on the VM to which the certificate should be added.
- The specified certificate store is implicitly in the LocalMachine account.
boot_diagnostics:
description:
- Manage boot diagnostics settings for a VM.
- Boot diagnostics includes a serial console and remote console screenshots.
version_added: '2.9'
suboptions:
enabled:
description:
- Flag indicating if boot diagnostics are enabled.
required: true
type: bool
storage_account:
description:
- The name of an existing storage account to use for boot diagnostics.
- If not specified, uses I(storage_account_name) defined one level up.
- If storage account is not specified anywhere, and C(enabled) is C(true), a default storage account is created for boot diagnostics data.
required: false
extends_documentation_fragment:
- azure
- azure_tags
author:
- Chris Houseknecht (@chouseknecht)
- Matt Davis (@nitzmahone)
- Christopher Perrin (@cperrin88)
- James E. King III (@jeking3)
'''
EXAMPLES = '''
- name: Create VM with defaults
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm10
admin_username: chouseknecht
admin_password: <your password here>
image:
offer: CentOS
publisher: OpenLogic
sku: '7.1'
version: latest
- name: Create an availability set for managed disk vm
azure_rm_availabilityset:
name: avs-managed-disk
resource_group: myResourceGroup
platform_update_domain_count: 5
platform_fault_domain_count: 2
sku: Aligned
- name: Create a VM with managed disk
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: vm-managed-disk
admin_username: adminUser
availability_set: avs-managed-disk
managed_disk_type: Standard_LRS
image:
offer: CoreOS
publisher: CoreOS
sku: Stable
version: latest
vm_size: Standard_D4
- name: Create a VM with existing storage account and NIC
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm002
vm_size: Standard_D4
storage_account: testaccount001
admin_username: adminUser
ssh_public_keys:
- path: /home/adminUser/.ssh/authorized_keys
        key_data: < insert your ssh public key here... >
network_interfaces: testvm001
image:
offer: CentOS
publisher: OpenLogic
sku: '7.1'
version: latest
- name: Create a VM with OS and multiple data managed disks
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm001
vm_size: Standard_D4
managed_disk_type: Standard_LRS
admin_username: adminUser
ssh_public_keys:
- path: /home/adminUser/.ssh/authorized_keys
        key_data: < insert your ssh public key here... >
image:
offer: CoreOS
publisher: CoreOS
sku: Stable
version: latest
data_disks:
- lun: 0
managed_disk_id: "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Compute/disks/myDisk"
- lun: 1
disk_size_gb: 128
managed_disk_type: Premium_LRS
- name: Create a VM with OS and multiple data storage accounts
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm001
vm_size: Standard_DS1_v2
admin_username: adminUser
ssh_password_enabled: false
ssh_public_keys:
- path: /home/adminUser/.ssh/authorized_keys
        key_data: < insert your ssh public key here... >
network_interfaces: testvm001
storage_container: osdisk
storage_blob: osdisk.vhd
boot_diagnostics:
enabled: yes
image:
offer: CoreOS
publisher: CoreOS
sku: Stable
version: latest
data_disks:
- lun: 0
disk_size_gb: 64
storage_container_name: datadisk1
storage_blob_name: datadisk1.vhd
- lun: 1
disk_size_gb: 128
storage_container_name: datadisk2
storage_blob_name: datadisk2.vhd
- name: Create a VM with a custom image
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm001
vm_size: Standard_DS1_v2
admin_username: adminUser
admin_password: password01
image: customimage001
- name: Create a VM with a custom image from a particular resource group
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm001
vm_size: Standard_DS1_v2
admin_username: adminUser
admin_password: password01
image:
name: customimage001
resource_group: myResourceGroup
- name: Create a VM with an image id
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm001
vm_size: Standard_DS1_v2
admin_username: adminUser
admin_password: password01
image:
id: '{{image_id}}'
- name: Create VM with specified OS disk size
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: big-os-disk
admin_username: chouseknecht
admin_password: <your password here>
os_disk_size_gb: 512
image:
offer: CentOS
publisher: OpenLogic
sku: '7.1'
version: latest
- name: Create VM with OS and Plan, accepting the terms
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: f5-nva
admin_username: chouseknecht
admin_password: <your password here>
image:
publisher: f5-networks
offer: f5-big-ip-best
sku: f5-bigip-virtual-edition-200m-best-hourly
version: latest
plan:
name: f5-bigip-virtual-edition-200m-best-hourly
product: f5-big-ip-best
publisher: f5-networks
- name: Power Off
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm002
started: no
- name: Deallocate
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm002
allocated: no
- name: Power On
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm002
- name: Restart
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm002
restarted: yes
- name: Create a VM with an Availability Zone
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm001
vm_size: Standard_DS1_v2
admin_username: adminUser
admin_password: password01
image: customimage001
zones: [1]
- name: Remove a VM and all resources that were autocreated
azure_rm_virtualmachine:
resource_group: myResourceGroup
name: testvm002
remove_on_absent: all_autocreated
state: absent
'''
RETURN = '''
powerstate:
description:
- Indicates if the state is C(running), C(stopped), C(deallocated), C(generalized).
returned: always
type: str
sample: running
deleted_vhd_uris:
description:
- List of deleted Virtual Hard Disk URIs.
returned: 'on delete'
type: list
sample: ["https://testvm104519.blob.core.windows.net/vhds/testvm10.vhd"]
deleted_network_interfaces:
description:
- List of deleted NICs.
returned: 'on delete'
type: list
sample: ["testvm1001"]
deleted_public_ips:
description:
- List of deleted public IP address names.
returned: 'on delete'
type: list
sample: ["testvm1001"]
azure_vm:
description:
- Facts about the current state of the object. Note that facts are not part of the registered output but available directly.
returned: always
type: dict
sample: {
"properties": {
"availabilitySet": {
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Compute/availabilitySets/MYAVAILABILITYSET"
},
"hardwareProfile": {
"vmSize": "Standard_D1"
},
"instanceView": {
"disks": [
{
"name": "testvm10.vhd",
"statuses": [
{
"code": "ProvisioningState/succeeded",
"displayStatus": "Provisioning succeeded",
"level": "Info",
"time": "2016-03-30T07:11:16.187272Z"
}
]
}
],
"statuses": [
{
"code": "ProvisioningState/succeeded",
"displayStatus": "Provisioning succeeded",
"level": "Info",
"time": "2016-03-30T20:33:38.946916Z"
},
{
"code": "PowerState/running",
"displayStatus": "VM running",
"level": "Info"
}
],
"vmAgent": {
"extensionHandlers": [],
"statuses": [
{
"code": "ProvisioningState/succeeded",
"displayStatus": "Ready",
"level": "Info",
"message": "GuestAgent is running and accepting new configurations.",
"time": "2016-03-30T20:31:16.000Z"
}
],
"vmAgentVersion": "WALinuxAgent-2.0.16"
}
},
"networkProfile": {
"networkInterfaces": [
{
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Network/networkInterfaces/testvm10_NIC01",
"name": "testvm10_NIC01",
"properties": {
"dnsSettings": {
"appliedDnsServers": [],
"dnsServers": []
},
"enableIPForwarding": false,
"ipConfigurations": [
{
"etag": 'W/"041c8c2a-d5dd-4cd7-8465-9125cfbe2cf8"',
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Network/networkInterfaces/testvm10_NIC01/ipConfigurations/default",
"name": "default",
"properties": {
"privateIPAddress": "10.10.0.5",
"privateIPAllocationMethod": "Dynamic",
"provisioningState": "Succeeded",
"publicIPAddress": {
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Network/publicIPAddresses/testvm10_PIP01",
"name": "testvm10_PIP01",
"properties": {
"idleTimeoutInMinutes": 4,
"ipAddress": "13.92.246.197",
"ipConfiguration": {
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Network/networkInterfaces/testvm10_NIC01/ipConfigurations/default"
},
"provisioningState": "Succeeded",
"publicIPAllocationMethod": "Static",
"resourceGuid": "3447d987-ca0d-4eca-818b-5dddc0625b42"
}
}
}
}
],
"macAddress": "00-0D-3A-12-AA-14",
"primary": true,
"provisioningState": "Succeeded",
"resourceGuid": "10979e12-ccf9-42ee-9f6d-ff2cc63b3844",
"virtualMachine": {
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroup/myResourceGroup/providers/Microsoft.Compute/virtualMachines/testvm10"
}
}
}
]
},
"osProfile": {
"adminUsername": "chouseknecht",
"computerName": "test10",
"linuxConfiguration": {
"disablePasswordAuthentication": false
},
"secrets": []
},
"provisioningState": "Succeeded",
"storageProfile": {
"dataDisks": [
{
"caching": "ReadWrite",
"createOption": "empty",
"diskSizeGB": 64,
"lun": 0,
"name": "datadisk1.vhd",
"vhd": {
"uri": "https://testvm10sa1.blob.core.windows.net/datadisk/datadisk1.vhd"
}
}
],
"imageReference": {
"offer": "CentOS",
"publisher": "OpenLogic",
"sku": "7.1",
"version": "7.1.20160308"
},
"osDisk": {
"caching": "ReadOnly",
"createOption": "fromImage",
"name": "testvm10.vhd",
"osType": "Linux",
"vhd": {
"uri": "https://testvm10sa1.blob.core.windows.net/vhds/testvm10.vhd"
}
}
}
},
"type": "Microsoft.Compute/virtualMachines"
}
''' # NOQA
import base64
import random
import re
try:
from msrestazure.azure_exceptions import CloudError
from msrestazure.tools import parse_resource_id
from msrest.polling import LROPoller
except ImportError:
# This is handled in azure_rm_common
pass
from ansible.module_utils.basic import to_native, to_bytes
from ansible.module_utils.azure_rm_common import AzureRMModuleBase, azure_id_to_dict, normalize_location_name, format_resource_id
AZURE_OBJECT_CLASS = 'VirtualMachine'
AZURE_ENUM_MODULES = ['azure.mgmt.compute.models']
def extract_names_from_blob_uri(blob_uri, storage_suffix):
# HACK: ditch this once python SDK supports get by URI
m = re.match(r'^https?://(?P<accountname>[^.]+)\.blob\.{0}/'
r'(?P<containername>[^/]+)/(?P<blobname>.+)$'.format(storage_suffix), blob_uri)
if not m:
raise Exception("unable to parse blob uri '%s'" % blob_uri)
extracted_names = m.groupdict()
return extracted_names
class AzureRMVirtualMachine(AzureRMModuleBase):
def __init__(self):
self.module_arg_spec = dict(
resource_group=dict(type='str', required=True),
name=dict(type='str', required=True),
custom_data=dict(type='str'),
state=dict(choices=['present', 'absent'], default='present', type='str'),
location=dict(type='str'),
short_hostname=dict(type='str'),
vm_size=dict(type='str'),
admin_username=dict(type='str'),
admin_password=dict(type='str', no_log=True),
ssh_password_enabled=dict(type='bool', default=True),
ssh_public_keys=dict(type='list'),
image=dict(type='raw'),
availability_set=dict(type='str'),
storage_account_name=dict(type='str', aliases=['storage_account']),
storage_container_name=dict(type='str', aliases=['storage_container'], default='vhds'),
storage_blob_name=dict(type='str', aliases=['storage_blob']),
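            # The 'ReadOnly' default below means an unspecified os_disk_caching is
            # indistinguishable from an explicit request (see the issue above).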
os_disk_caching=dict(type='str', aliases=['disk_caching'], choices=['ReadOnly', 'ReadWrite'],
default='ReadOnly'),
os_disk_size_gb=dict(type='int'),
managed_disk_type=dict(type='str', choices=['Standard_LRS', 'StandardSSD_LRS', 'Premium_LRS']),
os_disk_name=dict(type='str'),
os_type=dict(type='str', choices=['Linux', 'Windows'], default='Linux'),
public_ip_allocation_method=dict(type='str', choices=['Dynamic', 'Static', 'Disabled'], default='Static',
aliases=['public_ip_allocation']),
open_ports=dict(type='list'),
network_interface_names=dict(type='list', aliases=['network_interfaces'], elements='raw'),
remove_on_absent=dict(type='list', default=['all']),
virtual_network_resource_group=dict(type='str'),
virtual_network_name=dict(type='str', aliases=['virtual_network']),
subnet_name=dict(type='str', aliases=['subnet']),
allocated=dict(type='bool', default=True),
restarted=dict(type='bool', default=False),
started=dict(type='bool', default=True),
generalized=dict(type='bool', default=False),
data_disks=dict(type='list'),
plan=dict(type='dict'),
zones=dict(type='list'),
accept_terms=dict(type='bool', default=False),
license_type=dict(type='str', choices=['Windows_Server', 'Windows_Client']),
vm_identity=dict(type='str', choices=['SystemAssigned']),
winrm=dict(type='list'),
boot_diagnostics=dict(type='dict'),
)
self.resource_group = None
self.name = None
self.custom_data = None
self.state = None
self.location = None
self.short_hostname = None
self.vm_size = None
self.admin_username = None
self.admin_password = None
self.ssh_password_enabled = None
self.ssh_public_keys = None
self.image = None
self.availability_set = None
self.storage_account_name = None
self.storage_container_name = None
self.storage_blob_name = None
self.os_type = None
self.os_disk_caching = None
self.os_disk_size_gb = None
self.managed_disk_type = None
self.os_disk_name = None
self.network_interface_names = None
self.remove_on_absent = set()
self.tags = None
self.force = None
self.public_ip_allocation_method = None
self.open_ports = None
self.virtual_network_resource_group = None
self.virtual_network_name = None
self.subnet_name = None
self.allocated = None
self.restarted = None
self.started = None
self.generalized = None
self.differences = None
self.data_disks = None
self.plan = None
self.accept_terms = None
self.zones = None
self.license_type = None
self.vm_identity = None
self.boot_diagnostics = None
self.results = dict(
changed=False,
actions=[],
powerstate_change=None,
ansible_facts=dict(azure_vm=None)
)
super(AzureRMVirtualMachine, self).__init__(derived_arg_spec=self.module_arg_spec,
supports_check_mode=True)
@property
def boot_diagnostics_present(self):
return self.boot_diagnostics is not None and 'enabled' in self.boot_diagnostics
def get_boot_diagnostics_storage_account(self, limited=False, vm_dict=None):
"""
Get the boot diagnostics storage account.
Arguments:
- limited - if true, limit the logic to the boot_diagnostics storage account
this is used if initial creation of the VM has a stanza with
boot_diagnostics disabled, so we only create a storage account
if the user specifies a storage account name inside the boot_diagnostics
schema
- vm_dict - if invoked on an update, this is the current state of the vm including
tags, like the default storage group tag '_own_sa_'.
Normal behavior:
- try the self.boot_diagnostics.storage_account field
- if not there, try the self.storage_account_name field
- if not there, use the default storage account
If limited is True:
- try the self.boot_diagnostics.storage_account field
- if not there, None
"""
bsa = None
if 'storage_account' in self.boot_diagnostics:
bsa = self.get_storage_account(self.boot_diagnostics['storage_account'])
elif limited:
return None
elif self.storage_account_name:
bsa = self.get_storage_account(self.storage_account_name)
else:
bsa = self.create_default_storage_account(vm_dict=vm_dict)
self.log("boot diagnostics storage account:")
self.log(self.serialize_obj(bsa, 'StorageAccount'), pretty_print=True)
return bsa
def exec_module(self, **kwargs):
for key in list(self.module_arg_spec.keys()) + ['tags']:
setattr(self, key, kwargs[key])
# make sure options are lower case
self.remove_on_absent = set([resource.lower() for resource in self.remove_on_absent])
# convert elements to ints
self.zones = [int(i) for i in self.zones] if self.zones else None
changed = False
powerstate_change = None
results = dict()
vm = None
network_interfaces = []
requested_storage_uri = None
requested_vhd_uri = None
data_disk_requested_vhd_uri = None
disable_ssh_password = None
vm_dict = None
image_reference = None
custom_image = False
resource_group = self.get_resource_group(self.resource_group)
if not self.location:
# Set default location
self.location = resource_group.location
self.location = normalize_location_name(self.location)
if self.state == 'present':
# Verify parameters and resolve any defaults
if self.vm_size and not self.vm_size_is_valid():
self.fail("Parameter error: vm_size {0} is not valid for your subscription and location.".format(
self.vm_size
))
if self.network_interface_names:
for nic_name in self.network_interface_names:
nic = self.parse_network_interface(nic_name)
network_interfaces.append(nic)
if self.ssh_public_keys:
msg = "Parameter error: expecting ssh_public_keys to be a list of type dict where " \
"each dict contains keys: path, key_data."
for key in self.ssh_public_keys:
if not isinstance(key, dict):
self.fail(msg)
if not key.get('path') or not key.get('key_data'):
self.fail(msg)
if self.image and isinstance(self.image, dict):
if all(key in self.image for key in ('publisher', 'offer', 'sku', 'version')):
marketplace_image = self.get_marketplace_image_version()
if self.image['version'] == 'latest':
self.image['version'] = marketplace_image.name
self.log("Using image version {0}".format(self.image['version']))
image_reference = self.compute_models.ImageReference(
publisher=self.image['publisher'],
offer=self.image['offer'],
sku=self.image['sku'],
version=self.image['version']
)
elif self.image.get('name'):
custom_image = True
image_reference = self.get_custom_image_reference(
self.image.get('name'),
self.image.get('resource_group'))
elif self.image.get('id'):
try:
image_reference = self.compute_models.ImageReference(id=self.image['id'])
except Exception as exc:
self.fail("id Error: Cannot get image from the reference id - {0}".format(self.image['id']))
else:
self.fail("parameter error: expecting image to contain [publisher, offer, sku, version], [name, resource_group] or [id]")
elif self.image and isinstance(self.image, str):
custom_image = True
image_reference = self.get_custom_image_reference(self.image)
elif self.image:
self.fail("parameter error: expecting image to be a string or dict not {0}".format(type(self.image).__name__))
if self.plan:
if not self.plan.get('name') or not self.plan.get('product') or not self.plan.get('publisher'):
self.fail("parameter error: plan must include name, product, and publisher")
if not self.storage_blob_name and not self.managed_disk_type:
self.storage_blob_name = self.name + '.vhd'
elif self.managed_disk_type:
self.storage_blob_name = self.name
if self.storage_account_name and not self.managed_disk_type:
properties = self.get_storage_account(self.storage_account_name)
requested_storage_uri = properties.primary_endpoints.blob
requested_vhd_uri = '{0}{1}/{2}'.format(requested_storage_uri,
self.storage_container_name,
self.storage_blob_name)
disable_ssh_password = not self.ssh_password_enabled
try:
self.log("Fetching virtual machine {0}".format(self.name))
vm = self.compute_client.virtual_machines.get(self.resource_group, self.name, expand='instanceview')
self.check_provisioning_state(vm, self.state)
vm_dict = self.serialize_vm(vm)
if self.state == 'present':
differences = []
current_nics = []
results = vm_dict
# Try to determine if the VM needs to be updated
if self.network_interface_names:
for nic in vm_dict['properties']['networkProfile']['networkInterfaces']:
current_nics.append(nic['id'])
if set(current_nics) != set(network_interfaces):
self.log('CHANGED: virtual machine {0} - network interfaces are different.'.format(self.name))
differences.append('Network Interfaces')
updated_nics = [dict(id=id, primary=(i == 0))
for i, id in enumerate(network_interfaces)]
vm_dict['properties']['networkProfile']['networkInterfaces'] = updated_nics
changed = True
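                # NOTE: os_disk_caching defaults to 'ReadOnly' in the argument spec, so this
                # comparison reports a difference even when the user never set the option.
                # This is the unintended update described in the issue above.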
if self.os_disk_caching and \
self.os_disk_caching != vm_dict['properties']['storageProfile']['osDisk']['caching']:
self.log('CHANGED: virtual machine {0} - OS disk caching'.format(self.name))
differences.append('OS Disk caching')
changed = True
vm_dict['properties']['storageProfile']['osDisk']['caching'] = self.os_disk_caching
if self.os_disk_name and \
self.os_disk_name != vm_dict['properties']['storageProfile']['osDisk']['name']:
self.log('CHANGED: virtual machine {0} - OS disk name'.format(self.name))
differences.append('OS Disk name')
changed = True
vm_dict['properties']['storageProfile']['osDisk']['name'] = self.os_disk_name
if self.os_disk_size_gb and \
self.os_disk_size_gb != vm_dict['properties']['storageProfile']['osDisk'].get('diskSizeGB'):
self.log('CHANGED: virtual machine {0} - OS disk size '.format(self.name))
differences.append('OS Disk size')
changed = True
vm_dict['properties']['storageProfile']['osDisk']['diskSizeGB'] = self.os_disk_size_gb
if self.vm_size and \
self.vm_size != vm_dict['properties']['hardwareProfile']['vmSize']:
self.log('CHANGED: virtual machine {0} - size '.format(self.name))
differences.append('VM size')
changed = True
vm_dict['properties']['hardwareProfile']['vmSize'] = self.vm_size
update_tags, vm_dict['tags'] = self.update_tags(vm_dict.get('tags', dict()))
if update_tags:
differences.append('Tags')
changed = True
if self.short_hostname and self.short_hostname != vm_dict['properties']['osProfile']['computerName']:
self.log('CHANGED: virtual machine {0} - short hostname'.format(self.name))
differences.append('Short Hostname')
changed = True
vm_dict['properties']['osProfile']['computerName'] = self.short_hostname
if self.started and vm_dict['powerstate'] not in ['starting', 'running'] and self.allocated:
self.log("CHANGED: virtual machine {0} not running and requested state 'running'".format(self.name))
changed = True
powerstate_change = 'poweron'
elif self.state == 'present' and vm_dict['powerstate'] == 'running' and self.restarted:
self.log("CHANGED: virtual machine {0} {1} and requested state 'restarted'"
.format(self.name, vm_dict['powerstate']))
changed = True
powerstate_change = 'restarted'
elif self.state == 'present' and not self.allocated and vm_dict['powerstate'] not in ['deallocated', 'deallocating']:
self.log("CHANGED: virtual machine {0} {1} and requested state 'deallocated'"
.format(self.name, vm_dict['powerstate']))
changed = True
powerstate_change = 'deallocated'
elif not self.started and vm_dict['powerstate'] == 'running':
self.log("CHANGED: virtual machine {0} running and requested state 'stopped'".format(self.name))
changed = True
powerstate_change = 'poweroff'
elif self.generalized and vm_dict['powerstate'] != 'generalized':
self.log("CHANGED: virtual machine {0} requested to be 'generalized'".format(self.name))
changed = True
powerstate_change = 'generalized'
vm_dict['zones'] = [int(i) for i in vm_dict['zones']] if 'zones' in vm_dict and vm_dict['zones'] else None
if self.zones != vm_dict['zones']:
self.log("CHANGED: virtual machine {0} zones".format(self.name))
differences.append('Zones')
changed = True
if self.license_type is not None and vm_dict['properties'].get('licenseType') != self.license_type:
differences.append('License Type')
changed = True
# Defaults for boot diagnostics
if 'diagnosticsProfile' not in vm_dict['properties']:
vm_dict['properties']['diagnosticsProfile'] = {}
if 'bootDiagnostics' not in vm_dict['properties']['diagnosticsProfile']:
vm_dict['properties']['diagnosticsProfile']['bootDiagnostics'] = {
'enabled': False,
'storageUri': None
}
if self.boot_diagnostics_present:
current_boot_diagnostics = vm_dict['properties']['diagnosticsProfile']['bootDiagnostics']
boot_diagnostics_changed = False
if self.boot_diagnostics['enabled'] != current_boot_diagnostics['enabled']:
current_boot_diagnostics['enabled'] = self.boot_diagnostics['enabled']
boot_diagnostics_changed = True
boot_diagnostics_storage_account = self.get_boot_diagnostics_storage_account(
limited=not self.boot_diagnostics['enabled'], vm_dict=vm_dict)
boot_diagnostics_blob = boot_diagnostics_storage_account.primary_endpoints.blob if boot_diagnostics_storage_account else None
if current_boot_diagnostics['storageUri'] != boot_diagnostics_blob:
current_boot_diagnostics['storageUri'] = boot_diagnostics_blob
boot_diagnostics_changed = True
if boot_diagnostics_changed:
differences.append('Boot Diagnostics')
changed = True
# Adding boot diagnostics can create a default storage account after initial creation
# this means we might also need to update the _own_sa_ tag
own_sa = (self.tags or {}).get('_own_sa_', None)
cur_sa = vm_dict.get('tags', {}).get('_own_sa_', None)
if own_sa and own_sa != cur_sa:
if 'Tags' not in differences:
differences.append('Tags')
if 'tags' not in vm_dict:
vm_dict['tags'] = {}
vm_dict['tags']['_own_sa_'] = own_sa
changed = True
self.differences = differences
elif self.state == 'absent':
self.log("CHANGED: virtual machine {0} exists and requested state is 'absent'".format(self.name))
results = dict()
changed = True
except CloudError:
self.log('Virtual machine {0} does not exist'.format(self.name))
if self.state == 'present':
self.log("CHANGED: virtual machine {0} does not exist but state is 'present'.".format(self.name))
changed = True
self.results['changed'] = changed
self.results['ansible_facts']['azure_vm'] = results
self.results['powerstate_change'] = powerstate_change
if self.check_mode:
return self.results
if changed:
if self.state == 'present':
if not vm:
# Create the VM
self.log("Create virtual machine {0}".format(self.name))
self.results['actions'].append('Created VM {0}'.format(self.name))
if self.os_type == 'Linux':
if disable_ssh_password and not self.ssh_public_keys:
self.fail("Parameter error: ssh_public_keys required when disabling SSH password.")
if not image_reference:
self.fail("Parameter error: an image is required when creating a virtual machine.")
availability_set_resource = None
if self.availability_set:
parsed_availability_set = parse_resource_id(self.availability_set)
availability_set = self.get_availability_set(parsed_availability_set.get('resource_group', self.resource_group),
parsed_availability_set.get('name'))
availability_set_resource = self.compute_models.SubResource(id=availability_set.id)
if self.zones:
self.fail("Parameter error: you can't use Availability Set and Availability Zones at the same time")
# Get defaults
if not self.network_interface_names:
default_nic = self.create_default_nic()
self.log("network interface:")
self.log(self.serialize_obj(default_nic, 'NetworkInterface'), pretty_print=True)
network_interfaces = [default_nic.id]
# os disk
if not self.storage_account_name and not self.managed_disk_type:
storage_account = self.create_default_storage_account()
self.log("os disk storage account:")
self.log(self.serialize_obj(storage_account, 'StorageAccount'), pretty_print=True)
requested_storage_uri = 'https://{0}.blob.{1}/'.format(
storage_account.name,
self._cloud_environment.suffixes.storage_endpoint)
requested_vhd_uri = '{0}{1}/{2}'.format(
requested_storage_uri,
self.storage_container_name,
self.storage_blob_name)
if not self.short_hostname:
self.short_hostname = self.name
nics = [self.compute_models.NetworkInterfaceReference(id=id, primary=(i == 0))
for i, id in enumerate(network_interfaces)]
# os disk
if self.managed_disk_type:
vhd = None
managed_disk = self.compute_models.ManagedDiskParameters(storage_account_type=self.managed_disk_type)
elif custom_image:
vhd = None
managed_disk = None
else:
vhd = self.compute_models.VirtualHardDisk(uri=requested_vhd_uri)
managed_disk = None
plan = None
if self.plan:
plan = self.compute_models.Plan(name=self.plan.get('name'), product=self.plan.get('product'),
publisher=self.plan.get('publisher'),
promotion_code=self.plan.get('promotion_code'))
# do this before creating vm_resource as it can modify tags
if self.boot_diagnostics_present and self.boot_diagnostics['enabled']:
boot_diag_storage_account = self.get_boot_diagnostics_storage_account()
os_profile = None
if self.admin_username:
os_profile = self.compute_models.OSProfile(
admin_username=self.admin_username,
computer_name=self.short_hostname,
)
vm_resource = self.compute_models.VirtualMachine(
location=self.location,
tags=self.tags,
os_profile=os_profile,
hardware_profile=self.compute_models.HardwareProfile(
vm_size=self.vm_size
),
storage_profile=self.compute_models.StorageProfile(
os_disk=self.compute_models.OSDisk(
name=self.os_disk_name if self.os_disk_name else self.storage_blob_name,
vhd=vhd,
managed_disk=managed_disk,
create_option=self.compute_models.DiskCreateOptionTypes.from_image,
caching=self.os_disk_caching,
disk_size_gb=self.os_disk_size_gb
),
image_reference=image_reference,
),
network_profile=self.compute_models.NetworkProfile(
network_interfaces=nics
),
availability_set=availability_set_resource,
plan=plan,
zones=self.zones,
)
if self.license_type is not None:
vm_resource.license_type = self.license_type
if self.vm_identity:
vm_resource.identity = self.compute_models.VirtualMachineIdentity(type=self.vm_identity)
if self.winrm:
winrm_listeners = list()
for winrm_listener in self.winrm:
winrm_listeners.append(self.compute_models.WinRMListener(
protocol=winrm_listener.get('protocol'),
certificate_url=winrm_listener.get('certificate_url')
))
if winrm_listener.get('source_vault'):
if not vm_resource.os_profile.secrets:
vm_resource.os_profile.secrets = list()
vm_resource.os_profile.secrets.append(self.compute_models.VaultSecretGroup(
source_vault=self.compute_models.SubResource(
id=winrm_listener.get('source_vault')
),
vault_certificates=[
self.compute_models.VaultCertificate(
certificate_url=winrm_listener.get('certificate_url'),
certificate_store=winrm_listener.get('certificate_store')
),
]
))
winrm = self.compute_models.WinRMConfiguration(
listeners=winrm_listeners
)
if not vm_resource.os_profile.windows_configuration:
vm_resource.os_profile.windows_configuration = self.compute_models.WindowsConfiguration(
win_rm=winrm
)
elif not vm_resource.os_profile.windows_configuration.win_rm:
vm_resource.os_profile.windows_configuration.win_rm = winrm
if self.boot_diagnostics_present:
vm_resource.diagnostics_profile = self.compute_models.DiagnosticsProfile(
boot_diagnostics=self.compute_models.BootDiagnostics(
enabled=self.boot_diagnostics['enabled'],
storage_uri=boot_diag_storage_account.primary_endpoints.blob))
if self.admin_password:
vm_resource.os_profile.admin_password = self.admin_password
if self.custom_data:
# Azure SDK (erroneously?) wants native string type for this
vm_resource.os_profile.custom_data = to_native(base64.b64encode(to_bytes(self.custom_data)))
if self.os_type == 'Linux' and os_profile:
vm_resource.os_profile.linux_configuration = self.compute_models.LinuxConfiguration(
disable_password_authentication=disable_ssh_password
)
if self.ssh_public_keys:
ssh_config = self.compute_models.SshConfiguration()
ssh_config.public_keys = \
[self.compute_models.SshPublicKey(path=key['path'], key_data=key['key_data']) for key in self.ssh_public_keys]
vm_resource.os_profile.linux_configuration.ssh = ssh_config
# data disk
if self.data_disks:
data_disks = []
count = 0
for data_disk in self.data_disks:
if not data_disk.get('managed_disk_type'):
if not data_disk.get('storage_blob_name'):
data_disk['storage_blob_name'] = self.name + '-data-' + str(count) + '.vhd'
count += 1
if data_disk.get('storage_account_name'):
data_disk_storage_account = self.get_storage_account(data_disk['storage_account_name'])
else:
data_disk_storage_account = self.create_default_storage_account()
self.log("data disk storage account:")
self.log(self.serialize_obj(data_disk_storage_account, 'StorageAccount'), pretty_print=True)
if not data_disk.get('storage_container_name'):
data_disk['storage_container_name'] = 'vhds'
data_disk_requested_vhd_uri = 'https://{0}.blob.{1}/{2}/{3}'.format(
data_disk_storage_account.name,
self._cloud_environment.suffixes.storage_endpoint,
data_disk['storage_container_name'],
data_disk['storage_blob_name']
)
if not data_disk.get('managed_disk_type'):
data_disk_managed_disk = None
disk_name = data_disk['storage_blob_name']
data_disk_vhd = self.compute_models.VirtualHardDisk(uri=data_disk_requested_vhd_uri)
else:
data_disk_vhd = None
data_disk_managed_disk = self.compute_models.ManagedDiskParameters(storage_account_type=data_disk['managed_disk_type'])
disk_name = self.name + "-datadisk-" + str(count)
count += 1
data_disk['caching'] = data_disk.get(
'caching', 'ReadOnly'
)
data_disks.append(self.compute_models.DataDisk(
lun=data_disk['lun'],
name=disk_name,
vhd=data_disk_vhd,
caching=data_disk['caching'],
create_option=self.compute_models.DiskCreateOptionTypes.empty,
disk_size_gb=data_disk['disk_size_gb'],
managed_disk=data_disk_managed_disk,
))
vm_resource.storage_profile.data_disks = data_disks
# Before creating VM accept terms of plan if `accept_terms` is True
if self.accept_terms is True:
if not self.plan or not all([self.plan.get('name'), self.plan.get('product'), self.plan.get('publisher')]):
self.fail("parameter error: plan must be specified and include name, product, and publisher")
try:
plan_name = self.plan.get('name')
plan_product = self.plan.get('product')
plan_publisher = self.plan.get('publisher')
term = self.marketplace_client.marketplace_agreements.get(
publisher_id=plan_publisher, offer_id=plan_product, plan_id=plan_name)
term.accepted = True
self.marketplace_client.marketplace_agreements.create(
publisher_id=plan_publisher, offer_id=plan_product, plan_id=plan_name, parameters=term)
except Exception as exc:
self.fail(("Error accepting terms for virtual machine {0} with plan {1}. " +
"Only service admin/account admin users can purchase images " +
"from the marketplace. - {2}").format(self.name, self.plan, str(exc)))
self.log("Create virtual machine with parameters:")
self.create_or_update_vm(vm_resource, 'all_autocreated' in self.remove_on_absent)
elif self.differences and len(self.differences) > 0:
# Update the VM based on detected config differences
self.log("Update virtual machine {0}".format(self.name))
self.results['actions'].append('Updated VM {0}'.format(self.name))
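# Rebuild the VirtualMachine model from the VM's current serialized state so
# the update request preserves settings that are not being changed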
nics = [self.compute_models.NetworkInterfaceReference(id=interface['id'], primary=(i == 0))
for i, interface in enumerate(vm_dict['properties']['networkProfile']['networkInterfaces'])]
# os disk
if not vm_dict['properties']['storageProfile']['osDisk'].get('managedDisk'):
managed_disk = None
vhd = self.compute_models.VirtualHardDisk(uri=vm_dict['properties']['storageProfile']['osDisk'].get('vhd', {}).get('uri'))
else:
vhd = None
managed_disk = self.compute_models.ManagedDiskParameters(
storage_account_type=vm_dict['properties']['storageProfile']['osDisk']['managedDisk'].get('storageAccountType')
)
availability_set_resource = None
try:
availability_set_resource = self.compute_models.SubResource(id=vm_dict['properties']['availabilitySet'].get('id'))
except Exception:
# pass if the availability set is not set
pass
if 'imageReference' in vm_dict['properties']['storageProfile'].keys():
if 'id' in vm_dict['properties']['storageProfile']['imageReference'].keys():
image_reference = self.compute_models.ImageReference(
id=vm_dict['properties']['storageProfile']['imageReference']['id']
)
else:
image_reference = self.compute_models.ImageReference(
publisher=vm_dict['properties']['storageProfile']['imageReference'].get('publisher'),
offer=vm_dict['properties']['storageProfile']['imageReference'].get('offer'),
sku=vm_dict['properties']['storageProfile']['imageReference'].get('sku'),
version=vm_dict['properties']['storageProfile']['imageReference'].get('version')
)
else:
image_reference = None
# You can't change a vm zone
if vm_dict['zones'] != self.zones:
self.fail("You can't change the Availability Zone of a virtual machine (have: {0}, want: {1})".format(vm_dict['zones'], self.zones))
if 'osProfile' in vm_dict['properties']:
os_profile = self.compute_models.OSProfile(
admin_username=vm_dict['properties'].get('osProfile', {}).get('adminUsername'),
computer_name=vm_dict['properties'].get('osProfile', {}).get('computerName')
)
else:
os_profile = None
vm_resource = self.compute_models.VirtualMachine(
location=vm_dict['location'],
os_profile=os_profile,
hardware_profile=self.compute_models.HardwareProfile(
vm_size=vm_dict['properties']['hardwareProfile'].get('vmSize')
),
storage_profile=self.compute_models.StorageProfile(
os_disk=self.compute_models.OSDisk(
name=vm_dict['properties']['storageProfile']['osDisk'].get('name'),
vhd=vhd,
managed_disk=managed_disk,
create_option=vm_dict['properties']['storageProfile']['osDisk'].get('createOption'),
os_type=vm_dict['properties']['storageProfile']['osDisk'].get('osType'),
caching=vm_dict['properties']['storageProfile']['osDisk'].get('caching'),
disk_size_gb=vm_dict['properties']['storageProfile']['osDisk'].get('diskSizeGB')
),
image_reference=image_reference
),
availability_set=availability_set_resource,
network_profile=self.compute_models.NetworkProfile(
network_interfaces=nics
)
)
if self.license_type is not None:
vm_resource.license_type = self.license_type
if self.boot_diagnostics is not None:
vm_resource.diagnostics_profile = self.compute_models.DiagnosticsProfile(
boot_diagnostics=self.compute_models.BootDiagnostics(
enabled=vm_dict['properties']['diagnosticsProfile']['bootDiagnostics']['enabled'],
storage_uri=vm_dict['properties']['diagnosticsProfile']['bootDiagnostics']['storageUri']))
if vm_dict.get('tags'):
vm_resource.tags = vm_dict['tags']
# Add custom_data, if provided
if vm_dict['properties'].get('osProfile', {}).get('customData'):
custom_data = vm_dict['properties']['osProfile']['customData']
# Azure SDK (erroneously?) wants native string type for this
vm_resource.os_profile.custom_data = to_native(base64.b64encode(to_bytes(custom_data)))
# Add admin password, if one provided
if vm_dict['properties'].get('osProfile', {}).get('adminPassword'):
vm_resource.os_profile.admin_password = vm_dict['properties']['osProfile']['adminPassword']
# Add linux configuration, if applicable
linux_config = vm_dict['properties'].get('osProfile', {}).get('linuxConfiguration')
if linux_config:
ssh_config = linux_config.get('ssh', None)
vm_resource.os_profile.linux_configuration = self.compute_models.LinuxConfiguration(
disable_password_authentication=linux_config.get('disablePasswordAuthentication', False)
)
if ssh_config:
public_keys = ssh_config.get('publicKeys')
if public_keys:
vm_resource.os_profile.linux_configuration.ssh = self.compute_models.SshConfiguration(public_keys=[])
for key in public_keys:
vm_resource.os_profile.linux_configuration.ssh.public_keys.append(
self.compute_models.SshPublicKey(path=key['path'], key_data=key['keyData'])
)
# data disk
if vm_dict['properties']['storageProfile'].get('dataDisks'):
data_disks = []
for data_disk in vm_dict['properties']['storageProfile']['dataDisks']:
if data_disk.get('managedDisk'):
managed_disk_type = data_disk['managedDisk'].get('storageAccountType')
data_disk_managed_disk = self.compute_models.ManagedDiskParameters(storage_account_type=managed_disk_type)
data_disk_vhd = None
else:
data_disk_vhd = data_disk['vhd']['uri']
data_disk_managed_disk = None
data_disks.append(self.compute_models.DataDisk(
lun=int(data_disk['lun']),
name=data_disk.get('name'),
vhd=data_disk_vhd,
caching=data_disk.get('caching'),
create_option=data_disk.get('createOption'),
disk_size_gb=int(data_disk['diskSizeGB']),
managed_disk=data_disk_managed_disk,
))
vm_resource.storage_profile.data_disks = data_disks
self.log("Update virtual machine with parameters:")
self.create_or_update_vm(vm_resource, False)
# Make sure we leave the machine in requested power state
if (powerstate_change == 'poweron' and
self.results['ansible_facts']['azure_vm']['powerstate'] != 'running'):
# Attempt to power on the machine
self.power_on_vm()
elif (powerstate_change == 'poweroff' and
self.results['ansible_facts']['azure_vm']['powerstate'] == 'running'):
# Attempt to power off the machine
self.power_off_vm()
elif powerstate_change == 'restarted':
self.restart_vm()
elif powerstate_change == 'deallocated':
self.deallocate_vm()
elif powerstate_change == 'generalized':
self.power_off_vm()
self.generalize_vm()
self.results['ansible_facts']['azure_vm'] = self.serialize_vm(self.get_vm())
elif self.state == 'absent':
# delete the VM
self.log("Delete virtual machine {0}".format(self.name))
self.results['ansible_facts']['azure_vm'] = None
self.delete_vm(vm)
# until we sort out how we want to do this globally
del self.results['actions']
return self.results
def get_vm(self):
'''
Get the VM with expanded instanceView
:return: VirtualMachine object
'''
try:
vm = self.compute_client.virtual_machines.get(self.resource_group, self.name, expand='instanceview')
return vm
except Exception as exc:
self.fail("Error getting virtual machine {0} - {1}".format(self.name, str(exc)))
def serialize_vm(self, vm):
'''
Convert a VirtualMachine object to dict.
:param vm: VirtualMachine object
:return: dict
'''
result = self.serialize_obj(vm, AZURE_OBJECT_CLASS, enum_modules=AZURE_ENUM_MODULES)
result['id'] = vm.id
result['name'] = vm.name
result['type'] = vm.type
result['location'] = vm.location
result['tags'] = vm.tags
result['powerstate'] = dict()
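# Derive the power state from the instance view status codes,
# e.g. 'PowerState/running' becomes 'running'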
if vm.instance_view:
result['powerstate'] = next((s.code.replace('PowerState/', '')
for s in vm.instance_view.statuses if s.code.startswith('PowerState')), None)
for s in vm.instance_view.statuses:
if s.code.lower() == "osstate/generalized":
result['powerstate'] = 'generalized'
# Expand network interfaces to include config properties
for interface in vm.network_profile.network_interfaces:
int_dict = azure_id_to_dict(interface.id)
nic = self.get_network_interface(int_dict['resourceGroups'], int_dict['networkInterfaces'])
for interface_dict in result['properties']['networkProfile']['networkInterfaces']:
if interface_dict['id'] == interface.id:
nic_dict = self.serialize_obj(nic, 'NetworkInterface')
interface_dict['name'] = int_dict['networkInterfaces']
interface_dict['properties'] = nic_dict['properties']
# Expand public IPs to include config properties
for interface in result['properties']['networkProfile']['networkInterfaces']:
for config in interface['properties']['ipConfigurations']:
if config['properties'].get('publicIPAddress'):
pipid_dict = azure_id_to_dict(config['properties']['publicIPAddress']['id'])
try:
pip = self.network_client.public_ip_addresses.get(pipid_dict['resourceGroups'],
pipid_dict['publicIPAddresses'])
except Exception as exc:
self.fail("Error fetching public ip {0} - {1}".format(pipid_dict['publicIPAddresses'],
str(exc)))
pip_dict = self.serialize_obj(pip, 'PublicIPAddress')
config['properties']['publicIPAddress']['name'] = pipid_dict['publicIPAddresses']
config['properties']['publicIPAddress']['properties'] = pip_dict['properties']
self.log(result, pretty_print=True)
if self.state != 'absent' and not result['powerstate']:
self.fail("Failed to determine PowerState of virtual machine {0}".format(self.name))
return result
def power_off_vm(self):
self.log("Powered off virtual machine {0}".format(self.name))
self.results['actions'].append("Powered off virtual machine {0}".format(self.name))
try:
poller = self.compute_client.virtual_machines.power_off(self.resource_group, self.name)
self.get_poller_result(poller)
except Exception as exc:
self.fail("Error powering off virtual machine {0} - {1}".format(self.name, str(exc)))
return True
def power_on_vm(self):
self.results['actions'].append("Powered on virtual machine {0}".format(self.name))
self.log("Power on virtual machine {0}".format(self.name))
try:
poller = self.compute_client.virtual_machines.start(self.resource_group, self.name)
self.get_poller_result(poller)
except Exception as exc:
self.fail("Error powering on virtual machine {0} - {1}".format(self.name, str(exc)))
return True
def restart_vm(self):
self.results['actions'].append("Restarted virtual machine {0}".format(self.name))
self.log("Restart virtual machine {0}".format(self.name))
try:
poller = self.compute_client.virtual_machines.restart(self.resource_group, self.name)
self.get_poller_result(poller)
except Exception as exc:
self.fail("Error restarting virtual machine {0} - {1}".format(self.name, str(exc)))
return True
def deallocate_vm(self):
self.results['actions'].append("Deallocated virtual machine {0}".format(self.name))
self.log("Deallocate virtual machine {0}".format(self.name))
try:
poller = self.compute_client.virtual_machines.deallocate(self.resource_group, self.name)
self.get_poller_result(poller)
except Exception as exc:
self.fail("Error deallocating virtual machine {0} - {1}".format(self.name, str(exc)))
return True
def generalize_vm(self):
self.results['actions'].append("Generalize virtual machine {0}".format(self.name))
self.log("Generalize virtual machine {0}".format(self.name))
try:
response = self.compute_client.virtual_machines.generalize(self.resource_group, self.name)
if isinstance(response, LROPoller):
self.get_poller_result(response)
except Exception as exc:
self.fail("Error generalizing virtual machine {0} - {1}".format(self.name, str(exc)))
return True
def remove_autocreated_resources(self, tags):
if tags:
sa_name = tags.get('_own_sa_')
nic_name = tags.get('_own_nic_')
pip_name = tags.get('_own_pip_')
nsg_name = tags.get('_own_nsg_')
if sa_name:
self.delete_storage_account(self.resource_group, sa_name)
if nic_name:
self.delete_nic(self.resource_group, nic_name)
if pip_name:
self.delete_pip(self.resource_group, pip_name)
if nsg_name:
self.delete_nsg(self.resource_group, nsg_name)
def delete_vm(self, vm):
vhd_uris = []
managed_disk_ids = []
nic_names = []
pip_names = []
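# Record dependent resources (disks, NICs, public IPs) before deleting the VM,
# since they can no longer be discovered from the VM object afterwards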
if 'all_autocreated' not in self.remove_on_absent:
if self.remove_on_absent.intersection(set(['all', 'virtual_storage'])):
# store the attached vhd info so we can nuke it after the VM is gone
if vm.storage_profile.os_disk.managed_disk:
self.log('Storing managed disk ID for deletion')
managed_disk_ids.append(vm.storage_profile.os_disk.managed_disk.id)
elif vm.storage_profile.os_disk.vhd:
self.log('Storing VHD URI for deletion')
vhd_uris.append(vm.storage_profile.os_disk.vhd.uri)
data_disks = vm.storage_profile.data_disks
for data_disk in data_disks:
if data_disk is not None:
if data_disk.vhd:
vhd_uris.append(data_disk.vhd.uri)
elif data_disk.managed_disk:
managed_disk_ids.append(data_disk.managed_disk.id)
# FUTURE enable diff mode, move these there...
self.log("VHD URIs to delete: {0}".format(', '.join(vhd_uris)))
self.results['deleted_vhd_uris'] = vhd_uris
self.log("Managed disk IDs to delete: {0}".format(', '.join(managed_disk_ids)))
self.results['deleted_managed_disk_ids'] = managed_disk_ids
if self.remove_on_absent.intersection(set(['all', 'network_interfaces'])):
# store the attached nic info so we can nuke them after the VM is gone
self.log('Storing NIC names for deletion.')
for interface in vm.network_profile.network_interfaces:
id_dict = azure_id_to_dict(interface.id)
nic_names.append(dict(name=id_dict['networkInterfaces'], resource_group=id_dict['resourceGroups']))
self.log('NIC names to delete {0}'.format(str(nic_names)))
self.results['deleted_network_interfaces'] = nic_names
if self.remove_on_absent.intersection(set(['all', 'public_ips'])):
# also store each nic's attached public IPs and delete after the NIC is gone
for nic_dict in nic_names:
nic = self.get_network_interface(nic_dict['resource_group'], nic_dict['name'])
for ipc in nic.ip_configurations:
if ipc.public_ip_address:
pip_dict = azure_id_to_dict(ipc.public_ip_address.id)
pip_names.append(dict(name=pip_dict['publicIPAddresses'], resource_group=pip_dict['resourceGroups']))
self.log('Public IPs to delete are {0}'.format(str(pip_names)))
self.results['deleted_public_ips'] = pip_names
self.log("Deleting virtual machine {0}".format(self.name))
self.results['actions'].append("Deleted virtual machine {0}".format(self.name))
try:
poller = self.compute_client.virtual_machines.delete(self.resource_group, self.name)
# wait for the poller to finish
self.get_poller_result(poller)
except Exception as exc:
self.fail("Error deleting virtual machine {0} - {1}".format(self.name, str(exc)))
# TODO: parallelize nic, vhd, and public ip deletions with begin_deleting
# TODO: best-effort to keep deleting other linked resources if we encounter an error
if self.remove_on_absent.intersection(set(['all', 'virtual_storage'])):
self.log('Deleting VHDs')
self.delete_vm_storage(vhd_uris)
self.log('Deleting managed disks')
self.delete_managed_disks(managed_disk_ids)
if 'all' in self.remove_on_absent or 'all_autocreated' in self.remove_on_absent:
self.remove_autocreated_resources(vm.tags)
if self.remove_on_absent.intersection(set(['all', 'network_interfaces'])):
self.log('Deleting network interfaces')
for nic_dict in nic_names:
self.delete_nic(nic_dict['resource_group'], nic_dict['name'])
if self.remove_on_absent.intersection(set(['all', 'public_ips'])):
self.log('Deleting public IPs')
for pip_dict in pip_names:
self.delete_pip(pip_dict['resource_group'], pip_dict['name'])
if 'all' in self.remove_on_absent or 'all_autocreated' in self.remove_on_absent:
self.remove_autocreated_resources(vm.tags)
return True
def get_network_interface(self, resource_group, name):
try:
nic = self.network_client.network_interfaces.get(resource_group, name)
return nic
except Exception as exc:
self.fail("Error fetching network interface {0} - {1}".format(name, str(exc)))
return True
def delete_nic(self, resource_group, name):
self.log("Deleting network interface {0}".format(name))
self.results['actions'].append("Deleted network interface {0}".format(name))
try:
poller = self.network_client.network_interfaces.delete(resource_group, name)
self.get_poller_result(poller)
except Exception as exc:
self.fail("Error deleting network interface {0} - {1}".format(name, str(exc)))
# Delete doesn't return anything. If we get this far, assume success
return True
def delete_pip(self, resource_group, name):
self.results['actions'].append("Deleted public IP {0}".format(name))
try:
poller = self.network_client.public_ip_addresses.delete(resource_group, name)
self.get_poller_result(poller)
except Exception as exc:
self.fail("Error deleting {0} - {1}".format(name, str(exc)))
# Delete returns nada. If we get here, assume that all is well.
return True
def delete_nsg(self, resource_group, name):
self.results['actions'].append("Deleted NSG {0}".format(name))
try:
poller = self.network_client.network_security_groups.delete(resource_group, name)
self.get_poller_result(poller)
except Exception as exc:
self.fail("Error deleting {0} - {1}".format(name, str(exc)))
return True
def delete_managed_disks(self, managed_disk_ids):
for mdi in managed_disk_ids:
try:
poller = self.rm_client.resources.delete_by_id(mdi, '2017-03-30')
self.get_poller_result(poller)
except Exception as exc:
self.fail("Error deleting managed disk {0} - {1}".format(mdi, str(exc)))
return True
def delete_storage_account(self, resource_group, name):
self.log("Delete storage account {0}".format(name))
self.results['actions'].append("Deleted storage account {0}".format(name))
try:
self.storage_client.storage_accounts.delete(self.resource_group, name)
except Exception as exc:
self.fail("Error deleting storage account {0} - {1}".format(name, str(exc)))
return True
def delete_vm_storage(self, vhd_uris):
# FUTURE: figure out a cloud_env independent way to delete these
for uri in vhd_uris:
self.log("Extracting info from blob uri '{0}'".format(uri))
try:
blob_parts = extract_names_from_blob_uri(uri, self._cloud_environment.suffixes.storage_endpoint)
except Exception as exc:
self.fail("Error parsing blob URI {0}".format(str(exc)))
storage_account_name = blob_parts['accountname']
container_name = blob_parts['containername']
blob_name = blob_parts['blobname']
blob_client = self.get_blob_client(self.resource_group, storage_account_name)
self.log("Delete blob {0}:{1}".format(container_name, blob_name))
self.results['actions'].append("Deleted blob {0}:{1}".format(container_name, blob_name))
try:
blob_client.delete_blob(container_name, blob_name)
except Exception as exc:
self.fail("Error deleting blob {0}:{1} - {2}".format(container_name, blob_name, str(exc)))
return True
def get_marketplace_image_version(self):
try:
versions = self.compute_client.virtual_machine_images.list(self.location,
self.image['publisher'],
self.image['offer'],
self.image['sku'])
except Exception as exc:
self.fail("Error fetching image {0} {1} {2} - {3}".format(self.image['publisher'],
self.image['offer'],
self.image['sku'],
str(exc)))
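# 'latest' resolves to the last entry of the returned list, which is
# assumed to be sorted in ascending version order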
if versions and len(versions) > 0:
if self.image['version'] == 'latest':
return versions[-1]
for version in versions:
if version.name == self.image['version']:
return version
self.fail("Error could not find image {0} {1} {2} {3}".format(self.image['publisher'],
self.image['offer'],
self.image['sku'],
self.image['version']))
return None
def get_custom_image_reference(self, name, resource_group=None):
try:
if resource_group:
vm_images = self.compute_client.images.list_by_resource_group(resource_group)
else:
vm_images = self.compute_client.images.list()
except Exception as exc:
self.fail("Error fetching custom images from subscription - {0}".format(str(exc)))
for vm_image in vm_images:
if vm_image.name == name:
self.log("Using custom image id {0}".format(vm_image.id))
return self.compute_models.ImageReference(id=vm_image.id)
self.fail("Error could not find image with name {0}".format(name))
return None
def get_availability_set(self, resource_group, name):
try:
return self.compute_client.availability_sets.get(resource_group, name)
except Exception as exc:
self.fail("Error fetching availability set {0} - {1}".format(name, str(exc)))
def get_storage_account(self, name):
try:
account = self.storage_client.storage_accounts.get_properties(self.resource_group,
name)
return account
except Exception as exc:
self.fail("Error fetching storage account {0} - {1}".format(name, str(exc)))
def create_or_update_vm(self, params, remove_autocreated_on_failure):
try:
poller = self.compute_client.virtual_machines.create_or_update(self.resource_group, self.name, params)
self.get_poller_result(poller)
except Exception as exc:
if remove_autocreated_on_failure:
self.remove_autocreated_resources(params.tags)
self.fail("Error creating or updating virtual machine {0} - {1}".format(self.name, str(exc)))
def vm_size_is_valid(self):
'''
Validate self.vm_size against the list of virtual machine sizes available for the account and location.
:return: boolean
'''
try:
sizes = self.compute_client.virtual_machine_sizes.list(self.location)
except Exception as exc:
self.fail("Error retrieving available machine sizes - {0}".format(str(exc)))
for size in sizes:
if size.name == self.vm_size:
return True
return False
def create_default_storage_account(self, vm_dict=None):
'''
Create (once) a default storage account <vm name>XXXX, where XXXX is a random number.
NOTE: If <vm name>XXXX exists, use it instead of failing. Highly unlikely.
If this method is called multiple times across executions it will return the same
storage account created with the random name which is stored in a tag on the VM.
vm_dict is passed in during an update, so we can obtain the _own_sa_ tag and return
the default storage account we created in a previous invocation
:return: storage account object
'''
account = None
valid_name = False
if self.tags is None:
self.tags = {}
if self.tags.get('_own_sa_', None):
# We previously created one in the same invocation
return self.get_storage_account(self.tags['_own_sa_'])
if vm_dict and vm_dict.get('tags', {}).get('_own_sa_', None):
# We previously created one in a previous invocation
# We must be updating, like adding boot diagnostics
return self.get_storage_account(vm_dict['tags']['_own_sa_'])
# Attempt to find a valid storage account name
storage_account_name_base = re.sub('[^a-zA-Z0-9]', '', self.name[:20].lower())
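# Try up to five random 4-digit suffixes until an available name is found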
for i in range(0, 5):
rand = random.randrange(1000, 9999)
storage_account_name = storage_account_name_base + str(rand)
if self.check_storage_account_name(storage_account_name):
valid_name = True
break
if not valid_name:
self.fail("Failed to create a unique storage account name for {0}. Try using a different VM name."
.format(self.name))
try:
account = self.storage_client.storage_accounts.get_properties(self.resource_group, storage_account_name)
except CloudError:
pass
if account:
self.log("Storage account {0} found.".format(storage_account_name))
self.check_provisioning_state(account)
return account
sku = self.storage_models.Sku(name=self.storage_models.SkuName.standard_lrs)
sku.tier = self.storage_models.SkuTier.standard
kind = self.storage_models.Kind.storage
parameters = self.storage_models.StorageAccountCreateParameters(sku=sku, kind=kind, location=self.location)
self.log("Creating storage account {0} in location {1}".format(storage_account_name, self.location))
self.results['actions'].append("Created storage account {0}".format(storage_account_name))
try:
poller = self.storage_client.storage_accounts.create(self.resource_group, storage_account_name, parameters)
self.get_poller_result(poller)
except Exception as exc:
self.fail("Failed to create storage account: {0} - {1}".format(storage_account_name, str(exc)))
self.tags['_own_sa_'] = storage_account_name
return self.get_storage_account(storage_account_name)
def check_storage_account_name(self, name):
self.log("Checking storage account name availability for {0}".format(name))
try:
response = self.storage_client.storage_accounts.check_name_availability(name)
if response.reason == 'AccountNameInvalid':
raise Exception("Invalid default storage account name: {0}".format(name))
except Exception as exc:
self.fail("Error checking storage account name availability for {0} - {1}".format(name, str(exc)))
return response.name_available
def create_default_nic(self):
'''
Create a default Network Interface <vm name>01. Requires an existing virtual network
with one subnet. If NIC <vm name>01 exists, use it. Otherwise, create one.
:return: NIC object
'''
network_interface_name = self.name + '01'
nic = None
if self.tags is None:
self.tags = {}
self.log("Create default NIC {0}".format(network_interface_name))
self.log("Check to see if NIC {0} exists".format(network_interface_name))
try:
nic = self.network_client.network_interfaces.get(self.resource_group, network_interface_name)
except CloudError:
pass
if nic:
self.log("NIC {0} found.".format(network_interface_name))
self.check_provisioning_state(nic)
return nic
self.log("NIC {0} does not exist.".format(network_interface_name))
virtual_network_resource_group = None
if self.virtual_network_resource_group:
virtual_network_resource_group = self.virtual_network_resource_group
else:
virtual_network_resource_group = self.resource_group
if self.virtual_network_name:
try:
self.network_client.virtual_networks.list(virtual_network_resource_group, self.virtual_network_name)
virtual_network_name = self.virtual_network_name
except CloudError as exc:
self.fail("Error: fetching virtual network {0} - {1}".format(self.virtual_network_name, str(exc)))
else:
# Find a virtual network
no_vnets_msg = "Error: unable to find virtual network in resource group {0}. A virtual network " \
"with at least one subnet must exist in order to create a NIC for the virtual " \
"machine.".format(virtual_network_resource_group)
virtual_network_name = None
try:
vnets = self.network_client.virtual_networks.list(virtual_network_resource_group)
except CloudError:
self.log('cloud error!')
self.fail(no_vnets_msg)
for vnet in vnets:
virtual_network_name = vnet.name
self.log('vnet name: {0}'.format(vnet.name))
break
if not virtual_network_name:
self.fail(no_vnets_msg)
if self.subnet_name:
try:
subnet = self.network_client.subnets.get(virtual_network_resource_group, virtual_network_name, self.subnet_name)
subnet_id = subnet.id
except Exception as exc:
self.fail("Error: fetching subnet {0} - {1}".format(self.subnet_name, str(exc)))
else:
no_subnets_msg = "Error: unable to find a subnet in virtual network {0}. A virtual network " \
"with at least one subnet must exist in order to create a NIC for the virtual " \
"machine.".format(virtual_network_name)
subnet_id = None
try:
subnets = self.network_client.subnets.list(virtual_network_resource_group, virtual_network_name)
except CloudError:
self.fail(no_subnets_msg)
for subnet in subnets:
subnet_id = subnet.id
self.log('subnet id: {0}'.format(subnet_id))
break
if not subnet_id:
self.fail(no_subnets_msg)
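# Create a default public IP unless allocation is disabled; deployments into
# Availability Zones require the Standard public IP SKU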
pip = None
if self.public_ip_allocation_method != 'Disabled':
self.results['actions'].append('Created default public IP {0}'.format(self.name + '01'))
sku = self.network_models.PublicIPAddressSku(name="Standard") if self.zones else None
pip_facts = self.create_default_pip(self.resource_group, self.location, self.name + '01', self.public_ip_allocation_method, sku=sku)
pip = self.network_models.PublicIPAddress(id=pip_facts.id, location=pip_facts.location, resource_guid=pip_facts.resource_guid, sku=sku)
self.tags['_own_pip_'] = self.name + '01'
self.results['actions'].append('Created default security group {0}'.format(self.name + '01'))
group = self.create_default_securitygroup(self.resource_group, self.location, self.name + '01', self.os_type,
self.open_ports)
self.tags['_own_nsg_'] = self.name + '01'
parameters = self.network_models.NetworkInterface(
location=self.location,
ip_configurations=[
self.network_models.NetworkInterfaceIPConfiguration(
private_ip_allocation_method='Dynamic',
)
]
)
parameters.ip_configurations[0].subnet = self.network_models.Subnet(id=subnet_id)
parameters.ip_configurations[0].name = 'default'
parameters.network_security_group = self.network_models.NetworkSecurityGroup(id=group.id,
location=group.location,
resource_guid=group.resource_guid)
parameters.ip_configurations[0].public_ip_address = pip
self.log("Creating NIC {0}".format(network_interface_name))
self.log(self.serialize_obj(parameters, 'NetworkInterface'), pretty_print=True)
self.results['actions'].append("Created NIC {0}".format(network_interface_name))
try:
poller = self.network_client.network_interfaces.create_or_update(self.resource_group,
network_interface_name,
parameters)
new_nic = self.get_poller_result(poller)
self.tags['_own_nic_'] = network_interface_name
except Exception as exc:
self.fail("Error creating network interface {0} - {1}".format(network_interface_name, str(exc)))
return new_nic
def parse_network_interface(self, nic):
nic = self.parse_resource_to_dict(nic)
if 'name' not in nic:
self.fail("Invalid network interface {0}".format(str(nic)))
return format_resource_id(val=nic['name'],
subscription_id=nic['subscription_id'],
resource_group=nic['resource_group'],
namespace='Microsoft.Network',
types='networkInterfaces')
def main():
AzureRMVirtualMachine()
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,721 |
Support specify disks in ovirt_snapshot module
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Describe the new feature/improvement briefly below -->
Currently it is not possible to specify which disks should be included in the snapshot; all disks are always included.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
ovirt_snapshot
##### ADDITIONAL INFORMATION
<!--- Describe how the feature would be used, why it is needed and what it would solve -->
<!--- Paste example playbooks or commands between quotes below -->
Could look like:
```yaml
ovirt_snapshot:
  description: My snap
  vm_name: myvm
  disks:
    - id: 123
    - id: 456
```
or
```yaml
ovirt_snapshot:
description: My snap
vm_name: myvm
disks:
- name: myvm_disk_db
- name: myvm_disk_store
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/65721
|
https://github.com/ansible/ansible/pull/65729
|
a168e73713f896b75487ce22306490de9ed2b3ce
|
9f6c210eac83b1f40e5a8a3d352e51e5d4bd8066
| 2019-12-11T10:32:15Z |
python
| 2019-12-19T07:52:11Z |
lib/ansible/modules/cloud/ovirt/ovirt_snapshot.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2016 Red Hat, Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ovirt_snapshot
short_description: "Module to manage Virtual Machine Snapshots in oVirt/RHV"
version_added: "2.3"
author: "Ondra Machacek (@machacekondra)"
description:
- "Module to manage Virtual Machine Snapshots in oVirt/RHV"
options:
snapshot_id:
description:
- "ID of the snapshot to manage."
vm_name:
description:
- "Name of the Virtual Machine to manage."
required: true
state:
description:
- "Should the Virtual Machine snapshot be restore/present/absent."
choices: ['restore', 'present', 'absent']
default: present
description:
description:
- "Description of the snapshot."
disk_id:
description:
- "Disk id which you want to upload or download"
- "To get disk, you need to define disk_id or disk_name"
version_added: "2.8"
disk_name:
description:
- "Disk name which you want to upload or download"
version_added: "2.8"
download_image_path:
description:
- "Path on a file system where snapshot should be downloaded."
- "Note that you must have an valid oVirt/RHV engine CA in your system trust store
or you must provide it in C(ca_file) parameter."
- "Note that the snapshot is not downloaded when the file already exists,
but you can forcibly download the snapshot by setting C(force) to I(true)."
version_added: "2.8"
upload_image_path:
description:
- "Path to disk image, which should be uploaded."
version_added: "2.8"
use_memory:
description:
- "If I(true) and C(state) is I(present) save memory of the Virtual
Machine if it's running."
- "If I(true) and C(state) is I(restore) restore memory of the
Virtual Machine."
- "Note that Virtual Machine will be paused while saving the memory."
aliases:
- "restore_memory"
- "save_memory"
type: bool
keep_days_old:
description:
- "Number of days after which should snapshot be deleted."
- "It will check all snapshots of virtual machine and delete them, if they are older."
version_added: "2.8"
notes:
- "Note that without a guest agent the data on the created snapshot may be
inconsistent."
- "Deleting a snapshot does not remove any information from the virtual
machine - it simply removes a return-point. However, restoring a virtual
machine from a snapshot deletes any content that was written to the
virtual machine after the time the snapshot was taken."
extends_documentation_fragment: ovirt
'''
EXAMPLES = '''
# Examples don't contain auth parameter for simplicity,
# look at ovirt_auth module to see how to reuse authentication:
# Create snapshot:
- ovirt_snapshot:
vm_name: rhel7
description: MySnapshot
register: snapshot
# Create snapshot and save memory:
- ovirt_snapshot:
vm_name: rhel7
description: SnapWithMem
use_memory: true
register: snapshot
# Restore snapshot:
- ovirt_snapshot:
state: restore
vm_name: rhel7
snapshot_id: "{{ snapshot.id }}"
# Remove snapshot:
- ovirt_snapshot:
state: absent
vm_name: rhel7
snapshot_id: "{{ snapshot.id }}"
# Upload local image to disk and attach it to vm:
# Since Ansible 2.8
- ovirt_snapshot:
name: mydisk
vm_name: myvm
upload_image_path: /path/to/mydisk.qcow2
# Download snapshot to local file system:
# Since Ansible 2.8
- ovirt_snapshot:
snapshot_id: 7de90f31-222c-436c-a1ca-7e655bd5b60c
disk_name: DiskName
vm_name: myvm
download_image_path: /home/user/mysnapshot.qcow2
# Delete all snapshots older than 2 days
- ovirt_snapshot:
vm_name: test
keep_days_old: 2
'''
RETURN = '''
id:
description: ID of the snapshot which is managed
returned: On success if snapshot is found.
type: str
sample: 7de90f31-222c-436c-a1ca-7e655bd5b60c
snapshot:
description: "Dictionary of all the snapshot attributes. Snapshot attributes can be found on your oVirt/RHV instance
at following url: http://ovirt.github.io/ovirt-engine-api-model/master/#types/snapshot."
returned: On success if snapshot is found.
type: dict
snapshots:
description: List of snapshots deleted when keep_days_old is defined and snapshots are older than the given number of days
returned: On success, returns the deleted snapshots
type: list
'''
import traceback
try:
import ovirtsdk4.types as otypes
except ImportError:
pass
import os
import ssl
import time
from ansible.module_utils.six.moves.http_client import HTTPSConnection, IncompleteRead
from ansible.module_utils.six.moves.urllib.parse import urlparse
from datetime import datetime
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ovirt import (
check_sdk,
create_connection,
get_dict_of_struct,
get_entity,
ovirt_full_argument_spec,
search_by_name,
wait,
get_id_by_name
)
def transfer(connection, module, direction, transfer_func):
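"""
Create an image transfer in the given direction, wait for it to leave the
INITIALIZING phase, hand the prepared HTTPS proxy connection to
transfer_func, then finalize and wait for a terminal phase.
"""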
transfers_service = connection.system_service().image_transfers_service()
transfer = transfers_service.add(
otypes.ImageTransfer(
image=otypes.Image(
id=module.params['disk_id'],
),
direction=direction,
)
)
transfer_service = transfers_service.image_transfer_service(transfer.id)
try:
# After adding a new transfer for the disk, the transfer's status will be INITIALIZING.
# Wait until the init phase is over. The actual transfer can start when its status is "Transferring".
while transfer.phase == otypes.ImageTransferPhase.INITIALIZING:
time.sleep(module.params['poll_interval'])
transfer = transfer_service.get()
proxy_url = urlparse(transfer.proxy_url)
context = ssl.create_default_context()
auth = module.params['auth']
if auth.get('insecure'):
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE
elif auth.get('ca_file'):
context.load_verify_locations(cafile=auth.get('ca_file'))
proxy_connection = HTTPSConnection(
proxy_url.hostname,
proxy_url.port,
context=context,
)
transfer_func(
transfer_service,
proxy_connection,
proxy_url,
transfer.signed_ticket
)
return True
finally:
transfer_service.finalize()
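# Block until the transfer leaves the transferring/finalizing phases,
# then surface any failure phase as an error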
while transfer.phase in [
otypes.ImageTransferPhase.TRANSFERRING,
otypes.ImageTransferPhase.FINALIZING_SUCCESS,
]:
time.sleep(module.params['poll_interval'])
transfer = transfer_service.get()
if transfer.phase in [
otypes.ImageTransferPhase.UNKNOWN,
otypes.ImageTransferPhase.FINISHED_FAILURE,
otypes.ImageTransferPhase.FINALIZING_FAILURE,
otypes.ImageTransferPhase.CANCELLED,
]:
raise Exception(
"Error occurred while uploading image. The transfer is in %s" % transfer.phase
)
if module.params.get('logical_unit'):
disks_service = connection.system_service().disks_service()
wait(
service=disks_service.service(module.params['id']),
condition=lambda d: d.status == otypes.DiskStatus.OK,
wait=module.params['wait'],
timeout=module.params['timeout'],
)
def upload_disk_image(connection, module):
def _transfer(transfer_service, proxy_connection, proxy_url, transfer_ticket):
BUF_SIZE = 128 * 1024
path = module.params['upload_image_path']
image_size = os.path.getsize(path)
proxy_connection.putrequest("PUT", proxy_url.path)
proxy_connection.putheader('Content-Length', "%d" % (image_size,))
proxy_connection.endheaders()
with open(path, "rb") as disk:
pos = 0
while pos < image_size:
to_read = min(image_size - pos, BUF_SIZE)
chunk = disk.read(to_read)
if not chunk:
transfer_service.pause()
raise RuntimeError("Unexpected end of file at pos=%d" % pos)
proxy_connection.send(chunk)
pos += len(chunk)
return transfer(
connection,
module,
otypes.ImageTransferDirection.UPLOAD,
transfer_func=_transfer,
)
def download_disk_image(connection, module):
def _transfer(transfer_service, proxy_connection, proxy_url, transfer_ticket):
BUF_SIZE = 128 * 1024
transfer_headers = {
'Authorization': transfer_ticket,
}
proxy_connection.request(
'GET',
proxy_url.path,
headers=transfer_headers,
)
r = proxy_connection.getresponse()
path = module.params["download_image_path"]
image_size = int(r.getheader('Content-Length'))
with open(path, "wb") as mydisk:
pos = 0
while pos < image_size:
to_read = min(image_size - pos, BUF_SIZE)
chunk = r.read(to_read)
if not chunk:
raise RuntimeError("Socket disconnected")
mydisk.write(chunk)
pos += len(chunk)
return transfer(
connection,
module,
otypes.ImageTransferDirection.DOWNLOAD,
transfer_func=_transfer,
)
def create_snapshot(module, vm_service, snapshots_service):
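"""Create the snapshot if it does not already exist and wait until it reaches the OK status."""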
changed = False
snapshot = get_entity(
snapshots_service.snapshot_service(module.params['snapshot_id'])
)
if snapshot is None:
if not module.check_mode:
snapshot = snapshots_service.add(
otypes.Snapshot(
description=module.params.get('description'),
persist_memorystate=module.params.get('use_memory'),
)
)
changed = True
wait(
service=snapshots_service.snapshot_service(snapshot.id),
condition=lambda snap: snap.snapshot_status == otypes.SnapshotStatus.OK,
wait=module.params['wait'],
timeout=module.params['timeout'],
)
return {
'changed': changed,
'id': snapshot.id,
'snapshot': get_dict_of_struct(snapshot),
}
def remove_snapshot(module, vm_service, snapshots_service, snapshot_id=None):
changed = False
if not snapshot_id:
snapshot_id = module.params['snapshot_id']
snapshot = get_entity(
snapshots_service.snapshot_service(snapshot_id)
)
if snapshot:
snapshot_service = snapshots_service.snapshot_service(snapshot.id)
if not module.check_mode:
snapshot_service.remove()
changed = True
wait(
service=snapshot_service,
condition=lambda snapshot: snapshot is None,
wait=module.params['wait'],
timeout=module.params['timeout'],
)
return {
'changed': changed,
'id': snapshot.id if snapshot else None,
'snapshot': get_dict_of_struct(snapshot),
}
def restore_snapshot(module, vm_service, snapshots_service):
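"""Restore the snapshot, or commit it if a preview is already in progress."""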
changed = False
snapshot_service = snapshots_service.snapshot_service(
module.params['snapshot_id']
)
snapshot = get_entity(snapshot_service)
if snapshot is None:
raise Exception(
"Snapshot with id '%s' doesn't exist" % module.params['snapshot_id']
)
if snapshot.snapshot_status != otypes.SnapshotStatus.IN_PREVIEW:
if not module.check_mode:
snapshot_service.restore(
restore_memory=module.params.get('use_memory'),
)
changed = True
else:
if not module.check_mode:
vm_service.commit_snapshot()
changed = True
if changed:
wait(
service=snapshot_service,
condition=lambda snap: snap.snapshot_status == otypes.SnapshotStatus.OK,
wait=module.params['wait'],
timeout=module.params['timeout'],
)
return {
'changed': changed,
'id': snapshot.id if snapshot else None,
'snapshot': get_dict_of_struct(snapshot),
}
def get_snapshot_disk_id(module, snapshots_service):
snapshot_service = snapshots_service.snapshot_service(module.params.get('snapshot_id'))
snapshot_disks_service = snapshot_service.disks_service()
disk_id = ''
if module.params.get('disk_id'):
disk_id = module.params.get('disk_id')
elif module.params.get('disk_name'):
disk_id = get_id_by_name(snapshot_disks_service, module.params.get('disk_name'))
return disk_id
def remove_old_snapshots(module, vm_service, snapshots_service):
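"""Delete every snapshot of the given VM older than keep_days_old days and report the removals."""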
deleted_snapshots = []
changed = False
date_now = datetime.now()
for snapshot in snapshots_service.list():
if snapshot.vm is not None and snapshot.vm.name == module.params.get('vm_name'):
diff = date_now - snapshot.date.replace(tzinfo=None)
if diff.days >= module.params.get('keep_days_old'):
snapshot = remove_snapshot(module, vm_service, snapshots_service, snapshot.id).get('snapshot')
deleted_snapshots.append(snapshot)
changed = True
return dict(snapshots=deleted_snapshots, changed=changed)
def main():
argument_spec = ovirt_full_argument_spec(
state=dict(
choices=['restore', 'present', 'absent'],
default='present',
),
vm_name=dict(required=True),
snapshot_id=dict(default=None),
disk_id=dict(default=None),
disk_name=dict(default=None),
description=dict(default=None),
download_image_path=dict(default=None),
upload_image_path=dict(default=None),
keep_days_old=dict(default=None, type='int'),
use_memory=dict(
default=None,
type='bool',
aliases=['restore_memory', 'save_memory'],
),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
required_if=[
('state', 'absent', ['snapshot_id']),
('state', 'restore', ['snapshot_id']),
]
)
check_sdk(module)
ret = {}
vm_name = module.params.get('vm_name')
auth = module.params['auth']
connection = create_connection(auth)
vms_service = connection.system_service().vms_service()
vm = search_by_name(vms_service, vm_name)
if not vm:
module.fail_json(
msg="Vm '{name}' doesn't exist.".format(name=vm_name),
)
vm_service = vms_service.vm_service(vm.id)
snapshots_service = vms_service.vm_service(vm.id).snapshots_service()
try:
state = module.params['state']
if state == 'present':
if module.params.get('disk_id') or module.params.get('disk_name'):
module.params['disk_id'] = get_snapshot_disk_id(module, snapshots_service)
if module.params['upload_image_path']:
ret['changed'] = upload_disk_image(connection, module)
if module.params['download_image_path']:
ret['changed'] = download_disk_image(connection, module)
if module.params.get('keep_days_old') is not None:
ret = remove_old_snapshots(module, vm_service, snapshots_service)
else:
ret = create_snapshot(module, vm_service, snapshots_service)
elif state == 'restore':
ret = restore_snapshot(module, vm_service, snapshots_service)
elif state == 'absent':
ret = remove_snapshot(module, vm_service, snapshots_service)
module.exit_json(**ret)
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
finally:
connection.close(logout=auth.get('token') is None)
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,550 |
ovirt_network fails creating network when external_provider is set.
|
ovirt_network fails when trying to create a network that uses an external provider.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
module ovirt_network
##### ANSIBLE VERSION
```
ansible 2.9.1
config file = /Users/krist/Work/INFRA/Bern/PlopslandRHEV/ansible.cfg
configured module search path = ['/Users/krist/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.1/libexec/lib/python3.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.5 (default, Nov 1 2019, 02:16:23) [Clang 11.0.0 (clang-1100.0.33.8)]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
MacOSX 10.15
##### STEPS TO REPRODUCE
Consider the following playbook:
```
- name: prepare the RHEV cluster
  hosts: localhost
  tasks:
    - block:
        - name: Obtain SSO Token
          ovirt_auth:
            url: "{{ ovirt_api }}"
            username: admin@internal
            ca_file: files/ca.pem
            password: "{{ ovirt_password }}"
        - name: Create private networks
          ovirt_network:
            auth: "{{ ovirt_auth }}"
            data_center: Default
            name: boot
            vm_network: true
            external_provider: ovirt-provider-ovn
            state: present
      always:
        - name: Always revoke the SSO token
          ovirt_auth:
            state: absent
            ovirt_auth: "{{ ovirt_auth }}"
```
##### EXPECTED RESULTS
A network "boot" is created, using the external provider ovirt-provider-ovn
##### ACTUAL RESULTS
An error is thrown:
```
TASK [Create private networks] *********************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Exception: Entity 'boot' was not found.
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Entity 'boot' was not found."}
```
Looking at the source, it appears that when you define an external provider, what is in fact attempted is to import a network from that external provider, not create one:
```
if module.params.get('external_provider'):
    ret = networks_module.import_external_network()
else:
    ret = networks_module.create(search_params=search_params)
```
That is not what you would expect, at least not according to the docs.
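Purely as an illustration of one way the behaviour could be aligned with the docs (a hedged sketch, not the merged patch; `search_entity` comes from the shared ovirt BaseModule, and its use here is an assumption):
```python
# Hedged sketch, not the merged patch: only import from the external provider
# when the network does not already exist in the data center, so an existing
# network is updated instead of triggering a failed import.
if module.params.get('external_provider') and networks_module.search_entity(search_params) is None:
    ret = networks_module.import_external_network()
else:
    ret = networks_module.create(search_params=search_params)
```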
|
https://github.com/ansible/ansible/issues/65550
|
https://github.com/ansible/ansible/pull/65701
|
9f6c210eac83b1f40e5a8a3d352e51e5d4bd8066
|
6a880b78a2305ea71f211b741f16873b41538c1f
| 2019-12-05T09:59:19Z |
python
| 2019-12-19T07:52:35Z |
lib/ansible/modules/cloud/ovirt/ovirt_network.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2016 Red Hat, Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ovirt_network
short_description: Module to manage logical networks in oVirt/RHV
version_added: "2.3"
author: "Ondra Machacek (@machacekondra)"
description:
- "Module to manage logical networks in oVirt/RHV"
options:
id:
description:
- "ID of the network to manage."
version_added: "2.8"
name:
description:
- "Name of the network to manage."
required: true
state:
description:
- "Should the network be present or absent"
choices: ['present', 'absent']
default: present
data_center:
description:
- "Datacenter name where network reside."
description:
description:
- "Description of the network."
comment:
description:
- "Comment of the network."
vlan_tag:
description:
- "Specify VLAN tag."
external_provider:
description:
- "Name of external network provider."
version_added: 2.8
vm_network:
description:
- "If I(True) network will be marked as network for VM."
- "VM network carries traffic relevant to the virtual machine."
type: bool
mtu:
description:
- "Maximum transmission unit (MTU) of the network."
clusters:
description:
- "List of dictionaries describing how the network is managed in specific cluster."
suboptions:
name:
description:
- Cluster name.
assigned:
description:
- I(true) if the network should be assigned to cluster. Default is I(true).
type: bool
required:
description:
- I(true) if the network must remain operational for all hosts associated with this network.
type: bool
display:
description:
- I(true) if the network should be marked as a display network.
type: bool
migration:
description:
- I(true) if the network should be marked as a migration network.
type: bool
gluster:
description:
- I(true) if the network should be marked as a gluster network.
type: bool
label:
description:
- "Name of the label to assign to the network."
version_added: "2.5"
extends_documentation_fragment: ovirt
'''
EXAMPLES = '''
# Examples don't contain auth parameter for simplicity,
# look at ovirt_auth module to see how to reuse authentication:
# Create network
- ovirt_network:
data_center: mydatacenter
name: mynetwork
vlan_tag: 1
vm_network: true
# Remove network
- ovirt_network:
state: absent
name: mynetwork
# Change Network Name
- ovirt_network:
id: 00000000-0000-0000-0000-000000000000
name: "new_network_name"
data_center: mydatacenter
# Add network from external provider
- ovirt_network:
data_center: mydatacenter
name: mynetwork
external_provider: ovirt-provider-ovn
'''
RETURN = '''
id:
description: "ID of the managed network"
returned: "On success if network is found."
type: str
sample: 7de90f31-222c-436c-a1ca-7e655bd5b60c
network:
description: "Dictionary of all the network attributes. Network attributes can be found on your oVirt/RHV instance
at following url: http://ovirt.github.io/ovirt-engine-api-model/master/#types/network."
returned: "On success if network is found."
type: dict
'''
import traceback
try:
import ovirtsdk4.types as otypes
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ovirt import (
BaseModule,
check_sdk,
check_params,
create_connection,
equal,
ovirt_full_argument_spec,
search_by_name,
get_id_by_name,
get_dict_of_struct,
get_entity
)
class NetworksModule(BaseModule):
def import_external_network(self):
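"""Import the named network from the configured external (OpenStack) network provider into the data center."""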
ons_service = self._connection.system_service().openstack_network_providers_service()
on_service = ons_service.provider_service(get_id_by_name(ons_service, self.param('external_provider')))
networks_service = on_service.networks_service()
network_service = networks_service.network_service(get_id_by_name(networks_service, self.param('name')))
network_service.import_(data_center=otypes.DataCenter(name=self._module.params['data_center']))
return {"network": get_dict_of_struct(network_service.get()), "changed": True}
def build_entity(self):
return otypes.Network(
name=self._module.params['name'],
comment=self._module.params['comment'],
description=self._module.params['description'],
id=self._module.params['id'],
data_center=otypes.DataCenter(
name=self._module.params['data_center'],
) if self._module.params['data_center'] else None,
vlan=otypes.Vlan(
self._module.params['vlan_tag'],
) if self._module.params['vlan_tag'] else None,
usages=[
otypes.NetworkUsage.VM if self._module.params['vm_network'] else None
] if self._module.params['vm_network'] is not None else None,
mtu=self._module.params['mtu'],
)
def post_create(self, entity):
self._update_label_assignments(entity)
def _update_label_assignments(self, entity):
if self.param('label') is None:
return
labels_service = self._service.service(entity.id).network_labels_service()
labels = [lbl.id for lbl in labels_service.list()]
if not self.param('label') in labels:
if not self._module.check_mode:
if labels:
labels_service.label_service(labels[0]).remove()
labels_service.add(
label=otypes.NetworkLabel(id=self.param('label'))
)
self.changed = True
def update_check(self, entity):
self._update_label_assignments(entity)
return (
equal(self._module.params.get('comment'), entity.comment) and
equal(self._module.params.get('name'), entity.name) and
equal(self._module.params.get('external_provider'), entity.external_provider) and
equal(self._module.params.get('description'), entity.description) and
equal(self._module.params.get('vlan_tag'), getattr(entity.vlan, 'id', None)) and
equal(self._module.params.get('vm_network'), True if entity.usages else False) and
equal(self._module.params.get('mtu'), entity.mtu)
)
class ClusterNetworksModule(BaseModule):
def __init__(self, network_id, cluster_network, *args, **kwargs):
super(ClusterNetworksModule, self).__init__(*args, **kwargs)
self._network_id = network_id
self._cluster_network = cluster_network
self._old_usages = []
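# Remember the usages already assigned on the cluster network so that
# build_entity() merges new usages instead of replacing them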
self._cluster_network_entity = get_entity(self._service.network_service(network_id))
if self._cluster_network_entity is not None:
self._old_usages = self._cluster_network_entity.usages
def build_entity(self):
return otypes.Network(
id=self._network_id,
name=self._module.params['name'],
required=self._cluster_network.get('required'),
display=self._cluster_network.get('display'),
usages=list(set([
otypes.NetworkUsage(usage)
for usage in ['display', 'gluster', 'migration']
if self._cluster_network.get(usage, False)
] + self._old_usages))
if (
self._cluster_network.get('display') is not None or
self._cluster_network.get('gluster') is not None or
self._cluster_network.get('migration') is not None
) else None,
)
def update_check(self, entity):
return (
equal(self._cluster_network.get('required'), entity.required) and
equal(self._cluster_network.get('display'), entity.display) and
all(
x in [
str(usage)
for usage in getattr(entity, 'usages', [])
# VM + MANAGEMENT is part of root network
if usage != otypes.NetworkUsage.VM and usage != otypes.NetworkUsage.MANAGEMENT
]
for x in [
usage
for usage in ['display', 'gluster', 'migration']
if self._cluster_network.get(usage, False)
]
)
)
def main():
argument_spec = ovirt_full_argument_spec(
state=dict(
choices=['present', 'absent'],
default='present',
),
data_center=dict(required=True),
id=dict(default=None),
name=dict(required=True),
description=dict(default=None),
comment=dict(default=None),
external_provider=dict(default=None),
vlan_tag=dict(default=None, type='int'),
vm_network=dict(default=None, type='bool'),
mtu=dict(default=None, type='int'),
clusters=dict(default=None, type='list'),
label=dict(default=None),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
check_sdk(module)
check_params(module)
try:
auth = module.params.pop('auth')
connection = create_connection(auth)
clusters_service = connection.system_service().clusters_service()
networks_service = connection.system_service().networks_service()
networks_module = NetworksModule(
connection=connection,
module=module,
service=networks_service,
)
state = module.params['state']
search_params = {
'name': module.params['name'],
'datacenter': module.params['data_center'],
}
if state == 'present':
if module.params.get('external_provider'):
ret = networks_module.import_external_network()
else:
ret = networks_module.create(search_params=search_params)
# Update clusters networks:
if module.params.get('clusters') is not None:
for param_cluster in module.params.get('clusters'):
cluster = search_by_name(clusters_service, param_cluster.get('name'))
if cluster is None:
raise Exception("Cluster '%s' was not found." % param_cluster.get('name'))
cluster_networks_service = clusters_service.service(cluster.id).networks_service()
cluster_networks_module = ClusterNetworksModule(
network_id=ret['id'],
cluster_network=param_cluster,
connection=connection,
module=module,
service=cluster_networks_service,
)
if param_cluster.get('assigned', True):
ret = cluster_networks_module.create()
else:
ret = cluster_networks_module.remove()
elif state == 'absent':
ret = networks_module.remove(search_params=search_params)
module.exit_json(**ret)
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
finally:
connection.close(logout=auth.get('token') is None)
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,871 |
Azure CI errors
|
##### SUMMARY
Currently the CI integration tests fail for `azure_rm_storageaccount`.
The test `Assert CNAME failure` in `test/integration/targets/azure_rm_storageaccount/tasks/main.yml` fails because the result message from the previous step doesn't match the expected string, though the tested action does fail as expected. At first glance it seems that some upstream project improved their error messages.
It expects to find: "custom domain name could not be verified"
In the error message: "Failed to update custom domain: Azure Error: StorageCustomDomainNameNotValid\nMessage: Storage custom domain name ansible.com is not valid."
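One low-risk fix (an untested sketch; the exact strings should be checked against the real Azure SDK output) would be to loosen the assertion so it accepts both the old and the new wording:
```yaml
- name: Assert CNAME failure
  assert:
    that:
      # hypothetical updated check: accept the old or the new error text
      - >-
        'custom domain name could not be verified' in change_account['msg']
        or 'StorageCustomDomainNameNotValid' in change_account['msg']
```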
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
azure_rm_storageaccount
##### ANSIBLE VERSION
Current `ansible/devel` branch
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Shippable CI tests
##### STEPS TO REPRODUCE
Start a CI run for `ansible/devel` branch and make sure the Azure tests actually run.
##### EXPECTED RESULTS
CI succeeds
##### ACTUAL RESULTS
Some Azure CI tests consistently fail:
- T=azure/2.7/2
- T=azure/3.6/2
From https://app.shippable.com/github/ansible/ansible/runs/153739/123/console
```
34:40 TASK [azure_rm_storageaccount : Change account type and add custom domain] *****
34:40 task path: /root/ansible/test/results/.tmp/integration/azure_rm_storageaccount-1_wlZF-ΓΓΕΓΞ²ΕΓ/test/integration/targets/azure_rm_storageaccount/tasks/main.yml:102
34:40 <testhost> ESTABLISH LOCAL CONNECTION FOR USER: root
34:40 <testhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
34:40 <testhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1576496300.53-21371603272736 `" && echo ansible-tmp-1576496300.53-21371603272736="` echo /root/.ansible/tmp/ansible-tmp-1576496300.53-21371603272736 `" ) && sleep 0'
34:40 Using module file /root/ansible/lib/ansible/modules/cloud/azure/azure_rm_storageaccount.py
34:40 <testhost> PUT /root/.ansible/tmp/ansible-local-5062uBzF81/tmpuHCv56 TO /root/.ansible/tmp/ansible-tmp-1576496300.53-21371603272736/AnsiballZ_azure_rm_storageaccount.py
34:40 <testhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1576496300.53-21371603272736/ /root/.ansible/tmp/ansible-tmp-1576496300.53-21371603272736/AnsiballZ_azure_rm_storageaccount.py && sleep 0'
34:40 <testhost> EXEC /bin/sh -c 'RESOURCE_GROUP_SECONDARY=ansible-core-ci-prod-aa236c49-9ebf-4e43-95fe-bfa57d70012c-2 RESOURCE_GROUP=ansible-core-ci-prod-aa236c49-9ebf-4e43-95fe-bfa57d70012c-1 AZURE_CLIENT_ID=d856067d-31a2-499b-9cdc-8570fafbcb28 AZURE_TENANT=51cfe857-2f92-4581-b504-ee3eba3db075 AZURE_SECRET=b0c8-N2o4-v0M4-u1K6 AZURE_SUBSCRIPTION_ID=6d22db98-3e5f-4ab9-bdf9-2f911a2775f7 /tmp/python-g20dyI-ansible/python /root/.ansible/tmp/ansible-tmp-1576496300.53-21371603272736/AnsiballZ_azure_rm_storageaccount.py && sleep 0'
34:42 <testhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1576496300.53-21371603272736/ > /dev/null 2>&1 && sleep 0'
34:42 The full traceback is:
34:42 WARNING: The below traceback may *not* be related to the actual failure.
34:42 File "/tmp/ansible_azure_rm_storageaccount_payload_ILdxb7/ansible_azure_rm_storageaccount_payload.zip/ansible/modules/cloud/azure/azure_rm_storageaccount.py", line 556, in update_account
34:42 File "/usr/local/lib/python2.7/dist-packages/azure/mgmt/storage/v2018_07_01/operations/storage_accounts_operations.py", line 415, in update
34:42 raise exp
34:42
34:42 fatal: [testhost]: FAILED! => {
34:42 "changed": false,
34:42 "invocation": {
34:42 "module_args": {
34:42 "access_tier": null,
34:42 "account_type": "Standard_GRS",
34:42 "ad_user": null,
34:42 "adfs_authority_url": null,
34:42 "api_profile": "latest",
34:42 "append_tags": true,
34:42 "auth_source": null,
34:42 "blob_cors": null,
34:42 "cert_validation_mode": null,
34:42 "client_id": null,
34:42 "cloud_environment": "AzureCloud",
34:42 "custom_domain": {
34:42 "name": "ansible.com",
34:42 "use_sub_domain": false
34:42 },
34:42 "force_delete_nonempty": false,
34:42 "https_only": false,
34:42 "kind": "Storage",
34:42 "location": null,
34:42 "name": "6841cc3648a2c983b82d177c",
34:42 "password": null,
34:42 "profile": null,
34:42 "resource_group": "ansible-core-ci-prod-aa236c49-9ebf-4e43-95fe-bfa57d70012c-1",
34:42 "secret": null,
34:42 "state": "present",
34:42 "subscription_id": null,
34:42 "tags": null,
34:42 "tenant": null
34:42 }
34:42 },
34:42 "msg": "Failed to update custom domain: Azure Error: StorageCustomDomainNameNotValid\nMessage: Storage custom domain name ansible.com is not valid."
34:42 }
34:42 ...ignoring
34:42
34:42 TASK [azure_rm_storageaccount : Assert CNAME failure] **************************
34:42 task path: /root/ansible/test/results/.tmp/integration/azure_rm_storageaccount-1_wlZF-ΓΓΕΓΞ²ΕΓ/test/integration/targets/azure_rm_storageaccount/tasks/main.yml:111
34:42 fatal: [testhost]: FAILED! => {
34:42 "assertion": "'custom domain name could not be verified' in change_account['msg']",
34:42 "changed": false,
34:42 "evaluated_to": false,
34:42 "msg": "Assertion failed"
34:42 }
34:42
34:42 PLAY RECAP *********************************************************************
34:42 testhost : ok=14 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=3
```
|
https://github.com/ansible/ansible/issues/65871
|
https://github.com/ansible/ansible/pull/65875
|
bd989052b17d571e1395c3bac5128551403ce396
|
14ebceec2535ba7ff51d75cc926198a69d356711
| 2019-12-16T12:04:33Z |
python
| 2019-12-19T17:12:38Z |
test/integration/targets/azure_rm_storageaccount/tasks/main.yml
|
- name: Create storage account name
set_fact:
storage_account: "{{ resource_group | hash('md5') | truncate(24, True, '') }}"
- name: Test invalid account name
azure_rm_storageaccount:
resource_group: "{{ resource_group }}"
name: "invalid_char$"
register: invalid_name
ignore_errors: yes
- name: Assert task failed
assert: { that: "invalid_name['failed'] == True" }
- name: Delete storage account
azure_rm_storageaccount:
resource_group: "{{ resource_group }}"
name: "{{ storage_account }}"
state: absent
force_delete_nonempty: True
- name: Create new storage account
azure_rm_storageaccount:
resource_group: "{{ resource_group }}"
name: "{{ storage_account }}"
account_type: Standard_LRS
append_tags: no
blob_cors:
- allowed_origins:
- http://www.example.com/
allowed_methods:
- GET
- POST
allowed_headers:
- x-ms-meta-data*
- x-ms-meta-target*
- x-ms-meta-abc
exposed_headers:
- x-ms-meta-*
max_age_in_seconds: 200
tags:
test: test
galaxy: galaxy
register: output
- name: Assert status succeeded and results include an Id value
assert:
that:
- output.changed
- output.state.id is defined
- output.state.blob_cors | length == 1
- name: Create new storage account (idempotence)
azure_rm_storageaccount:
resource_group: "{{ resource_group }}"
name: "{{ storage_account }}"
account_type: Standard_LRS
append_tags: no
blob_cors:
- allowed_origins:
- http://www.example.com/
allowed_methods:
- GET
- POST
allowed_headers:
- x-ms-meta-data*
- x-ms-meta-target*
- x-ms-meta-abc
exposed_headers:
- x-ms-meta-*
max_age_in_seconds: 200
tags:
test: test
galaxy: galaxy
register: output
- assert:
that:
- not output.changed
- name: Gather facts by tags
azure_rm_storageaccount_facts:
resource_group: "{{ resource_group }}"
tags:
- test
- galaxy
- assert:
that: azure_storageaccounts | length >= 1
- name: Change account type
azure_rm_storageaccount:
resource_group: "{{ resource_group }}"
name: "{{ storage_account }}"
account_type: Premium_LRS
register: change_account
ignore_errors: yes
- name: Assert account type change failed
assert: { that: "change_account['failed'] == True" }
- name: Change account type and add custom domain
azure_rm_storageaccount:
resource_group: "{{ resource_group }}"
name: "{{ storage_account }}"
account_type: Standard_GRS
custom_domain: { name: ansible.com, use_sub_domain: no }
register: change_account
ignore_errors: yes
- name: Assert CNAME failure
assert: { that: "'custom domain name could not be verified' in change_account['msg']" }
- name: Update account tags
azure_rm_storageaccount:
resource_group: "{{ resource_group }}"
name: "{{ storage_account }}"
append_tags: no
tags:
testing: testing
delete: never
register: output
- assert:
that:
- "output.state.tags | length == 2"
- "output.state.tags.testing == 'testing'"
- "output.state.tags.delete == 'never'"
- name: Gather facts
azure_rm_storageaccount_facts:
resource_group: "{{ resource_group }}"
name: "{{ storage_account }}"
show_connection_string: True
show_blob_cors: True
- assert:
that:
- "azure_storageaccounts| length == 1"
- "storageaccounts | length == 1"
- not storageaccounts[0].custom_domain
- storageaccounts[0].account_type == "Standard_GRS"
- storageaccounts[0].primary_endpoints.blob.connectionstring
- storageaccounts[0].blob_cors
- name: Gather facts
azure_rm_storageaccount_facts:
resource_group: "{{ resource_group }}"
- assert:
that:
- "azure_storageaccounts | length > 0"
- name: Delete account
azure_rm_storageaccount:
resource_group: "{{ resource_group }}"
name: "{{ storage_account }}"
state: absent
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,814 |
azure_rm_storageaccount integration tests failing due to error message change
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
`azure_rm_storageaccount` tests are failing; I think it's because the error message has changed.
https://app.shippable.com/github/ansible/ansible/runs/153582/72/console
This is the failure message:
```
"msg": "Failed to update custom domain: Azure Error: StorageCustomDomainNameNotValid\nMessage: Storage custom domain name ansible.com is not valid."
```
Which doesn't match the `assert` check:
```
"assertion": "'custom domain name could not be verified' in change_account['msg']",
```
I don't know enough about what this is doing to tell if it's failing for some other reason, but my guess is that the error message changed and we just need to update the `assert` test.
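If that's the case, a sketch of an updated test (assuming the `StorageCustomDomainNameNotValid` error code is more stable than the human-readable sentence) could assert on the code instead:
```yaml
- name: Assert CNAME failure
  assert: { that: "'StorageCustomDomainNameNotValid' in change_account['msg']" }
```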
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`test/integration/targets/azure_rm_storageaccount/`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
devel
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
N/A
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Shippable
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Run `azure_rm_storageaccount` integration test
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Tests pass
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Tests fail
<!--- Paste verbatim command output between quotes -->
```paste below
13:58 TASK [azure_rm_storageaccount : Assert CNAME failure] **************************
13:58 task path: /root/ansible/test/integration/targets/azure_rm_storageaccount/tasks/main.yml:69
13:58 fatal: [testhost]: FAILED! => {
13:58 "assertion": "'custom domain name could not be verified' in change_account['msg']",
13:58 "changed": false,
13:58 "evaluated_to": false,
13:58 "msg": "Assertion failed"
13:58 }
```
|
https://github.com/ansible/ansible/issues/65814
|
https://github.com/ansible/ansible/pull/65875
|
bd989052b17d571e1395c3bac5128551403ce396
|
14ebceec2535ba7ff51d75cc926198a69d356711
| 2019-12-13T17:26:05Z |
python
| 2019-12-19T17:12:38Z |
test/integration/targets/azure_rm_storageaccount/tasks/main.yml
|
- name: Create storage account name
set_fact:
storage_account: "{{ resource_group | hash('md5') | truncate(24, True, '') }}"
- name: Test invalid account name
azure_rm_storageaccount:
resource_group: "{{ resource_group }}"
name: "invalid_char$"
register: invalid_name
ignore_errors: yes
- name: Assert task failed
assert: { that: "invalid_name['failed'] == True" }
- name: Delete storage account
azure_rm_storageaccount:
resource_group: "{{ resource_group }}"
name: "{{ storage_account }}"
state: absent
force_delete_nonempty: True
- name: Create new storage account
azure_rm_storageaccount:
resource_group: "{{ resource_group }}"
name: "{{ storage_account }}"
account_type: Standard_LRS
append_tags: no
blob_cors:
- allowed_origins:
- http://www.example.com/
allowed_methods:
- GET
- POST
allowed_headers:
- x-ms-meta-data*
- x-ms-meta-target*
- x-ms-meta-abc
exposed_headers:
- x-ms-meta-*
max_age_in_seconds: 200
tags:
test: test
galaxy: galaxy
register: output
- name: Assert status succeeded and results include an Id value
assert:
that:
- output.changed
- output.state.id is defined
- output.state.blob_cors | length == 1
- name: Create new storage account (idempotence)
azure_rm_storageaccount:
resource_group: "{{ resource_group }}"
name: "{{ storage_account }}"
account_type: Standard_LRS
append_tags: no
blob_cors:
- allowed_origins:
- http://www.example.com/
allowed_methods:
- GET
- POST
allowed_headers:
- x-ms-meta-data*
- x-ms-meta-target*
- x-ms-meta-abc
exposed_headers:
- x-ms-meta-*
max_age_in_seconds: 200
tags:
test: test
galaxy: galaxy
register: output
- assert:
that:
- not output.changed
- name: Gather facts by tags
azure_rm_storageaccount_facts:
resource_group: "{{ resource_group }}"
tags:
- test
- galaxy
- assert:
that: azure_storageaccounts | length >= 1
- name: Change account type
azure_rm_storageaccount:
resource_group: "{{ resource_group }}"
name: "{{ storage_account }}"
account_type: Premium_LRS
register: change_account
ignore_errors: yes
- name: Assert account type change failed
assert: { that: "change_account['failed'] == True" }
- name: Change account type and add custom domain
azure_rm_storageaccount:
resource_group: "{{ resource_group }}"
name: "{{ storage_account }}"
account_type: Standard_GRS
custom_domain: { name: ansible.com, use_sub_domain: no }
register: change_account
ignore_errors: yes
- name: Assert CNAME failure
assert: { that: "'custom domain name could not be verified' in change_account['msg']" }
- name: Update account tags
azure_rm_storageaccount:
resource_group: "{{ resource_group }}"
name: "{{ storage_account }}"
append_tags: no
tags:
testing: testing
delete: never
register: output
- assert:
that:
- "output.state.tags | length == 2"
- "output.state.tags.testing == 'testing'"
- "output.state.tags.delete == 'never'"
- name: Gather facts
azure_rm_storageaccount_facts:
resource_group: "{{ resource_group }}"
name: "{{ storage_account }}"
show_connection_string: True
show_blob_cors: True
- assert:
that:
- "azure_storageaccounts| length == 1"
- "storageaccounts | length == 1"
- not storageaccounts[0].custom_domain
- storageaccounts[0].account_type == "Standard_GRS"
- storageaccounts[0].primary_endpoints.blob.connectionstring
- storageaccounts[0].blob_cors
- name: Gather facts
azure_rm_storageaccount_facts:
resource_group: "{{ resource_group }}"
- assert:
that:
- "azure_storageaccounts | length > 0"
- name: Delete account
azure_rm_storageaccount:
resource_group: "{{ resource_group }}"
name: "{{ storage_account }}"
state: absent
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,254 |
The --force-handlers option, if used with 'strategy: free', does not run handlers on all hosts.
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
As of Ansible 2.7 the `--force-handlers` option no longer runs the handler(s) on all hosts when used with the free strategy. I only tested back to 2.5; it was working as I would expect on 2.5 & 2.6.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
plugins/strategy
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = /Users/bdudas/dev-net/handlers/ansible.cfg
configured module search path = [u'/Users/bdudas/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/bdudas/dev-net/handlers/ansible/lib/ansible
executable location = /Users/bdudas/dev-net/handlers/ansible/bin/ansible
python version = 2.7.16 (default, Oct 16 2019, 00:34:56) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_STRATEGY(/Users/bdudas/dev-net/handlers/ansible.cfg) = free
HOST_KEY_CHECKING(/Users/bdudas/dev-net/handlers/ansible.cfg) = False
RETRY_FILES_ENABLED(/Users/bdudas/dev-net/handlers/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: all
gather_facts: false
tasks:
- command: hostname
register: id
notify: recovery_x
- debug:
var: id
- name: failure
fail: msg="Testing Failure"
when: inventory_hostname == 'covfefe'
handlers:
- name: recovery_x
shell: hostname
...
```
ansible-playbook <playbook.yml> --force-handlers
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Handlers will run against all hosts when using the `--force-handlers` option.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Handler is only run on 1 of the 2 target hosts.
<!--- Paste verbatim command output between quotes -->
```paste below
```
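A workaround until this is fixed: overriding the free strategy on the play (so it falls back to linear, whose base filtering keeps all notified hosts) makes `--force-handlers` behave as expected again. A minimal sketch, assuming the `DEFAULT_STRATEGY = free` config above:
```yaml
---
- hosts: all
  gather_facts: false
  strategy: linear   # override the free strategy set in ansible.cfg
  tasks:
    - command: hostname
      register: id
      notify: recovery_x
    - name: failure
      fail: msg="Testing Failure"
      when: inventory_hostname == 'covfefe'
  handlers:
    - name: recovery_x
      shell: hostname
...
```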
|
https://github.com/ansible/ansible/issues/65254
|
https://github.com/ansible/ansible/pull/65576
|
89703b328490966db287700bb4bc3a422e96d98e
|
c8704573396e7480b3e1b33b2ddda2b6325d0d80
| 2019-11-25T15:02:05Z |
python
| 2019-12-19T19:10:51Z |
changelogs/fragments/65576-fix-free-strategy-handler-filtering.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,254 |
The --force-handlers option, if used with 'strategy: free', does not run handlers on all hosts.
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
As of Ansible 2.7 the `--force-handlers` option no longer runs the handler(s) on all hosts when used with the free strategy. I only tested back to 2.5; it was working as I would expect on 2.5 & 2.6.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
plugins/strategy
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = /Users/bdudas/dev-net/handlers/ansible.cfg
configured module search path = [u'/Users/bdudas/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/bdudas/dev-net/handlers/ansible/lib/ansible
executable location = /Users/bdudas/dev-net/handlers/ansible/bin/ansible
python version = 2.7.16 (default, Oct 16 2019, 00:34:56) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_STRATEGY(/Users/bdudas/dev-net/handlers/ansible.cfg) = free
HOST_KEY_CHECKING(/Users/bdudas/dev-net/handlers/ansible.cfg) = False
RETRY_FILES_ENABLED(/Users/bdudas/dev-net/handlers/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: all
gather_facts: false
tasks:
- command: hostname
register: id
notify: recovery_x
- debug:
var: id
- name: failure
fail: msg="Testing Failure"
when: inventory_hostname == 'covfefe'
handlers:
- name: recovery_x
shell: hostname
...
```
ansible-playbook <playbook.yml> --force-handlers
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Handlers will run against all hosts when using the `--force-handlers` option.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Handler is only run on 1 of the 2 target hosts.
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/65254
|
https://github.com/ansible/ansible/pull/65576
|
89703b328490966db287700bb4bc3a422e96d98e
|
c8704573396e7480b3e1b33b2ddda2b6325d0d80
| 2019-11-25T15:02:05Z |
python
| 2019-12-19T19:10:51Z |
lib/ansible/plugins/strategy/__init__.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import cmd
import functools
import os
import pprint
import sys
import threading
import time
from collections import deque
from multiprocessing import Lock
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleFileNotFound, AnsibleParserError, AnsibleUndefinedVariable
from ansible.executor import action_write_locks
from ansible.executor.process.worker import WorkerProcess
from ansible.executor.task_result import TaskResult
from ansible.inventory.host import Host
from ansible.module_utils.six.moves import queue as Queue
from ansible.module_utils.six import iteritems, itervalues, string_types
from ansible.module_utils._text import to_text
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task_include import TaskInclude
from ansible.plugins import loader as plugin_loader
from ansible.template import Templar
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars
from ansible.vars.clean import strip_internal_keys, module_response_deepcopy
display = Display()
__all__ = ['StrategyBase']
class StrategySentinel:
pass
def SharedPluginLoaderObj():
'''This only exists for backwards compat, do not use.
'''
display.deprecated('SharedPluginLoaderObj is deprecated, please directly use ansible.plugins.loader',
version='2.11')
return plugin_loader
_sentinel = StrategySentinel()
def results_thread_main(strategy):
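    # Background thread target: drain task results from the final queue into
    # strategy._results until the _sentinel object is received.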
while True:
try:
result = strategy._final_q.get()
if isinstance(result, StrategySentinel):
break
else:
strategy._results_lock.acquire()
strategy._results.append(result)
strategy._results_lock.release()
except (IOError, EOFError):
break
except Queue.Empty:
pass
def debug_closure(func):
"""Closure to wrap ``StrategyBase._process_pending_results`` and invoke the task debugger"""
@functools.wraps(func)
def inner(self, iterator, one_pass=False, max_passes=None):
status_to_stats_map = (
('is_failed', 'failures'),
('is_unreachable', 'dark'),
('is_changed', 'changed'),
('is_skipped', 'skipped'),
)
# We don't know the host yet, copy the previous states, for lookup after we process new results
prev_host_states = iterator._host_states.copy()
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes)
_processed_results = []
for result in results:
task = result._task
host = result._host
_queued_task_args = self._queued_task_cache.pop((host.name, task._uuid), None)
task_vars = _queued_task_args['task_vars']
play_context = _queued_task_args['play_context']
# Try to grab the previous host state, if it doesn't exist use get_host_state to generate an empty state
try:
prev_host_state = prev_host_states[host.name]
except KeyError:
prev_host_state = iterator.get_host_state(host)
while result.needs_debugger(globally_enabled=self.debugger_active):
next_action = NextAction()
dbg = Debugger(task, host, task_vars, play_context, result, next_action)
dbg.cmdloop()
if next_action.result == NextAction.REDO:
# rollback host state
self._tqm.clear_failed_hosts()
iterator._host_states[host.name] = prev_host_state
for method, what in status_to_stats_map:
if getattr(result, method)():
self._tqm._stats.decrement(what, host.name)
self._tqm._stats.decrement('ok', host.name)
# redo
self._queue_task(host, task, task_vars, play_context)
_processed_results.extend(debug_closure(func)(self, iterator, one_pass))
break
elif next_action.result == NextAction.CONTINUE:
_processed_results.append(result)
break
elif next_action.result == NextAction.EXIT:
# Matches KeyboardInterrupt from bin/ansible
sys.exit(99)
else:
_processed_results.append(result)
return _processed_results
return inner
class StrategyBase:
'''
This is the base class for strategy plugins, which contains some common
code useful to all strategies like running handlers, cleanup actions, etc.
'''
# by default, strategies should support throttling but we allow individual
# strategies to disable this and either forego supporting it or managing
# the throttling internally (as `free` does)
ALLOW_BASE_THROTTLING = True
def __init__(self, tqm):
self._tqm = tqm
self._inventory = tqm.get_inventory()
self._workers = tqm._workers
self._variable_manager = tqm.get_variable_manager()
self._loader = tqm.get_loader()
self._final_q = tqm._final_q
self._step = context.CLIARGS.get('step', False)
self._diff = context.CLIARGS.get('diff', False)
self.flush_cache = context.CLIARGS.get('flush_cache', False)
# the task cache is a dictionary of tuples of (host.name, task._uuid)
# used to find the original task object of in-flight tasks and to store
# the task args/vars and play context info used to queue the task.
self._queued_task_cache = {}
# Backwards compat: self._display isn't really needed, just import the global display and use that.
self._display = display
# internal counters
self._pending_results = 0
self._cur_worker = 0
# this dictionary is used to keep track of hosts that have
# outstanding tasks still in queue
self._blocked_hosts = dict()
# this dictionary is used to keep track of hosts that have
# flushed handlers
self._flushed_hosts = dict()
self._results = deque()
self._results_lock = threading.Condition(threading.Lock())
# create the result processing thread for reading results in the background
self._results_thread = threading.Thread(target=results_thread_main, args=(self,))
self._results_thread.daemon = True
self._results_thread.start()
# holds the list of active (persistent) connections to be shutdown at
# play completion
self._active_connections = dict()
# Caches for get_host calls, to avoid calling excessively
# These values should be set at the top of the ``run`` method of each
# strategy plugin. Use ``_set_hosts_cache`` to set these values
self._hosts_cache = []
self._hosts_cache_all = []
self.debugger_active = C.ENABLE_TASK_DEBUGGER
def _set_hosts_cache(self, play, refresh=True):
"""Responsible for setting _hosts_cache and _hosts_cache_all
See comment in ``__init__`` for the purpose of these caches
"""
if not refresh and all((self._hosts_cache, self._hosts_cache_all)):
return
if Templar(None).is_template(play.hosts):
_pattern = 'all'
else:
_pattern = play.hosts or 'all'
self._hosts_cache_all = [h.name for h in self._inventory.get_hosts(pattern=_pattern, ignore_restrictions=True)]
self._hosts_cache = [h.name for h in self._inventory.get_hosts(play.hosts, order=play.order)]
def cleanup(self):
# close active persistent connections
for sock in itervalues(self._active_connections):
try:
conn = Connection(sock)
conn.reset()
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
self._final_q.put(_sentinel)
self._results_thread.join()
def run(self, iterator, play_context, result=0):
# execute one more pass through the iterator without peeking, to
# make sure that all of the hosts are advanced to their final task.
# This should be safe, as everything should be ITERATING_COMPLETE by
# this point, though the strategy may not advance the hosts itself.
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
iterator.get_next_task_for_host(self._inventory.hosts[host])
except KeyError:
iterator.get_next_task_for_host(self._inventory.get_host(host))
# save the failed/unreachable hosts, as the run_handlers()
# method will clear that information during its execution
failed_hosts = iterator.get_failed_hosts()
unreachable_hosts = self._tqm._unreachable_hosts.keys()
display.debug("running handlers")
handler_result = self.run_handlers(iterator, play_context)
if isinstance(handler_result, bool) and not handler_result:
result |= self._tqm.RUN_ERROR
elif not handler_result:
result |= handler_result
# now update with the hosts (if any) that failed or were
# unreachable during the handler execution phase
failed_hosts = set(failed_hosts).union(iterator.get_failed_hosts())
unreachable_hosts = set(unreachable_hosts).union(self._tqm._unreachable_hosts.keys())
# return the appropriate code, depending on the status hosts after the run
if not isinstance(result, bool) and result != self._tqm.RUN_OK:
return result
elif len(unreachable_hosts) > 0:
return self._tqm.RUN_UNREACHABLE_HOSTS
elif len(failed_hosts) > 0:
return self._tqm.RUN_FAILED_HOSTS
else:
return self._tqm.RUN_OK
def get_hosts_remaining(self, play):
self._set_hosts_cache(play, refresh=False)
ignore = set(self._tqm._failed_hosts).union(self._tqm._unreachable_hosts)
return [host for host in self._hosts_cache if host not in ignore]
def get_failed_hosts(self, play):
self._set_hosts_cache(play, refresh=False)
return [host for host in self._hosts_cache if host in self._tqm._failed_hosts]
def add_tqm_variables(self, vars, play):
'''
Base class method to add extra variables/information to the list of task
vars sent through the executor engine regarding the task queue manager state.
'''
vars['ansible_current_hosts'] = self.get_hosts_remaining(play)
vars['ansible_failed_hosts'] = self.get_failed_hosts(play)
def _queue_task(self, host, task, task_vars, play_context):
''' handles queueing the task up to be sent to a worker '''
display.debug("entering _queue_task() for %s/%s" % (host.name, task.action))
# Add a write lock for tasks.
# Maybe this should be added somewhere further up the call stack but
# this is the earliest in the code where we have task (1) extracted
# into its own variable and (2) there's only a single code path
# leading to the module being run. This is called by three
# functions: __init__.py::_do_handler_run(), linear.py::run(), and
# free.py::run() so we'd have to add to all three to do it there.
# The next common higher level is __init__.py::run() and that has
# tasks inside of play_iterator so we'd have to extract them to do it
# there.
if task.action not in action_write_locks.action_write_locks:
display.debug('Creating lock for %s' % task.action)
action_write_locks.action_write_locks[task.action] = Lock()
# create a templar and template things we need later for the queuing process
templar = Templar(loader=self._loader, variables=task_vars)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
# and then queue the new task
try:
queued = False
starting_worker = self._cur_worker
while True:
worker_prc = self._workers[self._cur_worker]
if worker_prc is None or not worker_prc.is_alive():
self._queued_task_cache[(host.name, task._uuid)] = {
'host': host,
'task': task,
'task_vars': task_vars,
'play_context': play_context
}
worker_prc = WorkerProcess(self._final_q, task_vars, host, task, play_context, self._loader, self._variable_manager, plugin_loader)
self._workers[self._cur_worker] = worker_prc
self._tqm.send_callback('v2_runner_on_start', host, task)
worker_prc.start()
display.debug("worker is %d (out of %d available)" % (self._cur_worker + 1, len(self._workers)))
queued = True
self._cur_worker += 1
# Determine the "rewind point" of the worker list. This means we start
# iterating over the list of workers until the end of the list is found.
# Normally, that is simply the length of the workers list (as determined
# by the forks or serial setting), however a task/block/play may "throttle"
# that limit down.
rewind_point = len(self._workers)
if throttle > 0 and self.ALLOW_BASE_THROTTLING:
if task.run_once:
display.debug("Ignoring 'throttle' as 'run_once' is also set for '%s'" % task.get_name())
else:
if throttle <= rewind_point:
display.debug("task: %s, throttle: %d" % (task.get_name(), throttle))
rewind_point = throttle
if self._cur_worker >= rewind_point:
self._cur_worker = 0
if queued:
break
elif self._cur_worker == starting_worker:
time.sleep(0.0001)
self._pending_results += 1
except (EOFError, IOError, AssertionError) as e:
# most likely an abort
display.debug("got an error while queuing: %s" % e)
return
display.debug("exiting _queue_task() for %s/%s" % (host.name, task.action))
def get_task_hosts(self, iterator, task_host, task):
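        # run_once task results (e.g. registered vars and facts) are applied to
        # every host that is still reachable, not just the host that ran the task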
if task.run_once:
host_list = [host for host in self._hosts_cache if host not in self._tqm._unreachable_hosts]
else:
host_list = [task_host.name]
return host_list
def get_delegated_hosts(self, result, task):
host_name = result.get('_ansible_delegated_vars', {}).get('ansible_delegated_host', None)
return [host_name or task.delegate_to]
@debug_closure
def _process_pending_results(self, iterator, one_pass=False, max_passes=None):
'''
Reads results off the final queue and takes appropriate action
based on the result (executing callbacks, updating state, etc.).
'''
ret_results = []
handler_templar = Templar(self._loader)
def get_original_host(host_name):
# FIXME: this should not need x2 _inventory
host_name = to_text(host_name)
if host_name in self._inventory.hosts:
return self._inventory.hosts[host_name]
else:
return self._inventory.get_host(host_name)
def search_handler_blocks_by_name(handler_name, handler_blocks):
# iterate in reversed order since last handler loaded with the same name wins
for handler_block in reversed(handler_blocks):
for handler_task in handler_block.block:
if handler_task.name:
if not handler_task.cached_name:
if handler_templar.is_template(handler_task.name):
handler_templar.available_variables = self._variable_manager.get_vars(play=iterator._play,
task=handler_task,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
handler_task.name = handler_templar.template(handler_task.name)
handler_task.cached_name = True
try:
# first we check with the full result of get_name(), which may
# include the role name (if the handler is from a role). If that
# is not found, we resort to the simple name field, which doesn't
# have anything extra added to it.
if handler_task.name == handler_name:
return handler_task
else:
if handler_task.get_name() == handler_name:
return handler_task
except (UndefinedError, AnsibleUndefinedVariable):
# We skip this handler due to the fact that it may be using
# a variable in the name that was conditionally included via
# set_fact or some other method, and we don't want to error
# out unnecessarily
continue
return None
cur_pass = 0
while True:
try:
self._results_lock.acquire()
task_result = self._results.popleft()
except IndexError:
break
finally:
self._results_lock.release()
# get the original host and task. We then assign them to the TaskResult for use in callbacks/etc.
original_host = get_original_host(task_result._host)
queue_cache_entry = (original_host.name, task_result._task)
found_task = self._queued_task_cache.get(queue_cache_entry)['task']
original_task = found_task.copy(exclude_parent=True, exclude_tasks=True)
original_task._parent = found_task._parent
original_task.from_attrs(task_result._task_fields)
task_result._host = original_host
task_result._task = original_task
# send callbacks for 'non final' results
if '_ansible_retry' in task_result._result:
self._tqm.send_callback('v2_runner_retry', task_result)
continue
elif '_ansible_item_result' in task_result._result:
if task_result.is_failed() or task_result.is_unreachable():
self._tqm.send_callback('v2_runner_item_on_failed', task_result)
elif task_result.is_skipped():
self._tqm.send_callback('v2_runner_item_on_skipped', task_result)
else:
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
self._tqm.send_callback('v2_runner_item_on_ok', task_result)
continue
if original_task.register:
host_list = self.get_task_hosts(iterator, original_host, original_task)
clean_copy = strip_internal_keys(module_response_deepcopy(task_result._result))
if 'invocation' in clean_copy:
del clean_copy['invocation']
for target_host in host_list:
self._variable_manager.set_nonpersistent_facts(target_host, {original_task.register: clean_copy})
# all host status messages contain 2 entries: (msg, task_result)
role_ran = False
if task_result.is_failed():
role_ran = True
ignore_errors = original_task.ignore_errors
if not ignore_errors:
display.debug("marking %s as failed" % original_host.name)
if original_task.run_once:
# if we're using run_once, we have to fail every host here
for h in self._inventory.get_hosts(iterator._play.hosts):
if h.name not in self._tqm._unreachable_hosts:
state, _ = iterator.get_next_task_for_host(h, peek=True)
iterator.mark_host_failed(h)
state, new_task = iterator.get_next_task_for_host(h, peek=True)
else:
iterator.mark_host_failed(original_host)
# grab the current state and if we're iterating on the rescue portion
# of a block then we save the failed task in a special var for use
# within the rescue/always
state, _ = iterator.get_next_task_for_host(original_host, peek=True)
if iterator.is_failed(original_host) and state and state.run_state == iterator.ITERATING_COMPLETE:
self._tqm._failed_hosts[original_host.name] = True
if state and iterator.get_active_state(state).run_state == iterator.ITERATING_RESCUE:
self._tqm._stats.increment('rescued', original_host.name)
self._variable_manager.set_nonpersistent_facts(
original_host.name,
dict(
ansible_failed_task=original_task.serialize(),
ansible_failed_result=task_result._result,
),
)
else:
self._tqm._stats.increment('failures', original_host.name)
else:
self._tqm._stats.increment('ok', original_host.name)
self._tqm._stats.increment('ignored', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
self._tqm.send_callback('v2_runner_on_failed', task_result, ignore_errors=ignore_errors)
elif task_result.is_unreachable():
ignore_unreachable = original_task.ignore_unreachable
if not ignore_unreachable:
self._tqm._unreachable_hosts[original_host.name] = True
iterator._play._removed_hosts.append(original_host.name)
else:
self._tqm._stats.increment('skipped', original_host.name)
task_result._result['skip_reason'] = 'Host %s is unreachable' % original_host.name
self._tqm._stats.increment('dark', original_host.name)
self._tqm.send_callback('v2_runner_on_unreachable', task_result)
elif task_result.is_skipped():
self._tqm._stats.increment('skipped', original_host.name)
self._tqm.send_callback('v2_runner_on_skipped', task_result)
else:
role_ran = True
if original_task.loop:
# this task had a loop, and has more than one result, so
# loop over all of them instead of a single result
result_items = task_result._result.get('results', [])
else:
result_items = [task_result._result]
for result_item in result_items:
if '_ansible_notify' in result_item:
if task_result.is_changed():
# The shared dictionary for notified handlers is a proxy, which
# does not detect when sub-objects within the proxy are modified.
# So, per the docs, we reassign the list so the proxy picks up and
# notifies all other threads
for handler_name in result_item['_ansible_notify']:
found = False
# Find the handler using the above helper. First we look up the
# dependency chain of the current task (if it's from a role), otherwise
# we just look through the list of handlers in the current play/all
# roles and use the first one that matches the notify name
target_handler = search_handler_blocks_by_name(handler_name, iterator._play.handlers)
if target_handler is not None:
found = True
if target_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', target_handler, original_host)
for listening_handler_block in iterator._play.handlers:
for listening_handler in listening_handler_block.block:
listeners = getattr(listening_handler, 'listen', []) or []
if not listeners:
continue
listeners = listening_handler.get_validated_value(
'listen', listening_handler._valid_attrs['listen'], listeners, handler_templar
)
if handler_name not in listeners:
continue
else:
found = True
if listening_handler.notify_host(original_host):
self._tqm.send_callback('v2_playbook_on_notify', listening_handler, original_host)
# and if none were found, then we raise an error
if not found:
msg = ("The requested handler '%s' was not found in either the main handlers list nor in the listening "
"handlers list" % handler_name)
if C.ERROR_ON_MISSING_HANDLER:
raise AnsibleError(msg)
else:
display.warning(msg)
if 'add_host' in result_item:
# this task added a new host (add_host module)
new_host_info = result_item.get('add_host', dict())
self._add_host(new_host_info, iterator)
elif 'add_group' in result_item:
# this task added a new group (group_by module)
self._add_group(original_host, result_item)
if 'ansible_facts' in result_item:
# if delegated fact and we are delegating facts, we need to change target host for them
if original_task.delegate_to is not None and original_task.delegate_facts:
host_list = self.get_delegated_hosts(result_item, original_task)
else:
host_list = self.get_task_hosts(iterator, original_host, original_task)
if original_task.action == 'include_vars':
for (var_name, var_value) in iteritems(result_item['ansible_facts']):
# find the host we're actually referring too here, which may
# be a host that is not really in inventory at all
for target_host in host_list:
self._variable_manager.set_host_variable(target_host, var_name, var_value)
else:
cacheable = result_item.pop('_ansible_facts_cacheable', False)
for target_host in host_list:
# so set_fact is a misnomer but 'cacheable = true' was meant to create an 'actual fact'
# to avoid issues with precedence and confusion with set_fact normal operation,
# we set BOTH fact and nonpersistent_facts (aka hostvar)
# when fact is retrieved from cache in subsequent operations it will have the lower precedence,
# but for playbook setting it the 'higher' precedence is kept
if original_task.action != 'set_fact' or cacheable:
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
if original_task.action == 'set_fact':
self._variable_manager.set_nonpersistent_facts(target_host, result_item['ansible_facts'].copy())
if 'ansible_stats' in result_item and 'data' in result_item['ansible_stats'] and result_item['ansible_stats']['data']:
if 'per_host' not in result_item['ansible_stats'] or result_item['ansible_stats']['per_host']:
host_list = self.get_task_hosts(iterator, original_host, original_task)
else:
host_list = [None]
data = result_item['ansible_stats']['data']
aggregate = 'aggregate' in result_item['ansible_stats'] and result_item['ansible_stats']['aggregate']
for myhost in host_list:
for k in data.keys():
if aggregate:
self._tqm._stats.update_custom_stats(k, data[k], myhost)
else:
self._tqm._stats.set_custom_stats(k, data[k], myhost)
if 'diff' in task_result._result:
if self._diff or getattr(original_task, 'diff', False):
self._tqm.send_callback('v2_on_file_diff', task_result)
if not isinstance(original_task, TaskInclude):
self._tqm._stats.increment('ok', original_host.name)
if 'changed' in task_result._result and task_result._result['changed']:
self._tqm._stats.increment('changed', original_host.name)
# finally, send the ok for this task
self._tqm.send_callback('v2_runner_on_ok', task_result)
self._pending_results -= 1
if original_host.name in self._blocked_hosts:
del self._blocked_hosts[original_host.name]
# If this is a role task, mark the parent role as being run (if
# the task was ok or failed, but not skipped or unreachable)
if original_task._role is not None and role_ran: # TODO: and original_task.action != 'include_role':?
# lookup the role in the ROLE_CACHE to make sure we're dealing
# with the correct object and mark it as executed
for (entry, role_obj) in iteritems(iterator._play.ROLE_CACHE[original_task._role._role_name]):
if role_obj._uuid == original_task._role._uuid:
role_obj._had_task_run[original_host.name] = True
ret_results.append(task_result)
if one_pass or max_passes is not None and (cur_pass + 1) >= max_passes:
break
cur_pass += 1
return ret_results
def _wait_on_handler_results(self, iterator, handler, notified_hosts):
'''
Wait for the handler tasks to complete, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
handler_results = 0
display.debug("waiting for handler results...")
while (self._pending_results > 0 and
handler_results < len(notified_hosts) and
not self._tqm._terminated):
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
handler_results += len([
r._host for r in results if r._host in notified_hosts and
r.task_name == handler.name])
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending handlers, returning what we have")
return ret_results
def _wait_on_pending_results(self, iterator):
'''
Wait for the shared counter to drop to zero, using a short sleep
between checks to ensure we don't spin lock
'''
ret_results = []
display.debug("waiting for pending results...")
while self._pending_results > 0 and not self._tqm._terminated:
if self._tqm.has_dead_workers():
raise AnsibleError("A worker was found in a dead state")
results = self._process_pending_results(iterator)
ret_results.extend(results)
if self._pending_results > 0:
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
display.debug("no more pending results, returning what we have")
return ret_results
def _add_host(self, host_info, iterator):
'''
Helper function to add a new host to inventory based on a task result.
'''
if host_info:
host_name = host_info.get('host_name')
# Check if host in inventory, add if not
if host_name not in self._inventory.hosts:
self._inventory.add_host(host_name, 'all')
self._hosts_cache_all.append(host_name)
new_host = self._inventory.hosts.get(host_name)
# Set/update the vars for this host
new_host.vars = combine_vars(new_host.get_vars(), host_info.get('host_vars', dict()))
new_groups = host_info.get('groups', [])
for group_name in new_groups:
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
new_group = self._inventory.groups[group_name]
new_group.add_host(self._inventory.hosts[host_name])
# reconcile inventory, ensures inventory rules are followed
self._inventory.reconcile_inventory()
def _add_group(self, host, result_item):
'''
Helper function to add a group (if it does not exist), and to assign the
specified host to that group.
'''
changed = False
# the host here is from the executor side, which means it was a
# serialized/cloned copy and we'll need to look up the proper
# host object from the master inventory
real_host = self._inventory.hosts.get(host.name)
if real_host is None:
if host.name == self._inventory.localhost.name:
real_host = self._inventory.localhost
else:
raise AnsibleError('%s cannot be matched in inventory' % host.name)
group_name = result_item.get('add_group')
parent_group_names = result_item.get('parent_groups', [])
if group_name not in self._inventory.groups:
group_name = self._inventory.add_group(group_name)
for name in parent_group_names:
if name not in self._inventory.groups:
# create the new group and add it to inventory
self._inventory.add_group(name)
changed = True
group = self._inventory.groups[group_name]
for parent_group_name in parent_group_names:
parent_group = self._inventory.groups[parent_group_name]
parent_group.add_child_group(group)
if real_host.name not in group.get_hosts():
group.add_host(real_host)
changed = True
if group_name not in host.get_groups():
real_host.add_group(group)
changed = True
if changed:
self._inventory.reconcile_inventory()
return changed
def _copy_included_file(self, included_file):
'''
A proven safe and performant way to create a copy of an included file
'''
ti_copy = included_file._task.copy(exclude_parent=True)
ti_copy._parent = included_file._task._parent
temp_vars = ti_copy.vars.copy()
temp_vars.update(included_file._vars)
ti_copy.vars = temp_vars
return ti_copy
def _load_included_file(self, included_file, iterator, is_handler=False):
'''
Loads an included YAML file of tasks, applying the optional set of variables.
'''
display.debug("loading included file: %s" % included_file._filename)
try:
data = self._loader.load_from_file(included_file._filename)
if data is None:
return []
elif not isinstance(data, list):
raise AnsibleError("included task files must contain a list of tasks")
ti_copy = self._copy_included_file(included_file)
# pop tags out of the include args, if they were specified there, and assign
# them to the include. If the include already had tags specified, we raise an
# error so that users know not to specify them both ways
tags = included_file._task.vars.pop('tags', [])
if isinstance(tags, string_types):
tags = tags.split(',')
if len(tags) > 0:
if len(included_file._task.tags) > 0:
raise AnsibleParserError("Include tasks should not specify tags in more than one way (both via args and directly on the task). "
"Mixing tag specify styles is prohibited for whole import hierarchy, not only for single import statement",
obj=included_file._task._ds)
display.deprecated("You should not specify tags in the include parameters. All tags should be specified using the task-level option",
version='2.12')
included_file._task.tags = tags
block_list = load_list_of_blocks(
data,
play=iterator._play,
parent_block=ti_copy.build_parent_block(),
role=included_file._task._role,
use_handlers=is_handler,
loader=self._loader,
variable_manager=self._variable_manager,
)
# since we skip incrementing the stats when the task result is
# first processed, we do so now for each host in the list
for host in included_file._hosts:
self._tqm._stats.increment('ok', host.name)
except AnsibleError as e:
if isinstance(e, AnsibleFileNotFound):
reason = "Could not find or access '%s' on the Ansible Controller." % to_text(e.file_name)
else:
reason = to_text(e)
# mark all of the hosts including this file as failed, send callbacks,
# and increment the stats for this host
for host in included_file._hosts:
tr = TaskResult(host=host, task=included_file._task, return_data=dict(failed=True, reason=reason))
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
self._tqm._stats.increment('failures', host.name)
self._tqm.send_callback('v2_runner_on_failed', tr)
return []
# finally, send the callback and return the list of blocks loaded
self._tqm.send_callback('v2_playbook_on_include', included_file)
display.debug("done processing included file")
return block_list
def run_handlers(self, iterator, play_context):
'''
Runs handlers on those hosts which have been notified.
'''
result = self._tqm.RUN_OK
for handler_block in iterator._play.handlers:
# FIXME: handlers need to support the rescue/always portions of blocks too,
# but this may take some work in the iterator and gets tricky when
# we consider the ability of meta tasks to flush handlers
for handler in handler_block.block:
if handler.notified_hosts:
result = self._do_handler_run(handler, handler.get_name(), iterator=iterator, play_context=play_context)
if not result:
break
return result
def _do_handler_run(self, handler, handler_name, iterator, play_context, notified_hosts=None):
# FIXME: need to use iterator.get_failed_hosts() instead?
# if not len(self.get_hosts_remaining(iterator._play)):
# self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
# result = False
# break
if notified_hosts is None:
notified_hosts = handler.notified_hosts[:]
notified_hosts = self._filter_notified_hosts(notified_hosts)
if len(notified_hosts) > 0:
saved_name = handler.name
handler.name = handler_name
self._tqm.send_callback('v2_playbook_on_handler_task_start', handler)
handler.name = saved_name
bypass_host_loop = False
try:
action = plugin_loader.action_loader.get(handler.action, class_only=True)
if getattr(action, 'BYPASS_HOST_LOOP', False):
bypass_host_loop = True
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
pass
host_results = []
for host in notified_hosts:
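                # with --force-handlers (or force_handlers on the play), the
                # handler runs even on hosts that have already failed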
if not iterator.is_failed(host) or iterator._play.force_handlers:
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=handler,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
if not handler.cached_name:
handler.name = templar.template(handler.name)
handler.cached_name = True
self._queue_task(host, handler, task_vars, play_context)
if templar.template(handler.run_once) or bypass_host_loop:
break
# collect the results from the handler run
host_results = self._wait_on_handler_results(iterator, handler, notified_hosts)
included_files = IncludedFile.process_include_results(
host_results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
result = True
if len(included_files) > 0:
for included_file in included_files:
try:
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=True)
# for every task in each block brought in by the include, add the list
# of hosts which included the file to the notified_handlers dict
for block in new_blocks:
iterator._play.handlers.append(block)
for task in block.block:
task_name = task.get_name()
display.debug("adding task '%s' included in handler '%s'" % (task_name, handler_name))
task.notified_hosts = included_file._hosts[:]
result = self._do_handler_run(
handler=task,
handler_name=task_name,
iterator=iterator,
play_context=play_context,
notified_hosts=included_file._hosts[:],
)
if not result:
break
except AnsibleError as e:
for host in included_file._hosts:
iterator.mark_host_failed(host)
self._tqm._failed_hosts[host.name] = True
display.warning(to_text(e))
continue
# remove hosts from notification list
handler.notified_hosts = [
h for h in handler.notified_hosts
if h not in notified_hosts]
display.debug("done running handlers, result is: %s" % result)
return result
def _filter_notified_hosts(self, notified_hosts):
'''
Filter notified hosts according to the strategy
'''
# As main strategy is linear, we do not filter hosts
# We return a copy to avoid race conditions
return notified_hosts[:]
def _take_step(self, task, host=None):
ret = False
msg = u'Perform task: %s ' % task
if host:
msg += u'on %s ' % host
msg += u'(N)o/(y)es/(c)ontinue: '
resp = display.prompt(msg)
if resp.lower() in ['y', 'yes']:
display.debug("User ran task")
ret = True
elif resp.lower() in ['c', 'continue']:
display.debug("User ran task and canceled step mode")
self._step = False
ret = True
else:
display.debug("User skipped task")
display.banner(msg)
return ret
def _cond_not_supported_warn(self, task_name):
display.warning("%s task does not support when conditional" % task_name)
def _execute_meta(self, task, play_context, iterator, target_host):
# meta tasks store their args in the _raw_params field of args,
# since they do not use k=v pairs, so get that
meta_action = task.args.get('_raw_params')
def _evaluate_conditional(h):
all_vars = self._variable_manager.get_vars(play=iterator._play, host=h, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
return task.evaluate_conditional(templar, all_vars)
skipped = False
msg = ''
if meta_action == 'noop':
# FIXME: issue a callback for the noop here?
if task.when:
self._cond_not_supported_warn(meta_action)
msg = "noop"
elif meta_action == 'flush_handlers':
if task.when:
self._cond_not_supported_warn(meta_action)
self._flushed_hosts[target_host] = True
self.run_handlers(iterator, play_context)
self._flushed_hosts[target_host] = False
msg = "ran handlers"
elif meta_action == 'refresh_inventory' or self.flush_cache:
if task.when:
self._cond_not_supported_warn(meta_action)
self._inventory.refresh_inventory()
self._set_hosts_cache(iterator._play)
msg = "inventory successfully refreshed"
elif meta_action == 'clear_facts':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
hostname = host.get_name()
self._variable_manager.clear_facts(hostname)
msg = "facts cleared"
else:
skipped = True
elif meta_action == 'clear_host_errors':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
self._tqm._failed_hosts.pop(host.name, False)
self._tqm._unreachable_hosts.pop(host.name, False)
iterator._host_states[host.name].fail_state = iterator.FAILED_NONE
msg = "cleared host errors"
else:
skipped = True
elif meta_action == 'end_play':
if _evaluate_conditional(target_host):
for host in self._inventory.get_hosts(iterator._play.hosts):
if host.name not in self._tqm._unreachable_hosts:
iterator._host_states[host.name].run_state = iterator.ITERATING_COMPLETE
msg = "ending play"
elif meta_action == 'end_host':
if _evaluate_conditional(target_host):
iterator._host_states[target_host.name].run_state = iterator.ITERATING_COMPLETE
msg = "ending play for %s" % target_host.name
else:
skipped = True
msg = "end_host conditional evaluated to false, continuing execution for %s" % target_host.name
elif meta_action == 'reset_connection':
all_vars = self._variable_manager.get_vars(play=iterator._play, host=target_host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
templar = Templar(loader=self._loader, variables=all_vars)
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
play_context = play_context.set_task_and_variable_override(task=task, variables=all_vars, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not play_context.remote_addr:
play_context.remote_addr = target_host.address
# We also add "magic" variables back into the variables dict to make sure
# a certain subset of variables exist.
play_context.update_vars(all_vars)
if task.when:
self._cond_not_supported_warn(meta_action)
if target_host in self._active_connections:
connection = Connection(self._active_connections[target_host])
del self._active_connections[target_host]
else:
connection = plugin_loader.connection_loader.get(play_context.connection, play_context, os.devnull)
play_context.set_attributes_from_plugin(connection)
if connection:
try:
connection.reset()
msg = 'reset connection'
except ConnectionError as e:
# most likely socket is already closed
display.debug("got an error while closing persistent connection: %s" % e)
else:
msg = 'no connection, nothing to reset'
else:
raise AnsibleError("invalid meta action requested: %s" % meta_action, obj=task._ds)
result = {'msg': msg}
if skipped:
result['skipped'] = True
else:
result['changed'] = False
display.vv("META: %s" % msg)
return [TaskResult(target_host, task, result)]
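# Illustrative examples (a sketch, not original source) of plays that reach
# the meta handling above:
#   - meta: flush_handlers
#   - meta: end_host
#     when: some_condition  # hypothetical variable for illustration
# (`end_host` honors `when` via _evaluate_conditional above; `flush_handlers`
# only warns that it does not support conditionals.)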
def get_hosts_left(self, iterator):
''' returns list of available hosts for this iterator by filtering out unreachables '''
hosts_left = []
for host in self._hosts_cache:
if host not in self._tqm._unreachable_hosts:
try:
hosts_left.append(self._inventory.hosts[host])
except KeyError:
hosts_left.append(self._inventory.get_host(host))
return hosts_left
def update_active_connections(self, results):
''' updates the current active persistent connections '''
for r in results:
if 'args' in r._task_fields:
socket_path = r._task_fields['args'].get('_ansible_socket')
if socket_path:
if r._host not in self._active_connections:
self._active_connections[r._host] = socket_path
class NextAction(object):
""" The next action after an interpreter's exit. """
REDO = 1
CONTINUE = 2
EXIT = 3
def __init__(self, result=EXIT):
self.result = result
class Debugger(cmd.Cmd):
prompt_continuous = '> ' # multiple lines
def __init__(self, task, host, task_vars, play_context, result, next_action):
# cmd.Cmd is an old-style class
cmd.Cmd.__init__(self)
self.prompt = '[%s] %s (debug)> ' % (host, task)
self.intro = None
self.scope = {}
self.scope['task'] = task
self.scope['task_vars'] = task_vars
self.scope['host'] = host
self.scope['play_context'] = play_context
self.scope['result'] = result
self.next_action = next_action
def cmdloop(self):
try:
cmd.Cmd.cmdloop(self)
except KeyboardInterrupt:
pass
do_h = cmd.Cmd.do_help
def do_EOF(self, args):
"""Quit"""
return self.do_quit(args)
def do_quit(self, args):
"""Quit"""
display.display('User interrupted execution')
self.next_action.result = NextAction.EXIT
return True
do_q = do_quit
def do_continue(self, args):
"""Continue to next result"""
self.next_action.result = NextAction.CONTINUE
return True
do_c = do_continue
def do_redo(self, args):
"""Schedule task for re-execution. The re-execution may not be the next result"""
self.next_action.result = NextAction.REDO
return True
do_r = do_redo
def do_update_task(self, args):
"""Recreate the task from ``task._ds``, and template with updated ``task_vars``"""
templar = Templar(None, shared_loader_obj=None, variables=self.scope['task_vars'])
task = self.scope['task']
task = task.load_data(task._ds)
task.post_validate(templar)
self.scope['task'] = task
do_u = do_update_task
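# Example session (illustrative; the commands map to the do_* methods in this
# class, and the bare assignment is handled by default() -> execute()):
#   [host1] TASK: install package (debug)> p task.args
#   [host1] TASK: install package (debug)> task_vars['pkg'] = 'nginx'  # hypothetical var
#   [host1] TASK: install package (debug)> u
#   [host1] TASK: install package (debug)> r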
def evaluate(self, args):
try:
return eval(args, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def do_pprint(self, args):
"""Pretty Print"""
try:
result = self.evaluate(args)
display.display(pprint.pformat(result))
except Exception:
pass
do_p = do_pprint
def execute(self, args):
try:
code = compile(args + '\n', '<stdin>', 'single')
exec(code, globals(), self.scope)
except Exception:
t, v = sys.exc_info()[:2]
if isinstance(t, str):
exc_type_name = t
else:
exc_type_name = t.__name__
display.display('***%s:%s' % (exc_type_name, repr(v)))
raise
def default(self, line):
try:
self.execute(line)
except Exception:
pass
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,254 |
The --force-handlers option if used with 'strategy: free' does not run handlers on all hosts.
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
As of Ansible 2.7 the `--force-handlers` option no longer runs the handler(s) on all hosts when used with the free strategy. Only tested back to 2.5; this worked as expected on 2.5 & 2.6.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
plugins/strategy
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = /Users/bdudas/dev-net/handlers/ansible.cfg
configured module search path = [u'/Users/bdudas/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/bdudas/dev-net/handlers/ansible/lib/ansible
executable location = /Users/bdudas/dev-net/handlers/ansible/bin/ansible
python version = 2.7.16 (default, Oct 16 2019, 00:34:56) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_STRATEGY(/Users/bdudas/dev-net/handlers/ansible.cfg) = free
HOST_KEY_CHECKING(/Users/bdudas/dev-net/handlers/ansible.cfg) = False
RETRY_FILES_ENABLED(/Users/bdudas/dev-net/handlers/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: all
gather_facts: false
tasks:
- command: hostname
register: id
notify: recovery_x
- debug:
var: id
- name: failure
fail: msg="Testing Failure"
when: inventory_hostname == 'covfefe'
handlers:
- name: recovery_x
shell: hostname
...
```
ansible-playbook <playbook.yml> --force-handlers
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Handlers will run against all hosts when using the `--force-handlers` option.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Handler is only run on 1 of the 2 target hosts.
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/65254
|
https://github.com/ansible/ansible/pull/65576
|
89703b328490966db287700bb4bc3a422e96d98e
|
c8704573396e7480b3e1b33b2ddda2b6325d0d80
| 2019-11-25T15:02:05Z |
python
| 2019-12-19T19:10:51Z |
lib/ansible/plugins/strategy/free.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
strategy: free
short_description: Executes tasks without waiting for all hosts
description:
- Task execution is as fast as possible per batch as defined by C(serial) (default all).
Ansible will not wait for other hosts to finish the current task before queuing more tasks for other hosts.
All hosts are still attempted for the current task, but it prevents blocking new tasks for hosts that have already finished.
- With the free strategy, unlike the default linear strategy, a host that is slow or stuck on a specific task
won't hold up the rest of the hosts and tasks.
version_added: "2.0"
author: Ansible Core Team
'''
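# Illustrative usage (a sketch, not part of the plugin's documented examples):
# a play opts into this strategy with the `strategy` keyword:
#
#   - hosts: all
#     strategy: free
#     tasks:
#       - name: long-running task that should not block faster hosts
#         command: /usr/bin/do_something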
import time
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.playbook.included_file import IncludedFile
from ansible.plugins.loader import action_loader
from ansible.plugins.strategy import StrategyBase
from ansible.template import Templar
from ansible.module_utils._text import to_text
from ansible.utils.display import Display
display = Display()
class StrategyModule(StrategyBase):
# This strategy manages throttling on its own, so we don't want it done in queue_task
ALLOW_BASE_THROTTLING = False
def _filter_notified_hosts(self, notified_hosts):
'''
Filter notified hosts according to the strategy
'''
# We act only on hosts that are ready to flush handlers
return [host for host in notified_hosts
if host in self._flushed_hosts and self._flushed_hosts[host]]
def __init__(self, tqm):
super(StrategyModule, self).__init__(tqm)
self._host_pinned = False
def run(self, iterator, play_context):
'''
The "free" strategy is a bit more complex, in that it allows tasks to
be sent to hosts as quickly as they can be processed. This means that
some hosts may finish very quickly if their tasks result in little or no
work being done versus other systems.
The algorithm used here also tries to be more "fair" when iterating
through hosts by remembering the last host in the list to be given a task
and starting the search from there as opposed to the top of the hosts
list again, which would end up favoring hosts near the beginning of the
list.
'''
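# Sketch of the scan below (descriptive note, not original source): hosts are
# visited in a ring starting just after the host that was last given work,
# i.e. roughly
#   last_host = (last_host + 1) % len(hosts_left)
# and the inner loop exits once the scan wraps back around to starting_host.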
# the last host to be given a task
last_host = 0
result = self._tqm.RUN_OK
# start with all workers being counted as being free
workers_free = len(self._workers)
self._set_hosts_cache(iterator._play)
work_to_do = True
while work_to_do and not self._tqm._terminated:
hosts_left = self.get_hosts_left(iterator)
if len(hosts_left) == 0:
self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
result = False
break
work_to_do = False # assume we have no more work to do
starting_host = last_host # save current position so we know when we've looped back around and need to break
# try and find an unblocked host with a task to run
host_results = []
while True:
host = hosts_left[last_host]
display.debug("next free host: %s" % host)
host_name = host.get_name()
# peek at the next task for the host, to see if there's
# anything to do for this host
(state, task) = iterator.get_next_task_for_host(host, peek=True)
display.debug("free host state: %s" % state, host=host_name)
display.debug("free host task: %s" % task, host=host_name)
if host_name not in self._tqm._unreachable_hosts and task:
# set the flag so the outer loop knows we've still found
# some work which needs to be done
work_to_do = True
display.debug("this host has work to do", host=host_name)
# check to see if this host is blocked (still executing a previous task)
if (host_name not in self._blocked_hosts or not self._blocked_hosts[host_name]):
display.debug("getting variables", host=host_name)
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=task,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
display.debug("done getting variables", host=host_name)
try:
throttle = int(templar.template(task.throttle))
except Exception as e:
raise AnsibleError("Failed to convert the throttle value to an integer.", obj=task._ds, orig_exc=e)
if throttle > 0:
same_tasks = 0
for worker in self._workers:
if worker and worker.is_alive() and worker._task._uuid == task._uuid:
same_tasks += 1
display.debug("task: %s, same_tasks: %d" % (task.get_name(), same_tasks))
if same_tasks >= throttle:
break
# pop the task, mark the host blocked, and queue it
self._blocked_hosts[host_name] = True
(state, task) = iterator.get_next_task_for_host(host)
try:
action = action_loader.get(task.action, class_only=True)
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
action = None
try:
task.name = to_text(templar.template(task.name, fail_on_undefined=False), nonstring='empty')
display.debug("done templating", host=host_name)
except Exception:
# just ignore any errors during task name templating,
# we don't care if it just shows the raw name
display.debug("templating failed for some reason", host=host_name)
run_once = templar.template(task.run_once) or action and getattr(action, 'BYPASS_HOST_LOOP', False)
if run_once:
if action and getattr(action, 'BYPASS_HOST_LOOP', False):
raise AnsibleError("The '%s' module bypasses the host loop, which is currently not supported in the free strategy "
"and would instead execute for every host in the inventory list." % task.action, obj=task._ds)
else:
display.warning("Using run_once with the free strategy is not currently supported. This task will still be "
"executed for every host in the inventory list.")
# check to see if this task should be skipped, due to it being a member of a
# role which has already run (and whether that role allows duplicate execution)
if task._role and task._role.has_run(host):
# If there is no metadata, the default behavior is to not allow duplicates,
# if there is metadata, check to see if the allow_duplicates flag was set to true
if task._role._metadata is None or task._role._metadata and not task._role._metadata.allow_duplicates:
display.debug("'%s' skipped because role has already run" % task, host=host_name)
del self._blocked_hosts[host_name]
continue
if task.action == 'meta':
self._execute_meta(task, play_context, iterator, target_host=host)
self._blocked_hosts[host_name] = False
else:
# handle step if needed, skip meta actions as they are used internally
if not self._step or self._take_step(task, host_name):
if task.any_errors_fatal:
display.warning("Using any_errors_fatal with the free strategy is not supported, "
"as tasks are executed independently on each host")
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
self._queue_task(host, task, task_vars, play_context)
# each task is counted as a worker being busy
workers_free -= 1
del task_vars
else:
display.debug("%s is blocked, skipping for now" % host_name)
# all workers have tasks to do (and the current host isn't done with the play).
# loop back to starting host and break out
if self._host_pinned and workers_free == 0 and work_to_do:
last_host = starting_host
break
# move on to the next host and make sure we
# haven't gone past the end of our hosts list
last_host += 1
if last_host > len(hosts_left) - 1:
last_host = 0
# if we've looped around back to the start, break out
if last_host == starting_host:
break
results = self._process_pending_results(iterator)
host_results.extend(results)
# each result is counted as a worker being free again
workers_free += len(results)
self.update_active_connections(results)
included_files = IncludedFile.process_include_results(
host_results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
if len(included_files) > 0:
all_blocks = dict((host, []) for host in hosts_left)
for included_file in included_files:
display.debug("collecting new blocks for %s" % included_file)
try:
if included_file._is_role:
new_ir = self._copy_included_file(included_file)
new_blocks, handler_blocks = new_ir.get_block_list(
play=iterator._play,
variable_manager=self._variable_manager,
loader=self._loader,
)
else:
new_blocks = self._load_included_file(included_file, iterator=iterator)
except AnsibleError as e:
for host in included_file._hosts:
iterator.mark_host_failed(host)
display.warning(to_text(e))
continue
for new_block in new_blocks:
task_vars = self._variable_manager.get_vars(play=iterator._play, task=new_block._parent,
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all)
final_block = new_block.filter_tagged_tasks(task_vars)
for host in hosts_left:
if host in included_file._hosts:
all_blocks[host].append(final_block)
display.debug("done collecting new blocks for %s" % included_file)
display.debug("adding all collected blocks from %d included file(s) to iterator" % len(included_files))
for host in hosts_left:
iterator.add_tasks(host, all_blocks[host])
display.debug("done adding collected blocks to iterator")
# pause briefly so we don't spin lock
time.sleep(C.DEFAULT_INTERNAL_POLL_INTERVAL)
# collect all the final results
results = self._wait_on_pending_results(iterator)
# run the base class run() method, which executes the cleanup function
# and runs any outstanding handlers which have been triggered
return super(StrategyModule, self).run(iterator, play_context, result)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,254 |
The --force-handlers option if used with 'strategy: free' does not run handlers on all hosts.
|
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
As of Ansible 2.7 the `--force-handlers` option no longer runs the handler(s) on all hosts when used with the free strategy. Only tested back to 2.5; this worked as expected on 2.5 & 2.6.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
plugins/strategy
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.0.dev0
config file = /Users/bdudas/dev-net/handlers/ansible.cfg
configured module search path = [u'/Users/bdudas/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Users/bdudas/dev-net/handlers/ansible/lib/ansible
executable location = /Users/bdudas/dev-net/handlers/ansible/bin/ansible
python version = 2.7.16 (default, Oct 16 2019, 00:34:56) [GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_STRATEGY(/Users/bdudas/dev-net/handlers/ansible.cfg) = free
HOST_KEY_CHECKING(/Users/bdudas/dev-net/handlers/ansible.cfg) = False
RETRY_FILES_ENABLED(/Users/bdudas/dev-net/handlers/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: all
gather_facts: false
tasks:
- command: hostname
register: id
notify: recovery_x
- debug:
var: id
- name: failure
fail: msg="Testing Failure"
when: inventory_hostname == 'covfefe'
handlers:
- name: recovery_x
shell: hostname
...
```
ansible-playbook <playbook.yml> --force-handlers
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Handlers will run against all hosts when using the `--force-handlers` option.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Handler is only run on 1 of the 2 target hosts.
<!--- Paste verbatim command output between quotes -->
```paste below
```
|
https://github.com/ansible/ansible/issues/65254
|
https://github.com/ansible/ansible/pull/65576
|
89703b328490966db287700bb4bc3a422e96d98e
|
c8704573396e7480b3e1b33b2ddda2b6325d0d80
| 2019-11-25T15:02:05Z |
python
| 2019-12-19T19:10:51Z |
test/integration/targets/handlers/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_FORCE_HANDLERS
ANSIBLE_FORCE_HANDLERS=false
# simple handler test
ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
# simple from_handlers test
ansible-playbook from_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
ansible-playbook test_listening_handlers.yml -i inventory.handlers -v "$@"
[ "$(ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario2 -l A \
| grep -E -o 'RUNNING HANDLER \[test_handlers : .*?]')" = "RUNNING HANDLER [test_handlers : test handler]" ]
# Not forcing, should only run on successful host
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# Forcing from command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from command line, should only run later tasks on unfailed hosts
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_TASK_. | sort | uniq | xargs)" = "CALLED_TASK_B CALLED_TASK_D CALLED_TASK_E" ]
# Forcing from command line, should call handlers even if all hosts fail
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers -e fail_all=yes \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from ansible.cfg
[ "$(ANSIBLE_FORCE_HANDLERS=true ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing true in play
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_true_in_play \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing false in play, which overrides command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_false_in_play --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags playbook_include_handlers \
| grep -E -o 'RUNNING HANDLER \[.*?]')" = "RUNNING HANDLER [test handler]" ]
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags role_include_handlers \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include : .*?]')" = "RUNNING HANDLER [test_handlers_include : test handler]" ]
[ "$(ansible-playbook test_handlers_include_role.yml -i ../../inventory -v "$@" \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include_role : .*?]')" = "RUNNING HANDLER [test_handlers_include_role : test handler]" ]
# Notify handler listen
ansible-playbook test_handlers_listen.yml -i inventory.handlers -v "$@"
# Notifying nonexistent handlers results in an error
set +e
result="$(ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "ERROR! The requested handler 'notify_inexistent_handler' was not found in either the main handlers list nor in the listening handlers list" <<< "$result"
# Notifying nonexistent handlers produces no error when ANSIBLE_ERROR_ON_MISSING_HANDLER=false
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers -v "$@"
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_templating_in_handlers.yml -v "$@"
# https://github.com/ansible/ansible/issues/36649
output_dir=/tmp
set +e
result="$(ansible-playbook test_handlers_any_errors_fatal.yml -e output_dir=$output_dir -i inventory.handlers -v "$@" 2>&1)"
set -e
[ ! -f $output_dir/should_not_exist_B ] || (rm -f $output_dir/should_not_exist_B && exit 1)
# https://github.com/ansible/ansible/issues/47287
[ "$(ansible-playbook test_handlers_including_task.yml -i ../../inventory -v "$@" | grep -E -o 'failed=[0-9]+')" = "failed=0" ]
# https://github.com/ansible/ansible/issues/27237
set +e
result="$(ansible-playbook test_handlers_template_run_once.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "handler A" <<< "$result"
grep -q "handler B" <<< "$result"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,421 |
vmware_cluster_ha support for advanced configuration parameters
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The vmware_cluster_ha module should support advanced configuration parameters in a generic way.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_cluster_ha
##### ADDITIONAL INFORMATION
The feature would allow advanced configurations like isolation address handling.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
name: Enable HA with multiple custom isolation addresses for stretched vSAN
vmware_cluster_ha:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datacenter_name: datacenter
cluster_name: cluster
enable_ha: yes
configuration_parameters:
das.usedefaultisolationaddress: false
das.isolationaddress0: '{{ primary_isolation_address }}'
das.isolationaddress1: '{{ secondary_isolation_address }}'
delegate_to: localhost
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/61421
|
https://github.com/ansible/ansible/pull/65675
|
c8704573396e7480b3e1b33b2ddda2b6325d0d80
|
fec883dfffcd8685d5d57a07463e402c2cd36931
| 2019-08-28T08:26:05Z |
python
| 2019-12-19T19:19:45Z |
changelogs/fragments/58824-vmware_cluster_ha-advanced-settings.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,421 |
vmware_cluster_ha support for advanced configuration parameters
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The vmware_cluster_ha module should support advanced configuration parameters in a generic way.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_cluster_ha
##### ADDITIONAL INFORMATION
The feature would allow advanced configurations like isolation address handling.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
name: Enable HA with multiple custom isolation addresses for stretched vSAN
vmware_cluster_ha:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datacenter_name: datacenter
cluster_name: cluster
enable_ha: yes
configuration_parameters:
das.usedefaultisolationaddress: false
das.isolationaddress0: '{{ primary_isolation_address }}'
das.isolationaddress1: '{{ secondary_isolation_address }}'
delegate_to: localhost
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/61421
|
https://github.com/ansible/ansible/pull/65675
|
c8704573396e7480b3e1b33b2ddda2b6325d0d80
|
fec883dfffcd8685d5d57a07463e402c2cd36931
| 2019-08-28T08:26:05Z |
python
| 2019-12-19T19:19:45Z |
lib/ansible/module_utils/vmware.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Joseph Callen <jcallen () csc.com>
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, James E. King III (@jeking3) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import atexit
import ansible.module_utils.common._collections_compat as collections_compat
import json
import os
import re
import ssl
import time
import traceback
from random import randint
from distutils.version import StrictVersion
REQUESTS_IMP_ERR = None
try:
# requests is required for exception handling of the ConnectionError
import requests
HAS_REQUESTS = True
except ImportError:
REQUESTS_IMP_ERR = traceback.format_exc()
HAS_REQUESTS = False
PYVMOMI_IMP_ERR = None
try:
from pyVim import connect
from pyVmomi import vim, vmodl, VmomiSupport
HAS_PYVMOMI = True
HAS_PYVMOMIJSON = hasattr(VmomiSupport, 'VmomiJSONEncoder')
except ImportError:
PYVMOMI_IMP_ERR = traceback.format_exc()
HAS_PYVMOMI = False
HAS_PYVMOMIJSON = False
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.six import integer_types, iteritems, string_types, raise_from
from ansible.module_utils.basic import env_fallback, missing_required_lib
class TaskError(Exception):
def __init__(self, *args, **kwargs):
super(TaskError, self).__init__(*args, **kwargs)
def wait_for_task(task, max_backoff=64, timeout=3600):
"""Wait for given task using exponential back-off algorithm.
Args:
task: VMware task object
max_backoff: Maximum amount of sleep time in seconds
timeout: Timeout for the given task in seconds
Returns: Tuple with True and result for successful task
Raises: TaskError on failure
"""
failure_counter = 0
start_time = time.time()
while True:
if time.time() - start_time >= timeout:
raise TaskError("Timeout")
if task.info.state == vim.TaskInfo.State.success:
return True, task.info.result
if task.info.state == vim.TaskInfo.State.error:
error_msg = task.info.error
host_thumbprint = None
try:
error_msg = error_msg.msg
if hasattr(task.info.error, 'thumbprint'):
host_thumbprint = task.info.error.thumbprint
except AttributeError:
pass
finally:
raise_from(TaskError(error_msg, host_thumbprint), task.info.error)
if task.info.state in [vim.TaskInfo.State.running, vim.TaskInfo.State.queued]:
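# Descriptive note (added): exponential back-off with jitter -- the sleep
# doubles each poll (2 ** failure_counter) plus 1 ms to 1 s of random
# jitter, capped at max_backoff seconds.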
sleep_time = min(2 ** failure_counter + randint(1, 1000) / 1000, max_backoff)
time.sleep(sleep_time)
failure_counter += 1
def wait_for_vm_ip(content, vm, timeout=300):
facts = dict()
interval = 15
while timeout > 0:
_facts = gather_vm_facts(content, vm)
if _facts['ipv4'] or _facts['ipv6']:
facts = _facts
break
time.sleep(interval)
timeout -= interval
return facts
def find_obj(content, vimtype, name, first=True, folder=None):
container = content.viewManager.CreateContainerView(folder or content.rootFolder, recursive=True, type=vimtype)
# Get all objects matching type (and name if given)
obj_list = [obj for obj in container.view if not name or to_text(obj.name) == to_text(name)]
container.Destroy()
# Return first match or None
if first:
if obj_list:
return obj_list[0]
return None
# Return all matching objects or empty list
return obj_list
def find_dvspg_by_name(dv_switch, portgroup_name):
portgroup_name = quote_obj_name(portgroup_name)
portgroups = dv_switch.portgroup
for pg in portgroups:
if pg.name == portgroup_name:
return pg
return None
def find_object_by_name(content, name, obj_type, folder=None, recurse=True):
if not isinstance(obj_type, list):
obj_type = [obj_type]
objects = get_all_objs(content, obj_type, folder=folder, recurse=recurse)
for obj in objects:
if obj.name == name:
return obj
return None
def find_cluster_by_name(content, cluster_name, datacenter=None):
if datacenter and hasattr(datacenter, 'hostFolder'):
folder = datacenter.hostFolder
else:
folder = content.rootFolder
return find_object_by_name(content, cluster_name, [vim.ClusterComputeResource], folder=folder)
def find_datacenter_by_name(content, datacenter_name):
return find_object_by_name(content, datacenter_name, [vim.Datacenter])
def get_parent_datacenter(obj):
""" Walk the parent tree to find the objects datacenter """
if isinstance(obj, vim.Datacenter):
return obj
datacenter = None
while True:
if not hasattr(obj, 'parent'):
break
obj = obj.parent
if isinstance(obj, vim.Datacenter):
datacenter = obj
break
return datacenter
def find_datastore_by_name(content, datastore_name, datacenter_name=None):
return find_object_by_name(content, datastore_name, [vim.Datastore], datacenter_name)
def find_dvs_by_name(content, switch_name, folder=None):
return find_object_by_name(content, switch_name, [vim.DistributedVirtualSwitch], folder=folder)
def find_hostsystem_by_name(content, hostname):
return find_object_by_name(content, hostname, [vim.HostSystem])
def find_resource_pool_by_name(content, resource_pool_name):
return find_object_by_name(content, resource_pool_name, [vim.ResourcePool])
def find_network_by_name(content, network_name):
return find_object_by_name(content, quote_obj_name(network_name), [vim.Network])
def find_vm_by_id(content, vm_id, vm_id_type="vm_name", datacenter=None,
cluster=None, folder=None, match_first=False):
""" UUID is unique to a VM, every other id returns the first match. """
si = content.searchIndex
vm = None
if vm_id_type == 'dns_name':
vm = si.FindByDnsName(datacenter=datacenter, dnsName=vm_id, vmSearch=True)
elif vm_id_type == 'uuid':
# Search By BIOS UUID rather than instance UUID
vm = si.FindByUuid(datacenter=datacenter, instanceUuid=False, uuid=vm_id, vmSearch=True)
elif vm_id_type == 'instance_uuid':
vm = si.FindByUuid(datacenter=datacenter, instanceUuid=True, uuid=vm_id, vmSearch=True)
elif vm_id_type == 'ip':
vm = si.FindByIp(datacenter=datacenter, ip=vm_id, vmSearch=True)
elif vm_id_type == 'vm_name':
folder = None
if cluster:
folder = cluster
elif datacenter:
folder = datacenter.hostFolder
vm = find_vm_by_name(content, vm_id, folder)
elif vm_id_type == 'inventory_path':
searchpath = folder
# get all objects for this path
f_obj = si.FindByInventoryPath(searchpath)
if f_obj:
if isinstance(f_obj, vim.Datacenter):
f_obj = f_obj.vmFolder
for c_obj in f_obj.childEntity:
if not isinstance(c_obj, vim.VirtualMachine):
continue
if c_obj.name == vm_id:
vm = c_obj
if match_first:
break
return vm
def find_vm_by_name(content, vm_name, folder=None, recurse=True):
return find_object_by_name(content, vm_name, [vim.VirtualMachine], folder=folder, recurse=recurse)
def find_host_portgroup_by_name(host, portgroup_name):
for portgroup in host.config.network.portgroup:
if portgroup.spec.name == portgroup_name:
return portgroup
return None
def compile_folder_path_for_object(vobj):
""" make a /vm/foo/bar/baz like folder path for an object """
paths = []
if isinstance(vobj, vim.Folder):
paths.append(vobj.name)
thisobj = vobj
while hasattr(thisobj, 'parent'):
thisobj = thisobj.parent
try:
moid = thisobj._moId
except AttributeError:
moid = None
if moid in ['group-d1', 'ha-folder-root']:
break
if isinstance(thisobj, vim.Folder):
paths.append(thisobj.name)
paths.reverse()
return '/' + '/'.join(paths)
def _get_vm_prop(vm, attributes):
"""Safely get a property or return None"""
result = vm
for attribute in attributes:
try:
result = getattr(result, attribute)
except (AttributeError, IndexError):
return None
return result
def gather_vm_facts(content, vm):
""" Gather facts from vim.VirtualMachine object. """
facts = {
'module_hw': True,
'hw_name': vm.config.name,
'hw_power_status': vm.summary.runtime.powerState,
'hw_guest_full_name': vm.summary.guest.guestFullName,
'hw_guest_id': vm.summary.guest.guestId,
'hw_product_uuid': vm.config.uuid,
'hw_processor_count': vm.config.hardware.numCPU,
'hw_cores_per_socket': vm.config.hardware.numCoresPerSocket,
'hw_memtotal_mb': vm.config.hardware.memoryMB,
'hw_interfaces': [],
'hw_datastores': [],
'hw_files': [],
'hw_esxi_host': None,
'hw_guest_ha_state': None,
'hw_is_template': vm.config.template,
'hw_folder': None,
'hw_version': vm.config.version,
'instance_uuid': vm.config.instanceUuid,
'guest_tools_status': _get_vm_prop(vm, ('guest', 'toolsRunningStatus')),
'guest_tools_version': _get_vm_prop(vm, ('guest', 'toolsVersion')),
'guest_question': vm.summary.runtime.question,
'guest_consolidation_needed': vm.summary.runtime.consolidationNeeded,
'ipv4': None,
'ipv6': None,
'annotation': vm.config.annotation,
'customvalues': {},
'snapshots': [],
'current_snapshot': None,
'vnc': {},
'moid': vm._moId,
'vimref': "vim.VirtualMachine:%s" % vm._moId,
}
# facts that may or may not exist
if vm.summary.runtime.host:
try:
host = vm.summary.runtime.host
facts['hw_esxi_host'] = host.summary.config.name
facts['hw_cluster'] = host.parent.name if host.parent and isinstance(host.parent, vim.ClusterComputeResource) else None
except vim.fault.NoPermission:
# User does not have read permission for the host system,
# proceed without this value. This value does not contribute or hamper
# provisioning or power management operations.
pass
if vm.summary.runtime.dasVmProtection:
facts['hw_guest_ha_state'] = vm.summary.runtime.dasVmProtection.dasProtected
datastores = vm.datastore
for ds in datastores:
facts['hw_datastores'].append(ds.info.name)
try:
files = vm.config.files
layout = vm.layout
if files:
facts['hw_files'] = [files.vmPathName]
for item in layout.snapshot:
for snap in item.snapshotFile:
if 'vmsn' in snap:
facts['hw_files'].append(snap)
for item in layout.configFile:
facts['hw_files'].append(os.path.join(os.path.dirname(files.vmPathName), item))
for item in vm.layout.logFile:
facts['hw_files'].append(os.path.join(files.logDirectory, item))
for item in vm.layout.disk:
for disk in item.diskFile:
facts['hw_files'].append(disk)
except Exception:
pass
facts['hw_folder'] = PyVmomi.get_vm_path(content, vm)
cfm = content.customFieldsManager
# Resolve custom values
for value_obj in vm.summary.customValue:
kn = value_obj.key
if cfm is not None and cfm.field:
for f in cfm.field:
if f.key == value_obj.key:
kn = f.name
# Exit the loop immediately, we found it
break
facts['customvalues'][kn] = value_obj.value
net_dict = {}
vmnet = _get_vm_prop(vm, ('guest', 'net'))
if vmnet:
for device in vmnet:
if device.deviceConfigId > 0:
net_dict[device.macAddress] = list(device.ipAddress)
if vm.guest.ipAddress:
if ':' in vm.guest.ipAddress:
facts['ipv6'] = vm.guest.ipAddress
else:
facts['ipv4'] = vm.guest.ipAddress
ethernet_idx = 0
for entry in vm.config.hardware.device:
if not hasattr(entry, 'macAddress'):
continue
if entry.macAddress:
mac_addr = entry.macAddress
mac_addr_dash = mac_addr.replace(':', '-')
else:
mac_addr = mac_addr_dash = None
if (hasattr(entry, 'backing') and hasattr(entry.backing, 'port') and
hasattr(entry.backing.port, 'portKey') and hasattr(entry.backing.port, 'portgroupKey')):
port_group_key = entry.backing.port.portgroupKey
port_key = entry.backing.port.portKey
else:
port_group_key = None
port_key = None
factname = 'hw_eth' + str(ethernet_idx)
facts[factname] = {
'addresstype': entry.addressType,
'label': entry.deviceInfo.label,
'macaddress': mac_addr,
'ipaddresses': net_dict.get(entry.macAddress, None),
'macaddress_dash': mac_addr_dash,
'summary': entry.deviceInfo.summary,
'portgroup_portkey': port_key,
'portgroup_key': port_group_key,
}
facts['hw_interfaces'].append('eth' + str(ethernet_idx))
ethernet_idx += 1
snapshot_facts = list_snapshots(vm)
if 'snapshots' in snapshot_facts:
facts['snapshots'] = snapshot_facts['snapshots']
facts['current_snapshot'] = snapshot_facts['current_snapshot']
facts['vnc'] = get_vnc_extraconfig(vm)
return facts
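# Illustrative usage (sketch): callers pass a connected service content and a
# vim.VirtualMachine object, e.g.
#   facts = gather_vm_facts(self.content, vm)
#   facts['hw_name'], facts['ipv4'], facts['snapshots']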
def deserialize_snapshot_obj(obj):
return {'id': obj.id,
'name': obj.name,
'description': obj.description,
'creation_time': obj.createTime,
'state': obj.state}
def list_snapshots_recursively(snapshots):
snapshot_data = []
for snapshot in snapshots:
snapshot_data.append(deserialize_snapshot_obj(snapshot))
snapshot_data = snapshot_data + list_snapshots_recursively(snapshot.childSnapshotList)
return snapshot_data
def get_current_snap_obj(snapshots, snapob):
snap_obj = []
for snapshot in snapshots:
if snapshot.snapshot == snapob:
snap_obj.append(snapshot)
snap_obj = snap_obj + get_current_snap_obj(snapshot.childSnapshotList, snapob)
return snap_obj
def list_snapshots(vm):
result = {}
snapshot = _get_vm_prop(vm, ('snapshot',))
if not snapshot:
return result
if vm.snapshot is None:
return result
result['snapshots'] = list_snapshots_recursively(vm.snapshot.rootSnapshotList)
current_snapref = vm.snapshot.currentSnapshot
current_snap_obj = get_current_snap_obj(vm.snapshot.rootSnapshotList, current_snapref)
if current_snap_obj:
result['current_snapshot'] = deserialize_snapshot_obj(current_snap_obj[0])
else:
result['current_snapshot'] = dict()
return result
def get_vnc_extraconfig(vm):
result = {}
for opts in vm.config.extraConfig:
for optkeyname in ['enabled', 'ip', 'port', 'password']:
if opts.key.lower() == "remotedisplay.vnc." + optkeyname:
result[optkeyname] = opts.value
return result
def vmware_argument_spec():
return dict(
hostname=dict(type='str',
required=False,
fallback=(env_fallback, ['VMWARE_HOST']),
),
username=dict(type='str',
aliases=['user', 'admin'],
required=False,
fallback=(env_fallback, ['VMWARE_USER'])),
password=dict(type='str',
aliases=['pass', 'pwd'],
required=False,
no_log=True,
fallback=(env_fallback, ['VMWARE_PASSWORD'])),
port=dict(type='int',
default=443,
fallback=(env_fallback, ['VMWARE_PORT'])),
validate_certs=dict(type='bool',
required=False,
default=True,
fallback=(env_fallback, ['VMWARE_VALIDATE_CERTS'])
),
proxy_host=dict(type='str',
required=False,
default=None,
fallback=(env_fallback, ['VMWARE_PROXY_HOST'])),
proxy_port=dict(type='int',
required=False,
default=None,
fallback=(env_fallback, ['VMWARE_PROXY_PORT'])),
)
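# Typical usage in a module (a sketch; `datacenter` is a hypothetical extra
# parameter added for illustration):
#   from ansible.module_utils.basic import AnsibleModule
#   argument_spec = vmware_argument_spec()
#   argument_spec.update(datacenter=dict(type='str', required=True))
#   module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=True)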
def connect_to_api(module, disconnect_atexit=True, return_si=False):
hostname = module.params['hostname']
username = module.params['username']
password = module.params['password']
port = module.params.get('port', 443)
validate_certs = module.params['validate_certs']
if not hostname:
module.fail_json(msg="Hostname parameter is missing."
" Please specify this parameter in task or"
" export environment variable like 'export VMWARE_HOST=ESXI_HOSTNAME'")
if not username:
module.fail_json(msg="Username parameter is missing."
" Please specify this parameter in task or"
" export environment variable like 'export VMWARE_USER=ESXI_USERNAME'")
if not password:
module.fail_json(msg="Password parameter is missing."
" Please specify this parameter in task or"
" export environment variable like 'export VMWARE_PASSWORD=ESXI_PASSWORD'")
if validate_certs and not hasattr(ssl, 'SSLContext'):
module.fail_json(msg='pyVim does not support changing verification mode with python < 2.7.9. Either update '
'python or use validate_certs=false.')
elif validate_certs:
ssl_context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ssl_context.verify_mode = ssl.CERT_REQUIRED
ssl_context.check_hostname = True
ssl_context.load_default_certs()
elif hasattr(ssl, 'SSLContext'):
ssl_context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ssl_context.verify_mode = ssl.CERT_NONE
ssl_context.check_hostname = False
else: # Python < 2.7.9 or RHEL/Centos < 7.4
ssl_context = None
service_instance = None
proxy_host = module.params.get('proxy_host')
proxy_port = module.params.get('proxy_port')
connect_args = dict(
host=hostname,
port=port,
)
if ssl_context:
connect_args.update(sslContext=ssl_context)
msg_suffix = ''
try:
if proxy_host:
msg_suffix = " [proxy: %s:%d]" % (proxy_host, proxy_port)
connect_args.update(httpProxyHost=proxy_host, httpProxyPort=proxy_port)
smart_stub = connect.SmartStubAdapter(**connect_args)
session_stub = connect.VimSessionOrientedStub(smart_stub, connect.VimSessionOrientedStub.makeUserLoginMethod(username, password))
service_instance = vim.ServiceInstance('ServiceInstance', session_stub)
else:
connect_args.update(user=username, pwd=password)
service_instance = connect.SmartConnect(**connect_args)
except vim.fault.InvalidLogin as invalid_login:
msg = "Unable to log on to vCenter or ESXi API at %s:%s " % (hostname, port)
module.fail_json(msg="%s as %s: %s" % (msg, username, invalid_login.msg) + msg_suffix)
except vim.fault.NoPermission as no_permission:
module.fail_json(msg="User %s does not have required permission"
" to log on to vCenter or ESXi API at %s:%s : %s" % (username, hostname, port, no_permission.msg))
except (requests.ConnectionError, ssl.SSLError) as generic_req_exc:
module.fail_json(msg="Unable to connect to vCenter or ESXi API at %s on TCP/%s: %s" % (hostname, port, generic_req_exc))
except vmodl.fault.InvalidRequest as invalid_request:
# Request is malformed
msg = "Failed to get a response from server %s:%s " % (hostname, port)
module.fail_json(msg="%s as request is malformed: %s" % (msg, invalid_request.msg) + msg_suffix)
except Exception as generic_exc:
msg = "Unknown error while connecting to vCenter or ESXi API at %s:%s" % (hostname, port) + msg_suffix
module.fail_json(msg="%s : %s" % (msg, generic_exc))
if service_instance is None:
msg = "Unknown error while connecting to vCenter or ESXi API at %s:%s" % (hostname, port)
module.fail_json(msg=msg + msg_suffix)
# Disabling atexit should be used in special cases only.
# Such as IP change of the ESXi host which removes the connection anyway.
# Also removal significantly speeds up the return of the module
if disconnect_atexit:
atexit.register(connect.Disconnect, service_instance)
if return_si:
return service_instance, service_instance.RetrieveContent()
return service_instance.RetrieveContent()
def get_all_objs(content, vimtype, folder=None, recurse=True):
if not folder:
folder = content.rootFolder
obj = {}
container = content.viewManager.CreateContainerView(folder, vimtype, recurse)
for managed_object_ref in container.view:
obj.update({managed_object_ref: managed_object_ref.name})
return obj
def run_command_in_guest(content, vm, username, password, program_path, program_args, program_cwd, program_env):
result = {'failed': False}
tools_status = vm.guest.toolsStatus
if (tools_status == 'toolsNotInstalled' or
tools_status == 'toolsNotRunning'):
result['failed'] = True
result['msg'] = "VMwareTools is not installed or is not running in the guest"
return result
# https://github.com/vmware/pyvmomi/blob/master/docs/vim/vm/guest/NamePasswordAuthentication.rst
creds = vim.vm.guest.NamePasswordAuthentication(
username=username, password=password
)
try:
# https://github.com/vmware/pyvmomi/blob/master/docs/vim/vm/guest/ProcessManager.rst
pm = content.guestOperationsManager.processManager
# https://www.vmware.com/support/developer/converter-sdk/conv51_apireference/vim.vm.guest.ProcessManager.ProgramSpec.html
ps = vim.vm.guest.ProcessManager.ProgramSpec(
# programPath=program,
# arguments=args
programPath=program_path,
arguments=program_args,
workingDirectory=program_cwd,
)
res = pm.StartProgramInGuest(vm, creds, ps)
result['pid'] = res
pdata = pm.ListProcessesInGuest(vm, creds, [res])
# wait for pid to finish
while not pdata[0].endTime:
time.sleep(1)
pdata = pm.ListProcessesInGuest(vm, creds, [res])
result['owner'] = pdata[0].owner
result['startTime'] = pdata[0].startTime.isoformat()
result['endTime'] = pdata[0].endTime.isoformat()
result['exitCode'] = pdata[0].exitCode
if result['exitCode'] != 0:
result['failed'] = True
result['msg'] = "program exited non-zero"
else:
result['msg'] = "program completed successfully"
except Exception as e:
result['msg'] = str(e)
result['failed'] = True
return result
def serialize_spec(clonespec):
"""Serialize a clonespec or a relocation spec"""
data = {}
attrs = dir(clonespec)
attrs = [x for x in attrs if not x.startswith('_')]
for x in attrs:
xo = getattr(clonespec, x)
if callable(xo):
continue
xt = type(xo)
if xo is None:
data[x] = None
elif isinstance(xo, vim.vm.ConfigSpec):
data[x] = serialize_spec(xo)
elif isinstance(xo, vim.vm.RelocateSpec):
data[x] = serialize_spec(xo)
elif isinstance(xo, vim.vm.device.VirtualDisk):
data[x] = serialize_spec(xo)
elif isinstance(xo, vim.vm.device.VirtualDeviceSpec.FileOperation):
data[x] = to_text(xo)
elif isinstance(xo, vim.Description):
data[x] = {
'dynamicProperty': serialize_spec(xo.dynamicProperty),
'dynamicType': serialize_spec(xo.dynamicType),
'label': serialize_spec(xo.label),
'summary': serialize_spec(xo.summary),
}
elif hasattr(xo, 'name'):
data[x] = to_text(xo) + ':' + to_text(xo.name)
elif isinstance(xo, vim.vm.ProfileSpec):
pass
elif issubclass(xt, list):
data[x] = []
for xe in xo:
data[x].append(serialize_spec(xe))
elif issubclass(xt, string_types + integer_types + (float, bool)):
if issubclass(xt, integer_types):
data[x] = int(xo)
else:
data[x] = to_text(xo)
elif issubclass(xt, bool):
data[x] = xo
elif issubclass(xt, dict):
data[to_text(x)] = {}
for k, v in xo.items():
k = to_text(k)
data[x][k] = serialize_spec(v)
else:
data[x] = str(xt)
return data
def find_host_by_cluster_datacenter(module, content, datacenter_name, cluster_name, host_name):
dc = find_datacenter_by_name(content, datacenter_name)
if dc is None:
module.fail_json(msg="Unable to find datacenter with name %s" % datacenter_name)
cluster = find_cluster_by_name(content, cluster_name, datacenter=dc)
if cluster is None:
module.fail_json(msg="Unable to find cluster with name %s" % cluster_name)
for host in cluster.host:
if host.name == host_name:
return host, cluster
return None, cluster
def set_vm_power_state(content, vm, state, force, timeout=0):
"""
Set the power status for a VM determined by the current and
requested states. When force is true, the transition is attempted even from intermediate power states (suspended, powering on, and so on).
"""
facts = gather_vm_facts(content, vm)
expected_state = state.replace('_', '').replace('-', '').lower()
current_state = facts['hw_power_status'].lower()
result = dict(
changed=False,
failed=False,
)
# Need Force
if not force and current_state not in ['poweredon', 'poweredoff']:
result['failed'] = True
result['msg'] = "Virtual Machine is in %s power state. Force is required!" % current_state
result['instance'] = gather_vm_facts(content, vm)
return result
# State is not already true
if current_state != expected_state:
task = None
try:
if expected_state == 'poweredoff':
task = vm.PowerOff()
elif expected_state == 'poweredon':
task = vm.PowerOn()
elif expected_state == 'restarted':
if current_state in ('poweredon', 'poweringon', 'resetting', 'poweredoff'):
task = vm.Reset()
else:
result['failed'] = True
result['msg'] = "Cannot restart virtual machine in the current state %s" % current_state
elif expected_state == 'suspended':
if current_state in ('poweredon', 'poweringon'):
task = vm.Suspend()
else:
result['failed'] = True
result['msg'] = 'Cannot suspend virtual machine in the current state %s' % current_state
elif expected_state in ['shutdownguest', 'rebootguest']:
if current_state == 'poweredon':
if vm.guest.toolsRunningStatus == 'guestToolsRunning':
if expected_state == 'shutdownguest':
task = vm.ShutdownGuest()
if timeout > 0:
result.update(wait_for_poweroff(vm, timeout))
else:
task = vm.RebootGuest()
# Set result['changed'] immediately because
# shutdown and reboot return None.
result['changed'] = True
else:
result['failed'] = True
result['msg'] = "VMware tools should be installed for guest shutdown/reboot"
else:
result['failed'] = True
result['msg'] = "Virtual machine %s must be in poweredon state for guest shutdown/reboot" % vm.name
else:
result['failed'] = True
result['msg'] = "Unsupported expected state provided: %s" % expected_state
except Exception as e:
result['failed'] = True
result['msg'] = to_text(e)
if task:
wait_for_task(task)
if task.info.state == 'error':
result['failed'] = True
result['msg'] = task.info.error.msg
else:
result['changed'] = True
# need to get new metadata if changed
result['instance'] = gather_vm_facts(content, vm)
return result
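# Illustrative usage (assumes an established pyVmomi connection and a resolved
# `vm` object; state strings are normalized, so 'powered-off', 'powered_off'
# and 'poweredoff' are all equivalent):
#
#   result = set_vm_power_state(content, vm, state='powered-off', force=False)
#   if result['failed']:
#       print(result['msg'])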
def wait_for_poweroff(vm, timeout=300):
result = dict()
interval = 15
while timeout > 0:
if vm.runtime.powerState.lower() == 'poweredoff':
break
time.sleep(interval)
timeout -= interval
else:
result['failed'] = True
result['msg'] = 'Timeout while waiting for VM power off.'
return result
def is_integer(value, type_of='int'):
try:
VmomiSupport.vmodlTypes[type_of](value)
return True
except (TypeError, ValueError):
return False
def is_boolean(value):
if str(value).lower() in ['true', 'on', 'yes', 'false', 'off', 'no']:
return True
return False
def is_truthy(value):
if str(value).lower() in ['true', 'on', 'yes']:
return True
return False
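# Quick reference for the three validators above (illustrative):
#
#   is_integer('42')        # True  - coerces via the vmodl 'int' type
#   is_integer('4.2')       # False
#   is_boolean('On')        # True  - case-insensitive match
#   is_truthy('no')         # False - only 'true', 'on' and 'yes' are truthy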
def quote_obj_name(object_name=None):
"""
Replace special characters in object name
with urllib quote equivalent
"""
if not object_name:
return None
from collections import OrderedDict
SPECIAL_CHARS = OrderedDict({
'%': '%25',
'/': '%2f',
'\\': '%5c'
})
for key in SPECIAL_CHARS.keys():
if key in object_name:
object_name = object_name.replace(key, SPECIAL_CHARS[key])
return object_name
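# Example (illustrative): '%' is replaced first so that pre-existing percent
# signs are escaped before the slash/backslash substitutions introduce new ones:
#
#   quote_obj_name('DC0/vm%1')   # -> 'DC0%2fvm%251'
#   quote_obj_name(None)         # -> None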
class PyVmomi(object):
def __init__(self, module):
"""
Constructor
"""
if not HAS_REQUESTS:
module.fail_json(msg=missing_required_lib('requests'),
exception=REQUESTS_IMP_ERR)
if not HAS_PYVMOMI:
module.fail_json(msg=missing_required_lib('PyVmomi'),
exception=PYVMOMI_IMP_ERR)
self.module = module
self.params = module.params
self.current_vm_obj = None
self.si, self.content = connect_to_api(self.module, return_si=True)
self.custom_field_mgr = []
if self.content.customFieldsManager: # not an ESXi
self.custom_field_mgr = self.content.customFieldsManager.field
def is_vcenter(self):
"""
Check if given hostname is vCenter or ESXi host
Returns: True if given connection is with vCenter server
False if given connection is with ESXi server
"""
api_type = None
try:
api_type = self.content.about.apiType
except (vmodl.RuntimeFault, vim.fault.VimFault) as exc:
self.module.fail_json(msg="Failed to get status of vCenter server : %s" % exc.msg)
if api_type == 'VirtualCenter':
return True
elif api_type == 'HostAgent':
return False
def get_managed_objects_properties(self, vim_type, properties=None):
"""
Look up a Managed Object Reference in vCenter / ESXi Environment
:param vim_type: Type of vim object e.g, for datacenter - vim.Datacenter
:param properties: List of properties related to vim object e.g. Name
:return: local content object
"""
# Get Root Folder
root_folder = self.content.rootFolder
if properties is None:
properties = ['name']
# Create Container View with default root folder
mor = self.content.viewManager.CreateContainerView(root_folder, [vim_type], True)
# Create Traversal spec
traversal_spec = vmodl.query.PropertyCollector.TraversalSpec(
name="traversal_spec",
path='view',
skip=False,
type=vim.view.ContainerView
)
# Create Property Spec
property_spec = vmodl.query.PropertyCollector.PropertySpec(
            type=vim_type,  # Type of object to be retrieved
all=False,
pathSet=properties
)
# Create Object Spec
object_spec = vmodl.query.PropertyCollector.ObjectSpec(
obj=mor,
skip=True,
selectSet=[traversal_spec]
)
# Create Filter Spec
filter_spec = vmodl.query.PropertyCollector.FilterSpec(
objectSet=[object_spec],
propSet=[property_spec],
reportMissingObjectsInResults=False
)
return self.content.propertyCollector.RetrieveContents([filter_spec])
# Virtual Machine related functions
def get_vm(self):
"""
Find unique virtual machine either by UUID, MoID or Name.
Returns: virtual machine object if found, else None.
"""
vm_obj = None
user_desired_path = None
use_instance_uuid = self.params.get('use_instance_uuid') or False
if 'uuid' in self.params and self.params['uuid']:
if not use_instance_uuid:
vm_obj = find_vm_by_id(self.content, vm_id=self.params['uuid'], vm_id_type="uuid")
elif use_instance_uuid:
vm_obj = find_vm_by_id(self.content,
vm_id=self.params['uuid'],
vm_id_type="instance_uuid")
elif 'name' in self.params and self.params['name']:
objects = self.get_managed_objects_properties(vim_type=vim.VirtualMachine, properties=['name'])
vms = []
for temp_vm_object in objects:
if (
len(temp_vm_object.propSet) == 1 and
temp_vm_object.propSet[0].val == self.params['name']):
vms.append(temp_vm_object.obj)
            # get_managed_objects_properties may return multiple virtual machines;
            # the following code tries to find the user-desired one based on the specified folder.
if len(vms) > 1:
# We have found multiple virtual machines, decide depending upon folder value
if self.params['folder'] is None:
self.module.fail_json(msg="Multiple virtual machines with same name [%s] found, "
"Folder value is a required parameter to find uniqueness "
"of the virtual machine" % self.params['name'],
details="Please see documentation of the vmware_guest module "
"for folder parameter.")
# Get folder path where virtual machine is located
# User provided folder where user thinks virtual machine is present
user_folder = self.params['folder']
# User defined datacenter
user_defined_dc = self.params['datacenter']
# User defined datacenter's object
datacenter_obj = find_datacenter_by_name(self.content, self.params['datacenter'])
# Get Path for Datacenter
dcpath = compile_folder_path_for_object(vobj=datacenter_obj)
# Nested folder does not return trailing /
if not dcpath.endswith('/'):
dcpath += '/'
if user_folder in [None, '', '/']:
# User provided blank value or
# User provided only root value, we fail
self.module.fail_json(msg="vmware_guest found multiple virtual machines with same "
"name [%s], please specify folder path other than blank "
"or '/'" % self.params['name'])
elif user_folder.startswith('/vm/'):
# User provided nested folder under VMware default vm folder i.e. folder = /vm/india/finance
user_desired_path = "%s%s%s" % (dcpath, user_defined_dc, user_folder)
else:
# User defined datacenter is not nested i.e. dcpath = '/' , or
# User defined datacenter is nested i.e. dcpath = '/F0/DC0' or
# User provided folder starts with / and datacenter i.e. folder = /ha-datacenter/ or
# User defined folder starts with datacenter without '/' i.e.
# folder = DC0/vm/india/finance or
# folder = DC0/vm
user_desired_path = user_folder
for vm in vms:
# Check if user has provided same path as virtual machine
actual_vm_folder_path = self.get_vm_path(content=self.content, vm_name=vm)
if not actual_vm_folder_path.startswith("%s%s" % (dcpath, user_defined_dc)):
continue
if user_desired_path in actual_vm_folder_path:
vm_obj = vm
break
elif vms:
# Unique virtual machine found.
vm_obj = vms[0]
elif 'moid' in self.params and self.params['moid']:
vm_obj = VmomiSupport.templateOf('VirtualMachine')(self.params['moid'], self.si._stub)
if vm_obj:
self.current_vm_obj = vm_obj
return vm_obj
def gather_facts(self, vm):
"""
Gather facts of virtual machine.
Args:
vm: Name of virtual machine.
Returns: Facts dictionary of the given virtual machine.
"""
return gather_vm_facts(self.content, vm)
@staticmethod
def get_vm_path(content, vm_name):
"""
Find the path of virtual machine.
Args:
content: VMware content object
vm_name: virtual machine managed object
Returns: Folder of virtual machine if exists, else None
"""
folder_name = None
folder = vm_name.parent
if folder:
folder_name = folder.name
fp = folder.parent
# climb back up the tree to find our path, stop before the root folder
while fp is not None and fp.name is not None and fp != content.rootFolder:
folder_name = fp.name + '/' + folder_name
try:
fp = fp.parent
except Exception:
break
folder_name = '/' + folder_name
return folder_name
def get_vm_or_template(self, template_name=None):
"""
Find the virtual machine or virtual machine template using name
used for cloning purpose.
Args:
template_name: Name of virtual machine or virtual machine template
Returns: virtual machine or virtual machine template object
"""
template_obj = None
if not template_name:
return template_obj
if "/" in template_name:
vm_obj_path = os.path.dirname(template_name)
vm_obj_name = os.path.basename(template_name)
template_obj = find_vm_by_id(self.content, vm_obj_name, vm_id_type="inventory_path", folder=vm_obj_path)
if template_obj:
return template_obj
else:
template_obj = find_vm_by_id(self.content, vm_id=template_name, vm_id_type="uuid")
if template_obj:
return template_obj
objects = self.get_managed_objects_properties(vim_type=vim.VirtualMachine, properties=['name'])
templates = []
for temp_vm_object in objects:
if len(temp_vm_object.propSet) != 1:
continue
for temp_vm_object_property in temp_vm_object.propSet:
if temp_vm_object_property.val == template_name:
templates.append(temp_vm_object.obj)
break
if len(templates) > 1:
# We have found multiple virtual machine templates
self.module.fail_json(msg="Multiple virtual machines or templates with same name [%s] found." % template_name)
elif templates:
template_obj = templates[0]
return template_obj
# Cluster related functions
def find_cluster_by_name(self, cluster_name, datacenter_name=None):
"""
Find Cluster by name in given datacenter
Args:
cluster_name: Name of cluster name to find
datacenter_name: (optional) Name of datacenter
        Returns: Cluster managed object if found, else None
"""
return find_cluster_by_name(self.content, cluster_name, datacenter=datacenter_name)
def get_all_hosts_by_cluster(self, cluster_name):
"""
Get all hosts from cluster by cluster name
Args:
cluster_name: Name of cluster
Returns: List of hosts
"""
cluster_obj = self.find_cluster_by_name(cluster_name=cluster_name)
if cluster_obj:
return [host for host in cluster_obj.host]
else:
return []
# Hosts related functions
def find_hostsystem_by_name(self, host_name):
"""
Find Host by name
Args:
host_name: Name of ESXi host
        Returns: Host system managed object if found, else None
"""
return find_hostsystem_by_name(self.content, hostname=host_name)
def get_all_host_objs(self, cluster_name=None, esxi_host_name=None):
"""
Get all host system managed object
Args:
cluster_name: Name of Cluster
esxi_host_name: Name of ESXi server
Returns: A list of all host system managed objects, else empty list
"""
host_obj_list = []
if not self.is_vcenter():
hosts = get_all_objs(self.content, [vim.HostSystem]).keys()
if hosts:
host_obj_list.append(list(hosts)[0])
else:
if cluster_name:
cluster_obj = self.find_cluster_by_name(cluster_name=cluster_name)
if cluster_obj:
host_obj_list = [host for host in cluster_obj.host]
else:
self.module.fail_json(changed=False, msg="Cluster '%s' not found" % cluster_name)
elif esxi_host_name:
if isinstance(esxi_host_name, str):
esxi_host_name = [esxi_host_name]
for host in esxi_host_name:
esxi_host_obj = self.find_hostsystem_by_name(host_name=host)
if esxi_host_obj:
host_obj_list.append(esxi_host_obj)
else:
self.module.fail_json(changed=False, msg="ESXi '%s' not found" % host)
return host_obj_list
def host_version_at_least(self, version=None, vm_obj=None, host_name=None):
"""
Check that the ESXi Host is at least a specific version number
Args:
vm_obj: virtual machine object, required one of vm_obj, host_name
host_name (string): ESXi host name
version (tuple): a version tuple, for example (6, 7, 0)
Returns: bool
"""
if vm_obj:
host_system = vm_obj.summary.runtime.host
elif host_name:
host_system = self.find_hostsystem_by_name(host_name=host_name)
else:
            self.module.fail_json(msg='One of vm_obj or host_name must be set.')
if host_system and version:
host_version = host_system.summary.config.product.version
return StrictVersion(host_version) >= StrictVersion('.'.join(map(str, version)))
else:
            self.module.fail_json(msg='Unable to get the ESXi host from vm: %s, or from host name: %s, '
                                      'or the passed ESXi version: %s is None.' % (vm_obj, host_name, version))
# Network related functions
@staticmethod
def find_host_portgroup_by_name(host, portgroup_name):
"""
Find Portgroup on given host
Args:
host: Host config object
portgroup_name: Name of portgroup
        Returns: Portgroup object if found, else False
"""
for portgroup in host.config.network.portgroup:
if portgroup.spec.name == portgroup_name:
return portgroup
return False
def get_all_port_groups_by_host(self, host_system):
"""
Get all Port Group by host
Args:
host_system: Name of Host System
Returns: List of Port Group Spec
"""
pgs_list = []
for pg in host_system.config.network.portgroup:
pgs_list.append(pg)
return pgs_list
def find_network_by_name(self, network_name=None):
"""
Get network specified by name
Args:
network_name: Name of network
Returns: List of network managed objects
"""
networks = []
if not network_name:
return networks
objects = self.get_managed_objects_properties(vim_type=vim.Network, properties=['name'])
for temp_vm_object in objects:
if len(temp_vm_object.propSet) != 1:
continue
for temp_vm_object_property in temp_vm_object.propSet:
if temp_vm_object_property.val == network_name:
networks.append(temp_vm_object.obj)
break
return networks
def network_exists_by_name(self, network_name=None):
"""
Check if network with a specified name exists or not
Args:
network_name: Name of network
Returns: True if network exists else False
"""
ret = False
if not network_name:
return ret
ret = True if self.find_network_by_name(network_name=network_name) else False
return ret
# Datacenter
def find_datacenter_by_name(self, datacenter_name):
"""
Get datacenter managed object by name
Args:
datacenter_name: Name of datacenter
Returns: datacenter managed object if found else None
"""
return find_datacenter_by_name(self.content, datacenter_name=datacenter_name)
def is_datastore_valid(self, datastore_obj=None):
"""
Check if datastore selected is valid or not
Args:
datastore_obj: datastore managed object
Returns: True if datastore is valid, False if not
"""
if not datastore_obj \
or datastore_obj.summary.maintenanceMode != 'normal' \
or not datastore_obj.summary.accessible:
return False
return True
def find_datastore_by_name(self, datastore_name, datacenter_name=None):
"""
Get datastore managed object by name
Args:
datastore_name: Name of datastore
datacenter_name: Name of datacenter where the datastore resides. This is needed because Datastores can be
shared across Datacenters, so we need to specify the datacenter to assure we get the correct Managed Object Reference
Returns: datastore managed object if found else None
"""
return find_datastore_by_name(self.content, datastore_name=datastore_name, datacenter_name=datacenter_name)
# Datastore cluster
def find_datastore_cluster_by_name(self, datastore_cluster_name):
"""
Get datastore cluster managed object by name
Args:
datastore_cluster_name: Name of datastore cluster
Returns: Datastore cluster managed object if found else None
"""
data_store_clusters = get_all_objs(self.content, [vim.StoragePod])
for dsc in data_store_clusters:
if dsc.name == datastore_cluster_name:
return dsc
return None
# Resource pool
def find_resource_pool_by_name(self, resource_pool_name, folder=None):
"""
Get resource pool managed object by name
Args:
resource_pool_name: Name of resource pool
Returns: Resource pool managed object if found else None
"""
if not folder:
folder = self.content.rootFolder
resource_pools = get_all_objs(self.content, [vim.ResourcePool], folder=folder)
for rp in resource_pools:
if rp.name == resource_pool_name:
return rp
return None
def find_resource_pool_by_cluster(self, resource_pool_name='Resources', cluster=None):
"""
Get resource pool managed object by cluster object
Args:
resource_pool_name: Name of resource pool
cluster: Managed object of cluster
Returns: Resource pool managed object if found else None
"""
desired_rp = None
if not cluster:
return desired_rp
if resource_pool_name != 'Resources':
# Resource pool name is different than default 'Resources'
resource_pools = cluster.resourcePool.resourcePool
if resource_pools:
for rp in resource_pools:
if rp.name == resource_pool_name:
desired_rp = rp
break
else:
desired_rp = cluster.resourcePool
return desired_rp
# VMDK stuff
def vmdk_disk_path_split(self, vmdk_path):
"""
Takes a string in the format
[datastore_name] path/to/vm_name.vmdk
Returns a tuple with multiple strings:
1. datastore_name: The name of the datastore (without brackets)
2. vmdk_fullpath: The "path/to/vm_name.vmdk" portion
3. vmdk_filename: The "vm_name.vmdk" portion of the string (os.path.basename equivalent)
4. vmdk_folder: The "path/to/" portion of the string (os.path.dirname equivalent)
"""
try:
datastore_name = re.match(r'^\[(.*?)\]', vmdk_path, re.DOTALL).groups()[0]
vmdk_fullpath = re.match(r'\[.*?\] (.*)$', vmdk_path).groups()[0]
vmdk_filename = os.path.basename(vmdk_fullpath)
vmdk_folder = os.path.dirname(vmdk_fullpath)
return datastore_name, vmdk_fullpath, vmdk_filename, vmdk_folder
except (IndexError, AttributeError) as e:
self.module.fail_json(msg="Bad path '%s' for filename disk vmdk image: %s" % (vmdk_path, to_native(e)))
def find_vmdk_file(self, datastore_obj, vmdk_fullpath, vmdk_filename, vmdk_folder):
"""
Return vSphere file object or fail_json
Args:
datastore_obj: Managed object of datastore
vmdk_fullpath: Path of VMDK file e.g., path/to/vm/vmdk_filename.vmdk
vmdk_filename: Name of vmdk e.g., VM0001_1.vmdk
vmdk_folder: Base dir of VMDK e.g, path/to/vm
"""
browser = datastore_obj.browser
datastore_name = datastore_obj.name
datastore_name_sq = "[" + datastore_name + "]"
if browser is None:
self.module.fail_json(msg="Unable to access browser for datastore %s" % datastore_name)
detail_query = vim.host.DatastoreBrowser.FileInfo.Details(
fileOwner=True,
fileSize=True,
fileType=True,
modification=True
)
search_spec = vim.host.DatastoreBrowser.SearchSpec(
details=detail_query,
matchPattern=[vmdk_filename],
searchCaseInsensitive=True,
)
search_res = browser.SearchSubFolders(
datastorePath=datastore_name_sq,
searchSpec=search_spec
)
changed = False
vmdk_path = datastore_name_sq + " " + vmdk_fullpath
try:
changed, result = wait_for_task(search_res)
except TaskError as task_e:
self.module.fail_json(msg=to_native(task_e))
if not changed:
self.module.fail_json(msg="No valid disk vmdk image found for path %s" % vmdk_path)
target_folder_paths = [
datastore_name_sq + " " + vmdk_folder + '/',
datastore_name_sq + " " + vmdk_folder,
]
for file_result in search_res.info.result:
for f in getattr(file_result, 'file'):
if f.path == vmdk_filename and file_result.folderPath in target_folder_paths:
return f
self.module.fail_json(msg="No vmdk file found for path specified [%s]" % vmdk_path)
#
# Conversion to JSON
#
def _deepmerge(self, d, u):
"""
Deep merges u into d.
Credit:
https://bit.ly/2EDOs1B (stackoverflow question 3232943)
License:
cc-by-sa 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
Changes:
using collections_compat for compatibility
Args:
- d (dict): dict to merge into
- u (dict): dict to merge into d
Returns:
dict, with u merged into d
"""
for k, v in iteritems(u):
if isinstance(v, collections_compat.Mapping):
d[k] = self._deepmerge(d.get(k, {}), v)
else:
d[k] = v
return d
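    # Example (illustrative): nested mappings are merged recursively, while
    # scalars in `u` overwrite those in `d`:
    #
    #   self._deepmerge({'a': {'b': 1}}, {'a': {'c': 2}, 'x': 3})
    #   # -> {'a': {'b': 1, 'c': 2}, 'x': 3}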
def _extract(self, data, remainder):
"""
This is used to break down dotted properties for extraction.
Args:
- data (dict): result of _jsonify on a property
- remainder: the remainder of the dotted property to select
Return:
dict
"""
result = dict()
if '.' not in remainder:
result[remainder] = data[remainder]
return result
key, remainder = remainder.split('.', 1)
result[key] = self._extract(data[key], remainder)
return result
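    # Example (illustrative): selecting 'hardware.memoryMB' out of a jsonified
    # 'config' property keeps only the requested leaf:
    #
    #   self._extract({'hardware': {'memoryMB': 1024, 'numCPU': 2}}, 'hardware.memoryMB')
    #   # -> {'hardware': {'memoryMB': 1024}}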
def _jsonify(self, obj):
"""
Convert an object from pyVmomi into JSON.
Args:
- obj (object): vim object
Return:
dict
"""
return json.loads(json.dumps(obj, cls=VmomiSupport.VmomiJSONEncoder,
sort_keys=True, strip_dynamic=True))
def to_json(self, obj, properties=None):
"""
Convert a vSphere (pyVmomi) Object into JSON. This is a deep
transformation. The list of properties is optional - if not
provided then all properties are deeply converted. The resulting
JSON is sorted to improve human readability.
Requires upstream support from pyVmomi > 6.7.1
(https://github.com/vmware/pyvmomi/pull/732)
Args:
- obj (object): vim object
- properties (list, optional): list of properties following
the property collector specification, for example:
["config.hardware.memoryMB", "name", "overallStatus"]
default is a complete object dump, which can be large
Return:
dict
"""
if not HAS_PYVMOMIJSON:
self.module.fail_json(msg='The installed version of pyvmomi lacks JSON output support; need pyvmomi>6.7.1')
result = dict()
if properties:
for prop in properties:
try:
if '.' in prop:
key, remainder = prop.split('.', 1)
tmp = dict()
tmp[key] = self._extract(self._jsonify(getattr(obj, key)), remainder)
self._deepmerge(result, tmp)
else:
result[prop] = self._jsonify(getattr(obj, prop))
# To match gather_vm_facts output
prop_name = prop
if prop.lower() == '_moid':
prop_name = 'moid'
elif prop.lower() == '_vimref':
prop_name = 'vimref'
result[prop_name] = result[prop]
except (AttributeError, KeyError):
self.module.fail_json(msg="Property '{0}' not found.".format(prop))
else:
result = self._jsonify(obj)
return result
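    # Illustrative usage (assumes pyVmomi > 6.7.1 and a resolved `vm` object;
    # the exact values depend on the inventory being queried):
    #
    #   self.to_json(vm, properties=['config.hardware.memoryMB', 'name'])
    #   # -> {'config': {'hardware': {'memoryMB': 1024}}, 'name': 'vm01'}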
def get_folder_path(self, cur):
full_path = '/' + cur.name
while hasattr(cur, 'parent') and cur.parent:
if cur.parent == self.content.rootFolder:
break
cur = cur.parent
full_path = '/' + cur.name + full_path
return full_path
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,421 |
vmware_cluster_ha support for advanced configuration parameters
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The vmware_cluster_ha module should support advanced configuration parameters in a generic way.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_cluster_ha
##### ADDITIONAL INFORMATION
The feature would allow advanced configurations like isolation address handling.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
name: Enable HA with multiple custom isolation addresses for stretched vSAN
vmware_cluster_ha:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datacenter_name: datacenter
cluster_name: cluster
enable_ha: yes
configuration_parameters:
das.usedefaultisolationaddress: false
das.isolationaddress0: '{{ primary_isolation_address }}'
das.isolationaddress1: '{{ secondary_isolation_address }}'
delegate_to: localhost
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/61421
|
https://github.com/ansible/ansible/pull/65675
|
c8704573396e7480b3e1b33b2ddda2b6325d0d80
|
fec883dfffcd8685d5d57a07463e402c2cd36931
| 2019-08-28T08:26:05Z |
python
| 2019-12-19T19:19:45Z |
lib/ansible/modules/cloud/vmware/vmware_cluster_ha.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Joseph Callen <jcallen () csc.com>
# Copyright: (c) 2018, Ansible Project
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: vmware_cluster_ha
short_description: Manage High Availability (HA) on VMware vSphere clusters
description:
- Manages HA configuration on VMware vSphere clusters.
- All values and VMware object names are case sensitive.
version_added: '2.9'
author:
- Joseph Callen (@jcpowermac)
- Abhijeet Kasurde (@Akasurde)
requirements:
- Tested on ESXi 5.5 and 6.5.
- PyVmomi installed.
options:
cluster_name:
description:
- The name of the cluster to be managed.
type: str
required: yes
datacenter:
description:
- The name of the datacenter.
type: str
required: yes
aliases: [ datacenter_name ]
enable_ha:
description:
- Whether to enable HA.
type: bool
default: 'no'
ha_host_monitoring:
description:
- Whether HA restarts virtual machines after a host fails.
- If set to C(enabled), HA restarts virtual machines after a host fails.
- If set to C(disabled), HA does not restart virtual machines after a host fails.
- If C(enable_ha) is set to C(no), then this value is ignored.
type: str
choices: [ 'enabled', 'disabled' ]
default: 'enabled'
ha_vm_monitoring:
description:
- State of virtual machine health monitoring service.
    - If set to C(vmAndAppMonitoring), HA responds to both virtual machine and application heartbeat failures.
    - If set to C(vmMonitoringDisabled), virtual machine health monitoring is disabled.
    - If set to C(vmMonitoringOnly), HA responds to virtual machine heartbeat failures.
- If C(enable_ha) is set to C(no), then this value is ignored.
type: str
choices: ['vmAndAppMonitoring', 'vmMonitoringOnly', 'vmMonitoringDisabled']
default: 'vmMonitoringDisabled'
host_isolation_response:
description:
    - Indicates whether or not VMs should be powered off if a host determines that it is isolated from the rest of the compute resource.
- If set to C(none), do not power off VMs in the event of a host network isolation.
- If set to C(powerOff), power off VMs in the event of a host network isolation.
- If set to C(shutdown), shut down VMs guest operating system in the event of a host network isolation.
type: str
choices: ['none', 'powerOff', 'shutdown']
default: 'none'
slot_based_admission_control:
description:
- Configure slot based admission control policy.
- C(slot_based_admission_control), C(reservation_based_admission_control) and C(failover_host_admission_control) are mutually exclusive.
suboptions:
failover_level:
description:
- Number of host failures that should be tolerated.
type: int
required: true
type: dict
reservation_based_admission_control:
description:
- Configure reservation based admission control policy.
- C(slot_based_admission_control), C(reservation_based_admission_control) and C(failover_host_admission_control) are mutually exclusive.
suboptions:
failover_level:
description:
- Number of host failures that should be tolerated.
type: int
required: true
auto_compute_percentages:
description:
        - By default, C(failover_level) is used to calculate C(cpu_failover_resources_percent) and C(memory_failover_resources_percent).
          If a user wants to override the percentage values, this field must be set to false.
type: bool
default: true
cpu_failover_resources_percent:
description:
- Percentage of CPU resources in the cluster to reserve for failover.
Ignored if C(auto_compute_percentages) is not set to false.
type: int
default: 50
memory_failover_resources_percent:
description:
- Percentage of memory resources in the cluster to reserve for failover.
Ignored if C(auto_compute_percentages) is not set to false.
type: int
default: 50
type: dict
failover_host_admission_control:
description:
- Configure dedicated failover hosts.
- C(slot_based_admission_control), C(reservation_based_admission_control) and C(failover_host_admission_control) are mutually exclusive.
suboptions:
failover_hosts:
description:
- List of dedicated failover hosts.
type: list
required: true
type: dict
ha_vm_failure_interval:
description:
- The number of seconds after which virtual machine is declared as failed
if no heartbeat has been received.
    - This setting is only valid if C(ha_vm_monitoring) is set to either C(vmAndAppMonitoring) or C(vmMonitoringOnly).
- Unit is seconds.
type: int
default: 30
ha_vm_min_up_time:
description:
- The number of seconds for the virtual machine's heartbeats to stabilize after
the virtual machine has been powered on.
- Valid only when I(ha_vm_monitoring) is set to either C(vmAndAppMonitoring) or C(vmMonitoringOnly).
- Unit is seconds.
type: int
default: 120
ha_vm_max_failures:
description:
- Maximum number of failures and automated resets allowed during the time
that C(ha_vm_max_failure_window) specifies.
- Valid only when I(ha_vm_monitoring) is set to either C(vmAndAppMonitoring) or C(vmMonitoringOnly).
type: int
default: 3
ha_vm_max_failure_window:
description:
- The number of seconds for the window during which up to C(ha_vm_max_failures) resets
can occur before automated responses stop.
- Valid only when I(ha_vm_monitoring) is set to either C(vmAndAppMonitoring) or C(vmMonitoringOnly).
- Unit is seconds.
- Default specifies no failure window.
type: int
default: -1
ha_restart_priority:
description:
- Priority HA gives to a virtual machine if sufficient capacity is not available
to power on all failed virtual machines.
- Valid only if I(ha_vm_monitoring) is set to either C(vmAndAppMonitoring) or C(vmMonitoringOnly).
- If set to C(disabled), then HA is disabled for this virtual machine.
- If set to C(high), then virtual machine with this priority have a higher chance of powering on after a failure,
when there is insufficient capacity on hosts to meet all virtual machine needs.
- If set to C(medium), then virtual machine with this priority have an intermediate chance of powering on after a failure,
when there is insufficient capacity on hosts to meet all virtual machine needs.
- If set to C(low), then virtual machine with this priority have a lower chance of powering on after a failure,
when there is insufficient capacity on hosts to meet all virtual machine needs.
type: str
default: 'medium'
choices: [ 'disabled', 'high', 'low', 'medium' ]
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r"""
- name: Enable HA without admission control
vmware_cluster_ha:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datacenter_name: datacenter
cluster_name: cluster
enable_ha: yes
delegate_to: localhost
- name: Enable HA and VM monitoring without admission control
vmware_cluster_ha:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter_name: DC0
cluster_name: "{{ cluster_name }}"
enable_ha: True
ha_vm_monitoring: vmMonitoringOnly
delegate_to: localhost
- name: Enable HA with admission control reserving 50% of resources for HA
vmware_cluster_ha:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datacenter_name: datacenter
cluster_name: cluster
enable_ha: yes
reservation_based_admission_control:
auto_compute_percentages: False
failover_level: 1
cpu_failover_resources_percent: 50
memory_failover_resources_percent: 50
delegate_to: localhost
"""
RETURN = r"""#
"""
try:
from pyVmomi import vim, vmodl
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import (PyVmomi, TaskError, find_datacenter_by_name,
vmware_argument_spec, wait_for_task)
from ansible.module_utils._text import to_native
class VMwareCluster(PyVmomi):
def __init__(self, module):
super(VMwareCluster, self).__init__(module)
self.cluster_name = module.params['cluster_name']
self.datacenter_name = module.params['datacenter']
self.enable_ha = module.params['enable_ha']
self.datacenter = None
self.cluster = None
self.host_isolation_response = getattr(vim.cluster.DasVmSettings.IsolationResponse, self.params.get('host_isolation_response'))
if self.enable_ha and (
self.params.get('slot_based_admission_control') or
self.params.get('reservation_based_admission_control') or
self.params.get('failover_host_admission_control')):
self.ha_admission_control = True
else:
self.ha_admission_control = False
self.datacenter = find_datacenter_by_name(self.content, self.datacenter_name)
if self.datacenter is None:
self.module.fail_json(msg="Datacenter %s does not exist." % self.datacenter_name)
self.cluster = self.find_cluster_by_name(cluster_name=self.cluster_name)
if self.cluster is None:
self.module.fail_json(msg="Cluster %s does not exist." % self.cluster_name)
def get_failover_hosts(self):
"""
Get failover hosts for failover_host_admission_control policy
Returns: List of ESXi hosts sorted by name
"""
policy = self.params.get('failover_host_admission_control')
hosts = []
all_hosts = dict((h.name, h) for h in self.get_all_hosts_by_cluster(self.cluster_name))
for host in policy.get('failover_hosts'):
if host in all_hosts:
hosts.append(all_hosts.get(host))
else:
self.module.fail_json(msg="Host %s is not a member of cluster %s." % (host, self.cluster_name))
hosts.sort(key=lambda h: h.name)
return hosts
def check_ha_config_diff(self):
"""
Check HA configuration diff
Returns: True if there is diff, else False
"""
das_config = self.cluster.configurationEx.dasConfig
if das_config.enabled != self.enable_ha:
return True
if self.enable_ha and (
das_config.vmMonitoring != self.params.get('ha_vm_monitoring') or
das_config.hostMonitoring != self.params.get('ha_host_monitoring') or
das_config.admissionControlEnabled != self.ha_admission_control or
das_config.defaultVmSettings.restartPriority != self.params.get('ha_restart_priority') or
das_config.defaultVmSettings.isolationResponse != self.host_isolation_response or
das_config.defaultVmSettings.vmToolsMonitoringSettings.vmMonitoring != self.params.get('ha_vm_monitoring') or
das_config.defaultVmSettings.vmToolsMonitoringSettings.failureInterval != self.params.get('ha_vm_failure_interval') or
das_config.defaultVmSettings.vmToolsMonitoringSettings.minUpTime != self.params.get('ha_vm_min_up_time') or
das_config.defaultVmSettings.vmToolsMonitoringSettings.maxFailures != self.params.get('ha_vm_max_failures') or
das_config.defaultVmSettings.vmToolsMonitoringSettings.maxFailureWindow != self.params.get('ha_vm_max_failure_window')):
return True
if self.ha_admission_control:
if self.params.get('slot_based_admission_control'):
policy = self.params.get('slot_based_admission_control')
if not isinstance(das_config.admissionControlPolicy, vim.cluster.FailoverLevelAdmissionControlPolicy) or \
das_config.admissionControlPolicy.failoverLevel != policy.get('failover_level'):
return True
elif self.params.get('reservation_based_admission_control'):
policy = self.params.get('reservation_based_admission_control')
auto_compute_percentages = policy.get('auto_compute_percentages')
if not isinstance(das_config.admissionControlPolicy, vim.cluster.FailoverResourcesAdmissionControlPolicy) or \
das_config.admissionControlPolicy.autoComputePercentages != auto_compute_percentages or \
das_config.admissionControlPolicy.failoverLevel != policy.get('failover_level'):
return True
if not auto_compute_percentages:
if das_config.admissionControlPolicy.cpuFailoverResourcesPercent != policy.get('cpu_failover_resources_percent') or \
das_config.admissionControlPolicy.memoryFailoverResourcesPercent != policy.get('memory_failover_resources_percent'):
return True
elif self.params.get('failover_host_admission_control'):
policy = self.params.get('failover_host_admission_control')
if not isinstance(das_config.admissionControlPolicy, vim.cluster.FailoverHostAdmissionControlPolicy):
return True
                # Compare a sorted copy so the diff check does not mutate the live cluster config
                configured_failover_hosts = sorted(das_config.admissionControlPolicy.failoverHosts, key=lambda h: h.name)
                if configured_failover_hosts != self.get_failover_hosts():
return True
return False
def configure_ha(self):
"""
Manage HA Configuration
"""
changed, result = False, None
if self.check_ha_config_diff():
if not self.module.check_mode:
cluster_config_spec = vim.cluster.ConfigSpecEx()
cluster_config_spec.dasConfig = vim.cluster.DasConfigInfo()
cluster_config_spec.dasConfig.enabled = self.enable_ha
if self.enable_ha:
vm_tool_spec = vim.cluster.VmToolsMonitoringSettings()
vm_tool_spec.enabled = True
vm_tool_spec.vmMonitoring = self.params.get('ha_vm_monitoring')
vm_tool_spec.failureInterval = self.params.get('ha_vm_failure_interval')
vm_tool_spec.minUpTime = self.params.get('ha_vm_min_up_time')
vm_tool_spec.maxFailures = self.params.get('ha_vm_max_failures')
vm_tool_spec.maxFailureWindow = self.params.get('ha_vm_max_failure_window')
das_vm_config = vim.cluster.DasVmSettings()
das_vm_config.restartPriority = self.params.get('ha_restart_priority')
das_vm_config.isolationResponse = self.host_isolation_response
das_vm_config.vmToolsMonitoringSettings = vm_tool_spec
cluster_config_spec.dasConfig.defaultVmSettings = das_vm_config
cluster_config_spec.dasConfig.admissionControlEnabled = self.ha_admission_control
if self.ha_admission_control:
if self.params.get('slot_based_admission_control'):
cluster_config_spec.dasConfig.admissionControlPolicy = vim.cluster.FailoverLevelAdmissionControlPolicy()
policy = self.params.get('slot_based_admission_control')
cluster_config_spec.dasConfig.admissionControlPolicy.failoverLevel = policy.get('failover_level')
elif self.params.get('reservation_based_admission_control'):
cluster_config_spec.dasConfig.admissionControlPolicy = vim.cluster.FailoverResourcesAdmissionControlPolicy()
policy = self.params.get('reservation_based_admission_control')
auto_compute_percentages = policy.get('auto_compute_percentages')
cluster_config_spec.dasConfig.admissionControlPolicy.autoComputePercentages = auto_compute_percentages
cluster_config_spec.dasConfig.admissionControlPolicy.failoverLevel = policy.get('failover_level')
if not auto_compute_percentages:
cluster_config_spec.dasConfig.admissionControlPolicy.cpuFailoverResourcesPercent = \
policy.get('cpu_failover_resources_percent')
cluster_config_spec.dasConfig.admissionControlPolicy.memoryFailoverResourcesPercent = \
policy.get('memory_failover_resources_percent')
elif self.params.get('failover_host_admission_control'):
cluster_config_spec.dasConfig.admissionControlPolicy = vim.cluster.FailoverHostAdmissionControlPolicy()
policy = self.params.get('failover_host_admission_control')
cluster_config_spec.dasConfig.admissionControlPolicy.failoverHosts = self.get_failover_hosts()
cluster_config_spec.dasConfig.hostMonitoring = self.params.get('ha_host_monitoring')
cluster_config_spec.dasConfig.vmMonitoring = self.params.get('ha_vm_monitoring')
try:
task = self.cluster.ReconfigureComputeResource_Task(cluster_config_spec, True)
changed, result = wait_for_task(task)
except vmodl.RuntimeFault as runtime_fault:
self.module.fail_json(msg=to_native(runtime_fault.msg))
except vmodl.MethodFault as method_fault:
self.module.fail_json(msg=to_native(method_fault.msg))
except TaskError as task_e:
self.module.fail_json(msg=to_native(task_e))
except Exception as generic_exc:
self.module.fail_json(msg="Failed to update cluster"
" due to generic exception %s" % to_native(generic_exc))
else:
changed = True
self.module.exit_json(changed=changed, result=result)
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(dict(
cluster_name=dict(type='str', required=True),
datacenter=dict(type='str', required=True, aliases=['datacenter_name']),
# HA
enable_ha=dict(type='bool', default=False),
ha_host_monitoring=dict(type='str',
default='enabled',
choices=['enabled', 'disabled']),
host_isolation_response=dict(type='str',
default='none',
choices=['none', 'powerOff', 'shutdown']),
# HA VM Monitoring related parameters
ha_vm_monitoring=dict(type='str',
choices=['vmAndAppMonitoring', 'vmMonitoringOnly', 'vmMonitoringDisabled'],
default='vmMonitoringDisabled'),
ha_vm_failure_interval=dict(type='int', default=30),
ha_vm_min_up_time=dict(type='int', default=120),
ha_vm_max_failures=dict(type='int', default=3),
ha_vm_max_failure_window=dict(type='int', default=-1),
ha_restart_priority=dict(type='str',
choices=['high', 'low', 'medium', 'disabled'],
default='medium'),
# HA Admission Control related parameters
slot_based_admission_control=dict(type='dict', options=dict(
failover_level=dict(type='int', required=True),
)),
reservation_based_admission_control=dict(type='dict', options=dict(
auto_compute_percentages=dict(type='bool', default=True),
failover_level=dict(type='int', required=True),
cpu_failover_resources_percent=dict(type='int', default=50),
memory_failover_resources_percent=dict(type='int', default=50),
)),
failover_host_admission_control=dict(type='dict', options=dict(
failover_hosts=dict(type='list', elements='str', required=True),
)),
))
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
mutually_exclusive=[
['slot_based_admission_control', 'reservation_based_admission_control', 'failover_host_admission_control']
]
)
vmware_cluster_ha = VMwareCluster(module)
vmware_cluster_ha.configure_ha()
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 61,421 |
vmware_cluster_ha support for advanced configuration parameters
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The vmware_cluster_ha module should support advanced configuration parameters in a generic way.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
vmware_cluster_ha
##### ADDITIONAL INFORMATION
The feature would allow advanced configurations like isolation address handling.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
name: Enable HA with multiple custom isolation addresses for stretched vSAN
vmware_cluster_ha:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datacenter_name: datacenter
cluster_name: cluster
enable_ha: yes
configuration_parameters:
das.usedefaultisolationaddress: false
das.isolationaddress0: '{{ primary_isolation_address }}'
das.isolationaddress1: '{{ secondary_isolation_address }}'
delegate_to: localhost
```
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/61421
|
https://github.com/ansible/ansible/pull/65675
|
c8704573396e7480b3e1b33b2ddda2b6325d0d80
|
fec883dfffcd8685d5d57a07463e402c2cd36931
| 2019-08-28T08:26:05Z |
python
| 2019-12-19T19:19:45Z |
test/integration/targets/vmware_cluster_ha/tasks/main.yml
|
# Test code for the vmware_cluster_ha module.
# Copyright: (c) 2017, Abhijeet Kasurde <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
- import_role:
name: prepare_vmware_tests
# Setup: Create test cluster
- name: Create test cluster
vmware_cluster:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter_name: "{{ dc1 }}"
cluster_name: test_cluster_ha
state: present
# Testcase 0001: Enable HA
- name: Enable HA
vmware_cluster_ha:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter_name: "{{ dc1 }}"
cluster_name: test_cluster_ha
enable_ha: yes
register: cluster_ha_result_0001
- name: Ensure HA is enabled
assert:
that:
- "{{ cluster_ha_result_0001.changed == true }}"
# Testcase 0002: Enable Slot based Admission Control
- name: Enable Slot based Admission Control
vmware_cluster_ha:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter_name: "{{ dc1 }}"
cluster_name: test_cluster_ha
enable_ha: yes
slot_based_admission_control:
failover_level: 1
register: cluster_ha_result_0002
- name: Ensure Admission Control is enabled
assert:
that:
- "{{ cluster_ha_result_0002.changed == true }}"
# Testcase 0003: Enable Cluster resource Percentage based Admission Control
- name: Enable Cluster resource Percentage based Admission Control
vmware_cluster_ha:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter_name: "{{ dc1 }}"
cluster_name: test_cluster_ha
enable_ha: yes
reservation_based_admission_control:
auto_compute_percentages: false
failover_level: 1
cpu_failover_resources_percent: 33
memory_failover_resources_percent: 33
register: cluster_ha_result_0003
- name: Ensure Admission Control is enabled
assert:
that:
- "{{ cluster_ha_result_0003.changed == true }}"
# Testcase 0004: Set Isolation Response to powerOff
- name: Set Isolation Response to powerOff
vmware_cluster_ha:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter_name: "{{ dc1 }}"
cluster_name: test_cluster_ha
enable_ha: yes
host_isolation_response: 'powerOff'
register: cluster_ha_result_0004
- name: Ensure Isolation Response is enabled
assert:
that:
- "{{ cluster_ha_result_0004.changed == true }}"
# Testcase 0005: Set Isolation Response to shutdown
- name: Set Isolation Response to shutdown
vmware_cluster_ha:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter_name: "{{ dc1 }}"
cluster_name: test_cluster_ha
enable_ha: yes
host_isolation_response: 'shutdown'
register: cluster_ha_result_0005
- name: Ensure Isolation Response is enabled
assert:
that:
- "{{ cluster_ha_result_0005.changed == true }}"
# Testcase 0006: Disable HA
- name: Disable HA
vmware_cluster_ha:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter_name: "{{ dc1 }}"
cluster_name: test_cluster_ha
enable_ha: no
register: cluster_ha_result_0006
- name: Ensure HA is disabled
assert:
that:
- "{{ cluster_ha_result_0006.changed == true }}"
# Delete test cluster
- name: Delete test cluster
vmware_cluster:
validate_certs: False
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter_name: "{{ dc1 }}"
cluster_name: test_cluster_ha
state: absent
when: vcsim is not defined
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65,636 |
[get_url] argument sha256sum declared deprecated without target version
|
### SUMMARY
```
sha256sum:
description:
- If a SHA-256 checksum is passed to this parameter, the digest of the
destination file will be calculated after it is downloaded to ensure
its integrity and verify that the transfer completed successfully.
This option is deprecated. Use C(checksum) instead.
```
There is no `module.deprecate` either for `sha256sum`.
The argument has been declared deprecated in commit b3b11fbce26 (2017).
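A minimal sketch of what the missing runtime warning could look like (hypothetical placement in `main()`, mirroring the existing `thirsty` alias handling; the removal version shown is a placeholder):
```python
if module.params.get('sha256sum'):
    module.deprecate('The parameter "sha256sum" has been deprecated, use "checksum" instead',
                     version='2.14')
```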
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
get_url
* ansible version: 2.10
|
https://github.com/ansible/ansible/issues/65636
|
https://github.com/ansible/ansible/pull/65637
|
fec883dfffcd8685d5d57a07463e402c2cd36931
|
4351a756e100da91a56fb7bc9bb83dc0c194f615
| 2019-12-08T14:16:31Z |
python
| 2019-12-19T19:47:27Z |
lib/ansible/modules/net_tools/basics/get_url.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Jan-Piet Mens <jpmens () gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: get_url
short_description: Downloads files from HTTP, HTTPS, or FTP to node
description:
- Downloads files from HTTP, HTTPS, or FTP to the remote server. The remote
server I(must) have direct access to the remote resource.
- By default, if an environment variable C(<protocol>_proxy) is set on
the target host, requests will be sent through that proxy. This
behaviour can be overridden by setting a variable for this task
(see `setting the environment
<https://docs.ansible.com/playbooks_environment.html>`_),
or by using the use_proxy option.
- HTTP redirects can redirect from HTTP to HTTPS so you should be sure that
your proxy environment for both protocols is correct.
- From Ansible 2.4 when run with C(--check), it will do a HEAD request to validate the URL but
will not download the entire file or verify it against hashes.
- For Windows targets, use the M(win_get_url) module instead.
version_added: '0.6'
options:
url:
description:
- HTTP, HTTPS, or FTP URL in the form (http|https|ftp)://[user[:pass]]@host.domain[:port]/path
type: str
required: true
dest:
description:
- Absolute path of where to download the file to.
- If C(dest) is a directory, either the server provided filename or, if
none provided, the base name of the URL on the remote server will be
used. If a directory, C(force) has no effect.
- If C(dest) is a directory, the file will always be downloaded
      (regardless of the C(force) option), but replaced only if the contents changed.
type: path
required: true
tmp_dest:
description:
- Absolute path of where temporary file is downloaded to.
- When run on Ansible 2.5 or greater, path defaults to ansible's remote_tmp setting
- When run on Ansible prior to 2.5, it defaults to C(TMPDIR), C(TEMP) or C(TMP) env variables or a platform specific value.
- U(https://docs.python.org/2/library/tempfile.html#tempfile.tempdir)
type: path
version_added: '2.1'
force:
description:
- If C(yes) and C(dest) is not a directory, will download the file every
time and replace the file if the contents change. If C(no), the file
will only be downloaded if the destination does not exist. Generally
should be C(yes) only for small local files.
- Prior to 0.6, this module behaved as if C(yes) was the default.
- Alias C(thirsty) has been deprecated and will be removed in 2.13.
type: bool
default: no
aliases: [ thirsty ]
version_added: '0.7'
backup:
description:
- Create a backup file including the timestamp information so you can get
the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
version_added: '2.1'
sha256sum:
description:
- If a SHA-256 checksum is passed to this parameter, the digest of the
destination file will be calculated after it is downloaded to ensure
its integrity and verify that the transfer completed successfully.
This option is deprecated. Use C(checksum) instead.
default: ''
version_added: "1.3"
checksum:
description:
- 'If a checksum is passed to this parameter, the digest of the
destination file will be calculated after it is downloaded to ensure
its integrity and verify that the transfer completed successfully.
Format: <algorithm>:<checksum|url>, e.g. checksum="sha256:D98291AC[...]B6DC7B97",
checksum="sha256:http://example.com/path/sha256sum.txt"'
- If you worry about portability, only the sha1 algorithm is available
on all platforms and python versions.
- The third party hashlib library can be installed for access to additional algorithms.
- Additionally, if a checksum is passed to this parameter, and the file exist under
the C(dest) location, the I(destination_checksum) would be calculated, and if
checksum equals I(destination_checksum), the file download would be skipped
(unless C(force) is true). If the checksum does not equal I(destination_checksum),
the destination file is deleted.
type: str
default: ''
version_added: "2.0"
use_proxy:
description:
- if C(no), it will not use a proxy, even if one is defined in
an environment variable on the target hosts.
type: bool
default: yes
validate_certs:
description:
- If C(no), SSL certificates will not be validated.
- This should only be used on personally controlled sites using self-signed certificates.
type: bool
default: yes
timeout:
description:
- Timeout in seconds for URL request.
type: int
default: 10
version_added: '1.8'
headers:
description:
- Add custom HTTP headers to a request in hash/dict format.
- The hash/dict format was added in Ansible 2.6.
- Previous versions used a C("key:value,key:value") string format.
- The C("key:value,key:value") string format is deprecated and will be removed in version 2.10.
type: raw
version_added: '2.0'
url_username:
description:
- The username for use in HTTP basic authentication.
- This parameter can be used without C(url_password) for sites that allow empty passwords.
- Since version 2.8 you can also use the C(username) alias for this option.
type: str
aliases: ['username']
version_added: '1.6'
url_password:
description:
- The password for use in HTTP basic authentication.
- If the C(url_username) parameter is not specified, the C(url_password) parameter will not be used.
- Since version 2.8 you can also use the 'password' alias for this option.
type: str
aliases: ['password']
version_added: '1.6'
force_basic_auth:
description:
- Force the sending of the Basic authentication header upon initial request.
- httplib2, the library used by the uri module only sends authentication information when a webservice
responds to an initial request with a 401 status. Since some basic auth services do not properly
send a 401, logins will fail.
type: bool
default: no
version_added: '2.0'
client_cert:
description:
- PEM formatted certificate chain file to be used for SSL client authentication.
- This file can also include the key as well, and if the key is included, C(client_key) is not required.
type: path
version_added: '2.4'
client_key:
description:
- PEM formatted file that contains your private key to be used for SSL client authentication.
- If C(client_cert) contains both the certificate and key, this option is not required.
type: path
version_added: '2.4'
http_agent:
description:
- Header to identify as, generally appears in web server logs.
type: str
default: ansible-httpget
# informational: requirements for nodes
extends_documentation_fragment:
- files
notes:
- For Windows targets, use the M(win_get_url) module instead.
seealso:
- module: uri
- module: win_get_url
author:
- Jan-Piet Mens (@jpmens)
'''
EXAMPLES = r'''
- name: Download foo.conf
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
mode: '0440'
- name: Download file and force basic auth
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
force_basic_auth: yes
- name: Download file with custom HTTP headers
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
headers:
key1: one
key2: two
- name: Download file with check (sha256)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: sha256:b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c
- name: Download file with check (md5)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: md5:66dffb5228a211e61d6d7ef4a86f5758
- name: Download file with checksum url (sha256)
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
checksum: sha256:http://example.com/path/sha256sum.txt
- name: Download file from a file path
get_url:
url: file:///tmp/afile.txt
dest: /tmp/afilecopy.txt
- name: < Fetch file that requires authentication.
username/password only available since 2.8, in older versions you need to use url_username/url_password
get_url:
url: http://example.com/path/file.conf
dest: /etc/foo.conf
username: bar
password: '{{ mysecret }}'
'''
RETURN = r'''
backup_file:
description: name of backup file created after download
returned: changed and if backup=yes
type: str
sample: /path/to/file.txt.2015-02-12@22:09~
checksum_dest:
description: sha1 checksum of the file after copy
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
checksum_src:
description: sha1 checksum of the file
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
dest:
description: destination file/path
returned: success
type: str
sample: /path/to/file.txt
elapsed:
description: The number of seconds that elapsed while performing the download
returned: always
type: int
sample: 23
gid:
description: group id of the file
returned: success
type: int
sample: 100
group:
description: group of the file
returned: success
type: str
sample: "httpd"
md5sum:
description: md5 checksum of the file after download
returned: when supported
type: str
sample: "2a5aeecc61dc98c4d780b14b330e3282"
mode:
description: permissions of the target
returned: success
type: str
sample: "0644"
msg:
description: the HTTP message from the request
returned: always
type: str
sample: OK (unknown bytes)
owner:
description: owner of the file
returned: success
type: str
sample: httpd
secontext:
description: the SELinux security context of the file
returned: success
type: str
sample: unconfined_u:object_r:user_tmp_t:s0
size:
description: size of the target
returned: success
type: int
sample: 1220
src:
description: source file used after download
returned: always
type: str
sample: /tmp/tmpAdFLdV
state:
description: state of the target
returned: success
type: str
sample: file
status_code:
description: the HTTP status code from the request
returned: always
type: int
sample: 200
uid:
description: owner id of the file, after execution
returned: success
type: int
sample: 100
url:
description: the actual URL used for the request
returned: always
type: str
sample: https://www.ansible.com/
'''
import datetime
import os
import re
import shutil
import tempfile
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves.urllib.parse import urlsplit
from ansible.module_utils._text import to_native
from ansible.module_utils.urls import fetch_url, url_argument_spec
# ==============================================================
# url handling
def url_filename(url):
fn = os.path.basename(urlsplit(url)[2])
if fn == '':
return 'index.html'
return fn
def url_get(module, url, dest, use_proxy, last_mod_time, force, timeout=10, headers=None, tmp_dest=''):
"""
Download data from the url and store in a temporary file.
Return (tempfile, info about the request)
"""
if module.check_mode:
method = 'HEAD'
else:
method = 'GET'
start = datetime.datetime.utcnow()
rsp, info = fetch_url(module, url, use_proxy=use_proxy, force=force, last_mod_time=last_mod_time, timeout=timeout, headers=headers, method=method)
elapsed = (datetime.datetime.utcnow() - start).seconds
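    # A 304 Not Modified response means the remote file has not changed since
    # last_mod_time, so exit successfully without downloading anything.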
if info['status'] == 304:
module.exit_json(url=url, dest=dest, changed=False, msg=info.get('msg', ''), elapsed=elapsed)
    # Exceptions in fetch_url may result in a status of -1; this ensures a proper error is shown to the user in all cases
if info['status'] == -1:
module.fail_json(msg=info['msg'], url=url, dest=dest, elapsed=elapsed)
if info['status'] != 200 and not url.startswith('file:/') and not (url.startswith('ftp:/') and info.get('msg', '').startswith('OK')):
module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], url=url, dest=dest, elapsed=elapsed)
# create a temporary file and copy content to do checksum-based replacement
if tmp_dest:
# tmp_dest should be an existing dir
tmp_dest_is_dir = os.path.isdir(tmp_dest)
if not tmp_dest_is_dir:
if os.path.exists(tmp_dest):
module.fail_json(msg="%s is a file but should be a directory." % tmp_dest, elapsed=elapsed)
else:
module.fail_json(msg="%s directory does not exist." % tmp_dest, elapsed=elapsed)
else:
tmp_dest = module.tmpdir
fd, tempname = tempfile.mkstemp(dir=tmp_dest)
f = os.fdopen(fd, 'wb')
try:
shutil.copyfileobj(rsp, f)
except Exception as e:
os.remove(tempname)
module.fail_json(msg="failed to create temporary content file: %s" % to_native(e), elapsed=elapsed, exception=traceback.format_exc())
f.close()
rsp.close()
return tempname, info
def extract_filename_from_headers(headers):
"""
Extracts a filename from the given dict of HTTP headers.
Looks for the content-disposition header and applies a regex.
Returns the filename if successful, else None."""
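    # e.g. 'attachment; filename="foo.conf"' -> 'foo.conf'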
cont_disp_regex = 'attachment; ?filename="?([^"]+)'
res = None
if 'content-disposition' in headers:
cont_disp = headers['content-disposition']
match = re.match(cont_disp_regex, cont_disp)
if match:
res = match.group(1)
# Try preventing any funny business.
res = os.path.basename(res)
return res
# ==============================================================
# main
def main():
argument_spec = url_argument_spec()
# setup aliases
argument_spec['url_username']['aliases'] = ['username']
argument_spec['url_password']['aliases'] = ['password']
argument_spec.update(
url=dict(type='str', required=True),
dest=dict(type='path', required=True),
backup=dict(type='bool'),
sha256sum=dict(type='str', default=''),
checksum=dict(type='str', default=''),
timeout=dict(type='int', default=10),
headers=dict(type='raw'),
tmp_dest=dict(type='path'),
)
module = AnsibleModule(
        # extra arguments are not validated here because they are daisy-chained to the file module
argument_spec=argument_spec,
add_file_common_args=True,
supports_check_mode=True,
mutually_exclusive=[['checksum', 'sha256sum']],
)
if module.params.get('thirsty'):
module.deprecate('The alias "thirsty" has been deprecated and will be removed, use "force" instead', version='2.13')
url = module.params['url']
dest = module.params['dest']
backup = module.params['backup']
force = module.params['force']
sha256sum = module.params['sha256sum']
checksum = module.params['checksum']
use_proxy = module.params['use_proxy']
timeout = module.params['timeout']
tmp_dest = module.params['tmp_dest']
result = dict(
changed=False,
checksum_dest=None,
checksum_src=None,
dest=dest,
elapsed=0,
url=url,
)
# Parse headers to dict
if isinstance(module.params['headers'], dict):
headers = module.params['headers']
elif module.params['headers']:
try:
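            # legacy string form, e.g. 'key1:one,key2:two' -> {'key1': 'one', 'key2': 'two'}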
headers = dict(item.split(':', 1) for item in module.params['headers'].split(','))
module.deprecate('Supplying `headers` as a string is deprecated. Please use dict/hash format for `headers`', version='2.10')
except Exception:
module.fail_json(msg="The string representation for the `headers` parameter requires a key:value,key:value syntax to be properly parsed.", **result)
else:
headers = None
dest_is_dir = os.path.isdir(dest)
last_mod_time = None
# workaround for usage of deprecated sha256sum parameter
if sha256sum:
checksum = 'sha256:%s' % (sha256sum)
# checksum specified, parse for algorithm and checksum
if checksum:
try:
algorithm, checksum = checksum.split(':', 1)
except ValueError:
module.fail_json(msg="The checksum parameter has to be in format <algorithm>:<checksum>", **result)
if checksum.startswith('http://') or checksum.startswith('https://') or checksum.startswith('ftp://'):
checksum_url = checksum
# download checksum file to checksum_tmpsrc
checksum_tmpsrc, checksum_info = url_get(module, checksum_url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest)
with open(checksum_tmpsrc) as f:
lines = [line.rstrip('\n') for line in f]
os.remove(checksum_tmpsrc)
checksum_map = {}
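            # each line should look like '<hash> <filename>', the format
            # emitted by tools such as sha256sum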
for line in lines:
parts = line.split(None, 1)
if len(parts) == 2:
checksum_map[parts[0]] = parts[1]
filename = url_filename(url)
# Look through each line in the checksum file for a hash corresponding to
# the filename in the url, returning the first hash that is found.
for cksum in (s for (s, f) in checksum_map.items() if f.strip('./') == filename):
checksum = cksum
break
else:
checksum = None
if checksum is None:
module.fail_json(msg="Unable to find a checksum for file '%s' in '%s'" % (filename, checksum_url))
# Remove any non-alphanumeric characters, including the infamous
# Unicode zero-width space
checksum = re.sub(r'\W+', '', checksum).lower()
# Ensure the checksum portion is a hexdigest
try:
int(checksum, 16)
except ValueError:
module.fail_json(msg='The checksum format is invalid', **result)
if not dest_is_dir and os.path.exists(dest):
checksum_mismatch = False
# If the download is not forced and there is a checksum, allow
# checksum match to skip the download.
if not force and checksum != '':
destination_checksum = module.digest_from_file(dest, algorithm)
if checksum != destination_checksum:
checksum_mismatch = True
# Not forcing redownload, unless checksum does not match
if not force and checksum and not checksum_mismatch:
# allow file attribute changes
module.params['path'] = dest
file_args = module.load_file_common_arguments(module.params)
file_args['path'] = dest
result['changed'] = module.set_fs_attributes_if_different(file_args, False)
if result['changed']:
module.exit_json(msg="file already exists but file attributes changed", **result)
module.exit_json(msg="file already exists", **result)
# If the file already exists, prepare the last modified time for the
# request.
mtime = os.path.getmtime(dest)
last_mod_time = datetime.datetime.utcfromtimestamp(mtime)
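        # fetch_url sends last_mod_time as an If-Modified-Since header, letting
        # the server answer 304 when the remote file has not changed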
# If the checksum does not match we have to force the download
# because last_mod_time may be newer than on remote
if checksum_mismatch:
force = True
# download to tmpsrc
start = datetime.datetime.utcnow()
tmpsrc, info = url_get(module, url, dest, use_proxy, last_mod_time, force, timeout, headers, tmp_dest)
result['elapsed'] = (datetime.datetime.utcnow() - start).seconds
result['src'] = tmpsrc
# Now the request has completed, we can finally generate the final
# destination file name from the info dict.
if dest_is_dir:
filename = extract_filename_from_headers(info)
if not filename:
# Fall back to extracting the filename from the URL.
# Pluck the URL from the info, since a redirect could have changed
# it.
filename = url_filename(info['url'])
dest = os.path.join(dest, filename)
result['dest'] = dest
# raise an error if there is no tmpsrc file
if not os.path.exists(tmpsrc):
os.remove(tmpsrc)
module.fail_json(msg="Request failed", status_code=info['status'], response=info['msg'], **result)
if not os.access(tmpsrc, os.R_OK):
os.remove(tmpsrc)
module.fail_json(msg="Source %s is not readable" % (tmpsrc), **result)
result['checksum_src'] = module.sha1(tmpsrc)
# check if there is no dest file
if os.path.exists(dest):
# raise an error if copy has no permission on dest
if not os.access(dest, os.W_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not writable" % (dest), **result)
if not os.access(dest, os.R_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not readable" % (dest), **result)
result['checksum_dest'] = module.sha1(dest)
else:
if not os.path.exists(os.path.dirname(dest)):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s does not exist" % (os.path.dirname(dest)), **result)
if not os.access(os.path.dirname(dest), os.W_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s is not writable" % (os.path.dirname(dest)), **result)
if module.check_mode:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
result['changed'] = ('checksum_dest' not in result or
result['checksum_src'] != result['checksum_dest'])
module.exit_json(msg=info.get('msg', ''), **result)
backup_file = None
if result['checksum_src'] != result['checksum_dest']:
try:
if backup:
if os.path.exists(dest):
backup_file = module.backup_local(dest)
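            # atomic_move renames the temp file over dest in a single step so a
            # failure never leaves a partially written destination file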
module.atomic_move(tmpsrc, dest)
except Exception as e:
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
module.fail_json(msg="failed to copy %s to %s: %s" % (tmpsrc, dest, to_native(e)),
exception=traceback.format_exc(), **result)
result['changed'] = True
else:
result['changed'] = False
if os.path.exists(tmpsrc):
os.remove(tmpsrc)
if checksum != '':
destination_checksum = module.digest_from_file(dest, algorithm)
if checksum != destination_checksum:
os.remove(dest)
module.fail_json(msg="The checksum for %s did not match %s; it was %s." % (dest, checksum, destination_checksum), **result)
# allow file attribute changes
module.params['path'] = dest
file_args = module.load_file_common_arguments(module.params)
file_args['path'] = dest
result['changed'] = module.set_fs_attributes_if_different(file_args, result['changed'])
# Backwards compat only. We'll return None on FIPS enabled systems
try:
result['md5sum'] = module.md5(dest)
except ValueError:
result['md5sum'] = None
if backup_file:
result['backup_file'] = backup_file
# Mission complete
module.exit_json(msg=info.get('msg', ''), status_code=info.get('status', ''), **result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 65265
elb_target_group and elb_network_lb should support UDP
|
<!--- Verify first that your feature was not already discussed on GitHub -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
elb_network_lb and elb_target_group should support UDP; currently only HTTP, HTTPS, and TCP are supported by Ansible.
According to the AWS documentation, UDP and TCP_UDP are supported listener and target group protocols:
- https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-listener.html
- https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-target-group.html
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
- elb_target_group
- elb_network_lb
##### ADDITIONAL INFORMATION
It would be useful to use UDP as a protocol for an NLB; additionally, attached target groups would need to support UDP.
TCP_UDP is another protocol worth supporting (a single port that accepts both TCP and UDP traffic without caring which); see the TCP_UDP sketch after the example below.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Create logstash target group
elb_target_group:
name: logstash_bunyan_production_tg
protocol: udp
port: 5600
vpc_id: "{{ vpc_facts.vpcs[0].id }}"
state: present
- name: Create nlb
elb_network_lb:
name: logstash_nlb
subnets: "{{ private_subnets + public_subnets }}"
health_check_path: "/"
health_check_port: "9600"
health_check_protocol: "http"
protocol: "udp"
port: "5600"
listeners:
- Protocol: UDP
Port: 5600
DefaultActions:
- Type: forward
TargetGroupName: logstash_bunyan_production_tg
state: present
```
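
A hypothetical TCP_UDP variant, assuming the modules would pass the AWS protocol string through unchanged once support lands (a sketch of the requested behavior, not something the current modules accept):

```yaml
- name: Create target group accepting both TCP and UDP
  elb_target_group:
    name: logstash_bunyan_production_tg
    protocol: tcp_udp
    port: 5600
    vpc_id: "{{ vpc_facts.vpcs[0].id }}"
    state: present

- name: Create nlb with a TCP_UDP listener
  elb_network_lb:
    name: logstash_nlb
    subnets: "{{ private_subnets + public_subnets }}"
    listeners:
      - Protocol: TCP_UDP
        Port: 5600
        DefaultActions:
          - Type: forward
            TargetGroupName: logstash_bunyan_production_tg
    state: present
```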
<!--- HINT: You can also paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/65265
|
https://github.com/ansible/ansible/pull/65828
|
32a8b620f399fdc9698bd31ea1e619b2eb72b666
|
7bb925489e7338975973cdcd9aacfe2870053b09
| 2019-11-25T21:21:48Z |
python
| 2019-12-19T22:06:16Z |
changelogs/fragments/65265-allow-udp-tcpudp-protocol.yaml
|